1
Leiderman YI, Gerber MJ, Hubschman JP, Yi D. Artificial intelligence applications in ophthalmic surgery. Curr Opin Ophthalmol 2024; 35:526-532. PMID: 39145488. DOI: 10.1097/icu.0000000000001033.
Abstract
PURPOSE OF REVIEW: Technologies in healthcare incorporating artificial intelligence (AI) tools are experiencing rapid growth in static-image-based applications such as diagnostic imaging. Given the proliferation of AI technologies created for video-based imaging, ophthalmic microsurgery is likely to benefit significantly from the application of emerging technologies to multiple facets of the care of the surgical patient.
RECENT FINDINGS: Proof-of-concept research and early-phase clinical trials are in progress for AI-based surgical technologies that aim to provide preoperative planning and decision support, intraoperative image enhancement, surgical guidance, surgical decision-making support, tactical assistive technologies, enhanced surgical training and assessment of trainee progress, and semi-autonomous tool control or autonomous elements of surgical procedures.
SUMMARY: The proliferation of AI-based technologies in static imaging in clinical ophthalmology, the continued refinement of AI tools designed for video-based applications, and the development of AI-based digital tools in allied surgical fields suggest that ophthalmic surgery is poised for the integration of AI into the microsurgical paradigm.
Affiliation(s)
- Yannek I Leiderman
- Departments of Ophthalmology and Bioengineering, University of Illinois Chicago
- Matthew J Gerber
- Department of Ophthalmology, University of California at Los Angeles, Los Angeles, California, USA
- Jean-Pierre Hubschman
- Department of Ophthalmology, University of California at Los Angeles, Los Angeles, California, USA
- Darvin Yi
- Departments of Ophthalmology and Bioengineering, University of Illinois Chicago
2
Jian M, Wu R, Xu W, Zhi H, Tao C, Chen H, Li X. VascuConNet: an enhanced connectivity network for vascular segmentation. Med Biol Eng Comput 2024. PMID: 38898202. DOI: 10.1007/s11517-024-03150-8.
Abstract
Medical image segmentation commonly involves diverse tissue types and structures, including tasks such as blood vessel segmentation and nerve fiber bundle segmentation. Enhancing the continuity of segmentation outcomes is a pivotal challenge in medical image segmentation, driven by the demands of clinical applications that focus on disease localization and quantification. In this study, a novel segmentation model is designed specifically for retinal vessel segmentation, leveraging vessel orientation information, boundary constraints, and continuity constraints to improve segmentation accuracy. To achieve this, we cascade U-Net with a long short-term memory (LSTM) network. U-Net is characterized by a small number of parameters and high segmentation efficiency, while LSTM offers a parameter-sharing capability. Additionally, we introduce an orientation-information enhancement module, inserted into the model's bottom layer, that obtains feature maps containing orientation information through an orientation convolution operator. Furthermore, we design a new hybrid loss function consisting of connectivity loss, boundary loss, and cross-entropy loss. Experimental results demonstrate that the model achieves excellent segmentation outcomes across three widely recognized retinal vessel segmentation datasets: CHASE_DB1, DRIVE, and ARIA.
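The hybrid loss above is a weighted sum of three terms. A minimal sketch, assuming NumPy arrays; the equal default weights, the signed-distance-map form of the boundary term, and the scalar `conn_penalty` stand-in for the connectivity term are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def cross_entropy(pred, target, eps=1e-7):
    """Pixel-wise binary cross-entropy between probabilities and a binary mask."""
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def boundary_loss(pred, dist_map):
    """Boundary term: predicted probabilities weighted by a signed distance
    map to the ground-truth vessel boundary."""
    return np.mean(pred * dist_map)

def hybrid_loss(pred, target, dist_map, conn_penalty, w=(1.0, 1.0, 1.0)):
    """Weighted sum of cross-entropy, boundary, and connectivity terms.
    `conn_penalty` is a precomputed scalar standing in for the connectivity
    loss, which penalizes breaks in predicted vessel segments."""
    return (w[0] * cross_entropy(pred, target)
            + w[1] * boundary_loss(pred, dist_map)
            + w[2] * conn_penalty)
```

With a zero distance map and no connectivity penalty, the hybrid loss reduces to the cross-entropy term alone, which is a quick sanity check on the weighting.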
Affiliation(s)
- Muwei Jian
- School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan, China
- School of Information Science and Technology, Linyi University, Linyi, China
- Ronghua Wu
- School of Information Science and Technology, Linyi University, Linyi, China
- Wenjin Xu
- School of Information Science and Technology, Linyi University, Linyi, China
- Huixiang Zhi
- School of Information Science and Technology, Linyi University, Linyi, China
- Chen Tao
- School of Information Science and Technology, Linyi University, Linyi, China
- Hongyu Chen
- School of Information Science and Technology, Linyi University, Linyi, China
- Xiaoguang Li
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Beijing University of Technology, Beijing, China
3
Akhtar Y, Udupa JK, Tong Y, Liu T, Wu C, Kogan R, Al-Noury M, Hosseini M, Tong L, Mannikeri S, Odhner D, Mcdonough JM, Lott C, Clark A, Cahill PJ, Anari JB, Torigian DA. Auto-segmentation of thoraco-abdominal organs in pediatric dynamic MRI. medRxiv [Preprint] 2024. PMID: 38766023. PMCID: PMC11100850. DOI: 10.1101/2024.05.04.24306582.
Abstract
Purpose: Analysis of the abnormal motion of thoraco-abdominal organs in respiratory disorders such as thoracic insufficiency syndrome (TIS) and scoliosis, including adolescent idiopathic scoliosis (AIS) and early-onset scoliosis (EOS), can lead to better surgical plans. Healthy subjects can be used to characterize the normal architecture and motion of the rib cage and associated organs, so that a patient's deformed anatomy can be modified to match it. Dynamic magnetic resonance imaging (dMRI) is a practical and preferred imaging modality for capturing dynamic images of healthy pediatric subjects. In this paper, we propose an auto-segmentation setup for the lungs, kidneys, liver, spleen, and thoraco-abdominal skin in these dMRI images, which present their own challenges such as poor contrast, image non-standardness, and similarity in texture among gas, bone, and connective tissue at several inter-object interfaces.
Methods: The segmentation setup is implemented in two steps, recognition and delineation, using two deep-learning architectures (referred to as DL-R and DL-D) for the recognition and delineation steps, respectively. The encoder-decoder framework in DL-D utilizes features at four different resolution levels to counter the challenges involved in the segmentation. We evaluated the setup on sagittal dMRI acquisitions of 189 (near-)normal subjects. The spatial resolution in all acquisitions is 1.46 mm within a sagittal slice and 6.00 mm between sagittal slices. We utilized images of 89 (10) subjects at end inspiration for training (validation). For testing we considered three scenarios: (1) images of the 90 (= 189 - 89 - 10) remaining subjects at end inspiration, (2) images of the same 90 subjects at end expiration, and (3) images of the aforesaid 99 (= 89 + 10) subjects but at end expiration. In some situations, we can take advantage of the already available ground truth (GT) of a subject at one respiratory phase to automatically segment the object in the image of the same subject at a different respiratory phase, then refine the segmentation to create the final GT. We anticipate that this process of creating GT would require minimal post hoc correction. In this spirit, we conducted separate experiments in which we assume to have the ground truth of the test subjects at end expiration for scenario (1) and at end inspiration for scenarios (2) and (3).
Results: Across these three testing scenarios, DL-R achieves a best average location error (LE) of about 1 voxel for the lungs, kidneys, and spleen and 1.5 voxels for the liver and thoraco-abdominal skin, with a standard deviation (SD) of about 1 to 2 voxels. For the delineation approach, we achieve an average Dice coefficient (DC) of about 0.92 to 0.94 for the lungs, 0.82 for the kidneys, 0.90 for the liver, 0.81 for the spleen, and 0.93 for the thoraco-abdominal skin. The SD of DC is lower for the lungs, liver, and thoraco-abdominal skin, and slightly higher for the spleen and kidneys.
Conclusions: Motivated by applications in surgical planning for disorders such as TIS, AIS, and EOS, we have demonstrated an auto-segmentation system for thoraco-abdominal organs in dMRI acquisitions. The proposed setup copes well with the challenges posed by low resolution, motion blur, inadequate contrast, and image-intensity non-standardness. We are in the process of testing its effectiveness on dMRI data from TIS patients.
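The Dice coefficient (DC) reported here is the standard overlap measure between a predicted and a ground-truth mask, 2|A∩B| / (|A| + |B|). A minimal sketch, assuming binary NumPy masks:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```

A DC of 0.92 thus means the predicted and reference organ masks overlap in roughly 92% of their combined voxel mass.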
4
Jiang J, Shi R, Lu J, Wang M, Zhang Q, Zhang S, Wang L, Alberts I, Rominger A, Zuo C, Shi K. Detection of individual brain tau deposition in Alzheimer's disease based on latent feature-enhanced generative adversarial network. Neuroimage 2024; 291:120593. PMID: 38554780. DOI: 10.1016/j.neuroimage.2024.120593.
Abstract
OBJECTIVE: Conventional methods for interpreting tau PET imaging in Alzheimer's disease (AD), including visual assessment and semi-quantitative analysis of fixed hallmark regions, are insensitive to individual small lesions because of the spatiotemporal heterogeneity of the neuropathology. In this study, we propose a latent feature-enhanced generative adversarial network model for the automatic extraction of individual brain tau deposition regions.
METHODS: The proposed latent feature-enhanced generative adversarial network learns the distribution characteristics of tau PET images of cognitively normal individuals and outputs the abnormal distribution regions of patients. The model was trained and validated using 1131 tau PET images from multiple centres (with distinct races, i.e., Caucasian and Mongoloid) with different tau PET ligands. The overall quality of synthetic imaging was evaluated using structural similarity (SSIM), peak signal-to-noise ratio (PSNR), and mean square error (MSE). The model was compared with the fixed-template method for diagnosing and predicting AD.
RESULTS: The reconstructed images achieved good quality, with SSIM = 0.967 ± 0.008, PSNR = 31.377 ± 3.633, and MSE = 0.0011 ± 0.0007 in the independent test set. The model showed higher classification accuracy (AUC = 0.843, 95% CI = 0.796-0.890) and a stronger correlation with clinical scales (r = 0.508, P < 0.0001). The model also achieved superior predictive performance in the survival analysis of cognitive decline, with a higher hazard ratio of 3.662, P < 0.001.
INTERPRETATION: The LFGAN4Tau model presents a promising new approach for more accurate detection of individualized tau deposition. Its robustness across tracers and races makes it a potentially reliable diagnostic tool for AD in practice.
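The core idea, learning the normal distribution and flagging deviations from it, can be illustrated by a reconstruction-residual map. This is a toy stand-in assuming NumPy arrays and a z-score threshold; the paper's actual extraction procedure is GAN-based and considerably more involved:

```python
import numpy as np

def anomaly_map(image, reconstruction, z_thresh=2.0):
    """Flag voxels whose residual (input minus normal-appearance
    reconstruction) exceeds `z_thresh` standard deviations of the
    residual distribution; a stand-in for abnormal-region extraction."""
    residual = image - reconstruction
    z = (residual - residual.mean()) / (residual.std() + 1e-8)
    return z > z_thresh
```

A voxel with high tracer uptake that the normal-appearance model cannot reproduce yields a large residual and is flagged as abnormal deposition.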
Affiliation(s)
- Jiehui Jiang
- Institute of Biomedical Engineering, School of Life Sciences, Shanghai University, Shanghai, China
- Rong Shi
- School of Information and Communication Engineering, Shanghai University, Shanghai, China
- Jiaying Lu
- Department of Nuclear Medicine & PET Center, Huashan Hospital, Fudan University, Shanghai, China; National Research Center for Aging and Medicine and National Center for Neurological Disorders, Huashan Hospital, Fudan University, Shanghai, China
- Min Wang
- Institute of Biomedical Engineering, School of Life Sciences, Shanghai University, Shanghai, China
- Qi Zhang
- School of Information and Communication Engineering, Shanghai University, Shanghai, China
- Shuoyan Zhang
- School of Information and Communication Engineering, Shanghai University, Shanghai, China
- Luyao Wang
- Institute of Biomedical Engineering, School of Life Sciences, Shanghai University, Shanghai, China
- Ian Alberts
- Department of Nuclear Medicine, Inselspital, University of Bern, Bern, Switzerland
- Axel Rominger
- Department of Nuclear Medicine, Inselspital, University of Bern, Bern, Switzerland
- Chuantao Zuo
- Department of Nuclear Medicine & PET Center, Huashan Hospital, Fudan University, Shanghai, China; National Research Center for Aging and Medicine and National Center for Neurological Disorders, Huashan Hospital, Fudan University, Shanghai, China; Human Phenome Institute, Fudan University, Shanghai, China
- Kuangyu Shi
- Department of Nuclear Medicine, Inselspital, University of Bern, Bern, Switzerland; Department of Informatics, Technical University of Munich, Munich, Germany
5
Patel K, Xie Z, Yuan H, Islam SMS, Xie Y, He W, Zhang W, Gottlieb A, Chen H, Giancardo L, Knaack A, Fletcher E, Fornage M, Ji S, Zhi D. Unsupervised deep representation learning enables phenotype discovery for genetic association studies of brain imaging. Commun Biol 2024; 7:414. PMID: 38580839. PMCID: PMC10997628. DOI: 10.1038/s42003-024-06096-7.
Abstract
Understanding the genetic architecture of brain structure is challenging, partly due to difficulties in designing robust, non-biased descriptors of brain morphology. Until recently, brain measures for genome-wide association studies (GWAS) consisted of expert-defined or software-derived image-derived phenotypes (IDPs) that are often based on theoretical preconceptions or computed from limited amounts of data. Here, we present an approach to derive brain imaging phenotypes using unsupervised deep representation learning. We train a 3D convolutional autoencoder model with reconstruction loss on 6130 UK Biobank (UKBB) participants' T1 or T2-FLAIR (T2) brain MRIs to create a 128-dimensional representation known as Unsupervised Deep learning derived Imaging Phenotypes (UDIPs). GWAS of these UDIPs in held-out UKBB subjects (n = 22,880 discovery and n = 12,359/11,265 replication cohorts for T1/T2) identified 9457 significant SNPs organized into 97 independent genetic loci, of which 60 were replicated. Twenty-six loci were not reported in earlier T1- and T2-IDP-based UK Biobank GWAS. We developed a perturbation-based decoder interpretation approach to show that these loci are associated with UDIPs mapped to multiple relevant brain regions. Our results establish that unsupervised deep learning can derive robust, unbiased, heritable, and interpretable brain imaging phenotypes.
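The perturbation-based decoder interpretation can be sketched as perturbing one latent dimension and differencing the decoded outputs to see which image regions that dimension drives. The `decode` function, the toy latent vector, and the step size below are illustrative assumptions, not the authors' trained model:

```python
import numpy as np

def perturbation_saliency(decode, latent, dim, delta=1.0):
    """Map a latent dimension to image space by perturbing it by `delta`
    and taking the absolute difference of the decoded outputs.
    `decode` is any latent-vector -> image function."""
    perturbed = latent.copy()
    perturbed[dim] += delta
    return np.abs(decode(perturbed) - decode(latent))
```

With a linear toy decoder, only the image rows driven by the perturbed dimension light up, which is the intuition behind mapping UDIP dimensions to brain regions.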
Affiliation(s)
- Khush Patel
- McWilliams School of Biomedical Informatics, University of Texas Health Science Center, Houston, TX, 77030, USA
- Ziqian Xie
- McWilliams School of Biomedical Informatics, University of Texas Health Science Center, Houston, TX, 77030, USA
- Hao Yuan
- Department of Computer Science and Engineering, Texas A&M University, College Station, TX, 77843, USA
- Yaochen Xie
- Department of Computer Science and Engineering, Texas A&M University, College Station, TX, 77843, USA
- Wei He
- McWilliams School of Biomedical Informatics, University of Texas Health Science Center, Houston, TX, 77030, USA
- Wanheng Zhang
- School of Public Health, University of Texas Health Science Center, Houston, TX, 77030, USA
- Assaf Gottlieb
- McWilliams School of Biomedical Informatics, University of Texas Health Science Center, Houston, TX, 77030, USA
- Han Chen
- McWilliams School of Biomedical Informatics, University of Texas Health Science Center, Houston, TX, 77030, USA
- School of Public Health, University of Texas Health Science Center, Houston, TX, 77030, USA
- Luca Giancardo
- McWilliams School of Biomedical Informatics, University of Texas Health Science Center, Houston, TX, 77030, USA
- Alexander Knaack
- Department of Neurology and Imaging of Dementia and Aging (IDeA) Laboratory, University of California at Davis, Davis, CA, 95618, USA
- Evan Fletcher
- Department of Neurology and Imaging of Dementia and Aging (IDeA) Laboratory, University of California at Davis, Davis, CA, 95618, USA
- Myriam Fornage
- School of Public Health, University of Texas Health Science Center, Houston, TX, 77030, USA
- McGovern Medical School, University of Texas Health Science Center, Houston, TX, 77030, USA
- Shuiwang Ji
- Department of Computer Science and Engineering, Texas A&M University, College Station, TX, 77843, USA
- Degui Zhi
- McWilliams School of Biomedical Informatics, University of Texas Health Science Center, Houston, TX, 77030, USA
6
Raut P, Baldini G, Schöneck M, Caldeira L. Using a generative adversarial network to generate synthetic MRI images for multi-class automatic segmentation of brain tumors. Front Radiol 2024; 3:1336902. PMID: 38304344. PMCID: PMC10830800. DOI: 10.3389/fradi.2023.1336902.
Abstract
Challenging tasks such as lesion segmentation, classification, and analysis for the assessment of disease progression can be automated using deep learning (DL)-based algorithms. DL techniques such as 3D convolutional neural networks are trained on heterogeneous volumetric imaging data such as MRI, CT, and PET, among others. However, DL-based methods are usually only applicable when the required number of inputs is present; in the absence of one required input, the method cannot be used. By implementing a generative adversarial network (GAN), we aim to apply multi-label automatic segmentation of brain tumors to synthetic images when not all inputs are present. The implemented GAN is based on the Pix2Pix architecture and has been extended to a 3D framework named Pix2PixNIfTI. For this study, 1,251 patients of the BraTS2021 dataset, comprising T1w, T2w, T1CE, and FLAIR sequences with corresponding multi-label segmentations, were used to train the Pix2PixNIfTI model to generate synthetic MRI images of all image contrasts. The segmentation model, DeepMedic, was trained for brain tumor segmentation in a five-fold cross-validation manner and tested using the original inputs as the gold standard. The trained segmentation models were then applied to synthetic images replacing a missing input, in combination with the other original images, to assess the efficacy of the generated images in achieving multi-class segmentation. For multi-class segmentation using synthetic data or fewer inputs, the Dice scores were significantly reduced but remained in a similar range for the whole tumor compared with the original image segmentation (e.g., mean Dice of synthetic T2w prediction: NC, 0.74 ± 0.30; ED, 0.81 ± 0.15; CET, 0.84 ± 0.21; WT, 0.90 ± 0.08). Standard paired t-tests with multiple-comparison correction were performed to assess the difference between all regions (p < 0.05). The study concludes that Pix2PixNIfTI allows brain tumors to be segmented when one input image is missing.
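The abstract does not name its multiple-comparison correction method. As one common choice for controlling the family-wise error rate across the region-wise paired t-tests, a Holm-Bonferroni step-down sketch in pure Python (illustrative, not necessarily the authors' procedure):

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Holm-Bonferroni step-down correction: returns a list of booleans
    marking which hypotheses are rejected at family-wise error rate `alpha`.
    P-values are tested in ascending order against alpha / (m - rank);
    testing stops at the first non-rejection."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    rejected = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            rejected[i] = True
        else:
            break
    return rejected
```

Holm's procedure is uniformly more powerful than plain Bonferroni while offering the same family-wise error guarantee, which is why it is a frequent default for this kind of region-wise comparison.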
Affiliation(s)
- P. Raut
- Department of Pediatric Pulmonology, Erasmus Medical Center, Rotterdam, Netherlands
- Department of Radiology & Nuclear Medicine, Erasmus Medical Center, Rotterdam, Netherlands
- Institute for Diagnostic and Interventional Radiology, University Hospital Cologne, Cologne, Germany
- G. Baldini
- Institute of Interventional and Diagnostic Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- M. Schöneck
- Institute for Diagnostic and Interventional Radiology, University Hospital Cologne, Cologne, Germany
- L. Caldeira
- Institute for Diagnostic and Interventional Radiology, University Hospital Cologne, Cologne, Germany
7
Zeng X, Puonti O, Sayeed A, Herisse R, Mora J, Evancic K, Varadarajan D, Balbastre Y, Costantini I, Scardigli M, Ramazzotti J, DiMeo D, Mazzamuto G, Pesce L, Brady N, Cheli F, Pavone FS, Hof PR, Frost R, Augustinack J, van der Kouwe A, Iglesias JE, Fischl B. Segmentation of supragranular and infragranular layers in ultra-high resolution 7T ex vivo MRI of the human cerebral cortex. bioRxiv [Preprint] 2023. PMID: 38106176. PMCID: PMC10723438. DOI: 10.1101/2023.12.06.570416.
Abstract
Accurate labeling of specific layers in the human cerebral cortex is crucial for advancing our understanding of neurodevelopmental and neurodegenerative disorders. Leveraging recent advances in ultra-high resolution ex vivo MRI, we present a novel semi-supervised segmentation model capable of identifying supragranular and infragranular layers in ex vivo MRI with unprecedented precision. On a dataset of 17 whole-hemisphere ex vivo scans at 120 μm, we propose a multi-resolution U-Net framework (MUS) that integrates global and local structural information, achieving reliable segmentation maps of the entire hemisphere, with Dice scores over 0.8 for the supra- and infragranular layers. This enables surface modeling, atlas construction, anomaly detection in disease states, and cross-modality validation, while also paving the way for finer layer segmentation. Our approach offers a powerful tool for comprehensive neuroanatomical investigations and holds promise for advancing our mechanistic understanding of the progression of neurodegenerative diseases.
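Integrating global and local structural information can be illustrated by fusing a coarse, hemisphere-level probability map with a fine, patch-level one. The nearest-neighbour upsampling and the fixed fusion weight below are illustrative assumptions, not the MUS architecture itself:

```python
import numpy as np

def fuse_predictions(global_probs, local_probs, w_local=0.5):
    """Fuse a low-resolution global probability map with a high-resolution
    local one: nearest-neighbour upsample the global map to the local grid,
    then take a weighted average. Assumes integer resolution ratios."""
    fy = local_probs.shape[0] // global_probs.shape[0]
    fx = local_probs.shape[1] // global_probs.shape[1]
    upsampled = np.repeat(np.repeat(global_probs, fy, axis=0), fx, axis=1)
    return (1 - w_local) * upsampled + w_local * local_probs
```

The global map contributes hemisphere-scale context (where a layer can plausibly be), while the local map contributes fine boundary detail; the weighted average is the simplest possible combination of the two.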
Affiliation(s)
- Xiangrui Zeng
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Harvard Medical School, Department of Radiology, Boston, MA, USA
- Oula Puonti
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital - Amager and Hvidovre, Copenhagen, Denmark
- Areej Sayeed
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Harvard Medical School, Department of Radiology, Boston, MA, USA
- Rogeny Herisse
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Harvard Medical School, Department of Radiology, Boston, MA, USA
- Jocelyn Mora
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Harvard Medical School, Department of Radiology, Boston, MA, USA
- Kathryn Evancic
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Harvard Medical School, Department of Radiology, Boston, MA, USA
- Divya Varadarajan
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Harvard Medical School, Department of Radiology, Boston, MA, USA
- Yael Balbastre
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Harvard Medical School, Department of Radiology, Boston, MA, USA
- Irene Costantini
- National Research Council - National Institute of Optics (CNR-INO), Sesto Fiorentino, Italy
- European Laboratory for Non-Linear Spectroscopy (LENS), Sesto Fiorentino, Italy
- Department of Biology, University of Florence, Italy
- Marina Scardigli
- European Laboratory for Non-Linear Spectroscopy (LENS), Sesto Fiorentino, Italy
- Danila DiMeo
- European Laboratory for Non-Linear Spectroscopy (LENS), Sesto Fiorentino, Italy
- Giacomo Mazzamuto
- National Research Council - National Institute of Optics (CNR-INO), Sesto Fiorentino, Italy
- European Laboratory for Non-Linear Spectroscopy (LENS), Sesto Fiorentino, Italy
- Department of Physics and Astronomy, University of Florence, Italy
- Luca Pesce
- European Laboratory for Non-Linear Spectroscopy (LENS), Sesto Fiorentino, Italy
- Niamh Brady
- European Laboratory for Non-Linear Spectroscopy (LENS), Sesto Fiorentino, Italy
- Franco Cheli
- European Laboratory for Non-Linear Spectroscopy (LENS), Sesto Fiorentino, Italy
- Francesco Saverio Pavone
- National Research Council - National Institute of Optics (CNR-INO), Sesto Fiorentino, Italy
- European Laboratory for Non-Linear Spectroscopy (LENS), Sesto Fiorentino, Italy
- Department of Physics and Astronomy, University of Florence, Italy
- Patrick R. Hof
- Nash Family Department of Neuroscience and Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Robert Frost
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Harvard Medical School, Department of Radiology, Boston, MA, USA
- Jean Augustinack
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Harvard Medical School, Department of Radiology, Boston, MA, USA
- André van der Kouwe
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Harvard Medical School, Department of Radiology, Boston, MA, USA
- Juan Eugenio Iglesias
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Harvard Medical School, Department of Radiology, Boston, MA, USA
- Bruce Fischl
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Harvard Medical School, Department of Radiology, Boston, MA, USA
8
Papi Z, Fathi S, Dalvand F, Vali M, Yousefi A, Tabatabaei MH, Amouheidari A, Abedi I. Auto-segmentation and classification of glioma tumors with the goals of treatment response assessment using deep learning based on magnetic resonance imaging. Neuroinformatics 2023; 21:641-650. PMID: 37458971. DOI: 10.1007/s12021-023-09640-8.
Abstract
Glioma is the most common primary intracranial neoplasm in adults. Radiotherapy is a treatment approach for glioma patients, and magnetic resonance imaging (MRI) is a valuable diagnostic tool in treatment planning. Treatment response assessment in glioma patients is usually based on the Response Assessment in Neuro-Oncology (RANO) criteria, whose main limitation is its reliance on two-dimensional (2D) manual measurements. Deep learning (DL) has great potential in neuro-oncology to improve the accuracy of response assessment. In the current research, the BraTS 2018 Challenge dataset, comprising 210 HGG and 75 LGG cases, was first used to train a designed U-Net network for automatic tumor and intra-tumoral segmentation, followed by training of a designed classifier with transfer learning for grading tumors as HGG or LGG. The trained networks were then employed for segmentation and classification of local MRI images of 49 glioma patients pre- and post-radiotherapy. The segmentation results for the tumor and its intra-tumoral regions were used to determine the volumes of the different regions and to assess treatment response. This assessment demonstrated that radiotherapy is effective on the whole tumor and the enhancing region (p ≤ 0.05 at a 95% confidence level), while it did not affect the necrosis and peri-tumoral edema regions. This work demonstrates the potential of deep learning on MRI images as a beneficial tool for automated treatment response assessment, so that patients can obtain the best treatment.
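Deriving region volumes from a multi-label segmentation reduces to voxel counting times voxel volume. A minimal sketch assuming an integer label map and known voxel spacing; the label codes are hypothetical, not the BraTS convention:

```python
import numpy as np

def region_volume_ml(label_map, label, voxel_dims_mm=(1.0, 1.0, 1.0)):
    """Volume of one labelled intra-tumoral region in millilitres:
    number of voxels carrying `label` times the volume of one voxel.
    1 mL = 1000 mm^3."""
    voxel_mm3 = float(np.prod(voxel_dims_mm))
    return (label_map == label).sum() * voxel_mm3 / 1000.0
```

Comparing such volumes pre- and post-radiotherapy for each region (whole tumor, enhancing, necrosis, edema) is the quantity the response assessment above is built on.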
Affiliation(s)
- Zahra Papi
- Department of Medical Physics, School of Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
- Sina Fathi
- Department of Health Information Management, School of Health Management and Information Sciences, Iran University of Medical Sciences, Tehran, Iran
- Fatemeh Dalvand
- Department of Medical Radiation, Shahid Beheshti University, Tehran, Iran
- Mahsa Vali
- Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan, Iran
- Ali Yousefi
- Department of Management - Operations Research, University of Isfahan, Isfahan, Iran
- Iraj Abedi
- Department of Medical Physics, School of Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
9
Bouget D, Alsinan D, Gaitan V, Helland RH, Pedersen A, Solheim O, Reinertsen I. Raidionics: an open software for pre- and postoperative central nervous system tumor segmentation and standardized reporting. Sci Rep 2023; 13:15570. PMID: 37730820. PMCID: PMC10511510. DOI: 10.1038/s41598-023-42048-7.
Abstract
For patients suffering from central nervous system tumors, prognosis estimation, treatment decisions, and postoperative assessments are made from the analysis of a set of magnetic resonance (MR) scans. Currently, the lack of open tools for standardized and automatic tumor segmentation and for the generation of clinical reports incorporating relevant tumor characteristics leads to potential risks arising from the inherent subjectivity of such decisions. To tackle this problem, the proposed open-source software Raidionics has been developed, offering both a user-friendly graphical user interface and a stable processing backend. The software includes preoperative segmentation models for each of the most common tumor types (i.e., glioblastomas, lower-grade gliomas, meningiomas, and metastases), together with one early-postoperative glioblastoma segmentation model. Preoperative segmentation performance was quite homogeneous across the four brain tumor types, with an average Dice around 85% and patient-wise recall and precision around 95%. Postoperatively, performance was lower, with an average Dice of 41%. Overall, generating a standardized clinical report, including tumor segmentation and feature computation, requires about ten minutes on a regular laptop. The proposed Raidionics software is the first open solution enabling easy use of state-of-the-art segmentation models for all major tumor types, including preoperative and postsurgical standardized reports.
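Patient-wise recall and precision treat each patient as one detection event rather than counting voxels. A minimal sketch under the assumption that a patient counts as a true positive when the model finds a tumor the annotation confirms; the authors' exact matching criterion is not given here:

```python
def patient_wise_recall_precision(gt_positive, predicted_positive):
    """Patient-wise detection metrics over paired per-patient flags.
    recall = TP / (TP + FN), precision = TP / (TP + FP); empty
    denominators default to 1.0."""
    pairs = list(zip(gt_positive, predicted_positive))
    tp = sum(1 for g, p in pairs if g and p)
    fn = sum(1 for g, p in pairs if g and not p)
    fp = sum(1 for g, p in pairs if p and not g)
    recall = tp / (tp + fn) if (tp + fn) else 1.0
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    return recall, precision
```

This distinction matters for reading the results above: a model can have modest voxel-wise Dice (as in the postoperative setting) while still detecting the presence of residual tumor in most patients.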
Affiliation(s)
- David Bouget
- Department of Health Research, SINTEF Digital, 7465, Trondheim, Norway
- Demah Alsinan
- Department of Health Research, SINTEF Digital, 7465, Trondheim, Norway
- Valeria Gaitan
- Department of Health Research, SINTEF Digital, 7465, Trondheim, Norway
- Ragnhild Holden Helland
- Department of Health Research, SINTEF Digital, 7465, Trondheim, Norway
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology (NTNU), 7491, Trondheim, Norway
- André Pedersen
- Department of Health Research, SINTEF Digital, 7465, Trondheim, Norway
- Ole Solheim
- Department of Neurosurgery, St. Olavs Hospital, Trondheim University Hospital, 7491, Trondheim, Norway
- Department of Neuromedicine and Movement Science, Norwegian University of Science and Technology (NTNU), 7491, Trondheim, Norway
- Ingerid Reinertsen
- Department of Health Research, SINTEF Digital, 7465, Trondheim, Norway
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology (NTNU), 7491, Trondheim, Norway
10
Andrieux G, Das T, Griffin M, Straehle J, Paine SML, Beck J, Boerries M, Heiland DH, Smith SJ, Rahman R, Chakraborty S. Spatially resolved transcriptomic profiles reveal unique defining molecular features of infiltrative 5ALA-metabolizing cells associated with glioblastoma recurrence. Genome Med 2023; 15:48. [PMID: 37434262 PMCID: PMC10337060 DOI: 10.1186/s13073-023-01207-1]
Abstract
BACKGROUND Spatiotemporal heterogeneity originating from genomic and transcriptional variation contributes to subtype switching in isocitrate dehydrogenase-1 wild-type glioblastoma (GBM) prior to and upon recurrence. Fluorescence-guided neurosurgical resection using 5-aminolevulinic acid (5ALA) enables intraoperative visualization of infiltrative tumor outside the magnetic resonance imaging contrast-enhanced regions. The cell populations and functional states responsible for metabolizing 5ALA to fluorescence-active PpIX remain elusive. The close spatial proximity of 5ALA-metabolizing (5ALA+) cells to the residual disease remaining after surgery makes 5ALA+ biology an early a priori proxy of GBM recurrence, which is poorly understood. METHODS We performed spatially resolved bulk RNA profiling (SPRP) analysis of unsorted core, rim, and invasive-margin tissue, and of FACS-isolated 5ALA+/5ALA- cells from the invasive margin, across IDH-wt GBM patients (N = 10), coupled with histological, radiographic, and two-photon excitation fluorescence microscopic analyses. Deconvolution of SPRP followed by functional analyses was performed using the CIBERSORTx and UCell enrichment algorithms, respectively. We further investigated the spatial architecture of 5ALA+ enriched regions by analyzing spatial transcriptomics from an independent IDH-wt GBM cohort (N = 16). Lastly, we performed survival analysis using a Cox proportional-hazards model on large GBM cohorts. RESULTS SPRP analysis integrated with single-cell and spatial transcriptomics uncovered that GBM molecular subtype heterogeneity is likely to manifest regionally in a cell-type-specific manner. An infiltrative 5ALA+ population harboring transcriptionally concordant GBM and myeloid cells with a mesenchymal subtype, an active wound response, and a glycolytic metabolic signature was shown to reside within the invasive margin, spatially distinct from the tumor core. The spatial co-localization of infiltrating mesenchymal GBM and myeloid cells within the 5ALA+ region indicates that PpIX fluorescence can effectively be utilized to resect the immune-reactive zone beyond the tumor core. Finally, 5ALA+ gene signatures were associated with poor survival and recurrence in GBM, signifying that the transition from primary to recurrent GBM is not discrete but a continuum whereby primary infiltrative 5ALA+ remnant tumor cells closely resemble the eventual recurrent GBM. CONCLUSIONS Elucidating the unique molecular and cellular features of the 5ALA+ population within the tumor invasive margin opens unique possibilities for more effective treatments to delay or block GBM recurrence, and warrants commencing such treatments as early as possible after surgical resection of the primary neoplasm.
Affiliation(s)
- Geoffroy Andrieux: Institute of Medical Bioinformatics and Systems Medicine, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Tonmoy Das: Institute of Medical Bioinformatics and Systems Medicine, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany; Systems Cell-Signaling Laboratory, Department of Biochemistry and Molecular Biology, University of Dhaka, Dhaka, Bangladesh
- Michaela Griffin: Children's Brain Tumour Research Centre, Biodiscovery Institute, School of Medicine, University of Nottingham, Nottingham, UK
- Jakob Straehle: Department of Neurosurgery, Medical Center - University of Freiburg, Freiburg, Germany
- Simon M L Paine: Children's Brain Tumour Research Centre, Biodiscovery Institute, School of Medicine, University of Nottingham, Nottingham, UK
- Jürgen Beck: Department of Neurosurgery, Medical Center - University of Freiburg, Freiburg, Germany
- Melanie Boerries: Institute of Medical Bioinformatics and Systems Medicine, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany; German Cancer Consortium (DKTK) Partner Site Freiburg, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Dieter H Heiland: Department of Neurosurgery, Medical Center - University of Freiburg, Freiburg, Germany; German Cancer Consortium (DKTK) Partner Site Freiburg, German Cancer Research Center (DKFZ), Heidelberg, Germany; Microenvironment and Immunology Research Laboratory, Medical Center - University of Freiburg, Freiburg, Germany; Department of Neurological Surgery, Lou and Jean Malnati Brain Tumor Institute, Robert H. Lurie Comprehensive Cancer Center, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
- Stuart J Smith: Children's Brain Tumour Research Centre, Biodiscovery Institute, School of Medicine, University of Nottingham, Nottingham, UK
- Ruman Rahman: Children's Brain Tumour Research Centre, Biodiscovery Institute, School of Medicine, University of Nottingham, Nottingham, UK
- Sajib Chakraborty: Institute of Medical Bioinformatics and Systems Medicine, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany; Systems Cell-Signaling Laboratory, Department of Biochemistry and Molecular Biology, University of Dhaka, Dhaka, Bangladesh
11
Zeng Y, Zeng P, Shen S, Liang W, Li J, Zhao Z, Zhang K, Shen C. DCTR U-Net: automatic segmentation algorithm for medical images of nasopharyngeal cancer in the context of deep learning. Front Oncol 2023; 13:1190075. [PMID: 37546396 PMCID: PMC10402756 DOI: 10.3389/fonc.2023.1190075]
Abstract
Nasopharyngeal carcinoma (NPC) is a malignant tumor arising in the wall of the nasopharyngeal cavity and is prevalent in Southern China, Southeast Asia, North Africa, and the Middle East. According to studies, NPC is one of the most common malignant tumors in Hainan, China, with the highest incidence rate among otorhinolaryngological malignancies. We propose a new deep learning network model to improve segmentation accuracy for the target region of nasopharyngeal cancer. The model is based on a U-Net backbone, to which we add a Dilated Convolution Module, a Transformer Module, and a Residual Module. This design effectively addresses the restricted receptive field of convolutions and achieves global and local multi-scale feature fusion. In our experiments, the proposed network was trained and validated using 10-fold cross-validation on the records of 300 clinical patients. Results were evaluated using the Dice similarity coefficient (DSC) and the average symmetric surface distance (ASSD); the DSC and ASSD values were 0.852 and 0.544 mm, respectively. With the effective combination of the Dilated Convolution, Transformer, and Residual Modules, we significantly improved segmentation performance for the NPC target region.
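The average symmetric surface distance used above can be computed, in outline, with SciPy's Euclidean distance transform (a minimal sketch, not the authors' evaluation code; function names are ours, and real evaluations would pass the scan's voxel spacing via `spacing`):

```python
import numpy as np
from scipy import ndimage

def surface_voxels(mask: np.ndarray) -> np.ndarray:
    """Voxels of the mask that touch the background (its surface)."""
    mask = mask.astype(bool)
    return mask & ~ndimage.binary_erosion(mask)

def assd(a: np.ndarray, b: np.ndarray, spacing=1.0) -> float:
    """Average symmetric surface distance between two binary masks."""
    sa, sb = surface_voxels(a), surface_voxels(b)
    # Distance from every voxel to the nearest surface voxel of the other mask
    d_to_b = ndimage.distance_transform_edt(~sb, sampling=spacing)
    d_to_a = ndimage.distance_transform_edt(~sa, sampling=spacing)
    return (d_to_b[sa].sum() + d_to_a[sb].sum()) / (sa.sum() + sb.sum())

# Two single-voxel masks one voxel apart have ASSD = 1
a = np.zeros((4, 4), dtype=bool); a[1, 1] = True
b = np.zeros((4, 4), dtype=bool); b[1, 2] = True
print(assd(a, b))  # → 1.0
```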
Affiliation(s)
- Yan Zeng: State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China; Personnel Department, Hainan Medical University, Haikou, China
- PengHui Zeng: State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China
- ShaoDong Shen: State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China
- Wei Liang: State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China
- Jun Li: State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China
- Zhe Zhao: State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China
- Kun Zhang: State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China; School of Information Science and Technology, Hainan Normal University, Haikou, China
- Chong Shen: State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China
12
Golkar E, Parker D, Brem S, Almairac F, Verma R. CrOssing fiber Modeling in the Peritumoral Area using dMRI (COMPARI). bioRxiv 2023:2023.05.07.539770. [PMID: 37215003 PMCID: PMC10197585 DOI: 10.1101/2023.05.07.539770]
Abstract
Visualization of fiber tracts around the tumor is critical for neurosurgical planning and for preserving crucial structural connectivity during tumor resection. Biophysical modeling approaches estimate fiber tract orientations from the differential water diffusivity information of diffusion MRI (dMRI). However, the presence of edema and tumor infiltration makes it challenging to visualize crossing fiber tracts in the peritumoral region. Previous approaches proposed free-water modeling to compensate for the effect of water diffusivity in edema, but those methods were limited in estimating complex crossing fiber tracts. We propose a new cascaded multi-compartment model to estimate tissue microstructure in the presence of edema and pathological contaminants in the area surrounding brain tumors. In our model (COMPARI), the isotropic components of the diffusion signal, including free and hindered water, are eliminated, and the fiber orientation distribution (FOD) of the remaining signal is estimated. In simulated data, COMPARI accurately recovered fiber orientations in the presence of extracellular water. In a dataset of 23 patients with highly edematous brain tumors, the amplitudes of the FOD and the anisotropic index distribution within the peritumoral region were higher with COMPARI than with a recently proposed multi-compartment constrained-deconvolution model. In a selected patient with a metastatic brain tumor, we demonstrated COMPARI's ability to effectively model and eliminate water from the peritumoral region. The white matter bundles reconstructed with our model were qualitatively improved compared with those of other models and allowed identification of crossing fibers. In conclusion, removing isotropic components as proposed with COMPARI improved the biophysical modeling of dMRI in edema, providing information on crossing fibers and thereby enabling improved tractography in highly edematous brain tumors. This model may improve surgical planning tools to help achieve maximal safe resection of brain tumors.
Affiliation(s)
- Ehsan Golkar: DiCIPHR (Diffusion and Connectomics in Precision Healthcare Research) Lab, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Drew Parker: DiCIPHR (Diffusion and Connectomics in Precision Healthcare Research) Lab, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Steven Brem: Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Fabien Almairac: Neurosurgery Department, Pasteur 2 Hospital, University Hospital of Nice, France; UR2CA PIN, Université Côte d'Azur, France
- Ragini Verma: DiCIPHR (Diffusion and Connectomics in Precision Healthcare Research) Lab, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
13
Aggarwal M, Tiwari AK, Sarathi MP, Bijalwan A. An early detection and segmentation of Brain Tumor using Deep Neural Network. BMC Med Inform Decis Mak 2023; 23:78. [PMID: 37101176 PMCID: PMC10134539 DOI: 10.1186/s12911-023-02174-8]
Abstract
BACKGROUND Magnetic resonance imaging (MRI) brain tumor segmentation is crucial in the medical field; it can aid diagnosis and prognosis, overall growth prediction, tumor density measurement, and patient care planning. The difficulty in segmenting brain tumors stems primarily from the wide range of tumor structures, shapes, frequencies, and positions, and from visual variation in intensity and contrast. With recent advancements in Deep Neural Networks (DNNs) for image classification tasks, intelligent medical image segmentation is an exciting direction for brain tumor research. DNNs require substantial time and processing capability to train, owing in part to gradient-diffusion difficulties and model complexity. METHODS To overcome the gradient issue of DNNs, this work provides an efficient method for brain tumor segmentation based on an Improved Residual Network (ResNet). The existing ResNet can be improved by maintaining the details of all available connection links or by improving the projection shortcuts. These details are fed to later phases, allowing the improved ResNet to achieve higher precision and a faster learning process. RESULTS The proposed improved ResNet addresses all three main components of the existing ResNet: the flow of information through the network layers, the residual building block, and the projection shortcut. This approach minimizes computational cost and speeds up the process. CONCLUSION An experimental analysis of the BraTS 2020 MRI sample data reveals that the proposed methodology achieves competitive performance over traditional methods such as CNNs and Fully Convolutional Networks (FCNs), with more than 10% improvement in accuracy, recall, and F-measure.
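The residual building block and projection shortcut named above can be sketched abstractly (a toy NumPy version with dense layers standing in for convolutions; this is illustrative only and not the authors' improved ResNet):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2, w_proj=None):
    """Toy residual block: y = relu(f(x) + shortcut(x)), where f is two
    linear maps with a ReLU between. When output width differs from input
    width, a projection shortcut (x @ w_proj) matches the dimensions;
    otherwise the identity shortcut passes x through unchanged."""
    shortcut = x if w_proj is None else x @ w_proj
    return relu(relu(x @ w1) @ w2 + shortcut)

# With identity weights, f(x) = x for positive inputs, so the block doubles x
x = np.array([[1.0, 2.0]])
eye = np.eye(2)
print(residual_block(x, eye, eye))  # → [[2. 4.]]
```

The shortcut term is what keeps gradients flowing through deep stacks, which is the "gradient issue" the abstract refers to.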
Affiliation(s)
- Mukul Aggarwal: Dr. A.P.J. Abdul Kalam Technical University, Lucknow, Uttar Pradesh, India
- M Partha Sarathi: Amity School of Engineering and Technology, Amity University, Noida, Uttar Pradesh, India
- Anchit Bijalwan: Faculty of Electrical and Computer Engineering, Arba Minch University, Arba Minch, Ethiopia
14
Chougala RD, Havaldar RH. Systematic assessment and review of techniques based on tumour detection in brain using MRI. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2023. [DOI: 10.1080/21681163.2023.2181020]
Affiliation(s)
- Raviraj D. Chougala: Electronics and Communication Engineering, Angadi Institute of Technology & Management, Karnataka, India
- Havaldar R H: Department of Biomedical Engineering, KLE Technological University's Dr. M.S. Sheshgiri College of Engineering and Technology, Udyambag, Belgaum, Karnataka, India
15
Kn BP, Cs A, Mohammed A, Chitta KK, To XV, Srour H, Nasrallah F. An end-end deep learning framework for lesion segmentation on multi-contrast MR images-an exploratory study in a rat model of traumatic brain injury. Med Biol Eng Comput 2023; 61:847-865. [PMID: 36624356 DOI: 10.1007/s11517-022-02752-4]
Abstract
Traumatic brain injury (TBI) engenders traumatic necrosis and penumbra, areas of secondary neural injury that are crucial targets for therapeutic intervention. Manually segmenting areas of ongoing change such as necrosis, edema, hematoma, and inflammation is tedious, error-prone, and biased. Using multi-parametric MR data from a rodent model study, we demonstrate the effectiveness of an end-to-end deep learning global-attention-based UNet (GA-UNet) framework for automatic segmentation and quantification of TBI lesions. Longitudinal MR scans (2 h and 1, 3, 7, 14, 30, and 60 days) were performed on eight Sprague-Dawley rats after controlled cortical injury. Segmentation of the TBI lesion and its sub-regions was performed using 3D-UNet and GA-UNet. Dice statistics (DSI) and Hausdorff distance were calculated to assess performance, and data augmentation based on MR scan variations (bias, noise, blur, ghosting) was performed to develop a robust model. The training/validation median DSI for U-Net was 0.9368 with T2w and MPRAGE inputs, versus 0.9537 for GA-UNet. Testing accuracies were higher for GA-UNet than for U-Net, with a DSI of 0.8232 for the T2w-MPRAGE inputs. Longitudinally, necrosis remained constant while oligemia and penumbra decreased, and edema appeared around day 3 and increased with time. GA-UNet shows promise for multi-contrast MR image-based segmentation and quantification of TBI in large cohort studies.
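The symmetric Hausdorff distance used alongside Dice above can be sketched with SciPy on the foreground voxels of two binary masks (a minimal illustration, not the study's evaluation pipeline; the helper name is ours):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two binary masks' foregrounds."""
    pts_a = np.argwhere(mask_a)  # (N, ndim) coordinates of foreground voxels
    pts_b = np.argwhere(mask_b)
    # directed_hausdorff returns (distance, index_a, index_b);
    # the symmetric distance is the max of the two directed distances
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])

# Single-voxel masks three columns apart: Hausdorff distance 3
a = np.zeros((4, 4), dtype=bool); a[0, 0] = True
b = np.zeros((4, 4), dtype=bool); b[0, 3] = True
print(hausdorff(a, b))  # → 3.0
```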
Affiliation(s)
- Bhanu Prakash Kn: Clinical Data Analytics & Radiomics, Cellular Image Informatics, Bioinformatics Institute, A*STAR, 30 Biopolis St Matrix, Singapore, 138671, Singapore; Cellular Image Informatics, Bioinformatics Institute, A*STAR Horizontal Technology Centers, Singapore, Singapore
- Arvind Cs: Clinical Data Analytics & Radiomics, Cellular Image Informatics, Bioinformatics Institute, A*STAR, 30 Biopolis St Matrix, Singapore, 138671, Singapore
- Abdalla Mohammed: Queensland Brain Institute, The University of Queensland, Building 79, Upland Road, Saint Lucia, Brisbane, QLD, 4072, Australia
- Krishna Kanth Chitta: Signal and Image Processing Group, Laboratory of Molecular Imaging, Singapore Bioimaging Consortium, A*STAR, 02-02 Helios 11, Biopolis Way, Singapore, 138667, Singapore
- Xuan Vinh To: Queensland Brain Institute, The University of Queensland, Building 79, Upland Road, Saint Lucia, Brisbane, QLD, 4072, Australia
- Hussein Srour: Queensland Brain Institute, The University of Queensland, Building 79, Upland Road, Saint Lucia, Brisbane, QLD, 4072, Australia
- Fatima Nasrallah: Queensland Brain Institute, The University of Queensland, Building 79, Upland Road, Saint Lucia, Brisbane, QLD, 4072, Australia
16
Kazerooni AF, Arif S, Madhogarhia R, Khalili N, Haldar D, Bagheri S, Familiar AM, Anderson H, Haldar S, Tu W, Kim MC, Viswanathan K, Muller S, Prados M, Kline C, Vidal L, Aboian M, Storm PB, Resnick AC, Ware JB, Vossough A, Davatzikos C, Nabavizadeh A. Automated Tumor Segmentation and Brain Tissue Extraction from Multiparametric MRI of Pediatric Brain Tumors: A Multi-Institutional Study. medRxiv 2023:2023.01.02.22284037. [PMID: 36711966 PMCID: PMC9882407 DOI: 10.1101/2023.01.02.22284037]
Abstract
Background Brain tumors are the most common solid tumors and the leading cause of cancer-related death among all childhood cancers. Tumor segmentation is essential in surgical and treatment planning and in response assessment and monitoring. However, manual segmentation is time-consuming and has high interoperator variability. We present a multi-institutional deep learning-based method for automated brain extraction and segmentation of pediatric brain tumors based on multi-parametric MRI scans. Methods Multi-parametric scans (T1w, T1w-CE, T2, and T2-FLAIR) of 244 pediatric patients (n=215 internal and n=29 external cohorts) with de novo brain tumors, including a variety of tumor subtypes, were preprocessed and manually segmented to delineate the brain tissue and four tumor subregions: enhancing tumor (ET), non-enhancing tumor (NET), cystic components (CC), and peritumoral edema (ED). The internal cohort was split into training (n=151), validation (n=43), and withheld internal test (n=21) subsets. DeepMedic, a three-dimensional convolutional neural network, was trained and the model parameters were tuned. Finally, the network was evaluated on the withheld internal and external test cohorts. Results The Dice similarity score (median±SD) was 0.91±0.10/0.88±0.16 for the whole tumor, 0.73±0.27/0.84±0.29 for ET, 0.79±0.19/0.74±0.27 for the union of all non-enhancing components (i.e., NET, CC, ED), and 0.98±0.02 for brain tissue in the internal/external test sets, respectively. Conclusions Our proposed automated brain extraction and tumor subregion segmentation models demonstrated accurate performance on segmentation of the brain tissue and whole tumor regions in pediatric brain tumors and can facilitate detection of abnormal regions for further clinical measurement. Key Points: We propose automated tumor segmentation and brain extraction for pediatric MRI; the volumetric measurements using our models agree with ground-truth segmentations.
Importance of the Study Response assessment in pediatric brain tumors (PBTs) is currently based on bidirectional (2D) measurements, which underestimate the size of non-spherical and complex PBTs in children compared with volumetric (3D) methods. Automated methods are needed to reduce the manual burden and the intra- and inter-rater variability of segmenting tumor subregions and assessing volumetric change. Most currently available automated segmentation tools were developed on adult brain tumors and therefore do not generalize well to PBTs, which have different radiological appearances. To address this, we propose a deep learning (DL) auto-segmentation method that shows promising results on PBTs drawn from a publicly available large-scale imaging dataset (Children's Brain Tumor Network; CBTN) comprising multi-parametric MRI scans of multiple PBT types acquired across multiple institutions on different scanners and protocols. As a complement to tumor segmentation, we propose an automated DL model for brain tissue extraction.
17
Fathi Kazerooni A, Arif S, Madhogarhia R, Khalili N, Haldar D, Bagheri S, Familiar AM, Anderson H, Haldar S, Tu W, Chul Kim M, Viswanathan K, Muller S, Prados M, Kline C, Vidal L, Aboian M, Storm PB, Resnick AC, Ware JB, Vossough A, Davatzikos C, Nabavizadeh A. Automated tumor segmentation and brain tissue extraction from multiparametric MRI of pediatric brain tumors: A multi-institutional study. Neurooncol Adv 2023; 5:vdad027. [PMID: 37051331 PMCID: PMC10084501 DOI: 10.1093/noajnl/vdad027]
Abstract
Background Brain tumors are the most common solid tumors and the leading cause of cancer-related death among all childhood cancers. Tumor segmentation is essential in surgical and treatment planning and in response assessment and monitoring. However, manual segmentation is time-consuming and has high interoperator variability. We present a multi-institutional deep learning-based method for automated brain extraction and segmentation of pediatric brain tumors based on multi-parametric MRI scans. Methods Multi-parametric scans (T1w, T1w-CE, T2, and T2-FLAIR) of 244 pediatric patients (n = 215 internal and n = 29 external cohorts) with de novo brain tumors, including a variety of tumor subtypes, were preprocessed and manually segmented to delineate the brain tissue and four tumor subregions: enhancing tumor (ET), non-enhancing tumor (NET), cystic components (CC), and peritumoral edema (ED). The internal cohort was split into training (n = 151), validation (n = 43), and withheld internal test (n = 21) subsets. DeepMedic, a three-dimensional convolutional neural network, was trained and the model parameters were tuned. Finally, the network was evaluated on the withheld internal and external test cohorts. Results The Dice similarity score (median ± SD) was 0.91 ± 0.10/0.88 ± 0.16 for the whole tumor, 0.73 ± 0.27/0.84 ± 0.29 for ET, 0.79 ± 0.19/0.74 ± 0.27 for the union of all non-enhancing components (i.e., NET, CC, ED), and 0.98 ± 0.02 for brain tissue in the internal/external test sets, respectively. Conclusions Our proposed automated brain extraction and tumor subregion segmentation models demonstrated accurate performance on segmentation of the brain tissue and whole tumor regions in pediatric brain tumors and can facilitate detection of abnormal regions for further clinical measurement.
Affiliation(s)
- Anahita Fathi Kazerooni: Center for Data-Driven Discovery in Biomedicine (D3b), Children's Hospital of Philadelphia, Philadelphia, PA, USA; AI2D Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, PA, USA; Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Department of Neurosurgery, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Sherjeel Arif: Center for Data-Driven Discovery in Biomedicine (D3b), Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Rachel Madhogarhia: Center for Data-Driven Discovery in Biomedicine (D3b), Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Nastaran Khalili: Center for Data-Driven Discovery in Biomedicine (D3b), Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Debanjan Haldar: Center for Data-Driven Discovery in Biomedicine (D3b), Children's Hospital of Philadelphia, Philadelphia, PA, USA; Institute of Translational Medicine and Therapeutics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Sina Bagheri: Center for Data-Driven Discovery in Biomedicine (D3b), Children's Hospital of Philadelphia, Philadelphia, PA, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Ariana M Familiar: Center for Data-Driven Discovery in Biomedicine (D3b), Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Hannah Anderson: Center for Data-Driven Discovery in Biomedicine (D3b), Children's Hospital of Philadelphia, Philadelphia, PA, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Shuvanjan Haldar: Department of Biomedical Engineering, Rutgers University, New Brunswick, NJ, USA
- Wenxin Tu: College of Arts and Sciences, University of Pennsylvania, Philadelphia, PA, USA
- Meen Chul Kim: Center for Data-Driven Discovery in Biomedicine (D3b), Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Karthik Viswanathan: Center for Data-Driven Discovery in Biomedicine (D3b), Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Sabine Muller: Department of Neurology and Pediatrics, University of California San Francisco, San Francisco, CA, USA
- Michael Prados: Department of Neurological Surgery and Pediatrics, University of California San Francisco, San Francisco, CA, USA
- Cassie Kline: Division of Oncology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Lorenna Vidal: Division of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Mariam Aboian: Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Phillip B Storm: Center for Data-Driven Discovery in Biomedicine (D3b), Children's Hospital of Philadelphia, Philadelphia, PA, USA; Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Department of Neurosurgery, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Adam C Resnick: Center for Data-Driven Discovery in Biomedicine (D3b), Children's Hospital of Philadelphia, Philadelphia, PA, USA; Department of Neurosurgery, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Jeffrey B Ware: Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Arastoo Vossough: Center for Data-Driven Discovery in Biomedicine (D3b), Children's Hospital of Philadelphia, Philadelphia, PA, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Division of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Christos Davatzikos: AI2D Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, PA, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Ali Nabavizadeh: Center for Data-Driven Discovery in Biomedicine (D3b), Children's Hospital of Philadelphia, Philadelphia, PA, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
18
Hibi A, Jaberipour M, Cusimano MD, Bilbily A, Krishnan RG, Aviv RI, Tyrrell PN. Automated identification and quantification of traumatic brain injury from CT scans: Are we there yet? Medicine (Baltimore) 2022; 101:e31848. [PMID: 36451512 PMCID: PMC9704869 DOI: 10.1097/md.0000000000031848]
Abstract
BACKGROUND The purpose of this study was to conduct a systematic review of the availability and limitations of artificial intelligence (AI) approaches that could automatically identify and quantify computed tomography (CT) findings in traumatic brain injury (TBI). METHODS A systematic review, in accordance with PRISMA 2020 and the SPIRIT-AI extension guidelines, with a search of four databases (Medline, Embase, IEEE Xplore, and Web of Science), was performed to find AI studies that automate the clinical tasks of identifying and quantifying CT findings of TBI-related abnormalities. RESULTS A total of 531 unique publications were reviewed, of which 66 articles met our inclusion criteria. The following components of TBI identification and quantification were covered and automated by existing AI studies: identification of TBI-related abnormalities; classification of intracranial hemorrhage types; slice-, pixel-, and voxel-level localization of hemorrhage; measurement of midline shift; and measurement of hematoma volume. Automated identification of obliterated basal cisterns was not investigated in the existing AI studies. Most of the AI algorithms were based on deep neural networks trained on 2- or 3-dimensional CT imaging datasets. CONCLUSION We identified several important TBI-related CT findings that can be automatically identified and quantified with AI. A combination of these techniques may provide useful tools to enhance the reproducibility of TBI identification and quantification by supporting radiologists and clinicians in their TBI assessments and reducing subjective human factors.
Collapse
Affiliation(s)
- Atsuhiro Hibi
- Institute of Medical Science, University of Toronto, Toronto, Ontario, Canada
| | - Majid Jaberipour
- Department of Medical Imaging, University of Toronto, Toronto, Ontario, Canada
| | - Michael D. Cusimano
- Institute of Medical Science, University of Toronto, Toronto, Ontario, Canada
- Division of Neurosurgery, St Michael’s Hospital, University of Toronto, Toronto, Canada
| | - Alexander Bilbily
- Department of Medical Imaging, University of Toronto, Toronto, Ontario, Canada
- Sunnybrook Health Sciences Centre, Toronto, Canada
| | - Rahul G. Krishnan
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- Department of Laboratory Medicine & Pathobiology, University of Toronto, Toronto, Ontario, Canada
| | - Richard I. Aviv
- Department of Radiology, Radiation Oncology and Medical Physics, University of Ottawa, Ottawa, Ontario, Canada
| | - Pascal N. Tyrrell
- Institute of Medical Science, University of Toronto, Toronto, Ontario, Canada
- Department of Medical Imaging, University of Toronto, Toronto, Ontario, Canada
- Department of Statistical Sciences, University of Toronto, Toronto, Ontario, Canada
| |
Collapse
|
19
|
Zhao L, Ma J, Shao Y, Jia C, Zhao J, Yuan H. MM-UNet: A multimodality brain tumor segmentation network in MRI images. Front Oncol 2022; 12:950706. [PMID: 36059677 PMCID: PMC9434799 DOI: 10.3389/fonc.2022.950706] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2022] [Accepted: 07/20/2022] [Indexed: 11/30/2022] Open
Abstract
The global annual incidence of brain tumors is approximately seven per 100,000, accounting for 2% of all tumors. The associated mortality rate ranks first among children under 12 years of age and tenth among adults. Therefore, the localization and segmentation of brain tumor images constitute an active field of medical research. Traditional manual segmentation is time-consuming, laborious, and subjective. In addition, the information provided by a single imaging modality is often limited and cannot meet the needs of clinical application. Therefore, in this study, we developed a multimodality feature fusion network, MM-UNet, for brain tumor segmentation, adopting a multi-encoder and single-decoder structure. In the proposed network, each encoder independently extracts low-level features from the corresponding imaging modality, and a hybrid attention block strengthens the features. After fusion with the high-level semantics of the decoder path through skip connections, the decoder restores the pixel-level segmentation results. We evaluated the performance of the proposed model on the BraTS 2020 dataset. MM-UNet achieved a mean Dice score of 79.2% and a mean Hausdorff distance of 8.466, a consistent performance improvement over the U-Net, Attention U-Net, and ResUNet baseline models that demonstrates the effectiveness of the proposed model.
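The Dice score reported here measures voxel overlap between a predicted and a ground-truth mask. A minimal NumPy sketch of the metric (illustrative only; BraTS tooling computes it separately per tumor sub-region):

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return float((2.0 * inter + eps) / (pred.sum() + target.sum() + eps))

def mean_dice(preds, targets) -> float:
    """Average Dice over a list of (prediction, ground-truth) mask pairs."""
    return float(np.mean([dice_score(p, t) for p, t in zip(preds, targets)]))
```

A perfect prediction scores 1.0; two disjoint masks score (near) 0.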
Collapse
Affiliation(s)
- Liang Zhao
- School of Software Technology, Dalian University of Technology, Dalian, China
| | - Jiajun Ma
- School of Software Technology, Dalian University of Technology, Dalian, China
| | - Yu Shao
- School of Software Technology, Dalian University of Technology, Dalian, China
| | - Chaoran Jia
- School of Software Technology, Dalian University of Technology, Dalian, China
| | - Jingyuan Zhao
- Stem Cell Clinical Research Center, The First Affiliated Hospital of Dalian Medical University, Dalian, China
- *Correspondence: Jingyuan Zhao; Hong Yuan
| | - Hong Yuan
- The Affiliated Central Hospital, Dalian University of Technology, Dalian, China
- *Correspondence: Jingyuan Zhao; Hong Yuan
| |
Collapse
|
20
|
Li X, Jiang Y, Li M, Zhang J, Yin S, Luo H. MSFR-Net: Multi-modality and single-modality feature recalibration network for brain tumor segmentation. Med Phys 2022; 50:2249-2262. [PMID: 35962724 DOI: 10.1002/mp.15933] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2022] [Revised: 05/16/2022] [Accepted: 06/14/2022] [Indexed: 11/11/2022] Open
Abstract
BACKGROUND Accurate and automated brain tumor segmentation from multi-modality MR images plays a significant role in tumor treatment. However, existing approaches mainly focus on the fusion of multiple modalities while ignoring the correlation between individual modalities and tumor sub-components. For example, T2-weighted images show good visualization of edema, and T1-contrast images have good contrast between the enhancing tumor core and necrosis. In actual clinical practice, professional physicians also label tumors according to these characteristics. We design a method for brain tumor segmentation that utilizes both multi-modality fusion and single-modality characteristics. METHODS A multi-modality and single-modality feature recalibration network (MSFR-Net) is proposed for brain tumor segmentation from MR images. Specifically, multi-modality information and single-modality information are assigned to independent pathways. The multi-modality network explicitly learns the relationship between all modalities and all tumor sub-components. The single-modality network learns the relationship between a single modality and its highly correlated tumor sub-components. A dual recalibration module (DRM) is then designed to connect the parallel single-modality and multi-modality networks at multiple stages; its function is to unify the two types of features into the same feature space. RESULTS Experiments on the BraTS 2015 and BraTS 2018 datasets show that the proposed method is competitive with and superior to other state-of-the-art methods. The proposed method achieved a Dice coefficient of 0.86 and a Hausdorff distance of 4.82 on the BraTS 2018 dataset, and a Dice coefficient of 0.80, positive predictive value of 0.76, and sensitivity of 0.78 on the BraTS 2015 dataset. CONCLUSIONS This work mirrors the manual labeling process of physicians by introducing the correlation between single modalities and tumor sub-components into the segmentation network.
The method improves the segmentation performance of brain tumors and can be applied in clinical practice. The code of the proposed method is available at: https://github.com/xiangQAQ/MSFR-Net.
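The feature "recalibration" described in this abstract belongs to the family of channel-attention gating (squeeze-and-excitation being the best-known instance); the DRM itself is more elaborate. A toy NumPy sketch of channel-wise recalibration, with all weights and names hypothetical and not drawn from the paper:

```python
import numpy as np

def channel_recalibrate(feat: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Squeeze-and-excitation style recalibration of a (C, H, W) feature map.

    Global-average-pool each channel, pass the descriptors through a small
    two-layer gating network, then rescale every channel by its learned gate.
    w1 -- (hidden, C) bottleneck weights; w2 -- (C, hidden) expansion weights.
    """
    squeezed = feat.mean(axis=(1, 2))               # (C,) channel descriptors
    hidden = np.maximum(w1 @ squeezed, 0.0)         # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # sigmoid gates in (0, 1)
    return feat * gates[:, None, None]              # per-channel rescaling
```

With zero-initialized weights, every gate is sigmoid(0) = 0.5, so each channel is simply halved; training shifts the gates toward emphasizing informative channels.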
Collapse
Affiliation(s)
- Xiang Li
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, China
| | - Yuchen Jiang
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, China
| | - Minglei Li
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, China
| | - Jiusi Zhang
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, China
| | - Shen Yin
- Department of Mechanical and Industrial Engineering, Faculty of Engineering, Norwegian University of Science and Technology, Trondheim, 7034, Norway
| | - Hao Luo
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, China
| |
Collapse
|
21
|
Bouget D, Pedersen A, Jakola AS, Kavouridis V, Emblem KE, Eijgelaar RS, Kommers I, Ardon H, Barkhof F, Bello L, Berger MS, Conti Nibali M, Furtner J, Hervey-Jumper S, Idema AJS, Kiesel B, Kloet A, Mandonnet E, Müller DMJ, Robe PA, Rossi M, Sciortino T, Van den Brink WA, Wagemakers M, Widhalm G, Witte MG, Zwinderman AH, De Witt Hamer PC, Solheim O, Reinertsen I. Preoperative Brain Tumor Imaging: Models and Software for Segmentation and Standardized Reporting. Front Neurol 2022; 13:932219. [PMID: 35968292 PMCID: PMC9364874 DOI: 10.3389/fneur.2022.932219] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2022] [Accepted: 06/23/2022] [Indexed: 11/23/2022] Open
Abstract
For patients suffering from a brain tumor, prognosis estimation and treatment decisions are made by a multidisciplinary team based on a set of preoperative MR scans. Currently, the lack of standardized and automatic methods for tumor detection and for the generation of clinical reports incorporating a wide range of tumor characteristics represents a major hurdle. In this study, we investigate the most frequently occurring brain tumor types: glioblastomas, lower grade gliomas, meningiomas, and metastases, through four cohorts of up to 4,000 patients. Tumor segmentation models were trained using the AGU-Net architecture with different preprocessing steps and protocols. Segmentation performance was assessed in depth using a wide range of voxel- and patient-wise metrics covering volume, distance, and probabilistic aspects. Finally, two software solutions were developed, enabling easy use of the trained models and standardized generation of clinical reports: Raidionics and Raidionics-Slicer. Segmentation performance was quite homogeneous across the four brain tumor types, with an average true positive Dice ranging between 80 and 90%, patient-wise recall between 88 and 98%, and patient-wise precision around 95%. In conjunction with Dice, the most relevant additional metrics identified were the relative absolute volume difference, the variation of information, and the Hausdorff, Mahalanobis, and object average symmetric surface distances. With our Raidionics software, running on a desktop computer with CPU support, tumor segmentation can be performed in 16-54 s depending on the dimensions of the MRI volume. For the generation of a standardized clinical report, including tumor segmentation and feature computation, 5-15 min are necessary. All trained models have been made open-access, together with the source code for both software solutions and the validation metrics computation.
In the future, a method to convert results from a set of metrics into a final single score would be highly desirable for easier ranking across trained models. In addition, an automatic classification of the brain tumor type would be necessary to replace manual user input. Finally, the inclusion of post-operative segmentation in both software solutions will be key for generating complete post-operative standardized clinical reports.
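The patient-wise recall and precision quoted above are detection-level counts rather than voxel overlaps. A minimal sketch of how such metrics might be tallied, assuming each patient reduces to a boolean (ground truth, prediction) pair; the paper's actual criteria for counting a detection are more detailed:

```python
def patientwise_metrics(results):
    """Patient-wise recall and precision from per-patient detection results.

    results -- list of (has_tumor, predicted_tumor) boolean pairs, one per patient.
    Recall    = detected tumors / patients with a tumor.
    Precision = correct detections / all positive predictions.
    """
    tp = sum(1 for gt, pred in results if gt and pred)
    fn = sum(1 for gt, pred in results if gt and not pred)
    fp = sum(1 for gt, pred in results if not gt and pred)
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    return recall, precision
```

A cohort where 9 of 10 tumors are found with no false alarms yields recall 0.9 and precision 1.0, matching the shape of the figures reported above.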
Collapse
Affiliation(s)
- David Bouget
- Department of Health Research, SINTEF Digital, Trondheim, Norway
| | - André Pedersen
- Department of Health Research, SINTEF Digital, Trondheim, Norway
- Department of Clinical and Molecular Medicine, Norwegian University of Science and Technology, Trondheim, Norway
- Clinic of Surgery, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
| | - Asgeir S. Jakola
- Department of Neurosurgery, Sahlgrenska University Hospital, Gothenburg, Sweden
- Department of Clinical Neuroscience, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
| | - Vasileios Kavouridis
- Department of Neurosurgery, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
| | - Kyrre E. Emblem
- Division of Radiology and Nuclear Medicine, Department of Physics and Computational Radiology, Oslo University Hospital, Oslo, Norway
| | - Roelant S. Eijgelaar
- Department of Neurosurgery, Amsterdam University Medical Centers, Vrije Universiteit, Amsterdam, Netherlands
- Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, Amsterdam, Netherlands
| | - Ivar Kommers
- Department of Neurosurgery, Amsterdam University Medical Centers, Vrije Universiteit, Amsterdam, Netherlands
- Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, Amsterdam, Netherlands
| | - Hilko Ardon
- Department of Neurosurgery, Twee Steden Hospital, Tilburg, Netherlands
| | - Frederik Barkhof
- Department of Radiology and Nuclear Medicine, Amsterdam University Medical Centers, Vrije Universiteit, Amsterdam, Netherlands
- Institutes of Neurology and Healthcare Engineering, University College London, London, United Kingdom
| | - Lorenzo Bello
- Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Humanitas Research Hospital, Università degli Studi di Milano, Milan, Italy
| | - Mitchel S. Berger
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, United States
| | - Marco Conti Nibali
- Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Humanitas Research Hospital, Università degli Studi di Milano, Milan, Italy
| | - Julia Furtner
- Department of Biomedical Imaging and Image-Guided Therapy, Medical University Vienna, Wien, Austria
| | - Shawn Hervey-Jumper
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, United States
| | | | - Barbara Kiesel
- Department of Neurosurgery, Medical University Vienna, Wien, Austria
| | - Alfred Kloet
- Department of Neurosurgery, Haaglanden Medical Center, The Hague, Netherlands
| | | | - Domenique M. J. Müller
- Department of Neurosurgery, Amsterdam University Medical Centers, Vrije Universiteit, Amsterdam, Netherlands
- Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, Amsterdam, Netherlands
| | - Pierre A. Robe
- Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, Netherlands
| | - Marco Rossi
- Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Humanitas Research Hospital, Università degli Studi di Milano, Milan, Italy
| | - Tommaso Sciortino
- Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Humanitas Research Hospital, Università degli Studi di Milano, Milan, Italy
| | | | - Michiel Wagemakers
- Department of Neurosurgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
| | - Georg Widhalm
- Department of Neurosurgery, Medical University Vienna, Wien, Austria
| | - Marnix G. Witte
- Department of Radiation Oncology, Netherlands Cancer Institute, Amsterdam, Netherlands
| | - Aeilko H. Zwinderman
- Department of Clinical Epidemiology and Biostatistics, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, Netherlands
| | - Philip C. De Witt Hamer
- Department of Neurosurgery, Amsterdam University Medical Centers, Vrije Universiteit, Amsterdam, Netherlands
- Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, Amsterdam, Netherlands
| | - Ole Solheim
- Department of Neurosurgery, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
- Department of Neuromedicine and Movement Science, Norwegian University of Science and Technology, Trondheim, Norway
| | - Ingerid Reinertsen
- Department of Health Research, SINTEF Digital, Trondheim, Norway
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
| |
Collapse
|
22
|
Antonelli M, Reinke A, Bakas S, Farahani K, Kopp-Schneider A, Landman BA, Litjens G, Menze B, Ronneberger O, Summers RM, van Ginneken B, Bilello M, Bilic P, Christ PF, Do RKG, Gollub MJ, Heckers SH, Huisman H, Jarnagin WR, McHugo MK, Napel S, Pernicka JSG, Rhode K, Tobon-Gomez C, Vorontsov E, Meakin JA, Ourselin S, Wiesenfarth M, Arbeláez P, Bae B, Chen S, Daza L, Feng J, He B, Isensee F, Ji Y, Jia F, Kim I, Maier-Hein K, Merhof D, Pai A, Park B, Perslev M, Rezaiifar R, Rippel O, Sarasua I, Shen W, Son J, Wachinger C, Wang L, Wang Y, Xia Y, Xu D, Xu Z, Zheng Y, Simpson AL, Maier-Hein L, Cardoso MJ. The Medical Segmentation Decathlon. Nat Commun 2022; 13:4128. [PMID: 35840566 PMCID: PMC9287542 DOI: 10.1038/s41467-022-30695-9] [Citation(s) in RCA: 163] [Impact Index Per Article: 81.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2021] [Accepted: 05/13/2022] [Indexed: 02/05/2023] Open
Abstract
International challenges have become the de facto standard for comparative assessment of image analysis algorithms. Although segmentation is the most widely investigated medical image processing task, the various challenges have been organized to focus only on specific clinical tasks. We organized the Medical Segmentation Decathlon (MSD), a biomedical image analysis challenge in which algorithms compete across a multitude of both tasks and modalities, to investigate the hypothesis that a method capable of performing well on multiple tasks will generalize well to a previously unseen task and potentially outperform a custom-designed solution. MSD results confirmed this hypothesis; moreover, the MSD winner continued to generalize well to a wide range of other clinical problems over the following two years. Three main conclusions can be drawn from this study: (1) state-of-the-art image segmentation algorithms generalize well when retrained on unseen tasks; (2) consistent algorithmic performance across multiple tasks is a strong surrogate for algorithmic generalizability; (3) the training of accurate AI segmentation models is now commoditized to scientists who are not versed in AI model training.
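The MSD's emphasis on consistency across tasks can be illustrated with a mean-rank aggregation, a common leaderboard device: rank each algorithm within every task, then average its ranks. The sketch below is a toy illustration, not the MSD's actual ranking rule:

```python
def mean_rank(scores_by_task):
    """Aggregate per-task Dice scores into a mean rank per algorithm.

    scores_by_task -- dict: task name -> dict of algorithm name -> Dice score.
    A lower mean rank indicates more consistently strong performance
    across tasks, the surrogate for generalizability discussed above.
    """
    ranks = {}
    for task_scores in scores_by_task.values():
        ordered = sorted(task_scores, key=task_scores.get, reverse=True)
        for rank, algo in enumerate(ordered, start=1):
            ranks.setdefault(algo, []).append(rank)
    return {algo: sum(r) / len(r) for algo, r in ranks.items()}
```

An algorithm that is second-best everywhere can out-rank one that wins a single task but collapses on the rest, which is exactly the cross-task consistency the challenge rewards.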
Collapse
Affiliation(s)
- Michela Antonelli
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK.
| | - Annika Reinke
- Div. Computer Assisted Medical Interventions, German Cancer Research Center (DKFZ), Heidelberg, Germany
- HI Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Mathematics and Computer Science, University of Heidelberg, Heidelberg, Germany
| | - Spyridon Bakas
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Keyvan Farahani
- Center for Biomedical Informatics and Information Technology, National Cancer Institute (NIH), Bethesda, MD, USA
| | | | - Bennett A Landman
- Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
| | - Geert Litjens
- Radboud University Medical Center, Radboud Institute for Health Sciences, Nijmegen, The Netherlands
| | - Bjoern Menze
- Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
| | | | - Ronald M Summers
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center (NIH), Bethesda, MD, USA
| | - Bram van Ginneken
- Radboud University Medical Center, Radboud Institute for Health Sciences, Nijmegen, The Netherlands
| | - Michel Bilello
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
| | - Patrick Bilic
- Department of Informatics, Technische Universität München, München, Germany
| | - Patrick F Christ
- Department of Informatics, Technische Universität München, München, Germany
| | - Richard K G Do
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
| | - Marc J Gollub
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
| | - Stephan H Heckers
- Department of Psychiatry & Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
| | - Henkjan Huisman
- Radboud University Medical Center, Radboud Institute for Health Sciences, Nijmegen, The Netherlands
| | - William R Jarnagin
- Department of Surgery, Memorial Sloan Kettering Cancer Center, New York, NY, USA
| | - Maureen K McHugo
- Department of Psychiatry & Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
| | - Sandy Napel
- Department of Radiology, Stanford University, Stanford, CA, USA
| | | | - Kawal Rhode
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
| | - Catalina Tobon-Gomez
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
| | - Eugene Vorontsov
- Department of Computer Science and Software Engineering, École Polytechnique de Montréal, Montréal, QC, Canada
| | - James A Meakin
- Radboud University Medical Center, Radboud Institute for Health Sciences, Nijmegen, The Netherlands
| | - Sebastien Ourselin
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
| | - Manuel Wiesenfarth
- Div. Biostatistics, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | | | | | | | - Laura Daza
- Universidad de los Andes, Bogota, Colombia
| | - Jianjiang Feng
- Department of Automation, Tsinghua University, Beijing, China
| | - Baochun He
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Fabian Isensee
- HI Applied Computer Vision Lab, Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Yuanfeng Ji
- Department of Computer Science, Xiamen University, Xiamen, China
| | - Fucang Jia
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Ildoo Kim
- Kakao Brain, Seongnam-si, Republic of Korea
| | - Klaus Maier-Hein
- Cerebriu A/S, Copenhagen, Denmark
- Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
| | - Dorit Merhof
- Institute of Imaging & Computer Vision, RWTH Aachen University, Aachen, Germany
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
| | - Akshay Pai
- Cerebriu A/S, Copenhagen, Denmark
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
| | | | - Mathias Perslev
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
| | | | - Oliver Rippel
- Institute of Imaging & Computer Vision, RWTH Aachen University, Aachen, Germany
| | - Ignacio Sarasua
- Lab for Artificial Intelligence in Medical Imaging (AI-Med), Department of Child and Adolescent Psychiatry, University Hospital, LMU München, Germany
| | - Wei Shen
- MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China
| | | | - Christian Wachinger
- Lab for Artificial Intelligence in Medical Imaging (AI-Med), Department of Child and Adolescent Psychiatry, University Hospital, LMU München, Germany
| | - Liansheng Wang
- Department of Computer Science, Xiamen University, Xiamen, China
| | - Yan Wang
- Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai, China
| | - Yingda Xia
- Johns Hopkins University, Baltimore, MD, USA
| | | | - Zhanwei Xu
- Department of Automation, Tsinghua University, Beijing, China
| | | | - Amber L Simpson
- School of Computing/Department of Biomedical and Molecular Sciences, Queen's University, Kingston, ON, Canada
| | - Lena Maier-Hein
- Div. Computer Assisted Medical Interventions, German Cancer Research Center (DKFZ), Heidelberg, Germany
- HI Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Mathematics and Computer Science, University of Heidelberg, Heidelberg, Germany
- Medical Faculty, University of Heidelberg, Heidelberg, Germany
| | - M Jorge Cardoso
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
| |
Collapse
|
23
|
Abstract
Brain tumor segmentation is one of the most challenging problems in medical image analysis. The goal of brain tumor segmentation is to generate accurate delineation of brain tumor regions. In recent years, deep learning methods have shown promising performance in solving various computer vision problems, such as image classification, object detection and semantic segmentation. A number of deep learning based methods have been applied to brain tumor segmentation and achieved promising results. Considering the remarkable breakthroughs made by state-of-the-art technologies, we provide this survey with a comprehensive study of recently developed deep learning based brain tumor segmentation techniques. More than 150 scientific papers are selected and discussed in this survey, extensively covering technical aspects such as network architecture design, segmentation under imbalanced conditions, and multi-modality processes. We also provide insightful discussions for future development directions.
Collapse
|
24
|
Niyas S, Pawan S, Anand Kumar M, Rajan J. Medical image segmentation with 3D convolutional neural networks: A survey. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.04.065] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
|
25
|
Balwant M. A Review on Convolutional Neural Networks for Brain Tumor Segmentation: Methods, Datasets, Libraries, and Future Directions. Ing Rech Biomed 2022. [DOI: 10.1016/j.irbm.2022.05.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
26
|
AI and Clinical Decision Making: The Limitations and Risks of Computational Reductionism in Bowel Cancer Screening. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12073341] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/09/2022]
Abstract
Advances in artificial intelligence in healthcare are frequently promoted as ‘solutions’ to improve the accuracy, safety, and quality of clinical decisions, treatments, and care. Despite some diagnostic success, however, AI systems rely on forms of reductive reasoning and computational determinism that embed problematic assumptions about clinical decision-making and clinical practice. Clinician autonomy, experience, and judgement are reduced to inputs and outputs framed as binary or multi-class classification problems benchmarked against a clinician’s capacity to identify or predict disease states. This paper examines this reductive reasoning in AI systems for colorectal cancer (CRC) to highlight their limitations and risks: (1) in AI systems themselves due to inherent biases in (a) retrospective training datasets and (b) embedded assumptions in underlying AI architectures and algorithms; (2) in the problematic and limited evaluations being conducted on AI systems prior to system integration in clinical practice; and (3) in marginalising socio-technical factors in the context-dependent interactions between clinicians, their patients, and the broader health system. The paper argues that to optimise benefits from AI systems and to avoid negative unintended consequences for clinical decision-making and patient care, there is a need for more nuanced and balanced approaches to AI system deployment and evaluation in CRC.
Collapse
|
27
|
Bouget D, Pedersen A, Vanel J, Leira HO, Langø T. Mediastinal lymph nodes segmentation using 3D convolutional neural network ensembles and anatomical priors guiding. COMPUTER METHODS IN BIOMECHANICS AND BIOMEDICAL ENGINEERING: IMAGING & VISUALIZATION 2022. [DOI: 10.1080/21681163.2022.2043778] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Affiliation(s)
- David Bouget
- Department of Medical Technology, SINTEF, Trondheim, Norway
- Department of Circulation and Medical Imaging, NTNU, Center for Innovative Ultrasound Solutions, Trondheim, Norway
| | - André Pedersen
- Department of Medical Technology, SINTEF, Trondheim, Norway
| | - Johanna Vanel
- Department of Medical Technology, SINTEF, Trondheim, Norway
| | - Haakon O. Leira
- Department of Thoracic Medicine, St. Olavs Hospital, Trondheim, Norway
| | - Thomas Langø
- Department of Medical Technology, SINTEF, Trondheim, Norway
| |
Collapse
|
28
|
Xu W, Yang H, Zhang M, Cao Z, Pan X, Liu W. Brain tumor segmentation with corner attention and high-dimensional perceptual loss. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103438] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
|
29
|
Kong D, Liu X, Wang Y, Li D, Xue J. 3D hierarchical dual-attention fully convolutional networks with hybrid losses for diverse glioma segmentation. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2021.107692] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2023]
|
30
|
Machine Learning in Medical Imaging – Clinical Applications and Challenges in Computer Vision. Artif Intell Med 2022. [DOI: 10.1007/978-981-19-1223-8_4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
|
31
|
Bhalodiya JM, Lim Choi Keung SN, Arvanitis TN. Magnetic resonance image-based brain tumour segmentation methods: A systematic review. Digit Health 2022; 8:20552076221074122. [PMID: 35340900 PMCID: PMC8943308 DOI: 10.1177/20552076221074122] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2021] [Revised: 11/20/2021] [Accepted: 12/27/2021] [Indexed: 01/10/2023] Open
Abstract
Background Image segmentation is an essential step in the analysis and subsequent characterisation of brain tumours through magnetic resonance imaging. In the literature, segmentation methods are empowered by open-access magnetic resonance imaging datasets, such as the brain tumour segmentation dataset. Moreover, with the increased use of artificial intelligence methods in medical imaging, access to larger data repositories has become vital in method development. Purpose To determine which automated brain tumour segmentation techniques medical imaging specialists and clinicians can use to identify tumour components, compared with manual segmentation. Methods We conducted a systematic review of 572 brain tumour segmentation studies published during 2015-2020. We reviewed segmentation techniques using T1-weighted, T2-weighted, gadolinium-enhanced T1-weighted, fluid-attenuated inversion recovery, diffusion-weighted and perfusion-weighted magnetic resonance imaging sequences. Moreover, we assessed physics- or mathematics-based methods, deep learning methods, and software-based or semi-automatic methods, as applied to magnetic resonance imaging techniques. In particular, we synthesised each method by the magnetic resonance imaging sequences utilised, study population, technical approach (such as deep learning) and performance score measures (such as Dice score). Statistical tests We compared median Dice scores in segmenting the whole tumour, tumour core and enhanced tumour. Results We found that T1-weighted, gadolinium-enhanced T1-weighted, T2-weighted and fluid-attenuated inversion recovery magnetic resonance imaging are the most widely used sequences across segmentation algorithms. However, there is limited use of perfusion-weighted and diffusion-weighted magnetic resonance imaging. Moreover, we found that the U-Net deep learning architecture is the most cited, and has high accuracy (Dice score 0.9) for magnetic resonance imaging-based brain tumour segmentation.
Conclusion U-Net is a promising deep learning technology for magnetic resonance imaging-based brain tumour segmentation. The community should be encouraged to contribute open-access datasets so training, testing and validation of deep learning algorithms can be improved, particularly for diffusion- and perfusion-weighted magnetic resonance imaging, where there are limited datasets available.
Collapse
Affiliation(s)
- Jayendra M Bhalodiya
- Institute of Digital Healthcare, Warwick Manufacturing Group, The University of Warwick, UK
| | - Sarah N Lim Choi Keung
- Institute of Digital Healthcare, Warwick Manufacturing Group, The University of Warwick, UK
| | - Theodoros N Arvanitis
- Institute of Digital Healthcare, Warwick Manufacturing Group, The University of Warwick, UK
| |
Collapse
|
32
|
Eitel F, Albrecht JP, Weygandt M, Paul F, Ritter K. Patch individual filter layers in CNNs to harness the spatial homogeneity of neuroimaging data. Sci Rep 2021; 11:24447. [PMID: 34961762 PMCID: PMC8712523 DOI: 10.1038/s41598-021-03785-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2021] [Accepted: 11/22/2021] [Indexed: 11/09/2022] Open
Abstract
Convolutional neural networks (CNNs), as a type of deep learning, have been specifically designed for highly heterogeneous data, such as natural images. Neuroimaging data, however, is comparably homogeneous due to (1) the uniform structure of the brain and (2) additional efforts to spatially normalize the data to a standard template using linear and non-linear transformations. To harness the spatial homogeneity of neuroimaging data, we suggest here a new CNN architecture that combines the idea of hierarchical abstraction in CNNs with a prior on the spatial homogeneity of neuroimaging data. Whereas early layers are trained globally using standard convolutional layers, we introduce patch individual filters (PIF) for higher, more abstract layers. By learning filters in individual latent space patches without sharing weights, PIF layers can learn abstract features faster and specific to regions. We thoroughly evaluated PIF layers for three different tasks and data sets, namely sex classification on UK Biobank data, Alzheimer's disease detection on ADNI data and multiple sclerosis detection on private hospital data, and compared them with two baseline models, a standard CNN and a patch-based CNN. We obtained two main results: First, CNNs using PIF layers converge consistently faster than both baseline models, measured in run time (seconds) and number of iterations. Second, both the standard CNN and the PIF model outperformed the patch-based CNN in terms of balanced accuracy and receiver operating characteristic area under the curve (ROC AUC), with a maximal balanced accuracy (ROC AUC) of 94.21% (99.10%) for the sex classification task (PIF model), and 81.24% and 80.48% (88.89% and 87.35%) respectively for the Alzheimer's disease and multiple sclerosis detection tasks (standard CNN model). In conclusion, we demonstrated that CNNs using PIF layers result in faster convergence while obtaining the same predictive performance as a standard CNN.
To the best of our knowledge, this is the first study that introduces a prior in form of an inductive bias to harness spatial homogeneity of neuroimaging data.
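The core idea of PIF layers — unshared filters per spatial patch, in contrast to the weight sharing of a standard convolution — can be sketched in NumPy. This is an illustrative toy (the function name, shapes and filter count are invented for the example), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def pif_layer(fmap, filters, patch=4):
    """Patch-individual filters: split the feature map into non-overlapping
    patches and correlate each with its OWN 3x3 filter (no weight sharing),
    unlike a standard conv layer that slides one shared filter everywhere."""
    h, w = fmap.shape
    out, idx = [], 0
    for i in range(0, h, patch):
        row = []
        for j in range(0, w, patch):
            p = fmap[i:i + patch, j:j + patch]
            k = filters[idx]; idx += 1
            ph, pw = p.shape
            kh, kw = k.shape
            # 'valid' 2-D cross-correlation of this patch with its filter
            o = np.array([[np.sum(p[y:y + kh, x:x + kw] * k)
                           for x in range(pw - kw + 1)]
                          for y in range(ph - kh + 1)])
            row.append(o)
        out.append(np.hstack(row))
    return np.vstack(out)

fmap = rng.standard_normal((8, 8))          # one latent feature map
filters = rng.standard_normal((4, 3, 3))    # one unshared filter per patch
out = pif_layer(fmap, filters)              # 2x2 grid of 2x2 outputs -> 4x4
```

Because the four filters are independent, each can specialize to its brain region, which is the inductive bias the paper exploits.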
Collapse
Affiliation(s)
- Fabian Eitel
- Department of Psychiatry and Neurosciences | CCM, Berlin Center for Advanced Neuroimaging, Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health (BIH), 10117, Berlin, Germany
- Humboldt-Universität zu Berlin, 10117, Berlin, Germany
- Bernstein Center for Computational Neuroscience, 10117, Berlin, Germany
| | - Jan Philipp Albrecht
- Department of Psychiatry and Neurosciences | CCM, Berlin Center for Advanced Neuroimaging, Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health (BIH), 10117, Berlin, Germany
- Freie Universität Berlin, 14195, Berlin, Germany
| | - Martin Weygandt
- Department of Neurology, NeuroCure Clinical Research Center, Experimental and Clinical Research Center, Max Delbrück Center for Molecular Medicine, Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health (BIH), 10117, Berlin, Germany
| | - Friedemann Paul
- Department of Neurology, NeuroCure Clinical Research Center, Experimental and Clinical Research Center, Max Delbrück Center for Molecular Medicine, Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health (BIH), 10117, Berlin, Germany
- Einstein Center for Neurosciences Berlin, 10117, Berlin, Germany
| | - Kerstin Ritter
- Department of Psychiatry and Neurosciences | CCM, Berlin Center for Advanced Neuroimaging, Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health (BIH), 10117, Berlin, Germany.
- Bernstein Center for Computational Neuroscience, 10117, Berlin, Germany.
- Einstein Center for Neurosciences Berlin, 10117, Berlin, Germany.
| |
Collapse
|
33
|
Hajiabadi M, Alizadeh Savareh B, Emami H, Bashiri A. Comparison of wavelet transformations to enhance convolutional neural network performance in brain tumor segmentation. BMC Med Inform Decis Mak 2021; 21:327. [PMID: 34814907 PMCID: PMC8609809 DOI: 10.1186/s12911-021-01687-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2020] [Accepted: 11/10/2021] [Indexed: 12/04/2022] Open
Abstract
INTRODUCTION Due to the importance of segmentation of MRI images in identifying brain tumors, various methods, including deep learning, have been introduced for automatic brain tumor segmentation. On the other hand, combining methods can improve their performance; among such combinations is the use of the wavelet transform as an auxiliary element in deep networks. This study addresses the analysis of the requirements of such combinations. METHOD In this developmental study, different wavelet functions were used to compress brain MRI images and, ultimately, as an auxiliary element for improving the performance of a convolutional neural network in brain tumor segmentation. RESULTS Based on the results of the tests performed, the Daubechies1 function was most effective in enhancing network performance in segmenting MRI images and was able to balance performance and computational load. CONCLUSION Choosing a wavelet function to optimize the performance of a convolutional neural network should be based on the requirements of the problem, taking into account considerations such as computational load, processing time, and the performance of the wavelet function in optimizing CNN output for the intended task.
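The Daubechies1 wavelet singled out above is identical to the Haar wavelet. As an illustrative sketch of the kind of decomposition involved (using the simple averaging/differencing variant rather than the orthonormal 1/sqrt(2) scaling, and not the paper's code), one level of a 2-D Haar transform splits an image into a half-resolution approximation plus three detail sub-bands:

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar (Daubechies-1) wavelet transform,
    averaging/differencing variant. Returns the approximation (LL)
    and detail (LH, HL, HH) sub-bands at half the input resolution."""
    a = img.astype(float)
    # transform rows: pairwise averages (low-pass) and differences (high-pass)
    lo = (a[:, ::2] + a[:, 1::2]) / 2.0
    hi = (a[:, ::2] - a[:, 1::2]) / 2.0
    # transform columns of each result
    ll = (lo[::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

img = np.arange(16, dtype=float).reshape(4, 4)
ll, lh, hl, hh = haar_dwt2(img)  # each sub-band is 2x2
```

Feeding the compact LL band (or concatenating sub-bands) to a CNN is one common way a wavelet transform serves as the auxiliary element described above.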
Collapse
Affiliation(s)
- Mohamadreza Hajiabadi
- Brain and Spinal Cord Injury Research Center, Neuroscience Institute, Tehran University of Medical Sciences, Tehran, Iran
| | - Behrouz Alizadeh Savareh
- National Agency for Strategic Research in Medical Education, Tehran, Iran
- Shiraz University of Medical Sciences, Shiraz, Iran
| | - Hassan Emami
- Faculty of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
| | - Azadeh Bashiri
- Department of Health Information Management, School of Health Management and Information Sciences, Health Human Resources Research Center, Clinical Education Research Center, Shiraz University of Medical Sciences, Shiraz, Iran
| |
Collapse
|
34
|
Yi D, Grøvik E, Tong E, Iv M, Emblem KE, Nilsen LB, Saxhaug C, Latysheva A, Jacobsen KD, Helland Å, Zaharchuk G, Rubin D. MRI pulse sequence integration for deep-learning-based brain metastases segmentation. Med Phys 2021; 48:6020-6035. [PMID: 34405896 DOI: 10.1002/mp.15136] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2021] [Revised: 06/23/2021] [Accepted: 07/03/2021] [Indexed: 11/06/2022] Open
Abstract
PURPOSE Magnetic resonance (MR) imaging is an essential diagnostic tool in clinical medicine. Recently, a variety of deep-learning methods have been applied to segmentation tasks in medical images, with promising results for computer-aided diagnosis. For MR images, effectively integrating different pulse sequences is important to optimize performance. However, the best way to integrate different pulse sequences remains unclear. In addition, networks trained with a certain subset of pulse sequences as input are unable to perform inference when given only a subset of those pulse sequences. In this study, we evaluate multiple architectural features and characterize their effects on the task of metastasis segmentation, while developing a method to train a network robustly so that it can operate on any strict subset of the pulse sequences available during training. METHODS We use a 2.5D DeepLabv3 segmentation network to segment metastatic lesions on brain MR images with four pulse-sequence inputs. To study how best to integrate MR pulse sequences for this task, we consider (1) different pulse-sequence integration schemas, combining our features at early, middle, and late points within a deep network, (2) different modes of weight sharing for parallel network branches, and (3) a novel integration-level dropout layer, which allows the networks to perform inference robustly on inputs with only a subset of the pulse sequences available during training. RESULTS We find that levels of integration and modes of weight sharing that favor low variance work best in our regime of small amounts of training data (n = 100). By adding an input-level dropout layer, we could preserve the overall performance of these networks while allowing for inference on inputs with missing pulse sequences. We illustrate not only the generalizability of the network but also the utility of this robustness when applying the trained model to data from a different center, which does not use the same pulse sequences.
Finally, we apply network visualization methods to better understand which input features are most important for network performance. CONCLUSIONS Together, these results provide a framework for building networks with enhanced robustness to missing data while maintaining comparable performance in medical imaging applications.
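The input-level dropout idea above — randomly zeroing whole pulse sequences during training so the network tolerates missing ones at inference — can be sketched as follows. This is a minimal illustration under assumed shapes (sequences as leading channels of a slice tensor), not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(42)

def sequence_dropout(x, p_drop=0.5, rng=rng):
    """Input-level dropout over whole MR pulse sequences (channels):
    during training, each channel is zeroed with probability p_drop,
    keeping at least one, so the network learns to cope with missing
    sequences at inference time."""
    n_seq = x.shape[0]
    keep = rng.random(n_seq) >= p_drop
    if not keep.any():
        keep[rng.integers(n_seq)] = True  # never drop every sequence
    return x * keep[:, None, None], keep

x = np.ones((4, 8, 8))  # 4 pulse sequences, one 8x8 slice each
out, keep = sequence_dropout(x)
```

At inference, a missing sequence is simply supplied as zeros, matching the distribution the network saw in training.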
Collapse
Affiliation(s)
- Darvin Yi
- Department of Biomedical Data Science, Stanford University, Stanford, California, USA
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, Illinois, USA
| | - Endre Grøvik
- Department for Diagnostic Physics, Oslo University Hospital, Oslo, Norway
| | - Elizabeth Tong
- Department of Radiology, Stanford University, Stanford, California, USA
| | - Michael Iv
- Department of Radiology, Stanford University, Stanford, California, USA
| | - Kyrre Eeg Emblem
- Department for Diagnostic Physics, Oslo University Hospital, Oslo, Norway
| | | | - Cathrine Saxhaug
- Department of Radiology and Nuclear Medicine, Oslo University Hospital, Oslo, Norway
| | - Anna Latysheva
- Department of Radiology and Nuclear Medicine, Oslo University Hospital, Oslo, Norway
| | | | - Åslaug Helland
- Department of Oncology, Oslo University Hospital, Oslo, Norway
| | - Greg Zaharchuk
- Department of Radiology, Stanford University, Stanford, California, USA
| | - Daniel Rubin
- Department of Biomedical Data Science, Stanford University, Stanford, California, USA
- Department of Radiology, Stanford University, Stanford, California, USA
| |
Collapse
|
35
|
Phaphuangwittayakul A, Guo Y, Ying F, Dawod AY, Angkurawaranon S, Angkurawaranon C. An optimal deep learning framework for multi-type hemorrhagic lesions detection and quantification in head CT images for traumatic brain injury. APPL INTELL 2021; 52:7320-7338. [PMID: 34764620 PMCID: PMC8475375 DOI: 10.1007/s10489-021-02782-9] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/19/2021] [Indexed: 11/21/2022]
Abstract
Traumatic Brain Injury (TBI) can lead to intracranial hemorrhage (ICH), which has been identified as a major cause of death after trauma if it is not adequately diagnosed and properly treated within the first 24 hours. CT examination is widely preferred for urgent ICH diagnosis, enabling fast identification and detection of ICH regions. However, it requires clinical interpretation by experts to identify the subtypes of ICH, and it cannot provide the details needed for quantitative assessment, such as the volume and thickness of hemorrhagic lesions, which may have prognostic importance for decision-making on emergency treatment. In this paper, an optimal deep learning framework is proposed to assist quantitative assessment for ICH diagnosis and the accurate detection of different subtypes of ICH from head CT scans. Firstly, the format of the raw input data is converted from 3D DICOM to NIfTI. Secondly, a pre-trained multi-class semantic segmentation model is applied to each slice of the CT images to obtain a precise 3D mask of the whole ICH region. Thirdly, a fine-tuned classification neural network is employed to extract the key features from the raw input data and identify the subtypes of ICH. Finally, a quantitative assessment algorithm is adopted to automatically measure both thickness and volume via the 3D shape mask combined with the output probabilities of the classification network. The results of our extensive experiments demonstrate the effectiveness of the proposed framework, which achieves an average accuracy of 96.21% for three types of hemorrhage. The capability of our optimal classification model to distinguish between different types of lesion plays a significant role in reducing the false-positive rate relative to existing work.
Furthermore, the results suggest that our automatic quantitative assessment algorithm is effective in providing clinically relevant quantification of volume and thickness, which is more informative for decision-making on emergency surgical treatment than qualitative assessment through visual inspection.
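Measuring lesion volume and thickness from a 3-D mask, as the final step above describes, reduces to voxel counting scaled by voxel dimensions. The sketch below is a simplified illustration (the function name, the voxel spacing, and the "largest in-plane row extent" definition of thickness are assumptions for the example, not the paper's exact algorithm):

```python
import numpy as np

def hemorrhage_metrics(mask, voxel_mm=(5.0, 0.5, 0.5)):
    """Volume and maximum thickness of a lesion from a binary 3-D mask
    (slices, rows, cols). Volume = voxel count x single-voxel volume;
    thickness here = largest in-plane extent along the row axis across
    slices (a simplification of a clinical thickness measurement)."""
    dz, dy, dx = voxel_mm
    volume_ml = mask.sum() * dz * dy * dx / 1000.0  # mm^3 -> mL
    thickness_mm = 0.0
    for s in mask:
        rows = np.where(s.any(axis=1))[0]
        if rows.size:
            thickness_mm = max(thickness_mm, (rows[-1] - rows[0] + 1) * dy)
    return volume_ml, thickness_mm

mask = np.zeros((2, 10, 10), dtype=bool)
mask[0, 2:6, 3:7] = True  # a 4x4-voxel lesion in slice 0
vol, thick = hemorrhage_metrics(mask)  # 16 voxels * 1.25 mm^3 each
```

Weighting such counts by the classifier's per-voxel output probabilities, as the paper does, turns the hard count into a soft (expected) volume.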
Collapse
Affiliation(s)
- Aniwat Phaphuangwittayakul
- Department of Computer Science and Engineering, East China University of Science and Technology, Shanghai, China
| | - Yi Guo
- Department of Computer Science and Engineering, East China University of Science and Technology, Shanghai, China
- National Engineering Laboratory for Big Data Distribution and Exchange Technologies, Shanghai, China
- Shanghai Engineering Research Center of Big Data and Internet Audience, Shanghai, China
| | - Fangli Ying
- Department of Computer Science and Engineering, State Key Laboratory of Bioreactor Engineering, East China University of Science and Technology, Shanghai, China
| | - Ahmad Yahya Dawod
- International College of Digital Innovation (ICDI), Chiang Mai University, Chiang Mai, Thailand
| | - Salita Angkurawaranon
- Department of Radiology, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand
| | - Chaisiri Angkurawaranon
- Department of Family Medicine, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand
| |
Collapse
|
36
|
Ma S, Tang J, Guo F. Multi-Task Deep Supervision on Attention R2U-Net for Brain Tumor Segmentation. Front Oncol 2021; 11:704850. [PMID: 34604039 PMCID: PMC8484917 DOI: 10.3389/fonc.2021.704850] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2021] [Accepted: 08/26/2021] [Indexed: 11/24/2022] Open
Abstract
Accurate automatic medical image segmentation technology plays an important role in the diagnosis and treatment of brain tumors. However, simple deep learning models struggle to locate the tumor area and to obtain accurate segmentation boundaries. To address these problems, we propose a 2D end-to-end model of attention R2U-Net with multi-task deep supervision (MTDS). MTDS can extract rich semantic information from images, obtain accurate segmentation boundaries, and prevent overfitting in deep learning. Furthermore, we propose the attention pre-activation residual module (APR), an attention mechanism based on multi-scale fusion methods, which helps the network locate the tumor area accurately. Finally, we evaluate our proposed model on the public BraTS 2020 validation dataset, which consists of 125 cases, achieving competitive brain tumor segmentation results. Compared with state-of-the-art brain tumor segmentation methods, our method has a small parameter count and low computational cost.
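Deep supervision, as used in MTDS above, attaches an auxiliary loss to intermediate decoder outputs and combines them with fixed weights. A minimal sketch with a soft Dice loss follows; the weights and toy predictions are invented for illustration and the exact loss terms in the paper may differ:

```python
import numpy as np

def dice_loss(pred, truth, eps=1e-6):
    """Soft Dice loss for probabilistic predictions in [0, 1]."""
    inter = (pred * truth).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

def deep_supervision_loss(preds_by_scale, truth, weights):
    """Deep supervision: one auxiliary loss per decoder scale, combined
    with fixed weights so gradients reach early layers directly.
    `preds_by_scale` holds predictions already upsampled to the
    ground-truth resolution."""
    return sum(w * dice_loss(p, truth)
               for w, p in zip(weights, preds_by_scale))

truth = np.zeros((8, 8)); truth[2:6, 2:6] = 1.0
p_fine = truth.copy()            # perfect prediction at the finest scale
p_aux = np.full((8, 8), 0.5)     # uncertain auxiliary output
loss = deep_supervision_loss([p_fine, p_aux], truth, weights=[0.7, 0.3])
```

Only the finest-scale head is used at test time; the auxiliary heads exist purely to shape training.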
Collapse
Affiliation(s)
- Shiqiang Ma
- School of Computer Science and Technology, College of Intelligence and Computing, Tianjin University, Tianjin, China
| | - Jijun Tang
- School of Computer Science and Technology, College of Intelligence and Computing, Tianjin University, Tianjin, China
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Department of Computer Science and Engineering, University of South Carolina, Columbia, SC, United States
| | - Fei Guo
- School of Computer Science and Engineering, Central South University, Changsha, China
| |
Collapse
|
37
|
Spadea MF, Maspero M, Zaffino P, Seco J. Deep learning based synthetic-CT generation in radiotherapy and PET: A review. Med Phys 2021; 48:6537-6566. [PMID: 34407209 DOI: 10.1002/mp.15150] [Citation(s) in RCA: 94] [Impact Index Per Article: 31.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2021] [Revised: 06/06/2021] [Accepted: 07/13/2021] [Indexed: 01/22/2023] Open
Abstract
Recently, deep learning (DL)-based methods for the generation of synthetic computed tomography (sCT) have received significant research attention as an alternative to classical methods. We present here a systematic review of these methods, grouping them into three categories according to their clinical applications: (i) to replace computed tomography in magnetic resonance (MR)-based treatment planning, (ii) to facilitate cone-beam computed tomography-based image-guided adaptive radiotherapy, and (iii) to derive attenuation maps for the correction of positron emission tomography. Database searching was performed on journal articles published between January 2014 and December 2020. The key characteristics of the DL methods were extracted from each eligible study, and a comprehensive comparison among network architectures and metrics was reported. A detailed review of each category was given, highlighting essential contributions, identifying specific challenges, and summarizing the achievements. Lastly, the statistics of all the cited works were analyzed from various aspects, revealing the popularity, future trends and potential of DL-based sCT generation. The current status of DL-based sCT generation was evaluated, assessing the clinical readiness of the presented methods.
Collapse
Affiliation(s)
- Maria Francesca Spadea
- Department Experimental and Clinical Medicine, University "Magna Graecia" of Catanzaro, Catanzaro, 88100, Italy
| | - Matteo Maspero
- Division of Imaging & Oncology, Department of Radiotherapy, University Medical Center Utrecht, Heidelberglaan, Utrecht, The Netherlands.,Computational Imaging Group for MR Diagnostics & Therapy, Center for Image Sciences, University Medical Center Utrecht, Heidelberglaan, Utrecht, The Netherlands
| | - Paolo Zaffino
- Department Experimental and Clinical Medicine, University "Magna Graecia" of Catanzaro, Catanzaro, 88100, Italy
| | - Joao Seco
- Division of Biomedical Physics in Radiation Oncology, DKFZ German Cancer Research Center, Heidelberg, Germany.,Department of Physics and Astronomy, Heidelberg University, Heidelberg, Germany
| |
Collapse
|
38
|
Xiao Z, He K, Liu J, Zhang W. Multi-view hierarchical split network for brain tumor segmentation. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102897] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/20/2023]
|
39
|
Liu Z, Tong L, Chen L, Zhou F, Jiang Z, Zhang Q, Wang Y, Shan C, Li L, Zhou H. CANet: Context Aware Network for Brain Glioma Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:1763-1777. [PMID: 33720830 DOI: 10.1109/tmi.2021.3065918] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Automated segmentation of brain glioma plays an active role in diagnosis decisions, progression monitoring and surgery planning. Based on deep neural networks, previous studies have shown promising technologies for brain glioma segmentation. However, these approaches lack powerful strategies to incorporate contextual information about tumor cells and their surroundings, which has been proven a fundamental cue for dealing with local ambiguity. In this work, we propose a novel approach named Context-Aware Network (CANet) for brain glioma segmentation. CANet captures high-dimensional and discriminative features with contexts from both the convolutional space and feature interaction graphs. We further propose context-guided attentive conditional random fields, which can selectively aggregate features. We evaluate our method using the publicly accessible brain glioma segmentation datasets BRATS2017, BRATS2018 and BRATS2019. The experimental results show that the proposed algorithm achieves better or competitive performance against several state-of-the-art approaches under different segmentation metrics on the training and validation sets.
Collapse
|
40
|
Brossard C, Lemasson B, Attyé A, de Busschère JA, Payen JF, Barbier EL, Grèze J, Bouzat P. Contribution of CT-Scan Analysis by Artificial Intelligence to the Clinical Care of TBI Patients. Front Neurol 2021; 12:666875. [PMID: 34177773 PMCID: PMC8222716 DOI: 10.3389/fneur.2021.666875] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2021] [Accepted: 04/15/2021] [Indexed: 01/29/2023] Open
Abstract
The gold standard for diagnosing intracerebral lesions after traumatic brain injury (TBI) is the computed tomography (CT) scan, and owing to its accessibility and improved image quality, the global burden of CT scanning for TBI patients is increasing. Recent developments in the automated determination of traumatic brain lesions and in medical decision-making using artificial intelligence (AI) represent opportunities to help clinicians screen more patients, identify the nature and volume of lesions, and estimate patient outcome. This short review summarizes ongoing work on the use of AI and CT scans for patients with TBI.
Collapse
Affiliation(s)
| | - Benjamin Lemasson
- Université Grenoble Alpes, Inserm, CHU Grenoble Alpes, U1216, Grenoble Institut Neurosciences, Grenoble, France
| | | | | | | | | | | | | |
Collapse
|
41
|
Visual interpretability in 3D brain tumor segmentation network. Comput Biol Med 2021; 133:104410. [PMID: 33894501 DOI: 10.1016/j.compbiomed.2021.104410] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2020] [Revised: 04/15/2021] [Accepted: 04/15/2021] [Indexed: 11/23/2022]
Abstract
Medical image segmentation is a complex yet essential task for diagnostic procedures such as brain tumor detection. Several 3D Convolutional Neural Network (CNN) architectures have achieved remarkable results in brain tumor segmentation. However, due to the black-box nature of CNNs, integrating such models into decisions about diagnosis and treatment is high-risk in healthcare: the lack of interpretability makes it difficult to explain the rationale behind a model's predictions. Hence, the successful deployment of deep learning models in the medical domain requires accurate as well as transparent predictions. In this paper, we generate 3D visual explanations to analyze a 3D brain tumor segmentation model by extending a post-hoc interpretability technique. We explore the advantages of a gradient-free interpretability approach over gradient-based approaches. Moreover, we interpret the behavior of the segmentation model with respect to the input Magnetic Resonance Imaging (MRI) images and investigate the prediction strategy of the model. We also evaluate the interpretability methodology quantitatively for medical image segmentation tasks, validating that our visual explanations do not represent false information. We learn that the information captured by the model is coherent with the domain knowledge of human experts, making it more trustworthy. We use the BraTS-2018 dataset to train the 3D brain tumor segmentation network and perform interpretability experiments to generate visual explanations.
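Occlusion sensitivity is a classic gradient-free interpretability technique of the kind contrasted with gradient-based approaches above (this 2-D sketch and its toy "model" are illustrative assumptions, not the paper's extended method):

```python
import numpy as np

def occlusion_map(model, img, patch=4, baseline=0.0):
    """Gradient-free interpretability by occlusion: slide a patch over
    the input, replace it with a baseline value, and record how much
    the model's score drops. Large drops mark regions the model relies
    on. `model` is any callable mapping an image to a scalar score."""
    h, w = img.shape
    ref = model(img)
    heat = np.zeros_like(img, dtype=float)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = img.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            heat[i:i + patch, j:j + patch] = ref - model(occluded)
    return heat

# Toy "model": its score is the mean intensity of the top-left quadrant,
# so only occluding that quadrant should change the score.
toy = lambda im: im[:4, :4].mean()
img = np.ones((8, 8))
heat = occlusion_map(toy, img, patch=4)
```

Because it needs only forward passes, the same idea extends directly to 3-D volumes and to segmentation scores such as the Dice coefficient of a predicted mask.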
Collapse
|
42
|
Abdullah MAM, Alkassar S, Jebur B, Chambers J. LBTS-Net: A fast and accurate CNN model for brain tumour segmentation. Healthc Technol Lett 2021; 8:31-36. [PMID: 33850627 PMCID: PMC8024025 DOI: 10.1049/htl2.12005] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2020] [Revised: 01/29/2021] [Accepted: 02/05/2021] [Indexed: 12/21/2022] Open
Abstract
Accurate tumour segmentation in brain images is a complicated task due to the complex structure and irregular shape of the tumour. In this letter, our contribution is twofold: (1) a lightweight brain tumour segmentation network (LBTS-Net) is proposed for fast yet accurate brain tumour segmentation; (2) transfer learning is integrated within LBTS-Net to fine-tune the network and achieve robust tumour segmentation. To the best of our knowledge, this work is amongst the first in the literature to propose a lightweight convolutional neural network tailored for brain tumour segmentation. The proposed model is based on the VGG architecture, in which the number of convolution filters is cut in half in the first layer and depth-wise convolution is employed to lighten the VGG-16 and VGG-19 networks. Also, the original pixel labels in LBTS-Net are replaced by the new tumour labels to form the classification layer. Experimental results on the BRATS2015 database and comparisons with state-of-the-art methods confirmed the robustness of the proposed method, which achieves a global accuracy of 98.11% and a Dice score of 91%, while being much more computationally efficient, containing almost half the number of parameters of the standard VGG network.
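The parameter savings from depth-wise convolution mentioned above come from simple counting: a depth-wise separable layer applies one kxk filter per input channel followed by a 1x1 pointwise mix, instead of a full kxk filter per (input, output) channel pair. A back-of-the-envelope sketch (example channel counts are assumptions, biases ignored):

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard conv layer: one kxk filter per
    (input channel, output channel) pair. Biases ignored."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depth-wise separable conv: one kxk filter per input channel
    (depth-wise step), then a 1x1 pointwise conv to mix channels."""
    return c_in * k * k + c_in * c_out

std = conv_params(64, 128, 3)                 # 64*128*9  = 73,728
sep = depthwise_separable_params(64, 128, 3)  # 576 + 8,192 = 8,768
ratio = std / sep                             # roughly 8.4x fewer params
```

Replacing standard convolutions this way (plus halving the first-layer filters) is how a VGG backbone can be "lightened" substantially while keeping its overall topology.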
Collapse
Affiliation(s)
| | - Sinan Alkassar
- Computer and Information Engineering DepartmentNinevah UniversityMosulIraq
| | - Bilal Jebur
- Computer and Information Engineering DepartmentNinevah UniversityMosulIraq
| | | |
Collapse
|
43
|
Sarvamangala DR, Kulkarni RV. Convolutional neural networks in medical image understanding: a survey. EVOLUTIONARY INTELLIGENCE 2021; 15:1-22. [PMID: 33425040 PMCID: PMC7778711 DOI: 10.1007/s12065-020-00540-3] [Citation(s) in RCA: 142] [Impact Index Per Article: 47.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2020] [Revised: 10/05/2020] [Accepted: 11/22/2020] [Indexed: 12/23/2022]
Abstract
Imaging techniques are used to capture anomalies of the human body. The captured images must be understood for diagnosis, prognosis and treatment planning of the anomalies. Medical image understanding is generally performed by skilled medical professionals. However, the scarce availability of human experts, together with fatigue and the rough estimation procedures involved, limits the effectiveness of such image understanding. Convolutional neural networks (CNNs) are effective tools for image understanding and have outperformed human experts in many image understanding tasks. This article aims to provide a comprehensive survey of applications of CNNs in medical image understanding. The underlying objective is to motivate medical image understanding researchers to extensively apply CNNs in their research and diagnosis. A brief introduction to CNNs and a discussion of their various award-winning frameworks are presented. The major medical image understanding tasks, namely image classification, segmentation, localization and detection, are introduced. Applications of CNNs in medical image understanding of ailments of the brain, breast, lung and other organs are surveyed critically and comprehensively. A critical discussion of some of the challenges is also presented.
Collapse
|
44
|
|
45
|
Fathi Kazerooni A, Akbari H, Shukla G, Badve C, Rudie JD, Sako C, Rathore S, Bakas S, Pati S, Singh A, Bergman M, Ha SM, Kontos D, Nasrallah M, Bagley SJ, Lustig RA, O'Rourke DM, Sloan AE, Barnholtz-Sloan JS, Mohan S, Bilello M, Davatzikos C. Cancer Imaging Phenomics via CaPTk: Multi-Institutional Prediction of Progression-Free Survival and Pattern of Recurrence in Glioblastoma. JCO Clin Cancer Inform 2020; 4:234-244. [PMID: 32191542 PMCID: PMC7113126 DOI: 10.1200/cci.19.00121] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/20/2023] Open
Abstract
PURPOSE To construct a multi-institutional radiomic model that supports upfront prediction of progression-free survival (PFS) and recurrence pattern (RP) in patients diagnosed with glioblastoma multiforme (GBM) at the time of initial diagnosis. PATIENTS AND METHODS We retrospectively identified data for patients with newly diagnosed GBM from two institutions (institution 1, n = 65; institution 2, n = 15) who underwent gross total resection followed by standard adjuvant chemoradiation therapy, with pathologically confirmed recurrence, sufficient follow-up magnetic resonance imaging (MRI) scans to reliably determine PFS, and available presurgical multiparametric MRI (MP-MRI). The advanced software suite Cancer Imaging Phenomics Toolkit (CaPTk) was leveraged to analyze standard clinical brain MP-MRI scans. A rich set of imaging features was extracted from the MP-MRI scans acquired before the initial resection and was integrated into two distinct imaging signatures for predicting mean shorter or longer PFS and near or distant RP. The predictive signatures for PFS and RP were evaluated on the basis of different classification schemes: single-institutional analysis, multi-institutional analysis with random partitioning of the data into discovery and replication cohorts, and multi-institutional assessment with data from institution 1 as the discovery cohort and data from institution 2 as the replication cohort. RESULTS These predictors achieved cross-validated classification performance (ie, area under the receiver operating characteristic curve) of 0.88 (single-institution analysis) and 0.82 to 0.83 (multi-institution analysis) for prediction of PFS and 0.88 (single-institution analysis) and 0.56 to 0.71 (multi-institution analysis) for prediction of RP. CONCLUSION Imaging signatures of presurgical MP-MRI scans reveal relatively high predictability of time and location of GBM recurrence, subject to the patients receiving standard first-line chemoradiation therapy. 
Through its graphical user interface, CaPTk offers easy accessibility to advanced computational algorithms for deriving imaging signatures predictive of clinical outcome and could similarly be used for a variety of radiomic and radiogenomic analyses.
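The cross-validated AUCs quoted above can be read as a rank statistic: an AUC of 0.88 means that in 88% of short/long-PFS patient pairs, the signature scores the short-PFS patient higher. A minimal pure-Python sketch of this Mann-Whitney form of the AUC, using made-up scores rather than the study's data:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the fraction of (positive, negative) pairs in which the positive
    case receives the higher score (ties count as 0.5)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical signature outputs for short-PFS (positive) vs. long-PFS cases.
short_pfs = [0.9, 0.8, 0.7, 0.6]
long_pfs = [0.5, 0.4, 0.65, 0.2]
print(auc(short_pfs, long_pfs))  # 0.9375
```

A perfect signature would give 1.0; one no better than chance gives about 0.5, which is why the 0.56 lower bound for cross-institution RP prediction is close to uninformative.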
Affiliation(s)
- Anahita Fathi Kazerooni
- Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
| | - Hamed Akbari
- Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
| | - Gaurav Shukla
- Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA; Department of Radiation Oncology, Christiana Care Helen F. Graham Cancer Center and Research Institute, Newark, DE; Department of Radiation Oncology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
| | - Chaitra Badve
- Department of Radiology, University Hospitals-Seidman Cancer Center, Cleveland, OH; Case Comprehensive Cancer Center, Cleveland, OH
| | - Jeffrey D Rudie
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA; Department of Radiology and Biomedical Imaging, University of California at San Francisco, San Francisco, CA
| | - Chiharu Sako
- Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
| | - Saima Rathore
- Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
| | - Spyridon Bakas
- Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA; Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
| | - Sarthak Pati
- Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
| | - Ashish Singh
- Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
| | - Mark Bergman
- Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
| | - Sung Min Ha
- Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
| | - Despina Kontos
- Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
| | - MacLean Nasrallah
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
| | - Stephen J Bagley
- Abramson Cancer Center, University of Pennsylvania, Philadelphia, PA
| | - Robert A Lustig
- Department of Radiation Oncology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
| | - Donald M O'Rourke
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA; Glioblastoma Translational Center of Excellence, Abramson Cancer Center, University of Pennsylvania, Philadelphia, PA
| | - Andrew E Sloan
- Case Western Reserve University School of Medicine, Cleveland, OH; Case Comprehensive Cancer Center, Cleveland, OH; Department of Neurologic Surgery, University Hospitals-Seidman Cancer Center, Cleveland, OH
| | - Jill S Barnholtz-Sloan
- Case Western Reserve University School of Medicine, Cleveland, OH; Department of Population and Quantitative Health Sciences, Case Western Reserve University School of Medicine, Cleveland, OH
| | - Suyash Mohan
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
| | - Michel Bilello
- Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
| | - Christos Davatzikos
- Center for Biomedical Image Computing and Analytics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
46
Mitchell JR, Kamnitsas K, Singleton KW, Whitmire SA, Clark-Swanson KR, Ranjbar S, Rickertsen CR, Johnston SK, Egan KM, Rollison DE, Arrington J, Krecke KN, Passe TJ, Verdoorn JT, Nagelschneider AA, Carr CM, Port JD, Patton A, Campeau NG, Liebo GB, Eckel LJ, Wood CP, Hunt CH, Vibhute P, Nelson KD, Hoxworth JM, Patel AC, Chong BW, Ross JS, Boxerman JL, Vogelbaum MA, Hu LS, Glocker B, Swanson KR. Deep neural network to locate and segment brain tumors outperformed the expert technicians who created the training data. J Med Imaging (Bellingham) 2020; 7:055501. [PMID: 33102623 PMCID: PMC7567400 DOI: 10.1117/1.jmi.7.5.055501]
Abstract
Purpose: Deep learning (DL) algorithms have shown promising results for brain tumor segmentation in MRI. However, validation is required prior to routine clinical use. We report the first randomized and blinded comparison of DL and trained technician segmentations. Approach: We compiled a multi-institutional database of 741 pretreatment MRI exams. Each contained a postcontrast T1-weighted exam, a T2-weighted fluid-attenuated inversion recovery exam, and at least one technician-derived tumor segmentation. The database included 729 unique patients (470 males and 259 females). Of these exams, 641 were used for training the DL system, and 100 were reserved for testing. We developed a platform to enable qualitative, blinded, controlled assessment of lesion segmentations made by technicians and the DL method. On this platform, 20 neuroradiologists performed 400 side-by-side comparisons of segmentations on 100 test cases. They scored each segmentation between 0 (poor) and 10 (perfect). Agreement between segmentations from technicians and the DL method was also evaluated quantitatively using the Dice coefficient, which produces values between 0 (no overlap) and 1 (perfect overlap). Results: The neuroradiologists gave technician and DL segmentations mean scores of 6.97 and 7.31, respectively (p<0.00007). The DL method achieved a mean Dice coefficient of 0.87 on the test cases. Conclusions: This was the first objective comparison of automated and human segmentation using a blinded controlled assessment study. Our DL system learned to outperform its “human teachers” and produced output that was better, on average, than its training data.
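The quantitative agreement measure used above, the Dice coefficient, has the closed form 2|A∩B|/(|A|+|B|) over two binary masks. A small illustrative sketch on toy masks (not the study's code):

```python
def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks (flattened to 0/1
    sequences): 2*|A intersect B| / (|A| + |B|). Returns 1.0 for
    perfect overlap and 0.0 for no overlap."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0  # two empty masks agree

# Toy 1D example: a technician segmentation vs. a model segmentation.
technician = [0, 1, 1, 1, 0, 0]
model      = [0, 1, 1, 0, 0, 1]
print(round(dice(technician, model), 3))  # 0.667
```

On this scale, the DL system's mean test-set Dice of 0.87 indicates substantial voxelwise overlap with the technician-derived masks, separately from the blinded qualitative scores.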
Affiliation(s)
- Joseph Ross Mitchell
- H. Lee Moffitt Cancer Center and Research Institute, Department of Machine Learning, Tampa, Florida, United States
| | | | - Kyle W Singleton
- Mayo Clinic, Mathematical NeuroOncology Lab, Phoenix, Arizona, United States
| | - Scott A Whitmire
- Mayo Clinic, Mathematical NeuroOncology Lab, Phoenix, Arizona, United States
| | | | - Sara Ranjbar
- Mayo Clinic, Mathematical NeuroOncology Lab, Phoenix, Arizona, United States
| | | | - Sandra K Johnston
- Mayo Clinic, Mathematical NeuroOncology Lab, Phoenix, Arizona, United States; University of Washington, Department of Radiology, Seattle, Washington, United States
| | - Kathleen M Egan
- H. Lee Moffitt Cancer Center and Research Institute, Department of Cancer Epidemiology, Tampa, Florida, United States
| | - Dana E Rollison
- H. Lee Moffitt Cancer Center and Research Institute, Department of Cancer Epidemiology, Tampa, Florida, United States
| | - John Arrington
- H. Lee Moffitt Cancer Center and Research Institute, Department of Diagnostic Imaging and Interventional Radiology, Tampa, Florida, United States
| | - Karl N Krecke
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
| | - Theodore J Passe
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
| | - Jared T Verdoorn
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
| | | | - Carrie M Carr
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
| | - John D Port
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
| | - Alice Patton
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
| | - Norbert G Campeau
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
| | - Greta B Liebo
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
| | - Laurence J Eckel
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
| | - Christopher P Wood
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
| | - Christopher H Hunt
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
| | - Prasanna Vibhute
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
| | - Kent D Nelson
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
| | - Joseph M Hoxworth
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
| | - Ameet C Patel
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
| | - Brian W Chong
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
| | - Jeffrey S Ross
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
| | - Jerrold L Boxerman
- Rhode Island Hospital and Alpert Medical School of Brown University, Department of Diagnostic Imaging, Providence, Rhode Island, United States
| | - Michael A Vogelbaum
- H. Lee Moffitt Cancer Center and Research Institute, Department of Neurosurgery, Tampa, Florida, United States
| | - Leland S Hu
- Mayo Clinic, Mathematical NeuroOncology Lab, Phoenix, Arizona, United States; Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
| | - Ben Glocker
- Imperial College, Biomedical Image Analysis Group, London, United Kingdom
| | - Kristin R Swanson
- Mayo Clinic, Mathematical NeuroOncology Lab, Phoenix, Arizona, United States; Mayo Clinic, Department of Neurosurgery, Phoenix, Arizona, United States
47
Spiteri M, Guillemaut JY, Windridge D, Avula S, Kumar R, Lewis E. Fully-Automated Identification of Imaging Biomarkers for Post-Operative Cerebellar Mutism Syndrome Using Longitudinal Paediatric MRI. Neuroinformatics 2020; 18:151-162. [PMID: 31254271 PMCID: PMC6981105 DOI: 10.1007/s12021-019-09427-w]
Abstract
Post-operative cerebellar mutism syndrome (POPCMS) in children is a post-surgical complication which occurs following the resection of tumors within the brain stem and cerebellum. High resolution brain magnetic resonance (MR) images acquired at multiple time points across a patient’s treatment allow the quantification of localized changes caused by the progression of this syndrome. However, MR images are not necessarily acquired at regular intervals throughout treatment and are often not volumetric. This restricts the analysis to 2D space and causes difficulty in intra- and inter-subject comparison. To address these challenges, we have developed an automated image processing and analysis pipeline. Multi-slice 2D MR image slices are interpolated in space and time to produce a 4D volumetric MR image dataset providing a longitudinal representation of the cerebellum and brain stem at specific time points across treatment. The deformations within the brain over time are represented using a novel metric, the determinant of the Jacobian of the deformation field. This metric, together with the changing grey-level intensity of areas within the brain over time, is analyzed using machine learning techniques in order to identify biomarkers that correspond with the development of POPCMS following tumor resection. This study makes use of a fully automated approach which is not hypothesis-driven. As a result, we were able to automatically detect six potential biomarkers that are related to the development of POPCMS following tumor resection in the posterior fossa.
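The deformation metric described above, the determinant of the Jacobian of the deformation x → x + u(x), can be approximated on a discrete displacement field with central finite differences. A toy 2D sketch (illustrative only, not the authors' pipeline):

```python
def jacobian_det_2d(ux, uy, i, j):
    """Determinant of the Jacobian of the deformation x -> x + u(x) at
    interior grid point (i, j), via central finite differences on the
    displacement components ux, uy (nested lists; row index i = y,
    column index j = x, unit grid spacing). Values > 1 indicate local
    expansion, < 1 local contraction, and 1 no local volume change."""
    dux_dx = (ux[i][j + 1] - ux[i][j - 1]) / 2.0
    dux_dy = (ux[i + 1][j] - ux[i - 1][j]) / 2.0
    duy_dx = (uy[i][j + 1] - uy[i][j - 1]) / 2.0
    duy_dy = (uy[i + 1][j] - uy[i - 1][j]) / 2.0
    return (1.0 + dux_dx) * (1.0 + duy_dy) - dux_dy * duy_dx

# Zero displacement: the identity deformation has determinant 1 everywhere.
zero = [[0.0] * 3 for _ in range(3)]
print(jacobian_det_2d(zero, zero, 1, 1))  # 1.0
```

A uniform 10% stretch in both axes (u = 0.1x) would give a determinant of 1.1 × 1.1 = 1.21 at every interior point, matching the intuition of local volume gain.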
Affiliation(s)
- Michaela Spiteri
- Centre for Vision, Speech and Signal Processing (CVSSP), University of Surrey, Guildford, GU27XH, UK.
| | - Jean-Yves Guillemaut
- Centre for Vision, Speech and Signal Processing (CVSSP), University of Surrey, Guildford, GU27XH, UK
| | - David Windridge
- Centre for Vision, Speech and Signal Processing (CVSSP), University of Surrey, Guildford, GU27XH, UK
| | - Shivaram Avula
- Alder Hey Children's NHS Trust, E Prescot Rd, Liverpool, L14 5AB, UK
| | - Ram Kumar
- Alder Hey Children's NHS Trust, E Prescot Rd, Liverpool, L14 5AB, UK
| | - Emma Lewis
- Centre for Vision, Speech and Signal Processing (CVSSP), University of Surrey, Guildford, GU27XH, UK
48
Mecheter I, Alic L, Abbod M, Amira A, Ji J. MR Image-Based Attenuation Correction of Brain PET Imaging: Review of Literature on Machine Learning Approaches for Segmentation. J Digit Imaging 2020; 33:1224-1241. [PMID: 32607906 PMCID: PMC7573060 DOI: 10.1007/s10278-020-00361-x]
Abstract
The recent emergence of hybrid positron emission tomography/magnetic resonance (PET/MR) imaging has generated a great need for accurate MR image-based PET attenuation correction. MR image segmentation, as a robust and simple method for PET attenuation correction, has been clinically adopted in commercial PET/MR scanners. The general approach in this method is to segment the MR image into different tissue types, each assigned a constant attenuation value, analogous to an X-ray CT image. Machine learning techniques such as clustering, classification and deep networks are extensively used for brain MR image segmentation. However, only limited work has been reported on using deep learning in brain PET attenuation correction. In addition, there is a lack of clinical evaluation of machine learning methods in this application. The aim of this review is to study the use of machine learning methods for MR image segmentation and its application in attenuation correction for PET brain imaging. Furthermore, challenges and future opportunities in MR image-based PET attenuation correction are discussed.
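Segmentation-based attenuation correction as described above amounts to replacing each tissue label with a constant linear attenuation coefficient. A minimal sketch; the class names and μ values below are illustrative approximations at 511 keV, not scanner-calibrated constants:

```python
# Illustrative tissue classes mapped to linear attenuation coefficients
# (cm^-1, approximate values at 511 keV; for illustration only).
MU_511_KEV = {"air": 0.0, "soft_tissue": 0.096, "bone": 0.151}

def attenuation_map(labels):
    """Convert a segmentation label image (nested lists of class names)
    into a piecewise-constant attenuation map, as in segmentation-based
    MR attenuation correction for PET."""
    return [[MU_511_KEV[c] for c in row] for row in labels]

# Toy 2x2 "segmentation" of an MR slice.
seg = [["air", "soft_tissue"], ["soft_tissue", "bone"]]
print(attenuation_map(seg))  # [[0.0, 0.096], [0.096, 0.151]]
```

The machine learning methods surveyed in the review differ in how the label image is produced; once labels exist, the mapping to attenuation values is this simple lookup.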
Affiliation(s)
- Imene Mecheter
- Department of Electronic and Computer Engineering, Brunel University London, Uxbridge, UK.
- Department of Electrical and Computer Engineering, Texas A & M University at Qatar, Doha, Qatar.
| | - Lejla Alic
- Magnetic Detection and Imaging Group, Faculty of Science and Technology, University of Twente, Enschede, Netherlands
| | - Maysam Abbod
- Department of Electronic and Computer Engineering, Brunel University London, Uxbridge, UK
| | - Abbes Amira
- Institute of Artificial Intelligence, De Montfort University, Leicester, UK
| | - Jim Ji
- Department of Electrical and Computer Engineering, Texas A & M University at Qatar, Doha, Qatar
- Department of Electrical and Computer Engineering, Texas A & M University, College Station, TX, USA
49
Eijgelaar RS, Visser M, Müller DMJ, Barkhof F, Vrenken H, van Herk M, Bello L, Conti Nibali M, Rossi M, Sciortino T, Berger MS, Hervey-Jumper S, Kiesel B, Widhalm G, Furtner J, Robe PAJT, Mandonnet E, De Witt Hamer PC, de Munck JC, Witte MG. Robust Deep Learning-based Segmentation of Glioblastoma on Routine Clinical MRI Scans Using Sparsified Training. Radiol Artif Intell 2020; 2:e190103. [PMID: 33937837 PMCID: PMC8082349 DOI: 10.1148/ryai.2020190103]
Abstract
PURPOSE To improve the robustness of deep learning-based glioblastoma segmentation in a clinical setting with sparsified datasets. MATERIALS AND METHODS In this retrospective study, preoperative T1-weighted, T2-weighted, T2-weighted fluid-attenuated inversion recovery, and postcontrast T1-weighted MRI from 117 patients (median age, 64 years; interquartile range [IQR], 55-73 years; 76 men) included within the Multimodal Brain Tumor Image Segmentation (BraTS) dataset plus a clinical dataset (2012-2013) with similar imaging modalities of 634 patients (median age, 59 years; IQR, 49-69 years; 382 men) with glioblastoma from six hospitals were used. Expert tumor delineations on the postcontrast images were available, but for various clinical datasets, one or more sequences were missing. The convolutional neural network DeepMedic was trained on combinations of complete and incomplete data with and without site-specific data. Sparsified training was introduced, which randomly simulated missing sequences during training. The effects of sparsified training and center-specific training were tested using Wilcoxon signed rank tests for paired measurements. RESULTS A model trained exclusively on BraTS data reached a median Dice score of 0.81 for segmentation on BraTS test data but only 0.49 on the clinical data. Sparsified training improved performance (adjusted P < .05), even when excluding test data with missing sequences, to a median Dice score of 0.67. Inclusion of site-specific data during sparsified training led to higher model performance, with Dice scores greater than 0.8, on par with a model based on all complete and incomplete data. For the model using BraTS and clinical training data, inclusion of site-specific data or sparsified training was of no consequence. CONCLUSION Accurate and automatic segmentation of glioblastoma on clinical scans is feasible using a model based on large, heterogeneous, and partially incomplete datasets.
Sparsified training may boost the performance of a smaller model based on public and site-specific data. Supplemental material is available for this article. Published under a CC BY 4.0 license.
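Sparsified training as described above amounts to randomly hiding input sequences while training, so the model learns to cope with missing modalities at test time. A generic sketch of the idea (the study used DeepMedic; the `sparsify` function below is a hypothetical illustration of the augmentation step, not their implementation):

```python
import random

def sparsify(sample, p_drop=0.5, rng=random):
    """Simulate missing MRI sequences during training: each modality
    channel is independently dropped (zeroed out) with probability
    p_drop, but at least one channel is always kept. `sample` maps a
    modality name (e.g. "T1") to its image; zeroing marks it missing."""
    kept = [m for m in sample if rng.random() >= p_drop]
    if not kept:  # never hide every sequence at once
        kept = [rng.choice(list(sample))]
    return {m: (img if m in kept else [0.0] * len(img))
            for m, img in sample.items()}

# One training sample with four sequences (tiny toy "images").
batch = {"T1": [1.0, 2.0], "T2": [3.0, 4.0],
         "FLAIR": [5.0, 6.0], "T1c": [7.0, 8.0]}
out = sparsify(batch, p_drop=0.5, rng=random.Random(0))
```

Applying this augmentation on every training iteration exposes the network to the same incomplete-input patterns found in routine clinical scans, which is the mechanism behind the reported Dice improvement on data with missing sequences.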
Affiliation(s)
- Roelant S Eijgelaar
- Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands (R.S.E., M.v.H., M.G.W.); Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (M.V., F.B., H.V., J.C.d.M.); Neurosurgical Center Amsterdam, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (D.M.J.M., P.C.D.W.H.); Institutes of Neurology & Healthcare Engineering, University College London, London, England (F.B.); Faculty of Biology, Medicine & Health, Division of Cancer Sciences, University of Manchester and Christie NHS Trust, Manchester, England (M.v.H.); Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Università degli Studi di Milano, Humanitas Research Hospital, IRCCS, Milan, Italy (L.B., M.C.N., M.R., T.S.); Department of Neurologic Surgery, University of California-San Francisco, San Francisco, Calif (M.S.B., S.H.J.); Department of Neurosurgery, Medical University Vienna, Vienna, Austria (B.K., G.W.); Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, Vienna, Austria (J.F.); Department of Neurology & Neurosurgery, University Medical Center Utrecht, Utrecht, the Netherlands (P.A.J.T.R.); and Department of Neurologic Surgery, Hôpital Lariboisière, Paris, France (E.M.)
| | - Martin Visser
- Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands (R.S.E., M.v.H., M.G.W.); Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (M.V., F.B., H.V., J.C.d.M.); Neurosurgical Center Amsterdam, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (D.M.J.M., P.C.D.W.H.); Institutes of Neurology & Healthcare Engineering, University College London, London, England (F.B.); Faculty of Biology, Medicine & Health, Division of Cancer Sciences, University of Manchester and Christie NHS Trust, Manchester, England (M.v.H.); Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Università degli Studi di Milano, Humanitas Research Hospital, IRCCS, Milan, Italy (L.B., M.C.N., M.R., T.S.); Department of Neurologic Surgery, University of California-San Francisco, San Francisco, Calif (M.S.B., S.H.J.); Department of Neurosurgery, Medical University Vienna, Vienna, Austria (B.K., G.W.); Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, Vienna, Austria (J.F.); Department of Neurology & Neurosurgery, University Medical Center Utrecht, Utrecht, the Netherlands (P.A.J.T.R.); and Department of Neurologic Surgery, Hôpital Lariboisière, Paris, France (E.M.)
| | - Domenique M J Müller
- Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands (R.S.E., M.v.H., M.G.W.); Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (M.V., F.B., H.V., J.C.d.M.); Neurosurgical Center Amsterdam, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (D.M.J.M., P.C.D.W.H.); Institutes of Neurology & Healthcare Engineering, University College London, London, England (F.B.); Faculty of Biology, Medicine & Health, Division of Cancer Sciences, University of Manchester and Christie NHS Trust, Manchester, England (M.v.H.); Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Università degli Studi di Milano, Humanitas Research Hospital, IRCCS, Milan, Italy (L.B., M.C.N., M.R., T.S.); Department of Neurologic Surgery, University of California-San Francisco, San Francisco, Calif (M.S.B., S.H.J.); Department of Neurosurgery, Medical University Vienna, Vienna, Austria (B.K., G.W.); Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, Vienna, Austria (J.F.); Department of Neurology & Neurosurgery, University Medical Center Utrecht, Utrecht, the Netherlands (P.A.J.T.R.); and Department of Neurologic Surgery, Hôpital Lariboisière, Paris, France (E.M.)
| | - Frederik Barkhof
- Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands (R.S.E., M.v.H., M.G.W.); Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (M.V., F.B., H.V., J.C.d.M.); Neurosurgical Center Amsterdam, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (D.M.J.M., P.C.D.W.H.); Institutes of Neurology & Healthcare Engineering, University College London, London, England (F.B.); Faculty of Biology, Medicine & Health, Division of Cancer Sciences, University of Manchester and Christie NHS Trust, Manchester, England (M.v.H.); Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Università degli Studi di Milano, Humanitas Research Hospital, IRCCS, Milan, Italy (L.B., M.C.N., M.R., T.S.); Department of Neurologic Surgery, University of California-San Francisco, San Francisco, Calif (M.S.B., S.H.J.); Department of Neurosurgery, Medical University Vienna, Vienna, Austria (B.K., G.W.); Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, Vienna, Austria (J.F.); Department of Neurology & Neurosurgery, University Medical Center Utrecht, Utrecht, the Netherlands (P.A.J.T.R.); and Department of Neurologic Surgery, Hôpital Lariboisière, Paris, France (E.M.)
| | - Hugo Vrenken
- Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands (R.S.E., M.v.H., M.G.W.); Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (M.V., F.B., H.V., J.C.d.M.); Neurosurgical Center Amsterdam, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (D.M.J.M., P.C.D.W.H.); Institutes of Neurology & Healthcare Engineering, University College London, London, England (F.B.); Faculty of Biology, Medicine & Health, Division of Cancer Sciences, University of Manchester and Christie NHS Trust, Manchester, England (M.v.H.); Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Università degli Studi di Milano, Humanitas Research Hospital, IRCCS, Milan, Italy (L.B., M.C.N., M.R., T.S.); Department of Neurologic Surgery, University of California-San Francisco, San Francisco, Calif (M.S.B., S.H.J.); Department of Neurosurgery, Medical University Vienna, Vienna, Austria (B.K., G.W.); Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, Vienna, Austria (J.F.); Department of Neurology & Neurosurgery, University Medical Center Utrecht, Utrecht, the Netherlands (P.A.J.T.R.); and Department of Neurologic Surgery, Hôpital Lariboisière, Paris, France (E.M.)
| | - Marcel van Herk
- Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands (R.S.E., M.v.H., M.G.W.); Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (M.V., F.B., H.V., J.C.d.M.); Neurosurgical Center Amsterdam, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (D.M.J.M., P.C.D.W.H.); Institutes of Neurology & Healthcare Engineering, University College London, London, England (F.B.); Faculty of Biology, Medicine & Health, Division of Cancer Sciences, University of Manchester and Christie NHS Trust, Manchester, England (M.v.H.); Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Università degli Studi di Milano, Humanitas Research Hospital, IRCCS, Milan, Italy (L.B., M.C.N., M.R., T.S.); Department of Neurologic Surgery, University of California-San Francisco, San Francisco, Calif (M.S.B., S.H.J.); Department of Neurosurgery, Medical University Vienna, Vienna, Austria (B.K., G.W.); Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, Vienna, Austria (J.F.); Department of Neurology & Neurosurgery, University Medical Center Utrecht, Utrecht, the Netherlands (P.A.J.T.R.); and Department of Neurologic Surgery, Hôpital Lariboisière, Paris, France (E.M.)
| | - Lorenzo Bello
- Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands (R.S.E., M.v.H., M.G.W.); Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (M.V., F.B., H.V., J.C.d.M.); Neurosurgical Center Amsterdam, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (D.M.J.M., P.C.D.W.H.); Institutes of Neurology & Healthcare Engineering, University College London, London, England (F.B.); Faculty of Biology, Medicine & Health, Division of Cancer Sciences, University of Manchester and Christie NHS Trust, Manchester, England (M.v.H.); Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Università degli Studi di Milano, Humanitas Research Hospital, IRCCS, Milan, Italy (L.B., M.C.N., M.R., T.S.); Department of Neurologic Surgery, University of California-San Francisco, San Francisco, Calif (M.S.B., S.H.J.); Department of Neurosurgery, Medical University Vienna, Vienna, Austria (B.K., G.W.); Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, Vienna, Austria (J.F.); Department of Neurology & Neurosurgery, University Medical Center Utrecht, Utrecht, the Netherlands (P.A.J.T.R.); and Department of Neurologic Surgery, Hôpital Lariboisière, Paris, France (E.M.)
| | - Marco Conti Nibali
- Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands (R.S.E., M.v.H., M.G.W.); Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (M.V., F.B., H.V., J.C.d.M.); Neurosurgical Center Amsterdam, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (D.M.J.M., P.C.D.W.H.); Institutes of Neurology & Healthcare Engineering, University College London, London, England (F.B.); Faculty of Biology, Medicine & Health, Division of Cancer Sciences, University of Manchester and Christie NHS Trust, Manchester, England (M.v.H.); Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Università degli Studi di Milano, Humanitas Research Hospital, IRCCS, Milan, Italy (L.B., M.C.N., M.R., T.S.); Department of Neurologic Surgery, University of California-San Francisco, San Francisco, Calif (M.S.B., S.H.J.); Department of Neurosurgery, Medical University Vienna, Vienna, Austria (B.K., G.W.); Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, Vienna, Austria (J.F.); Department of Neurology & Neurosurgery, University Medical Center Utrecht, Utrecht, the Netherlands (P.A.J.T.R.); and Department of Neurologic Surgery, Hôpital Lariboisière, Paris, France (E.M.)
| | - Marco Rossi
- Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands (R.S.E., M.v.H., M.G.W.); Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (M.V., F.B., H.V., J.C.d.M.); Neurosurgical Center Amsterdam, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (D.M.J.M., P.C.D.W.H.); Institutes of Neurology & Healthcare Engineering, University College London, London, England (F.B.); Faculty of Biology, Medicine & Health, Division of Cancer Sciences, University of Manchester and Christie NHS Trust, Manchester, England (M.v.H.); Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Università degli Studi di Milano, Humanitas Research Hospital, IRCCS, Milan, Italy (L.B., M.C.N., M.R., T.S.); Department of Neurologic Surgery, University of California-San Francisco, San Francisco, Calif (M.S.B., S.H.J.); Department of Neurosurgery, Medical University Vienna, Vienna, Austria (B.K., G.W.); Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, Vienna, Austria (J.F.); Department of Neurology & Neurosurgery, University Medical Center Utrecht, Utrecht, the Netherlands (P.A.J.T.R.); and Department of Neurologic Surgery, Hôpital Lariboisière, Paris, France (E.M.)
| | - Tommaso Sciortino
| | - Mitchel S Berger
| | - Shawn Hervey-Jumper
| | - Barbara Kiesel
| | - Georg Widhalm
| | - Julia Furtner
| | - Pierre A J T Robe
| | - Emmanuel Mandonnet
| | - Philip C De Witt Hamer
| | - Jan C de Munck
| | - Marnix G Witte
| |
Collapse
|
50
|
Hu X, Liu Z, Zhou H, Fang J, Lu H. Deep HT: A deep neural network for diagnose on MR images of tumors of the hand. PLoS One 2020; 15:e0237606. [PMID: 32797089 PMCID: PMC7428075 DOI: 10.1371/journal.pone.0237606] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2020] [Accepted: 07/29/2020] [Indexed: 12/13/2022] Open
Abstract
Background There are many types of hand tumors, and imaging diagnosticians often find it difficult to make a correct diagnosis, which can lead to misdiagnosis and delayed treatment. In this paper, we therefore propose a deep neural network for diagnosis on MR images of tumors of the hand, in order to better define the preoperative diagnosis and standardize surgical treatment. Methods We collected MR images of 221 patients with hand tumors from a single medical center between 2016 and 2019 and invited medical experts to annotate the images, forming the annotation data set. The original images were then preprocessed to obtain the image data set. The data set was randomly divided into ten parts, nine for training and one for testing; the data were input into the neural network system for evaluation, and the results of the ten experiments were averaged as an estimate of the accuracy of the algorithm. Results Using the 221 images as the data set, the system showed an average confidence level of 71.6% in segmentation of hand tumors. The segmented tumor regions were validated through ground truth analysis and manual analysis by a radiologist. Conclusions With recent advances in convolutional neural networks, vast improvements have been made in image segmentation, mainly based on skip-connection-linked encoder-decoder deep architectures. We therefore propose an automatic segmentation method based on DeepLab v3+ and achieve a good diagnostic accuracy rate.
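The evaluation protocol described in the abstract is a standard ten-fold cross-validation: the 221 images are split into ten disjoint folds, nine folds train the network while the remaining fold tests it, and the ten test scores are averaged. A minimal sketch of that protocol is below; the `evaluate` callback is a hypothetical placeholder standing in for training and scoring the segmentation network, which the paper does not specify at this level of detail.

```python
import random

def ten_fold_indices(n_samples, seed=0):
    """Shuffle sample indices and split them into 10 disjoint folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    return [idx[i::10] for i in range(10)]

def cross_validate(n_samples, evaluate, seed=0):
    """Rotate through the 10 folds (9 train / 1 test) and average scores.

    `evaluate(train_idx, test_idx)` is a placeholder for training the
    segmentation model on `train_idx` and scoring it on `test_idx`.
    """
    folds = ten_fold_indices(n_samples, seed)
    scores = []
    for k in range(10):
        test_idx = folds[k]
        train_idx = [i for j, fold in enumerate(folds) if j != k
                     for i in fold]
        scores.append(evaluate(train_idx, test_idx))
    return sum(scores) / len(scores)

# Sanity check with the paper's 221 images and a dummy evaluator:
folds = ten_fold_indices(221)
assert sum(len(f) for f in folds) == 221          # every image used once
mean_score = cross_validate(221, lambda tr, te: len(te) / 221)
```

Because 221 is not divisible by ten, one fold holds 23 images and the rest hold 22; averaging over all ten rotations still uses every image exactly once as test data.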
Collapse
Affiliation(s)
- Xianliang Hu
- School of Mathematical Sciences, Zhejiang University, Hangzhou, Zhejiang Province, P. R. China
| | - Zongyu Liu
- School of Mathematical Sciences, Zhejiang University, Hangzhou, Zhejiang Province, P. R. China
| | - Haiying Zhou
- Department of Orthopedics, The First Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang Province, P. R. China
| | - Jianyong Fang
- Suzhou Warrior Pioneer Software Co., Ltd., Suzhou, Jiangsu Province, P. R. China
| | - Hui Lu
- Department of Orthopedics, The First Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang Province, P. R. China
| |
Collapse
|