1. Servati M, Vaccaro CN, Diller EE, Pellegrino Da Silva R, Mafra F, Cao S, Stanley KB, Cohen-Gadol AA, Parker JG. Metabolic Insight into Glioma Heterogeneity: Mapping Whole Exome Sequencing to In Vivo Imaging with Stereotactic Localization and Deep Learning. Metabolites 2024; 14:337. PMID: 38921472; PMCID: PMC11205750; DOI: 10.3390/metabo14060337.
Abstract
Intratumoral heterogeneity (ITH) complicates the diagnosis and treatment of glioma, partly due to the diverse metabolic profiles driven by underlying genomic alterations. While multiparametric imaging enhances the characterization of ITH by capturing both spatial and functional variations, it falls short in directly assessing the metabolic activities that underpin these phenotypic differences. This gap stems from the challenge of integrating easily accessible, colocated pathology and detailed genomic data with metabolic insights. This study presents a multifaceted approach combining stereotactic biopsy with standard clinical open craniotomy for sample collection, voxel-wise analysis of MR images, a regression-based generalized additive model (GAM), and whole-exome sequencing. This work aims to demonstrate the potential of machine learning algorithms to predict variations in cellular and molecular tumor characteristics. This retrospective study enrolled ten treatment-naïve patients with radiologically confirmed glioma. Each patient underwent a multiparametric MR scan (T1W, T1W-CE, T2W, T2W-FLAIR, DWI) prior to surgery. During standard craniotomy, at least one stereotactic biopsy was collected from each patient, with screenshots of the sample locations saved for spatial registration to pre-surgical MR data. Whole-exome sequencing was performed on flash-frozen tumor samples, prioritizing the signatures of five glioma-related genes: IDH1, TP53, EGFR, PIK3CA, and NF1. Regression was implemented with a GAM using a univariate shape function for each predictor. Standard receiver operating characteristic (ROC) analyses were used to evaluate detection, with the area under the curve (AUC) calculated for each gene target and MR contrast combination. The mean AUC across the five gene targets and 31 MR contrast combinations was 0.75 ± 0.11; individual AUCs were as high as 0.96 for both IDH1 and TP53 with T2W-FLAIR and ADC, and 0.99 for EGFR with T2W and ADC. These results suggest the possibility of predicting exome-wide mutation events from noninvasive, in vivo imaging by combining stereotactic localization of glioma samples with a semi-parametric deep learning method. The genomic alterations identified, particularly in IDH1, TP53, EGFR, PIK3CA, and NF1, are known to play pivotal roles in the metabolic pathways driving glioma heterogeneity. Our methodology therefore indirectly sheds light on the metabolic landscape of glioma through the lens of these critical genomic markers, suggesting a complex interplay between tumor genomics and metabolism. This approach holds potential for refining targeted therapy by better addressing the genomic heterogeneity of glioma tumors.
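The per-gene evaluation described above reduces to computing a ROC AUC for a binary mutation label against a continuous voxel-wise prediction. As a minimal illustrative sketch (not the authors' code), the AUC can be computed directly from the Mann-Whitney rank statistic:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive sample scores
    higher than a randomly chosen negative one (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: model scores that mostly rank mutated voxels higher.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 0]
print(round(auc(scores, labels), 3))  # → 0.889
```

A perfect separation of positives from negatives would give 1.0; chance-level scores give roughly 0.5, which makes the reported per-gene AUCs of 0.96 and 0.99 directly interpretable.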
Affiliation(s)
- Mahsa Servati: Radiology and Imaging Sciences, School of Medicine, Indiana University, 950 W. Walnut St., R2 E107, Indianapolis, IN 46202, USA; School of Health Sciences, Purdue University, West Lafayette, IN 47907, USA
- Courtney N. Vaccaro: Center for Applied Genomics, Children’s Hospital of Philadelphia, Philadelphia, PA 19104, USA
- Emily E. Diller: Feinberg School of Medicine, Northwestern Medicine, Chicago, IL 60611, USA
- Sha Cao: Radiology and Imaging Sciences, School of Medicine, Indiana University, 950 W. Walnut St., R2 E107, Indianapolis, IN 46202, USA
- Katherine B. Stanley: Radiology and Imaging Sciences, School of Medicine, Indiana University, 950 W. Walnut St., R2 E107, Indianapolis, IN 46202, USA
- Aaron A. Cohen-Gadol: Radiology and Imaging Sciences, School of Medicine, Indiana University, 950 W. Walnut St., R2 E107, Indianapolis, IN 46202, USA
- Jason G. Parker: Radiology and Imaging Sciences, School of Medicine, Indiana University, 950 W. Walnut St., R2 E107, Indianapolis, IN 46202, USA; School of Health Sciences, Purdue University, West Lafayette, IN 47907, USA
2. Müller D, Soto-Rey I, Kramer F. Robust chest CT image segmentation of COVID-19 lung infection based on limited data. Informatics in Medicine Unlocked 2021; 25:100681. PMID: 34337140; PMCID: PMC8313817; DOI: 10.1016/j.imu.2021.100681.
Abstract
BACKGROUND The coronavirus disease 2019 (COVID-19) affects billions of lives around the world and has a significant impact on public healthcare. For quantitative assessment and disease monitoring, medical imaging such as computed tomography offers great potential as an alternative to RT-PCR methods. For this reason, automated image segmentation is highly desired as clinical decision support. However, publicly available COVID-19 imaging data are limited, which leads to overfitting of traditional approaches. METHODS To address this problem, we propose an automated segmentation pipeline for COVID-19 infected regions that is able to handle small datasets by utilizing them as variant databases. Our method focuses on the on-the-fly generation of unique and random image patches for training, applying several preprocessing methods and exploiting extensive data augmentation. To further reduce the risk of overfitting, we implemented a standard 3D U-Net architecture instead of new or computationally complex neural network architectures. RESULTS Through k-fold cross-validation on 20 COVID-19 CT scans for training and validation, we were able to develop a highly accurate and robust segmentation model for lungs and COVID-19 infected regions without overfitting on the limited data. We performed a detailed analysis of the robustness of our pipeline through a sensitivity analysis based on the cross-validation, and of the impact of the applied preprocessing techniques on model generalizability. Our method achieved Dice similarity coefficients for COVID-19 infection, between predicted and radiologist-annotated segmentations, of 0.804 on validation and 0.661 on a separate testing set of 100 patients. CONCLUSIONS We demonstrated that the proposed method outperforms related approaches, advances the state of the art for COVID-19 segmentation, and improves robust medical image analysis based on limited data.
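The reported overlap scores (0.804 validation, 0.661 testing) are Dice similarity coefficients between predicted and annotated binary masks. A minimal sketch of that metric (not the paper's implementation, which operates on full 3D volumes):

```python
def dice(pred, truth):
    """Dice similarity coefficient between two binary masks,
    given here as flat sequences of 0/1 values:
    2 * |A ∩ B| / (|A| + |B|)."""
    inter = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0  # both empty: perfect match

# Toy 5-voxel masks: 2 overlapping voxels out of 3 predicted and 3 true.
print(round(dice([1, 1, 0, 1, 0], [1, 0, 0, 1, 1]), 3))  # → 0.667
```

Dice ranges from 0 (no overlap) to 1 (identical masks), so the drop from 0.804 to 0.661 on the external test set quantifies the generalization gap the sensitivity analysis examines.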
Affiliation(s)
- Dominik Müller: IT-Infrastructure for Translational Medical Research, Faculty of Applied Computer Science, Faculty of Medicine, University of Augsburg, Germany
- Iñaki Soto-Rey: IT-Infrastructure for Translational Medical Research, Faculty of Applied Computer Science, Faculty of Medicine, University of Augsburg, Germany
- Frank Kramer: IT-Infrastructure for Translational Medical Research, Faculty of Applied Computer Science, Faculty of Medicine, University of Augsburg, Germany
3. Müller D, Kramer F. MIScnn: a framework for medical image segmentation with convolutional neural networks and deep learning. BMC Med Imaging 2021; 21:12. PMID: 33461500; PMCID: PMC7814713; DOI: 10.1186/s12880-020-00543-7.
Abstract
Background The increased availability and usage of modern medical imaging has created a strong need for automatic medical image segmentation. Still, current image segmentation platforms do not provide the functionality required for the straightforward setup of medical image segmentation pipelines. Already-implemented pipelines are commonly standalone software, optimized for a specific public data set. This paper therefore introduces the open-source Python library MIScnn. Implementation The aim of MIScnn is to provide an intuitive API that allows the fast building of medical image segmentation pipelines, including data I/O, preprocessing, data augmentation, patch-wise analysis, metrics, a library of state-of-the-art deep learning models, and model utilization such as training, prediction, and fully automatic evaluation (e.g., cross-validation). Similarly, high configurability and multiple open interfaces allow full pipeline customization. Results Running a cross-validation with MIScnn on the Kidney Tumor Segmentation Challenge 2019 data set (multi-class semantic segmentation with 300 CT scans) resulted in a powerful predictor based on the standard 3D U-Net model. Conclusions With this experiment, we showed that the MIScnn framework enables researchers to rapidly set up a complete medical image segmentation pipeline using just a few lines of code. The source code for MIScnn is available in the Git repository: https://github.com/frankkramer-lab/MIScnn.
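One of the pipeline stages listed above, patch-wise analysis, decomposes a large scan into fixed-size windows that fit into GPU memory. The sketch below is a generic illustration of that idea in plain Python, not MIScnn's own API (see the repository linked above for the actual interface); it is shown in 2D for brevity, but the same indexing extends to 3D volumes:

```python
def extract_patches(image, patch_shape, stride):
    """Slide a fixed-size window over a 2D image (nested lists) and
    return the list of patches; overlapping patches are produced
    whenever stride is smaller than the patch size."""
    ph, pw = patch_shape
    sh, sw = stride
    height, width = len(image), len(image[0])
    patches = []
    for i in range(0, height - ph + 1, sh):
        for j in range(0, width - pw + 1, sw):
            patches.append([row[j:j + pw] for row in image[i:i + ph]])
    return patches

# A 4x4 toy image split into non-overlapping 2x2 patches.
grid = [[r * 4 + c for c in range(4)] for r in range(4)]
print(len(extract_patches(grid, (2, 2), (2, 2))))  # → 4
```

At prediction time the per-patch outputs are stitched back into a full-size segmentation map, typically averaging overlapping regions.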
Affiliation(s)
- Dominik Müller: IT-Infrastructure for Translational Medical Research, Faculty of Applied Computer Science, Faculty of Medicine, University of Augsburg, Augsburg, Germany
- Frank Kramer: IT-Infrastructure for Translational Medical Research, Faculty of Applied Computer Science, Faculty of Medicine, University of Augsburg, Augsburg, Germany
4. Dikici E, Ryu JL, Demirer M, Bigelow M, White RD, Slone W, Erdal BS, Prevedello LM. Automated Brain Metastases Detection Framework for T1-Weighted Contrast-Enhanced 3D MRI. IEEE J Biomed Health Inform 2020; 24:2883-2893. DOI: 10.1109/jbhi.2020.2982103.
5. Tor-Diez C, Porras AR, Packer RJ, Avery RA, Linguraru MG. Unsupervised MRI Homogenization: Application to Pediatric Anterior Visual Pathway Segmentation. Lecture Notes in Computer Science 2020; 12436:180-188. PMID: 34327515; DOI: 10.1007/978-3-030-59861-7_19.
Abstract
Deep learning strategies have become ubiquitous optimization tools for medical image analysis. With an appropriate amount of data, these approaches outperform classic methodologies in a variety of image processing tasks. However, rare diseases and pediatric imaging often lack extensive data; in particular, MRI scans are uncommon because they require sedation in young children. Moreover, the lack of standardization in MRI protocols introduces strong variability between different datasets. In this paper, we present a general deep learning architecture for MRI homogenization that also provides the segmentation map of an anatomical region of interest. Homogenization is achieved using an unsupervised architecture based on a variational autoencoder with cycle-consistent generative adversarial networks, which learns a common space (i.e., a representation of the optimal imaging protocol) using an unpaired image-to-image translation network. The segmentation is simultaneously generated by a supervised learning strategy. We evaluated our method by segmenting the challenging anterior visual pathway using three brain T1-weighted MRI datasets (variable protocols and vendors). Our method significantly outperformed a non-homogenized multi-protocol U-Net.
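The unpaired image-to-image translation mentioned above hinges on a cycle-consistency constraint: translating a scan from one protocol to the other and back should recover the original. As a hedged, toy-scale sketch of that loss term (images reduced to 1D intensity lists; the actual networks are learned, not fixed functions):

```python
def l1(a, b):
    """Mean absolute difference between two equal-length intensity lists."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def cycle_consistency_loss(x, g_ab, g_ba):
    """CycleGAN-style L1 cycle term: translate x from domain A to B
    with g_ab, back to A with g_ba, and penalize any deviation from x."""
    return l1(x, g_ba(g_ab(x)))

# Toy 'translators': a gain/offset change between two scanners and its inverse.
g_ab = lambda img: [2 * v + 10 for v in img]
g_ba = lambda img: [(v - 10) / 2 for v in img]
print(cycle_consistency_loss([0.0, 50.0, 100.0], g_ab, g_ba))  # → 0.0
```

When the two generators are exact inverses the loss vanishes, which is what training pushes toward; the adversarial terms (omitted here) simultaneously force the translated images to look like real scans from the target protocol.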
Affiliation(s)
- Carlos Tor-Diez: Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC 20010, USA
- Antonio R Porras: Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC 20010, USA
- Roger J Packer: Center for Neuroscience & Behavioral Health, Children's National Hospital, Washington, DC 20010, USA; Gilbert Neurofibromatosis Institute, Children's National Hospital, Washington, DC 20010, USA
- Robert A Avery: Division of Pediatric Ophthalmology, Children's Hospital of Philadelphia, Philadelphia, PA 19104, USA
- Marius George Linguraru: Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC 20010, USA; School of Medicine and Health Sciences, George Washington University, Washington, DC 20037, USA
6. Sun X, Shi L, Luo Y, Yang W, Li H, Liang P, Li K, Mok VCT, Chu WCW, Wang D. Histogram-based normalization technique on human brain magnetic resonance images from different acquisitions. Biomed Eng Online 2015. PMID: 26215471; PMCID: PMC4517549; DOI: 10.1186/s12938-015-0064-y.
Abstract
BACKGROUND Intensity normalization is an important preprocessing step in brain magnetic resonance imaging (MRI) analysis. During MR image acquisition, different scanners or parameters may be used to scan different subjects, or the same subject at a different time, which can result in large intensity variations. This intensity variation can greatly undermine the performance of subsequent MRI processing and population analysis, such as image registration, segmentation, and tissue volume measurement. METHODS In this work, we propose a new histogram normalization method to reduce the intensity variation between MRIs obtained from different acquisitions. In our experiment, we scanned each subject twice on two different scanners using different imaging parameters. With noise estimation, the image with the lower noise level was determined and treated as the high-quality reference image. The histogram of the low-quality image was then normalized to the histogram of the high-quality image. The normalization algorithm includes two main steps: (1) intensity scaling (IS), where, for the high-quality reference image, the intensities of the image are first rescaled to a range between the low intensity region (LIR) value and the high intensity region (HIR) value; and (2) histogram normalization (HN), where the histogram of the low-quality input image is stretched to match the histogram of the reference image, so that the intensity range in the normalized image also lies between LIR and HIR. RESULTS We performed three sets of experiments to evaluate the proposed method, i.e., image registration, segmentation, and tissue volume measurement, and compared it with an existing intensity normalization method. The results validate that our histogram normalization framework achieves better results in all the experiments. It is also demonstrated that the brain template built with normalization preprocessing is of higher quality than the template built with no normalization processing. CONCLUSIONS We have proposed a histogram-based MRI intensity normalization method. The method can normalize scans that were acquired on different MRI units. We have validated that the method can greatly improve image analysis performance. Furthermore, it is demonstrated that, with the help of our normalization method, we can create a higher-quality Chinese brain template.
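The two-step pipeline described in the abstract (intensity scaling, then histogram normalization) can be sketched at toy scale as follows. This is an illustrative simplification, not the paper's implementation: images are flat intensity lists, and the histogram step is approximated by rank matching, which assumes equal-length images:

```python
def intensity_scale(image, lir, hir):
    """Step (1), intensity scaling: linearly map an image's intensity
    range onto [LIR, HIR]."""
    lo, hi = min(image), max(image)
    if hi == lo:  # constant image: map everything to LIR
        return [float(lir) for _ in image]
    return [lir + (v - lo) * (hir - lir) / (hi - lo) for v in image]

def histogram_match(source, reference):
    """Step (2), a rank-based sketch of histogram normalization: the
    k-th smallest source intensity is replaced by the k-th smallest
    reference intensity, so the output histogram matches the reference."""
    order = sorted(range(len(source)), key=lambda i: source[i])
    ref_sorted = sorted(reference)
    out = [0.0] * len(source)
    for rank, idx in enumerate(order):
        out[idx] = ref_sorted[rank]
    return out

reference = intensity_scale([12, 60, 255], 0, 100)   # scaled reference
print(histogram_match([3, 1, 2], [10, 20, 30]))      # → [30, 10, 20]
```

After matching, both scans occupy the same intensity range and distribution, which is what makes downstream registration and segmentation comparable across scanners.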
Affiliation(s)
- Xiaofei Sun: Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China; Research Center for Medical Image Computing, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China; Department of Biomedical Engineering and Shun Hing Institute of Advanced Engineering, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China
- Lin Shi: Department of Medicine and Therapeutics, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China; Lui Che Woo Institute of Innovation Medicine, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China
- Yishan Luo: Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China; Research Center for Medical Image Computing, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China
- Wei Yang: Shenzhen Research Institute, The Chinese University of Hong Kong, Shenzhen, China; School of Geoscience and Info-Physics, Central South University, Changsha, China
- Hongpeng Li: Department of Radiology, The Second Hospital of Jilin University, Changchun, Jilin, China
- Peipeng Liang: Department of Radiology, Xuanwu Hospital, Capital Medical University, Beijing, China
- Kuncheng Li: Department of Radiology, Xuanwu Hospital, Capital Medical University, Beijing, China
- Vincent C T Mok: Department of Medicine and Therapeutics, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China
- Winnie C W Chu: Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China; Research Center for Medical Image Computing, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China; Shenzhen Research Institute, The Chinese University of Hong Kong, Shenzhen, China
- Defeng Wang: Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China; Research Center for Medical Image Computing, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China; Department of Biomedical Engineering and Shun Hing Institute of Advanced Engineering, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China; Shenzhen Research Institute, The Chinese University of Hong Kong, Shenzhen, China