1. Intensity scaling of conventional brain magnetic resonance images avoiding cerebral reference regions: A systematic review. PLoS One 2024; 19:e0298642. [PMID: 38483873; PMCID: PMC10939249; DOI: 10.1371/journal.pone.0298642]
Abstract
BACKGROUND: Conventional brain magnetic resonance imaging (MRI) produces image intensities on an arbitrary scale, hampering quantification. Intensity scaling aims to overcome this shortfall. As neurodegenerative and inflammatory disorders may affect all brain compartments, reference regions within the brain may be misleading. Here we summarize approaches for intensity scaling of conventional T1-weighted (w) and T2w brain MRI that avoid reference regions within the brain.
METHODS: The literature was searched in the Scopus, PubMed, and Web of Science databases. We included only studies that avoided reference regions within the brain for intensity scaling and provided validating evidence, which we divided into four categories: (1) comparative variance reduction, (2) comparative correlation with clinical parameters, (3) relation to quantitative imaging, or (4) relation to histology.
RESULTS: Of the 3825 studies screened, 24 fulfilled the inclusion criteria. Three studies used scaled T1w images, two used scaled T2w images, and 21 used T1w/T2w-ratio calculation (with double counts). A robust reduction in variance was reported. Twenty studies investigated the relation of scaled intensities to different types of quantitative imaging. Statistically significant correlations with clinical or demographic data were reported in eight studies. Four studies reporting the relation to histology gave no clear picture of the main signal driver of conventional T1w and T2w MRI sequences.
CONCLUSIONS: T1w/T2w-ratio calculation was applied most often. Variance reduction and correlations with other measures suggest a biologically meaningful signal harmonization. However, open methodological questions remain, and the biological underpinning is uncertain. Validating evidence for other scaling methods is even sparser.
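The T1w/T2w-ratio calculation applied most often in the reviewed studies is, at its core, a voxel-wise division of two co-registered volumes. A minimal numpy sketch of that step (assuming the inputs are already co-registered and calibrated; the background threshold `eps` is an arbitrary illustration, not a value from any of the reviewed studies):

```python
import numpy as np

def t1w_t2w_ratio(t1w, t2w, eps=1e-6):
    """Voxel-wise T1w/T2w ratio for two co-registered volumes.

    Voxels where the T2w intensity is ~0 (background) are set to 0
    to avoid division blow-up.
    """
    t1w = np.asarray(t1w, dtype=np.float64)
    t2w = np.asarray(t2w, dtype=np.float64)
    ratio = np.zeros_like(t1w)
    valid = t2w > eps
    ratio[valid] = t1w[valid] / t2w[valid]
    return ratio
```

The ratio cancels the shared multiplicative scaling (e.g., receive-coil sensitivity) of the two acquisitions, which is why it needs no reference region.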

2. The Brain Tumor Segmentation (BraTS) Challenge 2023: Focus on Pediatrics (CBTN-CONNECT-DIPGR-ASNR-MICCAI BraTS-PEDs). arXiv 2024; arXiv:2305.17033v6. [PMID: 37292481; PMCID: PMC10246083]
Abstract
Pediatric tumors of the central nervous system are the most common cause of cancer-related death in children. The five-year survival rate for high-grade gliomas in children is less than 20%. Due to their rarity, the diagnosis of these entities is often delayed, their treatment is mainly based on historic treatment concepts, and clinical trials require multi-institutional collaborations. The MICCAI Brain Tumor Segmentation (BraTS) Challenge is a landmark community benchmark event with a successful 12-year history of resource creation for the segmentation and analysis of adult glioma. Here we present the CBTN-CONNECT-DIPGR-ASNR-MICCAI BraTS-PEDs 2023 challenge, the first BraTS challenge focused on pediatric brain tumors, with data acquired across multiple international consortia dedicated to pediatric neuro-oncology and clinical trials. The BraTS-PEDs 2023 challenge focuses on benchmarking the development of volumetric segmentation algorithms for pediatric brain glioma through the standardized quantitative performance evaluation metrics utilized across the BraTS 2023 cluster of challenges. Models gaining knowledge from the BraTS-PEDs multi-parametric structural MRI (mpMRI) training data will be evaluated on separate validation and unseen test mpMRI data of high-grade pediatric glioma. The CBTN-CONNECT-DIPGR-ASNR-MICCAI BraTS-PEDs 2023 challenge brings together clinicians and AI/imaging scientists to accelerate the development of automated segmentation techniques that could benefit clinical trials and, ultimately, the care of children with brain tumors.

3. Machine learning of dissection photographs and surface scanning for quantitative 3D neuropathology. bioRxiv 2024; 2023.06.08.544050. [PMID: 37333251; PMCID: PMC10274889; DOI: 10.1101/2023.06.08.544050]
Abstract
We present open-source tools for 3D analysis of photographs of dissected slices of human brains, which are routinely acquired in brain banks but seldom used for quantitative analysis. Our tools can: (i) 3D-reconstruct a volume from the photographs and, optionally, a surface scan; and (ii) produce a high-resolution 3D segmentation into 11 brain regions per hemisphere (22 in total), independently of the slice thickness. Our tools can be used as a substitute for ex vivo magnetic resonance imaging (MRI), which requires access to an MRI scanner, ex vivo scanning expertise, and considerable financial resources. We tested our tools on synthetic and real data from two NIH Alzheimer's Disease Research Centers. The results show that our methodology yields accurate 3D reconstructions, segmentations, and volumetric measurements that are highly correlated with those from MRI. Our method also detects expected differences between post mortem confirmed Alzheimer's disease cases and controls. The tools are available in our widely used neuroimaging suite "FreeSurfer" ( https://surfer.nmr.mgh.harvard.edu/fswiki/PhotoTools ).

4. Assessing individual variability of the entorhinal subfields in health and disease. J Comp Neurol 2023; 531:2062-2079. [PMID: 37700618; PMCID: PMC10841297; DOI: 10.1002/cne.25538]
Abstract
Investigating interindividual variability is a major field of interest in neuroscience. The entorhinal cortex (EC) is essential for memory and affected early in the progression of Alzheimer's disease (AD). We combined histology ground-truth data with ultrahigh-resolution 7T ex vivo MRI to analyze EC interindividual variability in 3D. Further, we characterized (1) entorhinal shape as a whole, (2) entorhinal subfield range and midpoints, and (3) subfield architectural location and tau burden derived from 3D probability maps. Our results indicated that EC shape varied but was not related to demographic or disease factors at this preclinical stage. The medial intermediate subfield showed the highest degree of location variability in the probability maps. However, individual subfields did not display the same level of variability across dimensions and outcome measures, each providing a different perspective. For example, the olfactory subfield showed low variability in midpoint location in the superior-inferior dimension but high variability in the anterior-posterior dimension, and the entorhinal intermediate subfield showed a large variability in volumetric measures but a low variability in location derived from the 3D probability maps. These findings suggest that interindividual variability within the entorhinal subfields requires a 3D approach incorporating multiple outcome measures. This study provides 3D probability maps of the individual entorhinal subfields and respective tau pathology in the preclinical stage (Braak I and II) of AD. These probability maps illustrate the subfield average and may serve as a checkpoint for future modeling.

5. Accurate Bayesian segmentation of thalamic nuclei using diffusion MRI and an improved histological atlas. Neuroimage 2023; 274:120129. [PMID: 37088323; PMCID: PMC10636587; DOI: 10.1016/j.neuroimage.2023.120129]
Abstract
The human thalamus is a highly connected brain structure, which is key for the control of numerous functions and is involved in several neurological disorders. Recently, neuroimaging studies have increasingly focused on the volume and connectivity of the specific nuclei comprising this structure, rather than looking at the thalamus as a whole. However, accurate identification of cytoarchitectonically defined histological nuclei on standard in vivo structural MRI is hampered by the lack of image contrast that can be used to distinguish nuclei from each other and from surrounding white matter tracts. While diffusion MRI may offer such contrast, it has lower resolution and lacks some boundaries visible in structural imaging. In this work, we present a Bayesian segmentation algorithm for the thalamus. This algorithm combines prior information from a probabilistic atlas with likelihood models for both structural and diffusion MRI, allowing segmentation of 25 thalamic labels per hemisphere informed by both modalities. We present an improved probabilistic atlas, incorporating thalamic nuclei identified from histology and 45 white matter tracts surrounding the thalamus identified in ultra-high gradient strength diffusion imaging. We present a family of likelihood models for diffusion tensor imaging, ensuring compatibility with the vast majority of neuroimaging datasets that include diffusion MRI data. The use of these diffusion likelihood models greatly improves identification of nuclear groups versus segmentation based solely on structural MRI. Dice comparison of five manually identifiable groups of nuclei to ground truth segmentations shows improvements of up to 10 percentage points. Additionally, our chosen model shows a high degree of reliability, with median test-retest Dice scores above 0.85 for four out of five nuclei groups, whilst also offering improved detection of differential thalamic involvement in Alzheimer's disease (AUROC 81.98%). The probabilistic atlas and segmentation tool will be made publicly available as part of the neuroimaging package FreeSurfer (https://freesurfer.net/fswiki/ThalamicNucleiDTI).
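At its core, Bayesian segmentation of this kind applies Bayes' rule per voxel: the probabilistic atlas supplies the prior over labels, and an intensity model supplies the likelihood. A heavily simplified numpy sketch with scalar Gaussian likelihoods (the actual tool uses richer likelihood models over structural and diffusion data; all names and shapes here are illustrative):

```python
import numpy as np

def bayes_label_posteriors(intensity, prior, means, stds):
    """Per-voxel label posteriors: posterior ∝ atlas prior × Gaussian likelihood.

    intensity:   (V,)  voxel intensities
    prior:       (V,K) atlas prior probability of each of K labels per voxel
    means, stds: (K,)  Gaussian likelihood parameters per label
    """
    x = intensity[:, None]
    # Gaussian likelihood of each voxel's intensity under each label
    lik = np.exp(-0.5 * ((x - means) / stds) ** 2) / (stds * np.sqrt(2 * np.pi))
    post = prior * lik
    # Normalise so each voxel's posteriors sum to one
    return post / post.sum(axis=1, keepdims=True)
```

The segmentation is then the per-voxel argmax of the posteriors, and combining modalities amounts to multiplying in one likelihood term per modality.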

6. The Brain Tumor Segmentation (BraTS-METS) Challenge 2023: Brain Metastasis Segmentation on Pre-treatment MRI. arXiv 2023; arXiv:2306.00838v1. [PMID: 37396600; PMCID: PMC10312806]
Abstract
Clinical monitoring of metastatic disease to the brain can be a laborious and time-consuming process, especially in cases involving multiple metastases when the assessment is performed manually. The Response Assessment in Neuro-Oncology Brain Metastases (RANO-BM) guideline, which utilizes the unidimensional longest diameter, is commonly used in clinical and research settings to evaluate response to therapy in patients with brain metastases. However, accurate volumetric assessment of the lesion and surrounding peri-lesional edema holds significant importance in clinical decision-making and can greatly enhance outcome prediction. The unique challenge in performing segmentations of brain metastases lies in their common occurrence as small lesions. Detection and segmentation of lesions smaller than 10 mm have not demonstrated high accuracy in prior publications. The brain metastases challenge sets itself apart from previously conducted MICCAI challenges on glioma segmentation due to the significant variability in lesion size. Unlike gliomas, which tend to be larger on presentation scans, brain metastases exhibit a wide range of sizes and tend to include small lesions. We hope that the BraTS-METS dataset and challenge will advance the field of automated brain metastasis detection and segmentation.

7. The Brain Tumor Segmentation (BraTS) Challenge 2023: Glioma Segmentation in Sub-Saharan Africa Patient Population (BraTS-Africa). arXiv 2023; arXiv:2305.19369v1. [PMID: 37396608; PMCID: PMC10312814]
Abstract
Gliomas are the most common type of primary brain tumors. Although gliomas are relatively rare, they are among the deadliest types of cancer, with survival of less than 2 years after diagnosis. Gliomas are challenging to diagnose, hard to treat, and inherently resistant to conventional therapy. Years of extensive research to improve diagnosis and treatment of gliomas have decreased mortality rates across the Global North, while chances of survival among individuals in low- and middle-income countries (LMICs) remain unchanged and are significantly worse in Sub-Saharan Africa (SSA) populations. Long-term survival with glioma is associated with the identification of appropriate pathological features on brain MRI and confirmation by histopathology. Since 2012, the Brain Tumor Segmentation (BraTS) Challenge has evaluated state-of-the-art machine learning methods to detect, characterize, and classify gliomas. However, it is unclear if the state-of-the-art methods can be widely implemented in SSA given the extensive use of lower-quality MRI technology, which produces poor image contrast and resolution, and, more importantly, the propensity for late presentation of disease at advanced stages as well as the unique characteristics of gliomas in SSA (i.e., suspected higher rates of gliomatosis cerebri). Thus, the BraTS-Africa Challenge provides a unique opportunity to include brain MRI glioma cases from SSA in global efforts through the BraTS Challenge to develop and evaluate computer-aided-diagnostic (CAD) methods for the detection and characterization of glioma in resource-limited settings, where the potential for CAD tools to transform healthcare is greatest.

8. The ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge 2023: Intracranial Meningioma. arXiv 2023; arXiv:2305.07642v1. [PMID: 37608937; PMCID: PMC10441446]
Abstract
Meningiomas are the most common primary intracranial tumor in adults and can be associated with significant morbidity and mortality. Radiologists, neurosurgeons, neuro-oncologists, and radiation oncologists rely on multiparametric MRI (mpMRI) for diagnosis, treatment planning, and longitudinal treatment monitoring; yet automated, objective, and quantitative tools for non-invasive assessment of meningiomas on mpMRI are lacking. The BraTS meningioma 2023 challenge will provide a community standard and benchmark for state-of-the-art automated intracranial meningioma segmentation models based on the largest expert-annotated multilabel meningioma mpMRI dataset to date. Challenge competitors will develop automated segmentation models to predict three distinct meningioma sub-regions on MRI: enhancing tumor, non-enhancing tumor core, and surrounding non-enhancing T2/FLAIR hyperintensity. Models will be evaluated on separate validation and held-out test datasets using standardized metrics utilized across the BraTS 2023 series of challenges, including the Dice similarity coefficient and Hausdorff distance. The models developed during this challenge will aid the incorporation of automated meningioma MRI segmentation into clinical practice, which will ultimately improve care of patients with meningioma.
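Of the two evaluation metrics named here, the Dice similarity coefficient is simply twice the overlap of the two masks divided by their total size. A minimal sketch (the empty-mask convention below is our own choice for illustration, not necessarily the challenge's; the Hausdorff distance would additionally require surface-point extraction):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # convention here: two empty masks agree perfectly
    return 2.0 * np.logical_and(a, b).sum() / denom
```

Dice ranges from 0 (no overlap) to 1 (identical masks) and is computed per sub-region (enhancing tumor, tumor core, T2/FLAIR hyperintensity).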

9. An open-source tool for longitudinal whole-brain and white matter lesion segmentation. Neuroimage Clin 2023; 38:103354. [PMID: 36907041; PMCID: PMC10024238; DOI: 10.1016/j.nicl.2023.103354]
Abstract
In this paper we describe and validate a method for whole-brain segmentation of longitudinal MRI scans. It builds upon an existing whole-brain segmentation method that can handle multi-contrast data and robustly analyze images with white matter lesions. This method is here extended with subject-specific latent variables that encourage temporal consistency between its segmentation results, enabling it to better track subtle morphological changes in dozens of neuroanatomical structures and white matter lesions. We validate the proposed method on multiple datasets of control subjects and patients suffering from Alzheimer's disease and multiple sclerosis, and compare its results against those obtained with its original cross-sectional formulation and two benchmark longitudinal methods. The results indicate that the method attains a higher test-retest reliability, while being more sensitive to longitudinal disease effect differences between patient groups. An implementation is publicly available as part of the open-source neuroimaging package FreeSurfer.

10. SynthSeg: Segmentation of brain MRI scans of any contrast and resolution without retraining. Med Image Anal 2023; 86:102789. [PMID: 36857946; PMCID: PMC10154424; DOI: 10.1016/j.media.2023.102789]
Abstract
Despite advances in data augmentation and transfer learning, convolutional neural networks (CNNs) struggle to generalise to unseen domains. When segmenting brain scans, CNNs are highly sensitive to changes in resolution and contrast: even within the same MRI modality, performance can decrease across datasets. Here we introduce SynthSeg, the first segmentation CNN robust against changes in contrast and resolution. SynthSeg is trained with synthetic data sampled from a generative model conditioned on segmentations. Crucially, we adopt a domain randomisation strategy where we fully randomise the contrast and resolution of the synthetic training data. Consequently, SynthSeg can segment real scans from a wide range of target domains without retraining or fine-tuning, which enables straightforward analysis of huge amounts of heterogeneous clinical data. Because SynthSeg requires only segmentations for training (no images), it can learn from labels obtained by automated methods on diverse populations (e.g., ageing and diseased), thus achieving robustness to a wide range of morphological variability. We demonstrate SynthSeg on 5,000 scans of six modalities (including CT) and ten resolutions, where it exhibits unparalleled generalisation compared with supervised CNNs, state-of-the-art domain adaptation, and Bayesian segmentation. Finally, we demonstrate the generalisability of SynthSeg by applying it to cardiac MRI and CT scans.
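The domain-randomisation idea can be sketched in a few lines: given only a label map, sample a random mean and standard deviation per label and fill each region with Gaussian noise, yielding a synthetic image of fully random contrast paired with a perfect ground-truth segmentation. This is a bare illustration of the principle, with arbitrary intensity ranges; the published generative model also randomises resolution, deformation, bias field, and artefacts:

```python
import numpy as np

def synth_image_from_labels(labels, rng):
    """Sample one synthetic training image from a label map (sketch).

    Each label gets a random Gaussian intensity distribution, so every
    synthetic image has a different, fully randomised contrast; the paired
    ground-truth segmentation is the label map itself.
    """
    image = np.zeros(labels.shape, dtype=np.float64)
    for lab in np.unique(labels):
        mu = rng.uniform(0.0, 255.0)     # random per-label mean
        sigma = rng.uniform(1.0, 25.0)   # random per-label std
        mask = labels == lab
        image[mask] = rng.normal(mu, sigma, size=int(mask.sum()))
    return image
```

Training on an endless stream of such images forces the network to rely on shape and spatial context rather than any particular contrast.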

11. Predicting survival of glioblastoma from automatic whole-brain and tumor segmentation of MR images. Sci Rep 2022; 12:19744. [PMID: 36396681; PMCID: PMC9671967; DOI: 10.1038/s41598-022-19223-3]
Abstract
Survival prediction models can potentially be used to guide treatment of glioblastoma patients. However, currently available MR imaging biomarkers holding prognostic information are often challenging to interpret, have difficulties generalizing across data acquisitions, or are only applicable to pre-operative MR data. In this paper we aim to address these issues by introducing novel imaging features that can be automatically computed from MR images and fed into machine learning models to predict patient survival. The features we propose have a direct anatomical-functional interpretation: They measure the deformation caused by the tumor on the surrounding brain structures, comparing the shape of various structures in the patient's brain to their expected shape in healthy individuals. To obtain the required segmentations, we use an automatic method that is contrast-adaptive and robust to missing modalities, making the features generalizable across scanners and imaging protocols. Since the features we propose do not depend on characteristics of the tumor region itself, they are also applicable to post-operative images, which have been much less studied in the context of survival prediction. Using experiments involving both pre- and post-operative data, we show that the proposed features carry prognostic value in terms of overall and progression-free survival, over and above that of conventional non-imaging features.

12. Editorial: Computational Neuroimage Analysis Tools for Brain (Diseases) Biomarkers. Front Neurosci 2022; 16:841807. [PMID: 35250471; PMCID: PMC8894255; DOI: 10.3389/fnins.2022.841807]

13. A Contrast Augmentation Approach to Improve Multi-Scanner Generalization in MRI. Front Neurosci 2021; 15:708196. [PMID: 34531715; PMCID: PMC8439197; DOI: 10.3389/fnins.2021.708196]
Abstract
Most data-driven methods are very susceptible to data variability. This problem is particularly apparent when applying Deep Learning (DL) to brain Magnetic Resonance Imaging (MRI), where intensities and contrasts vary due to acquisition protocol, scanner- and center-specific factors. Most publicly available brain MRI datasets originate from the same center and are homogeneous in terms of scanner and protocol. As such, devising robust methods that generalize to multi-scanner and multi-center data is crucial for transferring these techniques into clinical practice. We propose a novel data augmentation approach based on Gaussian Mixture Models (GMM-DA) with the goal of increasing the variability of a given dataset in terms of intensities and contrasts. The approach allows the training dataset to be augmented so that its variability is comparable to that seen in real-world clinical data, while preserving anatomical information. We compare the performance of a state-of-the-art U-Net model trained for segmenting brain structures with and without the addition of GMM-DA. The models are trained and evaluated on single- and multi-scanner datasets. Additionally, we verify the consistency of test-retest results on same-patient images (same and different scanners). Finally, we investigate how the presence of bias field influences the performance of a model trained with GMM-DA. We found that the addition of GMM-DA improves the generalization capability of the DL model to other scanners not present in the training data, even when the training set is already multi-scanner. Moreover, the consistency between same-patient segmentation predictions is improved, both for same-scanner and different-scanner repetitions. We conclude that GMM-DA could increase the transferability of DL models into clinical scenarios.
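The mechanics of a GMM-based intensity augmentation can be sketched as: fit a Gaussian mixture to the image's intensity histogram, then remap intensities by moving each fitted component to a new mean/variance. A heavily simplified 1D version (minimal EM, hard component assignment; the published GMM-DA is more elaborate, and all parameter choices here are illustrative):

```python
import numpy as np

def fit_gmm_1d(x, k, iters=50):
    """Minimal EM for a 1D Gaussian mixture. Returns weights, means, variances."""
    mu = np.percentile(x, np.linspace(10, 90, k))  # spread-out initialisation
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: per-sample component responsibilities
        r = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from responsibilities
        n = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
        w = n / len(x)
    return w, mu, var

def augment_intensities(x, mu, var, new_mu, new_var):
    """Remap each intensity by moving its (hard-assigned) component to a
    new mean/variance -- the augmentation step, heavily simplified."""
    z = np.argmin((x[:, None] - mu) ** 2 / var, axis=1)  # nearest component
    return new_mu[z] + (x - mu[z]) * np.sqrt(new_var[z] / var[z])
```

Because only the component parameters change, tissue classes keep their relative spatial layout: anatomy is preserved while contrast varies.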

14. Cone beam computed tomography based image guidance and quality assessment of prostate cancer for magnetic resonance imaging-only radiotherapy in the pelvis. Phys Imaging Radiat Oncol 2021; 18:55-60. [PMID: 34258409; PMCID: PMC8254192; DOI: 10.1016/j.phro.2021.05.001]
Abstract
MRI-only IGRT accuracy is ≤2 mm as compared to CT but significant differences were observed. MRI-only CBCT-based IGRT seems feasible but caution is advised. The median absolute error (MeAE) for independent verification on the sCT quality is proposed. A MeAE around 0.1 in mass density could call for sCT quality inspection.
Background and purpose: Radiotherapy (RT) based on magnetic resonance imaging (MRI) only is currently used clinically in the pelvis. A synthetic computed tomography (sCT) is needed for dose planning. Here, we investigate the accuracy of cone beam CT (CBCT) based MRI-only image-guided RT (IGRT) and sCT image quality.
Materials and methods: CT, MRI and CBCT scans of ten prostate cancer patients were included. The MRI was converted to a sCT using a multi-atlas approach. The sCT, CT and MR images were auto-matched with the CBCT on the bony anatomy. Paired sCT-CT and sCT-CBCT data were created. CT numbers were converted to relative electron (RED) and mass densities (DES) using a standard calibration curve for the CT and sCT. For the CBCT RED/DES conversion, a phantom and a paired CT-CBCT population-based calibration curve were used. For the latter, the CBCT numbers were averaged in 100 HU bins and the known RED/DES of the CT were assigned. The paired sCT-CT and sCT-CBCT data were averaged in bins of 10 HU or 0.01 RED/DES. The median absolute error (MeAE) between the sCT-CT and sCT-CBCT bins was calculated. Wilcoxon rank-sum tests were carried out for the IGRT and MeAE studies.
Results: The mean sCT or MR IGRT difference from CT was ≤2 mm, but significant differences were observed. A CBCT HU or phantom-based RED/DES MeAE did not estimate the sCT quality similarly to a CT-based MeAE, but the CBCT population-based RED/DES MeAE did.
Conclusions: MRI-only CBCT-based IGRT seems feasible but caution is advised. A MeAE around 0.1 DES could call for sCT quality inspection.
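The population-based calibration and the proposed MeAE reduce to simple binning and a median of absolute differences. A hedged numpy sketch (function names and the example bin width are illustrative; the paper averages CBCT numbers in 100 HU bins and paired data in 10 HU or 0.01 RED/DES bins):

```python
import numpy as np

def population_calibration(cbct, ct_density, bin_width=100):
    """Population-based CBCT-to-density calibration, sketched.

    Paired CBCT numbers and CT-derived densities are averaged in fixed-width
    CBCT-number bins; each bin's mean density becomes the calibration value.
    Returns bin centres and mean density per bin.
    """
    bins = np.floor(np.asarray(cbct, float) / bin_width).astype(int)
    centres, dens = [], []
    for b in np.unique(bins):
        sel = bins == b
        centres.append(b * bin_width + bin_width / 2)
        dens.append(np.asarray(ct_density, float)[sel].mean())
    return np.array(centres), np.array(dens)

def median_absolute_error(a, b):
    """MeAE between paired binned values, as proposed for sCT quality checks."""
    return float(np.median(np.abs(np.asarray(a, float) - np.asarray(b, float))))
```

In the paper's workflow, the MeAE is computed between sCT-CT and sCT-CBCT binned curves, with values around 0.1 DES flagging an sCT for inspection.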

15. Joint segmentation of multiple sclerosis lesions and brain anatomy in MRI scans of any contrast and resolution with CNNs. Proc IEEE Int Symp Biomed Imaging 2021; 2021:1971-1974. [PMID: 34367472; PMCID: PMC8340983; DOI: 10.1109/isbi48211.2021.9434127]
Abstract
We present the first deep learning method to segment Multiple Sclerosis lesions and brain structures from MRI scans of any (possibly multimodal) contrast and resolution. Our method requires only segmentations for training (no images), as it leverages the generative model of Bayesian segmentation to generate synthetic scans with simulated lesions, which are then used to train a CNN. Our method can be retrained to segment at any resolution by adjusting the amount of synthesised partial volume. By construction, the synthetic scans are perfectly aligned with their labels, which enables training with noisy labels obtained with automatic methods. The training data are generated on the fly, and aggressive augmentation (including artefacts) is applied for improved generalisation. We demonstrate our method on two public datasets, comparing it with a state-of-the-art Bayesian approach implemented in FreeSurfer, and dataset-specific CNNs trained on real data. The code is available at https://github.com/BBillot/SynthSeg.

16. Accurate and robust whole-head segmentation from magnetic resonance images for individualized head modeling. Neuroimage 2020; 219:117044. [PMID: 32534963; PMCID: PMC8048089; DOI: 10.1016/j.neuroimage.2020.117044]
Abstract
Transcranial brain stimulation (TBS) has been established as a method for modulating and mapping the function of the human brain, and as a potential treatment tool in several brain disorders. Typically, the stimulation is applied using a one-size-fits-all approach with predetermined locations for the electrodes, in electric stimulation (TES), or the coil, in magnetic stimulation (TMS), which disregards anatomical variability between individuals. However, the induced electric field distribution in the head largely depends on anatomical features, implying the need for individually tailored stimulation protocols for focal dosing. This requires detailed models of the individual head anatomy, combined with electric field simulations, to find an optimal stimulation protocol for a given cortical target. Considering the anatomical and functional complexity of different brain disorders and pathologies, it is crucial to account for the anatomical variability in order to translate TBS from a research tool into a viable option for treatment. In this article we present a new method, called CHARM, for automated segmentation of fifteen different head tissues from magnetic resonance (MR) scans. The new method compares favorably to two freely available software tools on a five-tissue segmentation task, while obtaining reasonable segmentation accuracy over all fifteen tissues. The method automatically adapts to variability in the input scans and can thus be directly applied to clinical or research scans acquired with different scanners, sequences or settings. We show that an increase in automated segmentation accuracy results in a lower relative error in electric field simulations when compared to anatomical head models constructed from reference segmentations. However, the improved segmentations and, by implication, the electric field simulations are still affected by systematic artifacts in the input MR scans. As long as the artifacts are unaccounted for, this can lead to local simulation differences of up to 30% of the peak field strength relative to reference simulations. Finally, we demonstrate the effect of including all fifteen tissue classes in the field simulations against the standard approach of using only five tissue classes, and show that for specific stimulation configurations the local differences can reach 10% of the peak field strength.

17. Magnetic resonance-based computed tomography metal artifact reduction using Bayesian modelling. Phys Med Biol 2019; 64:245012. [PMID: 31766033; DOI: 10.1088/1361-6560/ab5b70]
Abstract
Metal artifact reduction (MAR) algorithms reduce the errors caused by metal implants in x-ray computed tomography (CT) images and are an important part of error management in radiotherapy. A promising MAR approach is to leverage the information in magnetic resonance (MR) images that can be acquired for organ or tumor delineation. This is however complicated by the ambiguous relationship between CT values and conventional-sequence MR intensities as well as potential co-registration issues. In order to address these issues, this paper proposes a self-tuning Bayesian model for MR-based MAR that combines knowledge of the MR image intensities in local spatial neighborhoods with the information in an initial, corrupted CT reconstructed using filtered back projection. We demonstrate the potential of the resulting model in three widely-used MAR scenarios: image inpainting, sinogram inpainting and model-based iterative reconstruction. Compared to conventional alternatives in a retrospective study on nine head-and-neck patients with CT and T1-weighted MR scans, we find improvements in terms of image quality and quantitative CT value accuracy within each scenario. We conclude that the proposed model provides a versatile way to use the anatomical information in a co-acquired MR scan to boost the performance of MAR algorithms.
|
18
|
PSACNN: Pulse sequence adaptive fast whole brain segmentation. Neuroimage 2019; 199:553-569. [PMID: 31129303 PMCID: PMC6688920 DOI: 10.1016/j.neuroimage.2019.05.033] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Received: 01/17/2019] [Revised: 05/09/2019] [Accepted: 05/12/2019] [Indexed: 01/07/2023]
Abstract
With the advent of convolutional neural networks (CNN), supervised learning methods are increasingly being used for whole brain segmentation. However, a large, manually annotated training dataset of labeled brain images required to train such supervised methods is frequently difficult to obtain or create. In addition, existing training datasets are generally acquired with a homogeneous magnetic resonance imaging (MRI) acquisition protocol. CNNs trained on such datasets are unable to generalize to test data with different acquisition protocols. Modern neuroimaging studies and clinical trials are necessarily multi-center initiatives with a wide variety of acquisition protocols. Despite stringent protocol harmonization practices, it is very difficult to standardize the gamut of MRI acquisition parameters across scanners, field strengths, receive coils, etc., that affect image contrast. In this paper we propose a CNN-based segmentation algorithm that, in addition to being highly accurate and fast, is also resilient to variation in the input acquisition. Our approach relies on building approximate forward models of pulse sequences that produce a typical test image. For a given pulse sequence, we use its forward model to generate plausible, synthetic training examples that appear as if they were acquired in a scanner with that pulse sequence. Sampling over a wide variety of pulse sequences results in a wide variety of augmented training examples that help build an image contrast invariant model. Our method trains a single CNN that can segment input MRI images with acquisition parameters as disparate as T1-weighted and T2-weighted contrasts using only T1-weighted training data. The segmentations generated are highly accurate, with state-of-the-art results (overall Dice overlap = 0.94), a fast run time (≈45 s), and consistency across a wide range of acquisition protocols.
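The augmentation idea can be sketched with textbook signal equations: given per-voxel tissue parameters, approximate sequence forward models synthesize images under sampled acquisition settings, yielding training examples with varied contrast from the same anatomy. The FLASH and spin-echo equations and the toy tissue values below are standard simplifications, not the paper's exact forward models.

```python
import numpy as np

def flash_signal(pd, t1, tr, flip_deg):
    """Spoiled gradient-echo (FLASH) steady-state signal (T1-weighted contrast)."""
    a = np.deg2rad(flip_deg)
    e1 = np.exp(-tr / t1)
    return pd * np.sin(a) * (1 - e1) / (1 - np.cos(a) * e1)

def spin_echo_signal(pd, t1, t2, tr, te):
    """Classic spin-echo signal with T1 saturation and T2 decay (T2-weighted contrast)."""
    return pd * (1 - np.exp(-tr / t1)) * np.exp(-te / t2)

rng = np.random.default_rng(0)
# Toy "tissue parameter map": one gray-matter and one white-matter voxel
# (PD arbitrary units; T1, T2 in ms; values are rough literature numbers).
pd = np.array([0.8, 0.7])
t1 = np.array([1300.0, 800.0])
t2 = np.array([90.0, 70.0])

# Sampling acquisition parameters yields augmented training examples
# with different contrasts from the same underlying tissue parameters.
augmented = [flash_signal(pd, t1, tr=rng.uniform(15, 30), flip_deg=rng.uniform(15, 40))
             for _ in range(5)]
t2w = spin_echo_signal(pd, t1, t2, tr=3000.0, te=90.0)  # one T2w example
```

On every sampled T1-weighted acquisition the white-matter voxel comes out brighter than the gray-matter one, while the spin-echo example inverts that contrast, which is exactly the contrast variability the augmentation is meant to cover.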
|
19
|
MR-based CT metal artifact reduction for head-and-neck photon, electron, and proton radiotherapy. Med Phys 2019; 46:4314-4323. [PMID: 31332792 PMCID: PMC6802740 DOI: 10.1002/mp.13729] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Received: 09/28/2018] [Revised: 06/24/2019] [Accepted: 07/06/2019] [Indexed: 11/30/2022]
Abstract
Purpose We investigated the impact on computed tomography (CT) image quality and photon, electron, and proton head‐and‐neck (H&N) radiotherapy (RT) dose calculations of three CT metal artifact reduction (MAR) approaches: a CT‐based algorithm (oMAR, Philips Healthcare), manual water override, and our recently presented, Magnetic Resonance (MR)‐based kerMAR algorithm. We considered the following three hypotheses: I: Manual water override improves MAR over the CT‐ and MR‐based alternatives; II: The automatic algorithms (oMAR and kerMAR) improve MAR over the uncorrected CT; III: kerMAR improves MAR over oMAR. Methods We included a veal shank phantom with/without six metal inserts and nine H&N RT patients with dental implants. We quantified the MAR capabilities by the reduction of outliers in the CT value distribution in regions of interest, and the change in particle range and photon depth at maximum dose. Results Water override provided apparent image improvements in the soft tissue region but insignificantly or negatively influenced the dose calculations. We however found significant improvements in image quality and particle range impact, compared to the uncorrected CT, when using oMAR and kerMAR. kerMAR in turn provided superior improvements in terms of high‐intensity streak suppression compared to oMAR, again with associated impacts on the particle range estimates. Conclusion We found no benefits of the water override compared to the alternatives, and tentatively reject hypothesis I. We however found improvements with the automatic algorithms, and thus support for hypothesis II, and found the MR‐based kerMAR to improve upon oMAR, supporting hypothesis III.
|
20
|
Personalized Radiotherapy Design for Glioblastoma: Integrating Mathematical Tumor Models, Multimodal Scans, and Bayesian Inference. IEEE Trans Med Imaging 2019; 38:1875-1884. [PMID: 30835219 PMCID: PMC7170051 DOI: 10.1109/tmi.2019.2902044] [Citation(s) in RCA: 60] [Impact Index Per Article: 12.0] [Indexed: 05/26/2023]
Abstract
Glioblastoma (GBM) is a highly invasive brain tumor, whose cells infiltrate surrounding normal brain tissue beyond the lesion outlines visible in the current medical scans. These infiltrative cells are treated mainly by radiotherapy. Existing radiotherapy plans for brain tumors derive from population studies and scarcely account for patient-specific conditions. Here, we provide a Bayesian machine learning framework for the rational design of improved, personalized radiotherapy plans using mathematical modeling and patient multimodal medical scans. Our method, for the first time, integrates complementary information from high-resolution MRI scans and highly specific FET-PET metabolic maps to infer tumor cell density in GBM patients. The Bayesian framework quantifies imaging and modeling uncertainties and predicts patient-specific tumor cell density with credible intervals. The proposed methodology relies only on data acquired at a single time point and, thus, is applicable to standard clinical settings. An initial clinical population study shows that the radiotherapy plans generated from the inferred tumor cell infiltration maps spare more healthy tissue thereby reducing radiation toxicity while yielding comparable accuracy with standard radiotherapy protocols. Moreover, the inferred regions of high tumor cell densities coincide with the tumor radioresistant areas, providing guidance for personalized dose-escalation. The proposed integration of multimodal scans and mathematical modeling provides a robust, non-invasive tool to assist personalized radiotherapy design.
|
21
|
A modality-adaptive method for segmenting brain tumors and organs-at-risk in radiation therapy planning. Med Image Anal 2019; 54:220-237. [PMID: 30952038 PMCID: PMC6554451 DOI: 10.1016/j.media.2019.03.005] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Received: 07/18/2018] [Revised: 03/14/2019] [Accepted: 03/21/2019] [Indexed: 12/25/2022]
Abstract
In this paper we present a method for simultaneously segmenting brain tumors and an extensive set of organs-at-risk for radiation therapy planning of glioblastomas. The method combines a contrast-adaptive generative model for whole-brain segmentation with a new spatial regularization model of tumor shape using convolutional restricted Boltzmann machines. We demonstrate experimentally that the method is able to adapt to image acquisitions that differ substantially from any available training data, ensuring its applicability across treatment sites; that its tumor segmentation accuracy is comparable to that of the current state of the art; and that it captures most organs-at-risk sufficiently well for radiation therapy planning purposes. The proposed method may be a valuable step towards automating the delineation of brain tumors and organs-at-risk in glioblastoma patients undergoing radiation therapy.
|
22
|
A probabilistic atlas of the human thalamic nuclei combining ex vivo MRI and histology. Neuroimage 2018; 183:314-326. [PMID: 30121337 PMCID: PMC6215335 DOI: 10.1016/j.neuroimage.2018.08.012] [Citation(s) in RCA: 265] [Impact Index Per Article: 44.2] [Received: 06/22/2018] [Revised: 07/27/2018] [Accepted: 08/09/2018] [Indexed: 01/18/2023]
Abstract
The human thalamus is a brain structure that comprises numerous, highly specific nuclei. Since these nuclei are known to have different functions and to be connected to different areas of the cerebral cortex, it is of great interest for the neuroimaging community to study their volume, shape and connectivity in vivo with MRI. In this study, we present a probabilistic atlas of the thalamic nuclei built using ex vivo brain MRI scans and histological data, as well as the application of the atlas to in vivo MRI segmentation. The atlas was built using manual delineation of 26 thalamic nuclei on the serial histology of 12 whole thalami from six autopsy samples, combined with manual segmentations of the whole thalamus and surrounding structures (caudate, putamen, hippocampus, etc.) made on in vivo brain MR data from 39 subjects. The 3D structure of the histological data and corresponding manual segmentations was recovered using the ex vivo MRI as reference frame, and stacks of blockface photographs acquired during the sectioning as intermediate target. The atlas, which was encoded as an adaptive tetrahedral mesh, shows a good agreement with previous histological studies of the thalamus in terms of volumes of representative nuclei. When applied to segmentation of in vivo scans using Bayesian inference, the atlas shows excellent test-retest reliability, robustness to changes in input MRI contrast, and ability to detect differential thalamic effects in subjects with Alzheimer's disease. The probabilistic atlas and companion segmentation tool are publicly available as part of the neuroimaging package FreeSurfer.
|
23
|
Systematic comparison of different techniques to measure hippocampal subfield volumes in ADNI2. Neuroimage Clin 2017; 17:1006-1018. [PMID: 29527502 PMCID: PMC5842756 DOI: 10.1016/j.nicl.2017.12.036] [Citation(s) in RCA: 47] [Impact Index Per Article: 6.7] [Received: 10/03/2017] [Revised: 12/18/2017] [Accepted: 12/23/2017] [Indexed: 12/25/2022]
Abstract
Objective Subfield-specific measurements provide superior information in the early stages of neurodegenerative diseases compared to global hippocampal measurements. The overall goal was to systematically compare the performance of five representative manual and automated T1- and T2-based subfield labeling techniques in a subset of the ADNI2 population. Methods The high-resolution T2-weighted hippocampal images (T2-HighRes) and the corresponding T1 images from 106 ADNI2 subjects (41 controls, 57 MCI, 8 AD) were processed as follows. A. T1-based: 1. FreeSurfer + Large Deformation Diffeomorphic Metric Mapping in combination with shape analysis. 2. FreeSurfer 5.1 subfields using an in-vivo atlas. B. T2-HighRes: 1. Model-based subfield segmentation using an ex-vivo atlas (FreeSurfer 6.0). 2. T2-based automated multi-atlas segmentation combined with similarity-weighted voting (ASHS). 3. Manual subfield parcellation. Multiple regression analyses were used to calculate effect sizes (ES) for group, amyloid positivity in controls, and associations with cognitive/memory performance for each approach. Results Subfield volumetry was better than whole hippocampal volumetry for the detection of the mild atrophy differences between controls and MCI (ES: 0.27 vs 0.11). T2-HighRes approaches outperformed T1 approaches for the detection of early-stage atrophy (ES: 0.27 vs 0.10), amyloid positivity (ES: 0.11 vs 0.04), and cognitive associations (ES: 0.22 vs 0.19). Conclusions T2-HighRes subfield approaches outperformed whole-hippocampus and T1 subfield approaches. None of the T2-HighRes methods tested had a clear advantage over the others. Each has strengths and weaknesses that need to be taken into account when deciding which one to use to get the best results from subfield volumetry.
|
24
|
A machine learning method for fast and accurate characterization of depth-of-interaction gamma cameras. Phys Med Biol 2017; 62:8376-8401. [DOI: 10.1088/1361-6560/aa6ee5] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Indexed: 11/11/2022]
|
25
|
Abstract
PURPOSE In radiotherapy based only on magnetic resonance imaging (MRI), knowledge about tissue electron densities must be derived from the MRI. This can be achieved by converting the MRI scan to the so-called pseudo-computed tomography (pCT). An obstacle is that the voxel intensities in conventional MRI scans are not uniquely related to electron density. The authors previously demonstrated that a patch-based method could produce accurate pCTs of the brain using conventional T1-weighted MRI scans. The method was driven mainly by local patch similarities and relied on simple affine registrations between an atlas database of the co-registered MRI/CT scan pairs and the MRI scan to be converted. In this study, the authors investigate the applicability of the patch-based approach in the pelvis. This region is challenging for a method based on local similarities due to the greater inter-patient variation. The authors benchmark the method against a baseline pCT strategy where all voxels inside the body contour are assigned a water-equivalent bulk density. Furthermore, the authors implement a parallelized approximate patch search strategy to speed up the pCT generation time to a more clinically relevant level. METHODS The data consisted of CT and T1-weighted MRI scans of 10 prostate patients. pCTs were generated using an approximate patch search algorithm in a leave-one-out fashion and compared with the CT using frequently described metrics such as the voxel-wise mean absolute error (MAEvox) and the deviation in water-equivalent path lengths. Furthermore, the dosimetric accuracy was tested for a volumetric modulated arc therapy plan using dose-volume histogram (DVH) point deviations and γ-index analysis. RESULTS The patch-based approach had an average MAEvox of 54 HU; median deviations of less than 0.4% in relevant DVH points and a γ-index pass rate of 0.97 using a 1%/1 mm criterion. The patch-based approach showed a significantly better performance than the baseline water pCT in almost all metrics. The approximate patch search strategy was 70x faster than a brute-force search, with an average prediction time of 20.8 min. CONCLUSIONS The authors showed that a patch-based method based on affine registrations and T1-weighted MRI could generate accurate pCTs of the pelvis. The main source of differences between pCT and CT was positional changes of air pockets and body outline.
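The patch mechanism at the heart of such pCT methods can be sketched in one dimension as similarity-weighted k-nearest-neighbor regression from MRI patches to HU values. The toy constant-intensity atlas and the brute-force search below are illustrative stand-ins for the co-registered atlas database and the approximate parallel search described in the abstract.

```python
import numpy as np

def predict_pct(mri, atlas_mri_patches, atlas_ct_values, k=3):
    """mri: 1-D toy 'scan'; atlas_mri_patches: (n, patch_len); atlas_ct_values: (n,) HU."""
    half = atlas_mri_patches.shape[1] // 2
    pct = np.zeros(mri.size)
    for i in range(half, mri.size - half):
        patch = mri[i - half:i + half + 1]
        d = np.sum((atlas_mri_patches - patch) ** 2, axis=1)  # patch dissimilarity
        nn = np.argsort(d)[:k]                                # k best atlas matches
        w = 1.0 / (d[nn] + 1e-6)                              # similarity weights
        pct[i] = np.sum(w * atlas_ct_values[nn]) / w.sum()    # weighted HU estimate
    return pct

# Toy atlas: constant-intensity patches with a linear intensity-to-HU mapping.
atlas_patches = np.repeat(np.arange(10.0)[:, None], 3, axis=1)  # shape (10, 3)
atlas_hu = 100.0 * np.arange(10.0)
pct = predict_pct(np.array([2.0, 2.0, 2.0, 5.0, 5.0, 5.0]), atlas_patches, atlas_hu)
```

With this linear toy mapping, interior voxels of intensity 2 and 5 are assigned roughly 200 and 500 HU; border voxels without a full patch are left at zero, which a real implementation would handle with padding.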
|
26
|
PET/MRI in the Presence of Metal Implants: Completion of the Attenuation Map from PET Emission Data. J Nucl Med 2017; 58:840-845. [PMID: 28126884 DOI: 10.2967/jnumed.116.183343] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Received: 08/29/2016] [Accepted: 12/26/2016] [Indexed: 12/27/2022]
Abstract
We present a novel technique for accurate whole-body attenuation correction in the presence of metallic endoprostheses on integrated non-time-of-flight (non-TOF) PET/MRI scanners. The proposed implant PET-based attenuation map completion (IPAC) method performs a joint reconstruction of radioactivity and attenuation from the emission data to determine the position, shape, and linear attenuation coefficient (LAC) of metallic implants. Methods: The initial estimate of the attenuation map was obtained using the MR Dixon method currently available on the Siemens Biograph mMR scanner. The attenuation coefficients in the area of the MR image subjected to metal susceptibility artifacts are then reconstructed from the PET emission data using the IPAC algorithm. The method was tested on 11 subjects presenting 13 different metallic implants, who underwent CT and PET/MR scans. Relative mean LACs and Dice similarity coefficients were calculated to determine the accuracy of the reconstructed attenuation values and the shape of the metal implant, respectively. The reconstructed PET images were compared with those obtained using the reference CT-based approach and the Dixon-based method. Absolute relative change (aRC) images were generated in each case, and voxel-based analyses were performed. Results: The error in implant LAC estimation, using the proposed IPAC algorithm, was 15.7% ± 7.8%, which was significantly smaller than the Dixon- (100%) and CT- (39%) derived values. A mean Dice similarity coefficient of 73% ± 9% was obtained when comparing the IPAC- with the CT-derived implant shape. The voxel-based analysis of the reconstructed PET images revealed quantification errors (aRC) of 13.2% ± 22.1% for the IPAC- with respect to CT-corrected images. The Dixon-based method performed substantially worse, with a mean aRC of 23.1% ± 38.4%. Conclusion: We have presented a non-TOF emission-based approach for estimating the attenuation map in the presence of metallic implants, to be used for whole-body attenuation correction in integrated PET/MR scanners. The Graphics Processing Unit implementation of the algorithm will be included in the open-source reconstruction toolbox Occiput.io.
|
27
|
Fast and sequence-adaptive whole-brain segmentation using parametric Bayesian modeling. Neuroimage 2016; 143:235-249. [PMID: 27612647 DOI: 10.1016/j.neuroimage.2016.09.011] [Citation(s) in RCA: 76] [Impact Index Per Article: 9.5] [Received: 06/10/2016] [Revised: 09/02/2016] [Accepted: 09/05/2016] [Indexed: 12/18/2022]
Abstract
Quantitative analysis of magnetic resonance imaging (MRI) scans of the brain requires accurate automated segmentation of anatomical structures. A desirable feature for such segmentation methods is to be robust against changes in acquisition platform and imaging protocol. In this paper we validate the performance of a segmentation algorithm designed to meet these requirements, building upon generative parametric models previously used in tissue classification. The method is tested on four different datasets acquired with different scanners, field strengths and pulse sequences, demonstrating comparable accuracy to state-of-the-art methods on T1-weighted scans while being one to two orders of magnitude faster. The proposed algorithm is also shown to be robust against small training datasets, and readily handles images with different MRI contrast as well as multi-contrast data.
|
28
|
A Generative Probabilistic Model and Discriminative Extensions for Brain Lesion Segmentation--With Application to Tumor and Stroke. IEEE Trans Med Imaging 2016; 35:933-46. [PMID: 26599702 PMCID: PMC4854961 DOI: 10.1109/tmi.2015.2502596] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.8] [Indexed: 05/31/2023]
Abstract
We introduce a generative probabilistic model for segmentation of brain lesions in multi-dimensional images that generalizes the EM segmenter, a common approach that models brain images using Gaussian mixtures and a probabilistic tissue atlas and employs expectation-maximization (EM) to estimate the label map for a new image. Our model augments the probabilistic atlas of the healthy tissues with a latent atlas of the lesion. We derive an estimation algorithm with closed-form EM update equations. The method extracts a latent atlas prior distribution and the lesion posterior distributions jointly from the image data. It delineates lesion areas individually in each channel, allowing for differences in lesion appearance across modalities, an important feature of many brain tumor imaging sequences. We also propose discriminative model extensions to map the output of the generative model to arbitrary labels with semantic and biological meaning, such as "tumor core" or "fluid-filled structure", but without a one-to-one correspondence to the hypo- or hyper-intense lesion areas identified by the generative model. We test the approach on two image sets: the publicly available BRATS set of glioma patient scans, and multimodal brain images of patients with acute and subacute ischemic stroke. We find that the generative model designed for tumor lesions generalizes well to stroke images, and that the extended discriminative model is one of the top-ranking methods in the BRATS evaluation.
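A minimal sketch of the EM-segmenter backbone that this model generalizes: a Gaussian mixture over intensities weighted by a probabilistic atlas prior, alternating posterior (label) estimation with closed-form parameter updates. The two-class toy data and the flat atlas below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy 1-D "scan": 200 voxels from each of two tissue classes.
x = np.concatenate([rng.normal(30, 4, 200), rng.normal(70, 4, 200)])
atlas = np.full((x.size, 2), 0.5)  # flat spatial prior over 2 classes

mu = np.array([20.0, 80.0])        # deliberately poor initial class means
sigma = np.array([10.0, 10.0])
for _ in range(30):
    # E-step: class posteriors from Gaussian likelihood times atlas prior.
    lik = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
    post = lik * atlas
    post /= post.sum(axis=1, keepdims=True)
    # M-step: closed-form updates of the class means and standard deviations.
    n = post.sum(axis=0)
    mu = (post * x[:, None]).sum(axis=0) / n
    sigma = np.sqrt((post * (x[:, None] - mu) ** 2).sum(axis=0) / n)

labels = post.argmax(axis=1)  # hard label map from the final posteriors
```

Running this recovers class means near 30 and 70 and labels almost every voxel correctly; the paper's contribution sits on top of this machinery, adding a latent lesion atlas and the discriminative relabeling step.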
|
29
|
Regional Hippocampal Atrophy and Higher Levels of Plasma Amyloid-Beta Are Associated With Subjective Memory Complaints in Nondemented Elderly Subjects. J Gerontol A Biol Sci Med Sci 2016; 71:1210-5. [PMID: 26946100 DOI: 10.1093/gerona/glw022] [Citation(s) in RCA: 41] [Impact Index Per Article: 5.1] [Received: 09/09/2015] [Accepted: 01/29/2016] [Indexed: 01/12/2023]
Abstract
BACKGROUND Evidence suggests a link between the presence of subjective memory complaints (SMC) and lower volume of the hippocampus, one of the first regions to show neuropathological lesions in Alzheimer's disease. However, it remains unknown whether this pattern of hippocampal atrophy is regionally specific and whether SMC are also paralleled by changes in peripheral levels of amyloid-beta (Aβ). METHODS The volume of hippocampal subregions and plasma Aβ levels were cross-sectionally compared between elderly individuals with (SMC(+); N = 47) and without SMC (SMC(-); N = 48). Significant volume differences in hippocampal subregions were further correlated with plasma Aβ levels and with objective memory performance. RESULTS Individuals with SMC exhibited significantly higher Aβ1-42 concentrations and lower volumes of CA1, CA4, dentate gyrus, and molecular layer compared with SMC(-) participants. Regression analyses further showed significant associations between lower volume of the dentate gyrus and both poorer memory performance and higher plasma Aβ1-42 levels in SMC(+) participants. CONCLUSIONS The presence of SMC, lower volumes of specific hippocampal regions, and higher plasma Aβ1-42 levels could be conditions associated with aging vulnerability. If such associations are confirmed in longitudinal studies, the combination may be markers recommending clinical follow-up in nondemented older adults.
|
30
|
Brain Tumor Segmentation Using a Generative Model with an RBM Prior on Tumor Shape. Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries 2016. [DOI: 10.1007/978-3-319-30858-6_15] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9] [Indexed: 12/02/2022]
|
31
|
Abstract
In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients-manually annotated by up to four raters-and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
|
32
|
Cone beam computed tomography guided treatment delivery and planning verification for magnetic resonance imaging only radiotherapy of the brain. Acta Oncol 2015. [PMID: 26198652 DOI: 10.3109/0284186x.2015.1062546] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.3] [Indexed: 11/13/2022]
Abstract
BACKGROUND Radiotherapy based on MRI only (MRI-only RT) shows a promising potential for the brain. Much research focuses on creating a pseudo computed tomography (pCT) from MRI for treatment planning while little attention is often paid to the treatment delivery. Here, we investigate if cone beam CT (CBCT) can be used for MRI-only image-guided radiotherapy (IGRT) and for verifying the correctness of the corresponding pCT. MATERIAL AND METHODS Six patients receiving palliative cranial RT were included in the study. Each patient had three-dimensional (3D) T1W MRI, a CBCT and a CT for reference. Further, a pCT was generated using a patch-based approach. MRI, pCT and CT were placed in the same frame of reference, matched to CBCT and the differences noted. Paired pCT-CT and pCT-CBCT data were created in bins of 10 HU and the absolute difference calculated. The data were converted to relative electron densities (RED) using the CT or a CBCT calibration curve. The latter was either based on a CBCT phantom (phan) or a paired CT-CBCT population (pop) of the five other patients. RESULTS Non-significant (NS) differences in the pooled CT-CBCT, MRI-CBCT and pCT-CBCT transformations were noted. The largest deviations from the CT-CBCT reference were < 1 mm and 1°. The average median absolute error (MeAE) in HU was 184 ± 34 and 299 ± 34 on average for pCT-CT and pCT-CBCT, respectively, and was significantly different (p < 0.01) in each patient. The average MeAE in RED was 0.108 ± 0.025, 0.104 ± 0.011 and 0.099 ± 0.017 for pCT-CT, pCT-CBCT phan (p < 0.01 on 2 patients) and pCT-CBCT pop (NS), respectively. CONCLUSIONS CBCT can be used for patient setup with either MRI or pCT as reference. The correctness of pCT can be verified from CBCT using a population-based calibration curve in the treatment geometry.
|
33
|
Quantitative comparison of 21 protocols for labeling hippocampal subfields and parahippocampal subregions in in vivo MRI: towards a harmonized segmentation protocol. Neuroimage 2015; 111:526-41. [PMID: 25596463 PMCID: PMC4387011 DOI: 10.1016/j.neuroimage.2015.01.004] [Citation(s) in RCA: 226] [Impact Index Per Article: 25.1] [Received: 09/29/2014] [Revised: 11/25/2014] [Accepted: 01/01/2015] [Indexed: 11/16/2022]
Abstract
OBJECTIVE An increasing number of human in vivo magnetic resonance imaging (MRI) studies have focused on examining the structure and function of the subfields of the hippocampal formation (the dentate gyrus, CA fields 1-3, and the subiculum) and subregions of the parahippocampal gyrus (entorhinal, perirhinal, and parahippocampal cortices). The ability to interpret the results of such studies and to relate them to each other would be improved if a common standard existed for labeling hippocampal subfields and parahippocampal subregions. Currently, research groups label different subsets of structures and use different rules, landmarks, and cues to define their anatomical extents. This paper characterizes, both qualitatively and quantitatively, the variability in the existing manual segmentation protocols for labeling hippocampal and parahippocampal substructures in MRI, with the goal of guiding subsequent work on developing a harmonized substructure segmentation protocol. METHOD MRI scans of a single healthy adult human subject were acquired both at 3 T and 7 T. Representatives from 21 research groups applied their respective manual segmentation protocols to the MRI modalities of their choice. The resulting set of 21 segmentations was analyzed in a common anatomical space to quantify similarity and identify areas of agreement. RESULTS The differences between the 21 protocols include the region within which segmentation is performed, the set of anatomical labels used, and the extents of specific anatomical labels. The greatest overall disagreement among the protocols is at the CA1/subiculum boundary, and disagreement across all structures is greatest in the anterior portion of the hippocampal formation relative to the body and tail. CONCLUSIONS The combined examination of the 21 protocols in the same dataset suggests possible strategies towards developing a harmonized subfield segmentation protocol and facilitates comparison between published studies.
|
34
|
Patch-based generation of a pseudo CT from conventional MRI sequences for MRI-only radiotherapy of the brain. Med Phys 2015; 42:1596-605. [PMID: 25832050 DOI: 10.1118/1.4914158] [Citation(s) in RCA: 96] [Impact Index Per Article: 10.7] [Indexed: 12/21/2022]
|
35
|
An algorithm for optimal fusion of atlases with different labeling protocols. Neuroimage 2015; 106:451-63. [PMID: 25463466 PMCID: PMC4286284 DOI: 10.1016/j.neuroimage.2014.11.031] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Received: 09/12/2014] [Revised: 11/13/2014] [Accepted: 11/14/2014] [Indexed: 10/24/2022]
Abstract
In this paper we present a novel label fusion algorithm suited for scenarios in which different manual delineation protocols with potentially disparate structures have been used to annotate the training scans (hereafter referred to as "atlases"). Such scenarios arise when atlases have missing structures, when they have been labeled with different levels of detail, or when they have been taken from different heterogeneous databases. The proposed algorithm can be used to automatically label a novel scan with any of the protocols from the training data. Further, it enables us to generate new labels that are not present in any delineation protocol by defining intersections on the underlying labels. We first use probabilistic models of label fusion to generalize three popular label fusion techniques to the multi-protocol setting: majority voting, semi-locally weighted voting and STAPLE. Then, we identify some shortcomings of the generalized methods, namely the inability to produce meaningful posterior probabilities for the different labels (majority voting, semi-locally weighted voting) and to exploit the similarities between the atlases (all three methods). Finally, we propose a novel generative label fusion model that can overcome these drawbacks. We use the proposed method to combine four brain MRI datasets labeled with different protocols (with a total of 102 unique labeled structures) to produce segmentations of 148 brain regions. Using cross-validation, we show that the proposed algorithm outperforms the generalizations of majority voting, semi-locally weighted voting and STAPLE (mean Dice score 83%, vs. 77%, 80% and 79%, respectively). We also evaluated the proposed algorithm in an aging study, successfully reproducing some well-known results in cortical and subcortical structures.
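The simplest of the three fusion baselines the paper generalizes, plain majority voting over co-registered atlases, can be sketched as follows with toy single-protocol label maps; the paper's point is precisely that such voting needs generalizing once the atlases use different labeling protocols.

```python
import numpy as np

def majority_vote(atlas_labels):
    """Fuse label maps by per-voxel majority vote.

    atlas_labels: (n_atlases, n_voxels) array of integer label maps,
    all assumed registered to the target scan and sharing one protocol.
    """
    atlas_labels = np.asarray(atlas_labels)
    n_labels = atlas_labels.max() + 1
    # Per-voxel vote counts (n_labels, n_voxels), then argmax over labels.
    votes = np.apply_along_axis(np.bincount, 0, atlas_labels, minlength=n_labels)
    return votes.argmax(axis=0)

# Three toy atlases labeling the same four voxels.
fused = majority_vote([[0, 1, 2, 2],
                       [0, 1, 1, 2],
                       [0, 2, 2, 2]])
# fused -> [0, 1, 2, 2]
```

Note that the vote counts carry no calibrated posterior probability and ignore how similar each atlas is to the target, which are exactly the two shortcomings the abstract's generative model addresses.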
|
36
|
A voxel-based investigation for MRI-only radiotherapy of the brain using ultra short echo times. Phys Med Biol 2014; 59:7501-19. [DOI: 10.1088/0031-9155/59/23/7501] [Citation(s) in RCA: 78] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
|
37
|
Improved inference in Bayesian segmentation using Monte Carlo sampling: application to hippocampal subfield volumetry. Med Image Anal 2013; 17:766-78. [PMID: 23773521 PMCID: PMC3719857 DOI: 10.1016/j.media.2013.04.005] [Citation(s) in RCA: 35] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2012] [Revised: 03/15/2013] [Accepted: 04/15/2013] [Indexed: 11/20/2022]
Abstract
Many segmentation algorithms in medical image analysis use Bayesian modeling to augment local image appearance with prior anatomical knowledge. Such methods often contain a large number of free parameters that are first estimated and then kept fixed during the actual segmentation process. However, a faithful Bayesian analysis would marginalize over such parameters, accounting for their uncertainty by considering all possible values they may take. Here we propose to incorporate this uncertainty into Bayesian segmentation methods in order to improve the inference process. In particular, we approximate the required marginalization over model parameters using computationally efficient Markov chain Monte Carlo techniques. We illustrate the proposed approach using a recently developed Bayesian method for the segmentation of hippocampal subfields in brain MRI scans, showing a significant improvement in an Alzheimer's disease classification task. As an additional benefit, the technique also allows one to compute informative "error bars" on the volume estimates of individual structures.
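The central move of the paper, averaging over the posterior of the model parameters rather than fixing them at a point estimate, can be illustrated with a toy one-dimensional Metropolis-Hastings sampler (an assumed stand-in, not the hippocampal segmentation model itself):

```python
import math
import random

def metropolis(log_post, x0, n_samples=5000, step=0.5, seed=0):
    """Draw samples from a 1-D posterior with a random-walk
    Metropolis-Hastings sampler."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        # Accept with probability min(1, posterior ratio).
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Toy posterior over a model parameter theta: N(mean=2.0, sd=0.3).
log_post = lambda t: -0.5 * ((t - 2.0) / 0.3) ** 2
draws = metropolis(log_post, x0=0.0)[1000:]  # discard burn-in
mean = sum(draws) / len(draws)
sd = (sum((d - mean) ** 2 for d in draws) / len(draws)) ** 0.5
```

The posterior mean plays the role of the marginalized estimate, and the sample standard deviation gives the kind of "error bar" the abstract describes.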
|
38
|
A unified framework for cross-modality multi-atlas segmentation of brain MRI. Med Image Anal 2013; 17:1181-91. [PMID: 24001931 DOI: 10.1016/j.media.2013.08.001] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2013] [Revised: 08/01/2013] [Accepted: 08/05/2013] [Indexed: 10/26/2022]
Abstract
Multi-atlas label fusion is a powerful image segmentation strategy that is becoming increasingly popular in medical imaging. A standard label fusion algorithm relies on independently computed pairwise registrations between individual atlases and the (target) image to be segmented. These registrations are then used to propagate the atlas labels to the target space and fuse them into a single final segmentation. Such label fusion schemes commonly rely on the similarity between intensity values of the atlases and target scan, which is often problematic in medical imaging - in particular, when the atlases and target images are obtained via different sensor types or imaging protocols. In this paper, we present a generative probabilistic model that yields an algorithm for solving the atlas-to-target registrations and label fusion steps simultaneously. The proposed model does not directly rely on the similarity of image intensities. Instead, it exploits the consistency of voxel intensities within the target scan to drive the registration and label fusion, hence the atlases and target image can be of different modalities. Furthermore, the framework models the joint warp of all the atlases, introducing interdependence between the registrations. We use variational expectation maximization and the Demons registration framework in order to efficiently identify the most probable segmentation and registrations. We use two sets of experiments to illustrate the approach, where proton density (PD) MRI atlases are used to segment T1-weighted brain scans and vice versa. Our results clearly demonstrate the accuracy gain due to exploiting within-target intensity consistency and integrating registration into label fusion.
|
39
|
Fast, sequence adaptive parcellation of brain MR using parametric models. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2013; 16:727-34. [PMID: 24505732 DOI: 10.1007/978-3-642-40811-3_91] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
In this paper we propose a method for whole brain parcellation using the type of generative parametric models typically used in tissue classification. Compared to the non-parametric, multi-atlas segmentation techniques that have become popular in recent years, our method obtains state-of-the-art segmentation performance in both cortical and subcortical structures, while retaining all the benefits of generative parametric models, including high computational speed, automatic adaptiveness to changes in image contrast when different scanner platforms and pulse sequences are used, and the ability to handle multi-contrast (vector-valued intensities) MR data. We have validated our method by comparing its segmentations to manual delineations both within and across scanner platforms and pulse sequences, and show preliminary results on multi-contrast test-retest scans, demonstrating the feasibility of the approach.
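The generative parametric models this method builds on are, in essence, Gaussian mixtures over voxel intensities fitted with EM; a minimal one-dimensional sketch on synthetic "CSF/GM/WM" intensities (hypothetical values, not the paper's whole-brain model):

```python
import numpy as np

def fit_gmm_em(x, n_classes, n_iter=50):
    """Fit a 1-D Gaussian mixture with EM; return per-class means,
    variances, weights, and hard voxel assignments."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, n_classes))  # spread init
    var = np.full(n_classes, x.var())
    w = np.full(n_classes, 1.0 / n_classes)
    for _ in range(n_iter):
        # E-step: responsibilities under each (unnormalized) Gaussian.
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights, means, variances.
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return mu, var, w, resp.argmax(axis=1)

# Synthetic intensities for three tissue classes.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(30, 5, 500),    # "CSF"
                    rng.normal(80, 5, 500),    # "GM"
                    rng.normal(130, 5, 500)])  # "WM"
mu, var, w, labels = fit_gmm_em(x, n_classes=3)
```

Because only the class means and variances are re-estimated per scan, this family of models adapts automatically to contrast changes across scanners and pulse sequences, which is the property the abstract highlights.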
|
40
|
Is synthesizing MRI contrast useful for inter-modality analysis? MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2013; 16:631-8. [PMID: 24505720 DOI: 10.1007/978-3-642-40811-3_79] [Citation(s) in RCA: 57] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/03/2022]
Abstract
Availability of multi-modal magnetic resonance imaging (MRI) databases opens up the opportunity to synthesize different MRI contrasts without actually acquiring the images. In theory, such synthetic images could reduce the number of acquisitions needed for certain analyses, but to what extent they can substitute for real acquisitions remains an open question. In this study, we used a synthesis method based on patch matching to test whether synthetic images can be useful in segmentation and inter-modality cross-subject registration of brain MRI. Thirty-nine T1 scans with 36 manually labeled structures of interest were used in the registration and segmentation of eight proton density (PD) scans, for which ground truth T1 data were also available. The results show that synthesized T1 contrast can considerably enhance the quality of non-linear registration compared with using the original PD data, and it is only marginally worse than using the original T1 scans. In segmentation, the relative improvement with respect to using the PD is smaller, but still statistically significant.
|
41
|
The relevance voxel machine (RVoxM): a self-tuning Bayesian model for informative image-based prediction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2012; 31:2290-2306. [PMID: 23008245 PMCID: PMC3623564 DOI: 10.1109/tmi.2012.2216543] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
This paper presents the relevance voxel machine (RVoxM), a dedicated Bayesian model for making predictions based on medical imaging data. In contrast to the generic machine learning algorithms that have often been used for this purpose, the method is designed to utilize a small number of spatially clustered sets of voxels that are particularly suited for clinical interpretation. RVoxM automatically tunes all its free parameters during the training phase, and offers the additional advantage of producing probabilistic prediction outcomes. We demonstrate RVoxM as a regression model by predicting age from volumetric gray matter segmentations, and as a classification model by distinguishing patients with Alzheimer's disease from healthy controls using surface-based cortical thickness data. Our results indicate that RVoxM yields biologically meaningful models, while providing state-of-the-art predictive accuracy.
|
42
|
Predicting the location of human perirhinal cortex, Brodmann's area 35, from MRI. Neuroimage 2012; 64:32-42. [PMID: 22960087 DOI: 10.1016/j.neuroimage.2012.08.071] [Citation(s) in RCA: 64] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2012] [Revised: 08/10/2012] [Accepted: 08/21/2012] [Indexed: 11/30/2022] Open
Abstract
The perirhinal cortex (Brodmann's area 35) is a multimodal area that is important for normal memory function. Specifically, perirhinal cortex is involved in the detection of novel objects and manifests neurofibrillary tangles in Alzheimer's disease very early in disease progression. We scanned ex vivo brain hemispheres at standard resolution (1 mm × 1 mm × 1 mm) to construct pial/white matter surfaces in FreeSurfer and scanned again at high resolution (120 μm × 120 μm × 120 μm) to determine cortical architectural boundaries. After labeling perirhinal area 35 in the high resolution images, we mapped the high resolution labels to the surface models to localize area 35 in fourteen cases. We validated the area boundaries determined using histological Nissl staining. To test the accuracy of the probabilistic mapping, we measured the Hausdorff distance between the predicted and true labels and found that the median Hausdorff distance was 4.0 mm for the left hemispheres (n=7) and 3.2 mm for the right hemispheres (n=7) across subjects. To show the utility of perirhinal localization, we mapped our labels to a subset of the Alzheimer's Disease Neuroimaging Initiative dataset and found decreased cortical thickness measures in mild cognitive impairment and Alzheimer's disease compared to controls in the predicted perirhinal area 35. Our ex vivo probabilistic mapping of the perirhinal cortex provides histologically validated, automated and accurate labeling of architectonic regions in the medial temporal lobe, and facilitates the analysis of atrophic changes in a large dataset for earlier detection and diagnosis.
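The Hausdorff distance used here to compare predicted and true labels is the larger of the two directed maximum point-to-closest-point distances. A small NumPy version over point sets (boundary coordinates, e.g. in mm, are assumed to be given):

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets
    (rows are coordinates, e.g. label boundary voxels in mm)."""
    # Pairwise Euclidean distances between every point of a and b.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # max over a of min over b, and vice versa; take the larger.
    return max(d.min(axis=1).max(), d.min(axis=0).max())

a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 0.0], [4.0, 0.0]])
dist = hausdorff(a, b)  # directed distances are 1.0 and 3.0 -> 3.0
```

This brute-force form is O(|a|·|b|) in memory; for full label boundaries one would typically use a KD-tree-based routine instead.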
|
43
|
P2‐256: Differential correlation of amyloid binding with hippocampal subfield volume loss in cognitively normal participants. Alzheimers Dement 2012. [DOI: 10.1016/j.jalz.2012.05.964] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
|
44
|
IC‐O1‐02: Differential correlation of amyloid binding with hippocampal subfield volume loss in cognitively normal participants. Alzheimers Dement 2012. [DOI: 10.1016/j.jalz.2012.05.017] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
|
45
|
A GENERATIVE MODEL FOR MULTI-ATLAS SEGMENTATION ACROSS MODALITIES. PROCEEDINGS. IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING 2012:888-891. [PMID: 23568278 DOI: 10.1109/isbi.2012.6235691] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Current label fusion methods enhance multi-atlas segmentation by locally weighting the contribution of the atlases according to their similarity to the target volume after registration. However, these methods cannot handle voxel intensity inconsistencies between the atlases and the target image, which limits their application across modalities or even across MRI datasets due to differences in image contrast. Here we present a generative model for multi-atlas image segmentation, which does not rely on the intensity of the training images. Instead, we exploit the consistency of voxel intensities within regions in the target volume and their relation to the propagated labels. This is formulated in a probabilistic framework, where the most likely segmentation is obtained with variational expectation maximization (EM). The approach is demonstrated in an experiment where T1-weighted MRI atlases are used to segment proton-density (PD) weighted brain MRI scans, a scenario in which traditional weighting schemes cannot be used. Our method significantly improves the results provided by majority voting and STAPLE.
|
46
|
Abstract
The maturity of registration methods, in combination with the increasing processing power of computers, has made multi-atlas segmentation methods practical. The problem of merging the deformed label maps from the atlases is known as label fusion. Even though label fusion has been well studied for intramodality scenarios, it remains relatively unexplored when the nature of the target data is multimodal or when its modality is different from that of the atlases. In this paper, we review the literature on label fusion methods and also present an extension of our previously published algorithm to the general case in which the target data are multimodal. The method is based on a generative model that exploits the consistency of voxel intensities within the target scan based on the current estimate of the segmentation. Using brain MRI scans acquired with a multiecho FLASH sequence, we compare the method with majority voting, statistical-atlas-based segmentation, the popular package FreeSurfer and an adaptive local multi-atlas segmentation method. The results show that our approach produces highly accurate segmentations (Dice 86.3% across 22 brain structures of interest), outperforming the competing methods.
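The Dice score reported here (86.3% across 22 structures) measures volume overlap between a segmentation A and a reference B as 2|A∩B|/(|A|+|B|); for binary masks:

```python
import numpy as np

def dice(seg, ref):
    """Dice overlap between two binary masks: 2|A n B| / (|A| + |B|)."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    denom = seg.sum() + ref.sum()
    # Convention: two empty masks overlap perfectly.
    return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0

seg = np.array([[1, 1, 0], [0, 1, 0]])
ref = np.array([[1, 0, 0], [0, 1, 1]])
score = dice(seg, ref)  # 2*2 / (3+3) = 2/3
```

For multi-structure evaluations such as the one reported, this is computed per label and then averaged.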
|
47
|
Abstract
This paper presents the Relevance Voxel Machine (RVoxM), a Bayesian multivariate pattern analysis (MVPA) algorithm that is specifically designed for making predictions based on image data. In contrast to generic MVPA algorithms that have often been used for this purpose, the method is designed to utilize a small number of spatially clustered sets of voxels that are particularly suited for clinical interpretation. RVoxM automatically tunes all its free parameters during the training phase, and offers the additional advantage of producing probabilistic prediction outcomes. Experiments on age prediction from structural brain MRI indicate that RVoxM yields biologically meaningful models that provide excellent predictive accuracy.
|
48
|
P4‐056: Mild cognitive impairment: Differential atrophy in the hippocampal subfields. Alzheimers Dement 2011. [DOI: 10.1016/j.jalz.2011.05.2076] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
49
|
Cerebral measurements and their correlation with the onset age and the duration of opioid abuse. J Opioid Manag 2011; 6:423-9. [PMID: 21269003 DOI: 10.5055/jom.2010.0040] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
BACKGROUND Opioid-dependent patients have been shown to have structural brain alterations. This study focuses on magnetic resonance imaging (MRI) measurements of the brain and their correlation with the onset age and the duration of opioid abuse. METHODS Brain MRI was obtained from 17 opioid-dependent patients (mean age 34 years, SD 7 years) and 17 controls. Compulsive opioid use had begun between ages 15 and 31 (mean 20) and had continued from 5 to 26 years. All patients were tobacco smokers; six had also abused amphetamines and 11 benzodiazepines. Relative volumes of cerebral white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) spaces were measured. In addition, the Sylvian fissure ratio (SFR), bifrontal ratio, and midsagittal cerebellar vermian area were correlated with the onset age and the duration of opioid abuse. RESULTS The total volume (GM + WM + CSF) of the cerebrum was significantly smaller in patients than in controls (Mann-Whitney U-test, p = 0.026), as were the absolute volumes of GM and WM (p = 0.014 and p = 0.007, respectively). There was no significant difference in GM and WM volumes normalized to total cerebral volume. In contrast, the absolute volume of CSF did not differ significantly between the groups, but the relative volume of CSF was significantly higher in opioid dependents (p = 0.029). SFR and bifrontal ratio were larger in opioid dependents than in controls (p = 0.005 and p = 0.013). SFR correlated negatively (p = 0.017, r = -0.569) and the area of the cerebellar vermis correlated positively (p = 0.043, r = 0.496) with the onset age of opioid abuse. The duration of opioid abuse and the area of the cerebellar vermis were negatively correlated (p = 0.038, r = -0.523), even though cerebellar vermis areas did not differ significantly between opioid dependents and controls.
CONCLUSIONS The authors speculate that the onset of substance abuse in adolescence or early adulthood may have partly disturbed late brain maturation, as in normal development the dorsolateral frontal cortex and superior parts of the temporal lobes are the last to mature. The cerebellar vermis may also be affected by early-onset substance abuse. It is possible that the brain is more vulnerable to substance abuse at a young age than later in life.
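The relative volumes analyzed in this study are each compartment's fraction of the total cerebral volume (GM + WM + CSF); a trivial sketch with hypothetical values, not the study's data:

```python
def relative_volumes(gm, wm, csf):
    """Normalize absolute GM/WM/CSF volumes (e.g. in mL) by their total,
    so that subjects with different head sizes become comparable."""
    total = gm + wm + csf
    return {"GM": gm / total, "WM": wm / total, "CSF": csf / total}

# Hypothetical compartment volumes in mL.
rel = relative_volumes(gm=600.0, wm=500.0, csf=150.0)
# rel == {"GM": 0.48, "WM": 0.40, "CSF": 0.12}
```

This normalization explains the pattern in the results: absolute CSF volume can be unchanged while relative CSF volume rises when total cerebral volume shrinks.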
|
50
|
A generative approach for image-based modeling of tumor growth. INFORMATION PROCESSING IN MEDICAL IMAGING : PROCEEDINGS OF THE ... CONFERENCE 2011; 22:735-47. [PMID: 21761700 PMCID: PMC3237122 DOI: 10.1007/978-3-642-22092-0_60] [Citation(s) in RCA: 41] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
Extensive imaging is routinely used in brain tumor patients to monitor the state of the disease and to evaluate therapeutic options. A large number of multi-modal and multi-temporal image volumes is acquired in standard clinical cases, requiring new approaches for comprehensive integration of information from different image sources and different time points. In this work we propose a joint generative model of tumor growth and of image observation that naturally handles multimodal and longitudinal data. We use the model for analyzing imaging data in patients with glioma. The tumor growth model is based on a reaction-diffusion framework. Model personalization relies only on a forward model for the growth process and on image likelihood. We take advantage of an adaptive sparse grid approximation for efficient inference via Markov Chain Monte Carlo sampling. The approach can be used for integrating information from different multi-modal imaging protocols and can easily be adapted to other tumor growth models.
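Reaction-diffusion growth models of this kind are typically of Fisher-KPP type, dc/dt = D * d2c/dx2 + rho * c * (1 - c), where c is tumor cell density, D the diffusion (infiltration) coefficient, and rho the proliferation rate. A one-dimensional explicit finite-difference sketch (generic, not the paper's personalized model):

```python
import numpy as np

def fisher_kpp_1d(c0, D, rho, dx, dt, n_steps):
    """Integrate dc/dt = D * d2c/dx2 + rho * c * (1 - c) with explicit
    finite differences and zero-flux (Neumann) boundaries."""
    c = c0.copy()
    for _ in range(n_steps):
        lap = (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx ** 2
        lap[0] = (c[1] - c[0]) / dx ** 2    # Neumann boundary, left
        lap[-1] = (c[-2] - c[-1]) / dx ** 2  # Neumann boundary, right
        c = c + dt * (D * lap + rho * c * (1 - c))
    return c

# Seed a small tumor cell density in the middle of a 1-D domain.
c0 = np.zeros(101)
c0[50] = 0.1
c = fisher_kpp_1d(c0, D=0.1, rho=1.0, dx=1.0, dt=0.1, n_steps=500)
```

Model personalization as described in the abstract would treat parameters such as D and rho as unknowns and infer them from the image likelihood via MCMC, with the solver above as the forward model.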
|