26. Remedios SW, Han S, Dewey BE, Pham DL, Prince JL, Carass A. Joint Image and Label Self-Super-Resolution. Simulation and Synthesis in Medical Imaging (SASHIMI), MICCAI Workshop 2021; 12965:14-23. PMID: 35291392; PMCID: PMC8919863; DOI: 10.1007/978-3-030-87592-3_2.
Abstract
We propose a method to jointly super-resolve an anisotropic image volume along with its corresponding voxel labels without external training data. Our method is inspired by internally trained super-resolution, or self-super-resolution (SSR), techniques that target anisotropic, low-resolution (LR) magnetic resonance (MR) images. While the resulting images from such methods are quite useful, their corresponding LR labels, derived from either automatic algorithms or human raters, are no longer in correspondence with the super-resolved volume. To address this, we develop an SSR deep network that takes both an anisotropic LR MR image and its corresponding LR labels as input and produces both a super-resolved MR image and its super-resolved labels as output. We evaluated our method on 50 T1-weighted brain MR images, 4× down-sampled, each with 10 automatically generated labels. In comparison to other methods, ours had superior Dice across all labels and competitive metrics on the MR image. Our approach is the first reported method for SSR of paired anisotropic image and label volumes.
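The internal-training idea behind SSR can be sketched in a few lines of NumPy. This is a simplified illustration only: it builds LR/HR training pairs by degrading one high-resolution in-plane axis of the anisotropic volume itself; the box-filter degradation, axis convention, and `scale` value are assumptions, not the paper's exact model.

```python
import numpy as np

def make_ssr_pairs(volume, scale=4):
    """Build internal training pairs for self-super-resolution (sketch).

    The volume is assumed high-resolution in-plane (axes 0 and 1) and
    low-resolution through-plane (axis 2). Degrading axis 0 by `scale`
    yields LR/HR pairs with no external training data.
    """
    h = volume.shape[0] - volume.shape[0] % scale   # trim to a multiple of scale
    hr = volume[:h]
    # Box-filter downsample along axis 0, standing in for slice-selection blur.
    lr = hr.reshape(h // scale, scale, *hr.shape[1:]).mean(axis=1)
    lr_up = np.repeat(lr, scale, axis=0)            # back onto the HR grid
    return lr_up, hr                                # (network input, target)

vol = np.random.rand(65, 32, 8)                     # toy anisotropic volume
x, y = make_ssr_pairs(vol, scale=4)
```

In a full SSR pipeline, a network trained on such internally generated pairs would then be applied along the genuinely low-resolution axis.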
27. He Y, Carass A, Zuo L, Dewey BE, Prince JL. Autoencoder based self-supervised test-time adaptation for medical image analysis. Med Image Anal 2021; 72:102136. PMID: 34246070; PMCID: PMC8316425; DOI: 10.1016/j.media.2021.102136.
Abstract
Deep neural networks have been successfully applied to medical image analysis tasks like segmentation and synthesis. However, even if a network is trained on a large dataset from the source domain, its performance on unseen test domains is not guaranteed. The performance drop on data acquired differently from the network's training data, known as domain shift, is a major obstacle to deploying deep learning in clinical practice. Existing work focuses on retraining the model with data from the test domain, or harmonizing the test domain's data to the network's training data. A common practice is to distribute a carefully trained model to multiple users (e.g., clinical centers), each of whom applies the model to their own data, which may exhibit a domain shift (e.g., varying imaging parameters and machines). However, the unavailability of the source training data and the cost of training a new model often prevent the use of known methods to solve user-specific domain shifts. Here, we ask whether we can design a model that, once distributed to users, can quickly adapt itself to each new site without expensive retraining or access to the source training data. In this paper, we propose a model that can adapt based on a single test subject during inference. The model consists of three parts, all of which are neural networks: a task model (T) that performs the image analysis task, such as segmentation; a set of autoencoders (AEs); and a set of adaptors (As). The task model and autoencoders are trained on the source dataset, which can be computationally expensive. In the deployment stage, the adaptors are trained to transform the test image and its features to minimize the domain shift as measured by the autoencoders' reconstruction loss. Only the adaptors are optimized during the testing stage, using a single test subject, so adaptation is computationally efficient.
The method was validated on both retinal optical coherence tomography (OCT) image segmentation and magnetic resonance imaging (MRI) T1-weighted to T2-weighted image synthesis. With a short optimization time for the adaptors (10 iterations on a single test subject) and modest additional disk space for the autoencoders (around 15 MB), our method achieves significant performance improvement. Our code is publicly available at: https://github.com/YufanHe/self-domain-adapted-network.
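The adaptation principle (freeze the source-trained autoencoder, optimize only a lightweight adaptor to minimize its reconstruction loss on test data) can be illustrated with a linear toy model. Everything here is a stand-in: a PCA "autoencoder", an affine intensity adaptor, and a coarse grid search replace the paper's trained networks and gradient-based optimization.

```python
import numpy as np

def pca_autoencoder(X, k):
    """'Train' a linear autoencoder: mean plus top-k principal directions."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    V = Vt[:k].T
    return lambda Y: mu + (Y - mu) @ V @ V.T    # frozen reconstruction map

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 16))                    # source features span a 3-D subspace
source = rng.normal(size=(400, 3)) @ W
ae = pca_autoencoder(source, k=3)               # frozen after "training"

# Domain-shifted test data: same structure, different intensity statistics.
test = 2.0 * (rng.normal(size=(50, 3)) @ W) + 5.0

def recon_loss(a, b):
    z = a * test + b                            # affine intensity adaptor
    return float(np.mean((ae(z) - z) ** 2))

# Test-time adaptation: only the adaptor parameters are optimized, with the
# autoencoder untouched (grid search here; gradient descent in practice).
grid = [(a, b) for a in np.linspace(0.2, 1.5, 14)
               for b in np.linspace(-6.0, 1.0, 15)]
a_opt, b_opt = min(grid, key=lambda p: recon_loss(*p))
```

Because the autoencoder reconstructs well only data resembling its source domain, minimizing its reconstruction error steers the adaptor to undo the simulated shift, without ever touching the source data at test time.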
28. He Y, Carass A, Liu Y, Jedynak BM, Solomon SD, Saidha S, Calabresi PA, Prince JL. Structured layer surface segmentation for retina OCT using fully convolutional regression networks. Med Image Anal 2021; 68:101856. PMID: 33260113; PMCID: PMC7855873; DOI: 10.1016/j.media.2020.101856.
Abstract
Optical coherence tomography (OCT) is a noninvasive imaging modality with micrometer resolution that has been widely used for scanning the retina. Retinal layers are important biomarkers for many diseases. Accurate automated algorithms for segmenting smooth, continuous layer surfaces with the correct hierarchy (topology) are important for automated retinal thickness and surface shape analysis. State-of-the-art methods typically use a two-step process. First, a trained classifier labels each pixel as background or layer, or as boundary or non-boundary. Second, the desired smooth surfaces with the correct topology are extracted by graph methods (e.g., graph cut). Data-driven methods like deep networks have shown great ability in the pixel classification step, but to date have not been able to extract structured, smooth, continuous surfaces with topological constraints in the second step. In this paper, we combine these two steps into a unified deep learning framework by directly modeling the distribution of the surface positions. Smooth, continuous, and topologically correct surfaces are obtained in a single feed-forward operation. The proposed method was evaluated on two publicly available data sets of healthy controls and subjects with either multiple sclerosis or diabetic macular edema, and is shown to achieve state-of-the-art performance with sub-pixel accuracy.
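Modeling a distribution over surface positions and taking its expectation is what makes the regression both differentiable and sub-pixel. A minimal NumPy sketch of such a soft argmax over each A-scan column (the temperature `tau` and the toy column are illustrative):

```python
import numpy as np

def soft_argmax(logits, axis=0, tau=1.0):
    """Differentiable surface regression for one boundary (sketch).

    `logits` holds, for every A-scan (column), unnormalized scores over row
    positions; the expected row index under the softmax distribution gives
    a continuous, sub-pixel surface position.
    """
    z = logits / tau
    z = z - z.max(axis=axis, keepdims=True)         # numerical stability
    p = np.exp(z)
    p /= p.sum(axis=axis, keepdims=True)            # softmax over rows
    rows = np.arange(logits.shape[axis]).reshape(
        [-1 if a == axis else 1 for a in range(logits.ndim)])
    return (p * rows).sum(axis=axis)                # expected position

# A column whose scores peak between rows 3 and 4 yields a fractional position.
col = np.array([[0., 0., 1., 5., 5., 1., 0.]]).T    # shape (rows, columns)
pos = soft_argmax(col, axis=0)
```

Unlike a hard argmax, this expectation varies smoothly with the logits, so the surface position can be trained end to end with standard backpropagation.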
29. Alshareef A, Knutsen AK, Johnson CL, Carass A, Upadhyay K, Bayly PV, Pham DL, Prince JL, Ramesh K. Integrating material properties from magnetic resonance elastography into subject-specific computational models for the human brain. Brain Multiphysics 2021; 2:100038. PMID: 37168236; PMCID: PMC10168673; DOI: 10.1016/j.brain.2021.100038.
Abstract
Advances in brain imaging and computational methods have facilitated the creation of subject-specific computational brain models that aid researchers in investigating brain trauma using simulated impacts. The emergence of magnetic resonance elastography (MRE) as a non-invasive mechanical neuroimaging tool has enabled in vivo estimation of material properties at low-strain, harmonic loading. An open question in the field has been how these data can be integrated into computational models. The goals of this study were to use a novel MRI dataset acquired in human volunteers to generate models with subject-specific anatomy and material properties, and then to compare simulated brain deformations to subject-specific brain deformation data under non-injurious loading. Models of five subjects were simulated with linear viscoelastic (LVE) material properties estimated directly from MRE data. Model predictions were compared to experimental brain deformation acquired in the same subjects using tagged MRI. Outcomes from the models matched the spatial distribution and magnitude of the measured peak strain components, as well as the 95th percentile in-plane peak strains within 0.005 mm/mm and maximum principal strain within 0.012 mm/mm. Sensitivity to material heterogeneity was also investigated. Simulated brain deformations from a model with homogeneous brain properties and a model with brain properties discretized into up to ten regions were very similar (a mean absolute difference of less than 0.0015 mm/mm in peak strains). Incorporating material properties directly from MRE into a biofidelic subject-specific model is an important step toward future investigations of higher-order model features and simulations under more severe loading conditions.
30. Yang H, Sun J, Carass A, Zhao C, Lee J, Prince JL, Xu Z. Unsupervised MR-to-CT Synthesis Using Structure-Constrained CycleGAN. IEEE Trans Med Imaging 2020; 39:4249-4261. PMID: 32780700; DOI: 10.1109/TMI.2020.3015379.
Abstract
Synthesizing a CT image from an available MR image has recently emerged as a key goal in radiotherapy treatment planning for cancer patients. CycleGANs have achieved promising results on unsupervised MR-to-CT image synthesis; however, because they have no direct constraints between input and synthetic images, cycleGANs do not guarantee structural consistency between these two images. This means that anatomical geometry can be shifted in the synthetic CT images, clearly a highly undesirable outcome in the given application. In this paper, we propose a structure-constrained cycleGAN for unsupervised MR-to-CT synthesis by defining an extra structure-consistency loss based on the modality independent neighborhood descriptor. We also utilize a spectral normalization technique to stabilize the training process and a self-attention module to model the long-range spatial dependencies in the synthetic images. Results on unpaired brain and abdomen MR-to-CT image synthesis show that our method produces better synthetic CT images in both accuracy and visual quality as compared to other unsupervised synthesis methods. We also show that an approximate affine pre-registration for unpaired training data can improve synthesis results.
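The intuition behind a descriptor-based structure-consistency loss can be sketched with a heavily simplified self-similarity descriptor. This is not the actual MIND descriptor (which uses patch distances and a learned-free but richer neighborhood layout); it only demonstrates why such a loss is contrast-invariant yet geometry-sensitive.

```python
import numpy as np

def self_similarity_descriptor(img, eps=1e-6):
    """Very simplified stand-in for the MIND descriptor (sketch).

    For each pixel, squared differences to its four neighbours, normalized
    by their local mean, form a contrast-insensitive structural signature.
    (np.roll wraps at the borders; acceptable for a toy example.)
    """
    shifts = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    d = np.stack([(img - np.roll(img, s, axis=(0, 1))) ** 2 for s in shifts])
    v = d.mean(axis=0) + eps                    # local variance estimate
    return np.exp(-d / v)                       # shape (4, H, W)

def structure_loss(real_mr, synth_ct):
    """L1 distance between descriptors of the input MR and synthetic CT."""
    return np.mean(np.abs(self_similarity_descriptor(real_mr)
                          - self_similarity_descriptor(synth_ct)))

rng = np.random.default_rng(0)
mr = rng.random((32, 32))
same_structure = 2.0 * mr + 1.0     # different "contrast", same geometry
shifted = np.roll(mr, 3, axis=0)    # same contrast, shifted anatomy
```

A contrast change leaves the descriptor (and thus the loss) nearly unchanged, while a geometric shift does not, which is exactly the behavior wanted from a structure-consistency penalty between input MR and synthetic CT.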
31. Han S, An Y, Carass A, Prince JL, Resnick SM. Longitudinal analysis of regional cerebellum volumes during normal aging. Neuroimage 2020; 220:117062. PMID: 32592850; PMCID: PMC10683793; DOI: 10.1016/j.neuroimage.2020.117062.
Abstract
Some cross-sectional studies suggest reduced cerebellar volumes with aging, but there have been few longitudinal studies of age changes in cerebellar subregions in cognitively healthy older adults. In this work, 2,023 magnetic resonance (MR) images of 822 cognitively normal participants from the Baltimore Longitudinal Study of Aging (BLSA) were analyzed. Participants ranged in age from 50 to 95 years (mean 70.7 years) at the baseline assessment. Follow-up intervals were 1-9 years (mean 3.7 years) for participants with two or more visits. We used a recently developed cerebellum parcellation algorithm based on convolutional neural networks to divide the cerebellum into 28 subregions. Linear mixed effects models were applied to the volume of each cerebellar subregion to investigate cross-sectional and longitudinal age effects, as well as effects of sex and their interactions, after adjusting for intracranial volume. Our findings suggest spatially varying atrophy patterns across the cerebellum with respect to age and sex both cross-sectionally and longitudinally.
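The value of the longitudinal design can be seen in a small simulation: when within-subject decline is steeper than the between-subject age gradient, cross-sectional and longitudinal slope estimates differ. The toy below uses per-subject regressions as a crude stand-in for the study's linear mixed effects models (all numbers are invented; in practice one would fit, e.g., a random-intercept model with statsmodels' MixedLM).

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated longitudinal volumes: a between-subject baseline-age effect
# (-0.5 per year) plus a steeper within-subject decline (-1.5 per year),
# a subject-specific random intercept, and visit noise.
n_subj, n_visits = 100, 4
base_age = rng.uniform(50, 95, n_subj)
t = np.arange(n_visits) * 1.5                       # visit interval in years
vol = (1000.0 - 0.5 * base_age[:, None] - 1.5 * t[None, :]
       + rng.normal(0, 2.0, (n_subj, 1))            # random intercept
       + rng.normal(0, 0.5, (n_subj, n_visits)))    # visit noise

# Within-subject (longitudinal) slope: per-subject regression on time,
# then averaged across subjects.
long_slope = float(np.mean([np.polyfit(t, v, 1)[0] for v in vol]))

# Between-subject (cross-sectional) slope: baseline volume on baseline age.
cross_slope = float(np.polyfit(base_age, vol[:, 0], 1)[0])
```

A purely cross-sectional analysis of the baseline visits would recover only the shallower between-subject gradient, missing the faster within-subject atrophy; separating the two is what the mixed-model analysis in the study is designed to do.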
32. Han S, Carass A, He Y, Prince JL. Automatic cerebellum anatomical parcellation using U-Net with locally constrained optimization. Neuroimage 2020; 218:116819. PMID: 32438049; PMCID: PMC7416473; DOI: 10.1016/j.neuroimage.2020.116819.
Abstract
The cerebellum plays a central role in sensory input, voluntary motor action, and many neuropsychological functions and is involved in many brain diseases and neurological disorders. Cerebellar parcellation from magnetic resonance images provides a way to study regional cerebellar atrophy and also provides an anatomical map for functional imaging. In a recent comparison, a multi-atlas approach proved to be superior to other parcellation methods including some based on convolutional neural networks (CNNs) which have a considerable speed advantage. In this work, we developed an alternative CNN design for cerebellar parcellation, yielding a method that achieves the leading performance to date. The proposed method was evaluated on multiple data sets to show its broad applicability, and a Singularity container has been made publicly available.
33. Carass A, Roy S, Gherman A, Reinhold JC, Jesson A, Arbel T, Maier O, Handels H, Ghafoorian M, Platel B, Birenbaum A, Greenspan H, Pham DL, Crainiceanu CM, Calabresi PA, Prince JL, Roncal WRG, Shinohara RT, Oguz I. Evaluating White Matter Lesion Segmentations with Refined Sørensen-Dice Analysis. Sci Rep 2020; 10:8242. PMID: 32427874; PMCID: PMC7237671; DOI: 10.1038/s41598-020-64803-w.
Abstract
The Sørensen-Dice index (SDI) is a widely used measure for evaluating medical image segmentation algorithms. It offers a standardized measure of segmentation accuracy that has proven useful. However, it offers diminishing insight when the number of objects is unknown, such as in white matter lesion segmentation of multiple sclerosis (MS) patients. We present a refinement for finer-grained parsing of SDI results in situations where the number of objects is unknown, and we explore these ideas with two case studies. The first, an inter-rater comparison, shows that smaller lesions cannot be reliably identified. In the second, we use the per-algorithm insights provided by our analysis to fuse multiple MS lesion segmentation algorithms into a segmentation with improved performance. This work demonstrates the wealth of information that can be learned from refined analysis of medical image segmentations.
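Why an overall SDI hides per-lesion failures is easy to demonstrate. In this hedged toy, the lesion "components" are hand-labeled slices rather than the connected-component analysis a real refinement would use:

```python
import numpy as np

def dice(a, b):
    """Sørensen-Dice index of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy ground truth with one large and one small lesion (hand-labeled
# components, standing in for a connected-component decomposition).
truth = np.zeros((12, 12), dtype=bool)
truth[1:6, 1:6] = True                  # large lesion, 25 px
truth[9:11, 9:11] = True                # small lesion, 4 px
lesions = {"large": (slice(1, 6), slice(1, 6)),
           "small": (slice(9, 11), slice(9, 11))}

pred = np.zeros_like(truth)
pred[1:6, 1:6] = True                   # large lesion found, small one missed

overall = dice(truth, pred)             # ~0.93 despite a missed lesion
per_lesion = {name: dice(truth[s], pred[s]) for name, s in lesions.items()}
```

The overall SDI of about 0.93 looks excellent even though an entire lesion was missed; the per-lesion breakdown makes the failure visible, which is the spirit of the refinement the paper proposes.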
34. Zhao C, Shao M, Carass A, Li H, Dewey BE, Ellingsen LM, Woo J, Guttman MA, Blitz AM, Stone M, Calabresi PA, Halperin H, Prince JL. Applications of a deep learning method for anti-aliasing and super-resolution in MRI. Magn Reson Imaging 2019; 64:132-141. PMID: 31247254; PMCID: PMC7094770; DOI: 10.1016/j.mri.2019.05.038.
Abstract
Magnetic resonance (MR) images with both high resolution and high signal-to-noise ratio (SNR) are desired in many clinical and research applications. However, acquiring such images takes a long time, which is both costly and susceptible to motion artifacts. Acquiring MR images with good in-plane resolution and poor through-plane resolution is a common strategy that saves imaging time, preserves SNR, and provides one viewpoint with good resolution in two directions. Unfortunately, this strategy also creates orthogonal viewpoints that have poor resolution in one direction and, for 2D MR acquisition protocols, introduces aliasing artifacts. A deep learning approach called SMORE, which carries out both anti-aliasing and super-resolution on these types of acquisitions using no external atlas or exemplars, has been previously reported but not extensively validated. This paper reviews the SMORE algorithm and demonstrates its performance in four applications, with the goal of establishing its potential for use in both research and clinical scenarios. SMORE is first shown to improve the visualization of brain white matter lesions in FLAIR images acquired from multiple sclerosis patients. It is then shown to improve the visualization of scarring in cardiac left ventricular remodeling after myocardial infarction. Third, its performance on multi-view images of the tongue is demonstrated, and finally it is shown to improve performance in parcellation of the brain ventricular system. Both visual and selected quantitative metrics of resolution enhancement are demonstrated.
35. Dewey BE, Zhao C, Reinhold JC, Carass A, Fitzgerald KC, Sotirchos ES, Saidha S, Oh J, Pham DL, Calabresi PA, van Zijl PCM, Prince JL. DeepHarmony: A deep learning approach to contrast harmonization across scanner changes. Magn Reson Imaging 2019; 64:160-170. PMID: 31301354; PMCID: PMC6874910; DOI: 10.1016/j.mri.2019.05.041.
Abstract
Magnetic resonance imaging (MRI) is a flexible medical imaging modality that often lacks reproducibility between protocols and scanners. It has been shown that even when care is taken to standardize acquisitions, any changes in hardware, software, or protocol design can lead to differences in quantitative results. This greatly impacts the quantitative utility of MRI in multi-site or long-term studies, where consistency is often valued over image quality. We propose a method of contrast harmonization, called DeepHarmony, which uses a U-Net-based deep learning architecture to produce images with consistent contrast. To provide training data, a small overlap cohort (n = 8) was scanned using two different protocols. Images harmonized with DeepHarmony showed significant improvement in the consistency of volume quantification between scanning protocols. A longitudinal MRI dataset of patients with multiple sclerosis was also used to evaluate the effect of a protocol change on atrophy calculations in a clinical research setting. The results show that atrophy calculations were substantially and significantly affected by the protocol change, whereas harmonization with DeepHarmony substantially reduced both the magnitude and the significance of these differences. This establishes that DeepHarmony can be used with an overlap cohort to reduce inconsistencies in segmentation caused by changes in scanner protocol, allowing for modernization of hardware and protocol design in long-term studies without invalidating previously acquired data.
36. He Y, Carass A, Liu Y, Jedynak BM, Solomon SD, Saidha S, Calabresi PA, Prince JL. Fully Convolutional Boundary Regression for Retina OCT Segmentation. Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2019; 11764:120-128. PMID: 31853524; DOI: 10.1007/978-3-030-32239-7_14.
Abstract
A major goal of analyzing retinal optical coherence tomography (OCT) images is retinal layer segmentation. Accurate automated algorithms for segmenting smooth, continuous layer surfaces with the correct hierarchy (topology) are desired for monitoring disease progression. State-of-the-art methods use a trained classifier to label each pixel as background, layer, or surface. The final step of extracting the desired smooth surfaces with the correct topology is mostly performed by graph methods (e.g., shortest path, graph cut). However, manually building a graph with constraints that vary by retinal region and pathology, and solving the minimization with specialized algorithms, degrades the flexibility and time efficiency of the whole framework. In this paper, we directly model the distribution of surface positions using a deep network with a fully differentiable soft argmax to obtain smooth, continuous surfaces in a single feed-forward operation. A dedicated topology module is used in the deep network in both the training and testing stages to guarantee the surface topology. An extra network output branch predicts lesions and layers in a pixel-wise labeling scheme. The proposed method was evaluated on two publicly available data sets of healthy controls and subjects with multiple sclerosis or diabetic macular edema; it achieves state-of-the-art sub-pixel results.
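One way such a topology module can be realized (a sketch, not necessarily the paper's exact formulation) is to regress the first surface freely and force every deeper surface to differ from the one above it by a non-negative, ReLU'd gap:

```python
import numpy as np

def enforce_topology(raw_positions):
    """Guarantee the anatomical ordering of layer surfaces (sketch).

    `raw_positions` has shape (n_surfaces, n_columns) with unconstrained
    regressed positions. The first surface is kept; each deeper surface is
    the one above plus a ReLU'd gap, so surfaces may touch but never cross.
    The operation is differentiable, so it can sit inside a network.
    """
    gaps = np.maximum(np.diff(raw_positions, axis=0), 0.0)  # ReLU on gaps
    out = np.empty_like(raw_positions, dtype=float)
    out[0] = raw_positions[0]
    out[1:] = raw_positions[0] + np.cumsum(gaps, axis=0)
    return out

raw = np.array([[3.0, 4.0, 5.0],
                [2.0, 6.0, 5.5],    # crosses surface 0 in the first column
                [7.0, 5.0, 8.0]])   # crosses surface 1 in the second column
fixed = enforce_topology(raw)
```

After the pass, every column of `fixed` is non-decreasing from the innermost to the outermost surface, so the layer hierarchy holds by construction rather than by post hoc graph correction.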
37. Han S, Carass A, Prince JL. Hierarchical Parcellation of the Cerebellum. Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2019; 11766:484-491. PMID: 32399521; PMCID: PMC7217559; DOI: 10.1007/978-3-030-32248-9_54.
Abstract
Parcellation of the cerebellum in an MR image has been used to study regional associations with both motor and cognitive functions. Although the division of the cerebellum is defined hierarchically (i.e., the cerebellum can be divided into lobes, and the lobes can be further divided into lobules), previous automatic methods to parcellate the cerebellum do not utilize this information. In this work, we propose a method based on convolutional neural networks (CNNs) that explicitly incorporates the hierarchical organization of the cerebellum. The network is constructed as a tree, with each node representing a cerebellar region and its child nodes further subdividing that region into finer substructures. Thus, our CNN is aware of the hierarchical organization of the cerebellum. Furthermore, by selecting tree nodes to match the hierarchical properties of a given training sample, our network can be trained with heterogeneous training data that are labeled to different hierarchical depths. The proposed method was compared with a state-of-the-art cerebellum parcellation network and shows promising results as the first parcellation method to take the cerebellar hierarchical organization into consideration.
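The consistency constraint a label hierarchy implies can be sketched without any network at all: an internal node's probability is the sum of its children's, so lobe-level and lobule-level predictions can never disagree. The tiny tree below is illustrative; the real cerebellar hierarchy has many more nodes.

```python
# Toy hierarchy: cerebellum -> {anterior lobe, posterior lobe};
# posterior lobe -> {lobule VI, lobule VII}. A tiny illustrative subset
# of the real cerebellar label tree.
tree = {"cerebellum": ["anterior", "posterior"],
        "posterior": ["lobule_VI", "lobule_VII"]}

def aggregate(leaf_probs, tree):
    """Fill in internal-node probabilities as sums of their children, so
    coarse (lobe-level) labels stay consistent with fine (lobule) ones."""
    probs = dict(leaf_probs)
    def resolve(node):
        if node not in probs:
            probs[node] = sum(resolve(c) for c in tree[node])
        return probs[node]
    for root in tree:
        resolve(root)
    return probs

# Per-voxel leaf probabilities from some classifier (made-up values).
leaf = {"anterior": 0.2, "lobule_VI": 0.3, "lobule_VII": 0.1}
probs = aggregate(leaf, tree)
```

This also hints at how heterogeneous training labels can coexist: a sample annotated only to lobe depth supervises the internal nodes, while a fully lobule-labeled sample supervises the leaves.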
38. He Y, Carass A, Liu Y, Jedynak BM, Solomon SD, Saidha S, Calabresi PA, Prince JL. Deep learning based topology guaranteed surface and MME segmentation of multiple sclerosis subjects from retinal OCT. Biomed Opt Express 2019; 10:5042-5058. PMID: 31646029; PMCID: PMC6788619; DOI: 10.1364/boe.10.005042.
Abstract
Optical coherence tomography (OCT) is a noninvasive imaging modality that can be used to obtain depth images of the retina. Patients with multiple sclerosis (MS) have thinning retinal nerve fiber and ganglion cell layers, and approximately 5% of MS patients will develop microcystic macular edema (MME) within the retina. Segmentation of both the retinal layers and MME can provide important information to help monitor MS progression. Graph-based segmentation with machine learning preprocessing is the leading method for retinal layer segmentation, providing accurate surface delineations with the correct topological ordering. However, graph methods are time-consuming and they do not optimally incorporate joint MME segmentation. This paper presents a deep network that extracts continuous, smooth, and topology-guaranteed surfaces and MMEs. The network learns shape priors automatically during training rather than being hard-coded as in graph methods. In this new approach, retinal surfaces and MMEs are segmented together with two cascaded deep networks in a single feed forward propagation. The proposed framework obtains retinal surfaces (separating the layers) with sub-pixel surface accuracy comparable to the best existing graph methods and MMEs with better accuracy than the state-of-the-art method. The full segmentation operation takes only ten seconds for a 3D volume.
39. Shao M, Han S, Carass A, Li X, Blitz AM, Shin J, Prince JL, Ellingsen LM. Brain ventricle parcellation using a deep neural network: Application to patients with ventriculomegaly. Neuroimage Clin 2019; 23:101871. PMID: 31174103; PMCID: PMC6551563; DOI: 10.1016/j.nicl.2019.101871.
Abstract
Numerous brain disorders are associated with ventriculomegaly, including both neurodegenerative diseases and cerebrospinal fluid disorders. Detailed evaluation of the ventricular system is important in these conditions to help understand the pathogenesis of ventricular enlargement and to elucidate novel patterns of ventriculomegaly that can be associated with different diseases. One such disease is normal pressure hydrocephalus (NPH), a chronic form of hydrocephalus in older adults that causes dementia. Automatic parcellation of the ventricular system into its sub-compartments in patients with ventriculomegaly is quite challenging due to the large variation in ventricle shape and size. Conventional brain labeling methods are time-consuming and often fail to identify the boundaries of the enlarged ventricles. We propose a modified 3D U-Net to perform accurate ventricular parcellation, even with grossly enlarged ventricles, from magnetic resonance images (MRIs). We validated our method on a data set of healthy controls as well as a cohort of 95 NPH patients with mild to severe ventriculomegaly, and compared it with several state-of-the-art segmentation methods. On the healthy data set, the proposed network achieved a mean Dice similarity coefficient (DSC) of 0.895 ± 0.03 for the ventricular system. On the NPH data set, we achieved a mean DSC of 0.973 ± 0.02, which is significantly (p < 0.005) higher than the four state-of-the-art segmentation methods we compared with. Furthermore, the typical processing time of a CPU-based implementation of the proposed method is 2 minutes, far less than the several hours required by the other methods.
Results indicate that our method provides: 1) highly robust parcellation of the ventricular system that is comparable in accuracy to state-of-the-art methods on healthy controls; 2) greater robustness and significantly more accurate results on cases of ventricular enlargement; and 3) a tool that enables computation of novel imaging biomarkers for dilated ventricular spaces that characterize the ventricular system.
40. Reinhold JC, Dewey BE, Carass A, Prince JL. Evaluating the Impact of Intensity Normalization on MR Image Synthesis. Proc SPIE Int Soc Opt Eng 2019; 10949. PMID: 31551645; DOI: 10.1117/12.2513089.
Abstract
Image synthesis learns a transformation from the intensity features of an input image to yield a different tissue contrast in the output image. This process has been shown to have application in many medical image analysis tasks, including imputation, registration, and segmentation. To carry out synthesis, the intensities of the input images are typically scaled (i.e., normalized), both in training, to learn the transformation, and in testing, when applying the transformation; however, it is not presently known what type of input scaling is optimal. In this paper, we consider seven different intensity normalization algorithms and three different synthesis methods to evaluate the impact of normalization. Our experiments demonstrate that intensity normalization as a preprocessing step improves the synthesis results across all investigated synthesis algorithms. Furthermore, we show evidence that suggests intensity normalization is vital for successful deep learning-based MR image synthesis.
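Two of the normalization families the paper's category covers can be sketched in a few lines. Both implementations below are simplified stand-ins (the foreground mask, bin count, and the assumption that the dominant histogram peak is a reference tissue are all illustrative, not the evaluated algorithms' exact definitions):

```python
import numpy as np

def zscore_normalize(img, mask=None):
    """Z-score normalization over a (crude) foreground mask."""
    m = mask if mask is not None else img > img.mean()
    return (img - img[m].mean()) / img[m].std()

def wm_peak_normalize(img, mask=None):
    """WhiteStripe-like normalization, heavily simplified: divide by the
    intensity of the dominant foreground histogram peak, assumed here to
    correspond to a reference tissue such as white matter."""
    m = mask if mask is not None else img > img.mean()
    hist, edges = np.histogram(img[m], bins=100)
    peak = 0.5 * (edges[:-1] + edges[1:])[np.argmax(hist)]
    return img / peak

rng = np.random.default_rng(0)
base = rng.normal(100.0, 10.0, (64, 64))   # toy "tissue" intensities
scan_a = 1.0 * base                        # scanner A
scan_b = 3.0 * base + 50.0                 # scanner B: rescaled contrast
```

Z-scoring maps the two simulated "scanners" onto the same intensity scale, which is why normalization before synthesis makes the learned intensity-to-intensity mapping transferable across acquisitions.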
41. Liu Y, Carass A, He Y, Antony BJ, Filippatou A, Saidha S, Solomon SD, Calabresi PA, Prince JL. Layer boundary evolution method for macular OCT layer segmentation. Biomed Opt Express 2019; 10:1064-1080. PMID: 30891330; PMCID: PMC6420297; DOI: 10.1364/boe.10.001064.
Abstract
Optical coherence tomography (OCT) is used to produce high-resolution depth images of the retina and is now the standard of care for in-vivo ophthalmological assessment. It is also increasingly used for the evaluation of neurological disorders such as multiple sclerosis (MS). Automatic segmentation methods identify the retinal layers of the macular cube, providing consistent results free of intra- and inter-rater variation, and are faster than manual segmentation. In this paper, we propose a fast multi-layer macular OCT segmentation method based on a fast level set method. Our framework uses contours in an approach optimized specifically for OCT layer segmentation over the whole macular cube. Our algorithm takes boundary probability maps from a trained random forest and iteratively refines the prediction to subvoxel precision. Evaluation on both healthy and multiple sclerosis subjects shows that our method is statistically better than a state-of-the-art graph-based method.
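How a boundary position can be refined below the voxel grid from a probability map can be illustrated with a standard parabolic peak fit. This is a generic signal-processing sketch of the subvoxel idea, not the paper's level set evolution:

```python
import numpy as np

def subvoxel_boundary(prob_column):
    """Refine a boundary position to sub-voxel precision (sketch).

    Given per-voxel boundary probabilities along one A-scan, fit a parabola
    through the peak and its two neighbours; the parabola's vertex gives a
    continuous boundary position.
    """
    k = int(np.argmax(prob_column))
    if k == 0 or k == len(prob_column) - 1:
        return float(k)                       # peak at the border: no fit
    y0, y1, y2 = prob_column[k - 1], prob_column[k], prob_column[k + 1]
    denom = y0 - 2 * y1 + y2
    offset = 0.5 * (y0 - y2) / denom if denom != 0 else 0.0
    return k + offset                         # fractional voxel position

# A probability bump centred at 4.3 voxels is recovered between voxels 4 and 5.
centre = 4.3
x = np.arange(10)
prob = np.exp(-(x - centre) ** 2)
pos = subvoxel_boundary(prob)
```

A hard argmax would return 4 exactly; the vertex of the fitted parabola lands close to the true fractional centre, which is the kind of precision an iterative contour refinement exploits.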
42. He Y, Carass A, Solomon SD, Saidha S, Calabresi PA, Prince JL. Retinal layer parcellation of optical coherence tomography images: Data resource for multiple sclerosis and healthy controls. Data Brief 2019; 22:601-604. PMID: 30671506; PMCID: PMC6327073; DOI: 10.1016/j.dib.2018.12.073.
Abstract
This paper presents optical coherence tomography (OCT) images of the human retina together with manual delineations of eight retinal layers. The data comprise 35 human retina scans acquired on a Spectralis OCT system (Heidelberg Engineering, Heidelberg, Germany), 14 from healthy controls (HC) and 21 from subjects with a diagnosis of multiple sclerosis (MS). The manual delineations were independently reviewed and edited. The data presented in this article were used to validate automatic segmentation algorithms (Lang et al., 2013).
|
43
|
Han S, He Y, Carass A, Ying SH, Prince JL. Cerebellum Parcellation with Convolutional Neural Networks. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2019; 10949:109490K. [PMID: 32394999 PMCID: PMC7211767 DOI: 10.1117/12.2512119] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
To better understand cerebellum-related diseases and functional mapping of the cerebellum, quantitative measurements of cerebellar regions in magnetic resonance (MR) images have been studied in both clinical and neurological studies. Such studies have revealed that different spinocerebellar ataxia (SCA) subtypes have different patterns of cerebellar atrophy and that atrophy of different cerebellar regions is correlated with specific functional losses. Previous methods to automatically parcellate the cerebellum (that is, to identify its sub-regions) have been largely based on multi-atlas segmentation. Recently, deep convolutional neural network (CNN) algorithms have been shown to have high speed and accuracy in cerebral sub-cortical structure segmentation from MR images. In this work, two three-dimensional CNNs were used to parcellate the cerebellum into 28 regions. First, a locating network was used to predict a bounding box around the cerebellum. Second, a parcellating network was used to parcellate the cerebellum using the entire region within the bounding box. A leave-one-out cross validation of fifteen manually delineated images was performed. Compared with a previously reported state-of-the-art algorithm, the proposed algorithm shows superior Dice coefficients. The proposed algorithm was further applied to three MR images of a healthy subject and subjects with SCA6 and SCA8, respectively. A Singularity container of this algorithm is publicly available.
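The two-stage design above (locate, then parcellate within a bounding box) can be sketched in a few lines. Here the locating network's output is stood in for by a coarse binary mask; `bounding_box` is a hypothetical helper for illustration, not the authors' code.

```python
import numpy as np

def bounding_box(mask, margin=4):
    """Bounding box around a structure from a coarse binary mask
    (a stand-in for the locating network), padded by a safety margin
    and clipped to the volume extent."""
    coords = np.argwhere(mask)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + 1 + margin, mask.shape)
    return tuple(slice(a, b) for a, b in zip(lo, hi))

# Usage: crop the full volume so the parcellating network only sees
# the region of interest.
# cerebellum_crop = volume[bounding_box(coarse_mask)]
```

Restricting the second network to the cropped region keeps its input small and its receptive field focused on the cerebellum.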
|
44
|
Ghanem AM, Hamimi AH, Matta JR, Carass A, Elgarf RM, Gharib AM, Abd-Elmoniem KZ. Automatic Coronary Wall and Atherosclerotic Plaque Segmentation from 3D Coronary CT Angiography. Sci Rep 2019; 9:47. [PMID: 30631101 PMCID: PMC6328572 DOI: 10.1038/s41598-018-37168-4] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2018] [Accepted: 11/25/2018] [Indexed: 12/11/2022] Open
Abstract
Coronary plaque burden measured by coronary computed tomography angiography (CCTA), independent of stenosis, is a significant independent predictor of coronary heart disease (CHD) events and mortality. Hence, it is essential to develop comprehensive CCTA plaque quantification beyond existing subjective plaque volume or stenosis scoring methods. The purpose of this study is to develop a framework for automated 3D segmentation of the CCTA vessel wall and quantification of atherosclerotic plaque, independent of the amount of stenosis, while overcoming challenges caused by poor contrast, motion artifacts, severe stenosis, and degradation of image quality. Vesselness, region growing, and two sequential level sets are employed to segment the inner and outer wall and prevent artifact-defective segmentation. Lumen and vessel boundaries are joined to create the coronary wall. Curved multiplanar reformation is used to straighten the segmented lumen and wall using the lumen centerline. In-vivo evaluation included CCTA stenotic and non-stenotic plaques from 41 asymptomatic subjects, with 122 plaques of different characteristics, assessed against individual and consensus readings by expert readers. The results demonstrate that the framework performed robustly, providing a reliable working platform for accelerated, objective, and reproducible atherosclerotic plaque characterization beyond subjective assessment of stenosis, and is potentially applicable for monitoring response to therapy.
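Region growing, one building block of the pipeline above, can be sketched as a 6-connected flood fill that accepts voxels whose intensity stays within a tolerance of the seed. This is a generic illustration with an assumed name (`region_grow`), not the study's implementation.

```python
from collections import deque

import numpy as np

def region_grow(img, seed, tol):
    """Flood-fill region growing: starting from `seed`, accept
    face-connected voxels whose intensity is within `tol` of the
    seed's intensity."""
    out = np.zeros(img.shape, dtype=bool)
    ref = img[seed]
    out[seed] = True
    queue = deque([seed])
    while queue:
        p = queue.popleft()
        for ax in range(img.ndim):
            for d in (-1, 1):
                n = list(p)
                n[ax] += d
                n = tuple(n)
                if (all(0 <= n[i] < img.shape[i] for i in range(img.ndim))
                        and not out[n] and abs(img[n] - ref) <= tol):
                    out[n] = True
                    queue.append(n)
    return out
```

In the full pipeline such a region would only serve as an initialization, with the level sets providing the refined inner and outer wall surfaces.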
|
45
|
Maier-Hein L, Eisenmann M, Reinke A, Onogur S, Stankovic M, Scholz P, Arbel T, Bogunovic H, Bradley AP, Carass A, Feldmann C, Frangi AF, Full PM, van Ginneken B, Hanbury A, Honauer K, Kozubek M, Landman BA, März K, Maier O, Maier-Hein K, Menze BH, Müller H, Neher PF, Niessen W, Rajpoot N, Sharp GC, Sirinukunwattana K, Speidel S, Stock C, Stoyanov D, Taha AA, van der Sommen F, Wang CW, Weber MA, Zheng G, Jannin P, Kopp-Schneider A. Why rankings of biomedical image analysis competitions should be interpreted with care. Nat Commun 2018; 9:5217. [PMID: 30523263 PMCID: PMC6284017 DOI: 10.1038/s41467-018-07619-7] [Citation(s) in RCA: 143] [Impact Index Per Article: 23.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2018] [Accepted: 11/07/2018] [Indexed: 11/08/2022] Open
Abstract
International challenges have become the standard for validation of biomedical image analysis methods. Given their scientific impact, it is surprising that a critical analysis of common practices related to the organization of challenges has not yet been performed. In this paper, we present a comprehensive analysis of biomedical image analysis challenges conducted up to now. We demonstrate the importance of challenges and show that the lack of quality control has critical consequences. First, reproducibility and interpretation of the results are often hampered, as only a fraction of the relevant information is typically provided. Second, the rank of an algorithm is generally not robust to a number of variables, such as the test data used for validation, the ranking scheme applied, and the observers who make the reference annotations. To overcome these problems, we recommend best practice guidelines and define open research questions to be addressed in the future.
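The sensitivity of a challenge ranking to the choice of test cases can be probed with a simple bootstrap over cases, in the spirit of the robustness analysis above. `rank_stability` is a hypothetical helper for illustration, not the paper's methodology.

```python
import numpy as np

def rank_stability(scores, n_boot=1000, seed=0):
    """Bootstrap check of ranking robustness. `scores` is an
    (algorithms x test_cases) array of per-case metric values; returns
    the fraction of bootstrap resamples of the test cases in which the
    original winner (highest mean score) remains first."""
    rng = np.random.default_rng(seed)
    winner = int(np.argmax(scores.mean(axis=1)))
    n_cases = scores.shape[1]
    hits = 0
    for _ in range(n_boot):
        idx = rng.integers(0, n_cases, n_cases)  # resample cases with replacement
        hits += int(np.argmax(scores[:, idx].mean(axis=1)) == winner)
    return hits / n_boot
```

A value well below 1.0 signals that the reported winner depends heavily on which test cases happened to be included.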
|
46
|
Carass A, Cuzzocreo JL, Han S, Hernandez-Castillo CR, Rasser PE, Ganz M, Beliveau V, Dolz J, Ben Ayed I, Desrosiers C, Thyreau B, Romero JE, Coupé P, Manjón JV, Fonov VS, Collins DL, Ying SH, Onyike CU, Crocetti D, Landman BA, Mostofsky SH, Thompson PM, Prince JL. Comparing fully automated state-of-the-art cerebellum parcellation from magnetic resonance images. Neuroimage 2018; 183:150-172. [PMID: 30099076 PMCID: PMC6271471 DOI: 10.1016/j.neuroimage.2018.08.003] [Citation(s) in RCA: 55] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2018] [Revised: 08/03/2018] [Accepted: 08/03/2018] [Indexed: 01/26/2023] Open
Abstract
The human cerebellum plays an essential role in motor control, is involved in cognitive function (i.e., attention, working memory, and language), and helps to regulate emotional responses. Quantitative in-vivo assessment of the cerebellum is important in the study of several neurological diseases including cerebellar ataxia, autism, and schizophrenia. Different structural subdivisions of the cerebellum have been shown to correlate with differing pathologies. To further understand these pathologies, it is helpful to automatically parcellate the cerebellum at the highest fidelity possible. In this paper, we coordinated with colleagues around the world to evaluate automated cerebellum parcellation algorithms on two clinical cohorts, showing that the cerebellum can be parcellated with high accuracy by newer methods. We characterize these various methods at four hierarchical levels: coarse (i.e., whole cerebellum and gross structures), lobe, subdivisions of the vermis, and the lobules. Due to the number of labels, the hierarchy of labels, the number of algorithms, and the two cohorts, we have restricted our analyses to the Dice measure of overlap. Under these conditions, machine learning based methods provide a collection of strategies that are efficient and deliver parcellations of a high standard across both cohorts, surpassing previous work in the area. In conjunction with the rank-sum computation, we identified an overall winning method.
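For reference, the Dice measure of overlap used for all of the analyses above is straightforward to compute from two binary label masks; this is the standard definition, not code from the paper.

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|).
    Returns 1.0 for two empty masks by convention."""
    a = np.asarray(a, bool)
    b = np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0
```

Per-label Dice values are computed by thresholding each parcellation label against the manual delineation and averaging across subjects.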
|
47
|
Shao M, Han S, Carass A, Li X, Blitz AM, Prince JL, Ellingsen LM. Shortcomings of Ventricle Segmentation Using Deep Convolutional Networks. ACTA ACUST UNITED AC 2018; 11038:79-86. [PMID: 33094293 DOI: 10.1007/978-3-030-02628-8_9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/08/2023]
Abstract
Normal Pressure Hydrocephalus (NPH) is a brain disorder that can present with ventriculomegaly and dementia-like symptoms, which often can be reversed through surgery. Accurate segmentation of the ventricular system into its sub-compartments from magnetic resonance images (MRI) would help to better characterize the condition of NPH patients. Previous segmentation algorithms require long processing times and often fail to accurately segment the severely enlarged ventricles of NPH patients. Recently, deep convolutional neural network (CNN) methods have been reported to be fast and accurate on medical image segmentation tasks. In this paper, we present a 3D U-Net-based CNN to segment the ventricular system in MRI. We trained three networks on different data sets and compared their performances. The networks trained on healthy controls (HC) failed on patients with NPH pathology, even on patients with normal-appearing ventricles. The network trained on images from both HC and NPH patients provided superior performance compared with state-of-the-art methods when evaluated on images from both data sets.
|
48
|
Zhao C, Carass A, Dewey BE, Woo J, Oh J, Calabresi PA, Reich DS, Sati P, Pham DL, Prince JL. A Deep Learning Based Anti-aliasing Self Super-resolution Algorithm for MRI. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2018; 11070:100-108. [PMID: 38013916 PMCID: PMC10679927 DOI: 10.1007/978-3-030-00928-1_12] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/29/2023]
Abstract
High resolution magnetic resonance (MR) images are desired in many clinical applications, yet acquiring such data with an adequate signal-to-noise ratio requires a long time, making the scans costly and susceptible to motion artifacts. A common way to partly achieve this goal is to acquire MR images with good in-plane resolution and poor through-plane resolution (i.e., large slice thickness). For such 2D imaging protocols, aliasing is also introduced in the through-plane direction, and these high-frequency artifacts cannot be removed by conventional interpolation. Super-resolution (SR) algorithms that can reduce aliasing artifacts and improve spatial resolution have previously been reported. State-of-the-art SR methods are mostly learning-based and require external training data consisting of paired low resolution (LR) and high resolution (HR) MR images. However, due to scanner limitations, such training data are often unavailable. This paper presents an anti-aliasing (AA) and self super-resolution (SSR) algorithm that needs no external training data. It takes advantage of the fact that the in-plane slices of these MR images contain high-frequency information. Our algorithm consists of three steps: 1) we build a self AA (SAA) deep network, followed by 2) an SSR deep network, both of which can be applied along different orientations within the original images; and 3) we recombine the outputs of Steps 1 and 2 from the multiple orientations using Fourier burst accumulation. We apply our SAA+SSR algorithm to a diverse collection of MR data without modification or preprocessing other than N4 inhomogeneity correction, and demonstrate significant improvement compared to competing SSR methods.
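Step 3, Fourier burst accumulation, merges the reconstructions from the different orientations by weighting each one's Fourier coefficients by their magnitude raised to a power p, so whichever orientation best preserved a given frequency dominates there. A minimal sketch, assuming the standard formulation of Fourier burst accumulation rather than the authors' code:

```python
import numpy as np

def fourier_burst_accumulation(volumes, p=2):
    """Merge several reconstructions of the same volume: weight each
    volume's Fourier coefficients by |F|^p and take the weighted
    average per frequency, then invert the transform."""
    specs = [np.fft.fftn(v) for v in volumes]
    mags = [np.abs(s) ** p for s in specs]
    total = np.sum(mags, axis=0)
    total[total == 0] = 1.0  # avoid division by zero at empty frequencies
    merged = sum(s * m for s, m in zip(specs, mags)) / total
    return np.real(np.fft.ifftn(merged))
```

Larger p makes the merge closer to a per-frequency maximum; p = 0 would reduce it to a plain average.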
|
49
|
Wongsripuemtet J, Tyan AE, Carass A, Agarwal S, Gujar SK, Pillai JJ, Sair HI. Preoperative Mapping of the Supplementary Motor Area in Patients with Brain Tumor Using Resting-State fMRI with Seed-Based Analysis. AJNR Am J Neuroradiol 2018; 39:1493-1498. [PMID: 30002054 DOI: 10.3174/ajnr.a5709] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2018] [Accepted: 05/08/2018] [Indexed: 11/07/2022]
Abstract
BACKGROUND AND PURPOSE The supplementary motor area can be a critical region in the preoperative planning of patients undergoing brain tumor resection because it plays a role in both language and motor function. While primary motor regions have been successfully identified using resting-state fMRI, there is variability in the literature regarding the identification of the supplementary motor area for preoperative planning. The purpose of our study was to compare resting-state fMRI to task-based fMRI for localization of the supplementary motor area in a large cohort of patients with brain tumors presenting for preoperative brain mapping. MATERIALS AND METHODS Sixty-six patients with brain tumors were evaluated with resting-state fMRI using seed-based analysis of hand and orofacial motor regions. Rates of supplementary motor area localization were compared with those in healthy controls and with localization results by task-based fMRI. RESULTS Localization of the supplementary motor area using hand motor seed regions was more effective than seeding using orofacial motor regions for both patients with brain tumor (95.5% versus 34.8%, P < .001) and controls (95.2% versus 45.2%, P < .001). Bilateral hand motor seeding was superior to unilateral hand motor seeding in patients with brain tumor for either side (95.5% versus 75.8%/75.8% for right/left, P < .001). No difference was found in the ability to identify the supplementary motor area between patients with brain tumors and controls. CONCLUSIONS In addition to task-based fMRI, seed-based analysis of resting-state fMRI represents an equally effective method for supplementary motor area localization in patients with brain tumors, with the best results obtained with bilateral hand motor region seeding.
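Seed-based analysis as used above reduces to correlating every voxel's time course with the seed region's time course. A minimal sketch of that core computation (`seed_correlation` is an assumed name, not the study's pipeline, which would also include preprocessing and nuisance regression):

```python
import numpy as np

def seed_correlation(timeseries, seed_ts):
    """Pearson correlation of every voxel's time course (rows of
    `timeseries`, shape voxels x time) with a seed time course."""
    ts = timeseries - timeseries.mean(axis=1, keepdims=True)
    s = seed_ts - seed_ts.mean()
    num = ts @ s
    denom = np.sqrt((ts ** 2).sum(axis=1) * (s ** 2).sum())
    denom = np.where(denom == 0, 1.0, denom)  # flat voxels -> correlation 0
    return num / denom
```

Thresholding the resulting correlation map then yields the putative supplementary motor area cluster for a given (hand or orofacial, unilateral or bilateral) seed.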
|
50
|
Liu Y, Carass A, Solomon SD, Saidha S, Calabresi PA, Prince JL. Multi-layer Fast Level Set Segmentation for Macular OCT. PROCEEDINGS. IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING 2018; 2018:1445-1448. [PMID: 31853331 PMCID: PMC6919647 DOI: 10.1109/isbi.2018.8363844] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Segmenting optical coherence tomography (OCT) images of the retina is important in the diagnosis, staging, and tracking of ophthalmological diseases. Whereas automatic segmentation methods are typically much faster than manual segmentation, they may still take several minutes to segment a three-dimensional macular scan, and this can be prohibitive for routine clinical application. In this paper, we propose a fast, multi-layer macular OCT segmentation method based on a fast level set method. In our framework, the boundary evolution operations are computationally fast, are specific to each boundary between retinal layers, guarantee proper layer ordering, and avoid level set computation during evolution. Subvoxel resolution is achieved by reconstructing the level set functions after convergence. Experiments demonstrate that our method reduces the computation expense by 90% compared to graph-based methods and produces comparable accuracy to both graph-based and level set retinal OCT segmentation methods.
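The layer-ordering guarantee mentioned above can be enforced, in its simplest possible form, by making successive boundary surfaces monotone in depth. This cumulative-maximum sketch is only an illustration of the ordering constraint, not the paper's boundary evolution operators:

```python
import numpy as np

def enforce_layer_order(boundaries):
    """Enforce anatomical ordering of retinal layer boundaries.
    Each row holds one boundary's depth per A-scan; successive
    boundaries must not cross, so each is pushed at least as deep
    as the one above it."""
    return np.maximum.accumulate(boundaries, axis=0)
```

Applying such a constraint after every update step keeps the layers properly nested throughout the evolution.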
|