1
Himthani N, Brunn M, Kim JY, Schulte M, Mang A, Biros G. CLAIRE: Parallelized Diffeomorphic Image Registration for Large-Scale Biomedical Imaging Applications. J Imaging 2022;8(9):251. [PMID: 36135416] [PMCID: PMC9501197] [DOI: 10.3390/jimaging8090251] [Received: 07/30/2022] [Revised: 08/31/2022] [Accepted: 09/06/2022]
Abstract
We study the performance of CLAIRE—a diffeomorphic multi-node, multi-GPU image-registration algorithm and software—in large-scale biomedical imaging applications with billions of voxels. At such resolutions, most existing software packages for diffeomorphic image registration are prohibitively expensive. As a result, practitioners first significantly downsample the original images and then register them using existing tools. Our main contribution is an extensive analysis of the impact of downsampling on registration performance. We study this impact by comparing full-resolution registrations obtained with CLAIRE to lower-resolution registrations for synthetic and real-world imaging datasets. Our results suggest that registration at full resolution can yield superior registration quality—but not always. For example, downsampling a synthetic image from 1024³ to 256³ decreases the Dice coefficient from 92% to 79%. However, the differences are less pronounced for noisy or low-contrast high-resolution images. CLAIRE allows us not only to register images of clinically relevant size in a few seconds but also to register images at unprecedented resolution in reasonable time. The highest resolution considered corresponds to CLARITY images of size 2816×3016×1162 voxels. To the best of our knowledge, this is the first study of image registration quality at such resolutions.
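The downsampling effect this abstract reports is easy to demonstrate in miniature. The sketch below is plain NumPy, not CLAIRE; the `dice` and `downsample` helpers are hypothetical stand-ins for the segmentation-overlap step of a registration pipeline. It computes the Dice coefficient of a synthetic 3D label before and after a downsample/upsample round trip:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks: 2|A∩B| / (|A|+|B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def downsample(mask, factor):
    """Naive block subsampling of a 3D mask by an integer factor."""
    return mask[::factor, ::factor, ::factor]

# Toy example: a spherical label, processed "at low resolution".
n = 64
z, y, x = np.ogrid[:n, :n, :n]
full = (z - n / 2) ** 2 + (y - n / 2) ** 2 + (x - n / 2) ** 2 < (n / 3) ** 2

low = downsample(full, 4)                    # work at 16^3 instead of 64^3
up = np.repeat(np.repeat(np.repeat(low, 4, 0), 4, 1), 4, 2)  # back to 64^3

print(f"Dice after a 4x downsample round trip: {dice(full, up):.3f}")
```

The round-trip Dice falls below 1.0 purely because of resolution loss at the label boundary, which is the mechanism behind the 92% to 79% drop reported above (the exact numbers here come from the toy shapes, not the paper).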
Affiliation(s)
- Naveen Himthani, Oden Institute, The University of Texas at Austin, Austin, TX 78712, USA (corresponding author)
- Malte Brunn, Institute for Parallel and Distributed Systems, University of Stuttgart, 70569 Stuttgart, Germany
- Jae-Youn Kim, Department of Mathematics, University of Houston, Houston, TX 77004, USA
- Miriam Schulte, Institute for Parallel and Distributed Systems, University of Stuttgart, 70569 Stuttgart, Germany
- Andreas Mang, Department of Mathematics, University of Houston, Houston, TX 77004, USA
- George Biros, Oden Institute, The University of Texas at Austin, Austin, TX 78712, USA
2
Váša F, Hobday H, Stanyard RA, Daws RE, Giampietro V, O'Daly O, Lythgoe DJ, Seidlitz J, Skare S, Williams SCR, Marquand AF, Leech R, Cole JH. Rapid processing and quantitative evaluation of structural brain scans for adaptive multimodal imaging. Hum Brain Mapp 2022;43:1749-1765. [PMID: 34953014] [PMCID: PMC8886661] [DOI: 10.1002/hbm.25755] [Received: 07/09/2021] [Revised: 11/02/2021] [Accepted: 11/21/2021]
Abstract
Current neuroimaging acquisition and processing approaches tend to be optimised for quality rather than speed. However, rapid acquisition and processing of neuroimaging data can enable novel neuroimaging paradigms, such as adaptive acquisition, where rapidly processed data are used to inform subsequent image acquisition steps. Here we first evaluate the impact of several processing steps on the processing time and quality of registration of manually labelled T1-weighted MRI scans. Subsequently, we apply the selected rapid processing pipeline both to rapidly acquired multicontrast EPImix scans of 95 participants (which include T1-FLAIR, T2, T2*, T2-FLAIR, DWI and ADC contrasts, acquired in ~1 min), as well as to slower, more standard single-contrast T1-weighted scans of a subset of 66 participants. We quantify the correspondence between EPImix T1-FLAIR and single-contrast T1-weighted scans, using correlations between voxels and regions of interest across participants, measures of within- and between-participant identifiability, as well as regional structural covariance networks. Furthermore, we explore the use of EPImix for the rapid construction of morphometric similarity networks. Finally, we quantify the reliability of EPImix-derived data using test-retest scans of 10 participants. Our results demonstrate that quantitative information can be derived from a neuroimaging scan acquired and processed within minutes, which could further be used to implement adaptive multimodal imaging and tailor neuroimaging examinations to individual patients.
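The within-/between-participant identifiability measure mentioned above can be sketched with synthetic data. Everything below is a hypothetical illustration (array names, noise level), not the authors' pipeline: a participant counts as identified when their rapid scan correlates best with their own standard scan.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sub, n_reg = 20, 100

# Hypothetical regional measures: a "standard" T1-weighted scan per
# participant, and a noisier rapid (EPImix-like) counterpart.
t1w = rng.normal(size=(n_sub, n_reg))
rapid = t1w + 0.3 * rng.normal(size=(n_sub, n_reg))

# Cross-modality correlations: np.corrcoef stacks the two inputs row-wise,
# so the upper-right block holds r(rapid_i, t1w_j).
corr = np.corrcoef(rapid, t1w)[:n_sub, n_sub:]

# A participant is "identified" when their own cross-modality pair
# correlates more strongly than any pairing with another participant.
ident_rate = np.mean(corr.argmax(axis=1) == np.arange(n_sub))
print(f"identification rate: {ident_rate:.2f}")
```

With many regions and moderate noise, self-pairs dominate and the identification rate approaches 1; it degrades as the rapid scan gets noisier or the number of regions shrinks.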
Affiliation(s)
- František Váša, Department of Neuroimaging, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, UK
- Harriet Hobday, Department of Neuroimaging, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, UK
- Ryan A. Stanyard, Department of Neuroimaging, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, UK; Department of Forensic & Developmental Sciences, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, UK
- Richard E. Daws, The Computational, Cognitive and Clinical Neuroimaging Laboratory, Department of Brain Sciences, Imperial College London, London, UK
- Vincent Giampietro, Department of Neuroimaging, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, UK
- Owen O'Daly, Department of Neuroimaging, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, UK
- David J. Lythgoe, Department of Neuroimaging, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, UK
- Jakob Seidlitz, Department of Child and Adolescent Psychiatry and Behavioral Science, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania, USA; Department of Psychiatry, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Stefan Skare, Department of Neuroradiology, Karolinska University Hospital, Stockholm, Sweden; Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Steven C. R. Williams, Department of Neuroimaging, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, UK
- Andre F. Marquand, Department of Neuroimaging, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, UK; Donders Institute for Brain, Cognition and Behavior, Radboud University Nijmegen, Nijmegen, The Netherlands; Department for Cognitive Neuroscience, Radboud University Medical Center Nijmegen, Nijmegen, The Netherlands
- Robert Leech, Department of Neuroimaging, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, UK
- James H. Cole, Department of Computer Science, Centre for Medical Image Computing, University College London, London, UK; Dementia Research Centre, Institute of Neurology, University College London, London, UK
3
Wang H, Huang Z, Zhang Q, Gao D, OuYang Z, Liang D, Liu X, Yang Y, Zheng H, Hu Z. Technical note: A preliminary study of dual-tracer PET image reconstruction guided by FDG and/or MR kernels. Med Phys 2021;48:5259-5271. [PMID: 34252216] [DOI: 10.1002/mp.15089] [Received: 11/19/2020] [Revised: 06/22/2021] [Accepted: 06/23/2021]
Abstract
PURPOSE: Clinically, single-radiotracer positron emission tomography (PET) imaging is a commonly used examination method; however, since each radioactive tracer reflects the information of only one kind of cell, it easily causes false negatives or false positives in disease diagnosis. Therefore, reasonably combining two or more radiotracers is recommended, when conditions permit, to improve the accuracy, sensitivity, and specificity of diagnosis.
METHODS: This paper proposes incorporating ¹⁸F-fluorodeoxyglucose (FDG) as a higher-quality PET image to guide the reconstruction of other lower-count ¹¹C-methionine (MET) PET datasets, compensating for the lower image quality via a popular kernel algorithm. Specifically, the FDG prior is used to extract kernel features, and these features are used to build a kernel matrix via a k-nearest-neighbor (kNN) search for MET image reconstruction. We created a 2-D brain phantom to validate the proposed method by simulating sinogram data containing Poisson random noise, and quantitatively compared the performance of the proposed FDG-guided kernelized expectation maximization (KEM) method with that of Gaussian- and non-local-means (NLM)-smoothed maximum likelihood expectation maximization (MLEM), MR-guided KEM, and multi-guided-S KEM algorithms. Mismatch experiments between FDG/MR and MET data were also carried out to investigate the outcomes of possible clinical situations.
RESULTS: In the simulation study, the proposed method outperformed the other algorithms by at least 3.11% in the signal-to-noise ratio (SNR) and 0.68% in the contrast recovery coefficient (CRC), and it reduced the mean absolute error (MAE) by 8.07%. Regarding the tumor in the reconstructed image, the proposed method retained more pathological information. Furthermore, the proposed method remained superior to the MR-guided KEM method in the mismatch experiments.
CONCLUSIONS: The proposed FDG-guided KEM algorithm can effectively utilize and compensate for the tissue metabolism information obtained from dual-tracer PET to maximize the advantages of PET imaging.
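The kernel construction described above (features from a guide image, a kNN search, then kernelized EM) can be sketched on a toy problem. Everything below is a simplified illustration, not the authors' implementation: real KEM typically extracts patch features from the FDG prior, and `A` here is a random stand-in for the PET system matrix.

```python
import numpy as np

def knn_kernel_matrix(prior, k=4, sigma=1.0):
    """Row-normalised kernel matrix from a guide image: each voxel keeps
    Gaussian weights to its k nearest neighbours in feature space."""
    feats = prior.reshape(-1, 1).astype(float)   # toy: 1 feature per voxel
    d2 = (feats - feats.T) ** 2                  # pairwise squared distances
    n = feats.shape[0]
    K = np.zeros((n, n))
    for i in range(n):
        nbr = np.argsort(d2[i])[:k]              # k nearest (includes self)
        K[i, nbr] = np.exp(-d2[i, nbr] / (2 * sigma ** 2))
    return K / K.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
prior = rng.random(16)                 # 16-voxel "FDG" guide image
K = knn_kernel_matrix(prior)
A = rng.random((24, 16))               # hypothetical system (projection) matrix
x_true = prior + 0.1
y = rng.poisson(A @ x_true)            # Poisson-noisy sinogram counts

# Kernelized EM: represent the image as x = K @ alpha and apply the
# multiplicative MLEM update to the coefficients alpha.
alpha = np.ones(16)
for _ in range(50):
    ratio = y / np.maximum(A @ (K @ alpha), 1e-12)
    alpha *= (K.T @ (A.T @ ratio)) / (K.T @ (A.T @ np.ones(24)))
x_hat = K @ alpha
print("reconstruction MAE:", np.abs(x_hat - x_true).mean())
```

Because every column of the estimate is a mixture of kernel weights derived from the high-count prior, the reconstruction inherits the prior's smoothness structure while the data term still comes from the low-count tracer.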
Affiliation(s)
- Haiyan Wang, Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Beijing, China
- Zhenxing Huang, Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qiyang Zhang, Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Dongfang Gao, Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Zhanglei OuYang, Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Dong Liang, Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China
- Xin Liu, Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China
- Yongfeng Yang, Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China
- Hairong Zheng, Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China
- Zhanli Hu, Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen, China
4
Young DM, Fazel Darbandi S, Schwartz G, Bonzell Z, Yuruk D, Nojima M, Gole LC, Rubenstein JL, Yu W, Sanders SJ. Constructing and optimizing 3D atlases from 2D data with application to the developing mouse brain. eLife 2021;10:e61408. [PMID: 33570495] [PMCID: PMC7994002] [DOI: 10.7554/elife.61408] [Received: 07/24/2020] [Accepted: 02/10/2021]
Abstract
3D imaging data necessitate 3D reference atlases for accurate quantitative interpretation. Existing computational methods to generate 3D atlases from 2D-derived atlases result in extensive artifacts, while manual curation approaches are labor-intensive. We present a computational approach for 3D atlas construction that substantially reduces artifacts by identifying anatomical boundaries in the underlying imaging data and using these to guide 3D transformation. Anatomical boundaries also allow extension of atlases to complete edge regions. Applying these methods to the eight developmental stages in the Allen Developing Mouse Brain Atlas (ADMBA) led to more comprehensive and accurate atlases. We generated imaging data from 15 whole mouse brains to validate atlas performance and observed qualitative and quantitative improvement (37% greater alignment between atlas and anatomical boundaries). We provide the pipeline as the MagellanMapper software and the eight 3D reconstructed ADMBA atlases. These resources facilitate whole-organ quantitative analysis between samples and across development.

The research community needs precise, reliable 3D atlases of organs to pinpoint where biological structures and processes are located. For instance, these maps are essential to understand where specific genes are turned on or off, or the spatial organization of various groups of cells over time. For centuries, atlases have been built by thinly ‘slicing up’ an organ, and then precisely representing each 2D layer. Yet this approach is imperfect: each layer may be accurate on its own, but inevitable mismatches appear between the slices when viewed in 3D or from another angle. Advances in microscopy now allow entire organs to be imaged in 3D. Comparing these images with atlases could help to detect subtle differences that indicate or underlie disease. However, this is only possible if 3D maps are accurate and do not feature mismatches between layers.

To create an atlas without such artifacts, one approach consists in starting from scratch and manually redrawing the maps in 3D, a labor-intensive method that discards a large body of well-established atlases. Instead, Young et al. set out to create an automated method which could help to refine existing ‘layer-based’ atlases, releasing software that anyone can use to improve current maps. The package was created by harnessing the eight atlases in the Allen Developing Mouse Brain Atlas, and then using the underlying anatomical images to resolve discrepancies between layers or fill out any missing areas. Known as MagellanMapper, the software was extensively tested to demonstrate the accuracy of the maps it creates, including comparison to whole-brain imaging data from 15 mouse brains. Armed with this new software, researchers can improve the accuracy of their atlases, helping them to understand the structure of organs at the level of the cell and giving them insight into a broad range of human disorders.
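The "37% greater alignment between atlas and anatomical boundaries" result implies some score of label-to-boundary agreement. The 2D sketch below is only an illustration of that idea (helper names are hypothetical; MagellanMapper works on 3D volumes with its own metrics): it scores what fraction of label-boundary pixels fall near an intensity edge in the underlying image.

```python
import numpy as np

def edge_map(img):
    """Binary edge map from finite-difference gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return mag > mag.mean() + mag.std()

def boundary_alignment(labels, image, tol=1):
    """Fraction of label-boundary pixels within `tol` pixels of an
    anatomical (intensity) edge in the underlying image."""
    lab_edges = edge_map(labels)
    img_edges = edge_map(image)
    pad = np.pad(img_edges, tol)          # dilate image edges by `tol`
    near = np.zeros_like(img_edges)
    h, w = img_edges.shape
    for dy in range(-tol, tol + 1):
        for dx in range(-tol, tol + 1):
            near |= pad[tol + dy:tol + dy + h, tol + dx:tol + dx + w]
    return np.logical_and(lab_edges, near).sum() / max(lab_edges.sum(), 1)

# Toy check: a label matching the tissue block scores higher than one
# shifted off the anatomical boundary.
img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0
good = np.zeros((64, 64), int); good[16:48, 16:48] = 1
bad = np.zeros((64, 64), int); bad[24:56, 24:56] = 1
print(boundary_alignment(good, img), boundary_alignment(bad, img))
```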
Affiliation(s)
- David M Young, Department of Psychiatry and Behavioral Sciences, UCSF Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, United States; Institute of Molecular and Cell Biology, Agency for Science, Technology and Research, Singapore
- Siavash Fazel Darbandi, Department of Psychiatry and Behavioral Sciences, UCSF Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, United States
- Grace Schwartz, Department of Psychiatry and Behavioral Sciences, UCSF Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, United States
- Zachary Bonzell, Department of Psychiatry and Behavioral Sciences, UCSF Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, United States
- Deniz Yuruk, Department of Psychiatry and Behavioral Sciences, UCSF Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, United States
- Mai Nojima, Department of Psychiatry and Behavioral Sciences, UCSF Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, United States
- Laurent C Gole, Institute of Molecular and Cell Biology, Agency for Science, Technology and Research, Singapore
- John LR Rubenstein, Department of Psychiatry and Behavioral Sciences, UCSF Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, United States
- Weimiao Yu, Institute of Molecular and Cell Biology, Agency for Science, Technology and Research, Singapore
- Stephan J Sanders, Department of Psychiatry and Behavioral Sciences, UCSF Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, United States
5
Young DM, Duhn C, Gilson M, Nojima M, Yuruk D, Kumar A, Yu W, Sanders SJ. Whole-Brain Image Analysis and Anatomical Atlas 3D Generation Using MagellanMapper. Curr Protoc Neurosci 2020;94:e104. [PMID: 32981139] [DOI: 10.1002/cpns.104]
Abstract
MagellanMapper is a software suite designed for visual inspection and end-to-end automated processing of large-volume, 3D brain imaging datasets in a memory-efficient manner. The rapidly growing number of large-volume, high-resolution datasets necessitates visualization of raw data at both macro- and microscopic levels to assess the quality of data, as well as automated processing to quantify data in an unbiased manner for comparison across a large number of samples. To facilitate these analyses, MagellanMapper provides both a graphical user interface for manual inspection and a command-line interface for automated image processing. At the macroscopic level, the graphical interface allows researchers to view full volumetric images simultaneously in each dimension and to annotate anatomical label placements. At the microscopic level, researchers can inspect regions of interest at high resolution to build ground truth data of cellular locations such as nuclei positions. Using the command-line interface, researchers can automate cell detection across volumetric images, refine anatomical atlas labels to fit underlying histology, register these atlases to sample images, and perform statistical analyses by anatomical region. MagellanMapper leverages established open-source computer vision libraries and is itself open source and freely available for download and extension. © 2020 Wiley Periodicals LLC.
Basic Protocol 1: MagellanMapper installation
Alternate Protocol: Alternative methods for MagellanMapper installation
Basic Protocol 2: Import image files into MagellanMapper
Basic Protocol 3: Region of interest visualization and annotation
Basic Protocol 4: Explore an atlas along all three dimensions and register to a sample brain
Basic Protocol 5: Automated 3D anatomical atlas construction
Basic Protocol 6: Whole-tissue cell detection and quantification by anatomical label
Support Protocol: Import a tiled microscopy image in proprietary format into MagellanMapper.
Affiliation(s)
- David M Young, Department of Psychiatry and Behavioral Sciences, UCSF Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California; Institute of Molecular and Cell Biology, Agency for Science, Technology and Research, Singapore
- Clif Duhn, Department of Psychiatry and Behavioral Sciences, UCSF Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California
- Michael Gilson, Department of Psychiatry and Behavioral Sciences, UCSF Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California
- Mai Nojima, Department of Psychiatry and Behavioral Sciences, UCSF Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California
- Deniz Yuruk, Department of Psychiatry and Behavioral Sciences, UCSF Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California
- Aparna Kumar, Department of Psychiatry and Behavioral Sciences, UCSF Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California
- Weimiao Yu, Institute of Molecular and Cell Biology, Agency for Science, Technology and Research, Singapore
- Stephan J Sanders, Department of Psychiatry and Behavioral Sciences, UCSF Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, California