1
Zhu X, Ding M, Zhang X. Free form deformation and symmetry constraint-based multi-modal brain image registration using generative adversarial nets. CAAI Trans Intell Technol 2023. [DOI: 10.1049/cit2.12159]
Affiliation(s)
- Xingxing Zhu
- Department of Biomedical Engineering, School of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan, China
- Mingyue Ding
- Department of Biomedical Engineering, School of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan, China
- Xuming Zhang
- Department of Biomedical Engineering, School of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan, China
2
Reisert M, Sajonz BEA, Brugger TS, Reinacher PC, Russe MF, Kellner E, Skibbe H, Coenen VA. Where Position Matters - Deep-Learning-Driven Normalization and Coregistration of Computed Tomography in the Postoperative Analysis of Deep Brain Stimulation. Neuromodulation 2023; 26:302-309. [PMID: 36424266] [DOI: 10.1016/j.neurom.2022.10.042]
Abstract
INTRODUCTION: Recent developments in the postoperative evaluation of deep brain stimulation surgery on the group level warrant the detection of achieved electrode positions based on postoperative imaging. Computed tomography (CT) is a frequently used imaging modality, but because of its idiosyncrasies (high spatial accuracy at low soft tissue resolution), it has not been sufficient for the parallel determination of electrode position and details of the surrounding brain anatomy (nuclei). The common solution is rigid fusion of CT images and magnetic resonance (MR) images, which have much better soft tissue contrast and allow accurate normalization into template spaces. Here, we explored a deep-learning approach to directly relate positions (usually the lead position) in postoperative CT images to the native anatomy of the midbrain and group space.
MATERIALS AND METHODS: Deep learning is used to create derived tissue contrasts (white matter, gray matter, cerebrospinal fluid, brainstem nuclei) based on the CT image; that is, a convolutional neural network (CNN) takes solely the raw CT image as input and outputs several tissue probability maps. The ground truth is based on coregistrations with MR contrasts. The tissue probability maps are then used to either rigidly coregister or normalize the CT image in a deformable way to group space. The CNN was trained on 220 patients and tested on a set of 80 patients.
RESULTS: Rigorous validation of such an approach is difficult because of the lack of ground truth. We examined the agreement between the classical and proposed approaches and considered the spread of implantation locations across a group of identically implanted subjects, which serves as an indicator of the accuracy of the lead localization procedure. The proposed procedure agrees well with current magnetic resonance imaging-based techniques, and the spread is comparable or even lower.
CONCLUSIONS: Postoperative CT imaging alone is sufficient for accurate localization of the midbrain nuclei and normalization to the group space. In the context of group analysis, it seems sufficient to have a single postoperative CT image of good quality for inclusion. The proposed approach will allow researchers and clinicians to include cases that were not previously suitable for analysis.
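The pipeline in this abstract can be sketched in miniature: a minimal Python stand-in, assuming a per-voxel classifier in place of the paper's trained CNN and 1-D intensity profiles in place of CT volumes. All prototype values and data below are invented for illustration; only the shape of the idea (raw CT intensities in, tissue probability maps out, registration driven by the maps) follows the paper.

```python
import math

# Toy stand-in for the paper's CNN: map each CT voxel intensity (in HU)
# to tissue probabilities via a softmax over hand-set class prototypes.
# The real method uses a trained convolutional network; these values are
# purely illustrative.
PROTOTYPES = {"csf": 8.0, "gm": 38.0, "wm": 28.0, "bone": 700.0}

def tissue_probs(hu, temperature=15.0):
    """Softmax over negative distances to each tissue prototype."""
    logits = {k: -abs(hu - v) / temperature for k, v in PROTOTYPES.items()}
    z = sum(math.exp(l) for l in logits.values())
    return {k: math.exp(l) / z for k, l in logits.items()}

def rigid_shift(moving, fixed, max_shift=3):
    """Integer shift minimizing SSD between gray-matter probability
    profiles -- a 1-D stand-in for rigid coregistration driven by
    derived tissue contrasts rather than raw CT intensities."""
    gm_m = [tissue_probs(hu)["gm"] for hu in moving]
    gm_f = [tissue_probs(hu)["gm"] for hu in fixed]
    best, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        pairs = [(gm_m[i], gm_f[i + s]) for i in range(len(gm_m))
                 if 0 <= i + s < len(gm_f)]
        err = sum((a - b) ** 2 for a, b in pairs) / len(pairs)
        if err < best_err:
            best, best_err = s, err
    return best

fixed = [8, 8, 38, 38, 28, 28, 700, 700]   # template-space profile
moving = fixed[2:] + [700, 700]            # same anatomy, shifted by 2
print(rigid_shift(moving, fixed))          # -> 2
```

The point of the stand-in is that once intensities are converted to tissue probabilities, the same registration machinery works regardless of the source modality's contrast.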
Affiliation(s)
- Marco Reisert
- Department of Stereotactic and Functional Neurosurgery, Medical Center of Freiburg University, Freiburg, Germany; Medical Faculty of Freiburg University, Freiburg, Germany; Department of Diagnostic and Interventional Radiology, Medical Physics, Medical Center-University of Freiburg, Freiburg, Germany.
- Bastian E A Sajonz
- Department of Stereotactic and Functional Neurosurgery, Medical Center of Freiburg University, Freiburg, Germany; Medical Faculty of Freiburg University, Freiburg, Germany
- Timo S Brugger
- Department of Stereotactic and Functional Neurosurgery, Medical Center of Freiburg University, Freiburg, Germany; Medical Faculty of Freiburg University, Freiburg, Germany
- Peter C Reinacher
- Department of Stereotactic and Functional Neurosurgery, Medical Center of Freiburg University, Freiburg, Germany; Medical Faculty of Freiburg University, Freiburg, Germany; Fraunhofer Institute for Laser Technology, Aachen, Germany
- Maximilian F Russe
- Medical Faculty of Freiburg University, Freiburg, Germany; Department of Diagnostic and Interventional Radiology, Medical Physics, Medical Center-University of Freiburg, Freiburg, Germany
- Elias Kellner
- Medical Faculty of Freiburg University, Freiburg, Germany; Department of Diagnostic and Interventional Radiology, Medical Physics, Medical Center-University of Freiburg, Freiburg, Germany
- Henrik Skibbe
- RIKEN, Center for Brain Science, Brain Image Analysis Unit, Saitama, Japan
- Volker A Coenen
- Department of Stereotactic and Functional Neurosurgery, Medical Center of Freiburg University, Freiburg, Germany; Medical Faculty of Freiburg University, Freiburg, Germany; Center for Deep Brain Stimulation, Medical Center of Freiburg University, Freiburg, Germany
3
Stolk A, Griffin S, van der Meij R, Dewar C, Saez I, Lin JJ, Piantoni G, Schoffelen JM, Knight RT, Oostenveld R. Integrated analysis of anatomical and electrophysiological human intracranial data. Nat Protoc 2019; 13:1699-1723. [PMID: 29988107] [PMCID: PMC6548463] [DOI: 10.1038/s41596-018-0009-6]
Abstract
Human intracranial electroencephalography (iEEG) recordings provide data with much greater spatiotemporal precision than is possible from data obtained using scalp EEG, magnetoencephalography (MEG), or functional MRI. Until recently, the fusion of anatomical data (MRI and computed tomography (CT) images) with electrophysiological data and their subsequent analysis have required the use of technologically and conceptually challenging combinations of software. Here, we describe a comprehensive protocol that enables complex raw human iEEG data to be converted into more readily comprehensible illustrative representations. The protocol uses an open-source toolbox for electrophysiological data analysis (FieldTrip). This allows iEEG researchers to build on a continuously growing body of scriptable and reproducible analysis methods that, over the past decade, have been developed and used by a large research community. In this protocol, we describe how to analyze complex iEEG datasets by providing an intuitive and rapid approach that can handle both neuroanatomical information and large electrophysiological datasets. We provide a worked example using an example dataset. We also explain how to automate the protocol and adjust the settings to enable analysis of iEEG datasets with other characteristics. The protocol can be implemented by a graduate student or postdoctoral fellow with minimal MATLAB experience and takes approximately an hour to execute, excluding the automated cortical surface extraction.
Affiliation(s)
- Arjen Stolk
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA; Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands
- Sandon Griffin
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA
- Roemer van der Meij
- Department of Cognitive Science, University of California, San Diego, La Jolla, CA, USA
- Callum Dewar
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA; College of Medicine, University of Illinois, Chicago, IL, USA
- Ignacio Saez
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA
- Jack J Lin
- Department of Neurology, University of California, Irvine, Irvine, CA, USA
- Giovanni Piantoni
- Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Jan-Mathijs Schoffelen
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands
- Robert T Knight
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA; Department of Psychology, University of California, Berkeley, Berkeley, CA, USA
- Robert Oostenveld
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands; NatMEG, Karolinska Institutet, Stockholm, Sweden
4
Onofrey JA, Staib LH, Papademetris X. Segmenting the Brain Surface From CT Images With Artifacts Using Locally Oriented Appearance and Dictionary Learning. IEEE Trans Med Imaging 2019; 38:596-607. [PMID: 30176584] [PMCID: PMC6476428] [DOI: 10.1109/tmi.2018.2868045]
Abstract
The accurate segmentation of the brain surface in post-surgical computed tomography (CT) images is critical for image-guided neurosurgical procedures in epilepsy patients. Following surgical implantation of intracranial electrodes, surgeons require accurate registration of the post-implantation CT images to the pre-implantation functional and structural magnetic resonance imaging to guide surgical resection of epileptic tissue. One way to perform the registration is via surface matching. The key challenge in this setup is the CT segmentation, where the extraction of the cortical surface is difficult due to the missing parts of the skull and artifacts introduced from the electrodes. In this paper, we present a dictionary learning-based method to segment the brain surface in post-surgical CT images of epilepsy patients following surgical implantation of electrodes. We propose learning a model of locally oriented appearance that captures both the normal tissue and the artifacts found along this brain surface boundary. Utilizing a database of clinical epilepsy imaging data to train and test our approach, we demonstrate that our method using locally oriented image appearance both more accurately extracts the brain surface and better localizes electrodes on the post-operative brain surface compared to standard, non-oriented appearance modeling. In addition, we compare our method to a standard atlas-based segmentation approach and to a U-Net-based deep convolutional neural network segmentation method.
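The core idea, classifying local appearance against a learned dictionary, can be sketched as follows. The hand-made 1-D "atoms" and their labels below are illustrative stand-ins for the paper's learned, locally oriented dictionary; a real implementation would learn atoms from training patches and work on oriented 2-D/3-D patches.

```python
# Classify a patch by its most correlated dictionary atom; the atom's
# label (tissue edge vs artifact) then tells us whether the patch looks
# like the brain-surface boundary or like electrode-induced artifact.

def normalize(patch):
    """Zero-mean, unit-norm version of a patch (safe for flat patches)."""
    mu = sum(patch) / len(patch)
    centered = [x - mu for x in patch]
    norm = sum(x * x for x in centered) ** 0.5 or 1.0
    return [x / norm for x in centered]

# (atom, label): a smooth tissue edge vs a bright metal-artifact spike.
# These three atoms are invented; a learned dictionary would have many.
DICTIONARY = [
    (normalize([0, 0, 1, 2, 3, 3]), "tissue_edge"),
    (normalize([0, 0, 9, 9, 0, 0]), "artifact"),
    (normalize([1, 1, 1, 1, 1, 1]), "flat"),
]

def classify(patch):
    """Label of the atom with the highest correlation to the patch."""
    p = normalize(patch)
    score = lambda atom: sum(a * b for a, b in zip(atom, p))
    return max(DICTIONARY, key=lambda d: score(d[0]))[1]

print(classify([10, 10, 11, 12, 13, 13]))  # smooth ramp -> tissue_edge
print(classify([5, 5, 80, 80, 5, 5]))      # bright spike -> artifact
```

Because artifacts get their own atoms instead of being treated as noise, patches near electrodes can still be assigned a sensible label, which is the property the paper exploits at the brain-surface boundary.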
Affiliation(s)
- John A. Onofrey
- Department of Radiology & Biomedical Imaging, Yale University, New Haven, CT, 06520, USA
- Lawrence H. Staib
- Departments of Radiology & Biomedical Imaging, Electrical Engineering, and Biomedical Engineering, Yale University, New Haven, CT, 06520, USA
- Xenophon Papademetris
- Departments of Radiology & Biomedical Imaging and Biomedical Engineering, Yale University, New Haven, CT, 06520, USA
5
PCANet-Based Structural Representation for Nonrigid Multimodal Medical Image Registration. Sensors 2018; 18:s18051477. [PMID: 29738512] [PMCID: PMC5982469] [DOI: 10.3390/s18051477]
Abstract
Nonrigid multimodal image registration remains a challenging task in medical image processing and analysis. Structural representation (SR)-based registration methods have attracted much attention recently. However, existing SR methods cannot provide satisfactory registration accuracy because they rely on hand-designed features for structural representation. To address this problem, a structural representation method based on an improved version of the simple deep learning network PCANet is proposed for medical image registration. In the proposed method, PCANet is first trained on numerous medical images to learn the convolution kernels for the network. Then, a pair of input medical images to be registered is processed by the learned PCANet. The features extracted by the various layers of the PCANet are fused to produce multilevel features, and structural representation images are constructed for the two input images by a nonlinear transformation of these multilevel features. The Euclidean distance between the structural representation images is used as the similarity metric, and the objective function it defines is optimized by the L-BFGS method to obtain the parameters of the free-form deformation (FFD) model. Extensive experiments on simulated and real multimodal image datasets show that, compared with state-of-the-art registration methods such as the modality-independent neighborhood descriptor (MIND), normalized mutual information (NMI), the Weber local descriptor (WLD), and the sum of squared differences on entropy images (ESSD), the proposed method provides better registration performance in terms of target registration error (TRE) and subjective human vision.
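The SR-registration loop (structural representation, Euclidean-distance metric, transform optimization) can be sketched in 1-D. Here a gradient-magnitude profile stands in for PCANet features, and exhaustive search over an integer translation stands in for L-BFGS over FFD parameters; all data values are invented.

```python
# 1-D sketch of structural-representation (SR) registration: convert
# both images to a contrast-insensitive representation, then find the
# transform minimizing the SSD between the representations.

def structural_rep(img):
    """Central-difference edge-strength profile; inverting the image
    contrast (a crude stand-in for changing modality) leaves the
    representation's edge locations unchanged."""
    return [abs(img[i + 1] - img[i - 1]) for i in range(1, len(img) - 1)]

def shift(img, s):
    """Integer shift with edge padding."""
    n = len(img)
    return [img[min(max(i + s, 0), n - 1)] for i in range(n)]

def ssd(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def register(moving, fixed, max_shift=4):
    """Translation minimizing SSD between structural representations."""
    sf = structural_rep(fixed)
    costs = {s: ssd(structural_rep(shift(moving, s)), sf)
             for s in range(-max_shift, max_shift + 1)}
    return min(costs, key=costs.get)

fixed  = [0, 0, 0, 0, 10, 10, 10, 0, 0, 0]  # "modality A"
moving = [9, 9, 9, 9, 9, 9, 1, 1, 1, 9]     # "modality B", offset by 2
print(register(moving, fixed))              # -> 2
```

Note that direct SSD on the raw intensities would fail here (the two "modalities" have opposite contrast); the structural representation is what makes a simple monomodal metric usable for multimodal data.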
6
Onofrey JA, Staib LH, Sarkar S, Venkataraman R, Nawaf CB, Sprenkle PC, Papademetris X. Learning Non-rigid Deformations for Robust, Constrained Point-based Registration in Image-Guided MR-TRUS Prostate Intervention. Med Image Anal 2017; 39:29-43. [PMID: 28431275] [DOI: 10.1016/j.media.2017.04.001]
Abstract
Accurate and robust non-rigid registration of pre-procedure magnetic resonance (MR) imaging to intra-procedure trans-rectal ultrasound (TRUS) is critical for image-guided biopsies of prostate cancer. Prostate cancer is one of the most prevalent forms of cancer and the second leading cause of cancer-related death in men in the United States. TRUS-guided biopsy is the current clinical standard for prostate cancer diagnosis and assessment. State-of-the-art, clinical MR-TRUS image fusion relies upon semi-automated segmentations of the prostate in both the MR and the TRUS images to perform non-rigid surface-based registration of the gland. Segmentation of the prostate in TRUS imaging is itself a challenging task and prone to high variability. These segmentation errors can lead to poor registration and subsequently poor localization of biopsy targets, which may result in false-negative cancer detection. In this paper, we present a non-rigid surface registration approach to MR-TRUS fusion based on a statistical deformation model (SDM) of intra-procedural deformations derived from clinical training data. Synthetic validation experiments quantifying registration volume of interest overlaps of the PI-RADS parcellation standard and tests using clinical landmark data demonstrate that our use of an SDM for registration, with median target registration error of 2.98 mm, is significantly more accurate than the current clinical method. Furthermore, we show that the low-dimensional SDM registration results are robust to segmentation errors that are not uncommon in clinical TRUS data.
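The statistical-deformation-model idea can be sketched in a few lines: a deformation field is written as a mean field plus a weighted sum of a few principal modes, so registration only optimizes the mode coefficients. The mean, modes, and target below are invented, and plain gradient descent stands in for the constrained optimization used in the paper; a real SDM is learned from clinical training data.

```python
# Toy statistical deformation model (SDM): low-dimensional coefficients
# reconstruct a full deformation field, which is what makes the fit
# robust to noisy or erroneous surface segmentations.

MEAN = [0.0, 0.0, 0.0, 0.0]
MODES = [
    [1.0, 1.0, 1.0, 1.0],      # mode 1: uniform shift
    [-1.0, -0.5, 0.5, 1.0],    # mode 2: stretch about the center
]

def deformation(coeffs):
    """Reconstruct a full field from the low-dimensional coefficients."""
    return [MEAN[i] + sum(c * m[i] for c, m in zip(coeffs, MODES))
            for i in range(len(MEAN))]

def fit(target, step=0.05, iters=500):
    """Least-squares fit of the mode coefficients by gradient descent."""
    coeffs = [0.0] * len(MODES)
    for _ in range(iters):
        resid = [d - t for d, t in zip(deformation(coeffs), target)]
        for k, mode in enumerate(MODES):
            grad = 2 * sum(r * m for r, m in zip(resid, mode))
            coeffs[k] -= step * grad
    return coeffs

target = deformation([0.5, -0.2])          # field from known coefficients
print([round(c, 3) for c in fit(target)])  # recovers [0.5, -0.2]
```

Because only two coefficients are free instead of one displacement per point, gross segmentation errors cannot drag the fitted field into implausible shapes, which is the robustness property the abstract reports.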
Affiliation(s)
- Lawrence H Staib
- Department of Radiology & Biomedical Imaging, USA; Department of Electrical Engineering, USA; Department of Biomedical Engineering, USA.
- Cayce B Nawaf
- Department of Urology, Yale University, New Haven, Connecticut, USA.
- Xenophon Papademetris
- Department of Radiology & Biomedical Imaging, USA; Department of Biomedical Engineering, USA.