1. Shahsavarani S, Lopez F, Ibarra-Castanedo C, Maldague XPV. Advanced Image Stitching Method for Dual-Sensor Inspection. Sensors (Basel, Switzerland) 2024; 24:3778. [PMID: 38931562] [PMCID: PMC11207425] [DOI: 10.3390/s24123778]
Abstract
Efficient image stitching plays a vital role in the Non-Destructive Evaluation (NDE) of infrastructures. An essential challenge in the NDE of infrastructures is precisely visualizing defects within large structures. The existing literature predominantly relies on high-resolution close-distance images to detect surface or subsurface defects. While the automatic detection of all defect types represents a significant advancement, understanding the location and continuity of defects is imperative. It is worth noting that some defects may be too small to capture from a considerable distance. Consequently, multiple image sequences are captured and processed using image stitching techniques. Additionally, visible and infrared data fusion strategies prove essential for acquiring comprehensive information to detect defects across vast structures. Hence, there is a need for an effective image stitching method appropriate for infrared and visible images of structures and industrial assets, facilitating enhanced visualization and automated inspection for structural maintenance. This paper proposes an advanced image stitching method appropriate for dual-sensor inspections. The proposed image stitching technique employs self-supervised feature detection to enhance the quality and quantity of feature detection. Subsequently, a graph neural network is employed for robust feature matching. Ultimately, the proposed method results in image stitching that effectively eliminates perspective distortion in both infrared and visible images, a prerequisite for subsequent multi-modal fusion strategies. Our results substantially enhance the visualization capabilities for infrastructure inspection. Comparative analysis with popular state-of-the-art methods confirms the effectiveness of the proposed approach.
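The paper's self-supervised detector and graph-neural-network matcher are not reproduced here; as a hedged illustration of the geometric core of stitching, a homography can be estimated from already-matched feature pairs with the normalized Direct Linear Transform. Only numpy is assumed, and the function name is illustrative.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 homography mapping src -> dst (Nx2 arrays, N >= 4)
    with the normalized Direct Linear Transform (DLT)."""
    def normalize(pts):
        # Translate the centroid to the origin and scale so the mean
        # distance from the origin is sqrt(2) (standard Hartley normalization).
        c = pts.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
        T = np.array([[s, 0, -s * c[0]],
                      [0, s, -s * c[1]],
                      [0, 0, 1.0]])
        ph = np.column_stack([pts, np.ones(len(pts))]) @ T.T
        return ph, T

    sp, Ts = normalize(np.asarray(src, float))
    dp, Td = normalize(np.asarray(dst, float))
    rows = []
    for (x, y, _), (u, v, _) in zip(sp, dp):
        # Each correspondence contributes two linear constraints on vec(H).
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    Hn = Vt[-1].reshape(3, 3)          # null-space vector = homography
    H = np.linalg.inv(Td) @ Hn @ Ts    # undo the normalizations
    return H / H[2, 2]
```

In a real stitcher the correspondences would come from a feature detector and matcher (the paper uses learned ones), usually wrapped in RANSAC to reject outliers.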
Affiliation(s)
- Sara Shahsavarani
- Computer Vision and Systems Laboratory (CVSL), Department of Electrical and Computer Engineering, Faculty of Science and Engineering, Laval University, Quebec City, QC G1V 0A6, Canada
- Fernando Lopez
- TORNGATS, 200 Boul. du Parc-Technologique, Quebec City, QC G1P 4S3, Canada
- Clemente Ibarra-Castanedo
- Computer Vision and Systems Laboratory (CVSL), Department of Electrical and Computer Engineering, Faculty of Science and Engineering, Laval University, Quebec City, QC G1V 0A6, Canada
- Xavier P. V. Maldague
- Computer Vision and Systems Laboratory (CVSL), Department of Electrical and Computer Engineering, Faculty of Science and Engineering, Laval University, Quebec City, QC G1V 0A6, Canada
2. Wang H, Ni D, Wang Y. Recursive Deformable Pyramid Network for Unsupervised Medical Image Registration. IEEE Trans Med Imaging 2024; 43:2229-2240. [PMID: 38319758] [DOI: 10.1109/tmi.2024.3362968]
Abstract
Complicated deformation problems are frequently encountered in medical image registration tasks. Although various advanced registration models have been proposed, accurate and efficient deformable registration remains challenging, especially for large volumetric deformations. To this end, we propose a novel recursive deformable pyramid (RDP) network for unsupervised non-rigid registration. Our network is a pure convolutional pyramid that fully exploits the advantages of the pyramid structure itself without relying on any heavyweight attention or transformer modules. In particular, our network leverages a step-by-step recursion strategy, integrating high-level semantics to predict the deformation field from coarse to fine while ensuring the plausibility of the deformation field. Meanwhile, owing to the recursive pyramid strategy, our network can attain effective deformable registration without separate affine pre-alignment. We compare the RDP network with several existing registration methods on three public brain magnetic resonance imaging (MRI) datasets: LPBA, Mindboggle, and IXI. Experimental results demonstrate that our network consistently outperforms the state of the art with respect to Dice score, average symmetric surface distance, Hausdorff distance, and Jacobian-based metrics. Even for data without affine pre-alignment, our network maintains satisfactory performance in compensating for large deformations. The code is publicly available at https://github.com/ZAX130/RDP.
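As a rough illustration of the coarse-to-fine idea (not the authors' network), a displacement field predicted at a coarse level can be upsampled to the next level and refined by a residual field before warping. A minimal numpy sketch with hypothetical helper names:

```python
import numpy as np

def upsample_field(field, factor=2):
    """Nearest-neighbour upsample a (2, H, W) displacement field and rescale
    the displacement magnitudes to the finer grid's pixel units."""
    up = np.kron(field, np.ones((1, factor, factor)))
    return up * factor

def warp(image, field):
    """Warp a 2-D image by a (2, H, W) displacement field
    (nearest-neighbour resampling, clamped at the borders)."""
    h, w = image.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    yy = np.clip(np.rint(ys + field[0]).astype(int), 0, h - 1)
    xx = np.clip(np.rint(xs + field[1]).astype(int), 0, w - 1)
    return image[yy, xx]
```

A single pyramid step would then look like `field = upsample_field(coarse_field) + residual`, with `warp` applying the composed field; a learned model predicts the residual at each level.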
3. Ortega-Cruz D, Bress KS, Gazula H, Rabano A, Iglesias JE, Strange BA. Three-dimensional histology reveals dissociable human hippocampal long-axis gradients of Alzheimer's pathology. Alzheimers Dement 2024; 20:2606-2619. [PMID: 38369763] [PMCID: PMC11032559] [DOI: 10.1002/alz.13695]
Abstract
INTRODUCTION Three-dimensional (3D) histology analyses are essential to overcome sampling variability and understand pathological differences beyond the dissection axis. We present Path2MR, the first pipeline allowing 3D reconstruction of sparse human histology without a magnetic resonance imaging (MRI) reference. We implemented Path2MR with post-mortem hippocampal sections to explore pathology gradients in Alzheimer's disease. METHODS Blockface photographs of brain hemisphere slices are used for 3D reconstruction, from which an MRI-like image is generated using machine learning. Histology sections are aligned to the reconstructed hemisphere and subsequently to an atlas in standard space. RESULTS Path2MR successfully registered histological sections to their anatomic position along the hippocampal longitudinal axis. Combined with histopathology quantification, we found an expected peak of tau pathology at the anterior end of the hippocampus, whereas amyloid-beta (Aβ) displayed a quadratic anterior-posterior distribution. CONCLUSION Path2MR, which enables 3D histology using any brain bank data set, revealed significant differences along the hippocampus between tau and Aβ. HIGHLIGHTS Path2MR enables three-dimensional (3D) brain reconstruction from blockface dissection photographs. This pipeline does not require dense specimen sampling or a subject-specific magnetic resonance (MR) image. Anatomically consistent mapping of hippocampal sections was obtained with Path2MR. Our analyses revealed an anterior-posterior gradient of hippocampal tau pathology. In contrast, the peak of amyloid-beta (Aβ) deposition was closer to the hippocampal body.
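The dissociation reported above (a monotone anterior gradient for tau versus a quadratic anterior-posterior profile for Aβ) amounts to comparing polynomial fits along the long axis. A hedged sketch on synthetic, purely illustrative data (not the study's measurements):

```python
import numpy as np

# Hypothetical pathology loads sampled at normalized positions along the
# hippocampal long axis: 0 = anterior end, 1 = posterior end.
pos = np.linspace(0.0, 1.0, 9)
tau = 1.0 - 0.8 * pos                  # monotone gradient, anterior peak (illustrative)
abeta = 0.2 + 2.0 * pos * (1 - pos)    # inverted-U quadratic profile (illustrative)

def best_degree(x, y, degrees=(1, 2)):
    """Pick the polynomial degree with the lowest sum of squared residuals."""
    resid = {d: np.sum((np.polyval(np.polyfit(x, y, d), x) - y) ** 2)
             for d in degrees}
    return min(resid, key=resid.get)
```

On real data one would penalize the extra parameter (e.g. AIC or an F-test) rather than compare raw residuals, since a quadratic can never fit worse than a line.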
Affiliation(s)
- Diana Ortega-Cruz
- Laboratory for Clinical Neuroscience, Center for Biomedical Technology, Universidad Politécnica de Madrid, IdISSC, Madrid, Spain
- Alzheimer's Disease Research Unit, CIEN Foundation, Queen Sofia Foundation Alzheimer Center, Madrid, Spain
- Kimberly S. Bress
- Alzheimer's Disease Research Unit, CIEN Foundation, Queen Sofia Foundation Alzheimer Center, Madrid, Spain
- Present address: Vanderbilt University School of Medicine, Nashville, Tennessee, USA
- Harshvardhan Gazula
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts, USA
- Alberto Rabano
- Alzheimer's Disease Research Unit, CIEN Foundation, Queen Sofia Foundation Alzheimer Center, Madrid, Spain
- Juan Eugenio Iglesias
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts, USA
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Boston, Massachusetts, USA
- Centre for Medical Image Computing, University College London, London, UK
- Bryan A. Strange
- Laboratory for Clinical Neuroscience, Center for Biomedical Technology, Universidad Politécnica de Madrid, IdISSC, Madrid, Spain
- Alzheimer's Disease Research Unit, CIEN Foundation, Queen Sofia Foundation Alzheimer Center, Madrid, Spain
4. Lv K, Zhang J, Liu X, Zhou Y, Liu K. Computer-aided accurate calculation of interacted volumes for 3D isosurface point clouds of molecular electrostatic potential. J Mol Graph Model 2024; 126:108648. [PMID: 37857113] [DOI: 10.1016/j.jmgm.2023.108648]
Abstract
The quality of the chiral environment (i.e., the catalytic pocket) is directly related to the performance of chiral catalysts. Existing methods demand substantial computing power and time, making it difficult to quickly judge the interaction between chiral catalysts and substrates and thus to accurately evaluate the effects of chiral catalytic pockets. In this paper, working with 3D isosurface point clouds of molecular electrostatic potential, we propose a robust simulation-based method to detect interacting points and then accurately compute the corresponding interaction volumes. First, using the existing marching cubes algorithm, we construct 3D models with triangular surfaces for the isosurface point clouds of molecular electrostatic potentials. Second, using our improved hierarchical bounding box algorithm, we filter out most redundant non-collision points. Third, using the normal vectors of the remaining points and their associated triangles, we robustly determine the interacting points and assemble them into interaction sets. Finally, by combining classical slicing with our multi-contour segmentation, we accurately calculate the interaction volumes. Across three groups of point clouds of chemical molecules, experimental results show that our method removes non-interacting points at average rates of 71.65%, 77.76%, and 71.82%, and calculates the interaction volumes with average relative errors of 1.7%, 1.6%, and 1.9%, respectively.
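The bounding-box filtering step can be illustrated in a much-simplified, single-level form: discard the points of each cloud that fall outside the (optionally enlarged) axis-aligned bounding box of the other cloud, since they cannot take part in any contact. This is a sketch of the general idea, not the authors' hierarchical implementation:

```python
import numpy as np

def aabb_filter(pts_a, pts_b, margin=0.0):
    """Keep only points of each (N, 3) cloud inside the axis-aligned
    bounding box of the other cloud, enlarged by `margin` on every side."""
    lo_a, hi_a = pts_a.min(axis=0) - margin, pts_a.max(axis=0) + margin
    lo_b, hi_b = pts_b.min(axis=0) - margin, pts_b.max(axis=0) + margin
    keep_a = np.all((pts_a >= lo_b) & (pts_a <= hi_b), axis=1)
    keep_b = np.all((pts_b >= lo_a) & (pts_b <= hi_a), axis=1)
    return pts_a[keep_a], pts_b[keep_b]
```

A hierarchical version applies the same test recursively to subdivided boxes, so most non-colliding points are rejected without any pairwise distance computation.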
Affiliation(s)
- Kun Lv
- College of Electrical Engineering, Sichuan University, Chengdu, Sichuan, 610065, China
- Jin Zhang
- College of Electrical Engineering, Sichuan University, Chengdu, Sichuan, 610065, China
- Xiaohua Liu
- Key Laboratory of Green Chemistry & Technology, Ministry of Education, College of Chemistry, Sichuan University, Chengdu, Sichuan, 610064, China
- Yuqiao Zhou
- Key Laboratory of Green Chemistry & Technology, Ministry of Education, College of Chemistry, Sichuan University, Chengdu, Sichuan, 610064, China
- Kai Liu
- College of Electrical Engineering, Sichuan University, Chengdu, Sichuan, 610065, China
5. Ortega-Cruz D, Bress KS, Gazula H, Rabano A, Iglesias JE, Strange BA. Three-dimensional histology reveals dissociable human hippocampal long axis gradients of Alzheimer's pathology. bioRxiv [Preprint] 2023:2023.12.05.570038. [PMID: 38105985] [PMCID: PMC10723286] [DOI: 10.1101/2023.12.05.570038]
Abstract
INTRODUCTION Three-dimensional (3D) histology analyses are essential to overcome sampling variability and understand pathological differences beyond the dissection axis. We present Path2MR, the first pipeline allowing 3D reconstruction of sparse human histology without an MRI reference. We implemented Path2MR with post-mortem hippocampal sections to explore pathology gradients in Alzheimer's Disease. METHODS Blockface photographs of brain hemisphere slices are used for 3D reconstruction, from which an MRI-like image is generated using machine learning. Histology sections are aligned to the reconstructed hemisphere and subsequently to an atlas in standard space. RESULTS Path2MR successfully registered histological sections to their anatomical position along the hippocampal longitudinal axis. Combined with histopathology quantification, we found an expected peak of tau pathology at the anterior end of the hippocampus, while amyloid-β displayed a quadratic anterior-posterior distribution. CONCLUSION Path2MR, which enables 3D histology using any brain bank dataset, revealed significant differences along the hippocampus between tau and amyloid-β.
Affiliation(s)
- Diana Ortega-Cruz
- Laboratory for Clinical Neuroscience, Center for Biomedical Technology, Universidad Politécnica de Madrid, IdISSC, 28223, Madrid, Spain
- Alzheimer's Disease Research Unit, CIEN Foundation, Queen Sofia Foundation Alzheimer Center, 28031, Madrid, Spain
- Kimberly S Bress
- Alzheimer's Disease Research Unit, CIEN Foundation, Queen Sofia Foundation Alzheimer Center, 28031, Madrid, Spain
- Current address: Vanderbilt University School of Medicine, 37232, Nashville, TN, USA
- Harshvardhan Gazula
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, 02129, Boston, MA, USA
- Alberto Rabano
- Alzheimer's Disease Research Unit, CIEN Foundation, Queen Sofia Foundation Alzheimer Center, 28031, Madrid, Spain
- Juan Eugenio Iglesias
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, 02129, Boston, MA, USA
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 02139, Boston, MA, USA
- Centre for Medical Image Computing, University College London, WC1V 6LJ, London, United Kingdom
- Bryan A Strange
- Laboratory for Clinical Neuroscience, Center for Biomedical Technology, Universidad Politécnica de Madrid, IdISSC, 28223, Madrid, Spain
- Alzheimer's Disease Research Unit, CIEN Foundation, Queen Sofia Foundation Alzheimer Center, 28031, Madrid, Spain
6. Beetz M, Banerjee A, Ossenberg-Engels J, Grau V. Multi-class point cloud completion networks for 3D cardiac anatomy reconstruction from cine magnetic resonance images. Med Image Anal 2023; 90:102975. [PMID: 37804586] [DOI: 10.1016/j.media.2023.102975]
Abstract
Cine magnetic resonance imaging (MRI) is the current gold standard for the assessment of cardiac anatomy and function. However, it typically only acquires a set of two-dimensional (2D) slices of the underlying three-dimensional (3D) anatomy of the heart, thus limiting the understanding and analysis of both healthy and pathological cardiac morphology and physiology. In this paper, we propose a novel fully automatic surface reconstruction pipeline capable of reconstructing multi-class 3D cardiac anatomy meshes from raw cine MRI acquisitions. Its key component is a multi-class point cloud completion network (PCCN) capable of correcting both the sparsity and misalignment issues of the 3D reconstruction task in a unified model. We first evaluate the PCCN on a large synthetic dataset of biventricular anatomies and observe Chamfer distances between reconstructed and gold standard anatomies below or similar to the underlying image resolution for multiple levels of slice misalignment. Furthermore, we find a reduction in reconstruction error compared to a benchmark 3D U-Net by 32% and 24% in terms of Hausdorff distance and mean surface distance, respectively. We then apply the PCCN as part of our automated reconstruction pipeline to 1000 subjects from the UK Biobank study in a cross-domain transfer setting and demonstrate its ability to reconstruct accurate and topologically plausible biventricular heart meshes with clinical metrics comparable to the previous literature. Finally, we investigate the robustness of our proposed approach and observe its capacity to successfully handle multiple common outlier conditions.
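The Chamfer distance quoted above as the evaluation metric has a direct numpy definition. This is a brute-force O(NM) sketch, fine for small clouds; large clouds would use a k-d tree for the nearest-neighbour queries:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3):
    mean nearest-neighbour distance from a to b plus from b to a."""
    # Full (N, M) pairwise Euclidean distance matrix.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

A reconstruction is considered good when this value is below the image resolution, which is the criterion the abstract reports against the gold-standard anatomies.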
Affiliation(s)
- Marcel Beetz
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford OX3 7DQ, UK
- Abhirup Banerjee
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford OX3 7DQ, UK; Division of Cardiovascular Medicine, Radcliffe Department of Medicine, University of Oxford, Oxford OX3 9DU, UK
- Julius Ossenberg-Engels
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford OX3 7DQ, UK
- Vicente Grau
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford OX3 7DQ, UK
7. Wang Y, Sun Y, Gan K, Yuan J, Xu H, Gao H, Zhang X. Bone marrow sparing oriented multi-model image registration in cervical cancer radiotherapy. Comput Biol Med 2023; 166:107581. [PMID: 37862763] [DOI: 10.1016/j.compbiomed.2023.107581]
Abstract
Cervical cancer poses a serious threat to women's health, and radiotherapy is one of the primary treatments for this condition. However, the treatment carries a high risk of acute hematologic toxicity. Delineating the bone marrow (BM) for sparing on computed tomography (CT) images before radiotherapy can effectively mitigate this risk. Unfortunately, compared to magnetic resonance (MR) images, CT images cannot express BM activity. Therefore, medical practitioners currently delineate the BM on CT images manually by referring to the corresponding MR images. However, manual delineation is time-consuming and cannot guarantee accuracy, owing to the inconsistency of the CT-MR multimodal images. This study proposes a multimodal image-oriented automatic registration method for pelvic BM sparing. The proposed method comprises three-dimensional (3D) bone point cloud reconstruction and an iterative closest point registration based on a local spherical system for marking the BM on CT images. By introducing a joint coordinate system that combines the global Cartesian coordinate system with the point cloud's local spherical coordinate system, the increased descriptive dimension of each point avoids locally optimal registration and improves registration accuracy. Experiments on a patient dataset demonstrate that our method can enhance multimodal image registration accuracy and efficiency for medical practitioners in BM sparing for cervical cancer radiotherapy. The proposed method may also provide a solution for multimodal registration, particularly of multimodal sequential images, in other clinical applications such as the diagnosis of cervical cancer and the preservation of normal organs during radiotherapy.
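The joint Cartesian-plus-spherical idea can be sketched by augmenting each point with its local spherical coordinates about a reference center, yielding a higher-dimensional descriptor for matching. The helper name and the centroid as reference center are assumptions, not the paper's exact construction:

```python
import numpy as np

def joint_descriptor(points, center=None):
    """Augment each 3-D point with its local spherical coordinates
    (r, theta, phi) about `center` (default: cloud centroid),
    yielding a (N, 6) descriptor: [x, y, z, r, theta, phi]."""
    p = np.asarray(points, dtype=float)
    c = p.mean(axis=0) if center is None else np.asarray(center, dtype=float)
    d = p - c
    r = np.linalg.norm(d, axis=1)
    # Polar angle from +z; guard against division by zero at the center.
    theta = np.arccos(np.clip(d[:, 2] / np.where(r == 0, 1.0, r), -1.0, 1.0))
    phi = np.arctan2(d[:, 1], d[:, 0])  # azimuth in the x-y plane
    return np.column_stack([p, r, theta, phi])
```

An ICP variant could then match correspondences in this 6-D space instead of raw 3-D positions, which is the spirit (though not the detail) of the method described above.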
Affiliation(s)
- Yuening Wang
- Nanjing University, The School of Electronic Science and Engineering, Nanjing, China
- Ying Sun
- Nanjing University, The School of Electronic Science and Engineering, Nanjing, China
- Kexin Gan
- Nanjing University, The School of Electronic Science and Engineering, Nanjing, China
- Jie Yuan
- Nanjing University, The School of Electronic Science and Engineering, Nanjing, China
- Hanzi Xu
- The Jiangsu Cancer Hospital, Nanjing, China
- Han Gao
- The Jiangsu Cancer Hospital, Nanjing, China
8. Rodgers G, Bikis C, Janz P, Tanner C, Schulz G, Thalmann P, Haas CA, Müller B. 3D X-ray Histology for the Investigation of Temporal Lobe Epilepsy in a Mouse Model. Microsc Microanal 2023; 29:1730-1745. [DOI: 10.1093/micmic/ozad082]
Abstract
The most common form of epilepsy among adults is mesial temporal lobe epilepsy (mTLE), with seizures often originating in the hippocampus due to abnormal electrical activity. The gold standard for the histopathological analysis of mTLE is histology, which is inherently two-dimensional. To fill this gap, we propose complementary three-dimensional (3D) X-ray histology. Herein, we used synchrotron radiation-based phase-contrast microtomography with 1.6 μm-wide voxels for the post mortem visualization of tissue microstructure in an intrahippocampal-kainate mouse model of mTLE. We demonstrated that 3D X-ray histology of unstained, unsectioned, paraffin-embedded brain hemispheres can identify hippocampal sclerosis through the loss of pyramidal neurons in the first and third regions of the Cornu ammonis, as well as granule cell dispersion within the dentate gyrus. Morphology and density changes during epileptogenesis were quantified from segmentations produced by a deep convolutional neural network. Compared to control mice, the total dentate gyrus volume doubled and the granular layer volume quadrupled 21 days after kainate injection. Subsequent sectioning of the same mouse brains allowed 3D X-ray histology to be benchmarked against well-established histochemical and immunofluorescence stainings. Thus, 3D X-ray histology is a complementary neuroimaging tool that unlocks the third dimension for cellular-resolution histopathological analysis of mTLE.
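The volume quantifications reported above reduce to counting labelled voxels in the network's segmentation and scaling by the voxel volume. A minimal sketch assuming isotropic voxels at the 1.6 μm width quoted in the abstract (function and constant names are illustrative):

```python
import numpy as np

VOXEL_MM = 0.0016  # 1.6 um voxel width expressed in mm (from the abstract)

def label_volume_mm3(seg, label, voxel_width_mm=VOXEL_MM):
    """Volume of one labelled structure in a 3-D segmentation array:
    voxel count times the cube of the (isotropic) voxel width."""
    return int(np.count_nonzero(seg == label)) * voxel_width_mm ** 3
```

Fold changes such as "dentate gyrus volume doubled" would then be the ratio of this quantity between kainate-injected and control animals.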
Affiliation(s)
- Griffin Rodgers
- Biomaterials Science Center, Department of Biomedical Engineering, University of Basel, 4123 Allschwil, Switzerland
- Biomaterials Science Center, Department of Clinical Research, University Hospital Basel, 4031 Basel, Switzerland
- Christos Bikis
- Biomaterials Science Center, Department of Biomedical Engineering, University of Basel, 4123 Allschwil, Switzerland
- Integrierte Psychiatrie Winterthur-Zürcher Unterland, 8408 Winterthur, Switzerland
- Philipp Janz
- Faculty of Medicine, Experimental Epilepsy Research, Department of Neurosurgery, Medical Center-University of Freiburg, 79106 Freiburg, Germany
- Faculty of Biology, University of Freiburg, 79106 Freiburg, Germany
- BrainLinks-BrainTools Center, University of Freiburg, 79106 Freiburg, Germany
- Christine Tanner
- Biomaterials Science Center, Department of Biomedical Engineering, University of Basel, 4123 Allschwil, Switzerland
- Biomaterials Science Center, Department of Clinical Research, University Hospital Basel, 4031 Basel, Switzerland
- Georg Schulz
- Biomaterials Science Center, Department of Biomedical Engineering, University of Basel, 4123 Allschwil, Switzerland
- Biomaterials Science Center, Department of Clinical Research, University Hospital Basel, 4031 Basel, Switzerland
- Core Facility Micro- and Nanotomography, Department of Biomedical Engineering, University of Basel, 4123 Allschwil, Switzerland
- Peter Thalmann
- Biomaterials Science Center, Department of Biomedical Engineering, University of Basel, 4123 Allschwil, Switzerland
- Carola A Haas
- Faculty of Medicine, Experimental Epilepsy Research, Department of Neurosurgery, Medical Center-University of Freiburg, 79106 Freiburg, Germany
- BrainLinks-BrainTools Center, University of Freiburg, 79106 Freiburg, Germany
- Center of Basics in NeuroModulation, Faculty of Medicine, University of Freiburg, 79114 Freiburg, Germany
- Bert Müller
- Biomaterials Science Center, Department of Biomedical Engineering, University of Basel, 4123 Allschwil, Switzerland
- Biomaterials Science Center, Department of Clinical Research, University Hospital Basel, 4031 Basel, Switzerland
9. Ciceri T, Squarcina L, Giubergia A, Bertoldo A, Brambilla P, Peruzzo D. Review on deep learning fetal brain segmentation from Magnetic Resonance images. Artif Intell Med 2023; 143:102608. [PMID: 37673558] [DOI: 10.1016/j.artmed.2023.102608]
Abstract
Brain segmentation is often the first and most critical step in quantitative analysis of the brain for many clinical applications, including fetal imaging. Different aspects challenge the segmentation of the fetal brain in magnetic resonance imaging (MRI), such as the non-standard position of the fetus owing to his/her movements during the examination, rapid brain development, and the limited availability of imaging data. In recent years, several segmentation methods have been proposed for automatically partitioning the fetal brain from MR images. These algorithms aim to define regions of interest with different shapes and intensities, encompassing the entire brain, or isolating specific structures. Deep learning techniques, particularly convolutional neural networks (CNNs), have become a state-of-the-art approach in the field because they can provide reliable segmentation results over heterogeneous datasets. Here, we review the deep learning algorithms developed in the field of fetal brain segmentation and categorize them according to their target structures. Finally, we discuss the perceived research gaps in the literature of the fetal domain, suggesting possible future research directions that could impact the management of fetal MR images.
Affiliation(s)
- Tommaso Ciceri
- NeuroImaging Laboratory, Scientific Institute IRCCS Eugenio Medea, Bosisio Parini, Italy; Department of Information Engineering, University of Padua, Padua, Italy
- Letizia Squarcina
- Department of Pathophysiology and Transplantation, University of Milan, Milan, Italy
- Alice Giubergia
- NeuroImaging Laboratory, Scientific Institute IRCCS Eugenio Medea, Bosisio Parini, Italy; Department of Information Engineering, University of Padua, Padua, Italy
- Alessandra Bertoldo
- Department of Information Engineering, University of Padua, Padua, Italy; University of Padua, Padova Neuroscience Center, Padua, Italy
- Paolo Brambilla
- Department of Pathophysiology and Transplantation, University of Milan, Milan, Italy; Department of Neurosciences and Mental Health, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, Milan, Italy
- Denis Peruzzo
- NeuroImaging Laboratory, Scientific Institute IRCCS Eugenio Medea, Bosisio Parini, Italy
10. Guo H, Xu X, Song X, Xu S, Chao H, Myers J, Turkbey B, Pinto PA, Wood BJ, Yan P. Ultrasound Frame-to-Volume Registration via Deep Learning for Interventional Guidance. IEEE Trans Ultrason Ferroelectr Freq Control 2023; 70:1016-1025. [PMID: 37015418] [PMCID: PMC10502768] [DOI: 10.1109/tuffc.2022.3229903]
Abstract
Fusing intraoperative 2-D ultrasound (US) frames with preoperative 3-D magnetic resonance (MR) images for guiding interventions has become the clinical gold standard in image-guided prostate cancer biopsy. However, developing an automatic image registration system for this application is challenging because of the modality gap between US/MR and the dimensionality gap between 2-D/3-D data. To overcome these challenges, we propose a novel US frame-to-volume registration (FVReg) pipeline to bridge the dimensionality gap between 2-D US frames and 3-D US volume. The developed pipeline is implemented using deep neural networks, which are fully automatic without requiring external tracking devices. The framework consists of three major components: 1) a frame-to-frame registration network (Frame2Frame) that estimates the current frame's 3-D spatial position based on previous video context; 2) a frame-to-slice correction network (Frame2Slice) that adjusts the estimated frame position using the 3-D US volumetric information; and 3) a similarity filtering (SF) mechanism that selects the frame with the highest image similarity to the query frame. We validated our method on a clinical dataset with 618 subjects and tested its potential on real-time 2-D-US to 3-D-MR fusion navigation tasks. The proposed FVReg achieved an average target navigation error of 1.93 mm at 5-14 fps. Our source code is publicly available at https://github.com/DIAL-RPI/Frame-to-Volume-Registration.
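The similarity-filtering step, choosing the candidate slice most similar to the query frame, can be sketched with normalized cross-correlation. NCC is one plausible similarity measure here; the paper's exact metric is not assumed, and the function names are illustrative:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized images,
    invariant to affine intensity changes (brightness/contrast)."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def most_similar_slice(query, candidates):
    """Similarity filtering: index of the candidate slice with the
    highest NCC against the query frame."""
    return int(np.argmax([ncc(query, c) for c in candidates]))
```

In the pipeline above, the candidates would be slices resampled from the 3-D US volume around the positions estimated by the two registration networks.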
11. Cordero-Grande L, Ortuno-Fisac JE, Del Hoyo AA, Uus A, Deprez M, Santos A, Hajnal JV, Ledesma-Carbayo MJ. Fetal MRI by Robust Deep Generative Prior Reconstruction and Diffeomorphic Registration. IEEE Trans Med Imaging 2023; 42:810-822. [PMID: 36288233] [DOI: 10.1109/tmi.2022.3217725]
Abstract
Magnetic resonance imaging of the whole fetal body and placenta is limited by different sources of motion affecting the womb. Usual scanning techniques employ single-shot multi-slice sequences, where anatomical information in different slices may be subject to different deformations, contrast variations, or artifacts. Volumetric reconstruction formulations have been proposed to correct for these factors, but they must accommodate non-homogeneous and non-isotropic sampling, so regularization becomes necessary. Thus, in this paper we propose a deep generative prior for robust volumetric reconstruction, integrated with a diffeomorphic volume-to-slice registration method. Experiments are performed to validate our contributions and compare with state-of-the-art methods in the literature on a cohort of 72 fetal datasets in the range of 20-36 weeks gestational age. Quantitative as well as radiological assessment suggest improved image quality and more accurate prediction of gestational age at scan when compared to state-of-the-art reconstruction methods. In addition, gestational age prediction results from our volumetric reconstructions are competitive with existing brain-based approaches, with boosted accuracy when integrating information from organs other than the brain. Namely, a mean absolute error of 0.618 weeks (R2=0.958) is achieved when combining fetal brain and trunk information.
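The reported accuracy figures use standard regression metrics; for reference, mean absolute error and the coefficient of determination have these direct definitions (the test values below are synthetic, not the study's data):

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error between true and predicted values."""
    return float(np.mean(np.abs(y_true - y_pred)))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)
```

Here `y_true` would be the clinically recorded gestational ages in weeks and `y_pred` the ages predicted from the reconstructed volumes.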
12. Shi W, Xu H, Sun C, Sun J, Li Y, Xu X, Zheng T, Zhang Y, Wang G, Wu D. AFFIRM: Affinity Fusion-Based Framework for Iteratively Random Motion Correction of Multi-Slice Fetal Brain MRI. IEEE Trans Med Imaging 2023; 42:209-219. [PMID: 36129858] [DOI: 10.1109/tmi.2022.3208277]
Abstract
Multi-slice magnetic resonance images of the fetal brain are usually contaminated by severe and arbitrary fetal and maternal motion. Hence, stable and robust motion correction is necessary to reconstruct high-resolution 3D fetal brain volume for clinical diagnosis and quantitative analysis. However, the conventional registration-based correction has a limited capture range and is insufficient for detecting relatively large motions. Here, we present a novel Affinity Fusion-based Framework for Iteratively Random Motion (AFFIRM) correction of the multi-slice fetal brain MRI. It learns the sequential motion from multiple stacks of slices and integrates the features between 2D slices and reconstructed 3D volume using affinity fusion, which resembles the iterations between slice-to-volume registration and volumetric reconstruction in the regular pipeline. The method accurately estimates the motion regardless of brain orientations and outperforms other state-of-the-art learning-based methods on the simulated motion-corrupted data, with a 48.4% reduction of mean absolute error for rotation and 61.3% for displacement. We then incorporated AFFIRM into the multi-resolution slice-to-volume registration and tested it on the real-world fetal MRI scans at different gestation stages. The results indicated that adding AFFIRM to the conventional pipeline improved the success rate of fetal brain super-resolution reconstruction from 77.2% to 91.9%.
13
Huszar IN, Pallebage-Gamarallage M, Bangerter-Christensen S, Brooks H, Fitzgibbon S, Foxley S, Hiemstra M, Howard AFD, Jbabdi S, Kor DZL, Leonte A, Mollink J, Smart A, Tendler BC, Turner MR, Ansorge O, Miller KL, Jenkinson M. Tensor image registration library: Deformable registration of stand-alone histology images to whole-brain post-mortem MRI data. Neuroimage 2023; 265:119792. [PMID: 36509214 PMCID: PMC10933796 DOI: 10.1016/j.neuroimage.2022.119792] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2022] [Revised: 10/26/2022] [Accepted: 12/04/2022] [Indexed: 12/13/2022] Open
Abstract
BACKGROUND Accurate registration between microscopy and MRI data is necessary for validating imaging biomarkers against neuropathology, and to disentangle complex signal dependencies in microstructural MRI. Existing registration methods often rely on serial histological sampling or significant manual input, providing limited scope to work with a large number of stand-alone histology sections. Here we present a customisable pipeline to assist the registration of stand-alone histology sections to whole-brain MRI data. METHODS Our pipeline registers stained histology sections to whole-brain post-mortem MRI in 4 stages, with the help of two photographic intermediaries: a block face image (to undistort histology sections) and coronal brain slab photographs (to insert them into MRI space). Each registration stage is implemented as a configurable stand-alone Python script using our novel platform, Tensor Image Registration Library (TIRL), which provides flexibility for wider adaptation. We report our experience of registering 87 PLP-stained histology sections from 14 subjects and perform various experiments to assess the accuracy and robustness of each stage of the pipeline. RESULTS All 87 histology sections were successfully registered to MRI. Histology-to-block registration (Stage 1) achieved 0.2-0.4 mm accuracy, better than commonly used existing methods. Block-to-slice matching (Stage 2) showed great robustness in automatically identifying and inserting small tissue blocks into whole brain slices with 0.2 mm accuracy. Simulations demonstrated sub-voxel level accuracy (0.13 mm) of the slice-to-volume registration (Stage 3) algorithm, which was observed in over 200 actual brain slice registrations, compensating 3D slice deformations up to 6.5 mm. Stage 4 combined the previous stages and generated refined pixelwise aligned multi-modal histology-MRI stacks. 
CONCLUSIONS Our open-source pipeline provides robust automation tools for registering stand-alone histology sections to MRI data with sub-voxel level precision, and the underlying framework makes it readily adaptable to a diverse range of microscopy-MRI studies.
Affiliation(s)
- Istvan N Huszar
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK; Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK.
- Sarah Bangerter-Christensen
- Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK; Brigham Young University, Provo, UT, USA
- Hannah Brooks
- Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- Sean Fitzgibbon
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK; Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- Sean Foxley
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK; Department of Radiology, University of Chicago, Chicago, IL, USA
- Marlies Hiemstra
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK; Department of Anatomy, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Centre, Nijmegen, the Netherlands
- Amy F D Howard
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK; Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- Saad Jbabdi
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK; Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- Daniel Z L Kor
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK; Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- Anna Leonte
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK; Department of Neuroscience, University of Groningen, Groningen, the Netherlands
- Jeroen Mollink
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK; Department of Anatomy, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Centre, Nijmegen, the Netherlands
- Adele Smart
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK; Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- Benjamin C Tendler
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK; Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- Martin R Turner
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK; Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- Olaf Ansorge
- Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- Karla L Miller
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK; Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- Mark Jenkinson
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK; Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
14
Combining High-Resolution Hard X-ray Tomography and Histology for Stem Cell-Mediated Distraction Osteogenesis. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12126286] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
Distraction osteogenesis is a clinically established technique for lengthening, molding and shaping bone by new bone formation. The experimental evaluation of this expensive and time-consuming treatment is of high impact for better understanding of tissue engineering but mainly relies on a limited number of histological slices. These tissue slices contain two-dimensional information comprising only about one percent of the volume of interest. In order to analyze the soft and hard tissues of the entire jaw of a single rat in a multimodal assessment, we combined micro computed tomography (µCT) with histology. The µCT data acquired before and after decalcification were registered to determine the impact of decalcification on local tissue shrinkage. Identification of the location of the H&E-stained specimen within the synchrotron radiation-based µCT data collected after decalcification was achieved via non-rigid slice-to-volume registration. The resulting bi- and tri-variate histograms were divided into clusters related to anatomical features from bone and soft tissues, which allowed for a comparison of the approaches and resulted in the hypothesis that the combination of laboratory-based µCT before decalcification, synchrotron radiation-based µCT after decalcification and histology with hematoxylin-and-eosin staining could be used to discriminate between different types of collagen, key components of new bone formation.
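The bi-variate histogram analysis described above can be sketched as a joint histogram of voxel values from two registered volumes, whose clusters correspond to tissue classes. All values below are synthetic, not the study's data:

```python
import numpy as np

# Two registered "scans" of the same voxels: paired absorption values with a
# high-absorption (bone-like) cluster and a low-absorption (soft-tissue-like)
# cluster. The cluster means and spreads are purely illustrative.
rng = np.random.default_rng(42)
bone = rng.normal([0.8, 0.6], 0.05, size=(1000, 2))
soft = rng.normal([0.3, 0.2], 0.05, size=(1000, 2))
vals = np.clip(np.vstack([bone, soft]), 0.0, 1.0)

# Bi-variate histogram: each bin counts voxels with a given value pair;
# clusters in this histogram are what get assigned to anatomical features.
hist, xedges, yedges = np.histogram2d(vals[:, 0], vals[:, 1],
                                      bins=32, range=[[0, 1], [0, 1]])
print(int(hist.sum()))  # 2000: every registered voxel pair lands in one bin
```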
15
Zarenia M, Arpinar VE, Nencka AS, Muftuler LT, Koch KM. Dynamic tracking of scaphoid, lunate, and capitate carpal bones using four-dimensional MRI. PLoS One 2022; 17:e0269336. [PMID: 35653348 PMCID: PMC9162359 DOI: 10.1371/journal.pone.0269336] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2021] [Accepted: 05/18/2022] [Indexed: 11/18/2022] Open
Abstract
A preliminary exploration of technical methodology for dynamic analysis of the scaphoid, capitate, and lunate during unconstrained movements is performed in this study. A heavily accelerated and fat-saturated 3D Cartesian MRI acquisition was used to capture temporal frames of the unconstrained moving wrist of 5 healthy subjects. A slab-to-volume point-cloud based registration was then utilized to register the moving volumes to a high-resolution image volume collected at a neutral resting position. Comprehensive in-silico error analyses for different acquisition parameter settings were performed to evaluate the performance limits of several dynamic metrics derived from the registration parameters. Computational analysis suggested that sufficient volume coverage for the dynamic acquisitions was reached when collecting 12 slice-encodes at 2.5 mm resolution, which yielded a temporal resolution of 2.6 seconds per volumetric frame. These acquisition parameters resulted in total in-silico errors of 1.9°±1.8° and 3°±4.6° in derived principal rotation angles within ulnar-radial deviation and flexion-extension motion, respectively. Rotation components of the carpal bones in the radius coordinate system were calculated and found to be consistent with earlier 4D-CT studies. Temporal metric profiles derived from ulnar-radial deviation motion demonstrated better performance than those derived from flexion/extension movements. Future work will continue to explore the use of these methods in deriving more complex dynamic metrics and their application to subjects with symptomatic carpal dysfunction.
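Point-cloud rigid registration of the kind described above is commonly solved in closed form with the SVD-based Kabsch algorithm. A sketch on synthetic points (a generic building block under our own assumptions, not the authors' implementation):

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t mapping point cloud P onto Q
    in the least-squares sense, via SVD of the cross-covariance."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # guard against reflections
    t = Q.mean(axis=0) - R @ P.mean(axis=0)
    return R, t

rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))                   # synthetic "neutral pose" points
theta = np.radians(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])  # rotated + translated copy
R, t = kabsch(P, Q)
print(np.allclose(R, R_true), np.allclose(t, [1.0, -2.0, 0.5]))  # True True
```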
Affiliation(s)
- Mohammad Zarenia
- Department of Radiology, Medical College of Wisconsin, Milwaukee, WI, United States of America
- Volkan Emre Arpinar
- Department of Radiology, Medical College of Wisconsin, Milwaukee, WI, United States of America
- Andrew S. Nencka
- Department of Radiology, Medical College of Wisconsin, Milwaukee, WI, United States of America
- L. Tugan Muftuler
- Department of Radiology, Medical College of Wisconsin, Milwaukee, WI, United States of America
- Kevin M. Koch
- Department of Radiology, Medical College of Wisconsin, Milwaukee, WI, United States of America
16
Reattachable fiducial skin marker for automatic multimodality registration. Int J Comput Assist Radiol Surg 2022; 17:2141-2150. [PMID: 35604488 PMCID: PMC9515062 DOI: 10.1007/s11548-022-02639-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2022] [Accepted: 04/08/2022] [Indexed: 11/05/2022]
Abstract
Purpose
Fusing image information has become increasingly important for optimal diagnosis and treatment of the patient. Despite intensive research towards markerless registration approaches, fiducial marker-based methods remain the default choice for a wide range of applications in clinical practice. However, as especially non-invasive markers cannot be positioned reproducibly in the same pose on the patient, pre-interventional imaging has to be performed immediately before the intervention for fiducial marker-based registrations.
Methods
We propose a new non-invasive, reattachable fiducial skin marker concept for multi-modal registration approaches including the use of electromagnetic or optical tracking technologies. We furthermore describe a robust, automatic fiducial marker localization algorithm for computed tomography (CT) and magnetic resonance imaging (MRI) images. Localization of the new fiducial marker has been assessed for different marker configurations using both CT and MRI. Furthermore, we applied the marker in an abdominal phantom study. For this, we attached the marker at three poses to the phantom, registered ten segmented targets of the phantom’s CT image to live ultrasound images and determined the target registration error (TRE) for each target and each marker pose.
Results
Reattachment of the marker was possible with a mean precision of 0.02 mm ± 0.01 mm. Our algorithm successfully localized the marker automatically in all (n = 201) evaluated CT/MRI images. Depending on the marker pose, the mean (n = 10) TRE of the abdominal phantom study ranged from 1.51 ± 0.75 mm to 4.65 ± 1.22 mm.
Conclusions
The non-invasive, reattachable skin marker concept allows reproducible positioning of the marker and automatic localization in different imaging modalities. The low TREs indicate the potential applicability of the marker concept for clinical interventions, such as the puncture of abdominal lesions, where current image-based registration approaches still lack robustness and existing marker-based methods are often impractical.
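The target registration error quoted in this study is the distance between registered target points and their ground-truth positions. A minimal sketch with a hypothetical transform and targets (not the phantom data):

```python
import numpy as np

def target_registration_error(T, targets_moving, targets_fixed):
    """Per-target TRE: Euclidean distance between targets mapped through the
    registration transform T (4x4 homogeneous, moving -> fixed coordinates)
    and their ground-truth positions in the fixed image."""
    homo = np.c_[targets_moving, np.ones(len(targets_moving))]
    mapped = (T @ homo.T).T[:, :3]
    return np.linalg.norm(mapped - targets_fixed, axis=1)

# Hypothetical registration with a residual 0.5 mm shift along x
T = np.eye(4)
T[:3, 3] = [0.5, 0.0, 0.0]
targets = np.array([[10.0, 20.0, 30.0], [40.0, 50.0, 60.0]])
print(target_registration_error(T, targets, targets))  # [0.5 0.5]
```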
17
Memiş A, Varlı S, Bilgili F. Fast and Accurate Registration of the Proximal Femurs in Bilateral Hip Joint Images by Using the Random Sub-Sample Points. Ing Rech Biomed 2022. [DOI: 10.1016/j.irbm.2021.04.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
18

19
Dida H, Charif F, Benchabane A. Registration of computed tomography images of a lung infected with COVID-19 based in the new meta-heuristic algorithm HPSGWO. MULTIMEDIA TOOLS AND APPLICATIONS 2022; 81:18955-18976. [PMID: 35287378 PMCID: PMC8907398 DOI: 10.1007/s11042-022-12658-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/17/2020] [Revised: 04/27/2021] [Accepted: 02/09/2022] [Indexed: 05/03/2023]
Abstract
Computed tomography (CT) helps the radiologist rapidly and correctly detect a person infected with coronavirus disease 2019 (COVID-19) by showing the presence of ground-glass opacity (GGO) in the lung of the infected person. Tracking the evolution of the spread of GGO in the lung of an infected person requires studying more than one image acquired at different times. The various CT images must be registered to identify the evolution of the ground glass in the lung and to facilitate the study and identification of the virus. Since image registration is essentially an optimization problem, we present in this paper a new HPSGWO algorithm for registering CT images of a lung infected with COVID-19. This algorithm is a hybridization of Particle Swarm Optimization (PSO) and the Grey Wolf Optimizer (GWO). The simulation results obtained after applying the algorithm to the test images show that the proposed approach achieved high-precision and robust registration compared to other methods such as GWO, PSO, the Firefly Algorithm (FA), and the Crow Search Algorithm (CSA).
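Treating registration as an optimization problem can be illustrated with a plain PSO searching a 2D translation that minimizes a sum-of-squared-differences cost on toy images. This sketch omits the grey-wolf hybridization of HPSGWO, uses np.roll as the warping step (so only integer shifts are modeled), and all images are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "CT slices": a Gaussian blob and a copy shifted by (5, -3) pixels.
x, y = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
fixed = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 50.0)
moving = np.roll(fixed, shift=(5, -3), axis=(0, 1))

def ssd(shift):
    """Sum-of-squared-differences cost after undoing the candidate shift."""
    sx, sy = int(round(float(shift[0]))), int(round(float(shift[1])))
    return float(np.sum((np.roll(moving, (-sx, -sy), axis=(0, 1)) - fixed) ** 2))

# Plain particle swarm optimization over the 2D shift.
n_particles, n_iters = 20, 60
pos = rng.uniform(-10, 10, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([ssd(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()
for _ in range(n_iters):
    r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -10, 10)
    f = np.array([ssd(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()

print(np.round(gbest).astype(int))  # best shift found; the true shift is (5, -3)
```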
Affiliation(s)
- Hedifa Dida
- Faculty of New Information and Communication Technologies, Department of Electronics and Telecommunications, Kasdi Merbah University, Ouargla, Algeria
- Fella Charif
- Faculty of New Information and Communication Technologies, Department of Electronics and Telecommunications, Kasdi Merbah University, Ouargla, Algeria
- Abderrazak Benchabane
- Faculty of New Information and Communication Technologies, Department of Electronics and Telecommunications, Kasdi Merbah University, Ouargla, Algeria
20
Virtual histology of an entire mouse brain from formalin fixation to paraffin embedding. Part 1: Data acquisition, anatomical feature segmentation, tracking global volume and density changes. J Neurosci Methods 2021; 364:109354. [PMID: 34529981 DOI: 10.1016/j.jneumeth.2021.109354] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2021] [Revised: 09/01/2021] [Accepted: 09/04/2021] [Indexed: 01/14/2023]
Abstract
BACKGROUND Micrometer-resolution neuroimaging with gold-standard conventional histology requires tissue fixation and embedding. The exchange of solvents for the creation of sectionable paraffin blocks modifies tissue density and generates non-uniform brain shrinkage. NEW METHOD We employed synchrotron radiation-based X-ray microtomography for slicing- and label-free virtual histology of the mouse brain at different stages of the standard preparation protocol from formalin fixation via ascending ethanol solutions and xylene to paraffin embedding. Segmentation of anatomical regions allowed us to quantify non-uniform tissue shrinkage. Global and local changes in X-ray absorption gave insight into contrast enhancement for virtual histology. RESULTS The volume of the entire mouse brain was 60%, 56%, and 40% of that in formalin for, respectively, 100% ethanol, xylene, and paraffin. The volume changes of anatomical regions such as the hippocampus, anterior commissure, and ventricles differ from the global volume change. X-ray absorption of the full brain decreased, while local absorption differences increased, resulting in enhanced contrast for virtual histology. These trends were also observed with laboratory microtomography measurements. COMPARISON WITH EXISTING METHODS Microtomography provided sub-10 μm spatial resolution with sufficient density resolution to resolve anatomical structures at each step of the embedding protocol. The spatial resolution of conventional computed tomography and magnetic resonance microscopy is an order of magnitude lower and both do not match the contrast of microtomography over the entire embedding protocol. Unlike feature-to-feature or total volume measurements, our approach allows for calculation of volume change based on segmentation. CONCLUSION We present isotropic micrometer-resolution imaging to quantify morphology and composition changes in a mouse brain during the standard histological preparation. 
The proposed method can be employed to identify the most appropriate embedding medium for anatomical feature visualization, to reveal the basis for the dramatic X-ray contrast enhancement observed in numerous embedded tissues, and to quantify morphological changes during tissue fixation and embedding.
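The segmentation-based volume-change calculation described above reduces to comparing voxel counts of a region across preparation states. A minimal sketch with synthetic masks (the 40% figure below mirrors the paraffin result only by construction):

```python
import numpy as np

def relative_volume(seg_ref, seg_new, voxel_mm3=1.0):
    """Volume of a segmented region in one preparation state as a fraction of
    its reference (formalin-fixed) volume, from voxel counts."""
    return float(seg_new.sum() * voxel_mm3) / float(seg_ref.sum() * voxel_mm3)

# Synthetic masks standing in for a brain region before/after embedding
formalin = np.zeros((50, 50, 50), bool)
formalin[:, :, :40] = True          # 100,000 voxels
paraffin = np.zeros((50, 50, 50), bool)
paraffin[:, :, :16] = True          # 40,000 voxels
print(relative_volume(formalin, paraffin))  # 0.4, i.e. 40% of the formalin volume
```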
21
Qu L, Wan W, Guo K, Liu Y, Tang J, Li X, Wu J. Triple-Input-Unsupervised neural Networks for deformable image registration. Pattern Recognit Lett 2021. [DOI: 10.1016/j.patrec.2021.08.032] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
22
Lawson MJ, Katsamenis OL, Chatelet D, Alzetani A, Larkin O, Haig I, Lackie P, Warner J, Schneider P. Immunofluorescence-guided segmentation of three-dimensional features in micro-computed tomography datasets of human lung tissue. ROYAL SOCIETY OPEN SCIENCE 2021; 8:211067. [PMID: 34737879 PMCID: PMC8564621 DOI: 10.1098/rsos.211067] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/21/2021] [Accepted: 10/08/2021] [Indexed: 06/13/2023]
Abstract
Micro-computed tomography (µCT) provides non-destructive three-dimensional (3D) imaging of soft tissue microstructures. Specific features in µCT images can be identified using correlated two-dimensional (2D) histology images allowing manual segmentation. However, this is very time-consuming and requires specialist knowledge of the tissue and imaging modalities involved. Using a custom-designed µCT system optimized for imaging unstained formalin-fixed paraffin-embedded soft tissues, we imaged human lung tissue at isotropic voxel sizes less than 10 µm. Tissue sections were stained with haematoxylin and eosin or cytokeratin 18 in columnar airway epithelial cells using immunofluorescence (IF), as an exemplar of this workflow. Novel utilization of tissue autofluorescence allowed automatic alignment of 2D microscopy images to the 3D µCT data using scripted co-registration and automated image warping algorithms. Warped IF images, which were accurately aligned with the µCT datasets, allowed 3D segmentation of immunoreactive tissue microstructures in the human lung. Blood vessels were segmented semi-automatically using the co-registered µCT datasets. Correlating 2D IF and 3D µCT data enables accurate identification, localization and segmentation of features in fixed soft lung tissue. Our novel correlative imaging workflow provides faster and more automated 3D segmentation of µCT datasets. This is applicable to the huge range of formalin-fixed paraffin-embedded tissues held in biobanks and archives.
Affiliation(s)
- Matthew J. Lawson
- School of Clinical and Experimental Sciences, Faculty of Medicine, University of Southampton, Southampton, UK
- Orestis L. Katsamenis
- μ-VIS X-ray Imaging Centre, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, UK
- David Chatelet
- School of Clinical and Experimental Sciences, Faculty of Medicine, University of Southampton, Southampton, UK
- Aiman Alzetani
- School of Clinical and Experimental Sciences, Faculty of Medicine, University of Southampton, Southampton, UK
- Oliver Larkin
- Bioengineering Research Group, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, UK
- Ian Haig
- Nikon X-Tek Systems Ltd, Tring, UK
- Peter Lackie
- School of Clinical and Experimental Sciences, Faculty of Medicine, University of Southampton, Southampton, UK
- Jane Warner
- School of Clinical and Experimental Sciences, Faculty of Medicine, University of Southampton, Southampton, UK
- Philipp Schneider
- Bioengineering Research Group, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, UK
- High-Performance Vision Systems, Center for Vision, Automation and Control, AIT Austrian Institute of Technology, Vienna, Austria
23
Stouffer KM, Wang Z, Xu E, Lee K, Lee P, Miller MI, Tward DJ. From Picoscale Pathology to Decascale Disease: Image Registration with a Scattering Transform and Varifolds for Manipulating Multiscale Data. MULTIMODAL LEARNING FOR CLINICAL DECISION SUPPORT : 11TH INTERNATIONAL WORKSHOP, ML-CDS 2021, HELD IN CONJUNCTION WITH MICCAI 2021, STRASBOURG, FRANCE, OCTOBER 1, 2021, PROCEEDINGS. ML-CDS (WORKSHOP) (11TH : 2021 : ONLINE) 2021; 13050:1-11. [PMID: 36283001 PMCID: PMC9582035 DOI: 10.1007/978-3-030-89847-2_1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Advances in neuroimaging have yielded extensive variety in the scale and type of data available. Effective integration of such data promises deeper understanding of anatomy and disease-with consequences for both diagnosis and treatment. Often catered to particular datatypes or scales, current computational tools and mathematical frameworks remain inadequate for simultaneously registering these multiple modes of "images" and statistically analyzing the ensuing menagerie of data. Here, we present (1) a registration algorithm using a "scattering transform" to align high and low resolution images and (2) a varifold-based modeling framework to compute 3D spatial statistics of multiscale data. We use our methods to quantify microscopic tau pathology across macroscopic 3D regions of the medial temporal lobe to address a major challenge in the diagnosis of Alzheimer's Disease-the reliance on invasive methods to detect microscopic pathology.
Affiliation(s)
- Eileen Xu
- Johns Hopkins University, Baltimore, MD 21218, USA
- Karl Lee
- Johns Hopkins University, Baltimore, MD 21218, USA
- Paige Lee
- University of California Los Angeles, Los Angeles, CA 90095, USA
- Daniel J Tward
- University of California Los Angeles, Los Angeles, CA 90095, USA
24
Yang S, Zhao Y, Liao M, Zhang F. An Unsupervised Learning-Based Multi-Organ Registration Method for 3D Abdominal CT Images. SENSORS (BASEL, SWITZERLAND) 2021; 21:6254. [PMID: 34577461 PMCID: PMC8472627 DOI: 10.3390/s21186254] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/24/2021] [Revised: 08/22/2021] [Accepted: 08/26/2021] [Indexed: 12/28/2022]
Abstract
Medical image registration is an essential technique for achieving spatial consistency among the geometric positions of different medical images obtained from single or multiple sensors, such as computed tomography (CT), magnetic resonance (MR), and ultrasound (US) images. In this paper, an improved unsupervised learning-based framework is proposed for multi-organ registration of 3D abdominal CT images. First, coarse-to-fine recursive cascaded network (RCN) modules are embedded into a basic U-net framework to achieve more accurate multi-organ registration results on 3D abdominal CT images. Then, a topology-preserving loss is added to the total loss function to avoid distortion of the predicted transformation field. Four public databases were selected to validate the registration performance of the proposed method. The experimental results show that the proposed method is superior to some existing traditional and deep learning-based methods and is promising for meeting the real-time and high-precision clinical registration requirements of 3D abdominal CT images.
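A topology-preserving loss of the kind mentioned above typically penalizes folding in the predicted transformation, detected through non-positive Jacobian determinants of the displacement field. A 2D numpy sketch of the idea, under our own assumptions rather than the paper's exact formulation:

```python
import numpy as np

def jacobian_det_2d(disp):
    """Jacobian determinant of the map x -> x + u(x) for a displacement field
    `disp` of shape (2, H, W); values <= 0 indicate folding (lost topology)."""
    du_dx = np.gradient(disp[0], axis=0)
    du_dy = np.gradient(disp[0], axis=1)
    dv_dx = np.gradient(disp[1], axis=0)
    dv_dy = np.gradient(disp[1], axis=1)
    return (1.0 + du_dx) * (1.0 + dv_dy) - du_dy * dv_dx

def topology_loss(disp):
    """Penalize only the folded regions of the predicted transformation."""
    return float(np.mean(np.clip(-jacobian_det_2d(disp), 0.0, None)))

identity = np.zeros((2, 32, 32))   # zero displacement: det J == 1 everywhere
print(topology_loss(identity))     # 0.0
```

In a training loop this term would be weighted and added to the similarity loss, discouraging the network from predicting physically implausible, self-intersecting deformations.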
Affiliation(s)
- Shaodi Yang
- School of Automation, Central South University, Changsha 410083, China; (S.Y.); (F.Z.)
- Yuqian Zhao
- School of Automation, Central South University, Changsha 410083, China; (S.Y.); (F.Z.)
- Hunan Xiangjiang Artificial Intelligence Academy, Changsha 410083, China
- Hunan Engineering Research Center of High Strength Fastener Intelligent Manufacturing, Changde 415701, China
- Miao Liao
- School of Computer Science and Engineering, Hunan University of Science and Technology, Xiangtan 411201, China
- Fan Zhang
- School of Automation, Central South University, Changsha 410083, China; (S.Y.); (F.Z.)
- Hunan Xiangjiang Artificial Intelligence Academy, Changsha 410083, China
25
Yang SD, Zhao YQ, Zhang F, Liao M, Yang Z, Wang YJ, Yu LL. An efficient two-step multi-organ registration on abdominal CT via deep-learning based segmentation. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.103027] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
26
Target organ non-rigid registration on abdominal CT images via deep-learning based detection. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102976] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/16/2023]
27
Yang SD, Zhao YQ, Zhang F, Liao M, Yang Z, Wang YJ, Yu LL. An Abdominal Registration Technology for Integration of Nanomaterial Imaging-Aided Diagnosis and Treatment. J Biomed Nanotechnol 2021; 17:952-959. [PMID: 34082880 DOI: 10.1166/jbn.2021.3076] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
Image registration is a key technology in nanomaterial imaging-aided diagnosis and in monitoring the effect of targeted therapy for abdominal diseases. Recently, deep learning-based methods have been increasingly used for large-scale medical image registration because they require far fewer iterations than traditional ones. In this paper, a coarse-to-fine unsupervised learning-based three-dimensional (3D) abdominal CT image registration method is presented. First, an affine transformation is used as an initial step to deal with large deformations between two images. Second, an unsupervised total loss function containing similarity, smoothness, and topology-preservation measures is proposed to achieve better registration performance during convolutional neural network (CNN) training and testing. The experimental results demonstrate that the proposed method achieves average MSE, PSNR, and SSIM values of 0.0055, 22.7950, and 0.8241, respectively, outperforming some existing traditional and unsupervised learning-based methods. Moreover, our method registers 3D abdominal CT images in the shortest time and is expected to become a real-time method for clinical application.
Affiliation(s)
- Shao-Di Yang
- School of Automation, Central South University, Changsha 410083, China
- Yu-Qian Zhao
- School of Automation, Central South University, Changsha 410083, China
- Fan Zhang
- School of Automation, Central South University, Changsha 410083, China
- Miao Liao
- School of Automation, Central South University, Changsha 410083, China
- Zhen Yang
- School of Xiangya Hospital, Central South University, Changsha 410075, China
- Yan-Jin Wang
- School of Xiangya Hospital, Central South University, Changsha 410075, China
- Ling-Li Yu
- School of Automation, Central South University, Changsha 410083, China
28
Riedel Né Steinhoff M, Setsompop K, Mertins A, Börnert P. Segmented simultaneous multi-slice diffusion-weighted imaging with navigated 3D rigid motion correction. Magn Reson Med 2021; 86:1701-1717. [PMID: 33955588 DOI: 10.1002/mrm.28813] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2020] [Revised: 03/29/2021] [Accepted: 03/29/2021] [Indexed: 11/09/2022]
Abstract
PURPOSE To improve the robustness of diffusion-weighted imaging (DWI) data acquired with segmented simultaneous multi-slice (SMS) echo-planar imaging (EPI) against in-plane and through-plane rigid motion. THEORY AND METHODS The proposed algorithm incorporates a 3D rigid motion correction and wavelet denoising into the image reconstruction of segmented SMS-EPI diffusion data. Low-resolution navigators are used to estimate shot-specific diffusion phase corruptions and 3D rigid motion parameters through SMS-to-volume registration. The shot-wise rigid motion and phase parameters are integrated into a SENSE-based full-volume reconstruction for each diffusion direction. The algorithm is compared to a navigated SMS reconstruction without gross motion correction in simulations and in vivo studies with four-fold interleaved 3-SMS diffusion tensor acquisitions. RESULTS Simulations demonstrate high fidelity was achieved in the SMS-to-volume registration, with submillimeter registration errors and improved image reconstruction quality. In vivo experiments validate successful artifact reduction in 3D motion-compromised in vivo scans with a temporal motion resolution of approximately 0.3 s. CONCLUSION This work demonstrates the feasibility of retrospective 3D rigid motion correction from shot navigators for segmented SMS DWI.
Affiliation(s)
- Kawin Setsompop
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, Massachusetts, USA; Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA; Harvard-MIT Health Sciences and Technology, MIT, Cambridge, Massachusetts, USA
- Alfred Mertins
- Institute for Signal Processing, University of Luebeck, Luebeck, Germany
- Peter Börnert
- Philips Research, Hamburg, Germany; Radiology, C.J. Gorter Center for High-Field MRI, Leiden University Medical Center, Leiden, The Netherlands
29
Memiş A, Varlı S, Bilgili F. A novel approach for computerized quantitative image analysis of proximal femur bone shape deformities based on the hip joint symmetry. Artif Intell Med 2021; 115:102057. [PMID: 34001317] [DOI: 10.1016/j.artmed.2021.102057]
Abstract
As a result of most of the bone disorders seen in hip joints, shape deformities occur in the structural form of the hip joint components. Image-based quantitative analysis and assessment of these deformities in bone shapes are very important for the evaluation, treatment, and prognosis of the various hip joint bone disorders. In this article, a novel approach for the image-based computerized quantitative analysis of proximal femur shape deformities is presented. In the proposed approach, shape deformities of the pathological proximal femurs were quantified over the contralateral healthy proximal femur shape structure of the same patient in 2D by taking the hip joint symmetry property of human anatomy into consideration. It is based on the idea that if the right and left proximal femurs in bilateral hip joints are highly symmetrical and also if one of the proximal femurs is healthy and the contralateral one is pathological, the non-overlapping bone shape regions can represent the deformities in pathological proximal femurs when both proximal femurs are registered to overlap each other. In the methodological process of the proposed study, a set of image preprocessing operations was primarily performed on the raw magnetic resonance imaging (MRI) data. Then, the segmented proximal femurs in bilateral hip joint images were automatically aligned with the Iterative Closest Point (ICP) rigid registration method. Following the registration, a set of image postprocessing operations was performed on the images of proximal femurs aligned. In the quantification phase, the bone shape deformities in pathological proximal femurs were quantified simply in terms of the mismatching area in 2D by measuring a shape variation index representing the total bone shape deformity ratio. To evaluate the proposed quantitative shape analysis approach, bilateral hip joints in a total of 13 coronal MRI sections of 13 patients with Legg-Calve-Perthes disease (LCPD) were used. 
Experimental studies have shown that the proposed approach has quite promising results in the quantitative representation of the pathological proximal femur shape deformities. Furthermore, consistent results have been observed for the Waldenström classification stages of the disease. The shape deformity ratios in pathological proximal femurs were quantified as 9.44% (±1.40), 18.38% (±6.30), 24.73% (±12.42), and 27.66% (±10.41), respectively for the Initial, Fragmentation, Reossification, and Remodelling stages of LCPD with the quantification error rates of 0.29% (±0.16), 0.58% (±0.71), 1.12% (±0.82), and 0.80% (±0.98). Additionally, a mean error rate of 0.65% (±0.68) was observed for the quantified shape deformity ratios of all samples.
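The ICP rigid registration step this abstract relies on can be sketched as follows. This is a minimal 2D point-set version, not the authors' implementation: the brute-force nearest-neighbour search, the closed-form Kabsch/Procrustes fit, and the helper names `best_rigid_transform` and `icp` are illustrative assumptions.

```python
import numpy as np

def best_rigid_transform(src, dst):
    # Closed-form least-squares rigid fit (Kabsch/Procrustes):
    # returns R, t such that src @ R.T + t best matches dst.
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp(src, dst, iters=50):
    # Iterative Closest Point: alternate nearest-neighbour matching
    # with closed-form rigid re-alignment until the poses agree.
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        R, t = best_rigid_transform(cur, dst[d2.argmin(axis=1)])
        cur = cur @ R.T + t
    # recover the single rigid transform accumulated over all iterations
    return best_rigid_transform(src, cur)
```

Once the two proximal femur contours are aligned this way, the non-overlapping area of the two binary bone masks yields a mismatch ratio of the kind the study quantifies.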
Affiliation(s)
- Abbas Memiş
- Department of Computer Engineering, Faculty of Electrical and Electronics Engineering, Yıldız Technical University, İstanbul, Turkey.
- Songül Varlı
- Department of Computer Engineering, Faculty of Electrical and Electronics Engineering, Yıldız Technical University, İstanbul, Turkey.
- Fuat Bilgili
- Department of Orthopaedics and Traumatology, İstanbul Faculty of Medicine, İstanbul University, İstanbul, Turkey.
30
Yeung PH, Aliasi M, Papageorghiou AT, Haak M, Xie W, Namburete AIL. Learning to map 2D ultrasound images into 3D space with minimal human annotation. Med Image Anal 2021; 70:101998. [PMID: 33711741] [DOI: 10.1016/j.media.2021.101998]
Abstract
In fetal neurosonography, aligning two-dimensional (2D) ultrasound scans to their corresponding plane in the three-dimensional (3D) space remains a challenging task. In this paper, we propose a convolutional neural network that predicts the position of 2D ultrasound fetal brain scans in 3D atlas space. Instead of purely supervised learning that requires heavy annotations for each 2D scan, we train the model by sampling 2D slices from 3D fetal brain volumes, and target the model to predict the inverse of the sampling process, resembling the idea of self-supervised learning. We propose a model that takes a set of images as input, and learns to compare them in pairs. The pairwise comparison is weighted by the attention module based on its contribution to the prediction, which is learnt implicitly during training. The feature representation for each image is thus computed by incorporating the relative position information to all the other images in the set, and is later used for the final prediction. We benchmark our model on 2D slices sampled from 3D fetal brain volumes at 18-22 weeks' gestational age. Using three evaluation metrics (Euclidean distance, plane angles, and normalized cross correlation) that account for both the geometric and appearance discrepancy between the ground truth and prediction, our model outperforms a baseline model by as much as 23% in all metrics as the number of input images increases. We further demonstrate that our model generalizes to (i) real 2D standard transthalamic plane images, achieving performance comparable to human annotations, as well as (ii) video sequences of 2D freehand fetal brain scans.
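The self-supervised data generation described above — sample slices from a volume, then predict the sampling position — can be sketched in a simplified axis-aligned form. The real method samples arbitrarily oriented planes; the helper `sample_slice_pairs` and the normalisation scheme are illustrative assumptions.

```python
import numpy as np

def sample_slice_pairs(volume, n, rng):
    # Draw n axial slices from a 3D volume; the known slice index
    # becomes the regression target, so no manual annotation is needed.
    z = rng.integers(0, volume.shape[0], size=n)
    images = volume[z]                       # (n, H, W) network inputs
    targets = z / (volume.shape[0] - 1)      # normalised positions in [0, 1]
    return images, targets
```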
Affiliation(s)
- Pak-Hei Yeung
- Department of Engineering Science, Institute of Biomedical Engineering, University of Oxford, Oxford, United Kingdom.
- Moska Aliasi
- Division of Fetal Medicine, Department of Obstetrics, Leiden University Medical Center, 2333 ZA Leiden, The Netherlands
- Aris T Papageorghiou
- Nuffield Department of Obstetrics and Gynaecology, University of Oxford, Oxford, United Kingdom
- Monique Haak
- Division of Fetal Medicine, Department of Obstetrics, Leiden University Medical Center, 2333 ZA Leiden, The Netherlands
- Weidi Xie
- Department of Engineering Science, Institute of Biomedical Engineering, University of Oxford, Oxford, United Kingdom; Visual Geometry Group, Department of Engineering Science, University of Oxford, Oxford, United Kingdom
- Ana I L Namburete
- Department of Engineering Science, Institute of Biomedical Engineering, University of Oxford, Oxford, United Kingdom
31
Wang X, Zeng W, Yang X, Zhang Y, Fang C, Zeng S, Han Y, Fei P. Bi-channel image registration and deep-learning segmentation (BIRDS) for efficient, versatile 3D mapping of mouse brain. eLife 2021; 10:e63455. [PMID: 33459255] [PMCID: PMC7840180] [DOI: 10.7554/elife.63455]
Abstract
We have developed open-source software called bi-channel image registration and deep-learning segmentation (BIRDS) for the mapping and analysis of 3D microscopy data and applied this to the mouse brain. The BIRDS pipeline includes image preprocessing, bi-channel registration, automatic annotation, creation of a 3D digital frame, high-resolution visualization, and expandable quantitative analysis. This new bi-channel registration algorithm is adaptive to various types of whole-brain data from different microscopy platforms and shows dramatically improved registration accuracy. Additionally, as this platform combines registration with neural networks, its improved function relative to the other platforms lies in the fact that the registration procedure can readily provide training data for network construction, while the trained neural network can efficiently segment incomplete/defective brain data that is otherwise difficult to register. Our software is thus optimized to enable either minute-timescale registration-based segmentation of cross-modality, whole-brain datasets or real-time inference-based image segmentation of various brain regions of interest. Jobs can be easily submitted and implemented via a Fiji plugin that can be adapted to most computing environments.
Affiliation(s)
- Xuechun Wang
- School of Optical and Electronic Information, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Weilin Zeng
- School of Optical and Electronic Information, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Xiaodan Yang
- School of Basic Medicine, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Yongsheng Zhang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Chunyu Fang
- School of Optical and Electronic Information, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Shaoqun Zeng
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Yunyun Han
- School of Basic Medicine, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Peng Fei
- School of Optical and Electronic Information, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
32
Chen Z, Xu Z, Gui Q, Yang X, Cheng Q, Hou W, Ding M. Self-learning based medical image representation for rigid real-time and multimodal slice-to-volume registration. Inf Sci (N Y) 2020. [DOI: 10.1016/j.ins.2020.06.072]
33
Singh A, Salehi SSM, Gholipour A. Deep Predictive Motion Tracking in Magnetic Resonance Imaging: Application to Fetal Imaging. IEEE Trans Med Imaging 2020; 39:3523-3534. [PMID: 32746102] [PMCID: PMC7787194] [DOI: 10.1109/tmi.2020.2998600]
Abstract
Fetal magnetic resonance imaging (MRI) is challenged by uncontrollable, large, and irregular fetal movements. It is, therefore, performed through visual monitoring of fetal motion and repeated acquisitions to ensure diagnostic-quality images are acquired. Nevertheless, visual monitoring of fetal motion based on displayed slices, and navigation at the level of stacks-of-slices is inefficient. The current process is highly operator-dependent, increases scanner usage and cost, and significantly increases the length of fetal MRI scans which makes them hard to tolerate for pregnant women. To help build automatic MRI motion tracking and navigation systems to overcome the limitations of the current process and improve fetal imaging, we have developed a new real-time image-based motion tracking method based on deep learning that learns to predict fetal motion directly from acquired images. Our method is based on a recurrent neural network, composed of spatial and temporal encoder-decoders, that infers motion parameters from anatomical features extracted from sequences of acquired slices. We compared our trained network on held-out test sets (including data with different characteristics, e.g. different fetuses scanned at different ages, and motion trajectories recorded from volunteer subjects) with networks designed for estimation as well as methods adopted to make predictions. The results show that our method outperformed alternative techniques, and achieved real-time performance with average errors of 3.5 and 8 degrees for the estimation and prediction tasks, respectively. Our real-time deep predictive motion tracking technique can be used to assess fetal movements, to guide slice acquisitions, and to build navigation systems for fetal MRI.
34
Yang C, Huang Q, Ji X, Bai J. Fusion of Multiple-angles Intraoperative US Images and Pretreatment MR Images for USgHIFU Treatment of Uterine Fibroid: Retrospective Evaluation Based on Clinical Dataset. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:5236-5239. [PMID: 33019165] [DOI: 10.1109/embc44109.2020.9175248]
Abstract
High-intensity focused ultrasound (HIFU) has been widely used for treatment of uterine fibroids. However, due to the limited resolution of ultrasound images in deep organs, the guidance of ultrasound-guided HIFU (USgHIFU) treatment depends greatly on clinicians' experience with US images. To address this issue, fusion of intraoperative US images and pretreatment MR images has been proposed. Contour segmentation and combination of multiple-angle 2D US images are performed to obtain 3D points along the contour of the uterus. The iterative closest point (ICP) algorithm based on prior knowledge is used to register these point sets. MR and US images of six treated patients are used for evaluation. The mean distance error (MDE) of our algorithm is 1.71±0.59 mm, and the average running time is 0.18 s. The results verify the feasibility of fusing MR images and US images for USgHIFU guidance. In addition, this method may also have potential for post-ablation evaluation with follow-up MR images.
35
Robust and precise isotropic scaling registration algorithm using bi-directional distance and correntropy. Pattern Recognit Lett 2020. [DOI: 10.1016/j.patrec.2020.07.026] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
36
Sui Y, Afacan O, Gholipour A, Warfield SK. SLIMM: Slice localization integrated MRI monitoring. Neuroimage 2020; 223:117280. [PMID: 32853815] [PMCID: PMC7735257] [DOI: 10.1016/j.neuroimage.2020.117280]
Abstract
Functional MRI (fMRI) is extremely challenging to perform in subjects who move because subject motion disrupts blood oxygenation level dependent (BOLD) signal measurement. It has become common to use retrospective framewise motion detection and censoring in fMRI studies to eliminate artifacts arising from motion. Data censoring results in significant loss of data and statistical power unless the data acquisition is extended to acquire more data not corrupted by motion. Acquiring more data than is necessary leads to longer than necessary scan duration, which is more expensive and may lead to additional subject non-compliance. Therefore, it is well established that real-time prospective motion monitoring is crucial to ensure data quality and reduce imaging costs. In addition, real-time monitoring of motion allows for feedback to the operator and the subject during the acquisition, to enable intervention to reduce the subject motion. The most widely used form of motion monitoring for fMRI is based on volume-to-volume registration (VVR), which quantifies motion as the misalignment between subsequent volumes. However, motion is not constrained to occur only at the boundaries of volume acquisition, but instead may occur at any time. Consequently, each slice of an fMRI acquisition may be displaced by motion, and assessment of whole volume to volume motion may be insensitive to both intra-volume and inter-volume motion that is revealed by displacement of the slices. We developed the first slice-by-slice self-navigated motion monitoring system for fMRI, built around a real-time slice-to-volume registration (SVR) algorithm. Our real-time SVR algorithm, which is the core of the system, uses a local image patch-based matching criterion along with a Levenberg-Marquardt optimizer, all accelerated via symmetric multi-processing, with interleaved and simultaneous multi-slice acquisition schemes.
Extensive experimental results on real motion data demonstrated that our fast motion monitoring system, named Slice Localization Integrated MRI Monitoring (SLIMM), provides more accurate motion measurements than a VVR based approach. Therefore, SLIMM offers improved online motion monitoring which is particularly important in fMRI for challenging patient populations. Real-time motion monitoring is crucial for online data quality control and assurance, for enabling feedback to the subject and the operator to act to mitigate motion, and in adaptive acquisition strategies that aim to ensure enough data of sufficient quality is acquired without acquiring excess data.
Affiliation(s)
- Yao Sui
- Computational Radiology Laboratory, Boston Children's Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
- Onur Afacan
- Computational Radiology Laboratory, Boston Children's Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
- Ali Gholipour
- Computational Radiology Laboratory, Boston Children's Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
- Simon K Warfield
- Computational Radiology Laboratory, Boston Children's Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
- Computational Radiology Laboratory, Boston Children's Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
37
Mancini M, Casamitjana A, Peter L, Robinson E, Crampsie S, Thomas DL, Holton JL, Jaunmuktane Z, Iglesias JE. A multimodal computational pipeline for 3D histology of the human brain. Sci Rep 2020; 10:13839. [PMID: 32796937] [PMCID: PMC7429828] [DOI: 10.1038/s41598-020-69163-z]
Abstract
Ex vivo imaging enables analysis of the human brain at a level of detail that is not possible in vivo with MRI. In particular, histology can be used to study brain tissue at the microscopic level, using a wide array of different stains that highlight different microanatomical features. Complementing MRI with histology has important applications in ex vivo atlas building and in modeling the link between microstructure and macroscopic MR signal. However, histology requires sectioning tissue, hence distorting its 3D structure, particularly in larger human samples. Here, we present an open-source computational pipeline to produce 3D consistent histology reconstructions of the human brain. The pipeline relies on a volumetric MRI scan that serves as undistorted reference, and on an intermediate imaging modality (blockface photography) that bridges the gap between MRI and histology. We present results on 3D histology reconstruction of whole human hemispheres from two donors.
Affiliation(s)
- Matteo Mancini
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, UK.
- Department of Neuroscience, Brighton and Sussex Medical School, University of Sussex, Brighton, UK.
- CUBRIC, Cardiff University, Cardiff, UK.
- NeuroPoly Lab, Polytechnique Montreal, Montreal, Canada.
- Adrià Casamitjana
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Loic Peter
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Eleanor Robinson
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Queen Square Brain Bank for Neurological Disorders, UCL Queen Square Institute of Neurology, University College London, London, UK
- Shauna Crampsie
- Queen Square Brain Bank for Neurological Disorders, UCL Queen Square Institute of Neurology, University College London, London, UK
- David L Thomas
- Neuroradiological Academic Unit, UCL Queen Square Institute of Neurology, University College London, London, UK
- Leonard Wolfson Experimental Neurology Centre, UCL Queen Square Institute of Neurology, University College London, London, UK
- Janice L Holton
- Queen Square Brain Bank for Neurological Disorders, UCL Queen Square Institute of Neurology, University College London, London, UK
- Zane Jaunmuktane
- Queen Square Brain Bank for Neurological Disorders, UCL Queen Square Institute of Neurology, University College London, London, UK
- Juan Eugenio Iglesias
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, UK.
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA.
- Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology, Cambridge, MA, USA.
38
Ma J, Jiang X, Fan A, Jiang J, Yan J. Image Matching from Handcrafted to Deep Features: A Survey. Int J Comput Vis 2020. [DOI: 10.1007/s11263-020-01359-2]
Abstract
As a fundamental and critical task in various visual applications, image matching can identify and then correspond the same or similar structure/content from two or more images. Over the past decades, a growing number and diversity of methods have been proposed for image matching, particularly with the development of deep learning techniques in recent years. However, this leaves several open questions about which method would be a suitable choice for specific applications with respect to different scenarios and task requirements, and about how to design better image matching methods with superior performance in accuracy, robustness and efficiency. This encourages us to conduct a comprehensive and systematic review and analysis of these classical and latest techniques. Following the feature-based image matching pipeline, we first introduce feature detection, description, and matching techniques from handcrafted methods to trainable ones and provide an analysis of the development of these methods in theory and practice. Secondly, we briefly introduce several typical image matching-based applications for a comprehensive understanding of the significance of image matching. In addition, we also provide a comprehensive and objective comparison of these classical and latest techniques through extensive experiments on representative datasets. Finally, we conclude with the current status of image matching technologies and deliver insightful discussions and prospects for future works. This survey can serve as a reference for (but not limited to) researchers and engineers in image matching and related fields.
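As one concrete instance of the matching stage this survey covers, nearest-neighbour descriptor matching with Lowe's ratio test can be sketched as follows. The brute-force search, the 0.8 threshold, and the function name `ratio_test_match` are illustrative choices, not taken from the survey itself.

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.8):
    # Match each descriptor in desc_a to its nearest neighbour in desc_b,
    # keeping a match only when the best distance is clearly smaller than
    # the second-best (Lowe's ratio test), which rejects ambiguous pairs.
    matches = []
    for i, d in enumerate(desc_a):
        dist = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dist)[:2]
        if dist[j] < ratio * dist[k]:
            matches.append((i, j))
    return matches
```

The surviving (i, j) index pairs would then feed a geometric verification step such as RANSAC, as is standard in feature-based pipelines.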
39
Chen Y, He F, Li H, Zhang D, Wu Y. A full migration BBO algorithm with enhanced population quality bounds for multimodal biomedical image registration. Appl Soft Comput 2020. [DOI: 10.1016/j.asoc.2020.106335]
40
Bulk M, Abdelmoula WM, Geut H, Wiarda W, Ronen I, Dijkstra J, van der Weerd L. Quantitative MRI and laser ablation-inductively coupled plasma-mass spectrometry imaging of iron in the frontal cortex of healthy controls and Alzheimer's disease patients. Neuroimage 2020; 215:116808. [DOI: 10.1016/j.neuroimage.2020.116808]
41
Chai Y, Xu B, Zhang K, Lepore N, Wood J. MRI restoration using edge-guided adversarial learning. IEEE Access 2020; 8:83858-83870. [PMID: 33747672] [PMCID: PMC7977797] [DOI: 10.1109/access.2020.2992204]
Abstract
Magnetic resonance imaging (MRI) images acquired as multislice two-dimensional (2D) images present challenges when reformatted in orthogonal planes due to sparser sampling in the through-plane direction. Restoring the "missing" through-plane slices, or regions of an MRI image damaged by acquisition artifacts, can be modeled as an image imputation task. In this work, we consider the damaged image data or missing through-plane slices as image masks and propose an edge-guided generative adversarial network to restore brain MRI images. Inspired by the procedure of image inpainting, our proposed method decouples image repair into two stages: edge connection and contrast completion, both of which use generative adversarial networks (GANs). We trained and tested on a dataset from the Human Connectome Project to evaluate our method for thick-slice imputation, while we tested artifact correction on clinical data and simulated datasets. Our Edge-Guided GAN had superior PSNR, SSIM, conspicuity and signal texture compared to traditional imputation tools, the Context Encoder and the Densely Connected Super Resolution Network with GAN (DCSRN-GAN). The proposed network may improve utilization of clinical 2D scans for 3D atlas generation and big-data comparative studies of brain morphometry.
Affiliation(s)
- Yaqiong Chai
- Department of Biomedical Engineering, University of Southern California, CA, USA
- CIBORG lab, Department of Radiology, Children’s Hospital Los Angeles, CA, USA
- Botian Xu
- Department of Biomedical Engineering, University of Southern California, CA, USA
- Kangning Zhang
- Department of Electrical Engineering, University of Southern California, CA, USA
- Natasha Lepore
- Department of Biomedical Engineering, University of Southern California, CA, USA
- CIBORG lab, Department of Radiology, Children’s Hospital Los Angeles, CA, USA
- John Wood
- Department of Biomedical Engineering, University of Southern California, CA, USA
- Division of Cardiology, Children’s Hospital Los Angeles, CA, USA
42
Learning deformable registration of medical images with anatomical constraints. Neural Netw 2020; 124:269-279. [DOI: 10.1016/j.neunet.2020.01.023]
43
Automatic Histogram Specification for Glioma Grading Using Multicenter Data. J Healthc Eng 2020; 2019:9414937. [PMID: 31934325] [PMCID: PMC6942805] [DOI: 10.1155/2019/9414937]
Abstract
Multicenter sharing is an effective way to increase the data size for glioma research, but data inconsistency among institutions hinders its efficiency. This paper proposes histogram specification with automatic selection of reference frames (HSASR) for magnetic resonance images to alleviate this problem. The selection of reference frames is performed automatically by an optimized grid search strategy with coarse and fine search. The search range is first narrowed by a coarse search over intraglioma samples, and then the suitable reference frame in the histogram is selected by a fine search within the sample chosen by the coarse search. Validation experiments are conducted on two datasets, GliomaHPPH2018 and BraTS2017, to perform glioma grading. The results demonstrate the high performance of the proposed method. On the mixed dataset, the average AUC, accuracy, sensitivity, and specificity are 0.9786, 94.13%, 94.64%, and 93.00%, respectively. This is about 15% higher on all indicators compared with results without HSASR, and has a slight advantage over the result of a reference frame manually selected by radiologists. The results show that our method can effectively alleviate multicenter data inconsistencies and improve the performance of the prediction model.
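Histogram specification itself — remapping one image's intensity distribution onto a chosen reference — can be sketched as below. This is a minimal sketch of the classic CDF-matching step only; the paper's actual contribution, the automatic reference-frame selection, is not reproduced, and the helper name `match_histogram` is an assumption.

```python
import numpy as np

def match_histogram(source, reference):
    # Remap source intensities so their empirical CDF matches the
    # reference image's CDF (classic histogram specification).
    s_vals, s_idx, s_cnt = np.unique(source.ravel(),
                                     return_inverse=True, return_counts=True)
    r_vals, r_cnt = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / source.size
    r_cdf = np.cumsum(r_cnt) / reference.size
    # invert the reference CDF at each source quantile
    matched = np.interp(s_cdf, r_cdf, r_vals)
    return matched[s_idx].reshape(source.shape)
```

In a multicenter setting, every scan would be passed through this mapping against the automatically selected reference frame before features are extracted for grading.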
44
Xing Q, Chitnis P, Sikdar S, Alshiek J, Shobeiri SA, Wei Q. M3VR - A multi-stage, multi-resolution, and multi-volumes-of-interest volume registration method applied to 3D endovaginal ultrasound. PLoS One 2019; 14:e0224583. [PMID: 31751356] [PMCID: PMC6872108] [DOI: 10.1371/journal.pone.0224583]
Abstract
Heterogeneity of echo-texture and lack of sharply delineated tissue boundaries in diagnostic ultrasound images make three-dimensional (3D) registration challenging, especially when the volumes to be registered are considerably different due to local changes. We implemented a novel computational method, a Multi-stage, Multi-resolution, and Multi-volumes-of-interest Volume Registration method (M3VR), that optimally registers volumetric ultrasound image data containing significant and local anatomical differences. A single-region registration is optimized first for a close initial alignment to avoid convergence to a locally optimal solution. Multiple sub-volumes of interest can then be selected as target alignment regions to achieve confident consistency across the volume. Finally, a multi-resolution rigid registration is performed on these sub-volumes, associated with different weights in the cost function. We applied the method on 3D endovaginal ultrasound image data acquired from patients during biopsy procedures of the pelvic floor muscle. Systematic assessment of our proposed method through cross validation demonstrated its accuracy and robustness. The algorithm can also be applied on medical imaging data of other modalities for which traditional rigid registration methods would fail.
Affiliation(s)
- Qi Xing
- Department of Computer Science, George Mason University, Fairfax, Virginia, United States of America
- The School of Information Science and Technology, Southwest Jiaotong University, Sichuan, China
- Parag Chitnis
- Department of Bioengineering, George Mason University, Fairfax, Virginia, United States of America
- Siddhartha Sikdar
- Department of Bioengineering, George Mason University, Fairfax, Virginia, United States of America
- Jonia Alshiek
- Department of Obstetrics & Gynecology, INOVA Health System, Falls Church, Virginia, United States of America
- S. Abbas Shobeiri
- Department of Bioengineering, George Mason University, Fairfax, Virginia, United States of America
- Department of Obstetrics & Gynecology, INOVA Health System, Falls Church, Virginia, United States of America
- Qi Wei
- Department of Bioengineering, George Mason University, Fairfax, Virginia, United States of America
45
Shen D, Lin Y, Ren Z, Li Q. Robust and efficient GMM-based free-form parts registration via bi-directional distance. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.04.046] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
46
Maknojia S, Churchill NW, Schweizer TA, Graham SJ. Resting State fMRI: Going Through the Motions. Front Neurosci 2019; 13:825. [PMID: 31456656 PMCID: PMC6700228 DOI: 10.3389/fnins.2019.00825] [Citation(s) in RCA: 40] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2019] [Accepted: 07/23/2019] [Indexed: 11/19/2022] Open
Abstract
Resting state functional magnetic resonance imaging (rs-fMRI) has become an indispensable tool in neuroscience research. Despite this, rs-fMRI signals are easily contaminated by artifacts arising from movement of the head during data collection. The artifacts can be problematic even for motions on the millimeter scale, with complex spatiotemporal properties that can lead to substantial errors in functional connectivity estimates. Effective correction methods must be employed, therefore, to distinguish true functional networks from motion-related noise. Research over the last three decades has produced numerous correction methods, many of which must be applied in combination to achieve satisfactory data quality. Subject instruction, training, and mild restraints are helpful at the outset, but usually insufficient. Improvements come from applying multiple motion correction algorithms retrospectively after rs-fMRI data are collected, although residual artifacts can still remain in cases of elevated motion, which are especially prevalent in patient populations. Although not commonly adopted at present, “real-time” correction methods are emerging that can be combined with retrospective methods and that promise better correction and increased rs-fMRI signal sensitivity. While the search for the ideal motion correction protocol continues, rs-fMRI research will benefit from good disclosure practices, such as: (1) reporting motion-related quality control metrics to provide better comparison between studies; and (2) including motion covariates in group-level analyses to limit the extent of motion-related confounds when studying group differences.
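One of the quality-control metrics alluded to in point (1), framewise displacement, can be computed directly from the six rigid-body realignment parameters; the sketch below follows the common convention of converting rotations to millimeters of arc on a 50 mm sphere, and the function name and array layout are assumptions rather than a specific toolbox's API:

```python
import numpy as np

def framewise_displacement(params, head_radius=50.0):
    """Framewise displacement from rigid-body realignment parameters.

    `params` is (n_volumes, 6): three translations in mm followed by three
    rotations in radians per volume. Rotations are converted to arc length
    on a sphere of `head_radius` mm, and FD is the sum of absolute
    volume-to-volume changes across all six parameters.
    """
    params = np.asarray(params, dtype=float)
    diffs = np.abs(np.diff(params, axis=0))   # change between adjacent volumes
    diffs[:, 3:] *= head_radius               # radians -> mm of arc
    fd = diffs.sum(axis=1)
    return np.concatenate([[0.0], fd])        # FD of the first volume is 0
```

Reporting a summary of this trace (e.g., mean FD, or the fraction of volumes above a threshold) is one concrete way to implement the disclosure practice the review recommends.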
Affiliation(s)
- Sanam Maknojia
- Physical Sciences Platform, Sunnybrook Research Institute, Sunnybrook Health Sciences Centre, Toronto, ON, Canada
- Nathan W Churchill
- Keenan Research Centre for Biomedical Science, St. Michael's Hospital, Toronto, ON, Canada
- Tom A Schweizer
- Keenan Research Centre for Biomedical Science, St. Michael's Hospital, Toronto, ON, Canada; Division of Neurosurgery, Faculty of Medicine, University of Toronto, Toronto, ON, Canada; Institute of Biomaterials and Biomedical Engineering, Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- S J Graham
- Physical Sciences Platform, Sunnybrook Research Institute, Sunnybrook Health Sciences Centre, Toronto, ON, Canada; Department of Medical Biophysics, Faculty of Medicine, University of Toronto, Toronto, ON, Canada
47
Ferrante E, Dokania PK, Silva RM, Paragios N. Weakly Supervised Learning of Metric Aggregations for Deformable Image Registration. IEEE J Biomed Health Inform 2019; 23:1374-1384. [DOI: 10.1109/jbhi.2018.2869700] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
48
Sinha A, Billings SD, Reiter A, Liu X, Ishii M, Hager GD, Taylor RH. The deformable most-likely-point paradigm. Med Image Anal 2019; 55:148-164. [PMID: 31078111 PMCID: PMC6681672 DOI: 10.1016/j.media.2019.04.013] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2018] [Revised: 04/22/2019] [Accepted: 04/30/2019] [Indexed: 11/30/2022]
Abstract
In this paper, we present three deformable registration algorithms designed within a paradigm that uses 3D statistical shape models to accomplish two tasks simultaneously: 1) register point features from previously unseen data to a statistically derived shape (e.g., mean shape), and 2) deform the statistically derived shape to estimate the shape represented by the point features. This paradigm, called the deformable most-likely-point paradigm, is motivated by the idea that generative shape models built from available data can be used to estimate previously unseen data. We developed three deformable registration algorithms within this paradigm using statistical shape models built from reliably segmented objects with correspondences. Results from several experiments show that our algorithms produce accurate registrations and reconstructions in a variety of applications with errors up to CT resolution on medical datasets. Our code is available at https://github.com/AyushiSinha/cisstICP.
Affiliation(s)
- Ayushi Sinha
- Department of Computer Science, The Johns Hopkins University, Baltimore, MD, USA
- Seth D Billings
- Johns Hopkins University Applied Physics Laboratory, Laurel, MD, USA
- Austin Reiter
- Department of Computer Science, The Johns Hopkins University, Baltimore, MD, USA
- Xingtong Liu
- Department of Computer Science, The Johns Hopkins University, Baltimore, MD, USA
- Masaru Ishii
- Department of Otolaryngology - Head and Neck Surgery, Johns Hopkins Medical Institutions, Baltimore, MD, USA
- Gregory D Hager
- Department of Computer Science, The Johns Hopkins University, Baltimore, MD, USA
- Russell H Taylor
- Department of Computer Science, The Johns Hopkins University, Baltimore, MD, USA
49
Pohlman RM, Turney MR, Wu PH, Brace CL, Ziemlewicz TJ, Varghese T. Two-dimensional ultrasound-computed tomography image registration for monitoring percutaneous hepatic intervention. Med Phys 2019; 46:2600-2609. [PMID: 31009079 DOI: 10.1002/mp.13554] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2018] [Revised: 04/14/2019] [Accepted: 04/15/2019] [Indexed: 01/03/2023] Open
Abstract
PURPOSE Deformable registration of ultrasound (US) and contrast-enhanced computed tomography (CECT) images is essential for quantitative comparison of ablation boundaries and dimensions determined using these modalities. This comparison matters because stiffness-based imaging using US has become popular and offers a nonionizing, cost-effective modality for monitoring minimally invasive microwave ablation procedures. A manual registration method is presented that performs the required CT-US image registration. METHODS The two-dimensional (2D) virtual CT image plane corresponding to the clinical US B-mode image was obtained by "virtually slicing" the 3D CT volume along the plane containing non-anatomical landmarks, namely points along the microwave ablation antenna. The initial slice plane was generated using the vector obtained by rotating the normal vector of the transverse (i.e., xz) plane through the angle subtended by the antenna. This plane was then further rotated about the ablation antenna and shifted along the direction of the normal vector until anatomical structures, such as the liver surface and vasculature, appeared similar on both the CT virtual slice and the US B-mode image; this was done for 20 patients. Finally, an affine transformation was estimated using anatomic and non-anatomic landmarks to account for distortion between the colocated CT virtual slice and US B-mode image, yielding the final registered CT virtual slice. Registration accuracy was measured as the Euclidean distance between corresponding registered points on the CT and US B-mode images. RESULTS The mean ± SD of the affine-transformed registration error was 1.85 ± 2.14 mm, computed from 20 coregistered data sets. CONCLUSIONS Our results demonstrate the ability to obtain 2D virtual CT slices registered to clinical US B-mode images. The use of both anatomical and non-anatomical landmarks results in accurate registration, useful for validating ablative margins and for comparison with electrode displacement elastography-based images.
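The final landmark-based affine step can be illustrated with a least-squares fit; this is a generic 2D sketch under assumed point-array conventions, not the authors' implementation, and it also computes the mean Euclidean registration error of the kind quoted in the results:

```python
import numpy as np

def fit_affine_2d(src_pts, dst_pts):
    """Least-squares 2D affine transform mapping src_pts onto dst_pts.

    Needs at least 3 non-collinear point pairs. Returns a 3x2 matrix M so
    that [x, y, 1] @ M gives the transformed point.
    """
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    A = np.hstack([src, np.ones((src.shape[0], 1))])   # (n, 3): [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)        # solve A @ M ~= dst
    return M

def apply_affine_2d(M, pts):
    pts = np.asarray(pts, dtype=float)
    return np.hstack([pts, np.ones((pts.shape[0], 1))]) @ M

def registration_error(M, src_pts, dst_pts):
    """Mean Euclidean distance between transformed source and target points."""
    d = apply_affine_2d(M, src_pts) - np.asarray(dst_pts, dtype=float)
    return float(np.linalg.norm(d, axis=1).mean())
```

With noise-free corresponding landmarks the fit is exact; with real US/CT landmark picks, the residual of this fit is precisely the registration error statistic reported above.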
Affiliation(s)
- Robert M Pohlman
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI, 53706, USA
- Michael R Turney
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI, 53706, USA
- Po-Hung Wu
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI, 53706, USA
- Christopher L Brace
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI, 53706, USA
- Timothy J Ziemlewicz
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI, 53706, USA
- Tomy Varghese
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI, 53706, USA
50
Abdelmoula WM, Regan MS, Lopez BGC, Randall EC, Lawler S, Mladek AC, Nowicki MO, Marin BM, Agar JN, Swanson KR, Kapur T, Sarkaria JN, Wells W, Agar NYR. Automatic 3D Nonlinear Registration of Mass Spectrometry Imaging and Magnetic Resonance Imaging Data. Anal Chem 2019; 91:6206-6216. [PMID: 30932478 DOI: 10.1021/acs.analchem.9b00854] [Citation(s) in RCA: 36] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/25/2023]
Abstract
Multimodal integration between mass spectrometry imaging (MSI) and radiology-established modalities such as magnetic resonance imaging (MRI) would allow the investigation of key questions in complex biological systems such as the central nervous system. Such integration would provide complementary multiscale data to bridge the gap between molecular and anatomical phenotypes, potentially revealing new insights into molecular mechanisms underlying anatomical pathologies presented on MRI. Automatic coregistration between 3D MSI and MRI is a computationally challenging process due to dimensional complexity, MSI data sparsity, lack of direct spatial correspondences, and nonlinear tissue deformation. Here, we present a new computational approach based on stochastic neighbor embedding to nonlinearly align 3D MSI to MRI data, identify and reconstruct biologically relevant molecular patterns in 3D, and fuse the MSI datacube to the MRI space. We demonstrate our method using multimodal high-spectral-resolution matrix-assisted laser desorption ionization (MALDI) 9.4 T MSI and 7 T in vivo MRI data, acquired from a patient-derived xenograft mouse brain model of glioblastoma following administration of the EGFR inhibitor erlotinib. Results show the distributions of identified molecular ions of erlotinib, a phosphatidylcholine lipid, and cholesterol, reconstructed in 3D and mapped to the MRI space. The registration quality was evaluated on two normal mouse brains using the Dice coefficient for the brainstem, hippocampus, and cortex regions. The method is generic and can therefore be applied to hyperspectral images from different mass spectrometers and integrated with other established in vivo imaging modalities such as computed tomography (CT) and positron emission tomography (PET).
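The Dice coefficient used here to score regional overlap after registration is straightforward to compute from binary masks; the function below is a generic sketch (its name and the empty-mask convention are assumptions, not tied to the paper's code):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks.

    Returns 2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap)
    to 1 (identical masks).
    """
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

Applied per anatomical label (brainstem, hippocampus, cortex), this yields one overlap score per region, which is how registration quality is summarized above.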
Affiliation(s)
- Walid M Abdelmoula
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts 02115, United States
- Michael S Regan
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts 02115, United States
- Begona G C Lopez
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts 02115, United States
- Elizabeth C Randall
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts 02115, United States
- Sean Lawler
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts 02115, United States
- Ann C Mladek
- Department of Radiation Oncology, Mayo Clinic, 200 First Street SW, Rochester, Minnesota 55902, United States
- Michal O Nowicki
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts 02115, United States
- Bianca M Marin
- Department of Radiation Oncology, Mayo Clinic, 200 First Street SW, Rochester, Minnesota 55902, United States
- Jeffrey N Agar
- Department of Chemistry and Chemical Biology, Northeastern University, 412 TF (140 The Fenway), Boston, Massachusetts 02111, United States
- Kristin R Swanson
- Mathematical NeuroOncology Lab, Department of Neurosurgery, Mayo Clinic, 5777 East Mayo Boulevard, Phoenix, Arizona 85054, United States
- Tina Kapur
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts 02115, United States
- Jann N Sarkaria
- Department of Radiation Oncology, Mayo Clinic, 200 First Street SW, Rochester, Minnesota 55902, United States
- William Wells
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts 02115, United States; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, United States
- Nathalie Y R Agar
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts 02115, United States; Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts 02115, United States; Department of Cancer Biology, Dana-Farber Cancer Institute, Harvard Medical School, Boston, Massachusetts 02115, United States