1. Zhou Z, Wang S, Hu J, Liu A, Qian X, Geng C, Zheng J, Chen G, Ji J, Dai Y. Unsupervised registration for liver CT-MR images based on the multiscale integrated spatial-weight module and dual similarity guidance. Comput Med Imaging Graph 2023; 108:102260. [PMID: 37343325 DOI: 10.1016/j.compmedimag.2023.102260]
Abstract
PURPOSE Multimodal registration is a key task in medical image analysis. Owing to the large differences between multimodal images in intensity scale and texture pattern, designing distinctive similarity metrics to guide deep learning-based multimodal image registration is a great challenge. Moreover, because of their limited receptive field, existing deep learning-based methods are mainly suited to small deformations and struggle with large ones. To address these issues, we present an unsupervised multimodal image registration method based on a multiscale integrated spatial-weight module and dual similarity guidance. METHODS In this method, a U-shaped network with our multiscale integrated spatial-weight module is embedded into a multi-resolution image registration architecture to achieve end-to-end large-deformation registration. The spatial-weight module effectively highlights regions with large deformation and aggregates discriminative features, while the multi-resolution architecture further helps solve the network's optimization problem in a coarse-to-fine manner. Furthermore, we introduce a loss function based on dual similarity, representing both global gray-scale similarity and local feature similarity, to optimize the unsupervised multimodal registration network. RESULTS We verified the effectiveness of the proposed method on liver CT-MR images. Experimental results indicate that the proposed method achieves the best DSC and TRE values (92.70 ± 1.75% and 6.52 ± 2.94 mm) compared with other state-of-the-art registration algorithms. CONCLUSION The proposed method can accurately estimate large deformation fields by aggregating multiscale features, achieving higher registration accuracy at a fast registration speed. Comparative experiments also demonstrate the effectiveness and generalization ability of the algorithm.
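As an aside, the DSC and TRE figures reported above are standard registration evaluation metrics. The sketch below is our illustration, not the authors' code; the function names and toy data are hypothetical, showing the conventional computation from binary masks and matched landmark sets:

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    """Dice similarity coefficient (DSC) between two binary masks."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # two empty masks are trivially identical
    return 2.0 * np.logical_and(a, b).sum() / denom

def target_registration_error(landmarks_fixed, landmarks_warped):
    """Mean Euclidean distance (TRE) between corresponding landmarks,
    in the same physical units (e.g. mm) as the coordinates."""
    diff = np.asarray(landmarks_fixed) - np.asarray(landmarks_warped)
    return np.linalg.norm(diff, axis=1).mean()

# Toy example: two overlapping 2D masks and two 3D landmark pairs.
m1 = np.zeros((8, 8), dtype=bool); m1[2:6, 2:6] = True  # 16 pixels
m2 = np.zeros((8, 8), dtype=bool); m2[3:7, 3:7] = True  # 16 pixels, 9 shared
dsc = dice_coefficient(m1, m2)  # 2*9/(16+16) = 0.5625
tre = target_registration_error([[0, 0, 0], [10, 0, 0]],
                                [[3, 4, 0], [10, 0, 12]])  # (5+12)/2 = 8.5
```

In practice these are computed on warped segmentations and anatomical landmark pairs after applying the estimated deformation field.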
Affiliation(s)
- Zhiyong Zhou
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Suzhou 215163, China
- Shuaikun Wang
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Suzhou 215163, China
- Jisu Hu
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Suzhou 215163, China
- Anqi Liu
- University of California, Davis, United States
- Xusheng Qian
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Suzhou 215163, China
- Chen Geng
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Jian Zheng
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Guangqiang Chen
- The Second Affiliated Hospital of Suzhou University, Suzhou 215000, China
- Jiansong Ji
- Key Laboratory of Imaging Diagnosis and Minimally Invasive Intervention Research, The Fifth Affiliated Hospital of Wenzhou Medical University, Lishui 323000, China
- Yakang Dai
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Suzhou 215163, China
2. Rallapalli H, Bayin NS, Goldman H, Maric D, Nieman BJ, Koretsky AP, Joyner AL, Turnbull DH. Cell specificity of Manganese-enhanced MRI signal in the cerebellum. Neuroimage 2023; 276:120198. [PMID: 37245561 PMCID: PMC10330770 DOI: 10.1016/j.neuroimage.2023.120198]
Abstract
Magnetic Resonance Imaging (MRI) resolution continues to improve, making it important to understand the cellular basis for different MRI contrast mechanisms. Manganese-enhanced MRI (MEMRI) produces layer-specific contrast throughout the brain, enabling in vivo visualization of cellular cytoarchitecture, particularly in the cerebellum. Due to the unique geometry of the cerebellum, especially near the midline, 2D MEMRI images can be acquired from a relatively thick slice by averaging through areas of uniform morphology and cytoarchitecture to produce very high-resolution visualization of sagittal planes. In such images, MEMRI hyperintensity is uniform in thickness throughout the anterior-posterior axis of sagittal sections and is centrally located in the cerebellar cortex. These signal features suggested that the Purkinje cell layer, which houses the cell bodies of the Purkinje cells and the Bergmann glia, is the source of the hyperintensity. Despite this circumstantial evidence, the cellular source of MRI contrast has been difficult to define. In this study, we quantified the effects of selective ablation of Purkinje cells or Bergmann glia on cerebellar MEMRI signal to determine whether the signal could be assigned to one cell type. We found that the Purkinje cells, not the Bergmann glia, are the primary source of the enhancement in the Purkinje cell layer. This cell-ablation strategy should be useful for determining the cell specificity of other MRI contrast mechanisms.
Affiliation(s)
- Harikrishna Rallapalli
- Department of Radiology, NYU Langone Radiology - Center for Biomedical Imaging, New York University School of Medicine, 660 First Avenue, New York, NY 10016, United States; National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD, United States
- N Sumru Bayin
- Developmental Biology Program, Sloan Kettering Institute, New York, NY, United States; Gurdon Institute, University of Cambridge, UK; Department of Physiology, Development and Neuroscience, University of Cambridge, UK
- Hannah Goldman
- Department of Radiology, NYU Langone Radiology - Center for Biomedical Imaging, New York University School of Medicine, 660 First Avenue, New York, NY 10016, United States
- Dragan Maric
- National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD, United States
- Brian J Nieman
- Mouse Imaging Centre, The Hospital for Sick Children, Toronto, Canada; Translational Medicine, The Hospital for Sick Children, Toronto, Canada; Medical Biophysics, University of Toronto, Toronto, Canada; Ontario Institute for Cancer Research, Toronto, Canada
- Alan P Koretsky
- National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD, United States
- Alexandra L Joyner
- Developmental Biology Program, Sloan Kettering Institute, New York, NY, United States
- Daniel H Turnbull
- Department of Radiology, NYU Langone Radiology - Center for Biomedical Imaging, New York University School of Medicine, 660 First Avenue, New York, NY 10016, United States
3. Zenteno O, Trinh DH, Treuillet S, Lucas Y, Bazin T, Lamarque D, Daul C. Optical biopsy mapping on endoscopic image mosaics with a marker-free probe. Comput Biol Med 2022; 143:105234. [PMID: 35093845 DOI: 10.1016/j.compbiomed.2022.105234]
Abstract
Gastric cancer is the second leading cause of cancer-related deaths worldwide. Early diagnosis significantly increases the chances of survival; therefore, improved assisted exploration and screening techniques are necessary. Previously, we made use of an augmented multi-spectral endoscope by inserting an optical probe into the instrumentation channel. However, the limited field of view and the lack of markings left by optical biopsies on the tissue complicate the navigation and revisiting of suspect areas probed in vivo. In this contribution, two innovative tools are introduced to significantly increase the traceability and monitoring of patients in clinical practice: (i) video mosaicing to build a more comprehensive and panoramic view of large gastric areas; (ii) optical biopsy targeting and registration with the endoscopic images. The proposed optical flow-based mosaicing technique selects images that minimize texture discontinuities and is robust despite the lack of texture and illumination variations. The optical biopsy targeting is based on automatic tracking of a marker-free probe in the endoscopic view using deep learning to dynamically estimate its pose during exploration. The accuracy of pose estimation is sufficient to ensure a precise overlap of the standard white-light color image and the hyperspectral probe image, assuming that the small target area of the organ is almost flat. This allows the mapping of all spatio-temporally tracked biopsy sites onto the panoramic mosaic. Experimental validations are carried out on videos acquired from patients in hospital. The proposed technique is purely software-based and therefore easily integrable into clinical practice. It is also generic and compatible with any imaging modality that connects to a fiberscope.
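Mosaicing of this kind ultimately maps each endoscopic frame into a common panorama plane, conventionally via planar homographies. A minimal sketch of that core operation (our illustration under simplifying assumptions, not the authors' optical-flow pipeline; the translation-only `H` and the point values are toy data):

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2D points through a 3x3 homography, the basic operation behind
    stitching frames into a mosaic plane."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coordinates
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # back to Cartesian

# Toy homography: a pure translation of (+3, -2) pixels.
H = np.array([[1.0, 0.0,  3.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0,  1.0]])
corners = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 8.0], [0.0, 8.0]])
mapped = apply_homography(H, corners)  # each corner shifted by (+3, -2)
```

A full mosaicing pipeline estimates `H` per frame pair (e.g. from tracked optical flow) and blends the warped frames.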
Affiliation(s)
- Omar Zenteno
- Laboratoire PRISME, Université d'Orléans, Orléans, France
- Dinh-Hoan Trinh
- CRAN, UMR 7039 CNRS and Université de Lorraine, Vandœuvre-lès-Nancy, France
- Yves Lucas
- Laboratoire PRISME, Université d'Orléans, Orléans, France
- Thomas Bazin
- Service d'Hépato-gastroentérologie et oncologie digestive, Hôpital Ambroise Paré, Boulogne-Billancourt, France
- Dominique Lamarque
- Service d'Hépato-gastroentérologie et oncologie digestive, Hôpital Ambroise Paré, Boulogne-Billancourt, France
- Christian Daul
- CRAN, UMR 7039 CNRS and Université de Lorraine, Vandœuvre-lès-Nancy, France
4. Ha IY, Heinrich MP. Modality-agnostic self-supervised deep feature learning and fast instance optimisation for multimodal fusion in ultrasound-guided interventions. Comput Methods Programs Biomed 2021; 211:106374. [PMID: 34601186 DOI: 10.1016/j.cmpb.2021.106374]
Abstract
BACKGROUND AND OBJECTIVE Fast and robust alignment of pre-operative MRI planning scans to intra-operative ultrasound is an important aspect of automatically supporting image-guided interventions. Thus far, learning-based approaches have failed to tackle the intertwined objectives of fast inference time and robustness to unexpectedly large motion and misalignment. In this work, we propose a novel method that decouples deep feature learning and the computation of long-ranging local displacement probability maps from fast and robust global transformation prediction. METHODS In our approach, we first train a convolutional neural network (CNN) to extract modality-agnostic features with sub-second computation times for both 3D volumes during inference. Using sparsity-based network weight pruning, the model complexity and computation times can be substantially reduced. Based on these features, a large discretized search range of 3D motion vectors is explored to compute a probabilistic displacement map for each control point. These 3D probability maps are employed in our newly proposed, computationally efficient instance optimisation that robustly estimates the most likely global linear transformation that best reflects the local displacement beliefs, subject to outlier rejection. RESULTS Our experimental validation demonstrates state-of-the-art accuracy on the challenging CuRIOUS dataset with an average target registration error of 2.50 mm, a model size of only 1.2 MByte and run times of approximately 3 seconds for a full 3D multimodal registration. CONCLUSION We show that a significant improvement in accuracy and robustness can be gained with instance optimisation, and that our fast self-supervised deep learning model can achieve state-of-the-art accuracy on this challenging registration task in only 3 seconds.
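The instance optimisation above fits one global linear transformation to many local displacement beliefs while rejecting outliers. A trimmed least-squares sketch of that idea (our simplification, not the authors' implementation; the point coordinates, injected outlier, and trimming fraction are toy assumptions):

```python
import numpy as np

def fit_linear_transform(points, displacements, iters=5, trim=0.8):
    """Trimmed least-squares fit of a global linear (affine) transform to
    per-control-point displacement estimates, re-fitting after dropping the
    worst-fitting points (a crude stand-in for robust instance optimisation)."""
    P = np.asarray(points, float)              # (N, 3) control-point coordinates
    Q = P + np.asarray(displacements, float)   # displaced target positions
    P_h = np.hstack([P, np.ones((len(P), 1))]) # homogeneous coordinates
    keep = np.ones(len(P), dtype=bool)
    for _ in range(iters):
        A, *_ = np.linalg.lstsq(P_h[keep], Q[keep], rcond=None)  # (4, 3)
        resid = np.linalg.norm(P_h @ A - Q, axis=1)
        keep = resid <= np.quantile(resid, trim)  # keep best-fitting points
    return A  # apply as: np.hstack([x, 1.0]) @ A

# Toy check: cube corners plus one extra point, all translated by (1, 2, 3),
# with one gross outlier displacement injected.
pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0],
                [1, 0, 1], [0, 1, 1], [1, 1, 1], [2, 0, 0]], float)
disp = np.tile([1.0, 2.0, 3.0], (9, 1))
disp[0] = [50.0, 0.0, 0.0]    # simulated mismatched control point
A = fit_linear_transform(pts, disp)
recovered_translation = A[3]  # last row of the homogeneous fit
```

The trimming loop discards the gross outlier after the first pass, so the fit recovers the underlying translation despite the corrupted displacement.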
Affiliation(s)
- In Young Ha
- Institute of Medical Informatics, University of Luebeck, Ratzeburger Allee 160, 23564 Luebeck, Germany
- Mattias P Heinrich
- Institute of Medical Informatics, University of Luebeck, Ratzeburger Allee 160, 23564 Luebeck, Germany
5. Zhou B, Augenfeld Z, Chapiro J, Zhou SK, Liu C, Duncan JS. Anatomy-guided multimodal registration by learning segmentation without ground truth: Application to intraprocedural CBCT/MR liver segmentation and registration. Med Image Anal 2021; 71:102041. [PMID: 33823397 PMCID: PMC8184611 DOI: 10.1016/j.media.2021.102041]
Abstract
Multimodal image registration has many applications in diagnostic medical imaging and image-guided interventions, such as Transcatheter Arterial Chemoembolization (TACE) of liver cancer guided by intraprocedural CBCT and pre-operative MR. The ability to register peri-procedurally acquired diagnostic images into the intraprocedural environment can potentially improve intra-procedural tumor targeting, which would significantly improve therapeutic outcomes. However, intra-procedural CBCT often suffers from suboptimal image quality due to the lack of signal calibration for Hounsfield units, a limited FOV, and motion/metal artifacts. These non-ideal conditions make it infeasible for standard intensity-based multimodal registration methods to generate correct transformations across modalities. While registration based on anatomic structures, such as segmentations or landmarks, provides an efficient alternative, such anatomic structure information is not always available. One can train a deep learning-based anatomy extractor, but this requires large-scale manual annotations on specific modalities, which are often extremely time-consuming to obtain and require expert radiological readers. To tackle these issues, we leverage annotated datasets already existing in a source modality and propose an anatomy-preserving domain adaptation to segmentation network (APA2Seg-Net) for learning segmentation without target modality ground truth. The segmenters are then integrated into our anatomy-guided multimodal registration based on the robust point matching machine. Our experimental results on in-house TACE patient data demonstrate that APA2Seg-Net can generate robust CBCT and MR liver segmentations, and the anatomy-guided registration framework with these segmenters can provide high-quality multimodal registrations.
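The anatomy-guided step aligns the two segmentations as point sets. The closed-form core of such point matching, rigid Kabsch/Procrustes alignment of already-matched points, can be sketched as follows (our illustration, not the APA2Seg-Net code; a robust point matching machine additionally estimates correspondences and outlier weights):

```python
import numpy as np

def rigid_procrustes(src, dst):
    """Least-squares rigid alignment (rotation R, translation t) of matched
    point sets so that dst ~ src @ R.T + t (Kabsch/Procrustes solution)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # no reflections
    R = (U @ S @ Vt).T
    t = mu_d - R @ mu_s
    return R, t

# Toy check: recover a known 30-degree rotation and translation from four
# noiselessly matched points.
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([2.0, -1.0, 0.5])
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
dst = src @ R_true.T + t_true
R_est, t_est = rigid_procrustes(src, dst)
```

The sign correction on the smallest singular direction prevents the SVD from returning a reflection instead of a proper rotation.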
Affiliation(s)
- Bo Zhou
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Zachary Augenfeld
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Julius Chapiro
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- S Kevin Zhou
- School of Biomedical Engineering & Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou, China; Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
- Chi Liu
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- James S Duncan
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA; Department of Electrical Engineering, Yale University, New Haven, CT, USA
6. Bessa S, Gouveia PF, Carvalho PH, Rodrigues C, Silva NL, Cardoso F, Cardoso JS, Oliveira HP, Cardoso MJ. 3D digital breast cancer models with multimodal fusion algorithms. Breast 2020; 49:281-290. [PMID: 31986378 PMCID: PMC7375583 DOI: 10.1016/j.breast.2019.12.016]
Abstract
Breast cancer image fusion consists of registering and visualizing different sets of a patient's synchronized torso and radiological images into a 3D model. Breast spatial interpretation and visualization by the treating physician can be augmented with a patient-specific digital breast model that integrates radiological images. But the absence of a ground truth for a good correlation between surface and radiological information has impaired the development of potential clinical applications. A new image acquisition protocol was designed to acquire breast Magnetic Resonance Imaging (MRI) and 3D surface scan data with surface markers on the patient's breasts and torso. A patient-specific digital breast model integrating the real breast torso and the tumor location was created and validated with an MRI/3D surface scan fusion algorithm in 16 breast cancer patients. This protocol was used to quantify breast shape differences between modalities and to measure the target registration error of several variants of the MRI/3D scan fusion algorithm. The fusion of single breasts without the biomechanical model of pose transformation had acceptable registration errors and accurate tumor locations. The performance of the fusion algorithm was not affected by breast volume. Further research and virtual clinical interfaces could lead to fast integration of this fusion technology into clinical practice. Highlights: (i) an MRI/3D surface scan fusion algorithm to create 3D breast cancer models; (ii) a replicable clinical validation protocol for MRI/3D surface scan fusion algorithms; (iii) an anthropometric study that quantifies breast deformations by area in MRI and 3D scans.
Affiliation(s)
- Sílvia Bessa
- INESC TEC, Portugal; University of Porto, Portugal
- Pedro F Gouveia
- Champalimaud Foundation, Portugal; Medical School, Lisbon University, Portugal
- Nuno L Silva
- Champalimaud Foundation, Portugal; Nova Medical School, Portugal
- Maria João Cardoso
- INESC TEC, Portugal; Champalimaud Foundation, Portugal; Nova Medical School, Portugal
7. Ranzini MBM, Henckel J, Ebner M, Cardoso MJ, Isaac A, Vercauteren T, Ourselin S, Hart A, Modat M. Automated postoperative muscle assessment of hip arthroplasty patients using multimodal imaging joint segmentation. Comput Methods Programs Biomed 2020; 183:105062. [PMID: 31522089 DOI: 10.1016/j.cmpb.2019.105062]
Abstract
BACKGROUND AND OBJECTIVE In patients treated with hip arthroplasty, the muscular condition and presence of inflammatory reactions are assessed using magnetic resonance imaging (MRI). As MRI lacks contrast for bony structures, computed tomography (CT) is preferred for clinical evaluation of bone tissue and orthopaedic surgical planning. Combining the complementary information of MRI and CT could improve current clinical practice for diagnosis, monitoring and treatment planning. In particular, the different contrast of these modalities could help better quantify the presence of fatty infiltration to characterise muscular condition and assess implant failure. In this work, we combine CT and MRI for joint bone and muscle segmentation and propose a novel Intramuscular Fat Fraction estimation method for the quantification of muscle atrophy. METHODS Our multimodal framework segments healthy and pathological musculoskeletal structures as well as implants, and proceeds in three steps. First, input images are pre-processed to improve the low quality of clinically acquired images and to reduce the noise associated with metal artefacts. Subsequently, CT and MRI are non-linearly aligned using a novel approach which imposes rigidity constraints on bony structures to ensure realistic deformation. Finally, taking advantage of a multimodal atlas we created for this task, a multi-atlas based segmentation delineates pelvic bones, abductor muscles and implants on both modalities jointly. From the obtained segmentation, a multimodal estimate of the Intramuscular Fat Fraction can be automatically derived. RESULTS Evaluation of the segmentation in a leave-one-out cross-validation study on 22 hip sides resulted in an average Dice score of 0.90 for skeletal and 0.84 for muscular structures. Our multimodal Intramuscular Fat Fraction was benchmarked on 27 different cases against a standard radiological score, showing a stronger association than a single-modality approach in a one-way ANOVA F-test analysis. CONCLUSIONS The proposed framework represents a promising tool to support image analysis in hip arthroplasty, being robust to the presence of implants and associated image artefacts. By allowing automated extraction of a muscle-atrophy imaging biomarker, it could quantitatively inform the decision-making process about the patient's management.
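For illustration only (not the paper's multimodal estimator, which combines CT and MRI information), the basic fat-fraction computation from a muscle segmentation and a fat classification reduces to a masked voxel ratio; all names and toy masks here are ours:

```python
import numpy as np

def intramuscular_fat_fraction(muscle_mask, fat_mask):
    """Fraction of muscle voxels that are classified as fat."""
    muscle = np.asarray(muscle_mask, dtype=bool)
    fat = np.asarray(fat_mask, dtype=bool)
    n_muscle = muscle.sum()
    if n_muscle == 0:
        return 0.0  # no muscle segmented: fraction undefined, report 0
    return np.logical_and(muscle, fat).sum() / n_muscle

# Toy masks: a 6x6 muscle region with a 3x3 fatty patch inside it.
muscle = np.zeros((10, 10), dtype=bool); muscle[2:8, 2:8] = True  # 36 voxels
fat = np.zeros((10, 10), dtype=bool); fat[2:5, 2:5] = True        # 9 of them
fraction = intramuscular_fat_fraction(muscle, fat)                # 9/36 = 0.25
```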
Affiliation(s)
- Marta B M Ranzini
- Centre for Medical Imaging Computing, University College London, London, UK; School of Biomedical Engineering & Imaging Sciences, King's College London, King's Health Partners, St Thomas' Hospital, London SE1 7EH, United Kingdom; Medical Physics and Biomedical Engineering Department, University College London, London WC1E 6BT, United Kingdom
- Johann Henckel
- Royal National Orthopaedic Hospital NHS Foundation Trust, London, UK
- Michael Ebner
- Centre for Medical Imaging Computing, University College London, London, UK; School of Biomedical Engineering & Imaging Sciences, King's College London, King's Health Partners, St Thomas' Hospital, London SE1 7EH, United Kingdom; Medical Physics and Biomedical Engineering Department, University College London, London WC1E 6BT, United Kingdom
- M Jorge Cardoso
- School of Biomedical Engineering & Imaging Sciences, King's College London, King's Health Partners, St Thomas' Hospital, London SE1 7EH, United Kingdom; Medical Physics and Biomedical Engineering Department, University College London, London WC1E 6BT, United Kingdom
- Amanda Isaac
- School of Biomedical Engineering & Imaging Sciences, King's College London, King's Health Partners, St Thomas' Hospital, London SE1 7EH, United Kingdom; Radiology Department, Guys & St Thomas Hospitals NHS Foundation Trust, London SE1 7EH, UK
- Tom Vercauteren
- School of Biomedical Engineering & Imaging Sciences, King's College London, King's Health Partners, St Thomas' Hospital, London SE1 7EH, United Kingdom; Medical Physics and Biomedical Engineering Department, University College London, London WC1E 6BT, United Kingdom
- Sébastien Ourselin
- School of Biomedical Engineering & Imaging Sciences, King's College London, King's Health Partners, St Thomas' Hospital, London SE1 7EH, United Kingdom; Medical Physics and Biomedical Engineering Department, University College London, London WC1E 6BT, United Kingdom
- Alister Hart
- Royal National Orthopaedic Hospital NHS Foundation Trust, London, UK
- Marc Modat
- School of Biomedical Engineering & Imaging Sciences, King's College London, King's Health Partners, St Thomas' Hospital, London SE1 7EH, United Kingdom; Medical Physics and Biomedical Engineering Department, University College London, London WC1E 6BT, United Kingdom
8. Sick JT, Rancilio NJ, Fulkerson CV, Plantenga JM, Knapp DW, Stantz KM. An ultrasound based platform for image-guided radiotherapy in canine bladder cancer patients. Phys Imaging Radiat Oncol 2019; 12:10-16. [PMID: 33458289 PMCID: PMC7807639 DOI: 10.1016/j.phro.2019.10.003]
Abstract
Background and purpose: Ultrasound (US) is a non-invasive, non-radiographic imaging technique with high spatial and temporal resolution that can be used to localize soft-tissue structures and tumors in real time during radiotherapy (RT), both inter- and intra-fraction. A comprehensive approach incorporating an in-house 3D-US system within RT is presented. This system is easier to adopt into existing treatment protocols than current US-based systems, with the aim of providing millimeter intra-fraction alignment errors and sensitivity to track intra-fraction bladder movement. Materials and methods: An in-house integrated US manipulator and platform was designed to relate the computed tomography (CT) scanner, 3D-US and linear accelerator coordinate systems. An agar-based phantom with measured speed of sound and densities consistent with the tissues surrounding the bladder was rotated (0–45°) and translated (up to 55 mm) relative to the US and CT coordinate systems to validate this device. After acquiring and integrating CT and US images into the treatment planning system, US-to-US and US-to-CT images were co-registered to re-align the phantom relative to the linear accelerator. Results: Statistical errors from US-to-US registrations for various patient orientations ranged from 0.1 to 1.7 mm for the x, y and z translational components, and 0.0–1.1° for the rotational components. Statistical errors from US-to-CT registrations were 0.3–1.2 mm for the translational components and 0.1–2.5° for the rotational components. Conclusions: An ultrasound-based platform was designed, constructed and tested on a CT/US tissue-equivalent phantom, demonstrating the statistical uncertainty with which inter- and intra-fraction displacements of the bladder can be corrected and tracked during radiation treatments.
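Relating the CT, 3D-US and linear-accelerator coordinate systems amounts to composing rigid transforms in homogeneous coordinates. A minimal sketch of that chaining (the matrices below are toy values of our own; real calibration and registration matrices come from the platform):

```python
import numpy as np

def rigid_z(angle_deg, translation):
    """4x4 homogeneous rigid transform: rotation about z, then translation."""
    th = np.deg2rad(angle_deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]]
    T[:3, 3] = translation
    return T

# Chain a point from the US frame into the linac frame via the CT frame.
us_to_ct = rigid_z(45.0, [5.0, 0.0, 0.0])       # toy US -> CT registration
ct_to_linac = rigid_z(-45.0, [0.0, 0.0, 10.0])  # toy CT -> linac calibration
us_to_linac = ct_to_linac @ us_to_ct            # composed mapping
p_us = np.array([1.0, 0.0, 0.0, 1.0])           # homogeneous point, US frame
p_linac = us_to_linac @ p_us
```

Because each factor is rigid, the composed matrix is rigid too, so tracked bladder displacements transfer between frames without distortion.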
Affiliation(s)
- Justin T Sick
- School of Health Sciences, Purdue University, 550 Stadium Mall Drive, West Lafayette, IN 47907, USA
- Nicholas J Rancilio
- Department of Veterinary Clinical Sciences, Purdue University College of Veterinary Medicine, 625 Harrison Street, West Lafayette, IN 47907, USA
- Caroline V Fulkerson
- Department of Veterinary Clinical Sciences, Purdue University College of Veterinary Medicine, 625 Harrison Street, West Lafayette, IN 47907, USA
- Jeannie M Plantenga
- School of Health Sciences, Purdue University, 550 Stadium Mall Drive, West Lafayette, IN 47907, USA; Department of Veterinary Clinical Sciences, Purdue University College of Veterinary Medicine, 625 Harrison Street, West Lafayette, IN 47907, USA; Purdue University Center for Cancer Research, Purdue University, 201 S University St, West Lafayette, IN 47906, USA
- Deborah W Knapp
- Department of Veterinary Clinical Sciences, Purdue University College of Veterinary Medicine, 625 Harrison Street, West Lafayette, IN 47907, USA; Purdue University Center for Cancer Research, Purdue University, 201 S University St, West Lafayette, IN 47906, USA
- Keith M Stantz
- School of Health Sciences, Purdue University, 550 Stadium Mall Drive, West Lafayette, IN 47907, USA; Department of Radiology, Indiana University School of Medicine, 550 University Blvd, Indianapolis, IN 46202, USA
9. Van Malderen SJM, Van Acker T, Laforce B, De Bruyne M, de Rycke R, Asaoka T, Vincze L, Vanhaecke F. Three-dimensional reconstruction of the distribution of elemental tags in single cells using laser ablation ICP-mass spectrometry via registration approaches. Anal Bioanal Chem 2019; 411:4849-4859. [PMID: 30790022 DOI: 10.1007/s00216-019-01677-6]
Abstract
This paper describes a workflow towards the reconstruction of the three-dimensional elemental distribution profile within human cervical carcinoma cells (HeLa), at a spatial resolution down to 1 μm, employing state-of-the-art laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS) instrumentation. The suspended cells underwent a series of fixation/embedding protocols and were stained with uranyl acetate and an Ir-based DNA intercalator. A priori, laboratory-based absorption micro-computed tomography (μ-CT) was applied to acquire a reference frame of the morphology of the cells and their spatial distribution before sectioning. After CT analysis, a trimmed 300 × 300 × 300 μm³ block was sectioned into a sequential series of 132 sections with a thickness of 2 μm, which were subjected to LA-ICP-MS imaging. A pixel acquisition rate of 250 pixels s⁻¹ was achieved through a bidirectional scanning strategy. After acquisition, the two-dimensional elemental images were reconstructed using the timestamps in the laser log file. The synchronization of the data required an improved optimization algorithm, which forces the pixels of scans in different ablation directions to be spatially coherent in the direction orthogonal to the scan direction. The volume was reconstructed using multiple registration approaches. Registration using the section outline itself as a fiducial marker resulted in a volume that was in good agreement with the morphology visualized in the μ-CT volume. The 3D μ-CT volume could be registered to the LA-ICP-MS volume, consisting of 2.9 × 10⁷ voxels, and the nucleus dimensions in 3D space could be derived.
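Stacking a sequential series of 2D sections into a coherent volume reduces, in its simplest form, to estimating the in-plane shift between adjacent slices. A brute-force correlation sketch of that step (our illustration with toy data, not the authors' fiducial- or outline-based registration):

```python
import numpy as np

def best_shift(ref, mov, max_shift=5):
    """Exhaustive integer-pixel search for the (dy, dx) shift of `mov` that
    maximises its overlap correlation with `ref`."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(mov, dy, axis=0), dx, axis=1)
            score = (ref * shifted).sum()  # cross-correlation at this offset
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

# Toy sections: a bright square displaced by (-2, +1) pixels between slices.
ref = np.zeros((32, 32)); ref[10:20, 10:20] = 1.0
mov = np.roll(np.roll(ref, -2, axis=0), 1, axis=1)
shift = best_shift(ref, mov)  # (2, -1): the shift that undoes the displacement
```

Applying the recovered shift to each section before stacking yields an axially coherent volume; subpixel and rotational refinement would follow in a real pipeline.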
Affiliation(s)
- Stijn J M Van Malderen: Atomic & Mass Spectrometry (A&MS) Research Unit, Department of Chemistry, Ghent University, Campus Sterre, Krijgslaan 281 - S12, 9000, Ghent, Belgium
- Thibaut Van Acker: Atomic & Mass Spectrometry (A&MS) Research Unit, Department of Chemistry, Ghent University, Campus Sterre, Krijgslaan 281 - S12, 9000, Ghent, Belgium
- Brecht Laforce: X-ray Microspectroscopy and Imaging (XMI) Research Unit, Department of Chemistry, Ghent University, Campus Sterre, Krijgslaan 281 - S12, 9000, Ghent, Belgium
- Michiel De Bruyne: Department of Biomedical Molecular Biology and VIB Center for Inflammation Research, Ghent University, Technologiepark 71, 9052, Ghent, Belgium; Ghent University Expertise Centre for Transmission Electron Microscopy and VIB BioImaging Core, Ghent University, Technologiepark 927, 9052, Ghent, Belgium
- Riet de Rycke: Department of Biomedical Molecular Biology and VIB Center for Inflammation Research, Ghent University, Technologiepark 71, 9052, Ghent, Belgium; Ghent University Expertise Centre for Transmission Electron Microscopy and VIB BioImaging Core, Ghent University, Technologiepark 927, 9052, Ghent, Belgium
- Tomoko Asaoka: Department of Biomedical Molecular Biology and VIB Center for Inflammation Research, Ghent University, Technologiepark 71, 9052, Ghent, Belgium
- Laszlo Vincze: X-ray Microspectroscopy and Imaging (XMI) Research Unit, Department of Chemistry, Ghent University, Campus Sterre, Krijgslaan 281 - S12, 9000, Ghent, Belgium
- Frank Vanhaecke: Atomic & Mass Spectrometry (A&MS) Research Unit, Department of Chemistry, Ghent University, Campus Sterre, Krijgslaan 281 - S12, 9000, Ghent, Belgium
10
Lv W, Chen H, Peng Y, Li Y, Li J. An atlas-based multimodal registration method for 2D images with discrepancy structures. Med Biol Eng Comput 2018; 56:2151-2161. [PMID: 29862470 DOI: 10.1007/s11517-018-1808-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Received: 04/18/2017] [Accepted: 02/16/2018] [Indexed: 10/14/2022]
Abstract
An atlas-based multimodal registration method for two-dimensional images with discrepancy structures is proposed in this paper. The atlas is used to complement the discrepant structural information in multimodal medical images. The scheme comprises three steps: floating image to atlas registration, atlas to reference image registration, and field-based deformation. To evaluate the performance, a frame model, a brain model, and clinical images were employed in registration experiments, with registration accuracy measured by the sum of squared intensity differences. Results indicate that this method is robust and performs better than direct registration for multimodal images with discrepancy structures. We conclude that the proposed method is suitable for multimodal images with discrepancy structures. Graphical Abstract: schematic diagram of the atlas-based multimodal registration method.
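The three-step scheme above (floating to atlas, atlas to reference, then field-based deformation) amounts to composing the two estimated deformations. A minimal numpy sketch of composing two dense displacement fields, with nearest-neighbour sampling as a simplifying assumption (the function name `compose_fields` is ours, not the paper's):

```python
import numpy as np

def compose_fields(d1, d2):
    """Compose two dense 2-D displacement fields of shape (2, H, W):
    the result maps x -> x + d1(x) + d2(x + d1(x)), i.e. first the
    floating-to-atlas field d1, then the atlas-to-reference field d2."""
    _, H, W = d1.shape
    grid = np.mgrid[0:H, 0:W].astype(float)
    # nearest-neighbour sample d2 at the points displaced by d1
    yy = np.clip(np.round(grid[0] + d1[0]).astype(int), 0, H - 1)
    xx = np.clip(np.round(grid[1] + d1[1]).astype(int), 0, W - 1)
    return d1 + d2[:, yy, xx]
```

A production implementation would interpolate `d2` (e.g. bilinearly) rather than round to the nearest voxel.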
11
Nix MG, Prestwich RJD, Speight R. Automated, reference-free local error assessment of multimodal deformable image registration for radiotherapy in the head and neck. Radiother Oncol 2017; 125:478-484. [PMID: 29100697 DOI: 10.1016/j.radonc.2017.10.004] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Received: 05/31/2017] [Revised: 09/25/2017] [Accepted: 10/02/2017] [Indexed: 11/15/2022]
Abstract
BACKGROUND Head and neck MR-CT deformable image registration (DIR) for radiotherapy planning is hindered by the lack of both ground-truth and per-patient accuracy assessment methods. This study assesses novel post-registration reference-free error assessment algorithms, based on local rigid re-registration of native and pseudomodality images. METHODS Head and neck MR obtained in and out of the treatment position underwent DIR to planning CT. Block-wise mutual information (b-MI) and pseudomodality mutual information (b-pmMI) algorithms were validated against applied rotations and translations. Inherent registration error detection was compared across 14 patient datasets. RESULTS Using radiotherapy position MR-CT DIR, quantitative comparison of applied rotations and translations revealed that errors between 1 and 4 mm were accurately determined by both algorithms. Using diagnostic position MR-CT DIR, translations of up to 5 mm were accurately detected within the gross tumour volume by both methods. In 14 patient datasets, b-MI and b-pmMI detected similar errors with improved stability in regions of low contrast or CT artefact and a 10-fold speedup for b-pmMI. CONCLUSIONS b-MI and b-pmMI algorithms have been validated as providing accurate reference-free quantitative assessment of DIR accuracy on a per-patient basis. b-pmMI is faster and more robust in the presence of modality-specific information.
Affiliation(s)
- Michael G Nix: Department of Medical Physics and Engineering, Leeds Teaching Hospitals NHS Trust, UK
- Richard Speight: Department of Medical Physics and Engineering, Leeds Teaching Hospitals NHS Trust, UK
12
Gutierrez-Becker B, Mateus D, Peter L, Navab N. Guiding multimodal registration with learned optimization updates. Med Image Anal 2017; 41:2-17. [PMID: 28506641 DOI: 10.1016/j.media.2017.05.002] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Received: 02/22/2017] [Revised: 05/01/2017] [Accepted: 05/03/2017] [Indexed: 10/19/2022]
Abstract
In this paper, we address the multimodal registration problem from a novel perspective, aiming to predict the transformation aligning images directly from their visual appearance. We formulate the prediction as a supervised regression task, with joint image descriptors as input and, as output, the parameters of the transformation that guide the moving image towards alignment. We model the joint local appearance with context-aware descriptors that capture both local and global cues simultaneously in the two modalities, while the regression function is based on gradient boosted trees, capable of handling the very large contextual feature space. The good properties of our predictions allow us to couple them with a simple gradient-based optimization for the final registration. Our approach can be applied to any transformation parametrization as well as a broad range of modality pairs. Our method learns the relationship between the intensity distributions of a pair of modalities by using prior knowledge in the form of a small training set of aligned image pairs (on the order of 1-5 in our experiments). We demonstrate the flexibility and generality of our method by evaluating its performance on a variety of multimodal imaging pairs obtained from two publicly available datasets, RIRE (brain MR, CT and PET) and IXI (brain MR). We also show results for the very challenging deformable registration of intravascular ultrasound and histology images. In these experiments, our approach has a larger capture range when compared to other state-of-the-art methods, while improving registration accuracy in complex cases.
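A toy 1-D sketch of learning the optimization update: a regressor is fit on a few aligned pairs to map joint descriptors to the corrective transformation parameter, then queried on an unaligned pair. Plain linear least squares stands in for the paper's gradient boosted trees, and the correlation-at-lags descriptor is our own simplification.

```python
import numpy as np

def descriptor(fixed, moving, lags=range(-4, 5)):
    """Toy joint descriptor: correlations of the image pair at a few lags."""
    return np.array([np.dot(np.roll(moving, k), fixed) for k in lags])

rng = np.random.default_rng(0)
fixed = rng.random(64)

# training set: known shifts and their joint descriptors
shifts = np.arange(-3, 4)
X = np.stack([descriptor(fixed, np.roll(fixed, s)) for s in shifts])
y = -shifts.astype(float)            # target: the update that undoes the shift
W, *_ = np.linalg.lstsq(X, y, rcond=None)

# prediction: the learned regressor proposes the corrective update
moving = np.roll(fixed, 2)
update = descriptor(fixed, moving) @ W
```

In the paper the predicted update then drives a gradient-based optimizer; here applying `round(update)` to `moving` would already restore alignment.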
Affiliation(s)
- Benjamin Gutierrez-Becker: Computer Aided Medical Procedures (CAMP), Technische Universität München, Boltzmanstr. 3 Garching, 85748, Germany; Department of Child and Adolescent Psychiatry, Psychosomatic and Psychotherapy, Ludwig-Maximilian-University, Waltherstr. 23, Munich, Germany
- Diana Mateus: Computer Aided Medical Procedures (CAMP), Technische Universität München, Boltzmanstr. 3 Garching, 85748, Germany
- Loic Peter: Computer Aided Medical Procedures (CAMP), Technische Universität München, Boltzmanstr. 3 Garching, 85748, Germany
- Nassir Navab: Computer Aided Medical Procedures (CAMP), Technische Universität München, Boltzmanstr. 3 Garching, 85748, Germany; Computer Aided Medical Procedures (CAMP), Johns Hopkins University, USA
13
Carvalho DDB, Arias Lorza AM, Niessen WJ, de Bruijne M, Klein S. Automated Registration of Freehand B-Mode Ultrasound and Magnetic Resonance Imaging of the Carotid Arteries Based on Geometric Features. Ultrasound Med Biol 2017; 43:273-285. [PMID: 27743726 DOI: 10.1016/j.ultrasmedbio.2016.08.031] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Received: 04/12/2016] [Revised: 07/30/2016] [Accepted: 08/29/2016] [Indexed: 06/06/2023]
Abstract
An automated method for registering B-mode ultrasound (US) and magnetic resonance imaging (MRI) of the carotid arteries is proposed. The registration uses geometric features, namely, lumen centerlines and lumen segmentations, which are extracted fully automatically from the images after manual annotation of three seed points in US and MRI. The registration procedure starts with alignment of the lumen centerlines using a point-based registration algorithm. The resulting rigid transformation is used to initialize a rigid and subsequent non-rigid registration procedure that jointly aligns centerlines and segmentations by minimizing a weighted sum of the Euclidean distance between centerlines and the dissimilarity between segmentations. The method was evaluated in 28 carotid arteries from eight patients and six healthy volunteers. First, the automated US lumen segmentation method was validated and optimized in a cross-validation experiment. Next, the effect of the weighting parameter of the proposed registration dissimilarity metric and the control point spacing in the non-rigid registration was evaluated. Finally, the proposed registration method was evaluated in comparison to an existing intensity-and-point-based method, a registration using only the centerlines and a registration using manual US lumen segmentations. Registration accuracy was measured in terms of the mean surface distance between manual US segmentations and the registered MRI segmentations. The average mean surface distance was 0.78 ± 0.34 mm for all subjects, 0.65 ± 0.09 mm for healthy volunteers and 0.87 ± 0.42 mm for patients. The results for the complete set were significantly better (Wilcoxon test, p < 0.01) than the results for the intensity-and-point-based method and the centerline-based registration method. 
We conclude that the proposed method can robustly and accurately register US and MR images of the carotid artery, allowing multimodal analysis of the carotid plaque to improve plaque assessment.
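The initial point-based alignment of the lumen centerlines can be illustrated with the classical least-squares rigid fit over paired 3-D points (the Kabsch/Procrustes solution). This is a generic sketch, not the paper's exact algorithm, and it assumes point correspondences are already established.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) with dst_i ~ R @ src_i + t,
    for paired 3-D points such as matched centerline samples (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs
```

The resulting rigid transform is exactly the kind of initialization the abstract describes, from which the joint rigid and non-rigid refinement proceeds.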
Affiliation(s)
- Diego D B Carvalho: Biomedical Imaging Group Rotterdam, Departments of Medical Informatics & Radiology, Erasmus MC, Rotterdam, The Netherlands
- Andres Mauricio Arias Lorza: Biomedical Imaging Group Rotterdam, Departments of Medical Informatics & Radiology, Erasmus MC, Rotterdam, The Netherlands
- Wiro J Niessen: Biomedical Imaging Group Rotterdam, Departments of Medical Informatics & Radiology, Erasmus MC, Rotterdam, The Netherlands; Imaging Physics, Faculty of Applied Sciences, Delft University of Technology, Delft, The Netherlands
- Marleen de Bruijne: Biomedical Imaging Group Rotterdam, Departments of Medical Informatics & Radiology, Erasmus MC, Rotterdam, The Netherlands; Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Stefan Klein: Biomedical Imaging Group Rotterdam, Departments of Medical Informatics & Radiology, Erasmus MC, Rotterdam, The Netherlands
Collapse
|
14
|
Rivest-Hénault D, Dowson N, Greer PB, Fripp J, Dowling JA. Robust inverse-consistent affine CT-MR registration in MRI-assisted and MRI-alone prostate radiation therapy. Med Image Anal 2015; 23:56-69. [PMID: 25966468 DOI: 10.1016/j.media.2015.04.014] [Citation(s) in RCA: 40] [Impact Index Per Article: 4.4] [Received: 09/11/2014] [Revised: 04/17/2015] [Accepted: 04/17/2015] [Indexed: 10/23/2022]
Abstract
BACKGROUND CT-MR registration is a critical component of many radiation oncology protocols. In prostate external beam radiation therapy, it allows the propagation of MR-derived contours to reference CT images at the planning stage, and it enables dose mapping during dosimetry studies. The use of carefully registered CT-MR atlases allows the estimation of patient specific electron density maps from MRI scans, enabling MRI-alone radiation therapy planning and treatment adaptation. In all cases, the precision and accuracy achieved by registration influences the quality of the entire process. PROBLEM Most current registration algorithms do not robustly generalize and lack inverse-consistency, increasing the risk of human error and acting as a source of bias in studies where information is propagated in a particular direction, e.g. CT to MR or vice versa. In MRI-based treatment planning where both CT and MR scans serve as spatial references, inverse-consistency is critical, if under-acknowledged. PURPOSE A robust, inverse-consistent, rigid/affine registration algorithm that is well suited to CT-MR alignment in prostate radiation therapy is presented. METHOD The presented method is based on a robust block-matching optimization process that utilises a half-way space definition to maintain inverse-consistency. Inverse-consistency substantially reduces the influence of the order of input images, simplifying analysis, and increasing robustness. An open source implementation is available online at http://aehrc.github.io/Mirorr/. RESULTS Experimental results on a challenging 35 CT-MR pelvis dataset demonstrate that the proposed method is more accurate than other popular registration packages and is at least as accurate as the state of the art, while being more robust and having an order of magnitude higher inverse-consistency than competing approaches. 
CONCLUSION The presented results demonstrate that the proposed registration algorithm is readily applicable to prostate radiation therapy planning.
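The half-way space idea behind inverse-consistency can be sketched for the affine case: split the moving-to-fixed matrix M into its principal square root H, resample one image through H and the other through inv(H), so neither input is privileged; swapping the inputs merely swaps H and inv(H). The function name `halfway_transforms` is ours, and the actual Mirorr implementation is block-matching based and considerably more involved.

```python
import numpy as np

def halfway_transforms(M):
    """Principal matrix square root H of an affine matrix M (moving ->
    fixed), so that H @ H == M: the moving image is brought to the
    half-way space by H and the fixed image by inv(H)."""
    w, V = np.linalg.eig(M.astype(complex))
    H = (V @ np.diag(np.sqrt(w)) @ np.linalg.inv(V)).real
    return H, np.linalg.inv(H)
```

Because the half-way space is defined symmetrically, registering A to B and B to A yields transforms that are exact inverses of each other, which is the inverse-consistency property measured in the paper.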
Affiliation(s)
- David Rivest-Hénault: CSIRO, The Australian e-Health Research Centre, Herston, Queensland 4029, Australia
- Nicholas Dowson: CSIRO, The Australian e-Health Research Centre, Herston, Queensland 4029, Australia
- Peter B Greer: Calvary Mater Newcastle Hospital, Newcastle, New South Wales 2298, Australia; University of Newcastle, Newcastle, New South Wales 2308, Australia
- Jurgen Fripp: CSIRO, The Australian e-Health Research Centre, Herston, Queensland 4029, Australia
- Jason A Dowling: CSIRO, The Australian e-Health Research Centre, Herston, Queensland 4029, Australia; University of Newcastle, Newcastle, New South Wales 2308, Australia