1
Mazzucchi E, Hiepe P, Langhof M, La Rocca G, Pignotti F, Rinaldi P, Sabatino G. Automatic rigid image fusion of preoperative MR and intraoperative US acquired after craniotomy. Cancer Imaging 2023; 23:37. PMID: 37055790; PMCID: PMC10099637; DOI: 10.1186/s40644-023-00554-x.
Abstract
BACKGROUND Neuronavigation based on preoperative MRI is limited by several sources of error. Intraoperative ultrasound (iUS) with navigated probes that provide automatic superposition of preoperative MRI and iUS and three-dimensional iUS reconstruction may overcome some of these limitations. The aim of the present study is to verify the accuracy of an automatic MRI-iUS fusion algorithm to improve MR-based neuronavigation accuracy. METHODS An algorithm using a Linear Correlation of Linear Combination (LC2)-based similarity metric was retrospectively evaluated on twelve datasets acquired in patients with brain tumors. A series of landmarks were defined in both the MRI and iUS scans. The Target Registration Error (TRE) was determined for each pair of landmarks before and after the automatic Rigid Image Fusion (RIF). The algorithm was tested on two conditions of initial image alignment: registration-based fusion (RBF), as given by the navigated ultrasound probe, and different simulated coarse alignments during a convergence test. RESULTS Except for one case, RIF was successfully applied in all patients when considering the RBF as the initial alignment. Here, the mean TRE was significantly reduced from 4.03 ± 1.40 mm after RBF to 2.08 ± 0.96 mm after RIF (p = 0.002). For the convergence test, the mean TRE after the initial perturbations was 8.82 ± 0.23 mm, which was reduced to 2.64 ± 1.20 mm after RIF (p < 0.001). CONCLUSIONS The integration of an automatic image fusion method for co-registration of preoperative MRI and iUS data may improve the accuracy of MR-based neuronavigation.
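The LC2 similarity named above rests on a simple idea: within a local patch, ultrasound intensity can be approximated by a linear combination of MRI intensity and MRI gradient magnitude. The following is a minimal single-patch sketch of that idea in Python/NumPy, with function names of our own choosing; it is an illustration of the general LC2 principle, not the implementation evaluated in the study.

```python
import numpy as np

def lc2_patch(us, mri, grad_mri):
    """LC2 for one patch: fraction of ultrasound intensity variance
    explained by the best linear combination of MRI intensity and
    MRI gradient magnitude (plus an offset), fitted by least squares."""
    u = us.ravel()
    A = np.column_stack([mri.ravel(), grad_mri.ravel(), np.ones(u.size)])
    coef, *_ = np.linalg.lstsq(A, u, rcond=None)
    residual = u - A @ coef
    var_u = u.var()
    if var_u < 1e-12:          # flat ultrasound patch carries no information
        return 0.0
    return 1.0 - residual.var() / var_u  # 1.0 = perfectly explained patch
```

A volumetric LC2 averages such patchwise values over the overlap region, typically weighted by local intensity variance, and the rigid registration maximizes that average over translations and rotations.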
Affiliation(s)
- Edoardo Mazzucchi
- Unit of Neurosurgery, Mater Olbia Hospital, Olbia, Italy.
- Institute of Neurosurgery, IRCCS Fondazione Policlinico Universitario Agostino Gemelli, Catholic University, Rome, Italy.
- Giuseppe La Rocca
- Unit of Neurosurgery, Mater Olbia Hospital, Olbia, Italy
- Institute of Neurosurgery, IRCCS Fondazione Policlinico Universitario Agostino Gemelli, Catholic University, Rome, Italy
- Fabrizio Pignotti
- Unit of Neurosurgery, Mater Olbia Hospital, Olbia, Italy
- Institute of Neurosurgery, IRCCS Fondazione Policlinico Universitario Agostino Gemelli, Catholic University, Rome, Italy
- Giovanni Sabatino
- Unit of Neurosurgery, Mater Olbia Hospital, Olbia, Italy
- Institute of Neurosurgery, IRCCS Fondazione Policlinico Universitario Agostino Gemelli, Catholic University, Rome, Italy
2
Wang Y, Fu T, Wu C, Xiao J, Fan J, Song H, Liang P, Yang J. Multimodal registration of ultrasound and MR images using weighted self-similarity structure vector. Comput Biol Med 2023; 155:106661. PMID: 36827789; DOI: 10.1016/j.compbiomed.2023.106661.
Abstract
PURPOSE Multimodal registration of 2D ultrasound (US) and 3D magnetic resonance (MR) images for fusion navigation can improve the intraoperative detection accuracy of lesions. However, multimodal registration remains a challenge because of the poor US image quality. In this study, a weighted self-similarity structure vector (WSSV) is proposed to register multimodal images. METHOD The self-similarity structure vector utilizes the normalized distance of symmetrically located patches in the neighborhood to describe the local structure information. The texture weights are extracted using the local standard deviation to reduce the speckle interference in the US images. The multimodal similarity metric is constructed by combining the self-similarity structure vector with a texture weight map. RESULTS Experiments were performed on US and MR images of the liver from 88 groups of data, including 8 patients and 80 simulated samples. The average target registration error was reduced from 14.91 ± 3.86 mm to 4.95 ± 2.23 mm using the WSSV-based method. CONCLUSIONS The experimental results show that the WSSV-based registration method can robustly align US and MR images of the liver. With further acceleration, the registration framework could be applied in time-sensitive clinical settings, such as US-MR image registration in image-guided surgery.
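The descriptor construction described above can be pictured with a short sketch: patch distances at symmetrically opposite offsets, normalized so the vector reflects local structure rather than absolute intensity, plus a local-standard-deviation texture weight. This is our own simplified 2D illustration of the idea; the offsets, radii, and function names are illustrative and not taken from the paper.

```python
import numpy as np

def _patch(img, y, x, r):
    """Square patch of radius r centered at (y, x)."""
    return img[y - r:y + r + 1, x - r:x + r + 1].astype(float)

def self_similarity_vector(img, y, x, r=1, d=2):
    """Descriptor at (y, x): sum-of-squared-difference distances between
    patches at symmetrically opposite offsets in the neighborhood,
    normalized to [0, 1]."""
    offsets = [(0, d), (d, 0), (d, d), (d, -d)]   # each compared to its mirror
    dists = []
    for dy, dx in offsets:
        p_fwd = _patch(img, y + dy, x + dx, r)
        p_bwd = _patch(img, y - dy, x - dx, r)
        dists.append(np.sum((p_fwd - p_bwd) ** 2))
    v = np.asarray(dists)
    return v / (v.max() + 1e-12)

def texture_weight(img, y, x, r=1):
    """Local standard deviation, used in the paper's metric to weight
    neighborhoods and reduce the influence of speckle interference."""
    return _patch(img, y, x, r).std()
```

A similarity metric can then compare descriptor vectors between the US and MR images point by point, scaled by the texture weight, with the registration optimizer maximizing the aggregate agreement.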
Affiliation(s)
- Yifan Wang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, PR China
- Tianyu Fu
- School of Medical Technology, Beijing Institute of Technology, Beijing, 100081, PR China.
- Chan Wu
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, PR China
- Jian Xiao
- School of Medical Technology, Beijing Institute of Technology, Beijing, 100081, PR China
- Jingfan Fan
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, PR China
- Hong Song
- School of Software, Beijing Institute of Technology, Beijing, 100081, PR China
- Ping Liang
- Department of Interventional Ultrasound, Chinese PLA General Hospital, Beijing, 100853, PR China.
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, PR China.
3
Jamal A, Yuan T, Galvan S, Castellano A, Riva M, Secoli R, Falini A, Bello L, Rodriguez y Baena F, Dini D. Insights into Infusion-Based Targeted Drug Delivery in the Brain: Perspectives, Challenges and Opportunities. Int J Mol Sci 2022; 23:3139. PMID: 35328558; PMCID: PMC8949870; DOI: 10.3390/ijms23063139.
Abstract
Targeted drug delivery in the brain is instrumental in the treatment of lethal brain diseases, such as glioblastoma multiforme, the most aggressive primary central nervous system tumour in adults. Infusion-based drug delivery techniques, which administer drugs directly to the tissue for local treatment, as in convection-enhanced delivery (CED), provide an important opportunity; however, poor understanding of the pressure-driven drug transport mechanisms in the brain has hindered their ultimate success in clinical applications. In this review, we focus on the biomechanical and biochemical aspects of infusion-based targeted drug delivery in the brain and look into the underlying molecular-level mechanisms. We discuss recent advances and challenges in the complementary field of medical robotics and its use in targeted drug delivery in the brain. A critical overview of current research in these areas and their clinical implications is provided. This review delivers new ideas and perspectives for further studies of targeted drug delivery in the brain.
Affiliation(s)
- Asad Jamal
- Department of Mechanical Engineering, Imperial College London, London SW7 2AZ, UK
- Tian Yuan
- Department of Mechanical Engineering, Imperial College London, London SW7 2AZ, UK
- Stefano Galvan
- Department of Mechanical Engineering, Imperial College London, London SW7 2AZ, UK
- Antonella Castellano
- Vita-Salute San Raffaele University, 20132 Milan, Italy
- Neuroradiology Unit and CERMAC, IRCCS Ospedale San Raffaele, 20132 Milan, Italy
- Marco Riva
- Department of Medical Biotechnology and Translational Medicine, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Riccardo Secoli
- Department of Mechanical Engineering, Imperial College London, London SW7 2AZ, UK
- Andrea Falini
- Vita-Salute San Raffaele University, 20132 Milan, Italy
- Neuroradiology Unit and CERMAC, IRCCS Ospedale San Raffaele, 20132 Milan, Italy
- Lorenzo Bello
- Department of Oncology and Hematology-Oncology, Università degli Studi di Milano, 20122 Milan, Italy
- Ferdinando Rodriguez y Baena
- Department of Mechanical Engineering, Imperial College London, London SW7 2AZ, UK
- Daniele Dini
- Department of Mechanical Engineering, Imperial College London, London SW7 2AZ, UK
4
Soleimani M, Aghagolzadeh A, Ezoji M. Symmetry-based representation for registration of multimodal images. Med Biol Eng Comput 2022; 60:1015-1032. PMID: 35171412; DOI: 10.1007/s11517-022-02515-1.
Abstract
We propose a new two-dimensional structural representation method for registration of multimodal images that uses the local structural symmetry of images, which is similar across modalities. The symmetry is measured in various orientations, and the best orientation is mapped and used to form the representation image. The optimum performance is obtained when using only two different orientations, which we call the binary dominant symmetry representation (BDSR). This representation is highly robust to noise and intensity non-uniformity. We also propose a new objective function based on the L2 distance with low sensitivity to the overlapping region. Five different meta-heuristic algorithms are then comparatively applied, two of which are used for the first time in image registration. BDSR remarkably outperforms previous successful representations, such as entropy images, self-similarity context, and the modality-independent local binary pattern, as well as mutual information-based registration, in terms of success rate, runtime, convergence error, and representation construction.
Affiliation(s)
- Mojtaba Soleimani
- Faculty of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Babol, Iran
- Ali Aghagolzadeh
- Faculty of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Babol, Iran.
- Mehdi Ezoji
- Faculty of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Babol, Iran
5
Tukra S, Lidströmer N, Ashrafian H, Giannarou S. AI in Surgical Robotics. Artif Intell Med 2022. DOI: 10.1007/978-3-030-64573-1_323.
6
Ha IY, Heinrich MP. Modality-agnostic self-supervised deep feature learning and fast instance optimisation for multimodal fusion in ultrasound-guided interventions. Comput Methods Programs Biomed 2021; 211:106374. PMID: 34601186; DOI: 10.1016/j.cmpb.2021.106374.
Abstract
BACKGROUND AND OBJECTIVE Fast and robust alignment of pre-operative MRI planning scans to intra-operative ultrasound is an important aspect of automatically supporting image-guided interventions. Thus far, learning-based approaches have failed to tackle the intertwined objectives of fast inference time and robustness to unexpectedly large motion and misalignment. In this work, we propose a novel method that decouples deep feature learning and the computation of long-ranging local displacement probability maps from fast and robust global transformation prediction. METHODS In our approach, we first train a convolutional neural network (CNN) to extract modality-agnostic features, with sub-second computation times for both 3D volumes during inference. Using sparsity-based network weight pruning, the model complexity and computation times can be substantially reduced. Based on these features, a large discretized search range of 3D motion vectors is explored to compute a probabilistic displacement map for each control point. These 3D probability maps are employed in our newly proposed, computationally efficient instance optimisation that robustly estimates, subject to outlier rejection, the most likely global linear transformation that best reflects the local displacement beliefs. RESULTS Our experimental validation demonstrates state-of-the-art accuracy on the challenging CuRIOUS dataset, with an average target registration error of 2.50 mm, a model size of only 1.2 MByte, and run times of approximately 3 seconds for a full 3D multimodal registration. CONCLUSION We show that a significant improvement in accuracy and robustness can be gained with instance optimisation, and that our fast self-supervised deep learning model can achieve state-of-the-art accuracy on a challenging registration task in only 3 seconds.
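The final step, turning per-control-point displacement beliefs into one global transform, can be pictured with a small sketch. This is not the authors' code: we substitute a plain soft-argmax expectation over candidate displacements and a weighted Kabsch fit for the rigid case, which captures the spirit of a least-squares instance optimisation where per-point weights could stand in for outlier rejection (e.g. via iterative reweighting).

```python
import numpy as np

def expected_displacements(prob_maps, candidate_disps):
    """Soft-argmax: expected 3D displacement of each control point from its
    discrete probability map over a shared set of candidate motion vectors.
    prob_maps: (N, K) with rows summing to 1; candidate_disps: (K, 3)."""
    return prob_maps @ candidate_disps

def fit_rigid(src, dst, weights):
    """Weighted least-squares rigid transform (Kabsch) mapping src -> dst.
    Down-weighting a point is a simple stand-in for outlier rejection."""
    w = weights / weights.sum()
    c_src = (w[:, None] * src).sum(axis=0)       # weighted centroids
    c_dst = (w[:, None] * dst).sum(axis=0)
    H = (w[:, None] * (src - c_src)).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

Control points plus their expected displacements give the point pairs (`src`, `dst = src + displacement`); fitting then yields the global linear alignment that best reflects the local beliefs.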
Affiliation(s)
- In Young Ha
- Institute of Medical Informatics, University of Luebeck, Ratzeburger Allee 160, 23564 Luebeck, Germany
- Mattias P Heinrich
- Institute of Medical Informatics, University of Luebeck, Ratzeburger Allee 160, 23564 Luebeck, Germany.
7
DDV: A Taxonomy for Deep Learning Methods in Detecting Prostate Cancer. Neural Process Lett 2021. DOI: 10.1007/s11063-021-10485-y.
8
Riva M, Hiepe P, Frommert M, Divenuto I, Gay LG, Sciortino T, Nibali MC, Rossi M, Pessina F, Bello L. Intraoperative Computed Tomography and Finite Element Modelling for Multimodal Image Fusion in Brain Surgery. Oper Neurosurg (Hagerstown) 2021; 18:531-541. PMID: 31342073; DOI: 10.1093/ons/opz196.
Abstract
BACKGROUND Intraoperative computed tomography (iCT) and advanced image fusion algorithms could improve the management of brain shift and navigation accuracy. OBJECTIVE To evaluate the performance of an iCT-based fusion algorithm using clinical data. METHODS Ten patients with brain tumors were enrolled, and preoperative MRI was acquired. iCT was applied at the end of microsurgical resection. Elastic image fusion of the preoperative MRI to the iCT data was performed by deformable fusion employing a biomechanical simulation based on a finite element model. Fusion accuracy was evaluated: the target registration error (TRE, mm) was measured for rigid and elastic fusion (Rf and Ef), and anatomical landmark pairs were divided into test and control structures according to their distinct involvement in brain shift. Intraoperative points describing the stereotactic position of the brain were also acquired, and a qualitative evaluation of the adaptive morphing of the preoperative MRI was performed by 5 observers. RESULTS The mean TRE for control and test structures with Rf was 1.81 ± 1.52 and 5.53 ± 2.46 mm, respectively. No significant change was observed when applying Ef to control structures; the test structures showed a reduced TRE of 3.34 ± 2.10 mm after Ef (P < .001). A 32% average gain (range 9%-54%) in image registration accuracy was recorded. The morphed MRI showed robust matching with the iCT scans and intraoperative stereotactic points. CONCLUSIONS The evaluated method increased the registration accuracy of preoperative MRI and iCT data. The iCT-based non-linear morphing of the preoperative MRI can potentially enhance the consistency of neuronavigation intraoperatively.
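Target registration error, the accuracy metric used throughout these studies, is simply the residual distance between corresponding landmarks after applying the estimated transform. A minimal sketch (our own illustration, not the study's software):

```python
import numpy as np

def target_registration_error(fixed_pts, moving_pts, transform):
    """Mean and standard deviation (in the units of the points, e.g. mm)
    of the Euclidean distances between each fixed landmark and its
    transformed moving counterpart."""
    mapped = np.asarray([transform(p) for p in moving_pts])
    dists = np.linalg.norm(np.asarray(fixed_pts) - mapped, axis=1)
    return dists.mean(), dists.std()

# Toy check: a 2 mm residual translation left uncorrected gives a mean TRE of 2 mm.
fixed = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
moving = fixed + np.array([2.0, 0.0, 0.0])
mean_tre, std_tre = target_registration_error(fixed, moving, lambda p: p)
```

In the study above, landmark pairs were split into control and test structures and this statistic was computed separately for rigid and elastic fusion.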
Affiliation(s)
- Marco Riva
- Department of Medical Biotechnology and Translational Medicine, Università degli Studi di Milano, Milan, Italy
- Unit of Oncological Neurosurgery, Humanitas Clinical and Research Center - IRCCS, Rozzano, Italy
- Ignazio Divenuto
- Unit of Neuroradiology, Humanitas Clinical and Research Center - IRCCS, Rozzano, Italy
- Lorenzo G Gay
- Unit of Oncological Neurosurgery, Humanitas Clinical and Research Center - IRCCS, Rozzano, Italy
- Tommaso Sciortino
- Unit of Oncological Neurosurgery, Humanitas Clinical and Research Center - IRCCS, Rozzano, Italy
- Marco Conti Nibali
- Unit of Oncological Neurosurgery, Humanitas Clinical and Research Center - IRCCS, Rozzano, Italy
- Marco Rossi
- Unit of Oncological Neurosurgery, Humanitas Clinical and Research Center - IRCCS, Rozzano, Italy
- Federico Pessina
- Unit of Oncological Neurosurgery, Humanitas Clinical and Research Center - IRCCS, Rozzano, Italy
- Department of Biomedical Sciences, Humanitas University, Rozzano, Italy
- Lorenzo Bello
- Unit of Oncological Neurosurgery, Humanitas Clinical and Research Center - IRCCS, Rozzano, Italy
- Department of Oncology and Hemato-oncology, Università degli Studi di Milano, Milan, Italy
9
Ivashchenko OV, Kuhlmann KFD, van Veen R, Pouw B, Kok NFM, Hoetjes NJ, Smit JN, Klompenhouwer EG, Nijkamp J, Ruers TJM. CBCT-based navigation system for open liver surgery: Accurate guidance toward mobile and deformable targets with a semi-rigid organ approximation and electromagnetic tracking of the liver. Med Phys 2021; 48:2145-2159. PMID: 33666243; PMCID: PMC8251891; DOI: 10.1002/mp.14825.
Abstract
Purpose The surgical navigation system that provides guidance throughout the surgery can facilitate safer and more radical liver resections, but such a system should also be able to handle organ motion. This work investigates the accuracy of intraoperative surgical guidance during open liver resection, with a semi-rigid organ approximation and electromagnetic tracking of the target area. Methods The suggested navigation technique incorporates a preoperative 3D liver model based on a diagnostic 4D MRI scan, intraoperative contrast-enhanced CBCT imaging, and electromagnetic (EM) tracking of the liver surface, as well as surgical instruments, by means of six degrees-of-freedom micro-EM sensors. Results The system was evaluated during surgeries on 35 patients and provided accurate and intuitive real-time visualization of liver anatomy and tumor location, confirmed by intraoperative checks on visible anatomical landmarks. Based on accuracy measurements verified by intraoperative CBCT, the system's average accuracy was 4.0 ± 3.0 mm, while the total surgical delay due to navigation stayed below 20 min. Conclusions The electromagnetic navigation system for open liver surgery developed in this work allows for accurate localization of liver lesions and critical anatomical structures surrounding the resection area, even when the liver is manipulated. However, further clinical integration of the method requires shortening the guidance-related surgical delay, which can be achieved by shifting to faster intraoperative imaging such as ultrasound. Our approach is adaptable to navigation on other mobile and deformable organs, and therefore may benefit various clinical applications.
Affiliation(s)
- Oleksandra V Ivashchenko
- Department of Surgical Oncology, The Netherlands Cancer Institute Antoni van Leeuwenhoek Hospital, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- Koert F D Kuhlmann
- Department of Surgical Oncology, The Netherlands Cancer Institute Antoni van Leeuwenhoek Hospital, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- Ruben van Veen
- Department of Surgical Oncology, The Netherlands Cancer Institute Antoni van Leeuwenhoek Hospital, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- Bas Pouw
- Department of Surgical Oncology, The Netherlands Cancer Institute Antoni van Leeuwenhoek Hospital, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- Niels F M Kok
- Department of Surgical Oncology, The Netherlands Cancer Institute Antoni van Leeuwenhoek Hospital, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- Nikie J Hoetjes
- Department of Surgical Oncology, The Netherlands Cancer Institute Antoni van Leeuwenhoek Hospital, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- Jasper N Smit
- Department of Surgical Oncology, The Netherlands Cancer Institute Antoni van Leeuwenhoek Hospital, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- Elisabeth G Klompenhouwer
- Department of Radiology, The Netherlands Cancer Institute Antoni van Leeuwenhoek Hospital, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- Jasper Nijkamp
- Department of Surgical Oncology, The Netherlands Cancer Institute Antoni van Leeuwenhoek Hospital, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- Theodoor J M Ruers
- Department of Surgical Oncology, The Netherlands Cancer Institute Antoni van Leeuwenhoek Hospital, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- Faculty of Science and Technology (TNW), University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
10
Reinertsen I, Collins DL, Drouin S. The Essential Role of Open Data and Software for the Future of Ultrasound-Based Neuronavigation. Front Oncol 2021; 10:619274. PMID: 33604299; PMCID: PMC7884817; DOI: 10.3389/fonc.2020.619274.
Abstract
With the recent developments in machine learning and modern graphics processing units (GPUs), there is a marked shift in the way intra-operative ultrasound (iUS) images can be processed and presented during surgery. Real-time processing of images to highlight important anatomical structures, combined with in-situ display, has the potential to greatly facilitate the acquisition and interpretation of iUS images when guiding an operation. In order to take full advantage of the recent advances in machine learning, large amounts of high-quality annotated training data are necessary to develop and validate the algorithms. To ensure efficient collection of a sufficient number of patient images and external validity of the models, training data should be collected at several centers by different neurosurgeons and stored in a standard format directly compatible with the most commonly used machine learning toolkits and libraries. In this paper, we argue that such an effort to collect and organize large-scale multi-center datasets should be based on common open-source software and databases. We first describe the development of existing open-source ultrasound-based neuronavigation systems and how these systems have contributed to enhanced neurosurgical guidance over the last 15 years. We review the impact of the large number of projects worldwide that have benefited from the publicly available datasets "Brain Images of Tumors for Evaluation" (BITE) and "Retrospective evaluation of Cerebral Tumors" (RESECT), which include MR and US data from brain tumor cases. We also describe the need for continuous data collection and how this effort can be organized through the use of a well-adapted and user-friendly open-source software platform that integrates both continually improved guidance and automated data collection functionalities.
Affiliation(s)
- Ingerid Reinertsen
- Department of Health Research, SINTEF Digital, Trondheim, Norway
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- D Louis Collins
- NIST Laboratory, McConnell Brain Imaging Center, Montreal Neurological Institute and Hospital, McGill University, Montréal, QC, Canada
- Simon Drouin
- Laboratoire Multimédia, École de Technologie Supérieure, Montréal, QC, Canada
11
Tukra S, Lidströmer N, Ashrafian H, Giannarou S. AI in Surgical Robotics. Artif Intell Med 2021. DOI: 10.1007/978-3-030-58080-3_323-1.
12
Lavenir L, Zemiti N, Akkari M, Subsol G, Venail F, Poignet P. HFUS Imaging of the Cochlea: A Feasibility Study for Anatomical Identification by Registration with MicroCT. Ann Biomed Eng 2020; 49:1308-1317. PMID: 33128180; DOI: 10.1007/s10439-020-02671-1.
Abstract
Cochlear implantation consists of electrically stimulating the auditory nerve by inserting an electrode array inside the cochlea, a bony structure of the inner ear. In the absence of any visual feedback, the insertion in many cases results in damage to the internal structures. This paper presents a feasibility study on intraoperative imaging and identification of cochlear structures with high-frequency ultrasound (HFUS). Six ex-vivo guinea pig cochleae were imaged with both US and micro-computed tomography (µCT), which we refer to as the intraoperative and preoperative modalities, respectively. For each sample, registration based on US simulated from the µCT scan was performed to allow precise matching between the visible structures. According to two otologists, the procedure led to a target registration error of 0.32 ± 0.05 mm. By referring to a better preoperative anatomical representation, we were able to intraoperatively identify the modiolus and both the scala vestibuli and scala tympani, and to deduce the location of the basilar membrane, all of which is of great interest for cochlear implantation. Our main objective is to extend this procedure to the human case and thus provide a new tool for inner ear surgery.
Affiliation(s)
- Lucas Lavenir
- LIRMM, University of Montpellier, CNRS, Montpellier, France
- Nabil Zemiti
- LIRMM, University of Montpellier, CNRS, Montpellier, France.
- Mohamed Akkari
- Department of ENT and Head and Neck Surgery, University Hospital Gui de Chauliac, University of Montpellier, Montpellier, France
- Gérard Subsol
- LIRMM, University of Montpellier, CNRS, Montpellier, France
- Frédéric Venail
- Department of ENT and Head and Neck Surgery, University Hospital Gui de Chauliac, University of Montpellier, Montpellier, France
- Institute for Neurosciences of Montpellier, INSERM U105, Montpellier, France
13
Ahmadi SA, Bötzel K, Levin J, Maiostre J, Klein T, Wein W, Rozanski V, Dietrich O, Ertl-Wagner B, Navab N, Plate A. Analyzing the co-localization of substantia nigra hyper-echogenicities and iron accumulation in Parkinson's disease: A multi-modal atlas study with transcranial ultrasound and MRI. Neuroimage Clin 2020; 26:102185. PMID: 32050136; PMCID: PMC7013333; DOI: 10.1016/j.nicl.2020.102185.
Abstract
Highlights
- Volumetric 3D analysis of hyper-echogenicities from transcranial ultrasound (TCS).
- First multi-modal analysis of TCS and QSM-MRI in Parkinson's disease.
- Computation of TCS-MRI registration and a novel multi-modal anatomical template.
- TCS hyper-echogenicities are co-localized with QSM iron accumulations.
- Co-localizations occur in the SNc and VTA, but nowhere else in the midbrain.
Background Transcranial B-mode sonography (TCS) can detect hyperechogenic speckles in the area of the substantia nigra (SN) in Parkinson's disease (PD). These speckles correlate with iron accumulation in the SN tissue, but their exact volumetric localization in and around the SN is still unknown. Areas of increased iron content in brain tissue can be detected in vivo with magnetic resonance imaging using quantitative susceptibility mapping (QSM). Methods In this work, we i) acquire, co-register, and transform TCS and QSM imaging from a cohort of 23 PD patients and 27 healthy control subjects into a normalized atlas template space and ii) analyze and compare the 3D spatial distributions of iron accumulation in the midbrain, as detected by a signal increase (TCS+ and QSM+) in both modalities. Results We achieved sufficiently accurate intra-modal target registration errors (TRE < 1 mm) for all MRI volumes and multi-modal TCS-MRI co-localization (TRE < 4 mm) for 66.7% of TCS scans. In the caudal part of the midbrain, enlarged TCS+ and QSM+ areas were located within the SN pars compacta in PD patients in comparison to healthy controls. More cranially, overlapping TCS+ and QSM+ areas in PD subjects were found in the region of the ventral tegmental area (VTA). Conclusion Our findings are concordant with several QSM-based studies on iron-related alterations in the SN pars compacta. They substantiate that TCS+ is an indicator of iron accumulation in Parkinson's disease within and in the vicinity of the SN. Furthermore, they are in favor of an involvement of the VTA, and thereby the mesolimbic system, in Parkinson's disease.
Affiliation(s)
- Seyed-Ahmad Ahmadi
- Department of Neurology, Ludwig-Maximilians University, Marchioninistraße 15, Munich 81377, Germany; German Center for Vertigo and Balance Disorders (DSGZ), Ludwig-Maximilians University, Marchioninistraße 15, Munich 81377, Germany; Chair for Computer Aided Medical Procedures (CAMP), Technical University of Munich, Boltzmannstr. 3, Garching 85748, Germany
- Kai Bötzel
- Department of Neurology, Ludwig-Maximilians University, Marchioninistraße 15, Munich 81377, Germany
- Johannes Levin
- Department of Neurology, Ludwig-Maximilians University, Marchioninistraße 15, Munich 81377, Germany
- Juliana Maiostre
- Department of Neurology, Ludwig-Maximilians University, Marchioninistraße 15, Munich 81377, Germany
- Wolfgang Wein
- ImFusion GmbH, Agnes-Pockels-Bogen 1, München 80992, Germany
- Olaf Dietrich
- Department of Radiology, Ludwig-Maximilians University, Marchioninistr. 15, Munich 81377, Germany
- Birgit Ertl-Wagner
- Department of Radiology, Ludwig-Maximilians University, Marchioninistr. 15, Munich 81377, Germany; The Hospital for Sick Children, 555 University Avenue, Toronto, Ontario M5G 1X8, Canada
- Nassir Navab
- Chair for Computer Aided Medical Procedures (CAMP), Technical University of Munich, Boltzmannstr. 3, Garching 85748, Germany
- Annika Plate
- Department of Neurology, Ludwig-Maximilians University, Marchioninistraße 15, Munich 81377, Germany.
14
Vercauteren T, Unberath M, Padoy N, Navab N. CAI4CAI: The Rise of Contextual Artificial Intelligence in Computer Assisted Interventions. Proc IEEE Inst Electr Electron Eng 2020; 108:198-214. [PMID: 31920208 PMCID: PMC6952279 DOI: 10.1109/jproc.2019.2946993] [Citation(s) in RCA: 41] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/01/2019] [Revised: 09/12/2019] [Accepted: 10/04/2019] [Indexed: 05/10/2023]
Abstract
Data-driven computational approaches have evolved to enable extraction of information from medical images with a reliability, accuracy and speed that are already transforming their interpretation and exploitation in clinical practice. While similar benefits are longed for in the field of interventional imaging, this ambition is challenged by much greater heterogeneity. Clinical workflows within interventional suites and operating theatres are extremely complex and typically rely on poorly integrated intra-operative devices, sensors, and support infrastructures. Taking stock of some of the most exciting developments in machine learning and artificial intelligence for computer assisted interventions, we highlight the crucial need to take context and human factors into account in order to address these challenges. Contextual artificial intelligence for computer assisted intervention, or CAI4CAI, arises as an emerging opportunity feeding into the broader field of surgical data science. Central challenges being addressed in CAI4CAI include how to integrate the ensemble of prior knowledge and instantaneous sensory information from experts, sensors and actuators; how to create and communicate a faithful and actionable shared representation of the surgery among a mixed human-AI actor team; and how to design interventional systems and associated cognitive shared control schemes for online uncertainty-aware collaborative decision making, ultimately producing more precise and reliable interventions.
Affiliation(s)
- Tom Vercauteren
- School of Biomedical Engineering & Imaging Sciences, King's College London, London WC2R 2LS, U.K.
- Mathias Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Nicolas Padoy
- ICube institute, CNRS, IHU Strasbourg, University of Strasbourg, 67081 Strasbourg, France
- Nassir Navab
- Fakultät für Informatik, Technische Universität München, 80333 Munich, Germany
15
Machado I, Toews M, George E, Unadkat P, Essayed W, Luo J, Teodoro P, Carvalho H, Martins J, Golland P, Pieper S, Frisken S, Golby A, Wells W III, Ou Y. Deformable MRI-Ultrasound registration using correlation-based attribute matching for brain shift correction: Accuracy and generality in multi-site data. Neuroimage 2019; 202:116094. [PMID: 31446127 PMCID: PMC6819249 DOI: 10.1016/j.neuroimage.2019.116094] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2019] [Revised: 07/18/2019] [Accepted: 08/09/2019] [Indexed: 11/16/2022] Open
Abstract
Intraoperative tissue deformation, known as brain shift, decreases the benefit of using preoperative images to guide neurosurgery. Non-rigid registration of preoperative magnetic resonance (MR) to intraoperative ultrasound (iUS) has been proposed as a means to compensate for brain shift. We focus on the initial registration from MR to predurotomy iUS. We present a method that builds on previous work to address the need for accuracy and generality of MR-iUS registration algorithms in multi-site clinical data. High-dimensional texture attributes were used instead of image intensities for image registration, and the standard difference-based attribute matching was replaced with correlation-based attribute matching. A strategy that deals explicitly with the large field-of-view mismatch between MR and iUS images was proposed. Key parameters were optimized across independent MR-iUS brain tumor datasets acquired at 3 institutions, with a total of 43 tumor patients and 758 reference landmarks for evaluating the accuracy of the proposed algorithm. Despite differences in imaging protocols, patient demographics and landmark distributions, the algorithm is able to reduce landmark errors prior to registration in the three datasets (5.37±4.27, 4.18±1.97 and 6.18±3.38 mm, respectively) to a consistently low level (2.28±0.71, 2.08±0.37 and 2.24±0.78 mm, respectively). The algorithm was tested against 15 other algorithms and is competitive with the state-of-the-art on multiple datasets. We show that the algorithm has one of the lowest errors in all datasets (accuracy), achieved while keeping a fixed set of parameters for multi-site data (generality). In contrast, other algorithms/tools of similar performance need per-dataset parameter tuning (high accuracy but lower generality), and those that keep fixed parameters have larger errors or inconsistent performance (generality but not the top accuracy). Landmark errors were further characterized according to brain regions and tumor types, a topic so far missing in the literature.
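The advantage of correlation-based over difference-based attribute matching described in this abstract can be illustrated with a minimal numpy sketch (the function name is hypothetical and the paper's attributes are high-dimensional texture descriptors, not raw intensities): Pearson correlation is invariant to gain and offset changes between modalities, whereas a plain sum-of-squared-differences criterion is not.

```python
import numpy as np

def attribute_correlation(a, b):
    # Pearson correlation between two attribute vectors: invariant to
    # gain/offset intensity changes, unlike a plain difference (SSD).
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:
        return 0.0
    return float(np.dot(a, b) / denom)

# Correlation is unchanged by a gain/offset transform, while SSD is not:
v = np.array([1.0, 2.0, 3.0, 4.0])
w = 2.0 * v + 5.0
print(attribute_correlation(v, w))  # ≈ 1.0 despite the intensity change
```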
Affiliation(s)
- Inês Machado
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Mechanical Engineering, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal.
- Matthew Toews
- Department of Systems Engineering, École de Technologie Supérieure, Montreal, Canada
- Elizabeth George
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Prashin Unadkat
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Walid Essayed
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Jie Luo
- Graduate School of Frontier Sciences, University of Tokyo, Tokyo, Japan
- Pedro Teodoro
- Escola Superior Náutica Infante D. Henrique, Lisbon, Portugal
- Herculano Carvalho
- Department of Neurosurgery, Hospital de Santa Maria, CHLN, Lisbon, Portugal
- Jorge Martins
- Department of Mechanical Engineering, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal
- Polina Golland
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- Steve Pieper
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Isomics, Inc., Cambridge, MA, USA
- Sarah Frisken
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Alexandra Golby
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- William Wells III
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- Yangming Ou
- Department of Pediatrics and Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA.
16
Frisken S, Luo M, Juvekar P, Bunevicius A, Machado I, Unadkat P, Bertotti MM, Toews M, Wells WM, Miga MI, Golby AJ. A comparison of thin-plate spline deformation and finite element modeling to compensate for brain shift during tumor resection. Int J Comput Assist Radiol Surg 2019; 15:75-85. [PMID: 31444624 DOI: 10.1007/s11548-019-02057-2] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2019] [Accepted: 08/14/2019] [Indexed: 10/26/2022]
Abstract
PURPOSE Brain shift during tumor resection can progressively invalidate the accuracy of neuronavigation systems and affect neurosurgeons' ability to achieve optimal resections. This paper compares two methods that have been presented in the literature to compensate for brain shift: a thin-plate spline deformation model and a finite element method (FEM). For this comparison, both methods are driven by identical sparse data. Specifically, both methods are driven by displacements between automatically detected and matched feature points from intraoperative 3D ultrasound (iUS). Both methods have been shown to be fast enough for intraoperative brain shift correction (Machado et al. in Int J Comput Assist Radiol Surg 13(10):1525-1538, 2018; Luo et al. in J Med Imaging (Bellingham) 4(3):035003, 2017). However, the spline method requires no preprocessing and ignores physical properties of the brain while the FEM method requires significant preprocessing and incorporates patient-specific physical and geometric constraints. The goal of this work was to explore the relative merits of these methods on recent clinical data. METHODS Data acquired during 19 sequential tumor resections in Brigham and Women's Hospital's Advanced Multi-modal Image-Guided Operating Suite between December 2017 and October 2018 were considered for this retrospective study. Of these, 15 cases and a total of 24 iUS to iUS image pairs met inclusion requirements. Automatic feature detection (Machado et al. in Int J Comput Assist Radiol Surg 13(10):1525-1538, 2018) was used to detect and match features in each pair of iUS images. Displacements between matched features were then used to drive both the spline model and the FEM method to compensate for brain shift between image acquisitions. The accuracies of the resultant deformation models were measured by comparing the displacements of manually identified landmarks before and after deformation. 
RESULTS The initial subcortical registration error between preoperative MRI and the first iUS image averaged 5.3 ± 0.75 mm. The mean subcortical brain shift, measured using displacements between manually identified landmarks in pairs of iUS images, was 2.5 ± 1.3 mm. Our results showed that FEM was able to reduce subcortical registration error by a small but statistically significant amount (from 2.46 to 2.02 mm). A large variability in the results of the spline method prevented us from demonstrating either a statistically significant reduction in subcortical registration error after applying the spline method or a statistically significant difference between the results of the two methods. CONCLUSIONS In this study, we observed less subcortical brain shift than has previously been reported in the literature (Frisken et al., in: Miller (ed) Biomechanics of the brain, Springer, Cham, 2019). This may be because we separated out the initial misregistration between preoperative MRI and the first iUS image from our brain shift measurements, or because of modern neurosurgical practices designed to reduce brain shift, including reduced craniotomy sizes and better control of intracranial pressure with the use of mannitol and other medications. It appears that the FEM method and its use of geometric and biomechanical constraints provided more consistent brain shift correction and better correction farther from the driving feature displacements than the simple spline model. The spline-based method was simpler and tended to give better results for small deformations. However, large variability in the spline results and relatively small brain shift prevented this study from demonstrating a statistically significant difference between the results of the two methods.
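The spline side of this comparison can be written self-contained in a few lines of numpy. This is only an illustrative sketch under simplifying assumptions (2D, scalar displacements at four control points, no biomechanical constraints), not the authors' implementation: a thin-plate spline interpolates sparse feature displacements exactly while staying maximally smooth in between.

```python
import numpy as np

def tps_kernel(r):
    # Thin-plate radial basis U(r) = r^2 log r, with U(0) = 0 by continuity.
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz] ** 2 * np.log(r[nz])
    return out

def tps_fit(ctrl, vals):
    # Solve [[K, P], [P^T, 0]] [w; a] = [vals; 0] for a 2D thin-plate
    # spline interpolating the scalar values vals at the control points.
    n = len(ctrl)
    dist = np.linalg.norm(ctrl[:, None, :] - ctrl[None, :, :], axis=-1)
    K = tps_kernel(dist)
    P = np.hstack([np.ones((n, 1)), ctrl])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    rhs = np.concatenate([vals, np.zeros(3)])
    coef = np.linalg.solve(A, rhs)
    return coef[:n], coef[n:]          # radial weights w, affine part a

def tps_eval(ctrl, w, a, pts):
    dist = np.linalg.norm(pts[:, None, :] - ctrl[None, :, :], axis=-1)
    return tps_kernel(dist) @ w + a[0] + pts @ a[1:]

# Hypothetical x-displacements measured at four sparse feature points.
ctrl = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dx = np.array([0.0, 1.0, 0.0, 1.0])
w, a = tps_fit(ctrl, dx)
print(tps_eval(ctrl, w, a, ctrl))      # reproduces dx at the control points
```

In practice one spline per displacement component is fitted; the FEM alternative replaces this purely geometric smoothness with patient-specific physical constraints.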
Affiliation(s)
- Sarah Frisken
- Department of Radiology, Brigham and Women's Hospital, Boston, MA, USA.
- Ma Luo
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA
- Parikshit Juvekar
- Department of Neurosurgery, Brigham and Women's Hospital, Boston, MA, USA
- Adomas Bunevicius
- Department of Neurosurgery, Brigham and Women's Hospital, Boston, MA, USA
- Ines Machado
- Instituto Superior Tecnico, Universidade de Lisboa, Lisbon, Portugal
- Prashin Unadkat
- Department of Neurosurgery, Brigham and Women's Hospital, Boston, MA, USA
- Melina M Bertotti
- Department of Neurosurgery, Brigham and Women's Hospital, Boston, MA, USA
- Matt Toews
- Département de Génie des Systèmes, École de Technologie Supérieure, Montreal, Canada
- William M Wells
- Department of Radiology, Brigham and Women's Hospital, Boston, MA, USA; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Michael I Miga
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA; Department of Neurological Surgery, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Radiology and Radiological Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Institute for Surgery and Engineering, Vanderbilt University, Nashville, TN, USA
- Alexandra J Golby
- Department of Radiology, Brigham and Women's Hospital, Boston, MA, USA; Department of Neurosurgery, Brigham and Women's Hospital, Boston, MA, USA
17
Banerjee J, Sun Y, Klink C, Gahrmann R, Niessen WJ, Moelker A, van Walsum T. Multiple-correlation similarity for block-matching based fast CT to ultrasound registration in liver interventions. Med Image Anal 2019; 53:132-141. [PMID: 30772666 DOI: 10.1016/j.media.2019.02.003] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2018] [Revised: 01/23/2019] [Accepted: 02/07/2019] [Indexed: 11/24/2022]
Abstract
In this work we present a fast approach to registration of computed tomography to ultrasound volumes for image-guided intervention applications. The method is based on a combination of block-matching and outlier rejection. The block-matching uses a correlation-based multimodal similarity metric that takes the intensity and gradient of the computed tomography images, together with the ultrasound volumes, as input to find correspondences between blocks in the computed tomography and the ultrasound volumes. A variance- and octree-based feature point-set selection method is used to select distinct and evenly spread point locations for block-matching. Geometric consistency and smoothness criteria are imposed in an outlier rejection step to refine the block-matching results. The block-matching results after outlier rejection are used to determine the affine transformation between the computed tomography and the ultrasound volumes. Various experiments are carried out to assess the optimal performance and the influence of parameters on the accuracy and computational time of the registration. A leave-one-patient-out cross-validation registration error of 3.6 mm is achieved over 29 datasets, acquired from 17 patients.
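The block-matching step can be illustrated with a toy 2D, single-channel sketch (hypothetical names; the paper works in 3D and correlates CT intensity and gradient channels against the ultrasound volume): for a block around a selected feature point, exhaustively search nearby displacements and keep the one maximizing normalized cross-correlation.

```python
import numpy as np

def ncc(a, b):
    # Normalized cross-correlation between two equally sized blocks.
    a = a - a.mean()
    b = b - b.mean()
    d = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / d) if d > 0 else 0.0

def match_block(fixed, moving, y, x, size, radius):
    # Exhaustive search for the displacement of one block: slide the
    # fixed-image block over a (2*radius+1)^2 neighborhood in the moving
    # image and keep the displacement with the highest NCC score.
    block = fixed[y:y + size, x:x + size]
    best, best_dv = -2.0, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            cand = moving[y + dy:y + dy + size, x + dx:x + dx + size]
            if cand.shape != block.shape:
                continue
            s = ncc(block, cand)
            if s > best:
                best, best_dv = s, (dy, dx)
    return best_dv, best

rng = np.random.default_rng(0)
moving = rng.random((64, 64))
fixed = np.roll(moving, (2, -3), axis=(0, 1))       # known shift
print(match_block(fixed, moving, 20, 20, 16, 5))    # → ((-2, 3), ~1.0)
```

The full method repeats this at many octree-selected points and fits an affine transform to the surviving matches after outlier rejection.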
Affiliation(s)
- Jyotirmoy Banerjee
- Biomedical Imaging Group Rotterdam, Departments of Radiology & Nuclear Medicine and Medical Informatics, Erasmus MC - University Medical Center Rotterdam, The Netherlands
- Yuanyuan Sun
- Biomedical Imaging Group Rotterdam, Departments of Radiology & Nuclear Medicine and Medical Informatics, Erasmus MC - University Medical Center Rotterdam, The Netherlands
- Camiel Klink
- Department of Radiology & Nuclear Medicine, Erasmus MC - University Medical Center Rotterdam, The Netherlands
- Renske Gahrmann
- Department of Radiology & Nuclear Medicine, Erasmus MC - University Medical Center Rotterdam, The Netherlands
- Wiro J Niessen
- Biomedical Imaging Group Rotterdam, Departments of Radiology & Nuclear Medicine and Medical Informatics, Erasmus MC - University Medical Center Rotterdam, The Netherlands; Quantitative Imaging Group, Faculty of Technical Physics, Delft University of Technology, The Netherlands
- Adriaan Moelker
- Department of Radiology & Nuclear Medicine, Erasmus MC - University Medical Center Rotterdam, The Netherlands
- Theo van Walsum
- Biomedical Imaging Group Rotterdam, Departments of Radiology & Nuclear Medicine and Medical Informatics, Erasmus MC - University Medical Center Rotterdam, The Netherlands.
18
Frisken S, Luo M, Machado I, Unadkat P, Juvekar P, Bunevicius A, Toews M, Wells WM, Miga MI, Golby AJ. Preliminary Results Comparing Thin Plate Splines with Finite Element Methods for Modeling Brain Deformation during Neurosurgery using Intraoperative Ultrasound. Proc SPIE Int Soc Opt Eng 2019; 10951:1095120. [PMID: 31000909 PMCID: PMC6467062 DOI: 10.1117/12.2512799] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Brain shift compensation attempts to model the deformation of the brain that occurs during surgical removal of brain tumors, to enable mapping of presurgical image data into patient coordinates during surgery and thus improve the accuracy and utility of neuro-navigation. We present preliminary results from clinical tumor resections that compare two methods for modeling brain deformation: a simple thin plate spline method that interpolates displacements, and a more complex finite element method (FEM) that models physical and geometric constraints of the brain and its material properties. Both methods are driven by the same set of displacements at locations surrounding the tumor. These displacements were derived from sets of corresponding matched features that were automatically detected using the SIFT-Rank algorithm. The deformation accuracy was tested using a set of manually identified landmarks. The FEM method requires significantly more preprocessing than the spline method, but both can be used to model deformations in the operating room in reasonable time frames. Our preliminary results indicate that the FEM deformation model significantly outperforms the spline-based approach in predicting the deformation at manual landmarks. While both methods compensate for brain shift, this work suggests that models that incorporate biophysics and geometric constraints may be more accurate.
Affiliation(s)
- S Frisken
- Department of Radiology, Brigham and Women's Hospital, Boston, MA
- M Luo
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN
- I Machado
- Instituto Superior Tecnico, Universidade de Lisboa, Lisbon, Portugal
- P Unadkat
- Department of Neurosurgery, Brigham and Women's Hospital, Boston, MA
- P Juvekar
- Department of Neurosurgery, Brigham and Women's Hospital, Boston, MA
- A Bunevicius
- Department of Neurosurgery, Brigham and Women's Hospital, Boston, MA
- M Toews
- Département de Génie des Systèmes, École de Technologie Supérieure, Montreal, Canada
- W M Wells
- Department of Radiology, Brigham and Women's Hospital, Boston, MA
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA
- M I Miga
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN
- Department of Neurological Surgery, Vanderbilt University Medical Center, Nashville, TN
- Department of Radiology and Radiological Sciences, Vanderbilt University Medical Center, Nashville, TN
- Vanderbilt Institute for Surgery and Engineering, Vanderbilt University, Nashville, TN
- A J Golby
- Department of Radiology, Brigham and Women's Hospital, Boston, MA
- Department of Neurosurgery, Brigham and Women's Hospital, Boston, MA
19
Image synthesis-based multi-modal image registration framework by using deep fully convolutional networks. Med Biol Eng Comput 2018; 57:1037-1048. [PMID: 30523534 DOI: 10.1007/s11517-018-1924-y] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2018] [Accepted: 10/30/2018] [Indexed: 10/27/2022]
Abstract
Multi-modal image registration has significant meaning in clinical diagnosis, treatment planning, and image-guided surgery. Since different modalities exhibit different characteristics, finding a fast and accurate correspondence between images of different modalities is still a challenge. In this paper, we propose an image synthesis-based multi-modal registration framework. Image synthesis is performed by a ten-layer fully convolutional network (FCN). The network is composed of 10 convolutional layers combined with batch normalization (BN) and rectified linear units (ReLU), and can be trained to learn an end-to-end mapping from one modality to the other. After cross-modality image synthesis, multi-modal registration is transformed into mono-modal registration, which can be solved by methods with lower computational complexity, such as the sum of squared differences (SSD). We tested our method on T1-weighted vs T2-weighted, T1-weighted vs PD, and T2-weighted vs PD image registrations with BrainWeb phantom data and IXI real patient data. The results show that our framework achieves higher registration accuracy than state-of-the-art multi-modal image registration methods such as local mutual information (LMI) and α-mutual information (α-MI). The average registration errors of our method in the experiment with IXI real patient data were 1.19, 2.23, and 1.57, compared to 1.53, 2.60, and 2.36 for LMI and 1.34, 2.39, and 1.76 for α-MI in T2-weighted vs PD, T1-weighted vs PD, and T1-weighted vs T2-weighted image registration, respectively. The deep FCN model developed to perform image synthesis for this framework can capture the complex nonlinear relationship between different modalities, discover complex structural representations automatically through a large number of trainable mapping parameters, and perform accurate image synthesis. The framework, combining the deep FCN model with mono-modal registration methods (SSD), achieves fast and robust multi-modal medical image registration. Graphical abstract: the workflow of the proposed multi-modal image registration framework.
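The payoff of the synthesis step is that registration reduces to a mono-modal criterion such as SSD. A toy numpy sketch under strong simplifying assumptions (integer translation only, circular boundary; the paper registers synthesized volumes with a full transform) shows the resulting search:

```python
import numpy as np

def ssd(a, b):
    # Sum of squared differences: a valid criterion once both images
    # are in the same (synthesized) modality.
    return float(((a - b) ** 2).sum())

def register_shift(fixed, moving, radius=4):
    # Exhaustive search for the integer shift minimizing SSD.
    best, best_dv = np.inf, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(moving, (dy, dx), axis=(0, 1))
            s = ssd(fixed, shifted)
            if s < best:
                best, best_dv = s, (dy, dx)
    return best_dv

rng = np.random.default_rng(1)
moving = rng.random((32, 32))
fixed = np.roll(moving, (3, -2), axis=(0, 1))  # known ground-truth shift
print(register_shift(fixed, moving))           # → (3, -2)
```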
20
Iversen DH, Wein W, Lindseth F, Unsgård G, Reinertsen I. Automatic Intraoperative Correction of Brain Shift for Accurate Neuronavigation. World Neurosurg 2018; 120:e1071-e1078. [DOI: 10.1016/j.wneu.2018.09.012] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2018] [Revised: 08/30/2018] [Accepted: 09/02/2018] [Indexed: 11/29/2022]
21
Boucher MA, Lippé S, Dupont C, Knoth IS, Lopez G, Shams R, El-Jalbout R, Damphousse A, Kadoury S. Computer-aided lateral ventricular and brain volume measurements in 3D ultrasound for assessing growth trajectories in newborns and neonates. Phys Med Biol 2018; 63:225012. [DOI: 10.1088/1361-6560/aaea85] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
22
Haskins G, Kruecker J, Kruger U, Xu S, Pinto PA, Wood BJ, Yan P. Learning deep similarity metric for 3D MR-TRUS image registration. Int J Comput Assist Radiol Surg 2018; 14:417-425. [PMID: 30382457 DOI: 10.1007/s11548-018-1875-7] [Citation(s) in RCA: 50] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2018] [Accepted: 10/14/2018] [Indexed: 11/26/2022]
Abstract
PURPOSE The fusion of transrectal ultrasound (TRUS) and magnetic resonance (MR) images for guiding targeted prostate biopsy has significantly improved the biopsy yield of aggressive cancers. A key component of MR-TRUS fusion is image registration. However, it is very challenging to obtain a robust automatic MR-TRUS registration due to the large appearance difference between the two imaging modalities. The work presented in this paper aims to tackle this problem by addressing two challenges: (i) the definition of a suitable similarity metric and (ii) the determination of a suitable optimization strategy. METHODS This work proposes the use of a deep convolutional neural network to learn a similarity metric for MR-TRUS registration. We also use a composite optimization strategy that explores the solution space in order to search for a suitable initialization for the second-order optimization of the learned metric. Further, a multi-pass approach is used in order to smooth the metric for optimization. RESULTS The learned similarity metric outperforms the classical mutual information and also the state-of-the-art MIND feature-based methods. The results indicate that the overall registration framework has a large capture range. The proposed deep similarity metric-based approach obtained a mean TRE of 3.86 mm (with an initial TRE of 16 mm) for this challenging problem. CONCLUSION A similarity metric that is learned using a deep neural network can be used to assess the quality of any given image registration and can be used in conjunction with the aforementioned optimization framework to perform automatic registration that is robust to poor initialization.
Affiliation(s)
- Grant Haskins
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, 12180, USA
- Uwe Kruger
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, 12180, USA
- Sheng Xu
- Center for Interventional Oncology, Radiology & Imaging Sciences, National Institutes of Health, Bethesda, MD, 20892, USA
- Peter A Pinto
- Center for Interventional Oncology, Radiology & Imaging Sciences, National Institutes of Health, Bethesda, MD, 20892, USA
- Brad J Wood
- Center for Interventional Oncology, Radiology & Imaging Sciences, National Institutes of Health, Bethesda, MD, 20892, USA
- Pingkun Yan
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, 12180, USA.
23
De Silva T, Uneri A, Zhang X, Ketcha M, Han R, Sheth N, Martin A, Vogt S, Kleinszig G, Belzberg A, Sciubba DM, Siewerdsen JH. Real-time, image-based slice-to-volume registration for ultrasound-guided spinal intervention. Phys Med Biol 2018; 63:215016. [PMID: 30372418 DOI: 10.1088/1361-6560/aae761] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
Real-time fusion of magnetic resonance (MR) and ultrasound (US) images could facilitate safe and accurate needle placement in spinal interventions. We develop an entirely image-based registration method (independent of or complementary to surgical trackers) that includes an efficient US probe pose initialization algorithm. The registration enables the simultaneous display of 2D ultrasound image slices relative to 3D pre-procedure MR images for navigation. A dictionary-based 3D-2D pose initialization algorithm was developed in which likely probe positions are predefined in a dictionary with feature encoding by Haar wavelet filters. Feature vectors representing the 2D US image are computed by scaling and translating multiple Haar basis filters to capture scale, location, and relative intensity patterns of distinct anatomical features. Following pose initialization, fast 3D-2D registration was performed by optimizing normalized cross-correlation between intra- and pre-procedure images using Powell's method. Experiments were performed using a lumbar puncture phantom and a fresh cadaver specimen presenting realistic image quality in spinal US imaging. Accuracy was quantified by comparing registration transforms to ground truth motion imparted by a computer-controlled motion system and calculating target registration error (TRE) in anatomical landmarks. Initialization using a 315-length feature vector yielded median translation accuracy of 2.7 mm (3.4 mm interquartile range, IQR) in the phantom and 2.1 mm (2.5 mm IQR) in the cadaver. By comparison, storing the entire image set in the dictionary and optimizing correlation yielded a comparable median accuracy of 2.1 mm (2.8 mm IQR) in the phantom and 2.9 mm (3.5 mm IQR) in the cadaver. However, the dictionary-based method reduced memory requirements by 47× compared to storing the entire image set. 
The overall 3D error after registration, measured using 3D landmarks, was 3.2 mm (1.8 mm IQR) in the phantom and 3.0 mm (2.3 mm IQR) in the cadaver. The system was implemented in a 3D Slicer interface to facilitate translation to clinical studies. Haar-feature-based initialization provided accuracy and robustness at a level sufficient for real-time registration using an entirely image-based method for ultrasound navigation. Such an approach could improve the accuracy and safety of spinal interventions in broad utilization, since it is entirely software-based and can operate free from the cost and workflow requirements of surgical trackers.
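The Haar-wavelet feature encoding used for pose initialization rests on the integral-image trick: any rectangle sum, and hence any multi-rectangle Haar filter response, costs a constant number of lookups. A minimal 2D sketch (hypothetical function names; the paper evaluates many such filters at multiple scales and translations to build its dictionary feature vectors):

```python
import numpy as np

def integral_image(img):
    # Summed-area table with a zero border row/column, so any rectangle
    # sum becomes four table lookups.
    return np.pad(img, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, y, x, h, w):
    # Sum of img[y:y+h, x:x+w] from the integral image in O(1).
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_2rect_vertical(ii, y, x, h, w):
    # Two-rectangle Haar feature: left half minus right half, capturing
    # a vertical intensity edge at this scale and position.
    half = w // 2
    return rect_sum(ii, y, x, h, half) - rect_sum(ii, y, x + half, h, half)

img = np.zeros((8, 8))
img[:, :4] = 1.0                             # bright left half
ii = integral_image(img)
print(haar_2rect_vertical(ii, 0, 0, 8, 8))   # 32.0: strong vertical edge
```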
Affiliation(s)
- T De Silva
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, United States of America
24
Favazza CP, Gorny KR, Callstrom MR, Kurup AN, Washburn M, Trester PS, Fowler CL, Hangiandreou NJ. Development of a robust MRI fiducial system for automated fusion of MR-US abdominal images. J Appl Clin Med Phys 2018; 19:261-270. [PMID: 29785834 PMCID: PMC6036384 DOI: 10.1002/acm2.12352] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2017] [Revised: 03/30/2018] [Accepted: 04/05/2018] [Indexed: 11/17/2022] Open
Abstract
We present the development of a two‐component magnetic resonance (MR) fiducial system, that is, a fiducial marker device combined with an auto‐segmentation algorithm, designed to be paired with existing ultrasound probe tracking and image fusion technology to automatically fuse MR and ultrasound (US) images. The fiducial device consisted of four ~6.4 mL cylindrical wells filled with 1 g/L copper sulfate solution. The algorithm was designed to automatically segment the device in clinical abdominal MR images. The algorithm's detection rate and repeatability were investigated through a phantom study and in human volunteers. The detection rate was 100% in all phantom and human images. The center‐of‐mass of the fiducial device was robustly identified with maximum variations of 2.9 mm in position and 0.9° in angular orientation. In volunteer images, average differences between algorithm‐measured inter‐marker spacings and actual separation distances were 0.53 ± 0.36 mm. “Proof‐of‐concept” automatic MR‐US fusions were conducted with sets of images from both a phantom and volunteer using a commercial prototype system, which was built based on the above findings. Image fusion accuracy was measured to be within 5 mm for breath‐hold scanning. These results demonstrate the capability of this approach to automatically fuse US and MR images acquired across a wide range of clinical abdominal pulse sequences.
Affiliation(s)
- Anil N. Kurup
- Department of Radiology, Mayo Clinic, Rochester, MN, USA
25
Optimized 3D co-registration of ultra-low-field and high-field magnetic resonance images. PLoS One 2018; 13:e0193890. [PMID: 29509780 PMCID: PMC5839578 DOI: 10.1371/journal.pone.0193890] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2017] [Accepted: 02/19/2018] [Indexed: 12/19/2022] Open
Abstract
The prototypes of ultra-low-field (ULF) MRI scanners developed in recent years are innovative, cost-effective and safer systems, well suited for integration into multi-modal (Magnetoencephalography and MRI) devices. Integrated ULF-MRI and MEG scanners could be an ideal solution for obtaining functional (MEG) and anatomical (ULF MRI) information in the same environment, without co-registration errors that may limit source reconstruction accuracy. However, the low resolution and signal-to-noise ratio (SNR) of ULF images, as well as their limited coverage, generally do not allow the construction of an accurate individual volume conductor model suitable for MEG localization. Thus, for practical usage, a high-field (HF) MRI image is also acquired, and the HF-MRI images are co-registered to the ULF-MRI ones. Here we address this issue through an optimized pipeline (SWIM—Sliding WIndow grouping supporting Mutual information). The co-registration is performed by an affine transformation, whose parameters are estimated using Normalized Mutual Information as the cost function and Adaptive Simulated Annealing as the minimization algorithm. The resolution mismatch between the modalities is handled by a sliding-window approach that applies multiple grouping strategies to down-sample the HF-MRI to the ULF-MRI resolution. The pipeline has been tested on phantom and real data from different ULF-MRI devices, and compared with well-known toolboxes for fMRI analysis; it consistently outperformed the fMRI toolboxes (FSL and SPM). The HF–ULF MRI co-registration obtained with our pipeline could lead to an effective integration of ULF MRI with MEG, improving localization accuracy, and could also help exploit ULF MRI in tumor imaging.
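The cost function named above is Normalized Mutual Information. A minimal, histogram-based sketch of NMI (numpy assumed; the function name and bin count are our choices, not the SWIM implementation):

```python
import numpy as np

def normalized_mutual_information(fixed, moving, bins=32):
    """NMI = (H(A) + H(B)) / H(A, B); higher means better alignment.

    A registration optimizer (e.g. Adaptive Simulated Annealing over
    affine parameters) would maximize this over the resampled `moving`.
    """
    joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    pxy = joint / joint.sum()          # joint intensity distribution

    def entropy(p):
        p = p[p > 0]                   # 0 * log 0 := 0
        return -np.sum(p * np.log(p))

    marginal = entropy(pxy.sum(axis=1)) + entropy(pxy.sum(axis=0))
    return marginal / entropy(pxy.ravel())
```

For identical images the marginal and joint entropies coincide, so NMI reaches its maximum of 2; for unrelated images it approaches 1.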
26
Reaungamornrat S, Carass A, He Y, Saidha S, Calabresi PA, Prince JL. Inter-scanner Variation Independent Descriptors for Constrained Diffeomorphic Demons Registration of Retina OCT. Proc SPIE Int Soc Opt Eng 2018; 10574:105741B. [PMID: 31695241] [PMCID: PMC6834339] [DOI: 10.1117/12.2293790]
Abstract
PURPOSE OCT offers high in-plane micrometer resolution, enabling low-cost studies of neurodegenerative and ocular-disease mechanisms via imaging of the retina. An important component of such studies is inter-scanner deformable image registration. OCT image quality, however, is suboptimal, with poor signal-to-noise ratio and through-plane resolution, and the geometry of OCT is improperly defined. We developed a diffeomorphic deformable registration method incorporating constraints that accommodate the improper geometry, together with decentralized modality-insensitive neighborhood descriptors (D-MINDs) robust against degradation of OCT image quality and inter-scanner variability. METHOD The method, called D-MIND Demons, estimates diffeomorphisms using D-MINDs under constraints on the direction of velocity fields in a MIND-Demons framework. The descriptiveness of D-MINDs with and without denoising was ranked against four other shape/texture-based descriptors. The performance of D-MIND Demons and of variants incorporating the other descriptors was compared for cross-scanner, intra- and inter-subject deformable registration using clinical retina OCT data. RESULTS D-MINDs outperformed the other descriptors, with a difference in mutual descriptiveness between high-contrast and homogeneous regions > 0.2. Among the Demons variants, D-MIND Demons was computationally efficient, demonstrating robustness against OCT image degradation (noise, speckle, intensity non-uniformity, and poor through-plane resolution) and consistent registration accuracy [(4±4 μm) and (4±6 μm) in cross-scanner intra- and inter-subject registration] regardless of denoising. CONCLUSIONS A promising method for cross-scanner, intra- and inter-subject OCT image registration has been developed for ophthalmological and neurological studies of retinal structures.
The approach could assist image segmentation, evaluation of longitudinal disease progression, and patient population analysis, which in turn facilitate diagnosis and patient-specific treatment.
Affiliation(s)
- A Carass
- Department of Neurology, Johns Hopkins Hospital, Baltimore, MD
- Y He
- Department of Neurology, Johns Hopkins Hospital, Baltimore, MD
- S Saidha
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD
- P A Calabresi
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD
- J L Prince
- Department of Neurology, Johns Hopkins Hospital, Baltimore, MD
27
Ilunga-Mbuyamba E, Avina-Cervantes JG, Lindner D, Arlt F, Ituna-Yudonago JF, Chalopin C. Patient-specific model-based segmentation of brain tumors in 3D intraoperative ultrasound images. Int J Comput Assist Radiol Surg 2018; 13:331-342. [PMID: 29330658] [DOI: 10.1007/s11548-018-1703-0] [Received: 07/18/2017] [Accepted: 01/04/2018]
Abstract
PURPOSE Intraoperative ultrasound (iUS) imaging is commonly used to support brain tumor operations. Tumor segmentation in iUS images is a difficult task, still under active development because of the low signal-to-noise ratio, and the success of automatic methods is further limited by their high noise sensitivity. Therefore, this paper presents an alternative brain tumor segmentation method for 3D-iUS data that uses a tumor model obtained from magnetic resonance (MR) data for local MR-iUS registration. The aim is to enhance the visualization of brain tumor contours in iUS. METHODS A multistep approach is proposed. First, a region of interest (ROI) based on the patient-specific tumor model is defined. Second, hyperechogenic structures, mainly tumor tissues, are extracted from the ROI of both modalities using automatic thresholding techniques. Third, registration is performed over the extracted binary sub-volumes using a similarity measure based on gradient values, with rigid and affine transformations. Finally, the tumor model is aligned with the 3D-iUS data and its contours are displayed. RESULTS Experiments were successfully conducted on a dataset of 33 patients. The method was evaluated by comparing the tumor segmentation with expert manual delineations using two binary metrics: contour mean distance and Dice index. The proposed segmentation method using local and binary registration was compared with two grayscale-based approaches and achieved better results in terms of both computational time and accuracy. CONCLUSION The proposed approach requires limited interaction and reduced computation time, making it relevant for intraoperative use. Experimental results and evaluations were performed offline. The developed tool could help neurosurgeons improve tumor border visualization in iUS volumes during brain tumor resection.
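The two evaluation metrics named in the RESULTS can be sketched generically with numpy (brute-force nearest neighbours; this illustrates the metrics, not the authors' evaluation code, and the `np.roll` neighbour test assumes masks that do not touch the volume edge):

```python
import numpy as np

def dice_index(seg, ref):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    denom = seg.sum() + ref.sum()
    return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0

def mean_contour_distance(seg, ref):
    """Mean distance from each contour voxel of `seg` to the closest
    contour voxel of `ref` (brute-force nearest neighbours)."""
    def contour(mask):
        # a voxel is on the contour if it is foreground with at least
        # one background face-neighbour (erosion-free test via np.roll)
        m = mask.astype(bool)
        interior = m.copy()
        for axis in range(m.ndim):
            interior &= np.roll(m, 1, axis) & np.roll(m, -1, axis)
        return np.argwhere(m & ~interior).astype(float)

    p, q = contour(seg), contour(ref)
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    return d.min(axis=1).mean()
```

For real volumes, a distance-transform implementation (e.g. from an image-processing library) is the usual, much faster route to surface distances.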
Affiliation(s)
- Elisee Ilunga-Mbuyamba
- CA Telematics, Engineering Division, Campus Irapuato-Salamanca, University of Guanajuato, Carr. Salamanca-Valle de Santiago km 3.5 + 1.8, Comunidad de Palo Blanco, 36885, Salamanca, Mexico
- Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, 04103, Leipzig, Germany
- Juan Gabriel Avina-Cervantes
- CA Telematics, Engineering Division, Campus Irapuato-Salamanca, University of Guanajuato, Carr. Salamanca-Valle de Santiago km 3.5 + 1.8, Comunidad de Palo Blanco, 36885, Salamanca, Mexico
- Dirk Lindner
- Department of Neurosurgery, University Hospital Leipzig, 04103, Leipzig, Germany
- Felix Arlt
- Department of Neurosurgery, University Hospital Leipzig, 04103, Leipzig, Germany
- Jean Fulbert Ituna-Yudonago
- CA Telematics, Engineering Division, Campus Irapuato-Salamanca, University of Guanajuato, Carr. Salamanca-Valle de Santiago km 3.5 + 1.8, Comunidad de Palo Blanco, 36885, Salamanca, Mexico
- Claire Chalopin
- Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, 04103, Leipzig, Germany
28
Xiao Y, Eikenes L, Reinertsen I, Rivaz H. Nonlinear deformation of tractography in ultrasound-guided low-grade gliomas resection. Int J Comput Assist Radiol Surg 2018; 13:457-467. [DOI: 10.1007/s11548-017-1699-x] [Received: 09/29/2017] [Accepted: 12/21/2017]
29
Deformable MRI-Ultrasound Registration Using 3D Convolutional Neural Network. Simulation, Image Processing, and Ultrasound Systems for Assisted Diagnosis and Navigation 2018. [DOI: 10.1007/978-3-030-01045-4_18]
30
Liu X, Tang Z, Wang M, Song Z. Deformable multi-modal registration using 3D-FAST conditioned mutual information. Comput Assist Surg (Abingdon) 2017; 22:295-304. [DOI: 10.1080/24699322.2017.1389408]
Affiliation(s)
- Xueli Liu
- Digital Medical Research Center, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China
- Zhixian Tang
- Digital Medical Research Center, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China
- Manning Wang
- Digital Medical Research Center, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China
- Zhijian Song
- Digital Medical Research Center, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China
31
Multimodal image registration based on binary gradient angle descriptor. Int J Comput Assist Radiol Surg 2017; 12:2157-2167. [PMID: 28861704] [DOI: 10.1007/s11548-017-1661-y] [Received: 01/13/2017] [Accepted: 08/17/2017]
Abstract
PURPOSE Multimodal image registration plays an important role in image-guided interventions/therapy and atlas building, and it is still a challenging task due to the complex intensity variations in different modalities. METHODS The paper addresses the problem and proposes a simple, compact, fast and generally applicable modality-independent binary gradient angle descriptor (BGA) based on the rationale of gradient orientation alignment. The BGA can be easily calculated at each voxel by coding the quadrant in which a local gradient vector falls, and it has an extremely low computational complexity, requiring only three convolutions, two multiplication operations and two comparison operations. Meanwhile, the binarized encoding of the gradient orientation makes the BGA more resistant to image degradations compared with conventional gradient orientation methods. The BGA can extract similar feature descriptors for different modalities and enable the use of simple similarity measures, which makes it applicable within a wide range of optimization frameworks. RESULTS The results for pairwise multimodal and monomodal registrations between various images (T1, T2, PD, T1c, Flair) consistently show that the BGA significantly outperforms localized mutual information. The experimental results also confirm that the BGA can be a reliable alternative to the sum of absolute difference in monomodal image registration. The BGA can also achieve an accuracy of [Formula: see text], similar to that of the SSC, for the deformable registration of inhale and exhale CT scans. Specifically, for the highly challenging deformable registration of preoperative MRI and 3D intraoperative ultrasound images, the BGA achieves a similar registration accuracy of [Formula: see text] compared with state-of-the-art approaches, with a computation time of 18.3 s per case. CONCLUSIONS The BGA improves the registration performance in terms of both accuracy and time efficiency. 
With further acceleration, the framework has the potential for application in time-sensitive clinical environments, such as for preoperative MRI and intraoperative US image registration for image-guided intervention.
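The heart of the descriptor, coding the quadrant in which the local gradient vector falls and comparing codes with a trivially simple measure, can be sketched in 2D. This is our illustrative reduction, not the published BGA (which is 3D and codes octants from three gradient components); numpy assumed:

```python
import numpy as np

def bga_2d(img):
    """Two-bit code per pixel: the quadrant of the local gradient,
    i.e. (sign of d/dy, sign of d/dx). 2D sketch of the idea; the
    paper's 3D descriptor codes octants instead."""
    gy, gx = np.gradient(img.astype(float))
    return ((gy >= 0).astype(np.uint8) << 1) | (gx >= 0).astype(np.uint8)

def bga_similarity(code_a, code_b):
    """Fraction of pixels whose quadrant codes agree: a simple
    similarity usable inside a registration optimization loop."""
    return np.mean(code_a == code_b)
```

Because only derivative signs are kept, the code is unchanged by any increasing intensity remapping, which is what makes such descriptors usable across imaging modalities.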
32
Morin F, Courtecuisse H, Reinertsen I, Le Lann F, Palombi O, Payan Y, Chabanas M. Brain-shift compensation using intraoperative ultrasound and constraint-based biomechanical simulation. Med Image Anal 2017. [DOI: 10.1016/j.media.2017.06.003]
33
Ferrante E, Paragios N. Slice-to-volume medical image registration: A survey. Med Image Anal 2017; 39:101-123. [DOI: 10.1016/j.media.2017.04.010] [Received: 11/11/2016] [Revised: 04/08/2017] [Accepted: 04/27/2017]
34
Xiao Y, Fortin M, Unsgård G, Rivaz H, Reinertsen I. REtroSpective Evaluation of Cerebral Tumors (RESECT): A clinical database of pre-operative MRI and intra-operative ultrasound in low-grade glioma surgeries. Med Phys 2017; 44:3875-3882. [PMID: 28391601] [DOI: 10.1002/mp.12268] [Received: 10/11/2016] [Revised: 03/05/2017] [Accepted: 04/05/2017]
Abstract
PURPOSE The advancement of medical image processing techniques, such as image registration, can effectively help improve the accuracy and efficiency of brain tumor surgeries. However, it is often challenging to validate these techniques with real clinical data due to the rarity of such publicly available repositories. ACQUISITION AND VALIDATION METHODS Pre-operative magnetic resonance images (MRI) and intra-operative ultrasound (US) scans were acquired from 23 patients with low-grade gliomas who underwent surgeries at St. Olavs University Hospital between 2011 and 2016. Each patient was scanned with Gadolinium-enhanced T1w and T2-FLAIR MRI protocols to reveal the anatomy and pathology, and series of B-mode ultrasound images were obtained before, during, and after tumor resection to track the surgical progress and tissue deformation. Retrospectively, corresponding anatomical landmarks were identified across US images of different surgical stages, and between MRI and US; these can be used to validate image registration algorithms. Quality of landmark identification was assessed with intra- and inter-rater variability. DATA FORMAT AND ACCESS In addition to co-registered MRIs, each series of US scans is provided as a reconstructed 3D volume. All images are accessible in MINC2 and NIFTI formats, and the anatomical landmarks were annotated in MNI tag files. Both the imaging data and the corresponding landmarks are available online as the RESECT database at https://archive.norstore.no (search for "RESECT"). POTENTIAL IMPACT The proposed database provides real high-quality multi-modal clinical data to validate and compare image registration algorithms, which can potentially benefit the accuracy and efficiency of brain tumor resection. Furthermore, the database can also be used to test other image processing methods and neuro-navigation software platforms.
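Landmark pairs like those provided by the database are typically turned into a mean Target Registration Error (TRE). A minimal sketch (numpy; names are ours, and `transform` stands in for whatever registration result is being validated):

```python
import numpy as np

def target_registration_error(landmarks_fixed, landmarks_moving, transform):
    """Mean TRE: Euclidean distance between each fixed-image landmark and
    its counterpart from the moving image mapped through `transform`
    (a callable taking and returning an (N, 3) array of coordinates)."""
    mapped = transform(np.asarray(landmarks_moving, dtype=float))
    diffs = np.asarray(landmarks_fixed, dtype=float) - mapped
    return np.linalg.norm(diffs, axis=1).mean()
```

A registration that perfectly recovers the landmark correspondence drives the TRE to zero; reporting it in millimetres requires landmark coordinates in physical (not voxel) space.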
Affiliation(s)
- Yiming Xiao
- PERFORM Centre, Concordia University, Montreal, H4B 1R6, Canada
- Department of Electrical and Computer Engineering, Concordia University, Montreal, H3G 1M8, Canada
- Maryse Fortin
- PERFORM Centre, Concordia University, Montreal, H4B 1R6, Canada
- Department of Electrical and Computer Engineering, Concordia University, Montreal, H3G 1M8, Canada
- Geirmund Unsgård
- Department of Neurosurgery, St. Olavs University Hospital, Trondheim, NO-7006, Norway
- Department of Neuroscience, Norwegian University of Science and Technology, Trondheim, NO-7491, Norway
- Norwegian National Advisory Unit for Ultrasound and Image Guided Therapy, St. Olavs University Hospital, Trondheim, NO-7006, Norway
- Hassan Rivaz
- PERFORM Centre, Concordia University, Montreal, H4B 1R6, Canada
- Department of Electrical and Computer Engineering, Concordia University, Montreal, H3G 1M8, Canada
- Ingerid Reinertsen
- Department of Medical Technology, SINTEF, Trondheim, NO-7465, Norway
- Norwegian National Advisory Unit for Ultrasound and Image Guided Therapy, St. Olavs University Hospital, Trondheim, NO-7006, Norway
35
Riva M, Hennersperger C, Milletari F, Katouzian A, Pessina F, Gutierrez-Becker B, Castellano A, Navab N, Bello L. 3D intra-operative ultrasound and MR image guidance: pursuing an ultrasound-based management of brainshift to enhance neuronavigation. Int J Comput Assist Radiol Surg 2017; 12:1711-1725. [DOI: 10.1007/s11548-017-1578-5] [Received: 11/03/2016] [Accepted: 03/20/2017]
36
Jiang D, Shi Y, Chen X, Wang M, Song Z. Fast and robust multimodal image registration using a local derivative pattern. Med Phys 2017; 44:497-509. [PMID: 28205308] [DOI: 10.1002/mp.12049] [Received: 07/28/2016] [Revised: 10/09/2016] [Accepted: 11/27/2016]
Abstract
PURPOSE Deformable multimodal image registration, which can benefit radiotherapy and image-guided surgery by providing complementary information, remains a challenging task in the medical image analysis field due to the difficulty of defining a proper similarity measure. This article presents a novel, robust and fast binary descriptor, the discriminative local derivative pattern (dLDP), which is able to encode images of different modalities into similar image representations. METHODS dLDP calculates a binary string for each voxel according to the pattern of intensity derivatives in its neighborhood. The descriptor similarity is evaluated using the Hamming distance, which can be computed efficiently, instead of conventional L1 or L2 norms. For the first time, we validated the effectiveness and feasibility of the local derivative pattern for multimodal deformable image registration in several multi-modal registration applications. RESULTS dLDP was compared with three state-of-the-art methods in artificial-image and clinical settings. In experiments on deformable registration between different magnetic resonance imaging (MRI) modalities from BrainWeb, between computed tomography and MRI images from patient data, and between MRI and ultrasound images from the BITE database, our method outperforms localized mutual information and entropy images in terms of both accuracy and time efficiency. We further validated dLDP for the deformable registration of preoperative MRI and three-dimensional intraoperative ultrasound images. Our results indicate that dLDP reduces the mean target registration error from 4.12 mm to 2.30 mm. This accuracy is statistically equivalent to that of the state-of-the-art methods in the study; however, in terms of computational complexity, our method significantly outperforms the other methods and is even comparable to the sum of absolute differences.
CONCLUSIONS The results reveal that dLDP achieves superior performance regarding both accuracy and time efficiency in general multimodal image registration. dLDP also shows potential for clinical ultrasound-guided intervention.
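The two ingredients described in METHODS, a per-voxel binary string built from local derivative signs and a Hamming-distance comparison computed with XOR and popcount, can be sketched as follows. This is a toy 2D stand-in for illustration, not the published dLDP:

```python
import numpy as np

def ldp_codes(img):
    """Toy local-derivative-pattern: one bit per axis direction saying
    whether the finite difference in that direction is positive, packed
    into a 4-bit code per pixel. Illustrative stand-in for dLDP."""
    img = img.astype(float)
    code = np.zeros(img.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate([(1, 0), (-1, 0), (0, 1), (0, -1)]):
        diff = np.roll(img, (dy, dx), axis=(0, 1)) - img
        code |= (diff > 0).astype(np.uint8) << bit
    return code

def hamming_distance_map(a, b):
    """Per-pixel Hamming distance between two uint8 code images:
    number of differing bits, via XOR + popcount (cheap vs L1/L2)."""
    x = np.bitwise_xor(a, b)
    return np.unpackbits(x[..., None], axis=-1).sum(axis=-1)
```

Since only derivative signs are encoded, any strictly increasing intensity remapping of an image leaves its codes, and hence the Hamming distance, unchanged.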
Affiliation(s)
- Dongsheng Jiang
- Digital Medical Research Center of School of Basic Medical Sciences, Fudan University and Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, 138 YiXue Yuan Road, Shanghai, 200032, China
- Yonghong Shi
- Digital Medical Research Center of School of Basic Medical Sciences, Fudan University and Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, 138 YiXue Yuan Road, Shanghai, 200032, China
- Xinrong Chen
- Digital Medical Research Center of School of Basic Medical Sciences, Fudan University and Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, 138 YiXue Yuan Road, Shanghai, 200032, China
- Manning Wang
- Digital Medical Research Center of School of Basic Medical Sciences, Fudan University and Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, 138 YiXue Yuan Road, Shanghai, 200032, China
- Zhijian Song
- Digital Medical Research Center of School of Basic Medical Sciences, Fudan University and Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, 138 YiXue Yuan Road, Shanghai, 200032, China
37
Geometric modeling of hepatic arteries in 3D ultrasound with unsupervised MRA fusion during liver interventions. Int J Comput Assist Radiol Surg 2017; 12:961-972. [PMID: 28271356] [DOI: 10.1007/s11548-017-1550-4] [Received: 01/28/2017] [Accepted: 02/27/2017]
Abstract
PURPOSE Modulating the chemotherapy injection rate with regard to blood flow velocities in the tumor-feeding arteries during intra-arterial therapies may help improve liver tumor targeting while decreasing systemic exposure. These velocities can be obtained noninvasively using Doppler ultrasound (US). However, small vessels situated in the liver are difficult to identify and follow in US. We propose a multimodal fusion approach that non-rigidly registers a 3D geometric mesh model of the hepatic arteries obtained from preoperative MR angiography (MRA) acquisitions with intra-operative 3D US imaging. METHODS The proposed fusion tool integrates three imaging modalities: an arterial-phase MRA, a portal-phase MRA and an intra-operative 3D US. Preoperatively, the arterial-phase MRA is used to generate a 3D model of the hepatic arteries, which is then non-rigidly co-registered with the portal-phase MRA. Once the intra-operative 3D US is acquired, we register it with the portal MRA using a vessel-based rigid initialization followed by a non-rigid registration using an image-based metric based on linear correlation of linear combination. Using the combined non-rigid transformation matrices, the 3D mesh model is fused with the 3D US. RESULTS 3D US and multi-phase MRA images acquired from 10 porcine models were used to test the performance of the proposed fusion tool. Unimodal registration of the MRA phases yielded a target registration error (TRE) of [Formula: see text] mm. Initial rigid alignment of the portal MRA and 3D US yielded a mean TRE of [Formula: see text] mm, which was significantly reduced to [Formula: see text] mm ([Formula: see text]) after affine image-based registration. A subsequent deformable registration step further decreased the mean TRE to [Formula: see text] mm.
CONCLUSION The proposed tool could facilitate visualization and localization of these vessels when using 3D US intra-operatively for either intravascular or percutaneous interventions to avoid vessel perforation.
38
Zettinig O, Frisch B, Virga S, Esposito M, Rienmüller A, Meyer B, Hennersperger C, Ryang YM, Navab N. 3D ultrasound registration-based visual servoing for neurosurgical navigation. Int J Comput Assist Radiol Surg 2017; 12:1607-1619. [DOI: 10.1007/s11548-017-1536-2] [Received: 11/04/2016] [Accepted: 02/01/2017]
39
Hennersperger C, Fuerst B, Virga S, Zettinig O, Frisch B, Neff T, Navab N. Towards MRI-Based Autonomous Robotic US Acquisitions: A First Feasibility Study. IEEE Trans Med Imaging 2017; 36:538-548. [PMID: 27831861] [DOI: 10.1109/tmi.2016.2620723]
Abstract
Robotic ultrasound has the potential to assist and guide physicians during interventions. In this work, we present a set of methods and a workflow to enable autonomous MRI-guided ultrasound acquisitions. Our approach uses a structured-light 3D scanner for patient-to-robot and image-to-patient calibration, which in turn is used to plan 3D ultrasound trajectories. These MRI-based trajectories are followed autonomously by the robot and are further refined online using automatic MRI/US registration. Despite the low spatial resolution of structured-light scanners, the initially planned acquisition path can be followed with an accuracy of 2.46 ± 0.96 mm. This leads to a good initialization of the MRI/US registration: the 3D-scan-based alignment for planning and acquisition shows an accuracy (distance between planned ultrasound and MRI) of 4.47 mm, and 0.97 mm after an online update of the calibration based on a closed-loop registration.
40
Yang M, Ding H, Kang J, Cong L, Zhu L, Wang G. Local structure orientation descriptor based on intra-image similarity for multimodal registration of liver ultrasound and MR images. Comput Biol Med 2016; 76:69-79. [DOI: 10.1016/j.compbiomed.2016.06.025] [Received: 04/21/2016] [Revised: 06/11/2016] [Accepted: 06/24/2016]
41
Drouin S, Kochanowska A, Kersten-Oertel M, Gerard IJ, Zelmann R, De Nigris D, Bériault S, Arbel T, Sirhan D, Sadikot AF, Hall JA, Sinclair DS, Petrecca K, DelMaestro RF, Collins DL. IBIS: an OR ready open-source platform for image-guided neurosurgery. Int J Comput Assist Radiol Surg 2016; 12:363-378. [DOI: 10.1007/s11548-016-1478-0] [Received: 06/01/2016] [Accepted: 08/19/2016]
42
Jiang D, Shi Y, Yao D, Wang M, Song Z. miLBP: a robust and fast modality-independent 3D LBP for multimodal deformable registration. Int J Comput Assist Radiol Surg 2016; 11:997-1005. [PMID: 27250854] [PMCID: PMC4893381] [DOI: 10.1007/s11548-016-1407-2] [Received: 02/03/2016] [Accepted: 03/31/2016]
Abstract
Purpose Computer-assisted intervention often depends on multimodal deformable registration to provide complementary information. However, multimodal deformable registration remains a challenging task. Methods This paper introduces a novel robust and fast modality-independent 3D binary descriptor, called miLBP, which integrates the principle of local self-similarity with a form of local binary pattern and can robustly extract similar geometry features from 3D volumes across different modalities. miLBP is a bit string that can be computed by simply thresholding the voxel distance. Furthermore, the descriptor similarity can be evaluated efficiently using the Hamming distance. Results miLBP was compared to the vector-valued self-similarity context (SSC) in artificial-image and clinical settings. The results show that miLBP is more robust than SSC in extracting local geometry features across modalities and achieved higher registration accuracy in different registration scenarios. Furthermore, in the most challenging registration between preoperative magnetic resonance imaging and intra-operative ultrasound images, our approach significantly outperforms the state-of-the-art methods in terms of both accuracy (2.15 ± 1.1 mm) and speed (29.2 s for one case). Conclusions Registration performance and speed indicate that miLBP has the potential of being applied to time-sensitive intra-operative computer-assisted intervention.
Affiliation(s)
- Dongsheng Jiang
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, China
- Digital Medical Research Center of School of Basic Medical Sciences, Fudan University, Shanghai, China
- Yonghong Shi
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, China
- Digital Medical Research Center of School of Basic Medical Sciences, Fudan University, Shanghai, China
- Demin Yao
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, China
- Digital Medical Research Center of School of Basic Medical Sciences, Fudan University, Shanghai, China
- Manning Wang
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, China
- Digital Medical Research Center of School of Basic Medical Sciences, Fudan University, Shanghai, China
- Zhijian Song
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, China
- Digital Medical Research Center of School of Basic Medical Sciences, Fudan University, Shanghai, China
43
Kojcev R, Fuerst B, Zettinig O, Fotouhi J, Lee SC, Frisch B, Taylor R, Sinibaldi E, Navab N. Dual-robot ultrasound-guided needle placement: closing the planning-imaging-action loop. Int J Comput Assist Radiol Surg 2016; 11:1173-81. [DOI: 10.1007/s11548-016-1408-1] [Received: 02/11/2016] [Accepted: 03/31/2016]
44
Ilunga-Mbuyamba E, Avina-Cervantes JG, Lindner D, Cruz-Aceves I, Arlt F, Chalopin C. Vascular Structure Identification in Intraoperative 3D Contrast-Enhanced Ultrasound Data. Sensors (Basel) 2016; 16:E497. [PMID: 27070610] [PMCID: PMC4851011] [DOI: 10.3390/s16040497] [Received: 02/22/2016] [Revised: 03/19/2016] [Accepted: 03/31/2016]
Abstract
In this paper, a method for vascular structure identification in intraoperative 3D Contrast-Enhanced Ultrasound (CEUS) data is presented. Ultrasound imaging is commonly used in brain tumor surgery to investigate the current status of cerebral structures in real time. The use of an ultrasound contrast agent makes it possible to highlight not only tumor tissue but also surrounding blood vessels, and these structures can be used as landmarks to estimate and correct the brain shift. This work proposes an alternative method for extracting small vascular segments close to the tumor as landmarks. The patient image dataset involved in brain tumor operations includes preoperative contrast T1MR (cT1MR) data and 3D intraoperative contrast-enhanced ultrasound data acquired before (3D-iCEUS-start) and after (3D-iCEUS-end) tumor resection. Based on rigid registration techniques, a preselected vascular segment in cT1MR is searched for in the 3D-iCEUS-start and 3D-iCEUS-end data. The method was validated using three similarity measures (Normalized Gradient Field, Normalized Mutual Information and Normalized Cross Correlation). Tests were performed on data from ten patients undergoing a brain tumor operation, and the method succeeded in nine cases. Despite the small size of the vascular structures, the artifacts in the ultrasound images and the brain tissue deformations, the blood vessels were successfully identified.
Affiliation(s)
- Elisee Ilunga-Mbuyamba
- Telematics (CA), Engineering Division (DICIS), University of Guanajuato, Campus Irapuato-Salamanca, Carr. Salamanca-Valle km 3.5 + 1.8, Com. Palo Blanco, Salamanca, Gto. 36885, Mexico
- Juan Gabriel Avina-Cervantes
- Telematics (CA), Engineering Division (DICIS), University of Guanajuato, Campus Irapuato-Salamanca, Carr. Salamanca-Valle km 3.5 + 1.8, Com. Palo Blanco, Salamanca, Gto. 36885, Mexico
- Dirk Lindner
- Department of Neurosurgery, University Hospital Leipzig, Leipzig 04103, Germany
- Ivan Cruz-Aceves
- CONACYT Research Fellow, Center for Research in Mathematics (CIMAT), A.C., Jalisco S/N, Col. Valenciana, Guanajuato, Gto. 36000, Mexico
- Felix Arlt
- Department of Neurosurgery, University Hospital Leipzig, Leipzig 04103, Germany
- Claire Chalopin
- Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, Leipzig 04103, Germany
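The entry above validates its vessel registration with three intensity-based similarity measures. Two of them, Normalized Cross Correlation (NCC) and Normalized Mutual Information (NMI), can be sketched as follows. This is an illustrative toy implementation in Python/NumPy under common textbook definitions, not the code used in the cited paper.

```python
# Toy illustration of two similarity measures used to validate the
# registration: NCC and NMI (textbook definitions, not the paper's code).
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation in [-1, 1]; 1 for identical images."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def nmi(a, b, bins=32):
    """Normalized mutual information (H(A)+H(B))/H(A,B); higher is more similar."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    h_joint = -np.sum(p[p > 0] * np.log(p[p > 0]))
    h_a = -np.sum(px[px > 0] * np.log(px[px > 0]))
    h_b = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return float((h_a + h_b) / h_joint)

# Usage: a perfectly aligned pair scores maximally on both measures,
# while an unrelated image scores lower.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
other = rng.random((64, 64))
same_ncc = ncc(img, img)        # close to 1.0
same_nmi = nmi(img, img)        # larger than nmi(img, other)
```

In a registration loop, an optimizer would evaluate one of these measures over candidate rigid transforms of the ultrasound volume and keep the pose that maximizes it.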
45
Mapping and characterizing endometrial implants by registering 2D transvaginal ultrasound to 3D pelvic magnetic resonance images. Comput Med Imaging Graph 2015; 45:11-25. [DOI: 10.1016/j.compmedimag.2015.07.007]
46
Becker K, Stauber M, Schwarz F, Beißbarth T. Automated 3D-2D registration of X-ray microcomputed tomography with histological sections for dental implants in bone using chamfer matching and simulated annealing. Comput Med Imaging Graph 2015; 44:62-8. [PMID: 26026659] [DOI: 10.1016/j.compmedimag.2015.04.005]
Abstract
We propose a novel 3D-2D registration approach for micro-computed tomography (μCT) and histology (HI), constructed for dental implant biopsies, that finds the position and normal vector of the oblique slice from μCT that corresponds to HI. During image pre-processing, the implants and the bone tissue are segmented using a combination of thresholding, morphological filters and component labeling. After this, chamfer matching is employed to register the implant edges, and fine registration of the bone tissues is achieved using simulated annealing. The method was tested on n=10 biopsies obtained at 20 weeks after non-submerged healing in the canine mandible. The specimens were scanned with a μCT 100 scanner and processed for hard tissue sectioning. After registration, we assessed the agreement of bone-to-implant contact (BIC) using automated and manual measurements. Statistical analysis was conducted to test the agreement of the BIC measurements in the registered samples. Registration was successful for all specimens and agreement of the respective binary images was high (median: 0.90, 1st-3rd Qu.: 0.89-0.91). Direct comparison of BIC showed that automated (median: 0.82, 1st-3rd Qu.: 0.75-0.85) and manual (median: 0.61, 1st-3rd Qu.: 0.52-0.67) measures from μCT were significantly positively correlated with HI (median: 0.65, 1st-3rd Qu.: 0.59-0.72) (manual: R²=0.87, automated: R²=0.75, p<0.001). The results show that the method yields promising results and that μCT may become a valid alternative to assess osseointegration in three dimensions.
Affiliation(s)
- Kathrin Becker
- Department of Medical Statistics, Biostatistics Group, University Medical Center, Georg-August University, Humboldt Allee 32, 37073 Göttingen, Germany; Department of Oral Surgery, Westdeutsche Kieferklinik, Heinrich-Heine University, Moorenstr. 5, 40225 Düsseldorf, Germany
- Martin Stauber
- Scanco Medical AG, Fabrikweg 2, 8306 Brüttisellen, Switzerland
- Frank Schwarz
- Department of Oral Surgery, Westdeutsche Kieferklinik, Heinrich-Heine University, Moorenstr. 5, 40225 Düsseldorf, Germany
- Tim Beißbarth
- Department of Medical Statistics, Biostatistics Group, University Medical Center, Georg-August University, Humboldt Allee 32, 37073 Göttingen, Germany
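The chamfer matching step named in the entry above can be illustrated with a small sketch: the target edge map is converted to a distance transform, and any candidate pose of the template edges is scored by the mean distance at which its points land (an optimizer such as the simulated annealing the entry mentions would then minimize this score over rigid transforms). This is a toy Python/NumPy/SciPy illustration, not the authors' implementation.

```python
# Toy chamfer matching: score a transformed edge template against the
# distance transform of a target edge map. Lower score = better alignment.
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_score(template_pts, edge_map):
    """Mean distance of template points (row, col) to the nearest edge pixel."""
    # ~edge_map is zero exactly at edge pixels, so the EDT gives, for every
    # pixel, its distance to the nearest edge.
    dist = distance_transform_edt(~edge_map)
    pts = np.round(template_pts).astype(int)
    # Drop points transformed outside the image.
    valid = ((pts[:, 0] >= 0) & (pts[:, 0] < edge_map.shape[0]) &
             (pts[:, 1] >= 0) & (pts[:, 1] < edge_map.shape[1]))
    return float(dist[pts[valid, 0], pts[valid, 1]].mean())

# Usage: a square outline as the target, scored against two candidate poses.
edges = np.zeros((64, 64), dtype=bool)
edges[20, 20:40] = edges[39, 20:40] = True
edges[20:40, 20] = edges[20:40, 39] = True

template = np.argwhere(edges).astype(float)
aligned = chamfer_score(template, edges)        # perfect overlap
shifted = chamfer_score(template + 5.0, edges)  # 5-pixel offset scores worse
```

An annealing loop would perturb the pose parameters (translation, rotation, slice normal in the 3D-2D case), accept worse scores with a temperature-dependent probability, and converge toward the pose with minimal chamfer score.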
47
Ferrante E, Fecamp V, Paragios N. Slice-to-volume deformable registration: efficient one-shot consensus between plane selection and in-plane deformation. Int J Comput Assist Radiol Surg 2015; 10:791-800. [DOI: 10.1007/s11548-015-1205-2]
48
Ahmadi SA, Milletari F, Navab N, Schuberth M, Plate A, Bötzel K. 3D transcranial ultrasound as a novel intra-operative imaging technique for DBS surgery: a feasibility study. Int J Comput Assist Radiol Surg 2015; 10:891-900. [DOI: 10.1007/s11548-015-1191-4]