1. Wang Z, Yang Y, Chen Y, Yuan T, Sermesant M, Delingette H, Wu O. Mutual Information Guided Diffusion for Zero-Shot Cross-Modality Medical Image Translation. IEEE Trans Med Imaging 2024;43:2825-2838. [PMID: 38551825] [DOI: 10.1109/tmi.2024.3382043]
Abstract
Cross-modality data translation has attracted great interest in medical image computing, and deep generative models have improved performance on related challenges. Nevertheless, a fundamental challenge in image translation remains open: zero-shot cross-modality image translation with high fidelity. To bridge this gap, we propose a novel unsupervised zero-shot learning method, the Mutual Information guided Diffusion model (MIDiffusion), which learns to translate an unseen source image to the target modality by leveraging the inherent statistical consistency of mutual information between modalities. To overcome the prohibitive cost of high-dimensional mutual information calculation, we propose a differentiable local-wise mutual information layer for conditioning the iterative denoising process. This layer captures identical cross-modality features in the statistical domain, offering diffusion guidance without relying on direct mappings between the source and target domains. As a result, our method adapts to changing source domains without retraining, making it highly practical when sufficient labeled source-domain data are not available. We demonstrate the superior performance of MIDiffusion in zero-shot cross-modality translation tasks through empirical comparisons with other generative models, including adversarial-based and diffusion-based models. Finally, we showcase a real-world application of MIDiffusion in 3D zero-shot cross-modality image segmentation.
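At its core, the mutual information guidance described above rests on estimating MI from a joint intensity histogram over corresponding patches. A minimal illustrative sketch in plain Python (the bin count and flat patch inputs are hypothetical simplifications; this is not the authors' differentiable layer):

```python
import math

def mutual_information(patch_a, patch_b, bins=8):
    """Estimate mutual information (in nats) between two equally sized
    intensity patches via a joint histogram."""
    assert len(patch_a) == len(patch_b)
    lo_a, hi_a = min(patch_a), max(patch_a)
    lo_b, hi_b = min(patch_b), max(patch_b)

    def idx(v, lo, hi):
        # Map an intensity to a histogram bin; constant patches fall in bin 0.
        if hi == lo:
            return 0
        return min(int((v - lo) / (hi - lo) * bins), bins - 1)

    # Normalized joint histogram.
    joint = [[0.0] * bins for _ in range(bins)]
    n = len(patch_a)
    for a, b in zip(patch_a, patch_b):
        joint[idx(a, lo_a, hi_a)][idx(b, lo_b, hi_b)] += 1.0 / n

    # Marginals from the joint histogram.
    pa = [sum(row) for row in joint]
    pb = [sum(col) for col in zip(*joint)]

    mi = 0.0
    for i in range(bins):
        for j in range(bins):
            if joint[i][j] > 0:
                mi += joint[i][j] * math.log(joint[i][j] / (pa[i] * pb[j]))
    return mi
```

Identical patches yield MI equal to the patch entropy, while a constant patch carries no information about the other, yielding zero.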
2. Hernandez M, Ramon Julvez U. Insights into traditional Large Deformation Diffeomorphic Metric Mapping and unsupervised deep-learning for diffeomorphic registration and their evaluation. Comput Biol Med 2024;178:108761. [PMID: 38908357] [DOI: 10.1016/j.compbiomed.2024.108761]
Abstract
This paper explores the connections between traditional Large Deformation Diffeomorphic Metric Mapping methods and unsupervised deep-learning approaches for non-rigid registration, with particular emphasis on diffeomorphic registration. The study provides useful insights and establishes connections between the methods, facilitating a deeper understanding of the methodological landscape. The methods considered are extensively evaluated on T1w MRI using the traditional NIREP and Learn2Reg OASIS evaluation protocols with a focus on fairness, to establish equitable benchmarks and facilitate informed comparisons. Through a comprehensive analysis of the results, we address key questions, including the relationship between accuracy and transformation quality, the disentanglement of how individual registration ingredients influence performance, and the determination of benchmark methods and baselines. We offer valuable insights into the strengths and limitations of both traditional and deep-learning methods, shedding light on their comparative performance and guiding future advancements in the field.
Affiliation(s)
- Monica Hernandez
- Computer Science Department, University of Zaragoza, Spain; Aragon Institute on Engineering Research, Spain
- Ubaldo Ramon Julvez
- Computer Science Department, University of Zaragoza, Spain; Aragon Institute on Engineering Research, Spain
3. Rahmani M, Moghaddasi H, Pour-Rashidi A, Ahmadian A, Najafzadeh E, Farnia P. D2BGAN: Dual Discriminator Bayesian Generative Adversarial Network for Deformable MR-Ultrasound Registration Applied to Brain Shift Compensation. Diagnostics (Basel) 2024;14:1319. [PMID: 39001209] [PMCID: PMC11240784] [DOI: 10.3390/diagnostics14131319]
Abstract
During neurosurgical procedures, the accuracy of the neuro-navigation system is affected by the brain shift phenomenon. One popular strategy is to compensate for brain shift by registering intraoperative ultrasound (iUS) with pre-operative magnetic resonance (MR) scans. This requires a satisfactory multimodal image registration method, which is challenging due to the low image quality of ultrasound and the unpredictable nature of brain deformation during surgery. In this paper, we propose an automatic unsupervised end-to-end MR-iUS registration approach named the Dual Discriminator Bayesian Generative Adversarial Network (D2BGAN). The proposed network consists of two discriminators and a generator optimized by a Bayesian loss function to improve the functionality of the generator, with a mutual information loss term added to the discriminator as a similarity measure. Extensive validation was performed on the RESECT and BITE datasets, where the mean target registration error (mTRE) of MR-iUS registration using D2BGAN was 0.75 ± 0.3 mm. D2BGAN showed a clear advantage, achieving an 85% improvement in mTRE over the initial error. Moreover, the results confirmed that the proposed Bayesian loss function, rather than a conventional loss function, improved the accuracy of MR-iUS registration by 23%. The gain in registration accuracy was accompanied by preservation of the intensity and anatomical information of the input images.
Affiliation(s)
- Mahdiyeh Rahmani
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences (TUMS), Tehran 1461884513, Iran
- Research Center for Biomedical Technologies and Robotics (RCBTR), Advanced Medical Technologies and Equipment Institute (AMTEI), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran
- Hadis Moghaddasi
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences (TUMS), Tehran 1461884513, Iran
- Research Center for Biomedical Technologies and Robotics (RCBTR), Advanced Medical Technologies and Equipment Institute (AMTEI), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran
- Ahmad Pour-Rashidi
- Department of Neurosurgery, Sina Hospital, School of Medicine, Tehran University of Medical Sciences (TUMS), Tehran 11367469111, Iran
- Alireza Ahmadian
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences (TUMS), Tehran 1461884513, Iran
- Research Center for Biomedical Technologies and Robotics (RCBTR), Advanced Medical Technologies and Equipment Institute (AMTEI), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran
- Ebrahim Najafzadeh
- Department of Medical Physics, School of Medicine, Iran University of Medical Sciences, Tehran 1417466191, Iran
- Department of Molecular Imaging, Faculty of Advanced Technologies in Medicine, Iran University of Medical Sciences, Tehran 1449614535, Iran
- Parastoo Farnia
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences (TUMS), Tehran 1461884513, Iran
- Research Center for Biomedical Technologies and Robotics (RCBTR), Advanced Medical Technologies and Equipment Institute (AMTEI), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran
4. Bopp MHA, Grote A, Gjorgjevski M, Pojskic M, Saß B, Nimsky C. Enabling Navigation and Augmented Reality in the Sitting Position in Posterior Fossa Surgery Using Intraoperative Ultrasound. Cancers (Basel) 2024;16:1985. [PMID: 38893106] [PMCID: PMC11171013] [DOI: 10.3390/cancers16111985]
Abstract
Despite its broad use in cranial and spinal surgery, navigation support and microscope-based augmented reality (AR) have not yet found their way into posterior fossa surgery in the sitting position. While this position offers surgical benefits, navigation accuracy, and therefore the usefulness of navigation itself, is limited. Intraoperative ultrasound (iUS) can be applied at any time during surgery, delivering real-time images that can be used for accuracy verification and navigation updates. In this study, its applicability in the sitting position was assessed. Data from 15 patients with lesions within the posterior fossa who underwent magnetic resonance imaging (MRI)-based navigation-supported surgery in the sitting position were retrospectively analyzed using the standard reference array and a new rigid image-based MRI-iUS co-registration. Navigation accuracy was evaluated based on the spatial overlap of the outlined lesions and the distance between corresponding landmarks in both data sets, respectively. Image-based co-registration significantly improved (p < 0.001) the spatial overlap of the outlined lesions (0.42 ± 0.30 vs. 0.65 ± 0.23) and significantly reduced (p < 0.001) the distance between corresponding landmarks (8.69 ± 6.23 mm vs. 3.19 ± 2.73 mm), allowing for the sufficient use of navigation and AR support. Navigated iUS can therefore serve as an easy-to-use tool to enable navigation support for posterior fossa surgery in the sitting position.
Affiliation(s)
- Miriam H. A. Bopp
- Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany
- Center for Mind, Brain and Behavior (CMBB), 35043 Marburg, Germany
- Alexander Grote
- Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany
- Marko Gjorgjevski
- Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany
- Mirza Pojskic
- Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany
- Benjamin Saß
- Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany
- Christopher Nimsky
- Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany
- Center for Mind, Brain and Behavior (CMBB), 35043 Marburg, Germany
5. Juvekar P, Dorent R, Kögl F, Torio E, Barr C, Rigolo L, Galvin C, Jowkar N, Kazi A, Haouchine N, Cheema H, Navab N, Pieper S, Wells WM, Bi WL, Golby A, Frisken S, Kapur T. ReMIND: The Brain Resection Multimodal Imaging Database. Sci Data 2024;11:494. [PMID: 38744868] [PMCID: PMC11093985] [DOI: 10.1038/s41597-024-03295-z]
Abstract
The standard of care for brain tumors is maximal safe surgical resection. Neuronavigation augments the surgeon's ability to achieve this but loses validity as surgery progresses due to brain shift. Moreover, gliomas are often indistinguishable from surrounding healthy brain tissue. Intraoperative magnetic resonance imaging (iMRI) and ultrasound (iUS) help visualize the tumor and brain shift. iUS is faster and easier to incorporate into surgical workflows but offers lower contrast between tumorous and healthy tissues than iMRI. With the success of data-hungry Artificial Intelligence algorithms in medical image analysis, the benefits of sharing well-curated data cannot be overstated. To this end, we provide the largest publicly available MRI and iUS database of surgically treated brain tumors, including gliomas (n = 92), metastases (n = 11), and others (n = 11). This collection contains 369 preoperative MRI series, 320 3D iUS series, 301 iMRI series, and 356 segmentations collected from 114 consecutive patients at a single institution. This database is expected to support research on brain shift and image analysis as well as neurosurgical training in interpreting iUS and iMRI.
Affiliation(s)
- Reuben Dorent
- Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Fryderyk Kögl
- Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Computer Aided Medical Procedures, Technische Universität München, Munich, Germany
- Erickson Torio
- Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Colton Barr
- Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Laura Rigolo
- Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Colin Galvin
- Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Nick Jowkar
- Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Anees Kazi
- Massachusetts General Hospital, Harvard Medical School, Boston, USA
- Nazim Haouchine
- Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Harneet Cheema
- Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Department of Health Science, University of Ottawa, Ottawa, Canada
- Nassir Navab
- Computer Aided Medical Procedures, Technische Universität München, Munich, Germany
- Steve Pieper
- Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- William M Wells
- Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Wenya Linda Bi
- Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Alexandra Golby
- Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Sarah Frisken
- Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Tina Kapur
- Brigham and Women's Hospital, Harvard Medical School, Boston, USA
6. Abbasi S, Mehdizadeh A, Boveiri HR, Mosleh Shirazi MA, Javidan R, Khayami R, Tavakoli M. Unsupervised deep learning registration model for multimodal brain images. J Appl Clin Med Phys 2023;24:e14177. [PMID: 37823748] [PMCID: PMC10647957] [DOI: 10.1002/acm2.14177]
Abstract
Multimodal image registration is key to many clinical image-guided interventions. However, it is a challenging task because of the complicated and unknown relationships between different modalities. Currently, deep supervised learning is the state-of-the-art approach, in which registration is conducted end-to-end in one shot. However, a huge amount of ground-truth data is required to improve the results of deep neural networks for registration, and supervised methods may yield models that are biased towards annotated structures. To deal with these challenges, an alternative approach is to use unsupervised learning models. In this study, we designed a novel deep unsupervised Convolutional Neural Network (CNN)-based model for affine co-registration of computed tomography/magnetic resonance (CT/MR) brain images. For this purpose, we created a dataset consisting of 1100 pairs of CT/MR slices from the brains of 110 neuropsychiatric patients with or without tumor. Next, 12 landmarks were selected by a well-experienced radiologist and annotated on each slice, enabling the computation of a series of evaluation metrics: target registration error (TRE), Dice similarity, Hausdorff distance, and the Jaccard coefficient. The proposed method registered the multimodal images with a TRE of 9.89, Dice similarity of 0.79, Hausdorff distance of 7.15, and Jaccard coefficient of 0.75, which are appreciable for clinical applications. Moreover, the approach registered the images in an acceptable time of 203 ms, making it suitable for clinical usage thanks to the short registration time and high accuracy. The results illustrate that our proposed method achieved competitive performance against related approaches in terms of both computation time and the evaluation metrics.
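The evaluation metrics named above are standard and straightforward to reproduce. A minimal sketch in plain Python (the landmark and mask inputs are illustrative; this is not the authors' evaluation code):

```python
import math

def target_registration_error(landmarks_fixed, landmarks_warped):
    """Mean Euclidean distance between corresponding landmark pairs."""
    dists = [math.dist(p, q) for p, q in zip(landmarks_fixed, landmarks_warped)]
    return sum(dists) / len(dists)

def dice_jaccard(mask_a, mask_b):
    """Dice and Jaccard coefficients for two binary masks
    given as flat sequences of 0/1 labels."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size_a, size_b = sum(mask_a), sum(mask_b)
    dice = 2 * inter / (size_a + size_b)
    jaccard = inter / (size_a + size_b - inter)
    return dice, jaccard
```

TRE is computed on expert landmarks (here, the 12 annotated points per slice), while Dice and Jaccard quantify the volumetric overlap of registered structure masks.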
Affiliation(s)
- Samaneh Abbasi
- Department of Medical Physics and Engineering, School of Medicine, Shiraz University of Medical Sciences, Shiraz, Iran
- Alireza Mehdizadeh
- Research Center for Neuromodulation and Pain, Shiraz University of Medical Sciences, Shiraz, Iran
- Hamid Reza Boveiri
- Department of Computer Engineering and IT, Shiraz University of Technology, Shiraz, Iran
- Mohammad Amin Mosleh Shirazi
- Ionizing and Non-Ionizing Radiation Protection Research Center, School of Paramedical Sciences, Shiraz University of Medical Sciences, Shiraz, Iran
- Reza Javidan
- Department of Computer Engineering and IT, Shiraz University of Technology, Shiraz, Iran
- Raouf Khayami
- Department of Computer Engineering and IT, Shiraz University of Technology, Shiraz, Iran
- Meysam Tavakoli
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
7. Zhang X, Gosnell J, Nainamalai V, Page S, Huang S, Haw M, Peng B, Vettukattil J, Jiang J. Advances in TEE-Centric Intraprocedural Multimodal Image Guidance for Congenital and Structural Heart Disease. Diagnostics (Basel) 2023;13:2981. [PMID: 37761348] [PMCID: PMC10530233] [DOI: 10.3390/diagnostics13182981]
Abstract
Percutaneous interventions are gaining rapid acceptance in cardiology and revolutionizing the treatment of structural heart disease (SHD). As new percutaneous procedures for SHD are developed, their associated complexity and anatomical variability demand a high-resolution spatial understanding for intraprocedural image guidance. Over the last decade, three-dimensional (3D) transesophageal echocardiography (TEE) has become one of the most widely used imaging methods for structural interventions. Although 3D-TEE can assess cardiac structures and function in real time, its limitations (e.g., limited field of view, image quality at large depth, etc.) must be addressed for its universal adoption, as well as to improve the quality of imaging and interventions. This review presents the role of TEE in the intraprocedural guidance of percutaneous structural interventions. We also focus on the current and future developments required in a multimodal image integration process when using TEE to enhance the management of congenital and SHD treatments.
Affiliation(s)
- Xinyue Zhang
- School of Computer Science, Southwest Petroleum University, Chengdu 610500, China
- Jordan Gosnell
- Betz Congenital Health Center, Helen DeVos Children’s Hospital, Grand Rapids, MI 49503, USA
- Varatharajan Nainamalai
- Department of Biomedical Engineering, Michigan Technological University, Houghton, MI 49931, USA
- Joint Center for Biocomputing and Digital Health, Health Research Institute and Institute of Computing and Cybernetics, Michigan Technological University, Houghton, MI 49931, USA
- Savannah Page
- Department of Biomedical Engineering, Michigan Technological University, Houghton, MI 49931, USA
- Joint Center for Biocomputing and Digital Health, Health Research Institute and Institute of Computing and Cybernetics, Michigan Technological University, Houghton, MI 49931, USA
- Sihong Huang
- Betz Congenital Health Center, Helen DeVos Children’s Hospital, Grand Rapids, MI 49503, USA
- Marcus Haw
- Betz Congenital Health Center, Helen DeVos Children’s Hospital, Grand Rapids, MI 49503, USA
- Bo Peng
- School of Computer Science, Southwest Petroleum University, Chengdu 610500, China
- Joseph Vettukattil
- Betz Congenital Health Center, Helen DeVos Children’s Hospital, Grand Rapids, MI 49503, USA
- Department of Biomedical Engineering, Michigan Technological University, Houghton, MI 49931, USA
- Jingfeng Jiang
- Department of Biomedical Engineering, Michigan Technological University, Houghton, MI 49931, USA
- Joint Center for Biocomputing and Digital Health, Health Research Institute and Institute of Computing and Cybernetics, Michigan Technological University, Houghton, MI 49931, USA
8. Masoumi N, Rivaz H, Hacihaliloglu I, Ahmad MO, Reinertsen I, Xiao Y. The Big Bang of Deep Learning in Ultrasound-Guided Surgery: A Review. IEEE Trans Ultrason Ferroelectr Freq Control 2023;70:909-919. [PMID: 37028313] [DOI: 10.1109/tuffc.2023.3255843]
Abstract
Ultrasound (US) imaging is a paramount modality in many image-guided surgeries and percutaneous interventions, thanks to its high portability, temporal resolution, and cost-efficiency. However, due to its imaging principles, US images are often noisy and difficult to interpret. Appropriate image processing can greatly enhance the applicability of the modality in clinical practice. Compared with classic iterative optimization and machine learning (ML) approaches, deep learning (DL) algorithms have shown great performance in terms of accuracy and efficiency for US processing. In this work, we conduct a comprehensive review of deep-learning algorithms for US-guided interventions, summarize current trends, and suggest future directions on the topic.
9. Mazzucchi E, Hiepe P, Langhof M, La Rocca G, Pignotti F, Rinaldi P, Sabatino G. Automatic rigid image fusion of preoperative MR and intraoperative US acquired after craniotomy. Cancer Imaging 2023;23:37. [PMID: 37055790] [PMCID: PMC10099637] [DOI: 10.1186/s40644-023-00554-x]
Abstract
BACKGROUND: Neuronavigation based on preoperative MRI is limited by several sources of error. Intraoperative ultrasound (iUS) with navigated probes that provide automatic superposition of pre-operative MRI and iUS, together with three-dimensional iUS reconstruction, may overcome some of these limitations. The aim of the present study is to verify the accuracy of an automatic MRI-iUS fusion algorithm for improving MR-based neuronavigation accuracy.
METHODS: An algorithm using a Linear Correlation of Linear Combination (LC2)-based similarity metric was retrospectively evaluated on twelve datasets acquired from patients with brain tumor. A series of landmarks were defined in both the MRI and iUS scans. The Target Registration Error (TRE) was determined for each pair of landmarks before and after automatic Rigid Image Fusion (RIF). The algorithm was tested under two conditions of initial image alignment: registration-based fusion (RBF), as given by the navigated ultrasound probe, and different simulated coarse alignments during a convergence test.
RESULTS: Except for one case, RIF was successfully applied in all patients when the RBF was used as the initial alignment. Here, the mean TRE was significantly reduced from 4.03 (± 1.40) mm after RBF to 2.08 (± 0.96) mm after RIF (p = 0.002). For the convergence test, the mean TRE after the initial perturbations was 8.82 (± 0.23) mm, which was reduced to 2.64 (± 1.20) mm after RIF (p < 0.001).
CONCLUSIONS: The integration of an automatic image fusion method for co-registration of pre-operative MRI and iUS data may improve the accuracy of MR-based neuronavigation.
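The LC2 metric used above scores, per patch, how much of the US intensity variance is explained by a least-squares fit of MR intensity and MR gradient magnitude. A simplified illustrative sketch in plain Python (1-D patches, central-difference gradient, per-patch formulation; hypothetical inputs, not the vendor's implementation):

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def lc2(us_patch, mr_patch):
    """LC2 similarity on one patch: fraction of US variance explained by a
    least-squares fit of [MR intensity, MR gradient magnitude, constant]."""
    n = len(us_patch)
    grad = [abs(mr_patch[min(i + 1, n - 1)] - mr_patch[max(i - 1, 0)]) / 2
            for i in range(n)]
    X = [[mr_patch[i], grad[i], 1.0] for i in range(n)]
    # Normal equations: (X^T X) w = X^T y.
    A = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(3)]
         for i in range(3)]
    b = [sum(X[k][i] * us_patch[k] for k in range(n)) for i in range(3)]
    w = solve3(A, b)
    resid = [us_patch[k] - sum(w[j] * X[k][j] for j in range(3)) for k in range(n)]
    mean_us = sum(us_patch) / n
    var_us = sum((u - mean_us) ** 2 for u in us_patch) / n
    var_r = sum(r * r for r in resid) / n
    return 1.0 - var_r / var_us if var_us > 0 else 0.0
```

A US patch that is an exact affine function of the MR patch scores 1.0; scores fall toward 0 as the fit degrades, which is what makes LC2 usable as an optimization objective for MR-US alignment.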
Affiliation(s)
- Edoardo Mazzucchi
- Unit of Neurosurgery, Mater Olbia Hospital, Olbia, Italy
- Institute of Neurosurgery, IRCCS Fondazione Policlinico Universitario Agostino Gemelli, Catholic University, Rome, Italy
- Giuseppe La Rocca
- Unit of Neurosurgery, Mater Olbia Hospital, Olbia, Italy
- Institute of Neurosurgery, IRCCS Fondazione Policlinico Universitario Agostino Gemelli, Catholic University, Rome, Italy
- Fabrizio Pignotti
- Unit of Neurosurgery, Mater Olbia Hospital, Olbia, Italy
- Institute of Neurosurgery, IRCCS Fondazione Policlinico Universitario Agostino Gemelli, Catholic University, Rome, Italy
- Giovanni Sabatino
- Unit of Neurosurgery, Mater Olbia Hospital, Olbia, Italy
- Institute of Neurosurgery, IRCCS Fondazione Policlinico Universitario Agostino Gemelli, Catholic University, Rome, Italy
10. Hering A, Hansen L, Mok TCW, Chung ACS, Siebert H, Hager S, Lange A, Kuckertz S, Heldmann S, Shao W, Vesal S, Rusu M, Sonn G, Estienne T, Vakalopoulou M, Han L, Huang Y, Yap PT, Brudfors M, Balbastre Y, Joutard S, Modat M, Lifshitz G, Raviv D, Lv J, Li Q, Jaouen V, Visvikis D, Fourcade C, Rubeaux M, Pan W, Xu Z, Jian B, De Benetti F, Wodzinski M, Gunnarsson N, Sjolund J, Grzech D, Qiu H, Li Z, Thorley A, Duan J, Grosbrohmer C, Hoopes A, Reinertsen I, Xiao Y, Landman B, Huo Y, Murphy K, Lessmann N, van Ginneken B, Dalca AV, Heinrich MP. Learn2Reg: Comprehensive Multi-Task Medical Image Registration Challenge, Dataset and Evaluation in the Era of Deep Learning. IEEE Trans Med Imaging 2023;42:697-712. [PMID: 36264729] [DOI: 10.1109/tmi.2022.3213983]
Abstract
Image registration is a fundamental medical image analysis task, and a wide variety of approaches have been proposed. However, only a few studies have comprehensively compared medical image registration approaches on a wide range of clinically relevant tasks. This limits the development of registration methods, the adoption of research advances into practice, and a fair benchmark across competing approaches. The Learn2Reg challenge addresses these limitations by providing a multi-task medical image registration data set for comprehensive characterisation of deformable registration algorithms. A continuous evaluation will be possible at https://learn2reg.grand-challenge.org. Learn2Reg covers a wide range of anatomies (brain, abdomen, and thorax), modalities (ultrasound, CT, MR), availability of annotations, as well as intra- and inter-patient registration evaluation. We established an easily accessible framework for training and validation of 3D registration methods, which enabled the compilation of results of over 65 individual method submissions from more than 20 unique teams. We used a complementary set of metrics, including robustness, accuracy, plausibility, and runtime, enabling unique insight into the current state-of-the-art of medical image registration. This paper describes datasets, tasks, evaluation methods and results of the challenge, as well as results of further analysis of transferability to new datasets, the importance of label supervision, and resulting bias. While no single approach worked best across all tasks, many methodological aspects could be identified that push the performance of medical image registration to new state-of-the-art performance. Furthermore, we demystified the common belief that conventional registration methods have to be much slower than deep-learning-based methods.
11. A hybrid deformable registration method to generate motion-compensated 3D virtual MRI for fusion with interventional real-time 3D ultrasound. Int J Comput Assist Radiol Surg 2023. [PMID: 36648702] [DOI: 10.1007/s11548-023-02833-1]
Abstract
PURPOSE: Ultrasound is often the preferred modality for image-guided therapy or treatment in organs such as the liver due to its real-time imaging capabilities. However, the reduced conspicuity of tumors in ultrasound images adversely impacts the precision and accuracy of treatment delivery. This problem is compounded by deformable motion due to breathing and other physiological activity. This creates the need for a fusion method to align interventional US with pre-interventional modalities that provide superior soft-tissue contrast (e.g., MRI) in order to accurately target a structure-of-interest and compensate for liver motion.
METHOD: In this work, we developed a hybrid deformable fusion method to align 3D pre-interventional MRI and 3D interventional US volumes in real-time to accurately target structures-of-interest in the liver. The deformable multimodal fusion method involves an offline alignment of a pre-intervention MRI with a pre-intervention US volume using a traditional registration method, followed by real-time prediction of deformation between interventional US volumes across different respiratory states using a trained deep-learning model. This framework enables motion-compensated MRI-US image fusion in real-time for image-guided treatment.
RESULTS: The proposed hybrid deformable registration method was evaluated on three healthy volunteers across the pre-intervention MRI and 20 US volume pairs spanning the free-breathing respiratory cycle. The mean Euclidean landmark distance of three homologous targets in all three volunteers was less than 3 mm, suitable for percutaneous liver procedures.
CONCLUSIONS: Preliminary results show that clinically acceptable registration accuracies for near real-time, deformable MRI-US fusion can be achieved by our proposed hybrid approach. The proposed combination of traditional and deep-learning deformable registration techniques is thus a promising approach for motion-compensated MRI-US fusion to improve targeting in image-guided liver interventions.
12. Ma K, Sahinidis NV, Bindlish R, Bury SJ, Haghpanah R, Rajagopalan S. Data-driven strategies for extractive distillation unit optimization. Comput Chem Eng 2022. [DOI: 10.1016/j.compchemeng.2022.107970]
13. Automatic 3D MRI-Ultrasound Registration for Image Guided Arthroscopy. Appl Sci (Basel) 2022. [DOI: 10.3390/app12115488]
Abstract
Registration of partial view intra-operative ultrasound (US) to pre-operative MRI is an essential step in image-guided minimally invasive surgery. In this paper, we present an automatic, landmark-free 3D multimodal registration of pre-operative MRI to 4D US (high-refresh-rate 3D-US) for enabling guidance in knee arthroscopy. We focus on the problem of initializing registration in the case of partial views. The proposed method uses a pre-initialization step in which automatically segmented structures from both modalities provide a global geometric initialization. This is followed by computing distance maps of the procured segmentations for registration in the distance space. The final local refinement between the MRI-US volumes is then achieved using the LC2 (linear correlation of linear combination) metric. The method is evaluated on 11 cases spanning six subjects, with four levels of knee flexion. A best-case error of 1.41 mm and 2.34° and an average registration error of 3.45 mm and 7.76° are achieved in translation and rotation, respectively. An inter-observer variability study is performed, and a mean difference of 4.41 mm and 7.77° is reported. The errors obtained with the developed registration algorithm are comparable to the inter-observer differences. We have shown that the proposed algorithm is simple, robust, and allows for the automatic global registration of 3D US and MRI that can enable US-based image guidance in minimally invasive procedures.
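The "registration in the distance space" step, computing distance maps from the segmented structures and comparing them, can be illustrated with a brute-force 2D sketch (our own simplification; the paper's pipeline works on 3D volumes and uses the LC2 metric for the final refinement):

```python
import math

def distance_map(mask):
    """Brute-force Euclidean distance from every pixel to the nearest
    foreground pixel of a binary segmentation mask."""
    fg = [(i, j) for i, row in enumerate(mask) for j, v in enumerate(row) if v]
    h, w = len(mask), len(mask[0])
    return [[min(math.dist((i, j), p) for p in fg) for j in range(w)]
            for i in range(h)]

def ssd(map_a, map_b):
    """Sum of squared differences between two distance maps, a simple
    dissimilarity a registration optimizer could minimize."""
    return sum((a - b) ** 2
               for ra, rb in zip(map_a, map_b) for a, b in zip(ra, rb))
```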
14
Liu YR, He HH, Wu J. Differentiation of Human GBM From Non-GBM Brain Tissue With Polarization Imaging Technique. Front Oncol 2022; 12:863682. [PMID: 35574382 PMCID: PMC9095988 DOI: 10.3389/fonc.2022.863682] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2022] [Accepted: 03/22/2022] [Indexed: 12/01/2022] Open
Abstract
Among optical techniques, 5-aminolevulinic acid (5-ALA) fluorescence guidance has difficulty detecting glioma completely, owing to residual cells in blind areas and dead angles of view under the microscope. The purpose of this research is to characterize the different microstructural information and optical properties of formalin-soaked, unstained glioblastoma (GBM) and non-GBM tissue with the polarization imaging technique (PIT), providing a novel method to detect GBM during surgery. In this paper, a 3×3 Mueller matrix polarization experimental system in backscattering mode was built to image GBM and non-GBM tissue bulks. The Mueller matrix decomposition and transformation parameters of GBM and non-GBM tissue were calculated and analyzed, showing that the parameters (1−Δ) and t are good indicators for distinguishing GBM from non-GBM tissues. Furthermore, the central moment coefficients (CMCs) of the frequency distribution histogram (FDH) were also calculated and used to distinguish the cancerous tissues. The experimental results confirmed the feasibility of applying PIT in the clinic to detect glioma, laying the foundation for subsequent non-invasive, stain-free glioma detection.
Affiliation(s)
- Yi-Rong Liu
- School of Medicine, Tsinghua University, Beijing, China
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
- Hong-Hui He
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
- Jian Wu
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
15
Li Q, Yuan Y, Song G, Liu Y. Nursing Analysis Based on Medical Imaging Technology before and after Coronary Angiography in Cardiovascular Medicine. Appl Bionics Biomech 2022; 2022:3279068. [PMID: 35465185 PMCID: PMC9033406 DOI: 10.1155/2022/3279068] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2022] [Revised: 03/19/2022] [Accepted: 03/29/2022] [Indexed: 11/17/2022] Open
Abstract
With the advancement of technology, medical imaging has been greatly improved. This article studies nursing before and after coronary angiography in cardiovascular medicine based on medical imaging technology, and proposes a multimodal medical image fusion algorithm based on multiscale decomposition and convolutional sparse representation. The algorithm first decomposes the preregistered source medical images by NSST, takes the subimages at different scales as training images, and optimizes subdictionaries for each scale; it then applies convolutional sparse coding to the subimages at each scale to obtain their sparse coefficients. The high-frequency subimage coefficients are fused using a combination of an improved L1 norm and an improved spatial frequency (novel sum-modified spatial frequency, NMSF), while the low-frequency subimages are fused with an improved rule combining the L1 norm and regional energy. Finally, the fused image is obtained by inverse NSST of the fused low-frequency and high-frequency subbands. Experimental analysis found that the bifurcation angle is unrelated to damage of the branch vessels after main-branch stent placement, and that a bifurcation angle greater than 50° is an independent predictor of MACE after stent extrusion for bifurcation lesions. Experimental results show that the proposed method performs well in contrast enhancement, detail extraction, and information retention, and improves the quality of the fused image.
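The spatial-frequency ingredient of the high-frequency fusion rule can be sketched with the classic definition below; the paper's NMSF is a modified variant, so treat this as an illustrative stand-in, not the authors' formula:

```python
import math

def spatial_frequency(img):
    """Classic spatial frequency of a 2D image: sqrt(RF^2 + CF^2), where RF
    and CF are RMS intensity differences along rows and columns."""
    m, n = len(img), len(img[0])
    rf = math.sqrt(sum((img[i][j] - img[i][j - 1]) ** 2
                       for i in range(m) for j in range(1, n)) / (m * n))
    cf = math.sqrt(sum((img[i][j] - img[i - 1][j]) ** 2
                       for i in range(1, m) for j in range(n)) / (m * n))
    return math.sqrt(rf ** 2 + cf ** 2)
```

A fusion rule would keep, at each location, the coefficient from whichever source patch scores higher on this activity measure.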
Affiliation(s)
- Qin Li
- Department of Cardiovascular Medicine, Lianyungang First People's Hospital, Lianyungang, 222002 Jiangsu, China
- Yangyang Yuan
- Department of Cardiovascular Medicine, Lianyungang First People's Hospital, Lianyungang, 222002 Jiangsu, China
- Guangyu Song
- Department of Cardiovascular Medicine, Lianyungang First People's Hospital, Lianyungang, 222002 Jiangsu, China
- Yonghua Liu
- Department of Cardiovascular Medicine, Lianyungang First People's Hospital, Lianyungang, 222002 Jiangsu, China
16
Farnia P, Makkiabadi B, Alimohamadi M, Najafzadeh E, Basij M, Yan Y, Mehrmohammadi M, Ahmadian A. Photoacoustic-MR Image Registration Based on a Co-Sparse Analysis Model to Compensate for Brain Shift. SENSORS 2022; 22:s22062399. [PMID: 35336570 PMCID: PMC8954240 DOI: 10.3390/s22062399] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/19/2021] [Revised: 11/16/2021] [Accepted: 11/18/2021] [Indexed: 12/13/2022]
Abstract
Brain shift is an important obstacle to the application of image guidance during neurosurgical interventions. There has been growing interest in intra-operative imaging to update image-guided surgery systems. However, due to the innate limitations of current imaging modalities, accurate brain shift compensation remains a challenging task. In this study, intra-operative photoacoustic imaging and registration of the intra-operative photoacoustic images with pre-operative MR images are proposed to compensate for brain deformation. Finding a satisfactory registration method is challenging due to the unpredictable nature of brain deformation. Here, a co-sparse analysis model is proposed for photoacoustic-MR image registration, which can capture the interdependency of the two modalities. The proposed algorithm works by minimizing the mapping transform via a pair of analysis operators learned by the alternating direction method of multipliers. The method was evaluated using an experimental phantom and ex vivo data obtained from a mouse brain. On the phantom data, it improves target registration error by about 63% compared with the commonly used normalized mutual information method. These results suggest that intra-operative photoacoustic imaging could become a promising tool when brain shift invalidates pre-operative MRI.
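The normalized-mutual-information baseline referenced above can be sketched from a discrete joint histogram; the function names are ours, and the inputs are assumed to be images already flattened and quantized into intensity bins:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in nats) of a discrete sequence."""
    n = len(labels)
    return -sum(c / n * math.log(c / n) for c in Counter(labels).values())

def normalized_mutual_information(a, b):
    """Studholme's NMI, (H(A) + H(B)) / H(A, B): 1 for independent images,
    2 when one image exactly predicts the other."""
    return (entropy(a) + entropy(b)) / entropy(list(zip(a, b)))
```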
Affiliation(s)
- Parastoo Farnia
- Medical Physics and Biomedical Engineering Department, Faculty of Medicine, Tehran University of Medical Sciences (TUMS), Tehran 1417653761, Iran; (P.F.); (B.M.); (E.N.)
- Research Centre of Biomedical Technology and Robotics (RCBTR), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran
- Bahador Makkiabadi
- Medical Physics and Biomedical Engineering Department, Faculty of Medicine, Tehran University of Medical Sciences (TUMS), Tehran 1417653761, Iran; (P.F.); (B.M.); (E.N.)
- Research Centre of Biomedical Technology and Robotics (RCBTR), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran
- Maysam Alimohamadi
- Brain and Spinal Cord Injury Research Center, Neuroscience Institute, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran;
- Ebrahim Najafzadeh
- Medical Physics and Biomedical Engineering Department, Faculty of Medicine, Tehran University of Medical Sciences (TUMS), Tehran 1417653761, Iran; (P.F.); (B.M.); (E.N.)
- Research Centre of Biomedical Technology and Robotics (RCBTR), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran
- Maryam Basij
- Department of Biomedical Engineering, Wayne State University, Detroit, MI 48201, USA; (M.B.); (Y.Y.)
- Yan Yan
- Department of Biomedical Engineering, Wayne State University, Detroit, MI 48201, USA; (M.B.); (Y.Y.)
- Mohammad Mehrmohammadi
- Department of Biomedical Engineering, Wayne State University, Detroit, MI 48201, USA; (M.B.); (Y.Y.)
- Barbara Ann Karmanos Cancer Institute, Detroit, MI 48201, USA
- Correspondence: (M.M.); (A.A.)
- Alireza Ahmadian
- Medical Physics and Biomedical Engineering Department, Faculty of Medicine, Tehran University of Medical Sciences (TUMS), Tehran 1417653761, Iran; (P.F.); (B.M.); (E.N.)
- Research Centre of Biomedical Technology and Robotics (RCBTR), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran
- Correspondence: (M.M.); (A.A.)
17
Liu YR, Sun WZ, Wu J. Effect of the Samples' Surface With Complex Microscopic Geometry on 3 × 3 Mueller Matrix Measurement of Tissue Bulks. Front Bioeng Biotechnol 2022; 10:841298. [PMID: 35356770 PMCID: PMC8959538 DOI: 10.3389/fbioe.2022.841298] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2021] [Accepted: 02/14/2022] [Indexed: 11/24/2022] Open
Abstract
The surface of clinical in vivo tissue bulks is typically coarse and exhibits complex microscopic geometry, which may affect the visual quality of polarization images and the calculation of polarization parameters of the sample. To determine whether this effect causes identification difficulties or misjudgments in target recognition when performing polarization imaging based on 3 × 3 Mueller matrix measurement, cylindrical and slope-type physical models were used to analyze the effect of surfaces with complex microscopic geometry on polarization images. Clinical tumor bulk samples were then combined with patterns of different sizes to simulate different complex microscopic geometries and to test the effect of coarse surfaces on polarization images. Assessment parameters were defined to quantitatively evaluate the variation between two polarization images. The results showed that polarization imaging of sample surfaces with complex microscopic geometry produced acceptable visual quality and limited quantitative variation in the polarization and assessment parameters, and caused no identification difficulties in target recognition, indicating that it is feasible to apply polarization imaging based on 3 × 3 Mueller matrix measurement to clinical in vivo tissues with complex microscopic surface geometry.
Affiliation(s)
- Yi-Rong Liu
- School of Medicine, Tsinghua University, Beijing, China
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
- Wei-Zheng Sun
- Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen, China
- Jian Wu
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
18
Matsumae M, Nishiyama J, Kuroda K. Intraoperative MR Imaging during Glioma Resection. Magn Reson Med Sci 2022; 21:148-167. [PMID: 34880193 PMCID: PMC9199972 DOI: 10.2463/mrms.rev.2021-0116] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2021] [Accepted: 10/11/2021] [Indexed: 11/09/2022] Open
Abstract
One of the major issues in the surgical treatment of gliomas is the concern about maximizing the extent of resection while minimizing neurological impairment. Thus, surgical planning by carefully observing the relationship between the glioma infiltration area and eloquent area of the connecting fibers is crucial. Neurosurgeons usually detect an eloquent area by functional MRI and identify a connecting fiber by diffusion tensor imaging. However, during surgery, the accuracy of neuronavigation can be decreased due to brain shift, but the positional information may be updated by intraoperative MRI and the next steps can be planned accordingly. In addition, various intraoperative modalities may be used to guide surgery, including neurophysiological monitoring that provides real-time information (e.g., awake surgery, motor-evoked potentials, and sensory evoked potential); photodynamic diagnosis, which can identify high-grade glioma cells; and other imaging techniques that provide anatomical information during the surgery. In this review, we present the historical and current context of the intraoperative MRI and some related approaches for an audience active in the technical, clinical, and research areas of radiology, as well as mention important aspects regarding safety and types of devices.
Affiliation(s)
- Mitsunori Matsumae
- Department of Neurosurgery, Tokai University School of Medicine, Isehara, Kanagawa, Japan
- Jun Nishiyama
- Department of Neurosurgery, Tokai University School of Medicine, Isehara, Kanagawa, Japan
- Kagayaki Kuroda
- Department of Human and Information Sciences, School of Information Science and Technology, Tokai University, Hiratsuka, Kanagawa, Japan
19
Hoffmann M, Billot B, Greve DN, Iglesias JE, Fischl B, Dalca AV. SynthMorph: Learning Contrast-Invariant Registration Without Acquired Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:543-558. [PMID: 34587005 PMCID: PMC8891043 DOI: 10.1109/tmi.2021.3116879] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
We introduce a strategy for learning image registration without acquired imaging data, producing powerful networks agnostic to contrast introduced by magnetic resonance imaging (MRI). While classical registration methods accurately estimate the spatial correspondence between images, they solve an optimization problem for every new image pair. Learning-based techniques are fast at test time but limited to registering images with contrasts and geometric content similar to those seen during training. We propose to remove this dependency on training data by leveraging a generative strategy for diverse synthetic label maps and images that exposes networks to a wide range of variability, forcing them to learn more invariant features. This approach results in powerful networks that accurately generalize to a broad array of MRI contrasts. We present extensive experiments with a focus on 3D neuroimaging, showing that this strategy enables robust and accurate registration of arbitrary MRI contrasts even if the target contrast is not seen by the networks during training. We demonstrate registration accuracy surpassing the state of the art both within and across contrasts, using a single model. Critically, training on arbitrary shapes synthesized from noise distributions results in competitive performance, removing the dependency on acquired data of any kind. Additionally, since anatomical label maps are often available for the anatomy of interest, we show that synthesizing images from these dramatically boosts performance, while still avoiding the need for real intensity images. Our code is available at https://w3id.org/synthmorph.
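The core synthesis idea, drawing a random intensity per anatomical label so that every draw yields a new synthetic "contrast" from the same label map, can be sketched as follows (our simplification; the actual generative model also randomizes spatial deformations, bias fields, noise, and blurring):

```python
import random

def synthesize_image(label_map, rng):
    """Assign each anatomical label a random mean intensity, producing an
    image with an arbitrary synthetic 'contrast' from the same label map."""
    intensities = {}
    return [[intensities.setdefault(lab, rng.random()) for lab in row]
            for row in label_map]

rng = random.Random(42)
labels = [[0, 0, 1],
          [0, 2, 1]]
img_a = synthesize_image(labels, rng)  # one random contrast
img_b = synthesize_image(labels, rng)  # a second, different contrast
```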
20
Siebert H, Hansen L, Heinrich MP. Learning a Metric for Multimodal Medical Image Registration without Supervision Based on Cycle Constraints. SENSORS (BASEL, SWITZERLAND) 2022; 22:1107. [PMID: 35161851 PMCID: PMC8840694 DOI: 10.3390/s22031107] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/28/2021] [Revised: 01/24/2022] [Accepted: 01/27/2022] [Indexed: 06/14/2023]
Abstract
Deep learning based medical image registration remains very difficult and often fails to improve over its classical counterparts when comprehensive supervision is not available, in particular for large transformations, including rigid alignment. The use of unsupervised, metric-based registration networks has become popular, but so far no universally applicable similarity metric is available for multimodal medical registration, requiring a trade-off between local contrast-invariant edge features and more global statistical metrics. In this work, we aim to improve over the use of handcrafted metric-based losses. We propose to use synthetic three-way (triangular) cycles that, for each pair of images, comprise two multimodal transformations to be estimated and one known synthetic monomodal transform. Additionally, we present a robust method for estimating large rigid transformations that is differentiable for end-to-end learning. By minimising the cycle discrepancy and adapting the synthetic transformation to be close to the real geometric difference of the image pairs during training, we successfully tackle intra-patient abdominal CT-MRI registration and reach performance on par with state-of-the-art metric-supervision and classical methods. Cyclic constraints enable the learning of cross-modality features that excel at accurate anatomical alignment of abdominal CT and MRI scans.
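The three-way cycle constraint can be illustrated with 2D rotation matrices standing in for the estimated transforms; a mutually consistent cycle composes to the identity, and the residual is the discrepancy a training loss would minimize. This is our own sketch, not the paper's implementation, which handles full rigid 3D transforms:

```python
import math

def rot(theta):
    """2x2 rotation matrix for angle theta (radians)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def cycle_discrepancy(t_ab, t_bc, t_ca):
    """Frobenius distance between the composed three-way cycle and the
    identity; zero when the three transforms are mutually consistent."""
    m = matmul(t_ca, matmul(t_bc, t_ab))
    eye = [[1.0, 0.0], [0.0, 1.0]]
    return math.sqrt(sum((m[i][j] - eye[i][j]) ** 2
                         for i in range(2) for j in range(2)))
```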
Affiliation(s)
- Hanna Siebert
- Institute of Medical Informatics, Universität zu Lübeck, 23538 Lübeck, Germany; (L.H.); (M.P.H.)
21
Lesage AC, Simmons A, Sen A, Singh S, Chen M, Cazoulat G, Weinberg JS, Brock KK. Viscoelastic biomechanical models to predict inward brain-shift using public benchmark data. Phys Med Biol 2021; 66. [PMID: 34469879 DOI: 10.1088/1361-6560/ac22dc] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2020] [Accepted: 09/01/2021] [Indexed: 11/11/2022]
Abstract
Brain-shift during neurosurgery compromises the accuracy of tracking the boundaries of the tumor to be resected. Although several studies have used various finite element models (FEMs) to predict inward brain-shift, evaluation of their accuracy and efficiency on public benchmark data has been limited. This study evaluates several FEMs proposed in the literature (various boundary conditions, mesh sizes, and material properties) using intraoperative imaging data from the public REtroSpective Evaluation of Cerebral Tumors (RESECT) database. Four patients with low-grade gliomas were identified as having inward brain-shifts. We computed the accuracy (target registration error, TRE) of several FEM-based brain-shift predictions and compared our findings. Since information on head orientation during craniotomy is not included in this database, we tested various plausible angles of head rotation. We analyzed the effects of brain tissue viscoelastic properties, mesh size, craniotomy position, CSF drainage level, and rigidity of the meninges, and then quantitatively evaluated the trade-off between accuracy and central processing unit time in predicting inward brain-shift across all models with second-order tetrahedral FEMs. The mean initial TRE was 5.78 ± 3.78 mm with rigid registration. FEM prediction (edge length, 5 mm) with non-rigid meninges led to a mean TRE correction of 1.84 ± 0.83 mm assuming heterogeneous material. For the low-grade glioma patients in the study, including non-rigid modeling of the meninges was statistically significant, whereas including heterogeneity was not. To estimate the optimal head orientation and CSF drainage, an angle step of 5° and a CSF height step of 5 mm were sufficient, leading to <0.26 mm TRE fluctuation.
Affiliation(s)
- Anne-Cecile Lesage
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, United States of America
- Alexis Simmons
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, United States of America
- Anando Sen
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, United States of America
- Simran Singh
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, United States of America
- Melissa Chen
- Department of Neuroradiology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States of America
- Guillaume Cazoulat
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, United States of America
- Jeffrey S Weinberg
- Department of Neurosurgery, The University of Texas MD Anderson Cancer Center, Houston, TX, United States of America
- Kristy K Brock
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, United States of America
22
Zhong X, Amrehn M, Ravikumar N, Chen S, Strobel N, Birkhold A, Kowarschik M, Fahrig R, Maier A. Deep action learning enables robust 3D segmentation of body organs in various CT and MRI images. Sci Rep 2021; 11:3311. [PMID: 33558570 PMCID: PMC7870874 DOI: 10.1038/s41598-021-82370-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2020] [Accepted: 01/14/2021] [Indexed: 11/09/2022] Open
Abstract
In this study, we propose a novel point cloud based 3D registration and segmentation framework using reinforcement learning. An artificial agent, implemented as a distinct actor based on value networks, is trained to predict the optimal piece-wise linear transformation of a point cloud for the joint tasks of registration and segmentation. The actor network estimates a set of plausible actions, and the value network selects the optimal action for the current observation. Point-wise features that comprise spatial positions (and surface normal vectors in the case of structured meshes), together with their corresponding image features, are used to encode the observation and represent the underlying 3D volume. The actor and value networks are applied iteratively to estimate a sequence of transformations that enable accurate delineation of object boundaries. The proposed approach was extensively evaluated on both segmentation and registration tasks using a variety of challenging clinical datasets. Our method has fewer trainable parameters and lower computational complexity than the 3D U-Net, and it is independent of the volume resolution. We show that the proposed method is applicable to mono- and multi-modal segmentation tasks, achieving significant improvements over the state of the art for the latter. The flexibility of the proposed framework is further demonstrated in a multi-modal registration application. Because we learn to predict actions rather than a target, the proposed method is more robust than the 3D U-Net when dealing with previously unseen datasets acquired using different protocols or modalities. As a result, the proposed method provides a promising multi-purpose segmentation and registration framework, particularly in the context of image-guided interventions.
Affiliation(s)
- Xia Zhong
- Pattern Recognition Lab, Friedrich-Alexander University, Erlangen-Nürnberg, Germany.
- Mario Amrehn
- Pattern Recognition Lab, Friedrich-Alexander University, Erlangen-Nürnberg, Germany
- Nishant Ravikumar
- Pattern Recognition Lab, Friedrich-Alexander University, Erlangen-Nürnberg, Germany
- Shuqing Chen
- Pattern Recognition Lab, Friedrich-Alexander University, Erlangen-Nürnberg, Germany
- Norbert Strobel
- Institute of Medical Engineering, University of Applied Sciences, Würzburg-Schweinfurt, Germany
- Andreas Maier
- Pattern Recognition Lab, Friedrich-Alexander University, Erlangen-Nürnberg, Germany
23
Reinertsen I, Collins DL, Drouin S. The Essential Role of Open Data and Software for the Future of Ultrasound-Based Neuronavigation. Front Oncol 2021; 10:619274. [PMID: 33604299 PMCID: PMC7884817 DOI: 10.3389/fonc.2020.619274] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2020] [Accepted: 12/11/2020] [Indexed: 01/17/2023] Open
Abstract
With the recent developments in machine learning and modern graphics processing units (GPUs), there is a marked shift in the way intra-operative ultrasound (iUS) images can be processed and presented during surgery. Real-time processing of images to highlight important anatomical structures, combined with in-situ display, has the potential to greatly facilitate the acquisition and interpretation of iUS images when guiding an operation. In order to take full advantage of the recent advances in machine learning, large amounts of high-quality annotated training data are necessary to develop and validate the algorithms. To ensure efficient collection of a sufficient number of patient images and external validity of the models, training data should be collected at several centers by different neurosurgeons, and stored in a standard format directly compatible with the most commonly used machine learning toolkits and libraries. In this paper, we argue that such an effort to collect and organize large-scale multi-center datasets should be based on common open-source software and databases. We first describe the development of existing open-source ultrasound-based neuronavigation systems and how these systems have contributed to enhanced neurosurgical guidance over the last 15 years. We review the impact of the large number of projects worldwide that have benefited from the publicly available datasets "Brain Images of Tumors for Evaluation" (BITE) and "Retrospective evaluation of Cerebral Tumors" (RESECT), which include MR and US data from brain tumor cases. We also describe the need for continuous data collection and how this effort can be organized through the use of a well-adapted and user-friendly open-source software platform that integrates both continually improved guidance and automated data collection functionalities.
Affiliation(s)
- Ingerid Reinertsen
- Department of Health Research, SINTEF Digital, Trondheim, Norway
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- D Louis Collins
- NIST Laboratory, McConnell Brain Imaging Center, Montreal Neurological Institute and Hospital, McGill University, Montréal, QC, Canada
- Simon Drouin
- Laboratoire Multimédia, École de Technologie Supérieure, Montréal, QC, Canada
24
Xiao Y, Lau JC, Hemachandra D, Gilmore G, Khan AR, Peters TM. Image Guidance in Deep Brain Stimulation Surgery to Treat Parkinson's Disease: A Comprehensive Review. IEEE Trans Biomed Eng 2020; 68:1024-1033. [PMID: 32746050 DOI: 10.1109/tbme.2020.3006765] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Deep brain stimulation (DBS) is an effective therapy as an alternative to pharmaceutical treatments for Parkinson's disease (PD). Aside from factors such as instrumentation, treatment plans, and surgical protocols, the success of the procedure depends heavily on the accurate placement of the electrode within the optimal therapeutic targets while avoiding vital structures that can cause surgical complications and adverse neurologic effects. Although specific surgical techniques for DBS can vary, interventional guidance with medical imaging has greatly contributed to the development, outcomes, and safety of the procedure. With rapid development in novel imaging techniques, computational methods, and surgical navigation software, as well as growing insights into the disease and mechanism of action of DBS, modern image guidance is expected to further enhance the capacity and efficacy of the procedure in treating PD. This article surveys the state-of-the-art techniques in image-guided DBS surgery to treat PD, and discusses their benefits and drawbacks, as well as future directions on the topic.
25
Carton FX, Chabanas M, Le Lann F, Noble JH. Automatic segmentation of brain tumor resections in intraoperative ultrasound images using U-Net. J Med Imaging (Bellingham) 2020; 7:031503. [PMID: 32090137 PMCID: PMC7026519 DOI: 10.1117/1.jmi.7.3.031503] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2019] [Accepted: 01/17/2020] [Indexed: 11/14/2022] Open
Abstract
To compensate for the intraoperative brain tissue deformation, computer-assisted intervention methods have been used to register preoperative magnetic resonance images with intraoperative images. In order to model the deformation due to tissue resection, the resection cavity needs to be segmented in intraoperative images. We present an automatic method to segment the resection cavity in intraoperative ultrasound (iUS) images. We trained and evaluated two-dimensional (2-D) and three-dimensional (3-D) U-Net networks on two datasets of 37 and 13 cases that contain images acquired from different ultrasound systems. The best overall performing method was the 3-D network, which resulted in a 0.72 mean and 0.88 median Dice score over the whole dataset. The 2-D network also had good results with less computation time, with a median Dice score over 0.8. We also evaluated the sensitivity of network performance to training and testing with images from different ultrasound systems and image field of view. In this application, we found specialized networks to be more accurate for processing similar images than a general network trained with all the data. Overall, promising results were obtained for both datasets using specialized networks. This motivates further studies with additional clinical data, to enable training and validation of a clinically viable deep-learning model for automated delineation of the tumor resection cavity in iUS images.
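The Dice score used above to evaluate the segmentations is a simple overlap ratio; a minimal sketch over flattened binary masks (the function name and toy masks are ours):

```python
def dice(seg_a, seg_b):
    """Dice coefficient 2|A∩B| / (|A| + |B|) for flattened binary masks."""
    intersection = sum(1 for a, b in zip(seg_a, seg_b) if a and b)
    return 2.0 * intersection / (sum(seg_a) + sum(seg_b))

score = dice([1, 1, 1, 0], [1, 1, 0, 0])  # 2*2 / (3+2) = 0.8
```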
Affiliation(s)
- François-Xavier Carton
- University of Grenoble Alpes, CNRS, Grenoble INP, TIMC-IMAG, Grenoble, France
- Vanderbilt University, Department of Electrical Engineering and Computer Science, Nashville, Tennessee, United States
- Matthieu Chabanas
- University of Grenoble Alpes, CNRS, Grenoble INP, TIMC-IMAG, Grenoble, France
- Vanderbilt University, Department of Electrical Engineering and Computer Science, Nashville, Tennessee, United States
- Florian Le Lann
- Grenoble Alpes University Hospital, Department of Neurosurgery, Grenoble, France
- Jack H. Noble
- Vanderbilt University, Department of Electrical Engineering and Computer Science, Nashville, Tennessee, United States
26
Špiclin Ž, McClelland J, Kybic J, Goksel O. Learning-Based Affine Registration of Histological Images. BIOMEDICAL IMAGE REGISTRATION 2020. [PMCID: PMC7279928 DOI: 10.1007/978-3-030-50120-4_2] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
The use of different stains for histological sample preparation reveals distinct tissue properties and may lead to a more accurate diagnosis. However, the staining process deforms the tissue slides, so registration is required before further processing. The importance of this problem led to the open Automatic Non-rigid Histological Image Registration (ANHIR) challenge, organized jointly with the IEEE ISBI 2019 conference. The challenge organizers provided several hundred image pairs and a server-side evaluation platform. One of the most difficult sub-problems for participants was finding an initial, global transform before computing the final, non-rigid deformation field. This article addresses that problem with a deep network trained in an unsupervised way that generalizes well. The proposed method handles images with different resolutions and aspect ratios without requiring image padding, while maintaining a low number of network parameters and a fast forward pass. It is orders of magnitude faster than classical approaches based on iterative similarity-metric optimization or computer-vision descriptors. The success rate is above 98% for both the training and evaluation sets. We make both the training and inference code freely available.
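The "initial, global transform" this entry refers to is a plain 2-D affine. As context, a minimal sketch of building such a transform from the parameters a global pre-alignment network typically predicts, and applying it to landmark points (an assumption-laden illustration, not the paper's architecture or code):

```python
import numpy as np

def affine_matrix(theta, sx, sy, shear, tx, ty):
    """Build a 3x3 homogeneous 2-D affine from rotation angle, anisotropic
    scale, shear, and translation -- the parameterization commonly used for
    a global pre-alignment step before non-rigid registration."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    scale_shear = np.array([[sx, shear], [0.0, sy]])
    M = np.eye(3)
    M[:2, :2] = rot @ scale_shear
    M[:2, 2] = [tx, ty]
    return M

def transform_points(M, pts):
    """Apply a 3x3 homogeneous affine to an (N, 2) array of landmarks."""
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    return (homog @ M.T)[:, :2]

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
M = affine_matrix(np.pi / 2, 1.0, 1.0, 0.0, 2.0, 0.0)  # 90° rotation + shift
print(transform_points(M, pts))
```

Because the affine has only six degrees of freedom, a network regressing these parameters stays small and fast, which is consistent with the low parameter count and fast forward pass claimed above.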
Affiliation(s)
- Žiga Špiclin
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
- Jamie McClelland
- Centre for Medical Image Computing, University College London, London, UK
- Jan Kybic
- Faculty of Electrical Engineering, Czech Technical University in Prague, Prague, Czech Republic
- Orcun Goksel
- Computer Vision Lab, ETH Zurich, Zurich, Switzerland
27
Machado I, Toews M, George E, Unadkat P, Essayed W, Luo J, Teodoro P, Carvalho H, Martins J, Golland P, Pieper S, Frisken S, Golby A, Wells W III, Ou Y. Deformable MRI-Ultrasound registration using correlation-based attribute matching for brain shift correction: Accuracy and generality in multi-site data. Neuroimage 2019; 202:116094. [PMID: 31446127 PMCID: PMC6819249 DOI: 10.1016/j.neuroimage.2019.116094] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2019] [Revised: 07/18/2019] [Accepted: 08/09/2019] [Indexed: 11/16/2022] Open
Abstract
Intraoperative tissue deformation, known as brain shift, decreases the benefit of using preoperative images to guide neurosurgery. Non-rigid registration of preoperative magnetic resonance (MR) images to intraoperative ultrasound (iUS) has been proposed to compensate for brain shift. We focus on the initial registration from MR to predurotomy iUS. We present a method that builds on previous work to address the need for accuracy and generality of MR-iUS registration algorithms on multi-site clinical data. High-dimensional texture attributes were used instead of image intensities for registration, and the standard difference-based attribute matching was replaced with correlation-based attribute matching. A strategy that deals explicitly with the large field-of-view mismatch between MR and iUS images was proposed. Key parameters were optimized across independent MR-iUS brain tumor datasets acquired at 3 institutions, with a total of 43 tumor patients and 758 reference landmarks for evaluating the accuracy of the proposed algorithm. Despite differences in imaging protocols, patient demographics, and landmark distributions, the algorithm reduces landmark errors prior to registration in the three datasets (5.37±4.27, 4.18±1.97, and 6.18±3.38 mm, respectively) to a consistently low level (2.28±0.71, 2.08±0.37, and 2.24±0.78 mm, respectively). The algorithm was tested against 15 other algorithms and is competitive with the state-of-the-art on multiple datasets. It achieves among the lowest errors in all datasets (accuracy) while using a fixed set of parameters across multi-site data (generality). In contrast, other algorithms or tools of similar performance require per-dataset parameter tuning (high accuracy but lower generality), while those that keep fixed parameters have larger errors or inconsistent performance (generality but not top accuracy). Landmark errors were further characterized by brain region and tumor type, a topic so far missing from the literature.
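The landmark errors reported in this entry (e.g. 5.37±4.27 mm reduced to 2.28±0.71 mm) are mean ± standard deviation of Euclidean distances between corresponding MR and iUS landmarks. A minimal sketch of that computation with hypothetical coordinates (the points below are made up, not the study's data):

```python
import numpy as np

def landmark_errors(fixed, moving):
    """Per-landmark Euclidean distances (mm) between corresponding points;
    returns (mean, std), the form in which registration error is reported."""
    d = np.linalg.norm(np.asarray(fixed, float) - np.asarray(moving, float),
                       axis=1)
    return d.mean(), d.std()

# Hypothetical corresponding landmarks in MR and iUS space (mm)
mr_pts = np.array([[10.0, 20.0, 30.0], [15.0, 25.0, 35.0]])
us_pts = np.array([[12.0, 20.0, 30.0], [15.0, 28.0, 35.0]])
mean_err, std_err = landmark_errors(mr_pts, us_pts)
print(f"{mean_err:.2f} ± {std_err:.2f} mm")  # 2.50 ± 0.50 mm
```

Computing the same statistic before and after registration, as done over the 758 reference landmarks here, is what allows accuracy to be compared across datasets with different landmark distributions.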
Affiliation(s)
- Inês Machado
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Mechanical Engineering, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal.
- Matthew Toews
- Department of Systems Engineering, École de Technologie Supérieure, Montreal, Canada
- Elizabeth George
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Prashin Unadkat
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Walid Essayed
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Jie Luo
- Graduate School of Frontier Sciences, University of Tokyo, Tokyo, Japan
- Pedro Teodoro
- Escola Superior Náutica Infante D. Henrique, Lisbon, Portugal
- Herculano Carvalho
- Department of Neurosurgery, Hospital de Santa Maria, CHLN, Lisbon, Portugal
- Jorge Martins
- Department of Mechanical Engineering, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal
- Polina Golland
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- Steve Pieper
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Isomics, Inc., Cambridge, MA, USA
- Sarah Frisken
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Alexandra Golby
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- William Wells III
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- Yangming Ou
- Department of Pediatrics and Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA.