1. Drakopoulos F, Liu Y, Garner K, Chrisochoides N. Image-to-mesh conversion method for multi-tissue medical image computing simulations. Engineering with Computers 2024; 40:3979-4005. PMID: 39717418; PMCID: PMC11666122; DOI: 10.1007/s00366-024-02023-w.
Abstract
Converting a three-dimensional medical image into a 3D mesh that satisfies both the quality and fidelity constraints of predictive simulations and image-guided surgical procedures remains a critical problem. This paper presents an image-to-mesh conversion method called CBC3D. It first discretizes a segmented image by generating an adaptive Body-Centered Cubic (BCC) mesh of high-quality elements. Next, the tetrahedral mesh is converted into a mixed-element mesh of tetrahedra, pentahedra, and hexahedra to decrease element count while maintaining quality. Finally, the mesh surfaces are deformed to their corresponding physical image boundaries, improving the mesh's fidelity. The deformation scheme builds upon the ITK open-source library and is based on the concept of energy minimization, relying on a multi-material point-based registration. It uses non-connectivity patterns to implicitly control the number of extracted feature points needed for the registration and, thus, adjusts the trade-off between the achieved mesh fidelity and the deformation speed. We compare CBC3D with four widely used and state-of-the-art homegrown image-to-mesh conversion methods from industry and academia. Results indicate that the CBC3D meshes (i) achieve high fidelity, (ii) keep the element count reasonably low, and (iii) exhibit good element quality.
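As a quick illustration of the Body-Centered Cubic lattice that underlies the first step of this method: a BCC point set consists of the corner vertices of a regular cubic grid plus one vertex at the centre of each cube, a layout that lends itself to high-quality tetrahedra. The sketch below generates such lattice points with NumPy; it is only a schematic of the BCC layout under an assumed cell spacing, not the CBC3D implementation.

```python
import numpy as np

def bcc_lattice_vertices(shape, spacing=8.0):
    """Return vertices of a Body-Centered Cubic (BCC) lattice covering a volume.

    shape   -- physical extent of the volume along (x, y, z), e.g. in mm
    spacing -- edge length of the cubic cells (assumed, purely illustrative)
    """
    # Primary lattice: vertices at the corners of each cubic cell.
    # (May extend slightly past the volume if the extent is not a multiple of spacing.)
    axes = [np.arange(0.0, s + spacing, spacing) for s in shape]
    corners = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)

    # Secondary lattice: one vertex at the centre of each cell,
    # i.e. the corner lattice shifted by half the spacing.
    centres = corners + spacing / 2.0

    # Keep only the centres that still fall inside the volume.
    inside = np.all(centres <= np.asarray(shape), axis=1)
    return np.vstack([corners, centres[inside]])

# Example: lattice points for a 64 x 64 x 48 mm volume.
points = bcc_lattice_vertices((64.0, 64.0, 48.0), spacing=8.0)
print(points.shape)
```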
Affiliation(s)
- Fotis Drakopoulos
- Center for Real-Time Computing, Department of Computer Science, Old Dominion University, Norfolk, VA, United States of America
- Yixun Liu
- Center for Real-Time Computing, Department of Computer Science, Old Dominion University, Norfolk, VA, United States of America
- Kevin Garner
- Center for Real-Time Computing, Department of Computer Science, Old Dominion University, Norfolk, VA, United States of America
- Nikos Chrisochoides
- Center for Real-Time Computing, Department of Computer Science, Old Dominion University, Norfolk, VA, United States of America
2. Antoniou PE, Economou D, Athanasiou A, Tsoulfas G. Editorial: Immersive media in connected health - volume II. Front Digit Health 2024; 6:1425769. PMID: 38832348; PMCID: PMC11144886; DOI: 10.3389/fdgth.2024.1425769.
Abstract
Immersive media, particularly Extended Reality (XR), is at the forefront of revolutionizing the healthcare industry. Healthcare provides XR with "silver bullet" use cases that add value and societal impact to the technology. Healthcare interventions frequently require imaging or visualization to be applied correctly, and the sensation of presence that XR can provide is crucial as a training aid for healthcare learners. From anatomy to surgical training, multimodal immersion in the reality of a medical situation increases the impact of an XR resource compared to the usual approach. Thus, healthcare has become a specialized focus for the immersive media sector, with a multitude of development and research efforts underway. This research topic, which followed on from the previous one, yielded an eclectic group of works spanning the gamut of immersive media applications in healthcare. The underlying theme in these works remains a consistent focus on calibrating, validating, verifying, and standardizing procedures, instruments, and technologies in order to constantly and rigorously streamline the means and materials that will integrate immersive technologies in healthcare. In that spirit, we share the findings from this research topic as a motivator for rigorous and evidence-based use of immersive media in digital and connected health.
Affiliation(s)
- P. E. Antoniou
- Lab of Medical Physics and Digital Innovations, Department of Medicine, School of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece
- D. Economou
- School of Computer Science, University of Westminster, London, United Kingdom
- A. Athanasiou
- Lab of Medical Physics and Digital Innovations, Department of Medicine, School of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece
- G. Tsoulfas
- Department of Transplantation Surgery, Ippokrateio General Hospital/Aristotle University of Thessaloniki, Thessaloniki, Greece
3. Li A, Ying Y, Gao T, Zhang L, Zhao X, Zhao Y, Song G, Zhang H. MF-Net: multi-scale feature extraction-integration network for unsupervised deformable registration. Front Neurosci 2024; 18:1364409. PMID: 38680447; PMCID: PMC11045908; DOI: 10.3389/fnins.2024.1364409.
Abstract
Deformable registration plays a fundamental and crucial role in scenarios such as surgical navigation and image-assisted analysis. While deformable registration methods based on unsupervised learning have shown remarkable success in predicting displacement fields with high accuracy, many existing registration networks are limited by the lack of multi-scale analysis, restricting comprehensive utilization of global and local features in the images. To address this limitation, we propose a novel registration network called multi-scale feature extraction-integration network (MF-Net). First, we propose a multi-scale analysis strategy that enables the model to capture global and local semantic information in the image, thus facilitating accurate texture and detail registration. Additionally, we introduce the grouped gated inception block (GI-Block) as the basic unit of the feature extractor, enabling the feature extractor to selectively extract quantitative features from images at various resolutions. Comparative experiments demonstrate the superior accuracy of our approach over existing methods.
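For readers unfamiliar with how unsupervised deformable registration networks of this kind are trained: the predicted displacement field is used to warp the moving image, and an image-similarity loss between the warped and fixed images drives learning. The sketch below shows that warping step in 2D with PyTorch's grid_sample; it is a generic illustration with assumed tensor shapes, not the MF-Net code.

```python
import torch
import torch.nn.functional as F

def warp_image(moving, displacement):
    """Warp a moving image with a dense displacement field (2D illustration).

    moving       -- tensor of shape (N, C, H, W)
    displacement -- tensor of shape (N, 2, H, W), displacements in pixels (dy, dx)
    """
    n, _, h, w = moving.shape
    # Identity sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=moving.dtype),
        torch.arange(w, dtype=moving.dtype),
        indexing="ij",
    )
    grid = torch.stack((ys, xs), dim=0).unsqueeze(0).expand(n, -1, -1, -1)
    # Add the predicted displacements, then normalise to [-1, 1] for grid_sample.
    new = grid + displacement
    new_y = 2.0 * new[:, 0] / (h - 1) - 1.0
    new_x = 2.0 * new[:, 1] / (w - 1) - 1.0
    sample_grid = torch.stack((new_x, new_y), dim=-1)  # grid_sample expects (x, y) order
    return F.grid_sample(moving, sample_grid, mode="bilinear", align_corners=True)

# Example: warping with a zero field reproduces the input (identity transform).
img = torch.rand(1, 1, 64, 64)
zero_field = torch.zeros(1, 2, 64, 64)
assert torch.allclose(warp_image(img, zero_field), img, atol=1e-5)
```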
Affiliation(s)
- Andi Li
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China
- University of Chinese Academy of Sciences, Beijing, China
- Yuhan Ying
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China
- University of Chinese Academy of Sciences, Beijing, China
- Tian Gao
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China
- School of Automation and Electrical Engineering, Shenyang Ligong University, Shenyang, China
- Lei Zhang
- Spine Surgery Unit, Shengjing Hospital of China Medical University, Shenyang, China
- Xingang Zhao
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China
- Yiwen Zhao
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China
- Guoli Song
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China
- He Zhang
- Orthopedic Department, The Second Affiliated Hospital of Chongqing Medical University, Chongqing, China
4. Chrisochoides N, Liu Y, Drakopoulos F, Kot A, Foteinos P, Tsolakis C, Billias E, Clatz O, Ayache N, Fedorov A, Golby A, Black P, Kikinis R. Comparison of physics-based deformable registration methods for image-guided neurosurgery. Front Digit Health 2023; 5:1283726. PMID: 38144260; PMCID: PMC10740151; DOI: 10.3389/fdgth.2023.1283726.
Abstract
This paper compares three finite element-based methods used in a physics-based non-rigid registration approach and reports on the progress made over the last 15 years. Large brain shifts caused by brain tumor removal affect registration accuracy by creating point and element outliers. A combination of approximation- and geometry-based point and element outlier rejection improves the rigid registration error by 2.5 mm and meets the real-time constraint (4 min). In addition, the paper raises several questions and presents two open problems for the robust estimation and improvement of registration error in the presence of outliers due to sparse, noisy, and incomplete data. It concludes with preliminary results on leveraging Quantum Computing, a promising new technology for computationally intensive problems like Feature Detection and Block Matching, in addition to the finite element solver; together, these three account for 75% of the computing time in deformable registration.
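The point-outlier rejection referred to above can be illustrated, in greatly simplified form, as iteratively discarding the correspondences that fit the current transform estimate worst and then refitting. The sketch below uses a least-squares rigid fit and a fixed trimming fraction; both choices are illustrative assumptions, not the published rejection criteria.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst (Kabsch algorithm)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    u, _, vt = np.linalg.svd((src - src_c).T @ (dst - dst_c))
    d = np.sign(np.linalg.det(vt.T @ u.T))          # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, dst_c - r @ src_c

def trimmed_registration(src, dst, keep=0.8, iters=5):
    """Iteratively reject the worst-fitting correspondences and refit.

    keep  -- fraction of correspondences retained per iteration (assumed value)
    iters -- number of trimming iterations (assumed value)
    """
    idx = np.arange(len(src))
    for _ in range(iters):
        r, t = rigid_fit(src[idx], dst[idx])
        residuals = np.linalg.norm((src[idx] @ r.T + t) - dst[idx], axis=1)
        keep_n = max(3, int(keep * len(idx)))
        idx = idx[np.argsort(residuals)[:keep_n]]   # drop the largest residuals
    r, t = rigid_fit(src[idx], dst[idx])
    return r, t, idx                                 # final transform and inlier indices
```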
Affiliation(s)
- Nikos Chrisochoides
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA, United States
- Yixun Liu
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA, United States
- Fotis Drakopoulos
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA, United States
- Andriy Kot
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA, United States
- Panos Foteinos
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA, United States
- Christos Tsolakis
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA, United States
- Emmanuel Billias
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA, United States
- Olivier Clatz
- Inria, French Research Institute for Digital Science, Sophia Antipolis, Valbonne, France
- Nicholas Ayache
- Inria, French Research Institute for Digital Science, Sophia Antipolis, Valbonne, France
- Andrey Fedorov
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA, United States
- Neuroimaging Analysis Center, Department of Radiology, Harvard Medical School, Boston, MA, United States
- Alex Golby
- Neuroimaging Analysis Center, Department of Radiology, Harvard Medical School, Boston, MA, United States
- Image-guided Neurosurgery, Department of Neurosurgery, Harvard Medical School, Boston, MA, United States
- Peter Black
- Image-guided Neurosurgery, Department of Neurosurgery, Harvard Medical School, Boston, MA, United States
- Ron Kikinis
- Neuroimaging Analysis Center, Department of Radiology, Harvard Medical School, Boston, MA, United States
5. Safdar S, Zwick BF, Yu Y, Bourantas GC, Joldes GR, Warfield SK, Hyde DE, Frisken S, Kapur T, Kikinis R, Golby A, Nabavi A, Wittek A, Miller K. SlicerCBM: automatic framework for biomechanical analysis of the brain. Int J Comput Assist Radiol Surg 2023; 18:1925-1940. PMID: 37004646; PMCID: PMC10497672; DOI: 10.1007/s11548-023-02881-7.
Abstract
PURPOSE: Brain shift that occurs during neurosurgery disturbs the brain's anatomy. Prediction of the brain shift is essential for accurate localisation of the surgical target. Biomechanical models have been envisaged as a possible tool for such predictions. In this study, we created a framework to automate the workflow for predicting intra-operative brain deformations.
METHODS: We created our framework by uniquely combining our meshless total Lagrangian explicit dynamics (MTLED) algorithm for computing soft tissue deformations, open-source software libraries and built-in functions within 3D Slicer, an open-source software package widely used for medical research. Our framework generates the biomechanical brain model from the pre-operative MRI, computes brain deformation using MTLED and outputs results in the form of predicted warped intra-operative MRI.
RESULTS: Our framework is used to solve three different neurosurgical brain shift scenarios: craniotomy, tumour resection and electrode placement. We evaluated our framework using nine patients. The average time to construct a patient-specific brain biomechanical model was 3 min, and that to compute deformations ranged from 13 to 23 min. We performed a qualitative evaluation by comparing our predicted intra-operative MRI with the actual intra-operative MRI. For quantitative evaluation, we computed Hausdorff distances between predicted and actual intra-operative ventricle surfaces. For patients with craniotomy and tumour resection, approximately 95% of the nodes on the ventricle surfaces are within two times the original in-plane resolution of the actual surface determined from the intra-operative MRI.
CONCLUSION: Our framework provides a broader application of existing solution methods not only in research but also in clinics. We successfully demonstrated the application of our framework by predicting intra-operative deformations in nine patients undergoing neurosurgical procedures.
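The quantitative evaluation described above, surface distances between predicted and actual intra-operative ventricle surfaces and the fraction of nodes within twice the in-plane resolution, can be reproduced with a nearest-neighbour query as sketched below. This is a generic illustration using SciPy, not the SlicerCBM code; the resolution value in the example is an assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_distances(predicted_pts, actual_pts):
    """Distance from each predicted surface node to the nearest actual surface node."""
    return cKDTree(actual_pts).query(predicted_pts)[0]

def evaluate_prediction(predicted_pts, actual_pts, in_plane_resolution):
    d = surface_distances(predicted_pts, actual_pts)
    tolerance = 2.0 * in_plane_resolution            # criterion stated in the abstract
    return {
        "hausdorff_mm": float(d.max()),              # one-sided Hausdorff distance
        "mean_distance_mm": float(d.mean()),
        "fraction_within_tolerance": float((d <= tolerance).mean()),
    }

# Example with synthetic point clouds and an assumed 0.9 mm in-plane resolution.
pred = np.random.rand(5000, 3) * 50.0
actual = pred + np.random.normal(scale=0.5, size=pred.shape)
print(evaluate_prediction(pred, actual, in_plane_resolution=0.9))
```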
Affiliation(s)
- Saima Safdar
- Intelligent Systems for Medicine Laboratory, The University of Western Australia, 35 Stirling Highway, Perth, WA, Australia.
- Benjamin F Zwick
- Intelligent Systems for Medicine Laboratory, The University of Western Australia, 35 Stirling Highway, Perth, WA, Australia
- Yue Yu
- Intelligent Systems for Medicine Laboratory, The University of Western Australia, 35 Stirling Highway, Perth, WA, Australia
- George C Bourantas
- Intelligent Systems for Medicine Laboratory, The University of Western Australia, 35 Stirling Highway, Perth, WA, Australia
- Department of Agriculture, University of Patras Nea Ktiria, 30200, Campus Mesologhi, Greece
- Grand R Joldes
- Intelligent Systems for Medicine Laboratory, The University of Western Australia, 35 Stirling Highway, Perth, WA, Australia
- Simon K Warfield
- Computational Radiology Laboratory, Boston Children's Hospital, Boston, MA, USA
- Harvard Medical School, Boston, MA, USA
- Damon E Hyde
- Computational Radiology Laboratory, Boston Children's Hospital, Boston, MA, USA
- Harvard Medical School, Boston, MA, USA
- Sarah Frisken
- Brigham and Women's Hospital, Boston, MA, USA
- Harvard Medical School, Boston, MA, USA
- Tina Kapur
- Brigham and Women's Hospital, Boston, MA, USA
- Harvard Medical School, Boston, MA, USA
- Ron Kikinis
- Brigham and Women's Hospital, Boston, MA, USA
- Harvard Medical School, Boston, MA, USA
- Alexandra Golby
- Brigham and Women's Hospital, Boston, MA, USA
- Harvard Medical School, Boston, MA, USA
- Arya Nabavi
- Department of Neurosurgery, KRH Klinikum Nordstadt, Hannover, Germany
- Adam Wittek
- Intelligent Systems for Medicine Laboratory, The University of Western Australia, 35 Stirling Highway, Perth, WA, Australia
- Karol Miller
- Intelligent Systems for Medicine Laboratory, The University of Western Australia, 35 Stirling Highway, Perth, WA, Australia
- Harvard Medical School, Boston, MA, USA
6. Chrisochoides N, Fedorov A, Liu Y, Kot A, Foteinos P, Drakopoulos F, Tsolakis C, Billias E, Clatz O, Ayache N, Golby A, Black P, Kikinis R. Real-Time Dynamic Data Driven Deformable Registration for Image-Guided Neurosurgery: Computational Aspects. arXiv 2023; arXiv:2309.03336v1. PMID: 37731651; PMCID: PMC10508827.
Abstract
Current neurosurgical procedures utilize medical images of various modalities to enable the precise localization of tumors and critical brain structures to plan accurate brain tumor resection. The difficulty of using preoperative images during the surgery is caused by the intra-operative deformation of the brain tissue (brain shift), which introduces discrepancies with respect to the pre-operative configuration. Intra-operative imaging allows tracking of such deformations but cannot fully substitute for the quality of the pre-operative data. Dynamic Data Driven Deformable Non-Rigid Registration (D4NRR) is a complex and time-consuming image processing operation that allows the dynamic adjustment of the pre-operative image data to account for intra-operative brain shift during the surgery. This paper summarizes the computational aspects of a specific adaptive numerical approximation method and its variations for registering brain MRIs. It outlines its evolution over the last 15 years and identifies new directions for the computational aspects of the technique.
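Block matching, one of the computational kernels named in the companion article (entry 4), slides a small block from the pre-operative image across a search window in the intra-operative image and keeps the displacement with the highest similarity score. The sketch below does this for one 2D block using normalised cross-correlation; it is an unoptimised illustration, and the block and search-window sizes are assumptions.

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_block(pre, intra, center, block=7, search=5):
    """Find the displacement of one block from `pre` within `intra`.

    center -- (row, col) of the block centre in the pre-operative image;
              must lie at least block + search pixels from the border
              (no bounds checking in this sketch)
    block  -- block half-width in pixels (assumed value)
    search -- search-window half-width in pixels (assumed value)
    """
    r, c = center
    ref = pre[r - block:r + block + 1, c - block:c + block + 1]
    best, best_dr, best_dc = -np.inf, 0, 0
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            cand = intra[r + dr - block:r + dr + block + 1,
                         c + dc - block:c + dc + block + 1]
            score = ncc(ref, cand)
            if score > best:
                best, best_dr, best_dc = score, dr, dc
    return (best_dr, best_dc), best   # displacement of the block and its NCC score
```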
Affiliation(s)
- Nikos Chrisochoides
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA
- Andrey Fedorov
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA
- Neuroimaging Analysis Center, Department of Radiology, Harvard Medical School, Boston, MA
- Yixun Liu
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA
- Andriy Kot
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA
- Panos Foteinos
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA
- Fotis Drakopoulos
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA
- Christos Tsolakis
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA
- Emmanuel Billias
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA
- Olivier Clatz
- Inria, French Research Institute for Digital Science, Sophia Antipolis, France
- Nicholas Ayache
- Inria, French Research Institute for Digital Science, Sophia Antipolis, France
- Alex Golby
- Neuroimaging Analysis Center, Department of Radiology, Harvard Medical School, Boston, MA
- Image-guided Neurosurgery, Department of Neurosurgery, Harvard Medical School, Boston, MA
- Peter Black
- Image-guided Neurosurgery, Department of Neurosurgery, Harvard Medical School, Boston, MA
- Ron Kikinis
- Neuroimaging Analysis Center, Department of Radiology, Harvard Medical School, Boston, MA
7. Yu Y, Safdar S, Bourantas G, Zwick B, Joldes G, Kapur T, Frisken S, Kikinis R, Nabavi A, Golby A, Wittek A, Miller K. Automatic framework for patient-specific modelling of tumour resection-induced brain shift. Comput Biol Med 2022; 143:105271. PMID: 35123136; PMCID: PMC9389918; DOI: 10.1016/j.compbiomed.2022.105271.
Abstract
Our motivation is to enable non-biomechanical engineering specialists to use sophisticated biomechanical models in the clinic to predict tumour resection-induced brain shift, and subsequently know the location of the residual tumour and its boundary. To achieve this goal, we developed a framework for automatically generating and solving patient-specific biomechanical models of the brain. This framework automatically determines the patient-specific brain geometry from MRI data, generates a patient-specific computational grid, assigns material properties, defines boundary conditions, applies external loads to the anatomical structures, and solves the differential equations of nonlinear elasticity using the Meshless Total Lagrangian Explicit Dynamics (MTLED) algorithm. We demonstrated the effectiveness and appropriateness of our framework on real clinical cases of tumour resection-induced brain shift.
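MTLED-style explicit solvers advance nodal displacements in time with a central-difference update that needs only a lumped (diagonal) mass matrix, so no large system of equations is ever assembled or solved. The sketch below shows a generic explicit central-difference update with mass-proportional damping; it is a schematic under assumed parameters, not the MTLED algorithm itself, and `internal_force` is a placeholder for the meshless nonlinear-elasticity force evaluation.

```python
import numpy as np

def explicit_dynamics(u0, lumped_mass, internal_force, external_force,
                      dt=1e-4, damping=1.0, steps=20000):
    """Central-difference explicit time integration with a lumped mass matrix.

    u0             -- initial nodal displacements, shape (n_dof,)
    lumped_mass    -- diagonal mass entries, shape (n_dof,)
    internal_force -- callable u -> nodal internal forces (placeholder for the
                      meshless nonlinear-elasticity force evaluation)
    external_force -- applied nodal loads, shape (n_dof,)
    dt, damping, steps -- assumed illustrative values
    """
    u = u0.copy()
    v = np.zeros_like(u)              # velocities at the half steps
    for _ in range(steps):
        # Nodal acceleration from the force residual, with mass-proportional damping.
        accel = (external_force - internal_force(u)
                 - damping * lumped_mass * v) / lumped_mass
        v += dt * accel               # velocity at the next half step
        u += dt * v                   # displacement at the next full step
    return u

# Example: one degree of freedom with a linear "internal force" k*u (k = 10, f = 1).
u_final = explicit_dynamics(np.zeros(1), np.ones(1), lambda u: 10.0 * u,
                            np.array([1.0]), damping=6.0)
print(u_final)  # relaxes towards the static solution f/k = 0.1
```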
Affiliation(s)
- Yue Yu
- Intelligent Systems for Medicine Laboratory, The University of Western Australia, Perth 6009, Australia.
- Saima Safdar
- Intelligent Systems for Medicine Laboratory, The University of Western Australia, Perth 6009, Australia
- George Bourantas
- Intelligent Systems for Medicine Laboratory, The University of Western Australia, Perth 6009, Australia
- Benjamin Zwick
- Intelligent Systems for Medicine Laboratory, The University of Western Australia, Perth 6009, Australia
- Grand Joldes
- Intelligent Systems for Medicine Laboratory, The University of Western Australia, Perth 6009, Australia
- Tina Kapur
- Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Sarah Frisken
- Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Ron Kikinis
- Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Arya Nabavi
- Department of Neurosurgery, KRH Klinikum Nordstadt, Hannover, Germany
- Alexandra Golby
- Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Adam Wittek
- Intelligent Systems for Medicine Laboratory, The University of Western Australia, Perth 6009, Australia
- Karol Miller
- Intelligent Systems for Medicine Laboratory, The University of Western Australia, Perth 6009, Australia
- Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
8. Yu Y, Bourantas G, Zwick B, Joldes G, Kapur T, Frisken S, Kikinis R, Nabavi A, Golby A, Wittek A, Miller K. Computer simulation of tumour resection-induced brain deformation by a meshless approach. International Journal for Numerical Methods in Biomedical Engineering 2022; 38:e3539. PMID: 34647427; PMCID: PMC8881972; DOI: 10.1002/cnm.3539.
Abstract
Tumour resection requires precise planning and navigation to maximise tumour removal while simultaneously protecting nearby healthy tissues. Neurosurgeons need to know the location of the remaining tumour after partial tumour removal before continuing with the resection. Our approach to the problem uses biomechanical modelling and computer simulation to compute the brain deformations after the tumour is resected. In this study, we use meshless Total Lagrangian explicit dynamics as the solver. The problem geometry is extracted from the patient-specific magnetic resonance imaging (MRI) data and includes the parenchyma, tumour, cerebrospinal fluid and skull. The appropriate non-linear material formulation is used. Loading is performed by imposing intra-operative conditions of gravity and reaction forces between the tumour and surrounding healthy parenchyma tissues. A finite frictionless sliding contact is enforced between the skull (rigid) and parenchyma. The meshless simulation results are compared to intra-operative MRI sections. We also calculate Hausdorff distances between the computed deformed surfaces (ventricles and tumour cavities) and surfaces observed intra-operatively. Over 80% of points on the ventricle surface and 95% of points on the tumour cavity surface were successfully registered (results within the limits of two times the original in-plane resolution of the intra-operative image). Computed results demonstrate the potential for our method in estimating the tissue deformation and tumour boundary during the resection.
Affiliation(s)
- Yue Yu
- Intelligent Systems for Medicine Laboratory, The University of Western Australia, Perth, Western Australia, Australia
- George Bourantas
- Intelligent Systems for Medicine Laboratory, The University of Western Australia, Perth, Western Australia, Australia
- Benjamin Zwick
- Intelligent Systems for Medicine Laboratory, The University of Western Australia, Perth, Western Australia, Australia
- Grand Joldes
- Intelligent Systems for Medicine Laboratory, The University of Western Australia, Perth, Western Australia, Australia
- Tina Kapur
- Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Sarah Frisken
- Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Ron Kikinis
- Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Arya Nabavi
- Department of Neurosurgery, KRH Klinikum Nordstadt, Hannover, Germany
- Alexandra Golby
- Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Adam Wittek
- Intelligent Systems for Medicine Laboratory, The University of Western Australia, Perth, Western Australia, Australia
- Karol Miller
- Intelligent Systems for Medicine Laboratory, The University of Western Australia, Perth, Western Australia, Australia
- Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, USA