1
Efthymiou S, Han W, Ilyas M, Li J, Yu Y, Scala M, Malintan NT, Ilyas M, Vavouraki N, Mankad K, Maroofian R, Rocca C, Salpietro V, Lakhani S, Mallack EJ, Palculict TB, Li H, Zhang G, Zafar F, Rana N, Takashima N, Matsunaga H, Manzoni C, Striano P, Lythgoe MF, Aruga J, Lu W, Houlden H. Human mutations in SLITRK3 implicated in GABAergic synapse development in mice. Front Mol Neurosci 2024; 17:1222935. PMID: 38495551; PMCID: PMC10940442; DOI: 10.3389/fnmol.2024.1222935.
Abstract
This study reports biallelic homozygous and monoallelic de novo variants in SLITRK3 in three unrelated families presenting with epileptic encephalopathy and broad neurological involvement characterized by microcephaly, intellectual disability, seizures, and global developmental delay. SLITRK3 encodes a transmembrane protein that controls neurite outgrowth and inhibitory synapse development and plays an important role in brain function and neurological disease. Using primary cultures of hippocampal neurons carrying the patients' SLITRK3 variants, in combination with electrophysiology, we demonstrate that the recessive variants are loss-of-function alleles. Immunostaining experiments in HEK-293 cells showed that the human variants C566R and E606X alter the expression pattern of SLITRK3 protein on the cell surface, causing the defective proteins to accumulate in the Golgi apparatus. By analyzing the development and phenotype of SLITRK3 knockout (SLITRK3-/-) mice, the study shows enhanced susceptibility to pentylenetetrazole-induced seizures with spontaneous epileptiform EEG activity, as well as developmental deficits such as increased motor activity and reduced numbers of parvalbumin interneurons. Taken together, the results reveal impaired development of the peripheral and central nervous system and support a conserved role for this transmembrane protein in neurological function. The study delineates an emerging spectrum of human core synaptopathies caused by variants in genes that encode SLITRK proteins and essential regulatory components of the synaptic machinery. The hallmark of these disorders is impaired postsynaptic neurotransmission at nerve terminals, which results in a wide array of (often overlapping) clinical features, including neurodevelopmental impairment, weakness, seizures, and abnormal movements.
The genetic synaptopathy caused by SLITRK3 mutations highlights the key roles of this gene in human brain development and function.
Affiliation(s)
- Stephanie Efthymiou
- Department of Neuromuscular Disorders, University College London (UCL) Queen Square Institute of Neurology, London, United Kingdom
- U.O.C. Genetica Medica, Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS) Istituto Giannina Gaslini, Genoa, Italy
- Wenyan Han
- Synapse and Neural Circuit Research Section, National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD, United States
- Muhammad Ilyas
- Department of Biological Sciences, International Islamic University Islamabad, Islamabad, Pakistan
- Jun Li
- Synapse and Neural Circuit Research Section, National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD, United States
- Yichao Yu
- Centre for Advanced Biomedical Imaging, Division of Medicine, University College London, London, United Kingdom
- Marcello Scala
- Department of Neuromuscular Disorders, University College London (UCL) Queen Square Institute of Neurology, London, United Kingdom
- Department of Neurosciences, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health, Università Degli Studi di Genova, Genoa, Italy
- Pediatric Neurology and Muscular Diseases Unit, Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS) Istituto Giannina Gaslini, Genoa, Italy
- Nancy T. Malintan
- Department of Neuromuscular Disorders, University College London (UCL) Queen Square Institute of Neurology, London, United Kingdom
- Muhammad Ilyas
- Centre for Omic Sciences, Islamia College Peshawar, Peshawar, Pakistan
- Nikoleta Vavouraki
- School of Pharmacy, University of Reading, Reading, United Kingdom
- Department of Mathematics and Statistics, University of Reading, Reading, United Kingdom
- Kshitij Mankad
- Department of Radiology, Great Ormond Street Hospital, London, United Kingdom
- Developmental Neurosciences Department, University College London (UCL) Great Ormond Street Institute of Child Health, London, United Kingdom
- Reza Maroofian
- Department of Neuromuscular Disorders, University College London (UCL) Queen Square Institute of Neurology, London, United Kingdom
- Clarissa Rocca
- Department of Neuromuscular Disorders, University College London (UCL) Queen Square Institute of Neurology, London, United Kingdom
- Vincenzo Salpietro
- Department of Neuromuscular Disorders, University College London (UCL) Queen Square Institute of Neurology, London, United Kingdom
- Shenela Lakhani
- Center for Neurogenetics, Feil Family Brain and Mind Research Institute, Weill Cornell Medicine, New York, NY, United States
- Eric J. Mallack
- Center for Neurogenetics, Feil Family Brain and Mind Research Institute, Weill Cornell Medicine, New York, NY, United States
- Hong Li
- Department of Human Genetics, Emory University School of Medicine, Atlanta, GA, United States
- Guojun Zhang
- Department of Human Genetics, Emory University School of Medicine, Atlanta, GA, United States
- Department of Pediatric Neurology, Children’s Healthcare of Atlanta, Atlanta, GA, United States
- Faisal Zafar
- Department of Pediatrics, Multan Hospital, Multan, Pakistan
- Nuzhat Rana
- Department of Pediatrics, Multan Hospital, Multan, Pakistan
- Noriko Takashima
- Laboratory for Behavioral and Developmental Disorders, RIKEN Brain Science Institute (BSI), Saitama, Japan
- Hayato Matsunaga
- Department of Medical Pharmacology, Nagasaki University Institute of Biomedical Sciences, Nagasaki, Japan
- Claudia Manzoni
- School of Pharmacy, University College London, London, United Kingdom
- Pasquale Striano
- Department of Neurosciences, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health, Università Degli Studi di Genova, Genoa, Italy
- Pediatric Neurology and Muscular Diseases Unit, Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS) Istituto Giannina Gaslini, Genoa, Italy
- Mark F. Lythgoe
- Centre for Advanced Biomedical Imaging, Division of Medicine, University College London, London, United Kingdom
- Jun Aruga
- Laboratory for Behavioral and Developmental Disorders, RIKEN Brain Science Institute (BSI), Saitama, Japan
- Department of Medical Pharmacology, Nagasaki University Institute of Biomedical Sciences, Nagasaki, Japan
- Wei Lu
- Synapse and Neural Circuit Research Section, National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD, United States
- Henry Houlden
- Department of Neuromuscular Disorders, University College London (UCL) Queen Square Institute of Neurology, London, United Kingdom
2
Zeineldin RA, Karar ME, Burgert O, Mathis-Ullrich F. NeuroIGN: Explainable Multimodal Image-Guided System for Precise Brain Tumor Surgery. J Med Syst 2024; 48:25. PMID: 38393660; DOI: 10.1007/s10916-024-02037-3.
Abstract
Precise neurosurgical guidance is critical for successful brain surgery and plays a vital role in all phases of image-guided neurosurgery (IGN). Neuronavigation software enables real-time tracking of surgical tools, ensuring that they are displayed with high precision relative to a virtual patient model. This work therefore focuses on the development of a novel multimodal IGN system, leveraging deep learning and explainable AI to enhance brain tumor surgery outcomes. The study establishes the clinical and technical requirements of the system for brain tumor surgeries. NeuroIGN adopts a modular architecture, including brain tumor segmentation, patient registration, and explainable output prediction, and integrates open-source packages into an interactive neuronavigational display. The NeuroIGN system components underwent validation and evaluation in both laboratory and simulated operating room (OR) settings. Experimental results demonstrated its accuracy in tumor segmentation and the success of ExplainAI in increasing the trust of medical professionals in deep learning. The proposed system was successfully assembled and set up within 11 min in a pre-clinical OR setting with a tracking accuracy of 0.5 (± 0.1) mm. NeuroIGN was also evaluated as highly useful, with a high frame rate (19 FPS) and real-time ultrasound imaging capabilities. In conclusion, this paper not only describes the development of an open-source multimodal IGN system but also demonstrates the innovative application of deep learning and explainable AI algorithms in enhancing neuronavigation for brain tumor surgeries. By seamlessly integrating pre- and intra-operative patient image data with cutting-edge interventional devices, our experiments underscore the potential for deep learning models to improve the surgical treatment of brain tumors and long-term post-operative outcomes.
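Patient registration of the kind named in this abstract is commonly performed by rigidly aligning corresponding fiducial points between image space and patient (tracker) space. As an illustration only, and not a reproduction of NeuroIGN's own method, the sketch below implements the standard SVD-based least-squares rigid registration and a fiducial registration error (FRE) check; all data in it are hypothetical:

```python
import numpy as np

def rigid_register(fixed, moving):
    """Estimate the rigid transform (R, t) mapping `moving` onto `fixed`
    using the SVD-based least-squares method (Arun-style).
    Both inputs are (N, 3) arrays of corresponding fiducial points."""
    cf, cm = fixed.mean(axis=0), moving.mean(axis=0)
    H = (moving - cm).T @ (fixed - cf)            # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution (det = -1)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cf - R @ cm
    return R, t

def fiducial_registration_error(fixed, moving, R, t):
    """RMS distance (same units as the inputs, e.g. mm) between the fixed
    points and the transformed moving points."""
    residual = fixed - (moving @ R.T + t)
    return np.sqrt((residual ** 2).sum(axis=1).mean())
```

On noise-free simulated fiducials the FRE is essentially zero; with realistic localisation noise it becomes a summary figure comparable in spirit to the sub-millimetre tracking accuracy the paper reports.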
Affiliation(s)
- Ramy A Zeineldin
- Department of Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander University Erlangen-Nürnberg, 91052, Erlangen, Germany
- Research Group Computer Assisted Medicine (CaMed), Reutlingen University, 72762, Reutlingen, Germany
- Faculty of Electronic Engineering (FEE), Menoufia University, Minuf, 32952, Egypt
- Mohamed E Karar
- Faculty of Electronic Engineering (FEE), Menoufia University, Minuf, 32952, Egypt
- Oliver Burgert
- Research Group Computer Assisted Medicine (CaMed), Reutlingen University, 72762, Reutlingen, Germany
- Franziska Mathis-Ullrich
- Department of Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander University Erlangen-Nürnberg, 91052, Erlangen, Germany
3
Ramalhinho J, Yoo S, Dowrick T, Koo B, Somasundaram M, Gurusamy K, Hawkes DJ, Davidson B, Blandford A, Clarkson MJ. The value of Augmented Reality in surgery - A usability study on laparoscopic liver surgery. Med Image Anal 2023; 90:102943. PMID: 37703675; PMCID: PMC10958137; DOI: 10.1016/j.media.2023.102943.
Abstract
Augmented Reality (AR) is considered a promising technology for the guidance of laparoscopic liver surgery. By overlaying pre-operative 3D information of the liver and internal blood vessels on the laparoscopic view, surgeons can better understand the location of critical structures. In an effort to enable AR, several authors have focused on the development of methods to obtain an accurate alignment between the laparoscopic video image and the pre-operative 3D data of the liver, without assessing the benefit that the resulting overlay can provide during surgery. In this paper, we present a study that aims to assess quantitatively and qualitatively the value of an AR overlay in laparoscopic surgery during a simulated surgical task on a phantom setup. We design a study in which participants are asked to physically localise pre-operative tumours in a liver phantom under three image-guidance conditions: a baseline condition without any image guidance; a condition where the 3D surfaces of the liver are aligned to the video and displayed on a black background; and a condition where video see-through AR is displayed on the laparoscopic video. Using data collected from a cohort of 24 participants, including 12 surgeons, we observe that compared to the baseline, AR decreases the median localisation error of surgeons on non-peripheral targets from 25.8 mm to 9.2 mm. Using subjective feedback, we also identify that AR introduces usability improvements in the surgical task and increases the perceived confidence of the users. Between the two tested displays, the majority of participants preferred the AR overlay to the navigated view of the 3D surfaces on a separate screen. We conclude that AR has the potential to improve performance and decision making in laparoscopic surgery, and that improvements in overlay alignment accuracy and depth perception should be pursued in the future.
Affiliation(s)
- João Ramalhinho
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Soojeong Yoo
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom; UCL Interaction Centre, University College London, London, United Kingdom
- Thomas Dowrick
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Bongjin Koo
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Murali Somasundaram
- Division of Surgery and Interventional Sciences, University College London, London, United Kingdom
- Kurinchi Gurusamy
- Division of Surgery and Interventional Sciences, University College London, London, United Kingdom
- David J Hawkes
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Brian Davidson
- Division of Surgery and Interventional Sciences, University College London, London, United Kingdom
- Ann Blandford
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom; UCL Interaction Centre, University College London, London, United Kingdom
- Matthew J Clarkson
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
4
Zhou Z, Qian X, Hu J, Chen G, Zhang C, Zhu J, Dai Y. An artificial intelligence-assisted diagnosis modeling software (AIMS) platform based on medical images and machine learning: a development and validation study. Quant Imaging Med Surg 2023; 13:7504-7522. PMID: 37969634; PMCID: PMC10644131; DOI: 10.21037/qims-23-20.
Abstract
Background Supervised machine learning methods [both radiomics and convolutional neural network (CNN)-based deep learning] are usually employed to develop artificial intelligence models with medical images for computer-assisted diagnosis and prognosis of diseases. A classical machine learning-based modeling workflow involves a series of interconnected components and various algorithms, which makes it challenging, tedious, and labor-intensive for radiologists and researchers to build customized models for specific clinical applications if they lack expertise in machine learning methods. Methods We developed a user-friendly artificial intelligence-assisted diagnosis modeling software (AIMS) platform, which supplies standardized machine learning-based modeling workflows for computer-assisted diagnosis and prognosis systems with medical images. In contrast to other existing software platforms, AIMS contains both radiomics and CNN-based deep learning workflows, making it an all-in-one software platform for machine learning-based medical image analysis. The modular design of AIMS allows users to build machine learning models easily, test models comprehensively, and fairly compare the performance of different models in a specific application. The graphical user interface (GUI) enables users to process large numbers of medical images without programming or scripting. Furthermore, AIMS provides a flexible image processing toolkit (e.g., semiautomatic segmentation, registration, morphological operations) to rapidly create lesion labels for multiphase analysis, multiregion analysis of an individual tumor (e.g., tumor mass and peritumor), and multimodality analysis. Results The functionality and efficiency of AIMS were demonstrated in 3 independent experiments in radiation oncology, where multiphase, multiregion, and multimodality analyses were performed, respectively. For clear cell renal cell carcinoma (ccRCC) Fuhrman grading with multiphase analysis (sample size = 187), the area under the curve (AUC) was 0.776; for ccRCC Fuhrman grading with multiregion analysis (sample size = 177), the AUC was 0.848; for prostate cancer Gleason grading with multimodality analysis (sample size = 206), the AUC was 0.980. Conclusions AIMS provides a user-friendly infrastructure for radiologists and researchers, lowering the barrier to building customized machine learning-based computer-assisted diagnosis models for medical image analysis.
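The AUC values reported here have a simple probabilistic reading: the AUC equals the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative case (the Mann-Whitney interpretation). A minimal, dependency-free sketch of that computation, with illustrative labels and scores rather than AIMS data:

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the fraction of (positive, negative) pairs in which the positive
    case scores higher, counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical grading scores: two low-grade (0) and two high-grade (1) cases
print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # -> 0.75
```

An AUC of 0.980, as reported for the Gleason-grading experiment, therefore means a high-grade case outscores a low-grade case in 98% of such pairs.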
Affiliation(s)
- Zhiyong Zhou
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
- Xusheng Qian
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Jisu Hu
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Guangqiang Chen
- Department of Radiology, The Second Affiliated Hospital of Soochow University, Suzhou, China
- Caiyuan Zhang
- Department of Radiology, The Second Affiliated Hospital of Soochow University, Suzhou, China
- Jianbing Zhu
- Suzhou Science & Technology Town Hospital, Suzhou Hospital, Affiliated Hospital of Medical School, Nanjing University, Suzhou, China
- Yakang Dai
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
- Suzhou Guoke Kangcheng Medical Technology Co., Ltd., Suzhou, China
5
Enkaoua A, Islam M, Ramalhinho J, Dowrick T, Booker J, Khan DZ, Marcus HJ, Clarkson MJ. Image-guidance in endoscopic pituitary surgery: an in-silico study of errors involved in tracker-based techniques. Front Surg 2023; 10:1222859. PMID: 37780914; PMCID: PMC10540627; DOI: 10.3389/fsurg.2023.1222859.
Abstract
Background Endoscopic endonasal surgery is an established minimally invasive technique for resecting pituitary adenomas. However, understanding orientation and identifying critical neurovascular structures in this anatomically dense region can be challenging. In clinical practice, commercial navigation systems use a tracked pointer for guidance. Augmented Reality (AR) is an emerging technology used for surgical guidance; it can be tracker based or vision based, but neither is widely used in pituitary surgery. Methods This pre-clinical study aims to assess the accuracy of tracker-based navigation systems, including those that allow for AR. Two setups were used to conduct simulations: (1) the standard pointer setup, tracked by an infrared camera; and (2) the endoscope setup that allows for AR, using reflective markers on the end of the endoscope, tracked by infrared cameras. The error sources were estimated by calculating the Euclidean distance between a point's true location and its location after passing through the noisy system. A phantom study was then conducted to verify the in-silico simulation results and show a working example of image-based navigation errors in current methodologies. Results The errors of the tracked pointer and tracked endoscope simulations were 1.7 mm and 2.5 mm, respectively. The phantom study showed errors of 2.14 mm and 3.21 mm for the tracked pointer and tracked endoscope setups, respectively. Discussion In pituitary surgery, precise identification of neighboring structures is crucial for success. However, our simulations reveal that the errors of tracked approaches are too large to meet the fine error margins required for pituitary surgery. Achieving the required accuracy would demand much more accurate tracking, better calibration, and improved registration techniques.
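The error-estimation idea described in the Methods, passing a point through a noisy system and measuring the Euclidean distance to its true location, can be sketched as a small Monte Carlo simulation. This is an illustration only: the isotropic Gaussian noise model and its 0.25 mm standard deviation are assumptions for the sketch, not the paper's actual error model or parameters:

```python
import numpy as np

def simulate_tracking_error(point, noise_sigma_mm=0.25, n_trials=10_000, seed=0):
    """Monte Carlo estimate of the mean Euclidean tracking error for one
    3D point, assuming independent isotropic Gaussian noise on each
    coordinate. Noise level and trial count are illustrative values."""
    rng = np.random.default_rng(seed)
    noisy = point + rng.normal(0.0, noise_sigma_mm, size=(n_trials, 3))
    errors = np.linalg.norm(noisy - point, axis=1)  # per-trial Euclidean distance
    return errors.mean()
```

In a fuller simulation each error source (marker localisation, calibration, registration) would add its own noise term before the distance is measured, which is how per-stage contributions to the 1.7 mm and 2.5 mm totals could be attributed.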
Affiliation(s)
- Aure Enkaoua
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Mobarakol Islam
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- João Ramalhinho
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Thomas Dowrick
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- James Booker
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Division of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- Danyal Z. Khan
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Division of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- Hani J. Marcus
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Division of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- Matthew J. Clarkson
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
6
Iribar-Zabala A, Benito R, Sánchez-Merino G, Cortes CA, Garcia-Fidalgo MA, Lopez-Linares K, Bertelsen Á. MIGHTY: a comprehensive platform for the development of medical image-guided holographic therapy. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2022. DOI: 10.1080/21681163.2022.2152373.
Affiliation(s)
- Amaia Iribar-Zabala
- Vicomtech Foundation, Basque Research and Technology Alliance (BRTA), Donostia-San Sebastián, Spain
- Rafael Benito
- Vicomtech Foundation, Basque Research and Technology Alliance (BRTA), Donostia-San Sebastián, Spain
- Gaspar Sánchez-Merino
- Bioaraba, New Technologies and Information Systems in Health Research Group, Vitoria-Gasteiz, Spain
- Osakidetza Basque Health Service, Medical Physics Department, Araba University Hospital, Vitoria-Gasteiz, Spain
- Camilo A. Cortes
- Vicomtech Foundation, Basque Research and Technology Alliance (BRTA), Donostia-San Sebastián, Spain
- Bioengineering Area, Biodonostia Health Research Institute, San Sebastián, Spain
- M. Angeles Garcia-Fidalgo
- Bioaraba, New Technologies and Information Systems in Health Research Group, Vitoria-Gasteiz, Spain
- Osakidetza Basque Health Service, Araba University Hospital, Vitoria-Gasteiz, Spain
- Karen Lopez-Linares
- Vicomtech Foundation, Basque Research and Technology Alliance (BRTA), Donostia-San Sebastián, Spain
- Bioengineering Area, Biodonostia Health Research Institute, San Sebastián, Spain
- Álvaro Bertelsen
- Vicomtech Foundation, Basque Research and Technology Alliance (BRTA), Donostia-San Sebastián, Spain
- Bioengineering Area, Biodonostia Health Research Institute, San Sebastián, Spain
7
Ramalhinho J, Koo B, Montaña-Brown N, Saeed SU, Bonmati E, Gurusamy K, Pereira SP, Davidson B, Hu Y, Clarkson MJ. Deep hashing for global registration of untracked 2D laparoscopic ultrasound to CT. Int J Comput Assist Radiol Surg 2022; 17:1461-1468. PMID: 35366130; PMCID: PMC9307559; DOI: 10.1007/s11548-022-02605-3.
Abstract
PURPOSE The registration of Laparoscopic Ultrasound (LUS) to CT can enhance the safety of laparoscopic liver surgery by providing the surgeon with awareness of the relative positioning between critical vessels and a tumour. In an effort to provide a translatable solution for this poorly constrained problem, Content-based Image Retrieval (CBIR) based on vessel information has been suggested as a method for obtaining a global coarse registration without using tracking information. However, the performance of these frameworks is limited by the use of non-generalisable handcrafted vessel features. METHODS We propose the use of a Deep Hashing (DH) network to directly convert vessel images from both LUS and CT into fixed-size hash codes. During training, these codes are learnt from a patient-specific CT scan by supplying the network with triplets of vessel images that include both a registered and a mis-registered pair. Once hash codes have been learnt, they can be used to perform registration with CBIR methods. RESULTS We test a CBIR pipeline on 11 sequences of untracked LUS distributed across 5 clinical cases. Compared to a handcrafted feature approach, our model significantly improves the registration success rate from 48% to 61%, considering a 20 mm error as the threshold for a successful coarse registration. CONCLUSIONS We present the first DH framework for interventional multi-modal registration tasks. The presented approach is easily generalisable to other registration problems, does not require annotated data for training, and may promote the translation of these techniques.
Affiliation(s)
- João Ramalhinho
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences and Centre for Medical Image Computing, UCL, London, UK
- Bongjin Koo
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences and Centre for Medical Image Computing, UCL, London, UK
- Nina Montaña-Brown
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences and Centre for Medical Image Computing, UCL, London, UK
- Shaheer U Saeed
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences and Centre for Medical Image Computing, UCL, London, UK
- Ester Bonmati
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences and Centre for Medical Image Computing, UCL, London, UK
- Brian Davidson
- Division of Surgery and Interventional Science, UCL, London, UK
- Yipeng Hu
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences and Centre for Medical Image Computing, UCL, London, UK
- Matthew J Clarkson
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences and Centre for Medical Image Computing, UCL, London, UK
8
Yu Y, Payne C, Marina N, Korsak A, Southern P, García-Prieto A, Christie IN, Baker RR, Fisher EMC, Wells JA, Kalber TL, Pankhurst QA, Gourine AV, Lythgoe MF. Remote and Selective Control of Astrocytes by Magnetomechanical Stimulation. Adv Sci (Weinh) 2022; 9:e2104194. PMID: 34927381; PMCID: PMC8867145; DOI: 10.1002/advs.202104194.
Abstract
Astrocytes play crucial and diverse roles in brain health and disease. The ability to selectively control astrocytes provides a valuable tool for understanding their function and has the therapeutic potential to correct dysfunction. Existing technologies such as optogenetics and chemogenetics require the introduction of foreign proteins, which adds a layer of complication and hinders their clinical translation. Here, a novel technique, magnetomechanical stimulation (MMS), is described that enables remote and selective control of astrocytes without genetic modification. MMS exploits the mechanosensitivity of astrocytes and triggers mechanogated Ca2+ and adenosine triphosphate (ATP) signaling by applying a magnetic field to antibody-functionalized magnetic particles that are targeted to astrocytes. Using purpose-built magnetic devices, the mechanosensory threshold of astrocytes is determined, a sub-micrometer particle for effective MMS is identified, the in vivo fate of the particles is established, and cardiovascular responses are induced in rats after particles are delivered to specific brainstem astrocytes. By eliminating the need for device implantation and genetic modification, MMS offers a method for controlling astroglial activity with a better prospect for clinical application than existing technologies.
Affiliation(s)
- Yichao Yu
- Centre for Advanced Biomedical Imaging, Division of Medicine, University College London, 72 Huntley Street, London WC1E 6DD, UK
- Christopher Payne
- Centre for Advanced Biomedical Imaging, Division of Medicine, University College London, 72 Huntley Street, London WC1E 6DD, UK
- Nephtali Marina
- Centre for Cardiovascular and Metabolic Neuroscience, Research Department of Neuroscience, Physiology and Pharmacology, University College London, Gower Street, London WC1E 6BT, UK
- Alla Korsak
- Centre for Cardiovascular and Metabolic Neuroscience, Research Department of Neuroscience, Physiology and Pharmacology, University College London, Gower Street, London WC1E 6BT, UK
- Paul Southern
- Healthcare Biomagnetics Laboratory, University College London, 21 Albemarle Street, London W1S 4BS, UK
- Ana García-Prieto
- Healthcare Biomagnetics Laboratory, University College London, 21 Albemarle Street, London W1S 4BS, UK
- Departamento Física Aplicada I, Universidad del País Vasco, Bilbao 48013, Spain
- Isabel N. Christie
- Centre for Cardiovascular and Metabolic Neuroscience, Research Department of Neuroscience, Physiology and Pharmacology, University College London, Gower Street, London WC1E 6BT, UK
- Rebecca R. Baker
- Centre for Advanced Biomedical Imaging, Division of Medicine, University College London, 72 Huntley Street, London WC1E 6DD, UK
- Elizabeth M. C. Fisher
- Department of Neuromuscular Diseases, Queen Square Institute of Neurology, University College London, Queen Square, London WC1N 3BG, UK
- Jack A. Wells
- Centre for Advanced Biomedical Imaging, Division of Medicine, University College London, 72 Huntley Street, London WC1E 6DD, UK
- Tammy L. Kalber
- Centre for Advanced Biomedical Imaging, Division of Medicine, University College London, 72 Huntley Street, London WC1E 6DD, UK
- Quentin A. Pankhurst
- Healthcare Biomagnetics Laboratory, University College London, 21 Albemarle Street, London W1S 4BS, UK
- Alexander V. Gourine
- Centre for Cardiovascular and Metabolic Neuroscience, Research Department of Neuroscience, Physiology and Pharmacology, University College London, Gower Street, London WC1E 6BT, UK
- Mark F. Lythgoe
- Centre for Advanced Biomedical Imaging, Division of Medicine, University College London, 72 Huntley Street, London WC1E 6DD, UK
9
Automatic, global registration in laparoscopic liver surgery. Int J Comput Assist Radiol Surg 2021;17:167-176. PMID: 34697757; PMCID: PMC8739294; DOI: 10.1007/s11548-021-02518-7.
Abstract
Purpose The initial registration of a 3D pre-operative CT model to a 2D laparoscopic video image in augmented reality systems for liver surgery needs to be fast, intuitive to perform and with minimal interruptions to the surgical intervention. Several recent methods have focussed on using easily recognisable landmarks across modalities. However, these methods still need manual annotation or manual alignment. We propose a novel, fully automatic pipeline for 3D–2D global registration in laparoscopic liver interventions. Methods Firstly, we train a fully convolutional network for the semantic detection of liver contours in laparoscopic images. Secondly, we propose a novel contour-based global registration algorithm to estimate the camera pose without any manual input during surgery. The contours used are the anterior ridge and the silhouette of the liver. Results We show excellent generalisation of the semantic contour detection on test data from 8 clinical cases. In quantitative experiments, the proposed contour-based registration can successfully estimate a global alignment with as little as 30% of the liver surface, a visibility ratio which is characteristic of laparoscopic interventions. Moreover, the proposed pipeline showed very promising results in clinical data from 5 laparoscopic interventions. Conclusions Our proposed automatic global registration could make augmented reality systems more intuitive and usable for surgeons and easier to translate to operating rooms. Yet, as the liver is deformed significantly during surgery, it will be very beneficial to incorporate deformation into our method for more accurate registration.
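The contour-based alignment described above can be illustrated with a generic iterative-closest-point loop. This is only a sketch of the underlying idea (2D rigid alignment of synthetic contours with made-up numbers), not the authors' 3D–2D pose-estimation algorithm:

```python
import numpy as np

def best_rigid_2d(src, dst):
    """Least-squares rotation + translation mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp_2d(src, dst, iters=50):
    """Naive ICP: nearest-neighbour matching + closed-form rigid update."""
    cur = src.copy()
    R_total, t_total = np.eye(2), np.zeros(2)
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        R, t = best_rigid_2d(cur, matched)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Synthetic check: recover a known small rotation/translation of a curved contour.
theta = np.linspace(0.0, np.pi, 200)
contour = np.stack([np.cos(theta), 0.5 * np.sin(theta)], axis=1)
ang = 0.05
R_true = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
t_true = np.array([0.02, -0.01])
moved = contour @ R_true.T + t_true
R_est, t_est = icp_2d(contour, moved)
```

In the paper the correspondences come from semantically detected ridge and silhouette contours rather than nearest neighbours on a synthetic curve; the sketch only shows the alternating match-and-solve structure common to contour registration.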
10
Fiford CM, Sudre CH, Young AL, Macdougall A, Nicholas J, Manning EN, Malone IB, Walsh P, Goodkin O, Pemberton HG, Barkhof F, Alexander DC, Cardoso MJ, Biessels GJ, Barnes J. Presumed small vessel disease, imaging and cognition markers in the Alzheimer's Disease Neuroimaging Initiative. Brain Commun 2021;3:fcab226. PMID: 34661106; PMCID: PMC8514859; DOI: 10.1093/braincomms/fcab226.
Abstract
MRI-derived features of presumed cerebral small vessel disease are frequently found in Alzheimer's disease. Influences of such markers on disease-progression measures are poorly understood. We measured markers of presumed small vessel disease (white matter hyperintensity volumes; cerebral microbleeds) on baseline images of newly enrolled individuals in the Alzheimer's Disease Neuroimaging Initiative cohort (GO and 2) and used linear mixed models to relate these to subsequent atrophy and neuropsychological score change. We also assessed heterogeneity in white matter hyperintensity positioning within biomarker abnormality sequences, driven by the data, using the Subtype and Stage Inference algorithm. This study recruited both sexes and included: controls: [n = 159, mean(SD) age = 74(6) years]; early and late mild cognitive impairment [ns = 265 and 139, respectively, mean(SD) ages =71(7) and 72(8) years, respectively]; Alzheimer's disease [n = 103, mean(SD) age = 75(8)] and significant memory concern [n = 72, mean(SD) age = 72(6) years]. Baseline demographic and vascular risk-factor data, and longitudinal cognitive scores (Mini-Mental State Examination; logical memory; and Trails A and B) were collected. Whole-brain and hippocampal volume change metrics were calculated. White matter hyperintensity volumes were associated with greater whole-brain and hippocampal volume changes independently of cerebral microbleeds (a doubling of baseline white matter hyperintensity was associated with an increase in atrophy rate of 0.3 ml/year for brain and 0.013 ml/year for hippocampus). Cerebral microbleeds were found in 15% of individuals and the presence of a microbleed, as opposed to none, was associated with increases in atrophy rate of 1.4 ml/year for whole brain and 0.021 ml/year for hippocampus. 
White matter hyperintensities were predictive of greater decline in all neuropsychological scores, while cerebral microbleeds were predictive of decline in logical memory (immediate recall) and Mini-Mental State Examination scores. We identified distinct groups with specific sequences of biomarker abnormality using continuous baseline measures and brain volume change. Four clusters were found; Group 1 showed early Alzheimer's pathology; Group 2 showed early neurodegeneration; Group 3 had early mixed Alzheimer's and cerebrovascular pathology; Group 4 had early neuropsychological score abnormalities. White matter hyperintensity volumes becoming abnormal was a late event for Groups 1 and 4 and an early event for 2 and 3. In summary, white matter hyperintensities and microbleeds were independently associated with progressive neurodegeneration (brain atrophy rates) and cognitive decline (change in neuropsychological scores). Mechanisms involving white matter hyperintensities and progression and microbleeds and progression may be partially separate. Distinct sequences of biomarker progression were found. White matter hyperintensity development was an early event in two sequences.
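The "doubling of baseline white matter hyperintensity" effect sizes above arise from entering WMH volume on a log2 scale, so one unit of the predictor corresponds to a doubling of volume. A toy illustration of that parameterisation with synthetic data and ordinary least squares (the study itself used linear mixed models; every number here is made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: baseline WMH volumes (ml) and whole-brain atrophy rates (ml/year).
wmh = rng.uniform(1.0, 40.0, size=500)
atrophy_rate = 10.0 + 0.3 * np.log2(wmh) + rng.normal(0.0, 0.05, size=500)

# Regress rate on log2(WMH): the slope is the change in atrophy rate per doubling of WMH.
X = np.column_stack([np.ones_like(wmh), np.log2(wmh)])
coef, *_ = np.linalg.lstsq(X, atrophy_rate, rcond=None)
per_doubling = coef[1]
```

The design matrix would additionally carry microbleed status and the covariates listed in the abstract; the log2 transform is the only point being illustrated.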
Affiliation(s)
- Cassidy M Fiford
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, Queen Square, London WC1N 3BG, UK
- Carole H Sudre
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, Queen Square, London WC1N 3BG, UK
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London SE1 7EH, UK
- Centre for Medical Image Computing, University College London, London WC1V 6LJ, UK
- MRC Unit for Lifelong Health and Ageing at UCL, Department of Population Health Sciences, University College London, London WC1E 3HB, UK
- Alexandra L Young
- Centre for Medical Image Computing, University College London, London WC1V 6LJ, UK
- Department of Neuroimaging, Institute of Psychiatry, Psychology and Neuroscience, King’s College London, London SE5 3AF, UK
- Amy Macdougall
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, Queen Square, London WC1N 3BG, UK
- Department of Medical Statistics, London School of Hygiene and Tropical Medicine, London WC1E 7HT, UK
- Jennifer Nicholas
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, Queen Square, London WC1N 3BG, UK
- Department of Medical Statistics, London School of Hygiene and Tropical Medicine, London WC1E 7HT, UK
- Emily N Manning
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, Queen Square, London WC1N 3BG, UK
- Ian B Malone
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, Queen Square, London WC1N 3BG, UK
- Phoebe Walsh
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, Queen Square, London WC1N 3BG, UK
- Olivia Goodkin
- Centre for Medical Image Computing, University College London, London WC1V 6LJ, UK
- Hugh G Pemberton
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, Queen Square, London WC1N 3BG, UK
- Centre for Medical Image Computing, University College London, London WC1V 6LJ, UK
- Frederik Barkhof
- Department of Radiology and Nuclear Medicine, VU University Medical Center, Amsterdam Neuroscience, 1081 HV Amsterdam, The Netherlands
- UCL Queen Square Institute of Neurology, London WC1N 3BG, UK
- UCL Institute of Healthcare Engineering, London WC1E 6DH, UK
- Daniel C Alexander
- Centre for Medical Image Computing, University College London, London WC1V 6LJ, UK
- M Jorge Cardoso
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London SE1 7EH, UK
- Geert Jan Biessels
- Department of Neurology and Neurosurgery, UMC Utrecht Brain Center, University Medical Center Utrecht, 3584 CG Utrecht, The Netherlands
- Josephine Barnes
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, Queen Square, London WC1N 3BG, UK
11
Shapey J, Dowrick T, Delaunay R, Mackle EC, Thompson S, Janatka M, Guichard R, Georgoulas A, Pérez-Suárez D, Bradford R, Saeed SR, Ourselin S, Clarkson MJ, Vercauteren T. Integrated multi-modality image-guided navigation for neurosurgery: open-source software platform using state-of-the-art clinical hardware. Int J Comput Assist Radiol Surg 2021;16:1347-1356. PMID: 33937966; PMCID: PMC8295168; DOI: 10.1007/s11548-021-02374-5.
Abstract
PURPOSE Image-guided surgery (IGS) is an integral part of modern neuro-oncology surgery. Navigated ultrasound provides the surgeon with reconstructed views of ultrasound data, but no commercial system presently permits its integration with other essential non-imaging-based intraoperative monitoring modalities such as intraoperative neuromonitoring. Such a system would be particularly useful in skull base neurosurgery. METHODS We established functional and technical requirements of an integrated multi-modality IGS system tailored for skull base surgery with the ability to incorporate: (1) preoperative MRI data and associated 3D volume reconstructions, (2) real-time intraoperative neurophysiological data and (3) live reconstructed 3D ultrasound. We created an open-source software platform to integrate with readily available commercial hardware. We tested the accuracy of the system's ultrasound navigation and reconstruction using a polyvinyl alcohol phantom model and simulated the use of the complete navigation system in a clinical operating room using a patient-specific phantom model. RESULTS Experimental validation of the system's navigated ultrasound component demonstrated accuracy of [Formula: see text] and a frame rate of 25 frames per second. Clinical simulation confirmed that system assembly was straightforward, could be achieved in a clinically acceptable time of [Formula: see text] and performed with a clinically acceptable level of accuracy. CONCLUSION We present an integrated open-source research platform for multi-modality IGS. The present prototype system was tailored for neurosurgery and met all minimum design requirements focused on skull base surgery. Future work aims to optimise the system further by addressing the remaining target requirements.
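As general background to the navigated-ultrasound component (this is not code from the platform): each ultrasound pixel is mapped to world coordinates by chaining a pixel-to-millimetre scaling, an image-to-probe calibration transform, and the live probe-to-world transform from the tracker. A minimal sketch with entirely hypothetical transform values:

```python
import numpy as np

def rigid(rz_deg, t):
    """Homogeneous 4x4 transform: rotation about z by rz_deg degrees, then translation t."""
    a = np.deg2rad(rz_deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
    T[:3, 3] = t
    return T

# Hypothetical transforms: probe-to-world from the tracker, image-to-probe from calibration.
T_world_probe = rigid(90.0, [10.0, 0.0, 5.0])
T_probe_image = rigid(0.0, [1.0, 2.0, 0.0])
scale = np.diag([0.2, 0.2, 1.0, 1.0])      # mm per pixel in x and y

def pixel_to_world(u, v):
    """Map an ultrasound pixel (u, v) to world coordinates via the calibration chain."""
    p = np.array([u, v, 0.0, 1.0])
    return (T_world_probe @ T_probe_image @ scale @ p)[:3]

origin_world = pixel_to_world(0, 0)
```

Applying this mapping to every pixel of every tracked frame is what produces the reconstructed 3D ultrasound views the abstract describes.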
Affiliation(s)
- Jonathan Shapey
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- Thomas Dowrick
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Centre for Medical Image Computing, UCL, London, UK
- Department of Medical Physics and Biomedical Engineering, UCL, London, UK
- Rémi Delaunay
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Eleanor C Mackle
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Stephen Thompson
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Centre for Medical Image Computing, UCL, London, UK
- Department of Medical Physics and Biomedical Engineering, UCL, London, UK
- Mirek Janatka
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Centre for Medical Image Computing, UCL, London, UK
- Department of Medical Physics and Biomedical Engineering, UCL, London, UK
- Roland Guichard
- Research Software Development Group, Research IT Services, UCL, London, UK
- David Pérez-Suárez
- Research Software Development Group, Research IT Services, UCL, London, UK
- Robert Bradford
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- Shakeel R Saeed
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- The Ear Institute, UCL, London, UK
- The Royal National Throat, Nose and Ear Hospital, London, UK
- Sébastien Ourselin
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Matthew J Clarkson
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Centre for Medical Image Computing, UCL, London, UK
- Department of Medical Physics and Biomedical Engineering, UCL, London, UK
- Tom Vercauteren
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
12
Goodkin O, Prados F, Vos SB, Pemberton H, Collorone S, Hagens MHJ, Cardoso MJ, Yousry TA, Thornton JS, Sudre CH, Barkhof F. FLAIR-only joint volumetric analysis of brain lesions and atrophy in clinically isolated syndrome (CIS) suggestive of multiple sclerosis. Neuroimage Clin 2020;29:102542. PMID: 33418171; PMCID: PMC7804983; DOI: 10.1016/j.nicl.2020.102542.
Abstract
Background MRI assessment in multiple sclerosis (MS) focuses on the presence of typical white matter (WM) lesions. Neurodegeneration characterised by brain atrophy is recognised in the research field as an important prognostic factor. It is not routinely reported clinically, in part due to difficulty in achieving reproducible measurements. Automated MRI quantification of WM lesions and brain volume could provide important clinical monitoring data. In general, lesion quantification relies on both T1 and FLAIR input images, while tissue volumetry relies on T1. However, T1-weighted scans are not routinely included in the clinical MS protocol, limiting the utility of automated quantification. Objectives We address an aspect of this important translational challenge by assessing the performance of FLAIR-only lesion and brain segmentation, against a conventional approach requiring multi-contrast acquisition. We explore whether FLAIR-only grey matter (GM) segmentation yields more variability in performance compared with two-channel segmentation; whether this is related to field strength; and whether the results meet a level of clinical acceptability demonstrated by the ability to reproduce established biological associations. Methods We used a multicentre dataset of subjects with a CIS suggestive of MS scanned at 1.5T and 3T in the same week. WM lesions were manually segmented by two raters, 'manual 1' guided by consensus reading of CIS-specific lesions and 'manual 2' by any WM hyperintensity. An existing brain segmentation method was adapted for FLAIR-only input. Automated segmentation of WM hyperintensities and brain volumes was performed with conventional (T1/T1 + FLAIR) and FLAIR-only methods. Results WM lesion volumes were comparable at 1.5T between 'manual 2' and FLAIR-only methods, and at 3T between 'manual 2', T1 + FLAIR and FLAIR-only methods.
For cortical GM volume, linear regression measures between conventional and FLAIR-only segmentation were high (1.5T: α = 1.029, R2 = 0.997, standard error (SE) = 0.007; 3T: α = 1.019, R2 = 0.998, SE = 0.006). Age-associated change in cortical GM volume was a significant covariate in both T1 (p = 0.001) and FLAIR-only (p = 0.005) methods, confirming the expected relationship between age and GM volume for FLAIR-only segmentations. Conclusions FLAIR-only automated segmentation of WM lesions and brain volumes were consistent with results obtained through conventional methods and had the ability to demonstrate biological effects in our study population. Imaging protocol harmonisation and validation with other MS phenotypes could facilitate the integration of automated WM lesion volume and brain atrophy analysis as clinical tools in radiological MS reporting.
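The agreement statistics quoted above (slope α, R², SE) can be computed from paired volume measurements. A sketch with synthetic numbers (not the study data; the generating slope of 1.02 is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical paired cortical GM volumes (ml): conventional vs FLAIR-only segmentation.
conventional = rng.uniform(400.0, 600.0, size=120)
flair_only = 1.02 * conventional + rng.normal(0.0, 2.0, size=120)

# Agreement summarised as regression slope and R^2, analogous to the abstract's alpha and R2.
slope, intercept = np.polyfit(conventional, flair_only, 1)
pred = slope * conventional + intercept
ss_res = np.sum((flair_only - pred) ** 2)
ss_tot = np.sum((flair_only - flair_only.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
```

A slope near 1 with R² close to 1, as reported in the abstract, indicates the FLAIR-only volumes track the conventional two-channel volumes almost exactly.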
Affiliation(s)
- O Goodkin
- Centre for Medical Image Computing (CMIC), University College London, London, United Kingdom; Neuroradiological Academic Unit, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- F Prados
- Centre for Medical Image Computing (CMIC), University College London, London, United Kingdom; Neuroradiological Academic Unit, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom; eHealth Centre, Universitat Oberta de Catalunya, Barcelona, Spain
- S B Vos
- Centre for Medical Image Computing (CMIC), University College London, London, United Kingdom; Neuroradiological Academic Unit, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom; Lysholm Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, UCLH NHS Foundation Trust, London, United Kingdom
- H Pemberton
- Centre for Medical Image Computing (CMIC), University College London, London, United Kingdom; Neuroradiological Academic Unit, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- S Collorone
- NMR Research Unit, Queen Square Multiple Sclerosis Centre, Department of Neuroinflammation, UCL Institute of Neurology, Faculty of Brain Sciences, University College London (UCL), London, United Kingdom
- M H J Hagens
- MS Center Amsterdam, Department of Neurology, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- M J Cardoso
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- T A Yousry
- Neuroradiological Academic Unit, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom; Lysholm Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, UCLH NHS Foundation Trust, London, United Kingdom
- J S Thornton
- Neuroradiological Academic Unit, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom; Lysholm Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, UCLH NHS Foundation Trust, London, United Kingdom
- C H Sudre
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom; Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- F Barkhof
- Centre for Medical Image Computing (CMIC), University College London, London, United Kingdom; Neuroradiological Academic Unit, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom; Lysholm Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, UCLH NHS Foundation Trust, London, United Kingdom; Radiology & Nuclear Medicine, VU University Medical Center, Amsterdam, Netherlands
13
Fiford CM, Sudre CH, Pemberton H, Walsh P, Manning E, Malone IB, Nicholas J, Bouvy WH, Carmichael OT, Biessels GJ, Cardoso MJ, Barnes J. Automated White Matter Hyperintensity Segmentation Using Bayesian Model Selection: Assessment and Correlations with Cognitive Change. Neuroinformatics 2020;18:429-449. PMID: 32062817; PMCID: PMC7338814; DOI: 10.1007/s12021-019-09439-6.
Abstract
Accurate, automated white matter hyperintensity (WMH) segmentations are needed for large-scale studies to understand the contribution of WMH to neurological diseases. We evaluated Bayesian Model Selection (BaMoS), a hierarchical, fully unsupervised model-selection framework for WMH segmentation. We compared BaMoS segmentations to semi-automated segmentations, and assessed whether they predicted longitudinal cognitive change in control, early mild cognitive impairment (EMCI), late mild cognitive impairment (LMCI), subjective/significant memory concern (SMC) and Alzheimer's disease (AD) participants. Data were downloaded from the Alzheimer's Disease Neuroimaging Initiative (ADNI). Magnetic resonance images from 30 control and 30 AD participants were selected to incorporate multiple scanners, and were semi-automatically segmented by 4 raters and by BaMoS. Segmentations were assessed using volume correlation, Dice score, and other spatial metrics. Linear mixed-effect models were fitted separately in each group (180 control, 107 SMC, 320 EMCI, 171 LMCI and 151 AD participants), with cognitive change (e.g. Mini-Mental State Examination; MMSE) as the outcome and BaMoS WMH, age, sex, race and education as predictors. There was a high level of agreement between BaMoS WMH segmentation volumes and a consensus of rater segmentations, with a median Dice score of 0.74 and a correlation coefficient of 0.96. BaMoS WMH predicted cognitive change in: control, EMCI, and SMC groups using MMSE; LMCI using the clinical dementia rating scale; and EMCI using the Alzheimer's disease assessment scale-cognitive subscale (p < 0.05, all tests). BaMoS compares well to semi-automated segmentation, is robust to different WMH loads and scanners, and can generate volumes which predict decline. BaMoS is applicable to further large-scale studies.
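The Dice score used above to compare BaMoS output with rater segmentations is the standard overlap measure 2|A∩B|/(|A|+|B|). A minimal implementation on toy masks:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks: 2|A & B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy 1-D "masks": automated vs consensus rater segmentation.
auto = np.array([0, 1, 1, 1, 0, 0, 1, 0])
rater = np.array([0, 1, 1, 0, 0, 0, 1, 1])
score = dice(auto, rater)  # 2*3 / (4+4) = 0.75
```

The same formula applies voxel-wise to 3D masks; a median Dice of 0.74 against a rater consensus, as reported, is typical of good WMH agreement given the small, scattered nature of the lesions.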
Affiliation(s)
- Cassidy M. Fiford
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, London, UK
- Carole H. Sudre
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, London, UK
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Hugh Pemberton
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, London, UK
- Phoebe Walsh
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, London, UK
- Emily Manning
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, London, UK
- Ian B. Malone
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, London, UK
- Willem H Bouvy
- Department of Neurology and Neurosurgery, Brain Center Rudolf Magnus, University Medical Center Utrecht, Utrecht, the Netherlands
- Geert Jan Biessels
- Department of Neurology and Neurosurgery, Brain Center Rudolf Magnus, University Medical Center Utrecht, Utrecht, the Netherlands
- M. Jorge Cardoso
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, London, UK
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Josephine Barnes
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, London, UK
- for the Alzheimer’s Disease Neuroimaging Initiative
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, London, UK
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- London School of Hygiene and Tropical Medicine, London, UK
- Department of Neurology and Neurosurgery, Brain Center Rudolf Magnus, University Medical Center Utrecht, Utrecht, the Netherlands
- Pennington Biomedical Research Center, Baton Rouge, LA, USA
14
Thompson S, Dowrick T, Ahmad M, Xiao G, Koo B, Bonmati E, Kahl K, Clarkson MJ. SciKit-Surgery: compact libraries for surgical navigation. Int J Comput Assist Radiol Surg 2020;15:1075-1084. PMID: 32436132; PMCID: PMC7316849; DOI: 10.1007/s11548-020-02180-5.
Abstract
Purpose This paper introduces the SciKit-Surgery libraries, designed to enable rapid development of clinical applications for image-guided interventions. SciKit-Surgery implements a family of compact, orthogonal libraries accompanied by robust testing, documentation, and quality control. SciKit-Surgery libraries can be rapidly assembled into testable clinical applications and subsequently translated to production software without the need for software reimplementation. The aim is to support translation from single surgeon trials to multicentre trials in under 2 years. Methods At the time of publication, there were 13 SciKit-Surgery libraries, providing functionality for visualisation and augmented reality in surgery, together with hardware interfaces for video, tracking, and ultrasound sources. The libraries are stand-alone, open source, and provide Python interfaces. This design approach enables fast development of robust applications and subsequent translation. The paper compares the libraries with existing platforms and uses two example applications to show how SciKit-Surgery libraries can be used in practice. Results Using the number of lines of code and the occurrence of cross-dependencies as proxy measurements of code complexity, two example applications using SciKit-Surgery libraries are analysed. The SciKit-Surgery libraries demonstrate the ability to support rapid development of testable clinical applications. By maintaining stricter orthogonality between libraries, the number and complexity of dependencies can be reduced. The SciKit-Surgery libraries also demonstrate the potential to support wider dissemination of novel research. Conclusion The SciKit-Surgery libraries utilise the modularity of the Python language and the standard data types of the NumPy package to provide an easy-to-use, well-tested, and extensible set of tools for the development of applications for image-guided interventions.
The example application built on SciKit-Surgery has a simpler dependency structure than the same application built using a monolithic platform, making ongoing clinical translation more feasible.
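The cross-dependency proxy measurement described in the abstract can be approximated by statically counting imports. A sketch using Python's ast module, with hypothetical module names and source (not the actual SciKit-Surgery packages or the paper's measurement code):

```python
import ast

# Hypothetical source for two modules: a compact, orthogonal library versus
# an application that pulls several libraries together.
modules = {
    "sksurgery_core": "import numpy\n\ndef transform(points):\n    return points\n",
    "demo_app": (
        "import numpy\n"
        "import sksurgery_core\n"
        "import sksurgery_vtk\n"
        "import sksurgery_tracker\n"
    ),
}

def count_imports(source):
    """Number of distinct imported modules, a crude proxy for cross-dependencies."""
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            names.add(node.module)
    return len(names)

dep_counts = {name: count_imports(src) for name, src in modules.items()}
```

Keeping each library's count low, as the orthogonality argument suggests, means any one library can be tested, audited, and translated largely in isolation.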
Affiliation(s)
- Stephen Thompson
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Thomas Dowrick
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Mian Ahmad
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Goufang Xiao
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Bongjin Koo
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Ester Bonmati
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Kim Kahl
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Matthew J Clarkson
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
15
Fedorov A, Beichel R, Kalpathy-Cramer J, Clunie D, Onken M, Riesmeier J, Herz C, Bauer C, Beers A, Fillion-Robin JC, Lasso A, Pinter C, Pieper S, Nolden M, Maier-Hein K, Herrmann MD, Saltz J, Prior F, Fennessy F, Buatti J, Kikinis R. Quantitative Imaging Informatics for Cancer Research. JCO Clin Cancer Inform 2020;4:444-453. PMID: 32392097; PMCID: PMC7265794; DOI: 10.1200/cci.19.00165.
Abstract
PURPOSE We summarize Quantitative Imaging Informatics for Cancer Research (QIICR; U24 CA180918), one of the first projects funded by the National Cancer Institute (NCI) Informatics Technology for Cancer Research program. METHODS QIICR was motivated by the 3 use cases from the NCI Quantitative Imaging Network. 3D Slicer was selected as the platform for implementation of open-source quantitative imaging (QI) tools. Digital Imaging and Communications in Medicine (DICOM) was chosen for standardization of QI analysis outputs. Support of improved integration with community repositories focused on The Cancer Imaging Archive (TCIA). Priorities included improved capabilities of the standard, toolkits and tools, reference datasets, collaborations, and training and outreach. RESULTS Fourteen new tools to support head and neck cancer, glioblastoma, and prostate cancer QI research were introduced and downloaded over 100,000 times. DICOM was amended, with over 40 correction proposals addressing QI needs. Reference implementations of the standard in a popular toolkit and standalone tools were introduced. Eight datasets exemplifying the application of the standard and tools were contributed. An open demonstration/connectathon was organized, attracting the participation of academic groups and commercial vendors. Integration of tools with TCIA was improved by implementing a programmatic communication interface and by refining best practices for QI analysis results curation. CONCLUSION Tools, capabilities of the DICOM standard, and datasets we introduced found adoption and utility within the cancer imaging community. A collaborative approach is critical to addressing challenges in imaging informatics at the national and international levels. Numerous challenges remain in establishing and maintaining the infrastructure of analysis tools and standardized datasets for the imaging community.
Ideas and technology developed by the QIICR project are contributing to the NCI Imaging Data Commons currently being developed.
Affiliation(s)
- Andrey Fedorov
- Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA
- Christian Herz
- Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA
- Marco Nolden
- German Cancer Research Center, Heidelberg, Germany
- Markus D. Herrmann
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Fred Prior
- University of Arkansas for Medical Sciences, Little Rock, AR
- Fiona Fennessy
- Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA
- Ron Kikinis
- Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA
| |
16
Xiao G, Bonmati E, Thompson S, Evans J, Hipwell J, Nikitichev D, Gurusamy K, Ourselin S, Hawkes DJ, Davidson B, Clarkson MJ. Electromagnetic tracking in image-guided laparoscopic surgery: Comparison with optical tracking and feasibility study of a combined laparoscope and laparoscopic ultrasound system. Med Phys 2018; 45:5094-5104. PMID: 30247765; PMCID: PMC6282846; DOI: 10.1002/mp.13210.
Abstract
PURPOSE In image-guided laparoscopy, optical tracking is commonly employed, but electromagnetic (EM) systems have been proposed in the literature. In this paper, we provide a thorough comparison of EM and optical tracking systems for use in image-guided laparoscopic surgery and a feasibility study of a combined, EM-tracked laparoscope and laparoscopic ultrasound (LUS) image guidance system. METHODS We first assess the tracking accuracy of a laparoscope with two optical trackers tracking retroreflective markers mounted on the shaft and an EM tracker with the sensor embedded at the proximal end, using a standard evaluation plate. We then use a stylus to test the precision of position measurement and the accuracy of distance measurement of the trackers. Finally, we assess the accuracy of an image guidance system comprising an EM-tracked laparoscope and an EM-tracked LUS probe. RESULTS In the experiment using a standard evaluation plate, the two optical trackers show less jitter in position and orientation measurement than the EM tracker. The optical trackers also demonstrate better consistency of orientation measurement within the test volume. However, their accuracy in measuring relative positions decreases significantly with longer distances, whereas the EM tracker's performance is stable: at 50 mm distance, the RMS errors for the two optical trackers are 0.210 and 0.233 mm, respectively, versus 0.214 mm for the EM tracker; at 250 mm distance, the RMS errors for the two optical trackers grow to 1.031 and 1.178 mm, respectively, while it is 0.367 mm for the EM tracker. In the experiment using the stylus, the two optical trackers have RMS errors of 1.278 and 1.555 mm in localizing the stylus tip, versus 1.117 mm for the EM tracker. Our prototype of a combined, EM-tracked laparoscope and LUS system using representative calibration methods showed an RMS point localization error of 3.0 mm for the laparoscope and 1.3 mm for the LUS probe, the larger error of the former being predominantly due to triangulation error when using a narrow-baseline stereo laparoscope. CONCLUSIONS The errors incurred by optical trackers, due to the lever-arm effect and variation in tracking accuracy in the depth direction, would make EM-tracked solutions preferable if the EM sensor is placed at the proximal end of the laparoscope.
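The RMS figures quoted above follow the usual definition of root-mean-square error over repeated measurements. As a minimal sketch (the function and the sample values are ours, not the paper's data):

```python
import math

def rms_error(measured_mm, truth_mm):
    """Root-mean-square error of repeated distance measurements, in mm."""
    return math.sqrt(sum((m - truth_mm) ** 2 for m in measured_mm) / len(measured_mm))

# Toy samples: four measurements of a nominal 50 mm marker spacing.
samples = [50.2, 49.8, 50.1, 49.9]
print(round(rms_error(samples, 50.0), 3))  # → 0.158
```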
Affiliation(s)
- Guofang Xiao
- Wellcome/EPSRC Center for Interventional and Surgical Sciences, University College London, London, UK
- Center for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Ester Bonmati
- Center for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Stephen Thompson
- Wellcome/EPSRC Center for Interventional and Surgical Sciences, University College London, London, UK
- Center for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Joe Evans
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- John Hipwell
- Center for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Daniil Nikitichev
- Wellcome/EPSRC Center for Interventional and Surgical Sciences, University College London, London, UK
- Center for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Kurinchi Gurusamy
- Division of Surgery and Interventional Science, University College London, London, UK
- Sébastien Ourselin
- Wellcome/EPSRC Center for Interventional and Surgical Sciences, University College London, London, UK
- Center for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- David J. Hawkes
- Wellcome/EPSRC Center for Interventional and Surgical Sciences, University College London, London, UK
- Center for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Brian Davidson
- Wellcome/EPSRC Center for Interventional and Surgical Sciences, University College London, London, UK
- Division of Surgery and Interventional Science, University College London, London, UK
- Matthew J. Clarkson
- Wellcome/EPSRC Center for Interventional and Surgical Sciences, University College London, London, UK
- Center for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
17
Souza VH, Matsuda RH, Peres ASC, Amorim PHJ, Moraes TF, Silva JVL, Baffa O. Development and characterization of the InVesalius Navigator software for navigated transcranial magnetic stimulation. J Neurosci Methods 2018; 309:109-120. PMID: 30149047; DOI: 10.1016/j.jneumeth.2018.08.023.
Abstract
BACKGROUND Neuronavigation provides visual guidance of an instrument during neurological interventions and has been shown to be a valuable tool for accurately positioning transcranial magnetic stimulation (TMS) coils relative to an individual's anatomy. Despite the importance of neuronavigation, its high cost, low portability, and the low availability of magnetic resonance imaging facilities limit its adoption in research and clinical environments. NEW METHOD We have developed and validated the InVesalius Navigator as the first free, open-source software for image-guided navigated TMS, compatible with multiple tracking devices. A point-based co-registration algorithm and a guiding interface were designed for tracking any instrument (e.g., TMS coils) relative to an individual's anatomy. RESULTS Localization error, precision, and repeatability were measured for two tracking devices during navigation in a phantom and in a simulated TMS study. Errors were also measured in two commercial navigated TMS systems for comparison. Localization error was about 1.5 mm, and repeatability was about 1 mm for translation and 1° for rotation angles, both within limits established in the literature. COMPARISON WITH EXISTING METHODS Existing TMS neuronavigation software programs are not compatible with multiple tracking devices and do not provide an easy-to-implement platform for custom tools. Moreover, commercial alternatives are expensive and of limited portability. CONCLUSIONS InVesalius Navigator might contribute to improving the spatial accuracy and reliability of techniques for brain interventions by means of an intuitive graphical interface. Furthermore, the software can be easily integrated into existing neuroimaging tools and customized for novel applications such as multi-locus and/or controllable-pulse TMS.
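A point-based co-registration of the kind described can be illustrated with the standard SVD (Kabsch/Horn) solution for the least-squares rigid transform between corresponding fiducials. This is our own sketch of the general technique, not InVesalius Navigator code, and the fiducial coordinates are invented:

```python
import numpy as np

def point_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src fiducials onto dst,
    via the SVD method; also returns the fiducial registration error (RMS)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)              # 3x3 covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    fre = np.sqrt(np.mean(np.sum((src @ R.T + t - dst) ** 2, axis=1)))
    return R, t, fre

# Image-space fiducials and the same points reported by the tracker (toy values:
# a 90° rotation about z plus a translation of (1, 2, 3)).
src = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
dst = [[1, 2, 3], [1, 3, 3], [0, 2, 3], [1, 2, 4]]
R, t, fre = point_register(src, dst)
print(round(fre, 6))  # → 0.0 (exact correspondence)
```

With noisy fiducials, `fre` gives the residual the operator would see after co-registration.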
Affiliation(s)
- Victor Hugo Souza
- Departamento de Física, Faculdade de Filosofia, Ciências e Letras de Ribeirão Preto, Universidade de São Paulo, Av. Bandeirantes, 3900, 14040-901, Ribeirão Preto, SP, Brazil.
- Renan H Matsuda
- Departamento de Física, Faculdade de Filosofia, Ciências e Letras de Ribeirão Preto, Universidade de São Paulo, Av. Bandeirantes, 3900, 14040-901, Ribeirão Preto, SP, Brazil.
- André S C Peres
- Departamento de Física, Faculdade de Filosofia, Ciências e Letras de Ribeirão Preto, Universidade de São Paulo, Av. Bandeirantes, 3900, 14040-901, Ribeirão Preto, SP, Brazil; Instituto Internacional de Neurociência de Natal Edmond e Lily Safra, Instituto Santos Dumont, Rodovia RN 160 Km 03, 3003, 59280-000, Macaíba, RN, Brazil.
- Paulo Henrique J Amorim
- Núcleo de Tecnologias Tridimensionais, Centro de Tecnologia da Informação Renato Archer, Rodovia Dom Pedro I Km 143, 13069-901, Campinas, SP, Brazil.
- Thiago F Moraes
- Núcleo de Tecnologias Tridimensionais, Centro de Tecnologia da Informação Renato Archer, Rodovia Dom Pedro I Km 143, 13069-901, Campinas, SP, Brazil.
- Jorge Vicente L Silva
- Núcleo de Tecnologias Tridimensionais, Centro de Tecnologia da Informação Renato Archer, Rodovia Dom Pedro I Km 143, 13069-901, Campinas, SP, Brazil.
- Oswaldo Baffa
- Departamento de Física, Faculdade de Filosofia, Ciências e Letras de Ribeirão Preto, Universidade de São Paulo, Av. Bandeirantes, 3900, 14040-901, Ribeirão Preto, SP, Brazil.
18
Gibson E, Li W, Sudre C, Fidon L, Shakir DI, Wang G, Eaton-Rosen Z, Gray R, Doel T, Hu Y, Whyntie T, Nachev P, Modat M, Barratt DC, Ourselin S, Cardoso MJ, Vercauteren T. NiftyNet: a deep-learning platform for medical imaging. Comput Methods Programs Biomed 2018; 158:113-122. PMID: 29544777; PMCID: PMC5869052; DOI: 10.1016/j.cmpb.2018.01.025.
Abstract
BACKGROUND AND OBJECTIVES Medical image analysis and computer-assisted intervention problems are increasingly being addressed with deep-learning-based solutions. Established deep-learning platforms are flexible but do not provide specific functionality for medical image analysis, and adapting them for this application domain requires substantial implementation effort. Consequently, there has been substantial duplication of effort and incompatible infrastructure developed across many research groups. This work presents the open-source NiftyNet platform for deep learning in medical imaging. The ambition of NiftyNet is to accelerate and simplify the development of these solutions, and to provide a common mechanism for disseminating research outputs for the community to use, adapt and build upon. METHODS The NiftyNet infrastructure provides a modular deep-learning pipeline for a range of medical imaging applications, including segmentation, regression, image generation and representation learning. Components of the NiftyNet pipeline, including data loading, data augmentation, network architectures, loss functions and evaluation metrics, are tailored to, and take advantage of, the idiosyncrasies of medical image analysis and computer-assisted intervention. NiftyNet is built on the TensorFlow framework and by default supports features such as TensorBoard visualization of 2D and 3D images and computational graphs. RESULTS We present three illustrative medical image analysis applications built using the NiftyNet infrastructure: (1) segmentation of multiple abdominal organs from computed tomography; (2) image regression to predict computed tomography attenuation maps from brain magnetic resonance images; and (3) generation of simulated ultrasound images for specified anatomical poses.
CONCLUSIONS The NiftyNet infrastructure enables researchers to rapidly develop and distribute deep learning solutions for segmentation, regression, image generation and representation learning applications, or extend the platform to new applications.
Affiliation(s)
- Eli Gibson
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, UK; Centre for Medical Image Computing (CMIC), Departments of Medical Physics & Biomedical Engineering and Computer Science, University College London, UK
- Wenqi Li
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, UK
- Carole Sudre
- Centre for Medical Image Computing (CMIC), Departments of Medical Physics & Biomedical Engineering and Computer Science, University College London, UK
- Lucas Fidon
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, UK
- Dzhoshkun I Shakir
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, UK
- Guotai Wang
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, UK
- Zach Eaton-Rosen
- Centre for Medical Image Computing (CMIC), Departments of Medical Physics & Biomedical Engineering and Computer Science, University College London, UK
- Robert Gray
- Institute of Neurology, University College London, UK; National Hospital for Neurology and Neurosurgery, London, UK
- Tom Doel
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, UK
- Yipeng Hu
- Centre for Medical Image Computing (CMIC), Departments of Medical Physics & Biomedical Engineering and Computer Science, University College London, UK
- Tom Whyntie
- Centre for Medical Image Computing (CMIC), Departments of Medical Physics & Biomedical Engineering and Computer Science, University College London, UK
- Parashkev Nachev
- Institute of Neurology, University College London, UK; National Hospital for Neurology and Neurosurgery, London, UK
- Marc Modat
- Centre for Medical Image Computing (CMIC), Departments of Medical Physics & Biomedical Engineering and Computer Science, University College London, UK
- Dean C Barratt
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, UK; Centre for Medical Image Computing (CMIC), Departments of Medical Physics & Biomedical Engineering and Computer Science, University College London, UK
- Sébastien Ourselin
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, UK
- M Jorge Cardoso
- Centre for Medical Image Computing (CMIC), Departments of Medical Physics & Biomedical Engineering and Computer Science, University College London, UK
- Tom Vercauteren
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, UK
19
In vivo estimation of target registration errors during augmented reality laparoscopic surgery. Int J Comput Assist Radiol Surg 2018; 13:865-874. PMID: 29663273; PMCID: PMC5973973; DOI: 10.1007/s11548-018-1761-3.
Abstract
PURPOSE Successful use of augmented reality for laparoscopic surgery requires that the surgeon has a thorough understanding of the likely accuracy of any overlay. Whilst the accuracy of such systems can be estimated in the laboratory, it is difficult to extend such methods to the in vivo clinical setting. Herein we describe a novel method that enables the surgeon to estimate in vivo errors during use. We show that the method enables quantitative evaluation of in vivo data gathered with the SmartLiver image guidance system. METHODS The SmartLiver system utilises an intuitive display to enable the surgeon to compare the positions of landmarks visible in both a projected model and in the live video stream. From this the surgeon can estimate the system accuracy when using the system to locate subsurface targets not visible in the live video. Visible landmarks may be either point or line features. We test the validity of the algorithm using an anatomically representative liver phantom, applying simulated perturbations to achieve clinically realistic overlay errors. We then apply the algorithm to in vivo data. RESULTS The phantom results show that using projected errors of surface features provides a reliable predictor of subsurface target registration error for a representative human liver shape. Applying the algorithm to in vivo data gathered with the SmartLiver image-guided surgery system shows that the system is capable of accuracies around 12 mm; however, achieving this reliably remains a significant challenge. CONCLUSION We present an in vivo quantitative evaluation of the SmartLiver image-guided surgery system, together with a validation of the evaluation algorithm. This is the first quantitative in vivo analysis of an augmented reality system for laparoscopic surgery.
20
Tella-Amo M, Peter L, Shakir DI, Deprest J, Stoyanov D, Iglesias JE, Vercauteren T, Ourselin S. Probabilistic visual and electromagnetic data fusion for robust drift-free sequential mosaicking: application to fetoscopy. J Med Imaging (Bellingham) 2018; 5:021217. PMID: 29487889; PMCID: PMC5822039; DOI: 10.1117/1.jmi.5.2.021217.
Abstract
The most effective treatment for twin-to-twin transfusion syndrome is laser photocoagulation of the shared vascular anastomoses in the placenta. Vascular connections are extremely challenging to locate due to their caliber and the reduced field-of-view of the fetoscope. Mosaicking techniques are therefore beneficial to expand the scene, facilitate navigation, and support vessel photocoagulation decision-making. Local vision-based mosaicking algorithms inherently drift over time due to the use of pairwise transformations. We propose the use of an electromagnetic tracker (EMT) sensor mounted at the tip of the fetoscope to obtain camera pose measurements, which we incorporate into a probabilistic framework together with frame-to-frame visual information to achieve globally consistent sequential mosaics. We parametrize the problem in terms of plane and camera poses constrained by EMT measurements to enforce global consistency, while leveraging pairwise image relationships sequentially through local bundle adjustment. We show that our approach is drift-free and performs similarly to state-of-the-art global alignment techniques such as bundle adjustment, albeit with much less computational burden. Additionally, we propose a version of bundle adjustment that uses EMT information. We demonstrate robustness to EMT noise and to loss of visual information, and evaluate mosaics on synthetic, phantom-based and ex vivo datasets.
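The benefit of absolute EMT pose readings over pure frame-to-frame integration can be seen in a toy one-dimensional filter. This is entirely our own illustration (the paper's probabilistic framework is far richer): dead-reckoning error from biased pairwise estimates grows without bound, while blending in even a small weight of the absolute measurement caps it.

```python
def fuse_step(prev_est, visual_delta, emt_abs, w_emt=0.1):
    """Integrate the pairwise visual motion, then blend toward the absolute
    EMT reading; w_emt trades smoothness against drift correction."""
    prediction = prev_est + visual_delta
    return (1.0 - w_emt) * prediction + w_emt * emt_abs

true_pos, dead, fused = 0.0, 0.0, 0.0
bias = 0.05                      # systematic error in each pairwise estimate
for _ in range(200):
    true_pos += 1.0
    dead += 1.0 + bias           # vision-only dead reckoning drifts
    fused = fuse_step(fused, 1.0 + bias, true_pos)  # EMT assumed noise-free here
print(round(dead - true_pos, 2), round(fused - true_pos, 2))  # → 10.0 0.45
```

The fused error converges to bias * (1 - w_emt) / w_emt instead of growing linearly with the number of frames.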
Affiliation(s)
- Marcel Tella-Amo
- University College London, Wellcome/EPSRC Center for Interventional and Surgical Sciences, London, United Kingdom
- Loic Peter
- University College London, Wellcome/EPSRC Center for Interventional and Surgical Sciences, London, United Kingdom
- Dzhoshkun I Shakir
- University College London, Wellcome/EPSRC Center for Interventional and Surgical Sciences, London, United Kingdom
- Jan Deprest
- University College London, Wellcome/EPSRC Center for Interventional and Surgical Sciences, London, United Kingdom
- KU Leuven, Center for Surgical Technologies, Faculty of Medicine, Leuven, Belgium
- Danail Stoyanov
- University College London, Wellcome/EPSRC Center for Interventional and Surgical Sciences, London, United Kingdom
- Juan Eugenio Iglesias
- University College London, Translational Imaging Group, CMIC, Medical Physics, London, United Kingdom
- Tom Vercauteren
- University College London, Wellcome/EPSRC Center for Interventional and Surgical Sciences, London, United Kingdom
- KU Leuven, Center for Surgical Technologies, Faculty of Medicine, Leuven, Belgium
- Sebastien Ourselin
- University College London, Wellcome/EPSRC Center for Interventional and Surgical Sciences, London, United Kingdom
21
Frank T, Krieger A, Leonard S, Patel NA, Tokuda J. ROS-IGTL-Bridge: an open network interface for image-guided therapy using the ROS environment. Int J Comput Assist Radiol Surg 2017; 12:1451-1460. PMID: 28567563; DOI: 10.1007/s11548-017-1618-1.
Abstract
PURPOSE With the growing interest in advanced image guidance for surgical robot systems, rapid integration and testing of robotic devices and medical image computing software are becoming essential in research and development. Maximizing the use of existing engineering resources built on widely accepted platforms in different fields, such as the Robot Operating System (ROS) in robotics and 3D Slicer in medical image computing, could simplify these tasks. We propose a new open network bridge interface integrated in ROS to ensure seamless cross-platform data sharing. METHODS A ROS node named ROS-IGTL-Bridge was implemented. It establishes a TCP/IP network connection between the ROS environment and external medical image computing software using the OpenIGTLink protocol. The node simultaneously exports ROS messages to the external software over the network and vice versa, allowing seamless and transparent data sharing between ROS-based devices and the medical image computing platforms. RESULTS Performance tests demonstrated that the bridge could successfully stream transforms, strings, points, and images at 30 fps in both directions. The data transfer latency was <1.2 ms for transforms, strings and points, and 25.2 ms for color VGA images. A separate test also demonstrated that the bridge could achieve 900 fps for transforms. Additionally, the bridge was demonstrated in two representative systems: a mock image-guided surgical robot setup consisting of 3D Slicer and LEGO Mindstorms with ROS, as a prototyping and educational platform for IGT research; and the Smart Tissue Autonomous Robot surgical setup with 3D Slicer. CONCLUSION The study demonstrated that the bridge enables cross-platform data sharing between ROS and medical image computing software. This will allow rapid and seamless integration of the advanced image-based planning and navigation offered by medical image computing software such as 3D Slicer into ROS-based surgical robot systems.
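The OpenIGTLink wire format the bridge speaks starts every message with a fixed 58-byte header. The sketch below is our own illustration based on the public OpenIGTLink version-1 specification, not ROS-IGTL-Bridge code; the device name and payload are invented, and the CRC-64 checksum is left at zero, which a real sender must compute over the body.

```python
import struct
import time

# version, type name, device name, timestamp, body size, CRC64 (big-endian)
IGTL_HEADER = ">H12s20sQQQ"

def igtl_header(msg_type, device, body, version=1):
    """Pack a version-1 OpenIGTLink header (58 bytes) for the given body."""
    timestamp = int(time.time()) << 32          # coarse: seconds in the high 32 bits
    return struct.pack(IGTL_HEADER, version,
                       msg_type.encode().ljust(12, b"\x00"),
                       device.encode().ljust(20, b"\x00"),
                       timestamp, len(body), 0)  # CRC left at 0 in this sketch

body = b"hello from ROS"                         # hypothetical STRING payload
header = igtl_header("STRING", "RosNode", body)
print(len(header))  # → 58
```

On the wire, `header + body` would be written to the TCP socket that the bridge opens toward the medical image computing side.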
Affiliation(s)
- Tobias Frank
- Institute of Mechatronic Systems, Gottfried Wilhelm Leibniz Universität Hannover, Appelstrasse 11 a, 30167, Hannover, Germany
- Axel Krieger
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Health System, 111 Michigan Avenue Northwest, Washington, DC, 20010, USA
- Simon Leonard
- Department of Computer Science, Johns Hopkins University, 3400 North Charles Street, Baltimore, MD, 21218, USA
- Niravkumar A Patel
- Automation and Interventional Medicine (AIM) Laboratory, Worcester Polytechnic Institute, 100 Institute Road, Worcester, MA, 01609, USA
- Junichi Tokuda
- Department of Radiology, Brigham and Women's Hospital and Harvard Medical School, 75 Francis Street, Boston, MA, 02115, USA
22
Hawkes DJ. From clinical imaging and computational models to personalised medicine and image guided interventions. Med Image Anal 2016; 33:50-55. PMID: 27407003; DOI: 10.1016/j.media.2016.06.022.
Abstract
This short paper describes the development of the UCL Centre for Medical Image Computing (CMIC) from 2006 to 2016, together with reference to the historical developments of the Computational Imaging Sciences Group (CISG) at Guy's Hospital. Key early work in automated image registration led to developments in image-guided surgery and improved cancer diagnosis and therapy. The work is illustrated with examples from neurosurgery, laparoscopic liver and gastric surgery, diagnosis and treatment of prostate cancer and breast cancer, and image-guided radiotherapy for lung cancer.
Affiliation(s)
- David J Hawkes
- Centre for Medical Image Computing, UCL, London, WC1E 6BT, United Kingdom
23
Ungi T, Lasso A, Fichtinger G. Open-source platforms for navigated image-guided interventions. Med Image Anal 2016; 33:181-186. DOI: 10.1016/j.media.2016.06.011.
24
MITK-OpenIGTLink for combining open-source toolkits in real-time computer-assisted interventions. Int J Comput Assist Radiol Surg 2016; 12:351-361. DOI: 10.1007/s11548-016-1488-y.
25
Drouin S, Kochanowska A, Kersten-Oertel M, Gerard IJ, Zelmann R, De Nigris D, Bériault S, Arbel T, Sirhan D, Sadikot AF, Hall JA, Sinclair DS, Petrecca K, DelMaestro RF, Collins DL. IBIS: an OR ready open-source platform for image-guided neurosurgery. Int J Comput Assist Radiol Surg 2016; 12:363-378. DOI: 10.1007/s11548-016-1478-0.
26
From computer-assisted intervention research to clinical impact: The need for a holistic approach. Med Image Anal 2016; 33:72-78. PMID: 27425646; DOI: 10.1016/j.media.2016.06.018.
Abstract
The early days of the field of medical image computing (MIC) and computer-assisted intervention (CAI), when publishing a strong self-contained methodological algorithm was enough to produce impact, are over. As a community, we now have a substantial responsibility to translate our scientific progress into improved patient care. In the field of computer-assisted interventions, the emphasis is also shifting from the mere use of well-known established imaging modalities and position trackers to the design and combination of innovative sensing, elaborate computational models and fine-grained clinical workflow analysis to create devices with unprecedented capabilities. The barriers to translating such devices into the complex and understandably heavily regulated surgical and interventional environment can seem daunting. Whether we leave the translation task mostly to our industrial partners or welcome, as researchers, an important share of it is up to us. We argue that embracing the complexity of surgical and interventional sciences is mandatory to the evolution of the field. Being able to do so requires large-scale infrastructure and a critical mass of expertise that very few research centres have. In this paper, we emphasise the need for a holistic approach to computer-assisted interventions in which clinical, scientific, engineering and regulatory expertise are combined as a means of moving towards clinical impact. To ensure that the breadth of infrastructure and expertise required for translational computer-assisted intervention research does not lead to a situation where the field advances only thanks to a handful of exceptionally large research centres, we also advocate that solutions be designed to lower the barriers to entry. Inspired by fields such as particle physics and astronomy, we argue that centralised, very large innovation centres with state-of-the-art technology and health technology assessment capabilities, backed by core support staff and open interoperability standards, need to be accessible to the wider computer-assisted intervention research community.
27
Thompson S, Stoyanov D, Schneider C, Gurusamy K, Ourselin S, Davidson B, Hawkes D, Clarkson MJ. Hand-eye calibration for rigid laparoscopes using an invariant point. Int J Comput Assist Radiol Surg 2016; 11:1071-80. PMID: 26995597; PMCID: PMC4893361; DOI: 10.1007/s11548-016-1364-9.
Abstract
PURPOSE Laparoscopic liver resection has significant advantages over open surgery due to less patient trauma and faster recovery times, yet it can be difficult due to the restricted field of view and lack of haptic feedback. Image guidance provides a potential solution but one current challenge is in accurate "hand-eye" calibration, which determines the position and orientation of the laparoscope camera relative to the tracking markers. METHODS In this paper, we propose a simple and clinically feasible calibration method based on a single invariant point. The method requires no additional hardware, can be constructed by theatre staff during surgical setup, requires minimal image processing and can be visualised in real time. Real-time visualisation allows the surgical team to assess the calibration accuracy before use in surgery. In addition, in the laboratory, we have developed a laparoscope with an electromagnetic tracking sensor attached to the camera end and an optical tracking marker attached to the distal end. This enables a comparison of tracking performance. RESULTS We have evaluated our method in the laboratory and compared it to two widely used methods, "Tsai's method" and "direct" calibration. The new method is of comparable accuracy to existing methods, and we show RMS projected error due to calibration of 1.95 mm for optical tracking and 0.85 mm for EM tracking, versus 4.13 and 1.00 mm respectively, using existing methods. The new method has also been shown to be workable under sterile conditions in the operating room. CONCLUSION We have proposed a new method of hand-eye calibration, based on a single invariant point. Initial experience has shown that the method provides visual feedback, satisfactory accuracy and can be performed during surgery. We also show that an EM sensor placed near the camera would provide significantly improved image overlay accuracy.
Affiliation(s)
- Stephen Thompson
- Centre for Medical Image Computing, Front Engineering Building, University College London, Malet Place, London, UK
- Danail Stoyanov
- Centre for Medical Image Computing, Front Engineering Building, University College London, Malet Place, London, UK
- Crispin Schneider
- Division of Surgery, Hampstead Campus, UCL Medical School, Royal Free Hospital, 9th Floor, Rowland Hill Street, London, UK
- Kurinchi Gurusamy
- Division of Surgery, Hampstead Campus, UCL Medical School, Royal Free Hospital, 9th Floor, Rowland Hill Street, London, UK
- Sébastien Ourselin
- Centre for Medical Image Computing, Front Engineering Building, University College London, Malet Place, London, UK
- Brian Davidson
- Division of Surgery, Hampstead Campus, UCL Medical School, Royal Free Hospital, 9th Floor, Rowland Hill Street, London, UK
- David Hawkes
- Centre for Medical Image Computing, Front Engineering Building, University College London, Malet Place, London, UK
- Matthew J Clarkson
- Centre for Medical Image Computing, Front Engineering Building, University College London, Malet Place, London, UK
28
Nowell M, Rodionov R, Zombori G, Sparks R, Rizzi M, Ourselin S, Miserocchi A, McEvoy A, Duncan J. A Pipeline for 3D Multimodality Image Integration and Computer-assisted Planning in Epilepsy Surgery. J Vis Exp 2016. [PMID: 27286266 PMCID: PMC4927706 DOI: 10.3791/53450] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/29/2023] Open
Abstract
Epilepsy surgery is challenging and the use of 3D multimodality image integration (3DMMI) to aid presurgical planning is well-established. Multimodality image integration can be technically demanding, and is underutilised in clinical practice. We have developed a single software platform for image integration, 3D visualization and surgical planning. Here, our pipeline is described in step-by-step fashion, starting with image acquisition, proceeding through image co-registration, manual segmentation, brain and vessel extraction, 3D visualization and manual planning of stereoEEG (SEEG) implantations. With dissemination of the software this pipeline can be reproduced in other centres, allowing other groups to benefit from 3DMMI. We also describe the use of an automated, multi-trajectory planner to generate stereoEEG implantation plans. Preliminary studies suggest this is a rapid, safe and efficacious adjunct for planning SEEG implantations. Finally, a simple solution for the export of plans and models to commercial neuronavigation systems for implementation of plans in the operating theater is described. This software is a valuable tool that can support clinical decision making throughout the epilepsy surgery pathway.
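The abstract does not detail how the automated multi-trajectory planner scores candidate SEEG implantations, but one geometric primitive any such planner needs is the clearance between a candidate electrode trajectory and segmented critical structures such as vessels. The sketch below is a minimal illustration under assumed conventions (trajectories as entry-to-target line segments, vessels as a point cloud in millimetres); the function names and the safety margin are hypothetical, not taken from the paper:

```python
import numpy as np

def min_distance_to_structures(entry, target, points):
    """Minimum distance from the segment entry->target to a point cloud
    (e.g. segmented vessel voxel centres), vectorised with NumPy."""
    entry, target = np.asarray(entry, float), np.asarray(target, float)
    d = target - entry
    # Parameter of each point's closest location on the segment, clamped to [0, 1].
    s = np.clip((points - entry) @ d / (d @ d), 0.0, 1.0)
    closest = entry + s[:, None] * d
    return float(np.min(np.linalg.norm(points - closest, axis=1)))

def rank_trajectories(candidates, vessel_points, safety_margin_mm=3.0):
    """Discard candidate (entry, target) pairs that violate the safety margin
    and rank the remainder, largest vessel clearance first."""
    scored = [(min_distance_to_structures(e, t, vessel_points), e, t)
              for e, t in candidates]
    return sorted([s for s in scored if s[0] >= safety_margin_mm],
                  key=lambda s: -s[0])
```

A real planner would combine several such costs (vessel clearance, grey-matter sampling, entry angle) per trajectory, but the clearance test above is the safety-critical core.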
Affiliation(s)
- Mark Nowell
- Department of Clinical and Experimental Epilepsy, UCL Institute of Neurology
- Roman Rodionov
- Department of Clinical and Experimental Epilepsy, UCL Institute of Neurology
- Michele Rizzi
- Department of Clinical and Experimental Epilepsy, UCL Institute of Neurology
- Anna Miserocchi
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery
- Andrew McEvoy
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery
- John Duncan
- Department of Clinical and Experimental Epilepsy, UCL Institute of Neurology
29
Song Y, Totz J, Thompson S, Johnsen S, Barratt D, Schneider C, Gurusamy K, Davidson B, Ourselin S, Hawkes D, Clarkson MJ. Locally rigid, vessel-based registration for laparoscopic liver surgery. Int J Comput Assist Radiol Surg 2015; 10:1951-61. [PMID: 26092658 PMCID: PMC4642598 DOI: 10.1007/s11548-015-1236-8] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2015] [Accepted: 05/30/2015] [Indexed: 12/05/2022]
Abstract
PURPOSE Laparoscopic liver resection has significant advantages over open surgery due to less patient trauma and faster recovery times, yet is difficult for most lesions due to the restricted field of view and lack of haptic feedback. Image guidance provides a potential solution but is challenging in a soft deforming organ such as the liver. In this paper, we therefore propose a laparoscopic ultrasound (LUS) image guidance system and study the feasibility of a locally rigid registration for laparoscopic liver surgery. METHODS We developed a real-time segmentation method to extract vessel centre points from calibrated, freehand, electromagnetically tracked, 2D LUS images. Using landmark-based initial registration and an optional iterative closest point (ICP) point-to-line registration, a vessel centre-line model extracted from preoperative computed tomography (CT) is registered to the ultrasound data during surgery. RESULTS Using the locally rigid ICP method, the RMS residual error when registering to a phantom was 0.7 mm, and the mean target registration error (TRE) for two in vivo porcine studies was 3.58 and 2.99 mm, respectively. Using the locally rigid landmark-based registration method gave a mean TRE of 4.23 mm using vessel centre lines derived from CT scans taken with pneumoperitoneum and 6.57 mm without pneumoperitoneum. CONCLUSION In this paper we propose a practical image-guided surgery system based on locally rigid registration of a CT-derived model to vascular structures located with LUS. In a physical phantom and during porcine laparoscopic liver resection, we demonstrate accuracy of target location commensurate with surgical requirements. We conclude that locally rigid registration could be sufficient for practically useful image guidance in the near future.
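The ICP point-to-line step described above can be sketched as follows. This is a plausible reading of the method, not the paper's implementation: each tracked ultrasound vessel-centre point is matched to its closest point on the CT-derived centre-line segments, a rigid update is solved in closed form (Kabsch), and the two steps alternate. Function names and the convergence criterion (fixed iteration count) are illustrative assumptions:

```python
import numpy as np

def closest_points_on_segments(points, seg_starts, seg_ends):
    """For each point, the closest point over all centre-line segments."""
    P = points[:, None, :]                     # N x 1 x 3
    A = seg_starts[None, :, :]                 # 1 x M x 3
    D = (seg_ends - seg_starts)[None, :, :]    # 1 x M x 3
    s = np.clip(np.sum((P - A) * D, axis=2) / np.sum(D * D, axis=2), 0.0, 1.0)
    C = A + s[..., None] * D                   # candidate closest points, N x M x 3
    idx = np.argmin(np.linalg.norm(P - C, axis=2), axis=1)
    return C[np.arange(len(points)), idx]

def rigid_kabsch(src, dst):
    """Closed-form least-squares rigid transform with dst ~= R @ src + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, mu_d - R @ mu_s

def icp_point_to_line(us_points, seg_starts, seg_ends, n_iter=30):
    """Alternate closest-point matching and rigid updates; return the
    accumulated transform and the final RMS point-to-line residual."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(n_iter):
        moved = us_points @ R.T + t
        matches = closest_points_on_segments(moved, seg_starts, seg_ends)
        dR, dt = rigid_kabsch(moved, matches)
        R, t = dR @ R, dR @ t + dt
    moved = us_points @ R.T + t
    res = closest_points_on_segments(moved, seg_starts, seg_ends) - moved
    return R, t, float(np.sqrt(np.mean(np.sum(res ** 2, axis=1))))
```

Like all ICP variants this only converges from a reasonable initial pose, which is why the paper pairs it with a landmark-based initial registration; branching in the vessel tree is what constrains translation along the vessel direction.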
Affiliation(s)
- Yi Song
- Centre for Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK
- Johannes Totz
- Centre for Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK
- Steve Thompson
- Centre for Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK
- Stian Johnsen
- Centre for Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK
- Dean Barratt
- Centre for Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK
- Crispin Schneider
- Royal Free Campus, 9th Floor, Royal Free Hospital, UCL Medical School, Rowland Hill Street, London, UK
- Kurinchi Gurusamy
- Royal Free Campus, 9th Floor, Royal Free Hospital, UCL Medical School, Rowland Hill Street, London, UK
- Brian Davidson
- Royal Free Campus, 9th Floor, Royal Free Hospital, UCL Medical School, Rowland Hill Street, London, UK
- Sébastien Ourselin
- Centre for Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK
- David Hawkes
- Centre for Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK
- Matthew J Clarkson
- Centre for Medical Image Computing, Engineering Front Building, University College London, Malet Place, London, UK