1
Mallay MG, Landry TG, Brown JA. An 8 mm endoscopic histotripsy array with integrated high-resolution ultrasound imaging. Ultrasonics 2024;139:107275. PMID: 38508082. DOI: 10.1016/j.ultras.2024.107275.
Abstract
An 8 mm diameter, image-guided, annular array histotripsy transducer was fabricated and characterized. The array was laser etched on a 5 MHz, 1-3 dice-and-fill, PZT-5H/epoxy composite with a 45% volume fraction. Flexible PCBs were used to electrically connect to the array elements using wirebonds. The array was backed with a low acoustic impedance epoxy mixture. A 3.6 by 3.8 mm, 64-element, 30 MHz phased array imaging probe was positioned in the center hole to co-align the imaging plane with the bubble cloud produced by the therapy array. A custom 16-channel high voltage pulse generator was used to test the annular array for focal lengths ranging from 3 to 8 mm. An aluminum lens-focussed transducer with a 7 mm focal length was fabricated using the same piezocomposite and backing material and tested alongside the histotripsy array. Simulated results from COMSOL FEM models were compared to measured results for low voltage characterization of the array and lens-focussed transducer. The measured transmit sensitivity of the array ranged from 0.113 to 0.167 MPa/V, while that of the lens-focussed transducer was 0.192 MPa/V. Simulated values were 0.160 to 0.174 MPa/V and 0.169 MPa/V, respectively. The measured acoustic fields showed a significantly increased depth-of-field compared to the lens-focussed transducer, while the beamwidths of the array focus were comparable to those of the lens. The measured cavitation voltage in water was between 254 V and 498 V depending on the focal length, and 336 V for the lens-focussed transducer. The array had a lower cavitation voltage than the lens-focussed transducer for a comparable operating depth. The histotripsy array was tested in a tissue phantom and an in vivo rat brain. It was used to produce an elongated lesion in the brain by electronically steering the focal length from 3 to 8 mm axially.
Real time ultrasound imaging with a Doppler overlay was used to target the tissue and monitor ablation progress, and histology confirmed the targeted tissue was fully homogenized.
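The axial steering reported in this abstract follows directly from annular-array geometry: each ring is delayed so that its wavefront arrives at the chosen depth simultaneously with the others. A minimal sketch of that delay calculation (the ring radii and sound speed below are hypothetical, not the authors' element layout):

```python
import math

def annular_delays(radii_mm, focal_mm, c_mm_per_us=1.48):
    """Transmit delays (microseconds) that focus an annular array at a given
    axial depth: the outermost ring has the longest path to the focus and
    fires first (zero delay); inner rings are delayed so that all wavefronts
    coincide at the focus."""
    paths = [math.hypot(r, focal_mm) for r in radii_mm]  # element-to-focus distances
    longest = max(paths)
    return [(longest - p) / c_mm_per_us for p in paths]

# Hypothetical mean ring radii (mm) for an 8 mm aperture, focused at 5 mm depth
delays_us = annular_delays([0.5, 1.5, 2.5, 3.5], 5.0)
```

Recomputing these delays for focal depths from 3 to 8 mm gives the delay sets needed to sweep the focus axially, which is the mechanism behind the elongated lesion described above.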
Affiliation(s)
- Matthew G Mallay
- School of Biomedical Engineering, Dalhousie University, Halifax, NS, Canada.
- Thomas G Landry
- School of Biomedical Engineering, Dalhousie University, Halifax, NS, Canada
- Jeremy A Brown
- School of Biomedical Engineering, Dalhousie University, Halifax, NS, Canada; Department of Electrical and Computer Engineering, Dalhousie University, Halifax, NS, Canada
2
Zeineldin RA, Karar ME, Burgert O, Mathis-Ullrich F. NeuroIGN: Explainable Multimodal Image-Guided System for Precise Brain Tumor Surgery. J Med Syst 2024;48:25. PMID: 38393660. DOI: 10.1007/s10916-024-02037-3.
Abstract
Precise neurosurgical guidance is critical for successful brain surgeries and plays a vital role in all phases of image-guided neurosurgery (IGN). Neuronavigation software enables real-time tracking of surgical tools, displaying them with high precision relative to a virtual patient model. Therefore, this work focuses on the development of a novel multimodal IGN system, leveraging deep learning and explainable AI to enhance brain tumor surgery outcomes. The study establishes the clinical and technical requirements of the system for brain tumor surgeries. NeuroIGN adopts a modular architecture, including brain tumor segmentation, patient registration, and explainable output prediction, and integrates open-source packages into an interactive neuronavigational display. The NeuroIGN system components underwent validation and evaluation in both laboratory and simulated operating room (OR) settings. Experimental results demonstrated its accuracy in tumor segmentation and the success of ExplainAI in increasing the trust of medical professionals in deep learning. The proposed system was successfully assembled and set up within 11 min in a pre-clinical OR setting with a tracking accuracy of 0.5 (± 0.1) mm. NeuroIGN was also evaluated as highly useful, with a high frame rate (19 FPS) and real-time ultrasound imaging capabilities. In conclusion, this paper describes not only the development of an open-source multimodal IGN system but also demonstrates the innovative application of deep learning and explainable AI algorithms in enhancing neuronavigation for brain tumor surgeries. By seamlessly integrating pre- and intra-operative patient image data with cutting-edge interventional devices, our experiments underscore the potential for deep learning models to improve the surgical treatment of brain tumors and long-term post-operative outcomes.
Affiliation(s)
- Ramy A Zeineldin
- Department of Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander University Erlangen-Nürnberg, 91052, Erlangen, Germany.
- Research Group Computer Assisted Medicine (CaMed), Reutlingen University, 72762, Reutlingen, Germany.
- Faculty of Electronic Engineering (FEE), Menoufia University, Minuf, 32952, Egypt.
- Mohamed E Karar
- Faculty of Electronic Engineering (FEE), Menoufia University, Minuf, 32952, Egypt
- Oliver Burgert
- Research Group Computer Assisted Medicine (CaMed), Reutlingen University, 72762, Reutlingen, Germany
- Franziska Mathis-Ullrich
- Department of Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander University Erlangen-Nürnberg, 91052, Erlangen, Germany
3
He Z, Zhu YN, Chen Y, Chen Y, He Y, Sun Y, Wang T, Zhang C, Sun B, Yan F, Zhang X, Sun QF, Yang GZ, Feng Y. A deep unrolled neural network for real-time MRI-guided brain intervention. Nat Commun 2023;14:8257. PMID: 38086851. PMCID: PMC10716161. DOI: 10.1038/s41467-023-43966-w.
Abstract
Accurate navigation and targeting are critical for neurological interventions, including biopsy and deep brain stimulation. Real-time image guidance further improves surgical planning, and MRI is ideally suited for both pre- and intra-operative imaging. However, balancing spatial and temporal resolution is a major challenge for real-time interventional MRI (i-MRI). Here, we propose a deep unrolled neural network, dubbed LSFP-Net, for real-time i-MRI reconstruction. By integrating LSFP-Net and a custom-designed, MR-compatible interventional device into a 3 T MRI scanner, a real-time MRI-guided brain intervention system was developed. The performance of the system was evaluated using phantom and cadaver studies. 2D/3D real-time i-MRI was achieved with temporal resolutions of 80/732.8 ms, latencies of 0.4/3.66 s (including data communication, processing, and reconstruction time), and an in-plane spatial resolution of 1 × 1 mm². The results demonstrated that the proposed method enables real-time monitoring of the remote-controlled brain intervention and showed its potential to be readily integrated into diagnostic scanners for image-guided neurosurgery.
Affiliation(s)
- Zhao He
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, China
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, 200240, China
- National Engineering Research Center of Advanced Magnetic Resonance Technologies for Diagnosis and Therapy (NERC-AMRT), School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Ya-Nan Zhu
- School of Mathematical Sciences, MOE-LSC and Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, 200240, China
- Yu Chen
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, China
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, 200240, China
- National Engineering Research Center of Advanced Magnetic Resonance Technologies for Diagnosis and Therapy (NERC-AMRT), School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Yi Chen
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, China
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, 200240, China
- National Engineering Research Center of Advanced Magnetic Resonance Technologies for Diagnosis and Therapy (NERC-AMRT), School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Yuchen He
- Department of Mathematics, City University of Hong Kong, Kowloon, Hong Kong SAR
- Yuhao Sun
- Department of Neurosurgery, Ruijin Hospital affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
- Tao Wang
- Department of Neurosurgery, Ruijin Hospital affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
- Chengcheng Zhang
- Department of Neurosurgery, Ruijin Hospital affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
- Bomin Sun
- Department of Neurosurgery, Ruijin Hospital affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
- Fuhua Yan
- Department of Radiology, Ruijin Hospital affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
- Xiaoqun Zhang
- School of Mathematical Sciences, MOE-LSC and Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, 200240, China
- Shanghai Artificial Intelligence Laboratory, Shanghai, 200232, China
- Qing-Fang Sun
- Department of Neurosurgery, Ruijin Hospital affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
- Guang-Zhong Yang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, China
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, 200240, China
- National Engineering Research Center of Advanced Magnetic Resonance Technologies for Diagnosis and Therapy (NERC-AMRT), School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Yuan Feng
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, China
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, 200240, China
- National Engineering Research Center of Advanced Magnetic Resonance Technologies for Diagnosis and Therapy (NERC-AMRT), School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Department of Radiology, Ruijin Hospital affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
4
Bierbrier J, Eskandari M, Giovanni DAD, Collins DL. Toward Estimating MRI-Ultrasound Registration Error in Image-Guided Neurosurgery. IEEE Trans Ultrason Ferroelectr Freq Control 2023;70:999-1015. PMID: 37022005. DOI: 10.1109/tuffc.2023.3239320.
Abstract
Image-guided neurosurgery allows surgeons to view their tools in relation to preoperatively acquired patient images and models. To continue using neuronavigation systems throughout operations, image registration between preoperative images [typically magnetic resonance imaging (MRI)] and intraoperative images (e.g., ultrasound) is common to account for brain shift (deformations of the brain during surgery). We implemented a method to estimate MRI-ultrasound registration errors, with the goal of enabling surgeons to quantitatively assess the performance of linear or nonlinear registrations. To the best of our knowledge, this is the first dense error estimating algorithm applied to multimodal image registrations. The algorithm is based on a previously proposed sliding-window convolutional neural network that operates on a voxelwise basis. To create training data where the true registration error is known, simulated ultrasound images were created from preoperative MRI images and artificially deformed. The model was evaluated on artificially deformed simulated ultrasound data and real ultrasound data with manually annotated landmark points. The model achieved a mean absolute error (MAE) of 0.977 ± 0.988 mm and a correlation of 0.8 ± 0.062 on the simulated ultrasound data, and an MAE of 2.24 ± 1.89 mm and a correlation of 0.246 on the real ultrasound data. We discuss concrete areas to improve the results on real ultrasound data. Our progress lays the foundation for future developments and ultimately implementation of clinical neuronavigation systems.
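The MAE and correlation figures quoted in this abstract are standard point-wise comparisons between predicted and ground-truth registration error. A small illustration of both metrics with made-up values (not the study's data):

```python
import statistics

def mae(predicted, actual):
    """Mean absolute error between predicted and true registration errors (mm)."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)

def pearson_r(predicted, actual):
    """Pearson correlation between predicted and true errors."""
    mp, ma = statistics.mean(predicted), statistics.mean(actual)
    cov = sum((p - mp) * (a - ma) for p, a in zip(predicted, actual))
    norm = (sum((p - mp) ** 2 for p in predicted)
            * sum((a - ma) ** 2 for a in actual)) ** 0.5
    return cov / norm

predicted = [1.0, 2.1, 2.9, 4.2]   # hypothetical voxelwise predictions (mm)
actual    = [1.2, 2.0, 3.1, 4.0]   # hypothetical ground-truth errors (mm)
```

In the paper these statistics are computed densely over voxels (simulated data) or at manually annotated landmarks (real ultrasound); the arithmetic is the same.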
5
Taleb A, Guigou C, Leclerc S, Lalande A, Bozorg Grayeli A. Image-to-Patient Registration in Computer-Assisted Surgery of Head and Neck: State-of-the-Art, Perspectives, and Challenges. J Clin Med 2023;12:5398. PMID: 37629441. PMCID: PMC10455300. DOI: 10.3390/jcm12165398.
Abstract
Today, image-guided systems play a significant role in improving the outcome of diagnostic and therapeutic interventions. They provide crucial anatomical information during the procedure to decrease the size and extent of the approach, to reduce intraoperative complications, and to increase accuracy, repeatability, and safety. Image-to-patient registration is the first step in image-guided procedures. It establishes a correspondence between the patient's preoperative imaging and the intraoperative data. In the head-and-neck region, the presence of many sensitive structures such as the central nervous system or the neurosensory organs requires millimetric precision. This review evaluates the characteristics and performance of different registration methods used in the operating room for the head-and-neck region from the perspectives of accuracy, invasiveness, and processing time. Our work led to the conclusion that invasive marker-based methods are still considered the gold standard of image-to-patient registration. Surface-based methods are recommended for faster procedures and are applied to surface tissues, especially around the eyes. In the near future, computer vision technology is expected to enhance these systems by reducing human errors and cognitive load in the operating room.
Affiliation(s)
- Ali Taleb
- Team IFTIM, Institute of Molecular Chemistry of University of Burgundy (ICMUB UMR CNRS 6302), Univ. Bourgogne Franche-Comté, 21000 Dijon, France
- Caroline Guigou
- Team IFTIM, Institute of Molecular Chemistry of University of Burgundy (ICMUB UMR CNRS 6302), Univ. Bourgogne Franche-Comté, 21000 Dijon, France
- Otolaryngology Department, University Hospital of Dijon, 21000 Dijon, France
- Sarah Leclerc
- Team IFTIM, Institute of Molecular Chemistry of University of Burgundy (ICMUB UMR CNRS 6302), Univ. Bourgogne Franche-Comté, 21000 Dijon, France
- Alain Lalande
- Team IFTIM, Institute of Molecular Chemistry of University of Burgundy (ICMUB UMR CNRS 6302), Univ. Bourgogne Franche-Comté, 21000 Dijon, France
- Medical Imaging Department, University Hospital of Dijon, 21000 Dijon, France
- Alexis Bozorg Grayeli
- Team IFTIM, Institute of Molecular Chemistry of University of Burgundy (ICMUB UMR CNRS 6302), Univ. Bourgogne Franche-Comté, 21000 Dijon, France
- Otolaryngology Department, University Hospital of Dijon, 21000 Dijon, France
6
Shimamoto T, Sano Y, Yoshimitsu K, Masamune K, Muragaki Y. Precise Brain-shift Prediction by New Combination of W-Net Deep Learning for Neurosurgical Navigation. Neurol Med Chir (Tokyo) 2023;63:295-303. PMID: 37164701. PMCID: PMC10406456. DOI: 10.2176/jns-nmc.2022-0350.
Abstract
Brain tissue deformation during surgery significantly reduces the accuracy of image-guided neurosurgeries. We generated updated magnetic resonance images (uMR) in this study to compensate for brain shifts after dural opening using a convolutional neural network (CNN). This study included 248 consecutive patients who underwent craniotomy for initial intra-axial brain tumor removal and correspondingly underwent preoperative MR (pMR) and intraoperative MR (iMR) imaging. Deep learning using CNN to compensate for brain shift was performed using the pMR as input data, and iMR obtained after dural opening as the ground truth. For the tumor center (TC) and the maximum shift position (MSP), statistical analysis using the Wilcoxon signed-rank test was performed between the target registration error (TRE) for the pMR and iMR (i.e., the actual amount of brain shift) and the TRE for the uMR and iMR (i.e., residual error after compensation). The TRE at the TC decreased from 4.14 ± 2.31 mm to 2.31 ± 1.15 mm, and the TRE at the MSP decreased from 9.61 ± 3.16 mm to 3.71 ± 1.98 mm. The Wilcoxon signed-rank test of the pMR TRE and uMR TRE yielded a p-value less than 0.0001 for both the TC and MSP. Using a CNN model, we designed and implemented a new system that compensated for brain shifts after dural opening. Learning pMR and iMR with a CNN demonstrated the possibility of correcting the brain shift after dural opening.
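The statistical comparison described here, paired TREs before and after compensation tested with the Wilcoxon signed-rank test, reduces to a few lines. In practice `scipy.stats.wilcoxon` computes the statistic and p-value directly; a self-contained version of the W statistic, with invented TRE values rather than the paper's measurements, looks like this:

```python
def wilcoxon_w(before_mm, after_mm):
    """Wilcoxon signed-rank statistic W for paired samples: drop zero
    differences, rank the absolute differences (mid-ranks for ties), and
    return the smaller of the positive-rank and negative-rank sums."""
    diffs = [b - a for b, a in zip(before_mm, after_mm) if b != a]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        mid_rank = (i + j) / 2 + 1          # ranks are 1-based; ties share a mid-rank
        for k in range(i, j + 1):
            ranks[order[k]] = mid_rank
        i = j + 1
    w_pos = sum(r for r, d in zip(ranks, diffs) if d > 0)
    w_neg = sum(r for r, d in zip(ranks, diffs) if d < 0)
    return min(w_pos, w_neg)

# Hypothetical paired TREs (mm): pMR-vs-iMR before compensation, uMR-vs-iMR after
tre_before = [4.1, 5.0, 3.2, 6.3, 4.8]
tre_after  = [2.0, 2.5, 1.9, 3.0, 2.6]
```

When every difference has the same sign, W = 0, the strongest one-sided evidence of a systematic reduction, consistent with the p < 0.0001 reported above.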
Affiliation(s)
- Takafumi Shimamoto
- Faculty of Advanced Techno-Surgery, Institute of Advanced Biomedical Engineering and Science, Tokyo Women's Medical University
- FUJIFILM Healthcare Corporation
- Kitaro Yoshimitsu
- Faculty of Advanced Techno-Surgery, Institute of Advanced Biomedical Engineering and Science, Tokyo Women's Medical University
- Ken Masamune
- Faculty of Advanced Techno-Surgery, Institute of Advanced Biomedical Engineering and Science, Tokyo Women's Medical University
- Yoshihiro Muragaki
- Faculty of Advanced Techno-Surgery, Institute of Advanced Biomedical Engineering and Science, Tokyo Women's Medical University
- Department of Neurosurgery, Neurological Institute, Tokyo Women's Medical University
- Center for Advanced Medical Engineering Research and Development, Kobe University
7
Klint E, Richter J, Wårdell K. Combined Use of Frameless Neuronavigation and In Situ Optical Guidance in Brain Tumor Needle Biopsies. Brain Sci 2023;13:809. PMID: 37239281. DOI: 10.3390/brainsci13050809.
Abstract
Brain tumor needle biopsies are performed to retrieve tissue samples for neuropathological analysis. Although preoperative images guide the procedure, there are risks of hemorrhage and of sampling non-tumor tissue. This study aimed to develop and evaluate a method for frameless one-insertion needle biopsies with in situ optical guidance and to present a processing pipeline for combined postoperative analysis of optical, MRI, and neuropathological data. An optical system for quantified feedback on tissue microcirculation, gray-whiteness, and the presence of a tumor (protoporphyrin IX (PpIX) accumulation) with a one-insertion optical probe was integrated into a needle biopsy kit that was used for frameless neuronavigation. In Python, a pipeline for signal processing, image registration, and coordinate transformation was set up. The Euclidean distances between the pre- and postoperative coordinates were calculated. The proposed workflow was evaluated on static references, a phantom, and three patients with suspected high-grade gliomas. In total, six biopsy samples that overlapped with the region of the highest PpIX peak without increased microcirculation were taken. The samples were confirmed as tumorous, and postoperative imaging was used to define the biopsy locations. A 2.5 ± 1.2 mm difference between the pre- and postoperative coordinates was found. Optical guidance in frameless brain tumor biopsies could offer benefits such as quantified in situ indication of high-grade tumor tissue and indication of increased blood flow along the needle trajectory before tissue is removed. Additionally, postoperative visualization enables combined analysis of MRI, optical, and neuropathological data.
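The coordinate comparison at the end of this pipeline reduces to mapping points into a common space and taking Euclidean distances. A compact sketch (the 4×4 transform and coordinates below are illustrative stand-ins, not the study's registration output):

```python
import math

def apply_affine(T, p):
    """Map a 3-D point through a 4x4 homogeneous transform (row-major)."""
    x, y, z = p
    return tuple(T[i][0] * x + T[i][1] * y + T[i][2] * z + T[i][3] for i in range(3))

def euclidean_mm(p, q):
    """Distance between planned and measured coordinates, both in mm."""
    return math.dist(p, q)

# Identity rotation plus a small translation, standing in for a registration result
T = [[1, 0, 0, 0.5],
     [0, 1, 0, 0.0],
     [0, 0, 1, -0.3],
     [0, 0, 0, 1]]

planned = (12.0, -4.5, 30.2)                       # hypothetical pre-op target (mm)
measured = apply_affine(T, (12.6, -3.9, 31.8))     # hypothetical post-op point in pre-op space
gap_mm = euclidean_mm(planned, measured)
```

Averaging `gap_mm` over all biopsy sites gives a summary of the kind reported above (2.5 ± 1.2 mm).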
Affiliation(s)
- Elisabeth Klint
- Department of Biomedical Engineering, Linköping University, 581 85 Linköping, Sweden
- Johan Richter
- Department of Biomedical Engineering, Linköping University, 581 85 Linköping, Sweden
- Department of Neurosurgery, Linköping University Hospital, 581 85 Linköping, Sweden
- Karin Wårdell
- Department of Biomedical Engineering, Linköping University, 581 85 Linköping, Sweden
8
Zhang X, Sisniega A, Zbijewski WB, Lee J, Jones CK, Wu P, Han R, Uneri A, Vagdargi P, Helm PA, Luciano M, Anderson WS, Siewerdsen JH. Combining physics-based models with deep learning image synthesis and uncertainty in intraoperative cone-beam CT of the brain. Med Phys 2023;50:2607-2624. PMID: 36906915. PMCID: PMC10175241. DOI: 10.1002/mp.16351.
Abstract
BACKGROUND Image-guided neurosurgery requires high localization and registration accuracy to enable effective treatment and avoid complications. However, accurate neuronavigation based on preoperative magnetic resonance (MR) or computed tomography (CT) images is challenged by brain deformation occurring during the surgical intervention. PURPOSE To facilitate intraoperative visualization of brain tissues and deformable registration with preoperative images, a 3D deep learning (DL) reconstruction framework (termed DL-Recon) was proposed for improved intraoperative cone-beam CT (CBCT) image quality. METHODS The DL-Recon framework combines physics-based models with deep learning CT synthesis and leverages uncertainty information to promote robustness to unseen features. A 3D generative adversarial network (GAN) with a conditional loss function modulated by aleatoric uncertainty was developed for CBCT-to-CT synthesis. Epistemic uncertainty of the synthesis model was estimated via Monte Carlo (MC) dropout. Using spatially varying weights derived from epistemic uncertainty, the DL-Recon image combines the synthetic CT with an artifact-corrected filtered back-projection (FBP) reconstruction. In regions of high epistemic uncertainty, DL-Recon includes greater contribution from the FBP image. Twenty paired real CT and simulated CBCT images of the head were used for network training and validation, and experiments evaluated the performance of DL-Recon on CBCT images containing simulated and real brain lesions not present in the training data. Performance among learning- and physics-based methods was quantified in terms of structural similarity (SSIM) of the resulting image to diagnostic CT and Dice similarity metric (DSC) in lesion segmentation compared to ground truth. A pilot study was conducted involving seven subjects with CBCT images acquired during neurosurgery to assess the feasibility of DL-Recon in clinical data. 
RESULTS CBCT images reconstructed via FBP with physics-based corrections exhibited the usual challenges to soft-tissue contrast resolution due to image non-uniformity, noise, and residual artifacts. GAN synthesis improved image uniformity and soft-tissue visibility but was subject to error in the shape and contrast of simulated lesions that were unseen in training. Incorporation of aleatoric uncertainty in synthesis loss improved estimation of epistemic uncertainty, with variable brain structures and unseen lesions exhibiting higher epistemic uncertainty. The DL-Recon approach mitigated synthesis errors while maintaining improvement in image quality, yielding 15%-22% increase in SSIM (image appearance compared to diagnostic CT) and up to 25% increase in DSC in lesion segmentation compared to FBP. Clear gains in visual image quality were also observed in real brain lesions and in clinical CBCT images. CONCLUSIONS DL-Recon leveraged uncertainty estimation to combine the strengths of DL and physics-based reconstruction and demonstrated substantial improvements in the accuracy and quality of intraoperative CBCT. The improved soft-tissue contrast resolution could facilitate visualization of brain structures and support deformable registration with preoperative images, further extending the utility of intraoperative CBCT in image-guided neurosurgery.
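The core combination step described in METHODS, synthetic CT weighted against FBP by spatially varying epistemic uncertainty, can be written per voxel. The exponential mapping from uncertainty to weight below is our illustrative choice; the paper derives its weights from Monte Carlo dropout uncertainty:

```python
import math

def dl_recon_blend(synthetic_ct, fbp, epistemic_u, u_scale=1.0):
    """Per-voxel blend in the spirit of DL-Recon: trust the DL-synthesized CT
    where the model is certain, and fall back to the physics-based FBP
    reconstruction where epistemic uncertainty is high."""
    blended = []
    for s, f, u in zip(synthetic_ct, fbp, epistemic_u):
        w = math.exp(-u / u_scale)   # w -> 1 when certain, -> 0 when uncertain
        blended.append(w * s + (1.0 - w) * f)
    return blended

# Two hypothetical voxels (HU-like values): one certain, one highly uncertain
out = dl_recon_blend(synthetic_ct=[100.0, 100.0], fbp=[40.0, 40.0],
                     epistemic_u=[0.0, 6.0])
```

The second voxel comes out close to the FBP value, which mirrors the stated behavior that regions of high epistemic uncertainty (e.g., unseen lesions) receive a greater contribution from the FBP image.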
Affiliation(s)
- Xiaoxuan Zhang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Alejandro Sisniega
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Wojciech B. Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Junghoon Lee
- Department of Radiation Oncology, Johns Hopkins University, Baltimore, MD 21218, USA
- Craig K. Jones
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Pengwei Wu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Runze Han
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Ali Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Prasad Vagdargi
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Mark Luciano
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD 21218, USA
- William S. Anderson
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD 21218, USA
- Jeffrey H. Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD 21218, USA
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030
9
Cannon PC, Ferguson JM, Pitt EB, Shrand JA, Setia SA, Nimmagadda N, Barth EJ, Kavoussi NL, Galloway RL, Herrell SD, Webster RJ. A Safe Framework for Quantitative In Vivo Human Evaluation of Image Guidance. IEEE Open J Eng Med Biol 2023;5:133-139. PMID: 38487093. PMCID: PMC10939321. DOI: 10.1109/ojemb.2023.3271853.
Abstract
Goal: We present a new framework for in vivo image guidance evaluation and provide a case study on robotic partial nephrectomy. Methods: This framework (called the "bystander protocol") involves two surgeons, one who solely performs the therapeutic process without image guidance, and another who solely periodically collects data to evaluate image guidance. This isolates the evaluation from the therapy, so that in-development image guidance systems can be tested without risk of negatively impacting the standard of care. We provide a case study applying this protocol in clinical cases during robotic partial nephrectomy surgery. Results: The bystander protocol was performed successfully in 6 patient cases. We find average lesion centroid localization error with our IGS system to be 6.5 mm in vivo compared to our prior result of 3.0 mm in phantoms. Conclusions: The bystander protocol is a safe, effective method for testing in-development image guidance systems in human subjects.
Affiliation(s)
- Naren Nimmagadda
- Vanderbilt University Medical Center, Nashville, TN 37232, USA
- The Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
10
Watanabe G, Conching A, Nishioka S, Steed T, Matsunaga M, Lozanoff S, Noh T. Themes in neuronavigation research: A machine learning topic analysis. World Neurosurg X 2023;18:100182. PMID: 37013107. PMCID: PMC10066551. DOI: 10.1016/j.wnsx.2023.100182.
Abstract
Objective To understand trends in neuronavigation, we employed machine learning methods to perform a broad literature review that would be impractical by manual inspection. Methods PubMed was queried for articles with "Neuronavigation" in any field from inception through 2020. Articles were designated neuronavigation-focused (NF) if "Neuronavigation" was a major MeSH heading. The latent Dirichlet allocation (LDA) topic modeling technique was used to identify themes of NF research. Results There were 3896 articles, of which 1727 (44%) were designated as NF. Between 1999-2009 and 2010-2020, the number of NF publications experienced 80% growth. Between 2009-2014 and 2015-2020, there was a 0.3% decline. Eleven themes covered 1367 (86%) NF articles. "Resection of Eloquent Lesions" comprised the highest number of articles (243), followed by "Accuracy and Registration" (242), "Patient Outcomes" (156), "Stimulation and Mapping" (126), "Planning and Visualization" (123), "Intraoperative Tools" (104), "Placement of Ventricular Catheters" (86), "Spine Surgery" (85), "New Systems" (80), "Guided Biopsies" (61), and "Surgical Approach" (61). All topics except "Planning and Visualization", "Intraoperative Tools", and "New Systems" exhibited a monotonic positive trend. When analyzing subcategories, there were a greater number of clinical assessments or usages of existing neuronavigation systems (77%) than modifications or developments of new apparatuses (18%). Conclusion NF research appears to focus on the clinical assessment of neuronavigation and, to a lesser extent, on the development of new systems. Although neuronavigation has made significant strides, NF research output appears to have plateaued in the last decade.
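The topic-discovery step described in the Methods can be sketched with scikit-learn's `LatentDirichletAllocation` on a bag-of-words matrix. The toy corpus below stands in for the 1727 NF abstracts and is purely illustrative:

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical stand-ins for PubMed abstracts (not the study's corpus)
docs = [
    "registration accuracy fiducial error neuronavigation",
    "ventricular catheter placement neuronavigation guidance",
    "registration error fiducial accuracy tracking",
    "catheter placement ventricle shunt guidance",
    "intraoperative ultrasound brain shift registration",
]

vec = CountVectorizer()
X = vec.fit_transform(docs)                 # document-term count matrix

lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topics = lda.fit_transform(X)           # per-document topic mixture
topic_word = lda.components_                # per-topic word weights
```

Theme labels such as "Accuracy and Registration" are then assigned by inspecting each topic's highest-weighted terms (e.g., indexing `vec.get_feature_names_out()` by the largest entries of each row of `lda.components_`).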
11
Ogando-Rivas E, Castillo P, Beltran JQ, Arellano R, Galvan-Remigio I, Soto-Ulloa V, Diaz-Peregrino R, Ochoa-Hernandez D, Reyes-González P, Sayour E, Mitchell D. Evolution and Revolution of Imaging Technologies in Neurosurgery. Neurol Med Chir (Tokyo) 2022;62:542-551. PMCID: PMC9831622. DOI: 10.2176/jns-nmc.2022-0116.
Abstract
We understand only a small fraction of the events happening in our brains; therefore, despite all the progress made thus far, a whole array of questions remains. Nonetheless, neurosurgeons have invented new tools to circumvent the challenges that plagued their predecessors. With the manufacturing boom of the 20th century, technological innovations blossomed, enabling the neuroscientific community to study and operate upon the living brain in finer detail and with greater precision while avoiding harm to the nervous system. The purpose of this chronological review is to 1) raise awareness among future neurosurgeons of the latest advances in the field, 2) familiarize them with innovations such as augmented reality (AR), which should be included in education given its ready applicability in surgical training, and 3) help them become comfortable customizing these technologies to real-life cases, as with mixed reality.
Affiliation(s)
- Elizabeth Ogando-Rivas
- Department of Neurosurgery, Brain Tumor Immunotherapy Program, McKnight Brain Institute, University of Florida, Gainesville, FL, USA
- Paul Castillo
- Department of Pediatrics, UF Health Shands Children's Hospital, Gainesville, FL, USA
- Jesus Q. Beltran
- Unit of Stereotactic and Functional Neurosurgery, General Hospital of Mexico, Mexico City, Mexico
- Rodolfo Arellano
- Department of Neurosurgery, CostaMed Medical Group, Quintana Roo, Mexico
- Victor Soto-Ulloa
- Emergency Department, Hospital General #48, Instituto Mexicano del Seguro Social, Mexico City, Mexico
- Elias Sayour
- Department of Neurosurgery, Brain Tumor Immunotherapy Program, McKnight Brain Institute, University of Florida, Gainesville, FL, USA; Department of Pediatrics, UF Health Shands Children's Hospital, Gainesville, FL, USA
- Duane Mitchell
- Department of Neurosurgery, Brain Tumor Immunotherapy Program, McKnight Brain Institute, University of Florida, Gainesville, FL, USA
12
Cheng VW, de Pennington N, Zakaria R, Larkin JR, Serres S, Sarkar M, Kirkman MA, Bristow C, Croal P, Plaha P, Campo L, Chappell MA, Lord S, Jenkinson MD, Middleton MR, Sibson NR. VCAM-1-targeted MRI Improves Detection of the Tumor-brain Interface. Clin Cancer Res 2022; 28:2385-2396. [PMID: 35312755] [PMCID: PMC9662863] [DOI: 10.1158/1078-0432.ccr-21-4011]
Abstract
PURPOSE Despite optimal local therapy, tumor cell invasion into normal brain parenchyma frequently results in recurrence in patients with solid tumors. The aim of this study was to determine whether microvascular inflammation can be targeted to better delineate the tumor-brain interface through vascular cell adhesion molecule-1 (VCAM-1)-targeted MRI. EXPERIMENTAL DESIGN Intracerebral xenograft rat models of MDA231Br-GFP (breast cancer) brain metastasis and U87MG (glioblastoma) were used to histologically examine the tumor-brain interface and to test the efficacy of VCAM-1-targeted MRI in detecting this region. Human biopsy samples of the brain metastasis and glioblastoma margins were examined for endothelial VCAM-1 expression. RESULTS The interface between tumor and surrounding normal brain tissue exhibited elevated endothelial VCAM-1 expression and increased microvessel density. Tumor proliferation and stemness markers were also significantly upregulated at the tumor rim in the brain metastasis model. T2*-weighted MRI, following intravenous administration of VCAM-MPIO, highlighted the tumor-brain interface of both tumor models more extensively than gadolinium-DTPA-enhanced T1-weighted MRI. Sites of VCAM-MPIO binding, evident as hypointense signals on MR images, correlated spatially with endothelial VCAM-1 upregulation and bound VCAM-MPIO beads detected histologically. These findings were further validated in an orthotopic medulloblastoma model. Finally, the tumor-brain interface in human brain metastasis and glioblastoma samples was similarly characterized by microvascular inflammation, extending beyond the region detectable using conventional MRI. CONCLUSIONS This work illustrates the potential of VCAM-1-targeted MRI for improved delineation of the tumor-brain interface in both primary and secondary brain tumors.
Affiliation(s)
- Vinton W.T. Cheng
- Department of Oncology, University of Oxford, Oxford, United Kingdom
- Leeds Institute of Medical Research, University of Leeds, Leeds, United Kingdom
- Rasheed Zakaria
- Department of Neurosurgery, The Walton Centre NHS Foundation Trust, Liverpool, United Kingdom
- Faculty of Health and Life Sciences, University of Liverpool, Liverpool, United Kingdom
- James R. Larkin
- Department of Oncology, University of Oxford, Oxford, United Kingdom
- Sébastien Serres
- Department of Oncology, University of Oxford, Oxford, United Kingdom
- School of Life Sciences, University of Nottingham, Nottingham, United Kingdom
- Manjima Sarkar
- Department of Oncology, University of Oxford, Oxford, United Kingdom
- Matthew A. Kirkman
- Department of Oncology, University of Oxford, Oxford, United Kingdom
- UCL Institute for Education, University College London, London, United Kingdom
- Claire Bristow
- Department of Oncology, University of Oxford, Oxford, United Kingdom
- Paula Croal
- Mental Health and Clinical Neurosciences & Sir Peter Mansfield Imaging Centre, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Nottingham Biomedical Research Centre, Queens Medical Centre, University of Nottingham, Nottingham, United Kingdom
- Puneet Plaha
- Nuffield Department of Surgery, University of Oxford and Department of Neurosurgery, Oxford University Hospitals NHS Trust, Oxford, United Kingdom
- Leticia Campo
- Nottingham Biomedical Research Centre, Queens Medical Centre, University of Nottingham, Nottingham, United Kingdom
- Michael A. Chappell
- Mental Health and Clinical Neurosciences & Sir Peter Mansfield Imaging Centre, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Nottingham Biomedical Research Centre, Queens Medical Centre, University of Nottingham, Nottingham, United Kingdom
- Simon Lord
- Department of Oncology, University of Oxford, Oxford, United Kingdom
- Michael D. Jenkinson
- Department of Neurosurgery, The Walton Centre NHS Foundation Trust, Liverpool, United Kingdom
- Institute of Systems, Molecular and Integrative Biology, University of Liverpool, Liverpool, United Kingdom
- Mark R. Middleton
- Department of Oncology, University of Oxford, Oxford, United Kingdom
- Experimental Cancer Medicine Centre, Department of Oncology, University of Oxford, Oxford, United Kingdom
- Oxford National Institute for Health Research Comprehensive Biomedical Research Centre, Oxford, United Kingdom
- Nicola R. Sibson
- Department of Oncology, University of Oxford, Oxford, United Kingdom
13
Zeineldin RA, Karar ME, Elshaer Z, Coburger J, Wirtz CR, Burgert O, Mathis-Ullrich F. Explainability of deep neural networks for MRI analysis of brain tumors. Int J Comput Assist Radiol Surg 2022; 17:1673-1683. [PMID: 35460019] [PMCID: PMC9463287] [DOI: 10.1007/s11548-022-02619-x]
Abstract
PURPOSE Artificial intelligence (AI), in particular deep neural networks, has achieved remarkable results in several medical image analysis applications. Yet the lack of explainability of deep neural models is considered the principal barrier to applying these methods in clinical practice. METHODS In this study, we propose NeuroXAI, a framework for explainable AI of deep learning networks, to increase the trust of medical experts. NeuroXAI implements seven state-of-the-art explanation methods that provide visualization maps to help make deep learning models transparent. RESULTS NeuroXAI has been applied to two of the most widely investigated problems in brain imaging analysis, i.e., image classification and segmentation using the magnetic resonance (MR) modality. Visual attention maps from multiple XAI methods were generated and compared for both applications. A further experiment demonstrated that NeuroXAI can visualize information flow on the internal layers of a segmentation CNN. CONCLUSION Due to its open architecture, ease of implementation, and scalability to new XAI methods, NeuroXAI could assist radiologists and medical professionals in the detection and diagnosis of brain tumors in the clinical routine of cancer patients. The code of NeuroXAI is publicly accessible at https://github.com/razeineldin/NeuroXAI.
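One family of explanation methods of the kind such frameworks wrap is occlusion sensitivity: probe a black-box scorer by blanking one input region at a time and recording how much the score drops. The sketch below is not NeuroXAI's API; the toy "model" and 1-D "image" are stand-ins for a CNN and an MR slice, chosen so the example stays self-contained.

```python
# Minimal occlusion-sensitivity sketch (illustrative only; not NeuroXAI code).
def occlusion_map(score, image, patch=2):
    """Score drop caused by zeroing each patch; larger drop = more important region."""
    base = score(image)
    drops = []
    for start in range(0, len(image), patch):
        occluded = (image[:start]
                    + [0.0] * min(patch, len(image) - start)
                    + image[start + patch:])
        drops.append(base - score(occluded))
    return drops

toy_model = lambda x: sum(x[2:4])        # toy "model" that only reads positions 2-3
img = [0.1, 0.2, 0.9, 0.8, 0.1, 0.0]
sens = occlusion_map(toy_model, img)     # peak at the patch covering positions 2-3
```

The resulting map plays the role of the visual attention maps discussed above: it localizes which input regions drive the model's output.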
Affiliation(s)
- Ramy A Zeineldin
- Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology (KIT), 76131, Karlsruhe, Germany
- Research Group Computer Assisted Medicine (CaMed), Reutlingen University, 72762, Reutlingen, Germany
- Faculty of Electronic Engineering (FEE), Menoufia University, Menouf, 32952, Egypt
- Mohamed E Karar
- Faculty of Electronic Engineering (FEE), Menoufia University, Menouf, 32952, Egypt
- Ziad Elshaer
- Department of Neurosurgery, University of Ulm, 89312, Günzburg, Germany
- Jan Coburger
- Department of Neurosurgery, University of Ulm, 89312, Günzburg, Germany
- Christian R Wirtz
- Department of Neurosurgery, University of Ulm, 89312, Günzburg, Germany
- Oliver Burgert
- Research Group Computer Assisted Medicine (CaMed), Reutlingen University, 72762, Reutlingen, Germany
- Franziska Mathis-Ullrich
- Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology (KIT), 76131, Karlsruhe, Germany
14
Nawaz M, Nazir T, Masood M, Mehmood A, Mahum R, Khan MA, Kadry S, Thinnukool O. Analysis of Brain MRI Images Using Improved CornerNet Approach. Diagnostics (Basel) 2021; 11:1856. [PMID: 34679554] [PMCID: PMC8535141] [DOI: 10.3390/diagnostics11101856]
Abstract
A brain tumor is a deadly disease caused by the abnormal growth of brain cells, which affects the human blood cells and nerves. Timely and precise detection of brain tumors is important to avoid complex and painful treatment procedures, as it can assist doctors in surgical planning. Manual brain tumor detection is a time-consuming activity that is highly dependent on the availability of domain experts. Therefore, there is a pressing need to design accurate automated systems for the detection and classification of various types of brain tumors. However, the exact localization and categorization of brain tumors is a challenging job due to extensive variations in their size, position, and structure. To deal with these challenges, we present a novel approach, namely, a DenseNet-41-based CornerNet framework. The proposed solution comprises three steps. Initially, we develop annotations to locate the exact region of interest. In the second step, a custom CornerNet with DenseNet-41 as its base network is introduced to extract deep features from the suspected samples. In the last step, the one-stage detector CornerNet is employed to locate and classify the brain tumors. To evaluate the proposed method, we utilized two databases, namely, the Figshare and Brain MRI datasets, and attained average accuracies of 98.8% and 98.5%, respectively. Both qualitative and quantitative analyses show that our approach is more proficient and consistent in detecting and classifying various types of brain tumors than other recent techniques.
Affiliation(s)
- Marriam Nawaz
- Department of Computer Science, University of Engineering and Technology, Taxila 47050, Pakistan
- Tahira Nazir
- Department of Computer Science, University of Engineering and Technology, Taxila 47050, Pakistan
- Momina Masood
- Department of Computer Science, University of Engineering and Technology, Taxila 47050, Pakistan
- Awais Mehmood
- Department of Computer Science, University of Engineering and Technology, Taxila 47050, Pakistan
- Rabbia Mahum
- Department of Computer Science, University of Engineering and Technology, Taxila 47050, Pakistan
- Seifedine Kadry
- Department of Applied Data Science, Noroff University College, 4612 Kristiansand, Norway
- Orawit Thinnukool
- Research Group of Embedded Systems and Mobile Application in Health Science, College of Arts, Media and Technology, Chiang Mai University, Chiang Mai 50200, Thailand
15
Mallay MG, Woodacre JK, Landry TG, Campbell NA, Brown JA. A Dual-Frequency Lens-Focused Endoscopic Histotripsy Transducer. IEEE Trans Ultrason Ferroelectr Freq Control 2021; 68:2906-2916. [PMID: 33961553] [DOI: 10.1109/tuffc.2021.3078326]
Abstract
A forward-looking miniature histotripsy transducer has been developed that incorporates an acoustic lens and dual-frequency stacked transducers. The acoustic lens increases the peak negative pressure through focal gain, and the dual-frequency transducers increase peak negative pressure by summing the pressure generated by each transducer individually. Four lens designs, each with an f-number of approximately 1, were evaluated on a PZT5A composite transducer. The finite-element model (FEM) predicted axial beamwidths of 1.61, 2.40, 2.84, and 2.36 mm for the resin conventional, resin Fresnel, silicone conventional, and silicone Fresnel lenses, respectively; the measured axial beamwidths were 1.30, 2.28, 2.71, and 2.11 mm, respectively. Radial beamwidths from the model were between 0.32 and 0.35 mm, while measurements agreed to within 0.2 mm. The measured peak negative pressure was 0.150, 0.124, 0.160, and 0.160 MPa/V for the resin conventional, resin Fresnel, silicone conventional, and silicone Fresnel lenses, respectively. For the dual-frequency device, the 5-MHz (therapy) transducer had a measured peak negative pressure of 0.136 MPa/V for the PZT5A composite and 0.163 MPa/V for the PMN-PT composite. The 1.2-MHz (pump) transducer had a measured peak negative pressure of 0.028 MPa/V. The pump transducer significantly lowered the cavitation threshold of the therapy transducer. The dual-frequency device was tested on an ex vivo rat brain, ablating tissue at up to 4-mm depth, with lesion sizes as small as [Formula: see text].
16
Gosal JS, Tiwari S, Sharma T, Agrawal M, Garg M, Mahal S, Bhaskar S, Sharma RK, Janu V, Jha DK. Simulation of surgery for supratentorial gliomas in virtual reality using a 3D volume rendering technique: a poor man's neuronavigation. Neurosurg Focus 2021; 51:E23. [PMID: 34333461] [DOI: 10.3171/2021.5.focus21236]
Abstract
OBJECTIVE Different techniques of performing image-guided neurosurgery exist, namely, neuronavigation systems, intraoperative ultrasound, and intraoperative MRI, each with its limitations. Except for ultrasound, other methods are expensive. Three-dimensional virtual reconstruction and surgical simulation using 3D volume rendering (VR) is an economical and excellent technique for preoperative surgical planning and image-guided neurosurgery. In this article, the authors discuss several nuances of the 3D VR technique that have not yet been described. METHODS The authors included 6 patients with supratentorial gliomas who underwent surgery between January 2019 and March 2021. Preoperative clinical data, including patient demographics, preoperative planning details (done using the VR technique), and intraoperative details, including relevant photos and videos, were collected. RadiAnt software was used for generating virtual 3D images using the VR technique on a computer running Microsoft Windows. RESULTS The 3D VR technique assists in glioma surgery with a preoperative simulation of the skin incision and craniotomy, virtual cortical surface marking and navigation for deep-seated gliomas, preoperative visualization of morbid cortical surface and venous anatomy in surfacing gliomas, identifying the intervenous surgical corridor in both surfacing and deep-seated gliomas, and pre- and postoperative virtual 3D images highlighting the exact spatial geometric residual tumor location and extent of resection for low-grade gliomas (LGGs). CONCLUSIONS Image-guided neurosurgery with the 3D VR technique using RadiAnt software is an economical, easy-to-learn, and user-friendly method of simulating glioma surgery, especially in resource-constrained countries where expensive neuronavigation systems are not readily available. Apart from cortical sulci/gyri anatomy, FLAIR sequences are ideal for the 3D visualization of nonenhancing diffuse LGGs using the VR technique. 
In addition to cortical vessels (especially veins), contrast MRI sequences are perfect for the 3D visualization of contrast-enhancing high-grade gliomas.
Affiliation(s)
- Sarbesh Tiwari
- Diagnostic & Interventional Radiology, All India Institute of Medical Sciences (AIIMS), Jodhpur, Rajasthan, India
- Sayani Mahal
- Diagnostic & Interventional Radiology, All India Institute of Medical Sciences (AIIMS), Jodhpur, Rajasthan, India
17
Anthony D, Louis RG, Shekhtman Y, Steineke T, Frempong-Boadu A, Steinberg GK. Patient-specific virtual reality technology for complex neurosurgical cases: illustrative cases. J Neurosurg Case Lessons 2021; 1:CASE21114. [PMID: 36046517] [PMCID: PMC9394696] [DOI: 10.3171/case21114]
Abstract
BACKGROUND Virtual reality (VR) offers an interactive environment for visualizing the intimate three-dimensional (3D) relationship between a patient's pathology and the surrounding anatomy. The authors present a model for using personalized VR technology, applied across the neurosurgical treatment continuum from the initial consultation to preoperative surgical planning, then to intraoperative navigation, and finally to postoperative visits, for various tumor and vascular pathologies. OBSERVATIONS Five adult patients undergoing procedures for spinal cord cavernoma, clinoidal meningioma, anaplastic oligodendroglioma, giant aneurysm, and arteriovenous malformation were included. For each case, 360-degree VR (360°VR) environments developed using Surgical Theater were used for patient consultation, preoperative planning, and/or intraoperative 3D navigation. The custom 360°VR model was rendered from the patient's preoperative imaging. In two cases, a plan initially based on conventional Digital Imaging and Communications in Medicine (DICOM) imaging changed after review of the patient's 360°VR model. LESSONS Live 360° visualization with Surgical Theater in conjunction with surgical navigation helped validate the decisions made intraoperatively. The 360°VR models provided visualization to better understand each lesion's 3D anatomy, as well as to plan and execute the safest patient-specific approach rather than a less detailed, more standardized one. In all cases, preoperative planning using the patient's 360°VR model had a significant impact on the surgical approach.
Affiliation(s)
- Diana Anthony
- Department of Neurosurgery and Stanford Stroke Center, Stanford University School of Medicine, Stanford, California
- Robert G. Louis
- Pickup Family Neuroscience Institute, Hoag Memorial Hospital Newport Beach, Newport Beach, California
- Yevgenia Shekhtman
- Neuroscience Institute, Hackensack Meridian JFK Medical Center, Edison, New Jersey
- Thomas Steineke
- Neuroscience Institute, Hackensack Meridian JFK Medical Center, Edison, New Jersey
- Gary K. Steinberg
- Department of Neurosurgery and Stanford Stroke Center, Stanford University School of Medicine, Stanford, California
18
Jayatilake SMDAC, Ganegoda GU. Involvement of Machine Learning Tools in Healthcare Decision Making. J Healthc Eng 2021; 2021:6679512. [PMID: 33575021] [PMCID: PMC7857908] [DOI: 10.1155/2021/6679512]
Abstract
Many diseases today must be identified at an early stage so that relevant treatment can begin; otherwise, they may become incurable and deadly. There is therefore a need to analyze complex medical data, medical reports, and medical images in less time and with greater accuracy. In some instances, certain abnormalities cannot be directly recognized by humans at all. In healthcare, machine learning approaches are used for computational decision making in such situations, where crucial analysis of medical data is needed to reveal hidden relationships or abnormalities that are not visible to humans. Implementing algorithms to perform such tasks is itself difficult, but what makes it even more challenging is to increase the accuracy of an algorithm while decreasing its execution time. The need to process large amounts of medical data was an early driver of machine learning's adoption in the biological domain. Since then, the biology and biomedical fields have advanced by uncovering new knowledge and identifying relationships that had never been observed before. Attention is now turning toward treating patients based not only on the type of disease but also on their genetics, an approach known as precision medicine. Modifications to machine learning algorithms are performed and tested daily to improve their performance in analyzing and presenting more accurate information. In the healthcare field, machine learning is involved at every stage, from information extraction from medical documents to the prediction or diagnosis of a disease. Medical imaging is one area that was greatly improved by the integration of machine learning algorithms into computational biology.
Nowadays, many disease diagnoses are performed by medical image processing using machine learning algorithms. In addition, patient care, resource allocation, and research on treatments for various diseases rely on machine learning-based computational decision making. This paper discusses the various machine learning algorithms and approaches used for decision making in the healthcare sector, along with the involvement of machine learning in current healthcare applications. It is evident that neural network-based deep learning methods have performed extremely well in computational biology, supported by the high processing power of modern computers, and are extensively applied because of their high predictive accuracy and reliability. Taken together, these observations indicate that computational biology and biomedicine-based decision making in healthcare have become dependent on machine learning algorithms and thus cannot be separated from the field of artificial intelligence.
19
Wittek A, Bourantas G, Zwick BF, Joldes G, Esteban L, Miller K. Mathematical modeling and computer simulation of needle insertion into soft tissue. PLoS One 2020; 15:e0242704. [PMID: 33351854] [PMCID: PMC7755224] [DOI: 10.1371/journal.pone.0242704]
Abstract
In this study, we present a kinematic approach for modeling needle insertion into soft tissues. The kinematic approach allows the problem to be posed as Dirichlet-type (i.e., driven by enforced motion of boundaries) and therefore weakly sensitive to unknown properties of the tissues and the needle-tissue interaction. The parameters used in the kinematic approach are straightforward to determine from images. Our method uses the Meshless Total Lagrangian Explicit Dynamics (MTLED) algorithm to compute soft tissue deformations. The proposed scheme was validated against experiments of needle insertion into silicone gel samples. We also present a simulation of needle insertion into the brain demonstrating the method's insensitivity to the assumed mechanical properties of tissue.
Affiliation(s)
- Adam Wittek
- Intelligent Systems for Medicine Laboratory, The University of Western Australia, Perth, Western Australia, Australia
- George Bourantas
- Intelligent Systems for Medicine Laboratory, The University of Western Australia, Perth, Western Australia, Australia
- Benjamin F Zwick
- Intelligent Systems for Medicine Laboratory, The University of Western Australia, Perth, Western Australia, Australia
- Grand Joldes
- Intelligent Systems for Medicine Laboratory, The University of Western Australia, Perth, Western Australia, Australia
- Lionel Esteban
- Commonwealth Scientific and Industrial Research Organisation (CSIRO), Medical XCT Facility, Kensington, Western Australia, Australia
- Karol Miller
- Intelligent Systems for Medicine Laboratory, The University of Western Australia, Perth, Western Australia, Australia
20
Wang J, Liu H, Ke J, Hu L, Zhang S, Yang B, Sun S, Guo N, Ma F. Image-guided cochlear access by non-invasive registration: a cadaveric feasibility study. Sci Rep 2020; 10:18318. [PMID: 33110188] [PMCID: PMC7591497] [DOI: 10.1038/s41598-020-75530-7]
Abstract
Image-guided cochlear implant surgery is expected to reduce the volume of mastoidectomy, accelerate recovery, and improve safety. The purpose of this cadaveric study was to investigate the safety and effectiveness of image-guided cochlear implant surgery using a non-invasive registration method. We developed a visual positioning frame that uses the maxillary dentition as a registration tool and completed the tunnel experiments on 5 cadaver specimens (8 cases in total). The accuracy at the entry point and the target point was 0.471 ± 0.276 mm and 0.671 ± 0.268 mm, respectively. The shortest distances from the margin of the tunnel to the facial nerve and the ossicular chain were 0.790 ± 0.709 mm and 1.960 ± 0.630 mm, respectively. All facial nerves, tympanic membranes, and ossicular chains were completely preserved. High accuracy was achieved in this preliminary study, suggesting that the non-invasive registration method can meet the accuracy requirements for cochlear implant surgery. Based on this accuracy, we speculate that our method could also be applied to neurosurgery, orbitofacial surgery, lateral skull base surgery, and anterior skull base surgery.
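The entry- and target-point accuracies reported above are, in essence, Euclidean distances between planned and achieved positions, summarized as mean ± SD. A minimal sketch follows; the coordinates are invented for illustration (the study measured such distances on cadaveric imaging, not with this code):

```python
import math

# Illustrative only: coordinates (mm) are made up, not study data.
def point_error(planned, achieved):
    """Euclidean distance (mm) between planned and achieved 3-D points."""
    return math.dist(planned, achieved)

def mean_sd(values):
    """Mean and sample standard deviation, as in a 'mean ± SD' summary."""
    m = sum(values) / len(values)
    sd = (sum((v - m) ** 2 for v in values) / (len(values) - 1)) ** 0.5
    return m, sd

entry_errors = [point_error((0, 0, 0), (0.3, 0.2, 0.1)),
                point_error((10, 5, 2), (10.4, 5.1, 2.3))]
m, sd = mean_sd(entry_errors)  # reported in the paper's 'mean ± SD' form
```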
Affiliation(s)
- Jiang Wang
- Department of Otorhinolaryngology - Head and Neck Surgery, Peking University Third Hospital, Peking University, No. 49 North Garden Road, Haidian District, Beijing, 100191, China
- Hongsheng Liu
- The Robotics Institute, School of Mechanical Engineering and Automation, Beihang University, Beijing, China
- Jia Ke
- Department of Otorhinolaryngology - Head and Neck Surgery, Peking University Third Hospital, Peking University, No. 49 North Garden Road, Haidian District, Beijing, 100191, China
- Lei Hu
- The Robotics Institute, School of Mechanical Engineering and Automation, Beihang University, Beijing, China
- Shaoxing Zhang
- Department of Otorhinolaryngology - Head and Neck Surgery, Peking University Third Hospital, Peking University, No. 49 North Garden Road, Haidian District, Beijing, 100191, China
- Biao Yang
- The Robotics Institute, School of Mechanical Engineering and Automation, Beihang University, Beijing, China
- Shilong Sun
- Department of Otorhinolaryngology - Head and Neck Surgery, Peking University Third Hospital, Peking University, No. 49 North Garden Road, Haidian District, Beijing, 100191, China
- Na Guo
- The Robotics Institute, School of Mechanical Engineering and Automation, Beihang University, Beijing, China
- Furong Ma
- Department of Otorhinolaryngology - Head and Neck Surgery, Peking University Third Hospital, Peking University, No. 49 North Garden Road, Haidian District, Beijing, 100191, China
21
Zuo F, Hu K, Kong J, Zhang Y, Wan J. Surgical Management of Brain Metastases in the Perirolandic Region. Front Oncol 2020; 10:572644. [PMID: 33194673] [PMCID: PMC7649351] [DOI: 10.3389/fonc.2020.572644]
Abstract
Brain metastases (BM) are the most frequent intracranial tumors and may result in significant morbidity and mortality when the lesions involve the perirolandic region. Surgical intervention for BM in the perirolandic region remains under discussion, even though radiotherapy may not achieve prompt relief of mass effect or avoid necrosis and brain edema. More recently, several researchers have attempted to evaluate the benefit of surgery for BM within this pivotal sensorimotor area. Nevertheless, data are sparse, and an optimal treatment paradigm has not yet been widely described. With advances in intraoperative neuroimaging and neurophysiology, resection of BM in the perirolandic region has proven safe and efficacious, sparing this eloquent area while retaining reasonably low morbidity rates. Although the management of BM has become much more tailored and multimodal, surgery remains the cornerstone, and the principles of resection as well as the indications for surgery should be well defined. This is the first review concerning the characteristics of BM involving the perirolandic region and the current impact of surgical therapy for these lesions. Future perspectives on advanced neurosurgical techniques are also presented.
Affiliation(s)
- Fuxing Zuo
- Department of Neurosurgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Ke Hu
- Department of Neurosurgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Jianxin Kong
- Department of Neurosurgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Ye Zhang
- Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Jinghai Wan
- Department of Neurosurgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
22
Zeineldin RA, Karar ME, Coburger J, Wirtz CR, Mathis-Ullrich F, Burgert O. Towards automated correction of brain shift using deep deformable magnetic resonance imaging-intraoperative ultrasound (MRI-iUS) registration. Current Directions in Biomedical Engineering 2020. DOI: 10.1515/cdbme-2020-0039.
Abstract
Intraoperative brain deformation, so-called brain shift, limits the applicability of preoperative magnetic resonance imaging (MRI) data for assisting intraoperative ultrasound (iUS) guidance during neurosurgery. This paper proposes a deep learning-based approach for fast and accurate deformable registration of preoperative MRI to iUS images to correct brain shift. Based on a 3D convolutional neural network architecture, the proposed deep MRI-iUS registration method has been successfully tested and evaluated on the retrospective evaluation of cerebral tumors (RESECT) dataset. This study showed that the proposed method outperforms registration methods from previous studies, with an average mean squared error (MSE) of 85. Moreover, the method can register three 3D MRI-iUS pairs in less than a second, improving the expected outcomes of brain surgery.
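The MSE criterion reported for this registration method can be illustrated with a minimal sketch. This is an assumption for illustration only, not the paper's actual network: here the "deformation" is restricted to a global integer translation, found by brute-force search over shifts that minimize the mean squared error between a fixed and a moving image.

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two equally sized images."""
    return float(np.mean((a - b) ** 2))

def best_shift(fixed, moving, max_shift=5):
    """Brute-force search over integer 2-D shifts minimizing MSE.

    A toy stand-in for learned deformable registration: the same
    similarity measure, but only rigid integer translations are tried.
    """
    best, best_err = (0, 0), mse(fixed, moving)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = mse(fixed, np.roll(moving, (dy, dx), axis=(0, 1)))
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best, best_err
```

A deformable method generalizes this idea by predicting a dense displacement field instead of a single shift, but the quality measure driving it is the same.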
Affiliation(s)
- Ramy A. Zeineldin
- Research Group Computer Assisted Medicine, Reutlingen University, Reutlingen, Germany
- Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology, Karlsruhe, Germany
- Mohamed E. Karar
- Faculty of Electronic Engineering (FEE), Menoufia University, Menouf, Egypt
- Jan Coburger
- Department of Neurosurgery, University of Ulm, Günzburg, Germany
- Franziska Mathis-Ullrich
- Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology, Karlsruhe, Germany
- Oliver Burgert
- Research Group Computer Assisted Medicine, Reutlingen University, Reutlingen, Germany
23
Liu P, Li C, Xiao C, Zhang Z, Ma J, Gao J, Shao P, Valerio I, Pawlik TM, Ding C, Yilmaz A, Xu R. A Wearable Augmented Reality Navigation System for Surgical Telementoring Based on Microsoft HoloLens. Ann Biomed Eng 2020; 49:287-298. PMID: 32504141; DOI: 10.1007/s10439-020-02538-5.
Abstract
This paper reports a new type of augmented reality (AR) system that integrates a Microsoft HoloLens device with a three-dimensional (3D) point tracking module for medical training and telementored surgery. In this system, a stereo camera is used to track the 3D position of a scalpel and transfer its coordinates wirelessly to a HoloLens device. In the scenario of surgical training, a virtual surgical scene with pre-recorded surgical annotations is superimposed with the actual surgical scene so that the surgical trainee is able to operate following virtual instructions. In the scenario of telementored surgery, the virtual surgical scene is co-registered with the actual surgical scene so that the virtual scalpel remotely mentored by an experienced surgeon provides the AR guidance for the inexperienced on-site operator. The performance characteristics of the proposed AR telementoring system are verified by benchtop experiments. The clinical applicability of the proposed system in telementored skin grafting surgery and fasciotomy is validated in a New Zealand rabbit model. Our benchtop and in vivo experiments demonstrate the potential to improve surgical performance and reduce healthcare disparities in remote areas with limited resources.
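The 3D point tracking module described above rests on standard two-view triangulation. As a hedged sketch (the paper's stereo calibration and API are not specified, so the projection matrices here are assumptions), the linear (DLT) method recovers a 3D point from its pixel coordinates in two calibrated views:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two pinhole views.

    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) image points.
    Each view contributes two linear constraints on the homogeneous
    3D point; the solution is the null vector of the stacked system.
    """
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

In a tracking system like the one described, this step runs per frame on the detected scalpel marker before the coordinates are streamed to the headset.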
Affiliation(s)
- Peng Liu
- Department of Precision Machinery and Instrumentation, University of Science and Technology of China, Hefei, Anhui, China
- Chenmeng Li
- Department of Precision Machinery and Instrumentation, University of Science and Technology of China, Hefei, Anhui, China; Department of Biomedical Engineering, The Ohio State University, Columbus, USA
- Changlin Xiao
- Photogrammetric Computer Vision Laboratory, The Ohio State University, Columbus, USA
- Zeshu Zhang
- Department of Precision Machinery and Instrumentation, University of Science and Technology of China, Hefei, Anhui, China; Department of Biomedical Engineering, The Ohio State University, Columbus, USA
- Junqi Ma
- Department of Precision Machinery and Instrumentation, University of Science and Technology of China, Hefei, Anhui, China
- Jian Gao
- Department of Precision Machinery and Instrumentation, University of Science and Technology of China, Hefei, Anhui, China
- Pengfei Shao
- Department of Precision Machinery and Instrumentation, University of Science and Technology of China, Hefei, Anhui, China
- Ian Valerio
- Department of Surgery, The Ohio State University, Columbus, USA
- Chengbiao Ding
- Department of Rehabilitation Medicine, The Second Hospital of Anhui Medical University, Hefei, Anhui, China
- Alper Yilmaz
- Photogrammetric Computer Vision Laboratory, The Ohio State University, Columbus, USA
- Ronald Xu
- Department of Precision Machinery and Instrumentation, University of Science and Technology of China, Hefei, Anhui, China; Department of Biomedical Engineering, The Ohio State University, Columbus, USA
24
Zeineldin RA, Karar ME, Coburger J, Wirtz CR, Burgert O. DeepSeg: deep neural network framework for automatic brain tumor segmentation using magnetic resonance FLAIR images. Int J Comput Assist Radiol Surg 2020; 15:909-920. PMID: 32372386; PMCID: PMC7303084; DOI: 10.1007/s11548-020-02186-z.
Abstract
PURPOSE Gliomas are the most common and aggressive type of brain tumor owing to their infiltrative nature and rapid progression. Distinguishing tumor boundaries from healthy cells remains a challenging task in the clinical routine. The fluid-attenuated inversion recovery (FLAIR) MRI modality can provide the physician with information about tumor infiltration. Therefore, this paper proposes a new generic deep learning architecture, DeepSeg, for fully automated detection and segmentation of brain lesions using FLAIR MRI data. METHODS The developed DeepSeg is a modular decoupling framework. It consists of two connected core parts based on an encoding-decoding relationship. The encoder part is a convolutional neural network (CNN) responsible for spatial information extraction. The resulting semantic map is passed to the decoder part to obtain the full-resolution probability map. Based on a modified U-Net architecture, different CNN models such as residual neural networks (ResNet), dense convolutional networks (DenseNet), and NASNet have been utilized in this study. RESULTS The proposed deep learning architectures were successfully tested and evaluated online on the MRI datasets of the brain tumor segmentation (BraTS 2019) challenge, comprising 336 training cases and 125 validation cases. The Dice and Hausdorff distance scores of the obtained segmentation results are about 0.81 to 0.84 and 9.8 to 19.7, respectively. CONCLUSION This study demonstrated the feasibility and comparative performance of applying different deep learning models within the new DeepSeg framework for automated brain tumor segmentation in FLAIR MR images. DeepSeg is open source and freely available at https://github.com/razeineldin/DeepSeg/.
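The Dice score reported above is straightforward to compute. A minimal sketch for binary masks (the BraTS evaluation also uses per-region variants not shown here):

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks.

    Dice = 2|A ∩ B| / (|A| + |B|); 1.0 is perfect overlap, 0.0 none.
    """
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:  # both masks empty: conventionally perfect agreement
        return 1.0
    inter = np.logical_and(pred, truth).sum()
    return float(2.0 * inter / denom)
```

The Hausdorff distance complements Dice by penalizing boundary outliers, which is why segmentation challenges typically report both.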
Affiliation(s)
- Ramy A Zeineldin
- Research Group Computer Assisted Medicine (CaMed), Reutlingen University, 72762, Reutlingen, Germany
- Mohamed E Karar
- Faculty of Electronic Engineering (FEE), Menoufia University, Menouf, 32952, Egypt
- Jan Coburger
- Department of Neurosurgery, University of Ulm, 89312, Günzburg, Germany
- Christian R Wirtz
- Department of Neurosurgery, University of Ulm, 89312, Günzburg, Germany
- Oliver Burgert
- Research Group Computer Assisted Medicine (CaMed), Reutlingen University, 72762, Reutlingen, Germany
25
Alfonso-Garcia A, Bec J, Sridharan S, Hartl B, Unger J, Bobinski M, Lechpammer M, Girgis F, Boggan J, Marcu L. Real-time augmented reality for delineation of surgical margins during neurosurgery using autofluorescence lifetime contrast. J Biophotonics 2020; 13:e201900108. PMID: 31304655; PMCID: PMC7510838; DOI: 10.1002/jbio.201900108.
Abstract
Current clinical brain imaging techniques used for surgical planning of tumor resection lack intraoperative, real-time feedback; hence surgeons ultimately rely on subjective evaluation to identify tumor areas and margins. We report a fluorescence lifetime imaging (FLIm) instrument (excitation: 355 nm; emission spectral bands: 390/40 nm, 470/28 nm, 542/50 nm and 629/53 nm) that integrates with surgical microscopes to provide real-time intraoperative augmentation of the surgical field of view with fluorescence-derived parameters encoding diagnostic information. We show the functionality and safety features of this instrument during neurosurgical procedures in patients undergoing craniotomy for the resection of brain tumors and/or tissue with radiation damage. We demonstrate in three case studies the ability of this instrument to resolve distinct tissue types and pathology, including cortex, white matter, tumor and radiation-induced necrosis. In particular, two patients with radiation-induced necrosis exhibited longer fluorescence lifetimes and increased optical redox ratio in the necrotic tissue with respect to non-affected cortex, and an oligodendroglioma resected from a third patient showed a shorter fluorescence lifetime and a decrease in optical redox ratio relative to the surrounding white matter. These results encourage the use of FLIm as a label-free and non-invasive intraoperative tool for neurosurgical guidance.
Affiliation(s)
- Alba Alfonso-Garcia
- Dept. Biomedical Engineering, University of California, Davis, California, United States
- Julien Bec
- Dept. Biomedical Engineering, University of California, Davis, California, United States
- Shamira Sridharan
- Dept. Biomedical Engineering, University of California, Davis, California, United States
- Brad Hartl
- Dept. Biomedical Engineering, University of California, Davis, California, United States
- Jakob Unger
- Dept. Biomedical Engineering, University of California, Davis, California, United States
- Matthew Bobinski
- Dept. Radiology, University of California, Davis, California, United States
- Mirna Lechpammer
- Dept. Pathology and Laboratory Medicine, University of California, Davis, California, United States
- Fady Girgis
- Dept. Neurological Surgery, University of California, Davis, California, United States
- James Boggan
- Dept. Neurological Surgery, University of California, Davis, California, United States
- Laura Marcu
- Dept. Biomedical Engineering, University of California, Davis, California, United States
- Dept. Neurological Surgery, University of California, Davis, California, United States
26
Schneider C, Thompson S, Totz J, Song Y, Allam M, Sodergren MH, Desjardins AE, Barratt D, Ourselin S, Gurusamy K, Stoyanov D, Clarkson MJ, Hawkes DJ, Davidson BR. Comparison of manual and semi-automatic registration in augmented reality image-guided liver surgery: a clinical feasibility study. Surg Endosc 2020; 34:4702-4711. PMID: 32780240; PMCID: PMC7524854; DOI: 10.1007/s00464-020-07807-x.
Abstract
BACKGROUND The laparoscopic approach to liver resection may reduce morbidity and hospital stay. However, uptake has been slow due to concerns about patient safety and oncological radicality. Image guidance systems may improve patient safety by enabling 3D visualisation of critical intra- and extrahepatic structures. Current systems suffer from non-intuitive visualisation and a complicated setup process. A novel image guidance system (SmartLiver), offering augmented reality visualisation and semi-automatic registration, has been developed to address these issues. A clinical feasibility study evaluated the performance and usability of SmartLiver with either manual or semi-automatic registration. METHODS Intraoperative image guidance data were recorded and analysed in patients undergoing laparoscopic liver resection or cancer staging. Stereoscopic surface reconstruction and iterative closest point matching facilitated semi-automatic registration. The primary endpoint was defined as successful registration as determined by the operating surgeon. Secondary endpoints were system usability as assessed by a surgeon questionnaire and comparison of manual vs. semi-automatic registration accuracy. Since SmartLiver is still in development, no attempt was made to evaluate its impact on perioperative outcomes. RESULTS The primary endpoint was achieved in 16 out of 18 patients. Initially, semi-automatic registration failed because the IGS could not distinguish the liver surface from surrounding structures. Implementation of a deep learning algorithm enabled the IGS to overcome this issue and facilitate semi-automatic registration. Mean registration accuracy was 10.9 ± 4.2 mm (manual) vs. 13.9 ± 4.4 mm (semi-automatic) (mean difference -3 mm; p = 0.158). Surgeon feedback was positive about IGS handling and improved intraoperative orientation, but also highlighted the need for a simpler setup process and better integration with laparoscopic ultrasound.
CONCLUSION The technical feasibility of using SmartLiver intraoperatively has been demonstrated. With further improvements, semi-automatic registration may enhance the user friendliness and workflow of SmartLiver. Manual and semi-automatic registration accuracy were comparable, but evaluation on a larger patient cohort is required to confirm these findings.
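The semi-automatic pipeline above pairs stereoscopic surface reconstruction with iterative closest point (ICP) matching. The rigid-alignment step at the core of each ICP iteration can be sketched as a closed-form least-squares (Kabsch/SVD) fit of matched point sets; this is a generic sketch, not SmartLiver's implementation:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst.

    src, dst: (N, 3) arrays of corresponding points. In ICP this
    closed-form solve runs once per iteration, after each
    nearest-neighbour matching pass.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

Full ICP alternates this solve with re-matching of closest points until the residual stops improving, which is why a good surface segmentation (here supplied by the deep learning step) matters so much.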
Affiliation(s)
- C. Schneider
- Division of Surgery & Interventional Science, Royal Free Campus, University College London, Pond Street, London NW3 2QG, UK
- S. Thompson
- Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
- J. Totz
- Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
- Y. Song
- Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
- M. Allam
- Division of Surgery & Interventional Science, Royal Free Campus, University College London, Pond Street, London NW3 2QG, UK
- M. H. Sodergren
- Centre for Medical Image Computing (CMIC), University College London, London, UK
- A. E. Desjardins
- Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
- D. Barratt
- Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
- S. Ourselin
- Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
- K. Gurusamy
- Division of Surgery & Interventional Science, Royal Free Campus, University College London, Pond Street, London NW3 2QG, UK; Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Department of Hepatopancreatobiliary and Liver Transplant Surgery, Royal Free Hospital, London, UK
- D. Stoyanov
- Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Computer Science, University College London, London, UK
- M. J. Clarkson
- Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
- D. J. Hawkes
- Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
- B. R. Davidson
- Division of Surgery & Interventional Science, Royal Free Campus, University College London, Pond Street, London NW3 2QG, UK; Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Department of Hepatopancreatobiliary and Liver Transplant Surgery, Royal Free Hospital, London, UK
27
Karar ME, El-Brawany MA. Fully tuned RBF neural network controller for ultrasound hyperthermia cancer tumour therapy. Network (Bristol, England) 2018; 29:20-36. PMID: 30404543; DOI: 10.1080/0954898x.2018.1539260.
Abstract
Thermal dose is an important clinical efficacy index for hyperthermia cancer treatment. This paper presents a new direct radial basis function (RBF) neural network controller for the high-temperature hyperthermia thermal dose delivered during the therapeutic procedure for cancer tumours by short-time pulses of high-intensity focused ultrasound (HIFU). The developed controller is stabilized and automatically tuned based on Lyapunov functions and the ant colony optimization (ACO) algorithm, respectively. In addition, this thermal dose control system has been validated using a one-dimensional (1-D) biothermal tissue model. Simulation results showed that the fully tuned RBF neural network controller outperforms controllers from previous studies, achieving the targeted thermal dose with treatment times of less than 13.5 min while avoiding tissue cavitation during thermal therapy. Moreover, the maximum value of its mean integral time absolute error (MTAE) is 98.64, significantly less than the resulting errors of the manually tuned controller under the same treatment conditions in all tested cases. In this study, integrating the ACO method with a robust RBF neural network controller provides successful and improved performance in delivering an accurate thermal dose for hyperthermia cancer tumour treatment using a focused ultrasound transducer without external cooling.
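For reference, the forward pass of a Gaussian RBF network, the function class this controller is built on, can be sketched as follows. This is a generic sketch; the paper's controller adds Lyapunov-based stabilization and ACO tuning of the centers, widths, and weights on top of this mapping.

```python
import numpy as np

def rbf_forward(x, centers, widths, weights):
    """Gaussian RBF network output for a batch of inputs.

    x: (N, d) inputs; centers: (M, d); widths: (M,); weights: (M,).
    phi_j(x) = exp(-||x - c_j||^2 / (2 * s_j^2)),  y = phi @ w.
    """
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)  # (N, M)
    phi = np.exp(-d2 / (2.0 * widths ** 2))
    return phi @ weights
```

In a "fully tuned" controller, all three parameter sets (centers, widths, weights) are adapted online rather than fixed in advance, which is what the Lyapunov analysis in the paper is needed to keep stable.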
Affiliation(s)
- M E Karar
- Faculty of Electronic Engineering (FEE), Menoufia University, Menouf, Egypt
- M A El-Brawany
- Faculty of Electronic Engineering (FEE), Menoufia University, Menouf, Egypt
28
Mehbodniya AH, Moghavvemi M, Narayanan V, Waran V. Frequency and Causes of Line of Sight Issues During Neurosurgical Procedures Using Optical Image-Guided Systems. World Neurosurg 2018; 122:e449-e454. PMID: 30347306; DOI: 10.1016/j.wneu.2018.10.069.
Abstract
BACKGROUND Navigation (image guidance) is an essential tool in modern neurosurgery, and most surgeons use an optical tracking system. Although the technology is accurate and reliable, one is often confronted by line of sight issues that interrupt the flow of an operation. There has been feedback on the matter, but the actual problem has not been accurately quantified; quantifying it is therefore the primary aim of this study. This is particularly important given that robotic technology is gradually making its way into neurosurgery, and most of these devices depend on optical navigation while procedures are being conducted. METHODS The frequency and causes of line of sight issues were assessed using recordings of navigation probe locations and synchronized video recordings. RESULTS The experiment was conducted for a series of 15 neurosurgical operations. Line of sight issues occurred in all of these surgeries except one. The maximum duration for which the issue persisted reached up to 56% of the navigation usage time. CONCLUSIONS The arrangement of staff and equipment is a key factor in avoiding this issue.
Affiliation(s)
- Amir H Mehbodniya
- Centre for Research in Applied Electronics, Department of Electrical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur, Malaysia
- Mahmoud Moghavvemi
- Centre for Research in Applied Electronics, Department of Electrical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur, Malaysia; University of Science and Culture, Tehran, Iran
- Vairavan Narayanan
- Department of Surgery, Faculty of Medicine, University of Malaya, Kuala Lumpur, Malaysia
- Vicknes Waran
- Department of Surgery, Faculty of Medicine, University of Malaya, Kuala Lumpur, Malaysia
29
Léger É, Reyes J, Drouin S, Collins DL, Popa T, Kersten-Oertel M. Gesture-based registration correction using a mobile augmented reality image-guided neurosurgery system. Healthc Technol Lett 2018; 5:137-142. PMID: 30800320; PMCID: PMC6372086; DOI: 10.1049/htl.2018.5063.
Abstract
In image-guided neurosurgery, a registration between the patient and their pre-operative images, together with tracking of the surgical tools, enables GPS-like guidance for the surgeon. However, factors such as brain shift, image distortion, and registration error cause the patient-to-image alignment accuracy to degrade throughout the surgical procedure, so that it no longer provides accurate guidance. The authors present a gesture-based method for manual registration correction to extend the usage of augmented reality (AR) neuronavigation systems. The method, which makes use of the touchscreen capabilities of a tablet on which the AR navigation view is presented, enables surgeons to compensate for the effects of brain shift, misregistration, or tracking errors. The system was tested in a laboratory user study with ten subjects, who were able to achieve a median registration RMS error of 3.51 mm on landmarks around the craniotomy of interest. This is comparable to the level of accuracy attainable with previously proposed methods and currently available commercial systems, while being simpler and quicker to use. The method could enable surgeons to quickly and easily compensate for most of the observed shift. Further advantages include its ease of use, its small impact on the surgical workflow, and its small time requirement.
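The registration RMS error quoted above is the root-mean-square of the residual distances between corresponding landmarks after correction; a minimal sketch:

```python
import numpy as np

def landmark_rms_error(measured, reference):
    """RMS of Euclidean residuals between corresponding landmarks.

    measured, reference: (N, 3) arrays of point positions (e.g. in mm).
    Squaring before averaging weights large residuals more heavily
    than a plain mean distance would.
    """
    d2 = ((measured - reference) ** 2).sum(axis=1)
    return float(np.sqrt(d2.mean()))
```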
Affiliation(s)
- Étienne Léger
- Department of Computer Science and Software Engineering, Concordia University, Montréal, Canada
- Jonatan Reyes
- Department of Computer Science and Software Engineering, Concordia University, Montréal, Canada
- Simon Drouin
- Department of Biomedical Engineering, McGill University, Montréal, Canada
- D. Louis Collins
- Department of Biomedical Engineering, McGill University, Montréal, Canada
- Tiberiu Popa
- Department of Computer Science and Software Engineering, Concordia University, Montréal, Canada
- PERFORM Centre, Concordia University, Montréal, Canada
- Marta Kersten-Oertel
- Department of Computer Science and Software Engineering, Concordia University, Montréal, Canada
- PERFORM Centre, Concordia University, Montréal, Canada