1. Cai Y, Zhu M, He B, Zhang J. Distributed visual positioning for surgical instrument tracking. Phys Eng Sci Med 2024;47:273-286. [PMID: 38194180] [DOI: 10.1007/s13246-023-01363-z]
Abstract
In clinical operations, it is crucial for surgeons to know the location of the surgical instrument. Traditional positioning systems have difficulty dealing with camera occlusion, marker occlusion, and environmental interference. To address these issues, we propose a distributed visual positioning system for surgical instrument tracking in surgery. First, we design a marker pattern of black-and-white triangular grids and dots that can be adapted to various instrument surfaces and improves the localization accuracy of marker features. The cross-points of the marker serve as the features, and each feature carries a unique ID. We further propose detection and identification methods for this position-sensing marker to locate and identify the features accurately. Second, we introduce a multi-camera Perspective-n-Point (mPnP) method, which fuses feature coordinates from all cameras and deduces the final result directly from the intrinsic and extrinsic parameters. This method provides a reliable initial value for the Bundle Adjustment algorithms. During instrument tracking, we assess the motion state of the instrument and select either dynamic or static Kalman filtering to mitigate jitter in the instrument's movement. A comparison experiment on the core algorithms indicates that our positioning algorithm achieves a lower reprojection error than mainstream algorithms. A series of quantitative experiments showed that the proposed system's positioning error is below 0.207 mm and its run time is below 118.842 ms. These results demonstrate the clinical potential of our system: accurate positioning of instruments promotes the efficiency and safety of clinical surgery.
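The multi-camera fusion step described above admits a compact closed-form baseline: each calibrated camera's observation of a feature contributes two linear constraints, and the stacked system yields the 3D position directly, which is exactly the kind of result that can seed Bundle Adjustment. The following is a minimal sketch of that idea (generic linear triangulation with an illustrative two-camera setup, not the authors' mPnP code):

```python
# Minimal sketch: fuse one feature's pixel observations from several
# calibrated cameras into a 3D point by linear (DLT) triangulation.
# Generic baseline, not the paper's mPnP implementation.
import numpy as np

def fuse_feature(proj_mats, pixels):
    """proj_mats: list of 3x4 matrices P_i = K_i [R_i | t_i] (intrinsic and
    extrinsic parameters); pixels: list of (u, v) observations of the feature.
    Returns the least-squares 3D point in world coordinates."""
    rows = []
    for P, (u, v) in zip(proj_mats, pixels):
        rows.append(u * P[2] - P[0])      # two linear constraints
        rows.append(v * P[2] - P[1])      # per camera view
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    X = Vt[-1]
    return X[:3] / X[3]                   # dehomogenize

# Illustrative two-camera rig with a 200 mm baseline.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), [[-200.0], [0.0], [0.0]]])
X_true = np.array([50.0, 30.0, 600.0, 1.0])
obs = [(P @ X_true)[:2] / (P @ X_true)[2] for P in (P1, P2)]
print(fuse_feature([P1, P2], obs))        # ~ [50. 30. 600.]
```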
Affiliation(s)
- Yu Cai, School of Mechanical Engineering, Fuzhou University, Fuzhou 350108, China
- Mingzhu Zhu, School of Mechanical Engineering, Fuzhou University, Fuzhou 350108, China
- Bingwei He, School of Mechanical Engineering, Fuzhou University, Fuzhou 350108, China
- Jianwei Zhang, Department of Informatics, University of Hamburg, 22527 Hamburg, Germany
2. Shao L, Li X, Fu T, Meng F, Zhu Z, Zhao R, Huo M, Xiao D, Fan J, Lin Y, Zhang T, Yang J. Robot-assisted augmented reality surgical navigation based on optical tracking for mandibular reconstruction surgery. Med Phys 2024;51:363-377. [PMID: 37431603] [DOI: 10.1002/mp.16598]
Abstract
PURPOSE This work proposes a robot-assisted augmented reality (AR) surgical navigation system for mandibular reconstruction. The system accurately superimposes the preoperative osteotomy plan of the mandible and fibula onto the real scene and assists the surgeon in performing the osteotomy quickly and safely under the guidance of the robotic arm. METHODS The proposed system consists of two main modules: the AR guidance module for the mandible and fibula, and the robot navigation module. In the AR guidance module, we propose an AR calibration method based on spatial registration of an image tracking marker to superimpose the virtual models of the mandible and fibula onto the real scene. In the robot navigation module, the posture of the robotic arm is first calibrated under the tracking of the optical tracking system. The robotic arm can then be positioned at the planned osteotomy after registration of the computed tomography image to the patient position. The combined guidance of AR and the robotic arm enhances the safety and precision of the surgery. RESULTS The effectiveness of the proposed system was quantitatively assessed on cadavers. In the AR guidance module, osteotomies of the mandible and fibula achieved mean errors of 1.61 ± 0.62 and 1.08 ± 0.28 mm, respectively, and the mean reconstruction error of the mandible was 1.36 ± 0.22 mm. In the AR-robot guidance module, the mean osteotomy errors of the mandible and fibula were 1.47 ± 0.46 and 0.98 ± 0.24 mm, respectively, and the mean reconstruction error of the mandible was 1.20 ± 0.36 mm. CONCLUSIONS Cadaveric experiments on 12 fibulas and six mandibles demonstrate the proposed system's effectiveness and potential clinical value in reconstructing mandibular defects with a free fibular flap.
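The CT-to-patient registration step referenced above is classically solved as point-based rigid registration between matched fiducials. Below is a minimal sketch of the SVD-based (Kabsch) solution with a fiducial registration error check; this is a generic formulation under our own assumptions, not the authors' implementation:

```python
# Point-based rigid registration (Kabsch): find R, t minimizing
# sum ||R @ p_i + t - q_i||^2 for matched fiducials p_i (CT space)
# and q_i (patient space, e.g. from an optical tracker).
# Illustrative sketch only; not the paper's registration code.
import numpy as np

def rigid_register(P, Q):
    """P, Q: (N, 3) arrays of corresponding points. Returns R (3x3), t (3,)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)            # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                   # guard against reflections
    t = cq - R @ cp
    return R, t

def fre(P, Q, R, t):
    """Root-mean-square fiducial registration error after alignment."""
    return np.sqrt(np.mean(np.sum((P @ R.T + t - Q) ** 2, axis=1)))
```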
Affiliation(s)
- Long Shao, School of Computer Science & Technology, Beijing Institute of Technology, Beijing, China
- Xing Li, Department of Stomatology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Tianyu Fu, School of Medical Technology, Beijing Institute of Technology, Beijing, China
- Fanhao Meng, Department of Stomatology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Zhihui Zhu, Department of Stomatology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Ruiqi Zhao, Department of Stomatology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Minghao Huo, Department of Radiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Deqiang Xiao, Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Jingfan Fan, Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Yucong Lin, School of Medical Technology, Beijing Institute of Technology, Beijing, China
- Tao Zhang, Department of Stomatology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Jian Yang, Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
3. Lin Z, Lei C, Yang L. Modern Image-Guided Surgery: A Narrative Review of Medical Image Processing and Visualization. Sensors (Basel) 2023;23:9872. [PMID: 38139718] [PMCID: PMC10748263] [DOI: 10.3390/s23249872]
Abstract
Medical image analysis forms the basis of image-guided surgery (IGS) and many of its fundamental tasks. Driven by the growing number of medical imaging modalities, the medical imaging research community has developed methods and achieved functionality breakthroughs. However, with the overwhelming pool of information in the literature, it has become increasingly challenging for researchers to extract context-relevant information for specific applications, especially when many widely used methods exist in a variety of versions optimized for their respective application domains. Further equipped with sophisticated three-dimensional (3D) medical image visualization and digital reality technology, medical experts could enhance their performance in IGS several-fold. The goal of this narrative review is to organize the key components of IGS in the aspects of medical image processing and visualization with a new perspective and insights. The literature search was conducted using mainstream academic search engines with a combination of keywords relevant to the field up until mid-2022. This survey systematically summarizes basic, mainstream, and state-of-the-art medical image processing methods, as well as how visualization technologies such as augmented/mixed/virtual reality (AR/MR/VR) are enhancing performance in IGS. We hope that this survey will shed some light on the future of IGS in the face of challenges and opportunities for the research directions of medical image processing and visualization.
Affiliation(s)
- Zhefan Lin, School of Mechanical Engineering, Zhejiang University, Hangzhou 310030, China; ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Chen Lei, ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Liangjing Yang, School of Mechanical Engineering, Zhejiang University, Hangzhou 310030, China; ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
4. Prabhu AV, Peterman M, Kesaria A, Samanta S, Crownover R, Lewis GD. Virtual reality technology: A potential tool to enhance brachytherapy training and delivery. Brachytherapy 2023;22:709-715. [PMID: 37679242] [DOI: 10.1016/j.brachy.2023.07.007]
Affiliation(s)
- Arpan V Prabhu, Department of Radiation Oncology, University of Arkansas for Medical Sciences, Little Rock, AR
- Melissa Peterman, Department of Radiation Oncology, University of Arkansas for Medical Sciences, Little Rock, AR
- Anam Kesaria, Department of Radiation Oncology, University of Arkansas for Medical Sciences, Little Rock, AR
- Santanu Samanta, Department of Radiation Oncology, University of Arkansas for Medical Sciences, Little Rock, AR
- Richard Crownover, Department of Radiation Oncology, University of Arkansas for Medical Sciences, Little Rock, AR
- Gary D Lewis, Department of Radiation Oncology, University of Arkansas for Medical Sciences, Little Rock, AR
5. Gu W, Knopf J, Cast J, Higgins LD, Knopf D, Unberath M. Nail it! Vision-based drift correction for accurate mixed reality surgical guidance. Int J Comput Assist Radiol Surg 2023. [PMID: 37231201] [DOI: 10.1007/s11548-023-02950-x]
Abstract
PURPOSE Mixed reality-guided surgery through head-mounted displays (HMDs) is gaining interest among surgeons. However, precise tracking of the HMD relative to the surgical environment is crucial for successful outcomes. Without fiducial markers, spatial tracking of the HMD suffers from millimeter- to centimeter-scale drift, resulting in misaligned visualization of registered overlays. Methods and workflows capable of automatically correcting for drift after patient registration are essential to ensuring accurate execution of surgical plans. METHODS We present a mixed reality surgical navigation workflow that continuously corrects for drift after patient registration using only image-based methods. We demonstrate its feasibility and capabilities using the Microsoft HoloLens on glenoid pin placement in total shoulder arthroplasty. A phantom study was conducted involving five users, each placing pins on six glenoids of different deformity, followed by a cadaver study by an attending surgeon. RESULTS In both studies, all users were satisfied with the registration overlay before drilling the pin. Postoperative CT scans showed average errors of 1.5 mm in entry point deviation and 2.4° in pin orientation in the phantom study, and 2.5 mm and 1.5° in the cadaver study. A trained user takes around 90 s to complete the workflow. Our method also outperformed the HoloLens's native tracking in drift correction. CONCLUSION Our findings suggest that image-based drift correction can provide mixed reality environments precisely aligned with patient anatomy, enabling pin placement with consistently high accuracy. These techniques constitute a next step toward purely image-based mixed reality surgical guidance, without requiring patient markers or external tracking hardware.
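One common way to realize such image-based drift correction is to re-observe a static visual anchor, compare its current pose in the HMD's world frame with the pose stored at registration time, and apply the resulting delta to every registered overlay. The sketch below illustrates that pattern with 4 × 4 homogeneous transforms; the frame names are our illustrative assumptions, and the paper's exact method may differ:

```python
# Sketch of drift correction by re-observing a known, physically static
# visual anchor. T_* are 4x4 homogeneous transforms; names illustrative.
import numpy as np

def inv(T):
    """Invert a rigid 4x4 transform without a general matrix inverse."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def drift_correction(T_world_anchor_at_reg, T_world_anchor_now):
    """If the anchor is static, any apparent motion of it in the HMD's
    world frame is tracking drift. Returns the correction to pre-multiply
    onto every registered overlay pose."""
    return T_world_anchor_now @ inv(T_world_anchor_at_reg)

def corrected_overlay(T_correction, T_world_overlay):
    """Re-align one registered overlay with the estimated drift delta."""
    return T_correction @ T_world_overlay
```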
Affiliation(s)
- Wenhao Gu, Johns Hopkins University, Baltimore, MD, USA
- John Cast, Johns Hopkins University, Baltimore, MD, USA
- David Knopf, Arthrex Inc., 1 Arthrex Way, Naples, FL, USA
6. Ma L, Huang T, Wang J, Liao H. Visualization, registration and tracking techniques for augmented reality guided surgery: a review. Phys Med Biol 2023;68. [PMID: 36580681] [DOI: 10.1088/1361-6560/acaf23]
Abstract
Augmented reality (AR) surgical navigation has developed rapidly in recent years. This paper reviews and analyzes the visualization, registration, and tracking techniques used in AR surgical navigation systems, as well as the application of these AR systems in different surgical fields. AR visualization is divided into two categories, in situ visualization and non-in situ visualization, with widely varying rendered content. The registration methods include manual registration, point-based registration, surface registration, marker-based registration, and calibration-based registration. The tracking methods consist of self-localization, tracking with integrated cameras, external tracking, and hybrid tracking. Moreover, we describe the applications of AR in various surgical fields. However, most AR applications have been evaluated through model and animal experiments, with relatively few clinical experiments, indicating that current AR navigation methods are still at an early stage of development. Finally, we summarize the contributions and challenges of AR in the surgical fields, as well as the future development trend. Although AR-guided surgery has not yet reached clinical maturity, we believe that if the current development trend continues, it will soon reveal its clinical utility.
Affiliation(s)
- Longfei Ma, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, People's Republic of China
- Tianqi Huang, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, People's Republic of China
- Jie Wang, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, People's Republic of China
- Hongen Liao, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, People's Republic of China
7. Chegini S, Edwards E, McGurk M, Clarkson M, Schilling C. Systematic review of techniques used to validate the registration of augmented-reality images using a head-mounted device to navigate surgery. Br J Oral Maxillofac Surg 2023;61:19-27. [PMID: 36513525] [DOI: 10.1016/j.bjoms.2022.08.007]
Abstract
Augmented-reality (AR) head-mounted devices (HMDs) allow the wearer to have digital images superposed onto their field of vision, and they are being used to superpose annotations onto the surgical field akin to a navigation system. This review examines published validation studies on HMD-AR systems, their reported protocols, and their outcomes, with the aim of establishing commonalities and an acceptable registration outcome. Multiple databases were systematically searched for relevant articles published between January 2015 and January 2021. Studies that examined the registration of AR content using an HMD to guide surgery were eligible for inclusion. The country of origin, year of publication, medical specialty, HMD device, software, and method of registration were recorded, and a meta-analysis of the mean registration error was conducted. A total of 4784 papers were identified, of which 23 met the inclusion criteria; these comprised studies using the HoloLens (Microsoft) (n = 22) and the nVisor ST60 (NVIS Inc) (n = 1). Sixty-six per cent of studies were in hard tissue specialties. Eleven studies reported registration errors using pattern markers (mean (SD) 2.6 (1.8) mm), four using surface markers (mean (SD) 3.8 (3.7) mm), and three using manual alignment (mean (SD) 2.2 (1.3) mm). The majority of studies in this review used in-house software with a variety of registration methods and reported errors. The mean registration error calculated in this study can be considered a minimum acceptable standard and should be taken into consideration when procedural applications are selected.
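For orientation, a rough overall figure can be formed by weighting each registration method's mean error by its study count, using the numbers reported above. The sketch below is an illustrative simplification only, since the review's exact meta-analytic weighting is not restated here:

```python
# Study-count-weighted pooled mean registration error across the three
# registration methods reported in the review. Illustrative only: a
# formal meta-analysis would weight by within-study variance instead.
methods = {                        # method: (number of studies, mean error, mm)
    "pattern markers": (11, 2.6),
    "surface markers": (4, 3.8),
    "manual alignment": (3, 2.2),
}
n_total = sum(n for n, _ in methods.values())
pooled = sum(n * m for n, m in methods.values()) / n_total
print(f"pooled mean error ~ {pooled:.2f} mm over {n_total} studies")  # ~2.80 mm
```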
Affiliation(s)
- Soudeh Chegini, University College London, Charles Bell House, 43-45 Foley St, London W1W 7TY, United Kingdom
- Eddie Edwards, University College London, Charles Bell House, 43-45 Foley St, London W1W 7TY, United Kingdom
- Mark McGurk, University College London, Charles Bell House, 43-45 Foley St, London W1W 7TY, United Kingdom
- Matthew Clarkson, University College London, Charles Bell House, 43-45 Foley St, London W1W 7TY, United Kingdom
- Clare Schilling, University College London, Charles Bell House, 43-45 Foley St, London W1W 7TY, United Kingdom
8. Gu W, Shah K, Knopf J, Josewski C, Unberath M. A calibration-free workflow for image-based mixed reality navigation of total shoulder arthroplasty. Comput Methods Biomech Biomed Eng Imaging Vis 2022. [DOI: 10.1080/21681163.2021.2009378]
Affiliation(s)
- Wenhao Gu, Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, Maryland, USA
- Kinjal Shah, Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, Maryland, USA
- Mathias Unberath, Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, Maryland, USA
9. Birlo M, Edwards PJE, Clarkson M, Stoyanov D. Utility of optical see-through head mounted displays in augmented reality-assisted surgery: A systematic review. Med Image Anal 2022;77:102361. [PMID: 35168103] [PMCID: PMC10466024] [DOI: 10.1016/j.media.2022.102361]
Abstract
This article presents a systematic review of optical see-through head-mounted display (OST-HMD) usage in augmented reality (AR) surgery applications from 2013 to 2020. Articles were categorised by OST-HMD device, surgical speciality, surgical application context, visualisation content, experimental design and evaluation, accuracy, and human factors of human-computer interaction. In total, 91 articles fulfilled all inclusion criteria, and some clear trends emerge. The Microsoft HoloLens increasingly dominates the field, with orthopaedic surgery being the most popular application (28.6%). By far the most common surgical context is surgical guidance (n = 58), and segmented preoperative models dominate visualisation (n = 40). Experiments mainly involve phantoms (n = 43) or system setup (n = 21), with patient case studies ranking third (n = 19), reflecting the comparative infancy of the field. Experiments cover issues from registration to perception, with very different accuracy results. Human factors emerge as significant to OST-HMD utility. Some factors are addressed by the systems proposed, such as attention shift away from the surgical site and mental mapping of 2D images to 3D patient anatomy. Other persistent human factors remain or are caused by OST-HMD solutions, including ease of use, comfort, and spatial perception issues. The significant upward trend in published articles is clear, but such devices are not yet established in the operating room, and clinical studies showing benefit are lacking. A focused effort addressing technical registration and perceptual factors in the lab, coupled with design that incorporates human factors considerations to solve clear clinical problems, should ensure that the significant current research efforts succeed.
Affiliation(s)
- Manuel Birlo, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), Charles Bell House, 43-45 Foley Street, London W1W 7TS, UK
- P J Eddie Edwards, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), Charles Bell House, 43-45 Foley Street, London W1W 7TS, UK
- Matthew Clarkson, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), Charles Bell House, 43-45 Foley Street, London W1W 7TS, UK
- Danail Stoyanov, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), Charles Bell House, 43-45 Foley Street, London W1W 7TS, UK
10. Zhu T, Jiang S, Yang Z, Zhou Z, Li Y, Ma S, Zhuo J. A neuroendoscopic navigation system based on dual-mode augmented reality for minimally invasive surgical treatment of hypertensive intracerebral hemorrhage. Comput Biol Med 2022;140:105091. [PMID: 34872012] [DOI: 10.1016/j.compbiomed.2021.105091]
Abstract
BACKGROUND AND OBJECTIVE Hypertensive intracerebral hemorrhage is characterized by high rates of morbidity, mortality, disability, and recurrence. Neuroendoscopy is an advanced technology that has been utilized for its treatment. However, traditional neuroendoscopy allows professionals to see only tissue surfaces, and its limited field of vision cannot provide spatial guidance. In this study, an AR-based neuroendoscopic navigation system is proposed to assist surgeons in locating and clearing the hematoma. METHODS The neuroendoscope is registered through a vector closed-loop algorithm. A single-shot method is designed to register medical images with the patient precisely. Real-time AR is realized based on video stream fusion, and dual-mode AR navigation is proposed to provide comprehensive guidance from catheter implantation to hematoma removal. A series of experiments was designed to validate the accuracy and significance of this system. RESULTS The average root mean square error of the registration between medical images and patients is 0.784 mm, with a variance of 0.1426 mm. The pixel mismatching degrees are less than 1% in the different AR modes. In the catheter implantation experiments, the average distance error is 1.28 mm (variance 0.43 mm), and the average angular error is 1.34° (variance 0.45°). Comparative experiments were also conducted to evaluate the feasibility of this system. CONCLUSION This system provides stereo images with depth information, fused with the patient, to guide surgeons in locating targets and removing the hematoma. It has been validated to have high accuracy and feasibility.
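The catheter implantation errors quoted above (distance and angle between planned and actual trajectories) follow from elementary vector geometry. A minimal sketch of one standard way to compute them is shown below (our illustration, not the authors' evaluation code):

```python
# Distance and angle error between a planned and an actual catheter
# trajectory, each given by an entry point and a tip point (in mm).
# Illustrative sketch of standard trajectory-error metrics.
import numpy as np

def trajectory_errors(plan_entry, plan_tip, act_entry, act_tip):
    tip_dist = np.linalg.norm(np.subtract(act_tip, plan_tip))     # mm
    v1 = np.subtract(plan_tip, plan_entry)                        # planned axis
    v2 = np.subtract(act_tip, act_entry)                          # actual axis
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))     # degrees
    return tip_dist, angle

d, a = trajectory_errors([0, 0, 0], [0, 0, 80], [0.5, 0.3, 0], [1.2, 0.6, 79.5])
print(f"tip error {d:.2f} mm, angular error {a:.2f} deg")
```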
Affiliation(s)
- Tao Zhu, School of Mechanical Engineering, Tianjin University, Tianjin 300350, China
- Shan Jiang, School of Mechanical Engineering, Tianjin University, Tianjin 300350, China
- Zhiyong Yang, School of Mechanical Engineering, Tianjin University, Tianjin 300350, China
- Zeyang Zhou, School of Mechanical Engineering, Tianjin University, Tianjin 300350, China
- Yuhua Li, School of Mechanical Engineering, Tianjin University, Tianjin 300350, China
- Shixing Ma, School of Mechanical Engineering, Tianjin University, Tianjin 300350, China
- Jie Zhuo, Department of Neurosurgery, Tianjin Huanhu Hospital, Tianjin 300200, China
11. Johnson PB, Jackson A, Saki M, Feldman E, Bradley J. Patient posture correction and alignment using mixed reality visualization and the HoloLens 2. Med Phys 2021;49:15-22. [PMID: 34780068] [DOI: 10.1002/mp.15349]
Abstract
PURPOSE The purpose of this study was to develop and preliminarily test a radiotherapy system for patient posture correction and alignment using mixed reality (MixR) visualization. The write-up of this work also provides an opportunity to introduce the concepts and technology of MixR to a medical physics audience who may be unfamiliar with the topic. METHODS A MixR application was developed for an optical see-through head-mounted display (HoloLens 2), allowing a user to simultaneously and directly view a patient and a reference hologram derived from their simulation CT scan. The hologram provides a visual reference for the exact posture needed during treatment and is initialized in relation to the origin of a radiotherapy device using marker-based tracking. The system further provides markerless tracking that allows the user to freely navigate the room while viewing and aligning the patient from various angles. The system was preliminarily tested using both a rigid (pelvis) and a nonrigid (female mannequin) anthropomorphic phantom. Each phantom was aligned via hologram, and accuracy was quantified using CBCT and CT. RESULTS A fully realized system was developed. Rigid registration accuracy was on the order of 3.0 ± 1.5 mm, based on the performance of three users repeating alignment five times each. The lateral direction showed the most variability among users and was associated with the largest offsets (approximately 2.0 mm). For nonrigid alignment, the MixR setup outperformed a setup based on three-point alignment and setup photos, the latter of which showed a difference in arm position of 2 cm and a torso roll of 6-7°. CONCLUSIONS MixR visualization is a rapidly emerging domain with the potential to significantly impact the field of medicine. The current application illustrates this and highlights the advantages of MixR for patient setup in radiation oncology. The key feature of the system is the way it transforms nonrigid registration into rigid registration, providing an efficient, portable, and cost-effective mechanism for reproducing patient posture without the use of ionizing radiation. Preliminary estimates of registration accuracy indicate clinical viability and form the foundation for further development and clinical testing.
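Initializing the hologram relative to the machine origin via marker-based tracking reduces to composing a short chain of rigid transforms. The sketch below illustrates the idea; all frame names, offsets, and values are our own illustrative assumptions rather than the system's actual calibration:

```python
# Sketch: place a reference hologram in the HMD world frame using a
# tracked marker at a known offset from the treatment machine origin.
# All transforms, frame names, and offsets are illustrative assumptions.
import numpy as np

T_world_marker = np.eye(4)                   # marker pose from the HMD tracker
T_world_marker[:3, 3] = [0.10, 1.20, 2.00]   # metres, illustrative

T_marker_iso = np.eye(4)                     # surveyed marker -> machine origin
T_marker_iso[:3, 3] = [0.00, -0.50, 0.75]

T_iso_hologram = np.eye(4)                   # hologram pose from the planning
T_iso_hologram[:3, 3] = [0.02, 0.00, 0.10]   # CT, expressed at the origin

# Compose the chain: hologram pose in the HMD world frame.
T_world_hologram = T_world_marker @ T_marker_iso @ T_iso_hologram
print(T_world_hologram[:3, 3])               # where to render the hologram
```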
Affiliation(s)
- Perry B Johnson, Department of Radiation Oncology, University of Florida College of Medicine, Gainesville, Florida, USA; University of Florida Health Proton Therapy Institute, Jacksonville, Florida, USA
- Amanda Jackson, Department of Radiology, University of Florida College of Medicine, Gainesville, Florida, USA
- Mohammad Saki, University of Florida Health Proton Therapy Institute, Jacksonville, Florida, USA
- Emily Feldman, University of Florida Health Proton Therapy Institute, Jacksonville, Florida, USA
- Julie Bradley, Department of Radiation Oncology, University of Florida College of Medicine, Gainesville, Florida, USA; University of Florida Health Proton Therapy Institute, Jacksonville, Florida, USA
12. 3D-printed template and optical needle navigation in CT-guided iodine-125 permanent seed implantation. J Contemp Brachytherapy 2021;13:410-418. [PMID: 34484355] [PMCID: PMC8407253] [DOI: 10.5114/jcb.2021.108595]
Abstract
PURPOSE To preliminarily verify the accuracy of navigation-assisted seed implantation by comparing planned and actual puncture characteristics and dosimetry in computed tomography (CT)-guided, navigation-assisted radioactive iodine-125 seed implantation using 3D-printed templates for the treatment of malignant tumors. MATERIAL AND METHODS A total of 27 tumor patients treated with seed implantation under combination guidance in our hospital between December 2019 and December 2020 were enrolled in this study. Navigation needles (n = 1-3) were placed in each patient to obtain pre-operative and intra-operative puncture information, such as angle, depth, insertion point, and tip position. Moreover, dosimetry parameters of the pre-operative and post-operative plans were investigated, including D90, V100, V150, V200, minimum peripheral dose (MPD), conformal index, external index, and homogeneity index of the target area. RESULTS Mean errors of the angle, depth, insertion point, and tip position were 0.5 ± 0.5°, 4.0 ± 2.0 mm, 1.7 ± 1 mm, and 3.1 ± 1.8 mm, respectively. There was no significant difference between intra-operative and pre-operative angles (p = 0.271), but there was a significant difference in depth (p = 0.002). Errors of the angle, depth, and insertion point were larger for the pelvic/retroperitoneal area than for the head and neck/chest wall (p < 0.05). With the exception of MPD, there was no significant difference in dosimetry indices between post-operative and pre-operative plans (p > 0.05). CONCLUSIONS Seed implantation under combination guidance showed good accuracy; the actual intra-operative puncture information and post-operative doses agreed with the pre-operative plan, demonstrating promising prospects for further development.
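The dosimetry indices compared above are standard dose-volume-histogram quantities. For reference, the sketch below computes D90 and Vx from synthetic per-voxel target doses; it illustrates the definitions only and is not the study's planning software:

```python
# DVH metrics from per-voxel target doses (synthetic example).
# D90: dose covering 90% of target voxels (10th percentile of doses).
# V100/V150/V200: fraction of target voxels receiving at least
# 100/150/200% of the prescription dose.
import numpy as np

rng = np.random.default_rng(0)
prescription = 110.0                          # Gy, illustrative value
doses = rng.normal(140.0, 25.0, 100_000)      # per-voxel doses, Gy

d90 = np.percentile(doses, 10)                # 90% of voxels get >= this dose
v100 = np.mean(doses >= 1.0 * prescription)
v150 = np.mean(doses >= 1.5 * prescription)
v200 = np.mean(doses >= 2.0 * prescription)
print(f"D90 = {d90:.1f} Gy, V100 = {v100:.1%}, "
      f"V150 = {v150:.1%}, V200 = {v200:.1%}")
```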
13. The status of medical physics in radiotherapy in China. Phys Med 2021;85:147-157. [PMID: 34010803] [DOI: 10.1016/j.ejmp.2021.05.007]
Abstract
PURPOSE To present an overview of the status of medical physics in radiotherapy in China, including facilities and devices, occupation, education, and research. MATERIALS AND METHODS The information about medical physics in clinics was obtained from the 9th nationwide survey conducted by the China Society for Radiation Oncology in 2019. Data on medical physics in education and research were collected from publications of the official and professional organizations. RESULTS By 2019, there were 1463 hospitals or institutes registered to practice radiotherapy, and the number of accelerators per million population was 1.5. There were 4172 medical physicists working in radiation oncology clinics; the ratio of radiation oncologists to medical physicists was 3.51. Approximately 95% of medical physicists have undergraduate or graduate degrees in nuclear physics or biomedical engineering, and 86% hold certificates issued by the Chinese Society of Medical Physics. There has been fast growth in publications by authors from mainland China in the top international medical physics and radiotherapy journals since 2018. CONCLUSIONS Demand for medical physicists in radiotherapy increased quickly in the past decade, and the distribution of radiotherapy facilities in China became more balanced. However, high-quality continuing education and training programs for medical physicists are deficient in most areas, the role of medical physicists in the clinic has not been clearly defined, and their contributions have not been fully recognized by the community.
14. Effect of marker position and size on the registration accuracy of HoloLens in a non-clinical setting with implications for high-precision surgical tasks. Int J Comput Assist Radiol Surg 2021;16:955-966. [PMID: 33856643] [PMCID: PMC8166698] [DOI: 10.1007/s11548-021-02354-9]
Abstract
PURPOSE Emerging holographic headsets can be used to register patient-specific virtual models obtained from medical scans with the patient's body. Maximising the accuracy of the virtual models' inclination angle and position (ideally ≤ 2° and ≤ 2 mm, respectively, as in currently approved navigation systems) is vital for this application to be useful. This study investigated the accuracy with which a holographic headset registers virtual models with real-world features based on the position and size of image markers. METHODS HoloLens® and the image-pattern-recognition tool Vuforia Engine™ were used to overlay a 5-cm-radius virtual hexagon on a monitor's surface in a predefined position. The headset camera's detection of an image marker (displayed on the monitor) triggered the rendering of the virtual hexagon on the headset's lenses. Image markers of 4 × 4, 8 × 8 and 12 × 12 cm displayed at nine different positions were used. In total, the position and dimensions of 114 virtual hexagons were measured on photographs captured by the headset's camera. RESULTS Some image marker positions and the smallest image marker (4 × 4 cm) led to larger errors in the perceived dimensions of the virtual models than other positions and the larger markers (8 × 8 and 12 × 12 cm). Errors of ≤ 2° and ≤ 2 mm were found in 70.7% and 76% of cases, respectively. CONCLUSION The errors obtained in a non-negligible percentage of cases are not acceptable for certain surgical tasks (e.g. identifying correct trajectories of surgical instruments). Achieving sufficient accuracy with image marker sizes that meet surgical needs, regardless of image marker position, remains a challenge.
15. Li R, Tong Y, Yang T, Guo J, Si W, Zhang Y, Klein R, Heng PA. Towards quantitative and intuitive percutaneous tumor puncture via augmented virtual reality. Comput Med Imaging Graph 2021;90:101905. [PMID: 33848757] [DOI: 10.1016/j.compmedimag.2021.101905]
Abstract
In recent years, radiofrequency ablation (RFA) therapy has become a widely accepted minimally invasive treatment for liver tumor patients. However, it is challenging for doctors to perform percutaneous tumor punctures precisely and efficiently under free-breathing conditions, because traditional RFA is based on 2D CT image information and the missing spatial and dynamic information must be compensated for by the surgeon's experience. This paper presents a novel quantitative and intuitive surgical navigation modality for percutaneous respiratory tumor puncture via augmented virtual reality, in which pre-operative virtual planning information is precisely overlaid on the intra-operative surgical scene. In the pre-operative stage, we first combine the signed distance fields of feasible structures (such as the liver and tumor), which the puncture path may traverse, with those of infeasible structures (such as large vessels and ribs), which the needle must not cross, to quantitatively generate the 3D feasible region for percutaneous puncture. We then design three constraints according to the consensus of RFA specialists to automatically determine the optimal puncture trajectory. In the intra-operative stage, we first propose a virtual-real alignment method to precisely superimpose the virtual information on the surgical scene. A user-friendly collaborative holographic interface is then designed for real-time 3D respiratory tumor puncture navigation, which can effectively assist surgeons in locating the target quickly and accurately, step by step. Our system was validated on a static abdominal phantom and in vivo in beagle dogs with artificial lesions. Experimental results demonstrate that the accuracy of the proposed planning strategy is better than that of manual planning sketched by experienced doctors, and that the proposed holographic navigation modality effectively reduces the needle adjustments needed for precise puncture. Our system thus shows clinical feasibility in providing quantitative planning of the optimal needle path and intuitive in situ holographic navigation for percutaneous tumor ablation without dependence on the surgeon's experience, can effectively improve the precision and reliability of percutaneous tumor ablation, and has the potential to be used for other surgical navigation tasks.
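The signed-distance-field construction described above suggests a simple per-trajectory feasibility test: sample points along a candidate path and require them to remain inside feasible structures while keeping a safety margin from forbidden ones. The sketch below illustrates this under our own assumptions (sign convention, interpolation, margin value) and is not the paper's planning code:

```python
# Feasibility test for a candidate needle path against signed distance
# fields (SDFs). Assumptions: sdf < 0 inside a structure, SDF values in
# mm, ~1 mm isotropic voxels so voxel indices double as mm coordinates.
import numpy as np
from scipy.ndimage import map_coordinates

def sdf_at(sdf_grid, pts_vox):
    """Trilinear interpolation of an SDF volume at (N, 3) voxel coords."""
    return map_coordinates(sdf_grid, pts_vox.T, order=1)

def path_is_feasible(entry, target, sdf_feasible, sdf_forbidden,
                     margin_mm=2.0, n_samples=100):
    """Sample the straight path entry->target and check both constraints:
    stay inside feasible anatomy, stay margin_mm clear of forbidden anatomy."""
    pts = np.linspace(np.asarray(entry, float), np.asarray(target, float),
                      n_samples)
    inside_ok = np.all(sdf_at(sdf_feasible, pts) < 0.0)
    clear_ok = np.all(sdf_at(sdf_forbidden, pts) > margin_mm)
    return inside_ok and clear_ok
```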
Affiliation(s)
- Ruotong Li, Department of Computer Science II, University of Bonn, Germany
- Yuqi Tong, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, China
- Tianpei Yang, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, China
- Weixin Si, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, China
- Reinhard Klein, Department of Computer Science II, University of Bonn, Germany
- Pheng-Ann Heng, Department of Computer Science and Engineering, Chinese University of Hong Kong, Hong Kong SAR, China
16. Takata T, Nakabayashi S, Kondo H, Yamamoto M, Furui S, Shiraishi K, Kobayashi T, Oba H, Okamoto T, Kotoku J. Mixed Reality Visualization of Radiation Dose for Health Professionals and Patients in Interventional Radiology. J Med Syst 2021;45:38. [PMID: 33594609] [PMCID: PMC7886835] [DOI: 10.1007/s10916-020-01700-9]
Abstract
In interventional radiology, dose management is a crucially important issue for reducing radiation exposure to patients and medical staff. This study presents a real-time dose visualization system for interventional radiology built with mixed reality technology and Monte Carlo simulation. A previously reported Monte-Carlo-based estimation system, which simulates a patient's skin dose and air dose distributions, was adopted for our system, and we developed a system for acquiring fluoroscopic conditions as inputs to the Monte Carlo simulation. We then combined the Monte Carlo system with a wearable device for three-dimensional holographic visualization. The estimated doses were transferred sequentially to the device, and the patient's dose distribution was projected on the patient's body. The visualization system also detects the user's position in the room to estimate and display the user's exposure dose. Qualitative tests were conducted to evaluate the workload and usability of our mixed reality system, and an end-to-end system test was performed using a human phantom. The acquisition system accurately recognized the conditions necessary for real-time dose estimation, the dose hologram represented the patient dose, and the displayed user dose changed correctly depending on conditions and positions. The perceived overall workload score (33.50) was lower than scores reported in the literature for medical tasks (50.60) and for computer activities (54.00). Mixed reality dose visualization is expected to improve exposure dose management for patients and health professionals by making invisible radiation exposure visible in real space.
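Displaying a position-dependent staff dose essentially amounts to looking up the user's tracked position in the simulated 3D air-dose distribution. A minimal sketch of such a lookup follows (trilinear interpolation on a precomputed Monte Carlo grid, with illustrative values; not the authors' pipeline):

```python
# Look up a user's dose-rate from a precomputed Monte Carlo air-dose
# grid at the position reported by the headset. Illustrative sketch.
import numpy as np
from scipy.ndimage import map_coordinates

def dose_rate_at(dose_grid, origin_m, voxel_m, position_m):
    """dose_grid: 3D air dose-rate volume (e.g. mGy/h) from Monte Carlo.
    origin_m, voxel_m: grid origin and voxel size in metres.
    position_m: user position in room coordinates from the HMD."""
    idx = (np.asarray(position_m) - origin_m) / voxel_m   # voxel coords
    return map_coordinates(dose_grid, idx.reshape(3, 1), order=1)[0]

# Toy grid: dose-rate falling off around a source at the grid centre.
grid = np.fromfunction(
    lambda i, j, k: 1.0 / (1.0 + (i - 10) ** 2 + (j - 10) ** 2 + (k - 10) ** 2),
    (21, 21, 21))
print(dose_rate_at(grid, origin_m=np.zeros(3), voxel_m=0.1,
                   position_m=[1.5, 1.0, 1.0]))
```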
Affiliation(s)
- Takeshi Takata, Graduate School of Medical Care and Technology, Teikyo University, Tokyo, Japan
- Hiroshi Kondo, Department of Radiology, Teikyo University School of Medicine, Tokyo, Japan
- Masayoshi Yamamoto, Department of Radiology, Teikyo University School of Medicine, Tokyo, Japan
- Shigeru Furui, Graduate School of Medical Care and Technology, Teikyo University, Tokyo, Japan; Department of Radiology, Teikyo University School of Medicine, Tokyo, Japan
- Kenshiro Shiraishi, Department of Radiology, Teikyo University School of Medicine, Tokyo, Japan
- Takenori Kobayashi, Graduate School of Medical Care and Technology, Teikyo University, Tokyo, Japan
- Hiroshi Oba, Department of Radiology, Teikyo University School of Medicine, Tokyo, Japan
- Takahide Okamoto, Graduate School of Medical Care and Technology, Teikyo University, Tokyo, Japan; Central Radiology Division, Teikyo University Hospital, Tokyo, Japan
- Jun'ichi Kotoku, Graduate School of Medical Care and Technology, Teikyo University, Tokyo, Japan; Central Radiology Division, Teikyo University Hospital, Tokyo, Japan