1
Zhang Q, Yu W, Liu W, Xu H, He Y. A Lightweight Visual Simultaneous Localization and Mapping Method with a High Precision in Dynamic Scenes. Sensors (Basel) 2023; 23:9274. [PMID: 38005660] [PMCID: PMC10675022] [DOI: 10.3390/s23229274]
Abstract
Most traditional VSLAM (visual SLAM) systems rest on a static-world assumption, which yields low accuracy in dynamic environments; methods that recover accuracy in such scenes typically do so at the cost of real-time performance. In highly dynamic scenes, balancing high accuracy against low computational cost has therefore become a pivotal requirement for VSLAM systems. This paper proposes a new VSLAM system that balances the competing demands of positioning accuracy and computational complexity, thereby improving the overall system. For accuracy, the system applies an improved lightweight target detection network to detect dynamic feature points while features are extracted at the front end of the system, so that only feature points on static targets are used for frame matching. An attention mechanism is integrated into the detection network to capture dynamic factors continuously and accurately in more complex dynamic environments. For computational cost, the lightweight GhostNet module replaces the backbone of the YOLOv5s detection network, significantly reducing the number of model parameters and improving the overall inference speed of the algorithm. Experimental results on the TUM dynamic dataset indicate that, compared with ORB-SLAM3, the pose estimation accuracy of the system improved by 84.04%. Compared with dynamic SLAM systems such as DS-SLAM and DVO SLAM, the system achieves significantly better positioning accuracy, and compared with other deep-learning-based VSLAM algorithms it offers superior real-time performance at a similar accuracy.
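The core front-end step this abstract describes, discarding feature points that fall inside detected dynamic-object regions, can be illustrated with a short sketch. This is not the authors' code; the axis-aligned box format and the function name are assumptions for illustration.

```python
import numpy as np

def filter_static_keypoints(keypoints, dynamic_boxes):
    """Keep only keypoints that fall outside every dynamic-object box.

    keypoints:      (N, 2) array of (x, y) pixel coordinates.
    dynamic_boxes:  list of (x1, y1, x2, y2) boxes from the detector.
    Returns the (M, 2) array of static keypoints (M <= N).
    """
    keypoints = np.asarray(keypoints, dtype=float)
    keep = np.ones(len(keypoints), dtype=bool)
    for x1, y1, x2, y2 in dynamic_boxes:
        inside = ((keypoints[:, 0] >= x1) & (keypoints[:, 0] <= x2) &
                  (keypoints[:, 1] >= y1) & (keypoints[:, 1] <= y2))
        keep &= ~inside       # drop points claimed by a dynamic object
    return keypoints[keep]

pts = [(10, 10), (50, 50), (90, 90)]
boxes = [(40, 40, 60, 60)]                   # one detected dynamic-object box
static = filter_static_keypoints(pts, boxes)  # (50, 50) is discarded
```

Only the surviving points would then be passed to frame matching and pose estimation.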
Affiliation(s)
- Qi Zhang
- School of Computer and Information Engineering, Central South University of Forestry and Technology, Changsha 410018, China
- Wentao Yu
- School of Computer, Central South University, Changsha 410083, China
- Weirong Liu
- School of Computer, Central South University, Changsha 410083, China
- Hao Xu
- School of Computer and Information Engineering, Central South University of Forestry and Technology, Changsha 410018, China
- Yuan He
- School of Computer and Information Engineering, Central South University of Forestry and Technology, Changsha 410018, China
2
Real-time augmented reality application in presurgical planning and lesion scalp localization by a smartphone. Acta Neurochir (Wien) 2022; 164:1069-1078. [PMID: 34448914] [DOI: 10.1007/s00701-021-04968-z]
Abstract
OBJECTIVE A smartphone augmented reality (AR) application (app) was explored for clinical use in presurgical planning and lesion scalp localization. METHODS We programmed an AR app on a smartphone. Its accuracy was tested on a 3D-printed head model using the Euclidean distance of displacement of virtual objects. For clinical validation, 14 patients with brain tumors were included in the study. Preoperative MRI images were used to generate 3D models for AR content, which were then transferred to the smartphone AR app. Tumor scalp localization was marked, and a surgical corridor was planned on the patient's head by viewing AR images on the smartphone screen. Standard neuronavigation was applied to evaluate the accuracy of the smartphone. Max-margin distance (MMD) and area overlap ratio (AOR) were measured to quantitatively validate the clinical accuracy of the smartphone AR technique. RESULTS In model validation, the total mean Euclidean distance of virtual object displacement using the smartphone AR app was 4.7 ± 2.3 mm. In clinical validation, the mean duration of AR app usage was 168.5 ± 73.9 s. The total mean MMD was 6.7 ± 3.7 mm, and the total mean AOR was 79%. CONCLUSIONS The smartphone AR app provides a new way to observe intracranial anatomy in situ and makes surgical planning more intuitive and efficient. Localization accuracy is satisfactory for lesions larger than 15 mm.
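The two accuracy metrics used in this abstract can be sketched in a few lines. This is a hypothetical illustration, not the study's analysis code; in particular, the area overlap ratio is interpreted here as the fraction of the neuronavigation-marked area covered by the AR marking, which is an assumption.

```python
import numpy as np

def euclidean_error(p_virtual, p_actual):
    """Displacement (e.g. in mm) between a virtual object and its true position."""
    return float(np.linalg.norm(np.asarray(p_virtual, float) - np.asarray(p_actual, float)))

def area_overlap_ratio(mask_ar, mask_ref):
    """Fraction of the reference-marked area covered by the AR-marked area.

    mask_ar, mask_ref: boolean pixel masks of the two scalp markings.
    """
    mask_ar = np.asarray(mask_ar, dtype=bool)
    mask_ref = np.asarray(mask_ref, dtype=bool)
    return float((mask_ar & mask_ref).sum() / mask_ref.sum())
```

For example, a virtual object displaced by (3, 4, 0) mm has a 5 mm Euclidean error, and an AR marking covering 12 of 16 reference pixels has an AOR of 0.75.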
3
Roopa D, Bose S. A Rapid Dual Feature Tracking Method for Medical Equipments Assembly and Disassembly in Markerless Augmented Reality. Journal of Medical Imaging and Health Informatics 2022. [DOI: 10.1166/jmihi.2022.3944]
Abstract
Markerless Augmented Reality (MAR) is a technology currently used by medical device assemblers to aid design, assembly, disassembly and maintenance operations. The medical assembler assembles the equipment according to the doctor's requirements and also maintains its quality and sanitation. The major research challenges in MAR are establishing automatic registration of parts, finding and tracking the orientation of parts, and the lack of depth and visual features. This work proposes a rapid dual feature tracking method, i.e., a combination of Visual Simultaneous Localization and Mapping (SLAM) and Matched Pairs Selection (MAPSEL), with the aim of attaining high tracking accuracy through the combined method. Because depth images are noisy under dynamically changing environmental factors that degrade tracking accuracy, a Graph-Based Joint Bilateral with Sharpening Filter (GRB-JBF with SF) is proposed to obtain a good depth image map. The best feature points are then obtained for matching using Oriented FAST and Rotated BRIEF (ORB) as the feature detector, Fast Retina Keypoint with Histogram of Gradients (FREAK-HoG) as the feature descriptor, and Rajsk's distance for feature matching. Finally, the virtual object is rendered based on 3D affine and projection transformations. Performance is computed in terms of tracking accuracy, tracking time, and rotation error for different distances using MATLAB R2017b. The observed results show that the proposed method attained the smallest position error, about 0.1 cm to 0.3 cm; rotation error remained between 2.40° and 3.10°, with an average of 2.714°; the proposed combination consumed less time per frame than the other combinations; and tracking accuracy reached about 95.14% for 180 tracked points. These outcomes indicate superior performance compared with existing methods.
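The descriptor-matching stage this abstract relies on can be illustrated with a minimal brute-force matcher for binary descriptors, compared with Hamming distance (the metric normally used for ORB-style descriptors). This is a stand-in sketch, not the paper's pipeline; Rajsk's distance and FREAK-HoG are not reproduced, and the 256-bit descriptor size is an assumption.

```python
import numpy as np

def hamming_match(desc_a, desc_b, max_dist=64):
    """Brute-force nearest-neighbour matching of binary descriptors.

    desc_a, desc_b: (N, 32) and (M, 32) uint8 arrays (256-bit descriptors).
    Returns a list of (index_a, index_b, distance) for matches under max_dist.
    """
    # XOR then bit-count gives the Hamming distance between bit strings.
    bits_a = np.unpackbits(desc_a[:, None, :], axis=2)   # (N, 1, 256)
    bits_b = np.unpackbits(desc_b[None, :, :], axis=2)   # (1, M, 256)
    dist = (bits_a ^ bits_b).sum(axis=2)                 # (N, M) distance table
    matches = []
    for i, row in enumerate(dist):
        j = int(row.argmin())                            # nearest descriptor in B
        if row[j] <= max_dist:
            matches.append((i, j, int(row[j])))
    return matches
```

A ratio test or cross-check would normally be added on top of this to reject ambiguous matches before pose estimation.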
Affiliation(s)
- D. Roopa
- Department of Computer Science and Engineering, Sri Sai Ram Institute of Technology, Anna University, Chennai, 600044, Tamil Nadu, India
- S. Bose
- Department of Computer Science and Engineering, Anna University, Chennai, 600025, Tamil Nadu, India
4
VINS-dimc: A Visual-Inertial Navigation System for Dynamic Environment Integrating Multiple Constraints. ISPRS International Journal of Geo-Information 2022. [DOI: 10.3390/ijgi11020095]
Abstract
Most visual–inertial navigation systems (VINSs) suffer from moving objects and achieve poor positioning accuracy in dynamic environments. To improve positioning accuracy in such environments, a monocular visual–inertial navigation system, VINS-dimc, is proposed that integrates multiple constraints for the elimination of dynamic feature points. First, the motion model, computed from inertial measurement unit (IMU) data, is subjected to an epipolar constraint and a flow vector bound (FVB) constraint to eliminate feature matches that deviate significantly from the motion model. The algorithm then combines multiple feature-matching constraints, avoiding the weaknesses of any single constraint and making the system more robust and general. Experiments show that the proposed algorithm accurately eliminates dynamic feature points on moving objects while preserving static feature points, substantially improving the positioning accuracy and robustness of VINSs on both self-collected data and public datasets.
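The epipolar test this abstract applies can be sketched as follows: given a fundamental matrix F implied by the IMU motion model, matches whose point-to-epipolar-line distance is large are flagged as dynamic. This is a generic illustration, not the VINS-dimc implementation; the FVB constraint and the threshold value are omitted/assumed.

```python
import numpy as np

def epipolar_residuals(F, pts1, pts2):
    """Distance of each point in image 2 to the epipolar line F @ x1 of its match."""
    pts1 = np.hstack([np.asarray(pts1, float), np.ones((len(pts1), 1))])  # homogeneous
    pts2 = np.hstack([np.asarray(pts2, float), np.ones((len(pts2), 1))])
    lines = pts1 @ F.T                         # epipolar lines (a, b, c) in image 2
    num = np.abs(np.sum(lines * pts2, axis=1))
    den = np.hypot(lines[:, 0], lines[:, 1])   # normalise by line gradient
    return num / den

def keep_static_matches(F, pts1, pts2, thresh=0.1):
    """Indices of matches consistent with the camera-only (IMU-predicted) motion."""
    return np.flatnonzero(epipolar_residuals(F, pts1, pts2) < thresh)
```

For a pure x-translation (F = [t]_x with t = (1, 0, 0) in normalised coordinates), a static point keeps its y-coordinate and has residual zero, while a point whose y-coordinate changes is rejected.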
5
Jia T, Taylor ZA, Chen X. Long term and robust 6DoF motion tracking for highly dynamic stereo endoscopy videos. Comput Med Imaging Graph 2021; 94:101995. [PMID: 34656811] [DOI: 10.1016/j.compmedimag.2021.101995]
Abstract
Real-time augmented reality (AR) for minimally invasive surgery without extra tracking devices is a valuable yet challenging task, especially considering dynamic surgery environments. Multiple different motions between target organs are induced by respiration, cardiac motion or operative tools, and often must be characterized by a moving, manually positioned endoscope. Therefore, a 6DoF motion tracking method that takes advantage of the latest 2D target tracking methods and non-linear pose optimization and tracking loss retrieval in SLAM technologies is proposed and can be embedded into such an AR system. Specifically, the SiamMask deep learning-based target tracking method is incorporated to roughly exclude motion distractions and enable frame matching. This algorithm's light computation cost makes it possible for the proposed method to run in real-time. A global map and a set of keyframes as in ORB-SLAM are maintained for pose optimization and tracking loss retrieval. The stereo matching and frame matching methods are improved and a new strategy to select reference frames is introduced to make the first-time motion estimation of every arriving frame as accurate as possible. Experiments on both a clinical laparoscopic partial nephrectomy dataset and an ex-vivo porcine kidney dataset are conducted. The results show that the proposed method gives a more robust and accurate performance compared with ORB-SLAM2 in the presence of motion distractions or motion blur; however, heavy smoke still remains a big factor that reduces the tracking accuracy.
Affiliation(s)
- Tingting Jia
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China; Institute of Medical Robotics, Shanghai Jiao Tong University, 200240, Shanghai, China
- Zeike A Taylor
- CISTIB Centre for Computational Imaging and Simulation Technologies in Biomedicine, Institute of Medical and Biological Engineering, University of Leeds, Leeds, UK
- Xiaojun Chen
- School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China; Institute of Medical Robotics, Shanghai Jiao Tong University, 200240, Shanghai, China
6
Recasens D, Lamarca J, Facil JM, Montiel JMM, Civera J. Endo-Depth-and-Motion: Reconstruction and Tracking in Endoscopic Videos Using Depth Networks and Photometric Constraints. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3095528]
7
Visual Localisation for Knee Arthroscopy. Int J Comput Assist Radiol Surg 2021; 16:2137-2145. [PMID: 34218361] [DOI: 10.1007/s11548-021-02444-8]
Abstract
PURPOSE: Navigation in visually complex endoscopic environments requires an accurate and robust localisation system. This paper presents a single-image, deep-learning-based camera localisation method for orthopedic surgery. METHODS: The approach combines image information, deep learning techniques and bone-tracking data to estimate camera poses relative to bone markers. One arthroscopic video sequence was collected at each of four knee flexion angles, for both a synthetic phantom knee model and a cadaveric knee joint. RESULTS: Mean localisation errors of 9.66 mm/0.85[Formula: see text] and 9.94 mm/1.13[Formula: see text] were achieved on the synthetic knee model and the cadaveric knee joint, respectively. No correlation was found between the localisation errors achieved on synthetic and cadaveric images, suggesting that arthroscopic image artifacts play a minor role in camera pose estimation compared with constraints introduced by the presented setup. Images acquired at 90° and 0° knee flexion were respectively the most and least informative for visual localisation. CONCLUSION: The study shows that deep learning performs well in visually challenging, feature-poor knee arthroscopy environments, which suggests such techniques can bring further improvements to localisation in Minimally Invasive Surgery.
8
Zhang H, Sun Q, Liu Z. Augmented reality display of neurosurgery craniotomy lesions based on feature contour matching. Cognitive Computation and Systems 2021. [DOI: 10.1049/ccs2.12021]
Affiliation(s)
- Hao Zhang
- Tianjin Key Laboratory for Advanced Mechatronic System Design and Intelligent Control, School of Mechanical Engineering, Tianjin University of Technology, Tianjin, China
- National Demonstration Center for Experimental Mechanical and Electrical Engineering Education, Tianjin University of Technology, Tianjin, China
- Qi‐Yuan Sun
- Tianjin Key Laboratory for Advanced Mechatronic System Design and Intelligent Control, School of Mechanical Engineering, Tianjin University of Technology, Tianjin, China
- National Demonstration Center for Experimental Mechanical and Electrical Engineering Education, Tianjin University of Technology, Tianjin, China
- Zhen‐Zhong Liu
- Tianjin Key Laboratory for Advanced Mechatronic System Design and Intelligent Control, School of Mechanical Engineering, Tianjin University of Technology, Tianjin, China
- National Demonstration Center for Experimental Mechanical and Electrical Engineering Education, Tianjin University of Technology, Tianjin, China
9
Augmented Reality Interface for Complex Anatomy Learning in the Central Nervous System: A Systematic Review. Journal of Healthcare Engineering 2020; 2020:8835544. [PMID: 32963749] [PMCID: PMC7501559] [DOI: 10.1155/2020/8835544]
Abstract
Healthcare is being transformed by the growing use of medical information systems, electronic records, and smart wearable and handheld devices. The central nervous system controls the activities of the mind and the human body, and rapid progress in medicine and computing enables practitioners and researchers to extract and visualize insight from systems that model it. Augmented reality incorporates virtual and real objects interactively, running in real time in a real environment, and applying it to the central nervous system is a thought-provoking task: gesture-interaction-based augmented reality has enormous potential for reducing the cost of care, improving its quality, and reducing waste and error. To make this process smooth, a comprehensive study of the available state-of-the-art work would help doctors and practitioners use it in decision making. This review summarises the published material on gesture-interaction-based augmented reality approaches in the central nervous system. It follows a systematic literature protocol to collect and analyse papers and derive facts from them, covering a ten-year publication range; 78 papers were selected according to predefined inclusion, exclusion and quality criteria. The study identifies work on augmented reality in the nervous system, its applications and techniques, and gesture interaction approaches in the nervous system. The derived results show a year-on-year rise in articles, and numerous studies apply augmented reality and gesture interaction to different systems of the human body, particularly the nervous system. By organising and summarising the existing published work on augmented reality, this review helps practitioners and researchers survey most existing studies of augmented-reality-based gesture interaction approaches for the nervous system and can serve as support for future work on complex anatomy learning.
10
Luo H, Yin D, Zhang S, Xiao D, He B, Meng F, Zhang Y, Cai W, He S, Zhang W, Hu Q, Guo H, Liang S, Zhou S, Liu S, Sun L, Guo X, Fang C, Liu L, Jia F. Augmented reality navigation for liver resection with a stereoscopic laparoscope. Comput Methods Programs Biomed 2020; 187:105099. [PMID: 31601442] [DOI: 10.1016/j.cmpb.2019.105099]
Abstract
OBJECTIVE Understanding the three-dimensional (3D) spatial position and orientation of vessels and tumor(s) is vital in laparoscopic liver resection procedures. Augmented reality (AR) techniques can help surgeons see the patient's internal anatomy in conjunction with laparoscopic video images. METHOD In this paper, we present an AR-assisted navigation system for liver resection based on a rigid stereoscopic laparoscope. The stereo image pairs from the laparoscope are used by an unsupervised convolutional neural network (CNN) framework to estimate depth and generate an intraoperative 3D liver surface. Meanwhile, 3D models of the patient's surgical field are segmented from preoperative CT images using the V-Net architecture for volumetric image data in an end-to-end predictive style. A globally optimal iterative closest point (Go-ICP) algorithm is adopted to register the pre- and intraoperative models into a unified coordinate space; the preoperative 3D models are then superimposed on the live laparoscopic images to provide the surgeon with detailed information about the subsurface of the patient's anatomy, including tumors, their resection margins and vessels. RESULTS The proposed navigation system is tested on four ex vivo porcine livers in the laboratory and in five in vivo porcine experiments in the operating theatre to validate its accuracy. The ex vivo and in vivo reprojection errors (RPE) are 6.04 ± 1.85 mm and 8.73 ± 2.43 mm, respectively. CONCLUSION AND SIGNIFICANCE Both the qualitative and quantitative results indicate that our AR-assisted navigation system shows promise and has the potential to be highly useful in clinical practice.
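The local alignment step inside ICP-style registration, as used in this abstract, solves for the least-squares rigid transform between corresponding point sets (the Kabsch/Procrustes solution). The sketch below shows only that inner step, not Go-ICP's global branch-and-bound search, and is an illustrative implementation rather than the authors' code.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) aligning src to dst (Kabsch).

    src, dst: (N, 3) corresponding point sets.
    Returns R (3x3) and t (3,) such that dst ≈ src @ R.T + t.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    # Correct an improper rotation (reflection) if the determinant is negative.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Within full ICP, this solver alternates with a nearest-neighbour correspondence search until the alignment converges.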
Affiliation(s)
- Huoling Luo
- Research Lab for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Dalong Yin
- Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China; Department of Hepatobiliary Surgery, Shengli Hospital Affiliated to University of Science and Technology of China, Hefei, China
- Shugeng Zhang
- Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China; Department of Hepatobiliary Surgery, Shengli Hospital Affiliated to University of Science and Technology of China, Hefei, China
- Deqiang Xiao
- Research Lab for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Baochun He
- Research Lab for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Fanzheng Meng
- Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China
- Yanfang Zhang
- Department of Interventional Radiology, Shenzhen People's Hospital, Shenzhen, China
- Wei Cai
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Shenghao He
- Research Lab for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Wenyu Zhang
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Qingmao Hu
- Research Lab for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Hongrui Guo
- Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China
- Shuhang Liang
- Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China
- Shuo Zhou
- Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China
- Shuxun Liu
- Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China
- Linmao Sun
- Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China
- Xiao Guo
- Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China
- Chihua Fang
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Lianxin Liu
- Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China; Department of Hepatobiliary Surgery, Shengli Hospital Affiliated to University of Science and Technology of China, Hefei, China
- Fucang Jia
- Research Lab for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
11
Budhathoki S, Alsadoon A, Prasad P, Haddad S, Maag A. Augmented reality for narrow area navigation in jaw surgery: Modified tracking by detection volume subtraction algorithm. Int J Med Robot 2020; 16:e2097. [DOI: 10.1002/rcs.2097]
Affiliation(s)
- Srijana Budhathoki
- School of Computing and Mathematics, Charles Sturt University, Sydney Campus, Australia
- Department of Information Technology, Study Group Australia, Sydney Campus, Australia
- Abeer Alsadoon
- School of Computing and Mathematics, Charles Sturt University, Sydney Campus, Australia
- Department of Information Technology, Study Group Australia, Sydney Campus, Australia
- P.W.C. Prasad
- School of Computing and Mathematics, Charles Sturt University, Sydney Campus, Australia
- Department of Information Technology, Study Group Australia, Sydney Campus, Australia
- Sami Haddad
- Department of Oral and Maxillofacial Services, Greater Western Sydney Area Health Services, Sydney, Australia
- Department of Oral and Maxillofacial Services, Central Coast Area Health, Gosford, Australia
- Angelika Maag
- School of Computing and Mathematics, Charles Sturt University, Sydney Campus, Australia
- Department of Information Technology, Study Group Australia, Sydney Campus, Australia
12
Pérez-Pachón L, Poyade M, Lowe T, Gröning F. Image Overlay Surgery Based on Augmented Reality: A Systematic Review. Adv Exp Med Biol 2020; 1260:175-195. [PMID: 33211313] [DOI: 10.1007/978-3-030-47483-6_10]
Abstract
Augmented Reality (AR) applied to surgical guidance is gaining relevance in clinical practice. AR-based image overlay surgery (i.e. the accurate overlay of patient-specific virtual images onto the body surface) helps surgeons to transfer image data produced during the planning of the surgery (e.g. the correct resection margins of tissue flaps) to the operating room, thus increasing accuracy and reducing surgery times. We systematically reviewed 76 studies published between 2004 and August 2018 to explore which existing tracking and registration methods and technologies allow healthcare professionals and researchers to develop and implement these systems in-house. Most studies used non-invasive markers to automatically track a patient's position, as well as customised algorithms, tracking libraries or software development kits (SDKs) to compute the registration between patient-specific 3D models and the patient's body surface. Few studies combined the use of holographic headsets, SDKs and user-friendly game engines, and described portable and wearable systems that combine tracking, registration, hands-free navigation and direct visibility of the surgical site. Most accuracy tests included a low number of subjects and/or measurements and did not normally explore how these systems affect surgery times and success rates. We highlight the need for more procedure-specific experiments with a sufficient number of subjects and measurements and including data about surgical outcomes and patients' recovery. Validation of systems combining the use of holographic headsets, SDKs and game engines is especially interesting as this approach facilitates an easy development of mobile AR applications and thus the implementation of AR-based image overlay surgery in clinical practice.
Affiliation(s)
- Laura Pérez-Pachón
- School of Medicine, Medical Sciences and Nutrition, University of Aberdeen, Aberdeen, UK
- Matthieu Poyade
- School of Simulation and Visualisation, Glasgow School of Art, Glasgow, UK
- Terry Lowe
- School of Medicine, Medical Sciences and Nutrition, University of Aberdeen, Aberdeen, UK
- Head and Neck Oncology Unit, Aberdeen Royal Infirmary (NHS Grampian), Aberdeen, UK
- Flora Gröning
- School of Medicine, Medical Sciences and Nutrition, University of Aberdeen, Aberdeen, UK
13
Rose AS, Kim H, Fuchs H, Frahm JM. Development of augmented-reality applications in otolaryngology-head and neck surgery. Laryngoscope 2019; 129 Suppl 3:S1-S11. [PMID: 31260127] [DOI: 10.1002/lary.28098]
Abstract
OBJECTIVES/HYPOTHESIS Augmented reality (AR) allows for the addition of transparent virtual images and video to one's view of a physical environment. Our objective was to develop a head-worn, AR system for accurate, intraoperative localization of pathology and normal anatomic landmarks during open head and neck surgery. STUDY DESIGN Face validity and case study. METHODS A protocol was developed for the creation of three-dimensional (3D) virtual models based on computed tomography scans. Using the HoloLens AR platform, a novel system of registration and tracking was developed. Accuracy was determined in relation to actual physical landmarks. A face validity study was then performed in which otolaryngologists were asked to evaluate the technology and perform a simulated surgical task using AR image guidance. A case study highlighting the potential usefulness of the technology is also presented. RESULTS An AR system was developed for intraoperative 3D visualization and localization. The average error in measurement of accuracy was 2.47 ± 0.46 millimeters (1.99, 3.30). The face validity study supports the potential of this system to improve safety and efficiency in open head and neck surgical procedures. CONCLUSIONS An AR system for accurate localization of pathology and normal anatomic landmarks of the head and neck is feasible with current technology. A face validity study reveals the potential value of the system in intraoperative image guidance. This application of AR, among others in the field of otolaryngology-head and neck surgery, promises to improve surgical efficiency and patient safety in the operating room. LEVEL OF EVIDENCE 2b Laryngoscope, 129:S1-S11, 2019.
Affiliation(s)
- Austin S Rose
- Department of Otolaryngology-Head and Neck Surgery, University of North Carolina, Chapel Hill, North Carolina, U.S.A.
- Hyounghun Kim
- Department of Computer Science, University of North Carolina, Chapel Hill, North Carolina, U.S.A.
- Henry Fuchs
- Department of Computer Science, University of North Carolina, Chapel Hill, North Carolina, U.S.A.
- Jan-Michael Frahm
- Department of Computer Science, University of North Carolina, Chapel Hill, North Carolina, U.S.A.
14
Abstract
BACKGROUND One of the main challenges for modern surgery is the effective use of the many available imaging modalities and diagnostic methods. Augmented reality systems could in the future blend patient and planning information into the surgeon's view, improving the efficiency and safety of interventions. OBJECTIVE This article presents five visualization methods for integrating augmented reality displays into medical procedures and explains their advantages and disadvantages. MATERIAL AND METHODS Based on an extensive literature review, the various existing approaches for integrating augmented reality displays into medical procedures are divided into five categories, and the most important research results for each approach are presented. RESULTS A large number of mixed and augmented reality solutions for medical interventions have been developed as research prototypes; however, only very few systems have been tested on patients. CONCLUSION To integrate mixed and augmented reality displays into medical practice, highly specialized solutions need to be developed. Such systems must meet the requirements for accuracy, fidelity, ergonomics and seamless integration into the surgical workflow.
Affiliation(s)
- Ulrich Eck
- Lehrstuhl für Informatikanwendungen in der Medizin, Technische Universität München, Boltzmannstr. 3, 85748, Garching bei München, Deutschland.
- Alexander Winkler
- Lehrstuhl für Informatikanwendungen in der Medizin, Technische Universität München, Boltzmannstr. 3, 85748, Garching bei München, Deutschland.

15
Accuracy assessment for the co-registration between optical and VIVE head-mounted display tracking. Int J Comput Assist Radiol Surg 2019; 14:1207-1215. [PMID: 31069642 DOI: 10.1007/s11548-019-01992-4] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2019] [Accepted: 04/25/2019] [Indexed: 10/26/2022]
Abstract
PURPOSE We report on the development and accuracy assessment of a hybrid tracking system that integrates optical spatial tracking into a video pass-through head-mounted display. METHODS The hybrid system uses a dual-tracked co-calibration apparatus to provide a co-registration between the origins of an optical dynamic reference frame and the VIVE Pro controller through a point-based registration. This registration provides the location of optically tracked tools with respect to the VIVE controller's origin and thus the VIVE's tracking system. RESULTS The positional accuracy was assessed using a CNC machine to collect a grid of points with 25 samples per location. The positional trueness and precision for the hybrid tracking system were [Formula: see text] and [Formula: see text], respectively. The rotational accuracy was assessed through inserting a stylus tracked by all three systems into a hemispherical phantom with cylindrical openings at known angles and collecting 25 samples per cylinder for each system. The rotational trueness and precision for the hybrid tracking system were [Formula: see text] and [Formula: see text], respectively. The difference in position and rotational trueness between the OTS and the hybrid tracking system was [Formula: see text] and [Formula: see text], respectively. CONCLUSIONS We developed a hybrid tracking system that allows the pose of optically tracked surgical instruments to be known within a first-person HMD visualization system, achieving submillimeter accuracy. This research validated the positional and rotational accuracy of the hybrid tracking system and subsequently the optical tracking and VIVE tracking systems. This work provides a method to determine the position of an optically tracked surgical tool with a surgically acceptable accuracy within a low-cost commercial-grade video pass-through HMD. The hybrid tracking system provides the foundation for the continued development of virtual reality or augmented virtuality surgical navigation systems for training or practicing surgical techniques.
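The point-based co-registration between two tracker coordinate frames described in this abstract is commonly solved as a least-squares rigid alignment (the Kabsch method). The sketch below is a generic illustration of that technique, not the authors' implementation; the fiducial coordinates and the simulated transform are invented for the example.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst via the
    Kabsch method: dst ~ R @ src + t. src and dst are (N, 3) arrays of
    corresponding fiducial points measured in the two tracker frames."""
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Hypothetical fiducials seen by an optical tracker (src) and, after an
# unknown rigid motion, in the HMD controller frame (dst):
rng = np.random.default_rng(0)
src = rng.uniform(-50, 50, size=(6, 3))
angle = np.radians(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([10.0, -5.0, 2.0])
dst = src @ R_true.T + t_true

R, t = rigid_register(src, dst)
# Fiducial registration error: mean residual after applying (R, t).
fre = np.linalg.norm(src @ R.T + t - dst, axis=1).mean()
```

With noiseless correspondences the recovered transform matches the simulated one and the residual is numerically zero; with real tracker data the residual would reflect measurement noise.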
16
Effective Application of Mixed Reality Device HoloLens: Simple Manual Alignment of Surgical Field and Holograms. Plast Reconstr Surg 2019; 143:647-651. [PMID: 30688914 DOI: 10.1097/prs.0000000000005215] [Citation(s) in RCA: 47] [Impact Index Per Article: 9.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
The technology used to add information to a real visual field is known as augmented reality. Augmented reality technology that allows the displayed information to be manipulated interactively is called mixed reality. HoloLens from Microsoft, a head-mounted mixed reality device released in 2016, can stably display a precise three-dimensional model as a hologram over the real visual field. If the position and orientation of such a hologram could be accurately superimposed on the surgical field, surgical navigation-like use could be expected; however, HoloLens offers no built-in function for this. The authors devised a method that aligns the surgical field and holograms precisely within a short time using a simple manual operation. The mechanism is to match three points on the hologram to corresponding marking points on the body surface. Because any of the three points can be selected as the pivot/axis of the hologram's rotational movement, manual alignment becomes very easy. The alignment between the surgical field and the hologram was good and thus contributed to intraoperative objective judgment. The method of this study will expand the clinical usefulness of the mixed reality device HoloLens.
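Rotating a hologram about a selected anchor point, as described above, amounts to translating the pivot to the origin, applying the rotation, and translating back. A minimal sketch of that geometry (the pivot, axis, and point coordinates are illustrative, not taken from the paper):

```python
import numpy as np

def rotate_about_pivot(points, pivot, axis, angle_rad):
    """Rotate (N, 3) points about an arbitrary pivot: shift the pivot to
    the origin, apply the axis-angle rotation (Rodrigues' formula), and
    shift back, so the pivot itself stays fixed."""
    k = np.asarray(axis, float)
    k = k / np.linalg.norm(k)
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    R = np.eye(3) + np.sin(angle_rad) * K + (1 - np.cos(angle_rad)) * (K @ K)
    return (points - pivot) @ R.T + pivot

# Example: pin the first of three fiducials and rotate the other two by
# 90 degrees about the vertical axis through it.
pts = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0]])
out = rotate_about_pivot(pts, pivot=pts[0], axis=[0, 0, 1],
                         angle_rad=np.pi / 2)
# out[0] stays at the pivot; out[1] -> (0, 1, 0); out[2] -> (-1, 0, 0)
```

Letting the operator pick which matched point serves as the pivot, as the paper does, simply changes the `pivot` argument while the rotation math stays the same.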
17
Meulstee JW, Nijsink J, Schreurs R, Verhamme LM, Xi T, Delye HHK, Borstlap WA, Maal TJJ. Toward Holographic-Guided Surgery. Surg Innov 2018; 26:86-94. [DOI: 10.1177/1553350618799552] [Citation(s) in RCA: 56] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
The implementation of augmented reality (AR) in image-guided surgery (IGS) can improve surgical interventions by presenting the image data directly on the patient at the correct position and in the actual orientation. This approach can resolve the switching focus problem, which occurs in conventional IGS systems when the surgeon has to look away from the operation field to consult the image data on a 2-dimensional screen. The Microsoft HoloLens, a head-mounted AR display, was combined with an optical navigation system to create an AR-based IGS system. Experiments were performed on a phantom model to determine the accuracy of the complete system and to evaluate the effect of adding AR. The results demonstrated a mean Euclidean distance of 2.3 mm with a maximum error of 3.5 mm for the complete system. Adding AR visualization to a conventional system increased the mean error by 1.6 mm. The introduction of AR in IGS was promising. The presented system provided a solution for the switching focus problem and created a more intuitive guidance system. With a further reduction in the error and more research to optimize the visualization, many surgical applications could benefit from the advantages of AR guidance.
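Accuracy figures like the mean Euclidean distance and maximum error reported above are typically computed from paired measured and ground-truth landmark positions on the phantom. A sketch of that computation with invented numbers (not the study's data):

```python
import numpy as np

# Hypothetical phantom landmarks: ground-truth positions vs. positions
# indicated through the AR guidance system (millimeters, invented).
truth = np.array([[0.0, 0.0, 0.0],
                  [10.0, 0.0, 0.0],
                  [0.0, 10.0, 0.0]])
measured = truth + np.array([[1.0, 0.0, 0.0],
                             [0.0, 2.0, 0.0],
                             [0.0, 0.0, 3.0]])

# Per-landmark Euclidean error, then the summary statistics.
errors = np.linalg.norm(measured - truth, axis=1)
mean_err = errors.mean()   # 2.0 mm for these invented offsets
max_err = errors.max()     # 3.0 mm
```

The same per-landmark error vector also supports the paper's comparison between the conventional system and the AR-augmented one (e.g., the reported 1.6 mm increase in mean error).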
Affiliation(s)
- Johan Nijsink
- Radboud University Medical Center, Nijmegen, Netherlands
- Ruud Schreurs
- Radboud University Medical Center, Nijmegen, Netherlands
- Academic Medical Center, Amsterdam, Netherlands
- Tong Xi
- Radboud University Medical Center, Nijmegen, Netherlands

18
Wang R, Zhang M, Meng X, Geng Z, Wang FY. 3-D Tracking for Augmented Reality Using Combined Region and Dense Cues in Endoscopic Surgery. IEEE J Biomed Health Inform 2018; 22:1540-1551. [DOI: 10.1109/jbhi.2017.2770214] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
19
Song C, Jeon S, Lee S, Ha HG, Kim J, Hong J. Augmented reality-based electrode guidance system for reliable electroencephalography. Biomed Eng Online 2018; 17:64. [PMID: 29793498 PMCID: PMC5968572 DOI: 10.1186/s12938-018-0500-x] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2017] [Accepted: 05/17/2018] [Indexed: 11/10/2022] Open
Abstract
Background In longitudinal electroencephalography (EEG) studies, repeatable electrode positioning is essential for reliable EEG assessment. Conventional methods use anatomical landmarks as fiducial locations for electrode placement. Because the landmarks are identified manually, the EEG assessment is inevitably unreliable owing to individual variations among subjects and examiners. To overcome this unreliability, an augmented reality (AR) visualization-based electrode guidance system was proposed. Methods The proposed electrode guidance system uses AR visualization to replace manual electrode positioning. After the facial surface of a subject is scanned and registered with an RGB-D camera, an AR overlay of the initial electrode positions, serving as reference positions, is superimposed on the current electrode positions in real time. The system can thus guide the placement of subsequent electrodes with high repeatability. Results Experimental results with a phantom show that the repeatability of electrode positioning was improved compared with the conventional 10–20 positioning system. Conclusion The proposed AR guidance system improves electrode positioning performance with a cost-effective setup that requires only an RGB-D camera, and can be used as an alternative to the international 10–20 system.
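Repeatability of electrode placement, as compared above, can be summarized as the spread of repeated positions around each electrode's centroid. The metric below is one common way to express this; it is a generic sketch with invented coordinates, not the paper's evaluation protocol.

```python
import numpy as np

def placement_repeatability(trials):
    """trials: (T, E, 3) array of E electrode positions over T repeated
    sessions. Returns the mean distance of each placement from that
    electrode's centroid, averaged over all electrodes and sessions
    (lower = more repeatable)."""
    centroids = trials.mean(axis=0)                      # (E, 3)
    dists = np.linalg.norm(trials - centroids, axis=2)   # (T, E)
    return dists.mean()

# Invented example: 3 sessions, 2 electrodes, positions in millimeters.
trials = np.array([
    [[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]],
    [[2.0, 0.0, 0.0], [10.0, 2.0, 0.0]],
    [[1.0, 0.0, 0.0], [10.0, 1.0, 0.0]],
])
score = placement_repeatability(trials)  # 2/3 mm for these values
```

Comparing this score between AR-guided and landmark-guided sessions would quantify the repeatability improvement the abstract reports.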
Affiliation(s)
- Chanho Song
- Department of Robotics Engineering, DGIST, Techno jungang-daero, Daegu, Republic of Korea
- Sangseo Jeon
- Department of Robotics Engineering, DGIST, Techno jungang-daero, Daegu, Republic of Korea
- Seongpung Lee
- Department of Robotics Engineering, DGIST, Techno jungang-daero, Daegu, Republic of Korea
- Ho-Gun Ha
- Department of Robotics Engineering, DGIST, Techno jungang-daero, Daegu, Republic of Korea
- Jonghyun Kim
- Department of Robotics Engineering, DGIST, Techno jungang-daero, Daegu, Republic of Korea
- Jaesung Hong
- Department of Robotics Engineering, DGIST, Techno jungang-daero, Daegu, Republic of Korea.

20
Soft-Body Registration of Pre-operative 3D Models to Intra-operative RGBD Partial Body Scans. MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION – MICCAI 2018 2018. [DOI: 10.1007/978-3-030-00937-3_5] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
21

22
Intraoperative Evaluation of Body Surface Improvement by an Augmented Reality System That a Clinician Can Modify. PLASTIC AND RECONSTRUCTIVE SURGERY-GLOBAL OPEN 2017; 5:e1432. [PMID: 28894655 PMCID: PMC5585428 DOI: 10.1097/gox.0000000000001432] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2017] [Accepted: 06/09/2017] [Indexed: 01/01/2023]
Abstract
BACKGROUND Augmented reality (AR) technology, which can combine computer-generated images with a real scene, has recently been reported in the medical field. We devised an AR system for evaluating improvements of the body surface, which is important in plastic surgery. METHODS We constructed an AR system that is easy to modify by combining existing devices and free software. We superimposed three-dimensional (3D) images of the body surface and bone (obtained from VECTRA H1 and CT) onto the actual surgical field using Moverio BT-200 smart glasses and evaluated improvements of the body surface in 8 cases. RESULTS In all cases, the 3D image was successfully projected onto the surgical field. Improvements to the display method made it easier to distinguish the shapes in the 3D image from those in the surgical field, simplifying comparison. In a patient with fibrous dysplasia, the symmetrized body surface image was useful for confirming improvement of the real body surface. In a patient with a complex facial fracture, the simulated bone image was a useful reference for reduction. In a patient with an osteoma of the forehead, simultaneously displayed images of the body surface and the bone made their positional relationship easier to understand. CONCLUSIONS This study confirmed that AR technology is helpful for evaluating the body surface in several clinical applications. Our findings are useful not only for body surface evaluation but also for the effective utilization of AR technology in the field of plastic surgery.
23
The status of augmented reality in laparoscopic surgery as of 2016. Med Image Anal 2017; 37:66-90. [DOI: 10.1016/j.media.2017.01.007] [Citation(s) in RCA: 183] [Impact Index Per Article: 26.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2016] [Revised: 01/16/2017] [Accepted: 01/23/2017] [Indexed: 12/27/2022]