1
Gu W, Martin-Gomez A, Cho SM, Osgood G, Bracke B, Josewski C, Knopf J, Unberath M. The impact of visualization paradigms on the detectability of spatial misalignment in mixed reality surgical guidance. Int J Comput Assist Radiol Surg 2022; 17:921-927. [DOI: 10.1007/s11548-022-02602-6]
2
Strzeletz S, Hazubski S, Moctezuma JL, Hoppe H. Fast, robust, and accurate monocular peer-to-peer tracking for surgical navigation. Int J Comput Assist Radiol Surg 2020; 15:479-489. [PMID: 31950410 PMCID: PMC7036064 DOI: 10.1007/s11548-019-02111-z]
Abstract
Purpose This work presents a new monocular peer-to-peer tracking concept that overcomes the distinction between tracking tools and tracked tools in optical navigation systems. A marker model concept based on marker triplets, combined with a fast and robust algorithm for assigning image feature points to the corresponding markers of the tracker, is introduced, along with a new and fast algorithm for pose estimation. Methods A peer-to-peer tracker consists of seven markers, which can be tracked by other peers, and one camera, which is used to track the position and orientation of other peers. The special marker layout enables a fast and robust algorithm for assigning image feature points to the correct markers. The iterative pose estimation algorithm is based on point-to-line matching with Lagrange–Newton optimization and does not rely on initial guesses. Uniformly distributed quaternions in 4D (the vertices of a hexacosichoron) are used as starting points and always provide the global minimum. Results Experiments have shown that the marker assignment algorithm robustly assigns image feature points to the correct markers even under challenging conditions. The pose estimation algorithm is fast and robust and always finds the correct pose of the trackers. Image processing, marker assignment, and pose estimation for two trackers are handled in less than 18 ms on an Intel i7-6700 desktop computer at 3.4 GHz. Conclusion The new peer-to-peer tracking concept is a valuable approach to a decentralized navigation system that offers more freedom in the operating room while providing accurate, fast, and robust results.
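The multi-start pose search described in this abstract can be sketched in a few lines. This is an illustrative approximation only, not the authors' implementation: it draws random uniformly distributed unit quaternions instead of the paper's fixed hexacosichoron vertex set, scores each start with a point-to-line cost (translation omitted for brevity), and keeps the best candidate; all names are hypothetical and the Lagrange–Newton refinement is left out.

```python
import math
import random

def quat_to_rot(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(y*z + w*x),     2*(x*z - w*y),     1 - 2*(x*x + y*y)],
    ]

def random_unit_quat(rng):
    """Uniform sample on the 3-sphere via normalized Gaussian components."""
    q = [rng.gauss(0.0, 1.0) for _ in range(4)]
    n = math.sqrt(sum(c * c for c in q))
    return [c / n for c in q]

def point_to_line_cost(q, model_pts, rays):
    """Sum of squared distances from rotated model points to viewing rays
    (unit vectors) through the camera origin; translation is omitted."""
    R = quat_to_rot(q)
    cost = 0.0
    for p, d in zip(model_pts, rays):
        rp = [sum(R[i][j] * p[j] for j in range(3)) for i in range(3)]
        along = sum(rp[i] * d[i] for i in range(3))  # projection onto the ray
        cost += sum((rp[i] - along * d[i]) ** 2 for i in range(3))
    return cost

def estimate_rotation(model_pts, rays, starts=60, seed=0):
    """Evaluate many well-spread starting rotations and keep the best one.
    A real implementation would refine each start with Newton-type steps."""
    rng = random.Random(seed)
    best_q, best_cost = None, float("inf")
    for _ in range(starts):
        q = random_unit_quat(rng)
        c = point_to_line_cost(q, model_pts, rays)
        if c < best_cost:
            best_q, best_cost = q, c
    return best_q, best_cost
```

The appeal of well-spread starting orientations is that at least one candidate lands in the basin of the global minimum, so the subsequent local optimization cannot get trapped in a wrong pose.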
Affiliation(s)
- Simon Strzeletz
- Department of Electrical Engineering, Medical Engineering and Computer Science, Offenburg University, Badstraße 24, 77652 Offenburg, Germany
- Simon Hazubski
- Department of Electrical Engineering, Medical Engineering and Computer Science, Offenburg University, Badstraße 24, 77652 Offenburg, Germany
- José-Luis Moctezuma
- Stryker Leibinger GmbH & Co. KG, Bötzinger Str. 39–41, 79111 Freiburg im Breisgau, Germany
- Harald Hoppe
- Department of Electrical Engineering, Medical Engineering and Computer Science, Offenburg University, Badstraße 24, 77652 Offenburg, Germany
3
Pérez-Pachón L, Poyade M, Lowe T, Gröning F. Image Overlay Surgery Based on Augmented Reality: A Systematic Review. Adv Exp Med Biol 2020; 1260:175-195. [PMID: 33211313 DOI: 10.1007/978-3-030-47483-6_10]
Abstract
Augmented Reality (AR) applied to surgical guidance is gaining relevance in clinical practice. AR-based image overlay surgery (i.e. the accurate overlay of patient-specific virtual images onto the body surface) helps surgeons to transfer image data produced during the planning of the surgery (e.g. the correct resection margins of tissue flaps) to the operating room, thus increasing accuracy and reducing surgery times. We systematically reviewed 76 studies published between 2004 and August 2018 to explore which existing tracking and registration methods and technologies allow healthcare professionals and researchers to develop and implement these systems in-house. Most studies used non-invasive markers to automatically track a patient's position, as well as customised algorithms, tracking libraries or software development kits (SDKs) to compute the registration between patient-specific 3D models and the patient's body surface. Few studies combined the use of holographic headsets, SDKs and user-friendly game engines, and described portable and wearable systems that combine tracking, registration, hands-free navigation and direct visibility of the surgical site. Most accuracy tests included a low number of subjects and/or measurements and typically did not explore how these systems affect surgery times and success rates. We highlight the need for more procedure-specific experiments with a sufficient number of subjects and measurements, including data about surgical outcomes and patients' recovery. Validation of systems combining holographic headsets, SDKs and game engines is especially interesting, as this approach facilitates easy development of mobile AR applications and thus the implementation of AR-based image overlay surgery in clinical practice.
Affiliation(s)
- Laura Pérez-Pachón
- School of Medicine, Medical Sciences and Nutrition, University of Aberdeen, Aberdeen, UK.
- Matthieu Poyade
- School of Simulation and Visualisation, Glasgow School of Art, Glasgow, UK
- Terry Lowe
- School of Medicine, Medical Sciences and Nutrition, University of Aberdeen, Aberdeen, UK
- Head and Neck Oncology Unit, Aberdeen Royal Infirmary (NHS Grampian), Aberdeen, UK
- Flora Gröning
- School of Medicine, Medical Sciences and Nutrition, University of Aberdeen, Aberdeen, UK
4
Wang T, Zheng B. 3D presentation in surgery: a review of technology and adverse effects. J Robot Surg 2018; 13:363-370. [PMID: 30847653 DOI: 10.1007/s11701-018-00900-3]
Abstract
A systematic review was undertaken to assess the technology used to create stereovision for human perception. Adverse effects associated with artificial stereoscopic technology were reviewed with an emphasis on the impact on surgical performance in the operating room. MEDLINE/PubMed library databases were used to identify literature published up to August 2017. In the past 60 years, four major types of technologies have been used for reconstructing stereo images: anaglyph, polarization, active shutter, and autostereoscopy. As none of them can perfectly duplicate our natural stereoperception, user exposure to this artificial environment for a period of time can lead to a series of psychophysiological responses including nausea, dizziness, and others. The exact mechanism underlying these symptoms is not clear. Neurophysiologic evidence suggests that the visuo-vestibular pathway plays a vital role in coupling unnatural visual inputs to autonomic neural responses. When stereoscopic technology was used in surgical environments, conflicting results were reported. Although recent advances in stereoscopy are promising, no definitive evidence has yet been presented to support that stereoscopes can enhance surgical performance in image-guided surgery. Stereoscopic technology has been rapidly introduced to healthcare. Adverse effects on human operators caused by immature technology seem inevitable. The impact on surgeons working with this visualization system needs to be explored, and its safety and feasibility need to be addressed.
Affiliation(s)
- Tianqi Wang
- Surgical Simulation Research Lab, Department of Surgery, Faculty of Medicine and Dentistry, University of Alberta, 162 Heritage Medical Research Centre, 112 St. NW, Edmonton, AB, T6G 2E1, Canada
- Bin Zheng
- Surgical Simulation Research Lab, Department of Surgery, Faculty of Medicine and Dentistry, University of Alberta, 162 Heritage Medical Research Centre, 112 St. NW, Edmonton, AB, T6G 2E1, Canada.
5
Vignali G, Bertolini M, Bottani E, Di Donato L, Ferraro A, Longo F. Design and Testing of an Augmented Reality Solution to Enhance Operator Safety in the Food Industry. Int J Food Eng 2018. [DOI: 10.1515/ijfe-2017-0122]
Abstract
Augmented reality (AR) systems help users perform tasks and operations in man–machine interaction, by adding virtual information (such as live-video streams, pictures or instructions) to the real-world environment. This paper describes the design and testing of an AR solution created to enhance the safety of employees when carrying out maintenance tasks on a food processing machine. The machine which was analyzed is a hot-break juice extractor used to obtain juice from fruits and vegetables by separating out seeds and peel. The maintenance task for which the AR system is intended involves cleaning the machine’s porous sieves or substituting them with clean replacements and should be carried out at least every 12 hours while the plant is in operation. The paper discusses the main steps involved in developing the AR solution, its testing in the real operating environment and the expected pros/cons of its implementation and use.
6
The status of augmented reality in laparoscopic surgery as of 2016. Med Image Anal 2017; 37:66-90. [DOI: 10.1016/j.media.2017.01.007]
7
Abstract
Minimally invasive surgery (MIS) poses visual challenges to surgeons. In MIS, binocular disparity is not freely available, and surgeons are required to mentally rebuild the 3-dimensional (3D) patient anatomy from a limited number of monoscopic visual cues. The insufficient depth cues in the MIS environment can cause surgeons to misjudge spatial depth, which can lead to performance errors, thus jeopardizing patient safety. In this article, we will first discuss natural human depth perception by exploring the main depth cues available to surgeons in open procedures. Subsequently, we will reveal which depth cues are lost in MIS and how surgeons compensate for the incomplete depth presentation. Next, we will further expand this knowledge by exploring some of the available solutions for improving depth presentation to surgeons. Here we will review innovative approaches (multiple 2D camera assembly, shadow introduction) and devices (3D monitors, head-mounted devices, and auto-stereoscopic monitors) for 3D image presentation from the past few years.
Affiliation(s)
- Bin Zheng
- University of Alberta, Edmonton, Alberta, Canada
8
Design and validation of an augmented reality system for laparoscopic surgery in a real environment. Biomed Res Int 2013; 2013:758491. [PMID: 24236293 PMCID: PMC3819885 DOI: 10.1155/2013/758491]
Abstract
Purpose. This work presents the protocol carried out in the development and validation of an augmented reality system which was installed in an operating theatre to help surgeons with trocar placement during laparoscopic surgery. The purpose of this validation is to demonstrate the improvements that this system can provide to the field of medicine, particularly surgery. Method. Two experiments that were noninvasive for both the patient and the surgeon were designed. In one of these experiments the augmented reality system was used; the other was the control experiment, in which the system was not used. The type of operation selected for all cases was a cholecystectomy, due to the low degree of complexity and complications before, during, and after the surgery. The technique used in the placement of trocars was the French technique, but the results can be extrapolated to any other technique and operation. Results and Conclusion. Four clinicians were involved in these experiments, and ninety-six measurements were obtained from twenty-four patients (randomly assigned to each experiment). The final results show an improvement in accuracy and variability of 33% and 63%, respectively, in comparison to traditional methods, demonstrating that the use of an augmented reality system offers advantages for trocar placement in laparoscopic surgery.
9
Kersten-Oertel M, Jannin P, Collins DL. The state of the art of visualization in mixed reality image guided surgery. Comput Med Imaging Graph 2013; 37:98-112. [PMID: 23490236 DOI: 10.1016/j.compmedimag.2013.01.009]
Abstract
This paper presents a review of the state of the art of visualization in mixed reality image guided surgery (IGS). We used the DVV (data, visualization processing, view) taxonomy to classify a large unbiased selection of publications in the field. The goal of this work was not only to give an overview of current visualization methods and techniques in IGS but more importantly to analyze the current trends and solutions used in the domain. In surveying the current landscape of mixed reality IGS systems, we identified a strong need to assess which of the many possible data sets should be visualized at particular surgical steps, to focus on novel visualization processing techniques and interface solutions, and to evaluate new systems.
Affiliation(s)
- Marta Kersten-Oertel
- Department of Biomedical Engineering, McGill University, McConnell Brain Imaging Center, Montreal Neurological Institute, Montréal, Canada.
10
Sauer F. Image registration: enabling technology for image guided surgery and therapy. Conf Proc IEEE Eng Med Biol Soc 2005; 2005:7242-5. [PMID: 17281951 DOI: 10.1109/iembs.2005.1616182]
Abstract
Imaging looks inside the patient's body, exposing the patient's anatomy beyond what is visible on the surface. Medical imaging has a very successful history in medical diagnosis. It also plays an increasingly important role as an enabling technology for minimally invasive procedures. Interventional procedures (e.g. catheter-based cardiac interventions) are traditionally supported by intra-procedure imaging (X-ray fluoroscopy, ultrasound), which provides real-time feedback but limited information. Surgical procedures are traditionally supported by pre-operative images (CT, MR), whose quality can be very good; however, the link between the images and the patient has been lost. In both cases, image registration can play an essential role: augmenting intra-op images with pre-op images, and mapping pre-op images onto the patient's body. We will present examples of both approaches from an application-oriented perspective, covering electrophysiology, radiation therapy, and neurosurgery. Ultimately, as the boundaries between interventional radiology and surgery blur, the different methods of image guidance will also merge. Image guidance will draw upon a combination of pre-op and intra-op imaging together with magnetic or optical tracking systems, and enable precise minimally invasive procedures. The information is registered into a common coordinate system, allowing advanced methods for visualization, such as augmented reality, and for therapy delivery, such as robotics.
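The core step this abstract describes, registering information into a common coordinate system, can be illustrated with point-based rigid registration. This is a generic sketch, not the method from the talk, and all names are hypothetical: it solves the 2D least-squares case in closed form, whereas clinical systems solve the 3D problem (typically with an SVD-based method) and often register images rather than point sets.

```python
import math

def rigid_register_2d(src, dst):
    """Least-squares rotation angle and translation mapping src points
    onto corresponding dst points (closed-form 2D solution)."""
    n = len(src)
    cs = [sum(p[i] for p in src) / n for i in (0, 1)]  # source centroid
    cd = [sum(p[i] for p in dst) / n for i in (0, 1)]  # target centroid
    # Accumulate cross- and dot-products of centered correspondences.
    s_cross = s_dot = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - cs[0], sy - cs[1]
        bx, by = dx - cd[0], dy - cd[1]
        s_cross += ax * by - ay * bx
        s_dot += ax * bx + ay * by
    theta = math.atan2(s_cross, s_dot)
    c, s = math.cos(theta), math.sin(theta)
    # Translation aligns the rotated source centroid with the target centroid.
    t = (cd[0] - (c * cs[0] - s * cs[1]), cd[1] - (s * cs[0] + c * cs[1]))
    return theta, t

def apply_2d(theta, t, p):
    """Apply the recovered rigid transform to a single point."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1] + t[0], s * p[0] + c * p[1] + t[1])
```

Once such a transform is known, pre-op coordinates (e.g. a planned target) can be mapped into the intra-op frame, which is exactly the "common coordinate system" role registration plays in image guidance.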
11
Kersten-Oertel M, Jannin P, Collins DL. DVV: a taxonomy for mixed reality visualization in image guided surgery. IEEE Trans Vis Comput Graph 2012; 18:332-352. [PMID: 21383411 DOI: 10.1109/tvcg.2011.50]
Abstract
Mixed reality visualizations are increasingly studied for use in image guided surgery (IGS) systems, yet few mixed reality systems have been introduced for daily use in the operating room (OR). This may be the result of several factors: the systems are developed from a technical perspective, are rarely evaluated in the field, and/or lack consideration of the end user and the constraints of the OR. We introduce the Data, Visualization processing, View (DVV) taxonomy, which defines each of the major components required to implement a mixed reality IGS system. We propose that these components be considered and used as validation criteria for introducing a mixed reality IGS system into the OR. A taxonomy of IGS visualization systems is a step toward developing a common language that will help developers and end users discuss and understand the constituents of a mixed reality visualization system, facilitating a greater presence of future systems in the OR. We evaluate the DVV taxonomy based on its goodness of fit and completeness, and demonstrate its utility by classifying 17 state-of-the-art research papers in the domain of mixed reality visualization IGS systems. Our classification shows that few components of IGS visualization systems have been validated, and even fewer have been evaluated.
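As a reading aid only, the three DVV components named in this abstract can be captured in a small record so that a system description can be checked for unspecified components. The field names and the completeness check are this sketch's assumptions, not the paper's formal taxonomy.

```python
from dataclasses import dataclass

@dataclass
class DVVDescription:
    """Minimal record of a mixed reality IGS system along the DVV axes."""
    data: list                      # e.g. ["preoperative CT", "tool pose"]
    visualization_processing: list  # e.g. ["volume rendering", "color coding"]
    view: list                      # e.g. ["HMD overlay", "external monitor"]

    def missing_components(self):
        """Names of DVV components the description leaves unspecified."""
        return [name for name, values in [
            ("data", self.data),
            ("visualization processing", self.visualization_processing),
            ("view", self.view),
        ] if not values]
```

For example, a system described only by its imaging data and display would report "visualization processing" as missing, mirroring the paper's point that validation should cover every component.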
Affiliation(s)
- Marta Kersten-Oertel
- McConnell Brain Imaging Center at the Montreal Neurological Institute (MNI), 3801 University St, Montréal, QC H3A 2B4, Canada.
12
Stoyanov D, Mylonas GP, Lerotic M, Chung AJ, Yang GZ. Intra-Operative Visualizations: Perceptual Fidelity and Human Factors. J Disp Technol 2008. [DOI: 10.1109/jdt.2008.926497]
13
Baumhauer M, Feuerstein M, Meinzer HP, Rassweiler J. Navigation in Endoscopic Soft Tissue Surgery: Perspectives and Limitations. J Endourol 2008; 22:751-66. [PMID: 18366319 DOI: 10.1089/end.2007.9827]
Affiliation(s)
- Matthias Baumhauer
- Division of Medical and Biological Informatics, German Cancer Research Center, Heidelberg, Germany
- Marco Feuerstein
- Computer Aided Medical Procedures (CAMP), Technical University Munich (TUM), Munich, Germany
- Hans-Peter Meinzer
- Division of Medical and Biological Informatics, German Cancer Research Center, Heidelberg, Germany
- J. Rassweiler
- Department of Urology, Clinic Heilbronn, University of Heidelberg, Heilbronn, Germany