1
Ramalhinho J, Yoo S, Dowrick T, Koo B, Somasundaram M, Gurusamy K, Hawkes DJ, Davidson B, Blandford A, Clarkson MJ. The value of Augmented Reality in surgery - A usability study on laparoscopic liver surgery. Med Image Anal 2023; 90:102943. [PMID: 37703675] [PMCID: PMC10958137] [DOI: 10.1016/j.media.2023.102943]
Abstract
Augmented Reality (AR) is considered to be a promising technology for the guidance of laparoscopic liver surgery. By overlaying pre-operative 3D information of the liver and internal blood vessels on the laparoscopic view, surgeons can better understand the location of critical structures. In an effort to enable AR, several authors have focused on the development of methods to obtain an accurate alignment between the laparoscopic video image and the pre-operative 3D data of the liver, without assessing the benefit that the resulting overlay can provide during surgery. In this paper, we present a study that aims to assess quantitatively and qualitatively the value of an AR overlay in laparoscopic surgery during a simulated surgical task on a phantom setup. We design a study where participants are asked to physically localise pre-operative tumours in a liver phantom using three image guidance conditions - a baseline condition without any image guidance, a condition where the 3D surfaces of the liver are aligned to the video and displayed on a black background, and a condition where video see-through AR is displayed on the laparoscopic video. Using data collected from a cohort of 24 participants, including 12 surgeons, we observe that compared to the baseline, AR decreases the median localisation error of surgeons on non-peripheral targets from 25.8 mm to 9.2 mm. Using subjective feedback, we also identify that AR introduces usability improvements in the surgical task and increases the perceived confidence of the users. Between the two tested displays, the majority of participants preferred to use the AR overlay instead of the navigated view of the 3D surfaces on a separate screen. We conclude that AR has the potential to improve performance and decision making in laparoscopic surgery, and that improvements in overlay alignment accuracy and depth perception should be pursued in the future.
Affiliation(s)
- João Ramalhinho
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Soojeong Yoo
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom; UCL Interaction Centre, University College London, London, United Kingdom
- Thomas Dowrick
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Bongjin Koo
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Murali Somasundaram
- Division of Surgery and Interventional Sciences, University College London, London, United Kingdom
- Kurinchi Gurusamy
- Division of Surgery and Interventional Sciences, University College London, London, United Kingdom
- David J Hawkes
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Brian Davidson
- Division of Surgery and Interventional Sciences, University College London, London, United Kingdom
- Ann Blandford
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom; UCL Interaction Centre, University College London, London, United Kingdom
- Matthew J Clarkson
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
2
Román-Belmonte JM, Rodríguez-Merchán EC, De la Corte-Rodríguez H. Metaverse applied to musculoskeletal pathology: Orthoverse and Rehabverse. Postgrad Med 2023:1-9. [PMID: 36786393] [DOI: 10.1080/00325481.2023.2180953]
Abstract
The Metaverse is 'an integrated network of 3D virtual worlds.' It incorporates digitally created realities into the real world, involves virtual copies of existing places and changes the physical reality by superimposing digital aspects, allowing its users to interact with these elements in an immersive, real-time experience. The applications of the Metaverse are numerous, with an increasing number of experiences in the field of musculoskeletal disease management. In the field of medical training, the Metaverse can help facilitate the learning experience and help develop complex clinical skills. In clinical care, the Metaverse can help clinicians perform orthopedic surgery more accurately and safely and can improve pain management, the performance of rehabilitation techniques and the promotion of healthy lifestyles. Virtualization can also optimize aspects of healthcare information and management, increasing the effectiveness of procedures and the functioning of organizations. This optimization can be especially relevant in departments that are under significant care provider pressure. However, we must not lose sight of the fundamental challenges that still need to be solved, such as ensuring patient privacy and fairness. Several studies are underway to assess the feasibility and safety of the Metaverse.
Affiliation(s)
- Juan M Román-Belmonte
- Department of Physical Medicine and Rehabilitation, Cruz Roja San José y Santa Adela University Hospital, Madrid, Spain
- E Carlos Rodríguez-Merchán
- Department of Orthopedic Surgery, La Paz University Hospital, Madrid, Spain; Osteoarticular Surgery Research, Hospital La Paz Institute for Health Research - IdiPAZ (La Paz University Hospital - Autonomous University of Madrid), Madrid, Spain
3
Jiang J, Zhang J, Sun J, Wu D, Xu S. User's image perception improved strategy and application of augmented reality systems in smart medical care: A review. Int J Med Robot 2023; 19:e2497. [PMID: 36629798] [DOI: 10.1002/rcs.2497]
Abstract
BACKGROUND Augmented reality (AR) is a new human-computer interaction technology that combines virtual reality, computer vision, and computer networks. As the medical field rapidly advances towards intelligence and data visualisation, AR systems are becoming increasingly popular because they can provide doctors with sufficiently clear medical images and accurate image navigation in practical applications. However, it has been found that different display types of AR systems affect doctors' perception of the image after virtual-real fusion in actual medical applications. If doctors cannot correctly perceive the image, they may be unable to match the virtual information to the real world correctly, which significantly impairs their ability to recognise complex structures. METHODS This paper uses CiteSpace, a literature analysis tool, to visualise and analyse research hotspots in the use of AR systems in the medical field. RESULTS A visual analysis of the 1163 articles retrieved from the Web of Science Core Collection database reveals that display technology and visualisation technology are currently the key research directions for AR systems. CONCLUSION This paper categorises AR systems by their display principles, reviews current image perception optimisation schemes for each type of system, and analyses and compares the different display types based on their practical applications in smart medical care, so that doctors can select the appropriate display type for a given application scenario. Finally, the future development of AR display technology is anticipated so that AR can be applied more effectively in smart medical care. The advancement of display technology is critical for the use of AR systems in the medical field, and the advantages and disadvantages of the various display types should be weighed in each application scenario to select the best AR system.
Affiliation(s)
- Jingang Jiang
- Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin, Heilongjiang, China; Robotics & Its Engineering Research Center, Harbin University of Science and Technology, Harbin, Heilongjiang, China
- Jiawei Zhang
- Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin, Heilongjiang, China
- Jianpeng Sun
- Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin, Heilongjiang, China
- Dianhao Wu
- Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin, Heilongjiang, China
- Shuainan Xu
- Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin, Heilongjiang, China
4
Han B, Li R, Huang T, Ma L, Liang H, Zhang X, Liao H. An accurate 3D augmented reality navigation system with enhanced autostereoscopic display for oral and maxillofacial surgery. Int J Med Robot 2022; 18:e2404. [DOI: 10.1002/rcs.2404]
Affiliation(s)
- Boxuan Han
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Ruiyang Li
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Tianqi Huang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Longfei Ma
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Hanying Liang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Xinran Zhang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Hongen Liao
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
5
Ferrari V, Cattari N, Fontana U, Cutolo F. Parallax Free Registration for Augmented Reality Optical See-Through Displays in the Peripersonal Space. IEEE Trans Vis Comput Graph 2022; 28:1608-1618. [PMID: 32881688] [DOI: 10.1109/tvcg.2020.3021534]
Abstract
Egocentric augmented reality (AR) interfaces are quickly becoming a key asset for assisting high precision activities in the peripersonal space in several application fields. In these applications, accurate and robust registration of computer-generated information to the real scene is hard to achieve with traditional Optical See-Through (OST) displays given that it relies on the accurate calibration of the combined eye-display projection model. The calibration is required to efficiently estimate the projection parameters of the pinhole model that encapsulate the optical features of the display and whose values vary according to the position of the user's eye. In this article, we describe an approach that prevents any parallax-related AR misregistration at a pre-defined working distance in OST displays with infinity focus; our strategy relies on the use of a magnifier placed in front of the OST display, and features a proper parameterization of the virtual rendering camera achieved through a dedicated calibration procedure that accounts for the contribution of the magnifier. We model the registration error due to the viewpoint parallax outside the ideal working distance. Finally, we validate our strategy on an OST display, and we show that sub-millimetric registration accuracy can be achieved for working distances of ±100 mm around the focal length of the magnifier.
6
Augmented Reality (AR) in Orthopedics: Current Applications and Future Directions. Curr Rev Musculoskelet Med 2021; 14:397-405. [PMID: 34751894] [DOI: 10.1007/s12178-021-09728-1]
Abstract
PURPOSE OF REVIEW Imaging technologies (X-ray, CT, MRI, and ultrasound) have revolutionized orthopedic surgery, allowing for the more efficient diagnosis, monitoring, and treatment of musculoskeletal ailments. The current review investigates recent literature surrounding the impact of augmented reality (AR) imaging technologies on orthopedic surgery. In particular, it investigates the impact that AR technologies may have on provider cognitive burden, operative times, occupational radiation exposure, and surgical precision and outcomes. RECENT FINDINGS Many AR technologies have been shown to lower provider cognitive burden and reduce operative time and radiation exposure while improving surgical precision in pre-clinical cadaveric and sawbones models. So far, only a few platforms focusing on pedicle screw placement have been approved by the FDA. These technologies have been implemented clinically with mixed results when compared to traditional free-hand approaches. It remains to be seen if current AR technologies can deliver upon their multitude of promises, and the ability to do so seems contingent upon continued technological progress. Additionally, the impact of these platforms will likely be highly conditional on clinical indication and provider type. It remains unclear if AR will be broadly accepted and utilized or if it will be reserved for niche indications where it adds significant value. One thing is clear: orthopedics' high utilization of pre- and intra-operative imaging, combined with the relative ease of tracking rigid structures like bone as compared to soft tissues, has made it the clear beachhead market for AR technologies in medicine.
7
Maleki M, Tehrani AF, Aray A, Ranjbar M. Intramedullary nail holes laser indicator, a non-invasive technique for interlocking of intramedullary nails. Sci Rep 2021; 11:21166. [PMID: 34707138] [PMCID: PMC8551185] [DOI: 10.1038/s41598-021-00382-8]
Abstract
Interlocking of intramedullary nails is a challenging procedure in orthopedic trauma surgery. Numerous methods have been described to facilitate this process, but they either expose the patient and surgical team to X-rays or involve trial and error. An accurate, non-invasive method is presented for easily interlocking intramedullary nails. By transmitting a safe visible light inside the nail, a drilling position appears, which is used to drill the bone towards the nail hole. The wavelength of this light, chosen for its optimal transmission, reflectance, and absorption properties, was obtained from ex-vivo spectroscopy on biological tissues. Animal and human experiments were performed to evaluate the performance of the proposed system. Ex-vivo performance experiments were performed successfully on two groups of cow and sheep samples; the output parameters were procedure time and drilling quality. There was a significant difference between the two groups in procedure time (P < 0.05), but no significant difference was observed in drilling quality (P > 0.05). An in-vivo performance experiment was also performed successfully on a middle-aged man. To compare the proposed method with the targeting-arm and free-hand techniques, two human experiments were performed on a middle-aged man and a young man. The results indicate the advantage of the proposed technique in procedure time (P < 0.05), with drilling quality equal to that of the free-hand technique (P = 0.05). The intramedullary nail holes laser indicator is a safe and accurate method that reduces surgical time and simplifies the process. This new technology makes it easier to interlock intramedullary nails and could have good clinical applications.
Affiliation(s)
- Mohammadreza Maleki
- Department of Mechanical Engineering, Isfahan University of Technology, 84156-83111, Isfahan, Iran
- Alireza Fadaei Tehrani
- Department of Mechanical Engineering, Isfahan University of Technology, 84156-83111, Isfahan, Iran
- Ayda Aray
- Department of Physics, Isfahan University of Technology, 84156-83111, Isfahan, Iran
- Mehdi Ranjbar
- Department of Physics, Isfahan University of Technology, 84156-83111, Isfahan, Iran
8
Chen F, Cui X, Han B, Liu J, Zhang X, Liao H. Augmented reality navigation for minimally invasive knee surgery using enhanced arthroscopy. Comput Methods Programs Biomed 2021; 201:105952. [PMID: 33561710] [DOI: 10.1016/j.cmpb.2021.105952]
Abstract
PURPOSE During minimally invasive knee surgery, surgeons insert surgical instruments and an arthroscope through small incisions and perform treatment guided by 2D arthroscopic images. However, 2D arthroscopic navigation faces several problems. Firstly, the guidance information is displayed on a screen away from the surgical area, which makes hand-eye coordination difficult. Secondly, the small incision limits the surgeons to viewing the internal knee structures only through the arthroscopic camera. In addition, arthroscopic images commonly suffer from obscured vision. METHODS To solve these problems, we proposed a novel in-situ augmented reality navigation system with enhanced arthroscopic information. Firstly, intraoperative anatomical locations were obtained using arthroscopic images and arthroscopy calibration. Secondly, a tissue-properties-based model deformation method was proposed to update the 3D preoperative knee model with the anatomical location information. The updated model was then rendered with a glasses-free real 3D display to achieve a global in-situ augmented reality view. In addition, virtual arthroscopic images were generated from the updated preoperative model to provide anatomical information about the operation area. RESULTS Experimental results demonstrated that the virtual arthroscopic images could reflect the correct structural information with a mean error of 0.32 mm. Compared with 2D arthroscopic navigation, the proposed augmented reality navigation reduced targeting errors by 2.10 mm and 2.70 mm in experiments on a knee phantom and an in-vitro swine knee, respectively. CONCLUSION Our navigation method is helpful for minimally invasive knee surgery because it provides both global in-situ information and detailed anatomical information.
Affiliation(s)
- Fang Chen
- Department of Computer Science and Engineering, Nanjing University of Aeronautics and Astronautics, MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing, China
- Xiwen Cui
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Boxuan Han
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Jia Liu
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Xinran Zhang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Hongen Liao
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
9
Ma L, Fei B. Comprehensive review of surgical microscopes: technology development and medical applications. J Biomed Opt 2021; 26:JBO-200292VRR. [PMID: 33398948] [PMCID: PMC7780882] [DOI: 10.1117/1.jbo.26.1.010901]
Abstract
SIGNIFICANCE Surgical microscopes provide adjustable magnification, bright illumination, and clear visualization of the surgical field and have been increasingly used in operating rooms. State-of-the-art surgical microscopes are integrated with various imaging modalities, such as optical coherence tomography (OCT), fluorescence imaging, and augmented reality (AR) for image-guided surgery. AIM This comprehensive review is based on the literature of over 500 papers that cover the technology development and applications of surgical microscopy over the past century. The aim of this review is threefold: (i) providing a comprehensive technical overview of surgical microscopes, (ii) providing critical references for microscope selection and system development, and (iii) providing an overview of various medical applications. APPROACH More than 500 references were collected and reviewed. A timeline of important milestones during the evolution of surgical microscopes is provided in this study. An in-depth technical overview of the optical system, mechanical system, illumination, visualization, and integration with advanced imaging modalities is provided. Various medical applications of surgical microscopes in neurosurgery and spine surgery, ophthalmic surgery, ear-nose-throat (ENT) surgery, endodontics, and plastic and reconstructive surgery are described. RESULTS Surgical microscopy has advanced significantly in the technical aspects of high-end optics, bright and shadow-free illumination, stable and flexible mechanical design, and versatile visualization. New imaging modalities, such as hyperspectral imaging, OCT, fluorescence imaging, photoacoustic microscopy, and laser speckle contrast imaging, are being integrated with surgical microscopes. Advanced visualization and AR are being added to surgical microscopes as new features that are changing clinical practices in the operating room. CONCLUSIONS The combination of new imaging technologies and surgical microscopy will enable surgeons to perform challenging procedures and improve surgical outcomes. With advanced visualization and improved ergonomics, the surgical microscope has become a powerful tool in neurosurgery and in spine, ENT, ophthalmic, and plastic and reconstructive surgery.
Affiliation(s)
- Ling Ma
- University of Texas at Dallas, Department of Bioengineering, Richardson, Texas, United States
- Baowei Fei
- University of Texas at Dallas, Department of Bioengineering, Richardson, Texas, United States; University of Texas Southwestern Medical Center, Department of Radiology, Dallas, Texas, United States
10
Abstract
BACKGROUND One of the main challenges for modern surgery is the effective use of the many available imaging modalities and diagnostic methods. Augmented reality systems could in future be used to blend patient and planning information into the surgeon's view, which can improve the efficiency and safety of interventions. OBJECTIVE In this article we present five visualization methods for integrating augmented reality displays into medical procedures and explain their advantages and disadvantages. MATERIAL AND METHODS Based on an extensive literature review, the various existing approaches for integrating augmented reality displays into medical procedures are divided into five categories, and the most important research results for each approach are presented. RESULTS A large number of mixed and augmented reality solutions for medical interventions have been developed as research prototypes; however, only very few systems have been tested on patients. CONCLUSION To integrate mixed and augmented reality displays into medical practice, highly specialized solutions need to be developed. Such systems must meet the requirements for accuracy, fidelity, ergonomics, and seamless integration into the surgical workflow.
Affiliation(s)
- Ulrich Eck
- Lehrstuhl für Informatikanwendungen in der Medizin, Technische Universität München, Boltzmannstr. 3, 85748, Garching bei München, Germany
- Alexander Winkler
- Lehrstuhl für Informatikanwendungen in der Medizin, Technische Universität München, Boltzmannstr. 3, 85748, Garching bei München, Germany
11
Letter to the Editor on “Augmented Reality Based Navigation for Computer Assisted Hip Resurfacing: A Proof of Concept Study”. Ann Biomed Eng 2019; 47:2151-2153. [DOI: 10.1007/s10439-019-02299-w]
12
Sotoca JM, Latorre-Carmona P, Espinos-Morato H, Pla F, Javidi B. Depth estimation improvement in 3D integral imaging using an edge removal approach. Pattern Anal Appl 2018. [DOI: 10.1007/s10044-018-0721-4]
13
Augmented reality technology for preoperative planning and intraoperative navigation during hepatobiliary surgery: A review of current methods. Hepatobiliary Pancreat Dis Int 2018; 17:101-112. [PMID: 29567047] [DOI: 10.1016/j.hbpd.2018.02.002]
Abstract
BACKGROUND Augmented reality (AR) technology is used to reconstruct three-dimensional (3D) images of hepatic and biliary structures from computed tomography and magnetic resonance imaging data, and to superimpose the virtual images onto a view of the surgical field. In liver surgery, these superimposed virtual images help the surgeon to visualize intrahepatic structures and therefore, to operate precisely and to improve clinical outcomes. DATA SOURCES The keywords "augmented reality", "liver", "laparoscopic" and "hepatectomy" were used for searching publications in the PubMed database. The primary source of literatures was from peer-reviewed journals up to December 2016. Additional articles were identified by manual search of references found in the key articles. RESULTS In general, AR technology mainly includes 3D reconstruction, display, registration as well as tracking techniques and has recently been adopted gradually for liver surgeries including laparoscopy and laparotomy with video-based AR assisted laparoscopic resection as the main technical application. By applying AR technology, blood vessels and tumor structures in the liver can be displayed during surgery, which permits precise navigation during complex surgical procedures. Liver transformation and registration errors during surgery were the main factors that limit the application of AR technology. CONCLUSIONS With recent advances, AR technologies have the potential to improve hepatobiliary surgical procedures. However, additional clinical studies will be required to evaluate AR as a tool for reducing postoperative morbidity and mortality and for the improvement of long-term clinical outcomes. Future research is needed in the fusion of multiple imaging modalities, improving biomechanical liver modeling, and enhancing image data processing and tracking technologies to increase the accuracy of current AR methods.
14
Ma L, Zhao Z, Zhang B, Jiang W, Fu L, Zhang X, Liao H. Three-dimensional augmented reality surgical navigation with hybrid optical and electromagnetic tracking for distal intramedullary nail interlocking. Int J Med Robot 2018; 14:e1909. [PMID: 29575601] [DOI: 10.1002/rcs.1909]
Affiliation(s)
- Longfei Ma
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Zhe Zhao
- Department of Orthopedics Surgery, Beijing Tsinghua Changgung Hospital, Beijing, China
- Boyu Zhang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Weipeng Jiang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Ligong Fu
- Department of Orthopedics Surgery, Beijing Tsinghua Changgung Hospital, Beijing, China
- Xinran Zhang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Hongen Liao
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
15
Fan Z, Chen G, Wang J, Liao H. Spatial Position Measurement System for Surgical Navigation Using 3-D Image Marker-Based Tracking Tools With Compact Volume. IEEE Trans Biomed Eng 2018; 65:378-389. [DOI: 10.1109/tbme.2017.2771356]
16
3D Visualization and Augmented Reality for Orthopedics. Adv Exp Med Biol 2018; 1093:193-205. [DOI: 10.1007/978-981-13-1396-7_16]
17
Tang R, Ma L, Xiang C, Wang X, Li A, Liao H, Dong J. Augmented reality navigation in open surgery for hilar cholangiocarcinoma resection with hemihepatectomy using video-based in situ three-dimensional anatomical modeling: A case report. Medicine (Baltimore) 2017; 96:e8083. [PMID: 28906410] [PMCID: PMC5604679] [DOI: 10.1097/md.0000000000008083]
Abstract
RATIONALE Patients who undergo hilar cholangiocarcinoma (HCAC) resection with concomitant hepatectomy have a high risk of postoperative morbidity and mortality due to surgical trauma to the hepatic and biliary vasculature. PATIENT CONCERNS A 58-year-old Chinese man presented with yellowing skin and sclera, abdominal distension, pruritus, and anorexia for approximately 3 weeks. DIAGNOSES Magnetic resonance cholangiopancreatography and enhanced computed tomography (CT) scanning revealed a mass over the biliary tree at the porta hepatis, which was diagnosed as a hilar cholangiocarcinoma. INTERVENTION Three-dimensional (3D) images of the patient's hepatic and biliary structures were reconstructed preoperatively from CT data, and the 3D images were used for preoperative planning and augmented reality (AR)-assisted intraoperative navigation during open HCAC resection with hemihepatectomy. A 3D-printed model of the patient's biliary structures was also used intraoperatively as a visual reference. OUTCOMES No serious postoperative complications occurred, and the patient was tumor-free at the 9-month follow-up examination based on CT results. LESSONS AR-assisted preoperative planning and intraoperative navigation might benefit other patients with HCAC by reducing postoperative complications and helping to ensure disease-free survival. In our postoperative analysis, we also found that when the 3D images were superimposed on the 3D-printed model using a see-through integral videography display device, our senses of depth perception and motion parallax were improved compared with what we had experienced intraoperatively using the video-based AR display system.
Affiliation(s)
- Rui Tang, Department of Hepatopancreatobiliary Surgery, Beijing Tsinghua Changgung Hospital
- Longfei Ma, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Canhong Xiang, Department of Hepatopancreatobiliary Surgery, Beijing Tsinghua Changgung Hospital
- Xuedong Wang, Department of Hepatopancreatobiliary Surgery, Beijing Tsinghua Changgung Hospital
- Ang Li, Department of Hepatopancreatobiliary Surgery, Beijing Tsinghua Changgung Hospital
- Hongen Liao, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Jiahong Dong, Department of Hepatopancreatobiliary Surgery, Beijing Tsinghua Changgung Hospital
18
Fan Z, Weng Y, Chen G, Liao H. 3D interactive surgical visualization system using mobile spatial information acquisition and autostereoscopic display. J Biomed Inform 2017; 71:154-164. [PMID: 28533140 DOI: 10.1016/j.jbi.2017.05.014] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2016] [Revised: 04/16/2017] [Accepted: 05/15/2017] [Indexed: 10/19/2022]
19
Robust and Accurate Algorithm for Wearable Stereoscopic Augmented Reality with Three Indistinguishable Markers. ELECTRONICS 2016. [DOI: 10.3390/electronics5030059] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
20
Martinez-Uso A, Latorre-Carmona P, Sotoca JM, Pla F, Javidi B. Depth estimation in Integral Imaging based on a maximum voting strategy. ACTA ACUST UNITED AC 2016. [DOI: 10.1109/jdt.2016.2615565] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
21

22
Suenaga H, Tran HH, Liao H, Masamune K, Dohi T, Hoshi K, Takato T. Vision-based markerless registration using stereo vision and an augmented reality surgical navigation system: a pilot study. BMC Med Imaging 2015; 15:51. [PMID: 26525142 PMCID: PMC4630916 DOI: 10.1186/s12880-015-0089-5] [Citation(s) in RCA: 38] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2015] [Accepted: 10/09/2015] [Indexed: 11/15/2022] Open
Abstract
Background This study evaluated the use of an augmented reality navigation system that provides markerless registration using stereo vision in oral and maxillofacial surgery. Methods A feasibility study was performed on a subject, wherein a stereo camera was used for tracking and markerless registration. The computed tomography data obtained from the volunteer were used to create an integral videography image and a 3-dimensional rapid prototype model of the jaw. The overlay of the subject's anatomic site and its 3D-IV image was displayed in real space using a 3D-AR display. Extraction of characteristic points and teeth matching were done using parallax images from two stereo cameras for patient-image registration. Results Accurate registration of the volunteer's anatomy with the IV stereoscopic images via image matching was achieved using the fully automated markerless system, which recognized the incisal edges of the teeth and captured information pertaining to their position with an average target registration error of <1 mm. The 3D-CT images were then displayed in real space with high accuracy using AR. Even when the viewing position was changed, the 3D images could be observed as if they were floating in real space without special glasses. Conclusion Teeth were successfully used for registration via 3D image (contour) matching. This system, without references or fiducial markers, displayed 3D-CT images in real space with high accuracy. The system provided real-time markerless registration and 3D image matching via stereo vision, which, combined with AR, could have significant clinical applications. Electronic supplementary material The online version of this article (doi:10.1186/s12880-015-0089-5) contains supplementary material, which is available to authorized users.
Affiliation(s)
- Hideyuki Suenaga, Department of Oral-Maxillofacial Surgery, Dentistry and Orthodontics, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
- Huy Hoang Tran, Department of Mechano-Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
- Hongen Liao, Department of Bioengineering, Graduate School of Engineering, The University of Tokyo, Tokyo, Japan; Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Ken Masamune, Department of Mechano-Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan; Faculty of Advanced Technology and Surgery, Institute of Advanced Biomedical Engineering and Science, Tokyo Women's Medical University, Tokyo, Japan
- Takeyoshi Dohi, Department of Mechanical Engineering, School of Engineering, Tokyo Denki University, Tokyo, Japan
- Kazuto Hoshi, Department of Oral-Maxillofacial Surgery, Dentistry and Orthodontics, The University of Tokyo Hospital, Tokyo, Japan
- Tsuyoshi Takato, Department of Oral-Maxillofacial Surgery, Dentistry and Orthodontics, The University of Tokyo Hospital, Tokyo, Japan
23
Floating autostereoscopic 3D display with multidimensional images for telesurgical visualization. Int J Comput Assist Radiol Surg 2015; 11:207-15. [PMID: 26410839 DOI: 10.1007/s11548-015-1289-8] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2015] [Accepted: 08/31/2015] [Indexed: 12/17/2022]
Abstract
PURPOSE We propose a combined floating autostereoscopic three-dimensional (3D) display approach for telesurgical visualization, which can reproduce the live surgical scene in a realistic and intuitive manner. METHODS A polyhedron-shaped 3D display device is developed for spatially floating autostereoscopic 3D images. Integral videography (IV) is adopted to generate real-time 3D images. Combined two-dimensional (2D) and 3D displays are presented floating around the center of the display device through reflection off semitransparent mirrors. Intra-operative surgical information is fused and updated in the 3D display, so that telesurgical visualization can be enhanced remotely. RESULTS The experimental results showed that our approach can achieve a combined floating autostereoscopic display that presents fused 2D and 3D images. The glasses-free IV 3D display has full parallax and can be observed by multiple people from the surrounding area at the same time. Furthermore, the real-time surgical scene can be presented and updated in a realistic and intuitive visualization platform. The proposed method is shown to be feasible for facilitating telesurgical visualization. CONCLUSION The proposed floating autostereoscopic display device presents surgical information in an efficient form, enhancing cooperation and efficiency during the operation. Combined presentation of imaging information is promising for medical applications.
24
Zhang X, Fan Z, Wang J, Liao H. 3D Augmented Reality Based Orthopaedic Interventions. ACTA ACUST UNITED AC 2015. [DOI: 10.1007/978-3-319-23482-3_4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/22/2023]
25
Zhao D, Su B, Chen G, Liao H. 360 degree viewable floating autostereoscopic display using integral photography and multiple semitransparent mirrors. OPTICS EXPRESS 2015; 23:9812-9823. [PMID: 25969022 DOI: 10.1364/oe.23.009812] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
In this paper, we present a polyhedron-shaped floating autostereoscopic display viewable from 360 degrees using integral photography (IP) and multiple semitransparent mirrors. IP combined with polyhedron-shaped multiple semitransparent mirrors is used to achieve a 360-degree viewable floating three-dimensional (3D) autostereoscopic display, with the advantage that several observers can view it from various viewpoints simultaneously. IP is adopted to generate a 3D autostereoscopic image with full parallax. The semitransparent mirrors reflect the corresponding IP images, and the reflected IP images are situated around the center of the polyhedron-shaped display device to produce the floating display. The spatially reflected IP images reconstruct a floating autostereoscopic image viewable from 360 degrees. We manufactured two prototypes and performed two sets of experiments to evaluate the feasibility of the method described above. The results of our experiments showed that our approach can achieve a floating autostereoscopic display viewable from the surrounding area. Moreover, the proposed method is shown to be feasible for providing a continuous viewpoint across the whole 360-degree display without flipping.
26
Wang J, Suenaga H, Liao H, Hoshi K, Yang L, Kobayashi E, Sakuma I. Real-time computer-generated integral imaging and 3D image calibration for augmented reality surgical navigation. Comput Med Imaging Graph 2014; 40:147-59. [PMID: 25465067 DOI: 10.1016/j.compmedimag.2014.11.003] [Citation(s) in RCA: 48] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2014] [Revised: 09/03/2014] [Accepted: 11/03/2014] [Indexed: 11/26/2022]
Abstract
Autostereoscopic 3D image overlay for augmented reality (AR) based surgical navigation has been studied and reported many times. For the purpose of surgical overlay, the 3D image is expected to have the same geometric shape as the original organ, and to be transformable to a specified location for image overlay. However, how to generate a 3D image with high geometric fidelity, and how to quantitatively evaluate the 3D image's geometric accuracy, had not been addressed. This paper proposes a graphics processing unit (GPU) based computer-generated integral imaging pipeline for real-time autostereoscopic 3D display, and an automatic closed-loop 3D image calibration paradigm for displaying undistorted 3D images. Based on the proposed methods, a novel AR device for 3D image surgical overlay is presented, which mainly consists of a 3D display, an AR window, a stereo camera for 3D measurement, and a workstation for information processing. The evaluation of the 3D image rendering performance with 2560×1600 elemental image resolution shows rendering speeds of 50-60 frames per second (fps) for surface models, and 5-8 fps for large medical volumes. The evaluation of the undistorted 3D image after calibration yields sub-millimeter geometric accuracy. A phantom experiment simulating oral and maxillofacial surgery was also performed to evaluate the proposed AR overlay device in terms of image registration accuracy, 3D image overlay accuracy, and the visual effects of the overlay. The experimental results show satisfactory image registration and overlay accuracy, and confirm the system's usability.
Affiliation(s)
- Junchen Wang, Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Hideyuki Suenaga, Department of Oral-Maxillofacial Surgery, Dentistry and Orthodontics, The University of Tokyo Hospital, Tokyo, Japan
- Hongen Liao, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Kazuto Hoshi, Department of Oral-Maxillofacial Surgery, Dentistry and Orthodontics, The University of Tokyo Hospital, Tokyo, Japan
- Liangjing Yang, Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Etsuko Kobayashi, Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Ichiro Sakuma, Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
27
Ruijters D, Zinger S, Do L, de With PH. Latency optimization for autostereoscopic volumetric visualization in image-guided interventions. Neurocomputing 2014. [DOI: 10.1016/j.neucom.2014.02.065] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
28
Javier Traver V, Latorre-Carmona P, Salvador-Balaguer E, Pla F, Javidi B. Human gesture recognition using three-dimensional integral imaging. JOURNAL OF THE OPTICAL SOCIETY OF AMERICA. A, OPTICS, IMAGE SCIENCE, AND VISION 2014; 31:2312-2320. [PMID: 25401260 DOI: 10.1364/josaa.31.002312] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
Three-dimensional (3D) integral imaging allows one to reconstruct a 3D scene, including range information, and provides sectional refocused imaging of 3D objects at different ranges. This paper explores the potential use of 3D passive sensing integral imaging for human gesture recognition tasks from sequences of reconstructed 3D video scenes. As a preliminary testbed, the 3D integral imaging sensing is implemented using an array of cameras with the appropriate algorithms for 3D scene reconstruction. Recognition experiments are performed by acquiring 3D video scenes of multiple hand gestures performed by ten people. We analyze the capability and performance of gesture recognition using 3D integral imaging representations at given distances and compare its performance with the use of standard two-dimensional (2D) single-camera videos. To the best of our knowledge, this is the first report on using 3D integral imaging for human gesture recognition.
29
Wang J, Suenaga H, Hoshi K, Yang L, Kobayashi E, Sakuma I, Liao H. Augmented Reality Navigation With Automatic Marker-Free Image Registration Using 3-D Image Overlay for Dental Surgery. IEEE Trans Biomed Eng 2014; 61:1295-304. [DOI: 10.1109/tbme.2014.2301191] [Citation(s) in RCA: 120] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
30
Luo CG, Xiao X, Martínez-Corral M, Chen CW, Javidi B, Wang QH. Analysis of the depth of field of integral imaging displays based on wave optics. OPTICS EXPRESS 2013; 21:31263-31273. [PMID: 24514700 DOI: 10.1364/oe.21.031263] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
In this paper, we analyze the depth of field (DOF) of integral imaging displays based on wave optics. Taking the diffraction effect into account, we analyze the intensity distribution of light through multiple micro-lenses and derive a DOF calculation formula for the integral imaging display system. We study the variation of DOF values with different system parameters. Experimental results are provided to verify the accuracy of the theoretical analysis. The analyses and experimental results presented in this paper could be beneficial for better understanding and design of integral imaging displays.
31
Real-time in situ three-dimensional integral videography and surgical navigation using augmented reality: a pilot study. Int J Oral Sci 2013; 5:98-102. [PMID: 23703710 PMCID: PMC3707071 DOI: 10.1038/ijos.2013.26] [Citation(s) in RCA: 52] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2012] [Accepted: 04/22/2013] [Indexed: 12/04/2022] Open
Abstract
To evaluate the feasibility and accuracy of a three-dimensional augmented reality system incorporating integral videography for imaging oral and maxillofacial regions, based on preoperative computed tomography data. Three-dimensional surface models of the jawbones, based on the computed tomography data, were used to create the integral videography images of a subject's maxillofacial area. The three-dimensional augmented reality system (integral videography display, computed tomography, a position tracker and a computer) was used to generate a three-dimensional overlay that was projected on the surgical site via a half-silvered mirror. Thereafter, a feasibility study was performed on a volunteer. The accuracy of this system was verified on a solid model while simulating bone resection. Positional registration was attained by identifying and tracking the patient/surgical instrument's position. Thus, integral videography images of jawbones, teeth and the surgical tool were superimposed in the correct position. Stereoscopic images viewed from various angles were accurately displayed. Change in the viewing angle did not negatively affect the surgeon's ability to simultaneously observe the three-dimensional images and the patient, without special glasses. The difference in three-dimensional position of each measuring point on the solid model and augmented reality navigation was almost negligible (<1 mm); this indicates that the system was highly accurate. This augmented reality system was highly accurate and effective for surgical navigation and for overlaying a three-dimensional computed tomography image on a patient's surgical area, enabling the surgeon to understand the positional relationship between the preoperative image and the actual surgical site, with the naked eye.
32
Shin D, Javidi B. Three-dimensional integral imaging with improved visualization using subpixel optical ray sensing. OPTICS LETTERS 2012; 37:2130-2132. [PMID: 22660144 DOI: 10.1364/ol.37.002130] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
In this Letter, we propose an improved three-dimensional (3D) image reconstruction method for integral imaging. We use subpixel sensing of the optical rays of the 3D scene projected onto the image sensor. When reconstructing the 3D image, we use a calculated minimum subpixel distance for each sensor pixel instead of the average pixel value of integrated pixels from elemental images. The minimum subpixel distance is defined by measuring the distance between the center of the sensor pixel and the physical position of the imaging lens point spread function onto the sensor, which is projected from each reconstruction point for all elemental images. To show the usefulness of the proposed method, preliminary 3D imaging experiments are presented. Experimental results reveal that the proposed method may improve 3D imaging visualization because of the superior sensing and reconstruction of optical ray direction and intensity information for 3D objects.
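As a rough, hypothetical sketch of the selection rule described above (the 1-D pinhole-lenslet geometry, parameter names, and function are illustrative assumptions, not the authors' implementation): each reconstruction point is projected through every lenslet onto the sensor, and instead of averaging the integrated pixels, the sample whose projection lands closest to a pixel centre is kept.

```python
import numpy as np

def min_subpixel_sample(x, z, lens_pitch, gap, sensor_pitch, elemental_images):
    """Return the elemental-image sample with minimum subpixel distance
    for one reconstruction point (x, z); 1-D, pinhole-lenslet geometry."""
    best_dist, best_val = np.inf, 0.0
    for k, img in enumerate(elemental_images):
        lens_x = k * lens_pitch                     # lenslet centre
        proj_x = lens_x + (lens_x - x) * gap / z    # projection onto sensor
        col = int(np.floor(proj_x / sensor_pitch))  # pixel containing it
        if 0 <= col < img.shape[0]:
            # distance from the projection to that pixel's centre
            dist = abs(proj_x - (col + 0.5) * sensor_pitch)
            if dist < best_dist:
                best_dist, best_val = dist, img[col]
    return best_val
```

A full reconstruction would repeat this per output point; the averaging of the conventional method is replaced by this nearest-ray selection.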
Affiliation(s)
- Donghak Shin, Electrical and Computer Engineering Department, University of Connecticut, Storrs, Connecticut 06269, USA
33
Kersten-Oertel M, Jannin P, Collins DL. DVV: a taxonomy for mixed reality visualization in image guided surgery. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2012; 18:332-352. [PMID: 21383411 DOI: 10.1109/tvcg.2011.50] [Citation(s) in RCA: 35] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
Mixed reality visualizations are increasingly studied for use in image guided surgery (IGS) systems, yet few mixed reality systems have been introduced for daily use into the operating room (OR). This may be the result of several factors: the systems are developed from a technical perspective, are rarely evaluated in the field, and/or lack consideration of the end user and the constraints of the OR. We introduce the Data, Visualization processing, View (DVV) taxonomy which defines each of the major components required to implement a mixed reality IGS system. We propose that these components be considered and used as validation criteria for introducing a mixed reality IGS system into the OR. A taxonomy of IGS visualization systems is a step toward developing a common language that will help developers and end users discuss and understand the constituents of a mixed reality visualization system, facilitating a greater presence of future systems in the OR. We evaluate the DVV taxonomy based on its goodness of fit and completeness. We demonstrate the utility of the DVV taxonomy by classifying 17 state-of-the-art research papers in the domain of mixed reality visualization IGS systems. Our classification shows that few IGS visualization systems' components have been validated and even fewer are evaluated.
Affiliation(s)
- Marta Kersten-Oertel, McConnell Brain Imaging Center at the Montreal Neurological Institute (MNI), 3801 University St, Montréal, QC H3A 2B4, Canada
34
Dohi T, Nomura K. Autostereoscopic 3D Display with Long Visualization Depth Using Referential Viewing Area-Based Integral Photography. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2011; 17:1690-1701. [PMID: 21173452 DOI: 10.1109/tvcg.2010.267] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
We developed an autostereoscopic display for distant viewing of 3D computer graphics (CG) images without using special viewing glasses or tracking devices. The images are created by employing referential viewing area-based CG image generation and a pixel distribution algorithm for integral photography (IP) and integral videography (IV) imaging. CG image rendering is used to generate the IP/IV elemental images. The images can be viewed from each viewpoint within a referential viewing area, and the elemental images are reconstructed from rendered CG images by a pixel redistribution and compensation method. The elemental images are projected onto a screen placed at the same referential viewing distance from the lens array as in the image rendering. Photographic film is used to record the elemental images through each lens. The method enables 3D images with a long visualization depth to be viewed from relatively long distances without any apparent influence from deviated or distorted lenses in the array. We succeeded in creating actual autostereoscopic images with an image depth of several meters in front of and behind the display that appear three-dimensional even when viewed from a distance.
35

36
Tagaya N, Aoyagi H, Nakagawa A, Abe A, Iwasaki Y, Tachibana M, Kubota K. A novel approach for sentinel lymph node identification using fluorescence imaging and image overlay navigation surgery in patients with breast cancer. World J Surg 2011; 35:154-8. [PMID: 20931198 DOI: 10.1007/s00268-010-0811-y] [Citation(s) in RCA: 45] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
BACKGROUND We previously reported a novel technique of sentinel lymph node (SLN) identification using fluorescence imaging after indocyanine green injection. To achieve safe and accurate identification of the SLN during surgery, we introduce image overlay navigation surgery and evaluate its efficacy. METHODS This study enrolled 50 patients with tumors <2 cm in diameter. Initially, we obtained three-dimensional (3-D) images from multidetector-row computed tomography (MD-CT) by volume rendering. These were projected onto the patient's operative field through a projector, with clear visualization of the lymph nodes (LN). Indocyanine green (ICG) dye was then injected subdermally in the areola. Subcutaneous lymphatic channels draining from the areola to the axilla became visible immediately by fluorescence imaging. The lymphatic flow reached the LN revealed on 3-D imaging. After incising the axillary skin at the point of LN mapping, the SLN was dissected under the guidance of fluorescence imaging, with adequate adjustment of sensitivity, and 3-D imaging. RESULTS Lymphatic channels and SLNs were successfully identified by the Photodynamic Eye (PDE) in all patients, and the sites of skin incision were identical with the LN demonstrated by 3-D imaging in all patients. The mean number of SLNs was 3.7. Image overlay navigation surgery made it visually easy to identify the location of the SLN from the axillary skin. There were no intra- or postoperative complications associated with SLN identification. CONCLUSIONS This combined navigation using fluorescence and 3-D imaging proved easier and more effective for detecting the SLN intraoperatively than fluorescence imaging alone.
Affiliation(s)
- Nobumi Tagaya, Second Department of Surgery, Dokkyo Medical University, 880 Kitakobayashi, Mibu, Tochigi, 321-0293, Japan
37
Liao H, Inomata T, Sakuma I, Dohi T. 3-D Augmented Reality for MRI-Guided Surgery Using Integral Videography Autostereoscopic Image Overlay. IEEE Trans Biomed Eng 2010; 57:1476-86. [DOI: 10.1109/tbme.2010.2040278] [Citation(s) in RCA: 136] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
38
Ando T, Taniguchi K, Kim H, Joung S, Kobayashi E, Liao H, Kyo S, Sakuma I. High-sensitive fluorescence endoscope using electrocardiograph-synchronized multiple exposure. Int J Comput Assist Radiol Surg 2010; 6:73-81. [PMID: 20473575 DOI: 10.1007/s11548-010-0478-8] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2010] [Accepted: 04/26/2010] [Indexed: 11/26/2022]
Abstract
PURPOSE Fluorescence-based measurement of cardiac disease, using autofluorescent substances that already exist in the heart, has not been used for endoscopic surgery because endoscopic lenses cannot transmit sufficient light. A highly sensitive fluorescence endoscope using an electrocardiograph (ECG)-synchronized multiple exposure (ESME) approach was developed that provides a bright fluorescent image. METHODS A system was developed consisting of an endoscope, an excitation light, an ECG amplifier, a trigger and delay unit, and a computer. The approach exploits the periodic motion of the heart: because the heart returns to a similar shape at each ECG trigger point, a bright image can be synthesized by accumulating multiple trigger-captured images. Laboratory and in vivo experiments were performed to confirm the effectiveness of ESME. RESULTS The experimental results revealed that the trigger unit generated the synchronization signals required to produce high-quality images of the heart depending on heart rate. The difference among trigger-captured images of the actual organ, which affects the quality of ESME images, was estimated at 0.65 mm from the calculated displacement of a marker on the heart. The results also revealed that a bright fluorescent image can be captured by ESME. CONCLUSION A highly sensitive fluorescence endoscope using ESME was developed and successfully tested. The experimental results indicated that the method enables high-quality image acquisition in a very low illumination environment. The system is effective for observing faint fluorescence in the heart and is useful for intraoperative examination of heart status.
Affiliation(s)
- Takehiro Ando, Graduate School of Engineering, The University of Tokyo, Engineering Building No 14, Room 722, Hongo 7-3-1, Bunkyo, Tokyo, Japan
39
Liao H, Ishihara H, Tran HH, Masamune K, Sakuma I, Dohi T. Precision-guided surgical navigation system using laser guidance and 3D autostereoscopic image overlay. Comput Med Imaging Graph 2010; 34:46-54. [DOI: 10.1016/j.compmedimag.2009.07.003] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2009] [Revised: 05/21/2009] [Accepted: 07/16/2009] [Indexed: 10/20/2022]
40
Lim YT, Park JH, Kwon KC, Kim N. Resolution-enhanced integral imaging microscopy that uses lens array shifting. OPTICS EXPRESS 2009; 17:19253-63. [PMID: 20372662 DOI: 10.1364/oe.17.019253] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
A resolution-enhanced integral imaging microscope that uses lens array shifting is proposed in this study. The lens shift method maintains the same field of view of the reconstructed orthographic view images with increased spatial density. In this study, multiple sets of the elemental images were captured with horizontal and vertical shifts of the micro lens array and combined together to form a single set of the elemental images. From the combined elemental images, orthographic view images and depth slice images of the microscopic specimen were generated with enhanced resolution.
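As a loose illustration of the shift-and-combine idea (a hypothetical s×s sub-pitch shift pattern and function name; not the authors' pipeline): captures taken at each fractional shift of the lens array can be interleaved into one denser sampling grid, which is why the field of view is preserved while spatial density increases.

```python
import numpy as np

def interleave_shifted(captures):
    """Merge an s x s grid of sub-pitch-shifted captures into one image
    with s-fold denser sampling in each axis, by interleaving pixels."""
    s = len(captures)
    h, w = captures[0][0].shape
    out = np.empty((s * h, s * w), dtype=captures[0][0].dtype)
    for dy in range(s):
        for dx in range(s):
            # the capture shifted by (dy, dx) sub-pitch steps fills that phase
            out[dy::s, dx::s] = captures[dy][dx]
    return out
```

With s = 2, four captures of an H×W scene yield a 2H×2W interleaved result covering the same field of view.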
Affiliation(s)
- Young-Tae Lim, College of Electrical and Computer Engineering, Chungbuk National University, 410 SungBong-Ro, Heungduk-Gu, Cheongju-Si, Chungbuk 361-763, Korea
41
Sielhorst T, Feuerstein M, Navab N. Advanced Medical Displays: A Literature Review of Augmented Reality. ACTA ACUST UNITED AC 2008. [DOI: 10.1109/jdt.2008.2001575] [Citation(s) in RCA: 201] [Impact Index Per Article: 12.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
42
Afthinos JN, Latif MJ, Bhora FY, Connery CP, McGinty JJ, Burra A, Attiyeh M, Todd GJ, Belsley SJ. What technical barriers exist for real-time fluoroscopic and video image overlay in robotic surgery? Int J Med Robot 2008; 4:368-72. [DOI: 10.1002/rcs.221] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
43
Giraldez JG, Caversaccio M, Pappas I, Kowal J, Rohrer U, Marti G, Baur C, Nolte LP, Ballester MAG. Design and clinical evaluation of an image-guided surgical microscope with an integrated tracking system. Int J Comput Assist Radiol Surg 2007. [DOI: 10.1007/s11548-006-0066-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
44
Abstract
Contemporary imaging modalities can now provide the surgeon with high quality three- and four-dimensional images depicting not only normal anatomy and pathology, but also vascularity and function. A key component of image-guided surgery (IGS) is the ability to register multi-modal pre-operative images to each other and to the patient. The other important component of IGS is the ability to track instruments in real time during the procedure and to display them as part of a realistic model of the operative volume. Stereoscopic, virtual- and augmented-reality techniques have been implemented to enhance the visualization and guidance process. For the most part, IGS relies on the assumption that the pre-operatively acquired images used to guide the surgery accurately represent the morphology of the tissue during the procedure. This assumption may not necessarily be valid, and so intra-operative real-time imaging using interventional MRI, ultrasound, video and electrophysiological recordings are often employed to ameliorate this situation. Although IGS is now in extensive routine clinical use in neurosurgery and is gaining ground in other surgical disciplines, there remain many drawbacks that must be overcome before it can be employed in more general minimally-invasive procedures. This review overviews the roots of IGS in neurosurgery, provides examples of its use outside the brain, discusses the infrastructure required for successful implementation of IGS approaches and outlines the challenges that must be overcome for IGS to advance further.
Affiliation(s)
- Terry M Peters
- Robarts Research Institute, University of Western Ontario, PO Box 5015, 100 Perth Drive, London, ON N6A 5K8, Canada.
|
45
|
Paul P, Fleig O, Jannin P. Augmented virtuality based on stereoscopic reconstruction in multimodal image-guided neurosurgery: methods and performance evaluation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2005; 24:1500-11. [PMID: 16279086 DOI: 10.1109/tmi.2005.857029] [Citation(s) in RCA: 38] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/05/2023]
Abstract
Displaying anatomical and physiological information derived from preoperative medical images in the operating room is critical in image-guided neurosurgery. This paper presents a new approach referred to as augmented virtuality (AV) for displaying intraoperative views of the operative field over three-dimensional (3-D) multimodal preoperative images on an external screen during surgery. A calibrated stereovision system was set up between the surgical microscope and the binocular tubes. Three-dimensional surface meshes of the operative field were then generated using stereopsis. These reconstructed 3-D surface meshes were directly displayed, without any additional geometrical transform, over preoperative images of the patient in the physical space. Performance evaluation was achieved using a physical skull phantom. Accuracy of the reconstruction method itself was shown to be within 1 mm (median: 0.76 ± 0.27 mm), whereas accuracy of the overall approach was shown to be within 3 mm (median: 2.29 ± 0.59 mm), including the image-to-physical space registration error. We report the results of six surgical cases where AV was used in conjunction with augmented reality. AV not only enabled vision beyond the cortical surface but also gave an overview of the surgical area. This approach facilitated understanding of the spatial relationship between the operative field and the preoperative multimodal 3-D images of the patient.
Affiliation(s)
- Perrine Paul
- Laboratoire IDM, Faculté de Médecine, 35043 Rennes Cedex, France.
|
46
|
Figl M, Ede C, Hummel J, Wanschitz F, Ewers R, Bergmann H, Birkfellner W. A fully automated calibration method for an optical see-through head-mounted operating microscope with variable zoom and focus. IEEE TRANSACTIONS ON MEDICAL IMAGING 2005; 24:1492-9. [PMID: 16279085 DOI: 10.1109/tmi.2005.856746] [Citation(s) in RCA: 17] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/05/2023]
Abstract
Ever since the development of the first applications in image-guided therapy (IGT), the use of head-mounted displays (HMDs) has been considered an important extension of existing IGT technologies. Several approaches to utilizing HMDs and modified medical devices for augmented reality (AR) visualization have been implemented, including video see-through systems, semitransparent mirrors, modified endoscopes, and modified operating microscopes. Common to all these devices is the fact that a precise calibration between the display and three-dimensional coordinates in the patient's frame of reference is compulsory. In optical see-through devices based on complex optical systems such as operating microscopes or operating binoculars, as in the case of the system presented in this paper, this procedure can become increasingly difficult, since precise camera calibration is required for every focus and zoom position. We present a method for fully automatic calibration of the operating binocular Varioscope M5 AR over the full range of available zoom and focus settings. Our method uses a special calibration pattern, a linear guide driven by a stepping motor, and dedicated calibration software. The overlay error in the calibration plane was found to be 0.14-0.91 mm, which is less than 1% of the field of view. Using the motorized calibration rig presented in the paper, we were also able to assess the dynamic latency when viewing augmentation graphics on a mobile target; the maximum spatial displacement due to latency was found to be in the range of 1.1-2.8 mm, and the disparity between the true object and its computed overlay corresponded to a latency of 0.1 s. We conclude that the automatic calibration method presented in this paper is sufficient, in terms of accuracy and time requirements, for standard uses of optical see-through systems in a clinical environment.
Affiliation(s)
- Michael Figl
- Center for Biomedical Engineering and Physics, Medical University of Vienna, A-1090 Vienna, Austria.
|
47
|
Liao H, Iwahara M, Katayama Y, Hata N, Dohi T. Three-dimensional display with a long viewing distance by use of integral photography. OPTICS LETTERS 2005; 30:613-615. [PMID: 15791993 DOI: 10.1364/ol.30.000613] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/24/2023]
Abstract
We developed a technique of three-dimensional (3-D) display for distant viewing of a 3-D image without the need for special glasses. The photo-based integral photography (IP) method allows precise 3-D images to be displayed at long viewing distances without any influence from deviated or distorted lenses in the lens array. We calculate elemental images from a referential viewing area for each lens and project the corresponding result images onto photographic film through each lens. We succeed in creating a display that appears three-dimensional even when viewed from a distance, with an image depth of 5.7 m or more in front of the display and 3.5 m or more behind it. To the best of our knowledge, the long-distance IP display presented here is technically unique, as it is the first reported generation of an image with such a long viewing distance.
Affiliation(s)
- Hongen Liao
- Graduate School of Information Science and Technology, University of Tokyo, 7-3-1 Hongo Bunkyo-ku, Tokyo 113-8656, Japan.
|
48
|
Liao H, Iwahara M, Koike T, Hata N, Sakuma I, Dohi T. Scalable high-resolution integral videography autostereoscopic display with a seamless multiprojection system. APPLIED OPTICS 2005; 44:305-315. [PMID: 15717819 DOI: 10.1364/ao.44.000305] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/24/2023]
Abstract
We propose a scalable high-resolution autostereoscopic display that uses integral videography (IV) and a seamless multiprojection system. IV is an animated extension of integral photography (IP). Although IP and IV are ideal ways to display three-dimensional images, their spatial viewing resolution needs improvement; the pixel pitch of the display and the lens pitch are the main factors affecting IV image quality. We improved the quality by increasing the number and density of the pixels. Using multiple projectors, we create a scalable high-resolution image and project it onto a small screen using long-focal-length projection optics. To generate seamless IV images, we developed an image calibration method for geometric correction and color modulation. We also fabricated a lens array especially for the display device. Experiments were conducted with nine XGA projectors and nine PCs for parallel image rendering and display. A total of 2868 × 2150 pixels were displayed on a 241 mm × 181 mm (302.4 dots/in.) rear-projection screen. The lens pitch was 1.016 mm, corresponding to 12 pixels of the projected image. Measurement of the geometric accuracy of the reproduced IV images demonstrated that the spatial resolution of the display system matched that of the theoretical analysis.
Affiliation(s)
- Hongen Liao
- Graduate School of Information Science and Technology, University of Tokyo, Tokyo 113-8656, Japan.
|