1. Li C, Zhang G, Zhao B, Xie D, Du H, Duan X, Hu Y, Zhang L. Advances of surgical robotics: image-guided classification and application. Natl Sci Rev 2024;11:nwae186. PMID: 39144738; PMCID: PMC11321255; DOI: 10.1093/nsr/nwae186.
Abstract
The application of surgical robotics in minimally invasive surgery has developed rapidly and has attracted increasing research attention in recent years. There is broad consensus that surgical procedures will become less traumatic and will be performed with greater intelligence and higher autonomy, which poses a serious challenge to the environmental sensing capabilities of robotic systems. Images are one of the main sources of environmental information for robots and form the basis of robot vision. In this review article, we classify clinical images as direct or indirect according to the object of information acquisition, and as continuous, intermittently continuous, or discontinuous according to the target-tracking frequency. The characteristics and applications of existing surgical robots in each category are introduced along these two dimensions. Our purpose in conducting this review was to analyze, summarize, and discuss the current evidence on general rules for the application of image technologies for medical purposes. Our analysis provides insight and guidance conducive to the development of more advanced surgical robotic systems in the future.
Affiliation(s)
- Changsheng Li
  - School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
- Gongzi Zhang
  - Department of Orthopedics, Chinese PLA General Hospital, Beijing 100141, China
- Baoliang Zhao
  - Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Dongsheng Xie
  - School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
  - School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Hailong Du
  - Department of Orthopedics, Chinese PLA General Hospital, Beijing 100141, China
- Xingguang Duan
  - School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
  - School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Ying Hu
  - Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Lihai Zhang
  - Department of Orthopedics, Chinese PLA General Hospital, Beijing 100141, China
  - Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
2. Lin Z, Lei C, Yang L. Modern Image-Guided Surgery: A Narrative Review of Medical Image Processing and Visualization. Sensors (Basel) 2023;23:9872. PMID: 38139718; PMCID: PMC10748263; DOI: 10.3390/s23249872.
Abstract
Medical image analysis forms the basis of image-guided surgery (IGS) and many of its fundamental tasks. Driven by the growing number of medical imaging modalities, the medical imaging research community has developed methods and achieved functionality breakthroughs. However, with the overwhelming pool of information in the literature, it has become increasingly challenging for researchers to extract context-relevant information for specific applications, especially when many widely used methods exist in a variety of versions optimized for their respective application domains. Further equipped with sophisticated three-dimensional (3D) medical image visualization and digital reality technology, medical experts could improve their performance in IGS several-fold. The goal of this narrative review is to organize the key components of IGS, in the aspects of medical image processing and visualization, with new perspectives and insights. The literature search was conducted using mainstream academic search engines with a combination of keywords relevant to the field up until mid-2022. This survey systematically summarizes basic, mainstream, and state-of-the-art medical image processing methods, as well as how visualization technologies such as augmented/mixed/virtual reality (AR/MR/VR) are enhancing performance in IGS. We hope that this survey will shed some light on the future of IGS in the face of challenges and opportunities for the research directions of medical image processing and visualization.
Affiliation(s)
- Zhefan Lin
  - School of Mechanical Engineering, Zhejiang University, Hangzhou 310030, China
  - ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Chen Lei
  - ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Liangjing Yang
  - School of Mechanical Engineering, Zhejiang University, Hangzhou 310030, China
  - ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
3. Ramalhinho J, Yoo S, Dowrick T, Koo B, Somasundaram M, Gurusamy K, Hawkes DJ, Davidson B, Blandford A, Clarkson MJ. The value of Augmented Reality in surgery - A usability study on laparoscopic liver surgery. Med Image Anal 2023;90:102943. PMID: 37703675; PMCID: PMC10958137; DOI: 10.1016/j.media.2023.102943.
Abstract
Augmented Reality (AR) is considered to be a promising technology for the guidance of laparoscopic liver surgery. By overlaying pre-operative 3D information of the liver and internal blood vessels on the laparoscopic view, surgeons can better understand the location of critical structures. In an effort to enable AR, several authors have focused on the development of methods to obtain an accurate alignment between the laparoscopic video image and the pre-operative 3D data of the liver, without assessing the benefit that the resulting overlay can provide during surgery. In this paper, we present a study that aims to assess, quantitatively and qualitatively, the value of an AR overlay in laparoscopic surgery during a simulated surgical task on a phantom setup. We design a study where participants are asked to physically localise pre-operative tumours in a liver phantom using three image guidance conditions - a baseline condition without any image guidance, a condition where the 3D surfaces of the liver are aligned to the video and displayed on a black background, and a condition where video see-through AR is displayed on the laparoscopic video. Using data collected from a cohort of 24 participants, which includes 12 surgeons, we observe that compared to the baseline, AR decreases the median localisation error of surgeons on non-peripheral targets from 25.8 mm to 9.2 mm. Using subjective feedback, we also identify that AR introduces usability improvements in the surgical task and increases the perceived confidence of the users. Between the two tested displays, the majority of participants preferred to use the AR overlay instead of the navigated view of the 3D surfaces on a separate screen. We conclude that AR has the potential to improve performance and decision making in laparoscopic surgery, and that improvements in overlay alignment accuracy and depth perception should be pursued in the future.
Affiliation(s)
- João Ramalhinho
  - Wellcome EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Soojeong Yoo
  - Wellcome EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
  - UCL Interaction Centre, University College London, London, United Kingdom
- Thomas Dowrick
  - Wellcome EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Bongjin Koo
  - Wellcome EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Murali Somasundaram
  - Division of Surgery and Interventional Sciences, University College London, London, United Kingdom
- Kurinchi Gurusamy
  - Division of Surgery and Interventional Sciences, University College London, London, United Kingdom
- David J Hawkes
  - Wellcome EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Brian Davidson
  - Division of Surgery and Interventional Sciences, University College London, London, United Kingdom
- Ann Blandford
  - Wellcome EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
  - UCL Interaction Centre, University College London, London, United Kingdom
- Matthew J Clarkson
  - Wellcome EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
4. Ma L, Huang T, Wang J, Liao H. Visualization, registration and tracking techniques for augmented reality guided surgery: a review. Phys Med Biol 2023;68. PMID: 36580681; DOI: 10.1088/1361-6560/acaf23.
Abstract
Augmented reality (AR) surgical navigation has developed rapidly in recent years. This paper reviews and analyzes the visualization, registration, and tracking techniques used in AR surgical navigation systems, as well as the application of these AR systems in different surgical fields. AR visualization is divided into two categories, in situ visualization and non-in situ visualization, and a variety of contents can be rendered. The registration methods include manual registration, point-based registration, surface registration, marker-based registration, and calibration-based registration. The tracking methods consist of self-localization, tracking with integrated cameras, external tracking, and hybrid tracking. Moreover, we describe the applications of AR in surgical fields. However, most AR applications have been evaluated through model and animal experiments, with relatively few clinical experiments, indicating that current AR navigation methods are still at an early stage of development. Finally, we summarize the contributions and challenges of AR in the surgical fields, as well as future development trends. Although AR-guided surgery has not yet reached clinical maturity, we believe that if the current development trend continues, it will soon reveal its clinical utility.
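Among the registration methods surveyed above, point-based (point-pair matching) registration reduces to a rigid Procrustes problem with a closed-form SVD solution. The sketch below is purely illustrative (the function names and structure are ours, not drawn from the review):

```python
import numpy as np

def rigid_register(src, dst):
    """Kabsch solution: rotation R and translation t minimizing
    sum ||R @ src_i + t - dst_i||^2 over paired 3D fiducials."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t

def fiducial_registration_error(src, dst, R, t):
    """RMS residual (same units as the points) after alignment."""
    residual = src @ R.T + t - dst
    return float(np.sqrt((residual ** 2).sum(axis=1).mean()))
```

In an AR navigation setting, `src` would typically come from preoperative imaging and `dst` from a tracked pointer; the residual is one of several accuracy measures such systems report.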
Affiliation(s)
- Longfei Ma
  - Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, People's Republic of China
- Tianqi Huang
  - Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, People's Republic of China
- Jie Wang
  - Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, People's Republic of China
- Hongen Liao
  - Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, People's Republic of China
5. Jiang J, Zhang J, Sun J, Wu D, Xu S. User's image perception improved strategy and application of augmented reality systems in smart medical care: A review. Int J Med Robot 2023;19:e2497. PMID: 36629798; DOI: 10.1002/rcs.2497.
Abstract
BACKGROUND Augmented reality (AR) is a new human-computer interaction technology that combines virtual reality, computer vision, and computer networks. With the rapid advancement of the medical field towards intelligence and data visualisation, AR systems are becoming increasingly popular because they can provide doctors with sufficiently clear medical images and accurate image navigation in practical applications. However, it has been found that different display types of AR systems affect doctors' perception of the image after virtual-real fusion differently during actual medical use. If doctors cannot correctly perceive the image, they may be unable to correctly match the virtual information with the real world, which significantly impairs their ability to recognise complex structures. METHODS This paper uses CiteSpace, a literature analysis tool, to visualise and analyse research hotspots in the application of AR systems in the medical field. RESULTS A visual analysis of the 1163 articles retrieved from the Web of Science Core Collection database reveals that display technology and visualisation technology are currently the key research directions for AR systems. CONCLUSION This paper categorises AR systems by their display principles, reviews current image-perception optimisation schemes for each type of system, and analyses and compares the different display types based on their practical applications in smart medical care, so that doctors can select the appropriate display type for a given application scenario. Finally, the future direction of AR display technology is anticipated so that AR can be applied more effectively in smart medical care. The advancement of display technology is critical for the use of AR systems in the medical field, and the advantages and disadvantages of the various display types should be weighed in each application scenario to select the best system.
Affiliation(s)
- Jingang Jiang
  - Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin, Heilongjiang, China
  - Robotics & Its Engineering Research Center, Harbin University of Science and Technology, Harbin, Heilongjiang, China
- Jiawei Zhang
  - Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin, Heilongjiang, China
- Jianpeng Sun
  - Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin, Heilongjiang, China
- Dianhao Wu
  - Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin, Heilongjiang, China
- Shuainan Xu
  - Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin, Heilongjiang, China
6. Posa A, Barbieri P, Mazza G, Tanzilli A, Natale L, Sala E, Iezzi R. Technological Advancements in Interventional Oncology. Diagnostics (Basel) 2023;13:228. PMID: 36673038; PMCID: PMC9857620; DOI: 10.3390/diagnostics13020228.
Abstract
Interventional radiology, and particularly interventional oncology, is one of the medical subspecialties in which technological advancements and innovations play a fundamental role. Artificial intelligence, consisting of big data analysis and feature extrapolation through computational algorithms for disease diagnosis and treatment response evaluation, is playing an increasingly important role in healthcare, from diagnosis to treatment response prediction, and interventional oncology is one of the fields that benefits from it most. In addition, digital health, consisting of practical technological applications, can assist healthcare practitioners in their daily activities. This review covers the most useful, established, and interesting artificial intelligence and digital health innovations and updates, to help physicians become more involved in their use in clinical practice, particularly in the field of interventional oncology.
Affiliation(s)
- Alessandro Posa
  - Department of Diagnostic Imaging, Oncologic Radiotherapy and Hematology—A. Gemelli University Hospital Foundation IRCCS, L.go A. Gemelli 8, 00168 Rome, Italy
- Pierluigi Barbieri
  - Department of Diagnostic Imaging, Oncologic Radiotherapy and Hematology—A. Gemelli University Hospital Foundation IRCCS, L.go A. Gemelli 8, 00168 Rome, Italy
- Giulia Mazza
  - Department of Diagnostic Imaging, Oncologic Radiotherapy and Hematology—A. Gemelli University Hospital Foundation IRCCS, L.go A. Gemelli 8, 00168 Rome, Italy
- Alessandro Tanzilli
  - Department of Diagnostic Imaging, Oncologic Radiotherapy and Hematology—A. Gemelli University Hospital Foundation IRCCS, L.go A. Gemelli 8, 00168 Rome, Italy
- Luigi Natale
  - Department of Diagnostic Imaging, Oncologic Radiotherapy and Hematology—A. Gemelli University Hospital Foundation IRCCS, L.go A. Gemelli 8, 00168 Rome, Italy
  - Istituto di Radiodiagnostica, Università Cattolica del Sacro Cuore, 00168 Rome, Italy
- Evis Sala
  - Department of Diagnostic Imaging, Oncologic Radiotherapy and Hematology—A. Gemelli University Hospital Foundation IRCCS, L.go A. Gemelli 8, 00168 Rome, Italy
  - Istituto di Radiodiagnostica, Università Cattolica del Sacro Cuore, 00168 Rome, Italy
- Roberto Iezzi
  - Department of Diagnostic Imaging, Oncologic Radiotherapy and Hematology—A. Gemelli University Hospital Foundation IRCCS, L.go A. Gemelli 8, 00168 Rome, Italy
  - Istituto di Radiodiagnostica, Università Cattolica del Sacro Cuore, 00168 Rome, Italy
7. Augmented reality (AR) and fracture mapping model on middle-aged femoral neck fracture: A proof-of-concept towards interactive visualization. Medicine in Novel Technology and Devices 2022. DOI: 10.1016/j.medntd.2022.100190.
8. Durutović O, Filipović A, Milićević K, Somani B, Emiliani E, Skolarikos A, Janković MM. 3D Imaging Segmentation and 3D Rendering Process for a Precise Puncture Strategy During PCNL – a Pilot Study. Front Surg 2022;9:891596. PMID: 35592119; PMCID: PMC9110964; DOI: 10.3389/fsurg.2022.891596.
Abstract
Percutaneous nephrolithotomy (PCNL) is frequently used as the first-line treatment of large and complex stones. The key to successful complex stone removal with minimal risk of complications is establishing the most appropriate access route. Understanding the three-dimensional (3D) relationship between kidney stones and the renal collecting system is crucial for planning and creating an optimal access route. Using a 3D volume segmentation tool, a more accurate approach to the renal collecting system and stone treatment can be planned. The objective of this study was to assess the impact of 3D software on obtaining the desired access.
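As a deliberately simplified illustration of what a 3D volume segmentation step involves here (a toy sketch, not the software evaluated in the pilot study), a stone mask can be extracted from a CT volume by intensity thresholding and reduced to a candidate target point:

```python
import numpy as np

def segment_by_threshold(volume, hu_min):
    """Binary mask of voxels at or above an intensity threshold
    (kidney stones are markedly hyperdense on CT), plus voxel count."""
    mask = volume >= hu_min
    return mask, int(mask.sum())

def mask_centroid(mask, spacing=(1.0, 1.0, 1.0)):
    """Centroid of the segmented region in physical units, given the
    voxel spacing; a crude stand-in for an access-route target point."""
    idx = np.argwhere(mask)                  # N x 3 voxel indices
    return tuple(idx.mean(axis=0) * np.asarray(spacing))
```

Real pipelines add smoothing, connected-component analysis, and surface rendering, and the `hu_min` threshold would be tuned per scanner and protocol.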
Affiliation(s)
- Otaš Durutović
  - Faculty of Medicine, University of Belgrade, Belgrade, Serbia
  - Clinic of Urology, University Clinical Centre of Serbia, Belgrade, Serbia
- Aleksandar Filipović
  - Faculty of Medicine, University of Belgrade, Belgrade, Serbia
  - Center for Radiology and Magnetic Resonance Imaging, University Clinical Centre of Serbia, Belgrade, Serbia
  - Correspondence: Aleksandar Filipović
- Katarina Milićević
  - Laboratory for Biomedical Instrumentation and Technologies, Department of Signals and Systems, University of Belgrade, School of Electrical Engineering, Belgrade, Serbia
- Bhaskar Somani
  - Faculty of Medicine, University Hospital Southampton, Southampton, United Kingdom
- Esteban Emiliani
  - Department of Urology, Fundacion Puigvert, Autonomous University of Barcelona, Barcelona, Spain
- Andreas Skolarikos
  - National and Kapodistrian University of Athens, 2nd Department of Urology, Sismanoglio Hospital, Athens, Greece
- Milica M. Janković
  - Laboratory for Biomedical Instrumentation and Technologies, Department of Signals and Systems, University of Belgrade, School of Electrical Engineering, Belgrade, Serbia
9. Maleki M, Tehrani AF, Aray A, Ranjbar M. Intramedullary nail holes laser indicator, a non-invasive technique for interlocking of intramedullary nails. Sci Rep 2021;11:21166. PMID: 34707138; PMCID: PMC8551185; DOI: 10.1038/s41598-021-00382-8.
Abstract
Interlocking of intramedullary nails is a challenging procedure in orthopedic trauma surgery. Numerous methods have been described to facilitate the process, but they either expose the patient and surgical team to X-rays or involve trial and error. Here, an accurate and non-invasive method is provided to easily interlock intramedullary nails. By transmitting safe visible light inside the nail, a drilling position appears that is used to drill the bone toward the nail hole. The wavelength of this light, chosen for its optimal transmission, reflectance, and absorption properties, was obtained from ex vivo spectroscopy of biological tissues. Moreover, animal and human experiments were performed to evaluate the performance of the proposed system. Ex vivo performance experiments were carried out successfully on two groups of cow and sheep samples; the output parameters were procedure time and drilling quality, with a significant difference between the two groups in procedure time (P < 0.05) but not in drilling quality (P > 0.05). An in vivo performance experiment was also performed successfully on a middle-aged man. To compare the provided method with the targeting-arm and free-hand techniques, two human experiments were performed on a middle-aged man and a young man. The results indicate the advantage of the proposed technique in procedure time (P < 0.05), while drilling quality was equal to that of the free-hand technique (P = 0.05). The intramedullary nail hole laser indicator is a safe and accurate method that reduces surgical time and simplifies the process. This new technology makes it easier to interlock intramedullary nails and can have good clinical applications.
Affiliation(s)
- Mohammadreza Maleki
  - Department of Mechanical Engineering, Isfahan University of Technology, 84156-83111, Isfahan, Iran
- Alireza Fadaei Tehrani
  - Department of Mechanical Engineering, Isfahan University of Technology, 84156-83111, Isfahan, Iran
- Ayda Aray
  - Department of Physics, Isfahan University of Technology, 84156-83111, Isfahan, Iran
- Mehdi Ranjbar
  - Department of Physics, Isfahan University of Technology, 84156-83111, Isfahan, Iran
10. Singh HP, Kumar P. Developments in the human machine interface technologies and their applications: a review. J Med Eng Technol 2021;45:552-573. PMID: 34184601; DOI: 10.1080/03091902.2021.1936237.
Abstract
Human-machine interface (HMI) techniques use bioelectrical signals to achieve real-time, synchronised communication between the human body and a machine. HMI technology not only provides real-time control access but can also control multiple functions at a single instant with modest human input and increased efficiency. HMI technologies yield advanced control access in numerous applications, such as health monitoring, medical diagnostics, the development of prosthetic and assistive devices, the automotive and aerospace industries, robotic control, and many other fields. In this paper, various physiological signals, their acquisition and processing techniques, and their respective applications in different HMI technologies are discussed.
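As a toy example of the signal-conditioning stage such HMI pipelines share (a minimal sketch of common practice, not code from the review), a surface-EMG envelope and a threshold-based on/off control signal can be computed in plain Python:

```python
def emg_envelope(samples, window=5):
    """Toy EMG conditioning: remove the DC offset, full-wave rectify,
    then smooth with a centred moving average. Real pipelines add
    band-pass filtering and amplitude normalisation."""
    mean = sum(samples) / len(samples)
    rectified = [abs(s - mean) for s in samples]
    half = window // 2
    envelope = []
    for i in range(len(rectified)):
        lo, hi = max(0, i - half), min(len(rectified), i + half + 1)
        envelope.append(sum(rectified[lo:hi]) / (hi - lo))
    return envelope

def muscle_active(envelope, threshold):
    """Binary activation signal, the simplest HMI control primitive."""
    return [e > threshold for e in envelope]
```

The threshold would normally be calibrated per user and electrode placement; here it is an arbitrary illustrative value.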
Affiliation(s)
- Harpreet Pal Singh
  - Department of Mechanical Engineering, Punjabi University, Patiala, India
- Parlad Kumar
  - Department of Mechanical Engineering, Punjabi University, Patiala, India
11. Chen F, Cui X, Han B, Liu J, Zhang X, Liao H. Augmented reality navigation for minimally invasive knee surgery using enhanced arthroscopy. Comput Methods Programs Biomed 2021;201:105952. PMID: 33561710; DOI: 10.1016/j.cmpb.2021.105952.
Abstract
PURPOSE During minimally invasive knee surgery, surgeons insert surgical instruments and an arthroscope through small incisions and carry out treatment guided by 2D arthroscopic images. However, this 2D arthroscopic navigation faces several problems. Firstly, the guidance information is displayed on a screen away from the surgical area, which makes hand-eye coordination difficult. Secondly, the small incision limits the surgeons to viewing the internal knee structures only through the arthroscopic camera. In addition, arthroscopic images commonly appear blurred. METHODS To solve these problems, we proposed a novel in-situ augmented reality navigation system with enhanced arthroscopic information. Firstly, intraoperative anatomical locations were obtained using arthroscopic images and arthroscopy calibration. Secondly, a tissue-properties-based model deformation method was proposed to update the 3D preoperative knee model with the anatomical location information. The updated model was then rendered with a glasses-free real 3D display to achieve a global in-situ augmented reality view. In addition, virtual arthroscopic images were generated from the updated preoperative model to provide anatomical information about the operation area. RESULTS Experimental results demonstrated that the virtual arthroscopic images could reflect the correct structural information with a mean error of 0.32 mm. Compared with 2D arthroscopic navigation, the proposed augmented reality navigation reduced the targeting errors by 2.10 mm and 2.70 mm in the knee phantom and in-vitro swine knee experiments, respectively. CONCLUSION Our navigation method is helpful for minimally invasive knee surgery because it provides both global in-situ information and detailed anatomical information.
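The arthroscopy-calibration step mentioned above rests on the standard pinhole camera model. A minimal sketch of that geometric core (our own illustration, not the authors' implementation):

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection of 3D points X (N x 3, world frame) to pixel
    coordinates, given camera intrinsics K and pose (R, t)."""
    Xc = X @ R.T + t                 # world -> camera coordinates
    uvw = Xc @ K.T                   # homogeneous pixel coordinates
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide

def reprojection_error(K, R, t, X, observed):
    """RMS pixel distance between projected and observed points; the
    quantity minimised when calibrating an endoscopic camera."""
    d = project(K, R, t, X) - observed
    return float(np.sqrt((d ** 2).sum(axis=1).mean()))
```

The intrinsics `K` (focal lengths and principal point) are what calibration estimates; arthroscopes additionally require strong lens-distortion correction, which this sketch omits.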
Affiliation(s)
- Fang Chen
  - Department of Computer Science and Engineering, Nanjing University of Aeronautics and Astronautics, MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing, China
- Xiwen Cui
  - Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Boxuan Han
  - Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Jia Liu
  - Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Xinran Zhang
  - Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Hongen Liao
  - Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
12.
Abstract
Mixed reality (MR) merges virtual information into the real world through computer technology, splicing the real environment and virtual objects into the same image or space in real time, so that it can effectively express and integrate the virtual and real worlds and allow highly interactive feedback. This technology combines many of the advantages of virtual reality and augmented reality, and has a promising future in the medical field. At present, MR technology is only at the beginning stage in the medical field worldwide, and its application in neurosurgery is rarely reported. Given this, the authors describe the research progress of MR in neurosurgery, including preoperative planning and intraoperative guidance, doctor-patient communication, teaching rounds, physician training, and so on.
13. Fick T, van Doormaal JAM, Hoving EW, Willems PWA, van Doormaal TPC. Current Accuracy of Augmented Reality Neuronavigation Systems: Systematic Review and Meta-Analysis. World Neurosurg 2020;146:179-188. PMID: 33197631; DOI: 10.1016/j.wneu.2020.11.029.
Abstract
BACKGROUND Augmented reality neuronavigation (ARN) systems can overlay three-dimensional anatomy and disease without the need for a two-dimensional external monitor. Accuracy is crucial for their clinical applicability. We performed a systematic review of the reported accuracy of ARN systems and compared them with the accuracy of conventional infrared neuronavigation (CIN). METHODS PubMed and Embase were searched for ARN and CIN systems. For ARN, the type of system, method of patient-to-image registration, accuracy method, and accuracy of the system were noted. For CIN, navigation accuracy, expressed as target registration error (TRE), was noted. A meta-analysis was performed comparing the TRE of ARN and CIN systems. RESULTS Thirty-five studies were included, 12 for ARN and 23 for CIN. ARN systems could be divided into head-mounted display and heads-up display systems. In ARN, 4 methods were encountered for patient-to-image registration, of which point-pair matching was the most frequently used. Five methods for assessing accuracy were described. Ninety-four TRE measurements of ARN systems were compared with 9058 TRE measurements of CIN systems. Mean TRE was 2.5 mm (95% confidence interval, 0.7-4.4) for ARN systems and 2.6 mm (95% confidence interval, 2.1-3.1) for CIN systems. CONCLUSIONS In ARN, there seems to be a lack of agreement regarding the best method to assess accuracy. Nevertheless, ARN systems seem able to achieve an accuracy comparable to that of CIN systems. Future studies should be prospective and compare TREs, which should be measured in a standardized fashion.
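Pooled summaries of the form "mean TRE 2.5 mm (95% CI 0.7-4.4)" follow from the standard normal-approximation confidence interval. The sketch below illustrates that calculation only; the paper's actual meta-analytic pooling and weighting may differ:

```python
import math

def mean_ci95(values):
    """Sample mean with a normal-approximation 95% confidence interval
    (mean +/- 1.96 * standard error), as used to report pooled TRE."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    se = math.sqrt(var / n)                               # standard error
    return mean, (mean - 1.96 * se, mean + 1.96 * se)
```

Note that for small numbers of measurements a t-distribution multiplier would be more appropriate than 1.96.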
Affiliation(s)
- Tim Fick
  - Department of Neuro-oncology, Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands
- Jesse A M van Doormaal
  - Department of Oral and Maxillofacial Surgery, University Medical Centre Utrecht, Utrecht, The Netherlands
- Eelco W Hoving
  - Department of Neuro-oncology, Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands
- Peter W A Willems
  - Department of Neurosurgery, University Medical Centre Utrecht, Utrecht, The Netherlands
- Tristan P C van Doormaal
  - Department of Neurosurgery, University Medical Centre Utrecht, Utrecht, The Netherlands
  - Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Switzerland
14
|
Javidi B, Carnicer A, Arai J, Fujii T, Hua H, Liao H, Martínez-Corral M, Pla F, Stern A, Waller L, Wang QH, Wetzstein G, Yamaguchi M, Yamamoto H. Roadmap on 3D integral imaging: sensing, processing, and display. Opt Express 2020; 28:32266-32293. [PMID: 33114917] [DOI: 10.1364/oe.402193]
Abstract
This Roadmap article provides an overview of research activities in the field of three-dimensional integral imaging, covering the sensing of 3D scenes, the processing of captured information, and the 3D display and visualization of information. The paper consists of 15 sections in which experts address sensing, processing, displays, augmented reality, microscopy, object recognition, and other applications; each section presents its author's view of the progress, potential, and open challenges in the field.
|
15
|
Chen F, Cui X, Liu J, Han B, Zhang X, Zhang D, Liao H. Tissue Structure Updating for In Situ Augmented Reality Navigation Using Calibrated Ultrasound and Two-Level Surface Warping. IEEE Trans Biomed Eng 2020; 67:3211-3222. [PMID: 32175853] [DOI: 10.1109/tbme.2020.2979535]
Abstract
OBJECTIVE In minimally invasive surgery (MIS), in situ augmented reality (AR) navigation systems are usually implemented with a glasses-free 3D display that represents the preoperative tissue structure and provides intuitive see-through guidance. However, because the tissue changes intraoperatively, the preoperative tissue structure no longer corresponds exactly to reality, which degrades the precision of in situ AR navigation. To solve this problem, we propose a method that updates the tissue structure for in situ AR navigation so that it reflects intraoperative tissue changes. METHODS The proposed method is based on calibrated ultrasound and a two-level surface warping technique. First, particle filter-based ultrasound calibration is performed to obtain the intraoperative positions of anatomical points. Second, these positions are input into the two-level surface warping method to update the preoperative tissue structure. Finally, a glasses-free 3D image of the updated tissue structure is rendered and superimposed onto the patient through a translucent mirror for in situ AR navigation. RESULTS We validated the proposed method by simulating liver tissue intervention and achieved a tissue-updating accuracy of 92.86%. The targeting error of AR navigation based on the proposed method was also evaluated in minimally invasive liver surgery, with a mean targeting error of 1.92 mm. CONCLUSION The results demonstrate that the proposed AR navigation method is effective. SIGNIFICANCE The proposed method can facilitate MIS by providing accurate 3D navigation.
|
16
|
Pérez-Pachón L, Poyade M, Lowe T, Gröning F. Image Overlay Surgery Based on Augmented Reality: A Systematic Review. Adv Exp Med Biol 2020; 1260:175-195. [PMID: 33211313] [DOI: 10.1007/978-3-030-47483-6_10]
Abstract
Augmented Reality (AR) applied to surgical guidance is gaining relevance in clinical practice. AR-based image overlay surgery (i.e. the accurate overlay of patient-specific virtual images onto the body surface) helps surgeons to transfer image data produced during the planning of the surgery (e.g. the correct resection margins of tissue flaps) to the operating room, thus increasing accuracy and reducing surgery times. We systematically reviewed 76 studies published between 2004 and August 2018 to explore which existing tracking and registration methods and technologies allow healthcare professionals and researchers to develop and implement these systems in-house. Most studies used non-invasive markers to automatically track a patient's position, as well as customised algorithms, tracking libraries or software development kits (SDKs) to compute the registration between patient-specific 3D models and the patient's body surface. Few studies combined the use of holographic headsets, SDKs and user-friendly game engines, and described portable and wearable systems that combine tracking, registration, hands-free navigation and direct visibility of the surgical site. Most accuracy tests included a low number of subjects and/or measurements and did not normally explore how these systems affect surgery times and success rates. We highlight the need for more procedure-specific experiments with a sufficient number of subjects and measurements and including data about surgical outcomes and patients' recovery. Validation of systems combining the use of holographic headsets, SDKs and game engines is especially interesting as this approach facilitates an easy development of mobile AR applications and thus the implementation of AR-based image overlay surgery in clinical practice.
Affiliation(s)
- Laura Pérez-Pachón: School of Medicine, Medical Sciences and Nutrition, University of Aberdeen, Aberdeen, UK
- Matthieu Poyade: School of Simulation and Visualisation, Glasgow School of Art, Glasgow, UK
- Terry Lowe: School of Medicine, Medical Sciences and Nutrition, University of Aberdeen, Aberdeen, UK; Head and Neck Oncology Unit, Aberdeen Royal Infirmary (NHS Grampian), Aberdeen, UK
- Flora Gröning: School of Medicine, Medical Sciences and Nutrition, University of Aberdeen, Aberdeen, UK
|
17
|
Ma C, Cui X, Chen F, Ma L, Xin S, Liao H. Knee arthroscopic navigation using virtual-vision rendering and self-positioning technology. Int J Comput Assist Radiol Surg 2019; 15:467-477. [PMID: 31808070] [DOI: 10.1007/s11548-019-02099-6]
Abstract
PURPOSE Knee arthroscopy suffers from a lack of depth information and easy occlusion of the visual field. To address these limitations, we propose an arthroscopic navigation system based on self-positioning technology with guidance from virtual-vision views. The system works without external tracking devices or added markers, which increases the working range and improves robustness during rotating operations. METHODS The fly-through view and the global positioning view for surgical guidance are rendered in real time through virtual-vision rendering. The fly-through view lets surgeons navigate the arthroscope through internal anatomical structures from a virtual camera perspective. The global positioning view transparently shows the posture of the arthroscope relative to the preoperative model. The posture of the arthroscope is estimated by fusing visual and inertial data with visual-inertial stereo SLAM. A flexible calibration method that transforms the posture of the arthroscope in the physical world into the virtual-vision rendering framework is proposed for the self-positioning navigation system. RESULTS Quantitative experiments evaluated the self-positioning accuracy: the mean translation error was 0.41 ± 0.28 mm and the mean rotation error was 0.11° ± 0.07°. The tracking range of the proposed system was approximately 1.4 times that of a traditional external optical tracking system for rotating operations. Simulated surgical operations were performed on a phantom, with the fly-through and global positioning views paired with the original arthroscopic images for intuitive guidance. CONCLUSION The proposed system provides surgeons with both fly-through and global positioning views without depending on traditional external tracking systems. The feasibility and robustness of the system were evaluated, and it shows promise for medical applications.
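Self-positioning accuracy of the kind reported above is commonly measured as a Euclidean distance between estimated and ground-truth positions and a geodesic angle between estimated and ground-truth orientations. A minimal sketch under that assumption (the paper's exact evaluation protocol is not given here, and the functions below are illustrative):

```python
import math

def translation_error(t_est, t_true):
    """Euclidean distance between estimated and ground-truth positions."""
    return math.dist(t_est, t_true)

def rotation_error_deg(R_est, R_true):
    """Geodesic angle in degrees between two 3x3 rotation matrices
    (lists of rows), i.e. the rotation angle of R_est @ R_true^T."""
    # trace(R_est @ R_true^T) equals the Frobenius inner product of the two
    trace = sum(R_est[i][j] * R_true[i][j] for i in range(3) for j in range(3))
    cos_theta = max(-1.0, min(1.0, (trace - 1.0) / 2.0))  # clamp numerics
    return math.degrees(math.acos(cos_theta))
```

For example, a pose rotated 90° about the z-axis relative to ground truth yields a rotation error of 90°.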
Affiliation(s)
- Cong Ma, Xiwen Cui, Longfei Ma, Shenghai Xin, Hongen Liao: Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, China
- Fang Chen: Department of Computer Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, 211106, China
|
18
|
Chen G, Huang T, Fan Z, Zhang X, Liao H. A naked eye 3D display and interaction system for medical education and training. J Biomed Inform 2019; 100:103319. [DOI: 10.1016/j.jbi.2019.103319]
|
19
|
Ma C, Chen G, Zhang X, Ning G, Liao H. Moving-Tolerant Augmented Reality Surgical Navigation System Using Autostereoscopic Three-Dimensional Image Overlay. IEEE J Biomed Health Inform 2019; 23:2483-2493. [DOI: 10.1109/jbhi.2018.2885378]
|
20
|
Huang T, Han B, Zhang X, Liao H. High-performance autostereoscopic display based on the lenticular tracking method. Opt Express 2019; 27:20421-20434. [PMID: 31510136] [DOI: 10.1364/oe.27.020421]
Abstract
We propose a novel full-parallax autostereoscopic display based on a lenticular tracking method that decouples the viewing angle from the image resolution and improves both parameters simultaneously. The proposed method makes the viewing angle independent of the image resolution and has the potential to solve the long-standing trade-off in integral photography. By employing a lenticular lens array with viewer tracking instead of the micro-lens array of integral photography, the method achieves a full-parallax 3D display with high image resolution and a wide viewing angle. A real-time tracking and rendering algorithm for the display is also proposed. Experimental comparisons with a conventional integral photography display and a tracking-based integral photography display demonstrate the feasibility of the lenticular tracking display and its advantages in display resolution and viewing angle, suggesting its potential for practical three-dimensional applications.
|
21
|
Wüller H, Behrens J, Garthaus M, Marquard S, Remmers H. A scoping review of augmented reality in nursing. BMC Nurs 2019; 18:19. [PMID: 31123428] [PMCID: PMC6521519] [DOI: 10.1186/s12912-019-0342-2]
Abstract
Background Augmented reality (AR) has the potential to be utilized in various fields. Nursing fulfils the requirements of smart glass use cases, and technology may be one method of supporting nurses that face challenges such as demographic change. The development of AR to assist in nursing is now feasible. Attempts to develop applications have been made, but there has not been an overview regarding the existing research. Objective The aim of this scoping review is to provide an overview of the current research regarding AR in nursing to identify possible research gaps. This led to the following research question: “To date, what research has been performed regarding the use of AR in nursing?”. A focus has been placed on the topics involving cases, evaluations, and devices used. Methods A scoping review was carried out with the methodological steps outlined by Arksey and O’Malley (2005) and further enhanced by Levac et al. (2010). A broad range of keywords were used systematically in eight databases including PubMed, Web of Science and ACM to search for topics in nursing. Results The search led to 23 publications that were included in the final analysis. The majority of the identified publications describe pilot studies. The methods used for identifying use cases and evaluating applications differ among the included studies. Furthermore, the devices used vary from study to study and may include smart glasses, tablets, and smart watches, among others. Previous studies predominantly evaluated the use of smart glasses. In addition, evaluations did not take framing conditions into account. Reviewed publications that evaluated the use of AR in nursing also identified technical challenges associated with AR. Conclusions These results show that the use of AR in nursing may have positive implications. 
While current studies focus on evaluating prototypes, future studies should perform long-term evaluations that take framing conditions and the long-term consequences of AR into account. Our findings are important and informative for nurses and technicians involved in the development of new technologies, who can use them to reflect on their own approaches to use-case identification, requirements elicitation, and evaluation.
Affiliation(s)
- Hanna Wüller, Jonathan Behrens, Marcus Garthaus, Sara Marquard, Hartmut Remmers: School of Human Sciences, Osnabrück University, Osnabrück, Lower Saxony, Germany
|
22
|
Uppot RN, Laguna B, McCarthy CJ, De Novi G, Phelps A, Siegel E, Courtier J. Implementing Virtual and Augmented Reality Tools for Radiology Education and Training, Communication, and Clinical Care. Radiology 2019; 291:570-580. [PMID: 30990383] [DOI: 10.1148/radiol.2019182210]
Abstract
Advances in virtual immersive and augmented reality technology, commercially available for the entertainment and gaming industry, hold potential for education and clinical use in medicine and the field of medical imaging. Radiology departments have begun exploring the use of these technologies to help with radiology education and clinical care. The purpose of this review article is to summarize how three institutions have explored using virtual and augmented reality for radiology.
Affiliation(s)
- Raul N Uppot, Colin J McCarthy, Gianluca De Novi: Department of Radiology, Division of Interventional Radiology, Massachusetts General Hospital, Boston, MA
- Benjamin Laguna, Andrew Phelps, Jesse Courtier: Department of Radiology and Biomedical Imaging, University of California San Francisco Medical Center, San Francisco, CA
- Eliot Siegel: Department of Radiology, University of Maryland Medical Center, Baltimore, MD
|
23
|
Sasaki H, Okaichi N, Watanabe H, Kano M, Miura M, Kawakita M, Mishina T. Color moiré reduction and resolution enhancement of flat-panel integral three-dimensional display. Opt Express 2019; 27:8488-8503. [PMID: 31052665] [DOI: 10.1364/oe.27.008488]
Abstract
Color moiré occurs owing to the subpixel structure of the display panel in the integral three-dimensional (3D) display method, deteriorating the 3D-image quality. To address this, we propose a method that reduces color moiré and improves the 3D-image resolution simultaneously by combining multiple 3D images. In the prototype system, three 3D display units with lens arrays closely attached to 8K-resolution display panels are optically combined. By controlling the color moiré of the 3D image generated on each display and by shifting and combining the elemental lenses constituting the lens arrays, the method sufficiently reduces the color moiré at positions distant from the lens array in the depth direction while suppressing deterioration of the 3D-image quality, and approximately doubles the resolution near the lens array.
|
24
|
Meulstee JW, Nijsink J, Schreurs R, Verhamme LM, Xi T, Delye HHK, Borstlap WA, Maal TJJ. Toward Holographic-Guided Surgery. Surg Innov 2018; 26:86-94. [DOI: 10.1177/1553350618799552]
Abstract
The implementation of augmented reality (AR) in image-guided surgery (IGS) can improve surgical interventions by presenting the image data directly on the patient at the correct position and in the actual orientation. This approach can resolve the switching focus problem, which occurs in conventional IGS systems when the surgeon has to look away from the operation field to consult the image data on a 2-dimensional screen. The Microsoft HoloLens, a head-mounted AR display, was combined with an optical navigation system to create an AR-based IGS system. Experiments were performed on a phantom model to determine the accuracy of the complete system and to evaluate the effect of adding AR. The results demonstrated a mean Euclidean distance of 2.3 mm with a maximum error of 3.5 mm for the complete system. Adding AR visualization to a conventional system increased the mean error by 1.6 mm. The introduction of AR in IGS was promising. The presented system provided a solution for the switching focus problem and created a more intuitive guidance system. With a further reduction in the error and more research to optimize the visualization, many surgical applications could benefit from the advantages of AR guidance.
Affiliation(s)
- Johan Nijsink: Radboud University Medical Center, Nijmegen, Netherlands
- Ruud Schreurs: Radboud University Medical Center, Nijmegen, Netherlands; Academic Medical Center, Amsterdam, Netherlands
- Tong Xi: Radboud University Medical Center, Nijmegen, Netherlands
|
25
|
Chen G, Wang H, Liu M, Liao H. Hybrid camera array based calibration for computer-generated integral photography display. J Opt Soc Am A Opt Image Sci Vis 2018; 35:1567-1574. [PMID: 30183012] [DOI: 10.1364/josaa.35.001567]
Abstract
Integral photography (IP) is one of the most promising 3D display technologies, achieving full-parallax 3D display without glasses. There is a great need to render correct, high-precision 3D images on an IP display. Achieving a correct 3D display requires calibration to correct optical misalignment and aberrations, but obtaining the correct mapping between the microlens array and the matrix display is challenging. We propose an IP calibration method for a 3D autostereoscopic integral photography display based on a sparse camera array. Our method distinguishes itself from previous methods by estimating the parameters of a dense correspondence map of the IP display with a relatively flexible setup, high precision, and reasonable time cost. We also propose a workflow that enables our method to handle both visible and invisible microlens arrays with good results. One prototype was fabricated to evaluate the feasibility of the proposed method, which we assess in terms of geometric accuracy and image quality.
|
26
|
Tang R, Ma L, Li A, Yu L, Rong Z, Zhang X, Xiang C, Liao H, Dong J. Choledochoscopic Examination of a 3-Dimensional Printing Model Using Augmented Reality Techniques: A Preliminary Proof of Concept Study. Surg Innov 2018; 25:492-498. [PMID: 29909727] [DOI: 10.1177/1553350618781622]
Abstract
BACKGROUND We applied augmented reality (AR) techniques to flexible choledochoscopy examinations. METHODS Enhanced computed tomography data of a patient with intrahepatic and extrahepatic biliary duct dilatation were collected to generate a hollow, 3-dimensional (3D) model of the biliary tree by 3D printing. The 3D printed model was placed in an opaque box. An electromagnetic (EM) sensor was internally installed in the choledochoscope instrument channel for tracking its movements through the passages of the 3D printed model, and an AR navigation platform was built using image overlay display. The porta hepatis was used as the reference marker with rigid image registration. The trajectories of the choledochoscope and the EM sensor were observed and recorded using the operator interface of the choledochoscope. RESULTS Training choledochoscopy was performed on the 3D printed model. The choledochoscope was guided into the left and right hepatic ducts, the right anterior hepatic duct, the bile ducts of segment 8, the hepatic duct in subsegment 8, the right posterior hepatic duct, and the left and the right bile ducts of the caudate lobe. Although stability in tracking was less than ideal, the virtual choledochoscope images and EM sensor tracking were effective for navigation. CONCLUSIONS AR techniques can be used to assist navigation in choledochoscopy examinations in bile duct models. Further research is needed to determine its benefits in clinical settings.
Affiliation(s)
- Rui Tang, Ang Li, Lihan Yu, Xinjing Zhang, Canhong Xiang, Jiahong Dong: Department of Hepatopancreatobiliary Surgery, Tsinghua University Affiliated Beijing Tsinghua Changgung Hospital, Beijing, China
- Longfei Ma, Hongen Liao: Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Zhixia Rong: Hepatobiliary and Pancreatic Surgery and Liver Transplantation Team, Medical Center of University of Montreal (CHUM), Montreal, Quebec, Canada
|
27
|
Ma L, Zhao Z, Zhang B, Jiang W, Fu L, Zhang X, Liao H. Three-dimensional augmented reality surgical navigation with hybrid optical and electromagnetic tracking for distal intramedullary nail interlocking. Int J Med Robot 2018; 14:e1909. [PMID: 29575601] [DOI: 10.1002/rcs.1909]
Affiliation(s)
- Longfei Ma, Boyu Zhang, Weipeng Jiang, Xinran Zhang, Hongen Liao: Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Zhe Zhao, Ligong Fu: Department of Orthopedics Surgery, Beijing Tsinghua Changgung Hospital, Beijing, China
|
28
|
Intraoperative utilization of advanced imaging modalities in a complex kidney stone case: a pilot case study. World J Urol 2018; 36:733-743. [DOI: 10.1007/s00345-018-2260-4]
|
29
|
Fan Z, Chen G, Wang J, Liao H. Spatial Position Measurement System for Surgical Navigation Using 3-D Image Marker-Based Tracking Tools With Compact Volume. IEEE Trans Biomed Eng 2018; 65:378-389. [DOI: 10.1109/tbme.2017.2771356]
|
30
|
Intelligent HMI in Orthopedic Navigation. Adv Exp Med Biol 2018; 1093:207-224. [DOI: 10.1007/978-981-13-1396-7_17]
|
31
|
3D Visualization and Augmented Reality for Orthopedics. Adv Exp Med Biol 2018; 1093:193-205. [DOI: 10.1007/978-981-13-1396-7_16]
|
32
|
Qian L, Barthel A, Johnson A, Osgood G, Kazanzides P, Navab N, Fuerst B. Comparison of optical see-through head-mounted displays for surgical interventions with object-anchored 2D-display. Int J Comput Assist Radiol Surg 2017; 12:901-910. [PMID: 28343301] [DOI: 10.1007/s11548-017-1564-y]
Abstract
PURPOSE Optical see-through head-mounted displays (OST-HMD) feature an unhindered and instantaneous view of the surgery site and can enable a mixed reality experience for surgeons during procedures. In this paper, we present a systematic approach to identify the criteria for evaluation of OST-HMD technologies for specific clinical scenarios, which benefit from using an object-anchored 2D-display visualizing medical information. METHODS Criteria for evaluating the performance of OST-HMDs for visualization of medical information and its usage are identified and proposed. These include text readability, contrast perception, task load, frame rate, and system lag. We choose to compare three commercially available OST-HMDs, which are representatives of currently available head-mounted display technologies. A multi-user study and an offline experiment are conducted to evaluate their performance. RESULTS Statistical analysis demonstrates that Microsoft HoloLens performs best among the three tested OST-HMDs, in terms of contrast perception, task load, and frame rate, while ODG R-7 offers similar text readability. The integration of indoor localization and fiducial tracking on the HoloLens provides significantly less system lag in a relatively motionless scenario. CONCLUSIONS With ever more OST-HMDs appearing on the market, the proposed criteria could be used in the evaluation of their suitability for mixed reality surgical intervention. Currently, Microsoft HoloLens may be more suitable than ODG R-7 and Epson Moverio BT-200 for clinical usability in terms of the evaluated criteria. To the best of our knowledge, this is the first paper that presents a methodology and conducts experiments to evaluate and compare OST-HMDs for their use as object-anchored 2D-display during interventions.
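Two of the evaluation criteria proposed above, frame rate and system lag, reduce to simple timestamp arithmetic. A minimal illustrative sketch (the timestamps and function names below are hypothetical, not the authors' measurement protocol):

```python
def frame_rate(timestamps):
    """Mean frames per second from consecutive frame timestamps (seconds)."""
    if len(timestamps) < 2:
        raise ValueError("need at least two timestamps")
    return (len(timestamps) - 1) / (timestamps[-1] - timestamps[0])

def mean_system_lag(event_times, display_times):
    """Mean end-to-end lag (seconds) from each tracked-motion event to the
    corresponding update appearing on the display."""
    lags = [d - e for e, d in zip(event_times, display_times)]
    return sum(lags) / len(lags)

# Hypothetical log (seconds); a real study would record device timestamps
frame_ts = [0.000, 0.017, 0.033, 0.050]
fps = frame_rate(frame_ts)  # roughly 60 fps for these 16-17 ms intervals
lag = mean_system_lag([0.00, 0.10], [0.05, 0.16])
```

Measuring the display-side timestamp on an OST-HMD is the hard part in practice; studies typically film the display with an external camera and compare frame counters.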
Affiliation(s)
- Long Qian: Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA; Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Alexander Barthel: Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA; Computer Aided Medical Procedures, Technische Universität München, Munich, Germany
- Alex Johnson: Department of Orthopaedic Surgery, Johns Hopkins Hospital, Baltimore, MD, USA
- Greg Osgood: Department of Orthopaedic Surgery, Johns Hopkins Hospital, Baltimore, MD, USA
- Peter Kazanzides: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Nassir Navab: Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA; Computer Aided Medical Procedures, Technische Universität München, Munich, Germany
- Bernhard Fuerst: Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
|