1. Liu S, Fan J, Yang Y, Xiao D, Ai D, Song H, Wang Y, Yang J. Monocular endoscopy images depth estimation with multi-scale residual fusion. Comput Biol Med 2024; 169:107850. [PMID: 38145602] [DOI: 10.1016/j.compbiomed.2023.107850]
Abstract
BACKGROUND Monocular depth estimation plays a fundamental role in clinical endoscopy surgery. However, the coherent illumination, smooth surfaces, and texture-less nature of endoscopy images present significant challenges to traditional depth estimation methods. Existing approaches struggle to accurately perceive depth in such settings. METHOD To overcome these challenges, this paper proposes a novel multi-scale residual fusion method for estimating the depth of monocular endoscopy images. Specifically, we address the issue of coherent illumination by leveraging image frequency domain component space transformation, thereby enhancing the stability of the scene's light source. Moreover, we employ an image radiation intensity attenuation model to estimate the initial depth map. Finally, to refine the accuracy of depth estimation, we utilize a multi-scale residual fusion optimization technique. RESULTS To evaluate the performance of our proposed method, extensive experiments were conducted on public datasets. The structural similarity measures for continuous frames in three distinct clinical data scenes reached impressive values of 0.94, 0.82, and 0.84, respectively. These results demonstrate the effectiveness of our approach in capturing the intricate details of endoscopy images. Furthermore, the depth estimation accuracy achieved remarkable levels of 89.3 % and 91.2 % for the two models' data, respectively, underscoring the robustness of our method. CONCLUSIONS Overall, the promising results obtained on public datasets highlight the significant potential of our method for clinical applications, facilitating reliable depth estimation and enhancing the quality of endoscopy surgical procedures.
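The "radiation intensity attenuation model" mentioned in the abstract can be illustrated with a generic inverse-square shading prior: if the light source is co-located with the endoscope, observed intensity falls roughly as 1/d², so a coarse initial depth can be read off as d ∝ 1/√I. The sketch below (Python/NumPy, with an assumed scale constant `k`) shows only this generic prior, not the authors' actual model or its multi-scale residual refinement.

```python
import numpy as np

def initial_depth_from_intensity(image, k=1.0, eps=1e-6):
    """Coarse depth from pixel intensity under an inverse-square light model.

    Assumes a point light co-located with the camera, so I ~ 1/d^2 and
    hence d = k / sqrt(I). Generic prior only; k is an arbitrary scale.
    """
    intensity = np.clip(np.asarray(image, dtype=float), eps, None)
    return k / np.sqrt(intensity)

# Brighter pixels read as closer to the light source.
img = np.array([[0.25, 1.00],
                [0.04, 0.64]])
depth = initial_depth_from_intensity(img)
```

In a real pipeline this map would only seed a refinement stage; the `eps` clip merely guards against division by zero in dark pixels.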
Affiliation(s)
- Shiyuan Liu
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China; China Center for Information Industry Development, Beijing, 100081, China
- Jingfan Fan
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Yun Yang
- Department of General Surgery, Beijing Friendship Hospital, Capital Medical University, National Clinical Research Center for Digestive Diseases, Beijing, 100050, China
- Deqiang Xiao
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Danni Ai
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, 100081, China
- Yongtian Wang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
2. Kim YC, Park CU, Lee SJ, Jeong WS, Na SW, Choi JW. Application of augmented reality using automatic markerless registration for facial plastic and reconstructive surgery. J Craniomaxillofac Surg 2024; 52:246-251. [PMID: 38199944] [DOI: 10.1016/j.jcms.2023.12.009]
Abstract
This study aimed to present a novel markerless augmented reality (AR) system that uses automatic registration based on machine-learning algorithms to visualize the facial region and provide an intraoperative guide for facial plastic and reconstructive surgeries. The study prospectively enrolled 20 patients scheduled for facial plastic and reconstructive surgeries. The AR system visualizes computed tomography (CT) data in three-dimensional (3D) space by aligning them with point clouds captured by a 3D camera. Point cloud registration consists of two stages: a preliminary registration that gives an initial estimate of the transformation using landmark detection, followed by a precise registration using the Iterative Closest Point (ICP) algorithm. The AR system can display CT data as two-dimensional slice images or as 3D images. The AR registration error was defined as the cloud-to-cloud distance between the surface data obtained from CT and from the 3D camera, and was calculated in each facial territory (upper, middle, and lower face) both while patients were awake and while they were orally intubated. The mean registration errors were 1.490 ± 0.384 mm while patients were awake and 1.948 ± 0.638 mm while they were orally intubated. When stratified by facial territory, there was a significant difference in the lower-face error between the awake (1.502 ± 0.480 mm) and orally intubated (2.325 ± 0.971 mm) conditions (p = 0.006). The markerless AR system can accurately visualize the facial region with a mean overall registration error of 1-2 mm, with a slight increase in the lower face due to errors arising from tube intubation.
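The two-stage alignment this abstract describes — a landmark-based initial estimate refined by ICP — can be sketched generically. The Python/NumPy snippet below is a minimal illustration under simplifying assumptions (synthetic points, exact landmark correspondences, brute-force nearest neighbours), not the authors' implementation.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src onto dst."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                     # reflection-safe rotation
    return R, c_dst - R @ c_src

def icp(src, dst, R, t, iters=20):
    """Refine an initial (R, t) by iterative closest point."""
    for _ in range(iters):
        moved = src @ R.T + t
        # nearest neighbour in dst for every transformed src point
        idx = np.argmin(((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1), axis=1)
        R, t = rigid_fit(src, dst[idx])
    return R, t

# Synthetic demo: recover a known rotation + translation.
rng = np.random.default_rng(0)
src = rng.normal(size=(200, 3))
a = 0.3
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([0.1, -0.2, 0.05])
dst = src @ R_true.T + t_true

R0, t0 = rigid_fit(src[:4], dst[:4])   # "preliminary" stage: 4 landmark pairs
R, t = icp(src, dst, R0, t0)           # "precise" stage: ICP refinement
```

Real surface scans lack exact correspondences, which is precisely why the landmark stage matters: it places ICP inside its basin of convergence.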
Affiliation(s)
- Young Chul Kim
- Department of Plastic and Reconstructive Surgery, Ulsan University College of Medicine, Asan Medical Center, Seoul, South Korea
- Seok Joon Lee
- Department of Plastic and Reconstructive Surgery, Ulsan University College of Medicine, Asan Medical Center, Seoul, South Korea
- Woo Shik Jeong
- Department of Plastic and Reconstructive Surgery, Ulsan University College of Medicine, Asan Medical Center, Seoul, South Korea
- Jong Woo Choi
- Department of Plastic and Reconstructive Surgery, Ulsan University College of Medicine, Asan Medical Center, Seoul, South Korea
3. Lin Z, Lei C, Yang L. Modern Image-Guided Surgery: A Narrative Review of Medical Image Processing and Visualization. Sensors (Basel) 2023; 23:9872. [PMID: 38139718] [PMCID: PMC10748263] [DOI: 10.3390/s23249872]
Abstract
Medical image analysis forms the basis of image-guided surgery (IGS) and many of its fundamental tasks. Driven by the growing number of medical imaging modalities, the medical imaging research community has developed methods and achieved functionality breakthroughs. However, with the overwhelming pool of information in the literature, it has become increasingly challenging for researchers to extract context-relevant information for specific applications, especially when many widely used methods exist in a variety of versions optimized for their respective application domains. Further equipped with sophisticated three-dimensional (3D) medical image visualization and digital reality technology, medical experts could substantially enhance their performance in IGS. The goal of this narrative review is to organize the key components of IGS in the aspects of medical image processing and visualization with new perspectives and insights. The literature search was conducted using mainstream academic search engines with a combination of keywords relevant to the field up until mid-2022. This survey systematically summarizes basic, mainstream, and state-of-the-art medical image processing methods, as well as how visualization technologies such as augmented, mixed, and virtual reality (AR/MR/VR) are enhancing performance in IGS. We hope this survey will shed some light on the future of IGS in the face of the challenges and opportunities facing research in medical image processing and visualization.
Affiliation(s)
- Zhefan Lin
- School of Mechanical Engineering, Zhejiang University, Hangzhou 310030, China
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Chen Lei
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Liangjing Yang
- School of Mechanical Engineering, Zhejiang University, Hangzhou 310030, China
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
4. Condino S, Cutolo F, Carbone M, Cercenelli L, Badiali G, Montemurro N, Ferrari V. Registration Sanity Check for AR-guided Surgical Interventions: Experience From Head and Face Surgery. IEEE J Transl Eng Health Med 2023; 12:258-267. [PMID: 38410181] [PMCID: PMC10896424] [DOI: 10.1109/jtehm.2023.3332088]
Abstract
Achieving and maintaining proper image registration accuracy is an open challenge of image-guided surgery. This work explores and assesses the efficacy of a registration sanity check method for augmented reality-guided navigation (AR-RSC), based on the visual inspection of virtual 3D models of landmarks. We analyzed the sensitivity and specificity of AR-RSC by recruiting 36 subjects to assess the registration accuracy of a set of 114 AR images generated from camera images acquired during an AR-guided orthognathic intervention. Translational or rotational errors of known magnitude, up to ±1.5 mm/±15.5°, were artificially added to the image set to simulate different registration errors. The study analyzed the performance of AR-RSC when varying (1) the virtual models selected for misalignment evaluation (the models of brackets, incisor teeth, and gingival margins in our experiment), (2) the type of registration error (translation/rotation), and (3) the level of user experience with AR technologies. Results show that: (1) the sensitivity and specificity of AR-RSC depend on the virtual models (globally, a median true positive rate of up to 79.2% was reached with brackets, and a median true negative rate of up to 64.3% with incisor teeth); (2) some error components are more difficult to identify visually; and (3) the level of user experience does not affect the method. In conclusion, the proposed AR-RSC, tested also in the operating room, could represent an efficient method to monitor and optimize registration accuracy during an intervention, but special attention should be paid to the selection of the AR data chosen for visual inspection of the registration accuracy.
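The true positive and true negative rates reported above reduce to simple counts over rater judgements. Below is a minimal, self-contained Python illustration with hypothetical ratings (the study's raw data are not reproduced here); "positive" means a genuinely misaligned overlay that the rater flags.

```python
def sensitivity_specificity(flagged, misaligned):
    """Sensitivity/specificity of a visual registration sanity check.

    flagged    -- rater judged the AR overlay misaligned (predicted positive)
    misaligned -- ground truth: an error was artificially injected
    """
    pairs = list(zip(flagged, misaligned))
    tp = sum(1 for f, m in pairs if f and m)          # correctly flagged
    fn = sum(1 for f, m in pairs if not f and m)      # missed misalignment
    tn = sum(1 for f, m in pairs if not f and not m)  # correctly passed
    fp = sum(1 for f, m in pairs if f and not m)      # false alarm
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical ratings for six AR images (not the study's data).
flagged    = [True, True, False, True, False, False]
misaligned = [True, True, True,  False, False, False]
sens, spec = sensitivity_specificity(flagged, misaligned)
```

In the study these rates are computed per virtual model (brackets, incisors, gingival margins), which is why the reported sensitivity and specificity peak with different models.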
Affiliation(s)
- Sara Condino
- Department of Information Engineering, University of Pisa, 56126 Pisa, Italy
- Fabrizio Cutolo
- Department of Information Engineering, University of Pisa, 56126 Pisa, Italy
- Marina Carbone
- Department of Information Engineering, University of Pisa, 56126 Pisa, Italy
- Laura Cercenelli
- EDIMES Laboratory of Bioengineering, Department of Experimental, Diagnostic and Specialty Medicine, University of Bologna, 40138 Bologna, Italy
- Giovanni Badiali
- Department of Information Engineering, University of Pisa, 56126 Pisa, Italy
- Nicola Montemurro
- Department of Neurosurgery, Azienda Ospedaliera Universitaria Pisana (AOUP), 56127 Pisa, Italy
- Vincenzo Ferrari
- Department of Information Engineering, University of Pisa, 56126 Pisa, Italy
5. Fan X, Tao B, Tu P, Shen Y, Wu Y, Chen X. A novel mixed reality-guided dental implant placement navigation system based on virtual-actual registration. Comput Biol Med 2023; 166:107560. [PMID: 37847946] [DOI: 10.1016/j.compbiomed.2023.107560]
Abstract
BACKGROUND The key to successful dental implant surgery is placing the implants accurately along the pre-operatively planned paths. Surgical navigation systems can significantly improve the safety and accuracy of implantation. However, the surgeon's frequent shifts of view between the surgical site and the computer screen are disruptive. Mixed-reality technology addresses this problem: by wearing a HoloLens device, the surgeon sees the virtual three-dimensional (3D) image aligned with the actual surgical site in a single field of view. METHODS This study utilized mixed reality technology to enhance dental implant surgery navigation. Our first step was reconstructing a virtual 3D model from pre-operative cone-beam CT (CBCT) images. We then obtained the relative positions of objects using the navigation device and the HoloLens camera. Via virtual-actual registration algorithms, the transformation matrices between the HoloLens device and the navigation tracker were acquired through HoloLens-tracker registration, and the transformation matrices between the virtual model and the patient phantom through image-phantom registration. In addition, a surgical drill calibration algorithm provided the transformation matrices between the surgical drill and the patient phantom. Together, these algorithms allow real-time tracking of the surgical drill's location and orientation relative to the patient phantom under the navigation device. With the aid of the HoloLens 2, virtual 3D images and actual patient phantoms can be aligned accurately, providing surgeons with a clear visualization of the implant path. RESULTS Phantom experiments were conducted using 30 patient phantoms, with a total of 102 dental implants inserted. Comparisons between the actual and pre-operatively planned implant paths showed that our system achieved a coronal deviation of 1.507 ± 0.155 mm, an apical deviation of 1.542 ± 0.143 mm, and an angular deviation of 3.468 ± 0.339°. The deviation was not significantly different from that of navigation-guided dental implant placement and was better than that of freehand placement. CONCLUSION Our proposed system integrates the pre-operatively planned dental implant paths with the patient phantom, helping surgeons achieve adequate accuracy in dental implant surgery. Furthermore, this system is expected to be applicable to animal and cadaveric experiments in further studies.
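The matrix bookkeeping described in the METHODS — chaining tracker, image, and drill transformations so the drill can be expressed in the phantom frame — amounts to composing 4×4 homogeneous transforms. A minimal Python/NumPy sketch with made-up calibration values (identity rotations, illustrative translations in millimetres), not the system's actual calibration data:

```python
import numpy as np

def hom(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical calibration results (identity rotations, millimetre offsets):
T_tracker_from_drill = hom(np.eye(3), [0.0, 0.0, 10.0])    # drill in tracker frame
T_tracker_from_phantom = hom(np.eye(3), [5.0, 0.0, 0.0])   # phantom in tracker frame

# Chain the calibrations to express the drill pose in the phantom frame.
T_phantom_from_drill = np.linalg.inv(T_tracker_from_phantom) @ T_tracker_from_drill
tip_in_phantom = T_phantom_from_drill @ np.array([0.0, 0.0, 0.0, 1.0])
```

Writing each transform as `T_a_from_b` makes the chaining rule mechanical: adjacent frame names must cancel, which catches most composition mistakes at a glance.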
Affiliation(s)
- Xingqi Fan
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Baoxin Tao
- Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Puxun Tu
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yihan Shen
- Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Yiqun Wu
- Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Xiaojun Chen
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China; Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
6. Nguyen DCT, Benameur S, Mignotte M, Lavoie F. Unsupervised registration of 3D knee implant components to biplanar X-ray images. BMC Med Imaging 2023; 23:133. [PMID: 37718452] [PMCID: PMC10506289] [DOI: 10.1186/s12880-023-01048-9]
Abstract
BACKGROUND Registration of three-dimensional (3D) knee implant components to radiographic images provides the 3D position of the implants, which aids in analyzing component alignment after total knee arthroplasty. METHODS We present an automatic 3D-to-two-dimensional (2D) registration method using biplanar radiographic images, based on a hybrid similarity measure integrating region- and edge-based information. This measure is defined as a weighted combination of an edge potential field-based similarity, which relates the external contours of the component projections to an edge potential field estimated on the two radiographic images, and an object specificity property, which is based on distinguishing the region labels inside and outside the object. RESULTS The accuracy of our 3D/2D registration algorithm was assessed on a sample of 64 components (32 femoral and 32 tibial). In our tests, we obtained an average root mean square error (RMSE) of 0.18 mm, which is significantly lower than that of either single-similarity method, supporting our hypothesis of better stability and accuracy with the proposed approach. CONCLUSION Our method, which provides six accurate registration parameters (three rotations and three translations) without requiring any fiducial markers, makes it possible to analyze the rotational alignment of the femoral and tibial components on a large number of cases. In addition, the method can be extended to register other implants or bones.
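The hybrid measure described above combines an edge term, sampled from a potential field around image edges along the projected contour, with a region term, via a fixed weight. The following Python/NumPy sketch is a toy illustration (brute-force distance field, exponential fall-off, and an arbitrary weight `w = 0.6`), not the paper's exact formulation:

```python
import numpy as np

def edge_potential(edge_pixels, shape, sigma=2.0):
    """Potential field that peaks on image edges and decays with distance.

    Brute-force stand-in for the smoothed edge map that the contour
    term of a hybrid similarity measure samples.
    """
    ys, xs = np.mgrid[:shape[0], :shape[1]]
    grid = np.stack([ys, xs], axis=-1).reshape(-1, 2).astype(float)
    edges = np.asarray(edge_pixels, dtype=float)
    d = np.linalg.norm(grid[:, None, :] - edges[None, :, :], axis=-1).min(axis=1)
    return np.exp(-d / sigma).reshape(shape)

def hybrid_similarity(contour_pts, potential, region_term, w=0.6):
    """Weighted combination of edge- and region-based terms (w is illustrative)."""
    edge_term = float(np.mean([potential[y, x] for y, x in contour_pts]))
    return w * edge_term + (1.0 - w) * region_term

pot = edge_potential([(2, 2)], (5, 5))
# A projected contour lying exactly on the edge scores the maximum edge term.
score = hybrid_similarity([(2, 2)], pot, region_term=0.5)
```

An optimizer would evaluate this score over the six pose parameters of the implant model; the weighted combination is what stabilizes the search when either term alone is ambiguous.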
Affiliation(s)
- Dac Cong Tai Nguyen
- Département d'Informatique et de Recherche Opérationnelle (DIRO), Université de Montréal, Montréal, Québec, Canada
- Eiffel Medtech Inc., Montréal, Québec, Canada
- Max Mignotte
- Département d'Informatique et de Recherche Opérationnelle (DIRO), Université de Montréal, Montréal, Québec, Canada
- Frédéric Lavoie
- Eiffel Medtech Inc., Montréal, Québec, Canada
- Orthopedic Surgery Department, Centre Hospitalier de l'Université de Montréal (CHUM), Montréal, Québec, Canada
7. Kwon KH, Kim MY. Robust H-K Curvature Map Matching for Patient-to-CT Registration in Neurosurgical Navigation Systems. Sensors (Basel) 2023; 23:4903. [PMID: 37430817] [DOI: 10.3390/s23104903]
Abstract
Image-to-patient registration is the process of matching the coordinate systems of the real patient and medical images, so that images such as computed tomography (CT) can be actively used during surgery. This paper deals with a markerless method utilizing patient scan data and 3D data from CT images. The 3D surface data of the patient are registered to the CT data using computer-based optimization methods such as the iterative closest point (ICP) algorithm. However, without a proper initial location, the conventional ICP algorithm converges slowly and is prone to getting trapped in local minima. We propose an automatic and robust 3D data registration method that accurately finds a proper initial location for the ICP algorithm using curvature matching. The proposed method finds and extracts the matching area for 3D registration by converting the 3D CT data and 3D scan data into 2D curvature images and performing curvature matching between them. Curvature features are robust to translation, rotation, and even some deformation. The proposed image-to-patient registration then performs precise 3D registration of the extracted partial 3D CT data and the patient's scan data using the ICP algorithm.
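The H-K (mean/Gaussian) curvature maps that drive this kind of matching can be computed from a depth image with the standard Monge-patch formulas for a surface z = f(x, y). This Python/NumPy sketch uses finite differences on a synthetic paraboloid; it illustrates the general technique only, not the authors' pipeline:

```python
import numpy as np

def hk_curvature(z):
    """Mean (H) and Gaussian (K) curvature of a depth map z = f(x, y),
    via finite differences and the Monge-patch formulas."""
    fy, fx = np.gradient(z)        # first derivatives (axis 0 = y, axis 1 = x)
    fyy, _ = np.gradient(fy)
    fxy, fxx = np.gradient(fx)
    denom = 1.0 + fx**2 + fy**2
    K = (fxx * fyy - fxy**2) / denom**2
    H = ((1 + fx**2) * fyy - 2 * fx * fy * fxy + (1 + fy**2) * fxx) / (2 * denom**1.5)
    return H, K

# Synthetic surface: a paraboloid z = (x^2 + y^2) / 2 on a 7x7 grid.
y, x = np.mgrid[-3:4, -3:4].astype(float)
H, K = hk_curvature(0.5 * (x**2 + y**2))
# At the apex (index [3, 3]) both principal curvatures equal 1, so H = K = 1.
```

Because H and K are intrinsic to the surface, the resulting 2D maps can be compared between CT and scan data regardless of how the patient is positioned, which is what makes them a good ICP initializer.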
Affiliation(s)
- Ki Hoon Kwon
- School of Electronic and Electrical Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
- Min Young Kim
- School of Electronic and Electrical Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
- Research Center for Neurosurgical Robotic System, Kyungpook National University, Daegu 41566, Republic of Korea
8. Suenaga H, Sakakibara A, Taniguchi A, Hoshi K. Computer-Assisted Preoperative Simulation and Augmented Reality for Extraction of Impacted Supernumerary Teeth: A Clinical Case Report of Two Cases. J Oral Maxillofac Surg 2023; 81:201-205. [DOI: 10.1016/j.joms.2022.10.017]
Abstract
Delayed eruption, malocclusion, poor oral hygiene, and formation of follicular cysts are some complications associated with an impacted supernumerary tooth (ST). Although surgical extraction is one method of preventing these complications, it can also lead to fractured roots or carries a risk of permanent injury to young teeth and gingiva. Recently, computer-assisted preoperative simulation has been helpful in planning precise extraction of impacted ST guided by three-dimensional images. Herein, we present 2 cases of extraction of severely impacted ST guided by preoperative computer-assisted simulation and intraoperative augmented reality. While being minimally invasive, the augmented reality-guided system can precisely highlight the tooth position. The therapeutic aspects of these procedures are also discussed.
Affiliation(s)
- Hideyuki Suenaga
- Lecturer, Department of Oral-Maxillofacial Surgery and Orthodontics, The University of Tokyo Hospital, Tokyo, Japan
- Ayuko Sakakibara
- Clinical Fellow, Department of Oral-Maxillofacial Surgery and Orthodontics, The University of Tokyo Hospital, Tokyo, Japan
- Asako Taniguchi
- Assistant Professor, Department of Oral-Maxillofacial Surgery and Orthodontics, The University of Tokyo Hospital, Tokyo, Japan
- Kazuto Hoshi
- Professor, Department of Oral-Maxillofacial Surgery and Orthodontics, The University of Tokyo Hospital, Tokyo, Japan
9. Han B, Li R, Huang T, Ma L, Liang H, Zhang X, Liao H. An accurate 3D augmented reality navigation system with enhanced autostereoscopic display for oral and maxillofacial surgery. Int J Med Robot 2022; 18:e2404. [DOI: 10.1002/rcs.2404]
Affiliation(s)
- Boxuan Han
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Ruiyang Li
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Tianqi Huang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Longfei Ma
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Hanying Liang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Xinran Zhang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Hongen Liao
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
10. Wu B, Liu P, Xiong C, Li C, Zhang F, Shen S, Shao P, Yao P, Niu C, Xu R. Stereotactic co-axial projection imaging for augmented reality neuronavigation: a proof-of-concept study. Quant Imaging Med Surg 2022; 12:3792-3802. [PMID: 35782260] [PMCID: PMC9246757] [DOI: 10.21037/qims-21-1144]
Abstract
BACKGROUND Lack of intuitiveness and poor hand-eye coordination present a major technical challenge in neurosurgical navigation. METHODS We developed an integrated dexterous stereotactic co-axial projection imaging (sCPI) system featuring orthotopic image projection for augmented reality (AR) neurosurgical navigation. The performance characteristics of the sCPI system, including projection resolution and navigation accuracy, were quantitatively verified. The resolution of the sCPI was tested with a USAF 1951 resolution test chart. The stereotactic navigation accuracy of the sCPI was measured using a calibration panel with a 7×7 circle-array pattern. In benchtop validation, the navigation accuracy of the sCPI and the BrainLab Kick Navigation Station was compared using a skull phantom with 8 intracranial targets. Finally, we demonstrated the potential clinical application of the sCPI in a clinical trial. RESULTS The resolution test showed that the resolution of the sCPI was 1.3 mm. In the stereotactic navigation accuracy test, the maximum and minimum errors of the sCPI were 2.9 and 0.3 mm, and the mean error was 1.5 mm. The test also showed that the navigation error of the sCPI increases with the pitch and yaw angles, but there was no obvious difference in errors between yaw directions, indicating that the navigation error is unbiased across directions. The benchtop validation showed average navigation errors of 1.4 ± 0.8 mm for the sCPI system and 1.8 ± 0.7 mm for the Kick Navigation Station, medians of 1.3 and 1.9 mm, and average preparation times of 3 min 24 s and 6 min 8 s, respectively. The clinical feasibility of sCPI-assisted neurosurgical navigation was demonstrated in a clinical study. Compared with the BrainLab device, the sCPI system required less time for preoperative preparation and enhanced the clinician's experience of intraoperative visualization and navigation. CONCLUSIONS The sCPI technique can potentially be used in many surgical applications for intuitive visualization of medical information and intraoperative guidance of surgical trajectories.
Affiliation(s)
- Bingxuan Wu
- Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, China
- Peng Liu
- Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou, China
- Chi Xiong
- Department of Neurosurgery, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Chenmeng Li
- Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, China
- Fan Zhang
- Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, China
- Shuwei Shen
- Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou, China
- Pengfei Shao
- Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, China
- Peng Yao
- Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, China
- Chaoshi Niu
- Department of Neurosurgery, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Ronald Xu
- Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou, China
11. Nguyen HP, Kim T, Kim S. Markerless registration approach using dynamic touchable region model. Int J Med Robot 2022; 18:e2376. [DOI: 10.1002/rcs.2376]
Affiliation(s)
- Hang Phuong Nguyen
- Department of Electrical, Electronic, and Computer Engineering, University of Ulsan, Ulsan, South Korea
- Taeho Kim
- Department of Electrical, Electronic, and Computer Engineering, University of Ulsan, Ulsan, South Korea
- Sungmin Kim
- Department of Electrical, Electronic, and Computer Engineering, University of Ulsan, Ulsan, South Korea
12. Kalaiarasan K, Prathap L, Ayyadurai M, Subhashini P, Tamilselvi T, Avudaiappan T, Infant Raj I, Alemayehu Mamo S, Mezni A. Clinical Application of Augmented Reality in Computerized Skull Base Surgery. Evid Based Complement Alternat Med 2022; 2022:1335820. [PMID: 35600956] [PMCID: PMC9117015] [DOI: 10.1155/2022/1335820]
Abstract
Skull base surgery involves the manipulation of small and complicated structures in the domains of otology, rhinology, neurosurgery, and maxillofacial surgery, with critical nerves and vessels in close proximity to these structures. Augmented reality is an emerging technology that may transform skull base approaches by supplying vital anatomical and navigational information brought together in a single display. However, awareness and acceptance of the potential benefits of augmented reality frameworks in the skull base region remain low. This article examines the usefulness of augmented reality frameworks in skull base surgery and highlights the obstacles that current technology faces and their potential remedies. A technical perspective on the distinct strategies used in developing an augmented reality framework is also offered. Recent work shows growing interest in augmented reality frameworks that may enable safer and more practical procedures; however, several concerns must be addressed before they can be broadly incorporated into routine practice.
Affiliation(s)
- K. Kalaiarasan
- Department of Information Technology, M. Kumarasamy College of Engineering, Karur, India
- Lavanya Prathap
- Department of Anatomy, Saveetha Dental College and Hospital, Saveetha Institute of Medical and Technical Sciences, Chennai, Tamil Nadu 600077, India
- M. Ayyadurai
- SG, Institute of ECE, Saveetha School of Engineering, SIMATS, Chennai, Tamil Nadu 600077, India
- P. Subhashini
- Department of Computer Science and Engineering, J.N.N Institute of Engineering, Kannigaipair, Tamil Nadu 601102, India
- T. Tamilselvi
- Department of Computer Science and Engineering, Panimalar Institute of Technology, Varadarajapuram, Tamil Nadu 600123, India
- T. Avudaiappan
- Computer Science and Engineering, K. Ramakrishnan College of Technology, Trichy 621112, India
- I. Infant Raj
- Department of Computer Science and Engineering, K. Ramakrishnan College of Engineering, Trichy, India
- Samson Alemayehu Mamo
- Department of Electrical and Computer Engineering, Faculty of Electrical and Biomedical Engineering, Institute of Technology, Hawassa University, Awasa, Ethiopia
- Amine Mezni
- Department of Chemistry, College of Science, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
13
de Geer A, Brouwer de Koning S, van Alphen M, van der Mierden S, Zuur C, van Leeuwen F, Loeve A, van Veen R, Karakullukcu M. Registration methods for surgical navigation of the mandible: a systematic review. Int J Oral Maxillofac Surg 2022; 51:1318-1329. [DOI: 10.1016/j.ijom.2022.01.017] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2021] [Revised: 10/18/2021] [Accepted: 01/26/2022] [Indexed: 12/20/2022]
14
Pham Dang N, Chandelon K, Barthélémy I, Devoize L, Bartoli A. A proof-of-concept augmented reality system in oral and maxillofacial surgery. JOURNAL OF STOMATOLOGY, ORAL AND MAXILLOFACIAL SURGERY 2021; 122:338-342. [PMID: 34087435 DOI: 10.1016/j.jormas.2021.05.012] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/25/2021] [Accepted: 05/31/2021] [Indexed: 01/16/2023]
Abstract
BACKGROUND The advent of digital medical imaging, medical image analysis and computer vision has opened new horizons for surgeons, with the possibility of adding virtual information to the real operative field. For oral and maxillofacial surgeons, overlaying anatomical structures to protect (such as teeth, sinus floors, and inferior and superior alveolar nerves) or to remove (such as cysts, tumours, and impacted teeth) presents a real clinical interest. MATERIAL AND METHODS In this work, we propose a proof-of-concept markerless augmented reality system for oral and maxillofacial surgery, in which a virtual scene is generated preoperatively and mixed with reality to reveal the location of hidden anatomical structures intraoperatively. We devised computer software to process still video frames of the operating field and to display them on the operating room screens. RESULTS Firstly, we describe the proposed system, in which virtuality is aligned with reality without artificial markers. Analysis of the dental occlusion plane and cusp detection allow us to initialise the alignment process. Secondly, we validate feasibility with an experimental approach on a 3D-printed jaw phantom and an ex vivo pig jaw. Thirdly, we evaluate the potential clinical benefit on a patient. CONCLUSION This proof-of-concept highlights the feasibility and interest of augmented reality for visualising hidden anatomical structures without artificial markers.
Affiliation(s)
- Nathalie Pham Dang
- Department of Oral and Maxillofacial Surgery, NHE - CHU de Clermont-Ferrand, Université d'Auvergne, Clermont-Ferrand 63003, France; EnCoV, Institut Pascal, UMR 6602, CNRS/UBP/SIGMA, 63000 Clermont-Ferrand, France; UMR Inserm/UdA, U1107, Neuro-Dol, Trigeminal Pain and Migraine, Université d'Auvergne, Clermont-Ferrand 63003, France
- Kilian Chandelon
- EnCoV, Institut Pascal, UMR 6602, CNRS/UBP/SIGMA, 63000 Clermont-Ferrand, France
- Isabelle Barthélémy
- Department of Oral and Maxillofacial Surgery, NHE - CHU de Clermont-Ferrand, Université d'Auvergne, Clermont-Ferrand 63003, France; UMR Inserm/UdA, U1107, Neuro-Dol, Trigeminal Pain and Migraine, Université d'Auvergne, Clermont-Ferrand 63003, France
- Laurent Devoize
- UMR Inserm/UdA, U1107, Neuro-Dol, Trigeminal Pain and Migraine, Université d'Auvergne, Clermont-Ferrand 63003, France; Department of Odontology, CHU de Clermont-Ferrand, Université d'Auvergne, Clermont-Ferrand 63003, France
- Adrien Bartoli
- EnCoV, Institut Pascal, UMR 6602, CNRS/UBP/SIGMA, 63000 Clermont-Ferrand, France
15
Gu W, Shah K, Knopf J, Navab N, Unberath M. Feasibility of image-based augmented reality guidance of total shoulder arthroplasty using Microsoft HoloLens 1. COMPUTER METHODS IN BIOMECHANICS AND BIOMEDICAL ENGINEERING: IMAGING & VISUALIZATION 2021. [DOI: 10.1080/21681163.2020.1835556] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
Affiliation(s)
- Wenhao Gu
- Johns Hopkins University, Baltimore, USA
16
Manni F, Mamprin M, Holthuizen R, Shan C, Burström G, Elmi-Terander A, Edström E, Zinger S, de With PHN. Multi-view 3D skin feature recognition and localization for patient tracking in spinal surgery applications. Biomed Eng Online 2021; 20:6. [PMID: 33413426 PMCID: PMC7792004 DOI: 10.1186/s12938-020-00843-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2020] [Accepted: 12/19/2020] [Indexed: 11/25/2022] Open
Abstract
BACKGROUND Minimally invasive spine surgery is dependent on accurate navigation. Computer-assisted navigation is increasingly used in minimally invasive surgery (MIS), but current solutions require the use of reference markers in the surgical field for both patient and instrument tracking. PURPOSE To improve reliability and facilitate clinical workflow, this study proposes a new marker-free tracking framework based on skin feature recognition. METHODS Maximally Stable Extremal Regions (MSER) and Speeded-Up Robust Features (SURF) algorithms are applied for skin feature detection. The proposed tracking framework is based on a multi-camera setup for obtaining multi-view acquisitions of the surgical area. Features can then be accurately detected using MSER and SURF and afterwards localized by triangulation. The triangulation error is used for assessing the localization quality in 3D. RESULTS The framework was tested on a cadaver dataset and in eight clinical cases. The detected features for the entire patient datasets were found to have an overall triangulation error of 0.207 mm for MSER and 0.204 mm for SURF. The localization accuracy was compared to a system with conventional markers, serving as ground truth. An average accuracy of 0.627 and 0.622 mm was achieved for MSER and SURF, respectively. CONCLUSIONS This study demonstrates that skin feature localization for patient tracking in a surgical setting is feasible. The technology shows promising results in terms of detected features and localization accuracy. In the future, the framework may be further improved by exploiting extended feature processing using modern optical imaging techniques for clinical applications where patient tracking is crucial.
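The localization step described in this abstract rests on standard two-view triangulation: a feature matched across two calibrated camera views is lifted to 3D, and the residual of that reconstruction serves as a per-feature quality measure. The sketch below is an illustrative reconstruction of that step using the classic direct linear transform (DLT), not the authors' implementation; the camera matrices and coordinates are synthetic.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """DLT triangulation of one 3D point seen in two calibrated views.

    P1, P2 are 3x4 projection matrices; x1, x2 are pixel coordinates of
    the same feature in each view. Returns the 3D point (world frame).
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)      # null-space vector = homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]

def reprojection_error(P1, P2, x1, x2):
    """Mean pixel residual of the triangulated point in the two views."""
    X = np.append(triangulate(P1, P2, x1, x2), 1.0)
    errs = []
    for P, x in ((P1, x1), (P2, x2)):
        p = P @ X
        errs.append(np.linalg.norm(p[:2] / p[2] - x))
    return float(np.mean(errs))

# Synthetic stereo rig: shared intrinsics, second camera shifted along x.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-50.0], [0.0], [0.0]])])

X_true = np.array([10.0, -5.0, 400.0])
p1 = P1 @ np.append(X_true, 1.0)
p2 = P2 @ np.append(X_true, 1.0)
x1, x2 = p1[:2] / p1[2], p2[:2] / p2[2]

X_est = triangulate(P1, P2, x1, x2)
```

The paper reports its triangulation error in millimetres in 3D against marker-based ground truth; the pixel reprojection residual above is the analogous quality check for a single matched feature.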
Affiliation(s)
- Francesca Manni
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Marco Mamprin
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Caifeng Shan
- Shandong University of Science and Technology, Qingdao, China
- Gustav Burström
- Department of Neurosurgery, Karolinska University Hospital and Department of Clinical Neuroscience, Karolinska Institute, Stockholm, Sweden
- Adrian Elmi-Terander
- Department of Neurosurgery, Karolinska University Hospital and Department of Clinical Neuroscience, Karolinska Institute, Stockholm, Sweden
- Erik Edström
- Department of Neurosurgery, Karolinska University Hospital and Department of Clinical Neuroscience, Karolinska Institute, Stockholm, Sweden
- Svitlana Zinger
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Peter H N de With
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
17
Lungu AJ, Swinkels W, Claesen L, Tu P, Egger J, Chen X. A review on the applications of virtual reality, augmented reality and mixed reality in surgical simulation: an extension to different kinds of surgery. Expert Rev Med Devices 2020; 18:47-62. [PMID: 33283563 DOI: 10.1080/17434440.2021.1860750] [Citation(s) in RCA: 51] [Impact Index Per Article: 12.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
Background: Research indicates that the apprenticeship model, which is the gold standard for training surgical residents, is obsolete. For that reason, there is a continuing effort toward the development of high-fidelity surgical simulators to replace the apprenticeship model. Applying Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR) in surgical simulators increases the fidelity, level of immersion and overall experience of these simulators. Areas covered: The objective of this review is to provide a comprehensive overview of the application of VR, AR and MR for distinct surgical disciplines, including maxillofacial surgery and neurosurgery. The current developments in these areas, as well as potential future directions, are discussed. Expert opinion: The key components for incorporating VR into surgical simulators are visual and haptic rendering. These components ensure that the user is completely immersed in the virtual environment and can interact in the same way as in the physical world. The key components for the application of AR and MR in surgical simulators include the tracking system as well as the visual rendering. The advantages of these surgical simulators are the ability to perform user evaluations and to increase the training frequency of surgical residents.
Affiliation(s)
- Abel J Lungu
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Wout Swinkels
- Computational Sensing Systems, Department of Engineering Technology, Hasselt University, Diepenbeek, Belgium
- Luc Claesen
- Computational Sensing Systems, Department of Engineering Technology, Hasselt University, Diepenbeek, Belgium
- Puxun Tu
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jan Egger
- Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria; Department of Oral & Maxillofacial Surgery, Medical University of Graz, Graz, Austria; The Laboratory of Computer Algorithms for Medicine, Medical University of Graz, Graz, Austria
- Xiaojun Chen
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
18
Development of Simulation Methods in Biomedical Sciences - From Phantoms to Virtual Patients. SERBIAN JOURNAL OF EXPERIMENTAL AND CLINICAL RESEARCH 2020. [DOI: 10.2478/sjecr-2020-0051] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022] Open
Abstract
Simulation is an imitation of the operation of a real process or system over time, applied for a variety of purposes including entertainment, education, training, system evaluation, and research. Medical simulation is an artificial representation of real clinical situations that is applied in education; it allows the acquisition of clinical skills without the risk of harming the patient. Medical simulations have been developed and refined over the years: simulation models, cadavers, actors and robots have found wide application in medical training. Among the more sophisticated simulation technologies, virtual and augmented reality are used. A presence of science in the digital world is necessary so that proven knowledge can be communicated in an adequate manner. The traditional teaching process, despite serious and thorough research, can seem uninspiring, and it is important that educators and teachers keep up with the times and provide students with the latest teaching and working methods.
19
Quantitative Augmented Reality-Assisted Free-Hand Orthognathic Surgery Using Electromagnetic Tracking and Skin-Attached Dynamic Reference. J Craniofac Surg 2020; 31:2175-2181. [PMID: 33136850 DOI: 10.1097/scs.0000000000006739] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022] Open
Abstract
The purpose of this study was to develop a quantitative AR-assisted free-hand orthognathic surgery method using electromagnetic (EM) tracking and a skin-attached dynamic reference. The authors proposed a novel, simplified, and convenient workflow for augmented reality (AR)-assisted orthognathic surgery based on optical marker-less tracking, a comfortable display, and a non-invasive, skin-attached dynamic reference frame. The two registrations between the physical (EM tracking) and CT image spaces and between the physical and AR camera spaces, essential processes in AR-assisted surgery, were pre-operatively performed using the registration body complex and a 3D depth camera. The intraoperative model of the maxillary bone segment (MBS) was superimposed on the real patient image with the simulated goal model on a flat-panel display, and the MBS was freely handled for repositioning with respect to the skin-attached dynamic reference tool (SRT), with quantitative visualization of landmarks of interest, using only EM tracking. To evaluate the accuracy of AR-assisted Le Fort I surgery, the MBS of a phantom was simulated and repositioned by six translational and three rotational movements. The mean absolute deviations (MADs) between the simulated and post-operative positions of MBS landmarks by the SRT were 0.20, 0.34, 0.29, and 0.55 mm in the x- (left lateral, right lateral), y- (setback, advance), and z- (impaction, elongation) directions, and RMS, respectively, while those by the BRT were 0.23, 0.37, 0.30, and 0.60 mm. There were no significant differences between the translation and rotation surgeries, or among surgeries in the x-, y-, and z-axes, for the SRT. The MADs in the x-, y-, and z-axes exhibited no significant differences between the SRT and BRT. The developed method showed high accuracy and reliability in free-hand orthognathic surgery using EM tracking and a skin-attached dynamic reference.
20
Mladenovic R, Dakovic D, Pereira L, Matvijenko V, Mladenovic K. Effect of augmented reality simulation on administration of local anaesthesia in paediatric patients. EUROPEAN JOURNAL OF DENTAL EDUCATION : OFFICIAL JOURNAL OF THE ASSOCIATION FOR DENTAL EDUCATION IN EUROPE 2020; 24:507-512. [PMID: 32243051 DOI: 10.1111/eje.12529] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/06/2020] [Revised: 03/21/2020] [Accepted: 03/27/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND Augmented reality (AR) is a simulation of a three-dimensional environment created using hardware and software that provides the user with realistic experiences and the ability to interact. The aim of this study was to evaluate the impact of an AR simulator, relative to standard teaching methods, on the perception of learning and the level of acute stress in students administering local anaesthesia to paediatric patients. MATERIAL AND METHODS The prospective study included 21 fourth- and fifth-year students enrolled in a 5-year dental programme. In addition to conventional training, the students of the study group used the augmented reality simulator in a dental office for 2 hours weekly over 2 weeks. The level of salivary cortisol was measured before and after the anaesthetic procedure as one of the indicators of acute stress. RESULTS A statistically significantly shorter time to perform the infiltrative anaesthesia technique for the anterior superior alveolar nerve was observed in students using the AR technique (28.91 ± 9.06 seconds in the study group versus 39.80 ± 9.29 seconds in the control group). The difference in cortisol level before and after anaesthesia was statistically significant in all subjects (cortisol concentration was 0.53 μg/dL before anaesthesia and 2.44 μg/dL after the procedure); however, there was no statistically significant difference between the groups. CONCLUSION The AR concept may improve manipulation and control of the syringe in students administering their first anaesthetic injection to paediatric patients, but may not reduce acute stress.
Affiliation(s)
- Rasa Mladenovic
- Faculty of Medicine, Department for Dentistry, University in Pristina, Kosovska Mitrovica, Serbia
- Dragana Dakovic
- Faculty of Medicine of the Military Medical Academy, University of Defence, Belgrade, Serbia
- Vladimir Matvijenko
- Faculty of Medicine, Department for Dentistry, University in Pristina, Kosovska Mitrovica, Serbia
21
Manni F, Elmi-Terander A, Burström G, Persson O, Edström E, Holthuizen R, Shan C, Zinger S, van der Sommen F, de With PHN. Towards Optical Imaging for Spine Tracking without Markers in Navigated Spine Surgery. SENSORS (BASEL, SWITZERLAND) 2020; 20:E3641. [PMID: 32610555 PMCID: PMC7374436 DOI: 10.3390/s20133641] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/19/2020] [Revised: 06/13/2020] [Accepted: 06/22/2020] [Indexed: 12/18/2022]
Abstract
Surgical navigation systems are increasingly used for complex spine procedures to avoid neurovascular injuries and minimize the risk for reoperations. Accurate patient tracking is one of the prerequisites for optimal motion compensation and navigation. Most current optical tracking systems use dynamic reference frames (DRFs) attached to the spine, for patient movement tracking. However, the spine itself is subject to intrinsic movements which can impact the accuracy of the navigation system. In this study, we aimed to detect the actual patient spine features in different image views captured by optical cameras, in an augmented reality surgical navigation (ARSN) system. Using optical images from open spinal surgery cases, acquired by two gray-scale cameras, spinal landmarks were identified and matched in different camera views. A computer vision framework was created for preprocessing of the spine images, detecting and matching local invariant image regions. We compared four feature detection algorithms, Speeded Up Robust Feature (SURF), Maximal Stable Extremal Region (MSER), Features from Accelerated Segment Test (FAST), and Oriented FAST and Rotated BRIEF (ORB) to elucidate the best approach. The framework was validated in 23 patients and the 3D triangulation error of the matched features was < 0.5 mm. Thus, the findings indicate that spine feature detection can be used for accurate tracking in navigated surgery.
Affiliation(s)
- Francesca Manni
- Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands
- Adrian Elmi-Terander
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm SE-171 46, Sweden & Department of Neurosurgery, Karolinska University Hospital, SE-171 46 Stockholm, Sweden
- Gustav Burström
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm SE-171 46, Sweden & Department of Neurosurgery, Karolinska University Hospital, SE-171 46 Stockholm, Sweden
- Oscar Persson
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm SE-171 46, Sweden & Department of Neurosurgery, Karolinska University Hospital, SE-171 46 Stockholm, Sweden
- Erik Edström
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm SE-171 46, Sweden & Department of Neurosurgery, Karolinska University Hospital, SE-171 46 Stockholm, Sweden
- Caifeng Shan
- Philips Research, High Tech Campus 36, 5656 AE Eindhoven, The Netherlands
- Svitlana Zinger
- Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands
- Fons van der Sommen
- Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands
- Peter H. N. de With
- Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands
22
Liu X, Sinha A, Ishii M, Hager GD, Reiter A, Taylor RH, Unberath M. Dense Depth Estimation in Monocular Endoscopy With Self-Supervised Learning Methods. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:1438-1447. [PMID: 31689184 PMCID: PMC7289272 DOI: 10.1109/tmi.2019.2950936] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/28/2023]
Abstract
We present a self-supervised approach to training convolutional neural networks for dense depth estimation from monocular endoscopy data without a priori modeling of anatomy or shading. Our method only requires monocular endoscopic videos and a multi-view stereo method, e.g., structure from motion, to supervise learning in a sparse manner. Consequently, our method requires neither manual labeling nor patient computed tomography (CT) scan in the training and application phases. In a cross-patient experiment using CT scans as groundtruth, the proposed method achieved submillimeter mean residual error. In a comparison study to recent self-supervised depth estimation methods designed for natural video on in vivo sinus endoscopy data, we demonstrate that the proposed approach outperforms the previous methods by a large margin. The source code for this work is publicly available online at https://github.com/lppllppl920/EndoscopyDepthEstimation-Pytorch.
23
Bayrak M, Alsadoon A, Prasad P, Venkata HS, Ali RS, Haddad S. A novel rotation invariant and Manhattan metric–based pose refinement: Augmented reality–based oral and maxillofacial surgery. Int J Med Robot 2020; 16:e2077. [DOI: 10.1002/rcs.2077] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2019] [Revised: 01/07/2020] [Accepted: 01/09/2020] [Indexed: 01/14/2023]
Affiliation(s)
- Mucahit Bayrak
- School of Computing and Mathematics, Charles Sturt University, Sydney, New South Wales, Australia
- Abeer Alsadoon
- School of Computing and Mathematics, Charles Sturt University, Sydney, New South Wales, Australia
- P.W.C. Prasad
- School of Computing and Mathematics, Charles Sturt University, Sydney, New South Wales, Australia
- Rasha S. Ali
- Department of Computer Techniques Engineering, AL Nisour University College, Baghdad, Iraq
- Sami Haddad
- Department of Oral and Maxillofacial Services, Greater Western Sydney Area Health Services, Mount Druitt, New South Wales, Australia
- Department of Oral and Maxillofacial Services, Central Coast Area Health, Gosford, New South Wales, Australia
24
Pérez-Pachón L, Poyade M, Lowe T, Gröning F. Image Overlay Surgery Based on Augmented Reality: A Systematic Review. ADVANCES IN EXPERIMENTAL MEDICINE AND BIOLOGY 2020; 1260:175-195. [PMID: 33211313 DOI: 10.1007/978-3-030-47483-6_10] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
Abstract
Augmented Reality (AR) applied to surgical guidance is gaining relevance in clinical practice. AR-based image overlay surgery (i.e. the accurate overlay of patient-specific virtual images onto the body surface) helps surgeons to transfer image data produced during the planning of the surgery (e.g. the correct resection margins of tissue flaps) to the operating room, thus increasing accuracy and reducing surgery times. We systematically reviewed 76 studies published between 2004 and August 2018 to explore which existing tracking and registration methods and technologies allow healthcare professionals and researchers to develop and implement these systems in-house. Most studies used non-invasive markers to automatically track a patient's position, as well as customised algorithms, tracking libraries or software development kits (SDKs) to compute the registration between patient-specific 3D models and the patient's body surface. Few studies combined the use of holographic headsets, SDKs and user-friendly game engines, and described portable and wearable systems that combine tracking, registration, hands-free navigation and direct visibility of the surgical site. Most accuracy tests included a low number of subjects and/or measurements and did not normally explore how these systems affect surgery times and success rates. We highlight the need for more procedure-specific experiments with a sufficient number of subjects and measurements and including data about surgical outcomes and patients' recovery. Validation of systems combining the use of holographic headsets, SDKs and game engines is especially interesting as this approach facilitates an easy development of mobile AR applications and thus the implementation of AR-based image overlay surgery in clinical practice.
Affiliation(s)
- Laura Pérez-Pachón
- School of Medicine, Medical Sciences and Nutrition, University of Aberdeen, Aberdeen, UK
- Matthieu Poyade
- School of Simulation and Visualisation, Glasgow School of Art, Glasgow, UK
- Terry Lowe
- School of Medicine, Medical Sciences and Nutrition, University of Aberdeen, Aberdeen, UK
- Head and Neck Oncology Unit, Aberdeen Royal Infirmary (NHS Grampian), Aberdeen, UK
- Flora Gröning
- School of Medicine, Medical Sciences and Nutrition, University of Aberdeen, Aberdeen, UK
25
de Francisco Ortiz Ó, Estrems Amestoy M, Sánchez Reinoso HT, Carrero-Blanco Martínez-Hombre J. Enhanced Positioning Algorithm Using a Single Image in an LCD-Camera System by Mesh Elements' Recalculation and Angle Error Orientation. MATERIALS 2019; 12:ma12244216. [PMID: 31888130 PMCID: PMC6947435 DOI: 10.3390/ma12244216] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/31/2019] [Revised: 12/10/2019] [Accepted: 12/11/2019] [Indexed: 11/16/2022]
Abstract
In this article, we present a method to position the tool in a micromachine system based on a camera-LCD screen positioning system that also provides information about angular deviations of the tool axis during operation. Both position and angular deviations are obtained by reducing a matrix of LEDs in the image to a single rectangle in conical perspective, which is treated by a photogrammetry method. This method computes the coordinates and orientation of the camera with respect to the fixed screen coordinate system. The image used consists of 5 × 5 lit LEDs, which are analyzed by the algorithm to determine a rectangle with known dimensions. The coordinates of the vertices of the rectangle in space are obtained by an inverse perspective computation from the image. The method gives a good approximation of the central point of the rectangle and provides the inclination of the workpiece with respect to the LCD screen coordinate reference system. A test of the method was designed with the assistance of a Coordinate Measurement Machine (CMM) to check the accuracy of the positioning method. The test shows good accuracy in the position measurement. A high dispersion in the angular deviation was detected, although the orientation of the inclination is appropriate in almost every case; this is due to the small values of the angles, which make the trigonometric function approximations very erratic. The method is a good starting point for the compensation of angular deviation in vision-based micromachine tools, which is the principal source of errors in these operations and represents a major share of the cost of machine element parts.
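The inverse-perspective computation this abstract describes is closely related to the textbook recovery of camera pose from a plane-induced homography: four known rectangle corners and their image projections determine a homography, which factors into a rotation and translation given the camera intrinsics. The sketch below illustrates that standard decomposition on synthetic values; it is not the authors' algorithm, and all names and dimensions are illustrative.

```python
import numpy as np

def homography_dlt(world_xy, image_xy):
    """DLT estimate of the homography mapping plane points (Z = 0) to pixels.

    Returned H is defined only up to scale (and sign).
    """
    A = []
    for (X, Y), (u, v) in zip(world_xy, image_xy):
        A.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
        A.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def pose_from_homography(H, K):
    """Factor a plane-induced homography into rotation R and translation t."""
    M = np.linalg.inv(K) @ H
    scale = 2.0 / (np.linalg.norm(M[:, 0]) + np.linalg.norm(M[:, 1]))
    if M[2, 2] < 0:                  # fix the sign so depth t_z is positive
        scale = -scale
    r1, r2, t = scale * M[:, 0], scale * M[:, 1], scale * M[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt = np.linalg.svd(R)      # re-orthonormalize onto SO(3)
    return U @ Vt, t

# Synthetic check: project a 60 x 40 mm rectangle with a known pose,
# then recover that pose from the four corner correspondences.
K = np.array([[900.0, 0.0, 320.0],
              [0.0, 900.0, 240.0],
              [0.0, 0.0, 1.0]])
theta = 0.1                          # small tilt about the x-axis (rad)
R_true = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(theta), -np.sin(theta)],
                   [0.0, np.sin(theta), np.cos(theta)]])
t_true = np.array([-30.0, -20.0, 500.0])
world = [(0.0, 0.0), (60.0, 0.0), (60.0, 40.0), (0.0, 40.0)]
image = []
for X, Y in world:
    p = K @ (R_true @ np.array([X, Y, 0.0]) + t_true)
    image.append(p[:2] / p[2])

H = homography_dlt(world, image)
R_est, t_est = pose_from_homography(H, K)
```

In practice a library routine such as OpenCV's solvePnP would be the robust route; the explicit factorization above just makes the geometry visible, including why small tilt angles yield noisy orientation estimates, as the article observes.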
Affiliation(s)
- Óscar de Francisco Ortiz
- Department of Engineering and Applied Technologies, University Center of Defense, San Javier Air Force Base, MDE-UPCT, 30720 Santiago de la Ribera, Spain
- Correspondence: ; Tel.: +34-968-189918
- Manuel Estrems Amestoy
- Mechanics, Materials and Manufacturing Engineering Department, Technical University of Cartagena, 30202 Cartagena, Spain
- Horacio T. Sánchez Reinoso
- Mechanics, Materials and Manufacturing Engineering Department, Technical University of Cartagena, 30202 Cartagena, Spain
- Julio Carrero-Blanco Martínez-Hombre
- Mechanics, Materials and Manufacturing Engineering Department, Technical University of Cartagena, 30202 Cartagena, Spain
26
Recent Trends, Technical Concepts and Components of Computer-Assisted Orthopedic Surgery Systems: A Comprehensive Review. SENSORS 2019; 19:s19235199. [PMID: 31783631 PMCID: PMC6929084 DOI: 10.3390/s19235199] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/20/2019] [Revised: 11/08/2019] [Accepted: 11/12/2019] [Indexed: 12/17/2022]
Abstract
Computer-assisted orthopedic surgery (CAOS) systems have become one of the most important and challenging types of system in clinical orthopedics, as they enable precise treatment of musculoskeletal diseases employing modern clinical navigation systems and surgical tools. This paper presents a comprehensive review of recent trends and possibilities of CAOS systems. There are three types of surgical planning systems: systems based on volumetric images (computed tomography (CT), magnetic resonance imaging (MRI) or ultrasound images), systems that utilize 2D or 3D fluoroscopic images, and systems that utilize kinetic information about the joints and morphological information about the target bones. This review is focused on three fundamental aspects of CAOS systems: their essential components, the types of CAOS systems, and the mechanical tools used in them. We also outline the possibilities of using ultrasound computer-assisted orthopedic surgery (UCAOS) systems as an alternative to conventionally used CAOS systems.
27
Beams R, Kim AS, Badano A. Transverse chromatic aberration in virtual reality head-mounted displays. OPTICS EXPRESS 2019; 27:24877-24884. [PMID: 31510369 DOI: 10.1364/oe.27.024877] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/01/2019] [Accepted: 07/08/2019] [Indexed: 06/10/2023]
Abstract
We demonstrate a method for measuring the transverse chromatic aberration (TCA) in a virtual reality head-mounted display. The method relies on acquiring images of a digital bar pattern and measuring the displacement of different color bars. This procedure was used to characterize the TCAs in the Oculus Go, Oculus Rift, Samsung Gear, and HTC Vive. The results show noticeable TCAs for the Oculus devices for angles larger than 5° from the center of the field of view. TCA is less noticeable in the Vive in part due to off-axis monochromatic aberrations. Finally, user measurements were conducted, which were in excellent agreement with the laboratory results.
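The displacement measurement lends itself to a short sketch. The snippet below uses a synthetic bar image; the function names and the intensity-weighted centroid estimator are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def bar_centroid(channel):
    """Intensity-weighted column centroid of a bright bar in one color channel."""
    col_profile = channel.sum(axis=0).astype(float)
    cols = np.arange(channel.shape[1])
    return (cols * col_profile).sum() / col_profile.sum()

def tca_shift(image):
    """Red-blue displacement (in pixels) between bar positions: the quantity
    the paper reads off the displayed bar pattern."""
    return bar_centroid(image[..., 0]) - bar_centroid(image[..., 2])

# Synthetic example: a bar whose red channel is shifted 2 px from its blue channel.
img = np.zeros((10, 40, 3))
img[:, 20:24, 0] = 1.0   # red bar at columns 20-23
img[:, 18:22, 2] = 1.0   # blue bar at columns 18-21
print(tca_shift(img))    # 2.0
```

In the paper the bars are rendered inside the headset and photographed through its optics, so the measured shift grows with field angle, which is why the Oculus devices show noticeable TCA beyond 5°.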
28
Mladenovic R, Pereira L, Mladenovic K, Videnovic N, Bukumiric Z, Mladenovic J. Effectiveness of Augmented Reality Mobile Simulator in Teaching Local Anesthesia of Inferior Alveolar Nerve Block. J Dent Educ 2019; 83:423-428. [DOI: 10.21815/jde.019.050] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2018] [Accepted: 08/30/2018] [Indexed: 12/20/2022]
Affiliation(s)
- Rasa Mladenovic
- Faculty of Medicine, University of Pristina, Kosovska Mitrovica, Serbia
- Nebojsa Videnovic
- Faculty of Medicine, University of Pristina, Kosovska Mitrovica, Serbia
- Zoran Bukumiric
- Institute of Medical Statistics and Informatics, School of Medicine, University of Belgrade, Belgrade, Serbia
- Jovan Mladenovic
- Faculty of Medicine, University of Pristina, Kosovska Mitrovica, Serbia
29
Hussain R, Lalande A, Guigou C, Bozorg Grayeli A. Contribution of Augmented Reality to Minimally Invasive Computer-Assisted Cranial Base Surgery. IEEE J Biomed Health Inform 2019; 24:2093-2106. [DOI: 10.1109/jbhi.2019.2954003] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
30
Bosc R, Fitoussi A, Hersant B, Dao TH, Meningaud JP. Intraoperative augmented reality with heads-up displays in maxillofacial surgery: a systematic review of the literature and a classification of relevant technologies. Int J Oral Maxillofac Surg 2019; 48:132-139. [DOI: 10.1016/j.ijom.2018.09.010] [Citation(s) in RCA: 35] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2018] [Revised: 09/16/2018] [Accepted: 09/24/2018] [Indexed: 12/30/2022]
31
Basnet BR, Alsadoon A, Withana C, Deva A, Paul M. A novel noise filtered and occlusion removal: navigational accuracy in augmented reality-based constructive jaw surgery. Oral Maxillofac Surg 2018; 22:385-401. [PMID: 30206745 DOI: 10.1007/s10006-018-0719-5] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2018] [Accepted: 08/28/2018] [Indexed: 06/08/2023]
Abstract
PURPOSE Augmented reality-based constructive jaw surgery faces limitations such as noise in real-time images, navigational error of the implant and jaw, image overlay error, and occlusion handling, which have limited the adoption of augmented reality (AR) in corrective jaw surgery. This research aimed to improve navigational accuracy, through noise and occlusion removal, during positioning of an implant relative to the jaw bone to be cut or drilled. METHOD The proposed system consists of a weighting-based de-noising filter and depth mapping-based occlusion removal, which eliminates occluding objects such as surgical tools, the surgeon's body parts, and blood. RESULTS Results on maxillary (upper jaw) and mandibular (lower jaw) bone samples show that the proposed method achieves an image overlay error (video accuracy) of 0.23~0.35 mm and a processing rate of 8-12 frames per second, compared with 0.35~0.45 mm and 6-11 frames per second for the best existing system. CONCLUSION The proposed system concentrates on removing noise and occlusion from the real-time video frames, providing surgeons with an acceptable range of accuracy and processing time for a smooth surgical workflow.
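The depth mapping-based occlusion step can be illustrated with a toy NumPy sketch; the function names, the fixed depth threshold, and the synthetic frame are hypothetical, not the authors' filter:

```python
import numpy as np

def occlusion_mask(depth, near_mm):
    """Pixels whose depth is closer than the anatomy are treated as occluders
    (surgical tools, the surgeon's hands) per the depth-mapping idea."""
    return depth < near_mm

def composite(video, overlay, depth, near_mm):
    """Blend the virtual overlay onto the video, but keep the raw video pixels
    wherever an occluder sits in front of the augmentation."""
    mask = occlusion_mask(depth, near_mm)
    out = overlay.copy()
    out[mask] = video[mask]
    return out

# Tiny synthetic frame: a 2x2 image with one pixel occluded by a "tool" at 50 mm.
video   = np.array([[10, 20], [30, 40]], dtype=float)
overlay = np.full((2, 2), 99.0)                       # virtual implant rendering
depth   = np.array([[50, 200], [200, 200]], dtype=float)
print(composite(video, overlay, depth, near_mm=100))  # occluded pixel keeps value 10
```

The real system would derive the mask from a per-frame depth map rather than a constant threshold, but the compositing rule is the same: occluders must be drawn over the overlay, not under it.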
Affiliation(s)
- Bijaya Raj Basnet
- School of Computing and Mathematics, Charles Sturt University, Sydney Campus, Sydney, Australia
- Abeer Alsadoon
- School of Computing and Mathematics, Charles Sturt University, Sydney Campus, Sydney, Australia
- Chandana Withana
- School of Computing and Mathematics, Charles Sturt University, Sydney Campus, Sydney, Australia
- Anand Deva
- Faculty of Medicine and Health Sciences, Macquarie University, Sydney, Australia
- Manoranjan Paul
- School of Computing and Mathematics, Charles Sturt University, Sydney Campus, Sydney, Australia
32
Abstract
Augmented reality technology offers virtual information in addition to that of the real environment and thus opens new possibilities in various fields. Medical applications of augmented reality are generally concentrated on surgery, including neurosurgery, laparoscopic surgery, and plastic surgery. Augmented reality technology is also widely used in medical education and training. In dentistry, oral and maxillofacial surgery is the primary area of use, where dental implant placement and orthognathic surgery are the most frequent applications. Recent technological advancements are enabling new applications in restorative dentistry, orthodontics, and endodontics. This review briefly summarizes the history, definitions, features, and components of augmented reality technology and discusses its applications and future perspectives in dentistry.
Affiliation(s)
- Ho-Beom Kwon
- Department of Prosthodontics, School of Dentistry, Seoul National University and Dental Research Institute, Seoul, Korea
- Young-Seok Park
- Department of Oral Medicine and Oral Diagnosis, School of Dentistry, Seoul National University and Dental Research Institute, Seoul, Korea
- Jung-Suk Han
- Department of Prosthodontics, School of Dentistry, Seoul National University and Dental Research Institute, Seoul, Korea
33
Pokhrel S, Alsadoon A, Prasad PWC, Paul M. A novel augmented reality (AR) scheme for knee replacement surgery by considering cutting error accuracy. Int J Med Robot 2018; 15:e1958. [DOI: 10.1002/rcs.1958] [Citation(s) in RCA: 31] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2018] [Revised: 07/25/2018] [Accepted: 08/17/2018] [Indexed: 12/25/2022]
Affiliation(s)
- Suraj Pokhrel
- School of Computing and Mathematics, Charles Sturt University, Sydney, Australia
- Abeer Alsadoon
- School of Computing and Mathematics, Charles Sturt University, Sydney, Australia
- P W C Prasad
- School of Computing and Mathematics, Charles Sturt University, Sydney, Australia
- Manoranjan Paul
- School of Computing and Mathematics, Charles Sturt University, Sydney, Australia
34
Fida B, Cutolo F, di Franco G, Ferrari M, Ferrari V. Augmented reality in open surgery. Updates Surg 2018; 70:389-400. [PMID: 30006832 DOI: 10.1007/s13304-018-0567-8] [Citation(s) in RCA: 27] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2018] [Accepted: 07/08/2018] [Indexed: 12/17/2022]
Abstract
Augmented reality (AR) has been successfully providing surgeons with extensive visual information about surgical anatomy to assist them throughout a procedure: it allows surgeons to view the surgical field through a superimposed 3D virtual model of anatomical details. Open surgery, however, presents new challenges. This study provides a comprehensive overview of the available literature on the use of AR in open surgery, in both clinical and simulated settings, with the aim of analyzing current trends and solutions and helping developers and end users understand the benefits and shortcomings of these systems. We performed a PubMed search of the available literature updated to January 2018 using the terms (1) "augmented reality" AND "open surgery", (2) "augmented reality" AND "surgery" NOT "laparoscopic" NOT "laparoscope" NOT "robotic", (3) "mixed reality" AND "open surgery", (4) "mixed reality" AND "surgery" NOT "laparoscopic" NOT "laparoscope" NOT "robotic". The aspects evaluated were the real data source, virtual data source, visualization processing modality, tracking modality, registration technique, and AR display type. The initial search yielded 502 studies; after removing duplicates and screening abstracts, 13 relevant studies were chosen. In 1 of the 13 studies, in vitro experiments were performed, while the rest were carried out in a clinical setting, including pancreatic, hepatobiliary, and urogenital surgeries. AR systems in open surgery appear to be versatile and reliable tools in the operating room, but some technological limitations need to be addressed before they can be implemented in routine practice.
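As a small reproducibility aid, the four PubMed query strings quoted in the abstract can be generated programmatically; the helper name `build_queries` is an assumption:

```python
def build_queries():
    """Rebuild the four PubMed query strings listed in the review."""
    queries = []
    for reality in ('"augmented reality"', '"mixed reality"'):
        # Queries (1) and (3): restrict to open surgery explicitly.
        queries.append(f'{reality} AND "open surgery"')
        # Queries (2) and (4): broad surgery query minus laparoscopic/robotic work.
        queries.append(
            f'{reality} AND "surgery" NOT "laparoscopic" '
            'NOT "laparoscope" NOT "robotic"'
        )
    return queries

for q in build_queries():
    print(q)
```

The NOT clauses in queries (2) and (4) are what separate open-surgery AR work from the much larger laparoscopic and robotic literature.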
Affiliation(s)
- Benish Fida
- Department of Information Engineering, University of Pisa, Pisa, Italy; Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
- Fabrizio Cutolo
- Department of Information Engineering, University of Pisa, Pisa, Italy; Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
- Gregorio di Franco
- General Surgery Unit, Department of Surgery, Translational and New Technologies, University of Pisa, Pisa, Italy
- Mauro Ferrari
- Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy; Vascular Surgery Unit, Cisanello University Hospital AOUP, Pisa, Italy
- Vincenzo Ferrari
- Department of Information Engineering, University of Pisa, Pisa, Italy; Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
35
Ma L, Jiang W, Zhang B, Qu X, Ning G, Zhang X, Liao H. Augmented reality surgical navigation with accurate CBCT-patient registration for dental implant placement. Med Biol Eng Comput 2018; 57:47-57. [DOI: 10.1007/s11517-018-1861-9] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2018] [Accepted: 06/10/2018] [Indexed: 10/28/2022]
36
Intraoperative Evaluation of Body Surface Improvement by an Augmented Reality System That a Clinician Can Modify. PLASTIC AND RECONSTRUCTIVE SURGERY-GLOBAL OPEN 2017; 5:e1432. [PMID: 28894655 PMCID: PMC5585428 DOI: 10.1097/gox.0000000000001432] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2017] [Accepted: 06/09/2017] [Indexed: 01/01/2023]
Abstract
BACKGROUND Augmented reality (AR) technology that can combine computer-generated images with a real scene has recently been reported in the medical field. We devised an AR system for evaluating improvement of the body surface, which is important in plastic surgery. METHODS We constructed an AR system that is easy to modify by combining existing devices and free software. We superimposed 3-dimensional images of the body surface and bone (obtained from VECTRA H1 and CT) onto the actual surgical field with Moverio BT-200 smart glasses and evaluated improvement of the body surface in 8 cases. RESULTS In all cases, the 3D image was successfully projected onto the surgical field. Improving the display method of the 3D image made it easier to distinguish the different shapes in the 3D image and the surgical field, making comparison easier. In a patient with fibrous dysplasia, the symmetrized body surface image was useful for confirming improvement of the real body surface. In a patient with a complex facial fracture, the simulated bone image was useful as a reference for reduction. In a patient with an osteoma of the forehead, simultaneously displayed images of the body surface and the bone made it easier to understand their positional relationship. CONCLUSIONS This study confirmed that AR technology is helpful for evaluation of the body surface in several clinical applications. Our findings are useful not only for body surface evaluation but also for effective utilization of AR technology in the field of plastic surgery.
37
Won YJ, Kang SH. Application of augmented reality for inferior alveolar nerve block anesthesia: A technical note. J Dent Anesth Pain Med 2017; 17:129-134. [PMID: 28879340 PMCID: PMC5564146 DOI: 10.17245/jdapm.2017.17.2.129] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2017] [Revised: 06/16/2017] [Accepted: 06/18/2017] [Indexed: 12/12/2022] Open
Abstract
Efforts to apply augmented reality (AR) technology in the medical field include the introduction of AR techniques into dental practice. The present report introduces a simple method of applying AR during an inferior alveolar nerve block, a procedure commonly performed in dental clinics.
Affiliation(s)
- Yu-Jin Won
- Department of Oral and Maxillofacial Surgery, National Health Insurance Service Ilsan Hospital, Goyang, Republic of Korea
- Sang-Hoon Kang
- Department of Oral and Maxillofacial Surgery, National Health Insurance Service Ilsan Hospital, Goyang, Republic of Korea
38
Suenaga H, Taniguchi A, Yonenaga K, Hoshi K, Takato T. Computer-assisted preoperative simulation for positioning of plate fixation in Le Fort I osteotomy: A case report. J Formos Med Assoc 2016; 115:470-4. [DOI: 10.1016/j.jfma.2016.01.003] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2015] [Revised: 01/04/2016] [Accepted: 01/10/2016] [Indexed: 10/22/2022] Open