1.
Syed TA, Siddiqui MS, Abdullah HB, Jan S, Namoun A, Alzahrani A, Nadeem A, Alkhodre AB. In-Depth Review of Augmented Reality: Tracking Technologies, Development Tools, AR Displays, Collaborative AR, and Security Concerns. Sensors (Basel) 2022; 23:146. PMID: 36616745; PMCID: PMC9824627; DOI: 10.3390/s23010146. [Received: 10/25/2022; Revised: 12/11/2022; Accepted: 12/13/2022]
Abstract
Augmented reality (AR) has gained enormous popularity and acceptance in the past few years. AR combines several immersive technologies that, as integrated components, make it a workable and adaptive solution for many domains. These components include tracking, which maintains a point of reference so that virtual objects stay registered in the real scene; display technologies, which merge the virtual and the real world before the user's eye; and authoring tools, which provide platforms for developing AR applications by exposing low-level libraries that in turn interact with tracking sensors, cameras, and other hardware. In addition, advances in distributed computing and collaborative augmented reality require stable solutions that let multiple participants collaborate in a shared AR setting. The authors have surveyed many solutions in these areas and present a comprehensive review to support further research and business transformation. In the course of this study, however, we identified a lack of security solutions in several areas of collaborative AR (CAR), specifically in distributed trust management. The study therefore also proposes a trusted CAR architecture, illustrated with a tourism use case, that can serve as a model for researchers interested in securing AR-based remote communication sessions.
Affiliation(s)
- Toqeer Ali Syed
  - Faculty of Computer and Information Systems, Islamic University of Madinah, Medina 42351, Saudi Arabia
- Muhammad Shoaib Siddiqui
  - Faculty of Computer and Information Systems, Islamic University of Madinah, Medina 42351, Saudi Arabia
- Hurria Binte Abdullah
  - School of Social Sciences and Humanities, National University of Science and Technology (NUST), Islamabad 44000, Pakistan
- Salman Jan
  - Malaysian Institute of Information Technology, Universiti Kuala Lumpur, Kuala Lumpur 50250, Malaysia
  - Department of Computer Science, Bacha Khan University Charsadda, Charsadda 24420, Pakistan
- Abdallah Namoun
  - Faculty of Computer and Information Systems, Islamic University of Madinah, Medina 42351, Saudi Arabia
- Ali Alzahrani
  - Faculty of Computer and Information Systems, Islamic University of Madinah, Medina 42351, Saudi Arabia
- Adnan Nadeem
  - Faculty of Computer and Information Systems, Islamic University of Madinah, Medina 42351, Saudi Arabia
- Ahmad B. Alkhodre
  - Faculty of Computer and Information Systems, Islamic University of Madinah, Medina 42351, Saudi Arabia
2.
Advances and Innovations in Ablative Head and Neck Oncologic Surgery Using Mixed Reality Technologies in Personalized Medicine. J Clin Med 2022; 11:jcm11164767. PMID: 36013006; PMCID: PMC9410374; DOI: 10.3390/jcm11164767. [Received: 07/29/2022; Revised: 08/10/2022; Accepted: 08/12/2022]
Abstract
The benefit of computer-assisted planning in head and neck ablative and reconstructive surgery has been extensively documented over the last decade. This approach has been shown to make surgical procedures safer. In the treatment of head and neck cancer, computer-assisted surgery can be used to visualize and estimate the location and extent of the tumor mass. Nowadays, some software tools even allow the structures of interest to be visualized in a mixed reality environment. However, the precise integration of mixed reality systems into daily clinical routine remains a challenge. To date, this technology is not yet fully integrated into clinical settings such as the tumor board, surgical planning for head and neck tumors, or medical and surgical education. As a consequence, the handling of these systems is still experimental, and decision-making based on the presented data is not yet widely practiced. The aim of this paper is to present a novel, user-friendly 3D planning and mixed reality software and its potential application for ablative and reconstructive head and neck surgery.
3.
Demerath T, Stanicki A, Roelz R, Farina Nunez MT, Bissolo M, Steiert C, Fistouris P, Coenen VA, Urbach H, Fung C, Beck J, Reinacher PC. Accuracy of augmented reality-guided drainage versus stereotactic and conventional puncture in an intracerebral hemorrhage phantom model. J Neurointerv Surg 2022:neurintsurg-2022-018678. PMID: 35853700; DOI: 10.1136/neurintsurg-2022-018678. [Received: 01/18/2022; Accepted: 05/19/2022]
Abstract
BACKGROUND Minimally invasive intracranial drain placement is a common neurosurgical emergency procedure in patients with intracerebral hemorrhage (ICH). We aimed to retrospectively investigate the accuracy of conventional freehand (bedside) hemorrhage drain placement and to prospectively compare the accuracy of augmented/mixed reality-guided (AR) versus frame-based stereotaxy-guided (STX) and freehand drain placement in a phantom model. METHODS A retrospective, single-center analysis evaluated the accuracy of drain placement in 73 consecutive ICH patients by visual rating of postinterventional CT data. In a head phantom with a simulated deep ICH, five neurosurgeons performed four punctures with each technique: STX, AR, and freehand. The Euclidean distance to the target point and the lateral deviation of the achieved trajectory from the planned trajectory at target-point level were compared between the three methods. RESULTS Analysis of the clinical cases revealed an optimal drainage position in only 46/73 (63%). Correction of the drain was necessary in 23/73 cases (32%). In the phantom study, the accuracy of AR was significantly higher than that of the freehand method (P<0.001 for both Euclidean and lateral distances). The Euclidean distance using AR (median 3 mm) was close to that using STX (median 1.95 mm; P=0.023). CONCLUSIONS We demonstrated that the accuracy of the freehand technique was low and that subsequent position correction was common. In a phantom model, AR drain placement was significantly more precise than the freehand method. AR has great potential to increase the precision of emergency intracranial punctures in a bedside setting.
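The two accuracy metrics above are straightforward to compute; the sketch below is a generic illustration (function and variable names are ours, not from the paper), assuming the planned entry point, planned target, and achieved drain tip are given in a common coordinate frame in millimetres.

```python
import numpy as np

def puncture_errors(planned_entry, planned_target, achieved_target):
    """Return (Euclidean distance to target, lateral deviation from the
    planned trajectory at target-point level), both in mm."""
    planned_entry = np.asarray(planned_entry, float)
    planned_target = np.asarray(planned_target, float)
    achieved_target = np.asarray(achieved_target, float)

    err = achieved_target - planned_target
    euclidean = np.linalg.norm(err)

    # Unit vector along the planned trajectory.
    axis = planned_target - planned_entry
    axis /= np.linalg.norm(axis)

    # Remove the along-trajectory component of the error;
    # what remains is the lateral deviation.
    lateral = np.linalg.norm(err - np.dot(err, axis) * axis)
    return euclidean, lateral
```

A drain tip that overshoots the target along the planned trajectory therefore has a nonzero Euclidean error but zero lateral deviation, which is why the two measures are reported separately.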
Affiliation(s)
- Theo Demerath
  - Department of Neuroradiology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Amin Stanicki
  - Department of Stereotactic and Functional Neurosurgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Roland Roelz
  - Department of Neurosurgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Mateo Tomas Farina Nunez
  - Department of Neurosurgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Marco Bissolo
  - Department of Neurosurgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Christine Steiert
  - Department of Neurosurgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Panagiotis Fistouris
  - Department of Neurosurgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Volker Arnd Coenen
  - Department of Stereotactic and Functional Neurosurgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Horst Urbach
  - Department of Neuroradiology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Christian Fung
  - Department of Neurosurgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Jürgen Beck
  - Department of Neurosurgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Peter Christoph Reinacher
  - Department of Stereotactic and Functional Neurosurgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
  - Fraunhofer Institute for Laser Technology (ILT), Aachen, Germany
4.
Experimental Performance Evaluation of Enhanced User Interaction Components for Web-Based Collaborative Extended Reality. Applied Sciences (Basel) 2021. DOI: 10.3390/app11093811.
Abstract
COVID-19-related quarantine measures resulted in a significant increase of interest in online collaboration tools. This includes virtual reality (VR) or, in more general terms, extended reality (XR) solutions. Shared XR allows activities such as presentations, personnel training, or therapy to take place in a virtual space instead of a real one. To make online XR as accessible as possible, significant effort has been put into the development of solutions that run directly in web browsers. One of the most recognized is the A-Frame software framework, created by the Mozilla VR team, which supports most contemporary XR hardware. In addition, an extension called Networked-Aframe allows multiple users to share virtual environments created with A-Frame in real time. In this article, we introduce and experimentally evaluate three components that extend the functionality of A-Frame and Networked-Aframe. The first extends Networked-Aframe with the ability to monitor and control users in a shared virtual scene. The second implements six-degrees-of-freedom motion tracking for smartphone-based VR headsets. The third brings hand-gesture support to the Microsoft HoloLens holographic computer. The evaluation was performed in a dedicated local network environment with 5, 10, 15 and 20 client computers, each representing one user in a shared virtual scene. Since the experiments were carried out with and without the introduced components, the results presented here can also be regarded as a performance evaluation of A-Frame and Networked-Aframe themselves.
5.
Zhao X, Miao C, Zhang H. Multi-Feature Nonlinear Optimization Motion Estimation Based on RGB-D and Inertial Fusion. Sensors (Basel) 2020; 20:s20174666. PMID: 32824978; PMCID: PMC7506712; DOI: 10.3390/s20174666. [Received: 07/21/2020; Revised: 08/12/2020; Accepted: 08/17/2020]
Abstract
To achieve high-precision estimation of indoor robot motion, a tightly coupled RGB-D visual-inertial SLAM system based on multiple features is proposed herein. Most traditional visual SLAM methods rely only on points for feature matching and often underperform in low-textured scenes. Besides point features, line segments can also provide geometric structure information about the environment. This paper utilizes both points and lines in low-textured scenes to increase the robustness of the RGB-D SLAM system. In addition, we implemented a fast initialization process based on the RGB-D camera to improve the real-time performance of the proposed system and designed a new backend nonlinear optimization framework. The state vector is optimized by minimizing a cost function formed by the pre-integrated IMU residuals and the re-projection errors of points and lines in sliding windows. Experiments on public datasets show that our system achieves higher accuracy and robustness in trajectory and pose estimation than several state-of-the-art visual SLAM systems.
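As a rough illustration of the kinds of residuals that enter such a cost function, the sketch below implements a pinhole point re-projection error and the standard point-to-line error commonly used for line-segment features. All names are ours; the paper's actual formulation, including the IMU pre-integration terms, is more involved than this.

```python
import numpy as np

def project(K, R, t, Pw):
    """Pinhole projection of a world point into pixel coordinates."""
    p = K @ (R @ np.asarray(Pw, float) + np.asarray(t, float))
    return p[:2] / p[2]

def point_residual(K, R, t, Pw, uv):
    """Re-projection error of a point landmark (pixels)."""
    return project(K, R, t, Pw) - np.asarray(uv, float)

def line_residual(K, R, t, Pw_a, Pw_b, seg_a, seg_b):
    """Signed distances of the projected 3-D segment endpoints to the
    detected 2-D line: the usual error term for line features."""
    a = np.append(np.asarray(seg_a, float), 1.0)
    b = np.append(np.asarray(seg_b, float), 1.0)
    l = np.cross(a, b)                 # homogeneous image line through a, b
    l /= np.hypot(l[0], l[1])          # normalize so l @ x is a pixel distance
    ra = l @ np.append(project(K, R, t, Pw_a), 1.0)
    rb = l @ np.append(project(K, R, t, Pw_b), 1.0)
    return np.array([ra, rb])
```

A backend optimizer then stacks such residuals (plus the IMU terms) over all poses and landmarks in the sliding window and minimizes their weighted squared sum.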
6.
Tuena C, Pedroli E, Trimarchi PD, Gallucci A, Chiappini M, Goulene K, Gaggioli A, Riva G, Lattanzio F, Giunco F, Stramba-Badiale M. Usability Issues of Clinical and Research Applications of Virtual Reality in Older People: A Systematic Review. Front Hum Neurosci 2020; 14:93. PMID: 32322194; PMCID: PMC7156831; DOI: 10.3389/fnhum.2020.00093. [Received: 10/28/2019; Accepted: 03/02/2020]
Abstract
Aging may be characterized by a decline in physical, sensory, and mental capacities, while increased morbidity and multimorbidity may be associated with disability. A wide range of clinical conditions (e.g., frailty, mild cognitive impairment, metabolic syndrome) and age-related diseases (e.g., Alzheimer's and Parkinson's disease, cancer, sarcopenia, cardiovascular and respiratory diseases) affect older people. Virtual reality (VR) is a novel and promising tool for assessment and rehabilitation in older people. Usability is a crucial factor that must be considered when designing virtual systems for medicine. We conducted a systematic review following the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines concerning the usability of clinical VR systems in aging and provide suggestions for structuring usability piloting. Findings show that different populations of older people have been recruited, mainly to assess the usability of non-immersive VR, with particular attention paid to motor/physical rehabilitation. A mixed approach (qualitative and quantitative tools together) is the preferred methodology; technology acceptance models are the most widely applied theoretical frameworks, although senior-adapted models work best in this context. Despite minor interaction issues and bugs, virtual systems are rated as usable and feasible. We encourage usability and user-experience pilot studies to improve interaction and to foster acceptance and use of clinical VR applications in older people, with the aid of the suggestions (VR-USOP) provided by our analysis.
Affiliation(s)
- Cosimo Tuena
  - Applied Technology for Neuro-Psychology, IRCCS Istituto Auxologico Italiano, Milan, Italy
  - Department of Psychology, Catholic University of the Sacred Heart, Milan, Italy
- Elisa Pedroli
  - Applied Technology for Neuro-Psychology, IRCCS Istituto Auxologico Italiano, Milan, Italy
  - Faculty of Psychology, University of eCampus, Novedrate, Italy
- Mattia Chiappini
  - Applied Technology for Neuro-Psychology, IRCCS Istituto Auxologico Italiano, Milan, Italy
- Karine Goulene
  - Department of Geriatrics and Cardiovascular Medicine, IRCCS Istituto Auxologico Italiano, Milan, Italy
- Andrea Gaggioli
  - Applied Technology for Neuro-Psychology, IRCCS Istituto Auxologico Italiano, Milan, Italy
  - Department of Psychology, Catholic University of the Sacred Heart, Milan, Italy
- Giuseppe Riva
  - Applied Technology for Neuro-Psychology, IRCCS Istituto Auxologico Italiano, Milan, Italy
  - Department of Psychology, Catholic University of the Sacred Heart, Milan, Italy
- Marco Stramba-Badiale
  - Department of Geriatrics and Cardiovascular Medicine, IRCCS Istituto Auxologico Italiano, Milan, Italy
7.
Abstract
Although virtual reality (VR) is a promising tool for the investigation of episodic memory phenomena, to date there has been relatively little examination of how learning mechanisms operate in VR and how these processes might compare (or contrast) with learning that occurs in real life. Moreover, the existing literature on this topic is spread across several disciplines and uses various distinct apparatuses, thus obscuring whether the differences that exist between studies might be due to genuine theoretical discrepancies or may be more simply explained by accounting for methodological variations. The current review is designed to address and elucidate several issues relevant to psychological researchers interested in understanding and/or using this technological approach to study episodic memory phenomena. The principal objectives of the review are as follows: (a) defining and discussing the various VR systems currently used for research purposes, (b) compiling research of episodic memory effects in VR as they have been studied across several disciplines, and (c) surveying major topics in this body of literature (e.g., how virtual immersion has an impact on memory; transfer effects from VR to the real world). The content of this review is designed to serve as a resource for psychologists interested in learning more about the current state of research in this field and is intended to highlight the capabilities (and constraints) associated with using this technological approach in episodic memory research.
8.
A Multi-Camera Rig with Non-Overlapping Views for Dynamic Six-Degree-of-Freedom Measurement. Sensors (Basel) 2019; 19:s19020250. PMID: 30634653; PMCID: PMC6358974; DOI: 10.3390/s19020250. [Received: 12/04/2018; Revised: 12/28/2018; Accepted: 01/07/2019]
Abstract
Large-scale measurement plays an increasingly important role in intelligent manufacturing, but existing instruments struggle to provide an immersive experience. In this paper, an immersive positioning and measuring method based on augmented reality is introduced. An inside-out vision measurement approach using a multi-camera rig with non-overlapping views is presented for dynamic six-degree-of-freedom measurement. By using active LED markers, a flexible and robust solution is delivered for complex manufacturing sites. The space resection adjustment principle is described and measurement errors are simulated. An improved nearest-neighbor method is employed for feature correspondence. The proposed tracking method is verified by experiments, with good performance.
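The feature-correspondence step can be sketched generically. The snippet below shows plain nearest-neighbor matching with a mutual-consistency check, one common "improvement" over one-way matching; the paper's exact variant is not specified here, and the function name is ours.

```python
import numpy as np

def mutual_nearest_neighbors(A, B):
    """Match rows of A (e.g. detected marker positions) to rows of B
    (e.g. reference marker positions), keeping only mutual nearest pairs."""
    A = np.asarray(A, float)
    B = np.asarray(B, float)
    # Pairwise Euclidean distance matrix, shape (len(A), len(B)).
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    a_to_b = d.argmin(axis=1)   # best B for each A
    b_to_a = d.argmin(axis=0)   # best A for each B
    # Keep a pair only if each element is the other's nearest neighbor.
    return [(i, int(j)) for i, j in enumerate(a_to_b) if b_to_a[j] == i]
```

The mutual check rejects ambiguous correspondences when two markers fall close together in the image, at the cost of leaving some features unmatched.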
9.
Piao JC, Kim SD. Adaptive Monocular Visual-Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices. Sensors (Basel) 2017; 17:s17112567. PMID: 29112143; PMCID: PMC5712971; DOI: 10.3390/s17112567. [Received: 08/23/2017; Revised: 10/26/2017; Accepted: 11/03/2017]
Abstract
Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and a next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual-inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual-inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual-inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual-inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of the keyframe trajectory is approximately 0.0617 m on the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of the performance improvement achieved by the proposed method.
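The quoted trajectory metric (translation root-mean-square error over keyframes) can be computed as below, assuming the estimated and ground-truth keyframe positions have already been aligned to a common frame, as is standard for this benchmark (function name ours).

```python
import numpy as np

def translation_rmse(estimated, ground_truth):
    """Root-mean-square translation error over aligned keyframe
    positions, given as N x 3 arrays in a common coordinate frame."""
    e = np.asarray(estimated, float) - np.asarray(ground_truth, float)
    # Per-keyframe squared Euclidean error, averaged, then square-rooted.
    return np.sqrt(np.mean(np.sum(e * e, axis=1)))
```

Reported values such as 0.0617 m are the output of exactly this kind of computation, averaged over the dataset's sequences.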
Affiliation(s)
- Jin-Chun Piao
  - Department of Computer Science, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul 03722, Korea
- Shin-Dug Kim
  - Department of Computer Science, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul 03722, Korea