1. de Clercq K, Dietrich A, Núñez Velasco JP, de Winter J, Happee R. External Human-Machine Interfaces on Automated Vehicles: Effects on Pedestrian Crossing Decisions. Hum Factors 2019;61:1353-1370. PMID: 30912985; PMCID: PMC6820125; DOI: 10.1177/0018720819836343.
Abstract
OBJECTIVE In this article, we investigated the effects of external human-machine interfaces (eHMIs) on pedestrians' crossing intentions. BACKGROUND The literature suggests that the safety (i.e., not crossing when unsafe) and efficiency (i.e., crossing when safe) of pedestrians' interactions with automated vehicles could increase if automated vehicles display their intention via an eHMI. METHODS Twenty-eight participants experienced an urban road environment from a pedestrian's perspective using a head-mounted display. The behavior of approaching vehicles (yielding, nonyielding), vehicle size (small, medium, large), eHMI type (1. baseline without eHMI, 2. front brake lights, 3. Knightrider animation, 4. smiley, 5. text [WALK]), and eHMI timing (early, intermediate, late) were varied. For yielding vehicles, the eHMI changed from a nonyielding to a yielding state; for nonyielding vehicles, the eHMI remained in its nonyielding state. Participants continuously indicated whether they felt safe to cross using a handheld button, and "feel-safe" percentages were calculated. RESULTS For yielding vehicles, the feel-safe percentages were higher for the front brake lights, Knightrider, smiley, and text than for baseline. For nonyielding vehicles, the feel-safe percentages were equivalent regardless of the presence or type of eHMI, but larger vehicles were associated with lower feel-safe percentages. The text eHMI appeared to require no learning, in contrast to the three other eHMIs. CONCLUSION An eHMI increases the efficiency of pedestrian-AV interactions, and a textual display is regarded as the least ambiguous. APPLICATION This research supports the development of automated vehicles that communicate with other road users.
2. Balsam P, Borodzicz S, Malesa K, Puchta D, Tymińska A, Ozierański K, Kołtowski Ł, Peller M, Grabowski M, Filipiak KJ, Opolski G. OCULUS study: Virtual reality-based education in daily clinical practice. Cardiol J 2018;26:260-264. PMID: 29297178; PMCID: PMC8086674; DOI: 10.5603/cj.a2017.0154.
Abstract
BACKGROUND Atrial fibrillation (AF) is associated with a high risk of stroke and other thromboembolic complications. The OCULUS study aimed to evaluate the effectiveness of a three-dimensional (3D) movie in teaching patients about the consequences of AF and pharmacological stroke prevention. METHODS The study was based on a questionnaire and included 100 consecutive patients (38% women, 62% with a history of AF). A 3D movie describing the risk of stroke in AF was shown using Oculus glasses and a smartphone. Similar questions were asked immediately after the projection, as well as 1 week and 1 year later. RESULTS Before the projection, 22/100 (22.0%) patients identified stroke as a consequence of AF, compared with 83/100 (83.0%) immediately afterwards (p < 0.0001). Seven days later, stroke was identified as a consequence of AF by 74/94 (78.7%) vs. 22/94 (23.4%) at baseline (p < 0.0001); a similar pattern was observed at 1-year follow-up (64/90 [71.1%] vs. 21/90 [23.3%]; p < 0.0001). Before the projection, 88.3% (83/94) of patients responded that drugs may reduce the risk of stroke, and after 1 week this increased to 94/94 (100%; p = 0.001). After 1 year, 87/90 (96.7%) answered that drugs may diminish the risk of stroke (p = 0.02 in comparison with the baseline survey, 78/90 [86.7%]). Use of oral anticoagulation to reduce the risk of stroke was initially chosen by 66/94 (70.2%), by 90/94 (95.7%; p < 0.0001) 7 days after, and by 83/90 (92.2%; p < 0.0001) 1 year after. CONCLUSIONS A 3D movie is an effective tool for transferring knowledge about the consequences of AF and the pivotal role of oral anticoagulation in stroke prevention. TRIAL REGISTRATION ClinicalTrials.gov, NCT03104231. Registered on 28 March 2017.
3. Kapp S, Barz M, Mukhametov S, Sonntag D, Kuhn J. ARETT: Augmented Reality Eye Tracking Toolkit for Head Mounted Displays. Sensors (Basel) 2021;21:2234. PMID: 33806863; PMCID: PMC8004990; DOI: 10.3390/s21062234.
Abstract
Currently, an increasing number of head-mounted displays (HMD) for virtual and augmented reality (VR/AR) are equipped with integrated eye trackers. Use cases of these integrated eye trackers include rendering optimization and gaze-based user interaction. In addition, visual attention in VR and AR is of interest for applied eye-tracking research, for example in the cognitive or educational sciences. While some research toolkits for VR already exist, only a few target AR scenarios. In this work, we present an open-source eye tracking toolkit for reliable gaze data acquisition in AR based on Unity 3D and the Microsoft HoloLens 2, as well as an R package for seamless data analysis. Furthermore, we evaluate the spatial accuracy and precision of the integrated eye tracker for fixation targets at different distances and angles to the user (n = 21). On average, we found that gaze estimates are reported with an angular accuracy of 0.83 degrees and a precision of 0.27 degrees while the user is resting, which is on par with state-of-the-art mobile eye trackers.
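The accuracy and precision figures above can be computed from raw gaze samples. A minimal sketch, assuming accuracy is the mean angular offset between gaze and target directions and precision is the standard deviation of those offsets (conventions vary between toolkits):

```python
import math

def angular_error(gaze_dir, target_dir):
    """Angle in degrees between two 3D direction vectors."""
    dot = sum(g * t for g, t in zip(gaze_dir, target_dir))
    ng = math.sqrt(sum(g * g for g in gaze_dir))
    nt = math.sqrt(sum(t * t for t in target_dir))
    c = max(-1.0, min(1.0, dot / (ng * nt)))  # clamp against rounding
    return math.degrees(math.acos(c))

def accuracy_and_precision(errors):
    """Accuracy: mean angular error across samples.
    Precision: population standard deviation of those errors."""
    n = len(errors)
    mean = sum(errors) / n
    var = sum((e - mean) ** 2 for e in errors) / n
    return mean, math.sqrt(var)
```

Feeding per-frame errors for one fixation target through `accuracy_and_precision` yields the two per-target figures that would then be averaged across targets and participants.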
4. Bachmann ER, Hodgson E, Hoffbauer C, Messinger J. Multi-User Redirected Walking and Resetting Using Artificial Potential Fields. IEEE Trans Vis Comput Graph 2019;25:2022-2031. PMID: 30794513; DOI: 10.1109/tvcg.2019.2898764.
Abstract
Head-mounted displays (HMDs) and large area position tracking systems can enable users to navigate virtual worlds through natural walking. Redirected walking (RDW) imperceptibly steers immersed users away from physical world obstacles allowing them to explore unbounded virtual worlds while walking in limited physical space. In cases of imminent collisions, resetting techniques can reorient them into open space. This work introduces categorically new RDW and resetting algorithms based on the use of artificial potential fields that "push" users away from obstacles and other users. Data from human subject experiments indicate that these methods reduce potential single-user resets by 66% and increase the average distance between resets by 86% compared to previous techniques. A live multi-user study demonstrates the viability of the algorithm with up to 3 concurrent users, and simulation results indicate that the algorithm scales efficiently up to at least 8 users and is effective with larger groups.
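As a rough illustration of the potential-field idea, each physical obstacle or other user contributes a repulsive vector that falls off with distance, and the summed force steers the redirection. This is a sketch only; the paper's exact field shape and gains are not reproduced here, and the inverse-square falloff is an assumption:

```python
import math

def apf_push(user, repellers, gain=1.0):
    """Sum repulsive 2D vectors from obstacles and other users.

    user: (x, y) physical position of the tracked user.
    repellers: iterable of (x, y) positions to be pushed away from.
    Returns the (fx, fy) steering force.
    """
    fx = fy = 0.0
    for ox, oy in repellers:
        dx, dy = user[0] - ox, user[1] - oy
        d = math.hypot(dx, dy)
        if d < 1e-9:
            continue  # coincident points have no defined direction
        mag = gain / (d * d)   # inverse-square falloff (assumed)
        fx += mag * dx / d     # unit direction away from the repeller
        fy += mag * dy / d
    return fx, fy
```

A redirected-walking controller would then inject imperceptible rotation or translation gains so the user's physical path bends along the force direction, falling back to a reset when the field cannot prevent an imminent collision.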
5. Nie GY, Duh HBL, Liu Y, Wang Y. Analysis on Mitigation of Visually Induced Motion Sickness by Applying Dynamical Blurring on a User's Retina. IEEE Trans Vis Comput Graph 2020;26:2535-2545. PMID: 30668475; DOI: 10.1109/tvcg.2019.2893668.
Abstract
Visually induced motion sickness (MS) experienced in a 3D immersive virtual environment (VE) limits the widespread use of virtual reality (VR). This paper studies the effects of a saliency detection-based approach on the reduction of MS when the display on a user's retina is dynamically blurred. In the experiment, forty participants were exposed to a VR experience under a control condition without dynamic blurring and an experimental condition with dynamic blurring applied. The results show that participants under the experimental condition reported a statistically significant reduction in the average severity of MS symptoms during the VR experience compared to those under the control condition, which demonstrates that the proposed approach may alleviate visually induced MS in VR and enable users to remain in a VE for a longer period of time.
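The general mechanism here is gaze-contingent blurring: blend between a sharp and a blurred rendering as a function of distance from the attended region. The paper drives this from saliency detection; the sketch below uses plain angular eccentricity from the gaze point as a stand-in, and the fovea and falloff radii are illustrative placeholders, not values from the paper:

```python
def blend_weight(eccentricity_deg, fovea_deg=10.0, falloff_deg=10.0):
    """Blur-blend weight for a pixel at a given angular distance from
    the gaze point: 0.0 means fully sharp, 1.0 means fully blurred.

    Inside the foveal radius the image stays sharp; beyond it the
    weight ramps linearly up to 1.0 over falloff_deg degrees.
    """
    if eccentricity_deg <= fovea_deg:
        return 0.0
    return min(1.0, (eccentricity_deg - fovea_deg) / falloff_deg)
```

In a shader, `blend_weight` would mix the sharp frame with a pre-blurred copy per pixel, so peripheral optical flow (a major MS driver) is attenuated while foveal detail is preserved.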
6. Birlo M, Edwards PJE, Clarkson M, Stoyanov D. Utility of optical see-through head mounted displays in augmented reality-assisted surgery: A systematic review. Med Image Anal 2022;77:102361. PMID: 35168103; PMCID: PMC10466024; DOI: 10.1016/j.media.2022.102361.
Abstract
This article presents a systematic review of optical see-through head mounted display (OST-HMD) usage in augmented reality (AR) surgery applications from 2013 to 2020. Articles were categorised by: OST-HMD device, surgical speciality, surgical application context, visualisation content, experimental design and evaluation, accuracy and human factors of human-computer interaction. 91 articles fulfilled all inclusion criteria. Some clear trends emerge. The Microsoft HoloLens increasingly dominates the field, with orthopaedic surgery being the most popular application (28.6%). By far the most common surgical context is surgical guidance (n=58) and segmented preoperative models dominate visualisation (n=40). Experiments mainly involve phantoms (n=43) or system setup (n=21), with patient case studies ranking third (n=19), reflecting the comparative infancy of the field. Experiments cover issues from registration to perception with very different accuracy results. Human factors emerge as significant to OST-HMD utility. Some factors are addressed by the systems proposed, such as attention shift away from the surgical site and mental mapping of 2D images to 3D patient anatomy. Other persistent human factors remain or are caused by OST-HMD solutions, including ease of use, comfort and spatial perception issues. The significant upward trend in published articles is clear, but such devices are not yet established in the operating room and clinical studies showing benefit are lacking. A focused effort addressing technical registration and perceptual factors in the lab coupled with design that incorporates human factors considerations to solve clear clinical problems should ensure that the significant current research efforts will succeed.
7. Kothari RS, Chaudhary AK, Bailey RJ, Pelz JB, Diaz GJ. EllSeg: An Ellipse Segmentation Framework for Robust Gaze Tracking. IEEE Trans Vis Comput Graph 2021;27:2757-2767. PMID: 33780339; DOI: 10.1109/tvcg.2021.3067765.
Abstract
Ellipse fitting, an essential component of pupil- or iris-tracking-based video oculography, is performed on previously segmented eye parts generated using various computer vision techniques. Several factors, such as occlusions due to eyelid shape, camera position, or eyelashes, frequently break ellipse fitting algorithms that rely on well-defined pupil or iris edge segments. In this work, we propose training a convolutional neural network to directly segment entire elliptical structures and demonstrate that such a framework is robust to occlusions and offers superior pupil and iris tracking performance (at least a 10% and 24% increase in pupil and iris center detection rate, respectively, within a two-pixel error margin) compared to standard eye-parts segmentation on multiple publicly available synthetic segmentation datasets.
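The detection-rate metric quoted above, the fraction of frames whose predicted pupil or iris center lies within a two-pixel margin of ground truth, is straightforward to compute. A sketch with hypothetical coordinate data:

```python
import math

def center_detection_rate(pred_centers, true_centers, tol_px=2.0):
    """Fraction of frames whose predicted center is within tol_px
    pixels (Euclidean distance) of the ground-truth center."""
    hits = 0
    for (px, py), (tx, ty) in zip(pred_centers, true_centers):
        if math.hypot(px - tx, py - ty) <= tol_px:
            hits += 1
    return hits / len(pred_centers)
```

Comparing this rate between the ellipse-segmentation pipeline and a standard eye-parts-segmentation pipeline on the same frames gives the percentage improvements the abstract reports.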
8. Martin-Gomez A, Li H, Song T, Yang S, Wang G, Ding H, Navab N, Zhao Z, Armand M. STTAR: Surgical Tool Tracking Using Off-the-Shelf Augmented Reality Head-Mounted Displays. IEEE Trans Vis Comput Graph 2024;30:3578-3593. PMID: 37021885; PMCID: PMC10959446; DOI: 10.1109/tvcg.2023.3238309.
Abstract
The use of Augmented Reality (AR) for navigation purposes has proven beneficial in assisting physicians during surgical procedures. These applications commonly require knowing the pose of surgical tools and patients to provide the visual information that surgeons can use during the task. Existing medical-grade tracking systems use infrared cameras placed inside the Operating Room (OR) to identify retro-reflective markers attached to objects of interest and compute their pose. Some commercially available AR Head-Mounted Displays (HMDs) use similar cameras for self-localization, hand tracking, and estimating objects' depth. This work presents a framework that uses the built-in cameras of AR HMDs to enable accurate tracking of retro-reflective markers without integrating any additional electronics into the HMD. The proposed framework can simultaneously track multiple tools without prior knowledge of their geometry and only requires establishing a local network between the headset and a workstation. Our results show that markers can be detected and tracked with an accuracy of 0.09 ± 0.06 mm in lateral translation, 0.42 ± 0.32 mm in longitudinal translation, and 0.80 ± 0.39° for rotations around the vertical axis. Furthermore, to showcase the relevance of the proposed framework, we evaluate the system's performance in the context of surgical procedures. This use case was designed to replicate the scenario of k-wire insertions in orthopedic procedures. For evaluation, seven surgeons were provided with visual navigation and asked to perform 24 injections using the proposed framework. A second study with ten participants served to investigate the capabilities of the framework in more general scenarios. Results from these studies provided accuracy comparable to that reported in the literature for AR-based navigation procedures.
9. Marchetto J, Wright WG. The Validity of an Oculus Rift to Assess Postural Changes During Balance Tasks. Hum Factors 2019;61:1340-1352. PMID: 30917062; DOI: 10.1177/0018720819835088.
Abstract
OBJECTIVE To investigate whether shifts in head position, measured via an Oculus Rift head-mounted display (HMD), are a valid measure of whole-body postural stability. BACKGROUND The inverted single-link pendulum model of balance suggests that shifts in whole-body center of mass can be estimated from individual body segments. However, whether head position describes postural stability measures such as center of pressure (COP) remains unclear. METHOD Participants (N = 10) performed six conditions while wearing an HMD and completing a previously validated virtual reality (VR)-based balance assessment. COP was recorded with a Wii Balance Board force plate (WBB), while the HMD recorded linear and angular head displacement. Visual input presented in the HMD (stable scene, dark scene, or dynamic scene) and somatosensory information (with or without foam) were varied across conditions. The HMD time-series data were compared with the criterion-measure WBB. RESULTS Significant correlations were found between COP measures (standard deviation, range, sway area, velocity) and head-centered angular and linear displacements (roll, pitch, mediolateral and anteroposterior directions). CONCLUSIONS The Oculus Rift HMD shows promise as a measure of postural stability without additional posturography equipment. These findings support the application of VR HMD technology for the assessment of postural stability across a variety of challenging conditions. APPLICATION The human factors and ergonomics benefit of such an approach lies in its portability, low cost, and widespread availability for clinic- and home-based investigation of postural disturbances. Fall injury affects millions of people annually, so assessment of fall risk and treatment of the underlying causes has enormous public health benefit.
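The correlation analysis described, relating HMD head-motion summaries to force-plate COP summaries, reduces to Pearson's r between paired per-trial measures. A minimal sketch; which head measure is paired with which COP measure is illustrative here, not taken from the paper:

```python
def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences,
    e.g. per-trial HMD pitch sway vs. per-trial COP sway range."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5
```

A validity argument of the kind the abstract makes would then check that r is significantly positive across conditions for the candidate head-motion measures.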
10. Kelly JW. Distance Perception in Virtual Reality: A Meta-Analysis of the Effect of Head-Mounted Display Characteristics. IEEE Trans Vis Comput Graph 2023;29:4978-4989. PMID: 35925852; DOI: 10.1109/tvcg.2022.3196606.
Abstract
Distances are commonly underperceived in virtual reality (VR), and this finding has been documented repeatedly over more than two decades of research. Yet, there is evidence that perceived distance is more accurate in modern compared to older head-mounted displays (HMDs). This meta-analysis, based on 137 samples from 61 publications, describes egocentric distance perception across 20 HMDs and examines the relationship between perceived distance and technical HMD characteristics. Judged distance was positively associated with HMD field of view (FOV), positively associated with HMD resolution, and negatively associated with HMD weight. The effects of FOV and resolution were more pronounced among heavier HMDs. These findings suggest that future improvements in these technical characteristics may be central to resolving the problem of distance underperception in VR.
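The moderator relationships summarized above (judged distance vs. FOV, resolution, weight) come down to regressions across study samples. A minimal unweighted sketch for a single moderator; an actual meta-analysis would weight samples (e.g. by size) and model the moderators and their interactions jointly:

```python
def simple_regression(x, y):
    """Ordinary least-squares slope and intercept for one moderator,
    e.g. judged-distance ratio (y) against HMD field of view (x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx
```

A positive slope for FOV or resolution, and a negative one for weight, would correspond to the associations reported in the abstract.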
11. Munusamy T, Karuppiah R, Bahuri NFA, Sockalingam S, Cham CY, Waran V. Telemedicine via Smart Glasses in Critical Care of the Neurosurgical Patient-COVID-19 Pandemic Preparedness and Response in Neurosurgery. World Neurosurg 2021;145:e53-e60. PMID: 32956888; PMCID: PMC7500328; DOI: 10.1016/j.wneu.2020.09.076.
Abstract
OBJECTIVE The coronavirus disease 2019 pandemic poses major risks to health care workers in neurocritical care. Recommendations are in place to limit the number of medical personnel attending to the neurosurgical patient, both as a protective measure and to conserve personal protective equipment. However, the complexity of the neurosurgical patient proves to be both a challenge and an opportunity for innovation. The goal of our study was to determine whether telemedicine delivered through smart glasses was feasible and effective as an alternative method of conducting ward rounds on neurocritical care patients during the pandemic. METHODS A random pair consisting of a neurosurgery resident and a specialist conducted consecutive virtual and physical ward rounds on neurocritical patients. A virtual ward round was first conducted remotely by the specialist, who received real-time audiovisual information from the resident wearing smart glasses integrated with telemedicine. Subsequently, a physical ward round was performed together by the resident and specialist on the same patient. The management plans from both ward rounds were compared, and intrarater reliability was measured. On study completion, a qualitative survey was administered. RESULTS Ten paired ward rounds were performed on 103 neurocritical care patients with excellent overall intrarater reliability. Nine of the 10 ward rounds showed good to excellent internal consistency, and 1 showed acceptable internal consistency. Qualitative analysis indicated wide user acceptance of, and a high satisfaction rate with, the alternative method. CONCLUSIONS Virtual ward rounds using telemedicine via smart glasses on neurosurgical patients in critical care were feasible, effective, and widely accepted as an alternative to physical ward rounds during the coronavirus disease 2019 pandemic.
12. Romare C, Skär L. Smart Glasses for Caring Situations in Complex Care Environments: Scoping Review. JMIR Mhealth Uhealth 2020;8:e16055. PMID: 32310144; PMCID: PMC7199139; DOI: 10.2196/16055.
Abstract
BACKGROUND Anesthesia departments and intensive care units represent two advanced, high-tech, and complex care environments. Health care in those environments involves different types of technology to provide safe, high-quality care. Smart glasses have previously been used in different health care settings and have been suggested to assist health care professionals in numerous areas. However, smart glasses in the complex contexts of anesthesia care and intensive care are new and innovative. An overview of existing research related to these contexts is needed before implementing smart glasses into complex care environments. OBJECTIVE The aim of this study was to highlight potential benefits and limitations with health care professionals' use of smart glasses in situations occurring in complex care environments. METHODS A scoping review with six steps was conducted to fulfill the objective. Database searches were conducted in PubMed and Scopus; original articles about health care professionals' use of smart glasses in complex care environments and/or situations occurring in those environments were included. The searches yielded a total of 20 articles that were included in the review. RESULTS Three categories were created during the qualitative content analysis: (1) smart glasses as a versatile tool that offers opportunities and challenges, (2) smart glasses entail positive and negative impacts on health care professionals, and (3) smart glasses' quality of use provides facilities and leaves room for improvement. Smart glasses were found to be both a helpful tool and a hindrance in caring situations that might occur in complex care environments. This review provides an increased understanding about different situations where smart glasses might be used by health care professionals in clinical practice in anesthesia care and intensive care; however, research about smart glasses in clinical complex care environments is limited. 
CONCLUSIONS Thoughtful implementation and improved hardware are needed to meet health care professionals' needs. New technology brings challenges; more research is required to elucidate how smart glasses affect patient safety, health care professionals, and quality of care in complex care environments.
13. Ratcliff J, Supikov A, Alfaro S, Azuma R. ThinVR: Heterogeneous microlens arrays for compact, 180 degree FOV VR near-eye displays. IEEE Trans Vis Comput Graph 2020;26:1981-1990. PMID: 32070971; DOI: 10.1109/tvcg.2020.2973064.
Abstract
Today's Virtual Reality (VR) displays are dramatically better than the head-worn displays offered 30 years ago, but they remain nearly as bulky as their predecessors from the 1980s. Also, almost all consumer VR displays today provide a 90-110 degree field of view (FOV), which is much smaller than the human visual system's FOV, which extends beyond 180 degrees horizontally. In this paper, we propose ThinVR as a new approach to simultaneously address the bulk and limited FOV of head-worn VR displays. ThinVR enables a head-worn VR display to provide a 180-degree horizontal FOV in a thin, compact form factor. Our approach is to replace traditional large optics with a curved microlens array of custom-designed heterogeneous lenslets placed in front of a curved display. We found that heterogeneous optics were crucial to make this approach work, since over a wide FOV many lenslets are viewed off the central axis. We developed a custom optimizer for designing heterogeneous lenslets that ensures a sufficient eyebox while reducing distortions. The contribution includes an analysis of the design space for curved microlens arrays, implementation of physical prototypes, and an assessment of image quality, eyebox, FOV, reduction in volume, and pupil swim distortion. To our knowledge, this is the first work to demonstrate and analyze the potential of curved, heterogeneous microlens arrays to enable compact, wide-FOV head-worn VR displays.
14. Iskander J, Hossny M, Nahavandi S. Using biomechanics to investigate the effect of VR on eye vergence system. Appl Ergon 2019;81:102883. PMID: 31422246; DOI: 10.1016/j.apergo.2019.102883.
Abstract
Vergence-accommodation conflict (VAC) is the main contributor to visual fatigue during immersion in virtual environments. Many studies have investigated the effects of VAC using 3D displays and expensive, complex apparatus and setups to create natural and conflicting viewing conditions. However, only a limited number of studies have targeted virtual environments simulated using modern consumer-grade VR headsets. Our main objective in this work is to test how modern VR headsets (VR-simulated depth) affect the vergence system, and to investigate the effect of simulated depth on eye-gaze performance. The virtual scenario used included a common virtual object (a cube) in a simple virtual environment, with no constraints placed on the head and neck movement of the subjects. We used ocular biomechanics and eye tracking to compare vergence angles between matching (ideal) and conflicting (real) viewing conditions. The real vergence angle during immersion was significantly higher than the ideal vergence angle and exhibited higher variability, which leads to incorrect depth cues that affect depth perception and cause visual fatigue during prolonged virtual experiences. Additionally, we found that as the simulated depth increases, users' ability to manipulate virtual objects with their eyes decreases, thus reducing the possibilities for interaction through eye gaze. The biomechanics model used here can be further extended to study the muscular activity of the eye muscles during immersion. It presents an efficient and flexible assessment tool for virtual environments.
15. Jung AR, Park EA. The Effectiveness of Learning to Use HMD-Based VR Technologies on Nursing Students: Chemoport Insertion Surgery. Int J Environ Res Public Health 2022;19:4823. PMID: 35457689; PMCID: PMC9028481; DOI: 10.3390/ijerph19084823.
Abstract
Background: The purpose of this study was to develop a mobile head-mounted display (HMD)-based virtual reality (VR) nursing education program (VRP) and to evaluate its effects on knowledge, learning attitude, satisfaction with self-practice, and learning motivation in nursing students. Methods: This was a quasi-experimental study using a nonequivalent control group pretest-posttest design to evaluate the effects of an HMD-based VRP on nursing students. A Chemoport insertion surgery nursing scenario was developed as an HMD-based VRP. The experimental group, consisting of 30 nursing students, underwent pre-debriefing, followed by the VRP using an HMD and debriefing. The control group, also consisting of 30 nursing students, underwent pre-debriefing, followed by 30 minutes of self-learning using handouts about Chemoport insertion surgery procedures, and debriefing. Results: The experimental group that underwent the HMD-based VRP showed significantly improved post-intervention knowledge of operating nursing (p = 0.001), learning attitude (p = 0.002), and satisfaction (p = 0.017) compared to the control group. The motivation sub-domains of attention (p < 0.05) and relevance (p < 0.05) also differed significantly between the two groups post-intervention. Conclusions: An HMD-based VRP for Chemoport insertion surgery is expected to contribute to knowledge, learning attitude, satisfaction, attention, and relevance in nursing students.
16. Carrera JF. A Systematic Review of the Use of Google Glass in Graduate Medical Education. J Grad Med Educ 2019;11:637-648. PMID: 31871562; PMCID: PMC6919184; DOI: 10.4300/jgme-d-19-00148.1.
Abstract
BACKGROUND Graduate medical education (GME) has emphasized the assessment of trainee competencies and milestones; however, sufficient in-person assessment is often constrained. Using mobile hands-free devices, such as Google Glass (GG) for telemedicine, allows for remote supervision, education, and assessment of residents. OBJECTIVE We reviewed available literature on the use of GG in GME in the clinical learning environment, its use for resident supervision and education, and its clinical utility and technical limitations. METHODS We conducted a systematic review in accordance with 2009 PRISMA guidelines. Applicable studies were identified through a review of PubMed, MEDLINE, and Web of Science databases for articles published from January 2013 to August 2018. Two reviewers independently screened titles, abstracts, and full-text articles that reported using GG in GME and assessed the quality of the studies. A systematic review of these studies appraised the literature for descriptions of its utility in GME. RESULTS Following our search and review process, 37 studies were included. The majority evaluated GG in surgical specialties (n = 23) for the purpose of surgical/procedural skills training or supervision. GG was predominantly used for video teleconferencing, and photo and video capture. Highlighted positive aspects of GG use included point-of-view broadcasting and capacity for 2-way communication. Most studies cited drawbacks that included suboptimal battery life and HIPAA concerns. CONCLUSIONS GG shows some promise as a device capable of enhancing GME. Studies evaluating GG in GME are limited by small sample sizes and few quantitative data. Overall experience with use of GG in GME is generally positive.
Systematic Review
Aspiotis V, Miltiadous A, Kalafatakis K, Tzimourta KD, Giannakeas N, Tsipouras MG, Peschos D, Glavas E, Tzallas AT. Assessing Electroencephalography as a Stress Indicator: A VR High-Altitude Scenario Monitored through EEG and ECG. SENSORS (BASEL, SWITZERLAND) 2022; 22:5792. [PMID: 35957348 PMCID: PMC9371026 DOI: 10.3390/s22155792] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/28/2022] [Revised: 07/28/2022] [Accepted: 08/01/2022] [Indexed: 05/28/2023]
Abstract
Over the last decade, virtual reality (VR) has become an increasingly accessible commodity. Head-mounted display (HMD) immersive technologies allow researchers to simulate experimental scenarios that would be unfeasible or risky in real life. An example is extreme heights exposure simulations, which can be utilized in research on stress system mobilization. Until recently, electroencephalography (EEG)-related research was focused on mental stress prompted by social or mathematical challenges, with only a few studies employing HMD VR techniques to induce stress. In this study, we combine a state-of-the-art EEG wearable device and an electrocardiography (ECG) sensor with a VR headset to provoke stress in a high-altitude scenario while monitoring EEG and ECG biomarkers in real time. A robust signal-cleaning pipeline is implemented to preprocess the movement-contaminated EEG data. Statistical and correlation analysis is employed to explore the relationship between these biomarkers and stress. The participant pool is divided into two groups based on their heart rate increase, and statistically significant EEG biomarker differences emerged between them. Finally, occipital-region band power changes and occipital asymmetry alterations were found to be associated with height-related stress, as was brain activation in the beta and gamma bands, which correlates with the results of the self-reported Perceived Stress Scale questionnaire.
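The band-power and occipital-asymmetry measures described above reduce to simple spectral arithmetic. A minimal sketch, assuming a naive DFT-based band-power estimate and a log-ratio asymmetry index (the function names, sampling rate, and band edges are illustrative choices, not taken from the paper's pipeline):

```python
import math

def band_power(samples, fs, f_lo, f_hi):
    """Naive band power via a discrete Fourier transform.
    Illustrative only; real EEG pipelines typically use Welch's method
    on artifact-rejected epochs."""
    n = len(samples)
    power = 0.0
    for k in range(n // 2):
        freq = k * fs / n
        if f_lo <= freq < f_hi:
            re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / (n * n)
    return power

def asymmetry_index(p_right, p_left):
    """Log-ratio asymmetry: positive means more right-hemisphere power."""
    return math.log(p_right) - math.log(p_left)
```

For example, a pure 10 Hz sinusoid concentrates its power in the alpha band (8-13 Hz) and contributes essentially nothing to beta (13-30 Hz), which is the kind of contrast such biomarkers exploit.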
research-article
Li M, Seifabadi R, Long D, De Ruiter Q, Varble N, Hecht R, Negussie AH, Krishnasamy V, Xu S, Wood BJ. Smartphone- versus smartglasses-based augmented reality (AR) for percutaneous needle interventions: system accuracy and feasibility study. Int J Comput Assist Radiol Surg 2020; 15:1921-1930. [PMID: 32734314 DOI: 10.1007/s11548-020-02235-7] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2020] [Accepted: 07/14/2020] [Indexed: 11/26/2022]
Abstract
PURPOSE To compare the system accuracy and needle placement performance of smartphone- and smartglasses-based augmented reality (AR) for percutaneous needle interventions. METHODS An AR platform was developed to enable the superimposition of annotated anatomy and a planned needle trajectory onto a patient in real time. The system accuracy of the AR display on smartphone (iPhone7) and smartglasses (HoloLens1) devices was evaluated on a 3D-printed phantom. The target overlay error was measured as the distance between actual and virtual targets (n = 336) on the AR display, derived from preprocedural CT. The needle overlay angle was measured as the angular difference between actual and virtual needles (n = 12) on the AR display. Three operators each used the iPhone (n = 8), HoloLens (n = 8) and CT-guided freehand (n = 8) to guide needles into targets in a phantom. Needle placement error was measured with post-placement CT. Needle placement time was recorded from needle puncture to navigation completion. RESULTS The target overlay error of the iPhone was comparable to the HoloLens (1.75 ± 0.59 mm, 1.74 ± 0.86 mm, respectively, p = 0.9). The needle overlay angle of the iPhone and HoloLens was similar (0.28 ± 0.32°, 0.41 ± 0.23°, respectively, p = 0.26). The iPhone-guided needle placements showed reduced error compared to the HoloLens (2.58 ± 1.04 mm, 3.61 ± 2.25 mm, respectively, p = 0.05) and increased time (87 ± 17 s, 71 ± 27 s, respectively, p = 0.02). Both AR devices reduced placement error compared to CT-guided freehand (15.92 ± 8.06 mm, both p < 0.001). CONCLUSION An augmented reality platform employed on smartphone and smartglasses devices may provide accurate display and navigation guidance for percutaneous needle-based interventions.
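The target overlay error and the mean ± SD summaries reported here come down to Euclidean distance and descriptive statistics. A minimal sketch under assumed inputs (the point coordinates are hypothetical; this is not the study's registration pipeline):

```python
import math
from statistics import mean, stdev

def target_overlay_error(actual, virtual):
    """Euclidean distance (e.g., in mm) between an actual target position
    and its virtual overlay on the AR display."""
    return math.dist(actual, virtual)

def summarize(errors):
    """Mean and sample standard deviation, as in the 'mean ± SD' figures
    reported for each device."""
    return mean(errors), stdev(errors)
```

Usage: `summarize([target_overlay_error(a, v) for a, v in pairs])` would yield the per-device error summary for a set of actual/virtual target pairs.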
Comparative Study
Groth C, Tauscher JP, Heesen N, Hattenbach M, Castillo S, Magnor M. Omnidirectional Galvanic Vestibular Stimulation in Virtual Reality. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2022; 28:2234-2244. [PMID: 35167472 DOI: 10.1109/tvcg.2022.3150506] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
In this paper we propose omnidirectional galvanic vestibular stimulation (GVS) to mitigate cybersickness in virtual reality applications. One of the most widely accepted theories holds that cybersickness is caused by the visually induced impression of ego motion while physically remaining at rest. As a result of this sensory mismatch, people associate negative symptoms with VR and sometimes avoid the technology altogether. To reconcile the two contradicting sensory perceptions, we investigate GVS to stimulate the vestibular organs behind our ears with low-current electrical signals that are specifically attuned to the visually displayed camera motion. We describe how to calibrate and generate the appropriate GVS signals in real time for pre-recorded omnidirectional videos exhibiting ego motion in all three spatial directions. For validation, we conduct an experiment presenting real-world 360° videos shot from a moving first-person perspective in a VR head-mounted display. Our findings indicate that GVS is able to significantly reduce discomfort for cybersickness-susceptible VR users, creating a deeper and more enjoyable immersive experience for many people.
Aranda-García S, Santos-Folgar M, Fernández-Méndez F, Barcala-Furelos R, Pardo Ríos M, Hernández Sánchez E, Varela-Varela L, San Román-Mata S, Rodríguez-Núñez A. "Dispatcher, Can You Help Me? A Woman Is Giving Birth". A Pilot Study of Remote Video Assistance with Smart Glasses. SENSORS (BASEL, SWITZERLAND) 2022; 23:s23010409. [PMID: 36617008 PMCID: PMC9824362 DOI: 10.3390/s23010409] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/09/2022] [Revised: 12/22/2022] [Accepted: 12/27/2022] [Indexed: 05/20/2023]
Abstract
Smart glasses (SG) could be a breakthrough in emergency situations, so the aim of this work was to assess the potential benefits of teleassistance through SG from a midwife to a lifeguard in a simulated, unplanned, out-of-hospital birth (OHB). Thirty-eight lifeguards were randomized into SG and control (CG) groups. All participants were required to act in a simulated imminent childbirth with a maternal-fetal simulator (PROMPT Flex, Laerdal, Norway). The CG acted autonomously, while the SG group was video-assisted by a midwife through SG (Vuzix Blade, New York, NY, USA). The video assistance was based on the OHB protocol, with the midwife speaking to the lifeguard and receiving images from the SG. The performance time, compliance with the protocol steps, and perceived performance with the SG were evaluated. The midwife's video assistance with SG allowed 35% of the SG participants to perform the complete OHB protocol, whereas no CG participant was able to perform it (p = 0.005). All OHB protocol variables were significantly better in the SG group than in the CG (p < 0.05). Telemedicine through video assistance with SG is thus feasible, enabling a lifeguard with no knowledge of childbirth care to act according to the recommendations in a simulated, unplanned, uncomplicated OHB. Communication with the midwife, by speaking and sending images through the SG, is perceived as an important benefit to performance.
Randomized Controlled Trial
Amjad F, Khan MH, Nisar MA, Farid MS, Grzegorzek M. A Comparative Study of Feature Selection Approaches for Human Activity Recognition Using Multimodal Sensory Data. SENSORS 2021; 21:s21072368. [PMID: 33805368 PMCID: PMC8036571 DOI: 10.3390/s21072368] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/04/2021] [Revised: 03/16/2021] [Accepted: 03/26/2021] [Indexed: 12/16/2022]
Abstract
Human activity recognition (HAR) aims to recognize the actions of the human body through a series of observations and environmental conditions. The analysis of human activities has drawn the attention of the research community over the last two decades due to its widespread applications, the diverse nature of activities, and the available recording infrastructure. One of the most challenging applications in this framework is recognizing human body actions using unobtrusive wearable motion sensors. Since the activities of daily life (e.g., cooking, eating) comprise several repetitive and circumstantial short sequences of actions (e.g., moving an arm), it is difficult to use the sensory data directly for recognition, because multiple sequences of the same activity may vary widely. However, a similarity can be observed in the temporal occurrence of the atomic actions. Therefore, this paper presents a two-level hierarchical method to recognize human activities using a set of wearable sensors. In the first step, the atomic activities are detected from the original sensory data and their recognition scores are obtained. In the second step, the composite activities are recognized using the scores of the atomic actions. We propose two methods of feature extraction from the atomic scores to recognize composite activities: handcrafted features and features obtained using a subspace pooling technique. The proposed method is evaluated on the large, publicly available CogAge dataset, which contains instances of both atomic and composite activities, recorded using three unobtrusive wearable devices: a smartphone, a smartwatch, and smart glasses. We also evaluated the performance of different classification algorithms in recognizing the composite activities. The proposed method achieved average recognition accuracies of 79% and 62.8% using the handcrafted features and the subspace pooling features, respectively. The recognition results and their comparison with existing state-of-the-art techniques confirm the method's effectiveness.
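The two-level idea — turn a window of atomic-action score vectors into handcrafted features, then classify the composite activity from those features — can be sketched as follows. The per-class mean/max features and the nearest-centroid classifier below are illustrative stand-ins, not the paper's actual feature sets or classifiers:

```python
from statistics import mean

def score_features(score_seq):
    """Handcrafted features from a window of atomic-action score vectors:
    per-atomic-class mean and max over the window (one of many possible
    feature choices)."""
    n_classes = len(score_seq[0])
    feats = []
    for c in range(n_classes):
        col = [frame[c] for frame in score_seq]
        feats.append(mean(col))  # average confidence for atomic class c
        feats.append(max(col))   # peak confidence for atomic class c
    return feats

def nearest_centroid(feats, centroids):
    """Assign the composite-activity label whose centroid is closest in
    feature space (a stand-in for the classifiers compared in the paper)."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: d2(feats, centroids[label]))
```

The key design point is that the second level never sees raw sensor data: it operates only on the temporal statistics of atomic-action scores, which are more stable across repetitions of the same composite activity.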
Journal Article
Kato K, Kon D, Ito T, Ichikawa S, Ueda K, Kuroda Y. Radiography education with VR using head mounted display: proficiency evaluation by rubric method. BMC MEDICAL EDUCATION 2022; 22:579. [PMID: 35902953 PMCID: PMC9331594 DOI: 10.1186/s12909-022-03645-8] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/31/2022] [Accepted: 07/21/2022] [Indexed: 05/16/2023]
Abstract
BACKGROUND The use of head-mounted display (HMD)-based immersive virtual reality (VR) coaching systems (HMD-VRC) is expected to be effective for skill acquisition in radiography, and the usefulness of HMD-VRC has been reported in many previous studies. However, those studies evaluated its effectiveness only through questionnaires. HMD-VRC has difficulties with palpation and patient interaction compared to real-world training, and these issues are expected to affect proficiency. The purpose of this study is to determine the impact of these VR constraints in HMD-VRC, especially palpation and patient interaction, on radiographic skills proficiency in a real-world setting. METHODS First-year students (n = 30) at a training school for radiology technologists in Japan were randomly divided into two groups, one using HMD-VRC (HMD-VRC group) and the other practicing with conventional physical equipment (RP group), and each trained for approximately one hour. The teachers then evaluated the students' proficiency using a rubric method. RESULTS Some skills in the HMD-VRC group were equivalent to those of the RP group, while others were significantly lower; in particular, there was a significant decrease in proficiency in skills related to palpation and patient interaction. CONCLUSIONS This study suggests that HMD-VRC can be less effective than real-world training for radiographic techniques that require palpation and patient interaction. For effective training, it is important to objectively evaluate proficiency in the real world, even for HMD-VRC with new technologies such as haptic presentation and VR patient interaction. TRIAL REGISTRATION The study was conducted with the approval of the Ethics Committee of the International University of Health and Welfare (Approval No. 21-Im-035; Registration date: September 28, 2021).
Randomized Controlled Trial
Chowdhury TI, Ferdous SMS, Quarles J. VR Disability Simulation Reduces Implicit Bias Towards Persons With Disabilities. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2021; 27:3079-3090. [PMID: 31825867 DOI: 10.1109/tvcg.2019.2958332] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
This article investigates how experiencing Virtual Reality (VR) Disability Simulation (DS) affects information recall and participants' implicit association towards people with disabilities (PwD). Implicit attitudes are our actions or judgments towards various concepts or stereotypes (e.g., race) which we may or may not be aware of. Previous research has shown that experiencing ownership over a dark-skinned body reduces implicit racial bias. We hypothesized that a DS with a tracked Head Mounted Display (HMD) and a wheelchair interface would have a significantly larger effect on participants' information recall and their implicit association towards PwD than a desktop monitor and gamepad. We conducted a 2 x 2 between-subjects experiment in which participants experienced a VR DS that teaches them facts about Multiple Sclerosis (MS) with factors of display (HMD, a desktop monitor) and interface (gamepad, wheelchair). Participants took two Implicit Association Tests before and after experiencing the DS. Our study results show that the participants in an immersive HMD condition performed better than the participants in the non-immersive Desktop condition in their information recall task. Moreover, a tracked HMD and a wheelchair interface had significantly larger effects on participants' implicit association towards PwD than a desktop monitor and a gamepad.
Apiratwarakul K, Cheung LW, Tiamkao S, Phungoen P, Tientanopajai K, Taweepworadej W, Kanarkard W, Ienghong K. Smart Glasses: A New Tool for Assessing the Number of Patients in Mass-Casualty Incidents. Prehosp Disaster Med 2022; 37:480-484. [PMID: 35757837 PMCID: PMC9280067 DOI: 10.1017/s1049023x22000929] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2022] [Revised: 04/24/2022] [Accepted: 04/30/2022] [Indexed: 11/24/2022]
Abstract
INTRODUCTION Mass-casualty incidents (MCIs) are events in which many people are injured during the same period of time. This has major practical and planning implications for both personnel and medical equipment. Smart glasses are modern tools that could help Emergency Medical Services (EMS) estimate the number of potential patients in an MCI; however, there is currently no study on the advantages of using smart glasses in MCIs in Thailand. STUDY OBJECTIVE This study aims to compare the accuracy and the time required to assess the number of casualties at the scene using smart glasses versus manual counting. METHODS This study was a randomized controlled, field exercise experimental study in the EMS unit of Srinagarind Hospital, Thailand. The participants were divided into two groups (smart glasses and manual counting). On the days of the simulation (February 25 and 26, 2022), the participants in the smart glasses group received a 30-minute training session on the use of the smart glasses. Both groups of participants then independently counted the number of casualties on the simulation field. RESULTS Sixty-eight participants were examined; in the smart glasses group, 58.8% (N = 20) were male, and the mean age was 39.4 years. The largest share of the smart glasses group (44.1%) had worked in EMS for four to six years. The smart glasses group achieved its highest accuracy when the number of casualties was between 21 and 30 (98.0%), compared with the manual counting group (89.2%). Additionally, the time needed to assess the number of casualties was shorter in the smart glasses group than in the manual counting group for counts between 11 and 20 (6.3 versus 11.2 seconds; P = .04) and between 21 and 30 (22.1 versus 44.5 seconds; P = .02). CONCLUSION The use of smart glasses to assess the number of casualties in MCIs is useful when the number of patients is between 11 and 30, providing greater accuracy in less time than manual counting.
Randomized Controlled Trial
Vaquero-Blasco MA, Perez-Valero E, Lopez-Gordo MA, Morillas C. Virtual Reality as a Portable Alternative to Chromotherapy Rooms for Stress Relief: A Preliminary Study. SENSORS 2020; 20:s20216211. [PMID: 33143361 PMCID: PMC7663593 DOI: 10.3390/s20216211] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/01/2020] [Revised: 10/27/2020] [Accepted: 10/28/2020] [Indexed: 12/16/2022]
Abstract
Chromotherapy rooms are comfortable spaces, used in places like special needs schools, where stimuli are carefully selected to help occupants cope with stress. However, these rooms are expensive and require a space that cannot be reutilized. In this article, we propose the use of virtual reality (VR) as an inexpensive and portable alternative to chromotherapy rooms for stress relief. We recreated a chromotherapy room stress relief program using a commercial head-mounted display (HMD). We assessed the stress level of two groups (test and control) through an EEG biomarker, the relative gamma, while they experienced a relaxation session. First, participants were stressed using the Montreal imaging stress task (MIST). Then, for relaxation, the control group used a chromotherapy room while the test group used virtual reality. A hypothesis test comparing the self-perceived stress levels at different stages of the experiment yielded no significant difference between the two groups in stress reduction, either during the relaxation block (p-value: 0.8379, α = 0.05) or during any other block. Furthermore, according to participant surveys, the use of virtual reality was deemed immersive, comfortable, and pleasant (3.9 out of 5). Our preliminary results validate our approach as an inexpensive and portable alternative to chromotherapy rooms for stress relief.
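The relative gamma biomarker used in this entry is a normalized band-power ratio. One common definition — an assumption here, as the paper's exact normalization may differ — is gamma power divided by total broadband power:

```python
def relative_gamma(band_powers):
    """Relative gamma as gamma band power over total broadband power.
    band_powers maps band names ('delta', 'theta', 'alpha', 'beta',
    'gamma') to absolute power values. Definitions vary across studies;
    this is one common normalization, assumed for illustration."""
    total = sum(band_powers.values())
    return band_powers["gamma"] / total
```

Normalizing by total power makes the biomarker comparable across sessions and participants, since absolute EEG power varies with electrode impedance and individual physiology.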
Journal Article