1. Synthesizing affective neurophysiological signals using generative models: A review paper. J Neurosci Methods 2024; 406:110129. PMID: 38614286. DOI: 10.1016/j.jneumeth.2024.110129.
Abstract
The integration of emotional intelligence in machines is an important step in advancing human-computer interaction. This demands the development of reliable end-to-end emotion recognition systems. However, the scarcity of public affective datasets presents a challenge. In this literature review, we emphasize the use of generative models to address this issue in neurophysiological signals, particularly Electroencephalogram (EEG) and Functional Near-Infrared Spectroscopy (fNIRS). We provide a comprehensive analysis of different generative models used in the field, examining their input formulation, deployment strategies, and methodologies for evaluating the quality of synthesized data. This review serves as a comprehensive overview, offering insights into the advantages, challenges, and promising future directions in the application of generative models in emotion recognition systems. Through this review, we aim to facilitate the progression of neurophysiological data augmentation, thereby supporting the development of more efficient and reliable emotion recognition systems.
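As a concrete illustration of the data augmentation idea surveyed above (not a method from the review itself), here is a minimal mixup-style sketch that synthesizes extra same-class EEG epochs by convex combination; it stands in for the far richer GAN, VAE, and diffusion generators the review covers, and the `mixup_augment` name and array shapes are illustrative assumptions:

```python
import numpy as np

def mixup_augment(epochs, labels, n_new, alpha=0.2, rng=None):
    """Synthesize EEG epochs by convexly mixing pairs of same-class epochs.

    epochs: array (n_trials, n_channels, n_samples)
    labels: array (n_trials,) of integer class labels
    Returns (synthetic_epochs, synthetic_labels).
    """
    rng = np.random.default_rng(rng)
    synth, synth_y = [], []
    for _ in range(n_new):
        y = rng.choice(labels)                    # draw a class to augment
        idx = np.flatnonzero(labels == y)         # trials of that class
        i, j = rng.choice(idx, size=2, replace=True)
        lam = rng.beta(alpha, alpha)              # mixing coefficient in (0, 1)
        synth.append(lam * epochs[i] + (1 - lam) * epochs[j])
        synth_y.append(int(y))
    return np.stack(synth), np.asarray(synth_y)
```

A trained classifier would then see the original epochs plus these synthetic ones; the evaluation-of-synthesized-data question the review raises applies equally to such simple schemes.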
2. CAEVR: Biosignals-Driven Context-Aware Empathy in Virtual Reality. IEEE Transactions on Visualization and Computer Graphics 2024; 30:2671-2681. PMID: 38437090. DOI: 10.1109/tvcg.2024.3372130.
Abstract
There is little research on how Virtual Reality (VR) applications can identify and respond meaningfully to users' emotional changes. In this paper, we investigate the impact of Context-Aware Empathic VR (CAEVR) on the emotional and cognitive aspects of user experience in VR. We developed a real-time emotion prediction model using electroencephalography (EEG), electrodermal activity (EDA), and heart rate variability (HRV) and used this in personalized and generalized models for emotion recognition. We then explored the application of this model in a context-aware empathic (CAE) virtual agent and an emotion-adaptive (EA) VR environment. We found a significant increase in positive emotions, cognitive load, and empathy toward the CAE agent, suggesting the potential of CAEVR environments to refine user-agent interactions. We identify lessons learned from this study and directions for future work.
3. VR.net: A Real-world Dataset for Virtual Reality Motion Sickness Research. IEEE Transactions on Visualization and Computer Graphics 2024; 30:2330-2336. PMID: 38437109. DOI: 10.1109/tvcg.2024.3372044.
Abstract
Researchers have used machine learning approaches to identify motion sickness in VR experiences. These approaches would benefit from an accurately labeled, real-world, diverse dataset that enables the development of generalizable ML models. We introduce 'VR.net', a dataset comprising 165 hours of gameplay video from 100 real-world games spanning ten diverse genres, evaluated by 500 participants. VR.net accurately assigns 24 motion sickness-related labels to each video frame, such as camera/object movement, depth of field, and motion flow. Building such a dataset is challenging because manual labeling would require an infeasible amount of time. Instead, we implement a tool that automatically and precisely extracts ground-truth data from 3D engines' rendering pipelines without accessing VR games' source code. We illustrate the utility of VR.net through several applications, such as risk factor detection and sickness level prediction. We believe that the scale, accuracy, and diversity of VR.net offer unparalleled opportunities for VR motion sickness research and beyond. We also provide access to our data collection tool, enabling researchers to contribute to the expansion of VR.net.
4. Sitting or Standing in VR: About Comfort, Conflicts, and Hazards. IEEE Computer Graphics and Applications 2024; 44:81-88. PMID: 38526874. DOI: 10.1109/mcg.2024.3352349.
Abstract
This article examines the choices between sitting and standing in virtual reality (VR) experiences, addressing conflicts, challenges, and opportunities. It explores issues such as the risk of motion sickness in stationary users and virtual rotations, the formation of mental models, consistent authoring, affordances, and the integration of embodied interfaces for enhanced interactions. Furthermore, it delves into the significance of multisensory integration and the impact of postural mismatches on immersion and acceptance in VR. Ultimately, the article underscores the importance of aligning postural choices and embodied interfaces with the goals of VR applications, be it for entertainment or simulation, to enhance user experiences.
5. Wearable Augmented Reality: Research Trends and Future Directions from Three Major Venues. IEEE Transactions on Visualization and Computer Graphics 2023; 29:4782-4793. PMID: 37782599. DOI: 10.1109/tvcg.2023.3320231.
Abstract
Wearable Augmented Reality (AR) has attracted considerable attention in recent years, as evidenced by the growing number of research publications and industry investments. With swift advancements and a multitude of interdisciplinary research areas within wearable AR, a comprehensive review is crucial for integrating the current state of the field. In this paper, we present a review of 389 research papers on wearable AR, published between 2018 and 2022 in three major venues: ISMAR, TVCG, and CHI. Drawing inspiration from previous works by Zhou et al. and Kim et al., which summarized AR research at ISMAR over the past two decades (1998-2017), we categorize the papers into different topics and identify prevailing trends. One notable finding is that wearable AR research is increasingly geared towards enabling broader consumer adoption. From our analysis, we highlight key observations related to potential future research areas essential for capitalizing on this trend and achieving widespread adoption. These include addressing challenges in Display, Tracking, Interaction, and Applications, and exploring emerging frontiers in Ethics, Accessibility, Avatar and Embodiment, and Intelligent Virtual Agents.
6. Older adults' experiences of social isolation and loneliness: Can virtual touring increase social connectedness? A pilot study. Geriatr Nurs 2023; 53:270-279. PMID: 37598431. DOI: 10.1016/j.gerinurse.2023.08.001.
Abstract
The present pilot study (N = 10) explored how independent-living older adults experience social isolation and loneliness and whether virtual tour digital technology can increase social connectedness. Through triangulation of interviews, experiences, and feedback, this study contributes to the knowledge base on the well-being of our ageing populations and how digital technologies, specifically virtual tourism, can aid in this process. The key findings reveal that the participants in our study were moderately lonely but were open to embracing more digital technology, sharing how it is instrumental in facilitating social connection and life administration. Virtual tour experiences were well accepted, as participants expressed enjoyment, nostalgia, and interest in future use. However, their contribution to increasing social connections remains unclear and requires further investigation. Several future research and education directions are provided.
7. Enhancing social connectedness with companion robots using AI. Sci Robot 2023; 8:eadi6347. PMID: 37436971. DOI: 10.1126/scirobotics.adi6347.
Abstract
Companion robots with AI may usher a new science of social connectedness that requires the development of ethical frameworks.
8. Investigating the relationship between three-dimensional perception and presence in virtual reality-reconstructed architecture. Applied Ergonomics 2023; 109:103953. PMID: 36642060. DOI: 10.1016/j.apergo.2022.103953.
Abstract
Identifying and characterizing the factors that affect presence in virtual environments has been acknowledged as a critical step to improving Virtual Reality (VR) applications in the built environment domain. In the search to identify those factors, the research objective was to test whether three-dimensional perception affects presence in virtual environments. A controlled within-group experiment utilizing perception and presence questionnaires was conducted, followed by data analysis, to test the hypothesized unidirectional association between three-dimensional perception and presence in two different virtual environments (non-immersive and immersive). Results indicate no association in either of the systems studied, contrary to the assumption of many scholars in the field but in line with recent studies on the topic. Consequently, VR applications in architectural design may not necessarily need to incorporate advanced stereoscopic visualization techniques to deliver highly immersive experiences, which may be achieved by addressing factors other than depth realism. As findings suggest that the levels of presence experienced by users are not subject to the display mode of a 3D model (whether immersive or non-immersive display), it may still be possible for professionals involved in the review of 3D models (e.g., designers, contractors, clients) to experience high levels of presence through non-stereoscopic VR systems provided that other presence-promoting factors are included.
9. IEEE ISMAR 2022 Report: Toward Better Mixed Realities. IEEE Computer Graphics and Applications 2023; 43:84-87. PMID: 37195828. DOI: 10.1109/mcg.2023.3248399.
Abstract
On October 21, 2022, the 21st IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2022) concluded successfully in Singapore. ISMAR is the leading international conference in the fields of augmented reality, mixed reality, and virtual reality. This was the first time that ISMAR was held in Southeast Asia and the first time it ran in hybrid mode. ISMAR 2022 achieved a historically high number of papers and attendees, reflecting the steady growth of the community and its scientific contributions. In this article, we report the key outcomes, impressions, research trends, and lessons learned from the conference.
10. Immersive medical virtual reality: still a novelty or already a necessity? J Neurol Neurosurg Psychiatry 2023: jnnp-2022-330207. PMID: 37055062. DOI: 10.1136/jnnp-2022-330207.
11. Brain activity during cybersickness: a scoping review. Virtual Reality 2023; 27:1-25. PMID: 37360812. PMCID: PMC10092918. DOI: 10.1007/s10055-023-00795-y.
Abstract
Virtual reality (VR) experiences can cause a range of negative symptoms, such as nausea, disorientation, and oculomotor discomfort, collectively called cybersickness. Previous studies have attempted to develop a reliable measure for detecting cybersickness instead of using questionnaires, and electroencephalography (EEG) has been regarded as one possible alternative. However, despite the increasing interest, little is known about which brain activities are consistently associated with cybersickness and what methods should be adopted for measuring discomfort through brain activity. We conducted a scoping review of 33 experimental studies on cybersickness and EEG found through database searches and screening. To understand these studies, we organized the EEG analysis pipeline into four steps (preprocessing, feature extraction, feature selection, classification) and surveyed the characteristics of each step. The results showed that most studies performed frequency or time-frequency analysis for EEG feature extraction. Some of the studies applied a classification model to predict cybersickness, reporting accuracies between 79 and 100%. These studies tended to use HMD-based VR with a portable EEG headset for measuring brain activity. Most VR content shown was scenic views such as driving or navigating a road, and participants were mostly limited to people in their 20s. This scoping review presents an overview of cybersickness-related EEG research and establishes directions for future work. Supplementary Information: The online version contains supplementary material available at 10.1007/s10055-023-00795-y.
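To make the "frequency analysis for feature extraction" step concrete, here is a minimal sketch (not drawn from any of the reviewed studies) of per-channel band-power features computed with an FFT; the band edges and the `band_powers` name are illustrative assumptions:

```python
import numpy as np

# Conventional EEG band edges in Hz (an assumption; studies vary).
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(epoch, fs):
    """Mean spectral power per channel and band for one EEG epoch.

    epoch: array (n_channels, n_samples); fs: sampling rate in Hz.
    Returns array (n_channels, n_bands), bands ordered theta, alpha, beta.
    """
    freqs = np.fft.rfftfreq(epoch.shape[-1], d=1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch, axis=-1)) ** 2   # crude periodogram
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=-1))     # mean power in the band
    return np.stack(feats, axis=-1)
```

Features like these would feed the feature selection and classification steps of the pipeline described above.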
12. Using Virtual Replicas to Improve Mixed Reality Remote Collaboration. IEEE Transactions on Visualization and Computer Graphics 2023; PP:2785-2795. PMID: 37027731. DOI: 10.1109/tvcg.2023.3247113.
Abstract
In this paper, we explore how virtual replicas can enhance Mixed Reality (MR) remote collaboration with a 3D reconstruction of the task space. People in different locations may need to work together remotely on complicated tasks. For example, a local user could follow a remote expert's instructions to complete a physical task. However, it could be challenging for the local user to fully understand the remote expert's intentions without effective spatial referencing and action demonstration. In this research, we investigate how virtual replicas can work as a spatial communication cue to improve MR remote collaboration. This approach segments the foreground manipulable objects in the local environment and creates corresponding virtual replicas of physical task objects. The remote user can then manipulate these virtual replicas to explain the task and guide their partner. This enables the local user to rapidly and accurately understand the remote expert's intentions and instructions. Our user study with an object assembly task found that using virtual replica manipulation was more efficient than using 3D annotation drawing in an MR remote collaboration scenario. We report and discuss the findings and limitations of our system and study, and present directions for future research.
13. BeHere: a VR/SAR remote collaboration system based on virtual replicas sharing gesture and avatar in a procedural task. Virtual Reality 2023; 27:1409-1430. PMID: 36686612. PMCID: PMC9838545. DOI: 10.1007/s10055-023-00748-5.
Abstract
In this paper, we focus on the significance of remote collaboration using virtual replicas, avatars, and gestures in a procedural task in industry; thus, we present a Virtual Reality (VR)/Spatial Augmented Reality (SAR) remote collaboration system, BeHere, based on 3D virtual replicas and shared gestures and avatars. BeHere enables a remote expert in VR to guide a local worker in real time to complete a procedural task in the real world. For the remote VR site, we construct a 3D virtual environment using virtual replicas, and the user can manipulate them using gestures in an intuitive interaction and see their partner's 3D virtual avatar. For the local site, we use SAR to enable the local worker to see instructions projected onto the real world based on the shared virtual replicas and gestures. We conducted a formal user study to evaluate the prototype system in terms of performance, social presence, workload, and user ranking and preference. We found that the combination of visual cues of gestures, avatars, and virtual replicas plays a positive role in improving user experience, especially for remote VR users. More significantly, our study provides useful information and important design implications for further research on the use of gesture-, gaze- and avatar-based cues, as well as virtual replicas, in VR/AR remote collaboration on procedural tasks in industry. Supplementary Information: The online version contains supplementary material available at 10.1007/s10055-023-00748-5.
14. Using Facial Micro-Expressions in Combination With EEG and Physiological Signals for Emotion Recognition. Front Psychol 2022; 13:864047. PMID: 35837650. PMCID: PMC9275379. DOI: 10.3389/fpsyg.2022.864047.
Abstract
Emotions are multimodal processes that play a crucial role in our everyday lives. Recognizing emotions is becoming more critical in a wide range of application domains such as healthcare, education, human-computer interaction, Virtual Reality, intelligent agents, entertainment, and more. Facial macro-expressions or intense facial expressions are the most common modalities in recognizing emotional states. However, since facial expressions can be voluntarily controlled, they may not accurately represent emotional states. Earlier studies have shown that facial micro-expressions are more reliable than facial macro-expressions for revealing emotions. They are subtle, involuntary movements responding to external stimuli that cannot be controlled. This paper proposes using facial micro-expressions combined with brain and physiological signals to more reliably detect underlying emotions. We describe our models for measuring arousal and valence levels from a combination of facial micro-expressions, Electroencephalography (EEG) signals, galvanic skin responses (GSR), and Photoplethysmography (PPG) signals. We then evaluate our model using the DEAP dataset and our own dataset based on a subject-independent approach. Lastly, we discuss our results, the limitations of our work, and how these limitations could be overcome. We also discuss future directions for using facial micro-expressions and physiological signals in emotion recognition.
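One common way to combine such heterogeneous signals, sketched here as an assumption rather than the paper's actual model, is feature-level fusion: normalize each modality's features so no single modality dominates by sheer scale, then concatenate the result for any downstream arousal/valence classifier:

```python
import numpy as np

def fuse_features(modalities):
    """Feature-level (early) fusion: z-score each modality, then concatenate.

    modalities: list of arrays, each (n_trials, n_features_m),
    e.g. micro-expression, EEG, GSR, and PPG feature matrices.
    Returns array (n_trials, total_features) ready for any classifier.
    """
    normed = []
    for X in modalities:
        mu, sd = X.mean(axis=0), X.std(axis=0) + 1e-12
        normed.append((X - mu) / sd)   # common scale across modalities
    return np.concatenate(normed, axis=1)
```

An alternative design choice is late fusion, where each modality gets its own model and only the predictions are combined; which works better is an empirical question per dataset.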
15. Situated VR: Toward a Congruent Hybrid Reality Without Experiential Artifacts. IEEE Computer Graphics and Applications 2022; 42:7-18. PMID: 35671280. DOI: 10.1109/mcg.2022.3154358.
Abstract
The vision of extended reality (XR) systems is living in a world where real and virtual elements seamlessly and contextually augment experiences of ourselves and the worlds we inhabit. While this integration promises exciting opportunities for the future of XR, it comes with the risk of experiential distortions and feelings of dissociation, especially in virtual reality (VR). When transitioning from a virtual world to the real world, users report experiential structures that linger on, as a sort of afterimage, causing disruptions in their daily lives. In this work, we define these atypical experiences as experiential artifacts (EAs) and present preliminary results from an informal online survey of 76 VR users to highlight different types of artifacts and their durations. To avoid disruptions caused by these artifacts while increasing the user's sense of presence, we propose the idea of situated VR, which blends the real and virtual in novel ways that can reduce incongruencies between the two worlds. We discuss the implications of EAs and, through examples from our own work in building hybrid experiences, demonstrate the potential and relevance of situated VR in the design of a future, more immersive, artifact-free hybrid reality.
16.
Abstract
Physical props serving as proxies for virtual objects (haptic proxies) offer a cheap, convenient, and compelling way of delivering a sense of touch in virtual reality (VR). To successfully use haptic proxies for VR, they have to be both similar to and colocated with their virtual counterparts. In this article, we introduce a taxonomy organizing techniques using haptic proxies for VR into eight categories based on when the techniques are deployed (offline or real-time), what reality is being manipulated (physical or virtual reality), and the purpose of the techniques (to affect object perception or the mapping between real and virtual objects). Finally, we discuss key advantages and limitations of the different categories of techniques.
17. Special Issue on Highlights of ACM Intelligent User Interface (IUI) 2018. ACM Transactions on Interactive Intelligent Systems 2020. DOI: 10.1145/3357206.
18. Getting your game on: Using virtual reality to improve real table tennis skills. PLoS One 2019; 14:e0222351. PMID: 31504070. PMCID: PMC6736297. DOI: 10.1371/journal.pone.0222351.
Abstract
Objective: The present study investigates skill transfer from Virtual Reality (VR) sports training to the real world, using the fast-paced sport of table tennis. Background: A key assumption of VR training is that the learned skills and experiences transfer to the real world. Yet, in certain application areas, such as VR sports training, the research testing this assumption is sparse. Design: Real-world table tennis performance was assessed using a mixed-model analysis of variance. The analysis comprised a between-subjects (VR training group vs. control group) and a within-subjects (pre- and post-training) factor. Method: Fifty-seven participants (23 females) were assigned to either a VR training group (n = 29) or a no-training control group (n = 28). During VR training, participants were immersed in competitive table tennis matches against an artificial intelligence opponent. An expert table tennis coach evaluated participants on real-world table tennis play before and after the training phase. Blinded to participants' group assignment, the expert assessed participants' backhand, forehand, and serving on quantitative aspects (e.g., count of rallies without errors) and quality-of-skill aspects (e.g., technique and consistency). Results: VR training significantly improved participants' real-world table tennis performance compared to the no-training control group in both quantitative (p < .001, Cohen's d = 1.08) and quality-of-skill assessments (p < .001, Cohen's d = 1.10). Conclusions: This study adds to a sparse yet expanding literature demonstrating real-world skill transfer from Virtual Reality in an athletic task.
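Effect sizes like the d = 1.08 reported here compare group means against a pooled standard deviation. As a sketch (the exact formula variant the authors used is an assumption), Cohen's d for a between-subjects comparison is:

```python
import numpy as np

def cohens_d(group_a, group_b):
    """Cohen's d with a pooled standard deviation (between-subjects design)."""
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    na, nb = len(a), len(b)
    # Pool the two sample variances, weighted by degrees of freedom.
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)
```

A d around 1.1, as reported above, means the VR group's scores exceeded the control group's by roughly one pooled standard deviation, conventionally a large effect.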
19. The Effects of Sharing Awareness Cues in Collaborative Mixed Reality. Front Robot AI 2019; 6:5. PMID: 33501022. PMCID: PMC7805624. DOI: 10.3389/frobt.2019.00005.
Abstract
Augmented and Virtual Reality provide unique capabilities for Mixed Reality collaboration. This paper explores how different combinations of virtual awareness cues can provide users with valuable information about their collaborator's attention and actions. In a user study (n = 32, 16 pairs), we compared different combinations of three cues: Field-of-View (FoV) frustum, Eye-gaze ray, and Head-gaze ray against a baseline condition showing only virtual representations of each collaborator's head and hands. Through a collaborative object finding and placing task, the results showed that awareness cues significantly improved user performance, usability, and subjective preferences, with the combination of the FoV frustum and the Head-gaze ray being best. This work establishes the feasibility of room-scale MR collaboration and the utility of providing virtual awareness cues.
20. Superman vs Giant: A Study on Spatial Perception for a Multi-Scale Mixed Reality Flying Telepresence Interface. IEEE Transactions on Visualization and Computer Graphics 2018; 24:2974-2982. PMID: 30387715. DOI: 10.1109/tvcg.2018.2868594.
Abstract
The advancements in Mixed Reality (MR), Unmanned Aerial Vehicles, and multi-scale collaborative virtual environments have led to new interface opportunities for remote collaboration. This paper explores a novel concept of flying telepresence for multi-scale mixed reality remote collaboration. This work could enable remote collaboration at a larger scale, such as building construction. We conducted a user study with three experiments. The first experiment compared two interfaces, static and dynamic interpupillary distance (IPD), on simulator sickness and body size perception. The second experiment tested user perception of a virtual object's size under three levels of IPD and movement gain manipulation with a fixed eye height in virtual environments having reduced or rich visual cues. Our last experiment investigated participants' body size perception for two levels of manipulation of IPD and height using stereo video footage to simulate a flying telepresence experience. The studies found that manipulating IPD and eye height influenced the user's size perception. We present our findings and share recommendations for designing a multi-scale MR flying telepresence interface.
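The geometric intuition behind these manipulations can be captured by a rule of thumb (our simplification, not the paper's model): stereoscopic scale perception varies roughly inversely with the virtual IPD, so widening the IPD shrinks the apparent world (the "giant" feeling) and narrowing it enlarges the world:

```python
def perceived_scale(real_ipd_mm, virtual_ipd_mm):
    """Rule-of-thumb stereoscopic scale: world appears scaled by real/virtual IPD.

    Widening the virtual IPD shrinks the apparent world (the user feels like a
    giant); narrowing it enlarges the world. Eye height is typically scaled by
    the same factor to keep the viewpoint consistent with the implied body size.
    """
    return real_ipd_mm / virtual_ipd_mm
```

For example, doubling a 64 mm IPD to 128 mm makes the world appear half-sized, which is consistent with the paper's finding that IPD and eye-height manipulation shift size perception.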
21. Revisiting Trends in Augmented Reality Research: A Review of the 2nd Decade of ISMAR (2008-2017). IEEE Transactions on Visualization and Computer Graphics 2018; 24:2947-2962. PMID: 30188833. DOI: 10.1109/tvcg.2018.2868591.
Abstract
In 2008, Zhou et al. presented a survey paper summarizing the previous ten years of ISMAR publications, which provided invaluable insights into the research challenges and trends of that period. Ten years later, we review the research that has been presented at ISMAR conferences since the survey of Zhou et al., at a time when both academia and the AR industry are experiencing dramatic technological changes. Here we consider the research results and trends of the last decade of ISMAR by carefully reviewing the ISMAR publications from 2008-2017 in the context of the first ten years. The number of papers per research topic and their impact by citations were analyzed during the review, revealing a sharp increase in AR evaluation and rendering research. Based on this review, we offer observations related to potential future research areas and trends, which could be helpful to AR researchers and industry members looking ahead.
22. A Comparison of Predictive Spatial Augmented Reality Cues for Procedural Tasks. IEEE Transactions on Visualization and Computer Graphics 2018; 24:2846-2856. PMID: 30334797. DOI: 10.1109/tvcg.2018.2868587.
Abstract
Previous research has demonstrated that Augmented Reality can reduce a user's task response time and mental effort when completing a procedural task. This paper investigates techniques to improve user performance and reduce mental effort by providing projector-based Spatial Augmented Reality predictive cues for future responses. The objective of the two experiments conducted in this study was to isolate the performance and mental effort differences from several different annotation cueing techniques for simple (Experiment 1) and complex (Experiment 2) button-pressing tasks. Comporting with existing cognitive neuroscience literature on prediction, attentional orienting, and interference, we hypothesized that for both simple procedural tasks and complex search-based tasks, having a visual cue guiding to the next task's location would positively impact performance relative to a baseline, no-cue condition. Additionally, we predicted that direction-based cues would provide a more significant positive impact than target-based cues. The results indicated that providing a line to the next task was the most effective technique for improving the users' task time and mental effort in both the simple and complex tasks.
23. Narrative and Spatial Memory for Jury Viewings in a Reconstructed Virtual Environment. IEEE Transactions on Visualization and Computer Graphics 2018; 24:2917-2926. PMID: 30222574. DOI: 10.1109/tvcg.2018.2868569.
Abstract
This paper showcases one way virtual reconstruction can be used in a courtroom. The results of a pilot study on narrative and spatial memory are presented in the context of viewing real and virtual copies of a simulated crime scene. Based on current court procedures, three different viewing options were compared: photographs, a real-life visit, and a 3D virtual reconstruction of the scene viewed in a Virtual Reality headset. Participants were also given a written narrative that included the spatial locations of stolen goods and were measured on their ability to recall and understand the spatial relationships of those stolen items. The results suggest that Virtual Reality is more reliable for spatial memory than photographs and that Virtual Reality provides a compromise for when physical viewing of a crime scene is not possible. We conclude that Virtual Reality is a promising medium for the court.
Collapse
|
24
|
A Systematic Review of 10 Years of Augmented Reality Usability Studies: 2005 to 2014. Front Robot AI 2018; 5:37. [PMID: 33500923 PMCID: PMC7805955 DOI: 10.3389/frobt.2018.00037] [Citation(s) in RCA: 62] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2017] [Accepted: 03/19/2018] [Indexed: 11/13/2022] Open
Abstract
Augmented Reality (AR) interfaces have been studied extensively over the last few decades, with a growing number of user-based experiments. In this paper, we systematically review 10 years of the most influential AR user studies, from 2005 to 2014. A total of 291 papers with 369 individual user studies have been reviewed and classified based on their application areas. The primary contribution of the review is to present the broad landscape of user-based AR research, and to provide a high-level view of how that landscape has changed. We summarize the high-level contributions from each category of papers, and present examples of the most influential user studies. We also identify areas where there have been few user studies, and opportunities for future research. Among other things, we find that there is a growing trend toward handheld AR user studies, and that most studies are conducted in laboratory settings and do not involve pilot testing. This research will be useful for AR researchers who want to follow best practices in designing their own AR user studies.
Collapse
|
25
|
LivePhantom: Retrieving Virtual World Light Data to Real Environments. PLoS One 2016; 11:e0166424. [PMID: 27930663 PMCID: PMC5145151 DOI: 10.1371/journal.pone.0166424] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2016] [Accepted: 10/30/2016] [Indexed: 11/18/2022] Open
Abstract
To achieve realistic Augmented Reality (AR), shadows play an important role in creating a 3D impression of a scene. Casting virtual shadows on real and virtual objects is one of the topics of research being conducted in this area. In this paper, we propose a new method for creating complex AR indoor scenes that uses real-time depth detection to cast virtual shadows onto virtual and real environments. A Kinect camera was used to produce a depth map of the physical scene, fused into a single real-time transparent implicit surface. Once this surface is created, the camera's position can be tracked from the reconstructed 3D scene. Real objects are represented by virtual object phantoms in the AR scene, enabling users holding a webcam and a standard Kinect camera to capture and reconstruct environments simultaneously. The tracking capability of the algorithm is demonstrated, and the findings are assessed using qualitative and quantitative methods, with comparisons to previous AR phantom-generation applications. The results demonstrate the robustness of the technique for realistic indoor rendering in AR systems.
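The core idea of depth-based virtual shadowing can be illustrated with a classic shadow-map test. The sketch below is not the paper's implementation; it is a minimal NumPy illustration, assuming we already have a depth map rendered from the light's viewpoint and the scene depth of each screen pixel re-expressed in the light's frame (both hypothetical inputs):

```python
import numpy as np

def shadow_mask(light_depth_map, scene_depth_from_light, bias=0.01):
    """Classic shadow-map test: a pixel is shadowed when the surface it
    shows lies farther from the light than the first occluder recorded
    in the light's depth map (a small bias avoids shadow acne)."""
    return scene_depth_from_light > light_depth_map + bias

def apply_shadow(image, mask, darkening=0.5):
    """Darken shadowed pixels of an RGB image; phantom geometry standing
    in for real objects would receive shadows the same way."""
    out = image.astype(np.float32)
    out[mask] *= darkening
    return np.clip(out, 0, 255).astype(np.uint8)
```

In a phantom-based pipeline, the virtual stand-ins for real objects participate in this test like any other geometry, which is what lets virtual shadows fall convincingly across real surfaces.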
Collapse
|
26
|
Do You See What I See? The Effect of Gaze Tracking on Task Space Remote Collaboration. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2016; 22:2413-22. [PMID: 27479970 DOI: 10.1109/tvcg.2016.2593778] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
Abstract
We present results from research exploring the effect of sharing virtual gaze and pointing cues in a wearable interface for remote collaboration. A local worker wears a head-mounted camera, an eye-tracking camera, and a Head-Mounted Display, and shares video and virtual gaze information with a remote helper. The remote helper can provide feedback using a virtual pointer on the live video view. The prototype system was evaluated with a formal user study. Comparing four conditions, (1) NONE (no cue), (2) POINTER, (3) EYE-TRACKER, and (4) BOTH (both pointer and eye-tracker cues), we observed that task completion performance was best in the BOTH condition, significantly better than with the POINTER or EYE-TRACKER cue alone. The use of eye tracking and a pointer also significantly improved the co-presence felt between the users. We discuss the implications of this research and the limitations of the developed system that could be addressed in future work.
Collapse
|
27
|
Climbing With a Head-Mounted Display: Dual-Task Costs. HUMAN FACTORS 2016; 58:452-461. [PMID: 26865416 DOI: 10.1177/0018720815623431] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/15/2015] [Accepted: 11/26/2015] [Indexed: 06/05/2023]
Abstract
OBJECTIVE We explored the dual-task costs of climbers performing a visual communication task using a head-mounted display (HMD) while simultaneously climbing along a vertical surface. BACKGROUND Climbing is affected by secondary auditory cognitive tasks, and climbing impairs later recall of secondary task information; the effects of visually presented tasks are less clear. Given that HMDs are projected to be adopted into emergency response work, questions are raised about the effects of HMD use during climbing or other physical tasks. METHOD Climbers performed five conditions in a repeated-measures design: a climbing-only condition, two dual-task climbing conditions (words presented on the HMD with and without auditory warnings while climbing), and two seated control conditions (words presented on the HMD with and without auditory warnings). Motion data were also collected to examine participant motion around word presentation. RESULTS We found a decrease in both climbing performance and word recall under dual-task conditions, paralleling results found in previous research using auditory tasks. Participants slowed around word presentations on the HMD. Additional comparisons to previous research indicate that physical tasks may be more detrimental to word recall than are seated tasks and that visual stimuli might hinder climbing performance more than do audible stimuli. CONCLUSION Complex physical activity, like climbing, is disruptive to memory rehearsal and later recall, and cognitive tasks disrupt physical performance. APPLICATION Avoiding cognitive HMD tasks requiring later recall during complex physical activity is advisable. However, these systems may be developed to provide intelligent assistance, or memory augmentation, in these settings.
Collapse
|
30
|
Hands in space: gesture interaction with augmented-reality interfaces. IEEE COMPUTER GRAPHICS AND APPLICATIONS 2014; 34:77-80. [PMID: 24808171 DOI: 10.1109/mcg.2014.8] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
Researchers at the Human Interface Technology Laboratory New Zealand (HIT Lab NZ) are investigating free-hand gestures for natural interaction with augmented-reality interfaces. They've applied the results to systems for desktop computers and mobile devices.
Collapse
|
31
|
Shape recognition and pose estimation for mobile Augmented Reality. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2011; 17:1369-1379. [PMID: 21041876 DOI: 10.1109/tvcg.2010.241] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
Nestor is a real-time recognition and camera pose estimation system for planar shapes. The system allows shapes that carry contextual meaning for humans to be used as Augmented Reality (AR) tracking targets. The user can teach the system new shapes in real time. New shapes can be shown to the system frontally, or they can be automatically rectified according to previously learned shapes. Shapes can be automatically assigned virtual content by classification according to a shape class library. Nestor performs shape recognition by analyzing contour structures and generating projective-invariant signatures from their concavities. The concavities are further used to extract features for pose estimation and tracking. Pose refinement is carried out by minimizing the reprojection error between sample points on each image contour and its library counterpart. Sample points are matched by evolving an active contour in real time. Our experiments show that the system provides stable and accurate registration, and runs at interactive frame rates on a Nokia N95 mobile phone.
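For planar targets like these, camera registration reduces to estimating the homography that maps library contour points onto image contour points and measuring how well it reprojects. The sketch below is a generic illustration of that step, not Nestor's code: a Direct Linear Transform fit over already-matched point pairs (the matching itself, done in the paper by an active contour, is assumed):

```python
import numpy as np

def fit_homography(src, dst):
    """Direct Linear Transform: least-squares homography H mapping
    src -> dst (Nx2 arrays, N >= 4). Each correspondence contributes
    two linear constraints on the 9 entries of H; the SVD null vector
    of the stacked system gives H up to scale."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def reprojection_error(H, src, dst):
    """Mean Euclidean distance between projected src points and dst."""
    pts = np.c_[src, np.ones(len(src))] @ H.T
    proj = pts[:, :2] / pts[:, 2:3]
    return float(np.mean(np.linalg.norm(proj - dst, axis=1)))
```

Pose refinement as described in the abstract amounts to iteratively driving this reprojection error down as the contour match is updated each frame.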
Collapse
|
32
|
Human-Robot Collaboration: A Literature Review and Augmented Reality Approach in Design. INT J ADV ROBOT SYST 2008. [DOI: 10.5772/5664] [Citation(s) in RCA: 153] [Impact Index Per Article: 9.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022] Open
Abstract
NASA's vision for space exploration stresses the cultivation of human-robotic systems. Similar systems are also envisaged for a variety of hazardous earthbound applications such as urban search and rescue. Recent research has pointed out that to reduce human workload, costs, fatigue-driven error, and risk, intelligent robotic systems will need to be a significant part of mission design. However, little attention has been paid to joint human-robot teams. Making human-robot collaboration natural and efficient is crucial. In particular, grounding, situational awareness, a common frame of reference, and spatial referencing are vital to effective communication and collaboration. Augmented Reality (AR), the overlaying of computer graphics onto the real-world view, can provide the necessary means for a human-robotic system to fulfill these requirements for effective collaboration. This article reviews the field of human-robot interaction and augmented reality, investigates the potential avenues for creating natural human-robot collaboration through spatial dialogue utilizing AR, and proposes a holistic architectural design for human-robot collaboration.
Collapse
|
33
|
Abstract
Most interactive computer graphics appear on a screen separate from the real world and the user's surroundings. However, this does not always have to be the case. In augmented reality (AR) interfaces, three-dimensional virtual images appear superimposed over real objects. AR applications typically use head-mounted or handheld displays to make computer graphics appear in the user's environment.
Collapse
|
34
|
Abstract
A person stands in front of a large projection screen on which is shown a checked floor. They say, "Make a table," and a wooden table appears in the middle of the floor. "On the table, place a vase," they continue, gesturing with a fist relative to the palm of their other hand to show the relative location of the vase on the table. A vase appears at the correct location. "Next to the table place a chair." A chair appears to the right of the table. "Rotate it like this," they say while rotating their hand, and the chair turns towards the table. "View the scene from this direction," they say while pointing one hand towards the palm of the other. The scene rotates to match their hand orientation.
In a matter of moments, a simple scene has been created using natural speech and gesture. The interface of the future? Not at all; Koons, Thorisson and Bolt demonstrated this work in 1992 [23]. Although research such as this has shown the value of combining speech and gesture at the interface, most computer graphics are still being developed with tools no more intuitive than a mouse and keyboard. This need not be the case. Current speech and gesture technologies make multimodal interfaces with combined voice and gesture input easily achievable. There are several commercial versions of continuous dictation software currently available, while tablets and pens are widely supported in graphics applications. However, having this capability doesn't mean that voice and gesture should be added to every modeling package in a haphazard manner. There are numerous issues that must be addressed in order to develop an intuitive interface that uses the strengths of both input modalities.

In this article we describe motivations for adding voice and gesture to graphical applications, review previous work showing different ways these modalities may be used, and outline some general interface guidelines. Finally, we give an overview of promising areas for future research. Our motivation for writing this is to spur developers to build compelling interfaces that will make speech and gesture as common on the desktop as the keyboard and mouse.
Collapse
|
35
|
New interface metaphors for complex information space visualization: an ECG monitor object prototype. Stud Health Technol Inform 1996; 39:131-40. [PMID: 10168910] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/11/2023]
Abstract
Wearable augmented reality medical (WARM) interfaces could provide ubiquitous point-of-care decision support and enhance the quality and efficiency of clinicians' efforts. Creation of such systems involves the design and evaluation of new information displays that leverage the representational and presentational capabilities of three-dimensional AR environments. We describe our first efforts in this process: the implementation of interface objects for display of real-time electrocardiographic monitoring information and an evaluation methodology using a simulated clinical environment. Our pilot data confirm the utility of presentation modes that place simultaneous information tasks in close proximity, and highlight issues encountered in designing new representations of medical information.
Collapse
|
36
|
Poster 38: Expert Systems in Otolaryngology. Otolaryngol Head Neck Surg 1996. [DOI: 10.1016/s0194-5998(96)80792-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
37
|
The expert surgical assistant. An intelligent virtual environment with multimodal input. Stud Health Technol Inform 1995; 29:590-607. [PMID: 10172851] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/11/2023]
Abstract
Virtual Reality has made computer interfaces more intuitive but not more intelligent. This paper shows how an expert system can be coupled with multimodal input in a virtual environment to provide an intelligent simulation tool or surgical assistant. This is accomplished in three steps. First, voice and gestural input is interpreted and represented in a common semantic form. Second, a rule-based expert system is used to infer context and user actions from this semantic representation. Finally, the inferred user actions are matched against steps in a surgical procedure to monitor the user's progress and provide automatic feedback. In addition, the system can respond immediately to multimodal commands for navigational assistance and/or identification of critical anatomical structures. To show how these methods are used we present a prototype sinus surgery interface. The approach described here may easily be extended to a wide variety of medical and non-medical training applications by making simple changes to the expert system database and virtual environment models. Successful implementation of an expert system in both simulated and real surgery has enormous potential for the surgeon both in training and clinical practice.
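The three-step pipeline described here (semantic interpretation of multimodal input, rule-based inference, matching against procedure steps) can be caricatured in a few lines. Everything below is a toy illustration under invented assumptions: the procedure steps, the parser, and the feedback strings are all hypothetical, not the paper's system:

```python
# Hypothetical ordered procedure, standing in for the paper's
# expert-system database of surgical steps.
PROCEDURE = [
    {"action": "identify", "target": "maxillary sinus"},
    {"action": "insert", "target": "endoscope"},
    {"action": "remove", "target": "uncinate process"},
]

def interpret(utterance, gesture_target=None):
    """Toy semantic parser: merge a spoken verb with either the spoken
    object or the object selected by a pointing gesture, producing the
    common semantic form both modalities share."""
    words = utterance.lower().split()
    action = words[0]
    target = gesture_target or " ".join(words[1:])
    return {"action": action, "target": target}

class ProcedureMonitor:
    """Match interpreted actions against the next expected step and
    report progress, or warn when the user deviates."""

    def __init__(self, steps):
        self.steps = steps
        self.index = 0

    def observe(self, frame):
        expected = self.steps[self.index]
        if frame == expected:
            self.index += 1
            return f"step {self.index} complete"
        return f"warning: expected {expected['action']} {expected['target']}"
```

The point of the common semantic form is visible in `interpret`: a fully spoken command and a spoken verb plus pointing gesture both collapse to the same frame, so the monitoring rules never need to know which modality supplied the target.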
Collapse
|
38
|
Abstract
[C7H16AsO2]+·Br−, Mr = 287.03, orthorhombic, P2(1)2(1)2(1), a = 10.121 (3), b = 11.745 (2), c = 9.530 (1) Å, V = 1132.8 (6) Å³, Z = 4, Dx = 1.682 Mg m⁻³, λ(Mo Kα) = 0.71069 Å, μ = 6.86 mm⁻¹, F(000) = 568, T = 296 K, R = 0.034 for 919 observed reflections. Crystalline acetylarsenocholine bromide exists in the trans-gauche conformation, which is similar to the solution conformation of acetylcholine. The cationic structure is compared with known crystalline acetylcholine salts. In the crystal structure, each Br⁻ ion appears to link the arsonium ends of four cations.
Collapse
|
39
|
Hidden-part suppression on three-dimensional plots. Med Phys 1975; 2:26-7. [PMID: 1128457 DOI: 10.1118/1.594161] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022] Open
Abstract
The conceptual and mathematical description of a method to draw a three-dimensional histographic surface in oblique projection, with the drawing of hidden parts suppressed, is given.
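Hidden-part suppression for a histographic surface is classically done with a floating-horizon scan. The sketch below is a generic illustration of that family of methods, not a reconstruction of this paper's exact projection: rows are processed nearest-first, and a point is drawn only if it escapes the band between the highest and lowest screen heights seen so far in its column:

```python
import numpy as np

def floating_horizon(z, screen_width):
    """Floating-horizon hidden-line suppression. `z` is a list of rows
    of screen heights, ordered nearest-first; each column keeps an
    upper and lower horizon, and a point is visible only if it rises
    above the upper horizon or dips below the lower one."""
    upper = np.full(screen_width, -np.inf)
    lower = np.full(screen_width, np.inf)
    visible = []
    for row in z:
        for x, h in enumerate(row):
            if h > upper[x] or h < lower[x]:
                visible.append((x, h))
            upper[x] = max(upper[x], h)
            lower[x] = min(lower[x], h)
    return visible
```

An oblique projection like the paper's would additionally shear each row horizontally before the horizon test, but the suppression logic is unchanged.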
Collapse
|