1
Wang M, Li YJ, Shi J, Steinicke F. SceneFusion: Room-Scale Environmental Fusion for Efficient Traveling Between Separate Virtual Environments. IEEE Transactions on Visualization and Computer Graphics 2024; 30:4615-4630. [PMID: 37126613 DOI: 10.1109/tvcg.2023.3271709]
Abstract
Traveling between scenes has become a major requirement for navigation in numerous virtual reality (VR) social platforms and game applications, allowing users to efficiently explore multiple virtual environments (VEs). To facilitate scene transition, prevalent techniques such as instant teleportation and virtual portals have been extensively adopted. However, these techniques exhibit limitations when there is a need for frequent travel between separate VEs, particularly within indoor environments, resulting in low efficiency. In this article, we first analyze the design rationale for a novel navigation method supporting efficient travel between virtual indoor scenes. Based on the analysis, we introduce the SceneFusion technique that fuses separate virtual rooms into an integrated environment. SceneFusion enables users to perceive rich visual information from both rooms simultaneously, achieving high visual continuity and spatial awareness. While existing teleportation techniques passively transport users, SceneFusion allows users to actively access the fused environment using short-range locomotion techniques. User experiments confirmed that SceneFusion outperforms instant teleportation and virtual portal techniques in terms of efficiency, workload, and preference for both single-user exploration and multi-user collaboration tasks in separate VEs. Thus, SceneFusion presents an effective solution for seamless traveling between virtual indoor scenes.
2
Croucher C, Powell W, Stevens B, Miller-Dicks M, Powell V, Wiltshire TJ, Spronck P. LoCoMoTe - A Framework for Classification of Natural Locomotion in VR by Task, Technique and Modality. IEEE Transactions on Visualization and Computer Graphics 2024; 30:5765-5781. [PMID: 37695974 DOI: 10.1109/tvcg.2023.3313439]
Abstract
Virtual reality (VR) research has provided overviews of locomotion techniques, how they work, their strengths, and the overall user experience. Considerable research has investigated new methodologies, particularly machine learning, to develop redirection algorithms. To best support the development of redirection algorithms through machine learning, we must understand how best to replicate human navigation and behaviour in VR, which can be supported by the accumulation of results produced through live-user experiments. However, it can be difficult to identify, select, and compare relevant research without a pre-existing framework in an ever-growing research field. Therefore, this work aimed to facilitate the ongoing structuring and comparison of the VR-based natural walking literature by providing a standardised framework for researchers to utilise. We applied thematic analysis to study methodology descriptions from 140 VR-based papers that contained live-user experiments. From this analysis, we developed the LoCoMoTe framework with three themes: navigational decisions, technique implementation, and modalities. The LoCoMoTe framework provides a standardised approach to structuring and comparing experimental conditions. The framework should be continually updated to categorise and systematise knowledge and to aid in identifying research gaps and discussions.
3
Herbert OM, Pérez-Granados D, Ruiz MAO, Cadena Martínez R, Gutiérrez CAG, Antuñano MAZ. Static and Dynamic Hand Gestures: A Review of Techniques of Virtual Reality Manipulation. Sensors (Basel) 2024; 24:3760. [PMID: 38931542 PMCID: PMC11207792 DOI: 10.3390/s24123760]
Abstract
This review explores the historical and current significance of gestures as a universal form of communication with a focus on hand gestures in virtual reality applications. It highlights the evolution of gesture detection systems from the 1990s, which used computer algorithms to find patterns in static images, to the present day where advances in sensor technology, artificial intelligence, and computing power have enabled real-time gesture recognition. The paper emphasizes the role of hand gestures in virtual reality (VR), a field that creates immersive digital experiences through the blending of 3D modeling, sound effects, and sensing technology. This review presents state-of-the-art hardware and software techniques used in hand gesture detection, primarily for VR applications. It discusses the challenges in hand gesture detection, classifies gestures as static and dynamic, and grades their detection difficulty. This paper also reviews the haptic devices used in VR and their advantages and challenges. It provides an overview of the process used in hand gesture acquisition, from inputs and pre-processing to pose detection, for both static and dynamic gestures.
Affiliation(s)
- Oswaldo Mendoza Herbert
- Engineering Department, Centro de Investigación, Innovación y Desarrollo Tecnológico de UVM (CIIDETEC-Querétaro), Universidad del Valle de México, Querétaro 76230, Mexico
- David Pérez-Granados
- Engineering Department, Centro de Investigación, Innovación y Desarrollo Tecnológico de UVM (CIIDETEC-Coyoacán), Universidad del Valle de México, Coyoacán 04910, Mexico
- Mauricio Alberto Ortega Ruiz
- Engineering Department, Centro de Investigación, Innovación y Desarrollo Tecnológico de UVM (CIIDETEC-Coyoacán), Universidad del Valle de México, Coyoacán 04910, Mexico
- Rodrigo Cadena Martínez
- Postgraduate Department, Universidad Tecnológica de México (UNITEC), Mexico City 11320, Mexico
- Carlos Alberto González Gutiérrez
- Engineering Department, Centro de Investigación, Innovación y Desarrollo Tecnológico de UVM (CIIDETEC-Querétaro), Universidad del Valle de México, Querétaro 76230, Mexico
- Marco Antonio Zamora Antuñano
- Engineering Department, Centro de Investigación, Innovación y Desarrollo Tecnológico de UVM (CIIDETEC-Querétaro), Universidad del Valle de México, Querétaro 76230, Mexico
4
Novotny J, Laidlaw DH. Evaluating Text Reading Speed in VR Scenes and 3D Particle Visualizations. IEEE Transactions on Visualization and Computer Graphics 2024; 30:2602-2612. [PMID: 38437104 DOI: 10.1109/tvcg.2024.3372093]
Abstract
This work reports how text size and other rendering conditions affect reading speeds in a virtual reality environment and a scientific data analysis application. Displaying text legibly yet space-efficiently is a challenging problem in immersive displays. Effective text displays that enable users to read at their maximum speed must consider the variety of virtual reality (VR) display hardware and possible visual exploration tasks. We investigate how text size and display parameters affect reading speed and legibility in three state-of-the-art VR displays: two head-mounted displays and one CAVE. In our perception experiments, we establish limits where reading speed declines as the text size approaches the so-called critical print sizes (CPS) of individual displays, which can inform the design of uniform reading experiences across different VR systems. We observe an inverse correlation between display resolution and CPS. Yet, even in high-fidelity VR systems, the measured CPS was larger than in comparable physical text displays, highlighting the value of increased VR display resolutions in certain visualization scenarios. Our findings indicate that CPS can be an effective metric for evaluating VR display usability. Additionally, we evaluate the effects of text panel placement, orientation, and occlusion-reducing rendering methods on reading speeds in generic volumetric particle visualizations. Our study provides insights into the trade-off between text representation and legibility in cluttered immersive environments, with specific suggestions for visualization designers, and highlights areas for further research.
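Print-size metrics such as CPS depend on the angle the text subtends at the eye rather than its physical height, so comparisons across displays are usually made in angular units (degrees or logMAR). The conversion can be sketched as follows; function names are illustrative and the numbers are not values from the study:

```python
import math

def visual_angle_deg(text_height_m: float, viewing_distance_m: float) -> float:
    """Angle (degrees) subtended by text of a given physical height
    viewed from a given distance: theta = 2 * atan(h / (2 * d))."""
    return math.degrees(2.0 * math.atan(text_height_m / (2.0 * viewing_distance_m)))

def logmar_from_arcmin(detail_arcmin: float) -> float:
    """logMAR for a letter whose critical detail subtends the given number
    of arcminutes (1 arcmin of detail corresponds to logMAR 0)."""
    return math.log10(detail_arcmin)
```

For example, 1 cm tall text viewed from 0.5 m subtends roughly 1.15 degrees, regardless of whether it is rendered on an HMD panel or a CAVE wall.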
5
Souchet AD, Lourdeaux D, Burkhardt JM, Hancock PA. Design guidelines for limiting and eliminating virtual reality-induced symptoms and effects at work: a comprehensive, factor-oriented review. Front Psychol 2023; 14:1161932. [PMID: 37359863 PMCID: PMC10288216 DOI: 10.3389/fpsyg.2023.1161932]
Abstract
Virtual reality (VR) can induce side effects known as virtual reality-induced symptoms and effects (VRISE). To address this concern, we identify a literature-based listing of factors thought to influence VRISE, with a focus on office work use. Using those factors, we recommend guidelines for VRISE amelioration intended for virtual environment creators and users. We identify five VRISE risks, focusing on short-term symptoms with their short-term effects. Three overall factor categories are considered: individual, hardware, and software. Over 90 factors may influence VRISE frequency and severity. We identify guidelines for each factor to help reduce VR side effects. To reflect our confidence in those guidelines, we graded each with a level-of-evidence rating. Common factors occasionally influence different forms of VRISE, which can lead to confusion in the literature. General guidelines for using VR at work involve worker adaptation, such as limiting immersion times to between 20 and 30 min. These regimens involve taking regular breaks. Extra care is required for workers with special needs, neurodiversity, and gerontechnological concerns. In addition to following our guidelines, stakeholders should be aware that current head-mounted displays and virtual environments can continue to induce VRISE. While no single existing method fully alleviates VRISE, workers' health and safety must be monitored and safeguarded when VR is used at work.
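The 20-30 minute immersion guideline with regular breaks can be operationalized as a simple work/break scheduler. The sketch below is illustrative; the durations are parameters chosen within the reviewed range, not prescriptions from the review itself:

```python
def session_schedule(total_minutes: int, max_immersion: int = 25,
                     break_minutes: int = 5) -> list[tuple[str, int]]:
    """Split a required amount of immersion time (total_minutes counts
    immersion only) into blocks no longer than max_immersion, separated
    by breaks, following the 20-30 min continuous-immersion guideline."""
    blocks: list[tuple[str, int]] = []
    elapsed = 0
    while elapsed < total_minutes:
        work = min(max_immersion, total_minutes - elapsed)
        blocks.append(("immersion", work))
        elapsed += work
        if elapsed < total_minutes:  # no trailing break after the last block
            blocks.append(("break", break_minutes))
    return blocks
```

For a 60-minute task this yields three immersion blocks (25, 25, 10 minutes) with two 5-minute breaks between them.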
Affiliation(s)
- Alexis D. Souchet
- Heudiasyc UMR 7253, Alliance Sorbonne Université, Université de Technologie de Compiègne, CNRS, Compiègne, France
- Institute for Creative Technologies, University of Southern California, Los Angeles, CA, United States
- Domitile Lourdeaux
- Heudiasyc UMR 7253, Alliance Sorbonne Université, Université de Technologie de Compiègne, CNRS, Compiègne, France
- Peter A. Hancock
- Department of Psychology, University of Central Florida, Orlando, FL, United States
6
Moullec Y, Cogne M, Saint-Aubert J, Lecuyer A. Assisted walking-in-place: Introducing assisted motion to walking-by-cycling in embodied virtual reality. IEEE Transactions on Visualization and Computer Graphics 2023; 29:2796-2805. [PMID: 37015135 DOI: 10.1109/tvcg.2023.3247070]
Abstract
In this paper, we investigate the use of a motorized bike to support the walk of a self-avatar in virtual reality (VR). While existing walking-in-place (WIP) techniques render compelling walking experiences, they can be judged repetitive and strenuous. Our approach consists of assisting a WIP technique so that the user does not have to actively move, in order to reduce effort and fatigue. We chose to assist a technique called walking-by-cycling, which maps the cycling motion of a bike onto the walking of the user's self-avatar, by using a motorized bike. We expected that our approach could provide participants with a compelling walking experience while reducing the effort required to navigate. We conducted a within-subjects study where we compared "assisted walking-by-cycling" to a traditional active walking-by-cycling implementation, and to a standard condition where the user is static. In the study, we measured embodiment, including ownership and agency, walking sensation, perceived effort, and fatigue. Results showed that assisted walking-by-cycling induced more ownership, agency, and walking sensation than the static simulation. Additionally, assisted walking-by-cycling induced levels of ownership and walking sensation similar to those of active walking-by-cycling, but with less perceived effort. Taken together, this work promotes the use of assisted walking-by-cycling in situations where users cannot or do not want to exert much effort while walking in embodied VR, such as for injured or disabled users, prolonged use, medical rehabilitation, or virtual visits.
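The core idea of walking-by-cycling, mapping pedaling motion onto the avatar's gait, can be sketched as two small transfer functions. The constants and function names below are illustrative assumptions, not the authors' implementation:

```python
def cycling_to_walking_speed(cadence_rpm: float,
                             gain_m_per_rev: float = 0.8) -> float:
    """Map pedaling cadence to avatar walking speed (m/s). The linear
    gain (meters of avatar travel per pedal revolution) is an assumed
    illustrative value."""
    revs_per_second = cadence_rpm / 60.0
    return revs_per_second * gain_m_per_rev

def avatar_step_phase(time_s: float, speed_mps: float,
                      stride_length_m: float = 1.4) -> float:
    """Phase in [0, 1) of the avatar's gait cycle, advanced by distance
    traveled so foot-falls stay consistent with the rendered displacement."""
    if speed_mps <= 0.0:
        return 0.0
    distance = speed_mps * time_s
    return (distance / stride_length_m) % 1.0
```

With a motorized bike, `cadence_rpm` comes from motor-driven rather than user-driven rotation, so the same mapping yields walking animation without user effort.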
7
Pavic K, Chaby L, Gricourt T, Vergilino-Perez D. Feeling Virtually Present Makes Me Happier: The Influence of Immersion, Sense of Presence, and Video Contents on Positive Emotion Induction. Cyberpsychology, Behavior, and Social Networking 2023; 26:238-245. [PMID: 37001171 PMCID: PMC10125398 DOI: 10.1089/cyber.2022.0245]
Abstract
Immersive technologies, such as Virtual Reality (VR), have great potential for enhancing users' emotions and wellbeing. However, how immersion, Virtual Environment contents, and sense of presence (SoP) influence emotional responses remains to be clarified in order to efficiently foster positive emotions. Consequently, a total of 26 participants (16 women, 10 men, 22.73 ± 2.69 years old) were exposed to 360-degree videos of natural and social contents on both a highly immersive Head-Mounted Display and a low immersive computer screen. Subjective emotional responses and SoP were assessed after each video using self-reports, while a wearable wristband continuously collected electrodermal activity and heart rate to record physiological emotional responses. Findings supported the added value of immersion, as more positive emotions and greater subjective arousal were reported after viewing the videos in the highly immersive setting, regardless of the video contents. In addition to the usually employed natural contents, the findings also provide initial evidence for the effectiveness of social contents in eliciting positive emotions. Finally, structural equation models shed light on the indirect effect of immersion, through spatial SoP, on subjective arousal. Overall, these are encouraging results about the effectiveness of VR for fostering positive emotions. Future studies should further investigate the influence of user characteristics on VR experiences to efficiently foster positive emotions among a broad range of potential users.
Affiliation(s)
- Katarina Pavic
- Université Paris Cité, Vision Action Cognition (VAC), Boulogne-Billancourt, France
- Sorbonne Université, CNRS, Institut des Systèmes Intelligents et de Robotique (ISIR), Paris, France
- Research and Development Department, SocialDream, Bourg-de-Péage, France
- Address correspondence to: Dr. Katarina Pavic, Université Paris Cité, Vision Action Cognition (VAC), 71 Avenue Edouard Vaillant, Boulogne-Billancourt Cedex 92774, France
- Laurence Chaby
- Sorbonne Université, CNRS, Institut des Systèmes Intelligents et de Robotique (ISIR), Paris, France
- Université Paris Cité, UFR de Psychologie, Boulogne-Billancourt, France
- Thierry Gricourt
- Research and Development Department, SocialDream, Bourg-de-Péage, France
8
Hashemian AM, Adhikari A, Kruijff E, Heyde MVD, Riecke BE. Leaning-Based Interfaces Improve Ground-Based VR Locomotion in Reach-the-Target, Follow-the-Path, and Racing Tasks. IEEE Transactions on Visualization and Computer Graphics 2023; 29:1748-1768. [PMID: 34847032 DOI: 10.1109/tvcg.2021.3131422]
Abstract
Using standard handheld interfaces for VR locomotion may not provide a believable self-motion experience and can contribute to unwanted side effects such as motion sickness, disorientation, or increased cognitive load. This paper demonstrates how using a seated leaning-based locomotion interface, HeadJoystick, for ground-based VR navigation affects user experience, usability, and performance. In three within-subject studies, we compared a controller (touchpad/thumbstick) with a more embodied interface ("HeadJoystick") where users moved their head and/or leaned in the direction of desired locomotion. In both conditions, users sat on a regular office chair and used it to control virtual rotations. In the first study, 24 participants used HeadJoystick versus Controller in three complementary tasks: reach-the-target, follow-the-path, and racing (dynamic obstacle avoidance). In the second study, 18 participants repeatedly used HeadJoystick versus Controller (8 one-minute trials each) in a reach-the-target task. To evaluate potential benefits of different brake mechanisms, in the third study 18 participants were asked to stop within each target area for one second. All three studies consistently showed advantages of HeadJoystick over Controller: we observed improved performance in all tasks, as well as higher user ratings for enjoyment, spatial presence, immersion, vection intensity, usability, ease of learning, ease of use, and rated potential for daily and long-term use, while reducing motion sickness and task load. Overall, our results suggest that leaning-based interfaces such as HeadJoystick provide an interesting and more embodied alternative to handheld interfaces in driving, reach-the-target, and follow-the-path tasks, and potentially a wider range of scenarios.
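A leaning-based interface like HeadJoystick essentially turns head displacement from a calibrated neutral pose into a velocity command. The sketch below shows one plausible mapping with a dead zone to suppress unintentional drift; all names and thresholds are illustrative assumptions, not the paper's actual parameters:

```python
from dataclasses import dataclass

@dataclass
class LeanLocomotion:
    """Illustrative leaning-based locomotion mapping (not the paper's
    implementation): head offset from a calibrated neutral pose is mapped
    to a ground-plane velocity, with a small dead zone near neutral."""
    dead_zone_m: float = 0.02   # ignore tiny head movements around neutral
    max_lean_m: float = 0.25    # lean distance that gives full speed
    max_speed_mps: float = 3.0  # top locomotion speed

    def velocity(self, head_offset_x: float, head_offset_z: float) -> tuple[float, float]:
        """Per-axis mapping of head offset (meters) to velocity (m/s)."""
        return self._axis(head_offset_x), self._axis(head_offset_z)

    def _axis(self, offset: float) -> float:
        magnitude = abs(offset)
        if magnitude < self.dead_zone_m:
            return 0.0
        # Normalize the usable lean range into [0, 1], then scale to speed.
        usable = min(magnitude, self.max_lean_m) - self.dead_zone_m
        gain = usable / (self.max_lean_m - self.dead_zone_m)
        return self.max_speed_mps * gain * (1.0 if offset > 0 else -1.0)
```

Leaning back past the dead zone produces a negative (reverse) velocity, which is one simple way to realize the braking behavior examined in the third study.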
9
Jeong D, Jeong M, Yang U, Han K. Eyes on me: Investigating the role and influence of eye-tracking data on user modeling in virtual reality. PLoS One 2022; 17:e0278970. [PMID: 36580442 PMCID: PMC9799296 DOI: 10.1371/journal.pone.0278970]
Abstract
Research has shown that sensor data generated by a user during a VR experience is closely related to the user's behavior or state, meaning that the VR user can be quantitatively understood and modeled. Eye-tracking as a sensor signal has been studied in prior research, but its usefulness in a VR context has been less examined, and most extant studies have dealt with eye-tracking within a single environment. Our goal is to expand the understanding of the relationship between eye-tracking data and user modeling in VR. In this paper, we examined the role and influence of eye-tracking data in predicting a level of cybersickness and types of locomotion. We developed and applied the same structure of a deep learning model to the multi-sensory data collected from two different studies (cybersickness and locomotion) with a total of 50 participants. The experiment results highlight not only a high applicability of our model to sensor data in a VR context, but also a significant relevance of eye-tracking data as a potential supplement to improving the model's performance and the importance of eye-tracking data in learning processes overall. We conclude by discussing the relevance of these results to potential future studies on this topic.
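Raw gaze samples are usually reduced to event-level features (fixations, saccades, blinks) before being fed to a model. As one common preprocessing step, a dispersion-threshold (I-DT) fixation detector can be sketched as follows; the thresholds are illustrative, and the study's exact feature pipeline is not specified in the abstract:

```python
def detect_fixations(gaze, min_duration=3, max_dispersion=1.0):
    """Minimal dispersion-threshold (I-DT) fixation detector.
    `gaze` is a list of (x, y) gaze positions in degrees; returns a list
    of (start, end) sample-index pairs for detected fixations."""
    fixations = []
    start = 0
    while start + min_duration <= len(gaze):
        end = start + min_duration
        if _dispersion(gaze[start:end]) <= max_dispersion:
            # Grow the window while dispersion stays under the threshold.
            while end < len(gaze) and _dispersion(gaze[start:end + 1]) <= max_dispersion:
                end += 1
            fixations.append((start, end))
            start = end
        else:
            start += 1
    return fixations

def _dispersion(window):
    """Dispersion = (max x - min x) + (max y - min y) over the window."""
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))
```

Summary statistics over the detected events (fixation count, mean duration, etc.) are the kind of eye-tracking features that could supplement motion and physiological streams in a multi-sensory model like the one described.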
Affiliation(s)
- Dayoung Jeong
- Department of Artificial Intelligence, Hanyang University, Seoul, Republic of Korea
- Mingon Jeong
- Department of Artificial Intelligence, Hanyang University, Seoul, Republic of Korea
- Ungyeon Yang
- Electronics and Telecommunications Research Institute, Daejeon, Republic of Korea
- Kyungsik Han
- Department of Artificial Intelligence, Hanyang University, Seoul, Republic of Korea
10
Zary N, Eysenbach G, VanDerNagel J, Heylen D. Immersive Virtual Reality Avatars for Embodiment Illusions in People With Mild to Borderline Intellectual Disability: User-Centered Development and Feasibility Study. JMIR Serious Games 2022; 10:e39966. [PMID: 36476721 PMCID: PMC9773028 DOI: 10.2196/39966]
Abstract
BACKGROUND Immersive virtual reality (IVR) has been investigated as a tool for treating psychiatric conditions. In particular, the practical nature of IVR, by offering a doing instead of talking approach, could support people who do not benefit from existing treatments. Hence, people with mild to borderline intellectual disability (MBID; IQ=50-85) might profit particularly from IVR therapies, for instance, to circumvent issues in understanding relevant concepts and interrelations. In this context, immersing the user into a virtual body (ie, avatar) appears promising for enhancing learning (eg, by changing perspectives) and usability (eg, natural interactions). However, design requirements, immersion procedures, and proof of concept of such embodiment illusion (ie, substituting the real body with a virtual one) have not been explored in this group. OBJECTIVE Our study aimed to establish design guidelines for IVR embodiment illusions in people with MBID. We explored 3 factors to induce the illusion, by testing the avatar's appearance, locomotion using IVR controllers, and virtual object manipulation. Furthermore, we report on the feasibility to induce the embodiment illusion and provide procedural guidance. METHODS We conducted a user-centered study with 29 end users in care facilities, to investigate the avatar's appearance, controller-based locomotion (ie, teleport, joystick, or hybrid), and object manipulation. Overall, 3 iterations were conducted using semistructured interviews to explore design factors to induce embodiment illusions in our group. To further understand the influence of interactions on the illusion, we measured the sense of embodiment (SoE) during 5 interaction tasks. RESULTS IVR embodiment illusions can be induced in adults with MBID. To induce the illusion, having a high degree of control over the body outweighed avatar customization, despite the participants' desire to replicate their own body image. 
Similarly, the highest SoE was measured during object manipulation tasks, which required a combination of (virtual) locomotion and object manipulation behavior. Notably, interactions that are implausible (eg, teleport and occlusions when grabbing) showed a negative influence on SoE. In contrast, implementing artificial interaction aids into the IVR avatar's hands (ie, for user interfaces) did not diminish the illusion, presuming that the control was unimpaired. Nonetheless, embodiment illusions showed a tedious and complex need for (control) habituation (eg, motion sickness), possibly hindering uptake in practice. CONCLUSIONS Balancing the embodiment immersion by focusing on interaction habituation (eg, controller-based locomotion) and lowering customization effort seems crucial to achieve both high SoE and usability for people with MBID. Hence, future studies should investigate the requirements for natural IVR avatar interactions by using multisensory integrations for the virtual body (eg, animations, physics-based collision, and touch) and other interaction techniques (eg, hand tracking and redirected walking). In addition, procedures and use for learning should be explored for tailored mental health therapies in people with MBID.
Affiliation(s)
- Joanne VanDerNagel
- Department of Human Media Interaction, University of Twente, Enschede, Netherlands
- Centre for Addiction and Intellectual Disability, Tactus Addiction Care, Enschede, Netherlands
- Nijmegen Institute for Scientist-Practitioners in Addiction, Radboud University, Nijmegen, Netherlands
- Dirk Heylen
- Department of Human Media Interaction, University of Twente, Enschede, Netherlands
11
Lei MK, Cheng KB. Biomechanical fidelity of athletic training using virtual reality head-mounted display: the case of preplanned and unplanned sidestepping. Sports Biomech 2022:1-22. [PMID: 36412262 DOI: 10.1080/14763141.2022.2146528]
Abstract
Virtual reality has recently been recognised as an effective tool for investigating visual-perceptual tasks. To develop a sport-specific virtual environment with realistic locomotion, it is crucial to examine the effect of using virtual reality devices on athletes performing intense and complex movements. Twelve collegiate football players were instructed to perform preplanned and unplanned sidestepping in both real and virtual environments, with the virtual environment replicating the dimensions and experimental setup of the real one. Analysis of the performance and knee biomechanical parameters showed that movements performed in the two environments were generally comparable. Consistent changes in approach velocity and knee angle/moment under unplanned conditions (compared with preplanned conditions) were also found in the virtual environment as in the real one, except for the significantly larger peak flexion angle (p < .05) observed in the virtual environment. Interestingly, half of the participants changed from producing an abduction to an adduction moment at the weight-acceptance phase in the preplanned condition (p < .05). These findings suggest that while it is generally feasible to use virtual reality head-mounted displays for designated experiments and training, the effect of wearing virtual reality devices could be somewhat subject-specific.
Affiliation(s)
- Man Kit Lei
- Institute of Physical Education, Health and Leisure Studies, National Cheng Kung University, Tainan City, Taiwan
- Kuangyou B Cheng
- Institute of Physical Education, Health and Leisure Studies, National Cheng Kung University, Tainan City, Taiwan
12
The Trends and Challenges of Virtual Technology Usage in Western Balkan Educational Institutions. Information 2022. [DOI: 10.3390/info13110525]
Abstract
Higher educational institutions in Western Balkan countries strive for continuous development of their teaching and learning processes. One of the priorities is employing state-of-the-art technology to facilitate experience-based learning, and virtual and augmented reality are two of the most effective solutions for providing the opportunity to practice acquired theoretical knowledge. This report presents (apart from a theoretical introduction to the issue) an overall picture of the knowledge of AR and VR technology in education at Western Balkan universities. It is based on a semi-structured online questionnaire whose recipients were academic staff and students from universities in Albania, Kosovo, and North Macedonia. The questionnaire differed for each target group: the version for academics comprised 11 questions for 710 respondents, and the version for students comprised 10 questions for 2217 respondents. This paper presents and discusses the results for each question with the aim of illustrating the current state of VR and AR application in education in Western Balkan countries.
13
Oxley JA, Meyer G, Cant I, Bellantuono GM, Butcher M, Levers A, Westgarth C. A pilot study investigating human behaviour towards DAVE (Dog Assisted Virtual Environment) and interpretation of non-reactive and aggressive behaviours during a virtual reality exploration task. PLoS One 2022; 17:e0274329. [PMID: 36170291 PMCID: PMC9518854 DOI: 10.1371/journal.pone.0274329]
Abstract
Dog aggression is a public health concern because dog bites often lead to physical and psychological trauma in humans. It is also a welfare concern for dogs. To prevent aggressive behaviours, it is important to understand human behaviour towards dogs and our ability to interpret signs of dog aggression; studying this with live dogs, however, poses ethical challenges for both humans and dogs. The aim of this study was to introduce, describe and pilot test a virtual reality dog model (DAVE (Dog Assisted Virtual Environment)). The Labrador model has two different modes displaying aggressive and non-reactive non-aggressive behaviours. The aggressive behaviours displayed are based on the current understanding of canine ethology and expert feedback. The objective of the study was to test the recognition of dog behaviour and associated human approach and avoidance behaviour. Sixteen university students were recruited via an online survey to participate in a practical study, and randomly allocated to two experimental conditions, an aggressive followed by a non-reactive virtual reality model (group AN) or vice versa (group NA). Participants were instructed to 'explore the area' in each condition, followed by a survey. Wilcoxon and Mann-Whitney U tests were used to compare the closest distance to the dog within and between groups, respectively. Participants moved overall significantly closer to the non-reactive dog compared to the aggressive dog (p≤0.001; r = 0.8). Descriptions of the aggressive dog given by participants often used motivational or emotional terms. There was little evidence of simulator sickness, and presence scores were high, indicating sufficient immersion in the virtual environment. Participants appeared to perceive the dog as realistic and behaved and interacted with the dog model in a manner that might be expected during an interaction with a live dog. This study also highlights promising results for the potential future use of virtual reality in behavioural research (i.e., human-dog interactions), education (i.e., safety around dogs) and psychological treatment (e.g., dog phobia treatment).
Affiliation(s)
- James A. Oxley
- Department of Livestock and One Health, University of Liverpool, Leahurst Campus, Neston, Cheshire, United Kingdom
- Georg Meyer
- Institute of Digital Engineering and Autonomous Systems, University of Liverpool, Liverpool, United Kingdom
- Iain Cant
- Virtual Engineering Centre, Daresbury, United Kingdom
- Andrew Levers
- Virtual Engineering Centre, Daresbury, United Kingdom
- Carri Westgarth
- Department of Livestock and One Health, University of Liverpool, Leahurst Campus, Neston, Cheshire, United Kingdom
Collapse
14
A Typology of Virtual Reality Locomotion Techniques. MULTIMODAL TECHNOLOGIES AND INTERACTION 2022. [DOI: 10.3390/mti6090072] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
Researchers have proposed a wide range of categorization schemes to characterize the space of VR locomotion techniques. In a previous work, a typology of VR locomotion techniques was proposed, introducing motion-based, roomscale-based, controller-based, and teleportation-based types of VR locomotion. Because (i) the proposed typology is widely used and has had a significant research impact in the field, and (ii) VR locomotion remains a considerably active research field, the typology needs to be kept up to date and valid. Therefore, the present study builds on this previous work, and the typology’s consistency is investigated through a systematic literature review. Altogether, 42 articles were included in this literature review, eliciting 80 instances of 10 VR locomotion techniques. The results indicated that the current typology could not cover teleportation-based techniques enabled by motion (e.g., gestures and gazes). Therefore, the typology was updated, and a new type was added: “motion-based teleporting.”
15
Technologies for Multimodal Interaction in Extended Reality—A Scoping Review. MULTIMODAL TECHNOLOGIES AND INTERACTION 2021. [DOI: 10.3390/mti5120081] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
When designing extended reality (XR) applications, it is important to consider multimodal interaction techniques, which employ several human senses simultaneously. Multimodal interaction can transform how people communicate remotely, practice for tasks, entertain themselves, process information visualizations, and make decisions based on the provided information. This scoping review summarized recent advances in multimodal interaction technologies for head-mounted display-based (HMD) XR systems. Our purpose was to provide a succinct, yet clear, insightful, and structured overview of emerging, underused multimodal technologies beyond standard video and audio for XR interaction, and to find research gaps. The review aimed to help XR practitioners to apply multimodal interaction techniques and interaction researchers to direct future efforts towards relevant issues on multimodal XR. We conclude with our perspective on promising research avenues for multimodal interaction technologies.
16
Sakhare A, Stradford J, Ravichandran R, Deng R, Ruiz J, Subramanian K, Suh J, Pa J. Simultaneous Exercise and Cognitive Training in Virtual Reality Phase 2 Pilot Study: Impact on Brain Health and Cognition in Older Adults. Brain Plast 2021; 7:111-130. [PMID: 34868877 PMCID: PMC8609488 DOI: 10.3233/bpl-210126] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/27/2021] [Indexed: 11/15/2022] Open
Abstract
Background: Aerobic exercise and environmental enrichment have been shown to enhance brain function. Virtual reality (VR) is a promising method for combining these activities in a meaningful and ecologically valid way. Objective: The purpose of this Phase 2 pilot study was to calculate relative change and effect sizes to assess the impact of simultaneous exercise and cognitive training in VR on brain health and cognition in older adults. Methods: Twelve cognitively normal older adults (64.7±8.8 years old, 8 female) participated in a 12-week intervention, 3 sessions/week for 25–50 minutes/session at 50–80% HRmax. Participants cycled on a custom-built stationary exercise bike while wearing a VR head-mounted display and navigating novel virtual environments to train spatial memory. Brain and cognitive changes were assessed using MRI and a cognitive battery. Results: Medium effect size (ES) improvements in cerebral blood flow and brain structure were observed. Pulsatility, a measure of peripheral vascular resistance, decreased 10.5% (ES(d) = 0.47). Total grey matter volume increased 0.73% (ES(r) = 0.38), while thickness of the superior parietal lobule, a region associated with spatial orientation, increased 0.44% (ES(r) = 0.30). Visual memory discrimination related to pattern separation showed a large improvement of 68% (ES(ηp²) = 0.43). Cognitive flexibility (Trail Making Test B) (ES(r) = 0.42) and response inhibition (ES(W) = 0.54) showed medium improvements of 14% and 34%, respectively. Conclusions: Twelve weeks of simultaneous exercise and cognitive training in VR elicits positive changes in brain volume, vascular resistance, memory, and executive function with moderate-to-large effect sizes in our pilot study.
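The relative-change and paired effect-size figures quoted in this abstract follow standard formulas; a minimal stdlib sketch (with invented numbers, not the study's data) might look like:

```python
import math

def percent_change(before, after):
    # Relative change between two measurements, e.g. a 10.5% drop in pulsatility.
    return (after - before) / before * 100.0

def cohens_d(diffs):
    # Cohen's d for paired data: mean difference over the SD of the differences.
    # (One of several effect-size metrics the study reports.)
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((x - mean) ** 2 for x in diffs) / (n - 1)
    return mean / math.sqrt(var)

# Hypothetical pre/post pulsatility values for one measure.
print(round(percent_change(1.000, 0.895), 1))  # -10.5 (a 10.5% decrease)
```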
Affiliation(s)
- Ashwin Sakhare
- Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA; Department of Neurology, Mark and Mary Stevens Neuroimaging and Informatics Institute, University of Southern California, Los Angeles, CA, USA
- Joy Stradford
- Department of Neurology, Mark and Mary Stevens Neuroimaging and Informatics Institute, University of Southern California, Los Angeles, CA, USA
- Roshan Ravichandran
- Department of Neurology, Mark and Mary Stevens Neuroimaging and Informatics Institute, University of Southern California, Los Angeles, CA, USA
- Rong Deng
- Department of Neurology, Mark and Mary Stevens Neuroimaging and Informatics Institute, University of Southern California, Los Angeles, CA, USA
- Julissa Ruiz
- Department of Neurology, Mark and Mary Stevens Neuroimaging and Informatics Institute, University of Southern California, Los Angeles, CA, USA
- Keshav Subramanian
- Department of Neurology, Mark and Mary Stevens Neuroimaging and Informatics Institute, University of Southern California, Los Angeles, CA, USA
- Jaymee Suh
- Department of Neurology, Mark and Mary Stevens Neuroimaging and Informatics Institute, University of Southern California, Los Angeles, CA, USA
- Judy Pa
- Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA; Department of Neurology, Mark and Mary Stevens Neuroimaging and Informatics Institute, University of Southern California, Los Angeles, CA, USA
17
ATON: An Open-Source Framework for Creating Immersive, Collaborative and Liquid Web-Apps for Cultural Heritage. APPLIED SCIENCES-BASEL 2021. [DOI: 10.3390/app112211062] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
The web and its recent advancements represent a great opportunity to build universal, rich, multi-user and immersive Web3D/WebXR applications targeting the Cultural Heritage field, including 3D presenters, inspection tools, applied VR games, collaborative teaching tools and much more. This opportunity, however, introduces additional challenges besides the common issues and limitations typically encountered in this context. The “ideal” Web3D application should be able to reach every device, automatically adapting its interface, rendering and interaction models, resulting in a single, liquid product that can be consumed on mobile devices, PCs, museum kiosks and immersive AR/VR devices, without any installation required of final users. The open-source ATON framework is the result of research and development activities carried out during the last 5 years through national and international projects: it is designed around modern and robust web standards, open specifications and large open-source ecosystems. This paper describes the framework architecture and its components, assessed and validated through different case studies. ATON offers institutions, researchers and professionals a scalable, flexible and modular solution to craft and deploy liquid web applications, providing novel and advanced features for the Cultural Heritage field in terms of 3D presentation, annotation, immersive interaction and real-time collaboration.
18
Maroli A, Narwane VS, Gardas BB. Applications of IoT for achieving sustainability in agricultural sector: A comprehensive review. JOURNAL OF ENVIRONMENTAL MANAGEMENT 2021; 298:113488. [PMID: 34388541 DOI: 10.1016/j.jenvman.2021.113488] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/31/2021] [Revised: 07/22/2021] [Accepted: 08/05/2021] [Indexed: 06/13/2023]
Abstract
This research aims to achieve a holistic understanding of the current IoT scenario by conducting a comparative analysis of the prevailing literature on IoT applications in the agricultural domain. It also proposes a framework for IoT adoption in this sector. A systematic literature review was conducted, focusing on scientific articles authored in English and published in peer-reviewed journals in the last five years. Initially, 179 research papers were extracted from the SCOPUS database; 82 relevant articles were ultimately considered for the study, classified into various categories and studied thoroughly. Based on a comprehensive survey of the selected articles, four research questions were identified and addressed. The results highlighted that research efforts pertaining to IoT applications in agriculture have matured from their initial conceptual stage and have now reached the implementation phase. It was also observed that machine-learning-based algorithms were utilized extensively in recent research studies. For the first time, an exhaustive study has been conducted holistically to comprehend recent advances in the field of IoT applications for the agricultural sector.
Affiliation(s)
- Ankit Maroli
- Department of Mechanical Engineering, K.J. Somaiya College of Engineering, Somaiya Vidyavihar University, Ghatkopar East, Mumbai, Maharashtra, 400077, India
- Vaibhav S Narwane
- Department of Mechanical Engineering, K.J. Somaiya College of Engineering, Somaiya Vidyavihar University, Ghatkopar East, Mumbai, Maharashtra, 400077, India
- Bhaskar B Gardas
- Department of Mechanical Engineering, M.H. Saboo Siddik College of Engineering, University of Mumbai, 8, Saboo Siddik Polytechnic Road, Mumbai, Maharashtra, 400008, India
19
Steed A, Takala TM, Archer D, Lages W, Lindeman RW. Directions for 3D User Interface Research from Consumer VR Games. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2021; 27:4171-4182. [PMID: 34449366 DOI: 10.1109/tvcg.2021.3106431] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
With the continuing development of affordable immersive virtual reality (VR) systems, there is now a growing market for consumer content. The current form of consumer systems is not dissimilar to the lab-based VR systems of the past 30 years: the primary input mechanism is a head-tracked display and one or two tracked hands with buttons and joysticks on hand-held controllers. Over those 30 years, a very diverse academic literature has emerged that covers design and ergonomics of 3D user interfaces (3DUIs). However, the growing consumer market has engaged a very broad range of creatives that have built a very diverse set of designs. Sometimes these designs adopt findings from the academic literature, but other times they experiment with completely novel or counter-intuitive mechanisms. In this paper and its online adjunct, we report on novel 3DUI design patterns that are interesting from both design and research perspectives: they are highly novel, potentially broadly re-usable and/or suggest interesting avenues for evaluation. The supplemental material, which is a living document, is a crowd-sourced repository of interesting patterns. This paper is a curated snapshot of those patterns that were considered to be the most fruitful for further elaboration.
20
Oriti D, Manuri F, Pace FD, Sanna A. Harmonize: a shared environment for extended immersive entertainment. VIRTUAL REALITY 2021; 27:1-14. [PMID: 34642567 PMCID: PMC8495433 DOI: 10.1007/s10055-021-00585-4] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/15/2020] [Accepted: 09/18/2021] [Indexed: 06/13/2023]
Abstract
Virtual reality (VR) and augmented reality (AR) applications are widespread nowadays. Moreover, recent technological innovations have led to the spread of commercial head-mounted displays for immersive VR: users can enjoy entertainment activities that fill their visual fields, experiencing the sensation of physical presence in these immersive virtual environments. Even if AR and VR are mostly used separately, they can be effectively combined to provide a multi-user shared environment (SE), where two or more users perform specific tasks in a cooperative or competitive way, providing a wider set of interactions and use cases than immersive VR alone. However, due to the differences between the two technologies, it is difficult to develop SEs that offer a similar experience to both AR and VR users. This paper presents Harmonize, a novel framework for deploying applications based on SEs with a comparable experience for both AR and VR users. Moreover, the framework is hardware-independent and has been designed to be as extendable to novel hardware as possible. An immersive game was designed to test and evaluate the validity of the proposed framework. The assessment of the system through the System Usability Scale questionnaire and the Game Experience Questionnaire shows a positive evaluation.
Affiliation(s)
- Damiano Oriti
- Dipartimento di Automatica e Informatica, Politecnico di Torino, Corso Duca degli Abruzzi, 24, 10129 Torino, Italy
- Federico Manuri
- Dipartimento di Automatica e Informatica, Politecnico di Torino, Corso Duca degli Abruzzi, 24, 10129 Torino, Italy
- Francesco De Pace
- Dipartimento di Automatica e Informatica, Politecnico di Torino, Corso Duca degli Abruzzi, 24, 10129 Torino, Italy
- Andrea Sanna
- Dipartimento di Automatica e Informatica, Politecnico di Torino, Corso Duca degli Abruzzi, 24, 10129 Torino, Italy
21
Kim YM, Rhiu I. A comparative study of navigation interfaces in virtual reality environments: A mixed-method approach. APPLIED ERGONOMICS 2021; 96:103482. [PMID: 34116411 DOI: 10.1016/j.apergo.2021.103482] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/01/2020] [Revised: 04/26/2021] [Accepted: 05/20/2021] [Indexed: 06/12/2023]
Abstract
Recently, motion-based navigation interfaces have been widely utilized in virtual reality (VR) environments. However, improper navigation interfaces can negatively impact the VR experience, and because different interfaces have different characteristics, the navigation experience may vary. Although comparative studies have been conducted with various interfaces, the information obtained from qualitative evaluation has been limited. Thus, this study explores the effects of three navigation interfaces (walking-in-place (WIP), joystick, and teleportation) on user performance, sense of presence, workload, usability, and motion sickness through a mixed-method design. Twenty-one participants were asked to perform a navigation task using the selected navigation interfaces. The results indicated different advantages and disadvantages of the navigation interfaces for each evaluation metric. In particular, it was found that more research on user safety is required for the WIP interface. The findings of this study are expected to contribute to the development of guidelines for applying navigation interfaces to specific VR environments.
Affiliation(s)
- Yong Min Kim
- Department of Big Data and AI, Hoseo University, Asan, South Korea
- Ilsun Rhiu
- Division of Future Convergence (HCI Science Major), Dongduk Women's University, Seoul, South Korea
22
Methodological and institutional considerations for the use of 360-degree video and pet animals in human subject research: An experimental case study from the United States. Behav Res Methods 2021; 53:977-992. [PMID: 32918168 DOI: 10.3758/s13428-020-01458-5] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Head-mounted virtual-reality headsets and virtual-reality content have experienced large technological advances and rapid proliferation in recent years. These immersive technologies bear great potential for facilitating the study of human decision-making and behavior in safe, perceptually realistic virtual environments. Best practices and guidelines for the effective and efficient use of 360-degree video in experimental research are still evolving. In this paper, we summarize our research group's experiences with a sizable experimental case study involving virtual-reality technology, 360-degree video, pet animals, and human participants. Specifically, we discuss the institutional, methodological, and technological challenges encountered during the implementation of our 18-month-long research project on human emotional responses to short-duration 360-degree videos of human-pet interactions. Our objective in this paper is to contribute to the growing body of research on 360-degree video and to lower barriers related to the conceptualization and practice of research at the intersection of virtual-reality experiences, 360-degree video, live animals, and human behavior. Practical suggestions are offered for human-subject researchers interested in utilizing virtual-reality technology, 360-degree videos, and pet animals as part of their research.
23
Comparison of Controller-Based Locomotion Techniques for Visual Observation in Virtual Reality. MULTIMODAL TECHNOLOGIES AND INTERACTION 2021. [DOI: 10.3390/mti5070031] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Many virtual reality (VR) applications use teleport for locomotion. The non-continuous locomotion of teleport is suited for VR controllers and can minimize simulator sickness, but it can also reduce spatial awareness compared to continuous locomotion. Our aim was to create continuous, controller-based locomotion techniques that would support spatial awareness. We compared the new techniques, slider and grab, with teleport in a task where participants counted small visual targets in a VR environment. Task performance was assessed by asking participants to report how many visual targets they found. The results showed that slider and grab were significantly faster to use than teleport, and they did not cause significantly more simulator sickness than teleport. Moreover, the continuous techniques provided better spatial awareness than teleport.
24
Li H, Mavros P, Krukar J, Hölscher C. The effect of navigation method and visual display on distance perception in a large-scale virtual building. Cogn Process 2021; 22:239-259. [PMID: 33564939 PMCID: PMC8179918 DOI: 10.1007/s10339-020-01011-4] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2020] [Accepted: 12/16/2020] [Indexed: 11/30/2022]
Abstract
Immersive virtual reality (VR) technology has become a popular method for fundamental and applied spatial cognition research. One challenge researchers face is emulating walking in a large-scale virtual space while the user is in fact in a small physical space. To address this, a variety of movement interfaces in VR have been proposed, from traditional joysticks to teleportation and omnidirectional treadmills. These movement methods tap into different mental processes of spatial learning during navigation, but their impacts on distance perception remain unclear. In this paper, we investigated the role of visual display, proprioception, and optic flow on distance perception in a large-scale building by manipulating four different movement methods. Eighty participants either walked in a real building or moved through its virtual replica using one of three movement methods: VR-treadmill, VR-touchpad, and VR-teleportation. Results revealed that, first, visual display played a major role in both perceived and traversed distance estimates but did not impact environmental distance estimates. Second, proprioception and optic flow did not impact the overall accuracy of distance perception, but having only intermittent optic flow (in the VR-teleportation movement method) impaired the precision of traversed distance estimates. In conclusion, movement method plays a significant role in distance perception but does not impact the configurational knowledge learned in a large-scale real or virtual building, and the VR-touchpad movement method provides an effective interface for navigation in VR.
Affiliation(s)
- Hengshan Li
- Future Cities Laboratory, Singapore-ETH Centre, 1 CREATE Way, CREATE Tower, 138602, Singapore, Singapore
- Panagiotis Mavros
- Future Cities Laboratory, Singapore-ETH Centre, 1 CREATE Way, CREATE Tower, 138602, Singapore, Singapore
- Jakub Krukar
- Institute for Geoinformatics, University of Muenster, Münster, Germany
- Christoph Hölscher
- Future Cities Laboratory, Singapore-ETH Centre, 1 CREATE Way, CREATE Tower, 138602, Singapore, Singapore
- Department of Humanities, Social and Political Sciences, ETH Zürich, Zurich, Switzerland
25
Gromer D, Kiser DP, Pauli P. Thigmotaxis in a virtual human open field test. Sci Rep 2021; 11:6670. [PMID: 33758204 PMCID: PMC7988123 DOI: 10.1038/s41598-021-85678-5] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2020] [Accepted: 03/04/2021] [Indexed: 11/16/2022] Open
Abstract
Animal models are used to study neurobiological mechanisms in mental disorders. Although there has been significant progress in the understanding of neurobiological underpinnings of threat-related behaviors and anxiety, little progress has been made with regard to new or improved treatments for mental disorders. A possible reason for this lack of success is the unknown predictive and cross-species translational validity of animal models used in preclinical studies. Re-translational approaches, therefore, seek to establish cross-species translational validity by identifying behavioral operations shared across species. To this end, we implemented a human open field test in virtual reality and measured behavioral indices derived from animal studies in three experiments (N = 31, N = 30, and N = 80). In addition, we investigated the associations between anxious traits and such behaviors. Results indicated a strong similarity in behavior across species, i.e., participants in our study, like rodents in animal studies, preferred to stay in the outer region of the open field, as indexed by multiple behavioral parameters. However, correlational analyses did not clearly indicate that these behaviors were a function of participants' anxious traits. We conclude that the realized virtual open field test is able to elicit thigmotaxis and thus demonstrates cross-species validity of this aspect of the test. Modulatory effects of anxiety on human open field behavior should be examined further by incorporating possible threats in the virtual scenario and/or by examining participants with higher anxiety levels or anxiety disorder patients.
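The key behavioural index here, thigmotaxis, is commonly quantified as the fraction of time an agent spends near the arena wall. A hedged sketch of such an index follows (the 20% wall-zone width, the circular arena and all names are illustrative assumptions, not the parameters used in the cited study):

```python
import math

def thigmotaxis_index(path, arena_radius, wall_zone=0.2):
    """Fraction of position samples in the outer wall zone of a circular arena.

    `path` holds (x, y) positions relative to the arena centre; the wall zone
    is the outer `wall_zone` fraction of the radius. All parameters here are
    illustrative assumptions, not the cited study's measure.
    """
    threshold = arena_radius * (1.0 - wall_zone)
    near_wall = sum(1 for x, y in path if math.hypot(x, y) > threshold)
    return near_wall / len(path)

# A short synthetic trajectory that hugs the wall of a 10 m radius arena.
path = [(9.5 * math.cos(t / 10), 9.5 * math.sin(t / 10)) for t in range(100)]
print(thigmotaxis_index(path, arena_radius=10.0))  # 1.0: every sample near the wall
```

A wall-hugging trajectory like this one scores 1.0, while a path confined to the centre would score 0.0.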
Affiliation(s)
- Daniel Gromer
- Department of Psychology (Biological Psychology, Clinical Psychology, and Psychotherapy), University of Würzburg, Würzburg, Germany
- Dominik P Kiser
- Department of Psychology (Biological Psychology, Clinical Psychology, and Psychotherapy), University of Würzburg, Würzburg, Germany
- Paul Pauli
- Department of Psychology (Biological Psychology, Clinical Psychology, and Psychotherapy), University of Würzburg, Würzburg, Germany; Center of Mental Health, Medical Faculty, University of Würzburg, Würzburg, Germany
26
Controlling Teleportation-Based Locomotion in Virtual Reality with Hand Gestures: A Comparative Evaluation of Two-Handed and One-Handed Techniques. ELECTRONICS 2021. [DOI: 10.3390/electronics10060715] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Virtual Reality (VR) technology offers users the possibility to immerse themselves in and freely navigate through virtual worlds. An important component for achieving a high degree of immersion in VR is locomotion. Although often discussed in the literature, a natural and effective way of controlling locomotion is still a general problem that needs to be solved. Recently, VR headset manufacturers have been integrating more sensors, allowing hand or eye tracking without any additional equipment. This enables a wide range of application scenarios with natural freehand interaction techniques where no additional hardware is required. This paper focuses on techniques to control teleportation-based locomotion with hand gestures, where users are able to move around in VR using their hands only. With the help of a comprehensive study involving 21 participants, four different techniques are evaluated: two two-handed and two one-handed. The study determines the effectiveness and efficiency of the presented techniques as well as user preferences, revealing that it is possible to move comfortably and effectively through virtual worlds with a single hand only.
27
Cannavo A, Calandra D, Prattico FG, Gatteschi V, Lamberti F. An Evaluation Testbed for Locomotion in Virtual Reality. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2021; 27:1871-1889. [PMID: 33079670 DOI: 10.1109/tvcg.2020.3032440] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
A common operation performed in Virtual Reality (VR) environments is locomotion. Although real walking can represent a natural and intuitive way to manage displacements in such environments, its use is generally limited by the size of the area tracked by the VR system (typically, the size of a room) or requires expensive technologies to cover particularly extended settings. A number of approaches have been proposed to enable effective exploration in VR, each characterized by different hardware requirements and costs, and each capable of providing different levels of usability and performance. However, the lack of a well-defined methodology for assessing and comparing available approaches makes it difficult to identify, among the various alternatives, the best solutions for selected application domains. To deal with this issue, this article introduces a novel evaluation testbed which, by building on the outcomes of many separate works reported in the literature, aims to support a comprehensive analysis of the considered design space. An experimental protocol for collecting objective and subjective measures is proposed, together with a scoring system able to rank locomotion approaches based on a weighted set of requirements. Testbed usage is illustrated in a use case requiring the selection of a locomotion technique for a given application scenario.
28
The Study of Walking, Walkability and Wellbeing in Immersive Virtual Environments. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2021; 18:ijerph18020364. [PMID: 33418896 PMCID: PMC7825096 DOI: 10.3390/ijerph18020364] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/25/2020] [Revised: 12/22/2020] [Accepted: 12/31/2020] [Indexed: 12/22/2022]
Abstract
Recent approaches in the research on walkable environments and wellbeing go beyond correlational analysis to consider the specific characteristics of individuals and their interaction with the immediate environment. Accordingly, a need has emerged for new human-centered methods to improve our understanding of the mechanisms underlying environmental effects on walking and, consequently, on wellbeing. Immersive virtual environments (IVEs) have been suggested as a method that can advance this type of research, as they offer a unique combination of controlled experimental environments that allow causal conclusions and a high level of environmental realism that supports ecological validity. The current study pilot tested a walking simulator with additional sensor technologies, including biosensors, eye tracking and gait sensors. Results found IVEs to facilitate extremely high spatio-temporal-resolution measurement of physical walking parameters (e.g., speed, number of gaits) along with walking experience and wellbeing (e.g., electrodermal activity, heart rate). This level of resolution is useful in linking specific environmental stimuli to psychophysiological and behavioral reactions, a linkage that cannot be obtained in real-world and self-report research designs. A set of guidelines for implementing IVE technology in research is suggested in order to standardize its use and allow new researchers to engage with this emerging field of research.
29
A Review of Automated Speech-Based Interaction for Cognitive Screening. MULTIMODAL TECHNOLOGIES AND INTERACTION 2020. [DOI: 10.3390/mti4040093] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Language, speech and conversational behaviours reflect cognitive changes that may precede physiological changes and offer a much more cost-effective option for detecting preclinical cognitive decline. Artificial intelligence and machine learning have been established as a means to facilitate automated speech-based cognitive screening through automated recording and analysis of linguistic, speech and conversational behaviours. In this work, a scoping literature review was performed to document and analyse current automated speech-based implementations for cognitive screening from the perspective of human–computer interaction. At this stage, the goal was to identify and analyse the characteristics that define the interaction between the automated speech-based screening systems and the users, potentially revealing interaction-related patterns and gaps. In total, 65 candidate articles were identified, of which 15 satisfied the inclusion criteria. The literature review led to the documentation and further analysis of five interaction-related themes: (i) user interface, (ii) modalities, (iii) speech-based communication, (iv) screening content and (v) screener. Cognitive screening through speech-based interaction might benefit from two practices: (1) implementing more multimodal user interfaces that facilitate, amongst others, speech-based screening and (2) introducing the element of motivation in the speech-based screening process.
Collapse
|
30
|
Two Decades of Touchable and Walkable Virtual Reality for Blind and Visually Impaired People: A High-Level Taxonomy. MULTIMODAL TECHNOLOGIES AND INTERACTION 2020. [DOI: 10.3390/mti4040079] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
Although most readers associate the term virtual reality (VR) with visually appealing entertainment content, this technology also promises to be helpful to disadvantaged people such as blind or visually impaired people. Virtual objects and environments that can be spatially explored are particularly beneficial because they overcome the limitations of physical objects and spaces. To give readers a complete, clear and concise overview of current and past publications on touchable and walkable audio-supplemented VR applications for blind and visually impaired users, this survey paper presents a high-level taxonomy to cluster the work done up to now from the perspective of technology, interaction and application. In this respect, we introduce a classification into small-, medium- and large-scale virtual environments to cluster and characterize related work. Our comprehensive table shows that grounded force-feedback devices for haptic feedback (‘small scale’) have been researched especially intensively across different application scenarios, mainly from an exocentric perspective, but there are also increasingly physically walkable (‘medium scale’) or avatar-walkable (‘large scale’) egocentric audio-haptic virtual environments. In this respect, novel and widespread interfaces such as smartphones and today's consumer-grade VR components represent a promising potential for further improvements. Our survey paper provides a database of related work to foster the creation of new ideas and approaches for both technical and methodological aspects.
Collapse
|
31
|
Recommendations for Integrating a P300-Based Brain–Computer Interface in Virtual Reality Environments for Gaming: An Update. COMPUTERS 2020. [DOI: 10.3390/computers9040092] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
The integration of a P300-based brain–computer interface (BCI) into virtual reality (VR) environments is promising for the video games industry. However, it faces several limitations, mainly due to hardware constraints and limitations engendered by the stimulation needed by the BCI. The main restriction is still the low transfer rate that can be achieved by current BCI technology, preventing movement while using VR. The goal of this paper is to review current limitations and to provide application creators with design recommendations to overcome them, thus significantly reducing the development time and making the domain of BCI more accessible to developers. We review the design of video games from the perspective of BCI and VR with the objective of enhancing the user experience. An essential recommendation is to use the BCI only for non-complex and non-critical tasks in the game. Also, the BCI should be used to control actions that are naturally integrated into the virtual world. Finally, adventure and simulation games, especially if cooperative (multi-user), appear to be the best candidates for designing an effective VR game enriched by BCI technology.
Collapse
|
32
|
Kondo Y, Fukuhara K, Suda Y, Higuchi T. Training older adults with virtual reality use to improve collision-avoidance behavior when walking through an aperture. Arch Gerontol Geriatr 2020; 92:104265. [PMID: 33011429 DOI: 10.1016/j.archger.2020.104265] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2020] [Revised: 09/11/2020] [Accepted: 09/21/2020] [Indexed: 11/18/2022]
Abstract
Many older adults perform collision-avoidance behavior either insufficiently (i.e., frequent collisions) or inefficiently (i.e., exaggerated behavior to ensure collision avoidance). The present study examined whether a training system using virtual reality (VR) simulation enhanced older adults' collision-avoidance behavior in response to a VR image of an aperture during real walking. Twenty-five older individuals participated (n = 13 intervention group, n = 12 control group). During training, a VR image of walking through an aperture was projected onto a large screen. Participants in the intervention group tried to avoid virtual collision with the minimum body rotation required while walking on the spot through a variety of narrow apertures. Participants in the control group walked on the spot through a wide aperture without body rotation. A comparison between pre-test and post-test performances in the real environment indicated that, after the training, significantly smaller body rotation angles were observed in the intervention group. This suggests that the training led participants to modify their behavior toward moving more efficiently during real walking. However, although not significantly, collision rates also tended to be greater, suggesting that, at least for some participants, the modification required to avoid collision was too difficult. Transfer of the learned behavior from the VR environment to real walking is discussed.
Collapse
Affiliation(s)
- Yuki Kondo
- Department of Health Promotion Science, Tokyo Metropolitan University, Tokyo, Japan; Department of Physical Rehabilitation, National Center Hospital, National Center of Neurology and Psychiatry, Tokyo, Japan
| | - Kazunobu Fukuhara
- Department of Health Promotion Science, Tokyo Metropolitan University, Tokyo, Japan
| | - Yuki Suda
- Department of Health Promotion Science, Tokyo Metropolitan University, Tokyo, Japan
| | - Takahiro Higuchi
- Department of Health Promotion Science, Tokyo Metropolitan University, Tokyo, Japan.
| |
Collapse
|
33
|
Zhao J, Sensibaugh T, Bodenheimer B, McNamara TP, Nazareth A, Newcombe N, Minear M, Klippel A. Desktop versus immersive virtual environments: effects on spatial learning. SPATIAL COGNITION AND COMPUTATION 2020. [DOI: 10.1080/13875868.2020.1817925] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Affiliation(s)
- Jiayan Zhao
- Department of Geography, Pennsylvania State University, University Park, PA, USA
| | | | - Bobby Bodenheimer
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
| | | | - Alina Nazareth
- The Spatial Intelligence and Learning Center, Temple University, Philadelphia, PA, USA
| | - Nora Newcombe
- Department of Psychology, Temple University, Philadelphia, PA, USA
| | - Meredith Minear
- Department of Psychology, University of Wyoming, Laramie, WY, USA
| | - Alexander Klippel
- Department of Geography, Pennsylvania State University, University Park, PA, USA
| |
Collapse
|
34
|
Marín-Morales J, Llinares C, Guixeres J, Alcañiz M. Emotion Recognition in Immersive Virtual Reality: From Statistics to Affective Computing. SENSORS (BASEL, SWITZERLAND) 2020; 20:E5163. [PMID: 32927722 PMCID: PMC7570837 DOI: 10.3390/s20185163] [Citation(s) in RCA: 36] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/23/2020] [Revised: 09/07/2020] [Accepted: 09/08/2020] [Indexed: 12/16/2022]
Abstract
Emotions play a critical role in our daily lives, so the understanding and recognition of emotional responses is crucial for human research. Affective computing research has mostly used non-immersive two-dimensional (2D) images or videos to elicit emotional states. However, immersive virtual reality, which allows researchers to simulate environments in controlled laboratory conditions with high levels of sense of presence and interactivity, is becoming more popular in emotion research. Moreover, its synergy with implicit measurements and machine-learning techniques has the potential for a transverse impact across many research areas, opening new opportunities for the scientific community. This paper presents a systematic review of the emotion recognition research undertaken with physiological and behavioural measures using head-mounted displays as elicitation devices. The results highlight the evolution of the field, give a clear perspective using aggregated analysis, reveal the current open issues and provide guidelines for future research.
Collapse
Affiliation(s)
- Javier Marín-Morales
- Instituto de Investigación e Innovación en Bioingeniería, Universitat Politècnica de València, 46022 València, Spain; (C.L.); (J.G.); (M.A.)
| | | | | | | |
Collapse
|
35
|
A comparison of virtual locomotion methods in movement experts and non-experts: testing the contributions of body-based and visual translation for spatial updating. Exp Brain Res 2020; 238:1911-1923. [PMID: 32556428 DOI: 10.1007/s00221-020-05851-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2020] [Accepted: 06/10/2020] [Indexed: 10/24/2022]
Abstract
Both visual and body-based (vestibular and proprioceptive) information contribute to spatial updating, the way a navigator keeps track of self-position during movement. Research has tested the relative contributions of these sources of information and found mixed results, with some studies demonstrating the importance of body-based information, especially for translation, and some demonstrating the sufficiency of visual information. Here, we invoke an individual-differences approach to test whether some individuals may be more dependent on certain types of information than others. Movement experts tend to be dependent on motor processes in small-scale spatial tasks, which can help or hurt performance, but it is unknown whether this effect extends to large-scale spatial tasks like spatial updating. In the current study, expert dancers and non-dancers completed a virtual reality point-to-origin task with three locomotion methods that varied the availability of body-based and visual information for translation: walking, joystick, and teleporting. We predicted decrements in performance in both groups as self-motion information was reduced, and that dancers would show a larger cost. Surprisingly, dancers and non-dancers performed with equal accuracy in the walking and joystick conditions and were impaired in teleporting, with no large differences between groups. We found slower response times for both groups with reductions in self-motion information, and minimal evidence for a larger cost for dancers. While we did not see strong dance effects, more participation in spatial activities was related to decreased angular error. Together, the results suggest a flexibility in reliance on visual or body-based information for translation in spatial updating that generalizes across dancers and non-dancers, but significant decrements when both of these sources of information are removed.
Collapse
|
36
|
ArkaeVision VR Game: User Experience Research between Real and Virtual Paestum. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10093182] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
The design of a virtual reality (VR) cultural application is aimed at supporting the steps of the learning process (such as concrete experimentation, reflection and abstraction), which are generally difficult to induce when looking at ruins and artifacts that bring the past back to life. With the use of virtual technologies (e.g., holographic surfaces, head-mounted displays, motion-capture sensors), those steps are well supported thanks to the immersiveness and natural interaction granted by such devices. VR can indeed help to symbolically recreate the context of life of cultural objects, presenting them in their original place of belonging and, for example, as they were used, increasing awareness and understanding of history. The ArkaeVision VR application takes advantage of storytelling and user experience design to tell the story of artifacts and sites of Paestum, an important cultural heritage site in Italy, creating a dramaturgy around them and relying upon historical and artistic content revised by experts. Visitors virtually travel into the temple dedicated to Hera II of Paestum, in the first half of the fifth century BC, wearing an immersive viewer (HTC Vive); here, they interact with the priestess Ariadne, a digital actor, who guides them on a virtual tour presenting the beliefs, values and habits of an ancient population of the Magna Graecia city. In the immersive VR application, memory is indeed influenced by the visitors' ability to proceed with the exploratory activity. Two evaluation sessions were planned and conducted to assess the effectiveness of the immersive experience, the usability of the virtual device and the learnability of the digital storytelling. Results revealed that the realism of the virtual reconstructions, the atmosphere and the “sense of the past” that pervades the whole VR cultural experience underpin visitors' positive feedback, their emotional engagement and their interest in proceeding with the exploration.
Collapse
|
37
|
Kelly JW, Ostrander AG, Lim AF, Cherep LA, Gilbert SB. Teleporting through virtual environments: Effects of path scale and environment scale on spatial updating. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2020; 26:1841-1850. [PMID: 32070962 DOI: 10.1109/tvcg.2020.2973051] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Virtual reality systems typically allow users to physically walk and turn, but virtual environments (VEs) often exceed the available walking space. Teleporting has become a common user interface, whereby the user aims a laser pointer to indicate the desired location, and sometimes orientation, in the VE before being transported without self-motion cues. This study evaluated the influence of rotational self-motion cues on spatial updating performance when teleporting, and whether the importance of rotational cues varies across movement scale and environment scale. Participants performed a triangle completion task by teleporting along two outbound path legs before pointing to the unmarked path origin. Rotational self-motion reduced overall errors across all levels of movement scale and environment scale, though it also introduced a slight bias toward under-rotation. The importance of rotational self-motion was exaggerated when navigating large triangles and when the surrounding environment was large. Navigating a large triangle within a small VE brought participants closer to surrounding landmarks and boundaries, which led to greater reliance on piloting (landmark-based navigation) and therefore reduced-but did not eliminate-the impact of rotational self-motion cues. These results indicate that rotational self-motion cues are important when teleporting, and that navigation can be improved by enabling piloting.
Collapse
|
38
|
Efficacy of Virtual Reality in Painting Art Exhibitions Appreciation. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10093012] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Virtual reality (VR) technology has been employed in a wide range of fields, from entertainment to medicine and engineering. Advances in VR also provide new opportunities for art exhibitions. This study examines the experience of art appreciation through desktop virtual reality (Desktop VR) or head-mounted display virtual reality (HMD VR) and compares it with appreciating a physical painting. Seventy-eight university students participated in the study. According to the findings, painting evaluations and the emotions expressed during appreciation show no significant differences across the three conditions, indicating that participants perceived the paintings similarly regardless of whether they were viewed through VR. Owing to operational limitations, however, participants considered HMD VR a tool that hinders free appreciation of paintings. In addition, attention should be paid to the proper projected size of words and paintings for better reading and viewing. These results indicate that digital technology can narrow the gap between a virtual painting and a physical one; however, the design of object size and interaction in the VR context must still be improved so that a virtual exhibition can be as impressive as a physical one.
Collapse
|
39
|
Encoding, Exchange and Manipulation of Captured Immersive VR Sessions for Learning Environments: the PRISMIN Framework. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10062026] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Capturing immersive VR sessions performed by remote learners using head-mounted displays (HMDs) may provide valuable insights into their interaction patterns, virtual scene saliency and spatial analysis. Large collected records can be exploited as transferable data for learning assessment, for detecting unexpected interactions or for fine-tuning immersive VR environments. Within the online learning segment, the exchange of such records among different peers over the network presents several challenges related to data transport and/or its decoding routines. In the present work, we investigate applications of an image-based encoding model and its implemented architecture to capture users' interactions performed during VR sessions. We present the PRISMIN framework and show how the underlying image-based encoding can be exploited to exchange and manipulate captured VR sessions, comparing it to existing approaches. Qualitative and quantitative results are presented in order to assess the encoding model and the developed open-source framework.
Collapse
|
40
|
Coolen B, Beek PJ, Geerse DJ, Roerdink M. Avoiding 3D Obstacles in Mixed Reality: Does It Differ from Negotiating Real Obstacles? SENSORS (BASEL, SWITZERLAND) 2020; 20:E1095. [PMID: 32079351 PMCID: PMC7071133 DOI: 10.3390/s20041095] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/18/2019] [Revised: 01/29/2020] [Accepted: 02/14/2020] [Indexed: 12/22/2022]
Abstract
Mixed-reality technologies are evolving rapidly, allowing for gradually more realistic interaction with digital content while moving freely in real-world environments. In this study, we examined the suitability of the Microsoft HoloLens mixed-reality headset for creating locomotor interactions in real-world environments enriched with 3D holographic obstacles. In Experiment 1, we compared the obstacle-avoidance maneuvers of 12 participants stepping over either real or holographic obstacles of different heights and depths. Participants' avoidance maneuvers were recorded with three spatially and temporally integrated Kinect v2 sensors. Similar to real obstacles, holographic obstacles elicited obstacle-avoidance maneuvers that scaled with obstacle dimensions. However, with holographic obstacles, some participants showed dissimilar trail or lead foot obstacle-avoidance maneuvers compared to real obstacles: they either consistently failed to raise their trail foot or crossed the obstacle with extreme lead-foot margins. In Experiment 2, we examined the efficacy of mixed-reality video feedback in altering such dissimilar avoidance maneuvers. Participants quickly adjusted their trail-foot crossing height and gradually lowered extreme lead-foot crossing heights in the course of mixed-reality video feedback trials, and these improvements were largely retained in subsequent trials without feedback. Participant-specific differences in real and holographic obstacle avoidance notwithstanding, the present results suggest that 3D holographic obstacles supplemented with mixed-reality video feedback may be used for studying and perhaps also training 3D obstacle avoidance.
Collapse
Affiliation(s)
- Bert Coolen
- Department of Human Movement Sciences, Faculty of Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam Movement Sciences, Van der Boechorststraat 7, 1081 BT Amsterdam, The Netherlands; (P.J.B.); (D.J.G.); (M.R.)
| | | | | | | |
Collapse
|
41
|
Sakhare AR, Yang V, Stradford J, Tsang I, Ravichandran R, Pa J. Cycling and Spatial Navigation in an Enriched, Immersive 3D Virtual Park Environment: A Feasibility Study in Younger and Older Adults. Front Aging Neurosci 2019; 11:218. [PMID: 31474851 PMCID: PMC6706817 DOI: 10.3389/fnagi.2019.00218] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2019] [Accepted: 08/02/2019] [Indexed: 12/20/2022] Open
Abstract
Background: Cognitive decline is a significant public health concern in older adults. Identifying new ways to maintain cognitive and brain health throughout the lifespan is of utmost importance. Simultaneous exercise and cognitive engagement has been shown to enhance brain function in animal and human studies. Virtual reality (VR) may be a promising approach for conducting simultaneous exercise and cognitive studies. In this study, we evaluated the feasibility of cycling in a cognitively enriched, immersive spatial navigation VR environment in younger and older adults. Methods: A total of 20 younger (25.9 ± 3.7 years) and 20 older (63.6 ± 5.6 years) adults participated in this study. Participants completed four trials (2 learning and 2 recall) of cycling while wearing a head-mounted display (HMD) and navigating a VR park environment. Questionnaires were administered to assess adverse effects, mood, presence, and physical exertion levels associated with cycling in the VR environment. Results: A total of 4 subjects withdrew from the study due to adverse effects, yielding a 90% completion rate. Simulator sickness levels increased in both age groups with exposure to the VR environment but remained within an acceptable range. Exposure to the virtual environment was associated with high arousal and low stress levels, suggesting a state of excitement, and most participants reported enjoyment of the spatial navigation task and VR environment. No association was found between physical exertion levels and simulator sickness levels. Conclusion: This study demonstrates that spatial navigation while cycling is feasible and that older adults report experiences similar to those of younger adults. VR may be a powerful tool for engaging physical and cognitive activity in older adults, with acceptable adverse effects and reports of enjoyment. Future studies are needed to assess the efficacy of a combined exercise and cognitive VR program as an intervention for promoting healthy brain aging, especially in older adults at increased risk of age-related cognitive decline.
Collapse
Affiliation(s)
- Ashwin R Sakhare
- Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States.,Department of Neurology, Mark and Mary Stevens Neuroimaging and Informatics Institute, University of Southern California, Los Angeles, CA, United States
| | - Vincent Yang
- Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States.,Department of Neurology, Mark and Mary Stevens Neuroimaging and Informatics Institute, University of Southern California, Los Angeles, CA, United States
| | - Joy Stradford
- Department of Neurology, Mark and Mary Stevens Neuroimaging and Informatics Institute, University of Southern California, Los Angeles, CA, United States
| | - Ivan Tsang
- Department of Neurology, Mark and Mary Stevens Neuroimaging and Informatics Institute, University of Southern California, Los Angeles, CA, United States
| | - Roshan Ravichandran
- Department of Neurology, Mark and Mary Stevens Neuroimaging and Informatics Institute, University of Southern California, Los Angeles, CA, United States
| | - Judy Pa
- Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States.,Department of Neurology, Mark and Mary Stevens Neuroimaging and Informatics Institute, University of Southern California, Los Angeles, CA, United States
| |
Collapse
|
42
|
Text Input in Virtual Reality: A Preliminary Evaluation of the Drum-Like VR Keyboard. TECHNOLOGIES 2019. [DOI: 10.3390/technologies7020031] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
The drum-like virtual reality (VR) keyboard is a contemporary, controller-based interface for text input in VR that uses a drum set metaphor. The controllers are used as sticks which, through downward movements, “press” the keys of the virtual keyboard. In this work, a preliminary feasibility study of the drum-like VR keyboard is described, focusing on the text entry rate and accuracy as well as its usability and the user experience it offers. Seventeen participants evaluated the drum-like VR keyboard by having a typing session and completing a usability and a user experience questionnaire. The interface achieved a good usability score, positive experiential feedback around its entertaining and immersive qualities, a satisfying text entry rate (24.61 words-per-minute), as well as moderate-to-high total error rate (7.2%) that can probably be further improved in future studies. The work provides strong indications that the drum-like VR keyboard can be an effective and entertaining way to type in VR.
Collapse
|
43
|
Tsaramirsis G, Buhari SM, Basheri M, Stojmenovic M. Navigating Virtual Environments Using Leg Poses and Smartphone Sensors. SENSORS 2019; 19:s19020299. [PMID: 30642131 PMCID: PMC6359150 DOI: 10.3390/s19020299] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/01/2018] [Revised: 01/03/2019] [Accepted: 01/10/2019] [Indexed: 11/16/2022]
Abstract
Realization of navigation in virtual environments remains a challenge, as it involves complex operating conditions. Decomposition of such complexity is attainable by fusion of sensors and machine learning techniques. Identifying the right combination of sensory information and the appropriate machine learning technique is a vital ingredient for translating physical actions into virtual movements. The contributions of our work include: (i) synchronization of actions and movements using suitable multiple sensor units, and (ii) selection of the significant features and an appropriate algorithm to process them. This work proposes an innovative approach that allows users to move in virtual environments by simply moving their legs towards the desired direction. The necessary hardware includes only a smartphone that is strapped to the subject's lower leg. Data from the gyroscope, accelerometer and compass sensors of the mobile device are transmitted to a PC, where the movement is accurately identified using a combination of machine learning techniques. Once the desired movement is identified, the movement of the virtual avatar in the virtual environment is realized. After pre-processing the sensor data using the box-plot outliers approach, it is observed that artificial neural networks provided the highest movement identification accuracy: 84.2% on the training dataset and 84.1% on the testing dataset.
Collapse
Affiliation(s)
- Georgios Tsaramirsis
- Information Technology Department, King Abdulaziz University, Jeddah 21589, Saudi Arabia.
| | - Seyed M Buhari
- Information Technology Department, King Abdulaziz University, Jeddah 21589, Saudi Arabia.
| | - Mohammed Basheri
- Information Technology Department, King Abdulaziz University, Jeddah 21589, Saudi Arabia.
| | - Milos Stojmenovic
- Department of Computer Science and Electrical Engineering, Singidunum University, 11000 Belgrade, Serbia.
| |
Collapse
|
44
|
Abstract
This chapter deals with the problem of including motion cues in VR applications. From the challenges of this technology to the latest trends in the field, the authors discuss the benefits and problems of including these particular perceptual cues. First, readers will know how motion cues are usually generated in simulators and VR applications in general. Then, the authors list the major problems of this process and the reasons why its development has not followed the pace of the rest of VR elements (mainly the display technology), reviewing the motion vs. no-motion question from several perspectives. The general answer to this discussion is that motion cues are necessary in VR applications—mostly vehicle simulators—that rely on motion, although, unlike audio-visual cues, there can be specific considerations for each particular solution that may suggest otherwise. Therefore, it is of the utmost importance to analyze the requirements of each VR application before deciding upon this question.
Collapse
|
45
|
Recommendations for Integrating a P300-Based Brain Computer Interface in Virtual Reality Environments for Gaming. COMPUTERS 2018. [DOI: 10.3390/computers7020034] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
|
46
|
A Novel Immersive VR Game Model for Recontextualization in Virtual Environments: The μVRModel. MULTIMODAL TECHNOLOGIES AND INTERACTION 2018. [DOI: 10.3390/mti2020020] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
|