1. Chandio Y, Interrante V, Anwar FM. Human Factors at Play: Understanding the Impact of Conditioning on Presence and Reaction Time in Mixed Reality. IEEE Transactions on Visualization and Computer Graphics 2024; 30:2400-2410. [PMID: 38437088; DOI: 10.1109/tvcg.2024.3372120]
Abstract
A prerequisite to improving a user's presence in mixed reality (MR) is the ability to measure and quantify presence. Traditionally, subjective questionnaires have been used to assess the level of presence. However, recent studies have shown that presence is correlated with objective, systemic human performance measures such as reaction time. These studies analyze the correlation between presence and reaction time when technical factors such as object realism and the plausibility of the object's behavior change. However, additional psychological and physiological human factors can also impact presence. It is unclear whether presence can be mapped to and correlated with reaction time when human factors such as conditioning are involved. To answer this question, we conducted an exploratory study (N = 60) in which the relationship between presence and reaction time was assessed under three conditioning scenarios: control, positive, and negative. We demonstrated that human factors impact presence. We also found that presence scores and reaction times are significantly correlated (correlation coefficient of -0.64), suggesting that the impact of human factors on reaction time tracks their effect on presence. In demonstrating this, our study takes another important step toward using objective, systemic measures like reaction time as a presence measure.
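As a toy illustration of the statistic reported above (not the study's data or analysis), the following snippet computes a Pearson correlation between hypothetical presence scores and reaction times; a negative coefficient means higher presence goes with faster responses:

```python
# Illustrative only: the kind of presence/reaction-time correlation the
# study reports, computed on made-up numbers.
import numpy as np

presence = np.array([5.1, 4.2, 6.0, 3.8, 5.5, 4.9])       # hypothetical presence scores
rt_ms = np.array([412.0, 489.0, 371.0, 520.0, 398.0, 431.0])  # hypothetical reaction times (ms)

r = np.corrcoef(presence, rt_ms)[0, 1]
print(f"Pearson r = {r:.2f}")  # e.g., r near -0.64 would match the reported pattern
```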
2. Gil Rodríguez R, Hedjar L, Toscani M, Guarnera D, Guarnera GC, Gegenfurtner KR. Color constancy mechanisms in virtual reality environments. J Vis 2024; 24:6. [PMID: 38727688; PMCID: PMC11098049; DOI: 10.1167/jov.24.5.6]
Abstract
Prior research has demonstrated high levels of color constancy in real-world scenarios featuring single light sources, extensive fields of view, and prolonged adaptation periods. However, exploring the specific cues humans rely on becomes challenging, if not infeasible, with actual objects and lighting conditions. To circumvent these obstacles, we employed virtual reality technology to craft immersive, realistic settings that can be manipulated in real time. We designed forest and office scenes illuminated by light sources of five different colors. Participants selected the test object most resembling a previously shown achromatic reference. To study color constancy mechanisms, we modified the scenes to neutralize three contributors: local surround (placing a uniformly colored leaf under the test objects), maximum flux (keeping the brightest object constant), and spatial mean (maintaining a neutral average light reflectance), employing two methods for the latter: changing object reflectances or introducing new elements. We found that color constancy was high in conditions with all cues present, aligning with past research. However, removing individual cues led to varied impacts on constancy. Local surrounds significantly reduced performance, especially under green illumination, showing a strong interaction between greenish light and rose-colored contexts. In contrast, the maximum flux mechanism barely affected performance, challenging assumptions used in white-balancing algorithms. The spatial mean experiment showed disparate effects: adding objects slightly impacted performance, while changing reflectances nearly eliminated constancy, suggesting that human color constancy relies more on scene interpretation than on pixel-based calculations.
Affiliation(s)
- Laysa Hedjar
- Psychology Department, Justus-Liebig University, Giessen, Germany
- Matteo Toscani
- Psychology Department, Bournemouth University, Poole, UK
- Dar'ya Guarnera
- School of Arts and Creative Technologies, University of York, York, UK
3. Castet E, Termoz-Masson J, Vizcay S, Delachambre J, Myrodia V, Aguilar C, Matonti F, Kornprobst P. PTVR - A software in Python to make virtual reality experiments easier to build and more reproducible. J Vis 2024; 24:19. [PMID: 38652657; PMCID: PMC11044846; DOI: 10.1167/jov.24.4.19]
Abstract
Researchers increasingly use virtual reality (VR) to perform behavioral experiments, especially in vision science. These experiments are usually programmed directly in so-called game engines, which are extremely powerful. However, this process is tricky and time-consuming, as it requires solid knowledge of game engines. Consequently, the anticipated prohibitive effort discourages many researchers who want to engage in VR. This paper introduces the Perception Toolbox for Virtual Reality (PTVR) library, which allows visual perception studies in VR to be created using high-level Python scripting. A crucial consequence of using a script is that an experiment can be described by a single, easy-to-read piece of code, thus improving the transparency, reproducibility, and reusability of VR studies. We built our library upon a seminal open-source library released in 2018 that we have considerably developed since then. This paper aims to provide the first comprehensive overview of the PTVR software. We introduce the main objects and features of PTVR and some general concepts related to the three-dimensional (3D) world. This new library should dramatically reduce the difficulty of programming experiments in VR and elicit a whole new set of visual perception studies with high ecological validity.
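PTVR's actual interface is documented with the library itself; purely to illustrate the single-script idea the abstract describes, here is a sketch in plain Python whose class and method names are invented stand-ins, not PTVR calls:

```python
# Hypothetical sketch of a script-described VR experiment in the spirit of
# PTVR. All names below are illustrative assumptions; consult the PTVR
# documentation for the real API.
import random

class Experiment:                          # stand-in for a PTVR-style world object
    def __init__(self):
        self.objects, self.trials = [], []
    def add_sphere(self, position, radius):
        self.objects.append(("sphere", position, radius))
    def add_trial(self, **params):
        self.trials.append(params)
    def run(self):
        for t in self.trials:
            print("presenting trial:", t)  # a real toolbox would render and log here

exp = Experiment()
exp.add_sphere(position=(0.0, 1.6, 2.0), radius=0.05)   # 2 m ahead at eye height
for ecc in random.sample([-10, -5, 0, 5, 10], 5):       # eccentricities in degrees
    exp.add_trial(target_eccentricity_deg=ecc, duration_s=0.2)
exp.run()
```

The point of such a design is that the entire experiment is this one readable file, which can be shared and rerun verbatim.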
Affiliation(s)
- Eric Castet
- Aix Marseille Univ, CNRS, CRPN, Marseille, France
4. Wiesing M, Zimmermann E. Intentional binding - Is it just causal binding? A replication study of Suzuki et al. (2019). Conscious Cogn 2024; 119:103665. [PMID: 38354485; DOI: 10.1016/j.concog.2024.103665]
Abstract
Intentional actions produce a temporal compression between an action and its outcome, known as intentional binding. Suzuki et al. (2019) recently showed that temporal compression can be observed without intentional actions; however, their results show a clear regression to the mean, which might have confounded the estimates of temporal intervals. To control for these effects, we presented temporal intervals block-wise and found systematically greater compression for active than for passive trials, in contrast to Suzuki et al. (2019). In a second experiment, we aimed to conceptually replicate the original study but again found more pronounced temporal compression in active trials than in passive ones. A subsequent attempt at a direct replication likewise did not reproduce the original findings. Our results reinforce the theory that intentions, rather than causality, cause temporal binding. During the preparation of this work, the authors used ChatGPT to improve the readability of the paper. After using this tool, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.
Affiliation(s)
- Michael Wiesing
- Institute for Experimental Psychology, Heinrich Heine University Duesseldorf, Germany
- Eckart Zimmermann
- Institute for Experimental Psychology, Heinrich Heine University Duesseldorf, Germany
5. Lin X, Li R, Chen Z, Xiong J. Design strategies for VR science and education games from an embodied cognition perspective: a literature-based meta-analysis. Front Psychol 2024; 14:1292110. [PMID: 38259582; PMCID: PMC10801899; DOI: 10.3389/fpsyg.2023.1292110]
Abstract
Introduction: Natural science education is an important means of improving citizens' scientific literacy, and combining science education games with virtual reality (VR) technology is a major developmental direction in the field of gamified learning.
Methods: To investigate the impact of VR science education games on learning effectiveness from the perspective of embodied cognition, this study used the China National Knowledge Infrastructure (CNKI) and Web of Science (WOS) databases as the main sources of samples. A meta-analysis of 40 studies was conducted to examine teaching content, game interaction, and immersion mode.
Results: The study found that (1) VR science education games have a moderately positive impact on the overall learning effect; (2) regarding teaching content, the learning effect of skill training via VR science education games is significant; (3) regarding interaction form, the learning effect of active interaction is significantly better than that of passive interaction; (4) regarding immersion mode, somatosensory VR games have a significant impact on the enhancement of students' learning; (5) regarding application disciplines, VR science education games have a greater impact on science, engineering, language, and other disciplines; (6) regarding academic stages, the learning effect on college students is most significant; and (7) regarding experimental intervention time, short-term intervention is most effective.
Discussion: Accordingly, this article proposes design strategies for VR science education games from the perspective of embodied cognition: a five-phase strategy encompassing skill training, human-computer interaction, and environmental immersion, aiming to improve users' learning effect and experience.
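For readers unfamiliar with how such a pooled effect is derived, the following sketch applies the standard DerSimonian-Laird random-effects formulas to fabricated per-study effect sizes; it is not the authors' data or code:

```python
# Minimal DerSimonian-Laird random-effects pooling on invented effect sizes,
# illustrating how a "moderately positive overall effect" is computed.
import numpy as np

g = np.array([0.35, 0.62, 0.18, 0.50, 0.41])   # hypothetical Hedges' g per study
v = np.array([0.02, 0.05, 0.03, 0.04, 0.02])   # hypothetical sampling variances

w = 1.0 / v                                     # fixed-effect weights
g_fixed = np.sum(w * g) / np.sum(w)
Q = np.sum(w * (g - g_fixed) ** 2)              # heterogeneity statistic
tau2 = max(0.0, (Q - (len(g) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_star = 1.0 / (v + tau2)                       # random-effects weights
g_random = np.sum(w_star * g) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))
print(f"pooled g = {g_random:.2f} "
      f"(95% CI {g_random - 1.96*se:.2f} to {g_random + 1.96*se:.2f})")
```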
Affiliation(s)
- Jiayi Xiong
- School of Information Technology in Education, South China Normal University, Guangzhou, China
6. Eckhoff D, Schnupp J, Cassinelli A. Temporal precision and accuracy of audio-visual stimuli in mixed reality systems. PLoS One 2024; 19:e0295817. [PMID: 38165851; PMCID: PMC10760685; DOI: 10.1371/journal.pone.0295817]
Abstract
Mixed Reality (MR) techniques, such as Virtual Reality (VR) and Augmented Reality (AR), are gaining popularity as a new methodology for neuroscience and psychology research. In studies involving audiovisual stimuli, it is crucial to have MR systems that can deliver these bimodal stimuli with controlled timing between the onsets of the two modalities. However, the extent to which modern MR setups can achieve the necessary precision and accuracy of audiovisual stimulus onset asynchronies (SOAs) remains largely unknown. The objective of this study is to systematically evaluate the lag and variability between the auditory and visual onsets of audiovisual stimuli produced on popular modern MR head-mounted displays (HMDs) from Meta, Microsoft, HTC, and Varjo, in conjunction with commonly used development environments such as Unity and Unreal Engine. To accomplish this, we developed a low-cost measurement system that enabled us to measure the actual SOA and its associated jitter. Our findings revealed that certain MR systems exhibited significant SOAs, with one case averaging 156.63 ms, along with jitter of up to ±11.82 ms. Using our methodology, we successfully conducted an experimental calibration of a headset, achieving SOAs of -3.89 ± 1.56 ms. This paper aims to raise awareness among neuroscience researchers of the limitations of MR systems in delivering audiovisual stimuli without prior calibration. Furthermore, we present cost-effective methods to calibrate these systems, thereby facilitating the replication of future results.
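The arithmetic behind such an SOA measurement is simple: given per-trial onset timestamps from, say, a photodiode (visual) and a microphone (audio), report the mean audiovisual lag and its jitter. The sketch below uses fabricated timestamps rather than the authors' measurement system:

```python
# Sketch of the SOA statistics: mean lag and jitter from paired onset
# timestamps. All values are fabricated for illustration.
import numpy as np

visual_onsets_ms = np.array([1000.0, 2000.4, 3000.1, 4000.3])  # hypothetical photodiode onsets
audio_onsets_ms = np.array([1156.2, 2155.9, 3157.0, 4156.1])   # hypothetical microphone onsets

soa = audio_onsets_ms - visual_onsets_ms   # positive: audio lags the visuals
print(f"mean SOA = {soa.mean():.2f} ms, jitter (SD) = ±{soa.std(ddof=1):.2f} ms")
```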
Affiliation(s)
- Daniel Eckhoff
- School of Creative Media, City University of Hong Kong, Kowloon Tong, Hong Kong
- Jan Schnupp
- Department of Neuroscience, City University of Hong Kong, Kowloon Tong, Hong Kong
- Alvaro Cassinelli
- School of Creative Media, City University of Hong Kong, Kowloon Tong, Hong Kong
7. Warburton M, Mon-Williams M, Mushtaq F, Morehead JR. Measuring motion-to-photon latency for sensorimotor experiments with virtual reality systems. Behav Res Methods 2023; 55:3658-3678. [PMID: 36217006; PMCID: PMC10616216; DOI: 10.3758/s13428-022-01983-5]
Abstract
Consumer virtual reality (VR) systems are increasingly being deployed in research to study sensorimotor behaviors, but the properties of such systems require verification before they are used as scientific tools. The 'motion-to-photon' latency (the lag between a user making a movement and that movement appearing on the display) is a particularly important metric, as temporal delays can degrade sensorimotor performance. Extant approaches to quantifying this measure have relied on bespoke software and hardware, produce only a single measure of latency, and ignore the effect of the motion-prediction algorithms used in modern VR systems, reducing confidence in the generalizability of the results. We developed a novel, system-independent, high-speed camera-based latency measurement technique to co-register real and virtual controller movements, allowing assessment of how latencies change throughout a movement. We applied this technique to measure the motion-to-photon latency of controller movements in the HTC Vive, Oculus Rift, Oculus Rift S, and Valve Index, using the Unity game engine and SteamVR. For the start of a sudden movement, all measured headsets had mean latencies between 21 and 42 ms. Once motion prediction could account for the inherent delays, the latency was functionally reduced to 2-13 ms, and our technique revealed that this reduction occurs within ~25-58 ms of movement onset. Our findings indicate that sudden accelerations (e.g., movement onset, impacts, and direction changes) will increase latencies and lower spatial accuracy. Our technique allows researchers to measure these factors and determine the impact on their experimental design before collecting sensorimotor data from VR systems.
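One generic way to recover a latency from co-registered traces (not necessarily the authors' exact analysis) is cross-correlation. The sketch below estimates a fabricated 36-ms delay between a "real" controller trajectory and its displayed copy:

```python
# Illustrative cross-correlation latency estimate on synthetic signals:
# the displayed trace is the real trace delayed by 18 samples (36 ms).
import numpy as np

fs = 500.0                                   # hypothetical camera frame rate (Hz)
t = np.arange(0, 2, 1 / fs)
real = np.sin(2 * np.pi * 1.5 * t)           # real controller position
delay_samples = 18                           # 18 / 500 Hz = 36 ms true lag
virtual = np.roll(real, delay_samples)       # displayed (delayed) position
virtual[:delay_samples] = 0.0

xcorr = np.correlate(virtual - virtual.mean(), real - real.mean(), mode="full")
lag = np.argmax(xcorr) - (len(real) - 1)     # samples by which virtual trails real
print(f"estimated latency = {1000 * lag / fs:.1f} ms")
```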
Affiliation(s)
- Mark Mon-Williams
- School of Psychology, University of Leeds, Leeds, UK
- Centre for Immersive Technologies, University of Leeds, Leeds, UK
- Centre for Applied Education Research, Wolfson Centre for Applied Health Research, Bradford Teaching Hospitals NHS Foundation Trust, Bradford, West Yorkshire, UK
- National Centre for Optics, Vision and Eye Care, University of South-Eastern Norway, Hasbergs vei 36, 3616, Kongsberg, Norway
- Faisal Mushtaq
- School of Psychology, University of Leeds, Leeds, UK
- Centre for Immersive Technologies, University of Leeds, Leeds, UK
- J Ryan Morehead
- School of Psychology, University of Leeds, Leeds, UK
- Centre for Immersive Technologies, University of Leeds, Leeds, UK
8. Schuetz I, Karimpur H, Fiehler K. vexptoolbox: A software toolbox for human behavior studies using the Vizard virtual reality platform. Behav Res Methods 2023; 55:570-582. [PMID: 35322350; PMCID: PMC10027796; DOI: 10.3758/s13428-022-01831-6]
Abstract
Virtual reality (VR) is a powerful tool for researchers due to its potential to study dynamic human behavior in highly naturalistic environments while retaining full control over the presented stimuli. Due to advancements in consumer hardware, VR devices are now very affordable and have also started to include technologies such as eye tracking, further extending potential research applications. Rendering engines such as Unity, Unreal, or Vizard now enable researchers to easily create complex VR environments. However, implementing the experimental design can still pose a challenge, and these packages do not provide out-of-the-box support for trial-based behavioral experiments. Here, we present a Python toolbox designed to facilitate common tasks when developing experiments using the Vizard VR platform. It includes functionality for common tasks such as creating, randomizing, and presenting trial-based experimental designs and saving results to standardized file formats. Moreover, the toolbox greatly simplifies the continuous recording of eye and body movements using any hardware supported by Vizard. We further implement and describe a simple goal-directed reaching task in VR and show sample data recorded from five volunteers. The toolbox, example code, and data are all available on GitHub under an open-source license. We hope that our toolbox can simplify VR experiment development, reduce code duplication, and aid reproducibility and open-science efforts.
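The toolbox's real API lives in its GitHub repository; purely as a plain-Python illustration of the bookkeeping it automates (randomized trial lists, standardized output files), with all variable names invented for the example:

```python
# Generic trial-handling sketch (plain Python, NOT vexptoolbox's actual API):
# build a fully crossed, repeated, shuffled trial list and save results as CSV.
import csv
import itertools
import random

factors = {"target_side": ["left", "right"], "distance_m": [0.3, 0.5]}
trials = [dict(zip(factors, combo))
          for combo in itertools.product(*factors.values())] * 5  # 5 repetitions
random.shuffle(trials)

with open("results.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["trial", *factors, "rt_ms"])
    writer.writeheader()
    for i, trial in enumerate(trials):
        rt_ms = random.gauss(600, 80)        # placeholder for a measured response
        writer.writerow({"trial": i, **trial, "rt_ms": round(rt_ms, 1)})
```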
Affiliation(s)
- Immo Schuetz
- Experimental Psychology, Justus Liebig University, Otto-Behaghel-Str. 10 F, 35394, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Giessen, Germany
- Harun Karimpur
- Experimental Psychology, Justus Liebig University, Otto-Behaghel-Str. 10 F, 35394, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Giessen, Germany
- Katja Fiehler
- Experimental Psychology, Justus Liebig University, Otto-Behaghel-Str. 10 F, 35394, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Giessen, Germany
9. Ocklenburg S, Peterburs J. Monitoring Brain Activity in VR: EEG and Neuroimaging. Curr Top Behav Neurosci 2023; 65:47-71. [PMID: 37306852; DOI: 10.1007/7854_2023_423]
Abstract
Virtual reality (VR) is increasingly used in neuroscientific research to increase ecological validity without sacrificing experimental control, to provide a richer visual and multisensory experience, and to foster immersion and presence in study participants, which leads to increased motivation and affective engagement. But the use of VR, particularly when coupled with neuroimaging or neurostimulation techniques such as electroencephalography (EEG), functional magnetic resonance imaging (fMRI), or transcranial magnetic stimulation (TMS), also poses challenges. These include the intricacies of the technical setup, increased noise in the data due to movement, and a lack of standard protocols for data collection and analysis. This chapter examines current approaches to recording, preprocessing, and analyzing electrophysiological data (stationary and mobile EEG) as well as neuroimaging data recorded during VR engagement. It also discusses approaches to synchronizing these data with other data streams. In general, previous research has used a range of different approaches to technical setup and data processing, and detailed reporting of procedures is urgently needed in future studies to ensure comparability and replicability. More support for open-source VR software, as well as the development of consensus and best-practice papers on issues such as the handling of movement artifacts in mobile EEG-VR, will be essential steps in ensuring the continued success of this exciting and powerful technique in neuroscientific research.
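The chapter discusses synchronization in general terms; one widely used concrete option is the Lab Streaming Layer (LSL). The sketch below pushes event markers from a Python-controlled experiment via the real pylsl package, with the stream names chosen arbitrarily for the example; treat it as one possible approach rather than the chapter's prescription:

```python
# Emitting timestamped event markers to an LSL stream so an EEG recorder
# (e.g., LabRecorder) can align VR events with brain data.
# Requires the pylsl package (pip install pylsl).
import time
from pylsl import StreamInfo, StreamOutlet

info = StreamInfo(name="VRMarkers", type="Markers", channel_count=1,
                  nominal_srate=0, channel_format="string",
                  source_id="vr_experiment_01")   # names are arbitrary examples
outlet = StreamOutlet(info)

for event in ["baseline_start", "stimulus_on", "response"]:
    outlet.push_sample([event])   # timestamped on the shared LSL clock at push time
    time.sleep(1.0)               # stand-in for the actual experiment flow
```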
Affiliation(s)
- Sebastian Ocklenburg
- Department of Psychology, Faculty for Life Sciences, MSH Medical School Hamburg, Hamburg, Germany
- ICAN Institute for Cognitive and Affective Neuroscience, MSH Medical School Hamburg, Hamburg, Germany
- Faculty of Psychology, Institute of Cognitive Neuroscience, Biopsychology, Ruhr University Bochum, Bochum, Germany
- Jutta Peterburs
- Institute of Systems Medicine & Department of Human Medicine, MSH Medical School Hamburg, Hamburg, Germany
10. Du W, Zhong X, Jia Y, Jiang R, Yang H, Ye Z, Zong Z. A Novel Scenario-Based, Mixed-Reality Platform for Training Nontechnical Skills of Battlefield First Aid: Prospective Interventional Study. JMIR Serious Games 2022; 10:e40727. [PMID: 36472903; PMCID: PMC9768658; DOI: 10.2196/40727]
Abstract
BACKGROUND: Although battlefield first aid (BFA) training shares many features with civilian training, such as the need to address both technical skills and nontechnical skills (NTSs), it is far more scenario-dependent. Studies of extended reality show clear benefits in medical training; however, the effects of extended reality training on NTSs in BFA, including teamwork and decision-making, have not been fully demonstrated.
OBJECTIVE: This study aimed to create and test a scenario-based, mixed-reality platform suitable for training NTSs in BFA.
METHODS: First, using next-generation modeling technology and an animation synchronization system, a 10-person offensive battle drill was established. Decision-making training software addressing the basic principles of tactical combat casualty care was constructed and integrated into the scenarios with Unreal Engine 4 (Epic Games). Large-space teamwork and virtual interaction systems appropriate to the proposed platform were developed. Unreal Engine 4 and software engineering techniques were used to combine these modules into a mixed-reality BFA training platform. A total of 20 Grade 4 medical students were recruited to receive BFA training on the platform. Pretraining and posttraining tests were carried out in 2 forms to evaluate training effectiveness: one assessed NTS knowledge acquisition, and the other was a real-world, scenario-based test. In addition, the students were asked to rate their agreement with a series of survey items on a 5-point Likert scale.
RESULTS: A battlefield geographic environment, tactical scenarios, scenario-based decision software, large-space teamwork, and virtual interaction system modules were successfully developed and combined to establish the mixed-reality BFA training platform. The students' posttraining knowledge acquisition score was significantly higher than their pretraining score (t=-12.114; P≤.001). Furthermore, the NTS score and the total score that the students obtained in the real-world test were significantly higher than before training (t=-17.756 and t=-21.354, respectively; P≤.001). However, there was no significant difference between the technical skill scores obtained before and after training. A posttraining survey revealed that the students found the platform helpful for improving NTSs for BFA and were confident in applying BFA skills after training. However, most trainees did not find the platform helpful for improving the technical skills of BFA, and 45% (9/20) of the trainees were not satisfied with the simulation effect.
CONCLUSIONS: A scenario-based, mixed-reality platform was constructed in this study. The platform supports movement interaction among multiple players in a large physical space and trainee decision-making that bridges the real and virtual worlds, and it improved the NTSs of BFA. Future work will include improving the simulation effects and developing a training platform that can effectively improve both the technical skills and NTSs of BFA.
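The pre/post comparisons above are paired t-tests. The following sketch reproduces the form of that analysis with SciPy on fabricated scores for 20 trainees (not the study's data):

```python
# Paired pre/post t-test of the kind reported above, on synthetic scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.normal(60, 8, size=20)             # hypothetical pretraining scores
post = pre + rng.normal(15, 5, size=20)      # hypothetical posttraining improvement

t, p = stats.ttest_rel(pre, post)            # negative t here means post > pre
print(f"t = {t:.3f}, p = {p:.4f}")
```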
Affiliation(s)
- Wenqiong Du, Xin Zhong, Yijun Jia, Renqing Jiang, Haoyang Yang, Zhao Ye, Zhaowen Zong
- State Key Laboratory of Trauma, Burn and Combined Injury, Department for Combat Casualty Care Training, Training Base for Army Health Care, Army Medical University, Chongqing, China
11. Gil Rodríguez R, Bayer F, Toscani M, Guarnera D, Guarnera GC, Gegenfurtner KR. Colour Calibration of a Head Mounted Display for Colour Vision Research Using Virtual Reality. SN Computer Science 2021; 3:22. [PMID: 34778840; PMCID: PMC8551135; DOI: 10.1007/s42979-021-00855-7]
Abstract
Virtual reality (VR) technology offers vision researchers the opportunity to conduct immersive studies in simulated real-world scenes. However, accurate colour calibration of the VR head-mounted display (HMD), in terms of both luminance and chromaticity, is required to precisely control the presented stimuli. Such a calibration presents significant new challenges, for example, due to the large field of view of the HMD or the software implementation used for scene rendering, which might alter the colour appearance of objects. Here, we propose a framework for calibrating an HMD using an imaging colorimeter, the I29 (Radiant Vision Systems, Redmond, WA, USA). We examine two scenarios, with and without rendering software for visualisation. In addition, we present a colour constancy experiment design for VR implemented in the game engine Unreal Engine 4. The colours of the objects under study are chosen according to the previously defined calibration. Results show high colour constancy performance among participants, in agreement with recent studies performed in real-world scenarios. Our work shows that this methodology allows us to control and measure the colours presented in the HMD, effectively enabling the use of VR technology for colour vision research.
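A minimal sketch of two standard ingredients of such a display characterization, per-channel gamma fitting and an RGB-to-XYZ primary matrix, using fabricated colorimeter readings rather than the paper's measurements:

```python
# Display-characterization sketch in the spirit of the calibration above:
# fit a gamma curve to one channel's luminance readings, then predict XYZ
# from linearized RGB via a primary matrix. All values are fabricated.
import numpy as np
from scipy.optimize import curve_fit

drive = np.linspace(0, 1, 9)                        # digital input levels
lum_r = 21.0 * drive ** 2.2 + np.random.default_rng(1).normal(0, 0.1, 9)

def gamma_model(x, lmax, g):
    return lmax * np.clip(x, 1e-6, 1) ** g          # simple gain-gamma model

(lmax, g), _ = curve_fit(gamma_model, drive, lum_r, p0=[20.0, 2.0])
print(f"red channel: Lmax = {lmax:.1f} cd/m^2, gamma = {g:.2f}")

# XYZ of each primary at full drive (one column per primary), as measured
# with an imaging colorimeter (hypothetical numbers):
M = np.array([[41.2, 35.8, 18.0],                   # X_r, X_g, X_b
              [21.3, 71.5, 7.2],                    # Y_r, Y_g, Y_b
              [1.9, 11.9, 95.0]])                   # Z_r, Z_g, Z_b
linear_rgb = np.array([0.5, 0.5, 0.5])              # after gamma linearization
print("predicted XYZ:", M @ linear_rgb)
```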
Affiliation(s)
- Florian Bayer
- Department of Psychology, Justus-Liebig-Universität Giessen, Giessen, Germany
- Matteo Toscani
- Department of Psychology, Justus-Liebig-Universität Giessen, Giessen, Germany
- Dar'ya Guarnera
- Department of Computer Science, Norwegian University of Science and Technology, Gjøvik, Norway
- Giuseppe Claudio Guarnera
- Department of Computer Science, Norwegian University of Science and Technology, Gjøvik, Norway
- University of York, York, UK
12. Accuracy and precision of visual and auditory stimulus presentation in virtual reality in Python 2 and 3 environments for human behavior research. Behav Res Methods 2021; 54:729-751. [PMID: 34346042; PMCID: PMC9046309; DOI: 10.3758/s13428-021-01663-w]
Abstract
Virtual reality (VR) is a new methodology for behavioral studies. In such studies, millisecond accuracy and precision of stimulus presentation are critical for data replicability. Recently, Python, a widely used programming language for scientific research, has contributed to reliable accuracy and precision in experimental control. However, little is known about whether modern VR environments achieve millisecond accuracy and precision for stimulus presentation, since most standard methods used in laboratory studies are not optimized for VR environments. The purpose of this study was to systematically evaluate the accuracy and precision of visual and auditory stimuli generated in modern VR head-mounted displays (HMDs) from HTC and Oculus using Python 2 and 3. We used the newest Python tools for VR and the Black Box Toolkit to measure the actual time lag and jitter. The results showed that there was an 18-ms time lag for visual stimuli in both HMDs. For auditory stimuli, the time lag varied between 40 and 60 ms, depending on the HMD. The jitters of those time lags were 1 ms for visual stimuli and 4 ms for auditory stimuli, which is sufficiently low for general experiments. These time lags were robustly equal even when auditory and visual stimuli were presented simultaneously. Interestingly, all results were perfectly consistent across Python 2 and 3 environments. Thus, the present study will help establish more reliable stimulus control for psychological and neuroscientific research run in Python environments.
13. Vasser M, Aru J. Guidelines for immersive virtual reality in psychological research. Curr Opin Psychol 2020; 36:71-76. [PMID: 32563049; DOI: 10.1016/j.copsyc.2020.04.010]
Abstract
Virtual reality (VR) holds immense promise as a research tool for delivering results that generalize to the real world. However, the methodology used in different VR studies varies substantially. While many of these approaches claim to use 'immersive VR', the different hardware and software choices raise issues regarding the reliability and validity of psychological VR research. Questions arise about quantifying presence, the optimal level of graphical realism, the problem of being in dual realities, and the reproducibility of VR research. We discuss how VR research paradigms could be evaluated and offer a list of practical recommendations to establish common guidelines for psychological VR research.
Affiliation(s)
- Madis Vasser
- Institute of Computer Science, University of Tartu, Estonia
- Jaan Aru
- Institute of Computer Science, University of Tartu, Estonia; Institute of Biology, Humboldt University Berlin, Germany