1. Pavic K, Vergilino-Perez D, Gricourt T, Chaby L. Age-related differences in subjective and physiological emotion evoked by immersion in natural and social virtual environments. Sci Rep 2024; 14:15320. PMID: 38961132; PMCID: PMC11222553; DOI: 10.1038/s41598-024-66119-5.
Abstract
Age-related changes in emotional processing are complex, with a bias toward positive information. However, the impact of aging on emotional responses in positive everyday situations remains unclear. Virtual Reality (VR) has emerged as a promising tool for investigating emotional processing, offering a unique balance between ecological validity and experimental control. Yet, limited evidence exists regarding its efficacy in eliciting positive emotions in older adults. Our study aimed to explore age-related differences in positive emotional responses to immersion in both social and nonsocial virtual emotional environments. We exposed 34 younger adults and 24 older adults to natural and social 360-degree video content through a low-immersion computer screen and a highly immersive Head-Mounted Display, while recording participants' physiological reactions. Participants also provided self-reports of their emotions and sense of presence. The findings support VR's efficacy in eliciting positive emotions in both younger and older adults, with age-related differences in emotional responses influenced by the specific video content rather than by immersion level. These findings underscore the potential of VR as a valuable tool for examining age-related differences in emotional responses and for developing VR applications that enhance emotional wellbeing across diverse user populations.
Affiliation(s)
- Katarina Pavic
- Université Paris Cité, Vision Action Cognition, F-92100, Boulogne-Billancourt, France
- SocialDream, Research and Development Department, Bourg-de-Péage, France
- Thierry Gricourt
- SocialDream, Research and Development Department, Bourg-de-Péage, France
- Laurence Chaby
- Université Paris Cité, Vision Action Cognition, F-92100, Boulogne-Billancourt, France
- Sorbonne Université, Institut des Systèmes Intelligents et de Robotique (ISIR), CNRS, F-75005, Paris, France
2. Li M, Pan J, Li Y, Gao Y, Qin H, Shen Y. Multimodal Physiological Analysis of Impact of Emotion on Cognitive Control in VR. IEEE Trans Vis Comput Graph 2024; 30:2044-2054. PMID: 38437118; DOI: 10.1109/tvcg.2024.3372101.
Abstract
Cognitive control is difficult to characterize and is easily influenced by emotions. Understanding an individual's level of cognitive control is crucial for enhancing VR interaction and for designing adaptive, self-correcting VR/AR applications. Emotions can reallocate processing resources and thereby influence cognitive control performance. However, research to date has primarily emphasized the impact of emotional valence on cognitive control tasks, neglecting emotional arousal. In this study, we comprehensively investigate the influence of emotions on cognitive control based on the arousal-valence model. Twenty-six participants were recruited; emotions were induced through VR videos with high ecological validity, after which participants performed related cognitive control tasks. Leveraging physiological data including EEG, HRV, and EDA, we employ classification techniques such as SVM, KNN, and deep learning to categorize cognitive control levels. The experimental results demonstrate that high-arousal emotions significantly enhance users' cognitive control abilities. Utilizing complementary information among multimodal physiological signal features, we achieve an accuracy of 84.52% in distinguishing between high and low cognitive control. Additionally, time-frequency analysis confirms the existence of neural patterns related to cognitive control, contributing to a better understanding of the neural mechanisms underlying cognitive control in VR. Our research indicates that physiological signals measured from both the central and autonomic nervous systems can be employed for cognitive control classification, paving the way for novel approaches to improving VR/AR interactions.
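The classification step described in this abstract (SVM, KNN, and deep learning over multimodal physiological features) can be illustrated with a minimal k-nearest-neighbour sketch. The two-feature samples, labels, and k=3 below are hypothetical stand-ins, not values from the paper:

```python
def knn_predict(train_X, train_y, x, k=3):
    """Classify feature vector x by majority vote among its k nearest
    training samples (squared Euclidean distance)."""
    neighbours = sorted(
        (sum((a - b) ** 2 for a, b in zip(row, x)), label)
        for row, label in zip(train_X, train_y)
    )
    votes = [label for _, label in neighbours[:k]]
    return max(set(votes), key=votes.count)

# Hypothetical two-feature samples (say, one EEG-derived and one
# EDA-derived feature), labelled by cognitive control level.
train_X = [[0.1, 0.2], [0.0, 0.4], [0.3, 0.1],
           [2.0, 2.1], [2.2, 1.9], [1.8, 2.3]]
train_y = ["low", "low", "low", "high", "high", "high"]
```

With this toy data, `knn_predict(train_X, train_y, [0.2, 0.3])` lands in the low-control cluster; the paper's actual pipeline fuses far richer EEG/HRV/EDA feature sets.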
3. Peng K, Moussavi Z, Karunakaran KD, Borsook D, Lesage F, Nguyen DK. iVR-fNIRS: studying brain functions in a fully immersive virtual environment. Neurophotonics 2024; 11:020601. PMID: 38577629; PMCID: PMC10993907; DOI: 10.1117/1.nph.11.2.020601.
Abstract
Immersive virtual reality (iVR) employs head-mounted displays or cave-like environments to create a sensory-rich virtual experience that simulates the physical presence of a user in a digital space. The technology holds immense promise in neuroscience research and therapy. In particular, virtual reality (VR) technologies facilitate the development of diverse tasks and scenarios closely mirroring real-life situations to stimulate the brain within a controlled and secure setting. It also offers a cost-effective way to provide a similar sense of interaction to users when conventional stimulation methods are limited or unfeasible. Although combining iVR with traditional brain imaging techniques may be difficult due to signal interference or instrumental issues, recent work has proposed the use of functional near-infrared spectroscopy (fNIRS) in conjunction with iVR for versatile brain stimulation paradigms and flexible examination of brain responses. We present a comprehensive review of current research studies employing an iVR-fNIRS setup, covering device types, stimulation approaches, data analysis methods, and major scientific findings. The literature demonstrates a high potential for iVR-fNIRS to explore various types of cognitive, behavioral, and motor functions in a fully immersive VR environment. Such studies should set a foundation for adaptive iVR programs for both training (e.g., in novel environments) and clinical therapeutics (e.g., pain, motor and sensory disorders, and psychiatric conditions).
Affiliation(s)
- Ke Peng
- University of Manitoba, Department of Electrical and Computer Engineering, Price Faculty of Engineering, Winnipeg, Manitoba, Canada
- Zahra Moussavi
- University of Manitoba, Department of Electrical and Computer Engineering, Price Faculty of Engineering, Winnipeg, Manitoba, Canada
- Keerthana Deepti Karunakaran
- Massachusetts General Hospital, Harvard Medical School, Department of Psychiatry, Boston, Massachusetts, United States
- David Borsook
- Massachusetts General Hospital, Harvard Medical School, Department of Psychiatry, Boston, Massachusetts, United States
- Massachusetts General Hospital, Harvard Medical School, Department of Radiology, Boston, Massachusetts, United States
- Frédéric Lesage
- University of Montreal, Institute of Biomedical Engineering, Department of Electrical Engineering, Ecole Polytechnique, Montreal, Quebec, Canada
- Montreal Heart Institute, Montreal, Quebec, Canada
- Dang Khoa Nguyen
- University of Montreal, Department of Neurosciences, Montreal, Quebec, Canada
- Research Center of the Hospital Center of the University of Montreal, Department of Neurology, Montreal, Quebec, Canada
4. Zhang C, Su L, Li S, Fu Y. Differential Brain Activation for Four Emotions in VR-2D and VR-3D Modes. Brain Sci 2024; 14:326. PMID: 38671977; PMCID: PMC11048237; DOI: 10.3390/brainsci14040326.
Abstract
Similar to traditional imaging, virtual reality (VR) imagery encompasses nonstereoscopic (VR-2D) and stereoscopic (VR-3D) modes. Russell's emotional model has been extensively studied in traditional 2D and VR-3D modes, but comparative research between VR-2D and VR-3D modes remains limited. In this study, we investigate whether Russell's emotional model elicits stronger brain activation in VR-3D mode than in VR-2D mode. In an experiment covering four emotional categories (high arousal-high pleasure (HAHV), high arousal-low pleasure (HALV), low arousal-low pleasure (LALV), and low arousal-high pleasure (LAHV)), EEG signals were collected from 30 healthy undergraduate and graduate students while they watched videos in both VR modes. Power spectral density (PSD) computations revealed distinct brain activation patterns for different emotional states across the two modes, with VR-3D videos inducing significantly higher brainwave energy, primarily in the frontal, temporal, and occipital regions. Differential entropy (DE) feature sets, selected via a dual ten-fold cross-validated Support Vector Machine (SVM) classifier, demonstrated satisfactory classification accuracy, which was higher in the VR-3D mode. The paper then presents a deep learning-based EEG emotion recognition framework that exploits the frequency, spatial, and temporal information of EEG data to improve recognition accuracy. The contribution of each individual feature to the prediction probabilities is assessed through machine-learning interpretability based on Shapley values. The study reveals notable differences in brain activation states for identical emotions between the two modes, with VR-3D mode showing more pronounced activation.
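The two feature types named in this abstract, band-limited PSD and differential entropy, can be sketched as follows. The naive DFT and the Gaussian-form DE (0.5·ln(2πe·σ²)) are standard textbook formulations; the sampling rate and band edges in the example are illustrative assumptions, not the paper's settings:

```python
import math

def band_power(samples, fs, f_lo, f_hi):
    """Sum spectral power over [f_lo, f_hi] Hz using a naive DFT.
    (Illustrative; production pipelines would use Welch's method.)"""
    n = len(samples)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(s * math.cos(2 * math.pi * k * t / n) for t, s in enumerate(samples))
            im = sum(s * math.sin(2 * math.pi * k * t / n) for t, s in enumerate(samples))
            power += (re * re + im * im) / (n * n)
    return power

def differential_entropy(samples):
    """DE of a band-filtered signal under a Gaussian assumption:
    0.5 * ln(2 * pi * e * variance)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return 0.5 * math.log(2 * math.pi * math.e * var)
```

For a pure 10 Hz sinusoid sampled at 128 Hz, `band_power` concentrates in the alpha band (8-13 Hz), and `differential_entropy` grows monotonically with signal variance, which is what makes DE a compact per-band feature.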
Affiliation(s)
- Lei Su
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China; (C.Z.); (S.L.); (Y.F.)
5. Bi Y, Liu X, Zhao X, Wei S, Li J, Wang F, Luo W, Hu L. Enhancing pain modulation: the efficacy of synchronous combination of virtual reality and transcutaneous electrical nerve stimulation. Gen Psychiatr 2023; 36:e101164. PMID: 38143714; PMCID: PMC10749042; DOI: 10.1136/gpsych-2023-101164.
Abstract
Introduction: Virtual reality (VR) and transcutaneous electrical nerve stimulation (TENS) have emerged as effective interventions for pain reduction. However, their standalone applications often yield limited analgesic effects, particularly in certain painful conditions. Aims: Our hypothesis was that combining VR with TENS synchronously would produce the best analgesic effect among the four experimental conditions. Methods: To address this challenge, we proposed a novel pain modulation strategy that synchronously combines VR and TENS, aiming to capitalise on the two techniques' complementary pain modulation mechanisms. Thirty-two healthy subjects underwent three types of interventions: VR alone, a combination of VR with conventional TENS, and a combination of VR with synchronous TENS. Additionally, a control condition with no intervention was included. Perceived pain intensity, pain unpleasantness, positive and negative affect scores, and electroencephalographic (EEG) data were collected before and after the interventions. To examine the potential moderating role of pain intensity on the analgesic efficacy of VR combined with synchronous TENS, we incorporated two distinct levels of painful stimuli: one representing mild to moderate pain (ie, low pain) and the other representing moderate to severe pain (ie, high pain). Results: Both combination interventions exhibited superior analgesic effects compared with the VR-alone intervention under both low and high pain stimuli. Notably, the combination of VR with synchronous TENS demonstrated greater analgesic efficacy than the combination of VR with conventional TENS. EEG data further supported these results, indicating that both combination interventions elicited a greater reduction in event-related potential magnitude compared with the VR-alone intervention during exposure to low and high pain stimuli. Moreover, the synchronous combination induced a more significant reduction in N2 amplitude than the VR-alone intervention during exposure to low pain stimuli. No significant differences in EEG response changes were detected between the two combination interventions. Both combination interventions also produced a greater reduction in negative affect than the VR-alone intervention. Conclusions: Our study highlights the effectiveness of the synchronous combination of VR and TENS in enhancing pain modulation. These findings offer valuable insights for developing innovative pain treatments, emphasising the importance of tailored and multifaceted therapeutic approaches for various painful conditions.
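The N2 effect reported in this abstract is read off event-related potentials, which are obtained by averaging stimulus-locked EEG epochs. A minimal sketch of that averaging step follows; the sampling rate, epoch layout, and N2 window are hypothetical illustrations, not the study's actual parameters:

```python
def average_erp(epochs):
    """Average time-locked EEG epochs (equal-length trials) into one ERP:
    phase-locked deflections survive, non-locked activity cancels out."""
    n_trials = len(epochs)
    return [sum(epoch[t] for epoch in epochs) / n_trials
            for t in range(len(epochs[0]))]

def n2_amplitude(erp, fs, t_lo=0.18, t_hi=0.30):
    """Peak negativity in a typical N2 latency window (180-300 ms)."""
    lo, hi = int(t_lo * fs), int(t_hi * fs)
    return min(erp[lo:hi])
```

A smaller (less negative) `n2_amplitude` after an intervention is the kind of "reduction in N2 amplitude" the abstract describes.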
Affiliation(s)
- Yanzhi Bi
- CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Xu Liu
- CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, Liaoning, China
- Xiangyue Zhao
- CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Shiyu Wei
- CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Jingwei Li
- CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Faguang Wang
- School of Intelligent Manufacturing, Wenzhou Polytechnic, Wenzhou, Zhejiang, China
- Wenbo Luo
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, Liaoning, China
- Key Laboratory of Brain and Cognitive Neuroscience, Dalian, Liaoning, China
- Li Hu
- CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
6. Chen J, Wang X, Huang C, Hu X, Shen X, Zhang D. A Large Finer-grained Affective Computing EEG Dataset. Sci Data 2023; 10:740. PMID: 37880266; PMCID: PMC10600242; DOI: 10.1038/s41597-023-02650-w.
Abstract
Affective computing based on electroencephalogram (EEG) has gained increasing attention for its objectivity in measuring emotional states. While positive emotions play a crucial role in various real-world applications, such as human-computer interactions, the state-of-the-art EEG datasets have primarily focused on negative emotions, with less consideration given to positive emotions. Meanwhile, these datasets usually have a relatively small sample size, limiting exploration of the important issue of cross-subject affective computing. The proposed Finer-grained Affective Computing EEG Dataset (FACED) aimed to address these issues by recording 32-channel EEG signals from 123 subjects. During the experiment, subjects watched 28 emotion-elicitation video clips covering nine emotion categories (amusement, inspiration, joy, tenderness; anger, fear, disgust, sadness, and neutral emotion), providing a fine-grained and balanced categorization on both the positive and negative sides of emotion. The validation results show that emotion categories can be effectively recognized based on EEG signals at both the intra-subject and the cross-subject levels. The FACED dataset is expected to contribute to developing EEG-based affective computing algorithms for real-world applications.
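Cross-subject affective computing of the kind this dataset is meant to support is typically evaluated with leave-one-subject-out splits, so that no subject contributes to both training and testing. A minimal split generator is sketched below; the subject IDs in the example are invented for illustration:

```python
def leave_one_subject_out(subject_ids):
    """Yield (held_out_subject, train_indices, test_indices) splits so
    that each subject's trials are tested on a model trained without
    any of that subject's data."""
    for held_out in sorted(set(subject_ids)):
        train = [i for i, s in enumerate(subject_ids) if s != held_out]
        test = [i for i, s in enumerate(subject_ids) if s == held_out]
        yield held_out, train, test
```

With 123 subjects, this yields 123 train/test splits; aggregate accuracy over them is what "cross-subject level" recognition performance usually refers to.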
Affiliation(s)
- Jingjing Chen
- Dept. of Psychology, School of Social Sciences, Tsinghua University, Beijing, China
- Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing, China
- Xiaobin Wang
- Dept. of Psychology, School of Social Sciences, Tsinghua University, Beijing, China
- Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing, China
- Chen Huang
- Dept. of Psychology, School of Social Sciences, Tsinghua University, Beijing, China
- Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing, China
- Xin Hu
- Dept. of Psychology, School of Social Sciences, Tsinghua University, Beijing, China
- Dept. of Psychiatry, School of Medicine, University of Pittsburgh, Pittsburgh, USA
- Xinke Shen
- Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing, China
- Dept. of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Dan Zhang
- Dept. of Psychology, School of Social Sciences, Tsinghua University, Beijing, China
- Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing, China
7. Lapborisuth P, Koorathota S, Sajda P. Pupil-linked arousal modulates network-level EEG signatures of attention reorienting during immersive multitasking. J Neural Eng 2023; 20:046043. PMID: 37595578; DOI: 10.1088/1741-2552/acf1cb.
Abstract
Objective: When multitasking, we must dynamically reorient our attention between different tasks. Attention reorienting is thought to arise through interactions of physiological arousal and brain-wide network dynamics. In this study, we investigated the relationship between pupil-linked arousal and electroencephalography (EEG) brain dynamics in a multitask driving paradigm conducted in virtual reality. We hypothesized that there would be an interaction between arousal and EEG dynamics and that this interaction would correlate with multitasking performance. Approach: We collected EEG and eye-tracking data while subjects drove a motorcycle through a simulated city environment, with instructions to count the number of target images they observed while avoiding crashing into a lead vehicle. The paradigm required the subjects to continuously reorient their attention between the two tasks. Subjects performed the paradigm under two conditions, one more difficult than the other. Main results: We found that task difficulty did not strongly correlate with pupil-linked arousal, and overall task performance increased as arousal level increased. A single-trial analysis revealed several interesting relationships between pupil-linked arousal and task-relevant EEG dynamics. Employing exact low-resolution electromagnetic tomography, we found that higher pupil-linked arousal led to greater EEG oscillatory activity, especially in regions associated with the dorsal attention network and ventral attention network (VAN). Consistent with our hypothesis, we found a relationship between EEG functional connectivity and pupil-linked arousal as a function of multitasking performance. Specifically, we found decreased functional connectivity between regions in the salience network (SN) and the VAN as pupil-linked arousal increased, suggesting that improved multitasking performance at high arousal levels may be due to a down-regulation in coupling between the VAN and the SN. Our results suggest that when multitasking, the brain rebalances arousal-based reorienting so that individual task demands can be met without prematurely reorienting to competing tasks.
Affiliation(s)
- Pawan Lapborisuth
- Department of Biomedical Engineering, Columbia University, New York, NY, United States of America
- Sharath Koorathota
- Department of Biomedical Engineering, Columbia University, New York, NY, United States of America
- Paul Sajda
- Department of Biomedical Engineering, Columbia University, New York, NY, United States of America
- Department of Electrical Engineering, Columbia University, New York, NY, United States of America
- Department of Radiology, Columbia University Irving Medical Center, New York, NY, United States of America
- Data Science Institute, Columbia University, New York, NY, United States of America
8. Glomb K, Piotrowski P, Romanowska IA. It is not real until it feels real: Testing a new method for simulation of eyewitness experience with virtual reality technology and equipment. Behav Res Methods 2023. PMID: 37507651; DOI: 10.3758/s13428-023-02186-2.
Abstract
Laboratory research in the psychology of witness testimony is often criticized for its lack of ecological validity, including the use of unrealistic artificial stimuli to test memory performance. The purpose of our study is to present a method that can provide an intermediary between laboratory research and field studies or naturalistic experiments that are difficult to control and administer. It uses Video-360° technology and virtual reality (VR) equipment, which cuts subjects off from external stimuli and gives them control over the visual field. This can potentially increase the realism of the eyewitness's experience. To test the method, we conducted an experiment comparing the immersion effect, emotional response, and memory performance between subjects who watched a video presenting a mock crime on a head-mounted display (VR goggles; n = 57) and a screen (n = 50). The results suggest that, compared to those who watched the video on a screen, the VR group had a deeper sense of immersion, that is, of being part of the scene presented. At the same time, they were not distracted or cognitively overloaded by the more complex virtual environment, and remembered just as much detail about the crime as those viewing it on the screen. Additionally, we noted significant differences between subjects in ratings of emotions felt during the video. This may suggest that the two formats evoke different types of discrete emotions. Overall, the results confirm the usefulness of the proposed method in witness research.
Affiliation(s)
- Kaja Glomb
- Faculty of Management and Social Communication, Jagiellonian University in Krakow, Kraków, Poland
- Przemysław Piotrowski
- Faculty of Management and Social Communication, Jagiellonian University in Krakow, Kraków, Poland
9. Cheng W, Wang X, Zou J, Li M, Tian F. A High-Density EEG Study Investigating the Neural Correlates of Continuity Editing Theory in VR Films. Sensors (Basel) 2023; 23:5886. PMID: 37447736; DOI: 10.3390/s23135886.
Abstract
This paper presents a cognitive psychology experiment exploring the differences between 2D and virtual reality (VR) film editing techniques. We recruited sixteen volunteers to view experimental material across a range of display modes and edit types, recording an electroencephalogram (EEG) while they watched. Subjective results showed that the VR mode produced higher load scores, particularly in the effort dimension, while editing type had no effect on subjective immersion scores. The VR mode elicited stronger EEG energy, with differences concentrated in the occipital, parietal, and central regions. On this basis, visual evoked potential (VEP) analyses were conducted; the results indicated that VR mode triggered greater spatial attention, while editing in 2D mode induced stronger semantic updating and active understanding. Furthermore, we found that while the effect of different edit types was similar in both display modes, cross-axis editing triggered greater cognitive violations than continuity editing, which could serve as theoretical support for the development of future VR film editing techniques.
Affiliation(s)
- Wanqiu Cheng
- Shanghai Film Academy, Shanghai University, Shanghai 200072, China
- Xuefei Wang
- Shanghai Film Academy, Shanghai University, Shanghai 200072, China
- Jiahui Zou
- Shanghai Film Academy, Shanghai University, Shanghai 200072, China
- Mingxuan Li
- Shanghai Film Academy, Shanghai University, Shanghai 200072, China
- Feng Tian
- Shanghai Film Academy, Shanghai University, Shanghai 200072, China
- Shanghai Film Special Effects Engineering Technology Research Center, Shanghai University, Shanghai 200072, China
10. Davidov A, Razumnikova O, Bakaev M. Nature in the Heart and Mind of the Beholder: Psycho-Emotional and EEG Differences in Perception of Virtual Nature Due to Gender. Vision (Basel) 2023; 7:30. PMID: 37092463; PMCID: PMC10123600; DOI: 10.3390/vision7020030.
Abstract
Natural environment experiences in virtual reality (VR) can be a feasible option for people unable to connect with real nature. Existing research mostly focuses on the health and emotional advantages of "virtual nature" therapy, but studies of its neuropsychological effects related to visual perception are rare. In our experiment, 20 subjects watched nature-related video content in VR headsets (3D condition) and on a computer screen (2D condition). In addition to the gender factor, we considered the individual Environmental Identity Index (EID) and collected self-assessments of emotional state on the components of Valence, Arousal, and Dominance in each experimental condition. Besides the psychometric data, we also registered brainwave activity (EEG) and analyzed it across seven frequency bands. EID, which was considerably higher in women, showed a significant positive correlation with Valence (i.e., a beneficial effect of the natural stimuli on psycho-emotional status). At the same time, the analysis of the EEG data suggests a considerable impact of the VR immersion itself, with a higher relaxation-related alpha effect in the 3D versus the 2D condition in men. The most pronounced and novel effect of the gender factor was found in the relation between EID and EEG power in the high-frequency bands: a positive correlation of these variables in women (0.64 < Rs < 0.74) but a negative correlation in men (−0.72 < Rs < −0.66). Our results imply individually different and gender-dependent effects of natural stimuli in VR. Video and VR content development should take this into account and aim to tailor the experience to user characteristics.
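The Rs values reported in this abstract are Spearman rank correlations. For reference, a minimal tie-free implementation follows; the sample vectors in the usage below are invented, not the study's data:

```python
def spearman_rho(x, y):
    """Spearman rank correlation via 1 - 6*sum(d^2) / (n*(n^2 - 1)).
    Assumes no tied values; ties would need fractional (average) ranks."""
    def ranks(values):
        order = sorted(range(len(values)), key=values.__getitem__)
        r = [0] * len(values)
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

Because it operates on ranks, Spearman's rho captures any monotone relation between EID and band power, which is why it suits ordinal psychometric indices better than Pearson's r.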
Affiliation(s)
- Artem Davidov
- Department of Data Collection and Processing Systems, Novosibirsk State Technical University, 630073 Novosibirsk, Russia
- Olga Razumnikova
- Department of Data Collection and Processing Systems, Novosibirsk State Technical University, 630073 Novosibirsk, Russia
- Maxim Bakaev
- Department of Data Collection and Processing Systems, Novosibirsk State Technical University, 630073 Novosibirsk, Russia
11. Pavic K, Chaby L, Gricourt T, Vergilino-Perez D. Feeling Virtually Present Makes Me Happier: The Influence of Immersion, Sense of Presence, and Video Contents on Positive Emotion Induction. Cyberpsychol Behav Soc Netw 2023; 26:238-245. PMID: 37001171; PMCID: PMC10125398; DOI: 10.1089/cyber.2022.0245.
Abstract
Immersive technologies, such as Virtual Reality (VR), have great potential for enhancing users' emotions and wellbeing. However, how immersion, Virtual Environment contents, and sense of presence (SoP) influence emotional responses remains to be clarified in order to efficiently foster positive emotions. A total of 26 participants (16 women, 10 men, 22.73 ± 2.69 years old) were exposed to 360-degree videos of natural and social contents on both a highly immersive Head-Mounted Display and a low-immersion computer screen. Subjective emotional responses and SoP were assessed after each video using self-reports, while a wearable wristband continuously collected electrodermal activity and heart rate to record physiological emotional responses. Findings supported the added value of immersion, as more positive emotions and greater subjective arousal were reported after viewing the videos in the highly immersive setting, regardless of video content. In addition to the commonly employed natural contents, the findings also provide initial evidence for the effectiveness of social contents in eliciting positive emotions. Finally, structural equation models shed light on the indirect effect of immersion, through spatial SoP, on subjective arousal. Overall, these are encouraging results regarding the effectiveness of VR for fostering positive emotions. Future studies should further investigate the influence of user characteristics on VR experiences to efficiently foster positive emotions among a broad range of potential users.
Affiliation(s)
- Katarina Pavic
- Université Paris Cité, Vision Action Cognition (VAC), Boulogne-Billancourt, France
- Sorbonne Université, CNRS, Institut des Systèmes Intelligents et de Robotique (ISIR), Paris, France
- Research and Development Department, SocialDream, Bourg-de-Péage, France
- Address correspondence to: Dr. Katarina Pavic, Université Paris Cité, Vision Action Cognition (VAC), 71 Avenue Edouard Vaillant, Boulogne-Billancourt Cedex 92774, France
- Laurence Chaby
- Sorbonne Université, CNRS, Institut des Systèmes Intelligents et de Robotique (ISIR), Paris, France
- Université Paris Cité, UFR de Psychologie, Boulogne-Billancourt, France
- Thierry Gricourt
- Research and Development Department, SocialDream, Bourg-de-Péage, France
12. Multimodal Assessment of Changes in Physiological Indicators when Presenting a Video Fragment on Screen (2D) versus a VR (3D) Environment. Behav Neurol 2022; 2022:5346128. PMID: 36479230; PMCID: PMC9722301; DOI: 10.1155/2022/5346128.
Abstract
The increasing role of virtual environments in society, especially in the context of the pandemic and evolving metaverse technologies, requires a closer study of the physiological state of humans using virtual reality (VR) for entertainment, work, or learning. Although many physiological reactions to content presented in various modalities under VR conditions have already been described, these studies often do not reflect the full range of changes in physiological reactions that a person undergoes during immersion in the virtual world. This study was designed to find and compare the most sensitive physiological indicators that change when viewing an emotionally intense video fragment in standard format on screen and in virtual reality conditions (in a VR helmet). The research methodology involved randomly presenting a group of subjects with visual content (a short video clip), first on screen (2D) and then in a virtual reality helmet (3D). A special feature of this study is the use of multimodal physiological state assessment throughout the content presentation, in conjunction with psychological testing of the study participants before and after the experiment. The most informative physiological indicators reflecting the subjects' condition under virtual reality conditions were changes in theta rhythm amplitude, skin conductance, the standard deviation of normal RR-intervals (SDRR), and changes in the photoplethysmogram (PPG). The study results suggest that during immersion in a virtual environment, participants develop a complex functional state, different from the state when watching on screen, which is characterised by a restructuring of autonomic regulation and activation of the emotion structures of the brain.
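As a minimal illustration of one of these indicators, SDRR is simply the standard deviation of the RR-interval series; the values below are invented for demonstration.

```python
import numpy as np

# Invented RR-interval series in milliseconds (normal beats only)
rr_ms = np.array([812, 790, 805, 821, 798, 776, 810, 830, 795, 802], dtype=float)

# SDRR: standard deviation of the RR intervals over the recording
sdrr = rr_ms.std(ddof=1)

# Instantaneous heart rate per beat, for context
hr_bpm = 60000.0 / rr_ms
print(round(sdrr, 1), round(hr_bpm.mean(), 1))
```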
13
Radhakrishnan U, Chinello F, Koumaditis K. Investigating the effectiveness of immersive VR skill training and its link to physiological arousal. Virtual Reality 2022; 27:1091-1115. [PMID: 36405878 PMCID: PMC9663202 DOI: 10.1007/s10055-022-00699-3] [Received: 06/08/2022] [Accepted: 09/13/2022] [Indexed: 06/05/2023]
Abstract
This paper details the motivations, design, and analysis of a study using a fine motor skill training task in both VR and physical conditions. The objective of this between-subjects study was to (a) investigate the effectiveness of immersive virtual reality for training participants in the 'buzz-wire' fine motor skill task compared to physical training and (b) investigate the link between participants' arousal and their improvements in task performance. Physiological arousal levels in the form of electrodermal activity (EDA) and electrocardiogram (ECG) data were collected from 87 participants, randomly distributed across the two conditions. Results indicated that VR training is as good as, or even slightly better than, physical training in improving task performance. Moreover, the participants in the VR condition reported an increase in self-efficacy and immersion, while marginally significant differences were observed in presence and temporal demand (retrieved from NASA-TLX measurements). Participants in the VR condition showed on average less arousal than those in the physical condition. Though correlation analyses between performance metrics and arousal levels did not yield any statistically significant results, a closer examination of EDA values revealed that participants with lower arousal levels during training, across conditions, demonstrated better improvements in performance than those with higher arousal. These findings demonstrate the effectiveness of VR in training and the potential of using arousal and training performance data for designing adaptive VR training systems. This paper also discusses implications for researchers who consider using biosensors and VR for motor skill experiments. Supplementary Information: The online version contains supplementary material available at 10.1007/s10055-022-00699-3.
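The arousal-performance link examined here reduces to a correlation between per-participant arousal and improvement scores; below is a sketch with simulated data in which the negative relationship is built in by construction (the study's actual data are not reproduced).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 87  # matches the study's sample size; the data are simulated

# Hypothetical mean EDA during training and performance-improvement scores,
# generated so that lower arousal goes with larger improvements
eda = rng.normal(5.0, 1.5, n)
improvement = -0.4 * eda + rng.normal(0, 1.0, n)

r = np.corrcoef(eda, improvement)[0, 1]
print(round(r, 2))
```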
Affiliation(s)
- Unnikrishnan Radhakrishnan
- Department of Business Development and Technology, Aarhus University, Birk Centerpark 15, 7400 Herning, Denmark
- Francesco Chinello
- Department of Business Development and Technology, Aarhus University, Birk Centerpark 15, 7400 Herning, Denmark
- Konstantinos Koumaditis
- Department of Business Development and Technology, Aarhus University, Birk Centerpark 15, 7400 Herning, Denmark
14
Tian F, Wang X, Cheng W, Lee M, Jin Y. A Comparative Study on the Temporal Effects of 2D and VR Emotional Arousal. Sensors (Basel) 2022; 22:8491. [PMID: 36366201 PMCID: PMC9656226 DOI: 10.3390/s22218491] [Received: 08/22/2022] [Revised: 10/25/2022] [Accepted: 10/26/2022] [Indexed: 06/16/2023]
Abstract
Previous research comparing traditional two-dimensional (2D) and virtual reality with stereoscopic vision (VR-3D) stimulation revealed that VR-3D resulted in higher levels of immersion. However, the effects of these two visual modes on emotional stimulus processing have not been thoroughly investigated, and the underlying neural processing mechanisms remain unclear. This paper therefore presents a cognitive psychology experiment investigating how these two visual modes influence emotional processing. To reduce fatigue, participants (n = 16) were randomly assigned to watch a series of 2D and VR-3D short emotional videos over two days, while electroencephalograms (EEG) were recorded simultaneously. The results showed that even in the absence of sound, visual stimuli in the VR environment significantly increased emotional arousal, especially in the frontal, parietal, temporal, and occipital regions. On this basis, visual evoked potential (VEP) analysis was performed. Compared with 2D, VR stimulation led to a larger P1 component amplitude, while VEP analysis based on the time course of the late event-related potential component revealed that, after 1200 ms, the differences across visual modes became stable and significant. Furthermore, the results also confirmed that the VEP in the early stages is more sensitive to emotion, and presumably corresponding emotion regulation mechanisms operate in the late stages.
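The P1 analysis described above rests on standard VEP epoch averaging; below is a sketch with synthetic trials, where the sampling rate, baseline window, and 80-130 ms search window are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

fs = 500  # Hz, assumed sampling rate
rng = np.random.default_rng(2)

# Epoch time axis: -100 ms to +500 ms around stimulus onset
t = np.arange(-0.1, 0.5, 1 / fs)

# 40 simulated trials: a P1-like positivity near 100 ms buried in noise
p1 = 2.0 * np.exp(-((t - 0.1) ** 2) / (2 * 0.015 ** 2))
epochs = p1 + rng.normal(0, 3.0, (40, t.size))

# VEP = trial average, baseline-corrected on the pre-stimulus interval
vep = epochs.mean(axis=0)
vep -= vep[t < 0].mean()

# P1 amplitude: maximum within an 80-130 ms search window
win = (t >= 0.08) & (t <= 0.13)
p1_amp = vep[win].max()
print(round(p1_amp, 2))
```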
Affiliation(s)
- Feng Tian
- Shanghai Film Academy, Shanghai University, Shanghai 200072, China
- Shanghai Film Special Effects Engineering Technology Research Center, Shanghai University, Shanghai 200072, China
- Xuefei Wang
- Shanghai Film Academy, Shanghai University, Shanghai 200072, China
- Wanqiu Cheng
- Shanghai Film Academy, Shanghai University, Shanghai 200072, China
- Mingxuan Lee
- Shanghai Film Academy, Shanghai University, Shanghai 200072, China
- Yuanyuan Jin
- Shanghai Film Academy, Shanghai University, Shanghai 200072, China
15
Isenstein EL, Waz T, LoPrete A, Hernandez Y, Knight EJ, Busza A, Tadin D. Rapid assessment of hand reaching using virtual reality and application in cerebellar stroke. PLoS One 2022; 17:e0275220. [PMID: 36174027 PMCID: PMC9522266 DOI: 10.1371/journal.pone.0275220] [Received: 03/09/2022] [Accepted: 09/13/2022] [Indexed: 11/19/2022] Open
Abstract
The acquisition of sensory information about the world is a dynamic and interactive experience, yet the majority of sensory research focuses on perception without action and is conducted with participants who are passive observers with very limited control over their environment. This approach allows for highly controlled, repeatable experiments and has led to major advances in our understanding of basic sensory processing. Typical human perceptual experiences, however, are far more complex than conventional action-perception experiments and often involve bi-directional interactions between perception and action. Innovations in virtual reality (VR) technology offer an approach to close this notable disconnect between perceptual experiences and experiments. VR experiments can be conducted with a high level of empirical control while also allowing for movement and agency as well as controlled naturalistic environments. New VR technology also permits tracking of fine hand movements, allowing for seamless empirical integration of perception and action. Here, we used VR to assess how multisensory information and cognitive demands affect hand movements while reaching for virtual targets. First, we manipulated the visibility of the reaching hand to uncouple vision and proprioception in a task measuring accuracy while reaching toward a virtual target (n = 20, healthy young adults). The results, which as expected revealed multisensory facilitation, provided a rapid and highly sensitive measure of isolated proprioceptive accuracy. In the second experiment, we presented the virtual target only briefly and showed that VR can be used as an efficient and robust measurement of spatial memory (n = 18, healthy young adults). Finally, to assess the feasibility of using VR to study perception and action in populations with physical disabilities, we showed that the results from the visual-proprioceptive task generalize to two patients with recent cerebellar stroke. Overall, we show that VR coupled with hand-tracking offers an efficient and adaptable way to study human perception and action.
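The reaching-accuracy measure such a task yields reduces to endpoint error: the distance between each tracked reach endpoint and the virtual target. A minimal sketch with invented coordinates (not the study's data):

```python
import numpy as np

# Invented target position and 3D reach endpoints (meters) from hand tracking
target = np.array([0.30, 0.10, 0.45])
endpoints = np.array([
    [0.31, 0.11, 0.44],
    [0.28, 0.09, 0.46],
    [0.33, 0.12, 0.43],
    [0.30, 0.08, 0.47],
])

# Accuracy: mean Euclidean distance from each endpoint to the target
errors = np.linalg.norm(endpoints - target, axis=1)
print(round(errors.mean() * 100, 2), "cm")
```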
Affiliation(s)
- E. L. Isenstein
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, United States of America
- Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, United States of America
- Center for Visual Science, University of Rochester, Rochester, NY, United States of America
- T. Waz
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, United States of America
- A. LoPrete
- Center for Visual Science, University of Rochester, Rochester, NY, United States of America
- Center for Neuroscience and Behavior, American University, Washington, DC, United States of America
- Bioengineering Graduate Group, University of Pennsylvania, Philadelphia, PA, United States of America
- Y. Hernandez
- Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY, United States of America
- The City College of New York, CUNY, New York, NY, United States of America
- E. J. Knight
- Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY, United States of America
- Division of Developmental and Behavioral Pediatrics, Department of Pediatrics, University of Rochester School of Medicine and Dentistry, Rochester, NY, United States of America
- A. Busza
- Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY, United States of America
- Department of Neurology, University of Rochester Medical Center, Rochester, NY, United States of America
- D. Tadin
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, United States of America
- Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY, United States of America
- Center for Visual Science, University of Rochester, Rochester, NY, United States of America
- Department of Ophthalmology, University of Rochester School of Medicine and Dentistry, Rochester, NY, United States of America
16
Ochi G, Kuwamizu R, Fujimoto T, Ikarashi K, Yamashiro K, Sato D. The Effects of Acute Virtual Reality Exergaming on Mood and Executive Function: Exploratory Crossover Trial. JMIR Serious Games 2022; 10:e38200. [PMID: 36169992 PMCID: PMC9557761 DOI: 10.2196/38200] [Received: 03/23/2022] [Revised: 08/03/2022] [Accepted: 09/02/2022] [Indexed: 11/25/2022] Open
Abstract
Background: Virtual reality (VR) exergaming is a new intervention strategy to help humans engage in physical activity to enhance mood. VR exergaming may improve both mood and executive function by acting on the prefrontal cortex, expanding its potential benefits. However, the impact of VR exergaming on executive function has not been fully investigated, and associated intervention strategies have not yet been established. Objective: This study aims to investigate the effects of 10 minutes of VR exergaming on mood and executive function. Methods: A total of 12 participants played the exergame "FitXR" under 3 conditions: (1) a VR exergame condition in which they played using a head-mounted display (VR-EX), (2) playing the exergame in front of a flat display (2D-EX), and (3) a resting condition in which they sat in a chair. The color-word Stroop task (CWST), which assesses executive function, and the short forms of the Profile of Mood States second edition (POMS2) and the Two-Dimensional Mood Scale (TDMS), which assess mood, were administered before and after the exercise or rest conditions. Results: The VR-EX condition increased the POMS2 vigor activity score (rest vs VR-EX: t11=3.69, P=.003) as well as the TDMS arousal (rest vs 2D-EX: t11=5.34, P<.001; rest vs VR-EX: t11=5.99, P<.001; 2D-EX vs VR-EX: t11=3.02, P=.01) and vitality scores (rest vs 2D-EX: t11=3.74, P=.007; rest vs VR-EX: t11=4.84, P=.002; 2D-EX vs VR-EX: t11=3.53, P=.006), suggesting that VR exergaming enhanced mood. Conversely, there was no effect on CWST performance in either the 2D-EX or VR-EX condition. Interestingly, the VR-EX condition showed a significant positive correlation between changes in arousal and CWST reaction time (r=0.58, P=.046), suggesting that the benefit of exergaming for executive function may disappear when arousal is excessively increased in VR exergaming.
Conclusions: Our findings showed that 10 minutes of VR exergaming enhanced mood but did not affect executive function. This suggests that some VR content may increase cognitive demands, leading to psychological fatigue and cognitive decline as an individual approaches the limits of available attentional capacity. Future research must examine combinations of exercise and VR that enhance both brain function and mood.
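The t11 statistics above are paired (within-subject) t-tests; the comparison can be sketched as follows, with fabricated pre/post scores for n = 12 (matching the study's sample size, not its data).

```python
import numpy as np

# Fabricated pre/post vitality scores for 12 participants
pre = np.array([4.1, 3.8, 5.0, 4.4, 3.9, 4.7, 4.2, 5.1, 3.6, 4.8, 4.0, 4.5])
post = pre + np.array([1.2, 0.8, 1.5, 0.9, 1.1, 0.7, 1.3, 1.0, 1.4, 0.6, 1.2, 0.9])

# Paired t statistic: mean difference over its standard error (df = n - 1 = 11)
d = post - pre
t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
print(round(t_stat, 2))
```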
Affiliation(s)
- Genta Ochi
- Department of Health and Sports, Niigata University of Health and Welfare, Niigata, Japan
- Institute for Human Movement and Medical Sciences, Niigata University of Health and Welfare, Niigata, Japan
- Ryuta Kuwamizu
- Laboratory of Exercise Biochemistry and Neuroendocrinology, Faculty of Health and Sport Sciences, University of Tsukuba, Tsukuba, Japan
- Tomomi Fujimoto
- Department of Health and Sports, Niigata University of Health and Welfare, Niigata, Japan
- Institute for Human Movement and Medical Sciences, Niigata University of Health and Welfare, Niigata, Japan
- Koyuki Ikarashi
- Institute for Human Movement and Medical Sciences, Niigata University of Health and Welfare, Niigata, Japan
- Major of Health and Welfare, Graduate School of Niigata University of Health and Welfare, Niigata, Japan
- Koya Yamashiro
- Department of Health and Sports, Niigata University of Health and Welfare, Niigata, Japan
- Institute for Human Movement and Medical Sciences, Niigata University of Health and Welfare, Niigata, Japan
- Daisuke Sato
- Department of Health and Sports, Niigata University of Health and Welfare, Niigata, Japan
- Institute for Human Movement and Medical Sciences, Niigata University of Health and Welfare, Niigata, Japan
17
Daşdemir Y. Cognitive investigation on the effect of augmented reality-based reading on emotion classification performance: A new dataset. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103942] [Indexed: 11/02/2022]
18
Tian F, Zhang W, Li Y. [Electrophysiological characteristics of emotion arousal difference between stereoscopic and non-stereoscopic virtual reality films]. Sheng Wu Yi Xue Gong Cheng Xue Za Zhi (Journal of Biomedical Engineering) 2022; 39:56-66. [PMID: 35231966 PMCID: PMC9927750 DOI: 10.7507/1001-5515.202101010] [Received: 01/04/2021] [Revised: 11/28/2021] [Indexed: 06/14/2023]
Abstract
There are two modes for displaying panoramic movies in a virtual reality (VR) environment: non-stereoscopic (2D) and stereoscopic (3D). Whether these two display modes differ in their activation effect on emotional arousal, and what characterizes the related neural activity, has not been fully studied. In this paper, we designed a cognitive psychology experiment to compare the effects of VR-2D and VR-3D on emotional arousal by analyzing synchronously collected scalp electroencephalogram (EEG) signals, and used a support vector machine (SVM) to verify the neurophysiological differences between the two modes in the VR environment. The results showed that compared with VR-2D films, VR-3D films evoked significantly higher EEG power (mainly reflected in α and β activities). The significantly higher β power in VR-3D mode showed that 3D vision brought more intense cortical activity, which might lead to higher arousal. At the same time, the more intense α activity in the occipital region also suggested that VR-3D films might cause greater visual fatigue. By means of neurocinematics, this paper demonstrates that EEG activity can well reflect the effects of different vision modes on the characteristics of viewers' neural activity. The current study provides theoretical support not only for future exploration of the image language of VR, but also for future VR film shooting methods and human emotion research.
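The α/β findings above come down to band-power estimation from the EEG spectrum; below is a numpy-only sketch with simulated occipital signals (the paper classified real recordings with an SVM; the sampling rate and amplitudes here are assumptions).

```python
import numpy as np

fs = 250  # Hz, assumed sampling rate
rng = np.random.default_rng(3)
t = np.arange(0, 10, 1 / fs)  # 10 s of signal

# Simulated EEG: the "VR-3D" trace carries a stronger 10 Hz (alpha) oscillation
eeg_2d = 1.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1.0, t.size)
eeg_3d = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1.0, t.size)

def band_power(x, fs, lo, hi):
    """Mean power spectral density inside [lo, hi] Hz via the periodogram."""
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * x.size)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

alpha_2d = band_power(eeg_2d, fs, 8, 13)
alpha_3d = band_power(eeg_3d, fs, 8, 13)
print(alpha_3d > alpha_2d)
```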
Affiliation(s)
- Feng Tian
- Shanghai Film Academy, Shanghai University, Shanghai 200072, P. R. China
- Shanghai Engineering Research Center of Motion Picture Special Effects, Shanghai University, Shanghai 200072, P. R. China
- Wenrui Zhang
- Shanghai Film Academy, Shanghai University, Shanghai 200072, P. R. China
- Yingjie Li
- Shanghai Film Academy, Shanghai University, Shanghai 200072, P. R. China
- Shanghai Engineering Research Center of Motion Picture Special Effects, Shanghai University, Shanghai 200072, P. R. China
19
A High-Density EEG Study Investigating VR Film Editing and Cognitive Event Segmentation Theory. Sensors (Basel) 2021; 21:7176. [PMID: 34770482 PMCID: PMC8586935 DOI: 10.3390/s21217176] [Received: 09/22/2021] [Revised: 10/25/2021] [Accepted: 10/26/2021] [Indexed: 12/02/2022]
Abstract
This paper introduces a cognitive psychology experiment conducted to analyze how traditional film editing methods and the application of cognitive event segmentation theory perform in virtual reality (VR). Thirty volunteers were recruited and asked to watch a series of short VR videos designed along three dimensions: time, action (characters), and space. Electroencephalograms (EEG) were recorded simultaneously during their participation. Subjective results show that any of the editing methods used led to an increased load and reduced immersion. Furthermore, cognitive event segmentation theory also plays an instructive role in VR editing, with differences mainly focused on the frontal, parietal, and central regions. On this basis, visual evoked potential (VEP) analysis was performed, and the standardized low-resolution brain electromagnetic tomography (sLORETA) source-localization method was used to analyze the data. The results of the VEP analysis suggest that cuts usually elicit a late event-related potential component, and that the sources of the VEP are mainly the frontal and parietal lobes. The insights derived from this work can be used as guidance for VR content creation, allowing VR image editing to reveal greater richness and unique beauty.