1
Peng K, Moussavi Z, Karunakaran KD, Borsook D, Lesage F, Nguyen DK. iVR-fNIRS: studying brain functions in a fully immersive virtual environment. Neurophotonics 2024; 11:020601. [PMID: 38577629 PMCID: PMC10993907 DOI: 10.1117/1.nph.11.2.020601]
Abstract
Immersive virtual reality (iVR) employs head-mounted displays or cave-like environments to create a sensory-rich virtual experience that simulates the physical presence of a user in a digital space. The technology holds immense promise for neuroscience research and therapy. In particular, virtual reality (VR) technologies facilitate the development of diverse tasks and scenarios that closely mirror real-life situations to stimulate the brain within a controlled and secure setting. VR also offers a cost-effective way to provide a comparable sense of interaction when conventional stimulation methods are limited or unfeasible. Although combining iVR with traditional brain imaging techniques can be difficult due to signal interference or instrumentation issues, recent work has proposed the use of functional near-infrared spectroscopy (fNIRS) in conjunction with iVR for versatile brain stimulation paradigms and flexible examination of brain responses. We present a comprehensive review of current research studies employing an iVR-fNIRS setup, covering device types, stimulation approaches, data analysis methods, and major scientific findings. The literature demonstrates a high potential for iVR-fNIRS to explore various cognitive, behavioral, and motor functions in a fully immersive virtual environment. Such studies should set a foundation for adaptive iVR programs for both training (e.g., in novel environments) and clinical therapeutics (e.g., pain, motor and sensory disorders, and other psychiatric conditions).
Affiliation(s)
- Ke Peng
- University of Manitoba, Department of Electrical and Computer Engineering, Price Faculty of Engineering, Winnipeg, Manitoba, Canada
- Zahra Moussavi
- University of Manitoba, Department of Electrical and Computer Engineering, Price Faculty of Engineering, Winnipeg, Manitoba, Canada
- Keerthana Deepti Karunakaran
- Massachusetts General Hospital, Harvard Medical School, Department of Psychiatry, Boston, Massachusetts, United States
- David Borsook
- Massachusetts General Hospital, Harvard Medical School, Department of Psychiatry, Boston, Massachusetts, United States
- Massachusetts General Hospital, Harvard Medical School, Department of Radiology, Boston, Massachusetts, United States
- Frédéric Lesage
- University of Montreal, Institute of Biomedical Engineering, Department of Electrical Engineering, Ecole Polytechnique, Montreal, Quebec, Canada
- Montreal Heart Institute, Montreal, Quebec, Canada
- Dang Khoa Nguyen
- University of Montreal, Department of Neurosciences, Montreal, Quebec, Canada
- Research Center of the Hospital Center of the University of Montreal, Department of Neurology, Montreal, Quebec, Canada
2
Jörges B, Bury N, McManus M, Bansal A, Allison RS, Jenkin M, Harris LR. The effects of long-term exposure to microgravity and body orientation relative to gravity on perceived traveled distance. npj Microgravity 2024; 10:28. [PMID: 38480736 PMCID: PMC10937641 DOI: 10.1038/s41526-024-00376-6]
Abstract
Self-motion perception is a multi-sensory process that involves visual, vestibular, and other cues. When perception of self-motion is induced using only visual motion, vestibular cues indicate that the body remains stationary, which may bias an observer's perception. When the precision of the vestibular cue is lowered, for example by lying down or by adapting to microgravity, these biases may decrease, accompanied by a decrease in precision. To test this hypothesis, we used a move-to-target task in virtual reality. Astronauts and Earth-based controls were shown a target at a range of simulated distances. After the target disappeared, forward self-motion was induced by optic flow. Participants indicated when they thought they had arrived at the target's previously seen location. Astronauts completed the task on Earth (supine and sitting upright) prior to space travel, early and late in space, and early and late after landing. Controls completed the experiment on Earth using a similar regime, with a supine posture used to simulate being in space. While variability was similar across all conditions, the supine posture led to significantly higher gains (target distance/perceived travel distance) than the sitting posture for the astronauts pre-flight and early post-flight, but not late post-flight. No difference was detected between the astronauts' performance on Earth and onboard the ISS, indicating that judgments of traveled distance were largely unaffected by long-term exposure to microgravity. Overall, this constitutes mixed evidence as to whether non-visual cues to travel distance are integrated with relevant visual cues when self-motion is simulated using optic flow alone.
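The gain measure reported in this abstract is a simple ratio: simulated target distance divided by the distance actually traveled when the participant responded, so gains above 1 indicate stopping short of the target. A minimal illustrative sketch (all trial values hypothetical, not from the paper):

```python
# Illustrative only: the gain measure defined in the abstract,
# gain = target distance / perceived travel distance, where perceived
# travel distance is how far the optic-flow simulation had moved the
# participant when they pressed the button.

def travel_gain(target_distance_m, traveled_distance_m):
    """Gain > 1: participant responded before reaching the target
    (travel was perceived as covering more distance than simulated)."""
    return target_distance_m / traveled_distance_m

# Hypothetical trials: (simulated target distance, distance traveled
# at the moment of response), in meters.
trials = [(6.0, 5.0), (8.0, 8.0), (10.0, 12.5)]
gains = [travel_gain(target, traveled) for target, traveled in trials]
print(gains)  # [1.2, 1.0, 0.8]
```

Under this convention, the higher gains reported for the supine posture mean those participants responded after traveling a shorter distance than when seated.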
Affiliation(s)
- Björn Jörges
- Center for Vision Research, York University, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada.
- Nils Bury
- Center for Vision Research, York University, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada
- Institute of Visual Computing, Hochschule Bonn-Rhein-Sieg, Grantham-Allee 20, St. Augustin, 53757, Germany
- Meaghan McManus
- Center for Vision Research, York University, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada
- Department of Experimental Psychology, Justus Liebig University Giessen, Otto-Behaghel-Strasse 10F, 35394, Giessen, Germany
- Ambika Bansal
- Center for Vision Research, York University, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada
- Robert S Allison
- Center for Vision Research, York University, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada
- Michael Jenkin
- Center for Vision Research, York University, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada
- Laurence R Harris
- Center for Vision Research, York University, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada.
3
Abstract
Prior research on film viewing has demonstrated that participants frequently fail to notice spatiotemporal disruptions, such as scene edits in movies. Whether such insensitivity to spatiotemporal disruptions extends beyond scene edits in film viewing is not well understood. Across three experiments, we created spatiotemporal disruptions by presenting participants with minute-long movie clips that occasionally jumped ahead or backward in time. Participants were instructed to press a button when they noticed any disruption while watching the clips. The results from experiments 1 and 2 indicate that participants failed to notice the disruptions in continuity about 10% to 30% of the time, depending on the magnitude of the jump. In addition, detection rates were approximately 10% lower when the videos jumped ahead in time compared to backward jumps across all jump magnitudes, suggesting that knowledge about the future affects jump detection. An additional analysis examined optic flow similarity during these disruptions. Our findings suggest that insensitivity to spatiotemporal disruptions during film viewing is influenced by knowledge about future states.
Affiliation(s)
- Aditya Upadhyayula
- Center for Mind and Brain, University of California - Davis, Davis, CA, USA
- John M. Henderson
- Center for Mind and Brain, University of California - Davis, Davis, CA, USA
- Department of Psychology, University of California - Davis, Davis, CA, USA
4
Horrocks EAB, Mareschal I, Saleem AB. Walking humans and running mice: perception and neural encoding of optic flow during self-motion. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210450. [PMID: 36511417 PMCID: PMC9745880 DOI: 10.1098/rstb.2021.0450]
Abstract
Locomotion produces full-field optic flow that often dominates the visual motion inputs to an observer. The perception of optic flow is in turn important for animals to guide their heading and interact with moving objects. Understanding how locomotion influences optic flow processing and perception is therefore essential to understand how animals successfully interact with their environment. Here, we review research investigating how perception and neural encoding of optic flow are altered during self-motion, focusing on locomotion. Self-motion has been found to influence the estimation of, and sensitivity to, optic flow speed and direction. Nonvisual self-motion signals also increase compensation for self-driven optic flow when parsing the visual motion of moving objects. The integration of visual and nonvisual self-motion signals largely follows principles of Bayesian inference and can improve the precision and accuracy of self-motion perception. The calibration of visual and nonvisual self-motion signals is dynamic, reflecting the changing visuomotor contingencies across different environmental contexts. Throughout this review, we consider experimental research using humans, non-human primates and mice. We highlight experimental challenges and opportunities afforded by each of these species and draw parallels between experimental findings. These findings reveal a profound influence of locomotion on optic flow processing and perception across species. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
Affiliation(s)
- Edward A. B. Horrocks
- Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London WC1H 0AP, UK
- Isabelle Mareschal
- School of Biological and Behavioural Sciences, Queen Mary, University of London, London E1 4NS, UK
- Aman B. Saleem
- Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London WC1H 0AP, UK
5
Ali MA, Bollmann JH. Motion vision: Course control in the developing visual system. Curr Biol 2022; 32:R520-R523. [PMID: 35671725 DOI: 10.1016/j.cub.2022.04.084]
Abstract
As we move around, the image pattern on our retina is constantly changing. Nervous systems have evolved to detect such global 'optic flow' patterns. A new study reveals how optic flow is encoded in the larval zebrafish brain and could be used for the estimation of self-motion.
Affiliation(s)
- Mir Ahsan Ali
- Institute of Biology I, Faculty of Biology, University of Freiburg, 79104 Freiburg, Germany
- Johann H Bollmann
- Institute of Biology I, Faculty of Biology, University of Freiburg, 79104 Freiburg, Germany; Bernstein Center Freiburg, University of Freiburg, 79104 Freiburg, Germany.