1
Mynick A, Steel A, Jayaraman A, Botch TL, Burrows A, Robertson CE. Memory-based predictions prime perceptual judgments across head turns in immersive, real-world scenes. Curr Biol 2024:S0960-9822(24)01565-3. PMID: 39694030. DOI: 10.1016/j.cub.2024.11.024.
Abstract
Each view of our environment captures only a subset of our immersive surroundings. Yet, our visual experience feels seamless. A puzzle for human neuroscience is to determine what cognitive mechanisms enable us to overcome our limited field of view and efficiently anticipate new views as we sample our visual surroundings. Here, we tested whether memory-based predictions of upcoming scene views facilitate efficient perceptual judgments across head turns. We tested this hypothesis using immersive, head-mounted virtual reality (VR). After learning a set of immersive real-world environments, participants (n = 101 across 4 experiments) were briefly primed with a single view from a studied environment and then turned left or right to make a perceptual judgment about an adjacent scene view. We found that participants' perceptual judgments were faster when they were primed with images from the same (vs. neutral or different) environments. Importantly, priming required memory: it only occurred in learned (vs. novel) environments, where the link between adjacent scene views was known. Further, consistent with a role in supporting active vision, priming only occurred in the direction of planned head turns and only benefited judgments for scene views presented in their learned spatiotopic positions. Taken together, we propose that memory-based predictions facilitate rapid perception across large-scale visual actions, such as head and body movements, and may be critical for efficient behavior in complex immersive environments.
Affiliation(s)
- Anna Mynick
- Department of Psychological and Brain Sciences, Dartmouth College, 3 Maynard Street, Hanover, NH 03755, USA
- Adam Steel
- Department of Psychological and Brain Sciences, Dartmouth College, 3 Maynard Street, Hanover, NH 03755, USA
- Adithi Jayaraman
- Department of Psychological and Brain Sciences, Dartmouth College, 3 Maynard Street, Hanover, NH 03755, USA
- Thomas L Botch
- Department of Psychological and Brain Sciences, Dartmouth College, 3 Maynard Street, Hanover, NH 03755, USA
- Allie Burrows
- Department of Psychological and Brain Sciences, Dartmouth College, 3 Maynard Street, Hanover, NH 03755, USA
- Caroline E Robertson
- Department of Psychological and Brain Sciences, Dartmouth College, 3 Maynard Street, Hanover, NH 03755, USA
2
Stein N, Watson T, Lappe M, Westendorf M, Durant S. Eye and head movements in visual search in the extended field of view. Sci Rep 2024; 14:8907. PMID: 38632334. PMCID: PMC11023950. DOI: 10.1038/s41598-024-59657-5.
Abstract
In natural environments, head movements are required to search for objects outside the field of view (FoV). Here we investigate whether a salient target in an extended visual search array is detected faster once a head movement brings it into the FoV. We conducted two virtual reality experiments using spatially clustered sets of stimuli to observe target detection and head and eye movements during visual search. Participants completed search tasks under three conditions: (1) the target was in the initial FoV; (2) a head movement was needed to bring the target into the FoV; (3) as in condition 2, but the periphery was initially hidden and appeared only after a head movement brought the location of the target set into the FoV. We measured search time until participants found a more salient (O) or less salient (T) target among distractors (L). On average, O's were found faster than T's. Gaze analysis showed that saliency facilitated search, with the target guiding gaze, only when the target was within the initial FoV. When targets required a head movement to enter the FoV, participants followed the same search strategy as in trials without a visible target in the periphery. Moreover, faster search times for salient targets arose only from the time required to find the target once the target set was reached. This suggests that the effect of stimulus saliency differs between visual search on fixed displays and active search through an extended visual field.
Affiliation(s)
- Niklas Stein
- Institute for Psychology, University of Münster, 48143 Münster, Germany
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, 48143 Münster, Germany
- Tamara Watson
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW 2751, Australia
- Markus Lappe
- Institute for Psychology, University of Münster, 48143 Münster, Germany
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, 48143 Münster, Germany
- Maren Westendorf
- Institute for Psychology, University of Münster, 48143 Münster, Germany
- Szonya Durant
- Department of Psychology, Royal Holloway, University of London, Egham TW20 0EX, UK
3
Beitner J, Helbing J, David EJ, Võ MLH. Using a flashlight-contingent window paradigm to investigate visual search and object memory in virtual reality and on computer screens. Sci Rep 2024; 14:8596. PMID: 38615047. PMCID: PMC11379806. DOI: 10.1038/s41598-024-58941-8.
Abstract
A popular technique for modulating visual input during search is the gaze-contingent window. However, such windows are often rather discomforting, giving the impression of visual impairment. To counteract this, we asked participants in this study to search through illuminated as well as dark three-dimensional scenes using a more naturalistic flashlight with which they could illuminate the rooms. In a surprise incidental memory task, we tested the identities and locations of objects encountered during search. Importantly, we ran this study design both in immersive virtual reality (VR; Experiment 1) and on a desktop-computer screen (Experiment 2). As hypothesized, searching with a flashlight increased search difficulty and memory usage during search. We found a memory benefit for the identities of distractors in the flashlight condition in VR but not in the computer-screen experiment. Surprisingly, location memory was comparable across search conditions despite the enormous difference in visual input. Subtle differences across experiments appeared only in VR after accounting for previous recognition performance, hinting at a benefit of flashlight search in VR. Our findings highlight that removing visual information does not necessarily impair location memory, and that screen experiments using virtual environments can elicit the same major effects as VR setups.
Affiliation(s)
- Julia Beitner
- Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- Jason Helbing
- Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- Erwan Joël David
- Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- LIUM, Le Mans Université, Le Mans, France
- Melissa Lê-Hoa Võ
- Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
4
Han S, Blake R, Aubuchon C, Tadin D. Binocular rivalry under naturalistic geometry: Evidence from worlds simulated in virtual reality. PNAS Nexus 2024; 3:pgae054. PMID: 38380058. PMCID: PMC10877069. DOI: 10.1093/pnasnexus/pgae054.
Abstract
Binocular rivalry is a fascinating, widely studied visual phenomenon in which perception alternates between two competing images. This experience, however, is generally restricted to laboratory settings where two irreconcilable images are presented separately to the two eyes, an implausible geometry where two objects occupy the same physical location. Such laboratory experiences are in stark contrast to everyday visual behavior, where rivalry is almost never encountered, casting doubt on whether rivalry is relevant to our understanding of everyday binocular vision. To investigate the external validity of binocular rivalry, we manipulated the geometric plausibility of rival images using a naturalistic, cue-rich, 3D-corridor model created in virtual reality. Rival stimuli were presented in geometrically implausible, semi-plausible, or plausible layouts. Participants tracked rivalry fluctuations in each of these three layouts and for both static and moving rival stimuli. Results revealed significant and canonical binocular rivalry alternations regardless of geometrical plausibility and stimulus type. Rivalry occurred for layouts that mirrored the unnatural geometry used in laboratory studies and for layouts that mimicked real-world occlusion geometry. In a complementary 3D modeling analysis, we show that interocular conflict caused by geometrically plausible occlusion is a common outcome in a visual scene containing multiple objects. Together, our findings demonstrate that binocular rivalry can reliably occur for both geometrically implausible interocular conflicts and conflicts caused by a common form of naturalistic occlusion. Thus, key features of binocular rivalry are not simply laboratory artifacts but generalize to conditions that match the geometry of everyday binocular vision.
Affiliation(s)
- Shui'er Han
- Center for Visual Science, University of Rochester, Rochester, NY 14642, USA
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14642, USA
- Institute for Infocomm Research, Agency for Science, Technology and Research, Singapore 138632, Singapore
- Centre for Frontier AI Research, Agency for Science, Technology and Research, Singapore 138632, Singapore
- Randolph Blake
- Department of Psychology, Vanderbilt University, Nashville, TN 37240, USA
- Vanderbilt Vision Research Center, Vanderbilt University, Nashville, TN 37232, USA
- Celine Aubuchon
- Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Providence, RI 02912, USA
- Duje Tadin
- Center for Visual Science, University of Rochester, Rochester, NY 14642, USA
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14642, USA
- Department of Neuroscience, University of Rochester, Rochester, NY 14642, USA
- Department of Ophthalmology, University of Rochester, Rochester, NY 14642, USA