1. Mynick A, Steel A, Jayaraman A, Botch TL, Burrows A, Robertson CE. Memory-based predictions prime perceptual judgments across head turns in immersive, real-world scenes. Curr Biol 2025;35:121-130.e6. PMID: 39694030. DOI: 10.1016/j.cub.2024.11.024.
Abstract
Each view of our environment captures only a subset of our immersive surroundings. Yet, our visual experience feels seamless. A puzzle for human neuroscience is to determine what cognitive mechanisms enable us to overcome our limited field of view and efficiently anticipate new views as we sample our visual surroundings. Here, we tested whether memory-based predictions of upcoming scene views facilitate efficient perceptual judgments across head turns. We tested this hypothesis using immersive, head-mounted virtual reality (VR). After learning a set of immersive real-world environments, participants (n = 101 across 4 experiments) were briefly primed with a single view from a studied environment and then turned left or right to make a perceptual judgment about an adjacent scene view. We found that participants' perceptual judgments were faster when they were primed with images from the same (vs. neutral or different) environments. Importantly, priming required memory: it only occurred in learned (vs. novel) environments, where the link between adjacent scene views was known. Further, consistent with a role in supporting active vision, priming only occurred in the direction of planned head turns and only benefited judgments for scene views presented in their learned spatiotopic positions. Taken together, we propose that memory-based predictions facilitate rapid perception across large-scale visual actions, such as head and body movements, and may be critical for efficient behavior in complex immersive environments.
Affiliation(s)
- Anna Mynick: Department of Psychological and Brain Sciences, Dartmouth College, 3 Maynard Street, Hanover, NH 03755, USA
- Adam Steel: Department of Psychological and Brain Sciences, Dartmouth College, 3 Maynard Street, Hanover, NH 03755, USA
- Adithi Jayaraman: Department of Psychological and Brain Sciences, Dartmouth College, 3 Maynard Street, Hanover, NH 03755, USA
- Thomas L Botch: Department of Psychological and Brain Sciences, Dartmouth College, 3 Maynard Street, Hanover, NH 03755, USA
- Allie Burrows: Department of Psychological and Brain Sciences, Dartmouth College, 3 Maynard Street, Hanover, NH 03755, USA
- Caroline E Robertson: Department of Psychological and Brain Sciences, Dartmouth College, 3 Maynard Street, Hanover, NH 03755, USA
2. Trinkl N, Wolfe JM. Image memorability influences memory for where the item was seen but not when. Mem Cognit 2024. PMID: 39256320. DOI: 10.3758/s13421-024-01635-3.
Abstract
Observers can determine whether they have previously seen hundreds of images with more than 80% accuracy. This "massive memory" for WHAT we have seen is accompanied by smaller but still massive memories for WHERE and WHEN the item was seen (spatial and temporal massive memory). Recent studies have shown that certain images are more easily remembered than others (higher "memorability"). Does memorability influence spatial and temporal massive memory? In two experiments, viewers saw 150 images presented twice in random order. These 300 images were sequentially presented at random locations in a 7 × 7 grid. If an image was categorized as old, observers clicked on the spot in the grid where they thought they had previously seen it. They also noted when they had seen it: in Experiment 1 by clicking on a timeline, and in Experiment 2 by estimating the trial number on which the item first appeared. Replicating prior work, the data show that high-memorability images are remembered better than low-memorability images. Interestingly, in both experiments, spatial memory precision was correlated with image memorability, while temporal memory precision did not vary as a function of memorability. Apparently, the properties that make images memorable help us remember WHERE but not WHEN those images were presented. The lack of correlation between memorability and temporal memory is, of course, a negative result and should be treated with caution.
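The spatial-precision analysis described above can be illustrated with a short sketch: placement error as the Euclidean distance (in grid cells) between the clicked and true locations, correlated with per-image memorability scores. All function names, parameters, and numbers below are hypothetical illustrations, not the authors' actual analysis or data.

```python
import numpy as np

def spatial_error(clicked, true_pos):
    """Euclidean distance, in grid cells, between clicked and true 7x7 grid locations."""
    c = np.asarray(clicked, dtype=float)
    t = np.asarray(true_pos, dtype=float)
    return np.linalg.norm(c - t, axis=-1)

# Correlate per-image memorability with spatial placement error
# (all numbers below are made up for illustration).
memorability = np.array([0.9, 0.8, 0.5, 0.3])
errors = spatial_error([(1, 1), (2, 3), (5, 5), (0, 6)],
                       [(1, 2), (2, 5), (1, 5), (4, 2)])
# A negative r would mean more memorable images are placed more precisely.
r = np.corrcoef(memorability, errors)[0, 1]
```

In the paper's framing, a reliable negative correlation of this kind for spatial error, absent for temporal error, would reproduce the reported dissociation.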
Affiliation(s)
- Nathan Trinkl: Visual Attention Laboratory, Department of Surgery, Brigham and Women's Hospital, Boston, MA, USA
- Jeremy M Wolfe: Visual Attention Lab, Department of Surgery, Brigham & Women's Hospital, 900 Commonwealth Ave, 3rd Floor, Boston, MA 02215, USA; Departments of Ophthalmology & Radiology, Harvard Medical School, Boston, MA, USA
3. Walcher S, Korda Ž, Körner C, Benedek M. How workload and availability of spatial reference shape eye movement coupling in visuospatial working memory. Cognition 2024;249:105815. PMID: 38761645. DOI: 10.1016/j.cognition.2024.105815.
Abstract
Eyes are active in memory recall and visual imagination, yet our grasp of the underlying qualities and factors of these internally coupled eye movements is limited. To explore this, we studied 50 participants, examining how workload, spatial reference availability, and imagined movement direction influence the internal coupling of eye movements. We designed a visuospatial working memory task in which participants mentally moved a black patch along a path within a matrix; each trial involved one step along this path (presented via speakers: up, down, left, or right). We varied workload by adjusting matrix size (3 × 3 vs. 5 × 5), manipulated the availability of a spatial frame of reference by presenting either a blank screen (requiring participants to rely solely on their mental representation of the matrix) or a spatial reference in the form of an empty matrix, and contrasted active task performance with two control conditions involving only active or passive listening. Our findings show that eye movements consistently matched the imagined movement of the patch in the matrix and were not driven solely by auditory or semantic cues. While workload influenced pupil diameter, perceived demand, and performance, it had no observable impact on internal coupling. The availability of spatial reference enhanced the coupling of eye movements, leading to more frequent and more precise saccades that were also more resilient to noise and bias. The absence of workload effects on coupled saccades in our study, in combination with the relatively high degree of coupling observed even in the invisible-matrix condition, indicates that eye movements align with shifts in attention across both visually and internally represented information. This suggests that coupled eye movements are not merely strategic efforts to reduce workload, but rather a natural response to where attention is directed.
Affiliation(s)
- Sonja Walcher: Creative Cognition Lab, Institute of Psychology, University of Graz, Graz, Austria
- Živa Korda: Creative Cognition Lab, Institute of Psychology, University of Graz, Graz, Austria
- Christof Körner: Cognitive Psychology & Neuroscience, Institute of Psychology, University of Graz, Graz, Austria
- Mathias Benedek: Creative Cognition Lab, Institute of Psychology, University of Graz, Graz, Austria
4. Nolte D, Vidal De Palol M, Keshava A, Madrid-Carvajal J, Gert AL, von Butler EM, Kömürlüoğlu P, König P. Combining EEG and eye-tracking in virtual reality: Obtaining fixation-onset event-related potentials and event-related spectral perturbations. Atten Percept Psychophys 2024. PMID: 38977612. DOI: 10.3758/s13414-024-02917-3.
Abstract
Extensive research conducted in controlled laboratory settings has prompted an inquiry into how results can be generalized to real-world situations influenced by the subjects' actions. Virtual reality lends itself ideally to investigating complex situations but requires accurate classification of eye movements, especially when combining it with time-sensitive data such as EEG. We recorded eye-tracking data in virtual reality and classified it into gazes and saccades using a velocity-based classification algorithm, and we cut the continuous data into smaller segments to deal with varying noise levels, as introduced in the REMoDNaV algorithm. Furthermore, we corrected for participants' translational movement in virtual reality. Various measures, including visual inspection, event durations, and the velocity and dispersion distributions before and after gaze onset, indicate that we can accurately classify the continuous, free-exploration data. Combining the classified eye-tracking with the EEG data, we generated fixation-onset event-related potentials (ERPs) and event-related spectral perturbations (ERSPs), providing further evidence for the quality of the eye-movement classification and the timing of event onsets. Finally, investigating the correlation between single trials and the average ERP and ERSP identified that fixation-onset ERSPs are less time sensitive, require fewer repetitions of the same behavior, and are potentially better suited to study EEG signatures in naturalistic settings. We modified, designed, and tested an algorithm that allows the combination of EEG and eye-tracking data recorded in virtual reality.
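The core of a velocity-based classifier like the one described is simple: compute angular gaze speed between samples and label fast intervals as saccades. The sketch below shows the general idea only; it is not the authors' pipeline (which adds adaptive segmentation and motion correction), and the 100 deg/s threshold is an illustrative value, not one taken from the paper.

```python
import numpy as np

def classify_velocity(gaze_deg, t_s, saccade_thresh_deg_s=100.0):
    """Label each inter-sample interval as 'saccade' or 'gaze' by angular speed.

    gaze_deg: (N, 2) gaze positions in degrees of visual angle.
    t_s: (N,) sample timestamps in seconds.
    saccade_thresh_deg_s: illustrative fixed threshold (real pipelines
    typically adapt it to local noise, as in REMoDNaV).
    """
    d = np.diff(gaze_deg, axis=0)            # per-interval displacement (deg)
    dt = np.diff(t_s)                        # per-interval duration (s)
    speed = np.linalg.norm(d, axis=1) / dt   # angular speed (deg/s)
    return np.where(speed > saccade_thresh_deg_s, "saccade", "gaze")
```

Runs of consecutive "gaze" labels would then be merged into fixation/gaze events whose onsets time-lock the EEG epochs.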
Affiliation(s)
- Debora Nolte: Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090 Osnabrück, Germany
- Marc Vidal De Palol: Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090 Osnabrück, Germany
- Ashima Keshava: Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090 Osnabrück, Germany
- John Madrid-Carvajal: Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090 Osnabrück, Germany
- Anna L Gert: Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090 Osnabrück, Germany
- Eva-Marie von Butler: Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090 Osnabrück, Germany
- Pelin Kömürlüoğlu: Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090 Osnabrück, Germany
- Peter König: Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090 Osnabrück, Germany; Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
5.
Abstract
Working memory enables us to bridge past sensory information to upcoming future behaviour. Accordingly, by its very nature, working memory is concerned with two components: the past and the future. Yet, in conventional laboratory tasks, these two components are often conflated, such as when sensory information in working memory is encoded and tested at the same location. We developed a task in which we dissociated the past (encoded location) and future (to-be-tested location) attributes of visual contents in working memory. This enabled us to independently track the utilisation of past and future memory attributes through gaze, as observed during mnemonic selection. Our results reveal the joint consideration of past and future locations. This was prevalent even at the single-trial level of individual saccades that were jointly biased to the past and future. This uncovers the rich nature of working memory representations, whereby both past and future memory attributes are retained and can be accessed together when memory contents become relevant for behaviour.
Affiliation(s)
- Baiwei Liu: Institute for Brain and Behavior Amsterdam, Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Zampeta-Sofia Alexopoulou: Institute for Brain and Behavior Amsterdam, Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Freek van Ede: Institute for Brain and Behavior Amsterdam, Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
6. Gresch D, Boettcher SEP, van Ede F, Nobre AC. Shifting attention between perception and working memory. Cognition 2024;245:105731. PMID: 38278040. DOI: 10.1016/j.cognition.2024.105731.
Abstract
Most everyday tasks require shifting the focus of attention between sensory signals in the external environment and internal contents in working memory. To date, shifts of attention have been investigated within each domain, but shifts between the external and internal domain remain poorly understood. We developed a combined perception and working-memory task to investigate and compare the consequences of shifting spatial attention within and between domains in the service of a common orientation-reproduction task. Participants were sequentially cued to attend to items either in working memory or to an upcoming sensory stimulation. Stay trials provided a baseline condition, while shift trials required participants to shift their attention to another item within the same or different domain. Validating our experimental approach, we found evidence that participants shifted attention effectively in either domain (Experiment 1). In addition, we observed greater costs when transitioning attention between as compared to within domains (Experiments 1, 2). Strikingly, these costs persisted even when participants were given more time to complete the attentional shift (Experiment 2). Biases in fixational gaze behaviour tracked attentional orienting in both domains, but revealed no latency or magnitude difference for within- versus between-domain shifts (Experiment 1). Collectively, the results from Experiments 1 and 2 suggest that shifting between attentional domains might be regulated by a unique control function. Our results break new ground for exploring the ubiquitous act of shifting attention between perception and working memory to guide adaptive behaviour in everyday cognition.
Affiliation(s)
- Daniela Gresch: Department of Experimental Psychology, University of Oxford, Oxford, UK; Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
- Sage E P Boettcher: Department of Experimental Psychology, University of Oxford, Oxford, UK; Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
- Freek van Ede: Institute for Brain and Behavior Amsterdam, Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, the Netherlands
- Anna C Nobre: Department of Experimental Psychology, University of Oxford, Oxford, UK; Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK; Wu Tsai Institute, Yale University, New Haven, CT, USA; Department of Psychology, Yale University, New Haven, CT, USA
7. Neri P. Human sensory adaptation to the ecological structure of environmental statistics. J Vis 2024;24:3. PMID: 38441884. PMCID: PMC10916885. DOI: 10.1167/jov.24.3.3.
Abstract
Humans acquire sensory information via fast, highly specialized detectors: For example, edge detectors monitor restricted regions of visual space over timescales of 100-200 ms. Surprisingly, this study demonstrates that their operation is nevertheless shaped by the ecological consistency of slow global statistical structure in the environment. In the experiments, humans acquired feature information from brief localized elements embedded within a virtual environment. Cast shadows are important for determining the appearance and layout of the environment. When the statistical reliability of shadows was manipulated, human feature detectors implicitly adapted to these changes over minutes, adjusting their response properties to emphasize either "image-based" or "object-based" anchoring of local visual elements. More specifically, local visual operators were more firmly anchored around object representations when shadows were reliable. As shadow reliability was reduced, visual operators disengaged from objects and became anchored around image features. These results indicate that the notion of sensory adaptation must be reframed around complex statistical constructs with ecological validity. These constructs far exceed the spatiotemporal selectivity bandwidth of sensory detectors, thus demonstrating the highly integrated nature of sensory processing during natural behavior.
Affiliation(s)
- Peter Neri: Laboratoire des Systèmes Perceptifs (UMR 8248), École normale supérieure, PSL Research University, Paris, France (https://sites.google.com/site/neripeter/)
8. Gu Q, Zhang Q, Han Y, Li P, Gao Z, Shen M. Microsaccades reflect attention shifts: a mini review of 20 years of microsaccade research. Front Psychol 2024;15:1364939. PMID: 38440250. PMCID: PMC10909968. DOI: 10.3389/fpsyg.2024.1364939.
Abstract
Microsaccades are small, involuntary eye movements that occur during fixation. Since the 1950s, researchers have conducted extensive research on the role of microsaccades in visual information processing and found that they also play an important role in higher-level visual cognition. Research over the past 20 years has further suggested a close relationship between microsaccades and visual attention, yet a timely review is lacking. The current article aims to provide a state-of-the-art review and bring microsaccade studies to the attention of attention researchers. We first introduce the basic characteristics of microsaccades, then summarize the empirical evidence supporting the view that microsaccades can reflect both external (perception) and internal (working memory) attention shifts. We conclude by highlighting three promising avenues for future research.
Affiliation(s)
- Quan Gu: Yongjiang Laboratory, Ningbo, China; Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, China
- Qikai Zhang: Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, China
- Yueming Han: Shanghai Institute of Technical Physics of the Chinese Academy of Sciences, Shanghai, China; University of Chinese Academy of Sciences, Beijing, China
- Zaifeng Gao: Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, China
- Mowei Shen: Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, China
9. de Vries E, Fejer G, van Ede F. No obligatory trade-off between the use of space and time for working memory. Commun Psychol 2023;1:41. PMID: 38665249. PMCID: PMC11041649. DOI: 10.1038/s44271-023-00042-9.
Abstract
Space and time can each act as scaffolds for the individuation and selection of visual objects in working memory. Here we ask whether there is a trade-off between the use of space and time for visual working memory: whether observers will rely less on space, when memoranda can additionally be individuated through time. We tracked the use of space through directional biases in microsaccades after attention was directed to memory contents that had been encoded simultaneously or sequentially to the left and right of fixation. We found that spatial gaze biases were preserved when participants could (Experiment 1) and even when they had to (Experiment 2) additionally rely on time for object individuation. Thus, space remains a profound organizing medium for working memory even when other organizing sources are available and utilized, with no evidence for an obligatory trade-off between the use of space and time.
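A directional gaze bias of the kind tracked here is often quantified as the rate of (micro)saccades toward the memorized item's side minus the rate away from it. The sketch below is a generic illustration of that toward-minus-away measure under assumed conventions (positive x = rightward), not the paper's exact metric.

```python
import numpy as np

def directional_bias(saccade_dx, cued_side):
    """Toward-minus-away rate of horizontal (micro)saccades.

    saccade_dx: horizontal displacement of each saccade (positive = rightward).
    cued_side: 'left' or 'right', the memorized item's encoded side.
    Returns a value in [-1, 1]; positive means gaze is biased toward the
    cued side. Illustrative measure, not the authors' exact computation.
    """
    dx = np.asarray(saccade_dx, dtype=float)
    sign = -1.0 if cued_side == "left" else 1.0
    toward = np.mean(sign * dx > 0)   # fraction of saccades toward the cued side
    away = np.mean(sign * dx < 0)     # fraction away from it
    return toward - away
```

Comparing this bias between simultaneous- and sequential-encoding conditions is one way to test whether reliance on space is reduced when time is also available for individuation.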
Affiliation(s)
- Eelke de Vries: Department of Experimental and Applied Psychology, Institute for Brain and Behavior Amsterdam, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- George Fejer: Department of Psychology, Cognitive Psychology, University of Konstanz, Konstanz, Germany
- Freek van Ede: Department of Experimental and Applied Psychology, Institute for Brain and Behavior Amsterdam, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
10. Martarelli CS, Chiquet S, Ertl M. Keeping track of reality: embedding visual memory in natural behaviour. Memory 2023;31:1295-1305. PMID: 37727126. DOI: 10.1080/09658211.2023.2260148.
Abstract
Since immersive virtual reality (IVR) emerged as a research method in the 1980s, the focus has been on the similarities between IVR and actual reality. In this vein, it has been suggested that IVR methodology might fill the gap between laboratory studies and real life. IVR allows for high internal validity (i.e., a high degree of experimental control and experimental replicability), as well as high external validity by letting participants engage with the environment in an almost natural manner. Despite internal validity being crucial to experimental designs, external validity also matters in terms of the generalizability of results. In this paper, we first highlight and summarise the similarities and differences between IVR, desktop situations (both non-immersive VR and computer experiments), and reality. In the second step, we propose that IVR is a promising tool for visual memory research in terms of investigating the representation of visual information embedded in natural behaviour. We encourage researchers to carry out experiments on both two-dimensional computer screens and in immersive virtual environments to investigate visual memory and validate and replicate the findings. IVR is valuable because of its potential to improve theoretical understanding and increase the psychological relevance of the findings.
Affiliation(s)
- Sandra Chiquet: Faculty of Psychology, UniDistance Suisse, Brig, Switzerland
- Matthias Ertl: Department of Psychology, University of Bern, Bern, Switzerland
11. Draschkow D, Anderson NC, David E, Gauge N, Kingstone A, Kumle L, Laurent X, Nobre AC, Shiels S, Võ MLH. Using XR (Extended Reality) for Behavioral, Clinical, and Learning Sciences Requires Updates in Infrastructure and Funding. Policy Insights Behav Brain Sci 2023;10:317-323. PMID: 37900910. PMCID: PMC10602770. DOI: 10.1177/23727322231196305.
Abstract
Extended reality (XR, including augmented and virtual reality) creates a powerful intersection between information technology and cognitive, clinical, and education sciences. XR technology has long captured the public imagination, and its development is the focus of major technology companies. This article demonstrates the potential of XR to (1) deliver behavioral insights, (2) transform clinical treatments, and (3) improve learning and education. However, without appropriate policy, funding, and infrastructural investment, many research institutions will struggle to keep pace with the advances and opportunities of XR. To realize the full potential of XR for basic and translational research, funding should incentivize (1) appropriate training, (2) open software solutions, and (3) collaborations between complementary academic and industry partners. Bolstering the XR research infrastructure with the right investments and incentives is vital for delivering on the potential for transformative discoveries, innovations, and applications.
Affiliation(s)
- Dejan Draschkow: Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, UK; Department of Experimental Psychology, University of Oxford, Oxford, UK
- Nicola C. Anderson: Department of Psychology, University of British Columbia, Vancouver, Canada
- Erwan David: Department of Psychology, Scene Grammar Lab, Goethe University Frankfurt, Frankfurt am Main, Germany
- Nathan Gauge: OxSTaR (Oxford Simulation Teaching and Research), Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- Alan Kingstone: Department of Psychology, University of British Columbia, Vancouver, Canada
- Levi Kumle: Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, UK; Department of Experimental Psychology, University of Oxford, Oxford, UK
- Xavier Laurent: Centre for Teaching and Learning, University of Oxford, Oxford, UK
- Anna C. Nobre: Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, UK; Department of Experimental Psychology, University of Oxford, Oxford, UK; Wu Tsai Institute, Yale University, New Haven, USA
- Sally Shiels: OxSTaR (Oxford Simulation Teaching and Research), Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- Melissa L.-H. Võ: Department of Psychology, Scene Grammar Lab, Goethe University Frankfurt, Frankfurt am Main, Germany
12. Steel A, Garcia BD, Goyal K, Mynick A, Robertson CE. Scene Perception and Visuospatial Memory Converge at the Anterior Edge of Visually Responsive Cortex. J Neurosci 2023;43:5723-5737. PMID: 37474310. PMCID: PMC10401646. DOI: 10.1523/jneurosci.2043-22.2023.
Abstract
To fluidly engage with the world, our brains must simultaneously represent both the scene in front of us and our memory of the immediate surrounding environment (i.e., local visuospatial context). How does the brain's functional architecture enable sensory and mnemonic representations to closely interface while also avoiding sensory-mnemonic interference? Here, we asked this question using first-person, head-mounted virtual reality and fMRI. Using virtual reality, human participants of both sexes learned a set of immersive, real-world visuospatial environments in which we systematically manipulated the extent of visuospatial context associated with a scene image in memory across three learning conditions, spanning from a single FOV to a city street. We used individualized, within-subject fMRI to determine which brain areas support memory of the visuospatial context associated with a scene during recall (Experiment 1) and recognition (Experiment 2). Across the whole brain, activity in three patches of cortex was modulated by the amount of known visuospatial context, each located immediately anterior to one of the three scene perception areas of high-level visual cortex. Individual subject analyses revealed that these anterior patches corresponded to three functionally defined place memory areas, which selectively respond when visually recalling personally familiar places. In addition to showing activity levels that were modulated by the amount of visuospatial context, multivariate analyses showed that these anterior areas represented the identity of the specific environment being recalled. Together, these results suggest a convergence zone for scene perception and memory of the local visuospatial context at the anterior edge of high-level visual cortex.

Significance Statement: As we move through the world, the visual scene around us is integrated with our memory of the wider visuospatial context. Here, we sought to understand how the functional architecture of the brain enables coexisting representations of the current visual scene and memory of the surrounding environment. Using a combination of immersive virtual reality and fMRI, we show that memory of visuospatial context outside the current FOV is represented in a distinct set of brain areas immediately anterior and adjacent to the perceptually oriented scene-selective areas of high-level visual cortex. This functional architecture would allow efficient interaction between immediately adjacent mnemonic and perceptual areas while also minimizing interference between mnemonic and perceptual representations.
Affiliation(s)
- Adam Steel: Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire 03755
- Brenda D Garcia: Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire 03755
- Kala Goyal: Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire 03755
- Anna Mynick: Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire 03755
- Caroline E Robertson: Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire 03755
13. Chawoush B, Draschkow D, van Ede F. Capacity and selection in immersive visual working memory following naturalistic object disappearance. J Vis 2023;23:9. PMID: 37548958. PMCID: PMC10411649. DOI: 10.1167/jov.23.8.9.
Abstract
Visual working memory-holding past visual information in mind for upcoming behavior-is commonly studied following the abrupt removal of visual objects from static two-dimensional (2D) displays. In everyday life, visual objects do not typically vanish from the environment in front of us. Rather, visual objects tend to enter working memory following self or object motion: disappearing from view gradually and changing the spatial relation between memoranda and observer. Here, we used virtual reality (VR) to investigate whether two classic findings from visual working memory research-a capacity of around three objects and the reliance on space for object selection-generalize to more naturalistic modes of object disappearance. Our static reference condition mimicked traditional laboratory tasks whereby visual objects were held static in front of the participant and removed from view abruptly. In our critical flow condition, the same visual objects flowed by participants, disappearing from view gradually and behind the observer. We considered visual working memory performance and capacity, as well as space-based mnemonic selection, indexed by directional biases in gaze. Despite vastly distinct modes of object disappearance and altered spatial relations between memoranda and observer, we found comparable capacity and comparable gaze signatures of space-based mnemonic selection. This finding reveals how classic findings from visual working memory research generalize to immersive situations with more naturalistic modes of object disappearance and with dynamic spatial relations between memoranda and observer.
Affiliation(s)
- Babak Chawoush
- Institute for Brain and Behavior Amsterdam, Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Dejan Draschkow
- Department of Experimental Psychology, University of Oxford, United Kingdom
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, United Kingdom
- Freek van Ede
- Institute for Brain and Behavior Amsterdam, Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
14
Nobre AC, van Ede F. Attention in flux. Neuron 2023; 111:971-986. [PMID: 37023719 DOI: 10.1016/j.neuron.2023.02.032] [Citation(s) in RCA: 18] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2023] [Revised: 02/20/2023] [Accepted: 02/22/2023] [Indexed: 04/08/2023]
Abstract
Selective attention comprises essential infrastructural functions supporting cognition: anticipating, prioritizing, selecting, routing, integrating, and preparing signals to guide adaptive behavior. Most studies have examined its consequences, systems, and mechanisms in a static way, but attention is at the confluence of multiple sources of flux. The world advances, we operate within it, our minds change, and all resulting signals progress through multiple pathways within the dynamic networks of our brains. Our aim in this review is to raise awareness of and interest in three important facets of how timing impacts our understanding of attention. These include the challenges posed to attention by the timing of neural processing and psychological functions, the opportunities conferred to attention by various temporal structures in the environment, and how tracking the time courses of neural and behavioral modulations with continuous measures yields surprising insights into the workings and principles of attention.
Affiliation(s)
- Anna C Nobre
- Department of Experimental Psychology, University of Oxford, Oxford OX2 6GG, UK; Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford OX3 7JX, UK.
- Freek van Ede
- Institute for Brain and Behavior Amsterdam, Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam 1081BT, the Netherlands.
15
Beitner J, Helbing J, Draschkow D, David EJ, Võ MLH. Flipping the world upside down: Using eye tracking in virtual reality to study visual search in inverted scenes. J Eye Mov Res 2023; 15:10.16910/jemr.15.3.5. [PMID: 37215533 PMCID: PMC10195094 DOI: 10.16910/jemr.15.3.5] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/19/2024] Open
Abstract
Image inversion is a powerful tool for investigating cognitive mechanisms of visual perception. However, studies have mainly used inversion in paradigms presented on two-dimensional computer screens. It remains open whether the disruptive effects of inversion also hold true in more naturalistic scenarios. In our study, we used scene inversion in virtual reality in combination with eye tracking to investigate the mechanisms of repeated visual search through three-dimensional immersive indoor scenes. Scene inversion affected all gaze and head measures except fixation durations and saccade amplitudes. Surprisingly, our behavioral results did not entirely follow our hypotheses: while search efficiency dropped significantly in inverted scenes, participants did not utilize more memory as measured by search time slopes. This indicates that, despite the disruption, participants did not try to compensate for the increased difficulty by using more memory. Our study highlights the importance of investigating classical experimental paradigms in more naturalistic scenarios to advance research on daily human behavior.
Affiliation(s)
- Julia Beitner
- Department of Psychology, Goethe University Frankfurt, Germany
- Corresponding author
- Jason Helbing
- Department of Psychology, Goethe University Frankfurt, Germany
- Dejan Draschkow
- Department of Experimental Psychology, University of Oxford, UK
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, UK
- Erwan J David
- Department of Psychology, Goethe University Frankfurt, Germany
- Melissa L-H Võ
- Department of Psychology, Goethe University Frankfurt, Germany
16
Schuetz I, Karimpur H, Fiehler K. vexptoolbox: A software toolbox for human behavior studies using the Vizard virtual reality platform. Behav Res Methods 2023; 55:570-582. [PMID: 35322350 PMCID: PMC10027796 DOI: 10.3758/s13428-022-01831-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/09/2022] [Indexed: 11/08/2022]
Abstract
Virtual reality (VR) is a powerful tool for researchers due to its potential to study dynamic human behavior in highly naturalistic environments while retaining full control over the presented stimuli. Due to advancements in consumer hardware, VR devices are now very affordable and have also started to include technologies such as eye tracking, further extending potential research applications. Rendering engines such as Unity, Unreal, or Vizard now enable researchers to easily create complex VR environments. However, implementing the experimental design can still pose a challenge, and these packages do not provide out-of-the-box support for trial-based behavioral experiments. Here, we present a Python toolbox, designed to facilitate common tasks when developing experiments using the Vizard VR platform. It includes functionality for common tasks like creating, randomizing, and presenting trial-based experimental designs or saving results to standardized file formats. Moreover, the toolbox greatly simplifies continuous recording of eye and body movements using any hardware supported in Vizard. We further implement and describe a simple goal-directed reaching task in VR and show sample data recorded from five volunteers. The toolbox, example code, and data are all available on GitHub under an open-source license. We hope that our toolbox can simplify VR experiment development, reduce code duplication, and aid reproducibility and open-science efforts.
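The workflow the toolbox automates (building a trial list, randomizing it, running trials, and saving results to a standardized format) can be sketched in plain Python. This is not vexptoolbox's actual API, only a generic illustration of the pattern the abstract describes; factor names and values are hypothetical:

```python
# Generic sketch of trial-based experiment handling: fully cross the
# design factors, shuffle the trial order, run the trials, and serialize
# results as CSV. NOT the vexptoolbox API -- an illustration only.
import csv
import io
import itertools
import random

def build_trials(factors):
    """Fully crossed trial list from a dict of factor levels."""
    names = list(factors)
    return [dict(zip(names, combo))
            for combo in itertools.product(*factors.values())]

trials = build_trials({"target_side": ["left", "right"],
                       "set_size": [4, 8, 12]})
random.shuffle(trials)

results = []
for i, trial in enumerate(trials):
    # A real experiment would present stimuli and collect a response here;
    # this sketch only records the trial parameters.
    results.append({"trial": i, **trial, "rt_ms": None})

# Write to a standardized, analysis-ready format (here: CSV in memory).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=results[0].keys())
writer.writeheader()
writer.writerows(results)
print(buf.getvalue().splitlines()[0])  # trial,target_side,set_size,rt_ms
```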
Affiliation(s)
- Immo Schuetz
- Experimental Psychology, Justus Liebig University, Otto-Behaghel-Str. 10 F, 35394, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Giessen, Germany
- Harun Karimpur
- Experimental Psychology, Justus Liebig University, Otto-Behaghel-Str. 10 F, 35394, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Giessen, Germany
- Katja Fiehler
- Experimental Psychology, Justus Liebig University, Otto-Behaghel-Str. 10 F, 35394, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Giessen, Germany
17
Abstract
Flexible behavior requires guidance not only by sensations that are available immediately but also by relevant mental contents carried forward through working memory. Therefore, selective-attention functions that modulate the contents of working memory to guide behavior (inside-out) are just as important as those operating on sensory signals to generate internal contents (outside-in). We review the burgeoning literature on selective attention in the inside-out direction and underscore its functional, flexible, and future-focused nature. We discuss in turn the purpose (why), targets (what), sources (when), and mechanisms (how) of selective attention inside working memory, using visual working memory as a model. We show how the study of internal selective attention brings new insights concerning the core cognitive processes of attention and working memory and how considering selective attention and working memory together paves the way for a rich and integrated understanding of how mind serves behavior.
Affiliation(s)
- Freek van Ede
- Institute for Brain and Behavior Amsterdam, and Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Anna C Nobre
- Departments of Experimental Psychology and Psychiatry, Oxford Centre for Human Brain Activity, and Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, United Kingdom
18
Botch TL, Garcia BD, Choi YB, Feffer N, Robertson CE. Active visual search in naturalistic environments reflects individual differences in classic visual search performance. Sci Rep 2023; 13:631. [PMID: 36635491 PMCID: PMC9837148 DOI: 10.1038/s41598-023-27896-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2022] [Accepted: 01/10/2023] [Indexed: 01/13/2023] Open
Abstract
Visual search is a ubiquitous activity in real-world environments. Yet, traditionally, visual search is investigated in tightly controlled paradigms, where head-restricted participants locate a minimalistic target in a cluttered array that is presented on a computer screen. Do traditional visual search tasks predict performance in naturalistic settings, where participants actively explore complex, real-world scenes? Here, we leverage advances in virtual reality technology to test the degree to which classic and naturalistic search are limited by a common factor, set size, and the degree to which individual differences in classic search behavior predict naturalistic search behavior in a large sample of individuals (N = 75). In a naturalistic search task, participants looked for an object within their environment via a combination of head-turns and eye-movements using a head-mounted display. Then, in a classic search task, participants searched for a target within a simple array of colored letters using only eye-movements. In each task, we found that participants' search performance was impacted by increases in set size (the number of items in the visual display). Critically, we observed that participants' efficiency in classic search tasks (the degree to which set size slowed performance) indeed predicted efficiency in real-world scenes. These results demonstrate that classic, computer-based visual search tasks are excellent models of active, real-world search behavior.
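The efficiency measure described above (the degree to which set size slows performance) is conventionally the slope of response time against set size, in ms per item. A minimal sketch with hypothetical RTs, fit by ordinary least squares:

```python
# Sketch of "search efficiency" as the OLS slope of response time on
# set size (ms per item). Data below are hypothetical, not the study's.

def slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

set_sizes = [4, 8, 12]
rts_ms = [620, 700, 780]          # hypothetical mean RTs per set size
print(slope(set_sizes, rts_ms))   # 20.0 ms/item; shallower = more efficient
```

Computing this slope per participant in each task, then correlating the two sets of slopes across participants, is the kind of individual-differences analysis the abstract reports.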
Affiliation(s)
- Thomas L Botch
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, 03755, USA
- Brenda D Garcia
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, 03755, USA
- Yeo Bi Choi
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, 03755, USA
- Nicholas Feffer
- Department of Computer Science, Dartmouth College, Hanover, NH, 03755, USA
- Department of Computer Science, Stanford University, Stanford, CA, 94305, USA
- Caroline E Robertson
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, 03755, USA
19
Congruence-based contextual plausibility modulates cortical activity during vibrotactile perception in virtual multisensory environments. Commun Biol 2022; 5:1360. [PMID: 36509971 PMCID: PMC9744907 DOI: 10.1038/s42003-022-04318-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2022] [Accepted: 11/29/2022] [Indexed: 12/14/2022] Open
Abstract
How congruence cues and congruence-based expectations may together shape perception in virtual reality (VR) still needs to be unravelled. We linked the concept of plausibility used in VR research with congruence-based modulation by assessing brain responses while participants experienced vehicle rides in VR scenarios. Perceptual plausibility was manipulated via sensory congruence, with multisensory stimulation conforming to common expectations of road scenes being plausible. We hypothesized that plausible scenarios would elicit greater cortical responses. The results showed that: (i) vibrotactile stimulations at expected intensities, given embedded audio-visual information, engaged greater cortical activity in frontal and sensorimotor regions; (ii) weaker plausible stimulations resulted in greater responses in the sensorimotor cortex than stronger but implausible stimulations; (iii) frontal activity under plausible scenarios negatively correlated with plausibility-violation costs in the sensorimotor cortex. These results potentially indicate frontal regulation of sensory processing and extend previous evidence of contextual modulation to the tactile sense.