1
Korda Ž, Walcher S, Körner C, Benedek M. Internal coupling: Eye behavior coupled to visual imagery. Neurosci Biobehav Rev 2024; 165:105855. PMID: 39153584. DOI: 10.1016/j.neubiorev.2024.105855.
Abstract
Our eyes respond not only to visual perception but also to internal cognition involving visual imagery, a phenomenon that can be referred to as internal coupling. This review synthesizes evidence on internal coupling across diverse domains including episodic memory and simulation, visuospatial memory, numerical cognition, object movement, body movement, and brightness imagery. In each domain, eye movements consistently reflect distinct aspects of mental imagery, typically akin to those seen in corresponding visual experiences. Several findings further suggest that internal coupling may not only coincide with but also support internal cognition, as evidenced by improved cognitive performance. Available theoretical accounts suggest that internal coupling may serve at least two functional roles in visual imagery: facilitating memory reconstruction and indicating shifts in internal attention. Moreover, recent insights into the neurobiology of internal coupling highlight substantially shared neural pathways in externally and internally directed cognition. The review concludes by identifying open questions and promising avenues for future research, such as exploring the moderating roles of context and individual differences in internal coupling.
Affiliation(s)
- Živa Korda
- Department of Psychology, University of Graz, Graz, Austria
- Sonja Walcher
- Department of Psychology, University of Graz, Graz, Austria
2
Ferro D, Cash-Padgett T, Wang MZ, Hayden BY, Moreno-Bote R. Gaze-centered gating, reactivation, and reevaluation of economic value in orbitofrontal cortex. Nat Commun 2024; 15:6163. PMID: 39039055. PMCID: PMC11263430. DOI: 10.1038/s41467-024-50214-2.
Abstract
During economic choice, options are often considered in alternation until commitment. Nonetheless, neuroeconomics typically ignores the dynamic aspects of deliberation. We trained two male macaques to perform a value-based decision-making task in which two risky offers were presented in sequence on opposite sides of the visual field, each followed by a delay epoch in which the offers were invisible. Surprisingly, during the two delays, subjects tended to look at the empty locations where the offers had previously appeared, with longer fixations increasing the probability of choosing the associated offer. Spiking activity in orbitofrontal cortex reflected the value of the gazed offer, or of the offer associated with the gazed empty spatial location, even when it was not the most recent one. This reactivation reflects a reevaluation process, as fluctuations in neural spiking correlate with the upcoming choice. Our results suggest that look-at-nothing gazing triggers the reactivation of a previously seen offer for further evaluation.
Affiliation(s)
- Demetrio Ferro
- Center for Brain and Cognition, Universitat Pompeu Fabra, 08002 Barcelona, Spain
- Department of Information and Communication Technologies, Universitat Pompeu Fabra, 08002 Barcelona, Spain
- Tyler Cash-Padgett
- Department of Neuroscience, Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN 55455, USA
- Maya Zhe Wang
- Department of Neuroscience, Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN 55455, USA
- Benjamin Y Hayden
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX 77030, USA
- Rubén Moreno-Bote
- Center for Brain and Cognition, Universitat Pompeu Fabra, 08002 Barcelona, Spain
- Department of Information and Communication Technologies, Universitat Pompeu Fabra, 08002 Barcelona, Spain
- Serra Húnter Fellow Programme, Universitat Pompeu Fabra, Barcelona, Spain
3
Brooks PP, Guzman BA, Kensinger EA, Norman KA, Ritchey M. Eye tracking evidence for the reinstatement of emotionally negative and neutral memories. PLoS One 2024; 19:e0303755. PMID: 38758747. PMCID: PMC11101026. DOI: 10.1371/journal.pone.0303755.
Abstract
Recent eye tracking studies have linked gaze reinstatement, the reinstatement at retrieval of the eye movements made during encoding, with memory performance. In this study, we investigated whether gaze reinstatement is influenced by the affective salience of information stored in memory, using an adaptation of the emotion-induced memory trade-off paradigm. Participants learned word-scene pairs, in which scenes were composed of negative or neutral objects located on the left or right side of neutral backgrounds. This allowed us to measure gaze reinstatement during scene memory tests based on whether people looked at the side of the screen where the object had been located. Across two experiments, we behaviorally replicated the emotion-induced memory trade-off effect: negative object memory was better than neutral object memory, at the expense of background memory. Furthermore, we found evidence that gaze reinstatement was related to recognition memory for both the object and the background scene components. This effect was generally comparable for negative and neutral memories, although the effects of valence varied somewhat between the two experiments. Together, these findings suggest that gaze reinstatement occurs independently of the processes contributing to the emotion-induced memory trade-off effect.
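The side-of-screen measure described in this abstract can be made concrete with a small sketch (an illustrative toy, not the authors' analysis pipeline; the function name, coordinate convention, and midline split are assumptions): a gaze-reinstatement score as the proportion of retrieval fixations landing on the side where the object had appeared at encoding.

```python
def gaze_reinstatement(fixations_x, object_side, screen_width=1920.0):
    """Proportion of retrieval fixations on the side of the (now absent) object.

    fixations_x: horizontal fixation coordinates (pixels) during retrieval
    object_side: 'left' or 'right', where the object sat at encoding
    """
    if not fixations_x:
        return 0.0
    midline = screen_width / 2.0
    if object_side == 'left':
        hits = sum(1 for x in fixations_x if x < midline)
    else:
        hits = sum(1 for x in fixations_x if x >= midline)
    return hits / len(fixations_x)
```

A score near 1.0 would indicate strong reinstatement of the encoded object location; chance is 0.5 under unbiased viewing.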
Affiliation(s)
- Paula P. Brooks
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, United States of America
- Department of Psychology and Neuroscience, Boston College, Chestnut Hill, MA, United States of America
- Brigitte A. Guzman
- Department of Psychology and Neuroscience, Boston College, Chestnut Hill, MA, United States of America
- Elizabeth A. Kensinger
- Department of Psychology and Neuroscience, Boston College, Chestnut Hill, MA, United States of America
- Kenneth A. Norman
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, United States of America
- Department of Psychology, Princeton University, Princeton, NJ, United States of America
- Maureen Ritchey
- Department of Psychology and Neuroscience, Boston College, Chestnut Hill, MA, United States of America
4
Woodry R, Curtis CE, Winawer J. Feedback scales the spatial tuning of cortical responses during visual memory. bioRxiv [Preprint] 2024:2024.04.11.589111. PMID: 38659957. PMCID: PMC11042180. DOI: 10.1101/2024.04.11.589111.
Abstract
Perception, working memory, and long-term memory each evoke neural responses in visual cortex, suggesting that memory uses encoding mechanisms shared with perception. While previous research has largely focused on how perception and memory are similar, we hypothesized that responses in visual cortex would differ depending on the origins of the inputs. Using fMRI, we quantified spatial tuning in visual cortex while participants (both sexes) viewed, maintained in working memory, or retrieved from long-term memory a peripheral target. In each of these conditions, BOLD responses were spatially tuned and were aligned with the target's polar angle in all measured visual field maps, including V1. As expected given the increasing sizes of receptive fields, the width of polar angle tuning during perception increased systematically up the visual hierarchy, from V1 to V2, V3, hV4, and beyond. In stark contrast, the widths of tuned responses during working memory and long-term memory were broad across the visual hierarchy: matched to the perceptual widths in later visual field maps but much broader in V1. This pattern is consistent with the idea that mnemonic responses in V1 stem from top-down sources. Moreover, when these tuned responses were biased (clockwise or counterclockwise of the target), they predicted matched biases in memory, suggesting that the readout of maintained and reinstated mnemonic responses influences memory-guided behavior. We conclude that feedback constrains spatial tuning during memory, whereby earlier visual maps inherit broader tuning from later maps, thereby impacting the precision of memory.
Affiliation(s)
- Robert Woodry
- Department of Psychology, New York University, New York, NY 10003, USA
- Clayton E. Curtis
- Department of Psychology, New York University, New York, NY 10003, USA
- Center for Neural Science, New York University, New York, NY 10003, USA
- Jonathan Winawer
- Department of Psychology, New York University, New York, NY 10003, USA
- Center for Neural Science, New York University, New York, NY 10003, USA
5
Krasich K, O'Neill K, De Brigard F. Looking at Mental Images: Eye-Tracking Mental Simulation During Retrospective Causal Judgment. Cogn Sci 2024; 48:e13426. PMID: 38528803. DOI: 10.1111/cogs.13426.
Abstract
How do people evaluate causal relationships? Do they consider only what actually happened, or also what could counterfactually have happened? Using eye tracking and Gaussian process modeling, we investigated how people mentally simulate past events to judge what caused the outcomes to occur. Participants played a virtual ball-shooting game and then, while looking at a blank screen, mentally simulated (a) what actually happened, (b) what counterfactually could have happened, or (c) what caused the outcome to happen. Our findings showed that participants moved their eyes in patterns consistent with the actual or counterfactual events that they mentally simulated. When simulating what caused the outcome to occur, participants moved their eyes in patterns consistent with simulations of counterfactual possibilities. These results favor counterfactual theories of causal reasoning, demonstrate how eye movements can reflect simulation during this reasoning, and provide a novel approach for investigating retrospective causal reasoning and counterfactual thinking.
Affiliation(s)
- Kevin O'Neill
- Center for Cognitive Neuroscience, Duke University
- Department of Psychology & Neuroscience, Duke University
- Felipe De Brigard
- Center for Cognitive Neuroscience, Duke University
- Department of Psychology & Neuroscience, Duke University
- Department of Philosophy, Duke University
6
Servais A, Hurter C, Barbeau EJ. Attentional switch to memory: An early and critical phase of the cognitive cascade allowing autobiographical memory retrieval. Psychon Bull Rev 2023; 30:1707-1721. PMID: 37118526. DOI: 10.3758/s13423-023-02270-w.
Abstract
Remembering and mentally reliving yesterday's lunch is a typical example of episodic autobiographical memory retrieval. In the present review, we reappraised the complex cascade of cognitive processes involved in memory retrieval, by highlighting one particular phase that has received little interest so far: attentional switch to memory (ASM). As attention cannot be simultaneously directed toward external stimuli and internal memories, there has to be an attentional switch from the external to the internal world in order to initiate memory retrieval. We formulated hypotheses and developed hypothetical models of both the cognitive and brain processes that accompany ASM. We suggest that gaze aversion could serve as an objective temporal marker of the point at which people switch their attention to memory, and highlight several fields (neuropsychology, neuroscience, social cognition, comparative psychology) in which ASM markers could be essential. Our review thus provides a new framework for understanding the early stages of autobiographical memory retrieval.
Affiliation(s)
- Anaïs Servais
- CerCo, CNRS UMR5549-Université de Toulouse, CHU Purpan, Pavillon Baudot, 31052 Toulouse, France
- ENAC, 7 avenue Edouard Belin, 31055 Toulouse, France
- Emmanuel J Barbeau
- CerCo, CNRS UMR5549-Université de Toulouse, CHU Purpan, Pavillon Baudot, 31052 Toulouse, France
7
Servais A, Préa N, Hurter C, Barbeau EJ. Why and when do you look away when trying to remember? Gaze aversion as a marker of the attentional switch to the internal world during memory retrieval. Acta Psychol (Amst) 2023; 240:104041. PMID: 37774488. DOI: 10.1016/j.actpsy.2023.104041.
Abstract
It is common to look away while trying to remember specific information, for example during autobiographical memory retrieval, a behavior referred to as gaze aversion. Given the competition between internal and external attention, gaze aversion is assumed to play a role in visual decoupling, i.e., suppressing environmental distractors during internal tasks. This suggests a link between gaze aversion and the attentional switch from the outside world to a temporary internal mental space that takes place during the initial stage of memory retrieval, but this assumption had never been verified. We designed a protocol in which 33 participants answered 48 autobiographical questions while their eye movements were recorded with an eye-tracker and a camcorder. Results indicated that gaze aversion occurred early (median 1.09 s) and predominantly during the access phase of memory retrieval, i.e., the moment when the attentional switch is assumed to take place. In addition, gaze aversion lasted a relatively long time (6 s on average) and was notably decoupled from concurrent head movements. These results support a role of gaze aversion in perceptual decoupling. Gaze aversion was also related to higher retrieval effort and was rare for memories that came spontaneously to mind. This suggests that gaze aversion may be required only when cognitive effort is needed to switch attention toward the internal world to help retrieve hard-to-access memories. Compared to eye vergence, another visual decoupling strategy, the association with the attentional switch seemed specific to gaze aversion. Our results provide, for the first time, several arguments supporting the hypothesis that gaze aversion is related to the attentional switch from the outside world to memory.
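The onset and duration statistics reported above presuppose segmenting the gaze record into discrete aversion episodes. A minimal sketch of such segmentation, assuming a per-sample on-screen flag and a fixed sampling interval (the representation and function name are assumptions, not the authors' pipeline):

```python
def aversion_episodes(on_screen, dt):
    """Find gaze-aversion episodes as (onset_s, duration_s) tuples.

    on_screen: sequence of booleans, True while gaze is on the screen/interlocutor
    dt: sampling interval in seconds
    """
    episodes, start = [], None
    for i, on in enumerate(on_screen):
        if not on and start is None:
            start = i                      # aversion begins
        elif on and start is not None:
            episodes.append((start * dt, (i - start) * dt))
            start = None                   # aversion ends
    if start is not None:                  # aversion runs to end of recording
        episodes.append((start * dt, (len(on_screen) - start) * dt))
    return episodes
```

From such episodes, the median onset and mean duration reported in the abstract follow directly.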
Affiliation(s)
- Anaïs Servais
- Centre de recherche Cerveau et Cognition (CerCo), UMR5549 (CNRS-UPS), Pavillon Baudot, 31052 Toulouse, France
- National Civil Aviation School (ENAC), 7 avenue Edouard Belin, 31055 Toulouse, France
- Noémie Préa
- Centre de recherche Cerveau et Cognition (CerCo), UMR5549 (CNRS-UPS), Pavillon Baudot, 31052 Toulouse, France
- Christophe Hurter
- National Civil Aviation School (ENAC), 7 avenue Edouard Belin, 31055 Toulouse, France
- Emmanuel J Barbeau
- Centre de recherche Cerveau et Cognition (CerCo), UMR5549 (CNRS-UPS), Pavillon Baudot, 31052 Toulouse, France
8
Gautier J, El Haj M. Eyes don't lie: Eye movements differ during covert and overt autobiographical recall. Cognition 2023; 235:105416. PMID: 36821995. DOI: 10.1016/j.cognition.2023.105416.
Abstract
In everyday life, autobiographical memories are revisited silently (i.e., covert recall) or shared with others (i.e., overt recall), yet most research on eye movements and autobiographical recall has focused on overt recall. With that in mind, the aim of the current study was to evaluate eye movements during the retrieval of autobiographical memories (with a focus on emotion) recollected during covert and overt recall. Forty-three participants recalled personal memories out loud and silently while wearing eye-tracking glasses, and rated these memories in terms of mental imagery and emotional intensity. Analyses showed fewer and longer fixations, fewer and shorter saccades, and fewer blinks during covert recall compared with overt recall. Participants perceived more mental images and had a more intense emotional experience during covert recall. These results are discussed in light of cognitive load theories and the various functions of autobiographical recall. We theorize that the fewer and longer fixations during covert recall may be due to more intense mental imagery. This study enriches the field of research on eye movements and autobiographical memory by addressing how we retrieve memories silently, a common activity of everyday life. More broadly, our results contribute to building objective tools to measure autobiographical memory, alongside already existing subjective scales.
Affiliation(s)
- Joanna Gautier
- Nantes Université, Univ Angers, Laboratoire de Psychologie des Pays de la Loire (LPPL - EA 4638), Chemin de la Censive du Tertre, F44000 Nantes, France
- Mohamad El Haj
- Nantes Université, Univ Angers, Laboratoire de Psychologie des Pays de la Loire (LPPL - EA 4638), Chemin de la Censive du Tertre, F44000 Nantes, France
- CHU Nantes, Clinical Gerontology Department, Bd Jacques Monod, F44300 Nantes, France
- Institut Universitaire de France, Paris, France
9
Favila SE, Kuhl BA, Winawer J. Perception and memory have distinct spatial tuning properties in human visual cortex. Nat Commun 2022; 13:5864. PMID: 36257949. PMCID: PMC9579130. DOI: 10.1038/s41467-022-33161-8.
Abstract
Reactivation of earlier perceptual activity is thought to underlie long-term memory recall. Despite evidence for this view, it is unclear whether mnemonic activity exhibits the same tuning properties as feedforward perceptual activity. Here, we leverage population receptive field models to parameterize fMRI activity in human visual cortex during spatial memory retrieval. Though retinotopic organization is present during both perception and memory, large systematic differences in tuning are also evident. Whereas there is a three-fold decline in spatial precision from early to late visual areas during perception, this pattern is not observed during memory retrieval. This difference cannot be explained by reduced signal-to-noise or poor performance on memory trials. Instead, by simulating top-down activity in a network model of cortex, we demonstrate that this property is well explained by the hierarchical structure of the visual system. Together, modeling and empirical results suggest that computational constraints imposed by visual system architecture limit the fidelity of memory reactivation in sensory cortex.
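To make "spatial tuning" and its precision concrete, here is a toy sketch of an idealized Gaussian tuning curve and a full-width-at-half-maximum readout (illustrative only: the study fits population receptive field models to fMRI data, not this simplified 1-D curve, and all names here are assumptions):

```python
import math

def gaussian_response(angles_deg, center_deg, width_deg):
    """Idealized responses of a Gaussian-tuned unit sampled at each angle."""
    return [math.exp(-0.5 * ((a - center_deg) / width_deg) ** 2)
            for a in angles_deg]

def fwhm(angles_deg, responses):
    """Full width at half maximum of a sampled, single-peaked tuning curve."""
    half = max(responses) / 2.0
    above = [a for a, r in zip(angles_deg, responses) if r >= half]
    return max(above) - min(above)
```

In this framing, the reported three-fold decline in precision from early to late areas during perception corresponds to a roughly three-fold increase in FWHM, while during memory the widths stay broad even in V1.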
Affiliation(s)
- Serra E Favila
- Department of Psychology, New York University, New York, NY 10003, USA
- Department of Psychology, Columbia University, New York, NY 10027, USA
- Brice A Kuhl
- Department of Psychology, University of Oregon, Eugene, OR 97403, USA
- Institute of Neuroscience, University of Oregon, Eugene, OR 97403, USA
- Jonathan Winawer
- Department of Psychology, New York University, New York, NY 10003, USA
- Center for Neural Science, New York University, New York, NY 10003, USA
10
Chiquet S, Martarelli CS, Mast FW. Imagery-related eye movements in 3D space depend on individual differences in visual object imagery. Sci Rep 2022; 12:14136. PMID: 35986076. PMCID: PMC9391428. DOI: 10.1038/s41598-022-18080-4.
Abstract
During recall of visual information people tend to move their eyes even though there is nothing to see. Previous studies indicated that such eye movements are related to the spatial location of previously seen items on 2D screens, but they also showed that eye movement behavior varies significantly across individuals. The reason for these differences remains unclear. In the present study we used immersive virtual reality to investigate how individual tendencies to process and represent visual information contribute to eye fixation patterns in visual imagery of previously inspected objects in three-dimensional (3D) space. We show that participants also look back to relevant locations when they are free to move in 3D space. Furthermore, we found that looking back to relevant locations depends on individual differences in visual object imagery abilities. We suggest that object visualizers rely less on spatial information because they tend to process and represent the visual information in terms of color and shape rather than in terms of spatial layout. This finding indicates that eye movements during imagery are subject to individual strategies, and the immersive setting in 3D space made individual differences more likely to unfold.
11
Johansson R, Nyström M, Dewhurst R, Johansson M. Eye-movement replay supports episodic remembering. Proc Biol Sci 2022; 289:20220964. PMID: 35703049. PMCID: PMC9198773. DOI: 10.1098/rspb.2022.0964.
Abstract
When we bring to mind something we have seen before, our eye movements spontaneously unfold in a sequential pattern strikingly similar to that made during the original encounter, even in the absence of supporting visual input. Eye movements may then serve a purpose opposite to acquiring new visual information: they may act as self-generated cues, pointing to stored memories. Over 50 years ago Donald Hebb, the forefather of cognitive neuroscience, posited that such sequential replay of eye movements supports our ability to mentally recreate visuospatial relations during episodic remembering. However, direct evidence for this influential claim has been lacking. Here we isolate the sequential properties of spontaneous eye movements during encoding and retrieval in a pure recall memory task and capture their encoding-retrieval overlap. Critically, we show that the fidelity with which a series of consecutive eye movements from initial encoding is sequentially retained during subsequent retrieval predicts the quality of the recalled memory. Our findings provide direct evidence that such scanpaths are replayed to assemble and reconstruct spatio-temporal relations as we remember, and further suggest that distinct scanpath properties contribute differentially depending on the nature of the goal-relevant memory.
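The encoding-retrieval sequential overlap described above can be illustrated with a simple order-sensitive similarity measure: a normalized longest common subsequence over area-of-interest labels (a hedged sketch; the authors' actual scanpath measure may differ, and the function name and normalization are assumptions):

```python
def scanpath_overlap(encoding, retrieval):
    """Normalized longest-common-subsequence overlap of two AOI sequences.

    encoding, retrieval: sequences of area-of-interest labels, in fixation order.
    Returns a value in [0, 1]; 1 means the retrieval scanpath fully preserves
    the sequential order of the encoding fixations.
    """
    m, n = len(encoding), len(retrieval)
    if max(m, n) == 0:
        return 0.0
    # Standard dynamic-programming LCS table.
    lcs = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if encoding[i] == retrieval[j]:
                lcs[i + 1][j + 1] = lcs[i][j] + 1
            else:
                lcs[i + 1][j + 1] = max(lcs[i][j + 1], lcs[i + 1][j])
    return lcs[m][n] / max(m, n)
```

Unlike unordered fixation-location overlap, this measure drops when the same regions are revisited in a different order, which is the sequential fidelity the study links to recall quality.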
12
Neural reactivation and judgements of vividness reveal separable contributions to mnemonic representation. Neuroimage 2022; 255:119205. DOI: 10.1016/j.neuroimage.2022.119205.
Abstract
Mnemonic representations vary in fidelity, sharpness, and strength, qualities that can be examined using both introspective judgements of mental states and objective measures of brain activity. Subjective and objective measures are both valid ways of "reading out" the content of someone's internal mnemonic states, each with different strengths and weaknesses. St-Laurent and colleagues (2015) compared the neural correlates of memory vividness ratings with patterns of neural reactivation evoked during memory recall and found considerable overlap between the two, suggesting a common neural basis underlying these different markers of representational quality. Here we extended this work with meta-analytic methods, pooling four neuroimaging datasets in order to contrast the neural substrates of neural reactivation with those of vividness judgements. While reactivation and vividness judgements correlated positively with one another and were associated with common univariate activity in the dorsal attention network and anterior hippocampus, some notable differences were also observed. Vividness judgements were tied to stronger activation in the striatum and dorsal attention network, together with activity suppression in default mode network nodes. We also observed a trend for reactivation to be more closely associated with early visual cortex activity. A mediation analysis supported the hypothesis that neural reactivation is necessary for memory vividness, with activity in the anterior hippocampus associated with greater reactivation. Our results suggest that neural reactivation and vividness judgements reflect common mnemonic processes but differ in the extent to which they engage effortful, attentional processes. Additionally, the similarity between reactivation and vividness appears to arise, in part, through hippocampal engagement during memory retrieval.
13
Wynn JS, Van Genugten RDI, Sheldon S, Schacter DL. Schema-related eye movements support episodic simulation. Conscious Cogn 2022; 100:103302. PMID: 35240421. PMCID: PMC9007866. DOI: 10.1016/j.concog.2022.103302.
Abstract
Recent work indicates that eye movements support the retrieval of episodic memories by reactivating the spatiotemporal context in which they were encoded. Although similar mechanisms have been thought to support simulation of future episodes, there is currently no evidence favoring this proposal. In the present study, we investigated the role of eye movements in episodic simulation by comparing the gaze patterns of individual participants imagining future scene and event scenarios to across-participant gaze templates for those same scenarios, reflecting their shared features (i.e., schemas). Our results provide novel evidence that eye movements during episodic simulation in the face of distracting visual noise are (1) schema-specific and (2) predictive of simulation success. Together, these findings suggest that eye movements support episodic simulation via reinstatement of scene and event schemas, and more broadly, that interactions between the memory and oculomotor effector systems may underlie critical cognitive processes including constructive episodic simulation.
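Comparing an individual's gaze pattern to an across-participant template, as described above, can be sketched as a correlation between fixation-density maps (illustrative only; the grid size, normalization, and function names are assumptions, not the authors' method):

```python
def fixation_density(fixations, grid=(4, 4)):
    """Bin normalized (x, y) fixations in [0, 1) into a flattened density map."""
    rows, cols = grid
    counts = [0.0] * (rows * cols)
    for x, y in fixations:
        r = min(int(y * rows), rows - 1)
        c = min(int(x * cols), cols - 1)
        counts[r * cols + c] += 1.0
    total = sum(counts)
    return [c / total for c in counts] if total else counts

def template_similarity(individual, others):
    """Pearson correlation between one map and the mean of the others' maps."""
    template = [sum(cell) / len(others) for cell in zip(*others)]
    n = len(individual)
    mx, my = sum(individual) / n, sum(template) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(individual, template))
    sx = sum((a - mx) ** 2 for a in individual) ** 0.5
    sy = sum((b - my) ** 2 for b in template) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0
```

In a leave-one-out scheme, `others` would hold the maps of all remaining participants for the same scenario, so a high score indicates schema-consistent gaze.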
Affiliation(s)
- Jordana S Wynn
- Department of Psychology, Harvard University, Cambridge, USA
- Signy Sheldon
- Department of Psychology, McGill University, Montreal, Canada
14
Kragel JE, Voss JL. Looking for the neural basis of memory. Trends Cogn Sci 2022; 26:53-65. PMID: 34836769. PMCID: PMC8678329. DOI: 10.1016/j.tics.2021.10.010.
Abstract
Memory neuroscientists often measure neural activity during task trials designed to recruit specific memory processes. Behavior is championed as crucial for deciphering brain-memory linkages but is impoverished in typical experiments that rely on summary judgments. We criticize this approach as being blind to the multiple cognitive, neural, and behavioral processes that occur rapidly within a trial to support memory. Instead, time-resolved behaviors such as eye movements occur at the speed of cognition and neural activity. We highlight successes using eye-movement tracking with in vivo electrophysiology to link rapid hippocampal oscillations to encoding and retrieval processes that interact over hundreds of milliseconds. This approach will improve research on the neural basis of memory because it pinpoints discrete moments of brain-behavior-cognition correspondence.
Affiliation(s)
- James E Kragel
- Department of Neurology, The University of Chicago, 5841 South Maryland Avenue, Chicago, IL 60637, USA
- Joel L Voss
- Department of Neurology, The University of Chicago, 5841 South Maryland Avenue, Chicago, IL 60637, USA
15
Pounder Z, Jacob J, Evans S, Loveday C, Eardley AF, Silvanto J. Only minimal differences between individuals with congenital aphantasia and those with typical imagery on neuropsychological tasks that involve imagery. Cortex 2022; 148:180-192. DOI: 10.1016/j.cortex.2021.12.010.
16
Neurofunctional Symmetries and Asymmetries during Voluntary out-of- and within-Body Vivid Imagery Concurrent with Orienting Attention and Visuospatial Detection. Symmetry (Basel) 2021. DOI: 10.3390/sym13081549.
Abstract
We explored whether two visual mental imagery experiences can be differentiated by electroencephalographic (EEG) and performance interactions with concurrent orienting of external attention (OEA) to stimulus location and subsequent visuospatial detection. We measured within-subject (N = 10) event-related potential (ERP) changes during out-of-body imagery (OBI), vivid imagery of a vertical line outside of the head/body, and within-body imagery (WBI), vivid imagery of the line within one's own head. Furthermore, we measured ERP changes and line offset Vernier acuity (hyperacuity) performance concurrent with each type of imagery, compared to baseline detection without imagery. Relative to the OEA baseline, OBI yielded larger N200 and P300, whereas WBI yielded larger P50, P100, N400, and P800. Additionally, hyperacuity dropped significantly when concurrent with both imagery types. Partial least squares analysis combined behavioural performance, ERPs, and/or event-related EEG band power (ERBP). For both imagery types, hyperacuity reduction correlated with opposite frontal and occipital ERP amplitude and polarity changes. Furthermore, ERP modulation and ERBP synchronizations for all EEG frequencies correlated inversely with hyperacuity. Dipole source localization analysis revealed unique generators in the left middle temporal gyrus (WBI) and the right middle frontal gyrus (OBI), whereas common generators lay in the left precuneus and middle occipital cortex (cuneus). Imagery experiences, we conclude, can be identified by symmetric and asymmetric combined neurophysiological-behavioural patterns in interactions with the width of attentional focus.
17
Bone MB, Buchsbaum BR. Detailed Episodic Memory Depends on Concurrent Reactivation of Basic Visual Features within the Posterior Hippocampus and Early Visual Cortex. Cereb Cortex Commun 2021; 2:tgab045. [PMID: 34414371] [PMCID: PMC8370760] [DOI: 10.1093/texcom/tgab045]
Abstract
The hippocampus is a key brain region for the storage and retrieval of episodic memories, but how it performs this function is unresolved. Leading theories posit that the hippocampus stores a sparse representation, or "index," of the pattern of neocortical activity that occurred during perception. During retrieval, reactivation of the index by a partial cue facilitates the reactivation of the associated neocortical pattern. Therefore, episodic retrieval requires joint reactivation of the hippocampal index and the associated neocortical networks. To test this theory, we examine the relation between performance on a recognition memory task requiring retrieval of image-specific visual details and feature-specific reactivation within the hippocampus and neocortex. We show that trial-by-trial recognition accuracy correlates with neural reactivation of low-level features (e.g., luminosity and edges) within the posterior hippocampus and early visual cortex for participants with high recognition lure accuracy. As predicted, the two regions interact, such that recognition accuracy correlates with hippocampal reactivation only when reactivation co-occurs within the early visual cortex (and vice versa). In addition to supporting leading theories of hippocampal function, our findings show large individual differences in the features underlying visual memory and suggest that the anterior and posterior hippocampus represent gist-like and detailed features, respectively.
Affiliation(s)
- Michael B Bone, Rotman Research Institute at Baycrest, Toronto, Ontario, M6A 2E1, Canada

18
Wynn JS, Liu ZX, Ryan JD. Neural Correlates of Subsequent Memory-Related Gaze Reinstatement. J Cogn Neurosci 2021; 34:1547-1562. [PMID: 34272959] [DOI: 10.1162/jocn_a_01761]
Abstract
Mounting evidence linking gaze reinstatement-the recapitulation of encoding-related gaze patterns during retrieval-to behavioral measures of memory suggests that eye movements play an important role in mnemonic processing. Yet, the nature of the gaze scanpath, including its informational content and neural correlates, has remained in question. In this study, we examined eye movement and neural data from a recognition memory task to further elucidate the behavioral and neural bases of functional gaze reinstatement. Consistent with previous work, gaze reinstatement during retrieval of freely viewed scene images was greater than chance and predictive of recognition memory performance. Gaze reinstatement was also associated with viewing of informationally salient image regions at encoding, suggesting that scanpaths may encode and contain high-level scene content. At the brain level, gaze reinstatement was predicted by encoding-related activity in the occipital pole and basal ganglia (BG), neural regions associated with visual processing and oculomotor control. Finally, cross-voxel brain pattern similarity analysis revealed overlapping subsequent memory and subsequent gaze reinstatement modulation effects in the parahippocampal place area and hippocampus, in addition to the occipital pole and BG. Together, these findings suggest that encoding-related activity in brain regions associated with scene processing, oculomotor control, and memory supports the formation, and subsequent recapitulation, of functional scanpaths. More broadly, these findings lend support to scanpath theory's assertion that eye movements both encode, and are themselves embedded in, mnemonic representations.
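Functional gaze reinstatement of the kind measured here is, at its core, a similarity score between encoding and retrieval fixation maps. The sketch below is a minimal illustration of that idea, not the authors' pipeline; the grid size, screen dimensions, and all function names are illustrative assumptions:

```python
import math

def density_map(fixations, grid=(5, 5), size=(800, 600)):
    """Bin (x, y) fixations into a coarse grid and return flattened counts."""
    gx, gy = grid
    w, h = size
    counts = [[0.0] * gx for _ in range(gy)]
    for x, y in fixations:
        cx = min(int(x / w * gx), gx - 1)
        cy = min(int(y / h * gy), gy - 1)
        counts[cy][cx] += 1.0
    return [v for row in counts for v in row]

def pearson(a, b):
    """Pearson correlation of two equal-length vectors (0.0 if degenerate)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb) if va and vb else 0.0

def gaze_reinstatement(encoding_fix, retrieval_fix):
    """Spatial similarity between encoding and retrieval fixation maps."""
    return pearson(density_map(encoding_fix), density_map(retrieval_fix))

# toy fixations clustered on two image regions at encoding and retrieval
encoding = [(100, 100), (120, 110), (600, 400), (610, 390)]
retrieval = [(110, 105), (590, 410)]
print(round(gaze_reinstatement(encoding, retrieval), 2))  # → 1.0
```

Real analyses typically smooth the fixation maps and compare the observed similarity against a chance distribution built from other images' or other participants' fixations.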
Affiliation(s)
- Jennifer D Ryan, Rotman Research Institute at Baycrest Health Sciences; University of Toronto

19
Gaze-pattern similarity at encoding may interfere with future memory. Sci Rep 2021; 11:7697. [PMID: 33833314] [PMCID: PMC8032786] [DOI: 10.1038/s41598-021-87258-z]
Abstract
Human brains have a remarkable ability to separate streams of visual input into distinct memory-traces. It is unclear, however, how this ability relates to the way these inputs are explored via unique gaze-patterns. Moreover, it is not yet known how motivation to forget or remember influences the link between gaze similarity and memory. In two experiments, we used a modified directed-forgetting paradigm and either showed blurred versions of the encoded scenes (Experiment 1) or pink noise images (Experiment 2) during attempted memory control. Both experiments demonstrated that higher levels of across-stimulus gaze similarity relate to worse future memory. Although this across-stimulus interference effect was unaffected by motivation, it depended on the perceptual overlap between stimuli and was more pronounced for comparisons between different scenes than for scene–pink noise comparisons. Intriguingly, these findings echo the pattern similarity effects from the neuroimaging literature and pinpoint a mechanism that could aid the regulation of unwanted memories.
20
Martarelli CS, Mast FW. Pictorial low-level features in mental images: evidence from eye fixations. Psychol Res 2021; 86:350-363. [PMID: 33751199] [DOI: 10.1007/s00426-021-01497-3]
Abstract
It is known that eye movements during object imagery reflect areas visited during encoding. But will eye movements also reflect pictorial low-level features of imagined stimuli? In this paper, three experiments are reported in which we investigate whether low-level properties of mental images elicit specific eye movements. Based on the conceptualization of mental images as depictive representations, we expected low-level visual features to influence eye fixations during mental imagery, in the absence of a visual input. In the first experiment, twenty-five participants performed a visual imagery task with high vs. low spatial frequency and high vs. low contrast gratings. We found that both during visual perception and during mental imagery, first fixations were more often allocated to the low spatial frequency-high contrast grating, thus showing that eye fixations were influenced not only by physical properties of visual stimuli but also by their imagined counterparts. In the second experiment, twenty-two participants imagined high contrast and low contrast stimuli that they had not encoded before. Again, participants allocated more fixations to the high contrast mental images than to the low contrast mental images. In the third experiment, we ruled out task difficulty as a confounding variable. Our results reveal that low-level visual features are represented in the mind's eye and thus contribute to the characterization of mental images in terms of how much perceptual information is re-instantiated during mental imagery.
Affiliation(s)
- Fred W Mast, Department of Psychology, University of Bern, Bern, Switzerland

21
Gurtner LM, Hartmann M, Mast FW. Eye movements during visual imagery and perception show spatial correspondence but have unique temporal signatures. Cognition 2021; 210:104597. [PMID: 33508576] [DOI: 10.1016/j.cognition.2021.104597]
Abstract
Eye fixation patterns during mental imagery are similar to those during perception of the same picture, suggesting that oculomotor mechanisms play a role in mental imagery (i.e., the "looking at nothing" effect). Previous research has focused on the spatial similarities of eye movements during perception and mental imagery. The primary aim of this study was to assess whether the spatial similarity translates to the temporal domain. We used recurrence quantification analysis (RQA) to assess the temporal structure of eye fixations in visual perception and mental imagery and we compared the temporal as well as the spatial characteristics in mental imagery with perception by means of Bayesian hierarchical regression models. We further investigated how person and picture-specific characteristics contribute to eye movement behavior in mental imagery. Working memory capacity and mental imagery abilities were assessed either to predict gaze dynamics in visual imagery or to moderate a possible correspondence between spatial or temporal gaze dynamics in perception and mental imagery. We were able to show the spatial similarity of fixations between visual perception and imagery and we provide the first evidence for its moderation by working memory capacity. Interestingly, the temporal gaze dynamics in mental imagery were unrelated to those in perception and their variance between participants was not explained by variance in visuo-spatial working memory capacity or vividness of mental images. The semantic content of the imagined pictures was the only meaningful predictor of temporal gaze dynamics. The spatial correspondence reflects shared spatial structure of mental images and perceived pictures, while the unique temporal gaze behavior could be driven by generation, maintenance and protection processes specific to visual imagery. The unique temporal gaze dynamics offer a window to new insights into the genuine process of mental imagery independent of its similarity to perception.
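Recurrence quantification analysis treats a scanpath as recurrent wherever two fixations land close together in space, and temporal structure is then read off the resulting recurrence matrix. A minimal sketch of the core recurrence-rate measure follows (full RQA derives further measures from the same matrix; the radius, coordinates, and function names below are illustrative assumptions):

```python
def recurrence_matrix(fixations, radius=50.0):
    """1 where two fixations land within `radius` px of each other, else 0."""
    n = len(fixations)
    return [[1 if ((fixations[i][0] - fixations[j][0]) ** 2 +
                   (fixations[i][1] - fixations[j][1]) ** 2) ** 0.5 <= radius
             else 0
             for j in range(n)] for i in range(n)]

def recurrence_rate(fixations, radius=50.0):
    """Share of off-diagonal fixation pairs that revisit the same region."""
    m = recurrence_matrix(fixations, radius)
    n = len(m)
    if n < 2:
        return 0.0
    rec = sum(m[i][j] for i in range(n) for j in range(n) if i != j)
    return rec / (n * (n - 1))

# toy scanpath: fixations 0/2 and 1/3 revisit the same two regions
scanpath = [(100, 100), (400, 300), (110, 95), (420, 310), (700, 500)]
print(recurrence_rate(scanpath))  # → 0.2
```

Because the matrix is indexed by fixation order, measures derived from its diagonal and vertical line structures capture when regions are revisited, which is the temporal information that spatial overlap scores discard.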
Affiliation(s)
- Lilla M Gurtner, Department of Psychology, University of Bern, Fabrikstrasse 8, 3012 Bern, Switzerland
- Matthias Hartmann, Department of Psychology, University of Bern, Fabrikstrasse 8, 3012 Bern, Switzerland; Faculty of Psychology, UniDistance Suisse, Überlandstrasse 12, 3900 Brig, Switzerland
- Fred W Mast, Department of Psychology, University of Bern, Fabrikstrasse 8, 3012 Bern, Switzerland

22
Armson MJ, Diamond NB, Levesque L, Ryan JD, Levine B. Vividness of recollection is supported by eye movements in individuals with high, but not low trait autobiographical memory. Cognition 2021; 206:104487. [DOI: 10.1016/j.cognition.2020.104487]
23
Lalla A, Agostino C, Sheldon S. The link between detail generation and eye movements when encoding and retrieving complex images. Memory 2020; 28:1231-1244. [PMID: 33016244] [DOI: 10.1080/09658211.2020.1828927]
Abstract
Examining eye movement patterns when encoding and retrieving visually complex memories is useful to understand the link between visuo-perceptual processes and how associated details are represented within these memories. Here, we used images of real-world scenes (e.g., a couple grocery shopping) to examine how encoding and retrieval eye movements are linked to the details used to describe complex images during these two phases of memory. Given that memories are often elaborated upon during retrieval, we also examined whether eye movements at retrieval related to details that were the same as those described when encoding the image (reinstated details) as well as details about the image event that were not initially described at encoding (newly generated details). Testing young healthy participants, we found that retrieval eye movements, specifically eye fixation rate, predicted reinstated details, but not newly generated details. This suggests that visuo-perceptual processes are preferentially engaged at retrieval to reactivate perceived information. At encoding, we found a relationship between eye movements and detail generation that changed over time. This relationship was positive early on in the encoding phase but changed to a negative relationship later in the phase, indicating that a unique relationship exists between activating visuo-perceptual processes during early encoding versus late encoding. Overall, our results provide new insights into how visuo-perceptual processes contribute to different components of complex memory.
Affiliation(s)
- Azara Lalla, Department of Psychology, McGill University, Montreal, Canada
- Signy Sheldon, Department of Psychology, McGill University, Montreal, Canada

24
Hilton C, Muffato V, Slattery TJ, Miellet S, Wiener J. Differences in Encoding Strategy as a Potential Explanation for Age-Related Decline in Place Recognition Ability. Front Psychol 2020; 11:2182. [PMID: 33013562] [PMCID: PMC7511632] [DOI: 10.3389/fpsyg.2020.02182]
Abstract
The ability to recognise places is known to deteriorate with advancing age. In this study, we investigated the contribution of age-related changes in spatial encoding strategies to declining place recognition ability. We recorded eye movements while younger and older adults completed a place recognition task first described by Muffato et al. (2019). Participants first learned places, which were defined by an array of four objects, and then decided whether the next place they were shown was the same as or different from the one they had learned. Places could be shown from the same spatial perspective as during learning or from a shifted perspective (30° or 60°). Places that differed from those shown during learning were created either by substituting an object in the place with a novel object or by swapping the locations of two objects. We replicated the findings of Muffato et al. (2019), showing that sensitivity to detect changes in a place declined with advancing age and declined when the spatial perspective was shifted. Additionally, older adults were particularly impaired on trials in which object locations were swapped; however, they were not differentially affected by perspective changes compared to younger adults. During place encoding, older adults produced more fixations and saccades, shorter fixation durations, and spent less time looking at objects compared to younger adults. Further, we present an analysis of gaze chaining, designed to capture spatio-temporal aspects of gaze behaviour. The chaining measure was a significant predictor of place recognition performance. We found significant differences between age groups on the chaining measure and argue that these differences in gaze behaviour are indicative of differences in encoding strategy between age groups. In summary, we report a direct replication of Muffato et al. (2019) and provide evidence for age-related differences in spatial encoding strategies, which are related to place recognition performance.
Affiliation(s)
- Christopher Hilton, Psychology Department, Ageing and Dementia Research Centre, Bournemouth University, Bournemouth, United Kingdom; Biological Psychology and Neuroergonomics, Berlin Institute of Technology, Berlin, Germany
- Veronica Muffato, Department of General Psychology, University of Padua, Padua, Italy
- Timothy J Slattery, Psychology Department, Ageing and Dementia Research Centre, Bournemouth University, Bournemouth, United Kingdom
- Sebastien Miellet, Active Vision Lab, School of Psychology, University of Wollongong, Wollongong, NSW, Australia
- Jan Wiener, Psychology Department, Ageing and Dementia Research Centre, Bournemouth University, Bournemouth, United Kingdom

25
Annerer-Walcher S, Körner C, Beaty RE, Benedek M. Eye behavior predicts susceptibility to visual distraction during internally directed cognition. Atten Percept Psychophys 2020; 82:3432-3444. [PMID: 32500390] [PMCID: PMC7536161] [DOI: 10.3758/s13414-020-02068-1]
Abstract
When we engage in internally directed cognition (e.g., planning or imagination), our eye behavior decouples from external stimuli and couples to internal representations (e.g., internal visualizations of ideas). Here, we investigated whether eye behavior predicts the susceptibility to visual distraction during internally directed cognition. To this end, participants performed a divergent thinking task, which required internally directed attention, and we measured distraction in terms of attention capture by unrelated images. We used multilevel mixed models to predict visual distraction by eye behavior right before distractor onset. In Study 1 (N = 38), visual distraction was predicted by increased saccade and blink rate, and higher pupil dilation. We replicated these findings in Study 2 using the same task, but with less predictable distractor onsets and a larger sample (N = 144). We also explored whether individual differences in susceptibility to visual distraction were related to cognitive ability and task performance. Taken together, variation in eye behavior was found to be a consistent predictor of visual distraction during internally directed cognition. This highlights the relevance of eye parameters as objective indicators of internal versus external attentional focus and distractibility during complex mental tasks.
Affiliation(s)
- Christof Körner, University of Graz, Universitätsplatz 2, 8010 Graz, Austria
- Roger E Beaty, Pennsylvania State University, University Park, PA, USA
- Mathias Benedek, University of Graz, Universitätsplatz 2, 8010 Graz, Austria

26
Kragel JE, Voss JL. Temporal context guides visual exploration during scene recognition. J Exp Psychol Gen 2020; 150:873-889. [PMID: 32969680] [DOI: 10.1037/xge0000827]
Abstract
Memories for episodes are temporally structured. Cognitive models derived from list-learning experiments attribute this structure to the retrieval of temporal context information that indicates when a memory occurred. These models predict key features of memory recall, such as the strong tendency to retrieve studied items in the order in which they were first encountered. Can such models explain ecological memory behaviors, such as eye movements during encoding and retrieval of complex visual stimuli? We tested predictions from retrieved-context models using three data sets involving recognition memory and free viewing of complex scenes. Subjects reinstated sequences of eye movements from one scene-viewing episode to the next. Moreover, sequence reinstatement decayed over time and was associated with successful memory. We observed memory-driven reinstatement even after accounting for intrinsic scene properties that produced consistent eye movements. These findings confirm predictions of retrieved-context models, suggesting retrieval of temporal context influences complex behaviors generated during naturalistic memory experiences.
27
Harada Y, Ohyama J. The effect of task-irrelevant spatial contexts on 360-degree attention. PLoS One 2020; 15:e0237717. [PMID: 32810159] [PMCID: PMC7437462] [DOI: 10.1371/journal.pone.0237717]
Abstract
The effect of spatial contexts on attention is important for evaluating the risk of human errors and the accessibility of information in different situations. In traditional studies, this effect has been investigated using display-based and non-laboratory procedures. However, these two procedures are inadequate for measuring attention directed toward 360-degree environments and controlling exogenous stimuli. In order to resolve these limitations, we used a virtual-reality-based procedure and investigated how spatial contexts of 360-degree environments influence attention. In the experiment, 20 students were asked to search for and report a target that was presented at any location in 360-degree virtual spaces as accurately and quickly as possible. Spatial contexts comprised a basic context (a grey and objectless space) and three specific contexts (a square grid floor, a cubic room, and an infinite floor). We found that response times for the task and eye movements were influenced by the spatial context of 360-degree surrounding spaces. In particular, although total viewing times for the contexts did not match the saliency maps, the differences in total viewing times between the basic and specific contexts did resemble the maps. These results suggest that attention comprises basic and context-dependent characteristics, and the latter are influenced by the saliency of 360-degree contexts even when the contexts are irrelevant to a task.
Affiliation(s)
- Yuki Harada, National Institute of Advanced Industrial Science and Technology, Human Augmentation Research Center, Tsukuba, Ibaraki, Japan; Department of Rehabilitation for Brain Functions, Research Institute of National Rehabilitation Center for Persons with Disabilities, Tokorozawa, Saitama, Japan
- Junji Ohyama, National Institute of Advanced Industrial Science and Technology, Human Augmentation Research Center, Tsukuba, Ibaraki, Japan

28
Weeks JC, Grady CL, Hasher L, Buchsbaum BR. Holding On to the Past: Older Adults Show Lingering Neural Activation of No-Longer-Relevant Items in Working Memory. J Cogn Neurosci 2020; 32:1946-1962. [PMID: 32573381] [DOI: 10.1162/jocn_a_01596]
Abstract
Goal-relevant information can be maintained in working memory over a brief delay interval to guide an upcoming decision. There is also evidence suggesting the existence of a complementary process: namely, the ability to suppress information that is no longer relevant to ongoing task goals. Moreover, this ability to suppress or inhibit irrelevant information appears to decline with age. In this study, we compared younger and older adults undergoing fMRI on a working memory task designed to address whether the modulation of neural representations of relevant and no-longer-relevant items during a delay interval is related to age and overall task performance. Following from the theoretical predictions of the inhibitory deficit hypothesis of aging, we hypothesized that older adults would show higher activation of no-longer-relevant items during a retention delay compared to young adults and that higher activation of these no-longer-relevant items would predict worse recognition memory accuracy for relevant items. Our results support this prediction and more generally demonstrate the importance of goal-driven modulation of neural activity in successful working memory maintenance. Furthermore, we showed that the largest age differences in the regulation of category-specific pattern activity during working memory maintenance were seen throughout the medial temporal lobe and prominently in the hippocampus, further establishing the importance of "long-term memory" retrieval mechanisms in the context of high-load working memory tasks that place large demands on attentional selection mechanisms.
Affiliation(s)
- Jennifer C Weeks, University of Toronto; Rotman Research Institute at Baycrest, Toronto, Ontario, Canada
- Cheryl L Grady, University of Toronto; Rotman Research Institute at Baycrest, Toronto, Ontario, Canada
- Lynn Hasher, University of Toronto; Rotman Research Institute at Baycrest, Toronto, Ontario, Canada
- Bradley R Buchsbaum, University of Toronto; Rotman Research Institute at Baycrest, Toronto, Ontario, Canada

29
Bone MB, Ahmad F, Buchsbaum BR. Feature-specific neural reactivation during episodic memory. Nat Commun 2020; 11:1945. [PMID: 32327642] [PMCID: PMC7181630] [DOI: 10.1038/s41467-020-15763-2]
Abstract
We present a multi-voxel analytical approach, feature-specific informational connectivity (FSIC), that leverages hierarchical representations from a neural network to decode neural reactivation in fMRI data collected while participants performed an episodic visual recall task. We show that neural reactivation associated with low-level (e.g. edges), high-level (e.g. facial features), and semantic (e.g. “terrier”) features occur throughout the dorsal and ventral visual streams and extend into the frontal cortex. Moreover, we show that reactivation of both low- and high-level features correlate with the vividness of the memory, whereas only reactivation of low-level features correlates with recognition accuracy when the lure and target images are semantically similar. In addition to demonstrating the utility of FSIC for mapping feature-specific reactivation, these findings resolve the contributions of low- and high-level features to the vividness of visual memories and challenge a strict interpretation of the posterior-to-anterior visual hierarchy.
Memory recollection involves reactivation of neural activity that occurred during the recalled experience. Here, the authors show that neural reactivation can be decomposed into visual-semantic features, is widely synchronized throughout the brain, and predicts memory vividness and accuracy.
Affiliation(s)
- Michael B Bone, Rotman Research Institute at Baycrest, Toronto, ON, M6A 2E1, Canada; Department of Psychology, University of Toronto, Toronto, ON, M5S 1A1, Canada
- Fahad Ahmad, Rotman Research Institute at Baycrest, Toronto, ON, M6A 2E1, Canada
- Bradley R Buchsbaum, Rotman Research Institute at Baycrest, Toronto, ON, M6A 2E1, Canada; Department of Psychology, University of Toronto, Toronto, ON, M5S 1A1, Canada

30
Johansson R, Johansson M. Gaze position regulates memory accessibility during competitive memory retrieval. Cognition 2020; 197:104169. [DOI: 10.1016/j.cognition.2019.104169]
31
Wynn JS, Ryan JD, Buchsbaum BR. Eye movements support behavioral pattern completion. Proc Natl Acad Sci U S A 2020; 117:6246-6254. [PMID: 32123109] [PMCID: PMC7084073] [DOI: 10.1073/pnas.1917586117]
Abstract
The ability to recall a detailed event from a simple reminder is supported by pattern completion, a cognitive operation performed by the hippocampus wherein existing mnemonic representations are retrieved from incomplete input. In behavioral studies, pattern completion is often inferred through the false endorsement of lure (i.e., similar) items as old. However, evidence that such a response is due to the specific retrieval of a similar, previously encoded item is severely lacking. We used eye movement (EM) monitoring during a partial-cue recognition memory task to index reinstatement of lure images behaviorally via the recapitulation of encoding-related EMs or gaze reinstatement. Participants reinstated encoding-related EMs following degraded retrieval cues and this reinstatement was negatively correlated with accuracy for lure images, suggesting that retrieval of existing representations (i.e., pattern completion) underlies lure false alarms. Our findings provide evidence linking gaze reinstatement and pattern completion and advance a functional role for EMs in memory retrieval.
Affiliation(s)
- Jordana S Wynn, Department of Psychology, University of Toronto, Toronto, ON M5S 3G3, Canada; Rotman Research Institute, Baycrest Hospital, Toronto, ON M6A 2E1, Canada
- Jennifer D Ryan, Department of Psychology, University of Toronto, Toronto, ON M5S 3G3, Canada; Rotman Research Institute, Baycrest Hospital, Toronto, ON M6A 2E1, Canada
- Bradley R Buchsbaum, Department of Psychology, University of Toronto, Toronto, ON M5S 3G3, Canada; Rotman Research Institute, Baycrest Hospital, Toronto, ON M6A 2E1, Canada

32
St-Laurent M, Rosenbaum RS, Olsen RK, Buchsbaum BR. Representation of viewed and recalled film clips in patterns of brain activity in a person with developmental amnesia. Neuropsychologia 2020; 142:107436. [PMID: 32194085] [DOI: 10.1016/j.neuropsychologia.2020.107436]
Abstract
As clear memories transport us back into the past, the brain also revives prior patterns of neural activity, a phenomenon known as neural reactivation. While growing evidence indicates a link between neural reactivation and typical variations in memory performance in healthy individuals, it is unclear how and to what extent reactivation is disrupted by a memory disorder. The current study characterizes neural reactivation in a case of amnesia using Multivoxel Pattern Analysis (MVPA). We tested NC, an individual with developmental amnesia linked to a diencephalic stroke, and 19 young adult controls on a functional magnetic resonance imaging (fMRI) task during which participants viewed and recalled short videos multiple times. An encoding classifier trained and tested to identify videos based on brain activity patterns elicited at perception revealed superior classification in NC. The enhanced consistency in stimulus representation we observed in NC at encoding was accompanied by an absence of multivariate repetition suppression, which occurred over repeated viewing in the controls. Another recall classifier trained and tested to identify videos during mental replay indicated normal levels of classification in NC, despite his poor memory for stimulus content. However, a cross-condition classifier trained on perception trials and tested on mental replay trials (a strict test of reactivation) revealed significantly poorer classification in NC. Thus, while NC's brain activity was consistent and stimulus-specific during mental replay, this specificity did not reflect the reactivation of patterns elicited at perception to the same extent as controls. Fittingly, we identified brain regions for which activity supported stimulus representation during mental replay to a greater extent in NC than in controls. This activity was not modeled on perception, suggesting that compensatory patterns of representation based on generic knowledge can support consistent mental constructs when memory is faulty. Our results reveal several ways in which amnesia impacts distributed patterns of stimulus representation during encoding and retrieval.
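The cross-condition classification logic described in this abstract (train a classifier on perception trials, test it on mental replay trials) can be illustrated with a small synthetic sketch. All dimensions, noise levels, and variable names below are invented for illustration and do not reflect the study's actual data or analysis pipeline:

```python
# Toy cross-condition ("reactivation") analysis: fit a classifier on
# perception-trial activity patterns, then test it on replay trials.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_videos, n_voxels, n_reps = 3, 50, 20

# Each "video" has a characteristic voxel pattern; replay trials reuse a
# noisier copy of the perception pattern (i.e., neural reactivation).
templates = rng.normal(size=(n_videos, n_voxels))

def make_trials(noise):
    X = np.vstack([t + rng.normal(scale=noise, size=(n_reps, n_voxels))
                   for t in templates])
    y = np.repeat(np.arange(n_videos), n_reps)
    return X, y

X_percep, y_percep = make_trials(noise=1.0)  # viewing trials
X_replay, y_replay = make_trials(noise=2.0)  # recall trials (noisier)

clf = LogisticRegression(max_iter=1000).fit(X_percep, y_percep)
acc = clf.score(X_replay, y_replay)  # cross-condition decoding accuracy
```

In this toy setup replay patterns do recapitulate perception, so cross-condition accuracy stays high; the reduced cross-condition accuracy reported for NC, alongside intact within-condition accuracy, is the signature of stimulus-specific replay patterns that are not modeled on perception.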
Collapse
Affiliation(s)
- Marie St-Laurent
- Rotman Research Institute at Baycrest, 3560 Bathurst Street, Toronto, Ontario, M6A 2E1, Canada.
| | - R Shayna Rosenbaum
- Rotman Research Institute at Baycrest, 3560 Bathurst Street, Toronto, Ontario, M6A 2E1, Canada; Department of Psychology, York University, Faculty of Health, Behavioural Sciences Building, 4700 Keele Street, Toronto, Ontario, M3J 1P3, Canada
| | - Rosanna K Olsen
- Rotman Research Institute at Baycrest, 3560 Bathurst Street, Toronto, Ontario, M6A 2E1, Canada; Department of Psychology, University of Toronto, 100 St. George Street, 4th Floor, Toronto, ON, M5S 3G3, Canada
| | - Bradley R Buchsbaum
- Rotman Research Institute at Baycrest, 3560 Bathurst Street, Toronto, Ontario, M6A 2E1, Canada; Department of Psychology, University of Toronto, 100 St. George Street, 4th Floor, Toronto, ON, M5S 3G3, Canada
| |
Collapse
|
33
|
Ryan JD, Shen K, Liu Z. The intersection between the oculomotor and hippocampal memory systems: empirical developments and clinical implications. Ann N Y Acad Sci 2020; 1464:115-141. [PMID: 31617589 PMCID: PMC7154681 DOI: 10.1111/nyas.14256] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2019] [Revised: 08/29/2019] [Accepted: 09/19/2019] [Indexed: 12/28/2022]
Abstract
Decades of cognitive neuroscience research have shown that where we look is intimately connected to what we remember. In this article, we review findings from human and nonhuman animals, using behavioral, neuropsychological, neuroimaging, and computational modeling methods, to show that the oculomotor and hippocampal memory systems interact in a reciprocal manner, on a moment-to-moment basis, mediated by a vast structural and functional network. Visual exploration serves to efficiently gather information from the environment for the purpose of creating new memories, updating existing memories, and reconstructing the rich, vivid details from memory. Conversely, memory increases the efficiency of visual exploration. We call for models of oculomotor control to consider the influence of the hippocampal memory system on the cognitive control of eye movements, and for models of hippocampal and broader medial temporal lobe function to consider the influence of the oculomotor system on the development and expression of memory. We describe eye movement-based applications for the detection of neurodegeneration and delivery of therapeutic interventions for mental health disorders for which the hippocampus is implicated and memory dysfunctions are at the forefront.
Collapse
Affiliation(s)
- Jennifer D. Ryan
- Rotman Research Institute, Baycrest, Toronto, Ontario, Canada
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
| | - Kelly Shen
- Rotman Research Institute, Baycrest, Toronto, Ontario, Canada
| | - Zhong-Xu Liu
- Department of Behavioral Sciences, University of Michigan-Dearborn, Dearborn, Michigan
| |
Collapse
|
34
|
Wynn JS, Shen K, Ryan JD. Eye Movements Actively Reinstate Spatiotemporal Mnemonic Content. Vision (Basel) 2019; 3:E21. [PMID: 31735822 PMCID: PMC6802778 DOI: 10.3390/vision3020021] [Citation(s) in RCA: 48] [Impact Index Per Article: 9.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2019] [Revised: 05/09/2019] [Accepted: 05/10/2019] [Indexed: 12/23/2022] Open
Abstract
Eye movements support memory encoding by binding distinct elements of the visual world into coherent representations. However, the role of eye movements in memory retrieval is less clear. We propose that eye movements play a functional role in retrieval by reinstating the encoding context. By overtly shifting attention in a manner that broadly recapitulates the spatial locations and temporal order of encoded content, eye movements facilitate access to, and reactivation of, associated details. Such mnemonic gaze reinstatement may be obligatorily recruited when task demands exceed cognitive resources, as is often observed in older adults. We review research linking gaze reinstatement to retrieval, describe the neural integration between the oculomotor and memory systems, and discuss implications for models of oculomotor control, memory, and aging.
Collapse
Affiliation(s)
- Jordana S. Wynn
- Rotman Research Institute, Baycrest, 3560 Bathurst St., Toronto, ON M6A 2E1, Canada
- Department of Psychology, University of Toronto, 100 St George St., Toronto, ON M5S 3G3, Canada
| | - Kelly Shen
- Rotman Research Institute, Baycrest, 3560 Bathurst St., Toronto, ON M6A 2E1, Canada
| | - Jennifer D. Ryan
- Rotman Research Institute, Baycrest, 3560 Bathurst St., Toronto, ON M6A 2E1, Canada
- Department of Psychology, University of Toronto, 100 St George St., Toronto, ON M5S 3G3, Canada
- Department of Psychiatry, University of Toronto, 250 College St., Toronto, ON M5T 1R8, Canada
| |
Collapse
|
35
|
Bicanski A, Burgess N. A Computational Model of Visual Recognition Memory via Grid Cells. Curr Biol 2019; 29:979-990.e4. [PMID: 30853437 PMCID: PMC6428694 DOI: 10.1016/j.cub.2019.01.077] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2018] [Revised: 12/23/2018] [Accepted: 01/30/2019] [Indexed: 02/07/2023]
Abstract
Models of face, object, and scene recognition traditionally focus on massively parallel processing of low-level features, with higher-order representations emerging at later processing stages. However, visual perception is tightly coupled to eye movements, which are necessarily sequential. Recently, neurons in entorhinal cortex have been reported with grid cell-like firing in response to eye movements, i.e., in visual space. Following the presumed role of grid cells in vector navigation, we propose a model of recognition memory for familiar faces, objects, and scenes, in which grid cells encode translation vectors between salient stimulus features. A sequence of saccadic eye-movement vectors, moving from one salient feature to the expected location of the next, potentially confirms an initial hypothesis (accumulating evidence toward a threshold) about stimulus identity, based on the relative feature layout (i.e., going beyond recognition of individual features). The model provides an explicit neural mechanism for the long-held view that directed saccades support hypothesis-driven, constructive perception and recognition; is compatible with holistic face processing; and constitutes the first quantitative proposal for a role of grid cells in visual recognition. The variance of grid cell activity along saccade trajectories exhibits 6-fold symmetry across 360 degrees akin to recently reported fMRI data. The model suggests that disconnecting grid cells from occipitotemporal inputs may yield prosopagnosia-like symptoms. The mechanism is robust with regard to partial visual occlusion, can accommodate size and position invariance, and suggests a functional explanation for medial temporal lobe involvement in visual memory for relational information and memory-guided attention.
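The recognition mechanism summarized in this abstract (translation vectors between salient features, with evidence accumulating toward a threshold) can be caricatured in a few lines. The feature layouts, exponential match function, and threshold below are illustrative assumptions, not the paper's grid-cell implementation:

```python
# Toy evidence accumulation over saccade vectors: each stimulus hypothesis
# predicts the vector to the next expected feature; observed saccades that
# match a hypothesis's predictions accumulate evidence toward a threshold.
import numpy as np

layouts = {  # hypothetical feature coordinates (e.g., eyes, nose, mouth)
    "face_A": np.array([[0.0, 0.0], [1.0, 0.2], [0.5, -1.0]]),
    "face_B": np.array([[0.0, 0.0], [1.4, 0.0], [0.7, -0.6]]),
}

def recognize(saccades, threshold=1.5):
    """Accumulate evidence for each hypothesis from a saccade sequence."""
    evidence = {name: 0.0 for name in layouts}
    for i, observed in enumerate(saccades):
        for name, pts in layouts.items():
            predicted = pts[i + 1] - pts[i]  # vector to next expected feature
            # Closer match between predicted and observed vector -> more evidence.
            evidence[name] += np.exp(-np.linalg.norm(observed - predicted))
        best = max(evidence, key=evidence.get)
        if evidence[best] >= threshold:  # hypothesis confirmed
            return best, evidence
    return max(evidence, key=evidence.get), evidence

# Saccades made while viewing "face_A" (small constant offset as noise):
true = layouts["face_A"]
saccades = [true[1] - true[0] + 0.05, true[2] - true[1] - 0.05]
winner, ev = recognize(saccades)
```

The sketch captures only the sequential, hypothesis-testing character of the proposal; the model itself grounds the vector predictions in grid-cell population codes rather than explicit coordinates.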
Collapse
Affiliation(s)
- Andrej Bicanski
- Institute of Cognitive Neuroscience, University College London, Alexandra House, 17 Queen Square, WC1N 3AZ London, UK.
| | - Neil Burgess
- Institute of Cognitive Neuroscience, University College London, Alexandra House, 17 Queen Square, WC1N 3AZ London, UK.
| |
Collapse
|
37
|
Eye Movement-Related Confounds in Neural Decoding of Visual Working Memory Representations. eNeuro 2018; 5:ENEURO.0401-17.2018. [PMID: 30310862 PMCID: PMC6179574 DOI: 10.1523/eneuro.0401-17.2018] [Citation(s) in RCA: 34] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2017] [Revised: 06/03/2018] [Accepted: 06/12/2018] [Indexed: 11/21/2022] Open
Abstract
A relatively new analysis technique, known as neural decoding or multivariate pattern analysis (MVPA), has become increasingly popular for cognitive neuroimaging studies in recent years. These techniques promise to uncover the representational contents of neural signals, as well as the underlying code and its dynamic profile. One field in which these techniques have led to particularly novel insights is that of visual working memory (VWM). In the present study, we subjected human volunteers to a combined VWM/imagery task while recording their neural signals using magnetoencephalography (MEG). We applied multivariate decoding analyses to uncover the temporal profile underlying the neural representations of the memorized item. Analysis of gaze position, however, revealed that our results were contaminated by systematic eye movements, suggesting that the MEG decoding results from our originally planned analyses were confounded. In addition to the eye movement analyses, we also present the original analyses to highlight how these might have readily led to invalid conclusions. Finally, we demonstrate a potential remedy, whereby we train the decoders on a functional localizer that was specifically designed to target bottom-up sensory signals and as such avoids eye movements. We conclude by arguing for more awareness of the potentially pervasive and ubiquitous effects of eye movement-related confounds.
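The diagnostic implied by this abstract (if the same decoder succeeds on gaze position alone, sensor-level decoding may be driven by eye movements rather than neural representations) can be sketched with fabricated data. The effect sizes, sensor count, and leakage model below are invented for illustration:

```python
# Toy confound check: condition-dependent gaze leaks into "neural" sensors,
# so decoding gaze alone flags the sensor-level result as suspect.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials = 400
y = rng.integers(0, 2, n_trials)  # two memorized items

# Systematic eye movements: mean horizontal gaze shifts with the item.
gaze = rng.normal(size=(n_trials, 2))      # (x, y) gaze position per trial
gaze[:, 0] += 1.0 * y                      # condition-dependent gaze bias

# "Neural" sensors = noise plus leakage of the gaze signal into some channels.
sensors = rng.normal(size=(n_trials, 30))
sensors[:, :5] += 0.5 * gaze[:, [0]]       # eye-movement artifact leakage

clf = LogisticRegression(max_iter=1000)
acc_sensors = cross_val_score(clf, sensors, y, cv=5).mean()
acc_gaze = cross_val_score(clf, gaze, y, cv=5).mean()
```

Here both decoders succeed above chance: the gaze-only result reveals that the sensor-level decoding could reflect eye movements, which is the control analysis the abstract argues for.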
Collapse
|
38
|
Annerer-Walcher S, Körner C, Benedek M. Eye behavior does not adapt to expected visual distraction during internally directed cognition. PLoS One 2018; 13:e0204963. [PMID: 30265715 PMCID: PMC6161918 DOI: 10.1371/journal.pone.0204963] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2018] [Accepted: 09/16/2018] [Indexed: 11/18/2022] Open
Abstract
When focused on a specific internal task, like calculating a multiplication in our head, we are able to ignore sensory distraction. This may be achieved by effective perceptual decoupling during internally directed cognition. The present study investigated whether decoupling from external events during internally directed cognition represents an active shielding mechanism that adapts to expected external distraction, or a passive/automatic shielding mechanism that is independent of external distraction. Participants performed multiplications in their head (e.g., 26 x 7), a task that required turning attention inward as soon as the problem was encoded. At the beginning of a block of trials, participants were informed whether or not distractors could appear during the calculation period, thereby potentially allowing them to prepare for the distractors. We tracked their eye behavior as markers of perceptual decoupling and workload. Turning attention inward to calculate the multiplication elicited evidence of perceptual decoupling for five of six eye parameters: blink rate, saccade rate, and microsaccade rate increased; gaze was less constricted to the center; and pupils dilated. Although participants perceived blocks with distractors as more challenging, performance and the eye behavior markers of both perceptual decoupling and workload were unaffected. This result supports the notion of perceptual decoupling as an automatic mechanism: focusing inward induces desensitization to external events, independent of external distraction.
Collapse
|