1. Clement A, Anderson BA. Statistically learned associations among objects bias attention. Atten Percept Psychophys 2024; 86:2251-2261. PMID: 39198359. DOI: 10.3758/s13414-024-02941-3.
Abstract
A growing body of research suggests that semantic relationships among objects can influence the control of attention. There is also some evidence that learned associations among objects can bias attention. However, it is unclear whether these findings are due to statistical learning or existing semantic relationships. In the present study, we examined whether statistically learned associations among objects can bias attention in the absence of existing semantic relationships. Participants searched for one of four targets among pairs of novel shapes and identified whether the target was present or absent from the display. In an initial training phase, each target was paired with an associated distractor in a fixed spatial configuration. In a subsequent test phase, each target could be paired with the previously associated distractor or a different distractor. In our first experiment, the previously associated distractor was always presented in the same pair as the target. Participants were faster to respond when this distractor was present on target-present trials. In our second experiment, the previously associated distractor was presented in a different pair than the target in the test phase. In this case, participants were slower to respond when this distractor was present on both target-present and target-absent trials. Together, these findings provide clear evidence that statistically learned associations among objects can bias attention, analogous to the effects of semantic relationships on attention.
Affiliation(s)
- Andrew Clement: Department of Psychological & Brain Sciences, Texas A&M University, College Station, TX, USA; Department of Psychology and Neuroscience, Millsaps College, 1701 N. State St, Jackson, MS, 39210, USA.
- Brian A Anderson: Department of Psychological & Brain Sciences, Texas A&M University, College Station, TX, USA.
2. Damiano C, Leemans M, Wagemans J. Exploring the Semantic-Inconsistency Effect in Scenes Using a Continuous Measure of Linguistic-Semantic Similarity. Psychol Sci 2024; 35:623-634. PMID: 38652604. DOI: 10.1177/09567976241238217.
Abstract
Viewers use contextual information to visually explore complex scenes. Object recognition is facilitated by exploiting object-scene relations (which objects are expected in a given scene) and object-object relations (which objects are expected because of the occurrence of other objects). Semantically inconsistent objects deviate from these expectations, so they tend to capture viewers' attention (the semantic-inconsistency effect). Some objects fit the identity of a scene more or less than others, yet semantic inconsistencies have hitherto been operationalized as binary (consistent vs. inconsistent). In an eye-tracking experiment (N = 21 adults), we study the semantic-inconsistency effect in a continuous manner by using the linguistic-semantic similarity of an object to the scene category and to other objects in the scene. We found that both highly consistent and highly inconsistent objects are viewed more than other objects (U-shaped relationship), revealing that the (in)consistency effect is more than a simple binary classification.
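The continuous operationalization described above can be illustrated with a toy computation. This is a hypothetical sketch, not the study's pipeline: the study used linguistic-semantic embeddings of object and scene labels, whereas the three-dimensional vectors below are invented. It only shows how a graded similarity score replaces a binary consistent/inconsistent label:

```python
import numpy as np

def semantic_similarity(a, b):
    """Cosine similarity between two embedding vectors (graded, not binary)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 3-D embeddings for an object label and a scene-category label.
obj_vec = [1.0, 0.0, 1.0]
scene_vec = [1.0, 1.0, 0.0]
sim = semantic_similarity(obj_vec, scene_vec)  # 0.5: moderately consistent
```

With a score like this, every object in a scene falls somewhere on a consistency continuum, which is what makes the reported U-shaped relationship between similarity and viewing time expressible at all.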
Affiliation(s)
- Claudia Damiano: Department of Psychology, University of Toronto; Laboratory of Experimental Psychology, Department of Brain and Cognition, KU Leuven.
- Maarten Leemans: Laboratory of Experimental Psychology, Department of Brain and Cognition, KU Leuven.
- Johan Wagemans: Laboratory of Experimental Psychology, Department of Brain and Cognition, KU Leuven.
3. Biggs AT, Pettijohn KA, Blacker KJ. Contextual cueing during lethal force training: How target design and repetition can alter threat assessments. Mil Psychol 2024; 36:353-365. PMID: 38661462. PMCID: PMC11057649. DOI: 10.1080/08995605.2023.2178785.
Abstract
Lethal force training requires individuals to make threat assessments, which involves holistic scenario processing to identify potential threats. Photorealistic targets can make threat/non-threat judgments substantially more genuine and challenging compared to simple cardboard or silhouette targets. Unfortunately, repeated target use also brings unintended consequences that could invalidate threat assessment processes conducted during training. Contextually rich or unique targets could be implicitly memorable in a way that allows observers to recall weapon locations rather than forcing observers to conduct a naturalistic assessment. Experiment 1 demonstrated robust contextual cueing effects in a well-established shoot/don't-shoot stimulus set, and Experiment 2 extended this finding from complex scene stimuli to simple actor-only stimuli. Experiment 3 demonstrated that these effects also occurred among trained professionals using rifles rather than computer-based tasks. Taken together, these findings demonstrate the potential for uncontrolled target repetition to alter the fundamental processes of threat assessment during lethal force training.
Affiliation(s)
- Adam T. Biggs: Medical Department, Naval Special Warfare Command, Coronado, California.
- Kyle A. Pettijohn: Aeromedical Department, Naval Medical Research Unit – Dayton, Wright-Patterson AFB, Dayton, Ohio.
- Kara J. Blacker: Aeromedical Department, Naval Medical Research Unit – Dayton, Wright-Patterson AFB, Dayton, Ohio.
4. Anderson BA. Trichotomy revisited: A monolithic theory of attentional control. Vision Res 2024; 217:108366. PMID: 38387262. PMCID: PMC11523554. DOI: 10.1016/j.visres.2024.108366.
Abstract
The control of attention was long held to reflect the influence of two competing mechanisms of assigning priority, one goal-directed and the other stimulus-driven. Learning-dependent influences on the control of attention that could not be attributed to either of those two established mechanisms of control gave rise to the concept of selection history and a corresponding third mechanism of attentional control. The trichotomy framework that ensued has come to dominate theories of attentional control over the past decade, replacing the historical dichotomy. In this theoretical review, I readily affirm that distinctions between the influence of goals, salience, and selection history are substantive and meaningful, and that abandoning the dichotomy between goal-directed and stimulus-driven mechanisms of control was appropriate. I do, however, question whether a theoretical trichotomy is the right answer to the problem posed by selection history. If we reframe the influence of goals and selection history as different flavors of memory-dependent modulations of attentional priority and if we characterize the influence of salience as a consequence of insufficient competition from such memory-dependent sources of priority, it is possible to account for a wide range of attention-related phenomena with only one mechanism of control. The monolithic framework for the control of attention that I propose offers several concrete advantages over a trichotomy framework, which I explore here.
Affiliation(s)
- Brian A Anderson: Texas A&M University, Department of Psychological & Brain Sciences, 4235 TAMU, College Station, TX 77843-4235, United States.
5. Geyer T, Zinchenko A, Seitz W, Balik M, Müller HJ, Conci M. Mission impossible? Spatial context relearning following a target relocation event depends on cue predictiveness. Psychon Bull Rev 2024; 31:148-155. PMID: 37434045. PMCID: PMC10867038. DOI: 10.3758/s13423-023-02328-9.
Abstract
Visual search for a target is faster when the spatial layout of distractors is repeatedly encountered, illustrating that statistical learning of contextual invariances facilitates attentional guidance (contextual cueing; Chun & Jiang, 1998, Cognitive Psychology, 36, 28-71). While contextual learning is usually relatively efficient, relocating the target to an unexpected location (within an otherwise unchanged search layout) typically abolishes contextual cueing, and the benefits deriving from invariant contexts recover only slowly with extensive training (Zellin et al., 2014, Psychonomic Bulletin & Review, 21(4), 1073-1079). However, a recent study by Peterson et al. (2022, Attention, Perception, & Psychophysics, 84(2), 474-489) in fact reported rather strong adaptation of spatial contextual memories following target position changes, thus contrasting with prior work. Peterson et al. argued that previous studies may have been underpowered to detect a reliable recovery of contextual cueing after the change. However, their experiments also used a specific display design that frequently presented the targets at the same locations, which might reduce the predictability of the contextual cues, thereby facilitating their flexible relearning (irrespective of statistical power). The current study was a (high-powered) replication of Peterson et al., taking into account both statistical power and target overlap in context-memory adaptation. We found reliable contextual cueing for the initial target location irrespective of whether the targets shared their location across multiple displays, or not. However, contextual adaptation following a target relocation event occurred only when target locations were shared. This suggests that cue predictability modulates contextual adaptation, over and above a possible (yet negligible) influence of statistical power.
Affiliation(s)
- Thomas Geyer: Department of Psychology, Ludwig Maximilian University of Munich, Leopoldstraße 13, 80802, Munich, Germany; Munich Center of Neurosciences-Brain & Mind, Munich, Germany; NICUM-NeuroImaging Core Unit Munich, Munich, Germany.
- Artyom Zinchenko: Department of Psychology, Ludwig Maximilian University of Munich, Leopoldstraße 13, 80802, Munich, Germany.
- Werner Seitz: Department of Psychology, Ludwig Maximilian University of Munich, Leopoldstraße 13, 80802, Munich, Germany.
- Merve Balik: Department of Psychology, Ludwig Maximilian University of Munich, Leopoldstraße 13, 80802, Munich, Germany.
- Hermann J Müller: Department of Psychology, Ludwig Maximilian University of Munich, Leopoldstraße 13, 80802, Munich, Germany; Munich Center of Neurosciences-Brain & Mind, Munich, Germany.
- Markus Conci: Department of Psychology, Ludwig Maximilian University of Munich, Leopoldstraße 13, 80802, Munich, Germany; Munich Center of Neurosciences-Brain & Mind, Munich, Germany.
6. A-Izzeddin EJ, Mattingley JB, Harrison WJ. The influence of natural image statistics on upright orientation judgements. Cognition 2024; 242:105631. PMID: 37820487. DOI: 10.1016/j.cognition.2023.105631.
Abstract
Humans have well-documented priors for many features present in nature that guide visual perception. Despite being putatively grounded in the statistical regularities of the environment, scene priors are frequently violated due to the inherent variability of visual features from one scene to the next. However, these repeated violations do not appreciably challenge visuo-cognitive function, necessitating the broad use of priors in conjunction with context-specific information. We investigated the trade-off between participants' internal expectations formed from both longer-term priors and those formed from immediate contextual information using a perceptual inference task and naturalistic stimuli. Notably, our task required participants to make perceptual inferences about naturalistic images using their own internal criteria, rather than making comparative judgements. Nonetheless, we show that observers' performance is well approximated by a model that makes inferences using a prior for low-level image statistics, aggregated over many images. We further show that the dependence on this prior is rapidly re-weighted against contextual information, even when misleading. Our results therefore provide insight into how apparent high-level interpretations of scene appearances follow from the most basic of perceptual processes, which are grounded in the statistics of natural images.
Affiliation(s)
- Emily J A-Izzeddin: Queensland Brain Institute, Building 79, University of Queensland, St Lucia, QLD 4072, Australia.
- Jason B Mattingley: Queensland Brain Institute, Building 79, University of Queensland, St Lucia, QLD 4072, Australia; School of Psychology, Building 24A, University of Queensland, St Lucia, QLD 4072, Australia.
- William J Harrison: Queensland Brain Institute, Building 79, University of Queensland, St Lucia, QLD 4072, Australia; School of Psychology, Building 24A, University of Queensland, St Lucia, QLD 4072, Australia.
7. Peelen MV, Berlot E, de Lange FP. Predictive processing of scenes and objects. Nat Rev Psychol 2024; 3:13-26. PMID: 38989004. PMCID: PMC7616164. DOI: 10.1038/s44159-023-00254-0.
Abstract
Real-world visual input consists of rich scenes that are meaningfully composed of multiple objects which interact in complex, but predictable, ways. Despite this complexity, we recognize scenes, and objects within these scenes, from a brief glance at an image. In this review, we synthesize recent behavioral and neural findings that elucidate the mechanisms underlying this impressive ability. First, we review evidence that visual object and scene processing is partly implemented in parallel, allowing for a rapid initial gist of both objects and scenes concurrently. Next, we discuss recent evidence for bidirectional interactions between object and scene processing, with scene information modulating the visual processing of objects, and object information modulating the visual processing of scenes. Finally, we review evidence that objects also combine with each other to form object constellations, modulating the processing of individual objects within the object pathway. Altogether, these findings can be understood by conceptualizing object and scene perception as the outcome of a joint probabilistic inference, in which "best guesses" about objects act as priors for scene perception and vice versa, in order to concurrently optimize visual inference of objects and scenes.
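The closing idea, scene and object perception as joint probabilistic inference, can be made concrete with a toy Bayesian update. This is an illustrative sketch, not the authors' model: the object labels and probability values below are invented, and only the structure (a scene "best guess" acting as a prior over object identity) follows the review's framing:

```python
def posterior(likelihood, prior):
    """Bayes' rule over a discrete hypothesis space: normalize likelihood x prior."""
    unnorm = {k: likelihood[k] * prior[k] for k in likelihood}
    z = sum(unnorm.values())
    return {k: v / z for k, v in unnorm.items()}

# Ambiguous image evidence: equally consistent with a toaster or a mailbox.
likelihood = {"toaster": 0.5, "mailbox": 0.5}
# The current scene interpretation ("kitchen") supplies a prior over objects.
scene_prior = {"toaster": 0.8, "mailbox": 0.2}
post = posterior(likelihood, scene_prior)
# Because the likelihood is uninformative, the scene prior dominates:
# post["toaster"] == 0.8
```

In the full bidirectional story, the resulting object posterior would in turn feed back as a prior on the scene interpretation, which is what makes the inference "joint" rather than a one-way cascade.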
Affiliation(s)
- Marius V Peelen: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands.
- Eva Berlot: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands.
- Floris P de Lange: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands.
8. Seitz W, Zinchenko A, Müller HJ, Geyer T. Contextual cueing of visual search reflects the acquisition of an optimal, one-for-all oculomotor scanning strategy. Commun Psychol 2023; 1:20. PMID: 39242890. PMCID: PMC11332235. DOI: 10.1038/s44271-023-00019-8.
Abstract
Visual search improves when a target is encountered repeatedly at a fixed location within a stable distractor arrangement (spatial context), compared to non-repeated contexts. The standard account attributes this contextual-cueing effect to the acquisition of display-specific long-term memories, which, when activated by the current display, cue attention to the target location. Here we present an alternative, procedural-optimization account, according to which contextual facilitation arises from the acquisition of generic oculomotor scanning strategies, optimized with respect to the entire set of displays, with frequently searched displays accruing greater weight in the optimization process. To decide between these alternatives, we examined measures of the similarity, across time-on-task, of the spatio-temporal sequences of fixations through repeated and non-repeated displays. We found scanpath similarity to increase generally with learning, but more for repeated versus non-repeated displays. This pattern contradicts display-specific guidance, but supports one-for-all scanpath optimization.
Affiliation(s)
- Werner Seitz: Department Psychologie, Ludwig-Maximilians-Universität München, Munich, Germany.
- Artyom Zinchenko: Department Psychologie, Ludwig-Maximilians-Universität München, Munich, Germany.
- Hermann J Müller: Department Psychologie, Ludwig-Maximilians-Universität München, Munich, Germany; Munich Center for Neurosciences - Brain & Mind, Ludwig-Maximilians-Universität München, Planegg-Martinsried, Munich, Germany.
- Thomas Geyer: Department Psychologie, Ludwig-Maximilians-Universität München, Munich, Germany; Munich Center for Neurosciences - Brain & Mind, Ludwig-Maximilians-Universität München, Planegg-Martinsried, Munich, Germany; NICUM - NeuroImaging Core Unit Munich, Ludwig-Maximilians-Universität München, Munich, Germany.
9. Hannula DE, Minor GN, Slabbekoorn D. Conscious awareness and memory systems in the brain. Wiley Interdiscip Rev Cogn Sci 2023; 14:e1648. PMID: 37012615. DOI: 10.1002/wcs.1648.
Abstract
The term "memory" typically refers to conscious retrieval of events and experiences from our past, but experience can also change our behaviour without corresponding awareness of the learning process or the associated outcome. Based primarily on early neuropsychological work, theoretical perspectives have distinguished between conscious memory, said to depend critically on structures in the medial temporal lobe (MTL), and a collection of performance-based memories that do not. The most influential of these memory systems perspectives, the declarative memory theory, continues to be a mainstay of scientific work today despite mounting evidence suggesting that contributions of MTL structures go beyond the kinds or types of memory that can be explicitly reported. Consistent with these reports, more recent perspectives have focused increasingly on the processing operations supported by particular brain regions and the qualities or characteristics of resulting representations whether memory is expressed with or without awareness. These alternatives to the standard model generally converge on two key points. First, the hippocampus is critical for relational memory binding and representation even without awareness and, second, there may be little difference between some types of priming and explicit, familiarity-based recognition. Here, we examine the evolution of memory systems perspectives and critically evaluate scientific evidence that has challenged the status quo. Along the way, we highlight some of the challenges that researchers encounter in the context of this work, which can be contentious, and describe innovative methods that have been used to examine unconscious memory in the lab. This article is categorized under: Psychology > Memory; Psychology > Theory and Methods; Philosophy > Consciousness.
10. Liao MR, Kim AJ, Anderson BA. Neural correlates of value-driven spatial orienting. Psychophysiology 2023; 60:e14321. PMID: 37171022. PMCID: PMC10524674. DOI: 10.1111/psyp.14321.
Abstract
Reward learning has been shown to habitually guide overt spatial attention to specific regions of a scene. However, the neural mechanisms that support this bias are unknown. In the present study, participants learned to orient themselves to a particular quadrant of a scene (a high-value quadrant) to maximize monetary gains. This learning was scene-specific, with the high-value quadrant varying across different scenes. During a subsequent test phase, participants were faster at identifying a target if it appeared in the high-value quadrant (valid), and initial saccades were more likely to be made to the high-value quadrant. fMRI analyses during the test phase revealed learning-dependent priority signals in the caudate tail, superior colliculus, frontal eye field, anterior cingulate cortex, and insula, paralleling findings concerning feature-based, value-driven attention. In addition, ventral regions typically associated with scene selection and spatial information processing, including the hippocampus, parahippocampal gyrus, and temporo-occipital cortex, were also implicated. Taken together, our findings offer new insights into the neural architecture subserving value-driven attention, both extending our understanding of nodes in the attention network previously implicated in feature-based, value-driven attention and identifying a ventral network of brain regions implicated in reward's influence on scene-dependent spatial orienting.
Affiliation(s)
- Ming-Ray Liao: Department of Psychological and Brain Sciences, Texas A&M University, College Station, Texas, USA.
- Andy J Kim: Department of Psychological and Brain Sciences, Texas A&M University, College Station, Texas, USA.
- Brian A Anderson: Department of Psychological and Brain Sciences, Texas A&M University, College Station, Texas, USA.
11. Nuthmann A, Clark CNL. Pseudoneglect during object search in naturalistic scenes. Exp Brain Res 2023; 241:2345-2360. PMID: 37610677. PMCID: PMC10471692. DOI: 10.1007/s00221-023-06679-6.
Abstract
Pseudoneglect, that is the tendency to pay more attention to the left side of space, is typically assessed with paper-and-pencil tasks, particularly line bisection. In the present study, we used an everyday task with more complex stimuli. Subjects' task was to look for pre-specified objects in images of real-world scenes. In half of the scenes, the search object was located on the left side of the image (L-target); in the other half of the scenes, the target was on the right side (R-target). To control for left-right differences in the composition of the scenes, half of the scenes were mirrored horizontally. Eye-movement recordings were used to track the course of pseudoneglect on a millisecond timescale. Subjects' initial eye movements were biased to the left of the scene, but less so for R-targets than for L-targets, indicating that pseudoneglect was modulated by task demands and scene guidance. We further analyzed how horizontal gaze positions changed over time. When the data for L- and R-targets were pooled, the leftward bias lasted, on average, until the first second of the search process came to an end. Even for right-side targets, the gaze data showed an early left-bias, which was compensated by adjustments in the direction and amplitude of later saccades. Importantly, we found that pseudoneglect affected search efficiency by leading to less efficient scan paths and consequently longer search times for R-targets compared with L-targets. It may therefore be prudent to take spatial asymmetries into account when studying visual search in scenes.
Affiliation(s)
- Antje Nuthmann: Institute of Psychology, University of Kiel, Olshausenstr. 62, 24118, Kiel, Germany; Psychology Department, School of Philosophy, Psychology and Language Sciences, University of Edinburgh, Edinburgh, UK.
- Christopher N L Clark: Psychology Department, School of Philosophy, Psychology and Language Sciences, University of Edinburgh, Edinburgh, UK.
12. Ganesan S, Melnik N, Azanon E, Pollmann S. A gaze-contingent saccadic re-referencing training with simulated central vision loss. J Vis 2023; 23:13. PMID: 36662502. PMCID: PMC9872842. DOI: 10.1167/jov.23.1.13.
Abstract
Patients with central vision loss (CVL) adopt an eccentric retinal location for fixation, a preferred retinal location (PRL), to compensate for vision loss at the fovea. Although most patients with CVL are able to rapidly use a PRL instead of the fovea, saccadic re-referencing to a PRL develops slowly. Without re-referencing, saccades land the saccade target in the scotoma. This results in corrective saccades and leads to inefficient visual exploration. Here, we tested a new method to train saccadic re-referencing. Healthy participants performed gaze-contingent visual search tasks with simulated central scotoma in which participants had to fixate targets with an experimenter-defined forced retinal location (FRL). In experiment 1, we compared single-target search and foraging search tasks in the course of five training sessions. Results showed that both tasks improved the efficiency of gaze sequences and led to saccadic re-referencing to the FRL. In experiment 2, we trained participants extensively for 25 sessions, both with and without a gaze-contingent FRL-marker visible during training. After extensive training, observers' performance approached that of foveal vision. Thus, gaze-contingent FRL-fixation may become an efficient tool for saccadic re-referencing training in patients with central vision loss.
Affiliation(s)
- Sharavanan Ganesan: Department of Psychology, Otto-von-Guericke University, Magdeburg, Germany.
- Natalia Melnik: Department of Psychology, Otto-von-Guericke University, Magdeburg, Germany.
- Elena Azanon: Center for Behavioral Brain Sciences, Otto-von-Guericke University, Magdeburg, Germany; Department of Neurology, Otto-von-Guericke University Magdeburg, Magdeburg, Germany; Department of Behavioral Neurology, Leibniz Institute for Neurobiology, Magdeburg, Germany.
- Stefan Pollmann: Department of Psychology, Otto-von-Guericke University, Magdeburg, Germany; Center for Behavioral Brain Sciences, Otto-von-Guericke University, Magdeburg, Germany.
13. Pollmann S, Schneider WX. Working memory and active sampling of the environment: Medial temporal contributions. Handb Clin Neurol 2022; 187:339-357. PMID: 35964982. DOI: 10.1016/b978-0-12-823493-8.00029-8.
Abstract
Working memory (WM) refers to the ability to maintain and actively process information, derived either from perception or from long-term memory (LTM), for intelligent thought and action. This chapter focuses on the contributions of the temporal lobe, particularly the medial temporal lobe (MTL), to WM. First, neuropsychological evidence for the involvement of the MTL in WM maintenance is reviewed, arguing for a crucial role in the case of retaining complex relational bindings between memorized features. Next, MTL contributions at the level of neural mechanisms are covered, with a focus on WM encoding and maintenance, including interactions with ventral temporal cortex. Among WM use processes, we focus on active sampling of environmental information, a key input source to capacity-limited WM. MTL contributions to the bidirectional relationship between active sampling and memory are highlighted: WM control of active sampling, and sampling as a way of selecting input to WM. Memory-based sampling studies relying on scene and object inspection, vision-based exploration behavior (e.g., vicarious behavior), and memory-guided visual search are reviewed. The conclusion is that the MTL serves an important function in the selection of information from perception and its transfer from LTM to capacity-limited WM.
Affiliation(s)
- Stefan Pollmann: Department of Psychology and Center for Behavioral Brain Sciences, Otto-von-Guericke-University, Magdeburg, Germany.
- Werner X Schneider: Department of Psychology and Center for Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany.
14. Simulated central vision loss does not impair implicit location probability learning when participants search through simple displays. Atten Percept Psychophys 2021; 84:1901-1912. PMID: 34921336. PMCID: PMC8682040. DOI: 10.3758/s13414-021-02416-9.
Abstract
Central vision loss disrupts voluntary shifts of spatial attention during visual search. Recently, we reported that a simulated scotoma impaired learned spatial attention towards regions likely to contain search targets. In that task, search items were overlaid on natural scenes. Because natural scenes can induce explicit awareness of learned biases leading to voluntary shifts of attention, here we used a search display with a blank background less likely to induce awareness of target location probabilities. Participants searched both with and without a simulated central scotoma: a training phase contained targets more often in one screen quadrant and a testing phase contained targets equally often in all quadrants. In Experiment 1, training used no scotoma, while testing alternated between blocks of scotoma and no-scotoma search. Experiment 2 training included the scotoma and testing again alternated between scotoma and no-scotoma search. Response times and saccadic behaviors in both experiments showed attentional biases towards the high-probability target quadrant during scotoma and no-scotoma search. Whereas simulated central vision loss impairs learned spatial attention in the context of natural scenes, our results show that this may not arise from impairments to the basic mechanisms of attentional learning indexed by visual search tasks without scenes.
|
15
|
Anderson BA, Kim H, Kim AJ, Liao MR, Mrkonja L, Clement A, Grégoire L. The past, present, and future of selection history. Neurosci Biobehav Rev 2021; 130:326-350. [PMID: 34499927] [PMCID: PMC8511179] [DOI: 10.1016/j.neubiorev.2021.09.004] [Received: 04/16/2021] [Revised: 07/08/2021] [Accepted: 09/02/2021] [Indexed: 01/22/2023]
Abstract
The last ten years of attention research have witnessed a revolution, replacing a theoretical dichotomy (top-down vs. bottom-up control) with a trichotomy (biased by current goals, physical salience, and selection history). This third new mechanism of attentional control, selection history, is multifaceted. Some aspects of selection history must be learned over time whereas others reflect much more transient influences. A variety of different learning experiences can shape the attention system, including reward, aversive outcomes, past experience searching for a target, target‒non-target relations, and more. In this review, we provide an overview of the historical forces that led to the proposal of selection history as a distinct mechanism of attentional control. We then propose a formal definition of selection history, with concrete criteria, and identify different components of experience-driven attention that fit within this definition. The bulk of the review is devoted to exploring how these different components relate to one another. We conclude by proposing an integrative account of selection history centered on underlying themes that emerge from our review.
Affiliation(s)
- Brian A Anderson
- Texas A&M University, College Station, TX, 77843, United States.
- Haena Kim
- Texas A&M University, College Station, TX, 77843, United States
- Andy J Kim
- Texas A&M University, College Station, TX, 77843, United States
- Ming-Ray Liao
- Texas A&M University, College Station, TX, 77843, United States
- Lana Mrkonja
- Texas A&M University, College Station, TX, 77843, United States
- Andrew Clement
- Texas A&M University, College Station, TX, 77843, United States
|
16
|
Contextual cueing is not flexible. Conscious Cogn 2021; 93:103164. [PMID: 34157518] [DOI: 10.1016/j.concog.2021.103164] [Received: 09/01/2020] [Revised: 05/05/2021] [Accepted: 06/09/2021] [Indexed: 11/22/2022]
Abstract
Target detection is faster when search displays repeat, but properties of the memory representations that give rise to this contextual cueing effect remain uncertain. We adapted the contextual cueing task using an ABA design and recorded the eye movements of healthy young adults to determine whether the memory representations are flexible. Targets moved to a new location during the B phase and then returned to their original locations (second A phase). Contextual cueing effects in the first A phase were reinstated immediately in the second A phase, and response time costs eventually gave way to a repeated search advantage in the B phase, suggesting that two target-context associations were learned. However, this apparent flexibility disappeared when eye tracking data were used to subdivide repeated displays based on B-phase viewing of the original target quadrant. Therefore, memory representations acquired in the contextual cueing task resist change and are not flexible.
|
17
|
No explicit memory for individual trial display configurations in a visual search task. Mem Cognit 2021; 49:1705-1721. [PMID: 34100195] [DOI: 10.3758/s13421-021-01185-y] [Accepted: 04/27/2021] [Indexed: 11/08/2022]
Abstract
Previous evidence demonstrated that individuals can recall a target's location in a search display even if location information is completely task-irrelevant. This finding raises the question: does this ability to automatically encode a single item's location into a reportable memory trace extend to other aspects of spatial information as well? We tested this question using a paradigm designed to elicit attribute amnesia (Chen & Wyble, Psychological Science, 26(2) 203-210, 2015a). Participants were initially asked to report the location of a target letter among digits with stimuli arranged to form one of two or four spatial configurations varying randomly across trials. After completing numerous trials that matched their expectations, participants were surprised with a series of unexpected questions probing their memory for various aspects of the display they had just viewed. Participants had a profound inability to report which spatial configuration they had just perceived when the target's location was not unique to a specific configuration (i.e., orthogonal). Despite being unable to report the most recent configuration, answer choices on the surprise trial were focused around previously seen configurations, rather than novel configurations. Thus, there were clear memories of the set of configurations that had been viewed during the experiment but not of the specific configuration from the most recent trial. This finding helps to set boundary conditions on previous findings regarding the automatic encoding of location information into memory.
|
18
|
The effects of perceptual cues on visual statistical learning: Evidence from children and adults. Mem Cognit 2021; 49:1645-1664. [PMID: 33876401] [DOI: 10.3758/s13421-021-01179-w] [Accepted: 04/01/2021] [Indexed: 11/08/2022]
Abstract
In visual statistical learning, one can extract the statistical regularities of target locations in an incidental manner. The current study examined the impact of salient perceptual cues on one type of visual statistical learning: probability cueing effects. In a visual search task, the target appeared more often in one quadrant (i.e., rich) than the other quadrants (i.e., sparse). Then, the screen was rotated by 90° and the targets appeared in the four quadrants with equal probabilities. In Experiment 1 without the addition of salient perceptual cues, adults showed significant probability cueing effects, but did not show a persistent attentional bias in the testing phase. In Experiments 2, 3, and 4, salient perceptual cues were added to the rich or the sparse quadrants. Adults showed significant probability cueing effects but no persistent attentional bias. In Experiment 5, younger children, older children, and adults showed significant probability cueing effects. All three groups also showed an attentional gradient phenomenon: reaction times were slower when the targets were in the sparse quadrant diagonal to, rather than adjacent to, the rich quadrant. Furthermore, both children groups showed a persistent egocentric attentional bias in the testing phase. These findings indicated that salient perceptual cues enhanced but did not reduce probability cueing effects, children and adults shared similar basic attentional mechanisms in probability cueing effects, and children and adults showed differences in the persistence of attentional bias.
|
19
|
Rehrig GL, Cheng M, McMahan BC, Shome R. Why are the batteries in the microwave?: Use of semantic information under uncertainty in a search task. Cogn Res Princ Implic 2021; 6:32. [PMID: 33855644] [PMCID: PMC8046897] [DOI: 10.1186/s41235-021-00294-1] [Received: 08/18/2020] [Accepted: 03/23/2021] [Indexed: 11/10/2022]
Abstract
A major problem in human cognition is to understand how newly acquired information and long-standing beliefs about the environment combine to make decisions and plan behaviors. Over-dependence on long-standing beliefs may be a significant source of suboptimal decision-making in unusual circumstances. While the contribution of long-standing beliefs about the environment to search in real-world scenes is well-studied, less is known about how new evidence informs search decisions, and it is unclear whether the two sources of information are used together optimally to guide search. The present study expanded on the literature on semantic guidance in visual search by modeling a Bayesian ideal observer's use of long-standing semantic beliefs and recent experience in an active search task. The ability to adjust expectations to the task environment was simulated using the Bayesian ideal observer, and subjects' performance was compared to ideal observers that depended on prior knowledge and recent experience to varying degrees. Target locations were either congruent with scene semantics, incongruent with what would be expected from scene semantics, or random. Half of the subjects were able to learn to search for the target in incongruent locations over repeated experimental sessions when it was optimal to do so. These results suggest that searchers can learn to prioritize recent experience over knowledge of scenes in a near-optimal fashion when it is beneficial to do so, as long as the evidence from recent experience was learnable.
Affiliation(s)
- Gwendolyn L Rehrig
- Department of Psychology, University of California, Davis, CA, 95616, USA.
- Michelle Cheng
- School of Social Sciences, Nanyang Technological University, Singapore, 639798, Singapore
- Brian C McMahan
- Department of Computer Science, Rutgers University-New Brunswick, New Brunswick, USA
- Rahul Shome
- Department of Computer Science, Rice University, Houston, USA
|
20
|
Bahle B, Kershner AM, Hollingworth A. Categorical cuing: Object categories structure the acquisition of statistical regularities to guide visual search. J Exp Psychol Gen 2021; 150:2552-2566. [PMID: 33829823] [DOI: 10.1037/xge0001059] [Indexed: 11/08/2022]
Abstract
Recent statistical regularities have been demonstrated to influence visual search across a wide variety of learning mechanisms and search features. To function in the guidance of real-world search, however, such learning must be contingent on the context in which the search occurs and the object that is the target of search. The former has been studied extensively under the rubric of contextual cuing. Here, we examined, for the first time, categorical cuing: the role of object categories in structuring the acquisition of statistical regularities used to guide visual search. After an exposure session in which participants viewed six exemplars with the same general color in each of 40 different real-world categories, they completed a categorical search task, in which they searched for any member of a category based on a label cue. Targets that matched recent within-category regularities were found faster than targets that did not (Experiment 1). Such categorical cuing was also found to span multiple recent colors within a category (Experiment 2). It was observed to influence both the guidance of search to the target object (Experiment 3) and the basic operation of assigning single exemplars to categories (Experiment 4). Finally, the rapidly acquired category-specific regularities were also quickly modified, with the benefit decreasing during the search session as participants were exposed equally to the two possible colors in each category. The results demonstrate that object categories organize the acquisition of perceptual regularities and that this learning exerts strong control over the instantiation of the category representation as a template for visual search.
|
21
|
Abstract
Safe driving demands the coordination of multiple sensory and cognitive functions, such as vision and attention. Patients with neurologic or ophthalmic disease are exposed to selective pathophysiologic insults to driving-critical systems, placing them at a higher risk for unsafe driving and restricted driving privileges. Here, we evaluate how vision and attention contribute to unsafe driving across different patient populations. In ophthalmic disease, we focus on macular degeneration, glaucoma, diabetic retinopathy, and cataract; in neurologic disease, we focus on Alzheimer's disease, Parkinson's disease, and multiple sclerosis. Unsafe driving is generally associated with impaired vision and attention in ophthalmic and neurologic patients, respectively. Furthermore, patients with ophthalmic disease experience some degree of impairment in attention. Similarly, patients with neurologic disease experience some degree of impairment in vision. While numerous studies have demonstrated a relationship between impaired vision and unsafe driving in neurologic disease, there remains a dearth of knowledge regarding the relationship between impaired attention and unsafe driving in ophthalmic disease. In summary, this chapter confirms, and offers opportunities for future research into, the contribution of vision and attention to safe driving.
Affiliation(s)
- David E Anderson
- Department of Ophthalmology & Visual Sciences, University of Nebraska Medical Center, Omaha, NE, United States
- Deepta A Ghate
- Department of Ophthalmology & Visual Sciences, University of Nebraska Medical Center, Omaha, NE, United States
- Matthew Rizzo
- Department of Neurological Sciences, University of Nebraska Medical Center, Omaha, NE, United States
|
22
|
Visual statistical learning in children and adults: evidence from probability cueing. Psychol Res 2020; 85:2911-2921. [PMID: 33170355] [DOI: 10.1007/s00426-020-01445-7] [Received: 05/03/2020] [Accepted: 10/26/2020] [Indexed: 10/23/2022]
Abstract
In visual statistical learning (VSL), one can extract and exhibit memory for the statistical regularities of target locations in an incidental manner. The current study examined the development of VSL using the probability cueing paradigm with salient perceptual cues. We also investigated the elicited attention gradient phenomenon in VSL. In a visual search task, the target first appeared more often in one quadrant (i.e., rich) than the other quadrants (i.e., sparse). Then, the participants rotated the screen by 90° and the targets appeared in the four quadrants with equal probabilities. Each quadrant had a unique background color and was, hence, associated with salient perceptual cues. 1st-4th graders and adults participated. All participants showed probability cueing effects to a similar extent. We observed an attention gradient phenomenon, as all participants responded slower to the sparse quadrant that was distant from, rather than the ones that were adjacent to the rich quadrant. In the testing phase, all age groups showed persistent attentional biases based on both egocentric and allocentric perspectives. These findings showed that probability cueing effects may develop early, that perceptual cues can bias attention guidance during VSL for both children and adults, and that VSL can elicit a spaced-based attention gradient phenomenon for children and adults.
|
24
|
Marek N, Pollmann S. Contextual-Cueing beyond the Initial Field of View-A Virtual Reality Experiment. Brain Sci 2020; 10:446. [PMID: 32668806] [PMCID: PMC7407752] [DOI: 10.3390/brainsci10070446] [Received: 05/14/2020] [Revised: 06/30/2020] [Accepted: 07/07/2020] [Indexed: 11/16/2022]
Abstract
In visual search, participants can incidentally learn spatial target-distractor configurations, leading to shorter search times for repeated compared to novel configurations. Usually, this contextual cueing effect is tested within the limited field of view provided by a two-dimensional computer monitor. Here, we present for the first time an implementation of a classic contextual cueing task (search for a T-shape among L-shapes) in a three-dimensional virtual environment. This enabled us to test whether the typical finding of incidental learning of repeated search configurations, manifested in shorter search times, would hold in a three-dimensional virtual reality (VR) environment. One specific aspect tested by combining virtual reality and contextual cueing was whether contextual cueing would hold for targets outside the initial field of view (FOV), which require head movements to be found. In keeping with two-dimensional search studies, reduced search times were observed after the first epoch and remained stable for the remainder of the experiment. Importantly, comparable search time reductions were observed for targets both within and outside of the initial FOV. The results show that a repeated distractors-only configuration in the initial FOV can guide search for target locations that require a head movement to be seen.
Affiliation(s)
- Nico Marek
- Department of Psychology, Otto-von-Guericke Universität Magdeburg, 39106 Magdeburg, Germany;
- Correspondence: ; Tel.: +49-(0)391-67-51929
- Stefan Pollmann
- Department of Psychology, Otto-von-Guericke Universität Magdeburg, 39106 Magdeburg, Germany;
- Center for Brain and Behavioral Sciences, Otto-von-Guericke Universität Magdeburg, 39106 Magdeburg, Germany
- Beijing Key Laboratory of Learning and Cognition and School of Psychology, Capital Normal University, Beijing 100048, China
|
25
|
Pollmann S, Geringswald F, Wei P, Porracin E. Intact Contextual Cueing for Search in Realistic Scenes with Simulated Central or Peripheral Vision Loss. Transl Vis Sci Technol 2020; 9:15. [PMID: 32855862] [PMCID: PMC7422911] [DOI: 10.1167/tvst.9.8.15] [Received: 09/20/2019] [Accepted: 05/29/2020] [Indexed: 11/24/2022]
Abstract
Purpose: Search in repeatedly presented visual search displays can benefit from implicit learning of the display items' spatial configuration. This effect has been named contextual cueing. Previously, contextual cueing was found to be reduced in observers with foveal or peripheral vision loss. Whereas this previous work used symbolic (T among L-shape) search displays with arbitrary configurations, here we investigated search in realistic scenes. Search in meaningful realistic scenes may benefit much more from explicit memory of the target location. We hypothesized that this explicit recall of the target location reduces visuospatial working memory demands on search considerably, thereby enabling efficient search guidance by learnt contextual cues in observers with vision loss.
Methods: Two experiments with gaze-contingent scotoma simulation (Experiment 1: central scotoma; Experiment 2: peripheral scotoma) were carried out with normal-sighted observers (total n = 39/40). Observers had to find a cup in pseudorealistic indoor scenes and discriminate the direction of the cup's handle.
Results: With both central and peripheral scotoma simulation, contextual cueing was observed in repeatedly presented configurations.
Conclusions: The data show that patients suffering from central or peripheral vision loss may benefit more from memory-guided visual search than would be expected from scotoma simulation and patient studies using abstract symbolic search displays.
Translational Relevance: In the assessment of visual search in patients with vision loss, semantically meaningless abstract search displays may give insights into deficient search functions, but more realistic meaningful search scenes are needed to assess whether search deficits can be compensated.
Affiliation(s)
- Stefan Pollmann
- Beijing Key Laboratory of Learning and Cognition and School of Psychology, Capital Normal University, Beijing, China
- Department of Psychology, Otto-von-Guericke-University, Magdeburg, Germany
- Center for Behavioral Brain Sciences, Otto-von-Guericke-University, Magdeburg, Germany
- Ping Wei
- Beijing Key Laboratory of Learning and Cognition and School of Psychology, Capital Normal University, Beijing, China
- Eleonora Porracin
- Department of Psychology, Otto-von-Guericke-University, Magdeburg, Germany
|
26
|
Research thematic and emerging trends of contextual cues: a bibliometrics and visualization approach. Library Hi Tech 2020. [DOI: 10.1108/lht-11-2019-0237] [Indexed: 11/17/2022]
Abstract
Purpose: The paper aims to clarify the importance of the psychological processing of contextual cues in the mining of individual attention resources. In recent years, research from more open spatial perspectives, such as spatial and scene perception, has gradually turned to the recognition of contextual cues, accumulating a rich literature and becoming a hotspot of interdisciplinary research. Nevertheless, outside the fields of psychology and neuroscience, researchers lack systematic knowledge of contextual cues. The purpose of this study is to expand the research field of contextual cues.
Design/methodology/approach: We retrieved 494 papers on contextual cues from the SCI/SSCI core database of the Web of Science for 1992–2019. We then used several bibliometric and network analysis tools, such as HistCite, CiteSpace, VOSviewer, and Pajek, to identify the time-and-space knowledge map, research hotspots, evolution process, emerging trends, and primary path of contextual cues.
Findings: The paper identified the core scholars, major journals, research institutions, and citation patterns closely related to the research of contextual cues. In addition, we constructed a co-word network of contextual cues, confirming the concept of behavioral implementation intentions and filling a research gap in the field of behavioral science. A quantitative analysis of the burst literature on contextual cues revealed that this research has focused increasingly on multi-objective cues. Furthermore, an analysis of the main path helps researchers clearly understand the development trend and evolution track of contextual cues.
Originality/value: Given that academic research usually lags behind management practice, our systematic review of the literature builds, to a certain extent, a bridge between theory and practice.
|
27
|
Nickel AE, Hopkins LS, Minor GN, Hannula DE. Attention capture by episodic long-term memory. Cognition 2020; 201:104312. [PMID: 32387722] [DOI: 10.1016/j.cognition.2020.104312] [Received: 07/10/2019] [Revised: 03/16/2020] [Accepted: 04/19/2020] [Indexed: 10/24/2022]
Abstract
Everyday behavior depends upon the operation of concurrent cognitive processes. In visual search, studies that examine memory-attention interactions have indicated that long-term memory facilitates search for a target (e.g., contextual cueing), but the potential for memories to capture attention and decrease search efficiency has not been investigated. To address this gap in the literature, five experiments were conducted to examine whether task-irrelevant encoded objects might capture attention. In each experiment, participants encoded scene-object pairs. Then, in a visual search task, 6-object search displays were presented and participants were told to make a single saccade to targets defined by shape (e.g., diamond among differently colored circles; Experiments 1, 4, and 5) or by color (e.g., blue shape among differently shaped gray objects; Experiments 2 and 3). Sometimes, one of the distractors was from the encoded set, and occasionally the scene that had been paired with that object was presented prior to the search display. Results indicated that eye movements were made, in error, more often to encoded distractors than to baseline distractors, and that this effect was greatest when the corresponding scene was presented prior to search. When capture did occur, participants looked longer at encoded distractors if scenes had been presented, an effect that we attribute to the representational match between a retrieved associate and the identity of the encoded distractor in the search display. In addition, the presence of a scene resulted in slower saccade deployment when participants made first saccades to targets, as instructed. Experiments 4 and 5 suggest that this slowdown may be due to the relatively rare and, therefore, surprising appearance of visual stimulus information prior to search. Collectively, results suggest that information encoded into episodic memory can capture attention, which is consistent with the recent proposal that selection history can guide attentional selection.
Affiliation(s)
- Allison E Nickel
- Department of Psychology, University of Wisconsin - Milwaukee, Milwaukee, WI, USA
- Lauren S Hopkins
- Department of Psychology, University of Wisconsin - Milwaukee, Milwaukee, WI, USA
- Greta N Minor
- Department of Psychology, University of Wisconsin - Milwaukee, Milwaukee, WI, USA
- Deborah E Hannula
- Department of Psychology, University of Wisconsin - Milwaukee, Milwaukee, WI, USA
|
28
|
Yoo SA, Rosenbaum RS, Tsotsos JK, Fallah M, Hoffman KL. Long-term memory and hippocampal function support predictive gaze control during goal-directed search. J Vis 2020; 20:10. [PMID: 32455429] [PMCID: PMC7409592] [DOI: 10.1167/jov.20.5.10] [Indexed: 11/24/2022]
Abstract
Eye movements during visual search change with prior experience for search stimuli. Previous studies measured these gaze effects shortly after initial viewing, typically during free viewing; it remains open whether the effects are preserved across long delays and for goal-directed search, and which memory system guides gaze. In Experiment 1, we analyzed eye movements of healthy adults viewing novel and repeated scenes while searching for a scene-embedded target. The task was performed across different time points to examine the repetition effects in long-term memory, and memory types were grouped based on explicit recall of targets. In Experiment 2, an amnesic person with bilateral extended hippocampal damage and the age-matched control group performed the same task with shorter intervals to determine whether or not the repetition effects depend on hippocampal function. When healthy adults explicitly remembered repeated target-scene pairs, search time and fixation duration decreased, and gaze was directed closer to the target region, than when they forgot targets. These effects were seen even after a one-month delay from their initial viewing, suggesting the effects are associated with long-term, explicit memory. Saccadic amplitude was not strongly modulated by scene repetition or explicit recall of targets. The amnesic person did not show explicit recall or implicit repetition effects, whereas his control group showed similar patterns to those seen in Experiment 1. The results reveal several aspects of gaze control that are influenced by long-term memory. The dependence of gaze effects on medial temporal lobe integrity support a role for this region in predictive gaze control.
|
29
|
Thibaut M, Boucart M, Tran THC. Object search in neovascular age-related macular degeneration: the crowding effect. Clin Exp Optom 2019; 103:648-655. [PMID: 31698519] [DOI: 10.1111/cxo.12982] [Received: 04/01/2019] [Revised: 09/10/2019] [Accepted: 09/13/2019] [Indexed: 10/25/2022]
Abstract
Background: Visual search, an activity that relies on central vision, is frequent in daily life. This study investigates the effect of spacing between items in an object search task in participants with central vision loss.
Methods: Patients with neovascular age-related macular degeneration (AMD), age-matched controls, and young controls were included. The stimuli were displays of four, six and nine objects randomly presented in a 'crowded' (spacing 1.5°) or 'uncrowded' (spacing 6°) condition. For each of 96 trials, participants were asked to search for a predefined target that remained on the screen until the response was recorded. Accuracy, search time, and eye movements (number of fixations and scan path ratio) were recorded.
Results: Compared to older controls, accuracy decreased by 31 per cent and search time increased by 61 per cent in AMD participants. Ageing also affected performance, with accuracy lower by 13.5 per cent and search times longer by 46 per cent in older compared to younger controls. Increasing the spacing between elements increased accuracy by 21 per cent in AMD participants but had no effect in older and younger controls. Performance was not related to visual acuity or to duration of neovascular AMD, but search time was correlated with lesion size in the 'crowded' condition.
Conclusions: Object search is ubiquitous in daily life activities. When visual acuity is irrevocably reduced, increasing the spacing between elements can reliably improve object search performance in patients.
Affiliation(s)
- Miguel Thibaut
- SCALab, University of Lille, National Center for Scientific Research, Lille, France
- Muriel Boucart
- SCALab, University of Lille, National Center for Scientific Research, Lille, France
- Thi Ha Chau Tran
- Ophthalmology Department, Lille Catholic Hospitals, Catholic University of Lille, Lille, France
|
31
|
Meyer T, Quaedflieg CW, Bisby JA, Smeets T. Acute stress – but not aversive scene content – impairs spatial configuration learning. Cogn Emot 2019; 34:201-216. [DOI: 10.1080/02699931.2019.1604320] [Indexed: 12/28/2022]
Affiliation(s)
- Thomas Meyer
- Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Research Department of Clinical and Health Psychology, University College London, London, UK
- Psychology and Psychotherapy, University of Münster, Münster, Germany
- James A. Bisby
- Institute of Cognitive Neuroscience, University College London, London, UK
- Tom Smeets
- Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Department of Medical and Clinical Psychology, Center of Research on Psychological and Somatic Disorders (CoRPS), Tilburg University, Tilburg, The Netherlands
| |
Collapse
|
32
|
Ramey MM, Yonelinas AP, Henderson JM. Conscious and unconscious memory differentially impact attention: Eye movements, visual search, and recognition processes. Cognition 2019; 185:71-82. [PMID: 30665071 DOI: 10.1016/j.cognition.2019.01.007] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2018] [Revised: 01/08/2019] [Accepted: 01/08/2019] [Indexed: 12/27/2022]
Abstract
A hotly debated question is whether memory influences attention through conscious or unconscious processes. To address this controversy, we measured eye movements while participants searched repeated real-world scenes for embedded targets, and we assessed memory for each scene using confidence-based methods to isolate different states of subjective memory awareness. We found that memory-informed eye movements during visual search were predicted both by conscious recollection, which led to a highly precise first eye movement toward the remembered location, and by unconscious memory, which increased search efficiency by gradually directing the eyes toward the target throughout the search trial. In contrast, these eye movement measures were not influenced by familiarity-based memory (i.e., changes in subjective reports of memory strength). The results indicate that conscious recollection and unconscious memory can each play distinct and complementary roles in guiding attention to facilitate efficient extraction of visual information.
Affiliation(s)
- Michelle M Ramey
- Department of Psychology, University of California, Davis, CA, USA; Center for Neuroscience, University of California, Davis, CA, USA; Center for Mind and Brain, University of California, Davis, CA, USA
- Andrew P Yonelinas
- Department of Psychology, University of California, Davis, CA, USA; Center for Neuroscience, University of California, Davis, CA, USA
- John M Henderson
- Department of Psychology, University of California, Davis, CA, USA; Center for Mind and Brain, University of California, Davis, CA, USA
|
33
|
|
34
|
Smith KG, Schmidt J, Wang B, Henderson JM, Fridriksson J. Task-Related Differences in Eye Movements in Individuals With Aphasia. Front Psychol 2018; 9:2430. [PMID: 30618911 PMCID: PMC6305326 DOI: 10.3389/fpsyg.2018.02430] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2018] [Accepted: 11/19/2018] [Indexed: 11/25/2022] Open
Abstract
Background: Neurotypical young adults show task-based modulation and stability of their eye movements across tasks. This study aimed to determine whether persons with aphasia (PWA) modulate their eye movements and show stability across tasks similarly to control participants. Methods: Forty-eight PWA and age-matched control participants completed four eye-tracking tasks: scene search, scene memorization, text-reading, and pseudo-reading. Results: Main effects of task emerged for mean fixation duration, saccade amplitude, and standard deviations of each, demonstrating task-based modulation of eye movements. Group by task interactions indicated that PWA produced shorter fixations relative to controls. This effect was most pronounced for scene memorization and for individuals who recently suffered a stroke. PWA produced longer fixations, shorter saccades, and less variable eye movements in reading tasks compared to controls. Three-way interactions of group, aphasia subtype, and task also emerged. Text-reading and scene memorization were particularly effective at distinguishing aphasia subtype. Persons with anomic aphasia showed a reduction in reading saccade amplitudes relative to their respective control group and other PWA. Persons with conduction/Wernicke’s aphasia produced shorter scene memorization fixations relative to controls or PWA of other subtypes, suggesting a memorization specific effect. Positive correlations across most tasks emerged for fixation duration and did not significantly differ between controls and PWA. Conclusion: PWA generally produced shorter fixations and smaller saccades relative to controls particularly in scene memorization and text-reading, respectively. The effect was most pronounced recently after a stroke. Selectively in reading tasks, PWA produced longer fixations and shorter saccades relative to controls, consistent with reading difficulty. 
PWA showed task-based modulation of eye movements, though the pattern of results was somewhat abnormal relative to controls. All subtypes of PWA also demonstrated task-based modulation of eye movements. However, persons with anomic aphasia showed reduced modulation of saccade amplitude and smaller reading saccades, possibly to improve reading comprehension. Controls and PWA generally produced stable fixation durations across tasks and did not differ in their relationship across tasks. Overall, these results suggest there is potential to differentiate among PWA with varying subtypes and from controls using eye movement measures of task-based modulation, especially in reading and scene memorization tasks.
Affiliation(s)
- Kimberly G Smith
- Department of Speech Pathology & Audiology, University of South Alabama, Mobile, AL, United States; Department of Communication Sciences & Disorders, University of South Carolina, Columbia, SC, United States
- Joseph Schmidt
- Department of Psychology, University of Central Florida, Orlando, FL, United States
- Bin Wang
- Department of Mathematics and Statistics, University of South Alabama, Mobile, AL, United States
- John M Henderson
- Department of Psychology, Center for Mind and Brain, University of California, Davis, Davis, CA, United States
- Julius Fridriksson
- Department of Communication Sciences & Disorders, University of South Carolina, Columbia, SC, United States
|
35
|
Anderson BA, Kim H. On the representational nature of value-driven spatial attentional biases. J Neurophysiol 2018; 120:2654-2658. [PMID: 30303748 DOI: 10.1152/jn.00489.2018] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Reward learning biases attention toward both reward-associated objects and reward-associated regions of space. The relationship between objects and space in the value-based control of attention, as well as the contextual specificity of space-reward pairings, remains unclear. In the present study, using a free-viewing task, we provide evidence of overt attentional biases toward previously rewarded regions of texture scenes that lack objects. When scrutinizing a texture scene, participants look more frequently toward, and spend a longer amount of time looking at, regions that they have repeatedly oriented to in the past as a result of performance feedback. These biases were scene specific, such that different spatial contexts produced different patterns of habitual spatial orienting. Our findings indicate that reinforcement learning can modify looking behavior via a representation that is purely spatial in nature in a context-specific manner. NEW & NOTEWORTHY The representational nature of space in the value-driven control of attention remains unclear. Here, we provide evidence for scene-specific overt spatial attentional biases following reinforcement learning, even though the scenes contained no objects. Our findings indicate that reinforcement learning can modify looking behavior via a representation that is purely spatial in nature in a context-specific manner.
Affiliation(s)
- Haena Kim
- Texas A&M University, College Station, Texas
|
36
|
Anderson BA, Kim H. Mechanisms of value-learning in the guidance of spatial attention. Cognition 2018; 178:26-36. [DOI: 10.1016/j.cognition.2018.05.005] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2017] [Revised: 04/24/2018] [Accepted: 05/05/2018] [Indexed: 12/20/2022]
|
37
|
Abstract
Recent research has expanded the list of factors that control spatial attention. Besides current goals and perceptual salience, statistical learning, reward, motivation, and emotion also affect attention. But do these various factors influence spatial attention in the same manner, as suggested by the integrated framework of attention, or do they target different aspects of spatial attention? Here I present evidence that the control of attention may be implemented in two ways. Whereas current goals typically modulate where in space attention is prioritized, search habits affect how one moves attention in space. Using the location probability learning paradigm, I show that a search habit forms when people frequently find a visual search target in one region of space. Attentional cuing by probability learning differs from that by current goals. Probability cuing is implicit and persists long after the probability cue is no longer valid. Whereas explicit goal-driven attention codes space in an environment-centered reference frame, probability cuing is viewer-centered and is insensitive to secondary working memory load and aging. I propose a multi-level framework that separates the source of attentional control from its implementation. Similar to the integrated framework, the multi-level framework considers current goals, perceptual salience, and selection history as major sources of attentional control. However, these factors are implemented in two ways, controlling where spatial attention is allocated and how one shifts attention in space.
Affiliation(s)
- Yuhong V Jiang
- Department of Psychology, University of Minnesota, Minneapolis, MN, USA
|
38
|
Hannula DE. Attention and long-term memory: Bidirectional interactions and their effects on behavior. PSYCHOLOGY OF LEARNING AND MOTIVATION 2018. [DOI: 10.1016/bs.plm.2018.09.004] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/28/2022]
|
39
|
Chen D, Hutchinson JB. What Is Memory-Guided Attention? How Past Experiences Shape Selective Visuospatial Attention in the Present. Curr Top Behav Neurosci 2018; 41:185-212. [PMID: 30584646 DOI: 10.1007/7854_2018_76] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
Abstract
What controls our attention? It is historically thought that there are two primary factors that determine selective attention: the perceptual salience of the stimuli and the goals based on the task at hand. However, this distinction doesn't neatly capture the varied ways our past experience can influence our ongoing mental processing. In this chapter, we aim to describe how past experience can be systematically characterized by different types of memory, and we outline experimental evidence suggesting how attention can then be guided by each of these different memory types. We highlight findings from human behavioral, neuroimaging, and neuropsychological work from the perspective of two related frameworks of human memory: the multiple memory systems (MMS) framework and the neural processing (NP) framework. The MMS framework underscores how memory can be separated based on consciousness (declarative and non-declarative memory), while the NP framework emphasizes different forms of memory as reflective of different brain processing modes (rapid encoding of flexible associations, slow encoding of rigid associations, and rapid encoding of single or unitized items). We describe how memory defined by these frameworks can guide our attention, even when they do not directly relate to perceptual salience or the goals concerning the current task. We close by briefly discussing theoretical implications as well as some interesting avenues for future research.
|
40
|
Evaluating the influence of a fixated object's spatio-temporal properties on gaze control. Atten Percept Psychophys 2017; 78:996-1003. [PMID: 26887697 DOI: 10.3758/s13414-016-1072-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Despite recent progress in understanding the factors that determine where an observer will eventually look in a scene, we know very little about what determines how an observer decides where he or she will look next. We investigated the potential roles of object-level representations in the direction of subsequent shifts of gaze. In five experiments, we considered whether a fixated object's spatial orientation, implied motion, and perceived animacy affect gaze direction when shifting overt attention to another object. Eye movements directed away from a fixated object were biased in the direction it faced. This effect was not modified by implying a particular direction of inanimate or animate motion. Together, these results suggest that decisions regarding where one should look next are in part determined by the spatial, but not by the implied temporal, properties of the object at the current locus of fixation.
|
41
|
Bahle B, Matsukura M, Hollingworth A. Contrasting gist-based and template-based guidance during real-world visual search. J Exp Psychol Hum Percept Perform 2017; 44:367-386. [PMID: 28795834 DOI: 10.1037/xhp0000468] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Visual search through real-world scenes is guided both by a representation of target features and by knowledge of the semantic properties of the scene (derived from scene gist recognition). In 3 experiments, we compared the relative roles of these 2 sources of guidance. Participants searched for a target object in the presence of a critical distractor object. The color of the critical distractor either matched or mismatched (a) the color of an item maintained in visual working memory for a secondary task (Experiment 1), or (b) the color of the target, cued by a picture before search commenced (Experiments 2 and 3). Capture of gaze by a matching distractor served as an index of template guidance. There were 4 main findings: (a) The distractor match effect was observed from the first saccade on the scene, (b) it was independent of the availability of scene-level gist-based guidance, (c) it was independent of whether the distractor appeared in a plausible location for the target, and (d) it was preserved even when gist-based guidance was available before scene onset. Moreover, gist-based, semantic guidance of gaze to target-plausible regions of the scene was delayed relative to template-based guidance. These results suggest that feature-based template guidance is not limited to plausible scene regions after an initial, scene-level analysis.
Affiliation(s)
- Brett Bahle
- Department of Psychological and Brain Sciences, The University of Iowa
- Michi Matsukura
- Department of Psychological and Brain Sciences, The University of Iowa
|
42
|
Meaning in learning: Contextual cueing relies on objects' visual features and not on objects' meaning. Mem Cognit 2017; 46:58-67. [PMID: 28770539 DOI: 10.3758/s13421-017-0745-9] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
People easily learn regularities embedded in the environment and utilize them to facilitate visual search. Using images of real-world objects, it has been recently shown that this learning, termed contextual cueing (CC), occurs even in complex, heterogeneous environments, but only when the same distractors are repeated at the same locations. Yet it is not clear what exactly is being learned under these conditions: the visual features of the objects or their meaning. In this study, Experiment 1 demonstrated that meaning is not necessary for this type of learning, as a similar pattern of results was found even when the objects' meaning was largely removed. Experiments 2 and 3 showed that after learning meaningful objects, CC was not diminished by a manipulation that distorted the objects' meaning but preserved most of their visual properties. By contrast, CC was eliminated when the learned objects were replaced with different category exemplars that preserved the objects' meaning but altered their visual properties. Together, these data strongly suggest that the acquired context that facilitates real-world objects search relies primarily on the visual properties and the spatial locations of the objects, but not on their meaning.
|
43
|
Li CL, Aivar MP, Kit DM, Tong MH, Hayhoe MM. Memory and visual search in naturalistic 2D and 3D environments. J Vis 2017; 16:9. [PMID: 27299769 PMCID: PMC4913723 DOI: 10.1167/16.8.9] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
The role of memory in guiding attention allocation in daily behaviors is not well understood. In experiments with two-dimensional (2D) images, there is mixed evidence about the importance of memory. Because the stimulus context in laboratory experiments and daily behaviors differs extensively, we investigated the role of memory in visual search in both 2D and three-dimensional (3D) environments. A 3D immersive virtual apartment composed of two rooms was created, and a parallel 2D visual search experiment composed of snapshots from the 3D environment was developed. Eye movements were tracked in both experiments. Repeated searches for geometric objects were performed to assess the role of spatial memory. Subsequently, subjects searched for realistic context objects to test for incidental learning. Our results show that subjects learned the room-target associations in 3D but less so in 2D. Gaze was increasingly restricted to relevant regions of the room with experience in both settings. Search for local contextual objects, however, was not facilitated by early experience. Incidental fixations to context objects do not necessarily benefit search performance. Together, these results demonstrate that memory for global aspects of the environment guides search by restricting allocation of attention to likely regions, whereas task relevance determines what is learned from the active search experience. Behaviors in 2D and 3D environments are comparable, although there is greater use of memory in 3D.
|
44
|
Alards-Tomalin D, Brosowsky NP, Mondor TA. Auditory statistical learning: predictive frequency information affects the deployment of contextually mediated attentional resources on perceptual tasks. JOURNAL OF COGNITIVE PSYCHOLOGY 2017. [DOI: 10.1080/20445911.2017.1353518] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Affiliation(s)
- Nicholaus P. Brosowsky
- Department of Psychology, The Graduate Center of the City University of New York, New York, NY, USA
- Todd A. Mondor
- Department of Psychology, University of Manitoba, Winnipeg, MB, Canada
|
45
|
Hills PJ, Mileva M, Thompson C, Pake JM. Carryover of scanning behaviour affects upright face recognition differently to inverted face recognition. VISUAL COGNITION 2017. [DOI: 10.1080/13506285.2017.1314399] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Affiliation(s)
- Peter J Hills
- Department of Psychology, Bournemouth University, Dorset, UK
- Mila Mileva
- Department of Psychology, University of York, York, UK
- J. Michael Pake
- Department of Psychology, Anglia Ruskin University, Cambridge, UK
|
46
|
Montefusco-Siegmund R, Leonard TK, Hoffman KL. Hippocampal gamma-band synchrony and pupillary responses index memory during visual search. Hippocampus 2017; 27:425-434. [PMID: 28032676 DOI: 10.1002/hipo.22702] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/20/2016] [Indexed: 01/20/2023]
Abstract
Memory for scenes is supported by the hippocampus, among other interconnected structures, but the neural mechanisms related to this process are not well understood. To assess the role of the hippocampus in memory-guided scene search, we recorded local field potentials and multiunit activity from the hippocampus of macaques as they performed goal-directed search tasks using natural scenes. We additionally measured pupil size during scene presentation, which in humans is modulated by recognition memory. We found that both pupil dilation and search efficiency accompanied scene repetition, thereby indicating memory for scenes. Neural correlates included a brief increase in hippocampal multiunit activity and a sustained synchronization of unit activity to gamma band oscillations (50-70 Hz). The repetition effects on hippocampal gamma synchronization occurred when pupils were most dilated, suggesting an interaction between aroused, attentive processing and hippocampal correlates of recognition memory. These results suggest that the hippocampus may support memory-guided visual search through enhanced local gamma synchrony. © 2016 Wiley Periodicals, Inc.
Affiliation(s)
- Timothy K Leonard
- Department of Psychology, Centre for Vision Research, York University, Toronto, Ontario, Canada
- Kari L Hoffman
- Department of Psychology, Department of Biology, Centre for Vision Research, Toronto, Ontario, Canada
|
47
|
Higuchi Y, Saiki J. Implicit Learning of Spatial Configuration Occurs without Eye Movement. JAPANESE PSYCHOLOGICAL RESEARCH 2017. [DOI: 10.1111/jpr.12147] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
48
|
Abstract
Depending on the involvement of consciousness, human learning can be roughly classified into explicit learning and implicit learning. In contrast to explicit learning, which has clear targets and rules, such as studying mathematics at school, learning is implicit when we acquire new information without intending to do so. Research in psychology indicates that implicit learning is ubiquitous in our daily life. Moreover, implicit learning plays an important role in human visual perception. Yet over the past 60 years, most well-known machine-learning models have aimed to simulate explicit learning, while work on modeling implicit learning has been relatively limited, especially for computer vision applications. This article proposes a novel unsupervised computational model for implicit visual learning based on dissipative systems, which provide a unifying macroscopic theory connecting biology with physics. We test the proposed Dissipative Implicit Learning Model (DILM) on various datasets. The experiments show that DILM not only provides a good match to human behavior but also markedly improves explicit machine-learning performance on image classification tasks.
Affiliation(s)
- Yan Liu
- The Hong Kong Polytechnic University, Hong Kong SAR, China
- Yang Liu
- The Hong Kong Polytechnic University, Hong Kong SAR, China
- Shenghua Zhong
- The Hong Kong Polytechnic University, Hong Kong SAR, China
- Songtao Wu
- The Hong Kong Polytechnic University, Hong Kong SAR, China
|
49
|
Henderson JM. Gaze Control as Prediction. Trends Cogn Sci 2017; 21:15-23. [DOI: 10.1016/j.tics.2016.11.003] [Citation(s) in RCA: 98] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2016] [Revised: 11/07/2016] [Accepted: 11/08/2016] [Indexed: 11/25/2022]
|
50
|
Eye tracking to investigate cue processing in medical decision-making: A scoping review. COMPUTERS IN HUMAN BEHAVIOR 2017. [DOI: 10.1016/j.chb.2016.09.022] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
|