1
Udale R, Farrell S, Kent C. No evidence of binding items to spatial configuration representations in visual working memory. Mem Cognit 2018;46:955-968. PMID: 29777438; PMCID: PMC6096642; DOI: 10.3758/s13421-018-0814-8.
Abstract
When detecting changes in visual features (e.g., colour or shape), object locations, represented as points within a configuration, might also be automatically represented in working memory. If the configuration of a scene is represented automatically, the locations of individual items might form part of this representation, irrespective of their relevance to the task. Participants took part in a change-detection task in which they studied displays containing three sets of items (shapes, letters, objects) that varied in their task relevance: they were asked to remember the features of two sets and to ignore the third. During the retention interval, an auditory cue indicated which of the to-be-remembered sets would become the target set (which had a 50% probability of containing a new feature). At test, participants indicated whether a new feature was present amongst the target set. We measured binding of individual items to the configuration by manipulating the locations of the different sets so that their positions in the test display either matched or mismatched their original locations in the study display. If items are automatically bound to the configuration, location changes should disrupt performance, even for the set that participants were explicitly instructed to ignore. Changing the locations of any of the sets between study and test displays had no effect on performance, indicating that the configural representation did not enter the decision stage, and therefore that individual item representations are not necessarily bound to the configuration.
Affiliation(s)
- Rob Udale, School of Experimental Psychology, University of Bristol, Bristol, UK
2
Abstract
When representing visual features such as color and shape in visual working memory (VWM), participants also represent the locations of those features as a spatial configuration of the display. In everyday life, we encounter objects against some background, yet it is unclear whether the configural representation in memory obligatorily comprises the entire display, including that (often task-irrelevant) background information. In three experiments, participants completed a change-detection task on color and shape; the memoranda were presented in front of a uniform gray background, a textured background (Exp. 1), or a background containing location placeholders (Exps. 2 and 3). When whole-display probes were presented, changes to the objects' locations or feature bindings impacted memory performance, implying that the spatial configuration of the probes influenced participants' change decisions. Furthermore, when only a single item was probed, the effect of changing its location or feature bindings was either diminished or completely extinguished, implying that single probes do not necessarily elicit the entire spatial configuration. Critically, when task-irrelevant backgrounds that may have provided a spatial configuration for the single probes were also presented, the effect of location or bindings was not moderated. These findings suggest that although the spatial configuration of a display guides VWM-based recognition, this information does not always influence the decision process during change detection.
3
Decomposing experience-driven attention: Opposite attentional effects of previously predictive cues. Atten Percept Psychophys 2017;78:2185-2198. PMID: 27068051; DOI: 10.3758/s13414-016-1101-z.
Abstract
A central function of the brain is to track the dynamic statistical regularities in the environment, such as what predicts what over time. How does this statistical learning process alter sensory and attentional processes? Drawing upon animal conditioning and predictive coding, we developed a learning procedure that revealed two distinct components through which prior learning experience controls attention. During learning, a visual search task was used in which the target randomly appeared at one of several locations but always inside an enclosure of a particular color; the learned color thus served to direct attention to the target location. During test, the color no longer predicted the target location. When the same search task was used in the subsequent test, we found that the learned color continued to attract attention, despite this behavior being counterproductive for the task and despite the presence of a completely predictive cue. However, when tested with a flanker task that had minimal location uncertainty (the target was at fixation, surrounded by a distractor), participants were better at ignoring distractors in the learned color than in other colors. Evidently, previously predictive cues capture attention in the same search task but can be better suppressed in a flanker task. These results demonstrate opposing components of experience-driven attention, capture and inhibition, whose manifestations depend crucially on task context. We conclude that associative learning enhances context-sensitive top-down modulation while it reduces bottom-up sensory drive and facilitates suppression, supporting a learning-based predictive coding account.
4
Abstract
A basic principle in visual neuroscience is the retinotopic organization of neural receptive fields. Here, we review behavioral, neurophysiological, and neuroimaging evidence for nonretinotopic processing of visual stimuli. A number of behavioral studies have shown perception depending on object or external-space coordinate systems, in addition to retinal coordinates. Both single-cell neurophysiology and neuroimaging have provided evidence for the modulation of neural firing by gaze position and processing of visual information based on craniotopic or spatiotopic coordinates. Transient remapping of the spatial and temporal properties of neurons contingent on saccadic eye movements has been demonstrated in visual cortex, as well as frontal and parietal areas involved in saliency/priority maps, and is a good candidate to mediate some of the spatial invariance demonstrated by perception. Recent studies suggest that spatiotopic selectivity depends on a low spatial resolution system of maps that operates over a longer time frame than retinotopic processing and is strongly modulated by high-level cognitive factors such as attention. The interaction of an initial and rapid retinotopic processing stage, tied to new fixations, and a longer lasting but less precise nonretinotopic level of visual representation could underlie the perception of both a detailed and a stable visual world across saccadic eye movements.
6
Herzog MH, Thunell E, Ögmen H. Putting low-level vision into global context: Why vision cannot be reduced to basic circuits. Vision Res 2015;126:9-18. PMID: 26456069; DOI: 10.1016/j.visres.2015.09.009.
Abstract
To cope with the complexity of vision, most models in neuroscience and computer vision are hierarchical and feedforward in nature. Low-level vision, such as edge and motion detection, is explained by basic low-level neural circuits, whose outputs serve as building blocks for more complex circuits computing higher-level features such as shape and entire objects. An isomorphism is assumed between states of the outer world, neural circuits, and perception, inspired by the positivistic philosophy of mind. Here, we show that although such an approach is conceptually and mathematically appealing, it fails to explain many phenomena, including crowding, visual masking, and non-retinotopic processing.
Affiliation(s)
- Michael H Herzog, Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
- Evelina Thunell, Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
- Haluk Ögmen, Department of Electrical and Computer Engineering, Center for Neuro-Engineering and Cognitive Science, University of Houston, TX, USA
7
Uchimura M, Nakano T, Morito Y, Ando H, Kitazawa S. Automatic representation of a visual stimulus relative to a background in the right precuneus. Eur J Neurosci 2015;42:1651-1659. PMID: 25925368; PMCID: PMC5032987; DOI: 10.1111/ejn.12935.
Abstract
Our brains represent the position of a visual stimulus egocentrically, in either retinal or craniotopic coordinates. In addition, recent behavioral studies have shown that the stimulus position is automatically represented allocentrically relative to a large frame in the background. Here, we investigated neural correlates of the ‘background coordinate’ using an fMRI adaptation technique. A red dot was presented at different locations on a screen, in combination with a rectangular frame that was also presented at different locations, while the participants looked at a fixation cross. When the red dot was presented repeatedly at the same location relative to the rectangular frame, the fMRI signals significantly decreased in the right precuneus. No adaptation was observed after repeated presentations relative to a small, but salient, landmark. These results suggest that the background coordinate is implemented in the right precuneus.
Affiliation(s)
- Motoaki Uchimura, Dynamic Brain Network Laboratory, Graduate School of Frontier Biosciences, Osaka University, 1-3 Yamadaoka, Suita, Osaka, 565-0871, Japan; Department of Brain Physiology, Graduate School of Medicine, Osaka University, 1-3 Yamadaoka, Suita, Osaka, 565-0871, Japan; Japan Society for the Promotion of Science, 5-3-1 Kojimachi, Chiyoda, Tokyo, 102-0083, Japan
- Tamami Nakano, Dynamic Brain Network Laboratory, Graduate School of Frontier Biosciences, Osaka University, 1-3 Yamadaoka, Suita, Osaka, 565-0871, Japan; Department of Brain Physiology, Graduate School of Medicine, Osaka University, 1-3 Yamadaoka, Suita, Osaka, 565-0871, Japan; Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology, Osaka University, Suita, Osaka, 565-0871, Japan
- Yusuke Morito, Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology, Osaka University, Suita, Osaka, 565-0871, Japan
- Hiroshi Ando, Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology, Osaka University, Suita, Osaka, 565-0871, Japan; Multisensory Cognition and Computation Laboratory, National Institute of Information and Communications Technology, 3-5 Hikaridai, Seika, Kyoto, 619-0289, Japan
- Shigeru Kitazawa, Dynamic Brain Network Laboratory, Graduate School of Frontier Biosciences, Osaka University, 1-3 Yamadaoka, Suita, Osaka, 565-0871, Japan; Department of Brain Physiology, Graduate School of Medicine, Osaka University, 1-3 Yamadaoka, Suita, Osaka, 565-0871, Japan; Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology, Osaka University, Suita, Osaka, 565-0871, Japan
8
Wutz A, Melcher D. The temporal window of individuation limits visual capacity. Front Psychol 2014;5:952. PMID: 25221534; PMCID: PMC4145468; DOI: 10.3389/fpsyg.2014.00952.
Abstract
One of the main tasks of vision is to individuate and recognize specific objects. Unlike the detection of basic features, object individuation is strictly limited in capacity. Previous studies of capacity, in terms of subitizing ranges or visual working memory, have emphasized spatial limits in the number of objects that can be apprehended simultaneously. Here, we present psychophysical and electrophysiological evidence that capacity limits depend instead on time. Contrary to what is commonly assumed, subitizing, the read-out of a small set of individual objects, is not an instantaneous process. Instead, individuation capacity increases in steps within the lifetime of visual persistence of the stimulus, suggesting that visual capacity limitations arise from the narrow window of feedforward processing. We characterize this temporal window as coordinating individuation and integration of sensory information over a brief interval of around 100 ms. Neural signatures of integration windows are revealed in reset alpha oscillations shortly after stimulus onset, with generators in parietal areas. Our findings suggest that short-lived alpha phase synchronization (≈1 cycle) is key for individuation and integration of visual transients on rapid time scales (<100 ms). Within this time frame, intermediate-level vision provides an equilibrium between the competing needs to individuate invariant objects, integrate information about those objects over time, and remain sensitive to dynamic changes in sensory input. We discuss theoretical and practical implications of temporal windows in visual processing, how they create a fundamental capacity limit, and their role in constraining the real-time dynamics of visual processing.
Affiliation(s)
- Andreas Wutz, Active Perception Laboratory, Center for Mind/Brain Sciences, University of Trento, Rovereto, Italy
9
Furlanetto T, Gallace A, Ansuini C, Becchio C. Effects of arm crossing on spatial perspective-taking. PLoS One 2014;9:e95748. PMID: 24752571; PMCID: PMC3994149; DOI: 10.1371/journal.pone.0095748.
Abstract
Human social interactions often require people to take a different perspective than their own. Although much research has been done on egocentric spatial representation in a solo context, little is known about how space is mapped in relation to other bodies. Here we used a spatial perspective-taking paradigm to investigate whether observing a person holding his arms crossed over the body midline has an impact on the encoding of left/right and front/back spatial relations from that person's perspective. In three experiments, we compared performance in a task in which spatial judgments were made from the perspective of the participant or from that of a co-experimenter. Depending on the experimental condition, the participant's and the co-experimenter's arms were either crossed or not crossed over the midline. Our results showed that crossing the arms had a specific effect on spatial judgments based on a first-person perspective. More specifically, responses corresponding to the dominant hand side were slower in the crossed than in the uncrossed arms condition. Crucially, a similar effect was also found when participants adopted the perspective of a person holding his arms crossed, but not when the other person's arms were held in an unusual but uncrossed posture. Taken together, these findings indicate that egocentric space and altercentric space are similarly coded in neurocognitive maps structured with respect to specific body segments.
Affiliation(s)
- Tiziano Furlanetto, Centre for Cognitive Science, Department of Psychology, Università degli Studi di Torino, Torino, Italy
- Alberto Gallace, Department of Psychology, Università degli Studi di Milano Bicocca, Milano, Italy
- Caterina Ansuini, Department of Robotics, Brain and Cognitive Science, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
- Cristina Becchio, Centre for Cognitive Science, Department of Psychology, Università degli Studi di Torino, Torino, Italy; Department of Robotics, Brain and Cognitive Science, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
10
Lin Z. Object-centered representations support flexible exogenous visual attention across translation and reflection. Cognition 2013;129:221-231. PMID: 23942348; DOI: 10.1016/j.cognition.2013.07.001.
Abstract
Visual attention can be deployed to stimuli based on our willful, top-down goal (endogenous attention) or on their intrinsic saliency against the background (exogenous attention). Flexibility is thought to be a hallmark of endogenous attention, whereas decades of research show that exogenous attention is attracted to the retinotopic locations of salient stimuli. However, to the extent that salient stimuli in the natural environment usually form specific spatial relations with the surrounding context and are dynamic, exogenous attention, to be adaptive, should embrace these structural regularities. Here we test a non-retinotopic, object-centered mechanism in exogenous attention, in which exogenous attention is dynamically attracted to a relative, object-centered location. Using a moving-frame configuration, we presented two frames in succession, forming either apparent translational motion or a mirror reflection, with a completely uninformative, transient cue presented at one of the item locations in the first frame. Even though the cue is presented in a spatially separate frame, in both translation and mirror reflection, behavioral performance in visual search is enhanced when the target in the second frame appears at the same relative location as the cue rather than at other locations. These results provide unambiguous evidence for non-retinotopic exogenous attention and further reveal an object-centered mechanism supporting flexible exogenous attention. Moreover, attentional generalization across mirror reflection may constitute an attentional correlate of perceptual generalization across lateral mirror images, supporting an adaptive, functional account of mirror-image confusion.
Affiliation(s)
- Zhicheng Lin, Department of Psychology, University of Minnesota, Twin Cities, USA; Department of Psychology, University of Washington, Seattle, USA