1. Addleman DA, Rajasingh R, Störmer VS. Attention to object categories: Selection history determines the breadth of attentional tuning during real-world object search. J Exp Psychol Gen 2024. PMID: 38647455; DOI: 10.1037/xge0001575.
Abstract
People excel at learning the statistics of their environments. For instance, people rapidly learn to pay attention to locations that frequently contain visual search targets. Here, we investigated how frequently finding specific objects as search targets influences attentional selection during real-world object search. We investigated how learning that a specific object (e.g., a coat) is task-relevant affects searching for that object and whether a previously frequent target would influence search more broadly for all items of that target's category (e.g., all coats). Across five experiments, one or more objects from a single category were likely targets during a training phase, after which objects from many categories became equally likely to be targets in a neutral testing phase. Participants learned to find a single frequent target object faster than other objects (Experiment 1, N = 44). This learning was specific to that object, with no advantage in finding a novel category-matched object (Experiment 2, N = 32). In contrast, learning to prioritize multiple exemplars from one category spread to untrained objects from the same category, and this spread was comparable whether participants learned to find two, four, or six exemplars (Experiment 3, N = 72). These differences in the breadth of attention were due to variability in the learning environment and not differences in task (Experiment 4, N = 24). Finally, a set-size manipulation showed that learning affects attentional guidance itself, not only postselective processing (Experiment 5, N = 96). These experiments demonstrate that the breadth of attentional tuning is flexibly adjusted based on recent experience to effectively address task demands. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Affiliations
- Reshma Rajasingh, Department of Psychological and Brain Sciences, Dartmouth College
- Viola S Störmer, Department of Psychological and Brain Sciences, Dartmouth College
2. Chung YH, Tam J, Wyble B, Störmer VS. Conceptual information of meaningful objects is stored incidentally. J Exp Psychol Learn Mem Cogn 2024. PMID: 38573722; DOI: 10.1037/xlm0001339.
Abstract
Prior research has shown that visual working memory capacity is enhanced for meaningful stimuli (i.e., real-world objects) compared to abstract shapes (i.e., colored circles). Here, we hypothesized that the shape of meaningful objects would be better remembered incidentally than the shape of nonmeaningful objects in a color memory task where the shape of the objects is task-irrelevant. We used a surprise-trial paradigm in which participants performed a color memory task for several trials before being probed with a surprise trial that asked them about the shape of the last object they saw. Across three experiments, we found a memory advantage for recognizable shapes relative to scrambled versions of these shapes (Experiment 1) that was robust across different encoding times (Experiment 2) and with the addition of a verbal suppression task (Experiment 3). Interestingly, this advantage disappeared when all objects were from the same category (Experiment 4), suggesting that people are incidentally encoding broad conceptual information about object identities, but not visual details. Finally, when we asked about the location of objects in a surprise trial, we did not observe any difference between the two stimulus types (Experiment 5). Overall, these results show that conceptual information about the categories of meaningful objects is incidentally encoded into working memory even when task-irrelevant. This privilege for meaningful information does not exhibit a trade-off with location memory, suggesting that meaningful features influence representations of visual working memory in higher-level visual regions without altering the use of spatial reference frames at the lower level. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Affiliations
- Yong Hoon Chung, Department of Psychological and Brain Sciences, Dartmouth College
- Joyce Tam, Department of Psychology, Pennsylvania State University
- Brad Wyble, Department of Psychology, Pennsylvania State University
- Viola S Störmer, Department of Psychological and Brain Sciences, Dartmouth College
3. Brady TF, Störmer VS. Comparing memory capacity across stimuli requires maximally dissimilar foils: Using deep convolutional neural networks to understand visual working memory capacity for real-world objects. Mem Cognit 2024; 52:595-609. PMID: 37973770; DOI: 10.3758/s13421-023-01485-5.
Abstract
The capacity of visual working and visual long-term memory plays a critical role in theories of cognitive architecture and the relationship between memory and other cognitive systems. Here, we argue that before asking the question of how capacity varies across different stimuli or what the upper bound of capacity is for a given memory system, it is necessary to establish a methodology that allows a fair comparison between distinct stimulus sets and conditions. One of the most important factors determining performance in a memory task is target/foil dissimilarity. We argue that only by maximizing the dissimilarity of the target and foil in each stimulus set can we provide a fair basis for memory comparisons between stimuli. In the current work we focus on a way to pick such foils objectively for complex, meaningful real-world objects by using deep convolutional neural networks, and we validate this using both memory tests and similarity metrics. Using this method, we then provide evidence that there is a greater capacity for real-world objects relative to simple colors in visual working memory; critically, we also show that this difference can be reduced or eliminated when non-comparable foils are used, potentially explaining why previous work has not always found such a difference. Our study thus demonstrates that working memory capacity depends on the type of information that is remembered and that assessing capacity depends critically on foil dissimilarity, especially when comparing memory performance and other cognitive systems across different stimulus sets.
Affiliations
- Timothy F Brady, Department of Psychology, University of California San Diego, La Jolla, CA 92093, USA
- Viola S Störmer, Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
4. Chapman AF, Störmer VS. Representational structures as a unifying framework for attention. Trends Cogn Sci 2024. PMID: 38280837; DOI: 10.1016/j.tics.2024.01.002.
Abstract
Our visual system consciously processes only a subset of the incoming information. Selective attention allows us to prioritize relevant inputs, and can be allocated to features, locations, and objects. Recent advances in feature-based attention suggest that several selection principles are shared across these domains and that many differences between the effects of attention on perceptual processing can be explained by differences in the underlying representational structures. Moving forward, it can thus be useful to assess how attention changes the structure of the representational spaces over which it operates, which include the spatial organization, feature maps, and object-based coding in visual cortex. This will ultimately add to our understanding of how attention changes the flow of visual information processing more broadly.
Affiliations
- Angus F Chapman, Department of Psychological and Brain Sciences, Boston University, Boston, MA, USA
- Viola S Störmer, Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
5. Chung YH, Brady TF, Störmer VS. Sequential encoding aids working memory for meaningful objects' identities but not for their colors. Mem Cognit 2023. PMID: 37948024; DOI: 10.3758/s13421-023-01486-4.
Abstract
Previous studies have found that real-world objects' identities are better remembered than simple features like colored circles, and this effect is particularly pronounced when these stimuli are encoded one by one in a serial, item-based way. Recent work has also demonstrated that memory for simple features like color is improved if these colors are part of real-world objects, suggesting that meaningful objects can serve as a robust memory scaffold for their associated low-level features. However, it is unclear whether the improved color memory that arises from the colors appearing on real-world objects is affected by encoding format, in particular whether items are encoded sequentially or simultaneously. We tested this using randomly colored silhouettes of recognizable versus unrecognizable scrambled objects, which offer a uniquely controlled set of stimuli for testing color working memory of meaningful versus non-meaningful objects. Participants were presented with four stimuli (silhouettes of objects or scrambled shapes) simultaneously or sequentially. After a short delay, they reported either which colors or which shapes they saw in a two-alternative forced-choice task. We replicated previous findings that meaningful stimuli boost working memory performance for colors (Exp. 1). We found that when participants remembered the colors (Exp. 2) there was no difference in performance across the two encoding formats. However, when participants remembered the shapes and thus identity of the objects (Exp. 3), sequential presentation resulted in better performance than simultaneous presentation. Overall, these results show that different encoding formats can flexibly impact visual working memory depending on what the memory-relevant feature is.
Affiliations
- Yong Hoon Chung, Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Timothy F Brady, Department of Psychology, University of California San Diego, San Diego, CA, USA
- Viola S Störmer, Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
6. Itthipuripat S, Phangwiwat T, Wiwatphonthana P, Sawetsuttipan P, Chang KY, Störmer VS, Woodman GF, Serences JT. Dissociable Neural Mechanisms Underlie the Effects of Attention on Visual Appearance and Response Bias. J Neurosci 2023; 43:6628-6652. PMID: 37620156; PMCID: PMC10538590; DOI: 10.1523/jneurosci.2192-22.2023.
Abstract
A prominent theoretical framework spanning philosophy, psychology, and neuroscience holds that selective attention penetrates early stages of perceptual processing to alter the subjective visual experience of behaviorally relevant stimuli. For example, searching for a red apple at the grocery store might make the relevant color appear brighter and more saturated compared with seeing the exact same red apple while searching for a yellow banana. In contrast, recent proposals argue that data supporting attention-related changes in appearance reflect decision- and motor-level response biases without concurrent changes in perceptual experience. Here, we tested these accounts by evaluating attentional modulations of EEG responses recorded from male and female human subjects while they compared the perceived contrast of attended and unattended visual stimuli rendered at different levels of physical contrast. We found that attention enhanced the amplitude of the P1 component, an early evoked potential measured over visual cortex. A linking model based on signal detection theory suggests that response gain modulations of the P1 component track attention-induced changes in perceived contrast as measured with behavior. In contrast, attentional cues induced changes in the baseline amplitude of posterior alpha band oscillations (∼9-12 Hz), an effect that best accounts for cue-induced response biases, particularly when no stimuli are presented or when competing stimuli are similar and decisional uncertainty is high. The observation of dissociable neural markers that are linked to changes in subjective appearance and response bias supports a more unified theoretical account and demonstrates an approach to isolate subjective aspects of selective information processing.

Significance Statement
Does attention alter visual appearance, or does it simply induce response bias? In the present study, we examined these competing accounts using EEG and linking models based on signal detection theory. We found that response gain modulations of the visually evoked P1 component best accounted for attention-induced changes in visual appearance. In contrast, cue-induced baseline shifts in alpha band activity better explained response biases. Together, these results suggest that attention concurrently impacts visual appearance and response bias, and that these processes can be experimentally isolated.
Affiliations
- Sirawaj Itthipuripat, Neuroscience Center for Research and Innovation, Learning Institute, King Mongkut's University of Technology Thonburi, Bangkok, 10140, Thailand; Big Data Experience Center, King Mongkut's University of Technology Thonburi, Bangkok, 10140, Thailand
- Tanagrit Phangwiwat, Neuroscience Center for Research and Innovation, Learning Institute, King Mongkut's University of Technology Thonburi, Bangkok, 10140, Thailand; Big Data Experience Center, King Mongkut's University of Technology Thonburi, Bangkok, 10140, Thailand; Computer Engineering Department, Faculty of Engineering, King Mongkut's University of Technology Thonburi, Bangkok, 10140, Thailand
- Praewpiraya Wiwatphonthana, Neuroscience Center for Research and Innovation, Learning Institute, King Mongkut's University of Technology Thonburi, Bangkok, 10140, Thailand; SECCLO Consortium, Department of Computer Science, Aalto University School of Science, Espoo, 02150, Finland
- Prapasiri Sawetsuttipan, Neuroscience Center for Research and Innovation, Learning Institute, King Mongkut's University of Technology Thonburi, Bangkok, 10140, Thailand; Big Data Experience Center, King Mongkut's University of Technology Thonburi, Bangkok, 10140, Thailand; Computer Engineering Department, Faculty of Engineering, King Mongkut's University of Technology Thonburi, Bangkok, 10140, Thailand
- Kai-Yu Chang, Department of Cognitive Science, University of California–San Diego, La Jolla, California 92093-1090
- Viola S Störmer, Department of Psychological and Brain Sciences, Dartmouth College, Hanover, New Hampshire 03755
- Geoffrey F Woodman, Department of Psychology, Center for Integrative and Cognitive Neuroscience, and Interdisciplinary Program in Neuroscience, Vanderbilt University, Nashville, Tennessee 37235
- John T Serences, Neurosciences Graduate Program, Department of Psychology, University of California–San Diego, La Jolla, California 92093-1090
7. Noonan MP, Störmer VS. Contextual and Temporal Constraints for Attentional Capture: Commentary on Theeuwes' 2023 Review "The Attentional Capture Debate: When Can We Avoid Salient Distractors and When Not?". J Cogn 2023; 6:37. PMID: 37426062; PMCID: PMC10327855; DOI: 10.5334/joc.274.
Abstract
Salient distractors demand our attention. Their salience, derived from intensity, relative contrast, or learned relevance, captures our limited information capacity. This is typically an adaptive response, as salient stimuli may require an immediate change in behaviour. However, sometimes apparently salient distractors do not capture attention. Theeuwes, in his recent commentary, has proposed certain boundary conditions of the visual scene that result in one of two search modes, serial or parallel, which determine whether we can avoid salient distractors or not. Here, we argue that a more complete theory should consider the temporal and contextual factors that influence the very salience of the distractor itself.
Affiliations
- MaryAnn P Noonan, Department of Psychology, University of York, Heslington, York, UK; Department of Experimental Psychology, University of Oxford, South Parks Road, Oxford, UK
- Viola S Störmer, Department of Psychological and Brain Sciences, Dartmouth College, USA
8. Chung YH, Störmer VS. Unveiling the time course of visual stabilization through human electrophysiology. iScience 2023; 26:106800. PMID: 37255656; PMCID: PMC10225885; DOI: 10.1016/j.isci.2023.106800.
Abstract
Object positions are coded relative to their surroundings, presumably providing visual stability during eye movements. But when does this perceived stability arise? Here we used a visual illusion, the frame-induced position shift, and measured electrophysiological activity elicited by an object whose perceived position was either shifted because of a surrounding frame or not, thus dissociating perceived and physical locations. We found that visually evoked responses were sensitive only to the physical location early in time (∼70 ms), but both physical and illusory location information was present at a later time point (∼140 ms). Furthermore, location information could be reliably decoded across physical and illusory locations during the later time interval but not during the earlier time interval, demonstrating that neural activity patterns are shared between the two processes at a later stage. These results suggest that visual stability of objects emerges relatively late and is thus dependent on recurrent feedback from higher processing stages.
Affiliations
- Yong Hoon Chung, Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Viola S Störmer, Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
9. Chung YH, Brady TF, Störmer VS. No Fixed Limit for Storing Simple Visual Features: Realistic Objects Provide an Efficient Scaffold for Holding Features in Mind. Psychol Sci 2023. PMID: 37227786; DOI: 10.1177/09567976231171339.
Abstract
Prominent theories of visual working memory postulate that the capacity to maintain a particular visual feature is fixed. In contrast to these theories, recent studies have demonstrated that meaningful objects are better remembered than simple, nonmeaningful stimuli. Here, we tested whether this is solely because meaningful stimuli can recruit additional features-and thus more storage capacity-or whether simple visual features that are not themselves meaningful can also benefit from being part of a meaningful object. Across five experiments (30 young adults each), we demonstrated that visual working memory capacity for color is greater when colors are part of recognizable real-world objects compared with unrecognizable objects. Our results indicate that meaningful stimuli provide a potent scaffold to help maintain simple visual feature information, possibly because they effectively increase the objects' distinctiveness from each other and reduce interference.
Affiliations
- Yong Hoon Chung, Department of Psychological and Brain Sciences, Dartmouth College
- Timothy F Brady, Department of Psychology, University of California San Diego
- Viola S Störmer, Department of Psychological and Brain Sciences, Dartmouth College
10. Chapman AF, Chunharas C, Störmer VS. Feature-based attention warps the perception of visual features. Sci Rep 2023; 13:6487. PMID: 37081047; PMCID: PMC10119379; DOI: 10.1038/s41598-023-33488-2.
Abstract
Selective attention improves sensory processing of relevant information but can also impact the quality of perception. For example, attention increases visual discrimination performance and at the same time boosts the apparent stimulus contrast of attended relative to unattended stimuli. Can attention also lead to perceptual distortions of visual representations? Optimal tuning accounts of attention suggest that processing is biased towards "off-tuned" features to maximize the signal-to-noise ratio in favor of the target, especially when targets and distractors are confusable. Here, we tested whether such tuning gives rise to phenomenological changes of visual features. We instructed participants to select a color among other colors in a visual search display and subsequently asked them to judge the appearance of the target color in a 2-alternative forced-choice task. Participants consistently judged the target color to appear more dissimilar from the distractor color in feature space. Critically, the magnitude of these perceptual biases varied systematically with the similarity between target and distractor colors during search, indicating that attentional tuning quickly adapts to current task demands. In control experiments, we ruled out possible non-attentional explanations such as color contrast or memory effects. Overall, our results demonstrate that selective attention warps the representational geometry of color space, resulting in profound perceptual changes across large swaths of feature space. Broadly, these results indicate that efficient attentional selection can come at a perceptual cost by distorting our sensory experience.
Affiliations
- Angus F Chapman, Department of Psychology, UC San Diego, La Jolla, CA, 92092, USA; Department of Psychological and Brain Sciences, Boston University, 64 Cummington Mall, Boston, MA, 02215, USA
- Chaipat Chunharas, Cognitive Clinical and Computational Neuroscience Lab, KCMH Chula Neuroscience Center, Thai Red Cross Society, Department of Internal Medicine, Chulalongkorn University, Bangkok, 10330, Thailand
- Viola S Störmer, Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
|
11. Chapman AF, Störmer VS. Efficient tuning of attention to narrow and broad ranges of task-relevant feature values. Visual Cognition 2023. DOI: 10.1080/13506285.2023.2192993.
12. Williams JR, Markov YA, Tiurina NA, Störmer VS. What You See Is What You Hear: Sounds Alter the Contents of Visual Perception. Psychol Sci 2022; 33:2109-2122. PMID: 36179072; DOI: 10.1177/09567976221121348.
Abstract
Visual object recognition is not performed in isolation but depends on prior knowledge and context. Here, we found that auditory context plays a critical role in visual object perception. Using a psychophysical task in which naturalistic sounds were paired with noisy visual inputs, we demonstrated across two experiments (young adults; ns = 18-40 in Experiments 1 and 2, respectively) that the representations of ambiguous visual objects were shifted toward the visual features of an object that were related to the incidental sound. In a series of control experiments, we found that these effects were not driven by decision or response biases (ns = 40-85) nor were they due to top-down expectations (n = 40). Instead, these effects were driven by the continuous integration of audiovisual inputs during perception itself. Together, our results demonstrate that the perceptual experience of visual objects is directly shaped by naturalistic auditory context, which provides independent and diagnostic information about the visual world.
Affiliations
- Yuri A Markov, Laboratory of Psychophysics, Brain Mind Institute, Ecole Polytechnique Federale de Lausanne (EPFL)
- Natalia A Tiurina, Laboratory of Psychophysics, Brain Mind Institute, Ecole Polytechnique Federale de Lausanne (EPFL)
- Viola S Störmer, Department of Psychology, University of California San Diego; Department of Psychological and Brain Sciences, Dartmouth College
13. Chapman AF, Störmer VS. Feature similarity is non-linearly related to attentional selection: Evidence from visual search and sustained attention tasks. J Vis 2022; 22:4. PMID: 35834377; PMCID: PMC9290316; DOI: 10.1167/jov.22.8.4.
Abstract
Although many theories of attention highlight the importance of similarity between target and distractor items for selection, few studies have directly quantified the function underlying this relationship. Across two commonly used tasks (visual search and sustained attention), we investigated how target-distractor similarity impacts feature-based attentional selection. Importantly, we found comparable patterns of performance in both visual search and sustained feature-based attention tasks, with performance (response times and d', respectively) plateauing at medium target-distractor distances (40°-50° around a luminance-matched color wheel). In contrast, visual search efficiency, as measured by search slopes, was affected by a much more narrow range of similarity levels (10°-20°). We assessed the relationship between target-distractor similarity and attentional performance using both a stimulus-based and psychologically-based measure of similarity and found this nonlinear relationship in both cases. However, psychological similarity accounted for some of the nonlinearities observed in the data, suggesting that measures of psychological similarity are more appropriate when studying effects of target-distractor similarities. These findings place novel constraints on models of selective attention and emphasize the importance of considering the similarity structure of the feature space over which attention operates. Broadly, the nonlinear effects of similarity on attention are consistent with accounts that propose attention exaggerates the distance between competing representations, possibly through enhancement of off-tuned neurons.
Affiliations
- Angus F Chapman, Department of Psychology, University of California, San Diego, La Jolla, CA, USA
- Viola S Störmer, Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
|
14. Wöstmann M, Störmer VS, Obleser J, Addleman DA, Andersen SK, Gaspelin N, Geng JJ, Luck SJ, Noonan MP, Slagter HA, Theeuwes J. Ten simple rules to study distractor suppression. Prog Neurobiol 2022; 213:102269. PMID: 35427732; PMCID: PMC9069241; DOI: 10.1016/j.pneurobio.2022.102269.
Abstract
Distractor suppression refers to the ability to filter out distracting and task-irrelevant information. Distractor suppression is essential for survival and considered a key aspect of selective attention. Despite the recent and rapidly evolving literature on distractor suppression, we still know little about how the brain suppresses distracting information. What limits progress is that we lack mutually agreed upon principles of how to study the neural basis of distractor suppression and its manifestation in behavior. Here, we offer ten simple rules that we believe are fundamental when investigating distractor suppression. We provide guidelines on how to design conclusive experiments on distractor suppression (Rules 1-3), discuss different types of distractor suppression that need to be distinguished (Rules 4-6), and provide an overview of models of distractor suppression and considerations of how to evaluate distractor suppression statistically (Rules 7-10). Together, these rules provide a concise and comprehensive synopsis of promising advances in the field of distractor suppression. Following these rules will propel research on distractor suppression in important ways, not only by highlighting prominent issues to both new and more advanced researchers in the field, but also by facilitating communication between sub-disciplines.
Affiliations
- Malte Wöstmann, Department of Psychology, University of Lübeck, Lübeck, Germany; Center of Brain, Behavior and Metabolism (CBBM), University of Lübeck, Lübeck, Germany
- Viola S Störmer, Department of Psychological and Brain Sciences, Dartmouth College, USA
- Jonas Obleser, Department of Psychology, University of Lübeck, Lübeck, Germany; Center of Brain, Behavior and Metabolism (CBBM), University of Lübeck, Lübeck, Germany
- Søren K Andersen, School of Psychology, University of Aberdeen, UK; Department of Psychology, University of Southern Denmark, Denmark
- Nicholas Gaspelin, Department of Psychology and Department of Integrative Neuroscience, Binghamton University, State University of New York, USA
- Joy J Geng, Center for Mind and Brain and Department of Psychology, University of California, Davis, USA
- Steven J Luck, Center for Mind and Brain and Department of Psychology, University of California, Davis, USA
- Heleen A Slagter, Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands; Institute for Brain and Behavior, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
- Jan Theeuwes, Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands; Institute for Brain and Behavior, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
15
|
Wöstmann M, Störmer VS, Obleser J, Addleman DA, Andersen SK, Gaspelin N, Geng JJ, Luck SJ, Noonan MP, Slagter HA, Theeuwes J. Ten simple rules to study distractor suppression. Prog Neurobiol 2022; 213:102269. [PMID: 35427732 PMCID: PMC9069241 DOI: 10.1016/j.pneurobio.2022.102269] [Citation(s) in RCA: 22] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2021] [Revised: 03/28/2022] [Accepted: 04/04/2022] [Indexed: 01/23/2023]
Abstract
Distractor suppression refers to the ability to filter out distracting and task-irrelevant information. Distractor suppression is essential for survival and considered a key aspect of selective attention. Despite the recent and rapidly evolving literature on distractor suppression, we still know little about how the brain suppresses distracting information. What limits progress is that we lack mutually agreed upon principles of how to study the neural basis of distractor suppression and its manifestation in behavior. Here, we offer ten simple rules that we believe are fundamental when investigating distractor suppression. We provide guidelines on how to design conclusive experiments on distractor suppression (Rules 1–3), discuss different types of distractor suppression that need to be distinguished (Rules 4–6), and provide an overview of models of distractor suppression and considerations of how to evaluate distractor suppression statistically (Rules 7–10). Together, these rules provide a concise and comprehensive synopsis of promising advances in the field of distractor suppression. Following these rules will propel research on distractor suppression in important ways, not only by highlighting prominent issues to both new and more advanced researchers in the field, but also by facilitating communication between sub-disciplines.
Affiliation(s)
- Malte Wöstmann
- Department of Psychology, University of Lübeck, Lübeck, Germany; Center of Brain, Behavior and Metabolism (CBBM), University of Lübeck, Lübeck, Germany.
- Viola S Störmer
- Department of Psychological and Brain Sciences, Dartmouth College, USA.
- Jonas Obleser
- Department of Psychology, University of Lübeck, Lübeck, Germany; Center of Brain, Behavior and Metabolism (CBBM), University of Lübeck, Lübeck, Germany.
- Søren K Andersen
- School of Psychology, University of Aberdeen, UK; Department of Psychology, University of Southern Denmark, Denmark.
- Nicholas Gaspelin
- Department of Psychology and Department of Integrative Neuroscience, Binghamton University, State University of New York, USA.
- Joy J Geng
- Center for Mind and Brain and Department of Psychology, University of California, Davis, USA.
- Steven J Luck
- Center for Mind and Brain and Department of Psychology, University of California, Davis, USA.
- Heleen A Slagter
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands; Institute for Brain and Behavior, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands.
- Jan Theeuwes
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands; Institute for Brain and Behavior, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands.
16
Williams JR, Brady TF, Störmer VS. Guidance of attention by working memory is a matter of representational fidelity. J Exp Psychol Hum Percept Perform 2022; 48:202-231. [PMID: 35084932 DOI: 10.1037/xhp0000985] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Items that are held in visual working memory can guide attention toward matching features in the environment. Predominant theories propose that to guide attention, a memory item must be internally prioritized and given a special template status, which builds on the assumption that there are qualitatively distinct states in working memory. Here, we propose that no distinct states in working memory are necessary to explain why some items guide attention and others do not. Instead, we propose variations in attentional guidance arise because individual memories naturally vary in their representational fidelity, and only highly accurate memories automatically guide attention. Across a series of experiments and a simulation we show that (a) items in working memory vary naturally in representational fidelity; (b) attention is guided by all well-represented items, though frequently only one item is represented well enough to guide; and (c) no special working memory state for prioritized items is necessary to explain guidance. These findings challenge current models of attentional guidance and working memory and instead support a simpler account for how working memory and attention interact: Only the representational fidelity of memories, which varies naturally between items, determines whether and how strongly a memory representation guides attention. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
17
Marin A, Störmer VS, Carver LJ. Expectations about dynamic visual objects facilitates early sensory processing of congruent sounds. Cortex 2021; 144:198-211. [PMID: 34673436 DOI: 10.1016/j.cortex.2021.08.006] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2021] [Revised: 05/17/2021] [Accepted: 08/05/2021] [Indexed: 11/17/2022]
Abstract
The perception of a moving object can lead to the expectation of its sound, yet little is known about how visual expectations influence auditory processing. We examined how visual perception of an object moving continuously across the visual field influences early auditory processing of a sound that occurred congruently or incongruently with the object's motion. In Experiment 1, electroencephalogram (EEG) activity was recorded from adults who passively viewed a ball that appeared either on the left or right boundary of a display and continuously traversed along the horizontal midline to make contact and elicit a bounce sound off the opposite boundary. Our main analysis focused on the auditory-evoked event-related potential. For audio-visual (AV) trials, a sound accompanied the visual input when the ball contacted the opposite boundary (AV-synchronous), or the sound occurred before contact (AV-asynchronous). We also included audio-only and visual-only trials. AV-synchronous sounds elicited an earlier and attenuated auditory response relative to AV-asynchronous or audio-only events. In Experiment 2, we examined the roles of expectancy and multisensory integration in influencing this response. In addition to the audio-only, AV-synchronous, and AV-asynchronous conditions, participants were shown a ball that became occluded prior to reaching the boundary of the display, but elicited an expected sound at the point of occluded collision. The auditory response during the AV-occluded condition resembled that of the AV-synchronous condition, suggesting that expectations induced by a moving object can influence early auditory processing. Broadly, the results suggest that dynamic visual stimuli can help generate expectations about the timing of sounds, which then facilitates the processing of auditory information that matches these expectations.
Affiliation(s)
- Andrew Marin
- University of California, San Diego (UCSD), Psychology Department, La Jolla, CA, USA.
- Viola S Störmer
- Dartmouth College, Department of Psychological and Brain Sciences, Hanover, NH, USA.
- Leslie J Carver
- University of California, San Diego (UCSD), Psychology Department, La Jolla, CA, USA.
18
Abstract
A new study suggests that visual working memory usage is interestingly low during a more naturalistic virtual reality paradigm, compared to capacity estimates from traditional lab studies. This raises new questions about the use of working memory in everyday tasks.
Affiliation(s)
- Jamal Williams
- Department of Psychology, University of California, San Diego, CA 92093, USA.
- Viola S Störmer
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA.
19
Keefe JM, Pokta E, Störmer VS. Cross-modal orienting of exogenous attention results in visual-cortical facilitation, not suppression. Sci Rep 2021; 11:10237. [PMID: 33986384 PMCID: PMC8119727 DOI: 10.1038/s41598-021-89654-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2020] [Accepted: 04/29/2021] [Indexed: 11/10/2022] Open
Abstract
Attention may be oriented exogenously (i.e., involuntarily) to the location of salient stimuli, resulting in improved perception. However, it is unknown whether exogenous attention improves perception by facilitating processing of attended information, suppressing processing of unattended information, or both. To test this question, we measured behavioral performance and cue-elicited neural changes in the electroencephalogram as participants (N = 19) performed a task in which a spatially non-predictive auditory cue preceded a visual target. Critically, this cue was either presented at a peripheral target location or from the center of the screen, allowing us to isolate spatially specific attentional activity. We find that both behavior and attention-mediated changes in visual-cortical activity are enhanced at the location of a cue prior to the onset of a target, but that behavior and neural activity at an unattended target location is equivalent to that following a central cue that does not direct attention (i.e., baseline). These results suggest that exogenous attention operates via facilitation of information at an attended location.
Affiliation(s)
- Jonathan M Keefe
- Department of Psychology, University of California, San Diego, 92092, USA.
- Emilia Pokta
- Department of Psychology, University of California, San Diego, 92092, USA.
- Viola S Störmer
- Department of Psychology, University of California, San Diego, 92092, USA.
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, USA.
20
Geweke F, Pokta E, Störmer VS. Spatial distance of target locations affects the time course of both endogenous and exogenous attentional deployment. J Exp Psychol Hum Percept Perform 2021; 47:774-783. [PMID: 33844570 DOI: 10.1037/xhp0000909] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Spatial attention can be deployed exogenously, based on salient events in the environment, or endogenously, based on current task goals. Numerous studies have compared the time courses of these two types of attention, and have demonstrated that exogenous attention is fast and transient and endogenous attention is relatively slow but sustained. In the present study we investigated whether and how the temporal dynamics of exogenous and endogenous attention differ in terms of where attention is deployed in the visual field, in particular at locations nearby or far from fixation. Across a series of experiments, we measured attentional shift times for each type of attention, and found overall slower deployment of endogenous relative to exogenous attention, in line with previous research. Importantly, we also consistently found that it takes longer to deploy attention at more distant locations relative to nearby locations, regardless of how attention was instigated. Overall, our results suggest that the temporal limits of attentional deployment across different spatial distances are similar for exogenous and endogenous attention, pointing to shared constraints underlying both attentional modes. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
21
Brady TF, Störmer VS. The role of meaning in visual working memory: Real-world objects, but not simple features, benefit from deeper processing. J Exp Psychol Learn Mem Cogn 2021; 48:942-958. [PMID: 33764123 DOI: 10.1037/xlm0001014] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Visual working memory is a capacity-limited cognitive system used to actively store and manipulate visual information. Visual working memory capacity is not fixed, but varies by stimulus type: Stimuli that are more meaningful are better remembered. In the current work, we investigate what conditions lead to the strongest benefits for meaningful stimuli. We propose that in some situations participants may try to encode the entire display holistically (i.e., in a quick "snapshot"). This may lead them to treat objects as simply meaningless, colored "blobs", rather than individually and in a high-level way, which could reduce benefits of meaningful stimuli. In a series of experiments, we directly test whether real-world objects, colors, perceptually matched less-meaningful objects, and fully scrambled objects benefit from deeper processing. We systematically vary the presentation format of stimuli at encoding to be either simultaneous-encouraging a parallel, "take-a-quick-snapshot" strategy-or present the stimuli sequentially, promoting a serial, each-item-at-once strategy. We find large advantages for meaningful objects in all conditions, but find that real-world objects-and to a lesser degree lightly scrambled, still meaningful versions of the objects-benefit from the sequential encoding and thus deeper, focused-on-individual-items processing, while colors do not. Our results suggest single-feature objects may be an outlier in their affordance of parallel, quick processing, and that in more realistic memory situations, visual working memory likely relies upon representations resulting from in-depth processing of objects (e.g., in higher-level visual areas) rather than solely being represented in terms of their low-level features. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
22
Asp IE, Störmer VS, Brady TF. Greater Visual Working Memory Capacity for Visually Matched Stimuli When They Are Perceived as Meaningful. J Cogn Neurosci 2021; 33:902-918. [PMID: 33571076 DOI: 10.1162/jocn_a_01693] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Almost all models of visual working memory-the cognitive system that holds visual information in an active state-assume it has a fixed capacity: Some models propose a limit of three to four objects, where others propose there is a fixed pool of resources for each basic visual feature. Recent findings, however, suggest that memory performance is improved for real-world objects. What supports these increases in capacity? Here, we test whether the meaningfulness of a stimulus alone influences working memory capacity while controlling for visual complexity and directly assessing the active component of working memory using EEG. Participants remembered ambiguous stimuli that could either be perceived as a face or as meaningless shapes. Participants had higher performance and increased neural delay activity when the memory display consisted of more meaningful stimuli. Critically, by asking participants whether they perceived the stimuli as a face or not, we also show that these increases in visual working memory capacity and recruitment of additional neural resources are because of the subjective perception of the stimulus and thus cannot be driven by physical properties of the stimulus. Broadly, this suggests that the capacity for active storage in visual working memory is not fixed but that more meaningful stimuli recruit additional working memory resources, allowing them to be better remembered.
Affiliation(s)
- Isabel E Asp
- University of California; Veterans Affairs San Diego Healthcare System, La Jolla, California.
23
Keefe JM, Störmer VS. Lateralized alpha activity and slow potential shifts over visual cortex track the time course of both endogenous and exogenous orienting of attention. Neuroimage 2020; 225:117495. [PMID: 33184032 DOI: 10.1016/j.neuroimage.2020.117495] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2020] [Revised: 10/13/2020] [Accepted: 10/20/2020] [Indexed: 11/17/2022] Open
Abstract
Spatial attention can be oriented endogenously, based on current task goals, or exogenously, triggered by salient events in the environment. Based upon literature demonstrating differences in the time course and neural substrates of each type of orienting, these two attention systems are often treated as fundamentally distinct. However, recent studies suggest that rhythmic neural activity in the alpha band (8-13 Hz) and slow waves in the event-related potential (ERP) may emerge over parietal-occipital cortex following both endogenous and exogenous attention cues. To assess whether these neural changes index common processes of spatial attention, we conducted two within-subject experiments varying the two main dimensions over which endogenous and exogenous attention tasks typically differ: cue informativity (spatially predictive vs. non-predictive) and cue format (centrally vs. peripherally presented). This task design allowed us to tease apart neural changes related to top-down goals and those driven by the reflexive orienting of spatial attention, and examine their interactions in a novel hybrid cross-modal attention task. Our data demonstrate that both central and peripheral cues elicit lateralized ERPs over parietal-occipital cortex, though at different points in time, consistent with these ERPs reflecting the orienting of spatial attention. Lateralized alpha activity was also present across all tasks, emerging rapidly for peripheral cues and sustaining longer for spatially informative cues. Overall, these data indicate that distinct slow-wave ERPs index the spatial orienting of endogenous and exogenous attention, while lateralized alpha activity represents a common signature of visual-cortical biasing in anticipation of potential targets across both types of attention.
Affiliation(s)
- Jonathan M Keefe
- Department of Psychology, University of California, San Diego 92092, USA.
- Viola S Störmer
- Department of Psychology, University of California, San Diego 92092, USA; Department of Psychological and Brain Sciences, Dartmouth College, USA.
24
Barszcz A, Chapman AF, Chunharas C, Störmer VS. Feature-based attention warps perception of color. J Vis 2020. [DOI: 10.1167/jov.20.11.1304] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Affiliation(s)
- Audrey Barszcz
- Department of Psychology, University of California San Diego.
- Chaipat Chunharas
- Department of Medicine, King Chulalongkorn Memorial Hospital, Chulalongkorn University, Bangkok, Thailand.
25
Chapman AF, Geweke F, Störmer VS. Feature-based attention resolves differences in target-distractor similarity through multiple mechanisms. J Vis 2019. [DOI: 10.1167/19.10.45a] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Affiliation(s)
- Angus F Chapman
- Department of Psychology, University of California, San Diego.
- Viola S Störmer
- Department of Psychology, University of California, San Diego.
26
Affiliation(s)
- Viola S Störmer
- Department of Psychology, University of California, San Diego.
27
Störmer VS. Ensemble perception of faces within the focus of attention is biased towards unattended and task-irrelevant faces. J Vis 2019. [DOI: 10.1167/19.10.16d] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Affiliation(s)
- Viola S Störmer
- Department of Psychology, University of California San Diego.
28
Keefe JM, Störmer VS. Voluntary and involuntary attention elicit distinct biasing signals in visual cortex. J Vis 2019. [DOI: 10.1167/19.10.214b] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
29
Störmer VS, McDonald JJ, Hillyard SA. Involuntary orienting of attention to sight or sound relies on similar neural biasing mechanisms in early visual processing. Neuropsychologia 2019; 132:107122. [DOI: 10.1016/j.neuropsychologia.2019.107122] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2018] [Revised: 05/31/2019] [Accepted: 06/11/2019] [Indexed: 10/26/2022]
30
Amadeo MB, Störmer VS, Campus C, Gori M. Peripheral sounds elicit stronger activity in contralateral occipital cortex in blind than sighted individuals. Sci Rep 2019; 9:11637. [PMID: 31406158 PMCID: PMC6690873 DOI: 10.1038/s41598-019-48079-3] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2019] [Accepted: 07/26/2019] [Indexed: 11/17/2022] Open
Abstract
Previous research has shown that peripheral, task-irrelevant sounds elicit activity in contralateral visual cortex of sighted people, as revealed by a sustained positive deflection in the event-related potential (ERP) over the occipital scalp contralateral to the sound’s location. This Auditory-evoked Contralateral Occipital Positivity (ACOP) appears between 200–450 ms after sound onset, and is present even when the task is entirely auditory and no visual stimuli are presented at all. Here, we investigate whether this cross-modal activation of contralateral visual cortex is influenced by visual experience. To this end, ERPs were recorded in 12 sighted and 12 blind subjects during a unimodal auditory task. Participants listened to a stream of sounds and pressed a button every time they heard a central target tone, while ignoring the peripheral noise bursts. It was found that task-irrelevant noise bursts elicited a larger ACOP in blind compared to sighted participants, indicating for the first time that peripheral sounds can enhance neural activity in visual cortex in a spatially lateralized manner even in visually deprived individuals. Overall, these results suggest that the cross-modal activation of contralateral visual cortex triggered by peripheral sounds does not require any visual input to develop, and is rather enhanced by visual deprivation.
Affiliation(s)
- Maria Bianca Amadeo
- U-VIP: Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genova, Italy; Department of Informatics, Bioengineering, Robotics and Systems Engineering, Università degli Studi di Genova, Genova, Italy.
- Viola S Störmer
- Department of Psychology and Neuroscience Graduate Program, University of California San Diego, San Diego, USA.
- Claudio Campus
- U-VIP: Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genova, Italy.
- Monica Gori
- U-VIP: Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genova, Italy.
31
Abstract
Attention, the mechanism by which information is selected for further processing, has mostly been studied within the visual system. While this research has been exceptionally successful, it is important to understand how attention operates across the sensory modalities. This review focuses on recent studies showing that orienting to a peripheral, salient sound affects visual processing: it enhances visual perception, boosts visual-cortical responses, and modulates visual cortex activity before the appearance of a visual object. Critically, all of these effects are spatially selective, indicating that spatial attention facilitates perceptual processing at an attended location across sensory modalities. The neural changes in visual cortex triggered by the sounds not only resemble some of the neural modulations reported in uni-modal visual attention studies, but also reveal some important differences.
Affiliation(s)
- Viola S Störmer
- Department of Psychology, University of California, San Diego, United States.
32
Störmer VS, Cohen MA, Alvarez GA. Tuning Attention to Object Categories: Spatially Global Effects of Attention to Faces in Visual Processing. J Cogn Neurosci 2019; 31:937-947. [PMID: 30912729 DOI: 10.1162/jocn_a_01400] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Feature-based attention is known to enhance visual processing globally across the visual field, even at task-irrelevant locations. Here, we asked whether attention to object categories, in particular faces, shows similar location-independent tuning. Using EEG, we measured the face-selective N170 component of the EEG signal to examine neural responses to faces at task-irrelevant locations while participants attended to faces at another task-relevant location. Across two experiments, we found that visual processing of faces was amplified at task-irrelevant locations when participants attended to faces relative to when participants attended to either buildings or scrambled face parts. The fact that we see this enhancement with the N170 suggests that these attentional effects occur at the earliest stage of face processing. Two additional behavioral experiments showed that it is easier to attend to the same object category across the visual field relative to two distinct categories, consistent with object-based attention spreading globally. Together, these results suggest that attention to high-level object categories shows similar spatially global effects on visual processing as attention to simple, individual, low-level features.
33
Brady TF, Störmer VS, Shafer-Skelton A, Williams JR, Chapman AF, Schill HM. Scaling up visual attention and visual working memory to the real world. PSYCHOLOGY OF LEARNING AND MOTIVATION 2019. [DOI: 10.1016/bs.plm.2019.03.001] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/16/2023]
34
Abstract
While substantial work has focused on how the visual system achieves basic-level recognition, less work has asked about how it supports large-scale distinctions between objects, such as animacy and real-world size. Previous work has shown that these dimensions are reflected in our neural object representations (Konkle & Caramazza, 2013), and that objects of different real-world sizes have different mid-level perceptual features (Long, Konkle, Cohen, & Alvarez, 2016). Here, we test the hypothesis that animates and manmade objects also differ in mid-level perceptual features. To do so, we generated synthetic images of animals and objects that preserve some texture and form information ("texforms"), but are not identifiable at the basic level. We used visual search efficiency as an index of perceptual similarity, as search is slower when targets are perceptually similar to distractors. Across three experiments, we find that observers can find animals faster among objects than among other animals, and vice versa, and that these results hold when stimuli are reduced to unrecognizable texforms. Electrophysiological evidence revealed that this mixed-animacy search advantage emerges during early stages of target individuation, and not during later stages associated with semantic processing. Lastly, we find that perceived curvature explains part of the mixed-animacy search advantage and that observers use perceived curvature to classify texforms as animate/inanimate. Taken together, these findings suggest that mid-level perceptual features, including curvature, contain cues to whether an object may be animate versus manmade. We propose that the visual system capitalizes on these early cues to facilitate object detection, recognition, and classification.
Affiliation(s)
- Bria Long
- Department of Psychology, Harvard University, Cambridge, MA, USA.
- Viola S Störmer
- Department of Psychology, University of California, San Diego, CA, USA.
- George A Alvarez
- Department of Psychology, Harvard University, Cambridge, MA, USA.
35
Feng W, Störmer VS, Martinez A, McDonald JJ, Hillyard SA. Involuntary orienting of attention to a sound desynchronizes the occipital alpha rhythm and improves visual perception. Neuroimage 2017; 150:318-328. [DOI: 10.1016/j.neuroimage.2017.02.033] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2016] [Revised: 02/12/2017] [Accepted: 02/13/2017] [Indexed: 10/20/2022] Open
36
Abstract
Can attention alter the impression of a face? Previous studies showed that attention modulates the appearance of lower-level visual features. For instance, attention can make a simple stimulus appear to have higher contrast than it actually does. We tested whether attention can also alter the perception of a higher-order property-namely, facial attractiveness. We asked participants to judge the relative attractiveness of two faces after summoning their attention to one of the faces using a briefly presented visual cue. Across trials, participants judged the attended face to be more attractive than the same face when it was unattended. This effect was not due to decision or response biases, but rather was due to changes in perceptual processing of the faces. These results show that attention alters perceived facial attractiveness, and broadly demonstrate that attention can influence higher-level perception and may affect people's initial impressions of one another.
37
Störmer VS, Feng W, Martinez A, McDonald JJ, Hillyard SA. Salient, Irrelevant Sounds Reflexively Induce Alpha Rhythm Desynchronization in Parallel with Slow Potential Shifts in Visual Cortex. J Cogn Neurosci 2016; 28:433-45. [DOI: 10.1162/jocn_a_00915] [Citation(s) in RCA: 29] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Recent findings suggest that a salient, irrelevant sound attracts attention to its location involuntarily and facilitates processing of a colocalized visual event [McDonald, J. J., Störmer, V. S., Martinez, A., Feng, W. F., & Hillyard, S. A. Salient sounds activate human visual cortex automatically. Journal of Neuroscience, 33, 9194–9201, 2013]. Associated with this cross-modal facilitation is a sound-evoked slow potential over the contralateral visual cortex termed the auditory-evoked contralateral occipital positivity (ACOP). Here, we further tested the hypothesis that a salient sound captures visual attention involuntarily by examining sound-evoked modulations of the occipital alpha rhythm, which has been strongly associated with visual attention. In two purely auditory experiments, lateralized irrelevant sounds triggered a bilateral desynchronization of occipital alpha-band activity (10–14 Hz) that was more pronounced in the hemisphere contralateral to the sound's location. The timing of the contralateral alpha-band desynchronization overlapped with that of the ACOP (∼240–400 msec), and both measures of neural activity were estimated to arise from neural generators in the ventral-occipital cortex. The magnitude of the lateralized alpha desynchronization was correlated with ACOP amplitude on a trial-by-trial basis and between participants, suggesting that they arise from or are dependent on a common neural mechanism. These results support the hypothesis that the sound-induced alpha desynchronization and ACOP both reflect the involuntary cross-modal orienting of spatial attention to the sound's location.
Affiliation(s)
- Antigona Martinez
- University of California at San Diego
- Nathan Kline Institute for Psychiatric Research, Orangeburg, NY
38
Störmer VS, Alvarez GA. Feature-based attention elicits surround suppression in feature space. Curr Biol 2014; 24:1985-8. [PMID: 25155510 DOI: 10.1016/j.cub.2014.07.030] [Citation(s) in RCA: 97] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2014] [Revised: 07/10/2014] [Accepted: 07/11/2014] [Indexed: 11/26/2022]
Abstract
It is known that focusing attention on a particular feature (e.g., the color red) facilitates the processing of all objects in the visual field containing that feature [1-7]. Here, we show that such feature-based attention not only facilitates processing but also actively inhibits processing of similar, but not identical, features globally across the visual field. We combined behavior and electrophysiological recordings of frequency-tagged potentials in human observers to measure this inhibitory surround in feature space. We found that sensory signals of an attended color (e.g., red) were enhanced, whereas sensory signals of colors similar to the target color (e.g., orange) were suppressed relative to colors more distinct from the target color (e.g., yellow). Importantly, this inhibitory effect spreads globally across the visual field, thus operating independently of location. These findings suggest that feature-based attention comprises an excitatory peak surrounded by a narrow inhibitory zone in color space to attenuate the most distracting and potentially confusable stimuli during visual perception. This selection profile is akin to what has been reported for location-based attention [8-10] and thus suggests that such center-surround mechanisms are an overarching principle of attention across different domains in the human brain.
Affiliation(s)
- Viola S Störmer
- Department of Psychology, Harvard University, 33 Kirkland Street, Cambridge, MA 02138, USA
- George A Alvarez
- Department of Psychology, Harvard University, 33 Kirkland Street, Cambridge, MA 02138, USA
39
Störmer VS, Li SC, Heekeren HR, Lindenberger U. Normative shifts of cortical mechanisms of encoding contribute to adult age differences in visual–spatial working memory. Neuroimage 2013; 73:167-75. [DOI: 10.1016/j.neuroimage.2013.02.004] [Citation(s) in RCA: 33] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2012] [Revised: 01/30/2013] [Accepted: 02/03/2013] [Indexed: 11/16/2022] Open
40
Störmer VS, Li SC, Heekeren HR, Lindenberger U. Normal aging delays and compromises early multifocal visual attention during object tracking. J Cogn Neurosci 2012; 25:188-202. [PMID: 23016765 DOI: 10.1162/jocn_a_00303] [Citation(s) in RCA: 33] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Declines in selective attention are one of the sources contributing to age-related impairments in a broad range of cognitive functions. Most previous research on mechanisms underlying older adults' selection deficits has studied the deployment of visual attention to static objects and features. Here we investigate neural correlates of age-related differences in spatial attention to multiple objects as they move. We used a multiple object tracking task, in which younger and older adults were asked to keep track of target objects that moved randomly in the visual field among irrelevant distractor objects. By recording the brain's electrophysiological responses during the tracking period, we were able to delineate neural processing for targets and distractors at early stages of visual processing (~100-300 msec). Older adults showed less selective attentional modulation in the early phase of the visual P1 component (100-125 msec) than younger adults, indicating that early selection is compromised in old age. However, with a 25-msec delay relative to younger adults, older adults showed distinct processing of targets (125-150 msec), that is, a delayed yet intact attentional modulation. The magnitude of this delayed attentional modulation was related to tracking performance in older adults. The amplitude of the N1 component (175-210 msec) was smaller in older adults than in younger adults, and the target amplification effect of this component was also smaller in older relative to younger adults. Overall, these results indicate that normal aging affects the efficiency and timing of early visual processing during multiple object tracking.
Affiliation(s)
- Viola S Störmer
- Max Planck Institute for Human Development, Berlin, Germany.
41
Störmer VS, Passow S, Biesenack J, Li SC. Dopaminergic and cholinergic modulations of visual-spatial attention and working memory: Insights from molecular genetic research and implications for adult cognitive development. Dev Psychol 2012; 48:875-89. [DOI: 10.1037/a0026198] [Citation(s) in RCA: 60] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
42
Abstract
A lateralized event-related potential (ERP) component elicited by attention-directing cues (ADAN) has been linked to frontal-lobe control but is often absent when spatial attention is deployed in the auditory modality. Here, we tested the hypothesis that ERP activity associated with frontal-lobe control of auditory spatial attention is distributed bilaterally by comparing ERPs elicited by attention-directing cues and neutral cues in a unimodal auditory task. This revealed an initial ERP positivity over the anterior scalp and a later ERP negativity over the parietal scalp. Distributed source analysis indicated that the anterior positivity was generated primarily in bilateral prefrontal cortices, whereas the more posterior negativity was generated in parietal and temporal cortices. The anterior ERP positivity likely reflects frontal-lobe attentional control, whereas the subsequent ERP negativity likely reflects anticipatory biasing of activity in auditory cortex.
Affiliation(s)
- Viola S Störmer
- Department of Psychology, Simon Fraser University, Burnaby, British Columbia, Canada