1
Williams JR, Störmer VS. Cutting Through the Noise: Auditory Scenes and Their Effects on Visual Object Processing. Psychol Sci 2024:9567976241237737. PMID: 38889285. DOI: 10.1177/09567976241237737.
Abstract
Despite the intuitive feeling that our visual experience is coherent and comprehensive, the world is full of ambiguous and indeterminate information. Here we explore how the visual system might take advantage of ambient sounds to resolve this ambiguity. Young adults (ns = 20-30) were tasked with identifying an object slowly fading in through visual noise while a task-irrelevant sound played. We found that participants demanded more visual information when the auditory object was incongruent with the visual object compared to when it was not. Auditory scenes, which are only probabilistically related to specific objects, produced similar facilitation even for unheard objects (e.g., a bench). Notably, these effects traverse categorical and specific auditory and visual-processing domains as participants performed across-category and within-category visual tasks, underscoring cross-modal integration across multiple levels of perceptual processing. To summarize, our study reveals the importance of audiovisual interactions to support meaningful perceptual experiences in naturalistic settings.
Affiliation(s)
- Viola S Störmer
- Department of Psychology, University of California, San Diego
- Department of Psychological and Brain Sciences, Dartmouth College
2
Quek GL, de Heering A. Visual periodicity reveals distinct attentional signatures for face and non-face categories. Cereb Cortex 2024; 34:bhae228. PMID: 38879816. PMCID: PMC11180377. DOI: 10.1093/cercor/bhae228.
Abstract
Observers can selectively deploy attention to regions of space, moments in time, specific visual features, individual objects, and even specific high-level categories, for example when keeping an eye out for dogs while jogging. Here, we exploited visual periodicity to examine how category-based attention differentially modulates selective neural processing of face and non-face categories. We combined electroencephalography with a novel frequency-tagging paradigm capable of capturing selective neural responses for multiple visual categories contained within the same rapid image stream (faces/birds in Exp 1; houses/birds in Exp 2). We found that the pattern of attentional enhancement and suppression for face-selective processing is unique compared to other object categories: Where attending to non-face objects strongly enhances their selective neural signals during a later stage of processing (300-500 ms), attentional enhancement of face-selective processing is both earlier and comparatively more modest. Moreover, only the selective neural response for faces appears to be actively suppressed by attending towards an alternate visual category. These results underscore the special status that faces hold within the human visual system, and highlight the utility of visual periodicity as a powerful tool for indexing selective neural processing of multiple visual categories contained within the same image sequence.
Affiliation(s)
- Genevieve L Quek
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Westmead Innovation Quarter, 160 Hawkesbury Rd, Westmead NSW 2145, Australia
- Adélaïde de Heering
- Unité de Recherche en Neurosciences Cognitives (UNESCOG), ULB Neuroscience Institute (UNI), Center for Research in Cognition & Neurosciences (CRCN), Université libre de Bruxelles (ULB), Avenue Franklin Roosevelt, 50-CP191, 1050 Brussels, Belgium
3
Chapman AF, Störmer VS. Representational structures as a unifying framework for attention. Trends Cogn Sci 2024; 28:416-427. PMID: 38280837. DOI: 10.1016/j.tics.2024.01.002.
Abstract
Our visual system consciously processes only a subset of the incoming information. Selective attention allows us to prioritize relevant inputs, and can be allocated to features, locations, and objects. Recent advances in feature-based attention suggest that several selection principles are shared across these domains and that many differences between the effects of attention on perceptual processing can be explained by differences in the underlying representational structures. Moving forward, it can thus be useful to assess how attention changes the structure of the representational spaces over which it operates, which include the spatial organization, feature maps, and object-based coding in visual cortex. This will ultimately add to our understanding of how attention changes the flow of visual information processing more broadly.
Affiliation(s)
- Angus F Chapman
- Department of Psychological and Brain Sciences, Boston University, Boston, MA, USA.
- Viola S Störmer
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA.
4
Schmuck J, Schnuerch R, Kirsten H, Shivani V, Gibbons H. The influence of selective attention to specific emotions on the processing of faces as revealed by event-related brain potentials. Psychophysiology 2023; 60:e14325. PMID: 37162391. DOI: 10.1111/psyp.14325.
Abstract
Event-related potential studies using affective words have indicated that selective attention to valence can increase affective discrimination at early perceptual stages. This effect most likely relies on neural associations between perceptual features of a stimulus and its affective value. Similar to words, emotional expressions in human faces are linked to specific visual elements. Therefore, selectively attending to a given emotion should allow for the preactivation of neural networks coding for the emotion and associated first-order visual elements, leading to enhanced early processing of faces expressing the attended emotion. To investigate this, we employed an expression detection task (N = 65). Fearful, happy, and neutral faces were randomly presented in three blocks while participants were instructed to respond only to one predefined target level of expression in each block. Reaction times were the fastest for happy target faces, which was accompanied by an increased occipital P1 for happy compared with fearful faces. The N170 yielded an arousal effect (emotional > neutral) while both components were not modulated by target status. In contrast, the early posterior negativity (EPN) arousal effect tended to be larger for target compared with nontarget faces. The late positive potential (LPP) revealed large effects of status and expression as well as an interaction driven by an increased LPP specifically for nontarget fearful faces. These findings tentatively indicate that selective attention to facial affect may enhance early emotional processing (EPN) even though further research is needed. Moreover, late controlled processing of facial emotions appears to involve a negativity bias.
Affiliation(s)
- Jonas Schmuck
- Department of Psychology, University of Bonn, Bonn, Germany
- Hannah Kirsten
- Department of Psychology, University of Bonn, Bonn, Germany
5
Cavanagh P, Caplovitz GP, Lytchenko TK, Maechler MR, Tse PU, Sheinberg DL. The Architecture of Object-Based Attention. Psychon Bull Rev 2023; 30:1643-1667. PMID: 37081283. DOI: 10.3758/s13423-023-02281-7.
Abstract
The allocation of attention to objects raises several intriguing questions: What are objects, how does attention access them, what anatomical regions are involved? Here, we review recent progress in the field to determine the mechanisms underlying object-based attention. First, findings from unconscious priming and cueing suggest that the preattentive targets of object-based attention can be fully developed object representations that have reached the level of identity. Next, the control of object-based attention appears to come from ventral visual areas specialized in object analysis that project downward to early visual areas. How feedback from object areas can accurately target the object's specific locations and features is unknown but recent work in autoencoding has made this plausible. Finally, we suggest that the three classic modes of attention may not be as independent as is commonly considered, and instead could all rely on object-based attention. Specifically, studies show that attention can be allocated to the separated members of a group-without affecting the space between them-matching the defining property of feature-based attention. At the same time, object-based attention directed to a single small item has the properties of space-based attention. We outline the architecture of object-based attention, the novel predictions it brings, and discuss how it works in parallel with other attention pathways.
Affiliation(s)
- Patrick Cavanagh
- Department of Psychology, Glendon College, 2275 Bayview Avenue, North York, ON, M4N 3M6, Canada.
- CVR, York University, Toronto, ON, Canada.
- David L Sheinberg
- Department of Neuroscience, Brown University, Providence, RI, USA
- Carney Institute for Brain Science, Brown University, Providence, RI, USA
6
Huang L, Wang J, He Q, Li C, Sun Y, Seger CA, Zhang X. A source for category-induced global effects of feature-based attention in human prefrontal cortex. Cell Rep 2023; 42:113080. PMID: 37659080. DOI: 10.1016/j.celrep.2023.113080.
Abstract
Global effects of feature-based attention (FBA) are generally limited to stimuli sharing the same or similar features, as hypothesized in the "feature-similarity gain model." Visual perception, however, often reflects categories acquired via experience/learning; whether the global-FBA effect can be induced by the categorized features remains unclear. Here, human subjects were trained to classify motion directions into two discrete categories and perform a classical motion-based attention task. We found a category-induced global-FBA effect in both the middle temporal area (MT+) and frontoparietal areas, where attention to a motion direction globally spread to unattended motion directions within the same category, but not to those in a different category. Effective connectivity analysis showed that the category-induced global-FBA effect in MT+ was driven by feedback from the inferior frontal junction (IFJ). Altogether, our study reveals a category-induced global-FBA effect and identifies a source for this effect in human prefrontal cortex, implying that FBA is of greater ecological significance than previously thought.
Affiliation(s)
- Ling Huang
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, South China Normal University, Guangzhou, Guangdong 510631, China; School of Psychology, Center for Studies of Psychological Application, Guangdong Provincial Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong 510631, China
- Jingyi Wang
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, South China Normal University, Guangzhou, Guangdong 510631, China; School of Psychology, Center for Studies of Psychological Application, Guangdong Provincial Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong 510631, China
- Qionghua He
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, South China Normal University, Guangzhou, Guangdong 510631, China; School of Psychology, Center for Studies of Psychological Application, Guangdong Provincial Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong 510631, China
- Chu Li
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, South China Normal University, Guangzhou, Guangdong 510631, China; School of Psychology, Center for Studies of Psychological Application, Guangdong Provincial Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong 510631, China
- Yueling Sun
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, South China Normal University, Guangzhou, Guangdong 510631, China; School of Psychology, Center for Studies of Psychological Application, Guangdong Provincial Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong 510631, China
- Carol A Seger
- School of Psychology, Center for Studies of Psychological Application, Guangdong Provincial Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong 510631, China; Department of Psychology, Colorado State University, Fort Collins, CO 80523, USA
- Xilin Zhang
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, South China Normal University, Guangzhou, Guangdong 510631, China; School of Psychology, Center for Studies of Psychological Application, Guangdong Provincial Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong 510631, China.
7
Thorat S, Peelen MV. Body shape as a visual feature: Evidence from spatially-global attentional modulation in human visual cortex. Neuroimage 2022; 255:119207. PMID: 35427768. DOI: 10.1016/j.neuroimage.2022.119207.
Abstract
Feature-based attention modulates visual processing beyond the focus of spatial attention. Previous work has reported such spatially-global effects for low-level features such as color and orientation, as well as for faces. Here, using fMRI, we provide evidence for spatially-global attentional modulation for human bodies. Participants were cued to search for one of six object categories in two vertically-aligned images. Two additional, horizontally-aligned, images were simultaneously presented but were never task-relevant across three experimental sessions. Analyses time-locked to the objects presented in these task-irrelevant images revealed that responses evoked by body silhouettes were modulated by the participants' top-down attentional set, becoming more body-selective when participants searched for bodies in the task-relevant images. These effects were observed both in univariate analyses of the body-selective cortex and in multivariate analyses of the object-selective visual cortex. Additional analyses showed that this modulation reflected response gain rather than a bias induced by the cues, and that it reflected enhancement of body responses rather than suppression of non-body responses. These findings provide evidence for a spatially-global attention mechanism for body shapes, supporting the rapid and parallel detection of conspecifics in our environment.
Affiliation(s)
- Sushrut Thorat
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands.
8
Park HB, Ahn S, Zhang W. Visual search under physical effort is faster but more vulnerable to distractor interference. Cogn Res Princ Implic 2021; 6:17. PMID: 33710497. PMCID: PMC7977006. DOI: 10.1186/s41235-021-00283-4.
Abstract
Cognition and action are often intertwined in everyday life. It is thus pivotal to understand how cognitive processes operate with concurrent actions. The present study aims to assess how simple physical effort, operationalized as isometric muscle contractions, affects visual attention and inhibitory control. In a dual-task paradigm, participants performed a singleton search task and a handgrip task concurrently. In the search task, the target was a shape singleton among distractors with a homogeneous but different shape. A salient-but-irrelevant distractor with a unique color (i.e., color singleton) appeared on half of the trials (Singleton distractor present condition), and its presence often captures spatial attention. Critically, the visual search task was performed with concurrent handgrip exertion, at 5% or 40% of the participant's maximum strength (low vs. high physical load), on a hand dynamometer. We found that visual search under physical effort is faster, but more vulnerable to distractor interference, potentially due to arousal and reduced inhibitory control, respectively. The two effects further manifest in different aspects of RT distributions that can be captured by different components of the ex-Gaussian model estimated with a hierarchical Bayesian method. Together, these results provide behavioral evidence and a novel model for two dissociable cognitive mechanisms underlying the effects of simple muscle exertion on the ongoing visual search process on a moment-by-moment basis.
Affiliation(s)
- Hyung-Bum Park
- Department of Psychology, University of California, Riverside, USA.
- Shinhae Ahn
- Department of Psychology, Chungbuk National University, Cheongju, Korea
- Weiwei Zhang
- Department of Psychology, University of California, Riverside, USA