1. Quek GL, de Heering A. Visual periodicity reveals distinct attentional signatures for face and non-face categories. Cereb Cortex 2024; 34:bhae228. PMID: 38879816; PMCID: PMC11180377; DOI: 10.1093/cercor/bhae228.
Abstract
Observers can selectively deploy attention to regions of space, moments in time, specific visual features, individual objects, and even specific high-level categories (for example, when keeping an eye out for dogs while jogging). Here, we exploited visual periodicity to examine how category-based attention differentially modulates selective neural processing of face and non-face categories. We combined electroencephalography with a novel frequency-tagging paradigm capable of capturing selective neural responses for multiple visual categories contained within the same rapid image stream (faces/birds in Exp 1; houses/birds in Exp 2). We found that the pattern of attentional enhancement and suppression for face-selective processing is unique compared to other object categories: Where attending to non-face objects strongly enhances their selective neural signals during a later stage of processing (300-500 ms), attentional enhancement of face-selective processing is both earlier and comparatively more modest. Moreover, only the selective neural response for faces appears to be actively suppressed by attending towards an alternate visual category. These results underscore the special status that faces hold within the human visual system, and highlight the utility of visual periodicity as a powerful tool for indexing selective neural processing of multiple visual categories contained within the same image sequence.
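To make the frequency-tagging logic concrete, the sketch below shows one common way a tagged category response can be quantified: the amplitude spectrum of the EEG is computed and the amplitude at the category-presentation frequency is expressed relative to neighbouring noise bins. This is an illustrative example only, not the authors' pipeline; the sampling rate, the 1.2 Hz tagging frequency, and the synthetic single-channel signal are all assumptions.

```python
# Minimal sketch of frequency-tagged response extraction (not the authors' code).
# Assumes a single-channel EEG segment `eeg` sampled at `fs` Hz, with a hypothetical
# category-presentation frequency of 1.2 Hz embedded in a faster base image stream.
import numpy as np

def snr_at_frequency(eeg, fs, target_freq, n_neighbours=10, n_skip=1):
    """Amplitude SNR at target_freq relative to surrounding frequency bins."""
    n = len(eeg)
    amps = np.abs(np.fft.rfft(eeg)) / n               # amplitude spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    target_bin = np.argmin(np.abs(freqs - target_freq))
    # Neighbouring bins on both sides, skipping the bins right next to the target.
    lo = np.arange(target_bin - n_skip - n_neighbours, target_bin - n_skip)
    hi = np.arange(target_bin + n_skip + 1, target_bin + n_skip + 1 + n_neighbours)
    noise = amps[np.concatenate([lo, hi])].mean()
    return amps[target_bin] / noise

fs = 250.0
t = np.arange(0, 60, 1.0 / fs)                        # 60 s of synthetic data
eeg = 0.5 * np.sin(2 * np.pi * 1.2 * t) + np.random.randn(len(t))
print(snr_at_frequency(eeg, fs, target_freq=1.2))
```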
Affiliation(s)
- Genevieve L Quek
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Westmead Innovation Quarter, 160 Hawkesbury Rd, Westmead NSW 2145, Australia
- Adélaïde de Heering
- Unité de Recherche en Neurosciences Cognitives (UNESCOG), ULB Neuroscience Institute (UNI), Center for Research in Cognition & Neurosciences (CRCN), Université libre de Bruxelles (ULB), Avenue Franklin Roosevelt, 50-CP191, 1050 Brussels, Belgium
2. Peelen MV, Berlot E, de Lange FP. Predictive processing of scenes and objects. Nat Rev Psychol 2024; 3:13-26. PMID: 38989004; PMCID: PMC7616164; DOI: 10.1038/s44159-023-00254-0.
Abstract
Real-world visual input consists of rich scenes that are meaningfully composed of multiple objects which interact in complex, but predictable, ways. Despite this complexity, we recognize scenes, and objects within these scenes, from a brief glance at an image. In this review, we synthesize recent behavioral and neural findings that elucidate the mechanisms underlying this impressive ability. First, we review evidence that visual object and scene processing is partly implemented in parallel, allowing for a rapid initial gist of both objects and scenes concurrently. Next, we discuss recent evidence for bidirectional interactions between object and scene processing, with scene information modulating the visual processing of objects, and object information modulating the visual processing of scenes. Finally, we review evidence that objects also combine with each other to form object constellations, modulating the processing of individual objects within the object pathway. Altogether, these findings can be understood by conceptualizing object and scene perception as the outcome of a joint probabilistic inference, in which "best guesses" about objects act as priors for scene perception and vice versa, in order to concurrently optimize visual inference of objects and scenes.
Affiliation(s)
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Eva Berlot
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Floris P de Lange
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
3. Robinson AK, Quek GL, Carlson TA. Visual Representations: Insights from Neural Decoding. Annu Rev Vis Sci 2023; 9:313-335. PMID: 36889254; DOI: 10.1146/annurev-vision-100120-025301.
Abstract
Patterns of brain activity contain meaningful information about the perceived world. Recent decades have welcomed a new era in neural analyses, with computational techniques from machine learning applied to neural data to decode information represented in the brain. In this article, we review how decoding approaches have advanced our understanding of visual representations and discuss efforts to characterize both the complexity and the behavioral relevance of these representations. We outline the current consensus regarding the spatiotemporal structure of visual representations and review recent findings that suggest that visual representations are at once robust to perturbations, yet sensitive to different mental states. Beyond representations of the physical world, recent decoding work has shone a light on how the brain instantiates internally generated states, for example, during imagery and prediction. Going forward, decoding has remarkable potential to assess the functional relevance of visual representations for human behavior, reveal how representations change across development and during aging, and uncover their presentation in various mental disorders.
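As an illustration of the decoding approach discussed in this review, the sketch below trains a linear classifier on simulated multichannel responses at each time point and reports cross-validated accuracy, the basic recipe behind time-resolved decoding. The data shape, classifier choice, and injected effect are assumptions made for the example, not details from the article.

```python
# Minimal sketch of time-resolved neural decoding (illustrative, not from the review).
# Assumes epoched data X of shape (n_trials, n_channels, n_times) and integer
# category labels y; both are simulated here.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 64, 60
y = rng.integers(0, 2, n_trials)                      # two stimulus categories
X = rng.normal(size=(n_trials, n_channels, n_times))
X[y == 1, :, 40:60] += 0.5                            # inject a decodable signal

# Decode category separately at each time point (5-fold cross-validation).
accuracy = np.array([
    cross_val_score(LinearDiscriminantAnalysis(), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
print("peak decoding accuracy:", accuracy.max())
```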
Affiliation(s)
- Amanda K Robinson
- Queensland Brain Institute, The University of Queensland, Brisbane, Australia
- Genevieve L Quek
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
4. The effects of distractors on brightness perception based on a spiking network. Sci Rep 2023; 13:1517. PMID: 36707550; PMCID: PMC9883501; DOI: 10.1038/s41598-023-28326-4.
Abstract
Visual perception can be modified by the surrounding context. In particular, experimental observations have demonstrated that visual perception and primary visual cortical responses can be modified by the properties of surrounding distractors. However, the underlying mechanism remains unclear. In this paper, to simulate primary visual cortical activity, we design a k-winner-take-all (k-WTA) spiking network whose responses are generated through probabilistic inference. In the simulations, images containing the same target but various surrounding distractors serve as stimuli. Distractors are designed with multiple varying properties, including their luminance, size, and distance to the target. Simulations for each property are performed with the other properties fixed. Each property can modify second-layer neural responses and interactions in the network. For the same target in these images, the modified network responses reproduce distinct brightness percepts consistent with experimental observations. Our model provides a possible explanation of how surrounding distractors modify primary visual cortical responses to induce different brightness percepts of a given target.
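The core competitive step of a k-winner-take-all network can be illustrated in a few lines. The toy function below, which is not the authors' spiking model, keeps only the k most strongly driven units active and silences the rest; the drive values and k are arbitrary.

```python
# Toy illustration of a k-winner-take-all (k-WTA) step (not the authors' spiking model):
# only the k most strongly driven units stay active; the rest are silenced.
import numpy as np

def k_wta(drive, k):
    """Return a copy of `drive` in which all but the k largest responses are zeroed."""
    out = np.zeros_like(drive)
    winners = np.argsort(drive)[-k:]
    out[winners] = drive[winners]
    return out

rng = np.random.default_rng(1)
drive = rng.normal(loc=1.0, scale=0.5, size=20)   # feedforward drive to 20 units
print(k_wta(drive, k=5))
```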
5. Concurrent contextual and time-distant mnemonic information co-exist as feedback in the human visual cortex. Neuroimage 2023; 265:119778. PMID: 36462731; PMCID: PMC9878579; DOI: 10.1016/j.neuroimage.2022.119778.
Abstract
Efficient processing of the visual environment necessitates the integration of incoming sensory evidence with concurrent contextual inputs and mnemonic content from our past experiences. To examine how this integration takes place in the brain, we isolated different types of feedback signals from the neural patterns of non-stimulated areas of the early visual cortex in humans (i.e., V1 and V2). Using multivariate pattern analysis, we showed that both contextual and time-distant information coexist in V1 and V2 as feedback signals. In addition, we found that the extent to which mnemonic information is reinstated in V1 and V2 depends on whether the information is retrieved episodically or semantically. Critically, this reinstatement was independent of the retrieval route in the object-selective cortex. These results demonstrate that our early visual processing contains not just direct and indirect information from the visual surroundings, but also memory-based predictions.
6. Turini J, Võ MLH. Hierarchical organization of objects in scenes is reflected in mental representations of objects. Sci Rep 2022; 12:20068. PMID: 36418411; PMCID: PMC9684142; DOI: 10.1038/s41598-022-24505-x.
Abstract
The arrangement of objects in scenes follows certain rules ("Scene Grammar"), which we exploit to perceive and interact efficiently with our environment. We have proposed that Scene Grammar is hierarchically organized: scenes are divided into clusters of objects ("phrases", e.g., the sink phrase); within every phrase, one object ("anchor", e.g., the sink) holds strong predictions about the identity and position of other objects ("local objects", e.g., a toothbrush). To investigate whether this hierarchy is reflected in the mental representations of objects, we collected pairwise similarity judgments for everyday object pictures and for the corresponding words. Similarity judgments were stronger not only for object pairs appearing in the same scene, but also for object pairs appearing within the same phrase of the same scene as opposed to different phrases of the same scene. In addition, object pairs with the same status in the scenes (i.e., both anchors or both local objects) were judged as more similar than pairs of different status. Comparing effects between pictures and words, we found a similar, significant impact of scene hierarchy on the organization of mental representations of objects, independent of stimulus modality. We conclude that the hierarchical structure of the visual environment is incorporated into abstract, domain-general mental representations of the world.
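The central comparison in this design can be sketched with hypothetical data: average the pairwise similarity ratings separately for same-phrase pairs, same-scene/different-phrase pairs, and different-scene pairs. The column names and simulated ratings below are assumptions for illustration, not the study's data.

```python
# Illustrative sketch (hypothetical data, not the study's ratings): comparing mean
# pairwise similarity for object pairs from the same phrase, a different phrase of
# the same scene, or different scenes.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
levels = ["same_phrase", "same_scene_diff_phrase", "different_scene"]
ratings = pd.DataFrame({
    "pair_type": rng.choice(levels, size=300),
    "similarity": rng.uniform(0, 1, size=300),
})
print(ratings.groupby("pair_type")["similarity"].mean())
```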
Affiliation(s)
- Jacopo Turini
- Scene Grammar Lab, Department of Psychology and Sports Sciences, Goethe University, Frankfurt am Main, Germany.
- Scene Grammar Lab, Institut Für Psychologie, PEG, Room 5.G105, Theodor-W.-Adorno Platz 6, 60323, Frankfurt am Main, Germany.
- Melissa Le-Hoa Võ
- Scene Grammar Lab, Department of Psychology and Sports Sciences, Goethe University, Frankfurt am Main, Germany
7. Aminoff EM, Durham T. Scene-selective brain regions respond to embedded objects of a scene. Cereb Cortex 2022; 33:5066-5074. PMID: 36305640; DOI: 10.1093/cercor/bhac399.
Abstract
Objects are fundamental to scene understanding. Scenes are defined by embedded objects and how we interact with them. Paradoxically, scene processing in the brain is typically discussed in contrast to object processing. Using the BOLD5000 dataset (Chang et al., 2019), we examined whether the objects within a scene predicted the neural representation of that scene, as measured by functional magnetic resonance imaging in humans. Stimuli included 1,179 unique scenes across 18 semantic categories. The object composition of scenes was compared across scene exemplars in different semantic scene categories and, separately, across exemplars of the same scene category. Neural representations in scene- and object-preferring brain regions were significantly related to which objects were in a scene, with the effect at times stronger in the scene-preferring regions. The object model accounted for more variance when comparing scenes within the same semantic category than when comparing scenes from different categories. Here, we demonstrate that the function of scene-preferring regions includes the processing of objects. This suggests that visual processing regions may be better characterized by the processes engaged when interacting with a given kind of stimulus (such as processing groups of objects in scenes, or a single object in the foreground) than by the stimulus kind itself.
Affiliation(s)
- Elissa M Aminoff
- Department of Psychology, Fordham University, 226 Dealy Hall, 441 E. Fordham Rd, Bronx, NY 10458, United States
- Tess Durham
- Department of Psychology, Fordham University, 226 Dealy Hall, 441 E. Fordham Rd, Bronx, NY 10458, United States
8. Thorat S, Quek GL, Peelen MV. Statistical learning of distractor co-occurrences facilitates visual search. J Vis 2022; 22:2. PMID: 36053133; PMCID: PMC9440606; DOI: 10.1167/jov.22.10.2.
Abstract
Visual search is facilitated by knowledge of the relationship between the target and the distractors, including both where the target is likely to be among the distractors and how it differs from the distractors. Whether the statistical structure among distractors themselves, unrelated to target properties, facilitates search is less well understood. Here, we assessed the benefit of distractor structure using novel shapes whose relationship to each other was learned implicitly during visual search. Participants searched for target items in arrays of shapes that comprised either four pairs of co-occurring distractor shapes (structured scenes) or eight distractor shapes randomly partitioned into four pairs on each trial (unstructured scenes). Across five online experiments (N = 1,140), we found that after a period of search training, participants were more efficient when searching for targets in structured than unstructured scenes. This structure benefit emerged independently of whether the position of the shapes within each pair was fixed or variable and despite participants having no explicit knowledge of the structured pairs they had seen. These results show that implicitly learned co-occurrence statistics between distractor shapes increase search efficiency. Increased efficiency in the rejection of regularly co-occurring distractors may contribute to the efficiency of visual search in natural scenes, where such regularities are abundant.
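A minimal version of the key behavioural comparison, using simulated data rather than the study's dataset, is a paired test of search times in structured versus unstructured scenes, as sketched below; the effect size and sample size are assumptions.

```python
# Illustrative analysis sketch with simulated data (not the study's dataset): testing
# whether search is faster in structured than unstructured scenes with a paired t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_participants = 40
rt_unstructured = rng.normal(loc=1.50, scale=0.20, size=n_participants)  # seconds
rt_structured = rt_unstructured - rng.normal(loc=0.05, scale=0.05, size=n_participants)

t_val, p_val = stats.ttest_rel(rt_structured, rt_unstructured)
print(f"t({n_participants - 1}) = {t_val:.2f}, p = {p_val:.4f}")
```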
Affiliation(s)
- Sushrut Thorat
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Genevieve L Quek
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands; The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
9. Taubert J, Wardle SG, Tardiff CT, Patterson A, Yu D, Baker CI. Clutter Substantially Reduces Selectivity for Peripheral Faces in the Macaque Brain. J Neurosci 2022; 42:6739-6750. PMID: 35868861; PMCID: PMC9436017; DOI: 10.1523/jneurosci.0232-22.2022.
Abstract
According to a prominent view in neuroscience, visual stimuli are coded by discrete cortical networks that respond preferentially to specific categories, such as faces or objects. However, it remains unclear how these category-selective networks respond when viewing conditions are cluttered, i.e., when there is more than one stimulus in the visual field. Here, we asked three questions: (1) Does clutter reduce the response and selectivity for faces as a function of retinal location? (2) Is the preferential response to faces uniform across the visual field? And (3) Does the ventral visual pathway encode information about the location of cluttered faces? We used fMRI to measure the response of the face-selective network in awake, fixating macaques (two female, five male). Across a series of four experiments, we manipulated the presence and absence of clutter, as well as the location of the faces relative to the fovea. We found that clutter reduces the response to peripheral faces. When presented in isolation, without clutter, the selectivity for faces is fairly uniform across the visual field, but, when clutter is present, there is a marked decrease in the selectivity for peripheral faces. We also found no evidence of a contralateral visual field bias when faces were presented in clutter. Nonetheless, multivariate analyses revealed that the location of cluttered faces could be decoded from the multivoxel response of the face-selective network. Collectively, these findings demonstrate that clutter blunts the selectivity of the face-selective network to peripheral faces, although information about their retinal location is retained.
SIGNIFICANCE STATEMENT: Numerous studies that have measured brain activity in macaques have found visual regions that respond preferentially to faces. Although these regions are thought to be essential for social behavior, their responses have typically been measured while faces were presented in isolation, a situation atypical of the real world. How do these regions respond when faces are presented with other stimuli? We report that, when clutter is present, the preferential response to foveated faces is spared but the preferential response to peripheral faces is reduced. Our results indicate that the presence of clutter changes the response of the face-selective network.
Affiliation(s)
- Jessica Taubert
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, Maryland 20814
- School of Psychology, The University of Queensland, Brisbane, Queensland 4072, Australia
- Susan G Wardle
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, Maryland 20814
- Clarissa T Tardiff
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, Maryland 20814
- Amanda Patterson
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, Maryland 20814
- David Yu
- Neurophysiology Imaging Facility, National Institutes of Health, Bethesda, Maryland 20814
- Chris I Baker
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, Maryland 20814
10. Grootswagers T, McKay H, Varlet M. Unique contributions of perceptual and conceptual humanness to object representations in the human brain. Neuroimage 2022; 257:119350. PMID: 35659994; DOI: 10.1016/j.neuroimage.2022.119350.
Abstract
The human brain is able to quickly and accurately identify objects in a dynamic visual world. Objects evoke different patterns of neural activity in the visual system, which reflect object category memberships. However, the underlying dimensions of object representations in the brain remain unclear. Recent research suggests that objects' similarity to humans is one of the main dimensions used by the brain to organise objects, but the nature of the human-similarity features driving this organisation is still unknown. Here, we investigate the relative contributions of perceptual and conceptual features of humanness to the representational organisation of objects in the human visual system. We collected behavioural judgements of the human-similarity of various objects, which were compared with time-resolved neuroimaging responses to the same objects. The behavioural judgement tasks targeted either perceptual or conceptual humanness features to determine their respective contribution to perceived human-similarity. Behavioural and neuroimaging data revealed significant and unique contributions of both perceptual and conceptual features of humanness, each explaining unique variance in the neuroimaging data. Furthermore, our results showed distinct spatio-temporal dynamics in the processing of conceptual and perceptual humanness features, with later and more lateralised brain responses to conceptual features. This study highlights the critical importance of social requirements in information processing and organisation in the human brain.
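One way to estimate such unique contributions, sketched below with simulated data rather than the study's recordings, is to regress the vectorised neural representational dissimilarity matrix (RDM) on perceptual and conceptual model RDMs simultaneously, so each predictor's weight reflects variance not shared with the other; the RDM sizes and mixing weights are assumptions.

```python
# Minimal sketch (simulated RDMs, not the study's data): estimating the unique
# contributions of perceptual and conceptual "humanness" model RDMs to a neural RDM
# via multiple linear regression on the vectorised upper triangles.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n_objects = 30
triu = np.triu_indices(n_objects, k=1)

def random_rdm():
    m = rng.uniform(size=(n_objects, n_objects))
    m = (m + m.T) / 2                 # symmetric dissimilarities
    np.fill_diagonal(m, 0)
    return m[triu]                    # vectorised upper triangle

perceptual = random_rdm()
conceptual = random_rdm()
neural = 0.6 * perceptual + 0.3 * conceptual + rng.normal(scale=0.1, size=perceptual.shape)

X = np.column_stack([perceptual, conceptual])
model = LinearRegression().fit(X, neural)
print("betas (perceptual, conceptual):", model.coef_)
```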
Affiliation(s)
- Tijl Grootswagers
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, NSW, Australia.
- Harriet McKay
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, NSW, Australia
- Manuel Varlet
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, NSW, Australia
11. Liu X, Liu R, Guo L, Astikainen P, Ye C. Encoding specificity instead of online integration of real-world spatial regularities for objects in working memory. J Vis 2022; 22:8. PMID: 36040269; PMCID: PMC9437652; DOI: 10.1167/jov.22.9.8.
Abstract
Most objects show high degrees of spatial regularity (e.g. beach umbrellas appear above, not under, beach chairs). The spatial regularities of real-world objects benefit visual working memory (VWM), but the mechanisms behind this spatial regularity effect remain unclear. The "encoding specificity" hypothesis suggests that spatial regularity will enhance the visual encoding process but will not facilitate the integration of information online during VWM maintenance. The "perception-alike" hypothesis suggests that spatial regularity will function in both visual encoding and online integration during VWM maintenance. We investigated whether VWM integrates sequentially presented real-world objects by focusing on the existence of the spatial regularity effect. Throughout five experiments, we manipulated the presentation (simultaneous vs. sequential) and regularity (with vs. without regularity) of memory arrays among pairs of real-world objects. The spatial regularity of memory objects presented simultaneously, but not sequentially, improved VWM performance. We also examined whether memory load, verbal suppression and masking, and memory array duration hindered the spatial regularity effect in sequential presentation. We found a stable absence of the spatial regularity effect, suggesting that the participants were unable to integrate real-world objects based on spatial regularities online. Our results support the encoding specificity hypothesis, wherein the spatial regularity of real-world objects can enhance the efficiency of VWM encoding, but VWM cannot exploit spatial regularity to help organize sampled sequential information into meaningful integrations.
Affiliation(s)
- Xinyang Liu
- Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu, China; Department of Psychology, University of Jyvaskyla, Jyväskylä, Finland. ORCID: https://orcid.org/0000-0002-5827-7729
- Ruyi Liu
- Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu, China. ORCID: https://orcid.org/0000-0003-3416-6159
- Lijing Guo
- Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu, China. ORCID: https://orcid.org/0000-0002-2106-0198
- Piia Astikainen
- Department of Psychology, University of Jyvaskyla, Jyväskylä, Finland. ORCID: https://orcid.org/0000-0003-4842-7460
- Chaoxiong Ye
- Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu, China; Department of Psychology, University of Jyvaskyla, Jyväskylä, Finland; Faculty of Social Sciences, Tampere University, Tampere, Finland; Center for Machine Vision and Signal Analysis, University of Oulu, Oulu, Finland. ORCID: https://orcid.org/0000-0002-8301-7582
12. He T, Richter D, Wang Z, de Lange FP. Spatial and Temporal Context Jointly Modulate the Sensory Response within the Ventral Visual Stream. J Cogn Neurosci 2021; 34:332-347. PMID: 34964889; DOI: 10.1162/jocn_a_01792.
Abstract
Both spatial and temporal context play an important role in visual perception and behavior. Humans can extract statistical regularities from both forms of context to help process the present and to construct expectations about the future. Numerous studies have found reduced neural responses to expected stimuli compared with unexpected stimuli, for both spatial and temporal regularities. However, it is largely unclear whether and how these forms of context interact. In the current fMRI study, 33 human volunteers were exposed to pairs of object stimuli that could be expected or surprising in terms of their spatial and temporal context. We found reliable independent contributions of both spatial and temporal context in modulating the neural response. Specifically, neural responses to stimuli in expected compared with unexpected contexts were suppressed throughout the ventral visual stream. These results suggest that both spatial and temporal context may aid sensory processing in a similar fashion, providing evidence on how different types of context jointly modulate perceptual processing.
13. Welbourne LE, Jonnalagadda A, Giesbrecht B, Eckstein MP. The transverse occipital sulcus and intraparietal sulcus show neural selectivity to object-scene size relationships. Commun Biol 2021; 4:768. PMID: 34158579; PMCID: PMC8219818; DOI: 10.1038/s42003-021-02294-9.
Abstract
To optimize visual search, humans attend to objects with the expected size of the sought target relative to its surrounding scene (object-scene scale consistency). We investigate how the human brain responds to variations in object-scene scale consistency. We use functional magnetic resonance imaging and a voxel-wise feature encoding model to estimate tuning to different object/scene properties. We find that regions involved in scene processing (transverse occipital sulcus) and spatial attention (intraparietal sulcus) have the strongest responsiveness and selectivity to object-scene scale consistency: reduced activity to mis-scaled objects (either unusually small or large). The findings show how and where the brain incorporates object-scene size relationships in the processing of scenes. The response properties of these brain areas might explain why during visual search humans often miss objects that are salient but at atypical sizes relative to the surrounding scene.
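A voxel-wise encoding model of the kind mentioned here can be sketched as a regularised regression from stimulus features to voxel responses, evaluated by how well it predicts held-out stimuli. The example below uses simulated features and responses (not the study's feature space or fMRI data), and the ridge penalty is an arbitrary assumption.

```python
# Minimal sketch of a voxel-wise feature encoding model (simulated data): ridge
# regression from stimulus features to voxel responses, evaluated on held-out stimuli.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n_stimuli, n_features, n_voxels = 400, 20, 100
features = rng.normal(size=(n_stimuli, n_features))        # e.g. object/scene properties
weights = rng.normal(size=(n_features, n_voxels))
bold = features @ weights + rng.normal(scale=2.0, size=(n_stimuli, n_voxels))

X_train, X_test, y_train, y_test = train_test_split(features, bold, random_state=0)
enc = Ridge(alpha=10.0).fit(X_train, y_train)

# Per-voxel prediction accuracy: correlation between predicted and measured responses.
pred = enc.predict(X_test)
r = [np.corrcoef(pred[:, v], y_test[:, v])[0, 1] for v in range(n_voxels)]
print("median voxel prediction r:", np.median(r))
```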
Affiliation(s)
- Lauren E Welbourne
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, USA
- Institute for Collaborative Biotechnologies, University of California, Santa Barbara, USA
- York NeuroImaging Centre, Department of Psychology, University of York, York, UK
- Aditya Jonnalagadda
- Electrical and Computer Engineering, University of California, Santa Barbara, USA
- Barry Giesbrecht
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, USA
- Institute for Collaborative Biotechnologies, University of California, Santa Barbara, USA
- Interdepartmental Graduate Program in Dynamical Neuroscience, University of California, Santa Barbara, USA
- Miguel P Eckstein
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, USA
- Institute for Collaborative Biotechnologies, University of California, Santa Barbara, USA
- Electrical and Computer Engineering, University of California, Santa Barbara, USA
- Interdepartmental Graduate Program in Dynamical Neuroscience, University of California, Santa Barbara, USA
14. Willems RM, Peelen MV.
Abstract
Cognitive processes, from basic sensory analysis to language understanding, are typically contextualized. While the importance of considering context for understanding cognition has long been recognized in psychology and philosophy, it has not yet had much impact on cognitive neuroscience research, where cognition is often studied in decontextualized paradigms. Here, we present examples of recent studies showing that context changes the neural basis of diverse cognitive processes, including perception, attention, memory, and language. Within the domains of perception and language, we review neuroimaging results showing that context interacts with stimulus processing, changes activity in classical perception and language regions, and recruits additional brain regions that contribute crucially to naturalistic perception and language. We discuss how contextualized cognitive neuroscience will allow for discovering new principles of the mind and brain.
Affiliation(s)
- Roel M Willems
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands; Centre for Language Studies, Radboud University, Nijmegen, the Netherlands; Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
15. Barnes L, Petit S, Badcock NA, Whyte CJ, Woolgar A. Word Detection in Individual Subjects Is Difficult to Probe With Fast Periodic Visual Stimulation. Front Neurosci 2021; 15:602798. PMID: 33762904; PMCID: PMC7982886; DOI: 10.3389/fnins.2021.602798.
Abstract
Measuring cognition in single subjects presents unique challenges. On the other hand, individually sensitive measurements offer extraordinary opportunities, from informing theoretical models to enabling truly individualised clinical assessment. Here, we test the robustness of fast periodic visual stimulation (FPVS), an emerging method proposed to elicit detectable responses to written words in the electroencephalogram (EEG) of individual subjects. The method is non-invasive, passive, and requires only a few minutes of testing, making it a potentially powerful tool to test comprehension in those who do not speak or who struggle with long testing procedures. In an initial study, Lochy et al. (2015) used FPVS to detect word processing in eight out of 10 fluent French readers. Here, we attempted to replicate their study in a new sample of 10 fluent English readers. Participants viewed rapid streams of pseudo-words with words embedded at regular intervals, while we recorded their EEG. Based on Lochy et al. (2015), we expected that words would elicit a steady-state response at the word-presentation frequency (2 Hz) over parieto-occipital electrode sites. However, across 40 datasets (10 participants, two conditions, and two regions of interest, or ROIs), only four datasets met the criteria for a unique response to words. This corresponds to a 10% detection rate. We conclude that FPVS should be developed further before it can serve as an individually sensitive measure of written word processing.
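A typical FPVS detection criterion, approximated below on synthetic data rather than the study's pipeline, is a z-score of the spectral amplitude at the word-presentation frequency (2 Hz) against surrounding noise bins, with detection declared when the z-score exceeds a one-tailed threshold; the sampling rate, duration, and threshold are assumptions.

```python
# Minimal sketch (synthetic data, not the study's pipeline) of the kind of criterion
# used to declare a detectable word response: a z-score of the amplitude at the
# word-presentation frequency (2 Hz) against neighbouring noise bins.
import numpy as np

def z_at_frequency(signal, fs, target_freq, n_neighbours=10, n_skip=1):
    n = len(signal)
    amps = np.abs(np.fft.rfft(signal)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = np.argmin(np.abs(freqs - target_freq))
    neighbours = np.r_[k - n_skip - n_neighbours:k - n_skip,
                       k + n_skip + 1:k + n_skip + 1 + n_neighbours]
    return (amps[k] - amps[neighbours].mean()) / amps[neighbours].std()

fs = 256.0
t = np.arange(0, 40, 1.0 / fs)                     # 40 s of synthetic EEG
eeg = 0.3 * np.sin(2 * np.pi * 2.0 * t) + np.random.randn(len(t))
print("z at 2 Hz:", z_at_frequency(eeg, fs, 2.0))  # e.g. declare detection if z > 1.64
```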
Affiliation(s)
- Lydia Barnes
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom
- Selene Petit
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom
- Nicholas A Badcock
- Department of Cognitive Science, Macquarie University, Sydney, NSW, Australia; Macquarie Centre for Reading, Macquarie University, Sydney, NSW, Australia; School of Psychological Science, University of Western Australia, Perth, WA, Australia
- Christopher J Whyte
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom
- Alexandra Woolgar
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom; Department of Cognitive Science, Macquarie University, Sydney, NSW, Australia