1
Melcher D, Alaberkyan A, Anastasaki C, Liu X, Deodato M, Marsicano G, Almeida D. An early effect of the parafoveal preview on post-saccadic processing of English words. Atten Percept Psychophys 2024. PMID: 38956003; DOI: 10.3758/s13414-024-02916-4.
Abstract
A key aspect of efficient visual processing is to use current and previous information to make predictions about what we will see next. In natural viewing, and when looking at words, there is typically an indication of forthcoming visual information from extrafoveal areas of the visual field before we make an eye movement to an object or word of interest. This "preview effect" has been studied for many years in the word reading literature and, more recently, in object perception. Here, we integrated methods from word recognition and object perception to investigate the timing of the preview on neural measures of word recognition. Through a combined use of EEG and eye-tracking, a group of multilingual participants took part in a gaze-contingent, single-shot saccade experiment in which words appeared in their parafoveal visual field. In valid preview trials, the same word was presented during the preview and after the saccade, while in the invalid condition, the saccade target was a number string that turned into a word during the saccade. As hypothesized, the valid preview greatly reduced the fixation-related evoked response. Interestingly, multivariate decoding analyses revealed much earlier preview effects than previously reported for words, and individual decoding performance correlated with participant reading scores. These results demonstrate that a parafoveal preview can influence relatively early aspects of post-saccadic word processing and help to resolve some discrepancies between the word and object literatures.
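For readers unfamiliar with the kind of time-resolved multivariate decoding described above, the following is a minimal sketch of the general logic, not the authors' pipeline: it decodes preview validity from simulated fixation-locked epochs with a cross-validated LDA classifier and then relates a hypothetical per-participant decoding score to reading scores. The array sizes, the classifier, the 0.55 threshold, and all data are illustrative assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Simulated fixation-locked epochs: trials x channels x timepoints
n_trials, n_channels, n_times = 200, 64, 120
X = rng.normal(size=(n_trials, n_channels, n_times))
y = rng.integers(0, 2, size=n_trials)          # 0 = valid preview, 1 = invalid preview

# Decode preview validity at each timepoint with cross-validated LDA
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
accuracy = np.empty(n_times)
for t in range(n_times):
    clf = LinearDiscriminantAnalysis()
    accuracy[t] = cross_val_score(clf, X[:, :, t], y, cv=cv).mean()

onset = np.argmax(accuracy > 0.55)  # crude estimate of when decoding first exceeds a threshold
print(f"peak accuracy {accuracy.max():.2f}, first sample above 0.55: {onset}")

# Relate individual decoding performance to reading scores (hypothetical per-subject values)
subject_peak_accuracy = rng.uniform(0.5, 0.8, size=20)
reading_scores = subject_peak_accuracy * 40 + rng.normal(scale=3, size=20)
r, p = pearsonr(subject_peak_accuracy, reading_scores)
print(f"decoding-reading correlation r = {r:.2f}, p = {p:.3f}")
```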
Affiliation(s)
- David Melcher
- Psychology Program, Division of Science, New York University Abu Dhabi, PO Box 129188, Abu Dhabi, United Arab Emirates.
- Center for Brain and Health, NYUAD Research Institute, New York University Abu Dhabi, PO Box 129188, Abu Dhabi, United Arab Emirates.
- Ani Alaberkyan
- Psychology Program, Division of Science, New York University Abu Dhabi, PO Box 129188, Abu Dhabi, United Arab Emirates
- Chrysi Anastasaki
- Psychology Program, Division of Science, New York University Abu Dhabi, PO Box 129188, Abu Dhabi, United Arab Emirates
- Xiaoyi Liu
- Psychology Program, Division of Science, New York University Abu Dhabi, PO Box 129188, Abu Dhabi, United Arab Emirates
- Department of Psychology, Princeton University, Washington Rd, Princeton, NJ, 08540, USA
- Michele Deodato
- Psychology Program, Division of Science, New York University Abu Dhabi, PO Box 129188, Abu Dhabi, United Arab Emirates
- Center for Brain and Health, NYUAD Research Institute, New York University Abu Dhabi, PO Box 129188, Abu Dhabi, United Arab Emirates
- Gianluca Marsicano
- Department of Psychology, University of Bologna, Viale Berti Pichat 5, 40121, Bologna, Italy
- Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Via Rasi e Spinelli 176, 47023, Cesena, Italy
- Diogo Almeida
- Psychology Program, Division of Science, New York University Abu Dhabi, PO Box 129188, Abu Dhabi, United Arab Emirates
2
Teichmann L, Hebart MN, Baker CI. Dynamic representation of multidimensional object properties in the human brain. bioRxiv [Preprint] 2024:2023.09.08.556679. PMID: 37745325; PMCID: PMC10515754; DOI: 10.1101/2023.09.08.556679.
Abstract
Our visual world consists of an immense number of unique objects and yet, we are easily able to identify, distinguish, interact with, and reason about the things we see within a few hundred milliseconds. This requires that we integrate and focus on a wide array of object properties to support specific behavioral goals. In the current study, we examined how these rich object representations unfold in the human brain by modelling time-resolved MEG signals evoked by viewing single presentations of tens of thousands of object images. Based on millions of behavioral judgments, the object space can be captured in 66 dimensions that we use to guide our understanding of the neural representation of this space. We find that all dimensions are reflected in the time course of the response, with distinct temporal profiles for different object dimensions. These profiles fell into two broad types, with either a distinct and early peak (~125 ms) or a slow rise to a late peak (~300 ms). Further, early effects were stable across participants, in contrast to later effects which showed more variability, suggesting that early peaks may carry stimulus-specific and later peaks more participant-specific information. Dimensions with early peaks appeared to be primarily visual dimensions and those with later peaks more conceptual, suggesting that conceptual representations are more variable across people. Together, these data provide a comprehensive account of how behaviorally-relevant object properties unfold in the human brain and contribute to the rich nature of object vision.
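A minimal sketch of how time-resolved neural signals can be related to a behaviourally derived object embedding, in the spirit of the analysis described above but not the authors' actual method: simulated MEG trials are regressed against simulated dimension values at each timepoint. The data sizes (a handful of dimensions standing in for the full 66), the RidgeCV model, and the correlation metric are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold, cross_val_predict
from scipy.stats import pearsonr

rng = np.random.default_rng(1)

# Simulated single-trial MEG data (trials x sensors x timepoints) and a behaviourally
# derived object embedding (trials x dimensions); a few dimensions stand in for the full 66
n_trials, n_sensors, n_times, n_dims = 200, 50, 30, 10
meg = rng.normal(size=(n_trials, n_sensors, n_times))
dims = rng.normal(size=(n_trials, n_dims))

cv = KFold(n_splits=5, shuffle=True, random_state=1)
timecourse = np.zeros((n_dims, n_times))   # cross-validated decodability of each dimension over time

for t in range(n_times):
    for d in range(n_dims):
        model = RidgeCV(alphas=np.logspace(-2, 4, 7))
        predicted = cross_val_predict(model, meg[:, :, t], dims[:, d], cv=cv)
        timecourse[d, t] = pearsonr(predicted, dims[:, d])[0]

# Dimensions with early vs. late peaks can then be compared by their peak latencies
peak_latency = timecourse.argmax(axis=1)
print("peak sample per dimension:", peak_latency)
```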
Affiliation(s)
- Lina Teichmann
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda MD, USA
- Martin N Hebart
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda MD, USA
- Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Department of Medicine, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), Universities of Marburg, Giessen, and Darmstadt, Germany
- Chris I Baker
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda MD, USA
3
Grootswagers T, Robinson AK, Shatek SM, Carlson TA. Mapping the dynamics of visual feature coding: Insights into perception and integration. PLoS Comput Biol 2024; 20:e1011760. PMID: 38190390; PMCID: PMC10798643; DOI: 10.1371/journal.pcbi.1011760.
Abstract
The basic computations performed in the human early visual cortex are the foundation for visual perception. While we know a lot about these computations, a key missing piece is how the coding of visual features relates to our perception of the environment. To investigate visual feature coding, interactions, and their relationship to human perception, we investigated neural responses and perceptual similarity judgements to a large set of visual stimuli that varied parametrically along four feature dimensions. We measured neural responses using electroencephalography (N = 16) to 256 grating stimuli that varied in orientation, spatial frequency, contrast, and colour. We then mapped the response profiles of the neural coding of each visual feature and their interactions, and related these to independently obtained behavioural judgements of stimulus similarity. The results confirmed fundamental principles of feature coding in the visual system, such that all four features were processed simultaneously but differed in their dynamics, and there was distinctive conjunction coding for different combinations of features in the neural responses. Importantly, modelling of the behaviour revealed that every stimulus feature contributed to perceptual judgements, despite the untargeted nature of the behavioural task. Further, the relationship between neural coding and behaviour was evident from initial processing stages, signifying that the fundamental features, not just their interactions, contribute to perception. This study highlights the importance of understanding how feature coding progresses through the visual hierarchy and the relationship between different stages of processing and perception.
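A minimal representational-similarity sketch of how time-resolved neural responses to a parametric stimulus set can be related to behavioural similarity judgements. The data are random placeholders, and the correlation-distance neural RDM and Spearman comparison are assumptions for illustration, not the authors' exact analysis.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)

# Trial-averaged EEG response per stimulus: stimuli x channels x timepoints
n_stimuli, n_channels, n_times = 64, 64, 100   # the study used 256 gratings; 64 keeps the sketch light
eeg = rng.normal(size=(n_stimuli, n_channels, n_times))

# Behavioural dissimilarity between stimuli (e.g. from similarity judgements),
# given here as distances between random feature vectors purely for illustration
behaviour_rdm = pdist(rng.normal(size=(n_stimuli, 4)))   # condensed vector, one value per stimulus pair

# Correlate the neural RDM with the behavioural RDM at every timepoint
neural_behaviour_corr = np.empty(n_times)
for t in range(n_times):
    neural_rdm = pdist(eeg[:, :, t], metric="correlation")
    neural_behaviour_corr[t] = spearmanr(neural_rdm, behaviour_rdm).correlation

print("peak neural-behaviour correlation at sample", neural_behaviour_corr.argmax())
```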
Affiliation(s)
- Tijl Grootswagers
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
- School of Computer, Data and Mathematical Sciences, Western Sydney University, Sydney, Australia
- Amanda K. Robinson
- Queensland Brain Institute, The University of Queensland, Brisbane, Australia
- Sophia M. Shatek
- School of Psychology, The University of Sydney, Sydney, Australia
4
Robinson AK, Quek GL, Carlson TA. Visual Representations: Insights from Neural Decoding. Annu Rev Vis Sci 2023; 9:313-335. PMID: 36889254; DOI: 10.1146/annurev-vision-100120-025301.
Abstract
Patterns of brain activity contain meaningful information about the perceived world. Recent decades have welcomed a new era in neural analyses, with computational techniques from machine learning applied to neural data to decode information represented in the brain. In this article, we review how decoding approaches have advanced our understanding of visual representations and discuss efforts to characterize both the complexity and the behavioral relevance of these representations. We outline the current consensus regarding the spatiotemporal structure of visual representations and review recent findings that suggest that visual representations are at once robust to perturbations, yet sensitive to different mental states. Beyond representations of the physical world, recent decoding work has shone a light on how the brain instantiates internally generated states, for example, during imagery and prediction. Going forward, decoding has remarkable potential to assess the functional relevance of visual representations for human behavior, reveal how representations change across development and during aging, and uncover their presentation in various mental disorders.
Affiliation(s)
- Amanda K Robinson
- Queensland Brain Institute, The University of Queensland, Brisbane, Australia;
- Genevieve L Quek
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia;
5
Smit S, Moerel D, Zopf R, Rich AN. Vicarious touch: Overlapping neural patterns between seeing and feeling touch. Neuroimage 2023; 278:120269. PMID: 37423272; DOI: 10.1016/j.neuroimage.2023.120269.
Abstract
Simulation theories propose that vicarious touch arises when seeing someone else being touched triggers corresponding representations of being touched. Prior electroencephalography (EEG) findings show that seeing touch modulates both early and late somatosensory responses (measured with or without direct tactile stimulation). Functional Magnetic Resonance Imaging (fMRI) studies have shown that seeing touch increases somatosensory cortical activation. These findings have been taken to suggest that when we see someone being touched, we simulate that touch in our sensory systems. The somatosensory overlap when seeing and feeling touch differs between individuals, potentially underpinning variation in vicarious touch experiences. Increases in amplitude (EEG) or cerebral blood flow response (fMRI), however, are limited in that they cannot test for the information contained in the neural signal: seeing touch may not activate the same information as feeling touch. Here, we use time-resolved multivariate pattern analysis on whole-brain EEG data from people with and without vicarious touch experiences to test whether seen touch evokes overlapping neural representations with the first-hand experience of touch. Participants felt touch to the fingers (tactile trials) or watched carefully matched videos of touch to another person's fingers (visual trials). In both groups, EEG was sufficiently sensitive to allow decoding of touch location (little finger vs. thumb) on tactile trials. However, only in individuals who reported feeling touch when watching videos of touch could a classifier trained on tactile trials distinguish touch location on visual trials. This demonstrates that, for people who experience vicarious touch, there is overlap in the information about touch location held in the neural patterns when seeing and feeling touch. The timecourse of this overlap implies that seeing touch evokes similar representations to later stages of tactile processing. Therefore, while simulation may underlie vicarious tactile sensations, our findings suggest this involves an abstracted representation of directly felt touch.
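A minimal sketch of the cross-decoding logic described above (train on felt touch, test on seen touch), using simulated epochs. The channel counts, the LDA classifier, and the trial structure are illustrative assumptions rather than the study's actual parameters.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)

# Simulated epochs: trials x channels x timepoints; labels = touch location (0 = thumb, 1 = little finger)
n_trials, n_channels, n_times = 200, 64, 150
tactile_X = rng.normal(size=(n_trials, n_channels, n_times))
tactile_y = rng.integers(0, 2, size=n_trials)
visual_X = rng.normal(size=(n_trials, n_channels, n_times))
visual_y = rng.integers(0, 2, size=n_trials)

# Train the classifier on felt touch and test it on seen touch, timepoint by timepoint
cross_accuracy = np.empty(n_times)
for t in range(n_times):
    clf = LinearDiscriminantAnalysis().fit(tactile_X[:, :, t], tactile_y)
    cross_accuracy[t] = clf.score(visual_X[:, :, t], visual_y)

# Sustained above-chance cross-decoding would indicate shared information about touch location
print(f"max tactile-to-visual cross-decoding accuracy: {cross_accuracy.max():.2f}")
```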
Affiliation(s)
- Sophie Smit
- Perception in Action Research Centre & School of Psychological Sciences, Macquarie University, 16 University Ave, NSW 2109, Australia.
- Denise Moerel
- Perception in Action Research Centre & School of Psychological Sciences, Macquarie University, 16 University Ave, NSW 2109, Australia; School of Psychology, The University of Sydney, Griffith Taylor Building A19, Camperdown, NSW 2050, Australia
- Regine Zopf
- Department of Psychosomatic Medicine and Psychotherapy, Jena University Hospital, Philosophenweg 3, Jena 07743, Federal Republic of Germany
- Anina N Rich
- Perception in Action Research Centre & School of Psychological Sciences, Macquarie University, 16 University Ave, NSW 2109, Australia
6
Kramer LE, Chen YC, Long B, Konkle T, Cohen MR. Contributions of early and mid-level visual cortex to high-level object categorization. bioRxiv [Preprint] 2023:2023.05.31.541514. PMID: 37398251; PMCID: PMC10312552; DOI: 10.1101/2023.05.31.541514.
Abstract
The complexity of visual features for which neurons are tuned increases from early to late stages of the ventral visual stream. Thus, the standard hypothesis is that high-level functions like object categorization are primarily mediated by higher visual areas because they require more complex image formats that are not evident in early visual processing stages. However, human observers can categorize images as objects or animals or as big or small even when the images preserve only some low- and mid-level features but are rendered unidentifiable ('texforms', Long et al., 2018). This observation suggests that even the early visual cortex, in which neurons respond to simple stimulus features, may already encode signals about these more abstract high-level categorical distinctions. We tested this hypothesis by recording from populations of neurons in early and mid-level visual cortical areas while rhesus monkeys viewed texforms and their unaltered source stimuli (simultaneous recordings from areas V1 and V4 in one animal and separate recordings from V1 and V4 in two others). Using recordings from a few dozen neurons, we could decode the real-world size and animacy of both unaltered images and texforms. Furthermore, this neural decoding accuracy across stimuli was related to the ability of human observers to categorize texforms by real-world size and animacy. Our results demonstrate that neuronal populations early in the visual hierarchy contain signals useful for higher-level object perception and suggest that the responses of early visual areas to simple stimulus features display preliminary untangling of higher-level distinctions.
Affiliation(s)
- Bria Long
- University of California, Los Angeles
7
Wamain Y, Garric C, Lenoble Q. Dynamics of low-pass-filtered object categories: A decoding approach to ERP recordings. Vision Res 2023; 204:108165. PMID: 36584582; DOI: 10.1016/j.visres.2022.108165.
Abstract
Rapid analysis of low spatial frequencies (LSFs) in the brain conveys the global shape of the object and allows for rapid expectations about the visual input. Evidence has suggested that LSF processing differs as a function of the semantic category to be identified. The present study sought to specify the neural dynamics of the LSF contribution to the rapid object representation of living versus non-living objects. In this EEG experiment, participants had to categorize an object displayed at different spatial frequencies (LSF or non-filtered). Behavioral results showed an advantage for living versus non-living objects and a decrease in performance only for LSF pictures of furniture. Moreover, despite a difference in classification performance between LSF and non-filtered pictures for living items, behavioral performance was maintained, which suggests that classification under our specific condition can be based on LSF information, in particular for living items.
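A minimal sketch of one common way to construct low-spatial-frequency stimuli by removing high spatial frequencies in the Fourier domain. The hard cutoff of 8 cycles per image and the random "image" are illustrative assumptions, not the filtering used in the study.

```python
import numpy as np

def low_pass_filter(image, cutoff_cycles_per_image):
    """Keep only spatial frequencies below the cutoff (in cycles per image)."""
    freq = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    y, x = np.ogrid[:rows, :cols]
    # Distance of each frequency component from the centre, in cycles per image
    radius = np.sqrt((y - rows / 2) ** 2 + (x - cols / 2) ** 2)
    freq[radius > cutoff_cycles_per_image] = 0          # hard cutoff; Gaussian roll-offs are also common
    return np.real(np.fft.ifft2(np.fft.ifftshift(freq)))

# Example: a random "image" filtered to keep only coarse structure
rng = np.random.default_rng(4)
img = rng.normal(size=(256, 256))
lsf_img = low_pass_filter(img, cutoff_cycles_per_image=8)
print("variance before/after filtering:", img.var().round(2), lsf_img.var().round(4))
```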
Affiliation(s)
- Yannick Wamain
- Univ. Lille, CNRS, UMR 9193 - SCALab - Sciences Cognitives et Sciences Affectives, F-59000 Lille, France.
- Clémentine Garric
- Univ. Lille, Inserm, CHU Lille, U1172 - LilNCog - Lille Neuroscience & Cognition, F-59000 Lille, France
- Quentin Lenoble
- Univ. Lille, Inserm, CHU Lille, U1172 - LilNCog - Lille Neuroscience & Cognition, F-59000 Lille, France
8
Hebart MN, Contier O, Teichmann L, Rockter AH, Zheng CY, Kidder A, Corriveau A, Vaziri-Pashkam M, Baker CI. THINGS-data, a multimodal collection of large-scale datasets for investigating object representations in human brain and behavior. eLife 2023; 12:e82580. PMID: 36847339; PMCID: PMC10038662; DOI: 10.7554/elife.82580.
Abstract
Understanding object representations requires a broad, comprehensive sampling of the objects in our visual world with dense measurements of brain activity and behavior. Here, we present THINGS-data, a multimodal collection of large-scale neuroimaging and behavioral datasets in humans, comprising densely sampled functional MRI and magnetoencephalographic recordings, as well as 4.70 million similarity judgments in response to thousands of photographic images for up to 1,854 object concepts. THINGS-data is unique in its breadth of richly annotated objects, allowing for testing countless hypotheses at scale while assessing the reproducibility of previous findings. Beyond the unique insights promised by each individual dataset, the multimodality of THINGS-data allows combining datasets for a much broader view into object processing than previously possible. Our analyses demonstrate the high quality of the datasets and provide five examples of hypothesis-driven and data-driven applications. THINGS-data constitutes the core public release of the THINGS initiative (https://things-initiative.org) for bridging the gap between disciplines and the advancement of cognitive neuroscience.
Affiliation(s)
- Martin N Hebart
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
- Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Department of Medicine, Justus Liebig University Giessen, Giessen, Germany
- Oliver Contier
- Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Max Planck School of Cognition, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Lina Teichmann
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
- Adam H Rockter
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
- Charles Y Zheng
- Machine Learning Core, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
- Alexis Kidder
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
- Anna Corriveau
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
- Maryam Vaziri-Pashkam
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
- Chris I Baker
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
9
Guidolin A, Desroches M, Victor JD, Purpura KP, Rodrigues S. Geometry of spiking patterns in early visual cortex: a topological data analytic approach. J R Soc Interface 2022; 19:20220677. PMID: 36382589; PMCID: PMC9667368; DOI: 10.1098/rsif.2022.0677.
Abstract
In the brain, spiking patterns live in a high-dimensional space of neurons and time. Thus, determining the intrinsic structure of this space presents a theoretical and experimental challenge. To address this challenge, we introduce a new framework for applying topological data analysis (TDA) to spike train data and use it to determine the geometry of spiking patterns in the visual cortex. Key to our approach is a parametrized family of distances based on the timing of spikes that quantifies the dissimilarity between neuronal responses. We applied TDA to visually driven single-unit and multiple single-unit spiking activity in macaque V1 and V2. TDA across timescales reveals a common geometry for spiking patterns in V1 and V2 which, among simple models, is most similar to that of a low-dimensional space endowed with Euclidean or hyperbolic geometry with modest curvature. Remarkably, the inferred geometry depends on timescale and is clearest for the timescales that are important for encoding contrast, orientation and spatial correlations.
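A minimal sketch of the general pipeline of computing spike-timing-based distances between spike trains and feeding the resulting distance matrix into persistent homology. The Victor-Purpura distance with a single q value stands in for the paper's parametrized family of distances, the spike trains are simulated, and the ripser call assumes the ripser package is installed.

```python
import numpy as np
from ripser import ripser   # pip install ripser

def victor_purpura(a, b, q):
    """Victor-Purpura spike-time distance between two spike trains (arrays of spike times)."""
    na, nb = len(a), len(b)
    d = np.zeros((na + 1, nb + 1))
    d[:, 0] = np.arange(na + 1)          # deleting or inserting a spike costs 1
    d[0, :] = np.arange(nb + 1)
    for i in range(1, na + 1):
        for j in range(1, nb + 1):
            d[i, j] = min(d[i - 1, j] + 1,
                          d[i, j - 1] + 1,
                          d[i - 1, j - 1] + q * abs(a[i - 1] - b[j - 1]))   # shifting costs q * |dt|
    return d[na, nb]

rng = np.random.default_rng(5)
# Simulated responses: one spike train (spike times in seconds) per stimulus presentation
trains = [np.sort(rng.uniform(0, 1, size=rng.integers(5, 15))) for _ in range(40)]

q = 16.0   # 1/q sets the timescale (in seconds) at which spike timing matters
n = len(trains)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = victor_purpura(trains[i], trains[j], q)

# Topological data analysis on the distance matrix: persistence diagrams in dimensions 0 and 1
diagrams = ripser(dist, distance_matrix=True, maxdim=1)["dgms"]
print("number of H1 features:", len(diagrams[1]))
```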
Affiliation(s)
- Andrea Guidolin
- MCEN Team, BCAM – Basque Center for Applied Mathematics, 48009 Bilbao, Basque Country, Spain
- Department of Mathematics, KTH Royal Institute of Technology, SE-100 44 Stockholm, Sweden
- Mathieu Desroches
- MathNeuro Team, Inria at Université Côte d’Azur, 06902 Sophia Antipolis, France
- Jonathan D. Victor
- Feil Family Brain and Mind Research Institute, Weill Cornell Medical College, New York, NY 10065, USA
- Keith P. Purpura
- Feil Family Brain and Mind Research Institute, Weill Cornell Medical College, New York, NY 10065, USA
- Serafim Rodrigues
- MCEN Team, BCAM – Basque Center for Applied Mathematics, 48009 Bilbao, Basque Country, Spain
- Ikerbasque – The Basque Foundation for Science, 48009 Bilbao, Basque Country, Spain
10
Grootswagers T, McKay H, Varlet M. Unique contributions of perceptual and conceptual humanness to object representations in the human brain. Neuroimage 2022; 257:119350. PMID: 35659994; DOI: 10.1016/j.neuroimage.2022.119350.
Abstract
The human brain is able to quickly and accurately identify objects in a dynamic visual world. Objects evoke different patterns of neural activity in the visual system, which reflect object category memberships. However, the underlying dimensions of object representations in the brain remain unclear. Recent research suggests that objects' similarity to humans is one of the main dimensions used by the brain to organise objects, but the nature of the human-similarity features driving this organisation is still unknown. Here, we investigate the relative contributions of perceptual and conceptual features of humanness to the representational organisation of objects in the human visual system. We collected behavioural judgements of human-similarity of various objects, which were compared with time-resolved neuroimaging responses to the same objects. The behavioural judgement tasks targeted either perceptual or conceptual humanness features to determine their respective contribution to perceived human-similarity. Behavioural and neuroimaging data revealed significant and unique contributions of both perceptual and conceptual features of humanness, each explaining unique variance in neuroimaging data. Furthermore, our results showed distinct spatio-temporal dynamics in the processing of conceptual and perceptual humanness features, with later and more lateralised brain responses to conceptual features. This study highlights the critical importance of social requirements in information processing and organisation in the human brain.
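A minimal variance-partitioning sketch showing how the unique and shared contributions of two model RDMs (perceptual and conceptual humanness) to a neural RDM can be estimated. The condensed RDMs here are random placeholders, and the simple in-sample R-squared partitioning is an illustrative stand-in for the study's actual modelling.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)

# Condensed RDMs (one dissimilarity value per stimulus pair) for the neural data
# and for two behavioural models of humanness; values here are random placeholders
n_pairs = 1000
neural = rng.normal(size=n_pairs)
perceptual = rng.normal(size=n_pairs)
conceptual = 0.5 * perceptual + rng.normal(size=n_pairs)   # the two models typically correlate

def r_squared(predictors, target):
    X = np.column_stack(predictors)
    return LinearRegression().fit(X, target).score(X, target)

full = r_squared([perceptual, conceptual], neural)
unique_perceptual = full - r_squared([conceptual], neural)   # variance only the perceptual model explains
unique_conceptual = full - r_squared([perceptual], neural)
shared = full - unique_perceptual - unique_conceptual

print(f"unique perceptual: {unique_perceptual:.3f}, unique conceptual: {unique_conceptual:.3f}, shared: {shared:.3f}")
```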
Affiliation(s)
- Tijl Grootswagers
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, NSW, Australia.
- Harriet McKay
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, NSW, Australia
- Manuel Varlet
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, NSW, Australia
11
Wang R, Janini D, Konkle T. Mid-level Feature Differences Support Early Animacy and Object Size Distinctions: Evidence from Electroencephalography Decoding. J Cogn Neurosci 2022; 34:1670-1680. PMID: 35704550; PMCID: PMC9438936; DOI: 10.1162/jocn_a_01883.
Abstract
Responses to visually presented objects along the cortical surface of the human brain have a large-scale organization reflecting the broad categorical divisions of animacy and object size. Emerging evidence indicates that this topographical organization is supported by differences between objects in mid-level perceptual features. With regard to the timing of neural responses, images of objects quickly evoke neural responses with decodable information about animacy and object size, but are mid-level features sufficient to evoke these rapid neural responses? Or is slower iterative neural processing required to untangle information about animacy and object size from mid-level features, requiring hundreds of milliseconds more processing time? To answer this question, we used EEG to measure human neural responses to images of objects and their texform counterparts-unrecognizable images that preserve some mid-level feature information about texture and coarse form. We found that texform images evoked neural responses with early decodable information about both animacy and real-world size, as early as responses evoked by original images. Furthermore, successful cross-decoding indicates that both texform and original images evoke information about animacy and size through a common underlying neural basis. Broadly, these results indicate that the visual system contains a mid-level feature bank carrying linearly decodable information on animacy and size, which can be rapidly activated without requiring explicit recognition or protracted temporal processing.
12
Shatek SM, Robinson AK, Grootswagers T, Carlson TA. Capacity for movement is an organisational principle in object representations. Neuroimage 2022; 261:119517. PMID: 35901917; DOI: 10.1016/j.neuroimage.2022.119517.
Abstract
The ability to perceive moving objects is crucial for threat identification and survival. Recent neuroimaging evidence has shown that goal-directed movement is an important element of object processing in the brain. However, prior work has primarily used moving stimuli that are also animate, making it difficult to disentangle the effect of movement from aliveness or animacy in representational categorisation. In the current study, we investigated the relationship between how the brain processes movement and aliveness by including stimuli that are alive but still (e.g., plants), and stimuli that are not alive but move (e.g., waves). We examined electroencephalographic (EEG) data recorded while participants viewed static images of moving or non-moving objects that were either natural or artificial. Participants classified the images according to aliveness, or according to capacity for movement. Movement explained significant variance in the neural data over and above that of aliveness, showing that capacity for movement is an important dimension in the representation of visual objects in humans.
Affiliation(s)
- Sophia M Shatek
- School of Psychology, University of Sydney, Camperdown, NSW 2006, Australia.
- Amanda K Robinson
- School of Psychology, University of Sydney, Camperdown, NSW 2006, Australia; Queensland Brain Institute, The University of Queensland, QLD, Australia
- Tijl Grootswagers
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Australia
- Thomas A Carlson
- School of Psychology, University of Sydney, Camperdown, NSW 2006, Australia
13
Gurariy G, Mruczek REB, Snow JC, Caplovitz GP. Using High-Density Electroencephalography to Explore Spatiotemporal Representations of Object Categories in Visual Cortex. J Cogn Neurosci 2022; 34:967-987. PMID: 35286384; PMCID: PMC9169880; DOI: 10.1162/jocn_a_01845.
Abstract
Visual object perception involves neural processes that unfold over time and recruit multiple regions of the brain. Here, we use high-density EEG to investigate the spatiotemporal representations of object categories across the dorsal and ventral pathways. In Experiment 1, human participants were presented with images from two animate object categories (birds and insects) and two inanimate categories (tools and graspable objects). In Experiment 2, participants viewed images of tools and graspable objects from a different stimulus set, one in which a shape confound that often exists between these categories (elongation) was controlled for. To explore the temporal dynamics of object representations, we employed time-resolved multivariate pattern analysis on the EEG time series data. This was performed at the electrode level as well as in source space of two regions of interest: one encompassing the ventral pathway and another encompassing the dorsal pathway. Our results demonstrate that shape, exemplar, and category information can be decoded from the EEG signal. Multivariate pattern analysis within source space revealed that both dorsal and ventral pathways contain information pertaining to shape, inanimate object categories, and animate object categories. Of particular interest, we note striking similarities obtained in both ventral stream and dorsal stream regions of interest. These findings provide insight into the spatiotemporal dynamics of object representation and contribute to a growing literature that has begun to redefine the traditional role of the dorsal pathway.
14
Moerel D, Grootswagers T, Robinson AK, Shatek SM, Woolgar A, Carlson TA, Rich AN. The time-course of feature-based attention effects dissociated from temporal expectation and target-related processes. Sci Rep 2022; 12:6968. PMID: 35484363; PMCID: PMC9050682; DOI: 10.1038/s41598-022-10687-x.
Abstract
Selective attention prioritises relevant information amongst competing sensory input. Time-resolved electrophysiological studies have shown stronger representation of attended compared to unattended stimuli, which has been interpreted as an effect of attention on information coding. However, because attention is often manipulated by making only the attended stimulus a target to be remembered and/or responded to, many reported attention effects have been confounded with target-related processes such as visual short-term memory or decision-making. In addition, attention effects could be influenced by temporal expectation about when something is likely to happen. The aim of this study was to investigate the dynamic effect of attention on visual processing using multivariate pattern analysis of electroencephalography (EEG) data, while (1) controlling for target-related confounds, and (2) directly investigating the influence of temporal expectation. Participants viewed rapid sequences of overlaid oriented grating pairs while detecting a "target" grating of a particular orientation. We manipulated attention, one grating was attended and the other ignored (cued by colour), and temporal expectation, with stimulus onset timing either predictable or not. We controlled for target-related processing confounds by only analysing non-target trials. Both attended and ignored gratings were initially coded equally in the pattern of responses across EEG sensors. An effect of attention, with preferential coding of the attended stimulus, emerged approximately 230 ms after stimulus onset. This attention effect occurred even when controlling for target-related processing confounds, and regardless of stimulus onset expectation. These results provide insight into the effect of feature-based attention on the dynamic processing of competing visual information.
Affiliation(s)
- Denise Moerel
- School of Psychological Sciences, Macquarie University, Sydney, Australia.
- Perception in Action Research Centre, Macquarie University, Sydney, Australia.
- School of Psychology, University of Sydney, Sydney, Australia.
- Tijl Grootswagers
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
- School of Psychology, University of Sydney, Sydney, Australia
- Amanda K Robinson
- School of Psychology, University of Sydney, Sydney, Australia
- Queensland Brain Institute, The University of Queensland, Brisbane, Australia
- Sophia M Shatek
- School of Psychology, University of Sydney, Sydney, Australia
- Alexandra Woolgar
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
- Anina N Rich
- School of Psychological Sciences, Macquarie University, Sydney, Australia
- Perception in Action Research Centre, Macquarie University, Sydney, Australia
- Centre for Elite Performance, Expertise and Training, Macquarie University, Sydney, Australia
15
Bruera A, Poesio M. Exploring the Representations of Individual Entities in the Brain Combining EEG and Distributional Semantics. Front Artif Intell 2022; 5:796793. PMID: 35280237; PMCID: PMC8905499; DOI: 10.3389/frai.2022.796793.
Abstract
Semantic knowledge about individual entities (i.e., the referents of proper names such as Jacinda Ardern) is fine-grained, episodic, and strongly social in nature, when compared with knowledge about generic entities (the referents of common nouns such as politician). We investigate the semantic representations of individual entities in the brain, and for the first time we approach this question using both neural data, in the form of newly-acquired EEG data, and distributional models of word meaning, employing them to isolate semantic information regarding individual entities in the brain. We ran two sets of analyses. The first set of analyses is only concerned with the evoked responses to individual entities and their categories. We find that it is possible to classify them according to both their coarse and their fine-grained category at appropriate timepoints, but that it is hard to map representational information learned from individuals to their categories. In the second set of analyses, we learn to decode distributional word vectors from the evoked responses. These results indicate that such a mapping can be learnt successfully: this counts not only as a demonstration that representations of individuals can be discriminated in EEG responses, but also as a first brain-based validation of distributional semantic models as representations of individual entities. Finally, in-depth analyses of the decoder performance provide additional evidence that the referents of proper names and categories have little in common when it comes to their representation in the brain.
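A minimal sketch of learning a mapping from EEG responses to distributional word vectors and evaluating it with a leave-two-out pairwise test. The feature sizes, the Ridge regularisation, and the cosine-based matching rule are illustrative assumptions, not the authors' exact decoder.

```python
import numpy as np
from sklearn.linear_model import Ridge
from scipy.spatial.distance import cosine

rng = np.random.default_rng(7)

# One EEG feature vector per entity (e.g. flattened channels x timepoints)
# and one distributional word vector per entity; both simulated here
n_entities, n_eeg_features, n_dims = 32, 500, 300
eeg = rng.normal(size=(n_entities, n_eeg_features))
word_vectors = rng.normal(size=(n_entities, n_dims))

# Leave-two-out evaluation: learn the EEG -> word-vector mapping on all other items,
# then check whether the two held-out items are matched to the correct vectors
correct = 0
pairs = [(i, j) for i in range(n_entities) for j in range(i + 1, n_entities)]
for i, j in pairs:
    train = np.setdiff1d(np.arange(n_entities), [i, j])
    model = Ridge(alpha=1.0).fit(eeg[train], word_vectors[train])
    pred_i, pred_j = model.predict(eeg[[i, j]])
    right = cosine(pred_i, word_vectors[i]) + cosine(pred_j, word_vectors[j])
    wrong = cosine(pred_i, word_vectors[j]) + cosine(pred_j, word_vectors[i])
    correct += right < wrong

print(f"pairwise (2 vs. 2) accuracy: {correct / len(pairs):.2f}")   # chance = 0.5
```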
Affiliation(s)
- Andrea Bruera
- Cognitive Science Research Group, School of Electronic Engineering and Computer Science, Queen Mary University of London, London, United Kingdom
16
Unraveling the Neural Mechanisms Which Encode Rapid Streams of Visual Input. J Neurosci 2022; 42:1170-1172. PMID: 35173038; DOI: 10.1523/jneurosci.2013-21.2021.
17
Grootswagers T, Zhou I, Robinson AK, Hebart MN, Carlson TA. Human EEG recordings for 1,854 concepts presented in rapid serial visual presentation streams. Sci Data 2022; 9:3. PMID: 35013331; PMCID: PMC8748587; DOI: 10.1038/s41597-021-01102-7.
Abstract
The neural basis of object recognition and semantic knowledge has been extensively studied but the high dimensionality of object space makes it challenging to develop overarching theories on how the brain organises object knowledge. To help understand how the brain allows us to recognise, categorise, and represent objects and object categories, there is a growing interest in using large-scale image databases for neuroimaging experiments. In the current paper, we present THINGS-EEG, a dataset containing human electroencephalography responses from 50 subjects to 1,854 object concepts and 22,248 images in the THINGS stimulus set, a manually curated and high-quality image database that was specifically designed for studying human vision. The THINGS-EEG dataset provides neuroimaging recordings to a systematic collection of objects and concepts and can therefore support a wide array of research to understand visual object processing in the human brain.
Affiliation(s)
- Tijl Grootswagers
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia.
- School of Psychology, The University of Sydney, Sydney, Australia.
- Ivy Zhou
- School of Psychology, The University of Sydney, Sydney, Australia
- Martin N Hebart
- Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Thomas A Carlson
- School of Psychology, The University of Sydney, Sydney, Australia
18
Grootswagers T, Robinson AK. Overfitting the Literature to One Set of Stimuli and Data. Front Hum Neurosci 2021; 15:682661. PMID: 34305552; PMCID: PMC8295535; DOI: 10.3389/fnhum.2021.682661.
Abstract
A large number of papers in Computational Cognitive Neuroscience are developing and testing novel analysis methods using one specific neuroimaging dataset and problematic experimental stimuli. Publication bias and confirmatory exploration will result in overfitting to the limited available data. We highlight the problems with this specific dataset and argue for the need to collect more good quality open neuroimaging data using a variety of experimental stimuli, in order to test the generalisability of current published results, and allow for more robust results in future work.
Affiliation(s)
- Tijl Grootswagers
- The MARCS Institute for Brain, Behaviour and Development, Sydney, NSW, Australia
- School of Psychology, Western Sydney University, Sydney, NSW, Australia
- School of Psychology, University of Sydney, Sydney, NSW, Australia
19
Prochnow A, Bluschke A, Weissbach A, Münchau A, Roessner V, Mückschel M, Beste C. Neural dynamics of stimulus-response representations during inhibitory control. J Neurophysiol 2021; 126:680-692. PMID: 34232752; DOI: 10.1152/jn.00163.2021.
Abstract
The investigation of action control processes is a major field in cognitive neuroscience, and several theoretical frameworks have been proposed. One established framework is the "Theory of Event Coding" (TEC). However, this framework has only rarely been used in the context of response inhibition, or to ask how stimulus-response association or binding processes modulate response inhibition performance. In particular, the neural dynamics of stimulus-response representations during inhibitory control remain elusive. To address this, we examined n = 40 healthy controls and combined temporal EEG signal decomposition with source localization and temporal generalization multivariate pattern analysis (MVPA). We show that overlaps in features of stimuli used to trigger either response execution or inhibition compromised task performance. According to TEC, this indicates that binding processes in event file representations impact response inhibition through partial repetition costs. In the EEG data, reconfiguration of event files modulated processes in time windows well known to reflect distinct response inhibition mechanisms. Crucially, event file coding processes were only evident in a specific fraction of neurophysiological activity associated with the inferior parietal cortex (BA40). Within that specific fraction of neurophysiological activity, decoding the dynamics of event file representations using temporal generalization MVPA suggested that event file representations are stable across several hundred milliseconds, and that event file coding during inhibitory control is reflected by a sustained activation pattern of neural dynamics.

NEW & NOTEWORTHY The "mental representation" of how stimulus input translates into the appropriate response is central to goal-directed behavior. However, little is known about the dynamics of such representations at the neurophysiological level when it comes to the inhibition of motor processes. The current study reveals these dynamics.
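A minimal sketch of temporal generalization MVPA, the "train at one timepoint, test at all others" analysis referred to above, using MNE-Python's GeneralizingEstimator on simulated epochs. The simulated data, the logistic-regression classifier, and the AUC scoring are illustrative assumptions (the sketch requires the mne package).

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from mne.decoding import GeneralizingEstimator, cross_val_multiscore

rng = np.random.default_rng(8)

# Simulated epochs: trials x channels x timepoints, with two stimulus-response conditions
n_trials, n_channels, n_times = 120, 60, 80
X = rng.normal(size=(n_trials, n_channels, n_times))
y = rng.integers(0, 2, size=n_trials)

# Train a classifier at every timepoint and test it at every other timepoint;
# a broad square of above-chance scores indicates a sustained, stable representation
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
gen = GeneralizingEstimator(clf, scoring="roc_auc", n_jobs=1)
scores = cross_val_multiscore(gen, X, y, cv=5).mean(axis=0)   # times x times matrix

print("generalization matrix shape:", scores.shape, "mean AUC:", scores.mean().round(2))
```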
Affiliation(s)
- Astrid Prochnow
- Department of Child and Adolescent Psychiatry, Cognitive Neurophysiology, Faculty of Medicine, Technische Universität Dresden, Dresden, Germany
- University Neuropsychology Centre, Faculty of Medicine, Technische Universität Dresden, Dresden, Germany
- Annet Bluschke
- Department of Child and Adolescent Psychiatry, Cognitive Neurophysiology, Faculty of Medicine, Technische Universität Dresden, Dresden, Germany
- University Neuropsychology Centre, Faculty of Medicine, Technische Universität Dresden, Dresden, Germany
- Anne Weissbach
- Institute of Systems Motor Science, University of Lübeck, Lübeck, Germany
- Alexander Münchau
- Institute of Systems Motor Science, University of Lübeck, Lübeck, Germany
- Veit Roessner
- Department of Child and Adolescent Psychiatry, Cognitive Neurophysiology, Faculty of Medicine, Technische Universität Dresden, Dresden, Germany
- Moritz Mückschel
- Department of Child and Adolescent Psychiatry, Cognitive Neurophysiology, Faculty of Medicine, Technische Universität Dresden, Dresden, Germany
- University Neuropsychology Centre, Faculty of Medicine, Technische Universität Dresden, Dresden, Germany
- Christian Beste
- Department of Child and Adolescent Psychiatry, Cognitive Neurophysiology, Faculty of Medicine, Technische Universität Dresden, Dresden, Germany
- University Neuropsychology Centre, Faculty of Medicine, Technische Universität Dresden, Dresden, Germany
20
Li M, Yang G, Xu G. The Effect of the Graphic Structures of Humanoid Robot on N200 and P300 Potentials. IEEE Trans Neural Syst Rehabil Eng 2020; 28:1944-1954. PMID: 32746323; DOI: 10.1109/tnsre.2020.3010250.
Abstract
Humanoid robots are widely used in brain-computer interface (BCI) systems. Using a humanoid robot stimulus could increase the amplitude of event-related potentials (ERPs), which improves BCI performance. Because a humanoid robot contains many human elements, it is unclear which element increases ERP amplitude, and how to test its effect on the brain is an open question. This study used different graphic structures of an NAO humanoid robot to design three types of robot stimuli: a global robot, its local information, and its topological action. Ten subjects first performed an oddball-based BCI (OD-BCI) task using these stimuli. They then completed a delayed matching-to-sample task (DMST) that was used to separate the encoding and retrieval phases of the OD-BCI task. In the retrieval phase of the DMST, the global stimulus induced the largest N200 and P300 potentials with the shortest latencies in the frontal, central, and occipital areas. This finding is in accordance with the P300 and classification performance of the OD-BCI task. When induced by the local stimulus, subjects responded faster and more accurately in the retrieval phase of the DMST than in the other two conditions, indicating that the local stimulus improved the subjects' responses. These results indicate that the OD-BCI task engages the subject's retrieval processes when the subject recognizes and outputs the stimulus. The global stimulus, which contains topological and local elements, could make the brain react faster and induce larger ERPs; this finding could inform the development of visual stimuli to improve BCI performance.
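A minimal sketch of how N200 and P300 peak amplitudes and latencies can be extracted from an averaged ERP. The simulated waveform and the 150-300 ms and 250-500 ms search windows are illustrative assumptions, not the windows used in the study.

```python
import numpy as np

rng = np.random.default_rng(9)

# A simulated average ERP at one electrode: a 1 s epoch sampled at 500 Hz
sfreq = 500
times = np.arange(0, 1.0, 1 / sfreq)
erp = (-3 * np.exp(-((times - 0.20) ** 2) / 0.002)      # negative deflection near 200 ms
       + 6 * np.exp(-((times - 0.35) ** 2) / 0.005)     # positive deflection near 300-400 ms
       + rng.normal(scale=0.3, size=times.size))

def peak_in_window(erp, times, tmin, tmax, polarity):
    """Peak amplitude (microvolts) and latency (seconds) inside a search window."""
    mask = (times >= tmin) & (times <= tmax)
    segment = erp[mask]
    idx = segment.argmin() if polarity == "negative" else segment.argmax()
    return segment[idx], times[mask][idx]

n200_amp, n200_lat = peak_in_window(erp, times, 0.15, 0.30, "negative")
p300_amp, p300_lat = peak_in_window(erp, times, 0.25, 0.50, "positive")
print(f"N200: {n200_amp:.1f} uV at {n200_lat * 1000:.0f} ms; P300: {p300_amp:.1f} uV at {p300_lat * 1000:.0f} ms")
```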
21
Cohen U, Chung S, Lee DD, Sompolinsky H. Separability and geometry of object manifolds in deep neural networks. Nat Commun 2020; 11:746. PMID: 32029727; PMCID: PMC7005295; DOI: 10.1038/s41467-020-14578-5.
Abstract
Stimuli are represented in the brain by the collective population responses of sensory neurons, and an object presented under varying conditions gives rise to a collection of neural population responses called an ‘object manifold’. Changes in the object representation along a hierarchical sensory system are associated with changes in the geometry of those manifolds, and recent theoretical progress connects this geometry with ‘classification capacity’, a quantitative measure of the ability to support object classification. Deep neural networks trained on object classification tasks are a natural testbed for the applicability of this relation. We show how classification capacity improves along the hierarchies of deep neural networks with different architectures. We demonstrate that changes in the geometry of the associated object manifolds underlie this improved capacity, and shed light on the functional roles different levels in the hierarchy play to achieve it, through orchestrated reduction of manifolds’ radius, dimensionality and inter-manifold correlations.

Neural activity space or manifold that represents object information changes across the layers of a deep neural network. Here the authors present a theoretical account of the relationship between the geometry of the manifolds and the classification capacity of the neural networks.
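A minimal sketch of one simple proxy for the idea that separability of object manifolds increases along a hierarchy: counting how many random object dichotomies a linear classifier can separate perfectly, for simulated "manifolds" with different spreads. This is not the mean-field classification capacity developed in the paper, and all sizes and noise levels are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(10)

# Simulated "object manifolds": for each of P objects, M feature vectors (the object under
# varying conditions) at two stages of a hypothetical hierarchy with different spreads
def make_manifolds(n_objects, n_samples, n_features, spread):
    centres = rng.normal(size=(n_objects, n_features))
    return centres[:, None, :] + spread * rng.normal(size=(n_objects, n_samples, n_features))

def dichotomy_separability(manifolds, n_dichotomies=50):
    """Fraction of random balanced object labellings that a linear classifier separates perfectly."""
    n_objects, n_samples, n_features = manifolds.shape
    X = manifolds.reshape(-1, n_features)
    separable = 0
    for _ in range(n_dichotomies):
        labels = np.zeros(n_objects, dtype=int)
        labels[rng.permutation(n_objects)[: n_objects // 2]] = 1   # balanced random dichotomy
        y = np.repeat(labels, n_samples)
        clf = LinearSVC(C=100.0, max_iter=10000).fit(X, y)
        separable += clf.score(X, y) == 1.0
    return separable / n_dichotomies

early = make_manifolds(n_objects=20, n_samples=15, n_features=50, spread=1.0)
late = make_manifolds(n_objects=20, n_samples=15, n_features=50, spread=0.2)   # tighter manifolds
print("separable dichotomies, early-like layer:", dichotomy_separability(early))
print("separable dichotomies, late-like layer:", dichotomy_separability(late))
```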
Affiliation(s)
- Uri Cohen
- Edmond and Lily Safra Center for Brain Sciences, Hebrew University of Jerusalem, Jerusalem, Israel
- SueYeon Chung
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Daniel D Lee
- Department of Electrical and Computer Engineering, Cornell Tech, New York, NY, USA
- Haim Sompolinsky
- Edmond and Lily Safra Center for Brain Sciences, Hebrew University of Jerusalem, Jerusalem, Israel
- Center for Brain Science, Harvard University, Cambridge, MA, USA