1
Yashiro R, Sawayama M, Amano K. Decoding time-resolved neural representations of orientation ensemble perception. Front Neurosci 2024; 18:1387393. PMID: 39148524; PMCID: PMC11325722; DOI: 10.3389/fnins.2024.1387393.
Abstract
The visual system can compute summary statistics of several visual elements at a glance. Numerous studies have shown that an ensemble of different visual features can be perceived within 50-200 ms; however, the time point at which the visual system forms an accurate ensemble representation associated with an individual's perception remains unclear. This is mainly because most previous studies have not fully addressed the time-resolved neural representations that occur during ensemble perception, in particular lacking a quantification of the representational strength of ensembles and its correlation with behavior. Here, we conducted orientation ensemble discrimination tasks with electroencephalogram (EEG) recordings to decode orientation representations over time while human observers discriminated the average of multiple orientations. We modeled EEG signals as a linear sum of hypothetical orientation channel responses and inverted this model to quantify the representational strength of the orientation ensemble. Our analysis using this inverted encoding model revealed stronger representations of the average orientation at 400-700 ms. We also correlated the orientation representation estimated from EEG signals with the perceived average orientation, reported via an adjustment method in the ensemble discrimination task. We found that the estimated orientation at approximately 600-700 ms correlated significantly with individual differences in the perceived average orientation. These results suggest that although ensembles can be computed quickly and roughly, the visual system may gradually refine the orientation ensemble over several hundred milliseconds to achieve a more accurate ensemble representation.
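The inverted-encoding idea described in this abstract (model measurements as a linear sum of hypothetical orientation channel responses, then invert the estimated model) can be sketched in a few lines of numpy on simulated data. Everything below is an illustrative assumption of this sketch (sensor and channel counts, tuning shape, noise level, a crude arg-max readout), not the paper's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 32 "EEG sensors", 8 orientation channels with
# rectified-sinusoid tuning curves over the 180-degree orientation space.
n_sensors, n_chan = 32, 8
centers = np.arange(0, 180, 180 / n_chan)          # channel preferred orientations

def basis(theta):
    """Idealized response of each orientation channel to stimulus angle theta."""
    d = np.deg2rad(theta - centers) * 2            # orientation is 180-deg periodic
    return np.maximum(np.cos(d), 0) ** 5           # rectified and sharpened

# Simulate training data: sensors = channels @ weights + noise
W_true = rng.normal(size=(n_sensors, n_chan))
train_thetas = rng.uniform(0, 180, 200)
C_train = np.stack([basis(t) for t in train_thetas])        # trials x channels
B_train = C_train @ W_true.T + rng.normal(scale=0.5, size=(200, n_sensors))

# 1) Estimate the forward model by least squares: B = C @ W_hat
W_hat, *_ = np.linalg.lstsq(C_train, B_train, rcond=None)   # channels x sensors

# 2) Invert: recover channel responses for a held-out trial
test_theta = 45.0
b_test = basis(test_theta) @ W_true.T + rng.normal(scale=0.5, size=n_sensors)
c_hat, *_ = np.linalg.lstsq(W_hat.T, b_test, rcond=None)

decoded = centers[np.argmax(c_hat)]   # crude readout: peak channel's preference
```

In the actual analysis the model would be estimated per time point with cross-validation, and the full reconstructed channel-response profile, rather than a single peak, would be used to quantify representational strength.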
Affiliation(s)
- Ryuto Yashiro
- Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
- Masataka Sawayama
- Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
- Kaoru Amano
- Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
2
Lützow Holm E, Fernández Slezak D, Tagliazucchi E. Contribution of low-level image statistics to EEG decoding of semantic content in multivariate and univariate models with feature optimization. Neuroimage 2024; 293:120626. PMID: 38677632; DOI: 10.1016/j.neuroimage.2024.120626.
Abstract
Spatio-temporal patterns of evoked brain activity contain information that can be used to decode and categorize the semantic content of visual stimuli. However, this procedure can be biased by low-level image features independently of the semantic content present in the stimuli, prompting the need to understand the robustness of different models regarding these confounding factors. In this study, we trained machine learning models to distinguish between concepts included in the publicly available THINGS-EEG dataset using electroencephalography (EEG) data acquired during a rapid serial visual presentation paradigm. We investigated the contribution of low-level image features to decoding accuracy in a multivariate model, utilizing broadband data from all EEG channels. Additionally, we explored a univariate model obtained through data-driven feature selection applied to the spatial and frequency domains. While the univariate models exhibited better decoding accuracy, their predictions were less robust to the confounding effect of low-level image statistics. Notably, some of the models maintained their accuracy even after random replacement of the training dataset with semantically unrelated samples that presented similar low-level content. In conclusion, our findings suggest that model optimization impacts sensitivity to confounding factors, regardless of the resulting classification performance. Therefore, the choice of EEG features for semantic decoding should ideally be informed by criteria beyond classifier performance, such as the neurobiological mechanisms under study.
Affiliation(s)
- Eric Lützow Holm
- National Scientific and Technical Research Council (CONICET), Godoy Cruz 2290, CABA 1425, Argentina; Institute of Applied and Interdisciplinary Physics and Department of Physics, University of Buenos Aires, Pabellón 1, Ciudad Universitaria, CABA 1425, Argentina.
- Diego Fernández Slezak
- National Scientific and Technical Research Council (CONICET), Godoy Cruz 2290, CABA 1425, Argentina; Departamento de Computación, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Pabellón 1, Ciudad Universitaria, CABA 1425, Argentina; Instituto de Investigación en Ciencias de la Computación (ICC), CONICET-Universidad de Buenos Aires, Pabellón 1, Ciudad Universitaria, CABA 1425, Argentina
- Enzo Tagliazucchi
- National Scientific and Technical Research Council (CONICET), Godoy Cruz 2290, CABA 1425, Argentina; Institute of Applied and Interdisciplinary Physics and Department of Physics, University of Buenos Aires, Pabellón 1, Ciudad Universitaria, CABA 1425, Argentina; Latin American Brain Health (BrainLat), Universidad Adolfo Ibáñez, Av. Diag. Las Torres 2640, Peñalolén 7941169, Santiago Región Metropolitana, Chile
3
Khadir A, Ghamsari SS, Badri S, Beigzadeh B. Discriminating orientation information with phase consistency in alpha and low-gamma frequency bands: an EEG study. Sci Rep 2024; 14:12007. PMID: 38796618; PMCID: PMC11127946; DOI: 10.1038/s41598-024-62934-y.
Abstract
Recent studies suggest that noninvasive recordings from the human scalp (EEG, MEG) can decode the content of visual feature information (orientation, color, motion, etc.) in visual working memory (VWM). Previous work demonstrated that the sustained low-frequency event-related potential (ERP, below 6 Hz) of scalp EEG distributions can accurately decode the content of orientation information in VWM during the delay interval, and that raw data captured by a combination of occipito-parietal electrodes can likewise be used to decode orientation. However, it is unclear whether orientation information is available in higher frequency bands (above 6 Hz) or whether decoding is feasible with fewer electrodes. Furthermore, the presence of orientation information in the phase values of the signal has not been well addressed. In this study, we propose that orientation information is also accessible through the phase consistency of the occipital region in the alpha frequency band. Our results reveal a significant difference between orientations within 200 ms after stimulus offset in early visual sensory processing, with no apparent effect in power or event-related oscillation (ERO) during this period. Additionally, in a later period (420-500 ms after stimulus offset), a noticeable difference is observed in the phase consistency of low gamma-band activity in the occipital area. Importantly, our findings suggest that across-trial phase consistency of the orientation feature in the occipital alpha and low gamma bands can serve as a measure to obtain orientation information in VWM. Furthermore, the study demonstrates that phase consistency in the alpha and low gamma bands can reflect the distribution of orientation-selective neuron numbers across the four main orientations in the occipital area.
Affiliation(s)
- Alireza Khadir
- Biomechatronics and Cognitive Engineering Research Lab, School of Mechanical Engineering, Iran University of Science and Technology, Tehran, Iran
- Shamim Sasani Ghamsari
- Biomechatronics and Cognitive Engineering Research Lab, School of Mechanical Engineering, Iran University of Science and Technology, Tehran, Iran
- Samaneh Badri
- Biomechatronics and Cognitive Engineering Research Lab, School of Mechanical Engineering, Iran University of Science and Technology, Tehran, Iran
- Borhan Beigzadeh
- Biomechatronics and Cognitive Engineering Research Lab, School of Mechanical Engineering, Iran University of Science and Technology, Tehran, Iran
4
Zhu J, Tian KJ, Carrasco M, Denison RN. Temporal attention recruits fronto-cingulate cortex to amplify stimulus representations. bioRxiv 2024:2024.03.06.583738 [Preprint]. PMID: 38496610; PMCID: PMC10942468; DOI: 10.1101/2024.03.06.583738.
Abstract
The human brain receives a continuous stream of input, but it faces significant constraints in its ability to process every item in a sequence of stimuli. Voluntary temporal attention can alleviate these constraints by using information about upcoming stimulus timing to selectively prioritize a task-relevant item over others in a sequence. But the neural mechanisms underlying this ability remain unclear. Here, we manipulated temporal attention to successive stimuli in a two-target temporal cueing task, while controlling for temporal expectation by using fully predictable stimulus timing. We recorded magnetoencephalography (MEG) in human observers and measured the effects of temporal attention on orientation representations of each stimulus using time-resolved multivariate decoding in both sensor and source space. Voluntary temporal attention enhanced the orientation representation of the first target 235-300 milliseconds after target onset. Unlike previous studies that did not isolate temporal attention from temporal expectation, we found no evidence that temporal attention enhanced early visual evoked responses. Instead, and unexpectedly, the primary source of enhanced decoding for attended stimuli in the critical time window was a contiguous region spanning left frontal cortex and cingulate cortex. The results suggest that voluntary temporal attention recruits cortical regions beyond the ventral stream at an intermediate processing stage to amplify the representation of a target stimulus, which may serve to protect it from subsequent interference by a temporal competitor.
5
Grootswagers T, Robinson AK, Shatek SM, Carlson TA. Mapping the dynamics of visual feature coding: Insights into perception and integration. PLoS Comput Biol 2024; 20:e1011760. PMID: 38190390; PMCID: PMC10798643; DOI: 10.1371/journal.pcbi.1011760.
Abstract
The basic computations performed in the human early visual cortex are the foundation for visual perception. While we know a lot about these computations, a key missing piece is how the coding of visual features relates to our perception of the environment. To investigate visual feature coding, interactions, and their relationship to human perception, we investigated neural responses and perceptual similarity judgements to a large set of visual stimuli that varied parametrically along four feature dimensions. We measured neural responses using electroencephalography (N = 16) to 256 grating stimuli that varied in orientation, spatial frequency, contrast, and colour. We then mapped the response profiles of the neural coding of each visual feature and their interactions, and related these to independently obtained behavioural judgements of stimulus similarity. The results confirmed fundamental principles of feature coding in the visual system, such that all four features were processed simultaneously but differed in their dynamics, and there was distinctive conjunction coding for different combinations of features in the neural responses. Importantly, modelling of the behaviour revealed that every stimulus feature contributed to perceptual judgements, despite the untargeted nature of the behavioural task. Further, the relationship between neural coding and behaviour was evident from initial processing stages, signifying that the fundamental features, not just their interactions, contribute to perception. This study highlights the importance of understanding how feature coding progresses through the visual hierarchy and the relationship between different stages of processing and perception.
Affiliation(s)
- Tijl Grootswagers
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
- School of Computer, Data and Mathematical Sciences, Western Sydney University, Sydney, Australia
- Amanda K. Robinson
- Queensland Brain Institute, The University of Queensland, Brisbane, Australia
- Sophia M. Shatek
- School of Psychology, The University of Sydney, Sydney, Australia
6
Robinson AK, Quek GL, Carlson TA. Visual Representations: Insights from Neural Decoding. Annu Rev Vis Sci 2023; 9:313-335. PMID: 36889254; DOI: 10.1146/annurev-vision-100120-025301.
Abstract
Patterns of brain activity contain meaningful information about the perceived world. Recent decades have welcomed a new era in neural analyses, with computational techniques from machine learning applied to neural data to decode information represented in the brain. In this article, we review how decoding approaches have advanced our understanding of visual representations and discuss efforts to characterize both the complexity and the behavioral relevance of these representations. We outline the current consensus regarding the spatiotemporal structure of visual representations and review recent findings that suggest that visual representations are at once robust to perturbations, yet sensitive to different mental states. Beyond representations of the physical world, recent decoding work has shone a light on how the brain instantiates internally generated states, for example, during imagery and prediction. Going forward, decoding has remarkable potential to assess the functional relevance of visual representations for human behavior, reveal how representations change across development and during aging, and uncover their presentation in various mental disorders.
Affiliation(s)
- Amanda K Robinson
- Queensland Brain Institute, The University of Queensland, Brisbane, Australia
- Genevieve L Quek
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
7
Sandhaeger F, Siegel M. Testing the generalization of neural representations. Neuroimage 2023; 278:120258. PMID: 37429371; PMCID: PMC10443234; DOI: 10.1016/j.neuroimage.2023.120258.
Abstract
Multivariate analysis methods are widely used in neuroscience to investigate the presence and structure of neural representations. Representational similarities across time or contexts are often investigated using pattern generalization, e.g. by training and testing multivariate decoders in different contexts, or by comparable pattern-based encoding methods. It is however unclear what conclusions can be validly drawn on the underlying neural representations when significant pattern generalization is found in mass signals such as LFP, EEG, MEG, or fMRI. Using simulations, we show how signal mixing and dependencies between measurements can drive significant pattern generalization even though the true underlying representations are orthogonal. We suggest that, using an accurate estimate of the expected pattern generalization given identical representations, it is nonetheless possible to test meaningful hypotheses about the generalization of neural representations. We offer such an estimate of the expected magnitude of pattern generalization and demonstrate how this measure can be used to assess the similarity and differences of neural representations across time and contexts.
Affiliation(s)
- Florian Sandhaeger
- Department of Neural Dynamics and Magnetoencephalography, Hertie Institute for Clinical Brain Research, University of Tübingen, Germany; Centre for Integrative Neuroscience, University of Tübingen, Germany; MEG Center, University of Tübingen, Germany; IMPRS for Cognitive and Systems Neuroscience, University of Tübingen, Germany.
- Markus Siegel
- Department of Neural Dynamics and Magnetoencephalography, Hertie Institute for Clinical Brain Research, University of Tübingen, Germany; Centre for Integrative Neuroscience, University of Tübingen, Germany; MEG Center, University of Tübingen, Germany
8
McFadyen J, Dolan RJ. Spatiotemporal Precision of Neuroimaging in Psychiatry. Biol Psychiatry 2023; 93:671-680. PMID: 36376110; DOI: 10.1016/j.biopsych.2022.08.016.
Abstract
Aberrant patterns of cognition, perception, and behavior seen in psychiatric disorders are thought to be driven by a complex interplay of neural processes that evolve at a rapid temporal scale. Understanding these dynamic processes in vivo in humans has been hampered by a trade-off between spatial and temporal resolutions inherent to current neuroimaging technology. A recent trend in psychiatric research has been the use of high temporal resolution imaging, particularly magnetoencephalography, often in conjunction with sophisticated machine learning decoding techniques. Developments here promise novel insights into the spatiotemporal dynamics of cognitive phenomena, including domains relevant to psychiatric illnesses such as reward and avoidance learning, memory, and planning. This review considers recent advances afforded by exploiting this increased spatiotemporal precision, with specific reference to applications that seek to drive a mechanistic understanding of psychopathology and the realization of preclinical translation.
Affiliation(s)
- Jessica McFadyen
- UCL Max Planck Centre for Computational Psychiatry and Ageing Research and Wellcome Centre for Human Neuroimaging, University College London, London, United Kingdom; State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China.
- Raymond J Dolan
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
9
Hebart MN, Contier O, Teichmann L, Rockter AH, Zheng CY, Kidder A, Corriveau A, Vaziri-Pashkam M, Baker CI. THINGS-data, a multimodal collection of large-scale datasets for investigating object representations in human brain and behavior. eLife 2023; 12:e82580. PMID: 36847339; PMCID: PMC10038662; DOI: 10.7554/elife.82580.
Abstract
Understanding object representations requires a broad, comprehensive sampling of the objects in our visual world with dense measurements of brain activity and behavior. Here, we present THINGS-data, a multimodal collection of large-scale neuroimaging and behavioral datasets in humans, comprising densely sampled functional MRI and magnetoencephalographic recordings, as well as 4.70 million similarity judgments in response to thousands of photographic images for up to 1,854 object concepts. THINGS-data is unique in its breadth of richly annotated objects, allowing for testing countless hypotheses at scale while assessing the reproducibility of previous findings. Beyond the unique insights promised by each individual dataset, the multimodality of THINGS-data allows combining datasets for a much broader view into object processing than previously possible. Our analyses demonstrate the high quality of the datasets and provide five examples of hypothesis-driven and data-driven applications. THINGS-data constitutes the core public release of the THINGS initiative (https://things-initiative.org) for bridging the gap between disciplines and the advancement of cognitive neuroscience.
Affiliation(s)
- Martin N Hebart
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
- Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Department of Medicine, Justus Liebig University Giessen, Giessen, Germany
- Oliver Contier
- Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Max Planck School of Cognition, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Lina Teichmann
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
- Adam H Rockter
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
- Charles Y Zheng
- Machine Learning Core, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
- Alexis Kidder
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
- Anna Corriveau
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
- Maryam Vaziri-Pashkam
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
- Chris I Baker
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
10
van Es MWJ, Marshall TR, Spaak E, Jensen O, Schoffelen J. Phasic modulation of visual representations during sustained attention. Eur J Neurosci 2022; 55:3191-3208. PMID: 33319447; PMCID: PMC9543919; DOI: 10.1111/ejn.15084.
Abstract
Sustained attention has long been thought to benefit perception in a continuous fashion, but recent evidence suggests that it affects perception in a discrete, rhythmic way. Periodic fluctuations in behavioral performance over time, and modulations of behavioral performance by the phase of spontaneous oscillatory brain activity point to an attentional sampling rate in the theta or alpha frequency range. We investigated whether such discrete sampling by attention is reflected in periodic fluctuations in the decodability of visual stimulus orientation from magnetoencephalographic (MEG) brain signals. In this exploratory study, human subjects attended one of the two grating stimuli, while MEG was being recorded. We assessed the strength of the visual representation of the attended stimulus using a support vector machine (SVM) to decode the orientation of the grating (clockwise vs. counterclockwise) from the MEG signal. We tested whether decoder performance depended on the theta/alpha phase of local brain activity. While the phase of ongoing activity in the visual cortex did not modulate decoding performance, theta/alpha phase of activity in the frontal eye fields and parietal cortex, contralateral to the attended stimulus did modulate decoding performance. These findings suggest that phasic modulations of visual stimulus representations in the brain are caused by frequency-specific top-down activity in the frontoparietal attention network, though the behavioral relevance of these effects could not be established.
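The phase-binned logic of this analysis (sort trials by the phase of ongoing activity, then ask whether decoding accuracy varies across phase bins) can be sketched as follows. This is a simplified stand-in, not the study's MEG pipeline: a nearest-class-mean decoder instead of an SVM, simulated data with a built-in phase effect, and no cross-validation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed dimensions for the sketch: 800 trials, 20 "sensors", 8 phase bins.
n_trials, n_ch, n_bins = 800, 20, 8
labels = rng.integers(0, 2, n_trials)                 # CW vs CCW grating
phase = rng.uniform(-np.pi, np.pi, n_trials)          # phase of ongoing activity

# Simulate a signal whose class information waxes and wanes with phase
gain = 1 + np.cos(phase)                              # strongest near phase 0
template = rng.normal(size=n_ch)
X = (np.where(labels[:, None] == 1, 1, -1) * gain[:, None] * template
     + rng.normal(scale=2.0, size=(n_trials, n_ch)))

# Nearest-class-mean decoding within each phase bin
bins = np.digitize(phase, np.linspace(-np.pi, np.pi, n_bins + 1)[1:-1])
acc = np.zeros(n_bins)
for b in range(n_bins):
    idx = bins == b
    m1, m0 = X[idx & (labels == 1)].mean(0), X[idx & (labels == 0)].mean(0)
    pred = ((X[idx] @ (m1 - m0)) > ((m1 + m0) @ (m1 - m0) / 2)).astype(int)
    acc[b] = (pred == labels[idx]).mean()             # per-bin decoding accuracy
```

With the built-in modulation, accuracy peaks in the bins near phase zero; a real analysis would test such a phase dependence statistically (e.g. against a phase-shuffled null).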
Affiliation(s)
- Mats W. J. van Es
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, UK
- Tom R. Marshall
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, UK
- Eelke Spaak
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Ole Jensen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Department of Psychology, University of Birmingham, Birmingham, UK
- Jan‐Mathijs Schoffelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
11
Mo C, Zhang S, Lu J, Yu M, Yao Y. Attention impedes neural representation of interpolated orientation during perceptual completion. Psychophysiology 2022; 59:e14031. PMID: 35239985; DOI: 10.1111/psyp.14031.
Abstract
One of the most remarkable functional feats accomplished by the visual system is the interpolation of missing retinal inputs based on surrounding information, a process known as perceptual completion. Perceptual completion enables the active construction of coherent, vivid percepts from the spatially discontinuous visual information that is prevalent in real-life visual scenes. Despite mounting evidence linking sensory activity enhancement and perceptual completion, surprisingly little is known about whether and how attention, a fundamental modulator of sensory activities, affects perceptual completion. Using an EEG-based time-resolved inverted encoding model (IEM), we reconstructed the moment-to-moment representation of an illusory grating that resulted from spatially interpolating the orientation of surrounding inducers. We found that, despite manipulation of observers' attentional focus, the illusory grating representation unfolded in time in a similar manner. Critically, attention to the surrounding inducers simultaneously attenuated the illusory grating representation and delayed its temporal development. Our findings disclose, for the first time, a suppressive role of selective attention in perceptual completion and suggest a fast, automatic neural machinery that implements the interpolation of missing visual information.
Affiliation(s)
- Ce Mo
- Department of Psychology, Sun-Yat-Sen University, Guangzhou, China
- Shijia Zhang
- Center for Studies of Psychological Application, School of Psychology, South China Normal University, Guangzhou, China
- Junshi Lu
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Mengxia Yu
- Bilingual Cognition and Development Lab, Center for Linguistics and Applied Linguistics, Guangdong University of Foreign Studies, Guangzhou, China
- Yujie Yao
- Center for Studies of Psychological Application, School of Psychology, South China Normal University, Guangzhou, China
12
Hajonides JE, Nobre AC, van Ede F, Stokes MG. Decoding visual colour from scalp electroencephalography measurements. Neuroimage 2021; 237:118030. PMID: 33836272; PMCID: PMC8285579; DOI: 10.1016/j.neuroimage.2021.118030.
Abstract
Recent advances have made it possible to decode various aspects of visually presented stimuli from patterns of scalp EEG measurements. Recently, such multivariate methods have commonly been used to decode visual-spatial features such as location, orientation, or spatial frequency. In the current study, we show that it is also possible to track visual colour processing by applying Linear Discriminant Analysis to patterns of EEG activity. Building on other recent demonstrations, we show that colour decoding: (1) reflects sensory qualities (as opposed to, for example, verbal labelling) with a prominent contribution from posterior electrodes contralateral to the stimulus, (2) conforms to a parametric coding space, (3) is possible in multi-item displays, and (4) is comparable in magnitude to the decoding of visual stimulus orientation. By subsampling our data, we also provide an estimate of the approximate number of trials and participants required for robust decoding. Finally, we show that while colour decoding can be sensitive to subtle differences in luminance, our colour decoding results are primarily driven by measured colour differences between stimuli. Colour decoding opens a relevant new dimension in which to track visual processing using scalp EEG measurements, while bypassing potential confounds associated with decoding approaches that focus on spatial features.
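As a rough illustration of the classifier named in this abstract, here is a two-class Fisher linear discriminant implemented directly in numpy on simulated scalp patterns. The channel count, effect size, and binary, non-cross-validated setup are assumptions of this sketch, not the study's multiclass pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated "EEG" patterns for two colours across 64 channels
n_ch, n_trials = 64, 400
mu_a = rng.normal(scale=0.3, size=n_ch)    # mean pattern evoked by colour A
mu_b = rng.normal(scale=0.3, size=n_ch)    # mean pattern evoked by colour B

X = np.vstack([mu_a + rng.normal(size=(n_trials, n_ch)),
               mu_b + rng.normal(size=(n_trials, n_ch))])
y = np.repeat([0, 1], n_trials)

# Fisher's linear discriminant: w = pooled_cov^-1 (mu_b - mu_a)
m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
Xc = np.vstack([X[y == 0] - m0, X[y == 1] - m1])
cov = Xc.T @ Xc / (len(X) - 2) + 1e-6 * np.eye(n_ch)   # small ridge for stability
w = np.linalg.solve(cov, m1 - m0)
threshold = w @ (m0 + m1) / 2

pred = (X @ w > threshold).astype(int)
accuracy = (pred == y).mean()   # training accuracy; held-out data would score lower
```

A real application would regularize the covariance more aggressively (e.g. shrinkage), use cross-validation, and extend to many colour classes.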
Affiliation(s)
- Jasper E Hajonides
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, United Kingdom; Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom.
- Anna C Nobre
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, United Kingdom; Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Freek van Ede
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, United Kingdom; Institute for Brain and Behavior Amsterdam, Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Netherlands
- Mark G Stokes
- Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
13
Hubbard RJ, Federmeier KD. Representational Pattern Similarity of Electrical Brain Activity Reveals Rapid and Specific Prediction during Language Comprehension. Cereb Cortex 2021; 31:4300-4313. [PMID: 33895819] [DOI: 10.1093/cercor/bhab087]
Abstract
Predicting upcoming events is a critical function of the brain, and language provides a fertile testing ground for studying prediction, as comprehenders use context to predict features of upcoming words. Many aspects of the mechanisms of prediction remain elusive, partly due to a lack of methodological tools to probe prediction formation in the moment. To elucidate what features are neurally preactivated and when, we used representational similarity analysis on previously collected sentence reading data. We compared EEG activity patterns elicited by expected and unexpected sentence final words to patterns from the preceding words of the sentence, in both strongly and weakly constraining sentences. Pattern similarity with the final word was increased in an early time window following the presentation of the pre-final word, and this increase was modulated by both expectancy and constraint. This was not seen at earlier words, suggesting that predictions were precisely timed. Additionally, pre-final word activity-the predicted representation-had negative similarity with later final word activity, but only for strongly expected words. These findings shed light on the mechanisms of prediction in the brain: rapid preactivation occurs following certain cues, but the predicted features may receive reduced processing upon confirmation.
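Pattern similarity analyses of this kind ultimately reduce to correlating multichannel activity patterns across conditions. A minimal sketch with synthetic patterns follows; the channel count, mixing weights, and condition labels are invented for illustration and are not taken from the study:

```python
import numpy as np

def pattern_similarity(a, b):
    """Pearson correlation between two multichannel activity patterns."""
    a, b = a - a.mean(), b - b.mean()
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(4)
n_channels = 64
predicted = rng.normal(size=n_channels)   # pattern elicited by prior context

# an "expected" final word re-elicits a similar pattern; an "unexpected" one
# shares no structure with the prediction (mixing weights are arbitrary)
expected = 0.8 * predicted + 0.6 * rng.normal(size=n_channels)
unexpected = rng.normal(size=n_channels)

sim_exp = pattern_similarity(predicted, expected)
sim_unexp = pattern_similarity(predicted, unexpected)
```

The similarity for the expected condition is reliably higher, which is the kind of contrast such analyses test statistically across trials and participants.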
Affiliation(s)
- Ryan J Hubbard
- Beckman Institute for Advanced Science and Technology, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA
- Kara D Federmeier
- Beckman Institute for Advanced Science and Technology, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA; Department of Psychology, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA; Program in Neuroscience, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA
14
Barne LC, Cravo AM, de Lange FP, Spaak E. Temporal prediction elicits rhythmic preactivation of relevant sensory cortices. Eur J Neurosci 2021; 55:3324-3339. [PMID: 34322927] [PMCID: PMC9545120] [DOI: 10.1111/ejn.15405]
Abstract
Being able to anticipate events before they happen facilitates stimulus processing. The anticipation of the contents of events is thought to be implemented by the elicitation of prestimulus templates in sensory cortex. In contrast, the anticipation of the timing of events is typically associated with entrainment of neural oscillations. It is so far unknown whether and in which conditions temporal expectations interact with feature‐based expectations, and, consequently, whether entrainment modulates the generation of content‐specific sensory templates. In this study, we investigated the role of temporal expectations in a sensory discrimination task. We presented participants with rhythmically interleaved visual and auditory streams of relevant and irrelevant stimuli while measuring neural activity using magnetoencephalography. We found no evidence that rhythmic stimulation induced prestimulus feature templates. However, we did observe clear anticipatory rhythmic preactivation of the relevant sensory cortices. This oscillatory activity peaked at behaviourally relevant, in‐phase, intervals. Our results suggest that temporal expectations about stimulus features do not behave similarly to explicitly cued, nonrhythmic, expectations, yet elicit a distinct form of modality‐specific preactivation.
Affiliation(s)
- Louise Catheryne Barne
- Center for Mathematics, Computing and Cognition, Universidade Federal do ABC (UFABC), São Bernardo do Campo, São Paulo, Brazil; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands; Département Traitement de l'Information et Systèmes, ONERA, Salon-de-Provence, France
- André Mascioli Cravo
- Center for Mathematics, Computing and Cognition, Universidade Federal do ABC (UFABC), São Bernardo do Campo, São Paulo, Brazil
- Floris P de Lange
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Eelke Spaak
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
15
A Hierarchy of Functional States in Working Memory. J Neurosci 2021; 41:4461-4475. [PMID: 33888611] [PMCID: PMC8152603] [DOI: 10.1523/jneurosci.3104-20.2021]
Abstract
Extensive research has examined how information is maintained in working memory (WM), but it remains unknown how WM is used to guide behavior. We addressed this question by combining human electrophysiology (50 subjects, male and female) with pattern analyses, cognitive modeling, and a task requiring the prolonged maintenance of two WM items and priority shifts between them. This enabled us to discern neural states coding for memories that were selected to guide the next decision from states coding for concurrently held memories that were maintained for later use, and to examine how these states contribute to WM-based decisions. Selected memories were encoded in a functionally active state. This state was reflected in spontaneous brain activity during the delay period, closely tracked moment-to-moment fluctuations in the quality of evidence integration, and also predicted when memories would interfere with each other. In contrast, concurrently held memories were encoded in a functionally latent state. This state was reflected only in stimulus-evoked brain activity, tracked memory precision at longer timescales, but did not engage with ongoing decision dynamics. Intriguingly, the two functional states were highly flexible, as priority could be dynamically shifted back and forth between memories without degrading their precision. These results delineate a hierarchy of functional states, whereby latent memories supporting general maintenance are transformed into active decision circuits to guide flexible behavior.SIGNIFICANCE STATEMENT Working memory enables maintenance of information that is no longer available in the environment. Abundant neuroscientific work has examined where in the brain working memories are stored, but it remains unknown how they are represented and used to guide behavior. Our study shows that working memories are represented in qualitatively different formats, depending on behavioral priorities. Memories that are selected for guiding behavior are encoded in an active state that transforms sensory input into decision variables, whereas other concurrently held memories are encoded in a latent state that supports precise maintenance without affecting ongoing cognition. These results dissociate mechanisms supporting memory storage and usage, and open the door to reveal not only where memories are stored but also how.
16
van Driel J, Olivers CNL, Fahrenfort JJ. High-pass filtering artifacts in multivariate classification of neural time series data. J Neurosci Methods 2021; 352:109080. [PMID: 33508412] [DOI: 10.1016/j.jneumeth.2021.109080]
Abstract
BACKGROUND Traditionally, EEG/MEG data are high-pass filtered and baseline-corrected to remove slow drifts. Minor deleterious effects of high-pass filtering in traditional time-series analysis have been well-documented, including temporal displacements. However, its effects on time-resolved multivariate pattern classification analyses (MVPA) are largely unknown. NEW METHOD To prevent potential displacement effects, we extend an alternative method of removing slow drift noise - robust detrending - with a procedure in which we mask out all cortical events from each trial. We refer to this method as trial-masked robust detrending. RESULTS In both real and simulated EEG data of a working memory experiment, we show that both high-pass filtering and standard robust detrending create artifacts that result in the displacement of multivariate patterns into activity silent periods, particularly apparent in temporal generalization analyses, and especially in combination with baseline correction. We show that trial-masked robust detrending is free from such displacements. COMPARISON WITH EXISTING METHOD(S) Temporal displacement may emerge even with modest filter cut-off settings such as 0.05 Hz, and even in regular robust detrending. However, trial-masked robust detrending results in artifact-free decoding without displacements. Baseline correction may unwittingly obfuscate spurious decoding effects and displace them to the rest of the trial. CONCLUSIONS Decoding analyses benefit from trial-masked robust detrending, without the unwanted side effects introduced by filtering or regular robust detrending. However, for sufficiently clean data sets and sufficiently strong signals, no filtering or detrending at all may work adequately. Implications for other types of data are discussed, followed by a number of recommendations.
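The trial-masked detrending idea can be illustrated with a minimal sketch. Here an ordinary least-squares polynomial fit stands in for the robust fit used by actual detrending implementations, and the signal parameters (drift shape, event window, polynomial order) are invented for illustration only:

```python
import numpy as np

def masked_detrend(trial, mask, order=3):
    """Remove slow drift estimated only from samples outside `mask`.

    `mask` is True for samples containing cortical events; those samples
    are excluded from the fit so evoked activity does not leak into the
    drift estimate, but the fitted drift is subtracted everywhere.
    """
    t = np.arange(trial.size)
    coefs = np.polyfit(t[~mask], trial[~mask], order)
    return trial - np.polyval(coefs, t)

rng = np.random.default_rng(1)
n = 1000
t = np.arange(n)
drift = 1e-5 * (t - 500) ** 2                  # slow quadratic drift
evoked = np.zeros(n)
evoked[400:600] = 2.0                          # "cortical event" to protect
trial = drift + evoked + 0.1 * rng.normal(size=n)

mask = np.zeros(n, dtype=bool)
mask[380:620] = True                           # mask out the event window
clean = masked_detrend(trial, mask, order=2)
# the drift is removed while the evoked deflection survives intact
```

Fitting the drift with the event included would instead drag the polynomial toward the evoked response, displacing part of it into the surrounding "silent" period, which is the artifact the paper warns about.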
Affiliation(s)
- Joram van Driel
- Institute for Brain and Behaviour Amsterdam, Vrije Universiteit Amsterdam, the Netherlands; Department of Experimental and Applied Psychology - Cognitive Psychology, Vrije Universiteit Amsterdam, the Netherlands; Faculty of Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, the Netherlands
- Christian N L Olivers
- Institute for Brain and Behaviour Amsterdam, Vrije Universiteit Amsterdam, the Netherlands; Department of Experimental and Applied Psychology - Cognitive Psychology, Vrije Universiteit Amsterdam, the Netherlands; Faculty of Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, the Netherlands
- Johannes J Fahrenfort
- Institute for Brain and Behaviour Amsterdam, Vrije Universiteit Amsterdam, the Netherlands; Department of Experimental and Applied Psychology - Cognitive Psychology, Vrije Universiteit Amsterdam, the Netherlands; Faculty of Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, the Netherlands; Department of Psychology, University of Amsterdam, Amsterdam 1001 NK, the Netherlands; Amsterdam Brain and Cognition (ABC), University of Amsterdam, Amsterdam 1001 NK, the Netherlands.
17
Saba-Sadiya S, Chantland E, Alhanai T, Liu T, Ghassemi MM. Unsupervised EEG Artifact Detection and Correction. Front Digit Health 2021; 2:608920. [PMID: 34713069] [PMCID: PMC8521924] [DOI: 10.3389/fdgth.2020.608920]
Abstract
Electroencephalography (EEG) is used in the diagnosis, monitoring, and prognostication of many neurological ailments including seizure, coma, sleep disorders, brain injury, and behavioral abnormalities. One of the primary challenges of EEG data is its sensitivity to a breadth of non-stationary noises caused by physiological-, movement-, and equipment-related artifacts. Existing solutions to artifact detection are deficient because they require experts to manually explore and annotate data for artifact segments. Existing solutions to artifact correction or removal are deficient because they assume that the incidence and specific characteristics of artifacts are similar across both subjects and tasks (i.e., "one-size-fits-all"). In this paper, we describe a novel EEG noise-reduction method that uses representation learning to perform patient- and task-specific artifact detection and correction. More specifically, our method extracts 58 clinically relevant features and applies an ensemble of unsupervised outlier detection algorithms to identify EEG artifacts that are unique to a given task and subject. The artifact segments are then passed to a deep encoder-decoder network for unsupervised artifact correction. We compared the performance of classification models trained with and without our method and observed a 10% relative improvement in performance when using our approach. Our method provides a flexible end-to-end unsupervised framework that can be applied to novel EEG data without the need for expert supervision and can be used for a variety of clinical decision tasks, including coma prognostication and degenerative illness detection. By making our method, code, and data publicly available, our work provides a tool that is of both immediate practical utility and may also serve as an important foundation for future efforts in this domain.
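The unsupervised detection step can be sketched with a toy example. This is not the authors' pipeline (which uses 58 clinical features and an ensemble of outlier detectors followed by an encoder-decoder network); it is a two-feature version with a simple median/MAD outlier score, and every threshold and parameter below is an invented illustration:

```python
import numpy as np

def epoch_features(epochs):
    """Two toy features per epoch: peak-to-peak amplitude and line length."""
    ptp = epochs.max(axis=1) - epochs.min(axis=1)
    line_length = np.abs(np.diff(epochs, axis=1)).sum(axis=1)
    return np.column_stack([ptp, line_length])

def outlier_scores(features):
    """Robust z-scores per feature (median/MAD), averaged across features."""
    med = np.median(features, axis=0)
    mad = np.median(np.abs(features - med), axis=0) * 1.4826
    return np.abs((features - med) / mad).mean(axis=1)

rng = np.random.default_rng(2)
epochs = rng.normal(size=(100, 500))               # 100 single-channel epochs
epochs[7] += 20 * np.sin(np.linspace(0, 3, 500))   # injected movement artifact

scores = outlier_scores(epoch_features(epochs))
flagged = np.flatnonzero(scores > 5)               # threshold is arbitrary
```

Because the scores are computed relative to each recording's own distribution, the same code adapts to per-subject and per-task differences in baseline signal statistics, which is the key motivation behind the unsupervised approach.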
Affiliation(s)
- Sari Saba-Sadiya
- Human Augmentation and Artificial Intelligence Lab, Department of Computer Science, Michigan State University, East Lansing, MI, United States
- Neuroimaging of Perception and Attention Lab, Department of Psychology, Michigan State University, East Lansing, MI, United States
- Eric Chantland
- Neuroimaging of Perception and Attention Lab, Department of Psychology, Michigan State University, East Lansing, MI, United States
- Tuka Alhanai
- Computer Human Intelligence Lab, Department of Electrical & Computer Engineering, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Taosheng Liu
- Neuroimaging of Perception and Attention Lab, Department of Psychology, Michigan State University, East Lansing, MI, United States
- Mohammad M. Ghassemi
- Human Augmentation and Artificial Intelligence Lab, Department of Computer Science, Michigan State University, East Lansing, MI, United States
18
Fabius JH, Fracasso A, Acunzo DJ, Van der Stigchel S, Melcher D. Low-Level Visual Information Is Maintained across Saccades, Allowing for a Postsaccadic Handoff between Visual Areas. J Neurosci 2020; 40:9476-9486. [PMID: 33115930] [PMCID: PMC7724139] [DOI: 10.1523/jneurosci.1169-20.2020]
Abstract
Experience seems continuous and detailed despite saccadic eye movements changing retinal input several times per second. There is debate whether neural signals related to updating across saccades contain information about stimulus features, or only location pointers without visual details. We investigated the time course of low-level visual information processing across saccades by decoding the spatial frequency of a stationary stimulus that changed from one visual hemifield to the other because of a horizontal saccadic eye movement. We recorded magnetoencephalography while human subjects (both sexes) monitored the orientation of a grating stimulus, making spatial frequency task-irrelevant. Separate trials, in which subjects maintained fixation, were used to train a classifier, whose performance was then tested on saccade trials. Decoding performance showed that spatial frequency information of the presaccadic stimulus remained present for ∼200 ms after the saccade, transcending retinotopic specificity. Postsaccadic information ramped up rapidly after saccade offset. There was an overlap of over 100 ms during which decoding was significant from both presaccadic and postsaccadic processing areas. This suggests that the apparent richness of perception across saccades may be supported by the continuous availability of low-level information with a "soft handoff" of information during the initial processing sweep of the new fixation.SIGNIFICANCE STATEMENT Saccades create frequent discontinuities in visual input, yet perception appears stable and continuous. How is this discontinuous input processed resulting in visual stability? Previous studies have focused on presaccadic remapping. Here we examined the time course of processing of low-level visual information (spatial frequency) across saccades with magnetoencephalography. The results suggest that spatial frequency information is not predictively remapped but also is not discarded. Instead, they suggest a soft handoff over time between different visual areas, making this information continuously available across the saccade. Information about the presaccadic stimulus remains available, while the information about the postsaccadic stimulus has also become available. The simultaneous availability of both the presaccadic and postsaccadic information could enable rich and continuous perception across saccades.
Affiliation(s)
- Jasper H Fabius
- Institute of Neuroscience and Psychology, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G12 8QQ, United Kingdom
- Alessio Fracasso
- Institute of Neuroscience and Psychology, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow G12 8QQ, United Kingdom
- David J Acunzo
- Center for Mind/Brain Sciences and Department of Psychology and Cognitive Sciences, University of Trento, I-38122 Trento, Italy
- Stefan Van der Stigchel
- Experimental Psychology, Helmholtz Institute, Utrecht University, 3584 CS, Utrecht, The Netherlands
- David Melcher
- Center for Mind/Brain Sciences and Department of Psychology and Cognitive Sciences, University of Trento, I-38122 Trento, Italy
- Psychology Program, Division of Science, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
19
Treder MS. MVPA-Light: A Classification and Regression Toolbox for Multi-Dimensional Data. Front Neurosci 2020; 14:289. [PMID: 32581662] [PMCID: PMC7287158] [DOI: 10.3389/fnins.2020.00289]
Abstract
MVPA-Light is a MATLAB toolbox for multivariate pattern analysis (MVPA). It provides native implementations of a range of classifiers and regression models, using modern optimization algorithms. High-level functions allow for the multivariate analysis of multi-dimensional data, including generalization (e.g., time x time) and searchlight analysis. The toolbox performs cross-validation, hyperparameter tuning, and nested preprocessing. It computes various classification and regression metrics, establishes their statistical significance, and is modular and easily extendable. Furthermore, it offers interfaces for LIBSVM and LIBLINEAR as well as an integration into the FieldTrip neuroimaging toolbox. After introducing MVPA-Light, example analyses of MEG and fMRI datasets are presented, along with benchmarking results for the classifiers and regression models.
Affiliation(s)
- Matthias S Treder
- School of Computer Science & Informatics, Cardiff University, Cardiff, United Kingdom
20
Kozhemiako N, Nunes AS, Samal A, Rana KD, Calabro FJ, Hämäläinen MS, Khan S, Vaina LM. Neural activity underlying the detection of an object movement by an observer during forward self-motion: Dynamic decoding and temporal evolution of directional cortical connectivity. Prog Neurobiol 2020; 195:101824. [PMID: 32446882] [DOI: 10.1016/j.pneurobio.2020.101824]
Abstract
Relatively little is known about how the human brain identifies movement of objects while the observer is also moving in the environment. This is, ecologically, one of the most fundamental motion processing problems, critical for survival. To study this problem, we used a task which involved nine textured spheres moving in depth, eight simulating the observer's forward motion while the ninth, the target, moved independently with a different speed towards or away from the observer. Capitalizing on the high temporal resolution of magnetoencephalography (MEG) we trained a Support Vector Classifier (SVC) using the sensor-level data to identify correct and incorrect responses. Using the same MEG data, we addressed the dynamics of cortical processes involved in the detection of the independently moving object and investigated whether we could obtain confirmatory evidence for the brain activity patterns used by the classifier. Our findings indicate that response correctness could be reliably predicted by the SVC, with the highest accuracy during the blank period after motion and preceding the response. The spatial distribution of the areas critical for the correct prediction was similar but not exclusive to areas underlying the evoked activity. Importantly, SVC identified frontal areas otherwise not detected with evoked activity that seem to be important for the successful performance in the task. Dynamic connectivity further supported the involvement of frontal and occipital-temporal areas during the task periods. This is the first study to dynamically map cortical areas using a fully data-driven approach in order to investigate the neural mechanisms involved in the detection of moving objects during observer's self-motion.
Affiliation(s)
- N Kozhemiako
- Department of Biomedical Physiology and Kinesiology, Simon Fraser University, Burnaby, BC, Canada; Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- A S Nunes
- Department of Biomedical Physiology and Kinesiology, Simon Fraser University, Burnaby, BC, Canada; Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA.
- A Samal
- Departments of Biomedical Engineering, Neurology and the Graduate Program for Neuroscience, Boston University, Boston, MA, USA.
- K D Rana
- Departments of Biomedical Engineering, Neurology and the Graduate Program for Neuroscience, Boston University, Boston, MA, USA; National Institute of Mental Health, Bethesda, MD, USA.
- F J Calabro
- Department of Psychiatry and Biomedical Engineering, University of Pittsburgh, PA, USA.
- M S Hämäläinen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Boston, MA, USA.
- S Khan
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Boston, MA, USA
- L M Vaina
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Departments of Biomedical Engineering, Neurology and the Graduate Program for Neuroscience, Boston University, Boston, MA, USA; Harvard Medical School, Boston, MA, USA.
21
Sabbagh D, Ablin P, Varoquaux G, Gramfort A, Engemann DA. Predictive regression modeling with MEG/EEG: from source power to signals and cognitive states. Neuroimage 2020; 222:116893. [PMID: 32439535] [DOI: 10.1016/j.neuroimage.2020.116893]
Abstract
Predicting biomedical outcomes from Magnetoencephalography and Electroencephalography (M/EEG) is central to applications like decoding, brain-computer interfaces (BCI) or biomarker development and is facilitated by supervised machine learning. Yet, most of the literature is concerned with classification of outcomes defined at the event-level. Here, we focus on predicting continuous outcomes from M/EEG signal defined at the subject-level, and analyze about 600 MEG recordings from the Cam-CAN dataset and about 1000 EEG recordings from the TUH dataset. Considering different generative mechanisms for M/EEG signals and the biomedical outcome, we propose statistically-consistent predictive models that avoid source-reconstruction based on the covariance as representation. Our mathematical analysis and ground-truth simulations demonstrated that consistent function approximation can be obtained with supervised spatial filtering or by embedding with Riemannian geometry. Additional simulations revealed that Riemannian methods were more robust to model violations, in particular geometric distortions induced by individual anatomy. To estimate the relative contribution of brain dynamics and anatomy to prediction performance, we propose a novel model inspection procedure based on biophysical forward modeling. Applied to prediction of outcomes at the subject-level, the analysis revealed that the Riemannian model better exploited anatomical information while sensitivity to brain dynamics was similar across methods. We then probed the robustness of the models across different data cleaning options. Environmental denoising was globally important but Riemannian models were strikingly robust and continued performing well even without preprocessing. Our results suggest each method has its niche: supervised spatial filtering is practical for event-level prediction while the Riemannian model may enable simple end-to-end learning.
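The core idea of covariance-as-representation regression can be sketched with a toy example. Here a log-eigenvalue embedding serves as a simplified stand-in for the paper's Riemannian tangent-space mapping, and the generative "age" model, sensor counts, and train/test split are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n_subjects, n_sensors, n_samples = 80, 8, 2000

# synthetic subjects: one source's power grows with the outcome ("age")
ages = rng.uniform(20, 80, n_subjects)
feats = []
for age in ages:
    scale = np.ones(n_sensors)
    scale[0] = 1 + age / 40
    X = scale[:, None] * rng.normal(size=(n_sensors, n_samples))
    cov = X @ X.T / n_samples                  # covariance as representation
    # log-eigenvalue embedding: a simple surrogate for tangent-space mapping
    feats.append(np.log(np.linalg.eigvalsh(cov)))
feats = np.array(feats)

# plain least-squares fit on the first 60 subjects, evaluated on the rest
A = np.column_stack([feats, np.ones(n_subjects)])
coef, *_ = np.linalg.lstsq(A[:60], ages[:60], rcond=None)
pred = A[60:] @ coef
err = np.abs(pred - ages[60:]).mean()          # mean absolute error in "years"
```

Because log-eigenvalue features depend only on the covariance spectrum, no source reconstruction is needed, which is the property the paper exploits; the full Riemannian approach additionally accounts for eigenvector (spatial) structure.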
Affiliation(s)
- David Sabbagh
- Université Paris-Saclay, Inria, CEA, Palaiseau, France; Inserm, UMRS-942, Paris Diderot University, Paris, France; Department of Anaesthesiology and Critical Care, Lariboisière Hospital, Assistance Publique Hôpitaux de Paris, Paris, France.
- Pierre Ablin
- Université Paris-Saclay, Inria, CEA, Palaiseau, France
- Denis A Engemann
- Université Paris-Saclay, Inria, CEA, Palaiseau, France; Max Planck Institute for Human Cognitive and Brain Sciences, Department of Neurology, D-04103, Leipzig, Germany.
22
Nobre AC, van Ede F. Under the Mind's Hood: What We Have Learned by Watching the Brain at Work. J Neurosci 2020; 40:89-100. [PMID: 31630115] [PMCID: PMC6939481] [DOI: 10.1523/jneurosci.0742-19.2019]
Abstract
Imagine you were asked to investigate the workings of an engine, but to do so without ever opening the hood. Now imagine the engine fueled the human mind. This is the challenge faced by cognitive neuroscientists worldwide aiming to understand the neural bases of our psychological functions. Luckily, human ingenuity comes to the rescue. Around the same time as the Society for Neuroscience was being established in the 1960s, the first tools for measuring the human brain at work were becoming available. Noninvasive human brain imaging and neurophysiology have continued developing at a relentless pace ever since. In this 50 year anniversary, we reflect on how these methods have been changing our understanding of how brain supports mind.
Affiliation(s)
- Anna Christina Nobre
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford OX3 7JX, United Kingdom
- Department of Experimental Psychology, University of Oxford, Oxford OX2 6GG, United Kingdom
- Freek van Ede
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford OX3 7JX, United Kingdom
24
Thielen J, Bosch SE, van Leeuwen TM, van Gerven MAJ, van Lier R. Evidence for confounding eye movements under attempted fixation and active viewing in cognitive neuroscience. Sci Rep 2019; 9:17456. [PMID: 31767911] [PMCID: PMC6877555] [DOI: 10.1038/s41598-019-54018-z]
Abstract
Eye movements can have serious confounding effects in cognitive neuroscience experiments. Therefore, participants are commonly asked to fixate. Regardless, participants will make so-called fixational eye movements under attempted fixation, which are thought to be necessary to prevent perceptual fading. Neural changes related to these eye movements could potentially explain previously reported neural decoding and neuroimaging results under attempted fixation. In previous work, under attempted fixation and passive viewing, we found no evidence for systematic eye movements. Here, however, we show that participants' eye movements are systematic under attempted fixation when active viewing is demanded by the task. Since eye movements directly affect early visual cortex activity, commonly used for neural decoding, our findings imply alternative explanations for previously reported results in neural decoding.
Affiliation(s)
- Jordy Thielen
- Radboud University, Donders Centre for Cognition, Nijmegen, 6525 HR, The Netherlands.
- Sander E Bosch
- Radboud University, Donders Centre for Cognition, Nijmegen, 6525 HR, The Netherlands
- Tessa M van Leeuwen
- Radboud University, Donders Centre for Cognition, Nijmegen, 6525 HR, The Netherlands
- Marcel A J van Gerven
- Radboud University, Donders Centre for Cognition, Nijmegen, 6525 HR, The Netherlands
- Rob van Lier
- Radboud University, Donders Centre for Cognition, Nijmegen, 6525 HR, The Netherlands
25
Demarchi G, Sanchez G, Weisz N. Automatic and feature-specific prediction-related neural activity in the human auditory system. Nat Commun 2019; 10:3440. PMID: 31371713; PMCID: PMC6672009; DOI: 10.1038/s41467-019-11440-1.
Abstract
Prior experience enables the formation of expectations of upcoming sensory events. However, in the auditory modality, it is not known whether prediction-related neural signals carry feature-specific information. Here, using magnetoencephalography (MEG), we examined whether predictions of future auditory stimuli carry tonotopically specific information. Participants passively listened to sound sequences of four carrier frequencies (tones) with a fixed presentation rate, ensuring strong temporal expectations of when the next stimulus would occur. Expectation of which frequency would occur was parametrically modulated across the sequences, and sounds were occasionally omitted. We show that increasing the regularity of the sequence boosts carrier-frequency-specific neural activity patterns during both the anticipatory and omission periods, indicating that prediction-related neural activity is indeed feature-specific. Our results illustrate that even without bottom-up input, auditory predictions can activate tonotopically specific templates.

After listening to a predictable sequence of sounds, we can anticipate the next sound in the sequence. Here, the authors show that during expectation of a sound, the brain generates neural activity matching that which is produced by actually hearing the same sound.
Affiliation(s)
- Gianpaolo Demarchi
- Centre for Cognitive Neuroscience and Division of Physiological Psychology, University of Salzburg, Hellbrunnerstraße 34, 5020, Salzburg, Austria.
- Gaëtan Sanchez
- Centre for Cognitive Neuroscience and Division of Physiological Psychology, University of Salzburg, Hellbrunnerstraße 34, 5020, Salzburg, Austria
- Lyon Neuroscience Research Center, Brain Dynamics and Cognition Team, INSERM UMRS 1028, CNRS UMR 5292, Université Claude Bernard Lyon 1, Université de Lyon, F-69000, Lyon, France
- Nathan Weisz
- Centre for Cognitive Neuroscience and Division of Physiological Psychology, University of Salzburg, Hellbrunnerstraße 34, 5020, Salzburg, Austria
26
Sandhaeger F, von Nicolai C, Miller EK, Siegel M. Monkey EEG links neuronal color and motion information across species and scales. eLife 2019; 8:e45645. PMID: 31287792; PMCID: PMC6615858; DOI: 10.7554/elife.45645.
Abstract
It remains challenging to relate EEG and MEG to underlying circuit processes, and comparable experiments on both spatial scales are rare. To close this gap between invasive and non-invasive electrophysiology, we developed and recorded human-comparable EEG in macaque monkeys during visual stimulation with colored dynamic random dot patterns. Furthermore, we performed simultaneous microelectrode recordings from six areas of macaque cortex and human MEG. Motion direction and color information were accessible in all signals. Tuning of the non-invasive signals was similar to that of V4 and IT, but not of dorsal and frontal areas. Thus, MEG and EEG were dominated by early visual and ventral stream sources. Source-level analysis revealed corresponding information and latency gradients across cortex. We show how information-based methods and monkey EEG can identify analogous properties of visual processing in signals spanning spatial scales from single units to MEG, a valuable framework for relating human and animal studies.
Affiliation(s)
- Florian Sandhaeger
- Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- MEG Center, University of Tübingen, Tübingen, Germany
- IMPRS for Cognitive and Systems Neuroscience, University of Tübingen, Tübingen, Germany
- Constantin von Nicolai
- Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- MEG Center, University of Tübingen, Tübingen, Germany
- Earl K Miller
- The Picower Institute for Learning and Memory and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
- Markus Siegel
- Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- MEG Center, University of Tübingen, Tübingen, Germany
27
Edelman BJ, Meng J, Suma D, Zurn C, Nagarajan E, Baxter BS, Cline CC, He B. Noninvasive neuroimaging enhances continuous neural tracking for robotic device control. Sci Robot 2019; 4:eaaw6844. PMID: 31656937; PMCID: PMC6814169; DOI: 10.1126/scirobotics.aaw6844.
Abstract
Brain-computer interfaces (BCIs) utilizing signals acquired with intracortical implants have achieved successful high-dimensional robotic device control useful for completing daily tasks. However, the substantial amount of medical and surgical expertise required to correctly implant and operate these systems significantly limits their use beyond a few clinical cases. A noninvasive counterpart requiring less intervention that can provide high-quality control would profoundly impact the integration of BCIs into the clinical and home setting. Here, we present and validate a noninvasive framework utilizing electroencephalography (EEG) to achieve the neural control of a robotic device for continuous random target tracking. This framework addresses and improves upon both the "brain" and "computer" components by respectively increasing user engagement through a continuous pursuit task and associated training paradigm, and the spatial resolution of noninvasive neural data through EEG source imaging. In all, our unique framework enhanced BCI learning by nearly 60% for traditional center-out tasks and by over 500% in the more realistic continuous pursuit task. We further demonstrated an additional enhancement in BCI control of almost 10% by using online noninvasive neuroimaging. Finally, this framework was deployed in a physical task, demonstrating a near-seamless transition from the control of an unconstrained virtual cursor to the real-time control of a robotic arm. Such combined advances in the quality of neural decoding and the practical utility of noninvasive robotic arm control will have major implications for the eventual development and implementation of neurorobotics by means of noninvasive BCI.
Affiliation(s)
- B. J. Edelman
- Department of Biomedical Engineering, University of Minnesota, Minneapolis, MN 55455, USA
- J. Meng
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- D. Suma
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- C. Zurn
- Department of Biomedical Engineering, University of Minnesota, Minneapolis, MN 55455, USA
- E. Nagarajan
- Department of Neuroscience, University of Minnesota, Minneapolis, MN 55455, USA
- B. S. Baxter
- Department of Biomedical Engineering, University of Minnesota, Minneapolis, MN 55455, USA
- C. C. Cline
- Department of Biomedical Engineering, University of Minnesota, Minneapolis, MN 55455, USA
- B. He
- Department of Biomedical Engineering, University of Minnesota, Minneapolis, MN 55455, USA
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
28
Dijkstra N, Bosch SE, van Gerven MA. Shared Neural Mechanisms of Visual Perception and Imagery. Trends Cogn Sci 2019; 23:423-434. DOI: 10.1016/j.tics.2019.02.004.
29
Tang MF, Smout CA, Arabzadeh E, Mattingley JB. Prediction error and repetition suppression have distinct effects on neural representations of visual information. eLife 2018; 7:33123. PMID: 30547881; PMCID: PMC6312401; DOI: 10.7554/elife.33123.
Abstract
Predictive coding theories argue that recent experience establishes expectations in the brain that generate prediction errors when violated. Prediction errors provide a possible explanation for repetition suppression, where evoked neural activity is attenuated across repeated presentations of the same stimulus. The predictive coding account argues that repetition suppression arises because repeated stimuli are expected, whereas non-repeated stimuli are unexpected and thus elicit larger neural responses. Here, we employed electroencephalography in humans to test the predictive coding account of repetition suppression by presenting sequences of visual gratings with orientations that were expected either to repeat or to change in separate blocks of trials. We applied multivariate forward modelling to determine how orientation selectivity was affected by repetition and prediction. Unexpected stimuli were associated with significantly enhanced orientation selectivity, whereas selectivity was unaffected for repeated stimuli. Our results suggest that repetition suppression and expectation have separable effects on neural representations of visual feature information.
Affiliation(s)
- Matthew F Tang
- Queensland Brain Institute, The University of Queensland, St Lucia, Australia
- Australian Research Council Centre of Excellence for Integrative Brain Function, Victoria, Australia
- Cooper A Smout
- Queensland Brain Institute, The University of Queensland, St Lucia, Australia
- Australian Research Council Centre of Excellence for Integrative Brain Function, Victoria, Australia
- Ehsan Arabzadeh
- Australian Research Council Centre of Excellence for Integrative Brain Function, Victoria, Australia
- Eccles Institute of Neuroscience, John Curtin School of Medical Research, The Australian National University, Canberra, Australia
- Jason B Mattingley
- Queensland Brain Institute, The University of Queensland, St Lucia, Australia
- Australian Research Council Centre of Excellence for Integrative Brain Function, Victoria, Australia
- School of Psychology, The University of Queensland, St Lucia, Australia
30
Ghosts in machine learning for cognitive neuroscience: Moving from data to theory. Neuroimage 2018; 180:88-100. DOI: 10.1016/j.neuroimage.2017.08.019.
31
Dima DC, Perry G, Messaritaki E, Zhang J, Singh KD. Spatiotemporal dynamics in human visual cortex rapidly encode the emotional content of faces. Hum Brain Mapp 2018; 39:3993-4006. PMID: 29885055; PMCID: PMC6175429; DOI: 10.1002/hbm.24226.
Abstract
Recognizing emotion in faces is important in human interaction and survival, yet existing studies do not paint a consistent picture of the neural representation supporting this task. To address this, we collected magnetoencephalography (MEG) data while participants passively viewed happy, angry and neutral faces. Using time-resolved decoding of sensor-level data, we show that responses to angry faces can be discriminated from happy and neutral faces as early as 90 ms after stimulus onset and only 10 ms later than faces can be discriminated from scrambled stimuli, even in the absence of differences in evoked responses. Time-resolved relevance patterns in source space track expression-related information from the visual cortex (100 ms) to higher-level temporal and frontal areas (200-500 ms). Together, our results point to a system optimised for rapid processing of emotional faces and preferentially tuned to threat, consistent with the important evolutionary role that such a system must have played in the development of human social interactions.
Affiliation(s)
- Diana C. Dima
- Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Cardiff, CF24 4HQ, United Kingdom
- Gavin Perry
- Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Cardiff, CF24 4HQ, United Kingdom
- Eirini Messaritaki
- BRAIN Unit, School of Medicine, Cardiff University, Cardiff, CF24 4HQ, United Kingdom
- Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Cardiff, CF24 4HQ, United Kingdom
- Jiaxiang Zhang
- Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Cardiff, CF24 4HQ, United Kingdom
- Krish D. Singh
- Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Cardiff, CF24 4HQ, United Kingdom
32
Malavita MS, Vidyasagar TR, McKendrick AM. Eccentricity dependence of orientation anisotropy of surround suppression of contrast-detection threshold. J Vis 2018; 18:5. PMID: 30029269; DOI: 10.1167/18.7.5.
Abstract
Both neurophysiological and psychophysical data provide evidence for orientation biases in nonfoveal vision: specifically, a tendency for a Cartesian horizontal and vertical bias close to fixation, changing to a radial bias with increasing retinal eccentricity. We explore whether the strength of surround suppression of contrast detection also depends on retinotopic location and relative surround configuration (horizontal, vertical, radial, tangential) in parafoveal vision. Three visual-field locations were tested (0°, 225°, and 270°, angle increasing anticlockwise from the 0° horizontal axis) at viewing eccentricities of 6° and 15°. Contrast-detection threshold was estimated with and without a surrounding annulus. At 6° eccentricity, horizontally oriented parallel center-surround (C-S) configurations resulted in greater surround suppression compared to vertically oriented parallel center-surround configurations (p = 0.001). At 15° eccentricity, radially oriented parallel center-surround stimuli conferred greater suppression than tangentially oriented stimuli (p = 0.027). Parallel surrounds resulted in greater suppression than orthogonal surrounds at both eccentricities (p < 0.05). At 6° the horizontal center was more susceptible to suppression than a vertical center (p < 0.001) for both parallel and orthogonal surrounds, while at 15° a radial center was more susceptible to suppression (relative to a tangential center), but only if the surround was parallel (p = 0.005). Our data show that orientation anisotropy of surround suppression alters with eccentricity, reflecting a link between suppression strength and visual-field retinotopy.
Affiliation(s)
- Menaka S Malavita
- Department of Optometry and Vision Sciences, The University of Melbourne, Parkville, Victoria, Australia
- Trichur R Vidyasagar
- Department of Optometry and Vision Sciences, The University of Melbourne, Parkville, Victoria, Australia
- Allison M McKendrick
- Department of Optometry and Vision Sciences, The University of Melbourne, Parkville, Victoria, Australia
33
Roth ZN, Heeger DJ, Merriam EP. Stimulus vignetting and orientation selectivity in human visual cortex. eLife 2018; 7:e37241. PMID: 30106372; PMCID: PMC6092116; DOI: 10.7554/elife.37241.
Abstract
Neural selectivity to orientation is one of the simplest and most thoroughly studied cortical sensory features. Here, we show that a large body of research that purported to measure orientation tuning may have in fact been inadvertently measuring sensitivity to second-order changes in luminance, a phenomenon we term 'vignetting'. Using a computational model of neural responses in primary visual cortex (V1), we demonstrate the impact of vignetting on simulated V1 responses. We then used the model to generate a set of predictions, which we confirmed with functional MRI experiments in human observers. Our results demonstrate that stimulus vignetting can wholly determine the orientation selectivity of responses in visual cortex measured at a macroscopic scale, and suggest a reinterpretation of a well-established literature on orientation processing in visual cortex.
Affiliation(s)
- Zvi N Roth
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
- David J Heeger
- Department of Psychology, New York University, New York, United States
- Center for Neural Science, New York University, New York, United States
- Elisha P Merriam
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
34
van Ede F, Chekroud SR, Stokes MG, Nobre AC. Decoding the influence of anticipatory states on visual perception in the presence of temporal distractors. Nat Commun 2018; 9:1449. PMID: 29654312; PMCID: PMC5899132; DOI: 10.1038/s41467-018-03960-z.
Abstract
Anticipatory states help prioritise relevant perceptual targets over competing distractor stimuli and amplify early brain responses to these targets. Here we combine electroencephalography recordings in humans with multivariate stimulus decoding to address whether anticipation also increases the amount of target identity information contained in these responses, and to ask how targets are prioritised over distractors when these compete in time. We show that anticipatory cues not only boost visual target representations, but also delay the interference on these target representations caused by temporally adjacent distractor stimuli, possibly marking a protective window reserved for high-fidelity target processing. Enhanced target decoding and distractor resistance are further predicted by the attenuation of posterior 8-14 Hz alpha oscillations. These findings thus reveal multiple mechanisms by which anticipatory states help prioritise targets from temporally competing distractors, and they highlight the potential of non-invasive multivariate electrophysiology to track cognitive influences on perception in temporally crowded contexts.

Anticipation helps to prioritise the processing of task-relevant sensory targets over irrelevant distractors. Here the authors analyse visual EEG responses and show that anticipation may do so by enhancing the neural representation of the target and by delaying the interference caused by distractors that follow closely in time.
Affiliation(s)
- Freek van Ede
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, Warneford Hospital, University of Oxford, Oxford, OX3 7JX, UK.
- Sammi R Chekroud
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, Warneford Hospital, University of Oxford, Oxford, OX3 7JX, UK
- Mark G Stokes
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, Warneford Hospital, University of Oxford, Oxford, OX3 7JX, UK
- Department of Experimental Psychology, University of Oxford, Anna Watts Building, Radcliffe Observatory Quarter, Woodstock Road, Oxford, OX2 6GG, UK
- Anna C Nobre
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, Warneford Hospital, University of Oxford, Oxford, OX3 7JX, UK
- Department of Experimental Psychology, University of Oxford, Anna Watts Building, Radcliffe Observatory Quarter, Woodstock Road, Oxford, OX2 6GG, UK
35
Treder MS. Improving SNR and Reducing Training Time of Classifiers in Large Datasets via Kernel Averaging. Brain Inform 2018. DOI: 10.1007/978-3-030-05587-5_23.
36
Abstract
Perception can be described as a process of inference, integrating bottom-up sensory inputs and top-down expectations. However, it is unclear how this process is neurally implemented. It has been proposed that expectations lead to prestimulus baseline increases in sensory neurons tuned to the expected stimulus, which, in turn, affect the processing of subsequent stimuli. Recent fMRI studies have revealed stimulus-specific patterns of activation in sensory cortex as a result of expectation, but this method lacks the temporal resolution necessary to distinguish pre- from poststimulus processes. Here, we combined human magnetoencephalography (MEG) with multivariate decoding techniques to probe the representational content of neural signals in a time-resolved manner. We observed a representation of expected stimuli in the neural signal shortly before they were presented, showing that expectations indeed induce a preactivation of stimulus templates. The strength of these prestimulus expectation templates correlated with participants' behavioral improvement when the expected feature was task-relevant. These results suggest a mechanism for how predictive perception can be neurally implemented.
37
Pantazis D, Fang M, Qin S, Mohsenzadeh Y, Li Q, Cichy RM. Decoding the orientation of contrast edges from MEG evoked and induced responses. Neuroimage 2017; 180:267-279. PMID: 28712993; DOI: 10.1016/j.neuroimage.2017.07.022.
Abstract
Visual gamma oscillations have been proposed to subserve perceptual binding, but their strong modulation by diverse stimulus features confounds interpretations of their precise functional role. Overcoming this challenge necessitates a comprehensive account of the relationship between gamma responses and stimulus features. Here we used multivariate pattern analyses on human MEG data to characterize the relationships between gamma responses and one basic stimulus feature, the orientation of contrast edges. Our findings confirmed that we could decode orientation information from induced responses in two dominant frequency bands at 24-32 Hz and 50-58 Hz. Decoding was higher for cardinal than oblique orientations, with similar results also obtained for evoked MEG responses. In contrast to the multivariate analyses, orientation information was mostly absent in univariate signals: evoked and induced responses in early visual cortex were similar for all orientations, the only exception being an inverse oblique effect in induced responses, such that cardinal orientations produced weaker oscillatory signals than oblique orientations. Taken together, our results show that multivariate methods are well suited for the analysis of gamma oscillations, with multivariate patterns robustly encoding orientation information and predominantly discriminating cardinal from oblique stimuli.
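The induced-response decoding described here rests on band-limited oscillatory power as a classification feature. As a hedged illustration (not the authors' pipeline), the sketch below builds synthetic one-channel traces in which one stimulus class carries stronger power in the 50-58 Hz band the abstract mentions, extracts summed DFT power in that band, and classifies by a threshold learned from training trials; the sampling rate, amplitudes, noise level, and threshold classifier are all assumptions made for the example.

```python
import math
import random

random.seed(1)
FS = 200          # assumed sampling rate in Hz
N = 200           # 1 s of data -> 1 Hz frequency resolution

def band_power(sig, f_lo, f_hi):
    """Summed DFT power over integer frequencies f_lo..f_hi (inclusive)."""
    p = 0.0
    for f in range(f_lo, f_hi + 1):
        re = sum(s * math.cos(2 * math.pi * f * i / FS) for i, s in enumerate(sig))
        im = sum(s * math.sin(2 * math.pi * f * i / FS) for i, s in enumerate(sig))
        p += (re * re + im * im) / len(sig)
    return p

def make_trial(cardinal):
    """Synthetic sensor trace: 'cardinal' trials carry stronger ~54 Hz power."""
    amp = 1.0 if cardinal else 0.4
    phase = random.uniform(0, 2 * math.pi)
    return [amp * math.sin(2 * math.pi * 54 * i / FS + phase) + random.gauss(0, 0.3)
            for i in range(N)]

train = [(make_trial(c), c) for c in [True, False] * 20]
test = [(make_trial(c), c) for c in [True, False] * 10]

# Learn a decision threshold on 50-58 Hz power from the training trials.
p_card = [band_power(s, 50, 58) for s, c in train if c]
p_obli = [band_power(s, 50, 58) for s, c in train if not c]
thr = (sum(p_card) / len(p_card) + sum(p_obli) / len(p_obli)) / 2

# Classify held-out trials by comparing their band power to the threshold.
acc = sum((band_power(s, 50, 58) > thr) == c for s, c in test) / len(test)
print(round(acc, 2))
```

With a 1 s window the DFT bins fall on integer frequencies, so the 54 Hz component lands cleanly inside the 50-58 Hz sum and the two classes separate easily; real MEG decoding works on far noisier multichannel features, but the feature-extraction logic is the same.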
Affiliation(s)
- Dimitrios Pantazis
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA.
- Mingtong Fang
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Sheng Qin
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Yalda Mohsenzadeh
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Quanzheng Li
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
38
Cichy RM, Khosla A, Pantazis D, Oliva A. Dynamics of scene representations in the human brain revealed by magnetoencephalography and deep neural networks. Neuroimage 2017; 153:346-358. PMID: 27039703; PMCID: PMC5542416; DOI: 10.1016/j.neuroimage.2016.03.063.
Abstract
Human scene recognition is a rapid multistep process evolving over time from single scene image to spatial layout processing. We used multivariate pattern analyses on magnetoencephalography (MEG) data to unravel the time course of this cortical process. Following an early signal for lower-level visual analysis of single scenes at ~100 ms, we found a marker of real-world scene size, i.e. spatial layout processing, at ~250 ms, indexing neural representations robust to changes in unrelated scene properties and viewing conditions. For a quantitative model of how scene size representations may arise in the brain, we compared MEG data to a deep neural network model trained on scene classification. Representations of scene size emerged intrinsically in the model and resolved the emerging neural scene size representation. Together our data provide a first description of an electrophysiological signal for layout processing in humans, and suggest that deep neural networks are a promising framework to investigate how spatial layout representations emerge in the human brain.
Affiliation(s)
- Radoslaw Martin Cichy
- Department of Education and Psychology, Free University Berlin, Berlin, Germany; Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA.
- Aditya Khosla
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- Aude Oliva
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
39
Bueno FD, Morita VC, de Camargo RY, Reyes MB, Caetano MS, Cravo AM. Dynamic representation of time in brain states. Sci Rep 2017; 7:46053. PMID: 28393850; PMCID: PMC5385543; DOI: 10.1038/srep46053.
Abstract
The ability to process time on the scale of milliseconds and seconds is essential for behaviour. A growing number of studies have started to focus on brain dynamics as a mechanism for temporal encoding. Although there is growing evidence in favour of this view from computational and in vitro studies, there is still a lack of results from experiments in humans. We show that high-dimensional brain states revealed by multivariate pattern analysis of human EEG are correlated to temporal judgements. First, we show that, as participants estimate temporal intervals, the spatiotemporal dynamics of their brain activity are consistent across trials. Second, we present evidence that these dynamics exhibit properties of temporal perception, such as scale invariance. Lastly, we show that it is possible to predict temporal judgements based on brain states. These results show how scalp recordings can reveal the spatiotemporal dynamics of human brain activity related to temporal processing.
Affiliation(s)
- Fernanda Dantas Bueno
- Centro de Matemática, Computação e Cognição, Universidade Federal do ABC (UFABC), Rua Santa Adélia, 166, Santo André - SP - 09210-170, Brasil
- Vanessa C Morita
- Centro de Matemática, Computação e Cognição, Universidade Federal do ABC (UFABC), Rua Santa Adélia, 166, Santo André - SP - 09210-170, Brasil
- Raphael Y de Camargo
- Centro de Matemática, Computação e Cognição, Universidade Federal do ABC (UFABC), Rua Santa Adélia, 166, Santo André - SP - 09210-170, Brasil
- Marcelo B Reyes
- Centro de Matemática, Computação e Cognição, Universidade Federal do ABC (UFABC), Rua Santa Adélia, 166, Santo André - SP - 09210-170, Brasil
- Marcelo S Caetano
- Centro de Matemática, Computação e Cognição, Universidade Federal do ABC (UFABC), Rua Santa Adélia, 166, Santo André - SP - 09210-170, Brasil
- André M Cravo
- Centro de Matemática, Computação e Cognição, Universidade Federal do ABC (UFABC), Rua Santa Adélia, 166, Santo André - SP - 09210-170, Brasil
40
Grootswagers T, Wardle SG, Carlson TA. Decoding Dynamic Brain Patterns from Evoked Responses: A Tutorial on Multivariate Pattern Analysis Applied to Time Series Neuroimaging Data. J Cogn Neurosci 2017; 29:677-697. DOI: 10.1162/jocn_a_01068.
Abstract
Multivariate pattern analysis (MVPA) or brain decoding methods have become standard practice in analyzing fMRI data. Although decoding methods have been extensively applied in brain-computer interfaces, these methods have only recently been applied to time series neuroimaging data such as MEG and EEG to address experimental questions in cognitive neuroscience. In a tutorial-style review, we describe a broad set of options to inform future time series decoding studies from a cognitive neuroscience perspective. Using example MEG data, we illustrate the effects that different options in the decoding analysis pipeline can have on experimental results where the aim is to "decode" different perceptual stimuli or cognitive states over time from dynamic brain activation patterns. We show that decisions made at both preprocessing (e.g., dimensionality reduction, subsampling, trial averaging) and decoding (e.g., classifier selection, cross-validation design) stages of the analysis can significantly affect the results. In addition to standard decoding, we describe extensions to MVPA for time-varying neuroimaging data including representational similarity analysis, temporal generalization, and the interpretation of classifier weight maps. Finally, we outline important caveats in the design and interpretation of time series decoding experiments.
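Two of the pipeline choices this tutorial discusses, averaging trials into pseudo-trials before decoding and k-fold cross-validation, can be sketched in a few lines. The toy below is not the authors' code: it uses a one-dimensional feature and a nearest-class-mean classifier, and the class means, noise level, and group size are illustrative assumptions.

```python
import random

random.seed(2)

def make_trials(n_per_class, noise=3.0):
    """One-dimensional feature: class means at -1 / +1, buried in noise."""
    data = []
    for label in (0, 1):
        mu = -1.0 if label == 0 else 1.0
        data += [(random.gauss(mu, noise), label) for _ in range(n_per_class)]
    return data

def average_trials(data, k):
    """Average non-overlapping groups of k same-class trials into pseudo-trials."""
    out = []
    for label in (0, 1):
        xs = [x for x, y in data if y == label]
        out += [(sum(xs[i:i + k]) / k, label) for i in range(0, len(xs) - k + 1, k)]
    return out

def cv_accuracy(data, n_folds=5):
    """k-fold cross-validation of a nearest-class-mean classifier."""
    data = list(data)
    random.shuffle(data)
    folds = [data[i::n_folds] for i in range(n_folds)]
    accs = []
    for f in range(n_folds):
        held_out = folds[f]
        train = [d for g in range(n_folds) if g != f for d in folds[g]]
        m0 = sum(x for x, y in train if y == 0) / sum(1 for _, y in train if y == 0)
        m1 = sum(x for x, y in train if y == 1) / sum(1 for _, y in train if y == 1)
        accs.append(sum((abs(x - m1) < abs(x - m0)) == (y == 1)
                        for x, y in held_out) / len(held_out))
    return sum(accs) / n_folds

raw = make_trials(200)
acc_raw = cv_accuracy(raw)
acc_avg = cv_accuracy(average_trials(raw, k=8))  # averaging boosts SNR
print(round(acc_raw, 2), round(acc_avg, 2))
```

Averaging k trials shrinks the noise standard deviation by roughly √k, so the cross-validated accuracy on the averaged pseudo-trials is typically well above the single-trial accuracy, at the cost of fewer samples, which is exactly the trade-off the tutorial asks readers to weigh.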
Affiliation(s)
- Tijl Grootswagers: Macquarie University, Sydney, Australia; ARC Centre of Excellence in Cognition and its Disorders; University of Sydney
- Susan G. Wardle: Macquarie University, Sydney, Australia; ARC Centre of Excellence in Cognition and its Disorders
- Thomas A. Carlson: ARC Centre of Excellence in Cognition and its Disorders; University of Sydney

41
Hearing Scenes: A Neuromagnetic Signature of Auditory Source and Reverberant Space Separation. eNeuro 2017; 4:eN-NWR-0007-17. [PMID: 28451630 PMCID: PMC5394928 DOI: 10.1523/eneuro.0007-17.2017]
Abstract
Perceiving the geometry of surrounding space is a multisensory process, crucial to contextualizing object perception and guiding navigation behavior. Humans can make judgments about surrounding spaces from reverberation cues, caused by sounds reflecting off multiple interior surfaces. However, it remains unclear how the brain represents reverberant spaces separately from sound sources. Here, we report separable neural signatures of auditory space and source perception during magnetoencephalography (MEG) recording as subjects listened to brief sounds convolved with monaural room impulse responses (RIRs). The decoding signature of sound sources began at 57 ms after stimulus onset and peaked at 130 ms, while space decoding started at 138 ms and peaked at 386 ms. Importantly, these neuromagnetic responses were readily dissociable in form and time: while sound source decoding exhibited an early and transient response, the neural signature of space was sustained and independent of the original source that produced it. The reverberant space response was robust to variations in sound source, and vice versa, indicating a generalized response not tied to specific source-space combinations. These results provide the first neuromagnetic evidence for robust, dissociable auditory source and reverberant space representations in the human brain and reveal the temporal dynamics of how auditory scene analysis extracts percepts from complex naturalistic auditory signals.
42
Magnetoencephalography for brain electrophysiology and imaging. Nat Neurosci 2017; 20:327-339. [DOI: 10.1038/nn.4504]
43
Contini EW, Wardle SG, Carlson TA. Decoding the time-course of object recognition in the human brain: From visual features to categorical decisions. Neuropsychologia 2017; 105:165-176. [PMID: 28215698 DOI: 10.1016/j.neuropsychologia.2017.02.013]
Abstract
Visual object recognition is a complex, dynamic process. Multivariate pattern analysis methods, such as decoding, have begun to reveal how the brain processes complex visual information. Recently, temporal decoding methods for EEG and MEG have offered the potential to evaluate the temporal dynamics of object recognition. Here we review the contribution of M/EEG time-series decoding methods to understanding visual object recognition in the human brain. Consistent with the current understanding of the visual processing hierarchy, low-level visual features dominate decodable object representations early in the time-course, with more abstract representations related to object category emerging later. A key finding is that the time-course of object processing is highly dynamic and rapidly evolving, with limited temporal generalisation of decodable information. Several studies have examined the emergence of object category structure, and we consider to what degree category decoding can be explained by sensitivity to low-level visual features. Finally, we evaluate recent work attempting to link human behaviour to the neural time-course of object processing.
Affiliation(s)
- Erika W Contini: Department of Cognitive Science, Macquarie University, Sydney, Australia; ARC Centre of Excellence in Cognition and its Disorders and Perception in Action Research Centre, Macquarie University, Australia
- Susan G Wardle: Department of Cognitive Science, Macquarie University, Sydney, Australia; ARC Centre of Excellence in Cognition and its Disorders and Perception in Action Research Centre, Macquarie University, Australia
- Thomas A Carlson: Department of Cognitive Science, Macquarie University, Sydney, Australia; ARC Centre of Excellence in Cognition and its Disorders and Perception in Action Research Centre, Macquarie University, Australia; School of Psychology, University of Sydney, Australia

44
Baker DH. Decoding eye-of-origin outside of awareness. Neuroimage 2017; 147:89-96. [PMID: 27940075 DOI: 10.1016/j.neuroimage.2016.12.008]
Abstract
In the primary visual cortex of many mammals, ocular dominance columns segregate information from the two eyes. Yet under controlled conditions, most human observers are unable to correctly report the eye to which a stimulus has been shown, indicating that this information is lost during subsequent processing. This study investigates whether eye-of-origin information is available in the pattern of electrophysiological activity evoked by visual stimuli, recorded using EEG and decoded using multivariate pattern analysis. Observers (N=24) viewed sine-wave grating and plaid stimuli of different orientations, shown to either the left or right eye (or both). Using a support vector machine, eye-of-origin could be decoded above chance at around 140 and 220ms post stimulus onset, yet observers were at chance for reporting this information. Other stimulus features, such as binocularity, orientation, spatial pattern, and the presence of interocular conflict (i.e. rivalry), could also be decoded using the same techniques, though all of these were perceptually discriminable above chance. A control analysis found no evidence to support the possibility that eye dominance was responsible for the eye-of-origin effects. These results support a structural explanation for multivariate decoding of electrophysiological signals - information organised in cortical columns can be decoded, even when observers are unaware of this information.
45
Philips RT, Chakravarthy VS. A Global Orientation Map in the Primary Visual Cortex (V1): Could a Self Organizing Model Reveal Its Hidden Bias? Front Neural Circuits 2017; 10:109. [PMID: 28111542 PMCID: PMC5216665 DOI: 10.3389/fncir.2016.00109]
Abstract
A remarkable accomplishment of self organizing models is their ability to simulate the development of feature maps in the cortex. Additionally, these models have been trained to tease out the differential causes of multiple feature maps, mapped on to the same output space. Recently, a Laterally Interconnected Synergetically Self Organizing Map (LISSOM) model has been used to simulate the mapping of eccentricity and meridional angle onto orthogonal axes in the primary visual cortex (V1). This model is further probed to simulate the development of the radial bias in V1, using a training set that consists of both radial (rectangular bars of random size and orientation) as well as non-radial stimuli. The radial bias describes the preference of the visual system toward orientations that match the angular position (meridional angle) of that orientation with respect to the point of fixation. Recent fMRI results have shown that there exists a coarse scale orientation map in V1, which resembles the meridional angle map, thereby providing a plausible neural basis for the radial bias. The LISSOM model, trained for the development of the retinotopic map, on probing for orientation preference, exhibits a coarse scale orientation map, consistent with these experimental results, quantified using the circular cross correlation (rc). The rc between the orientation map developed on probing with a thin annular ring containing sinusoidal gratings with a spatial frequency of 0.5 cycles per degree (cpd) and the corresponding meridional map for the same annular ring, has a value of 0.8894. The results also suggest that the radial bias goes beyond the current understanding of a node to node correlation between the two maps.
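One common circular-circular correlation statistic (Fisher-Lee) can illustrate the kind of rc comparison made here between an orientation map and a meridional-angle map. This is a sketch only: the paper's exact definition of rc may differ, and the angle data below are simulated rather than drawn from a LISSOM map.

```python
import numpy as np

def circular_corr(a, b):
    """Fisher-Lee circular correlation between two arrays of angles (radians)."""
    da = np.sin(a - np.angle(np.exp(1j * a).mean()))  # deviation from circular mean
    db = np.sin(b - np.angle(np.exp(1j * b).mean()))
    return (da * db).sum() / np.sqrt((da ** 2).sum() * (db ** 2).sum())

rng = np.random.default_rng(1)
# Orientation preference is 180-degree periodic, so angles are doubled before
# comparison (a standard trick for axial data).
meridional = rng.uniform(0.0, np.pi, 500)              # map 1: angles in [0, pi)
orientation = meridional + rng.normal(0.0, 0.2, 500)   # map 2: map 1 plus noise
rc = circular_corr(2.0 * meridional, 2.0 * orientation)
```

With the noisy but matched maps above, rc comes out well below 1 but clearly positive, mirroring the coarse map-to-map correspondence the study reports.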
Affiliation(s)
- Ryan T Philips: Computational Neuroscience Laboratory, Department of Biotechnology, Indian Institute of Technology Madras, Chennai, India
- V Srinivasa Chakravarthy: Computational Neuroscience Laboratory, Department of Biotechnology, Indian Institute of Technology Madras, Chennai, India

46
High-resolution retinotopic maps estimated with magnetoencephalography. Neuroimage 2017; 145:107-117. [DOI: 10.1016/j.neuroimage.2016.10.017]
47
Spatiotemporal dynamics of similarity-based neural representations of facial identity. Proc Natl Acad Sci U S A 2016; 114:388-393. [PMID: 28028220 DOI: 10.1073/pnas.1614763114]
Abstract
Humans' remarkable ability to quickly and accurately discriminate among thousands of highly similar complex objects demands rapid and precise neural computations. To elucidate the process by which this is achieved, we used magnetoencephalography to measure spatiotemporal patterns of neural activity with high temporal resolution during visual discrimination among a large and carefully controlled set of faces. We also compared these neural data to lower level "image-based" and higher level "identity-based" model-based representations of our stimuli and to behavioral similarity judgments of our stimuli. Between ∼50 and 400 ms after stimulus onset, face-selective sources in right lateral occipital cortex and right fusiform gyrus and sources in a control region (left V1) yielded successful classification of facial identity. In all regions, early responses were more similar to the image-based representation than to the identity-based representation. In the face-selective regions only, responses were more similar to the identity-based representation at several time points after 200 ms. Behavioral responses were more similar to the identity-based representation than to the image-based representation, and their structure was predicted by responses in the face-selective regions. These results provide a temporally precise description of the transformation from low- to high-level representations of facial identity in human face-selective cortex and demonstrate that face-selective cortical regions represent multiple distinct types of information about face identity at different times over the first 500 ms after stimulus onset. These results have important implications for understanding the rapid emergence of fine-grained, high-level representations of object identity, a computation essential to human visual expertise.
48
Edge-Related Activity Is Not Necessary to Explain Orientation Decoding in Human Visual Cortex. J Neurosci 2016; 37:1187-1196. [PMID: 28003346 DOI: 10.1523/jneurosci.2690-16.2016]
Abstract
Multivariate pattern analysis is a powerful technique; however, a significant theoretical limitation in neuroscience is the ambiguity in interpreting the source of decodable information used by classifiers. This is exemplified by the continued controversy over the source of orientation decoding from fMRI responses in human V1. Recently Carlson (2014) identified a potential source of decodable information by modeling voxel responses based on the Hubel and Wiesel (1972) ice-cube model of visual cortex. The model revealed that activity associated with the edges of gratings covaries with orientation and could potentially be used to discriminate orientation. Here we empirically evaluate whether "edge-related activity" underlies orientation decoding from patterns of BOLD response in human V1. First, we systematically mapped classifier performance as a function of stimulus location using population receptive field modeling to isolate each voxel's overlap with a large annular grating stimulus. Orientation was decodable across the stimulus; however, peak decoding performance occurred for voxels with receptive fields closer to the fovea and overlapping with the inner edge. Critically, we did not observe the expected second peak in decoding performance at the outer stimulus edge as predicted by the edge account. Second, we evaluated whether voxels that contribute most to classifier performance have receptive fields that cluster in cortical regions corresponding to the retinotopic location of the stimulus edge. Instead, we find the distribution of highly weighted voxels to be approximately random, with a modest bias toward more foveal voxels. Our results demonstrate that edge-related activity is likely not necessary for orientation decoding.
SIGNIFICANCE STATEMENT
A significant theoretical limitation of multivariate pattern analysis in neuroscience is the ambiguity in interpreting the source of decodable information used by classifiers. For example, orientation can be decoded from BOLD activation patterns in human V1, even though orientation columns are at a finer spatial scale than 3T fMRI. Consequently, the source of decodable information remains controversial. Here we test the proposal that information related to the stimulus edges underlies orientation decoding. We map voxel population receptive fields in V1 and evaluate orientation decoding performance as a function of stimulus location in retinotopic cortex. We find orientation is decodable from voxels whose receptive fields do not overlap with the stimulus edges, suggesting edge-related activity does not substantially drive orientation decoding.
49
Soto JLP, Lachaux JP, Baillet S, Jerbi K. A multivariate method for estimating cross-frequency neuronal interactions and correcting linear mixing in MEG data, using canonical correlations. J Neurosci Methods 2016; 271:169-181. [PMID: 27468679 DOI: 10.1016/j.jneumeth.2016.07.017]
Abstract
BACKGROUND: Cross-frequency interactions between distinct brain areas have been observed in connection with a variety of cognitive tasks. With electro- and magnetoencephalography (EEG/MEG) data, typical connectivity measures between two brain regions analyze a single quantity from each region within a specific frequency band; given the wideband nature of EEG/MEG signals, many statistical tests may be required to identify true coupling. Furthermore, because of the poor spatial resolution of activity reconstructed from EEG/MEG, some interactions may actually be due to the linear mixing of brain sources.
NEW METHOD: In the present work, a method for the detection of cross-frequency functional connectivity in MEG data using canonical correlation analysis (CCA) is described. We demonstrate that CCA identifies correlated signals and also the frequencies that cause the correlation. We also implement a procedure to deal with linear mixing based on symmetry properties of cross-covariance matrices.
RESULTS: Our tests with both simulated and real MEG data demonstrate that CCA is able to detect interacting locations and the frequencies that cause them, while accurately discarding spurious coupling.
COMPARISON WITH EXISTING METHODS: Recent techniques look at time delays in the activity between two locations to discard spurious interactions, while we propose a linear mixing model and demonstrate its relationship with symmetry aspects of cross-covariance matrices.
CONCLUSIONS: Our tests indicate the benefits of the CCA approach in connectivity studies, as it allows the simultaneous evaluation of several possible combinations of cross-frequency interactions in a single statistical test.
Affiliation(s)
- Juan L P Soto: Department of Telecommunications and Control Engineering, University of São Paulo, São Paulo, Brazil
- Jean-Philippe Lachaux: Brain Dynamics and Cognition Team, Lyon Neuroscience Research Center, INSERM U1028 - CNRS UMR5292 - Lyon University, Lyon, France
- Sylvain Baillet: McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montreal, Canada
- Karim Jerbi: Department of Psychology, University of Montreal, Montreal, Canada

50
Comparison of deep neural networks to spatio-temporal cortical dynamics of human visual object recognition reveals hierarchical correspondence. Sci Rep 2016; 6:27755. [PMID: 27282108 PMCID: PMC4901271 DOI: 10.1038/srep27755]
Abstract
The complex multi-stage architecture of cortical visual pathways provides the neural basis for efficient visual object recognition in humans. However, the stage-wise computations therein remain poorly understood. Here, we compared temporal (magnetoencephalography) and spatial (functional MRI) visual brain representations with representations in an artificial deep neural network (DNN) tuned to the statistics of real-world visual recognition. We showed that the DNN captured the stages of human visual processing in both time and space from early visual areas towards the dorsal and ventral streams. Further investigation of crucial DNN parameters revealed that while model architecture was important, training on real-world categorization was necessary to enforce spatio-temporal hierarchical relationships with the brain. Together our results provide an algorithmically informed view on the spatio-temporal dynamics of visual object recognition in the human visual brain.
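Brain-DNN comparisons of this kind typically rely on representational similarity analysis: build a dissimilarity matrix over stimuli for each measurement, then correlate the matrices. A minimal sketch on simulated feature patterns (the dimensions and the link between "brain" and "DNN" data below are fabricated for illustration, not the authors' pipeline):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_stimuli = 20

# Hypothetical responses to the same stimuli in a DNN layer and in the brain
# (e.g., an MEG sensor pattern at one timepoint); purely simulated here.
dnn_features = rng.standard_normal((n_stimuli, 100))
brain_patterns = dnn_features[:, :50] + rng.standard_normal((n_stimuli, 50))

# Representational dissimilarity: pairwise correlation distance between stimuli.
rdm_dnn = pdist(dnn_features, metric="correlation")
rdm_brain = pdist(brain_patterns, metric="correlation")

# RSA score: rank correlation between the two (vectorized) RDMs.
rho, _ = spearmanr(rdm_dnn, rdm_brain)
```

Repeating this per DNN layer and per MEG timepoint (or fMRI region) yields the layer-by-time correspondence maps that underpin the hierarchical claim in the abstract.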