1. Lin B, Kriegeskorte N. The topology and geometry of neural representations. Proc Natl Acad Sci U S A 2024; 121:e2317881121. PMID: 39374397; PMCID: PMC11494346; DOI: 10.1073/pnas.2317881121.
Abstract
A central question for neuroscience is how to characterize brain representations of perceptual and cognitive content. An ideal characterization should distinguish different functional regions with robustness to noise and idiosyncrasies of individual brains that do not correspond to computational differences. Previous studies have characterized brain representations by their representational geometry, which is defined by the representational dissimilarity matrix (RDM), a summary statistic that abstracts from the roles of individual neurons (or response channels) and characterizes the discriminability of stimuli. Here, we explore a further step of abstraction: from the geometry to the topology of brain representations. We propose topological representational similarity analysis, an extension of representational similarity analysis that uses a family of geotopological summary statistics that generalizes the RDM to characterize the topology while de-emphasizing the geometry. We evaluate this family of statistics in terms of the sensitivity and specificity for model selection using both simulations and functional MRI (fMRI) data. In the simulations, the ground truth is a data-generating layer representation in a neural network model and the models are the same and other layers in different model instances (trained from different random seeds). In fMRI, the ground truth is a visual area and the models are the same and other areas measured in different subjects. Results show that topology-sensitive characterizations of population codes are robust to noise and interindividual variability and maintain excellent sensitivity to the unique representational signatures of different neural network layers and brain regions.
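As context for the representational-geometry starting point described in this abstract, here is a minimal sketch (not the authors' geo-topological statistics) of computing and comparing RDMs, assuming a hypothetical `responses` array of condition-by-channel patterns:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

# Hypothetical data: 20 stimulus conditions x 100 response channels (voxels/units).
rng = np.random.default_rng(0)
responses = rng.standard_normal((20, 100))

# Representational dissimilarity matrix (RDM): pairwise correlation distance
# between the response patterns evoked by each pair of conditions.
rdm = squareform(pdist(responses, metric="correlation"))

def compare_rdms(rdm_a, rdm_b):
    """Rank-correlate the upper triangles of two RDMs (e.g., model vs. brain)."""
    iu = np.triu_indices_from(rdm_a, k=1)
    rho, _ = spearmanr(rdm_a[iu], rdm_b[iu])
    return rho

print(rdm.shape, compare_rdms(rdm, rdm))
```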
Affiliation(s)
- Baihan Lin
- Department of Artificial Intelligence and Human Health, Hasso Plattner Institute for Digital Health, Icahn School of Medicine at Mount Sinai, New York, NY10029
- Department of Psychiatry, Center for Computational Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY10029
- Department of Neuroscience, Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY10029
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY10027
- Nikolaus Kriegeskorte
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY10027
- Department of Psychology, Columbia University, New York, NY10027
- Department of Neuroscience, Columbia University, New York, NY10027
2. Cho S, van Es M, Woolrich M, Gohil C. Comparison between EEG and MEG of static and dynamic resting-state networks. Hum Brain Mapp 2024; 45:e70018. PMID: 39230193; PMCID: PMC11372824; DOI: 10.1002/hbm.70018.
Abstract
The characterisation of resting-state networks (RSNs) using neuroimaging techniques has significantly contributed to our understanding of the organisation of brain activity. Prior work has demonstrated the electrophysiological basis of RSNs and their dynamic nature, revealing transient activations of brain networks with millisecond timescales. While previous research has confirmed the comparability of RSNs identified by electroencephalography (EEG) to those identified by magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI), most studies have utilised static analysis techniques, ignoring the dynamic nature of brain activity. Often, these studies use high-density EEG systems, which limit their applicability in clinical settings. Addressing these gaps, our research studies RSNs using medium-density EEG systems (61 sensors), comparing both static and dynamic brain network features to those obtained from a high-density MEG system (306 sensors). We assess the qualitative and quantitative comparability of EEG-derived RSNs to those from MEG, including their ability to capture age-related effects, and explore the reproducibility of dynamic RSNs within and across the modalities. Our findings suggest that both MEG and EEG offer comparable static and dynamic network descriptions, albeit with MEG offering some increased sensitivity and reproducibility. Such RSNs and their comparability across the two modalities remained consistent qualitatively but not quantitatively when the data were reconstructed without subject-specific structural MRI images.
Affiliation(s)
- SungJun Cho
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
- Mats van Es
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
- Mark Woolrich
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
- Chetan Gohil
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
3. Lifanov-Carr J, Griffiths BJ, Linde-Domingo J, Ferreira CS, Wilson M, Mayhew SD, Charest I, Wimber M. Reconstructing Spatiotemporal Trajectories of Visual Object Memories in the Human Brain. eNeuro 2024; 11:ENEURO.0091-24.2024. PMID: 39242212; PMCID: PMC11439564; DOI: 10.1523/eneuro.0091-24.2024.
Abstract
How the human brain reconstructs, step-by-step, the core elements of past experiences is still unclear. Here, we map the spatiotemporal trajectories along which visual object memories are reconstructed during associative recall. Specifically, we inquire whether retrieval reinstates feature representations in a copy-like but reversed direction with respect to the initial perceptual experience, or alternatively, this reconstruction involves format transformations and regions beyond initial perception. Participants from two cohorts studied new associations between verbs and randomly paired object images, and subsequently recalled the objects when presented with the corresponding verb cue. We first analyze multivariate fMRI patterns to map where in the brain high- and low-level object features can be decoded during perception and retrieval, showing that retrieval is dominated by conceptual features, represented in comparatively late visual and parietal areas. A separately acquired EEG dataset is then used to track the temporal evolution of the reactivated patterns using similarity-based EEG-fMRI fusion. This fusion suggests that memory reconstruction proceeds from anterior frontotemporal to posterior occipital and parietal regions, in line with a conceptual-to-perceptual gradient but only partly following the same trajectories as during perception. Specifically, a linear regression statistically confirms that the sequential activation of ventral visual stream regions is reversed between image perception and retrieval. The fusion analysis also suggests an information relay to frontoparietal areas late during retrieval. Together, the results shed light onto the temporal dynamics of memory recall and the transformations that the information undergoes between the initial experience and its later reconstruction from memory.
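A minimal sketch of similarity-based EEG-fMRI fusion of the kind described above (hypothetical `eeg_epochs` and `fmri_patterns` arrays, not the authors' pipeline):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_items, n_sensors, n_times, n_voxels = 24, 64, 200, 500
eeg_epochs = rng.standard_normal((n_items, n_sensors, n_times))      # item-averaged EEG
fmri_patterns = {"ROI_A": rng.standard_normal((n_items, n_voxels))}  # item-wise fMRI patterns

def rdm_vec(patterns):
    """Upper-triangle vector of a correlation-distance RDM."""
    return pdist(patterns, metric="correlation")

fusion = {}
for roi, patterns in fmri_patterns.items():
    fmri_rdm = rdm_vec(patterns)
    # Correlate the ROI's RDM with the EEG RDM at every time point.
    fusion[roi] = np.array([
        spearmanr(rdm_vec(eeg_epochs[:, :, t]), fmri_rdm)[0]
        for t in range(n_times)
    ])
print(fusion["ROI_A"].shape)  # one fusion time course per ROI
```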
Affiliation(s)
- Julia Lifanov-Carr
- School of Psychology and Centre for Human Brain Health (CHBH), University of Birmingham, Birmingham B15 2TT, United Kingdom
- Benjamin J Griffiths
- School of Psychology and Centre for Human Brain Health (CHBH), University of Birmingham, Birmingham B15 2TT, United Kingdom
- Juan Linde-Domingo
- School of Psychology and Centre for Human Brain Health (CHBH), University of Birmingham, Birmingham B15 2TT, United Kingdom
- Department of Experimental Psychology, Mind, Brain and Behavior Research Center (CIMCYC), University of Granada, 18011 Granada, Spain
- Center for Adaptive Rationality, Max Planck Institute for Human Development, 14195 Berlin, Germany
- Catarina S Ferreira
- School of Psychology and Centre for Human Brain Health (CHBH), University of Birmingham, Birmingham B15 2TT, United Kingdom
- Martin Wilson
- School of Psychology and Centre for Human Brain Health (CHBH), University of Birmingham, Birmingham B15 2TT, United Kingdom
- Stephen D Mayhew
- Institute of Health and Neurodevelopment (IHN), School of Psychology, Aston University, Birmingham B4 7ET, United Kingdom
- Ian Charest
- Département de Psychologie, Université de Montréal, Montréal, Quebec H2V 2S9, Canada
- Maria Wimber
- School of Psychology and Centre for Human Brain Health (CHBH), University of Birmingham, Birmingham B15 2TT, United Kingdom
- School of Psychology & Neuroscience and Centre for Cognitive Neuroimaging (CCNi), University of Glasgow, Glasgow G12 8QB, United Kingdom
4. Li Q, Wang J, Meng Z, Chen Y, Zhang M, Hu N, Chen X, Chen A. Decoding the task specificity of post-error adjustments: Features and determinants. Neuroimage 2024; 297:120692. PMID: 38897398; DOI: 10.1016/j.neuroimage.2024.120692.
Abstract
Errors typically trigger post-error adjustments aimed at improving subsequent reactions within a single task, but little work has focused on whether these adjustments are task-general or task-specific across different tasks. We collected behavioral and electrophysiological (EEG) data when participants performed a psychological refractory period paradigm. This paradigm required them to complete Task 1 and Task 2 separated by a variable stimulus onset asynchrony (SOA). Behaviorally, post-error slowing and post-error accuracy exhibited task-general features at short SOAs but some task-specific features at long SOAs. EEG results indicate that task-general adjustments had a short-lived effect, whereas task-specific adjustments were long-lasting. Moreover, error awareness specifically contributed to the improvement of subsequent sensory processing and behavioral performance in Task 1 (the task where errors occurred). These findings demonstrate that post-error adjustments rely on both transient, task-general interference and longer-lasting, task-specific control mechanisms simultaneously, with error awareness playing a crucial role in determining these mechanisms. We further discuss the contribution of central resources to the task specificity of post-error adjustments.
Affiliation(s)
- Qing Li
- Key Laboratory of Cognition and Personality of Ministry of Education, Faculty of Psychology, Southwest University, Chongqing 400715, China
- Jing Wang
- School of Psychology, Liaoning Normal University, Dalian 116029, China
- Zong Meng
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Yongqiang Chen
- Key Laboratory of Cognition and Personality of Ministry of Education, Faculty of Psychology, Southwest University, Chongqing 400715, China
- Mengke Zhang
- Key Laboratory of Cognition and Personality of Ministry of Education, Faculty of Psychology, Southwest University, Chongqing 400715, China
- Na Hu
- School of Preschool & Special Education, Kunming University, Kunming 650214, China
- Xu Chen
- Key Laboratory of Cognition and Personality of Ministry of Education, Faculty of Psychology, Southwest University, Chongqing 400715, China
- Antao Chen
- School of Psychology, Research Center for Exercise and Brain Science, Shanghai University of Sport, Shanghai 200438, China.
5. Yashiro R, Sawayama M, Amano K. Decoding time-resolved neural representations of orientation ensemble perception. Front Neurosci 2024; 18:1387393. PMID: 39148524; PMCID: PMC11325722; DOI: 10.3389/fnins.2024.1387393.
Abstract
The visual system can compute summary statistics of several visual elements at a glance. Numerous studies have shown that an ensemble of different visual features can be perceived over 50-200 ms; however, the time point at which the visual system forms an accurate ensemble representation associated with an individual's perception remains unclear. This is mainly because most previous studies have not fully addressed time-resolved neural representations that occur during ensemble perception, particularly lacking quantification of the representational strength of ensembles and their correlation with behavior. Here, we conducted orientation ensemble discrimination tasks and electroencephalogram (EEG) recordings to decode orientation representations over time while human observers discriminated an average of multiple orientations. We modeled EEG signals as a linear sum of hypothetical orientation channel responses and inverted this model to quantify the representational strength of the orientation ensemble. Our analysis using this inverted encoding model revealed stronger representations of the average orientation over 400-700 ms. We also correlated the orientation representation estimated from EEG signals with the perceived average orientation reported in the ensemble discrimination task with adjustment methods. We found that the estimated orientation at approximately 600-700 ms significantly correlated with the individual differences in perceived average orientation. These results suggest that although ensembles can be quickly and roughly computed, the visual system may gradually compute an orientation ensemble over several hundred milliseconds to achieve a more accurate ensemble representation.
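A minimal sketch of an inverted encoding model of the kind described above, using a hypothetical orientation-channel basis and simulated sensor data rather than the authors' code:

```python
import numpy as np

rng = np.random.default_rng(2)
n_train, n_test, n_sensors, n_chans = 200, 50, 64, 8  # 8 hypothetical orientation channels

def channel_responses(orientations_deg, n_chans=8):
    """Idealized tuning: half-wave rectified cosine basis over 0-180 degrees."""
    centers = np.arange(0, 180, 180 / n_chans)
    d = np.deg2rad(orientations_deg[:, None] - centers[None, :])
    return np.maximum(0, np.cos(2 * d)) ** 7          # trials x channels

ori_train = rng.uniform(0, 180, n_train)
ori_test = rng.uniform(0, 180, n_test)
C_train = channel_responses(ori_train)
# Simulated sensor data: B = C @ W.T + noise (trials x sensors).
W_true = rng.standard_normal((n_sensors, n_chans))
B_train = C_train @ W_true.T + 0.5 * rng.standard_normal((n_train, n_sensors))
B_test = channel_responses(ori_test) @ W_true.T + 0.5 * rng.standard_normal((n_test, n_sensors))

# Forward model: estimate sensor weights from training trials.
W_hat = np.linalg.lstsq(C_train, B_train, rcond=None)[0].T   # sensors x channels
# Inversion: estimate channel responses on held-out trials.
C_hat = B_test @ np.linalg.pinv(W_hat).T                     # trials x channels
print(C_hat.shape)
```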
Affiliation(s)
- Ryuto Yashiro
- Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
- Masataka Sawayama
- Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
- Kaoru Amano
- Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
6. Motlagh SC, Joanisse M, Wang B, Mohsenzadeh Y. Unveiling the neural dynamics of conscious perception in rapid object recognition. Neuroimage 2024; 296:120668. PMID: 38848982; DOI: 10.1016/j.neuroimage.2024.120668.
Abstract
Our brain excels at recognizing objects, even when they flash by in a rapid sequence. However, the neural processes determining whether a target image in a rapid sequence can be recognized or not remain elusive. We used electroencephalography (EEG) to investigate the temporal dynamics of brain processes that shape perceptual outcomes in these challenging viewing conditions. Using naturalistic images and advanced multivariate pattern analysis (MVPA) techniques, we probed the brain dynamics governing conscious object recognition. Our results show that although initially similar, the processes for when an object can or cannot be recognized diverge around 180 ms post-appearance, coinciding with feedback neural processes. Decoding analyses indicate that gist perception (partial conscious perception) can occur at ∼120 ms through feedforward mechanisms. In contrast, object identification (full conscious perception of the image) is resolved at ∼190 ms after target onset, suggesting involvement of recurrent processing. These findings underscore the importance of recurrent neural connections in object recognition and awareness in rapid visual presentations.
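A minimal sketch of time-resolved MVPA decoding as described above, on hypothetical EEG epochs (`X` and `y` are simulated placeholders):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_channels, n_times = 120, 64, 100
X = rng.standard_normal((n_trials, n_channels, n_times))  # EEG epochs (trials x channels x time)
y = rng.integers(0, 2, n_trials)                          # e.g., recognized vs. not recognized

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
# One cross-validated decoding accuracy per time point.
accuracy = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5, scoring="accuracy").mean()
    for t in range(n_times)
])
print(accuracy.shape)  # (n_times,) decoding time course
```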
Affiliation(s)
- Saba Charmi Motlagh
- Western Center for Brain and Mind, Western University, London, Ontario, Canada; Vector Institute for Artificial Intelligence, Toronto, Ontario, Canada
- Marc Joanisse
- Western Center for Brain and Mind, Western University, London, Ontario, Canada; Department of Psychology, Western University, London, Ontario, Canada
- Boyu Wang
- Western Center for Brain and Mind, Western University, London, Ontario, Canada; Vector Institute for Artificial Intelligence, Toronto, Ontario, Canada; Department of Computer Science, Western University, London, Ontario, Canada
- Yalda Mohsenzadeh
- Western Center for Brain and Mind, Western University, London, Ontario, Canada; Vector Institute for Artificial Intelligence, Toronto, Ontario, Canada; Department of Computer Science, Western University, London, Ontario, Canada.
7. Carretié L, Fernández-Folgueiras U, Kessel D, Alba G, Veiga-Zarza E, Tapia M, Álvarez F. An extremely fast neural mechanism to detect emotional visual stimuli: A two-experiment study. PLoS One 2024; 19:e0299677. PMID: 38905211; PMCID: PMC11192326; DOI: 10.1371/journal.pone.0299677.
Abstract
Defining the brain mechanisms underlying initial emotional evaluation is a key but unexplored clue to understanding affective processing. Event-related potentials (ERPs), especially suited for investigating this issue, were recorded in two experiments (n = 36 and n = 35). We presented emotionally negative (spiders) and neutral (wheels) silhouettes homogenized regarding their visual parameters. In Experiment 1, stimuli appeared at fixation or in the periphery (200 trials per condition and location), the former eliciting a N40 (39 milliseconds) and a P80 (or C1: 80 milliseconds) component, and the latter only a P80. In Experiment 2, stimuli were presented only at fixation (500 trials per condition). Again, an N40 (45 milliseconds) was observed, followed by a P100 (or P1: 105 milliseconds). Analyses revealed significantly greater N40-C1P1 peak-to-peak amplitudes for spiders in both experiments, and ANCOVAs showed that these effects were not explained by C1P1 alone, but that processes underlying N40 significantly contributed. Source analyses pointed to V1 as an N40 focus (more clearly in Experiment 2). Sources for C1P1 included V1 (P80) and V2/LOC (P80 and P100). These results and their timing point to low-order structures (such as visual thalamic nuclei or superior colliculi) or the visual cortex itself, as candidates for initial evaluation structures.
Affiliation(s)
- Luis Carretié
- Facultad de Psicología, Universidad Autónoma de Madrid, Madrid, Spain
- Dominique Kessel
- Facultad de Psicología, Universidad Autónoma de Madrid, Madrid, Spain
- Guzmán Alba
- Facultad de Psicología, Universidad Autónoma de Madrid, Madrid, Spain
- Manuel Tapia
- Facultad de Psicología, Universidad Autónoma de Madrid, Madrid, Spain
- Fátima Álvarez
- Facultad de Psicología, Universidad Autónoma de Madrid, Madrid, Spain
8. Wu H, Wang R, Ma Y, Liang X, Liu C, Yu D, An N, Ning X. Decoding N400m Evoked Component: A Tutorial on Multivariate Pattern Analysis for OP-MEG Data. Bioengineering (Basel) 2024; 11:609. PMID: 38927845; PMCID: PMC11200846; DOI: 10.3390/bioengineering11060609.
Abstract
Multivariate pattern analysis (MVPA) has played an extensive role in interpreting brain activity and has been applied in studies with modalities such as functional Magnetic Resonance Imaging (fMRI), Magnetoencephalography (MEG) and Electroencephalography (EEG). The advent of wearable MEG systems based on optically pumped magnetometers (OPMs), i.e., OP-MEG, has broadened the application of bio-magnetism in the realm of neuroscience. Nonetheless, it also raises challenges in temporal decoding analysis due to the unique attributes of OP-MEG itself. The efficacy of decoding performance utilizing multimodal fusion, such as MEG-EEG, also remains to be elucidated. In this regard, we investigated the impact of several factors, such as processing methods, models and modalities, on the decoding outcomes of OP-MEG. Our findings indicate that the number of averaged trials, dimensionality reduction (DR) methods, and the number of cross-validation folds significantly affect the decoding performance of OP-MEG data. Additionally, decoding results vary across modalities and fusion strategies. In contrast, decoder type, resampling frequency, and sliding window length exert marginal effects. Furthermore, we introduced mutual information (MI) to investigate how information loss due to OP-MEG data processing affects decoding accuracy. Our study offers insights for linear decoding research using OP-MEG and expands its application in the field of cognitive neuroscience.
Affiliation(s)
- Huanqi Wu
- Key Laboratory of Ultra-Weak Magnetic Field Measurement Technology, Ministry of Education, School of Instrumentation and Optoelectronic Engineering, Beihang University, 37 Xueyuan Rd., Haidian Dist., Beijing 100083, China; (H.W.); (R.W.); (Y.M.); (X.L.); (C.L.)
- Hangzhou Institute of National Extremely-Weak Magnetic Field Infrastructure, 465 Binan Rd., Binjiang Dist., Hangzhou 310000, China
- Ruonan Wang
- Key Laboratory of Ultra-Weak Magnetic Field Measurement Technology, Ministry of Education, School of Instrumentation and Optoelectronic Engineering, Beihang University, 37 Xueyuan Rd., Haidian Dist., Beijing 100083, China; (H.W.); (R.W.); (Y.M.); (X.L.); (C.L.)
- Hangzhou Institute of National Extremely-Weak Magnetic Field Infrastructure, 465 Binan Rd., Binjiang Dist., Hangzhou 310000, China
- Yuyu Ma
- Key Laboratory of Ultra-Weak Magnetic Field Measurement Technology, Ministry of Education, School of Instrumentation and Optoelectronic Engineering, Beihang University, 37 Xueyuan Rd., Haidian Dist., Beijing 100083, China; (H.W.); (R.W.); (Y.M.); (X.L.); (C.L.)
- Hangzhou Institute of National Extremely-Weak Magnetic Field Infrastructure, 465 Binan Rd., Binjiang Dist., Hangzhou 310000, China
- Xiaoyu Liang
- Key Laboratory of Ultra-Weak Magnetic Field Measurement Technology, Ministry of Education, School of Instrumentation and Optoelectronic Engineering, Beihang University, 37 Xueyuan Rd., Haidian Dist., Beijing 100083, China; (H.W.); (R.W.); (Y.M.); (X.L.); (C.L.)
- Hangzhou Institute of National Extremely-Weak Magnetic Field Infrastructure, 465 Binan Rd., Binjiang Dist., Hangzhou 310000, China
- Changzeng Liu
- Key Laboratory of Ultra-Weak Magnetic Field Measurement Technology, Ministry of Education, School of Instrumentation and Optoelectronic Engineering, Beihang University, 37 Xueyuan Rd., Haidian Dist., Beijing 100083, China; (H.W.); (R.W.); (Y.M.); (X.L.); (C.L.)
- Hangzhou Institute of National Extremely-Weak Magnetic Field Infrastructure, 465 Binan Rd., Binjiang Dist., Hangzhou 310000, China
- Dexin Yu
- Shandong Key Laboratory for Magnetic Field-Free Medicine and Functional Imaging, Institute of Magnetic Field-Free Medicine and Functional Imaging, Shandong University, 27 South Shanda Rd., Licheng Dist., Jinan 250100, China;
- Nan An
- Hangzhou Institute of National Extremely-Weak Magnetic Field Infrastructure, 465 Binan Rd., Binjiang Dist., Hangzhou 310000, China
- Xiaolin Ning
- Key Laboratory of Ultra-Weak Magnetic Field Measurement Technology, Ministry of Education, School of Instrumentation and Optoelectronic Engineering, Beihang University, 37 Xueyuan Rd., Haidian Dist., Beijing 100083, China; (H.W.); (R.W.); (Y.M.); (X.L.); (C.L.)
- Hangzhou Institute of National Extremely-Weak Magnetic Field Infrastructure, 465 Binan Rd., Binjiang Dist., Hangzhou 310000, China
- Shandong Key Laboratory for Magnetic Field-Free Medicine and Functional Imaging, Institute of Magnetic Field-Free Medicine and Functional Imaging, Shandong University, 27 South Shanda Rd., Licheng Dist., Jinan 250100, China;
- Hefei National Laboratory, Gaoxin Dist., Hefei 230093, China
9. Bi J, Gao Y, Peng Z, Ma Y. Classification of motor imagery using chaotic entropy based on sub-band EEG source localization. J Neural Eng 2024; 21:036016. PMID: 38722315; DOI: 10.1088/1741-2552/ad4914.
Abstract
Objective. Electroencephalography (EEG) has been widely used in motor imagery (MI) research by virtue of its high temporal resolution and low cost, but its low spatial resolution is still a major criticism. The EEG source localization (ESL) algorithm effectively improves the spatial resolution of the signal by inverting the scalp EEG to extrapolate the cortical source signal, thus enhancing the classification accuracy. Approach. To address the problem of poor spatial resolution of EEG signals, this paper proposed a sub-band source chaotic entropy feature extraction method based on sub-band ESL. Firstly, the preprocessed EEG signals were filtered into 8 sub-bands. Each sub-band signal was source localized respectively to reveal the activation patterns of specific frequency bands of the EEG signals and the activities of specific brain regions in the MI task. Then, approximate entropy, fuzzy entropy and permutation entropy were extracted from the source signal as features to quantify the complexity and randomness of the signal. Finally, the classification of different MI tasks was achieved using a support vector machine. Main results. The proposed method was validated on two public MI datasets (brain-computer interface (BCI) competition III IVa, BCI competition IV 2a) and the results showed that the classification accuracies were higher than those of existing methods. Significance. The spatial resolution of the signal was improved by sub-band EEG source localization, which provides a new idea for EEG MI research.
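A minimal sketch of the feature-extraction and classification step described above, assuming band-filtered source time courses are already available; of the three entropy measures, only permutation entropy is illustrated, and all arrays are simulated placeholders:

```python
import numpy as np
from itertools import permutations
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy of a 1-D signal."""
    patterns = list(permutations(range(order)))
    counts = np.zeros(len(patterns))
    for i in range(len(x) - (order - 1) * delay):
        window = x[i:i + order * delay:delay]
        counts[patterns.index(tuple(int(k) for k in np.argsort(window)))] += 1
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p)) / np.log2(len(patterns))

rng = np.random.default_rng(4)
n_trials, n_sources, n_samples = 100, 10, 250
# Hypothetical source-localized, band-filtered MI trials.
trials = rng.standard_normal((n_trials, n_sources, n_samples))
labels = rng.integers(0, 2, n_trials)  # e.g., left- vs. right-hand imagery

# One entropy feature per source signal, per trial.
features = np.array([[permutation_entropy(trial[s]) for s in range(n_sources)]
                     for trial in trials])
print(cross_val_score(SVC(kernel="rbf"), features, labels, cv=5).mean())
```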
Affiliation(s)
- Jicheng Bi
- College of Automation, Hangzhou Dianzi University, Hangzhou, People's Republic of China
- Yunyuan Gao
- College of Automation, Hangzhou Dianzi University, Hangzhou, People's Republic of China
- Zheng Peng
- College of Automation, Hangzhou Dianzi University, Hangzhou, People's Republic of China
- Yuliang Ma
- College of Automation, Hangzhou Dianzi University, Hangzhou, People's Republic of China
10. Li Y, Li S, Hu W, Yang L, Luo W. Spatial representation of multidimensional information in emotional faces revealed by fMRI. Neuroimage 2024; 290:120578. PMID: 38499051; DOI: 10.1016/j.neuroimage.2024.120578.
Abstract
Face perception is a complex process that involves highly specialized procedures and mechanisms. Investigating face perception can help us better understand how the brain processes fine-grained, multidimensional information. This research aimed to delve deeply into how different dimensions of facial information are represented in specific brain regions or through inter-regional connections via an implicit face recognition task. To capture the representation of various facial information in the brain, we employed support vector machine decoding, functional connectivity, and model-based representational similarity analysis on fMRI data, resulting in three key findings. Firstly, despite the implicit nature of the task, emotions were still represented in the brain, contrasting with all other facial information. Secondly, the connection between the medial amygdala and the parahippocampal gyrus was found to be essential for the representation of facial emotion in implicit tasks. Thirdly, in implicit tasks, arousal representation occurred in the parahippocampal gyrus, while valence depended on the connection between the primary visual cortex and the parahippocampal gyrus. In conclusion, these findings dissociate the neural mechanisms of emotional valence and arousal, revealing the precise spatial patterns of multidimensional information processing in faces.
Affiliation(s)
- Yiwen Li
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, PR China; Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, PR China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, PR China
- Shuaixia Li
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, PR China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, PR China
- Weiyu Hu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, PR China
- Lan Yang
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, PR China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, PR China
- Wenbo Luo
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, PR China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, PR China.
11. Wu H, Liang X, Wang R, Ma Y, Gao Y, Ning X. A Multivariate analysis on evoked components of Chinese semantic congruity: an OP-MEG study with EEG. Cereb Cortex 2024; 34:bhae108. PMID: 38610084; DOI: 10.1093/cercor/bhae108.
Abstract
The application of wearable magnetoencephalography using optically-pumped magnetometers has drawn extensive attention in the field of neuroscience. Electroencephalography systems can cover the whole head and reflect the overall activity of a large number of neurons. The efficacy of optically-pumped magnetometers in detecting event-related components can be validated against electroencephalogram results. Multivariate pattern analysis is capable of tracking the evolution of neurocognitive processes over time. In this paper, we adopted a classical Chinese semantic congruity paradigm and separately collected electroencephalogram and optically-pumped magnetometer signals. Then, we verified the consistency of the optically-pumped magnetometer and electroencephalogram recordings in detecting N400 using a mutual information index. Multivariate pattern analysis revealed the difference in decoding performance of these two modalities, which can be further validated by dynamic/stable coding analysis on the temporal generalization matrix. The results from searchlight analysis provided a neural basis for this dissimilarity at the magnetoencephalography source level and the electroencephalogram sensor level. This study opens a new avenue for investigating the brain's coding patterns using wearable magnetoencephalography and reveals the differences in sensitivity between the two modalities in reflecting neuron representation patterns.
Affiliation(s)
- Huanqi Wu
- Key Laboratory of Ultra-Weak Magnetic Field Measurement Technology, Ministry of Education, School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing 100191, China
- Hangzhou Institute of National Extremely-weak Magnetic Field Infrastructure, Hangzhou 310051, China
- Zhejiang Provincial Key Laboratory of Ultra-Weak Magnetic-Field Space and Applied Technology, Hangzhou Innovation Institute, Beihang University, Beijing 100191, China
- Xiaoyu Liang
- Key Laboratory of Ultra-Weak Magnetic Field Measurement Technology, Ministry of Education, School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing 100191, China
- Hangzhou Institute of National Extremely-weak Magnetic Field Infrastructure, Hangzhou 310051, China
- Zhejiang Provincial Key Laboratory of Ultra-Weak Magnetic-Field Space and Applied Technology, Hangzhou Innovation Institute, Beihang University, Beijing 100191, China
- Ruonan Wang
- Key Laboratory of Ultra-Weak Magnetic Field Measurement Technology, Ministry of Education, School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing 100191, China
- Hangzhou Institute of National Extremely-weak Magnetic Field Infrastructure, Hangzhou 310051, China
- Zhejiang Provincial Key Laboratory of Ultra-Weak Magnetic-Field Space and Applied Technology, Hangzhou Innovation Institute, Beihang University, Beijing 100191, China
- Yuyu Ma
- Key Laboratory of Ultra-Weak Magnetic Field Measurement Technology, Ministry of Education, School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing 100191, China
- Hangzhou Institute of National Extremely-weak Magnetic Field Infrastructure, Hangzhou 310051, China
- Zhejiang Provincial Key Laboratory of Ultra-Weak Magnetic-Field Space and Applied Technology, Hangzhou Innovation Institute, Beihang University, Beijing 100191, China
- Yang Gao
- Key Laboratory of Ultra-Weak Magnetic Field Measurement Technology, Ministry of Education, School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing 100191, China
- Hangzhou Institute of National Extremely-weak Magnetic Field Infrastructure, Hangzhou 310051, China
- Zhejiang Provincial Key Laboratory of Ultra-Weak Magnetic-Field Space and Applied Technology, Hangzhou Innovation Institute, Beihang University, Beijing 100191, China
- Xiaolin Ning
- Key Laboratory of Ultra-Weak Magnetic Field Measurement Technology, Ministry of Education, School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing 100191, China
- Hangzhou Institute of National Extremely-weak Magnetic Field Infrastructure, Hangzhou 310051, China
- Zhejiang Provincial Key Laboratory of Ultra-Weak Magnetic-Field Space and Applied Technology, Hangzhou Innovation Institute, Beihang University, Beijing 100191, China
- Shandong Key Laboratory for Magnetic Field-free Medicine & Functional Imaging, Institute of Magnetic Field-free Medicine & Functional Imaging, Shandong University, Shandong 264209, China
- Hefei National Laboratory, Anhui 230026, China
12. Chen Y, Li Z, Li Q, Wang J, Hu N, Zheng Y, Chen A. The neural dynamics of conflict adaptation induced by conflict observation: Evidence from univariate and multivariate analysis. Int J Psychophysiol 2024; 198:112324. PMID: 38428745; DOI: 10.1016/j.ijpsycho.2024.112324.
Abstract
Conflict adaptation can be expressed as greater performance (shorter response time and lower error rate) after incongruent trials when compared to congruent trials. It has been observed in designs that minimize confounding factors, i.e., feature integration, contingency learning, and temporal learning. Our current study aimed to further elucidate the temporal evolution mechanisms of conflict adaptation. To address this issue, the current study employed a combination of behavioral, univariate, and multivariate analysis (MVPA) methods in a modified color-word Stroop task, where half of the trials required button presses (DO trials), and the other half only required observation (LOOK trials). Both the behavioral and ERP results (N450 and SP) in the LOOK-DO transition trials revealed significant conflict adaptation without feature integration, contingency learning, and temporal learning, providing support for the conflict monitoring theory. Furthermore, during the LOOK trials, significant Stroop effects in the N450 and SP components were observed, indicating that conflict monitoring occurred at the stimulus level and triggered reactive control adjustments. The MVPA results decoded the congruent-incongruent and incongruent-incongruent conditions during the conflict adjustment phase but not during the conflict monitoring phase, emphasizing the unique contribution of conflict adjustment to conflict adaptation. The current research findings provide more compelling supporting evidence for the conflict monitoring theory, while also indicating that future studies should employ the present design to elucidate the specific processes of conflict adaptation.
Affiliation(s)
- Yongqiang Chen
- Faculty of Psychology, Key Laboratory of Cognition and Personality of Ministry of Education, Southwest University, Chongqing 400715, China
- Zhifang Li
- Faculty of Psychology, Key Laboratory of Cognition and Personality of Ministry of Education, Southwest University, Chongqing 400715, China
- Qing Li
- Faculty of Psychology, Key Laboratory of Cognition and Personality of Ministry of Education, Southwest University, Chongqing 400715, China
- Jing Wang
- Faculty of Psychology, Key Laboratory of Cognition and Personality of Ministry of Education, Southwest University, Chongqing 400715, China
- Na Hu
- Department of Preschool and Special Education, Kunming University, Kunming 650214, China
- Yong Zheng
- Faculty of Psychology, Key Laboratory of Cognition and Personality of Ministry of Education, Southwest University, Chongqing 400715, China
- Antao Chen
- School of Psychology, Research Center for Exercise and Brain Science, Shanghai University of Sport, Shanghai 200438, China.
13. Lahner B, Mohsenzadeh Y, Mullin C, Oliva A. Visual perception of highly memorable images is mediated by a distributed network of ventral visual regions that enable a late memorability response. PLoS Biol 2024; 22:e3002564. PMID: 38557761; PMCID: PMC10984539; DOI: 10.1371/journal.pbio.3002564.
Abstract
Behavioral and neuroscience studies in humans and primates have shown that memorability is an intrinsic property of an image that predicts its strength of encoding into and retrieval from memory. While previous work has independently probed when or where this memorability effect may occur in the human brain, a description of its spatiotemporal dynamics is missing. Here, we used representational similarity analysis (RSA) to combine functional magnetic resonance imaging (fMRI) with source-estimated magnetoencephalography (MEG) to simultaneously measure when and where the human cortex is sensitive to differences in image memorability. Results reveal that visual perception of High Memorable images, compared to Low Memorable images, recruits a set of regions of interest (ROIs) distributed throughout the ventral visual cortex: a late memorability response (from around 300 ms) in early visual cortex (EVC), inferior temporal cortex, lateral occipital cortex, fusiform gyrus, and banks of the superior temporal sulcus. Image memorability magnitude results are represented after high-level feature processing in visual regions and reflected in classical memory regions in the medial temporal lobe (MTL). Our results present, to our knowledge, the first unified spatiotemporal account of visual memorability effect across the human cortex, further supporting the levels-of-processing theory of perception and memory.
Affiliation(s)
- Benjamin Lahner
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Yalda Mohsenzadeh
- The Brain and Mind Institute, The University of Western Ontario, London, Canada
- Department of Computer Science, The University of Western Ontario, London, Canada
- Vector Institute for Artificial Intelligence, Toronto, Ontario, Canada
- Caitlin Mullin
- Vision: Science to Application (VISTA), York University, Toronto, Ontario, Canada
- Aude Oliva
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
14. Mamashli F, Khan S, Hatamimajoumerd E, Jas M, Uluç I, Lankinen K, Obleser J, Friederici AD, Maess B, Ahveninen J. Characterizing directional dynamics of semantic prediction based on inter-regional temporal generalization. bioRxiv [Preprint] 2024:2024.02.13.580183. PMID: 38405823; PMCID: PMC10888763; DOI: 10.1101/2024.02.13.580183.
Abstract
The event-related potential/field component N400(m) has been widely used as a neural index for semantic prediction. It has long been hypothesized that feedback information from inferior frontal areas plays a critical role in generating the N400. However, due to limitations in causal connectivity estimation, direct testing of this hypothesis has remained difficult. Here, magnetoencephalography (MEG) data was obtained during a classic N400 paradigm where the semantic predictability of a fixed target noun was manipulated in simple German sentences. To estimate causality, we implemented a novel approach based on machine learning and temporal generalization to assess the effect of inferior frontal gyrus (IFG) on temporal areas. In this method, a support vector machine (SVM) classifier is trained on each time point of the neural activity in IFG to classify less predicted (LP) and highly predicted (HP) nouns and then tested on all time points of superior/middle temporal sub-regions activity (and vice versa, to establish spatio-temporal evidence for or against causality). The decoding accuracy was significantly above chance level when the classifier was trained on IFG activity and tested on future activity in superior and middle temporal gyrus (STG/MTG). The results present new evidence for a model of predictive speech comprehension in which predictive IFG activity is fed back to shape subsequent activity in STG/MTG, implying a feedback mechanism in N400 generation. In combination with the also observed strong feedforward effect from left STG/MTG to IFG, our findings provide evidence of dynamic feedback and feedforward influences between IFG and temporal areas during N400 generation.
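A minimal sketch of cross-region temporal generalization in the spirit of the approach described above (hypothetical `ifg` and `stg` source arrays, not the authors' implementation):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n_trials, n_vertices, n_times = 160, 30, 60
ifg = rng.standard_normal((n_trials, n_vertices, n_times))   # source activity in IFG
stg = rng.standard_normal((n_trials, n_vertices, n_times))   # source activity in STG/MTG
y = rng.integers(0, 2, n_trials)                             # highly vs. less predicted noun

idx_train, idx_test = train_test_split(np.arange(n_trials), test_size=0.3,
                                       stratify=y, random_state=0)
# gen[t_train, t_test]: classifier trained on IFG at t_train, tested on STG/MTG at t_test.
gen = np.zeros((n_times, n_times))
for t_tr in range(n_times):
    clf = SVC(kernel="linear").fit(ifg[idx_train, :, t_tr], y[idx_train])
    for t_te in range(n_times):
        gen[t_tr, t_te] = clf.score(stg[idx_test, :, t_te], y[idx_test])
print(gen.shape)  # cross-region temporal generalization matrix
```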
Affiliation(s)
- Fahimeh Mamashli
- Department of Radiology, Massachusetts General Hospital, Athinoula A. Martinos Center for Biomedical Imaging, Harvard Medical School, Boston, MA 02129
- Sheraz Khan
- Department of Radiology, Massachusetts General Hospital, Athinoula A. Martinos Center for Biomedical Imaging, Harvard Medical School, Boston, MA 02129
- Elaheh Hatamimajoumerd
- Department of Electrical and Computer Engineering, Northeastern University, Boston, MA 02115
- Mainak Jas
- Department of Radiology, Massachusetts General Hospital, Athinoula A. Martinos Center for Biomedical Imaging, Harvard Medical School, Boston, MA 02129
- Işıl Uluç
- Department of Radiology, Massachusetts General Hospital, Athinoula A. Martinos Center for Biomedical Imaging, Harvard Medical School, Boston, MA 02129
- Kaisu Lankinen
- Department of Radiology, Massachusetts General Hospital, Athinoula A. Martinos Center for Biomedical Imaging, Harvard Medical School, Boston, MA 02129
- Jonas Obleser
- Department of Psychology, University of Lübeck, Lübeck 23562, Germany
- Angela D. Friederici
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Burkhard Maess
- MEG and Cortical Networks Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Jyrki Ahveninen
- Department of Radiology, Massachusetts General Hospital, Athinoula A. Martinos Center for Biomedical Imaging, Harvard Medical School, Boston, MA 02129
15. Amaral L, Besson G, Caparelli-Dáquer E, Bergström F, Almeida J. Temporal differences and commonalities between hand and tool neural processing. Sci Rep 2023; 13:22270. PMID: 38097608; PMCID: PMC10721913; DOI: 10.1038/s41598-023-48180-8.
Abstract
Object recognition is a complex cognitive process that relies on how the brain organizes object-related information. While spatial principles have been extensively studied, the less-studied temporal dynamics may also offer valuable insights into this process, particularly when neural processing overlaps for different categories, as is the case for the categories of hands and tools. Here we focus on the differences and/or similarities between the time-courses of hand and tool processing under electroencephalography (EEG). Using multivariate pattern analysis, we compared, for different time points, classification accuracy for images of hands or tools relative to images of animals. We show that for particular time intervals (~ 136-156 ms and ~ 252-328 ms), classification accuracy for hands and for tools differs. Furthermore, we show that classifiers trained to differentiate between tools and animals generalize their learning to classification of hand stimuli between ~ 260-320 ms and ~ 376-500 ms after stimulus onset. Classifiers trained to distinguish between hands and animals, on the other hand, were able to extend their learning to the classification of tools at ~ 150 ms. These findings suggest variations in semantic features and domain-specific differences between the two categories, with later-stage similarities potentially related to shared action processing for hands and tools.
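A minimal sketch of the cross-category generalization analysis described above, on simulated EEG data (`hands`, `tools`, and `animals` are hypothetical arrays):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(6)
n_per_cat, n_channels, n_times = 60, 64, 120
hands = rng.standard_normal((n_per_cat, n_channels, n_times))
tools = rng.standard_normal((n_per_cat, n_channels, n_times))
animals = rng.standard_normal((n_per_cat, n_channels, n_times))
half = n_per_cat // 2  # keep test animal trials out of training

cross_acc = np.zeros(n_times)
for t in range(n_times):
    # Train: tools (label 1) vs. animals (label 0).
    X_train = np.vstack([tools[:, :, t], animals[:half, :, t]])
    y_train = np.r_[np.ones(n_per_cat), np.zeros(half)]
    clf = LinearDiscriminantAnalysis().fit(X_train, y_train)
    # Test: does the tool/animal boundary also separate hands (1) from unseen animals (0)?
    X_test = np.vstack([hands[:, :, t], animals[half:, :, t]])
    y_test = np.r_[np.ones(n_per_cat), np.zeros(n_per_cat - half)]
    cross_acc[t] = clf.score(X_test, y_test)
print(cross_acc.shape)  # (n_times,) cross-category generalization time course
```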
Affiliation(s)
- L Amaral
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal.
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA.
- G Besson
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- E Caparelli-Dáquer
- Laboratory of Electrical Stimulation of the Nervous System (LabEEL), Rio de Janeiro State University, Rio de Janeiro, Brazil
- F Bergström
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- Department of Psychology, University of Gothenburg, Gothenburg, Sweden
- J Almeida
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal.
- CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal.
16. Csaky R, van Es MWJ, Jones OP, Woolrich M. Interpretable many-class decoding for MEG. Neuroimage 2023; 282:120396. PMID: 37805019; PMCID: PMC10938061; DOI: 10.1016/j.neuroimage.2023.120396.
Abstract
Multivariate pattern analysis (MVPA) of Magnetoencephalography (MEG) and Electroencephalography (EEG) data is a valuable tool for understanding how the brain represents and discriminates between different stimuli. Identifying the spatial and temporal signatures of stimuli is typically a crucial output of these analyses. Such analyses are mainly performed using linear, pairwise, sliding window decoding models. These allow for relative ease of interpretation, e.g. by estimating a time-course of decoding accuracy, but have limited decoding performance. On the other hand, full epoch multiclass decoding models, commonly used for brain-computer interface (BCI) applications, can provide better decoding performance. However, interpretation methods for such models have been designed with a low number of classes in mind. In this paper, we propose an approach that combines a multiclass, full epoch decoding model with supervised dimensionality reduction, while still being able to reveal the contributions of spatiotemporal and spectral features using permutation feature importance. Crucially, we introduce a way of doing supervised dimensionality reduction of input features within a neural network optimised for the classification task, improving performance substantially. We demonstrate the approach on 3 different many-class task-MEG datasets using image presentations. Our results demonstrate that this approach consistently achieves higher accuracy than the peak accuracy of a sliding window decoder while estimating the relevant spatiotemporal features in the MEG signal.
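A minimal sketch of full-epoch multiclass decoding with permutation feature importance; a plain logistic-regression stand-in for the authors' neural-network model, on simulated data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(7)
n_trials, n_channels, n_times, n_classes = 300, 30, 50, 10
X = rng.standard_normal((n_trials, n_channels, n_times)).reshape(n_trials, -1)  # full epochs, flattened
y = rng.integers(0, n_classes, n_trials)                                        # e.g., image identity

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=2000))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))

# Which channel x time features drive the multiclass decoder?
imp = permutation_importance(clf, X_te, y_te, n_repeats=5, random_state=0)
importance_map = imp.importances_mean.reshape(n_channels, n_times)  # spatiotemporal importance map
print(importance_map.shape)
```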
Affiliation(s)
- Richard Csaky
- Oxford Centre for Human Brain Activity, Department of Psychiatry, University of Oxford, OX3 7JX, Oxford, UK; Wellcome Centre for Integrative Neuroimaging, OX3 9DU, Oxford, UK; Christ Church, OX1 1DP, Oxford, UK.
- Mats W J van Es
- Oxford Centre for Human Brain Activity, Department of Psychiatry, University of Oxford, OX3 7JX, Oxford, UK; Wellcome Centre for Integrative Neuroimaging, OX3 9DU, Oxford, UK.
- Oiwi Parker Jones
- Wellcome Centre for Integrative Neuroimaging, OX3 9DU, Oxford, UK; Department of Engineering Science, University of Oxford, OX1 3PJ, Oxford, UK; Jesus College, OX1 3DW, Oxford, UK.
- Mark Woolrich
- Oxford Centre for Human Brain Activity, Department of Psychiatry, University of Oxford, OX3 7JX, Oxford, UK; Wellcome Centre for Integrative Neuroimaging, OX3 9DU, Oxford, UK.
17. Carota F, Schoffelen JM, Oostenveld R, Indefrey P. Parallel or sequential? Decoding conceptual and phonological/phonetic information from MEG signals during language production. Cogn Neuropsychol 2023; 40:298-317. PMID: 38105574; DOI: 10.1080/02643294.2023.2283239.
Abstract
Speaking requires the temporally coordinated planning of core linguistic information, from conceptual meaning to articulation. Recent neurophysiological results suggested that these operations involve a cascade of neural events with subsequent onset times, whilst competing evidence suggests early parallel neural activation. To test these hypotheses, we examined the sources of neuromagnetic activity recorded from 34 participants overtly naming 134 images from 4 object categories (animals, tools, foods and clothes). Within each category, word length and phonological neighbourhood density were co-varied to target phonological/phonetic processes. Multivariate pattern analyses (MVPA) searchlights in source space decoded object categories in occipitotemporal and middle temporal cortex, and phonological/phonetic variables in left inferior frontal (BA 44) and motor cortex early on. The findings suggest early activation of multiple variables due to intercorrelated properties and interactivity of processing, thus raising important questions about the representational properties of target words during the preparatory time enabling overt speaking.
Affiliation(s)
- Francesca Carota
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Cognitive Neuroscience, Radboud University, Nijmegen, The Netherlands
- Jan-Mathijs Schoffelen
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Cognitive Neuroscience, Radboud University, Nijmegen, The Netherlands
- Robert Oostenveld
- Donders Institute for Cognitive Neuroscience, Radboud University, Nijmegen, The Netherlands
- NatMEG, Karolinska Institutet, Stockholm, Sweden
- Peter Indefrey
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Cognitive Neuroscience, Radboud University, Nijmegen, The Netherlands
- Institut für Sprache und Information, Heinrich Heine University, Düsseldorf, Germany
18. Wang L, Kuperberg GR. Better Together: Integrating Multivariate with Univariate Methods, and MEG with EEG to Study Language Comprehension. Lang Cogn Neurosci 2023; 39:991-1019. PMID: 39444757; PMCID: PMC11495849; DOI: 10.1080/23273798.2023.2223783.
Abstract
We used MEG and EEG to examine the effects of Plausibility (anomalous vs. plausible) and Animacy (animate vs. inanimate) on activity to incoming words during language comprehension. We conducted univariate event-related and multivariate spatial similarity analyses on both datasets. The univariate and multivariate results converged in their time course and sensitivity to Plausibility. However, only the spatial similarity analyses detected effects of Animacy. The MEG and EEG findings largely converged between 300-500ms, but diverged in their univariate and multivariate responses to the anomalies between 600-1000ms. We interpret the full set of results within a predictive coding framework. In addition to the theoretical significance of these findings, we discuss the methodological implications of the convergence and divergence between the univariate and multivariate results, as well as between the MEG and EEG results. We argue that a deeper understanding of language processing can be achieved by integrating different analysis approaches and techniques.
Affiliation(s)
- Lin Wang
- Department of Psychiatry and the Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, 02129, USA
- Department of Psychology, Tufts University, Medford, MA, 02155, USA
- Gina R Kuperberg
- Department of Psychiatry and the Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, 02129, USA
- Department of Psychology, Tufts University, Medford, MA, 02155, USA
| |
Collapse
|
19
|
Rahimi S, Jackson R, Farahibozorg SR, Hauk O. Time-Lagged Multidimensional Pattern Connectivity (TL-MDPC): An EEG/MEG pattern transformation based functional connectivity metric. Neuroimage 2023; 270:119958. [PMID: 36813063 PMCID: PMC10030313 DOI: 10.1016/j.neuroimage.2023.119958] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2022] [Revised: 01/16/2023] [Accepted: 02/19/2023] [Indexed: 02/23/2023] Open
Abstract
Functional and effective connectivity methods are essential to study the complex information flow in brain networks underlying human cognition. Only recently have connectivity methods begun to emerge that make use of the full multidimensional information contained in patterns of brain activation, rather than unidimensional summary measures of these patterns. To date, these methods have mostly been applied to fMRI data, and no method allows vertex-to-vertex transformations with the temporal specificity of EEG/MEG data. Here, we introduce time-lagged multidimensional pattern connectivity (TL-MDPC) as a novel bivariate functional connectivity metric for EEG/MEG research. TL-MDPC estimates the vertex-to-vertex transformations among multiple brain regions and across different latency ranges. It determines how well patterns in ROI X at time point tx can linearly predict patterns in ROI Y at time point ty. In the present study, we use simulations to demonstrate TL-MDPC's increased sensitivity to multidimensional effects compared with a unidimensional approach across realistic choices of trial numbers and signal-to-noise ratios. We applied TL-MDPC, as well as its unidimensional counterpart, to an existing dataset varying the depth of semantic processing of visually presented words by contrasting a semantic decision and a lexical decision task. TL-MDPC detected significant effects beginning very early on and showed stronger task modulations than the unidimensional approach, suggesting that it captures more information. With TL-MDPC only, we observed rich connectivity between core semantic representation (left and right anterior temporal lobes) and semantic control (inferior frontal gyrus and posterior temporal cortex) areas under greater semantic demands. TL-MDPC is a promising approach to identify multidimensional connectivity patterns, which are typically missed by unidimensional approaches.
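The core TL-MDPC computation, how well patterns in ROI X at time tx linearly predict patterns in ROI Y at time ty, can be sketched as a cross-validated multivariate regression. The following is a minimal illustrative reimplementation under assumed data shapes, not the authors' released implementation.

```python
# Illustrative sketch of the TL-MDPC idea: cross-validated linear prediction of ROI Y
# patterns at time ty from ROI X patterns at time tx (not the authors' implementation).
# `roi_x`, `roi_y`: single-trial source data, shape (n_trials, n_vertices, n_times).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold
from sklearn.metrics import r2_score

def tl_mdpc_cell(roi_x, roi_y, tx, ty, alpha=1.0, n_splits=5):
    X, Y = roi_x[:, :, tx], roi_y[:, :, ty]   # trial-by-vertex patterns at the two latencies
    scores = []
    for train, test in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(X):
        model = Ridge(alpha=alpha).fit(X[train], Y[train])
        # explained variance of the predicted pattern, averaged over target vertices
        scores.append(r2_score(Y[test], model.predict(X[test]),
                               multioutput='uniform_average'))
    return np.mean(scores)

# Filling a full time-by-time matrix of such scores gives the lagged transformation map:
# score_map[tx, ty] = tl_mdpc_cell(roi_x, roi_y, tx, ty)
```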
Collapse
Affiliation(s)
- Setareh Rahimi
- MRC Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge CB2 7EF United Kingdom.
| | - Rebecca Jackson
- Department of Psychology & York Biomedical Research Institute, University of York, United Kingdom; MRC Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge CB2 7EF United Kingdom
| | - Seyedeh-Rezvan Farahibozorg
- Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neurosciences, University of Oxford, United Kingdom
| | - Olaf Hauk
- MRC Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge CB2 7EF United Kingdom
| |
Collapse
|
20
|
Noonan MP, Von Lautz AH, Bauer Y, Summerfield C, Stokes MS. Differential modulation of visual responses by distractor or target expectations. Atten Percept Psychophys 2023; 85:845-862. [PMID: 36460926 PMCID: PMC10066164 DOI: 10.3758/s13414-022-02617-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/01/2022] [Indexed: 12/03/2022]
Abstract
Discriminating relevant from irrelevant information in a busy visual scene is supported by statistical regularities in the environment. However, it is unclear to what extent immediate stimulus repetitions and higher-order expectations (whether a repetition is statistically probable or not) are supported by the same neural mechanisms. Moreover, it is also unclear whether target- and distractor-related processing are mediated by the same or different underlying neural mechanisms. Using a speeded target discrimination task, the present study implicitly cued subjects to the location of the target or the distractor via manipulations of the underlying stimulus predictability. In separate studies, we collected EEG and MEG alongside behavioural data. Results showed that reaction times were reduced with increased expectations for both types of stimuli and that these effects were driven by expected repetitions in both cases. Despite the similar behavioural pattern across targets and distractors, neurophysiological measures distinguished the two stimuli. Specifically, the amplitude of the P1 was modulated by stimulus relevance, being reduced for repeated distractors and increased for repeated targets. The P1 was not, however, modulated by higher-order stimulus expectations. These expectations were instead reflected in modulations of ERP amplitude and theta power at frontocentral electrodes. Finally, we observed that a single repetition of a distractor was sufficient to reduce the decodability of stimulus spatial location and was also accompanied by a diminished representation of stimulus features. Our results highlight the unique mechanisms involved in distractor expectation and suppression and underline the importance of studying these processes distinctly from target-related attentional control.
Collapse
Affiliation(s)
- M P Noonan
- Department of Experimental Psychology, University of Oxford, Oxford, UK.
| | - A H Von Lautz
- Department of Experimental Psychology, University of Oxford, Oxford, UK
| | - Y Bauer
- Division of Neurobiology, Faculty of Biology, LMU Munich, 82152, Munich, Germany
- Graduate School of Systemic Neurosciences (GSN), LMU Munich, 82152, Munich, Germany
| | - C Summerfield
- Department of Experimental Psychology, University of Oxford, Oxford, UK
| | - M S Stokes
- Department of Experimental Psychology, University of Oxford, Oxford, UK
| |
Collapse
|
21
|
Brockhoff L, Vetter L, Bruchmann M, Schindler S, Moeck R, Straube T. The effects of visual working memory load on detection and neural processing of task-unrelated auditory stimuli. Sci Rep 2023; 13:4342. [PMID: 36927846 PMCID: PMC10020478 DOI: 10.1038/s41598-023-31132-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2022] [Accepted: 03/07/2023] [Indexed: 03/18/2023] Open
Abstract
While perceptual load has been proposed to reduce the processing of task-unrelated stimuli, theoretical arguments and empirical findings for other forms of task load are inconclusive. Here, we systematically investigated the detection and neural processing of auditory stimuli varying in stimulus intensity during a stimulus-unrelated visual working memory task alternating between low and high load. Depending on stimulus strength, we found decreased stimulus detection and reduced P3, but unaffected N1, amplitudes of the event-related potential to auditory stimuli under high as compared to low load. In contrast, load-independent awareness effects were observed during both early (N1) and late (P3) time windows. The findings suggest a late neural effect of visual working memory load on the processing of auditory stimuli, leading to a lower probability of reported awareness of these stimuli.
Collapse
Affiliation(s)
- Laura Brockhoff
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, Von-Esmarch-Str. 52, 48149, Münster, Germany.
| | - Laura Vetter
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, Von-Esmarch-Str. 52, 48149, Münster, Germany
| | - Maximilian Bruchmann
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, Von-Esmarch-Str. 52, 48149, Münster, Germany; Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Münster, Germany
| | - Sebastian Schindler
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, Von-Esmarch-Str. 52, 48149, Münster, Germany; Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Münster, Germany
| | - Robert Moeck
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, Von-Esmarch-Str. 52, 48149, Münster, Germany
| | - Thomas Straube
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, Von-Esmarch-Str. 52, 48149, Münster, Germany; Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Münster, Germany
| |
Collapse
|
22
|
Ebrahiminia F, Cichy RM, Khaligh-Razavi SM. A multivariate comparison of electroencephalogram and functional magnetic resonance imaging to electrocorticogram using visual object representations in humans. Front Neurosci 2022; 16:983602. [PMID: 36330341 PMCID: PMC9624066 DOI: 10.3389/fnins.2022.983602] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2022] [Accepted: 09/23/2022] [Indexed: 09/07/2024] Open
Abstract
Today, most neurocognitive studies in humans employ the non-invasive neuroimaging techniques functional magnetic resonance imaging (fMRI) and electroencephalography (EEG). However, how the data provided by fMRI and EEG relate exactly to the underlying neural activity remains incompletely understood. Here, we aimed to understand the relation between EEG and fMRI data at the level of neural population codes using multivariate pattern analysis. In particular, we assessed whether this relation is affected when we change stimuli or introduce identity-preserving variations to them. For this, we recorded EEG and fMRI data separately from 21 healthy participants while they viewed everyday objects in different viewing conditions, and then related the data to electrocorticography (ECoG) data recorded for the same stimulus set from epilepsy patients. The comparison of EEG and ECoG data showed that object category signals emerge swiftly in the visual system and can be detected by both EEG and ECoG at similar temporal delays after stimulus onset. The correlation between EEG and ECoG was reduced when object representations tolerant to changes in scale and orientation were considered. The comparison of fMRI and ECoG overall revealed a tighter relationship in occipital than in temporal regions, related to differences in fMRI signal-to-noise ratio. Together, our results reveal a complex relationship between fMRI, EEG, and ECoG signals at the level of population codes that critically depends on the time point after stimulus onset, the region investigated, and the visual contents used.
Collapse
Affiliation(s)
- Fatemeh Ebrahiminia
- Department of Stem Cells and Developmental Biology, Cell Science Research Center, Royan Institute for Stem Cell Biology and Technology, Academic Center for Education, Culture and Research (ACECR), Tehran, Iran
- School of Electrical and Computer Engineering, University of Tehran, Tehran, Iran
| | | | - Seyed-Mahdi Khaligh-Razavi
- Department of Stem Cells and Developmental Biology, Cell Science Research Center, Royan Institute for Stem Cell Biology and Technology, Academic Center for Education, Culture and Research (ACECR), Tehran, Iran
| |
Collapse
|
23
|
Higgins C, van Es MWJ, Quinn AJ, Vidaurre D, Woolrich MW. The relationship between frequency content and representational dynamics in the decoding of neurophysiological data. Neuroimage 2022; 260:119462. [PMID: 35872176 PMCID: PMC10565838 DOI: 10.1016/j.neuroimage.2022.119462] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2022] [Revised: 07/04/2022] [Accepted: 07/08/2022] [Indexed: 11/20/2022] Open
Abstract
Decoding of high-temporal-resolution, stimulus-evoked neurophysiological data is increasingly used to test theories about how the brain processes information. However, a fundamental relationship between the frequency spectra of the neural signal and the resulting decoding accuracy timecourse is not widely recognised. We show that, in commonly used instantaneous signal decoding paradigms, each sinusoidal component of the evoked response is translated to double its original frequency in the subsequent decoding accuracy timecourse. We therefore recommend that, where researchers use instantaneous signal decoding paradigms, more aggressive low-pass filtering be applied, with a cut-off at one quarter of the sampling rate, to eliminate representational aliasing artefacts. However, this does not negate the accompanying interpretational challenges. We show that these can be resolved by decoding paradigms that utilise both a signal's instantaneous magnitude and its local gradient information as features for decoding. On a publicly available MEG dataset, this results in decoding accuracy metrics that are higher, more stable over time, and free of the technical and interpretational challenges previously characterised. We anticipate that a broader awareness of these fundamental relationships will enable stronger interpretations of decoding results by linking them more clearly to the underlying signal characteristics that drive them.
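A minimal sketch of the recommendation above, assuming epoched data and condition labels: low-pass filter at one quarter of the sampling rate, then decode each time point using both the instantaneous signal and its local temporal gradient as features. Variable names and classifier choice are illustrative, not taken from the paper's code.

```python
# Sketch of the recommended approach (illustrative, not the authors' code): decode at each
# time point using both the instantaneous signal and its local temporal gradient as features,
# after low-pass filtering at one quarter of the sampling rate.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def decode_with_gradient(epochs, labels, sfreq):
    """epochs: (n_trials, n_channels, n_times); labels: (n_trials,)."""
    b, a = butter(4, (sfreq / 4) / (sfreq / 2), btype='low')   # cut-off = sfreq / 4
    data = filtfilt(b, a, epochs, axis=-1)
    grad = np.gradient(data, axis=-1)                          # local temporal gradient
    n_times = data.shape[-1]
    acc = np.zeros(n_times)
    for t in range(n_times):
        X = np.concatenate([data[:, :, t], grad[:, :, t]], axis=1)  # magnitude + gradient
        acc[t] = cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=5).mean()
    return acc
```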
Collapse
Affiliation(s)
- Cameron Higgins
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
| | - Mats W J van Es
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK.
| | - Andrew J Quinn
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
| | - Diego Vidaurre
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK; Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
| | - Mark W Woolrich
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
| |
Collapse
|
24
|
Hua L, Gao F, Leong C, Yuan Z. Neural decoding dissociates perceptual grouping between proximity and similarity in visual perception. Cereb Cortex 2022; 33:3803-3815. [PMID: 35973163 DOI: 10.1093/cercor/bhac308] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2022] [Revised: 07/13/2022] [Accepted: 07/14/2022] [Indexed: 11/13/2022] Open
Abstract
Unlike the case of a single grouping principle, the cognitive and neural mechanisms underlying the dissociation between two or more grouping principles are still unclear. In this study, a dimotif lattice paradigm that can adjust the strength of one grouping principle was used to inspect how, when, and where the processing of two grouping principles (proximity and similarity) is carried out in the human brain. Our psychophysical findings demonstrated that the similarity grouping effect was enhanced as the proximity effect was reduced when the grouping cues of proximity and similarity were presented simultaneously. Meanwhile, EEG decoding using time-resolved MVPA was performed to reveal the specific cognitive patterns involved in each principle. More importantly, the onsets of dissociation between the two grouping principles fell within three time windows: the early-stage processing of proximity-defined local visual element arrangement in middle occipital cortex, the middle-stage processing for feature selection modulating low-level visual cortex such as inferior occipital cortex and fusiform cortex, and the high-level cognitive integration underlying decisions for a specific grouping preference in parietal areas. In addition, we found that the brain responses were highly correlated with behavioral grouping. Therefore, our study provides direct evidence for a link between the human perceptual space of grouping decision-making and the neural space of brain activation patterns.
Collapse
Affiliation(s)
- Lin Hua
- Centre for Cognitive and Brain Sciences, N21 Research Building, University of Macau, Avenida da Universidade, Taipa, Macau SAR 999078, China; Faculty of Health Sciences, E12 Building, University of Macau, Avenida da Universidade, Taipa, Macau SAR 999078, China
| | - Fei Gao
- Centre for Cognitive and Brain Sciences, N21 Research Building, University of Macau, Avenida da Universidade, Taipa, Macau SAR 999078, China
| | - Chantat Leong
- Centre for Cognitive and Brain Sciences, N21 Research Building, University of Macau, Avenida da Universidade, Taipa, Macau SAR 999078, China; Faculty of Health Sciences, E12 Building, University of Macau, Avenida da Universidade, Taipa, Macau SAR 999078, China
| | - Zhen Yuan
- Centre for Cognitive and Brain Sciences, N21 Research Building, University of Macau, Avenida da Universidade, Taipa, Macau SAR 999078, China; Faculty of Health Sciences, E12 Building, University of Macau, Avenida da Universidade, Taipa, Macau SAR 999078, China
| |
Collapse
|
25
|
Li Y, Zhang M, Liu S, Luo W. EEG decoding of multidimensional information from emotional faces. Neuroimage 2022; 258:119374. [PMID: 35700944 DOI: 10.1016/j.neuroimage.2022.119374] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2022] [Revised: 06/03/2022] [Accepted: 06/10/2022] [Indexed: 10/18/2022] Open
Abstract
Humans can detect and recognize faces quickly, but there has been little research on the temporal dynamics with which different dimensions of face information are extracted. The present study aimed to investigate the time course of neural responses representing different dimensions of face information, such as age, gender, emotion, and identity. We used support vector machine decoding to obtain representational dissimilarity matrices of event-related potential responses to different faces for each subject over time. In addition, we performed representational similarity analysis with model representational dissimilarity matrices that encoded the different dimensions of face information. Three significant findings were observed. First, the extraction of facial emotion occurred before that of facial identity and lasted for a long time, an effect specific to the right frontal region. Second, arousal was preferentially extracted before valence during the processing of facial emotional information. Third, the different dimensions of face information exhibited representational stability during different time periods. In conclusion, these findings reveal the precise temporal dynamics of multidimensional information processing in faces and provide powerful support for computational models of emotional face perception.
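The analysis logic, pairwise decoding of ERP patterns to build a time-resolved RDM that is then compared with model RDMs, can be sketched as follows. This is an illustrative reconstruction with hypothetical variable names, not the authors' pipeline.

```python
# Illustrative sketch (not the authors' code): build a time-resolved representational
# dissimilarity matrix (RDM) from pairwise SVM decoding of ERP patterns, then correlate it
# with a model RDM coding emotion, identity, age, or gender.
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def decoding_rdm(erp, labels, t):
    """erp: (n_trials, n_channels, n_times); labels: condition index per trial (0..n_cond-1)."""
    n_cond = len(np.unique(labels))
    rdm = np.zeros((n_cond, n_cond))
    for i, j in combinations(range(n_cond), 2):
        sel = np.isin(labels, [i, j])
        X, y = erp[sel, :, t], labels[sel]
        acc = cross_val_score(SVC(kernel='linear'), X, y, cv=5).mean()
        rdm[i, j] = rdm[j, i] = acc      # higher pairwise accuracy = more dissimilar patterns
    return rdm

def rsa(neural_rdm, model_rdm):
    iu = np.triu_indices_from(neural_rdm, k=1)   # compare off-diagonal entries only
    return spearmanr(neural_rdm[iu], model_rdm[iu]).correlation
```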
Collapse
Affiliation(s)
- Yiwen Li
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
| | - Mingming Zhang
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
| | - Shuaicheng Liu
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
| | - Wenbo Luo
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China.
| |
Collapse
|
26
|
Ferrante O, Liu L, Minarik T, Gorska U, Ghafari T, Luo H, Jensen O. FLUX: A pipeline for MEG analysis. Neuroimage 2022; 253:119047. [PMID: 35276363 PMCID: PMC9127391 DOI: 10.1016/j.neuroimage.2022.119047] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2021] [Revised: 01/26/2022] [Accepted: 02/28/2022] [Indexed: 11/23/2022] Open
Abstract
Magnetoencephalography (MEG) allows for quantifying modulations of human neuronal activity on a millisecond time scale while also making it possible to estimate the location of the underlying neuronal sources. The technique relies heavily on signal processing and source modelling. To this end, there are several open-source toolboxes developed by the community. While these toolboxes are powerful, as they provide a wealth of options for analyses, the many options also pose a challenge for reproducible research as well as for researchers new to the field. The FLUX pipeline aims to make the analysis steps and settings explicit for standard analyses in cognitive neuroscience. It focuses on the quantification and source localization of oscillatory brain activity, but it can also be used for event-related fields and multivariate pattern analysis. The pipeline is derived from the Cogitate consortium addressing a set of concrete cognitive neuroscience questions. Specifically, the pipeline, including documented code, is defined for MNE-Python (a Python toolbox) and FieldTrip (a MATLAB toolbox), and a data set on visuospatial attention is used to illustrate the steps. The scripts are provided as notebooks implemented in Jupyter Notebook and MATLAB Live Editor, providing explanations, justifications, and graphical outputs for the essential steps. Furthermore, we provide suggestions for text and parameter settings to be used in registrations and publications to improve replicability and facilitate pre-registrations. FLUX can be used for education, either in self-studies or guided workshops. We expect that the FLUX pipeline will strengthen the field of MEG by providing some standardization of the basic analysis steps and by aligning approaches across toolboxes. Furthermore, we also aim to support new researchers entering the field by providing education and training. The FLUX pipeline is not meant to be static; it will evolve with the development of the toolboxes and with new insights. Furthermore, with the anticipated increase in MEG systems based on optically pumped magnetometers, the pipeline will also evolve to embrace these developments.
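For orientation, the kind of steps such a pipeline documents can be compressed into a short MNE-Python sketch; the filename, stimulus channel and event codes below are hypothetical, and the real pipeline specifies many more settings (artefact rejection, ICA, source modelling) than shown here.

```python
# Compressed, illustrative outline of typical MEG steps in MNE-Python (not the FLUX pipeline
# itself); filenames, channel names and event codes are hypothetical placeholders.
import numpy as np
import mne

raw = mne.io.read_raw_fif('sub-01_task-attention_meg.fif', preload=True)  # hypothetical file
raw.filter(l_freq=1.0, h_freq=40.0)                      # band-pass for evoked analyses
events = mne.find_events(raw, stim_channel='STI101')     # assumed trigger channel
epochs = mne.Epochs(raw, events, event_id={'cue_left': 1, 'cue_right': 2},
                    tmin=-0.5, tmax=1.5, baseline=(None, 0), preload=True)

# Oscillatory activity: Morlet wavelet time-frequency decomposition
freqs = np.arange(4.0, 31.0, 1.0)
power = mne.time_frequency.tfr_morlet(epochs, freqs=freqs, n_cycles=freqs / 2.0,
                                      return_itc=False, average=True)

# Evoked fields for comparison and subsequent source modelling
evoked = epochs['cue_left'].average()
```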
Collapse
Affiliation(s)
- Oscar Ferrante
- Centre for Human Brain Health, School of Psychology, University of Birmingham, UK
| | - Ling Liu
- School of Communication Science, Beijing Language and Culture University, Beijing, China; School of Psychological and Cognitive Sciences, Peking University, Beijing, China
| | - Tamas Minarik
- Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
| | - Urszula Gorska
- Department of Psychiatry, University of Wisconsin-Madison, Madison, WI, USA
| | - Tara Ghafari
- Centre for Human Brain Health, School of Psychology, University of Birmingham, UK; Department of Physiology, Medical School, Shahid Beheshti University of Medical Sciences, Tehran, Iran
| | - Huan Luo
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China
| | - Ole Jensen
- Centre for Human Brain Health, School of Psychology, University of Birmingham, UK.
| |
Collapse
|
27
|
Liu Z, Liu S, Li S, Li L, Zheng L, Weng X, Guo X, Lu Y, Men W, Gao J, You X. Dissociating Value-Based Neurocomputation from Subsequent Selection-Related Activations in Human Decision-Making. Cereb Cortex 2022; 32:4141-4155. [PMID: 35024797 DOI: 10.1093/cercor/bhab471] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2021] [Revised: 11/17/2021] [Accepted: 11/18/2021] [Indexed: 11/12/2022] Open
Abstract
Human decision-making requires the brain to compute benefit and risk and, on that basis, to select between options. It remains unclear how value-based neural computation and subsequent brain activity evolve to achieve a final decision and which process is modulated by irrational factors. We adopted a sequential risk-taking task that asked participants to successively decide whether or not to open each of eight boxes per trial, each carrying potential reward or punishment. With time-resolved multivariate pattern analyses, we decoded electroencephalography and magnetoencephalography responses to two successive low- and high-risk boxes before the open-box action. Taking the decoding-accuracy peak as a marker of the completion of first-stage processing, we used it as a demarcation to dissociate the neural time course of decision-making into valuation and selection stages. Behavioral hierarchical drift diffusion modeling confirmed that different information is processed in the two stages: the valuation stage was related to the drift rate of evidence accumulation, whereas the selection stage was related to the nondecision time spent on response production. We further observed that the medial orbitofrontal cortex participated in the valuation stage, while the superior frontal gyrus was engaged in the selection stage of irrational open-box decisions. Finally, we found that irrational factors influenced decision-making through the selection stage rather than the valuation stage.
Collapse
Affiliation(s)
- Zhiyuan Liu
- Shaanxi Key Laboratory of Behavior and Cognitive Neuroscience, School of Psychology, Shaanxi Normal University, Xi'an 710062, China
| | - Sijia Liu
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou 310007, China
| | - Shuang Li
- School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China
| | - Lin Li
- School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China
| | - Li Zheng
- School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China
| | - Xue Weng
- School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China
| | - Xiuyan Guo
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou 310007, China; Shanghai Key Laboratory of Magnetic Resonance, School of Physics and Materials Science, East China Normal University, Shanghai 200062, China
| | - Yang Lu
- School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China
| | - Weiwei Men
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100091, China; Beijing City Key Laboratory for Medical Physics and Engineering, Institute of Heavy Ion Physics, School of Physics, Peking University, Beijing 100091, China
| | - Jiahong Gao
- Beijing City Key Laboratory for Medical Physics and Engineering, Institute of Heavy Ion Physics, School of Physics, Peking University, Beijing 100091, China; Center for MRI Research and McGovern Institute for Brain Research, Peking University, Beijing 100091, China
| | - Xuqun You
- Shaanxi Key Laboratory of Behavior and Cognitive Neuroscience, School of Psychology, Shaanxi Normal University, Xi'an 710062, China
| |
Collapse
|
28
|
Yip HMK, Cheung LYT, Ngan VSH, Wong YK, Wong ACN. The Effect of Task on Object Processing revealed by EEG decoding. Eur J Neurosci 2022; 55:1174-1199. [PMID: 35023230 DOI: 10.1111/ejn.15598] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2020] [Revised: 01/05/2022] [Accepted: 01/10/2022] [Indexed: 12/01/2022]
Abstract
Recent studies have shown that task demand affects object representations in higher-level visual areas and beyond, but not so much in earlier areas. There are, however, limitations in those studies, including the relatively weak manipulation of task due to the use of familiar real-life objects, the low temporal resolution of fMRI, and the emphasis on the amount rather than the source of information carried by brain activations. In the current study, observers categorized images of artificial objects along one of two orthogonal dimensions, shape and texture, while their brain activity was recorded with electroencephalography (EEG). Results showed that object processing along the texture dimension was affected by task demand starting from a relatively late time (320-370 ms window) after image onset. The findings are consistent with the view that task exerts an effect on the later phases of object processing.
Collapse
Affiliation(s)
- Hoi Ming Ken Yip
- Department of Psychology, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong
| | - Leo Y T Cheung
- Department of Educational Psychology, Faculty of Education, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong
| | - Vince S H Ngan
- Department of Psychology, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong
| | - Yetta Kwailing Wong
- Department of Educational Psychology, Faculty of Education, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong
| | - Alan C N Wong
- Department of Psychology, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong
| |
Collapse
|
29
|
Cohen MX. A tutorial on generalized eigendecomposition for denoising, contrast enhancement, and dimension reduction in multichannel electrophysiology. Neuroimage 2021; 247:118809. [PMID: 34906717 DOI: 10.1016/j.neuroimage.2021.118809] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2021] [Revised: 11/20/2021] [Accepted: 12/10/2021] [Indexed: 10/19/2022] Open
Abstract
The goal of this paper is to present a theoretical and practical introduction to generalized eigendecomposition (GED), which is a robust and flexible framework used for dimension reduction and source separation in multichannel signal processing. In cognitive electrophysiology, GED is used to create spatial filters that maximize a researcher-specified contrast. For example, one may wish to exploit an assumption that different sources have different frequency content, or that sources vary in magnitude across experimental conditions. GED is fast and easy to compute, performs well in simulated and real data, and is easily adaptable to a variety of specific research goals. This paper introduces GED in a way that ties together myriad individual publications and applications of GED in electrophysiology, and provides sample MATLAB and Python code that can be tested and adapted. Practical considerations and issues that often arise in applications are discussed.
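A minimal GED sketch along the lines described in the tutorial is given below, assuming two data segments that define the to-be-maximized contrast; the paper's own sample code should be preferred for real analyses.

```python
# Minimal sketch of a generalized eigendecomposition (GED) spatial filter (illustrative).
# S is the covariance of the "signal" data (e.g. narrowband-filtered or condition A),
# R the covariance of the "reference" data (e.g. broadband or condition B).
import numpy as np
from scipy.linalg import eigh

def ged_filter(data_signal, data_reference):
    """data_*: (n_channels, n_samples). Returns the top spatial filter, component and pattern."""
    S = np.cov(data_signal)
    R = np.cov(data_reference)
    R = R + 1e-6 * np.mean(np.diag(R)) * np.eye(R.shape[0])   # light shrinkage regularization
    evals, evecs = eigh(S, R)                                  # generalized eigendecomposition
    w = evecs[:, np.argmax(evals)]                             # filter maximizing the S-vs-R contrast
    component = w @ data_signal                                # component time series
    pattern = S @ w                                            # activation pattern for topographies
    return w, component, pattern
```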
Collapse
Affiliation(s)
- Michael X Cohen
- Donders Centre for Medical Neuroscience, Radboud University Medical Center, the Netherlands.
| |
Collapse
|
30
|
Guo LL, Oghli YS, Frost A, Niemeier M. Multivariate Analysis of Electrophysiological Signals Reveals the Time Course of Precision Grasps Programs: Evidence for Nonhierarchical Evolution of Grasp Control. J Neurosci 2021; 41:9210-9222. [PMID: 34551938 PMCID: PMC8570828 DOI: 10.1523/jneurosci.0992-21.2021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2021] [Revised: 09/13/2021] [Accepted: 09/16/2021] [Indexed: 11/21/2022] Open
Abstract
Current understanding of the neural processes underlying human grasping suggests that grasp computations involve gradients from higher- to lower-level representations and, relatedly, from visual to motor processes. However, it is unclear whether these processes evolve in a strictly canonical manner from higher to intermediate and then to lower levels, given that this knowledge importantly relies on functional imaging, which lacks temporal resolution. To examine grasping in fine temporal detail, here we used multivariate EEG analysis. We asked participants to grasp objects while controlling the time at which crucial elements of grasp programs were specified. We first specified the orientation with which participants should grasp objects, and only after a delay did we instruct participants which effector to use to grasp: the right or the left hand. We also asked participants to grasp with both hands, because bimanual and left-hand grasping share intermediate-level grasp representations. We observed that grasp programs evolved in a canonical manner from visual representations, which were independent of effectors, to motor representations that distinguished between effectors. However, we found that intermediate representations of effectors, which partially distinguished between effectors, arose after representations that distinguished among all effector types. Our results show that grasp computations do not proceed in a strictly hierarchically canonical fashion, highlighting the importance of the fine temporal resolution of EEG for a comprehensive understanding of human grasp control. SIGNIFICANCE STATEMENT: A long-standing assumption about grasp computations is that grasp representations progress from higher- to lower-level control in a regular, or canonical, fashion. Here, we combined EEG and multivariate pattern analysis to characterize the temporal dynamics of grasp representations while participants viewed objects and were subsequently cued to execute an unimanual or bimanual grasp. Interrogation of the temporal dynamics revealed that lower-level effector representations emerged before intermediate levels of grasp representation, suggesting a partially noncanonical progression from higher to lower and then to intermediate levels of grasp control.
Collapse
Affiliation(s)
- Lin Lawrence Guo
- Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
| | - Yazan Shamli Oghli
- Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
| | - Adam Frost
- Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
| | - Matthias Niemeier
- Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
- Centre for Vision Research, York University, Toronto, Ontario M4N 3M6, Canada
- Vision: Science to Applications, York University, Toronto, Ontario M3J 1P3, Canada
| |
Collapse
|
31
|
Hansen BC, Greene MR, Field DJ. Dynamic Electrode-to-Image (DETI) mapping reveals the human brain's spatiotemporal code of visual information. PLoS Comput Biol 2021; 17:e1009456. [PMID: 34570753 PMCID: PMC8496831 DOI: 10.1371/journal.pcbi.1009456] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2021] [Revised: 10/07/2021] [Accepted: 09/16/2021] [Indexed: 11/18/2022] Open
Abstract
A number of neuroimaging techniques have been employed to understand how visual information is transformed along the visual pathway. Although each technique has spatial and temporal limitations, they can each provide important insights into the visual code. While the BOLD signal of fMRI can be quite informative, the visual code is not static and this can be obscured by fMRI’s poor temporal resolution. In this study, we leveraged the high temporal resolution of EEG to develop an encoding technique based on the distribution of responses generated by a population of real-world scenes. This approach maps neural signals to each pixel within a given image and reveals location-specific transformations of the visual code, providing a spatiotemporal signature for the image at each electrode. Our analyses of the mapping results revealed that scenes undergo a series of nonuniform transformations that prioritize different spatial frequencies at different regions of scenes over time. This mapping technique offers a potential avenue for future studies to explore how dynamic feedforward and recurrent processes inform and refine high-level representations of our visual world. The visual information that we sample from our environment undergoes a series of neural modifications, with each modification state (or visual code) consisting of a unique distribution of responses across neurons along the visual pathway. However, current noninvasive neuroimaging techniques provide an account of that code that is coarse with respect to time or space. Here, we present dynamic electrode-to-image (DETI) mapping, an analysis technique that capitalizes on the high temporal resolution of EEG to map neural signals to each pixel within a given image to reveal location-specific modifications of the visual code. The DETI technique reveals maps of features that are associated with the neural signal at each pixel and at each time point. DETI mapping shows that real-world scenes undergo a series of nonuniform modifications over both space and time. Specifically, we find that the visual code varies in a location-specific manner, likely reflecting that neural processing prioritizes different features at different image locations over time. DETI mapping therefore offers a potential avenue for future studies to explore how each modification state informs and refines the conceptual meaning of our visual world.
Collapse
Affiliation(s)
- Bruce C. Hansen
- Colgate University, Department of Psychological & Brain Sciences, Neuroscience Program, Hamilton, New York, United States of America
| | - Michelle R. Greene
- Bates College, Neuroscience Program, Lewiston, Maine, United States of America
| | - David J. Field
- Cornell University, Department of Psychology, Ithaca, New York, United States of America
| |
Collapse
|
32
|
The representational dynamics of perceived voice emotions evolve from categories to dimensions. Nat Hum Behav 2021; 5:1203-1213. [PMID: 33707658 PMCID: PMC7611700 DOI: 10.1038/s41562-021-01073-0] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2020] [Accepted: 02/08/2021] [Indexed: 01/31/2023]
Abstract
Long-standing affective science theories conceive the perception of emotional stimuli either as discrete categories (for example, an angry voice) or continuous dimensional attributes (for example, an intense and negative vocal emotion). Which position provides a better account is still widely debated. Here we contrast the positions to account for acoustics-independent perceptual and cerebral representational geometry of perceived voice emotions. We combined multimodal imaging of the cerebral response to heard vocal stimuli (using functional magnetic resonance imaging and magneto-encephalography) with post-scanning behavioural assessment of voice emotion perception. By using representational similarity analysis, we find that categories prevail in perceptual and early (less than 200 ms) frontotemporal cerebral representational geometries and that dimensions impinge predominantly on a later limbic-temporal network (at 240 ms and after 500 ms). These results reconcile the two opposing views by reframing the perception of emotions as the interplay of cerebral networks with different representational dynamics that emphasize either categories or dimensions.
Collapse
|
33
|
Hubbard RJ, Federmeier KD. Representational Pattern Similarity of Electrical Brain Activity Reveals Rapid and Specific Prediction during Language Comprehension. Cereb Cortex 2021; 31:4300-4313. [PMID: 33895819 DOI: 10.1093/cercor/bhab087] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
Predicting upcoming events is a critical function of the brain, and language provides a fertile testing ground for studying prediction, as comprehenders use context to predict features of upcoming words. Many aspects of the mechanisms of prediction remain elusive, partly due to a lack of methodological tools to probe prediction formation in the moment. To elucidate which features are neurally preactivated and when, we used representational similarity analysis on previously collected sentence reading data. We compared EEG activity patterns elicited by expected and unexpected sentence-final words to patterns elicited by the preceding words of the sentence, in both strongly and weakly constraining sentences. Pattern similarity with the final word was increased in an early time window following the presentation of the pre-final word, and this increase was modulated by both expectancy and constraint. This was not seen at earlier words, suggesting that predictions were precisely timed. Additionally, pre-final word activity (the predicted representation) had negative similarity with later final word activity, but only for strongly expected words. These findings shed light on the mechanisms of prediction in the brain: rapid preactivation occurs following certain cues, but the predicted features may receive reduced processing upon confirmation.
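The spatial pattern similarity logic can be sketched as a correlation, across channels, between the topography evoked by the pre-final word and the topography evoked by the final word at each time point; the function below is an illustrative sketch with assumed inputs, not the authors' analysis code.

```python
# Illustrative sketch (not the authors' code) of spatial pattern similarity: correlate the
# scalp topography evoked by the pre-final word with the topography evoked by the final word
# at each time point, e.g. separately for expected and unexpected sentence endings.
import numpy as np

def pattern_similarity(prefinal_topo, final_epoch):
    """prefinal_topo: (n_channels,) template topography (e.g. averaged over an early window);
    final_epoch: (n_channels, n_times). Returns a similarity time course."""
    n_times = final_epoch.shape[1]
    sim = np.empty(n_times)
    for t in range(n_times):
        sim[t] = np.corrcoef(prefinal_topo, final_epoch[:, t])[0, 1]
    return sim

# e.g. compare pattern_similarity(prefinal_topo, final_expected)
#      against pattern_similarity(prefinal_topo, final_unexpected)
```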
Collapse
Affiliation(s)
- Ryan J Hubbard
- Beckman Institute for Advanced Science and Technology, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA
| | - Kara D Federmeier
- Beckman Institute for Advanced Science and Technology, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA; Department of Psychology, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA; Program in Neuroscience, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA
| |
Collapse
|
34
|
Sharifian F, Schneider D, Arnau S, Wascher E. Decoding of cognitive processes involved in the continuous performance task. Int J Psychophysiol 2021; 167:57-68. [PMID: 34216693 DOI: 10.1016/j.ijpsycho.2021.06.012] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2021] [Revised: 06/22/2021] [Accepted: 06/25/2021] [Indexed: 10/21/2022]
Abstract
Decoding of electroencephalographic brain representations is a powerful data-driven technique to assess the stream of cognitive information processing. It could promote a more thorough understanding of cognitive control networks. For many years, the continuous performance task has been used to investigate impaired proactive and reactive cognitive functions. So far, such investigations have mainly involved task performance and univariate electroencephalogram measures. In this study, we use multivariate pattern analysis of continuous performance task variations to provide a more complete spatiotemporal outline of the information-processing flow involved in sustained and transient attention and response preparation. Besides effects that are well in line with previous EEG research but can be described in more spatial and temporal detail with the present methods, our results suggest the presence of a higher-order feedback control system that is engaged when expectations are violated. Such feedback control is related to modulations of behavior both within and between individuals.
Collapse
Affiliation(s)
- Fariba Sharifian
- Leibniz Research Centre for Working Environments and Human Factors (IfADo), Department of Ergonomics, Ardeystr. 67, 44139 Dortmund, Germany.
| | - Daniel Schneider
- Leibniz Research Centre for Working Environments and Human Factors (IfADo), Department of Ergonomics, Ardeystr. 67, 44139 Dortmund, Germany
| | - Stefan Arnau
- Leibniz Research Centre for Working Environments and Human Factors (IfADo), Department of Ergonomics, Ardeystr. 67, 44139 Dortmund, Germany
| | - Edmund Wascher
- Leibniz Research Centre for Working Environments and Human Factors (IfADo), Department of Ergonomics, Ardeystr. 67, 44139 Dortmund, Germany
| |
Collapse
|
35
|
Opoku-Baah C, Schoenhaut AM, Vassall SG, Tovar DA, Ramachandran R, Wallace MT. Visual Influences on Auditory Behavioral, Neural, and Perceptual Processes: A Review. J Assoc Res Otolaryngol 2021; 22:365-386. [PMID: 34014416 PMCID: PMC8329114 DOI: 10.1007/s10162-021-00789-0] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2020] [Accepted: 02/07/2021] [Indexed: 01/03/2023] Open
Abstract
In a naturalistic environment, auditory cues are often accompanied by information from other senses, which can be redundant with or complementary to the auditory information. Although the multisensory interactions that derive from this combination of information and shape auditory function are seen across all sensory modalities, our greatest body of knowledge to date centers on how vision influences audition. In this review, we attempt to capture the state of our understanding at this point in time regarding this topic. Following a general introduction, the review is divided into five sections. In the first section, we review the psychophysical evidence in humans regarding vision's influence on audition, making the distinction between vision's ability to enhance versus alter auditory performance and perception. Three examples are then described that serve to highlight vision's ability to modulate auditory processes: spatial ventriloquism, cross-modal dynamic capture, and the McGurk effect. The final part of this section discusses models that have been built based on available psychophysical data and that seek to provide greater mechanistic insights into how vision can impact audition. The second section reviews the extant neuroimaging and far-field imaging work on this topic, with a strong emphasis on the roles of feedforward and feedback processes, on imaging insights into the causal nature of audiovisual interactions, and on the limitations of current imaging-based approaches. These limitations point to a greater need for machine-learning-based decoding approaches toward understanding how auditory representations are shaped by vision. The third section reviews the wealth of neuroanatomical and neurophysiological data from animal models that highlights audiovisual interactions at the neuronal and circuit level in both subcortical and cortical structures. It also speaks to the functional significance of audiovisual interactions for two critically important facets of auditory perception: scene analysis and communication. The fourth section presents current evidence for alterations in audiovisual processes in three clinical conditions: autism, schizophrenia, and sensorineural hearing loss. These changes in audiovisual interactions are postulated to have cascading effects on higher-order domains of dysfunction in these conditions. The final section highlights ongoing work seeking to leverage our knowledge of audiovisual interactions to develop better remediation approaches to these sensory-based disorders, founded in concepts of perceptual plasticity in which vision has been shown to have the capacity to facilitate auditory learning.
Collapse
Affiliation(s)
- Collins Opoku-Baah
- Neuroscience Graduate Program, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
| | - Adriana M Schoenhaut
- Neuroscience Graduate Program, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
| | - Sarah G Vassall
- Neuroscience Graduate Program, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
| | - David A Tovar
- Neuroscience Graduate Program, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
| | - Ramnarayan Ramachandran
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Department of Psychology, Vanderbilt University, Nashville, TN, USA
- Department of Hearing and Speech, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt Vision Research Center, Nashville, TN, USA
| | - Mark T Wallace
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA.
- Department of Psychology, Vanderbilt University, Nashville, TN, USA.
- Department of Hearing and Speech, Vanderbilt University Medical Center, Nashville, TN, USA.
- Vanderbilt Vision Research Center, Nashville, TN, USA.
- Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA.
- Department of Pharmacology, Vanderbilt University, Nashville, TN, USA.
| |
Collapse
|
36
|
Classification of visuomotor tasks based on electroencephalographic data depends on age-related differences in brain activity patterns. Neural Netw 2021; 142:363-374. [PMID: 34116449 DOI: 10.1016/j.neunet.2021.04.029] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2020] [Revised: 03/12/2021] [Accepted: 04/22/2021] [Indexed: 11/23/2022]
Abstract
Classification of physiological data provides a data-driven approach to study central aspects of motor control, which changes with age. To implement such results in real-life applications for the elderly, it is important to identify age-specific characteristics of movement classification. We compared task classification based on EEG-derived activity patterns related to brain network characteristics between older and younger adults performing force tracking with two task characteristics (sinusoidal vs. constant) with the right or left hand. We extracted brain network patterns with dynamic mode decomposition (DMD) and classified the tasks at the individual level using linear discriminant analysis (LDA). Next, we compared the models' performance between the groups. Studying brain activity patterns, we identified signatures of altered motor network function reflecting dedifferentiated and compensatory brain activation in older adults. We found that classification performance for body side was lower in older adults, whereas classification performance with respect to task characteristics was better in older adults. This may indicate a higher susceptibility of brain network mechanisms to task difficulty in the elderly. Signatures of dedifferentiation and compensation refer to an age-related reorganization of functional brain networks, which suggests that classification of visuomotor tracking tasks is influenced by age-specific characteristics of brain activity patterns. In addition to insights into central aspects of fine motor control, the results presented here are relevant in application-oriented areas such as brain-computer interfaces.
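A compact sketch of the DMD-plus-LDA approach is shown below, assuming epoched EEG and task labels; the rank truncation and feature choice are illustrative assumptions rather than the settings used in the study.

```python
# Illustrative sketch (not the authors' pipeline): extract dynamic mode decomposition (DMD)
# features from an EEG epoch and classify tasks with linear discriminant analysis (LDA).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def dmd_features(epoch, rank=10):
    """epoch: (n_channels, n_times). Returns DMD mode magnitudes as a flat feature vector."""
    X1, X2 = epoch[:, :-1], epoch[:, 1:]                    # time-shifted data matrices
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, V = U[:, :rank], s[:rank], Vh.conj().T[:, :rank]  # rank-truncated SVD
    A_tilde = U.conj().T @ X2 @ V @ np.diag(1.0 / s)        # low-rank linear propagator
    evals, W = np.linalg.eig(A_tilde)                       # eigenvalues encode mode dynamics
    modes = X2 @ V @ np.diag(1.0 / s) @ W                   # exact DMD modes (channels x rank)
    return np.abs(modes).ravel()                            # spatial magnitude of each mode

# epochs: (n_trials, n_channels, n_times); labels: task per trial
# X = np.array([dmd_features(ep) for ep in epochs])
# acc = cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=5).mean()
```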
Collapse
|
37
|
van Driel J, Olivers CNL, Fahrenfort JJ. High-pass filtering artifacts in multivariate classification of neural time series data. J Neurosci Methods 2021; 352:109080. [PMID: 33508412 DOI: 10.1016/j.jneumeth.2021.109080] [Citation(s) in RCA: 34] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2019] [Revised: 01/13/2021] [Accepted: 01/15/2021] [Indexed: 12/11/2022]
Abstract
BACKGROUND: Traditionally, EEG/MEG data are high-pass filtered and baseline-corrected to remove slow drifts. Minor deleterious effects of high-pass filtering in traditional time-series analysis have been well documented, including temporal displacements. However, its effects on time-resolved multivariate pattern classification analyses (MVPA) are largely unknown.
NEW METHOD: To prevent potential displacement effects, we extend an alternative method of removing slow drift noise, robust detrending, with a procedure in which we mask out all cortical events from each trial. We refer to this method as trial-masked robust detrending.
RESULTS: In both real and simulated EEG data from a working memory experiment, we show that both high-pass filtering and standard robust detrending create artifacts that result in the displacement of multivariate patterns into activity-silent periods, particularly apparent in temporal generalization analyses, and especially in combination with baseline correction. We show that trial-masked robust detrending is free from such displacements.
COMPARISON WITH EXISTING METHODS: Temporal displacement may emerge even with modest filter cut-off settings such as 0.05 Hz, and even with regular robust detrending. Trial-masked robust detrending, in contrast, results in artifact-free decoding without displacements. Baseline correction may unwittingly obfuscate spurious decoding effects and displace them to the rest of the trial.
CONCLUSIONS: Decoding analyses benefit from trial-masked robust detrending, without the unwanted side effects introduced by filtering or regular robust detrending. However, for sufficiently clean data sets and sufficiently strong signals, no filtering or detrending at all may work adequately. Implications for other types of data are discussed, followed by a number of recommendations.
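A simplified sketch of the trial-masked detrending idea is given below; the authors' method builds on iterative robust detrending, whereas this illustration simply fits the slow trend with zero weight on all samples inside trial windows and subtracts it everywhere.

```python
# Simplified sketch of trial-masked detrending (not the authors' implementation): fit a slow
# polynomial trend per channel using only samples OUTSIDE the trial windows, then subtract
# that trend from the whole recording.
import numpy as np
from numpy.polynomial import polynomial as P

def trial_masked_detrend(data, trial_mask, order=10):
    """data: (n_channels, n_samples); trial_mask: boolean (n_samples,), True inside trials."""
    n_samples = data.shape[1]
    t = np.linspace(-1.0, 1.0, n_samples)           # normalized time axis for numerical stability
    weights = (~trial_mask).astype(float)            # zero weight for samples inside trials
    detrended = np.empty_like(data)
    for ch in range(data.shape[0]):
        coefs = P.polyfit(t, data[ch], deg=order, w=weights)
        detrended[ch] = data[ch] - P.polyval(t, coefs)
    return detrended
```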
Collapse
Affiliation(s)
- Joram van Driel
- Institute for Brain and Behaviour Amsterdam, Vrije Universiteit Amsterdam, the Netherlands; Department of Experimental and Applied Psychology - Cognitive Psychology, Vrije Universiteit Amsterdam, the Netherlands; Faculty of Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, the Netherlands
| | - Christian N L Olivers
- Institute for Brain and Behaviour Amsterdam, Vrije Universiteit Amsterdam, the Netherlands; Department of Experimental and Applied Psychology - Cognitive Psychology, Vrije Universiteit Amsterdam, the Netherlands; Faculty of Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, the Netherlands
| | - Johannes J Fahrenfort
- Institute for Brain and Behaviour Amsterdam, Vrije Universiteit Amsterdam, the Netherlands; Department of Experimental and Applied Psychology - Cognitive Psychology, Vrije Universiteit Amsterdam, the Netherlands; Faculty of Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, the Netherlands; Department of Psychology, University of Amsterdam, Amsterdam 1001 NK, the Netherlands; Amsterdam Brain and Cognition (ABC), University of Amsterdam, Amsterdam 1001 NK, the Netherlands.
| |
Collapse
|
38
|
Lu Z, Ku Y. NeuroRA: A Python Toolbox of Representational Analysis From Multi-Modal Neural Data. Front Neuroinform 2021; 14:563669. [PMID: 33424573 PMCID: PMC7787009 DOI: 10.3389/fninf.2020.563669] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2020] [Accepted: 12/03/2020] [Indexed: 11/26/2022] Open
Abstract
In studies of cognitive neuroscience, multivariate pattern analysis (MVPA) is widely used, as it offers richer information than traditional univariate analysis. Representational similarity analysis (RSA), as one MVPA method, has become an effective way to analyse neural data by calculating the similarity between different representations in the brain under different conditions. Moreover, RSA allows researchers to compare data from different modalities and even to bridge data from different species. However, previous toolboxes have been built to fit specific datasets. Here, we develop NeuroRA, a novel and easy-to-use toolbox for representational analysis. Our toolbox aims at conducting cross-modal analyses of multi-modal neural data (e.g., EEG, MEG, fNIRS, fMRI, and other sources of neuroelectrophysiological data), behavioral data, and computer-simulated data. Compared with previous software packages, our toolbox is more comprehensive and powerful. Using NeuroRA, users can not only calculate the representational dissimilarity matrix (RDM), which reflects the representational similarity among different task conditions, but also conduct representational analyses among different RDMs to achieve cross-modal comparisons. In addition, users can calculate neural pattern similarity (NPS), spatiotemporal pattern similarity (STPS), and inter-subject correlation (ISC) with this toolbox. NeuroRA also provides functions for statistical analysis, storage, and visualization of results. We introduce the structure, modules, features, and algorithms of NeuroRA in this paper, as well as examples applying the toolbox to published datasets.
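The core RSA computation that such a toolbox wraps can be sketched in plain NumPy/SciPy, shown here instead of NeuroRA's own functions so as not to misstate their signatures: build one RDM per modality and compare their off-diagonal entries.

```python
# Generic sketch of the core RSA computation (plain NumPy/SciPy rather than NeuroRA's own
# functions): build an RDM per modality as 1 - Pearson correlation between condition patterns,
# then compare the two RDMs with a rank correlation.
import numpy as np
from scipy.stats import spearmanr

def make_rdm(patterns):
    """patterns: (n_conditions, n_features) for one modality or time window."""
    return 1.0 - np.corrcoef(patterns)           # correlation-distance RDM

def compare_rdms(rdm_a, rdm_b):
    iu = np.triu_indices_from(rdm_a, k=1)        # off-diagonal upper triangle only
    rho, p = spearmanr(rdm_a[iu], rdm_b[iu])
    return rho, p

# e.g. rho, p = compare_rdms(make_rdm(eeg_patterns), make_rdm(fmri_patterns))
```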
Collapse
Affiliation(s)
- Zitong Lu
- Guangdong Provincial Key Laboratory of Social Cognitive Neuroscience and Mental Health, Department of Psychology, Sun Yat-sen University, Guangzhou, China; Peng Cheng Laboratory, Shenzhen, China; Shanghai Key Laboratory of Brain Functional Genomics, Shanghai Changning-East China Normal University (ECNU) Mental Health Center, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
| | - Yixuan Ku
- Guangdong Provincial Key Laboratory of Social Cognitive Neuroscience and Mental Health, Department of Psychology, Sun Yat-sen University, Guangzhou, China; Peng Cheng Laboratory, Shenzhen, China
| |
Collapse
|
39
|
Himmelberg MM, Segala FG, Maloney RT, Harris JM, Wade AR. Decoding Neural Responses to Motion-in-Depth Using EEG. Front Neurosci 2020; 14:581706. [PMID: 33362456 PMCID: PMC7758252 DOI: 10.3389/fnins.2020.581706] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2020] [Accepted: 11/23/2020] [Indexed: 11/13/2022] Open
Abstract
Two stereoscopic cues that underlie the perception of motion-in-depth (MID) are changes in retinal disparity over time (CD) and interocular velocity differences (IOVD). These cues have independent spatiotemporal sensitivity profiles, depend upon different low-level stimulus properties, and are potentially processed along separate cortical pathways. Here, we ask whether these MID cues code for different motion directions: do they give rise to discriminable patterns of neural signals, and is there evidence for their convergence onto a single "motion-in-depth" pathway? To answer this, we use a decoding algorithm to test whether, and when, patterns of electroencephalogram (EEG) signals measured across the full scalp, generated in response to CD- and IOVD-isolating stimuli moving toward or away in depth, can be distinguished. We find that both MID cue type and 3D-motion direction can be decoded at different points in the EEG timecourse and that direction decoding cannot be accounted for by static disparity information. Remarkably, we find evidence for late processing convergence: IOVD motion direction can be decoded relatively late in the timecourse based on a decoder trained on CD stimuli, and vice versa. We conclude that early CD and IOVD direction decoding performance is dependent upon fundamentally different low-level stimulus features, but that later stages of decoding performance may be driven by a central, shared pathway that is agnostic to these features. Overall, these data are the first to show that neural responses to CD and IOVD cues that move toward and away in depth can be decoded from EEG signals, and that different aspects of MID cues contribute to decoding performance at different points along the EEG timecourse.
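The cross-cue generalization logic (train a decoder on one MID cue, test it on the other) can be sketched with scikit-learn as below. The epoch shapes, the simulated data, and the choice of a logistic-regression decoder are assumptions for illustration; the authors' actual pipeline may differ.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_trials, n_channels, n_times = 200, 64, 100

# Simulated epochs for the two cue types; labels code 3D-motion direction
# (0 = toward, 1 = away). Real data would come from preprocessed EEG epochs.
cd_epochs = rng.normal(size=(n_trials, n_channels, n_times))
iovd_epochs = rng.normal(size=(n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Cross-cue generalization: train on CD patterns, test on IOVD patterns,
# independently at every time point.
cross_acc = np.empty(n_times)
for t in range(n_times):
    clf.fit(cd_epochs[:, :, t], y)
    cross_acc[t] = clf.score(iovd_epochs[:, :, t], y)
```

Above-chance accuracy late in `cross_acc` would correspond to the kind of late convergence the abstract describes; with the random toy data here it hovers around chance.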
Collapse
Affiliation(s)
- Marc M Himmelberg
- Department of Psychology, University of York, York, United Kingdom; Department of Psychology, New York University, New York, NY, United States
| | | | - Ryan T Maloney
- Department of Psychology, University of York, York, United Kingdom
| | - Julie M Harris
- School of Psychology and Neuroscience, University of St. Andrews, Fife, United Kingdom
| | - Alex R Wade
- Department of Psychology, University of York, York, United Kingdom; York Biomedical Research Institute, University of York, York, United Kingdom
| |
Collapse
|
40
|
Tovar DA, Westerberg JA, Cox MA, Dougherty K, Carlson TA, Wallace MT, Maier A. Stimulus Feature-Specific Information Flow Along the Columnar Cortical Microcircuit Revealed by Multivariate Laminar Spiking Analysis. Front Syst Neurosci 2020; 14:600601. [PMID: 33328912 PMCID: PMC7734135 DOI: 10.3389/fnsys.2020.600601] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2020] [Accepted: 11/04/2020] [Indexed: 11/23/2022] Open
Abstract
Most of the mammalian neocortex is comprised of a highly similar anatomical structure, consisting of a granular cell layer between superficial and deep layers. Even so, different cortical areas process different information. Taken together, this suggests that cortex features a canonical functional microcircuit that supports region-specific information processing. For example, the primate primary visual cortex (V1) combines the two eyes' signals, extracts stimulus orientation, and integrates contextual information such as visual stimulation history. These processes co-occur during the same laminar stimulation sequence that is triggered by the onset of visual stimuli. Yet, we still know little regarding the laminar processing differences that are specific to each of these types of stimulus information. Univariate analysis techniques have provided great insight by examining one electrode at a time or by studying average responses across multiple electrodes. Here we focus on multivariate statistics to examine response patterns across electrodes instead. Specifically, we applied multivariate pattern analysis (MVPA) to linear multielectrode array recordings of laminar spiking responses to decode information regarding the eye-of-origin, stimulus orientation, and stimulus repetition. MVPA differs from conventional univariate approaches in that it examines patterns of neural activity across simultaneously recorded electrode sites. We were curious whether this added dimensionality could reveal neural processes on the population level that are challenging to detect when measuring brain activity without the context of neighboring recording sites. We found that eye-of-origin information was decodable for the entire duration of stimulus presentation, but diminished in the deepest layers of V1. Conversely, orientation information was transient and equally pronounced along all layers. More importantly, using time-resolved MVPA, we were able to evaluate laminar response properties beyond those yielded by univariate analyses. Specifically, we performed a time generalization analysis by training a classifier at one point of the neural response and testing its performance throughout the remaining period of stimulation. Using this technique, we demonstrate repeating (reverberating) patterns of neural activity that have not previously been observed using standard univariate approaches.
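The time generalization analysis described above can be expressed as a plain train-at-one-time, test-at-all-times loop. The following scikit-learn sketch runs it on simulated laminar spike counts; the electrode counts, time bins, label meaning, and the LDA classifier are assumptions for illustration, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
n_trials, n_sites, n_times = 300, 24, 80        # trials x laminar sites x time bins

spikes = rng.poisson(3.0, size=(n_trials, n_sites, n_times)).astype(float)
labels = rng.integers(0, 2, n_trials)           # e.g., eye-of-origin (left/right)

train_idx, test_idx = train_test_split(
    np.arange(n_trials), test_size=0.25, stratify=labels, random_state=0)

# Temporal generalization: train a decoder at each time bin on training trials,
# then test it at every time bin on held-out trials.
tg = np.empty((n_times, n_times))
for t_train in range(n_times):
    clf = LinearDiscriminantAnalysis()
    clf.fit(spikes[train_idx, :, t_train], labels[train_idx])
    for t_test in range(n_times):
        tg[t_train, t_test] = clf.score(spikes[test_idx, :, t_test], labels[test_idx])
```

Off-diagonal structure in the resulting time-by-time accuracy matrix `tg` is what reveals sustained or reverberating population codes that a purely diagonal (train = test time) analysis would miss.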
Collapse
Affiliation(s)
- David A. Tovar
- Neuroscience Program, Vanderbilt University, Nashville, TN, United States
- School of Medicine, Vanderbilt University, Nashville, TN, United States
| | - Jacob A. Westerberg
- Department of Psychology, Vanderbilt University, Nashville, TN, United States
- Center for Integrative and Cognitive Neuroscience, Vanderbilt University, Nashville, TN, United States
- Vanderbilt Vision Research Center, Vanderbilt University, Nashville, TN, United States
| | - Michele A. Cox
- Center for Visual Science, University of Rochester, Rochester, NY, United States
| | - Kacie Dougherty
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, United States
| | | | - Mark T. Wallace
- School of Medicine, Vanderbilt University, Nashville, TN, United States
- Department of Psychology, Vanderbilt University, Nashville, TN, United States
- Center for Integrative and Cognitive Neuroscience, Vanderbilt University, Nashville, TN, United States
- Vanderbilt Vision Research Center, Vanderbilt University, Nashville, TN, United States
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, United States
- Department of Psychiatry, Vanderbilt University, Nashville, TN, United States
- Kennedy Center for Research on Human Development, Vanderbilt University, Nashville, TN, United States
| | - Alexander Maier
- Department of Psychology, Vanderbilt University, Nashville, TN, United States
- Center for Integrative and Cognitive Neuroscience, Vanderbilt University, Nashville, TN, United States
- Vanderbilt Vision Research Center, Vanderbilt University, Nashville, TN, United States
| |
Collapse
|
41
|
Cichy RM, Oliva A. A M/EEG-fMRI Fusion Primer: Resolving Human Brain Responses in Space and Time. Neuron 2020; 107:772-781. [DOI: 10.1016/j.neuron.2020.07.001] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2020] [Revised: 06/25/2020] [Accepted: 06/30/2020] [Indexed: 10/23/2022]
|
42
|
Treder MS. MVPA-Light: A Classification and Regression Toolbox for Multi-Dimensional Data. Front Neurosci 2020; 14:289. [PMID: 32581662 PMCID: PMC7287158 DOI: 10.3389/fnins.2020.00289] [Citation(s) in RCA: 82] [Impact Index Per Article: 20.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2019] [Accepted: 03/12/2020] [Indexed: 11/24/2022] Open
Abstract
MVPA-Light is a MATLAB toolbox for multivariate pattern analysis (MVPA). It provides native implementations of a range of classifiers and regression models, using modern optimization algorithms. High-level functions allow for the multivariate analysis of multi-dimensional data, including generalization (e.g., time x time) and searchlight analysis. The toolbox performs cross-validation, hyperparameter tuning, and nested preprocessing. It computes various classification and regression metrics and establishes their statistical significance, and it is modular and easily extendable. Furthermore, it offers interfaces for LIBSVM and LIBLINEAR as well as an integration into the FieldTrip neuroimaging toolbox. After introducing MVPA-Light, example analyses of MEG and fMRI datasets and benchmarking results for the classifiers and regression models are presented.
Collapse
Affiliation(s)
- Matthias S Treder
- School of Computer Science & Informatics, Cardiff University, Cardiff, United Kingdom
| |
Collapse
|
43
|
Kong NCL, Kaneshiro B, Yamins DLK, Norcia AM. Time-resolved correspondences between deep neural network layers and EEG measurements in object processing. Vision Res 2020; 172:27-45. [PMID: 32388211 DOI: 10.1016/j.visres.2020.04.005] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2019] [Revised: 03/18/2020] [Accepted: 04/20/2020] [Indexed: 10/24/2022]
Abstract
The ventral visual stream is known to be organized hierarchically, where early visual areas processing simplistic features feed into higher visual areas processing more complex features. Hierarchical convolutional neural networks (CNNs) were largely inspired by this type of brain organization and have been successfully used to model neural responses in different areas of the visual system. In this work, we aim to understand how an instance of these models corresponds to the temporal dynamics of human object processing. Using representational similarity analysis (RSA) and various similarity metrics, we compare the model representations with two electroencephalography (EEG) data sets containing responses to a shared set of 72 images. We find that there is a hierarchical relationship between the depth of a layer and the time at which peak correlation with the brain response occurs for certain similarity metrics in both data sets. However, when comparing across layers in the neural network, the correlation onset time did not appear in a strictly hierarchical fashion. We present two additional methods that improve upon the achieved correlations by optimally weighting features from the CNN and show that, depending on the similarity metric, deeper layers of the CNN provide a better correspondence than shallow layers to later time points in the EEG responses. However, we do not find that shallow layers provide better correspondences than those of deeper layers to early time points, an observation that violates the hierarchy and is in agreement with the finding from the onset-time analysis. This work makes a first comparison of various response features, including multiple similarity metrics and data sets, with respect to a neural network.
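The layer-to-latency comparison at the heart of this approach reduces to correlating each layer's RDM with a time-resolved series of EEG RDMs and locating the peak. The sketch below shows that skeleton on simulated data; the layer names, feature sizes, channel count, and the use of correlation distance with Spearman correlation are assumptions, not the authors' exact choices.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
n_images, n_times = 72, 120

# Assumed inputs: one feature matrix per CNN layer (images x units) and
# time-resolved EEG patterns (images x channels x time bins).
layer_feats = {f"layer{i}": rng.normal(size=(n_images, 256 * i)) for i in range(1, 6)}
eeg = rng.normal(size=(n_images, 64, n_times))

# Condensed EEG RDM (correlation distance across images) at every time bin.
eeg_rdms = np.stack([pdist(eeg[:, :, t], metric="correlation") for t in range(n_times)])

# For each layer, find the time bin of peak model-brain RDM correlation.
peak_latency = {}
for name, feats in layer_feats.items():
    layer_rdm = pdist(feats, metric="correlation")
    rho = np.array([spearmanr(layer_rdm, eeg_rdms[t])[0] for t in range(n_times)])
    peak_latency[name] = int(np.argmax(rho))
```

A hierarchical correspondence of the kind reported above would show up as `peak_latency` increasing with layer depth.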
Collapse
Affiliation(s)
- Nathan C L Kong
- Department of Psychology, Stanford University, United States; Department of Electrical Engineering, Stanford University, United States.
| | - Blair Kaneshiro
- Center for Computer Research in Music and Acoustics, Stanford University, United States.
| | - Daniel L K Yamins
- Department of Psychology, Stanford University, United States; Department of Computer Science, Stanford University, United States; Wu Tsai Neurosciences Institute, Stanford University, United States.
| | - Anthony M Norcia
- Department of Psychology, Stanford University, United States; Wu Tsai Neurosciences Institute, Stanford University, United States.
| |
Collapse
|
44
|
Neural Evidence for the Prediction of Animacy Features during Language Comprehension: Evidence from MEG and EEG Representational Similarity Analysis. J Neurosci 2020; 40:3278-3291. [PMID: 32161141 DOI: 10.1523/jneurosci.1733-19.2020] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2019] [Revised: 02/26/2020] [Accepted: 02/27/2020] [Indexed: 11/21/2022] Open
Abstract
It has been proposed that people can generate probabilistic predictions at multiple levels of representation during language comprehension. We used magnetoencephalography (MEG) and electroencephalography (EEG), in combination with representational similarity analysis, to seek neural evidence for the prediction of animacy features. In two studies, MEG and EEG activity was measured as human participants (both sexes) read three-sentence scenarios. Verbs in the final sentences constrained for either animate or inanimate semantic features of upcoming nouns, and the broader discourse context constrained for either a specific noun or for multiple nouns belonging to the same animacy category. We quantified the similarity between spatial patterns of brain activity following the verbs until just before the presentation of the nouns. The MEG and EEG datasets revealed converging evidence that the similarity between spatial patterns of neural activity following animate-constraining verbs was greater than following inanimate-constraining verbs. This effect could not be explained by lexical-semantic processing of the verbs themselves. We therefore suggest that it reflected the inherent difference in the semantic similarity structure of the predicted animate and inanimate nouns. Moreover, the effect was present regardless of whether a specific word could be predicted, providing strong evidence for the prediction of coarse-grained semantic features that goes beyond the prediction of individual words. SIGNIFICANCE STATEMENT: Language inputs unfold very quickly during real-time communication. By predicting ahead, we can give our brains a "head start," so that language comprehension is faster and more efficient. Although most contexts do not constrain strongly for a specific word, they do allow us to predict some upcoming information. For example, following the context of "they cautioned the…," we can predict that the next word will be animate rather than inanimate (we can caution a person, but not an object). Here, we used EEG and MEG techniques to show that the brain is able to use these contextual constraints to predict the animacy of upcoming words during sentence comprehension, and that these predictions are associated with specific spatial patterns of neural activity.
Collapse
|
45
|
Spatio-temporal dynamics of face perception. Neuroimage 2020; 209:116531. [DOI: 10.1016/j.neuroimage.2020.116531] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2019] [Revised: 12/19/2019] [Accepted: 01/08/2020] [Indexed: 11/27/2022] Open
|
46
|
Mattioni S, Rezk M, Battal C, Bottini R, Cuculiza Mendoza KE, Oosterhof NN, Collignon O. Categorical representation from sound and sight in the ventral occipito-temporal cortex of sighted and blind. eLife 2020; 9:50732. [PMID: 32108572 PMCID: PMC7108866 DOI: 10.7554/elife.50732] [Citation(s) in RCA: 42] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2019] [Accepted: 02/14/2020] [Indexed: 01/08/2023] Open
Abstract
Is vision necessary for the development of the categorical organization of the Ventral Occipito-Temporal Cortex (VOTC)? We used fMRI to characterize VOTC responses to eight categories presented acoustically in sighted and early blind individuals, and visually in a separate sighted group. We observed that VOTC reliably encodes sound categories in sighted and blind people using a representational structure and connectivity partially similar to the one found in vision. Sound categories were, however, more reliably encoded in the blind than the sighted group, using a representational format closer to the one found in vision. Crucially, VOTC in the blind represents the categorical membership of sounds rather than their acoustic features. Our results suggest that sounds trigger categorical responses in the VOTC of congenitally blind and sighted people that partially match the topography and functional profile of the visual response, despite qualitative nuances in the categorical organization of VOTC between modalities and groups. The world is full of rich and dynamic visual information. To avoid information overload, the human brain groups inputs into categories such as faces, houses, or tools. A part of the brain called the ventral occipito-temporal cortex (VOTC) helps categorize visual information. Specific parts of the VOTC prefer different types of visual input; for example, one part may tend to respond more to faces, whilst another may prefer houses. However, it is not clear how the VOTC characterizes information. One idea is that similarities between certain types of visual information may drive how information is organized in the VOTC. For example, looking at faces requires using central vision, while looking at houses requires using peripheral vision. Furthermore, all faces have a roundish shape while houses tend to have a more rectangular shape. Another possibility, however, is that the categorization of different inputs cannot be explained just by vision, and is also driven by higher-level aspects of each category. For instance, how humans use or interact with something may also influence how an input is categorized. If categories are established depending (at least partially) on these higher-level aspects, rather than purely through visual likeness, it is likely that the VOTC would respond similarly to both sounds and images representing these categories. Now, Mattioni et al. have tested how individuals with and without sight respond to eight different categories of information to find out whether or not categorization is driven purely by visual likeness. Each category was presented to participants using sounds while measuring their brain activity. In addition, a group of participants who could see were also presented with the categories visually. Mattioni et al. then compared what happened in the VOTC of the three groups (sighted people presented with sounds, blind people presented with sounds, and sighted people presented with images) in response to each category. The experiment revealed that the VOTC organizes both auditory and visual information in a similar way. However, there were more similarities between the way blind people categorized auditory information and how sighted people categorized visual information than between how sighted people categorized each type of input. Mattioni et al. also found that the region of the VOTC that responds to inanimate objects massively overlapped across the three groups, whereas the part of the VOTC that responds to living things was more variable. These findings suggest that the way that the VOTC organizes information is, at least partly, independent from vision. The experiments also provide some information about how the brain reorganizes in people who are born blind. Further studies may reveal how differences in the VOTC of people with and without sight affect regions typically associated with auditory categorization, and potentially explain how the brain reorganizes in people who become blind later in life.
Collapse
Affiliation(s)
- Stefania Mattioni
- Institute of research in Psychology (IPSY) & Institute of Neuroscience (IoNS) - University of Louvain (UCLouvain), Louvain-la-Neuve, Belgium
| | - Mohamed Rezk
- Institute of research in Psychology (IPSY) & Institute of Neuroscience (IoNS) - University of Louvain (UCLouvain), Louvain-la-Neuve, Belgium; Centre for Mind/Brain Sciences, University of Trento, Trento, Italy
| | - Ceren Battal
- Institute of research in Psychology (IPSY) & Institute of Neuroscience (IoNS) - University of Louvain (UCLouvain), Louvain-la-Neuve, Belgium; Centre for Mind/Brain Sciences, University of Trento, Trento, Italy
| | - Roberto Bottini
- Centre for Mind/Brain Sciences, University of Trento, Trento, Italy
| | | | | | - Olivier Collignon
- Institute of research in Psychology (IPSY) & Institute of Neuroscience (IoNS) - University of Louvain (UCLouvain), Louvain-la-Neuve, Belgium
| |
Collapse
|
48
|
Popal H, Wang Y, Olson IR. A Guide to Representational Similarity Analysis for Social Neuroscience. Soc Cogn Affect Neurosci 2019; 14:1243-1253. [PMID: 31989169 PMCID: PMC7057283 DOI: 10.1093/scan/nsz099] [Citation(s) in RCA: 49] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2019] [Revised: 10/13/2019] [Accepted: 10/22/2019] [Indexed: 01/04/2023] Open
Abstract
Representational similarity analysis (RSA) is a computational technique that uses pairwise comparisons of stimuli to reveal their representation in higher-order space. In the context of neuroimaging, mass-univariate analyses and other multivariate analyses can provide information on what and where information is represented but have limitations in their ability to address how information is represented. Social neuroscience is a field that can particularly benefit from incorporating RSA techniques to explore hypotheses regarding the representation of multidimensional data, how representations can predict behavior, how representations differ between groups and how multimodal data can be compared to inform theories. The goal of this paper is to provide a practical as well as theoretical guide to implementing RSA in social neuroscience studies.
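As a concrete starting point for the kind of analysis such a guide describes, the sketch below rank-correlates a neural RDM with a hypothesis (model) RDM and assesses the result with a condition-label permutation test. Both RDMs are simulated here, and the variable names, distance metrics, and number of permutations are assumptions; it is a minimal illustration rather than a prescribed recipe.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
n_cond = 16

# Assumed inputs: a neural RDM (e.g., from ROI response patterns) and a model
# RDM built from behavioral ratings or a theoretical grouping of the conditions.
neural_rdm = squareform(pdist(rng.normal(size=(n_cond, 200)), metric="correlation"))
model_rdm = squareform(pdist(rng.normal(size=(n_cond, 1)), metric="euclidean"))

iu = np.triu_indices(n_cond, k=1)
observed, _ = spearmanr(neural_rdm[iu], model_rdm[iu])

# Condition-label permutation test: shuffle rows and columns of the model RDM
# together and recompute the correlation to build a null distribution.
n_perm = 5000
null = np.empty(n_perm)
for i in range(n_perm):
    perm = rng.permutation(n_cond)
    null[i], _ = spearmanr(neural_rdm[iu], model_rdm[perm][:, perm][iu])
p_value = (np.sum(null >= observed) + 1) / (n_perm + 1)
```

Permuting condition labels (rather than RDM cells independently) respects the dependency structure of the dissimilarity matrix, which is why it is a common choice for RDM-level inference.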
Collapse
Affiliation(s)
- Haroon Popal
- Department of Psychology, Temple University, Philadelphia, PA
| | | | - Ingrid R Olson
- Department of Psychology, Temple University, Philadelphia, PA
| |
Collapse
|
49
|
Demarchi G, Sanchez G, Weisz N. Automatic and feature-specific prediction-related neural activity in the human auditory system. Nat Commun 2019; 10:3440. [PMID: 31371713 PMCID: PMC6672009 DOI: 10.1038/s41467-019-11440-1] [Citation(s) in RCA: 35] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2018] [Accepted: 07/11/2019] [Indexed: 12/04/2022] Open
Abstract
Prior experience enables the formation of expectations of upcoming sensory events. However, in the auditory modality, it is not known whether prediction-related neural signals carry feature-specific information. Here, using magnetoencephalography (MEG), we examined whether predictions of future auditory stimuli carry tonotopically specific information. Participants passively listened to sound sequences of four carrier frequencies (tones) with a fixed presentation rate, ensuring strong temporal expectations of when the next stimulus would occur. Expectation of which frequency would occur was parametrically modulated across the sequences, and sounds were occasionally omitted. We show that increasing the regularity of the sequence boosts carrier-frequency-specific neural activity patterns during both the anticipatory and omission periods, indicating that prediction-related neural activity is indeed feature-specific. Our results illustrate that even without bottom-up input, auditory predictions can activate tonotopically specific templates. After listening to a predictable sequence of sounds, we can anticipate and predict the next sound in the sequence. Here, the authors show that during expectation of a sound, the brain generates neural activity matching that which is produced by actually hearing the same sound.
Collapse
Affiliation(s)
- Gianpaolo Demarchi
- Centre for Cognitive Neuroscience and Division of Physiological Psychology, University of Salzburg, Hellbrunnerstraße 34, 5020, Salzburg, Austria.
| | - Gaëtan Sanchez
- Centre for Cognitive Neuroscience and Division of Physiological Psychology, University of Salzburg, Hellbrunnerstraße 34, 5020, Salzburg, Austria; Lyon Neuroscience Research Center, Brain Dynamics and Cognition Team, INSERM UMRS 1028, CNRS UMR 5292, Université Claude Bernard Lyon 1, Université de Lyon, F-69000, Lyon, France
| | - Nathan Weisz
- Centre for Cognitive Neuroscience and Division of Physiological Psychology, University of Salzburg, Hellbrunnerstraße 34, 5020, Salzburg, Austria
| |
Collapse
|
50
|
de Vries IE, van Driel J, Olivers CN. Decoding the status of working memory representations in preparation of visual selection. Neuroimage 2019; 191:549-559. [DOI: 10.1016/j.neuroimage.2019.02.069] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2018] [Revised: 01/27/2019] [Accepted: 02/27/2019] [Indexed: 01/02/2023] Open
|