1. Hakala T, Lindh-Knuutila T, Hultén A, Lehtonen M, Salmelin R. Subword Representations Successfully Decode Brain Responses to Morphologically Complex Written Words. Neurobiol Lang (Camb) 2024; 5:844-863. PMID: 39301210; PMCID: PMC11410357; DOI: 10.1162/nol_a_00149.
Abstract
This study extends the idea of decoding word-evoked brain activations using a corpus-semantic vector space to multimorphemic words in the agglutinative Finnish language. The corpus-semantic models are trained on word segments, and decoding is carried out with word vectors that are composed of these segments. We tested several alternative vector-space models using different segmentations: no segmentation (whole word), linguistic morphemes, statistical morphemes, random segmentation, and character-level 1-, 2- and 3-grams, and paired them with recorded MEG responses to multimorphemic words in a visual word recognition task. For all variants, the decoding accuracy exceeded the standard word-label permutation-based significance thresholds at 350-500 ms after stimulus onset. However, the critical segment-label permutation test revealed that only those segmentations that were morphologically aware reached significance in the brain decoding task. The results suggest that both whole-word forms and morphemes are represented in the brain and show that neural decoding using corpus-semantic word representations derived from compositional subword segments is also applicable to multimorphemic word forms. This is especially relevant for languages with complex morphology, because a large proportion of word forms are rare and it can be difficult to find statistically reliable surface representations for them in any large corpus.
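As a rough illustration of the composition step this abstract describes, a whole-word vector can be assembled from subword-segment vectors. This is only a sketch: the segment inventory, the embeddings, and mean-pooling as the composition function are illustrative assumptions, not the paper's actual model or data.

```python
import numpy as np

# Hypothetical segment embeddings. In practice these would be trained on a
# segmented corpus (e.g., over morpheme-like units); values here are made up.
segment_vectors = {
    "talo": np.array([0.2, 0.5, -0.1]),   # stem, "house"
    "+i":   np.array([0.0, 0.1,  0.3]),   # plural marker
    "+ssa": np.array([0.4, -0.2, 0.1]),   # inessive case, "in"
}

def compose_word_vector(segments):
    """Compose a whole-word vector as the mean of its segment vectors."""
    return np.mean([segment_vectors[s] for s in segments], axis=0)

# Finnish "taloissa" ("in the houses"), segmented as talo + i + ssa.
word_vec = compose_word_vector(["talo", "+i", "+ssa"])
```

The resulting `word_vec` would then play the role of the decoding target for the MEG response to the whole written word.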
Affiliation(s)
- Tero Hakala
  - Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
  - Aalto NeuroImaging, Aalto University, Espoo, Finland
- Tiina Lindh-Knuutila
  - Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
- Annika Hultén
  - Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
- Minna Lehtonen
  - Department of Psychology and Speech-Language Pathology, University of Turku, Turku, Finland
  - Centre for Multilingualism in Society Across the Lifespan, University of Oslo, Oslo, Norway
- Riitta Salmelin
  - Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
2. Yashiro R, Sawayama M, Amano K. Decoding time-resolved neural representations of orientation ensemble perception. Front Neurosci 2024; 18:1387393. PMID: 39148524; PMCID: PMC11325722; DOI: 10.3389/fnins.2024.1387393.
Abstract
The visual system can compute summary statistics of several visual elements at a glance. Numerous studies have shown that an ensemble of different visual features can be perceived over 50-200 ms; however, the time point at which the visual system forms an accurate ensemble representation associated with an individual's perception remains unclear. This is mainly because most previous studies have not fully addressed time-resolved neural representations that occur during ensemble perception, particularly lacking quantification of the representational strength of ensembles and their correlation with behavior. Here, we conducted orientation ensemble discrimination tasks and electroencephalogram (EEG) recordings to decode orientation representations over time while human observers discriminated an average of multiple orientations. We modeled EEG signals as a linear sum of hypothetical orientation channel responses and inverted this model to quantify the representational strength of the orientation ensemble. Our analysis using this inverted encoding model revealed stronger representations of the average orientation over 400-700 ms. We also correlated the orientation representation estimated from EEG signals with the perceived average orientation reported in the ensemble discrimination task using adjustment methods. We found that the estimated orientation at approximately 600-700 ms significantly correlated with the individual differences in perceived average orientation. These results suggest that although ensembles can be quickly and roughly computed, the visual system may gradually compute an orientation ensemble over several hundred milliseconds to achieve a more accurate ensemble representation.
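The inverted encoding model (IEM) logic described in this abstract can be sketched in a few lines: model sensor data as a linear mixture of hypothetical channel responses, fit the mixing weights, then invert the fit to recover channel responses. Everything below is simulated; the sensor counts, von Mises-like tuning curves (a smooth stand-in for the basis the authors may have used), and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 32 EEG sensors, 8 orientation channels, 200 trials.
n_sensors, n_channels, n_trials = 32, 8, 200

# Idealized channel tuning curves over orientation (angle-doubled so that
# 0 deg and 180 deg coincide), one bump per channel center.
theta = 2 * rng.uniform(0, np.pi, n_trials)
centers = np.linspace(0, 2 * np.pi, n_channels, endpoint=False)
C = np.exp(3 * (np.cos(theta[None, :] - centers[:, None]) - 1))  # (channels, trials)

# Simulated sensor data: unknown linear mixing of channel responses plus noise.
W_true = rng.normal(size=(n_sensors, n_channels))
B = W_true @ C + 0.1 * rng.normal(size=(n_sensors, n_trials))

# Step 1: fit the forward model B ~ W C by least squares.
W_hat = B @ np.linalg.pinv(C)
# Step 2: invert the fitted model to recover channel responses from sensors.
C_hat = np.linalg.pinv(W_hat) @ B

# The reconstructed channel responses should track the true ones.
r = np.corrcoef(C_hat.ravel(), C.ravel())[0, 1]
```

In the time-resolved version used in such studies, this fit-and-invert step is simply repeated at each time point to obtain representational strength over time.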
Affiliation(s)
- Ryuto Yashiro
  - Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
- Masataka Sawayama
  - Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
- Kaoru Amano
  - Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
3. Guenther S, Kosmyna N, Maes P. Image classification and reconstruction from low-density EEG. Sci Rep 2024; 14:16436. PMID: 39013929; PMCID: PMC11252274; DOI: 10.1038/s41598-024-66228-1.
Abstract
Recent advances in visual decoding have enabled the classification and reconstruction of perceived images from the brain. However, previous approaches have predominantly relied on stationary, costly equipment like fMRI or high-density EEG, limiting the real-world availability and applicability of such projects. Additionally, several EEG-based paradigms have utilized artifactual, rather than stimulus-related, information, yielding flawed classification and reconstruction results. Our goal was to reduce the cost of the decoding paradigm while increasing its flexibility. Therefore, we investigated whether the classification of an image category and the reconstruction of the image itself are possible from the visually evoked brain activity measured by a portable, 8-channel EEG. To compensate for the low electrode count and to avoid flawed predictions, we designed a theory-guided EEG setup and created a new experiment to obtain a dataset from 9 subjects. We compared five contemporary classification models with our setup, reaching an average accuracy of 34.4% for 20 image classes on held-out test recordings. For the reconstruction, the top-performing model was used as an EEG encoder, which was combined with a pretrained latent diffusion model via double-conditioning. After fine-tuning, we reconstructed images from the test set with a 1000-trial, 50-class top-1 accuracy of 35.3%. While not reaching the same performance as MRI-based paradigms on unseen stimuli, our approach greatly improved the affordability and mobility of visual decoding technology.
Affiliation(s)
- Sven Guenther
  - School of Computation, Information and Technology, Technical University of Munich, Munich, Germany
- Nataliya Kosmyna
  - Media Lab, Massachusetts Institute of Technology, Cambridge, USA
- Pattie Maes
  - Media Lab, Massachusetts Institute of Technology, Cambridge, USA
4. Dirani J, Pylkkänen L. MEG Evidence That Modality-Independent Conceptual Representations Contain Semantic and Visual Features. J Neurosci 2024; 44:e0326242024. PMID: 38806251; PMCID: PMC11223456; DOI: 10.1523/jneurosci.0326-24.2024.
Abstract
The semantic knowledge stored in our brains can be accessed from different stimulus modalities. For example, a picture of a cat and the word "cat" both engage similar conceptual representations. While existing research has found evidence for modality-independent representations, their content remains unknown. Modality-independent representations could be semantic, or they might also contain perceptual features. We developed a novel approach combining word/picture cross-condition decoding with neural network classifiers that learned latent modality-independent representations from MEG data (25 human participants, 15 females, 10 males). We then compared these representations to models representing semantic, sensory, and orthographic features. Results show that modality-independent representations correlate with both semantic and visual representations. There was no evidence that these results were due to picture-specific visual features or orthographic features automatically activated by the stimuli presented in the experiment. These findings support the notion that modality-independent concepts contain both perceptual and semantic representations.
Affiliation(s)
- Julien Dirani
  - Department of Psychology, New York University, New York, New York 10003
- Liina Pylkkänen
  - Department of Psychology, New York University, New York, New York 10003
  - Department of Linguistics, New York University, New York, New York 10003
  - NYUAD Research Institute, New York University Abu Dhabi, Abu Dhabi 129188, United Arab Emirates
5. Bezsudnova Y, Quinn AJ, Wynn SC, Jensen O. Spatiotemporal Properties of Common Semantic Categories for Words and Pictures. J Cogn Neurosci 2024; 36:1760-1769. PMID: 38739567; DOI: 10.1162/jocn_a_02182.
Abstract
The timing of semantic processing during object recognition in the brain is a topic of ongoing discussion. One way of addressing this question is by applying multivariate pattern analysis to human electrophysiological responses to object images of different semantic categories. However, although multivariate pattern analysis can reveal whether neuronal activity patterns are distinct for different stimulus categories, concerns remain about whether low-level visual features also contribute to the classification results. To circumvent this issue, we applied a cross-decoding approach to magnetoencephalography data using stimuli from two different modalities: images and their corresponding written words. We employed items from three categories and presented them in a randomized order. We show that if the classifier is trained on words, pictures are classified between 150 and 430 msec after stimulus onset, and when training on pictures, words are classified between 225 and 430 msec. The topographical map, identified using a searchlight approach for cross-modal activation in both directions, showed left lateralization, confirming the involvement of linguistic representations. These results point to semantic activation of pictorial stimuli occurring at ∼150 msec, whereas for words, the semantic activation occurs at ∼230 msec.
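The cross-decoding logic in this abstract (train a classifier on responses to one modality, test it on the other, at each time point) can be sketched on synthetic data. The nearest-centroid classifier, the sensor/trial counts, and the simulated shared category signal are all illustrative assumptions, not the study's actual pipeline or data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 3 semantic categories, 60 trials per modality,
# 20 sensor features, 5 time points. Data are purely synthetic.
n_cat, n_trials, n_feat, n_times = 3, 60, 20, 5
labels = np.repeat(np.arange(n_cat), n_trials // n_cat)

# A category signal shared across modalities, plus independent noise per trial.
category_patterns = rng.normal(size=(n_cat, n_feat))
def simulate(noise_scale):
    return category_patterns[labels] + noise_scale * rng.normal(size=(n_trials, n_feat))

words = np.stack([simulate(0.5) for _ in range(n_times)])     # (times, trials, feat)
pictures = np.stack([simulate(0.5) for _ in range(n_times)])

def nearest_centroid_crossdecode(train_X, train_y, test_X, test_y):
    """Fit class centroids on one modality, classify trials of the other."""
    centroids = np.stack([train_X[train_y == c].mean(axis=0) for c in range(n_cat)])
    dists = np.linalg.norm(test_X[:, None, :] - centroids[None], axis=2)
    return (dists.argmin(axis=1) == test_y).mean()

# Time-resolved cross-decoding: train on words, test on pictures, per time point.
accuracy = np.array([
    nearest_centroid_crossdecode(words[t], labels, pictures[t], labels)
    for t in range(n_times)
])
```

Above-chance accuracy at a given latency is then taken as evidence for a modality-independent (semantic) representation at that time, since low-level visual features differ between pictures and written words.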
Affiliation(s)
- Syanah C Wynn
  - University of Birmingham
  - Gutenberg University Medical Center Mainz
6. Kilmarx J, Tashev I, Millan JDR, Sulzer J, Lewis-Peacock J. Evaluating the Feasibility of Visual Imagery for an EEG-Based Brain-Computer Interface. IEEE Trans Neural Syst Rehabil Eng 2024; 32:2209-2219. PMID: 38843055; PMCID: PMC11249027; DOI: 10.1109/tnsre.2024.3410870.
Abstract
Visual imagery, or the mental simulation of visual information from memory, could serve as an effective control paradigm for a brain-computer interface (BCI) due to its ability to directly convey the user's intention with many natural ways of envisioning an intended action. However, multiple initial investigations into using visual imagery as a BCI control strategy have been unable to fully evaluate the capabilities of true spontaneous visual mental imagery. One major limitation in these prior works is that the target image is typically displayed immediately preceding the imagery period. This paradigm does not capture spontaneous mental imagery as would be necessary in an actual BCI application, but rather something more akin to short-term retention in visual working memory. Results from the present study show that short-term visual imagery following the presentation of a specific target image provides a stronger, more easily classifiable neural signature in EEG than spontaneous visual imagery from long-term memory following an auditory cue for the image. We also show that short-term visual imagery and visual perception share commonalities in the most predictive electrodes and spectral features. However, visual imagery received greater influence from frontal electrodes, whereas perception was mostly confined to occipital electrodes. This suggests that visual perception is primarily driven by sensory information whereas visual imagery has greater contributions from areas associated with memory and attention. This work provides the first direct comparison of short-term and long-term visual imagery tasks and provides greater insight into the feasibility of using visual imagery as a BCI control strategy.
7. Nora A, Rinkinen O, Renvall H, Service E, Arkkila E, Smolander S, Laasonen M, Salmelin R. Impaired Cortical Tracking of Speech in Children with Developmental Language Disorder. J Neurosci 2024; 44:e2048232024. PMID: 38589232; PMCID: PMC11140678; DOI: 10.1523/jneurosci.2048-23.2024.
Abstract
In developmental language disorder (DLD), learning to comprehend and express oneself with spoken language is impaired, but the reason for this remains unknown. Using millisecond-scale magnetoencephalography recordings combined with machine learning models, we investigated whether the possible neural basis of this disruption lies in poor cortical tracking of speech. The stimuli were common spoken Finnish words (e.g., dog, car, hammer) and sounds with corresponding meanings (e.g., dog bark, car engine, hammering). In both children with DLD (10 boys and 7 girls) and typically developing (TD) control children (14 boys and 3 girls), aged 10-15 years, the cortical activation to spoken words was best modeled as time-locked to the unfolding speech input at ∼100 ms latency between sound and cortical activation. The amplitude envelope (amplitude changes) and spectrogram (detailed time-varying spectral content) of the spoken words, but not of the other sounds, were decoded with high accuracy from time-locked brain responses in bilateral temporal areas; based on the cortical responses, the models could tell at ∼75-85% accuracy which of two sounds had been presented to the participant. However, the cortical representation of the amplitude envelope information was poorer in children with DLD than in TD children at longer latencies (∼200-300 ms lag). We interpret this effect as reflecting poorer retention of acoustic-phonetic information in short-term memory. This impaired tracking could potentially affect the processing and learning of words as well as continuous speech. The present results offer an explanation for the problems in language comprehension and acquisition in DLD.
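The cortical-tracking analysis described here belongs to the family of time-lagged linear models that relate a stimulus feature (such as the amplitude envelope) to brain signals at a range of latencies. Below is a toy, numpy-only sketch of that idea using ridge regression on simulated data; the envelope, the single "brain channel", the ∼100 ms lag, and all sizes are illustrative assumptions, not the study's actual model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: one simulated MEG channel tracking a speech envelope at a fixed lag.
n_samples, lag, n_lags = 1000, 10, 25   # lag in samples (e.g., ~100 ms at 100 Hz)
envelope = np.abs(rng.normal(size=n_samples)).cumsum() % 5.0   # toy sawtooth envelope
brain = np.roll(envelope, lag) + 0.3 * rng.normal(size=n_samples)

# Lagged design matrix: column k is the brain signal shifted by k samples,
# so the model can discover at which latency the brain tracks the stimulus.
X = np.stack([np.roll(brain, -k) for k in range(n_lags)], axis=1)
y = envelope

# Closed-form ridge regression mapping lagged brain activity to the envelope.
alpha = 1.0
w = np.linalg.solve(X.T @ X + alpha * np.eye(n_lags), X.T @ y)
reconstruction = X @ w
r = np.corrcoef(reconstruction, y)[0, 1]
```

In a group comparison like the one in this study, a weaker reconstruction correlation at particular lags for one group would indicate poorer cortical tracking at those latencies.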
Affiliation(s)
- Anni Nora
  - Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo FI-00076, Finland
  - Aalto NeuroImaging (ANI), Aalto University, Espoo FI-00076, Finland
- Oona Rinkinen
  - Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo FI-00076, Finland
  - Aalto NeuroImaging (ANI), Aalto University, Espoo FI-00076, Finland
- Hanna Renvall
  - Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo FI-00076, Finland
  - Aalto NeuroImaging (ANI), Aalto University, Espoo FI-00076, Finland
  - BioMag Laboratory, HUS Diagnostic Center, Helsinki University Hospital, Helsinki FI-00029, Finland
- Elisabet Service
  - Department of Linguistics and Languages, Centre for Advanced Research in Experimental and Applied Linguistics (ARiEAL), McMaster University, Hamilton, Ontario L8S 4L8, Canada
  - Department of Psychology and Logopedics, University of Helsinki, Helsinki FI-00014, Finland
- Eva Arkkila
  - Department of Otorhinolaryngology and Phoniatrics, Head and Neck Center, Helsinki University Hospital and University of Helsinki, Helsinki FI-00014, Finland
- Sini Smolander
  - Department of Otorhinolaryngology and Phoniatrics, Head and Neck Center, Helsinki University Hospital and University of Helsinki, Helsinki FI-00014, Finland
  - Research Unit of Logopedics, University of Oulu, Oulu FI-90014, Finland
  - Department of Logopedics, University of Eastern Finland, Joensuu FI-80101, Finland
- Marja Laasonen
  - Department of Otorhinolaryngology and Phoniatrics, Head and Neck Center, Helsinki University Hospital and University of Helsinki, Helsinki FI-00014, Finland
  - Department of Logopedics, University of Eastern Finland, Joensuu FI-80101, Finland
- Riitta Salmelin
  - Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo FI-00076, Finland
  - Aalto NeuroImaging (ANI), Aalto University, Espoo FI-00076, Finland
8. Amaral L, Besson G, Caparelli-Dáquer E, Bergström F, Almeida J. Temporal differences and commonalities between hand and tool neural processing. Sci Rep 2023; 13:22270. PMID: 38097608; PMCID: PMC10721913; DOI: 10.1038/s41598-023-48180-8.
Abstract
Object recognition is a complex cognitive process that relies on how the brain organizes object-related information. While spatial principles have been extensively studied, the less-studied temporal dynamics may also offer valuable insights into this process, particularly when neural processing overlaps for different categories, as is the case for hands and tools. Here we focus on the differences and/or similarities between the time-courses of hand and tool processing under electroencephalography (EEG). Using multivariate pattern analysis, we compared, for different time points, classification accuracy for images of hands or tools against images of animals. We show that for particular time intervals (~136-156 ms and ~252-328 ms), classification accuracy for hands and for tools differs. Furthermore, we show that classifiers trained to differentiate between tools and animals generalize their learning to the classification of hand stimuli between ~260-320 ms and ~376-500 ms after stimulus onset. Classifiers trained to distinguish between hands and animals, on the other hand, were able to extend their learning to the classification of tools at ~150 ms. These findings suggest variations in semantic features and domain-specific differences between the two categories, with later-stage similarities potentially related to shared action processing for hands and tools.
Affiliation(s)
- L Amaral
  - Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
  - Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA
- G Besson
  - Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
  - CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- E Caparelli-Dáquer
  - Laboratory of Electrical Stimulation of the Nervous System (LabEEL), Rio de Janeiro State University, Rio de Janeiro, Brazil
- F Bergström
  - Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
  - CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
  - Department of Psychology, University of Gothenburg, Gothenburg, Sweden
- J Almeida
  - Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
  - CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
9. Ghazaryan G, van Vliet M, Lammi L, Lindh-Knuutila T, Kivisaari S, Hultén A, Salmelin R. Cortical time-course of evidence accumulation during semantic processing. Commun Biol 2023; 6:1242. PMID: 38066098; PMCID: PMC10709650; DOI: 10.1038/s42003-023-05611-6.
Abstract
Our understanding of the surrounding world and communication with other people are tied to mental representations of concepts. In order for the brain to recognize an object, it must determine which concept to access based on information available from sensory inputs. In this study, we combine magnetoencephalography and machine learning to investigate how concepts are represented and accessed in the brain over time. Using brain responses from a silent picture naming task, we track the dynamics of visual and semantic information processing, and show that the brain gradually accumulates information on different levels before eventually reaching a plateau. The timing of this plateau point varies across individuals and feature models, indicating notable temporal variation in visual object recognition and semantic processing.
Affiliation(s)
- Gayane Ghazaryan
  - Department of Neuroscience and Biomedical Engineering, Aalto University, P.O. Box 12200, FI-00076 Aalto, Finland
- Marijn van Vliet
  - Department of Neuroscience and Biomedical Engineering, Aalto University, P.O. Box 12200, FI-00076 Aalto, Finland
- Lotta Lammi
  - Department of Neuroscience and Biomedical Engineering, Aalto University, P.O. Box 12200, FI-00076 Aalto, Finland
- Tiina Lindh-Knuutila
  - Department of Neuroscience and Biomedical Engineering, Aalto University, P.O. Box 12200, FI-00076 Aalto, Finland
- Sasa Kivisaari
  - Department of Neuroscience and Biomedical Engineering, Aalto University, P.O. Box 12200, FI-00076 Aalto, Finland
- Annika Hultén
  - Department of Neuroscience and Biomedical Engineering, Aalto University, P.O. Box 12200, FI-00076 Aalto, Finland
  - Aalto NeuroImaging, Aalto University, P.O. Box 12200, FI-00076 Aalto, Finland
- Riitta Salmelin
  - Department of Neuroscience and Biomedical Engineering, Aalto University, P.O. Box 12200, FI-00076 Aalto, Finland
  - Aalto NeuroImaging, Aalto University, P.O. Box 12200, FI-00076 Aalto, Finland
10. Wilmskoetter J, Roth R, McDowell K, Munsell B, Fontenot S, Andrews K, Chang A, Johnson LP, Sangtian S, Behroozmand R, van Mierlo P, Fridriksson J, Bonilha L. Semantic Categorization of Naming Responses Based on Prearticulatory Electrical Brain Activity. J Clin Neurophysiol 2023; 40:608-615. PMID: 37931162; PMCID: PMC10628367; DOI: 10.1097/wnp.0000000000000933.
Abstract
PURPOSE: Object naming requires visual decoding, conceptualization, semantic categorization, and phonological encoding, all within 400 to 600 ms of stimulus presentation and before a word is spoken. In this study, we sought to predict semantic categories of naming responses based on prearticulatory brain activity recorded with scalp EEG in healthy individuals.
METHODS: We assessed 19 healthy individuals who completed a naming task while undergoing EEG. The naming task consisted of 120 drawings of animate/inanimate objects or abstract drawings. We applied a one-dimensional, two-layer neural network to predict the semantic categories of naming responses based on prearticulatory brain activity.
RESULTS: Classifications of animate, inanimate, and abstract responses had an average accuracy of 80%, sensitivity of 72%, and specificity of 87% across participants. Across participants, time points with the highest average weights were between 470 and 490 milliseconds after stimulus presentation, and electrodes with the highest weights were located over the left and right frontal brain areas.
CONCLUSIONS: Scalp EEG can be successfully used to predict naming responses from prearticulatory brain activity. Interparticipant variability in feature weights suggests that individualized models are necessary for the highest accuracy. Our findings may inform future applications of EEG in reconstructing speech for individuals with and without speech impairments.
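The shape of such a classification pipeline (flattened electrode-by-time EEG epochs in, one of three semantic categories out) can be sketched on synthetic data. For brevity this stand-in uses a fixed random hidden layer with a closed-form least-squares readout rather than the gradient-trained two-layer network the study used; all sizes, labels, and data below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical dimensions: 120 trials of 8 electrodes x 50 time samples,
# 3 response categories (animate / inanimate / abstract).
n_trials, n_feat, n_hidden, n_classes = 120, 8 * 50, 64, 3
labels = rng.integers(0, n_classes, n_trials)

# Synthetic epochs: a class-specific mean pattern plus trial noise.
class_means = rng.normal(size=(n_classes, n_feat))
X = class_means[labels] + 0.5 * rng.normal(size=(n_trials, n_feat))

W1 = rng.normal(size=(n_feat, n_hidden)) / np.sqrt(n_feat)  # layer 1: random projection
H = np.maximum(X @ W1, 0)                                   # ReLU hidden activations
Y = np.eye(n_classes)[labels]                               # one-hot targets
W2 = np.linalg.pinv(H) @ Y                                  # layer 2: least-squares readout

predicted = (H @ W2).argmax(axis=1)
train_acc = (predicted == labels).mean()
```

Inspecting the learned weights over time points and electrodes, as the authors did, is what localizes the most informative prearticulatory window (here, 470-490 ms) and scalp regions.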
Affiliation(s)
- Janina Wilmskoetter
  - Department of Rehabilitation Sciences, College of Health Professions, Medical University of South Carolina, Charleston, SC 29425, USA
- Rebecca Roth
  - Department of Neurology, College of Medicine, Medical University of South Carolina, Charleston, SC 29425, USA
- Konnor McDowell
  - Department of Neurology, College of Medicine, Medical University of South Carolina, Charleston, SC 29425, USA
- Brent Munsell
  - Department of Computer Science, College of Arts and Sciences, University of North Carolina-Chapel Hill, Chapel Hill, NC 27599, USA
- Skyler Fontenot
  - Department of Neurology, College of Medicine, Medical University of South Carolina, Charleston, SC 29425, USA
- Keeghan Andrews
  - Department of Neurology, College of Medicine, Medical University of South Carolina, Charleston, SC 29425, USA
- Allen Chang
  - Department of Neurology, College of Medicine, Medical University of South Carolina, Charleston, SC 29425, USA
- Lorelei Phillip Johnson
  - Department of Communication Sciences and Disorders, University of South Carolina, Columbia, SC 29208, USA
- Stacey Sangtian
  - Department of Communication Sciences and Disorders, University of South Carolina, Columbia, SC 29208, USA
- Roozbeh Behroozmand
  - Department of Communication Sciences and Disorders, University of South Carolina, Columbia, SC 29208, USA
- Julius Fridriksson
  - Department of Communication Sciences and Disorders, University of South Carolina, Columbia, SC 29208, USA
- Leonardo Bonilha
  - Department of Neurology, College of Medicine, Medical University of South Carolina, Charleston, SC 29425, USA
11. Dirani J, Pylkkänen L. The time course of cross-modal representations of conceptual categories. Neuroimage 2023; 277:120254. PMID: 37391047; DOI: 10.1016/j.neuroimage.2023.120254.
Abstract
To what extent does language production activate cross-modal conceptual representations? In picture naming, we view specific exemplars of concepts and then name them with a label, like "dog". In overt reading, the written word does not express a specific exemplar. Here we used a decoding approach with magnetoencephalography (MEG) to address whether picture naming and overt word reading involve shared representations of superordinate categories (e.g., animal). This addresses a fundamental question about the modality-generality of conceptual representations and their temporal evolution. Crucially, we do this using a language production task that does not require an explicit categorization judgment and that controls for word form properties across semantic categories. We trained our models to classify the animal/tool distinction using MEG data of one modality at each time point and then tested the generalization of those models on the other modality. We obtained evidence for the automatic activation of cross-modal semantic category representations for both pictures and words later than their respective modality-specific representations. Cross-modal representations were activated at 150 ms and lasted until around 450 ms. The time course of lexical activation was also assessed, revealing that semantic category is represented before lexical access for pictures but after lexical access for words. Notably, this earlier activation of semantic category in pictures occurred simultaneously with visual representations. We thus show evidence for the spontaneous activation of cross-modal semantic categories in picture naming and word reading. These results serve to anchor a more comprehensive spatio-temporal delineation of the semantic feature space during production planning.
Affiliation(s)
- Julien Dirani
  - Department of Psychology, New York University, New York, NY 10003, USA
- Liina Pylkkänen
  - Department of Psychology, New York University, New York, NY 10003, USA
  - Department of Linguistics, New York University, New York, NY 10003, USA
  - NYUAD Research Institute, New York University Abu Dhabi, Abu Dhabi 129188, UAE
12. Carota F, Schoffelen JM, Oostenveld R, Indefrey P. Parallel or sequential? Decoding conceptual and phonological/phonetic information from MEG signals during language production. Cogn Neuropsychol 2023; 40:298-317. PMID: 38105574; DOI: 10.1080/02643294.2023.2283239.
Abstract
Speaking requires the temporally coordinated planning of core linguistic information, from conceptual meaning to articulation. Recent neurophysiological results suggested that these operations involve a cascade of neural events with successive onset times, whilst competing evidence suggests early parallel neural activation. To test these hypotheses, we examined the sources of neuromagnetic activity recorded from 34 participants overtly naming 134 images from 4 object categories (animals, tools, foods and clothes). Within each category, word length and phonological neighbourhood density were co-varied to target phonological/phonetic processes. Multivariate pattern analysis (MVPA) searchlights in source space decoded object categories in occipitotemporal and middle temporal cortex, and phonological/phonetic variables in left inferior frontal (BA 44) and motor cortex, early on. The findings suggest early activation of multiple variables due to intercorrelated properties and interactivity of processing, thus raising important questions about the representational properties of target words during the preparatory time enabling overt speaking.
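A searchlight MVPA of the kind named in this abstract repeats a local decoding analysis within a small neighborhood centered on each source point, producing a map of where information is decodable. The sketch below collapses this to a 1-D toy (real searchlights use spherical neighborhoods over 3-D source or sensor positions); the nearest-centroid decoder, the informative-source positions, and all data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy 1-D "cortex": 40 source points, 90 trials, 3 object categories.
n_sources, n_trials, n_cat, radius = 40, 90, 3, 2
labels = np.repeat(np.arange(n_cat), n_trials // n_cat)

# Category information is confined to sources 10-15; everywhere else is noise.
data = rng.normal(size=(n_trials, n_sources))
class_patterns = rng.normal(size=(n_cat, 6))
data[:, 10:16] += 2.0 * class_patterns[labels]

def nearest_centroid_acc(X, y):
    """Half-split nearest-centroid decoding accuracy within one searchlight."""
    train, test = slice(0, None, 2), slice(1, None, 2)
    centroids = np.stack([X[train][y[train] == c].mean(axis=0) for c in range(n_cat)])
    d = np.linalg.norm(X[test][:, None, :] - centroids[None], axis=2)
    return (d.argmin(axis=1) == y[test]).mean()

# Slide the searchlight across sources, decoding within each local neighborhood.
acc_map = np.array([
    nearest_centroid_acc(data[:, max(0, c - radius):c + radius + 1], labels)
    for c in range(n_sources)
])
# acc_map peaks over the informative sources and sits near chance elsewhere.
```

In the study's source-space version, the peaks of such a map (run per time window) are what localize category information to occipitotemporal/middle temporal cortex and phonological/phonetic information to BA 44 and motor cortex.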
Affiliation(s)
- Francesca Carota
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Cognitive Neuroscience, Radboud University, Nijmegen, The Netherlands
| | - Jan-Mathijs Schoffelen
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Cognitive Neuroscience, Radboud University, Nijmegen, The Netherlands
| | - Robert Oostenveld
- Donders Institute for Cognitive Neuroscience, Radboud University, Nijmegen, The Netherlands
- NatMEG, Karolinska Institutet, Stockholm, Sweden
| | - Peter Indefrey
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Cognitive Neuroscience, Radboud University, Nijmegen, The Netherlands
- Institut für Sprache und Information, Heinrich Heine University, Düsseldorf, Germany
13
Wilson H, Golbabaee M, Proulx MJ, Charles S, O'Neill E. EEG-based BCI Dataset of Semantic Concepts for Imagination and Perception Tasks. Sci Data 2023; 10:386. [PMID: 37322034 PMCID: PMC10272218 DOI: 10.1038/s41597-023-02287-9] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2023] [Accepted: 06/02/2023] [Indexed: 06/17/2023] Open
Abstract
Electroencephalography (EEG) is a widely used neuroimaging technique in brain-computer interfaces (BCIs) due to its non-invasive nature, accessibility, and high temporal resolution. A range of input representations has been explored for BCIs. The same semantic meaning can be conveyed in different representations, such as visual (orthographic and pictorial) and auditory (spoken words), and these stimulus representations can be either imagined or perceived by the BCI user. There is a scarcity of open-source EEG datasets for imagined visual content, and to our knowledge there are no open-source EEG datasets for semantics captured through multiple sensory modalities for both perceived and imagined content. Here we present an open-source multisensory imagination and perception dataset from twelve participants, acquired with a 124-channel EEG system. The aim is for the dataset to be openly available for purposes such as BCI-related decoding and for better understanding the neural mechanisms behind perception and imagination across sensory modalities when the semantic category is held constant.
Affiliation(s)
- Holly Wilson
- Department of Computer Science, University of Bath, Bath, BA2 7AY, UK
- Mohammad Golbabaee
- Department of Engineering Mathematics, University of Bristol, Bristol, BS8 1TW, UK
- Stephen Charles
- Department of Computer Science, University of Bath, Bath, BA2 7AY, UK
- Eamonn O'Neill
- Department of Computer Science, University of Bath, Bath, BA2 7AY, UK
14
Ramon C, Graichen U, Gargiulo P, Zanow F, Knösche TR, Haueisen J. Spatiotemporal phase slip patterns for visual evoked potentials, covert object naming tasks, and insight moments extracted from 256 channel EEG recordings. Front Integr Neurosci 2023; 17:1087976. [PMID: 37384237 PMCID: PMC10293627 DOI: 10.3389/fnint.2023.1087976] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2022] [Accepted: 05/19/2023] [Indexed: 06/30/2023] Open
Abstract
Phase slips arise from state transitions in the coordinated activity of cortical neurons and can be extracted from EEG data. Phase slip rates (PSRs) were studied in high-density (256-channel) EEG data, sampled at 16.384 kHz, from five adult subjects during covert visual object naming tasks. Artifact-free data from 29 trials were averaged for each subject. The analysis looked for phase slips in the theta (4-7 Hz), alpha (7-12 Hz), beta (12-30 Hz), and low gamma (30-49 Hz) bands. The phase was calculated with the Hilbert transform, then unwrapped and detrended, and phase slip rates were counted in a 1.0 ms wide stepping window with a step size of 0.06 ms. Spatiotemporal plots of the PSRs were made using a montage layout of 256 equidistant electrode positions. The spatiotemporal profiles of EEG and PSRs during the stimulus and the first second of the post-stimulus period were examined in detail to study the visual evoked potentials and the different stages of visual object recognition in visual, language, and memory areas. The PSR activity areas were found to differ from the EEG activity areas during both the stimulus and post-stimulus periods. Different stages of insight moments during the covert object naming tasks were examined from the PSRs; the 'Eureka' moment occurred at about 512 ± 21 ms. Overall, these results indicate that information about cortical phase transitions can be derived from measured EEG data and used in a complementary fashion to study the cognitive behavior of the brain.
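The pipeline described above (band-pass filtering, Hilbert phase, unwrapping, detrending, windowed slip counting) can be sketched as follows. This is a minimal illustration on synthetic data: the filter order and the phase-jump threshold are assumptions for the sketch, not values taken from the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, detrend

def phase_slip_rate(eeg, fs, band, win_ms=1.0, step_ms=0.06, jump_thresh=np.pi / 2):
    """Count abrupt phase jumps ('slips') of a band-limited signal in short
    sliding windows: band-pass filter -> Hilbert phase -> unwrap -> detrend
    -> count jumps exceeding a threshold in each stepping window."""
    # Band-pass filter (4th-order Butterworth: an illustrative choice)
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    narrow = filtfilt(b, a, eeg)
    # Instantaneous phase via the analytic signal, unwrapped and detrended
    phase = detrend(np.unwrap(np.angle(hilbert(narrow))))
    win = max(1, int(round(win_ms * fs / 1000)))
    step = max(1, int(round(step_ms * fs / 1000)))
    rates = []
    for start in range(0, len(phase) - win, step):
        dphi = np.diff(phase[start:start + win + 1])
        rates.append(int(np.sum(np.abs(dphi) > jump_thresh)))  # slips per window
    return np.asarray(rates)

# Example: alpha-band (7-12 Hz) slips in 1 s of noisy synthetic data at 1 kHz
rng = np.random.default_rng(0)
fs = 1000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(fs)
psr = phase_slip_rate(sig, fs, band=(7, 12))
```

At the study's 16.384 kHz sampling rate the 1.0 ms window and 0.06 ms step correspond to roughly 16 samples and 1 sample respectively; the `max(1, ...)` guards keep the sketch valid at lower rates.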
Affiliation(s)
- Ceon Ramon
- Department of Electrical and Computer Engineering, University of Washington, Seattle, WA, United States
- Regional Epilepsy Center, Harborview Medical Center, University of Washington, Seattle, WA, United States
- Uwe Graichen
- Department of Biostatistics and Data Science, Karl Landsteiner University of Health Sciences, Krems an der Donau, Austria
- Paolo Gargiulo
- Institute of Biomedical and Neural Engineering, Reykjavik University, Reykjavik, Iceland
- Department of Science, Landspitali University Hospital, Reykjavik, Iceland
- Thomas R. Knösche
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Jens Haueisen
- Institute of Biomedical Engineering and Informatics, Technische Universität Ilmenau, Ilmenau, Germany
15
Legrand N, Etard O, Viader F, Clochon P, Doidy F, Eustache F, Gagnepain P. Attentional capture mediates the emergence and suppression of intrusive memories. iScience 2022; 25:105516. [PMID: 36419855 PMCID: PMC9676635 DOI: 10.1016/j.isci.2022.105516] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2021] [Revised: 07/20/2022] [Accepted: 11/02/2022] [Indexed: 11/07/2022] Open
Abstract
Intrusive memories hijack consciousness, and their control may lead to forgetting. However, the contribution of reflexive attention to qualifying a memory signal as interfering is unknown. We used machine learning to decode the brain's electrical activity and pinpoint the otherwise hidden emergence of intrusive memories reported during a memory suppression task. Importantly, the algorithm was trained on an independent attentional model of visual activity, mimicking either the abrupt and interfering appearance of visual scenes in conscious awareness or their deliberate exploration. Intrusions of memories into conscious awareness were decoded above chance. The decoding accuracy increased when the algorithm was trained using a model of reflexive attention. Conscious detection of intrusive activity decoded from the brain signal was central to the future silencing of suppressed memories and to later forgetting. Unwanted memories thus require the reflexive orienting of attention and access to consciousness to be suppressed effectively by inhibitory control.
Affiliation(s)
- Nicolas Legrand
- Normandie University, UNICAEN, PSL Research University, EPHE, INSERM, U1077, CHU de Caen, Neuropsychologie et Imagerie de la Mémoire Humaine, Centre Cyceron, Caen, France
- Olivier Etard
- Normandie University, UNICAEN, INSERM, COMETE, CYCERON, CHU Caen, 14000 Caen, France
- Fausto Viader
- Normandie University, UNICAEN, PSL Research University, EPHE, INSERM, U1077, CHU de Caen, Neuropsychologie et Imagerie de la Mémoire Humaine, Centre Cyceron, Caen, France
- Patrice Clochon
- Normandie University, UNICAEN, PSL Research University, EPHE, INSERM, U1077, CHU de Caen, Neuropsychologie et Imagerie de la Mémoire Humaine, Centre Cyceron, Caen, France
- Franck Doidy
- Normandie University, UNICAEN, PSL Research University, EPHE, INSERM, U1077, CHU de Caen, Neuropsychologie et Imagerie de la Mémoire Humaine, Centre Cyceron, Caen, France
- Francis Eustache
- Normandie University, UNICAEN, PSL Research University, EPHE, INSERM, U1077, CHU de Caen, Neuropsychologie et Imagerie de la Mémoire Humaine, Centre Cyceron, Caen, France
- Pierre Gagnepain
- Normandie University, UNICAEN, PSL Research University, EPHE, INSERM, U1077, CHU de Caen, Neuropsychologie et Imagerie de la Mémoire Humaine, Centre Cyceron, Caen, France
16
Carota F, Schoffelen JM, Oostenveld R, Indefrey P. The Time Course of Language Production as Revealed by Pattern Classification of MEG Sensor Data. J Neurosci 2022; 42:5745-5754. [PMID: 35680410 PMCID: PMC9302460 DOI: 10.1523/jneurosci.1923-21.2022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2021] [Revised: 04/05/2022] [Accepted: 04/12/2022] [Indexed: 11/21/2022] Open
Abstract
Language production involves a complex set of computations, from conceptualization to articulation, which are thought to engage cascading neural events in the language network. However, recent neuromagnetic evidence suggests simultaneous meaning-to-speech mapping in picture naming tasks, as indexed by early parallel activation of frontotemporal regions to lexical semantic, phonological, and articulatory information. Here we investigate the time course of word production, asking to what extent such "earliness" is a distinctive property of the associated spatiotemporal dynamics. Using MEG, we recorded the neural signals of 34 human subjects (26 males) overtly naming 134 images from four semantic object categories (animals, foods, tools, clothes). Within each category, we covaried word length, quantified by the number of syllables in a word, and phonological neighborhood density to target lexical and post-lexical phonological/phonetic processes. Multivariate pattern analysis searchlights in sensor space distinguished the stimulus-locked spatiotemporal responses to object categories early on, from 150 to 250 ms after picture onset, whereas word length was decoded in left frontotemporal sensors at 250-350 ms, followed by phonological neighborhood density at 350-450 ms. Our results suggest a progression of neural activity from posterior to anterior language regions for the semantic and phonological/phonetic computations preparing overt speech, thus supporting serial cascading models of word production. SIGNIFICANCE STATEMENT Current psycholinguistic models make divergent predictions on how a preverbal message is mapped onto articulatory output during language planning. Serial models predict a cascading sequence of hierarchically organized neural computations from conceptualization to articulation. In contrast, parallel models posit early simultaneous activation of multiple types of conceptual, phonological, and articulatory information in the language system. Here we asked whether such earliness is a distinctive property of the neural dynamics of word production. The combination of the millisecond precision of MEG with multivariate pattern analyses revealed successive onset times for the neural events supporting semantic and phonological/phonetic operations, progressing from posterior occipitotemporal to frontal sensor areas. These findings bring new insights for refining current theories of language production.
Affiliation(s)
- Francesca Carota
- Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands
- Donders Institute for Cognitive Neuroscience, Radboud University, 6525 Nijmegen, The Netherlands
- Jan-Mathijs Schoffelen
- Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands
- Donders Institute for Cognitive Neuroscience, Radboud University, 6525 Nijmegen, The Netherlands
- Robert Oostenveld
- Donders Institute for Cognitive Neuroscience, Radboud University, 6525 Nijmegen, The Netherlands
- NatMEG, Karolinska Institutet, Stockholm 171 77, Sweden
- Peter Indefrey
- Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands
- Donders Institute for Cognitive Neuroscience, Radboud University, 6525 Nijmegen, The Netherlands
- Institut für Sprache und Information, Heinrich Heine University, Düsseldorf 40225, Germany
17
Iamshchinina P, Karapetian A, Kaiser D, Cichy RM. Resolving the time course of visual and auditory object categorization. J Neurophysiol 2022; 127:1622-1628. [PMID: 35583972 PMCID: PMC9190735 DOI: 10.1152/jn.00515.2021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Humans can effortlessly categorize objects, both when they are conveyed through visual images and spoken words. To resolve the neural correlates of object categorization, studies have so far primarily focused on the visual modality. It is therefore still unclear how the brain extracts categorical information from auditory signals. In the current study, we used EEG (n = 48) and time-resolved multivariate pattern analysis to investigate 1) the time course with which object category information emerges in the auditory modality and 2) how the representational transition from individual object identification to category representation compares between the auditory modality and the visual modality. Our results show that 1) auditory object category representations can be reliably extracted from EEG signals and 2) a similar representational transition occurs in the visual and auditory modalities, where an initial representation at the individual-object level is followed by a subsequent representation of the objects’ category membership. Altogether, our results suggest an analogous hierarchy of information processing across sensory channels. However, there was no convergence toward conceptual modality-independent representations, thus providing no evidence for a shared supramodal code. NEW & NOTEWORTHY Object categorization operates on inputs from different sensory modalities, such as vision and audition. This process was mainly studied in vision. Here, we explore auditory object categorization. We show that auditory object category representations can be reliably extracted from EEG signals and, similar to vision, auditory representations initially carry information about individual objects, which is followed by a subsequent representation of the objects’ category membership.
Affiliation(s)
- Polina Iamshchinina
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Agnessa Karapetian
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Daniel Kaiser
- Mathematical Institute, Department of Mathematics and Computer Science, Physics, Geography, Justus-Liebig-Universität Gießen, Gießen, Germany
- Center for Mind, Brain and Behavior (CMBB), Philipps-Universität Marburg and Justus-Liebig-Universität Gießen, Marburg, Germany
- Radoslaw M Cichy
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
18
Rybář M, Daly I. Neural decoding of semantic concepts: A systematic literature review. J Neural Eng 2022; 19. [PMID: 35344941 DOI: 10.1088/1741-2552/ac619a] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2021] [Accepted: 03/27/2022] [Indexed: 11/12/2022]
Abstract
Objective. Semantic concepts are coherent entities within our minds. They underpin our thought processes and form part of the basis for our understanding of the world. Modern neuroscience research is increasingly exploring how individual semantic concepts are encoded within our brains, and a number of studies are beginning to reveal key patterns of neural activity that underpin specific concepts. Building upon this basic understanding of semantic neural encoding, neural engineers are beginning to explore tools and methods for semantic decoding: identifying which semantic concepts an individual is focused on at a given moment from recordings of their neural activity. In this paper, we review the current literature on semantic neural decoding. Approach. We conducted this review according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Specifically, we assessed the eligibility of published peer-reviewed reports via a search of PubMed and Google Scholar, identifying a total of 74 studies in which semantic neural decoding is used to attempt to identify individual semantic concepts from neural activity. Results. Our review reveals how modern neuroscientific tools have been developed to allow decoding of individual concepts from a range of neuroimaging modalities. We discuss the specific neuroimaging methods, experimental designs, and machine learning pipelines employed to aid the decoding of semantic concepts. We quantify the efficacy of semantic decoders by measuring information transfer rates. We also discuss current challenges presented by this research area and present some possible solutions, as well as some possible emerging and speculative future directions. Significance. Semantic decoding is a rapidly growing area of research. However, despite its increasingly widespread popularity and use in neuroscientific research, this is the first literature review focusing on this topic across neuroimaging modalities and with a focus on quantifying the efficacy of semantic decoders.
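Information transfer rates of the kind this review measures are commonly computed with the Wolpaw formulation; a minimal sketch follows, with purely hypothetical example numbers (the review's own figures are not reproduced here).

```python
import math

def wolpaw_itr(n_classes, accuracy, trials_per_min):
    """Bits per minute for an N-class decoder (Wolpaw formulation):
    bits/trial = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))."""
    n, p = n_classes, accuracy
    if p <= 1.0 / n:
        return 0.0  # at or below chance level: no information transferred
    if p == 1.0:
        bits = math.log2(n)
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * trials_per_min

# Hypothetical example: 4 semantic categories, 70% accuracy, 10 trials/min
rate = wolpaw_itr(4, 0.70, 10)   # about 6.43 bits/min
```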
Affiliation(s)
- Milan Rybář
- School of Computer Science and Electronic Engineering, University of Essex, Wivenhoe Park, Colchester, Essex, CO4 3SQ, United Kingdom
- Ian Daly
- School of Computer Science and Electronic Engineering, University of Essex, Wivenhoe Park, Colchester, Essex, CO4 3SQ, United Kingdom
19
Karimi-Rouzbahani H, Woolgar A. When the Whole Is Less Than the Sum of Its Parts: Maximum Object Category Information and Behavioral Prediction in Multiscale Activation Patterns. Front Neurosci 2022; 16:825746. [PMID: 35310090 PMCID: PMC8924472 DOI: 10.3389/fnins.2022.825746] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2021] [Accepted: 01/24/2022] [Indexed: 11/19/2022] Open
Abstract
Neural codes are reflected in complex neural activation patterns. Conventional electroencephalography (EEG) decoding analyses summarize activations by averaging or down-sampling signals within the analysis window, which diminishes informative fine-grained patterns. While previous studies have proposed distinct statistical features capable of capturing variability-dependent neural codes, it has been suggested that the brain could use a combination of encoding protocols not reflected in any one mathematical feature alone. To test this, we combined 30 features using state-of-the-art supervised and unsupervised feature selection procedures (n = 17). Across three datasets, we compared decoding of visual object category between these 17 sets of combined features, and between combined and individual features. Object category could be robustly decoded using the combined features from all 17 algorithms. However, the combined feature sets, which were matched in dimensionality to the individual features, were outperformed at most time points by the multiscale feature of wavelet coefficients. Moreover, the wavelet coefficients also explained behavioral performance more accurately than the combined features. These results suggest that a single but multiscale encoding protocol may capture the EEG neural codes better than any combination of protocols. Our findings place new constraints on models of neural information encoding in EEG.
Affiliation(s)
- Hamid Karimi-Rouzbahani
- Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom
- Department of Cognitive Science, Perception in Action Research Centre, Macquarie University, Sydney, NSW, Australia
- Department of Computing, Macquarie University, Sydney, NSW, Australia
- Alexandra Woolgar
- Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom
- Department of Cognitive Science, Perception in Action Research Centre, Macquarie University, Sydney, NSW, Australia
20
Bruera A, Poesio M. Exploring the Representations of Individual Entities in the Brain Combining EEG and Distributional Semantics. Front Artif Intell 2022; 5:796793. [PMID: 35280237 PMCID: PMC8905499 DOI: 10.3389/frai.2022.796793] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2021] [Accepted: 01/25/2022] [Indexed: 11/23/2022] Open
Abstract
Semantic knowledge about individual entities (i.e., the referents of proper names such as Jacinda Ardern) is fine-grained, episodic, and strongly social in nature, compared with knowledge about generic entities (the referents of common nouns such as politician). We investigate the semantic representations of individual entities in the brain, and for the first time we approach this question using both neural data, in the form of newly acquired EEG data, and distributional models of word meaning, employing them to isolate semantic information regarding individual entities in the brain. We ran two sets of analyses. The first set of analyses is concerned only with the evoked responses to individual entities and their categories. We find that it is possible to classify them according to both their coarse and their fine-grained category at appropriate timepoints, but that it is hard to map representational information learned from individuals to their categories. In the second set of analyses, we learn to decode distributional word vectors from the evoked responses. These results indicate that such a mapping can be learned successfully: this counts not only as a demonstration that representations of individuals can be discriminated in EEG responses, but also as a first brain-based validation of distributional semantic models as representations of individual entities. Finally, in-depth analyses of decoder performance provide additional evidence that the referents of proper names and categories have little in common when it comes to their representation in the brain.
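A minimal sketch of learning to decode distributional word vectors from evoked responses: a cross-validated ridge regression from brain features to word vectors, scored with the leave-2-out ("2 vs. 2") test common in this literature. The data, shapes, and regularization strength here are synthetic and illustrative; the study's exact pipeline may differ.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

# Hypothetical shapes: 100 trials x 40 evoked-response features, mapped to
# 50-dimensional distributional word vectors (all synthetic here).
X = rng.standard_normal((100, 40))
W = rng.standard_normal((40, 50))
Y = X @ W + 0.1 * rng.standard_normal((100, 50))  # noisy "word vectors"

def two_vs_two(y_true, y_pred):
    """A held-out pair counts as correct if the matched true/predicted
    vectors are jointly more similar than the mismatched pairing."""
    hits, total = 0, 0
    for i in range(len(y_true)):
        for j in range(i + 1, len(y_true)):
            match = np.dot(y_true[i], y_pred[i]) + np.dot(y_true[j], y_pred[j])
            mismatch = np.dot(y_true[i], y_pred[j]) + np.dot(y_true[j], y_pred[i])
            hits += match > mismatch
            total += 1
    return hits / total

# Cross-validated brain-to-semantics mapping
scores = []
for train, test in KFold(n_splits=5).split(X):
    model = Ridge(alpha=1.0).fit(X[train], Y[train])
    scores.append(two_vs_two(Y[test], model.predict(X[test])))
acc = float(np.mean(scores))  # chance level is 0.5
```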
Affiliation(s)
- Andrea Bruera
- Cognitive Science Research Group, School of Electronic Engineering and Computer Science, Queen Mary University of London, London, United Kingdom
21
Ashton K, Zinszer BD, Cichy RM, Nelson CA, Aslin RN, Bayet L. Time-resolved multivariate pattern analysis of infant EEG data: A practical tutorial. Dev Cogn Neurosci 2022; 54:101094. [PMID: 35248819 PMCID: PMC8897621 DOI: 10.1016/j.dcn.2022.101094] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2021] [Revised: 10/22/2021] [Accepted: 02/24/2022] [Indexed: 01/27/2023] Open
Abstract
Time-resolved multivariate pattern analysis (MVPA), a popular technique for analyzing magneto- and electroencephalography (M/EEG) neuroimaging data, quantifies the extent and time course with which neural representations support the discrimination of relevant stimulus dimensions. As EEG is widely used for infant neuroimaging, time-resolved MVPA of infant EEG data is a particularly promising tool for infant cognitive neuroscience. MVPA has recently been applied to common infant imaging methods such as EEG and fNIRS. In this tutorial, we provide and describe code to implement time-resolved, within-subject MVPA with infant EEG data. An example implementation of time-resolved MVPA based on linear SVM classification is described, with accompanying code in Matlab and Python. Results from a test dataset indicated that, in both infants and adults, this method reliably produced above-chance accuracy for classifying stimulus images. Extensions of the classification analysis are presented, including both geometric- and accuracy-based representational similarity analysis, implemented in Python. Common implementation choices are presented and discussed. As the amount of artifact-free EEG data contributed by each participant is lower in studies of infants than in studies of children and adults, we also explore and discuss the impact of varying participant-level inclusion thresholds on the resulting MVPA findings.
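The core of time-resolved, within-subject MVPA (train and cross-validate a linear classifier independently at each time point) can be sketched with scikit-learn as below. This is a generic illustration on synthetic epoched data, not the tutorial's accompanying code; all shapes and the injected class effect are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Synthetic epoched EEG: 80 trials x 32 channels x 50 time points, two classes
n_trials, n_channels, n_times = 80, 32, 50
X = rng.standard_normal((n_trials, n_channels, n_times))
y = np.repeat([0, 1], n_trials // 2)
X[y == 1, :, 25:] += 0.8   # a class difference that emerges from t = 25 on

# Time-resolved MVPA: an independent cross-validated linear SVM per time
# point yields a decoding-accuracy time course (chance = 0.5 here).
accuracy = np.empty(n_times)
for t in range(n_times):
    clf = SVC(kernel="linear", C=1.0)
    accuracy[t] = cross_val_score(clf, X[:, :, t], y, cv=5).mean()
```

The accuracy curve stays near chance before the injected effect and rises sharply after it, which is the signature plot such analyses produce.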
Affiliation(s)
- Kira Ashton
- Department of Neuroscience, American University, Washington, DC 20016, USA
- Center for Neuroscience and Behavior, American University, Washington, DC 20016, USA
- Radoslaw M Cichy
- Department of Education and Psychology, Freie Universität Berlin, 14195 Berlin, Germany
- Charles A Nelson
- Boston Children's Hospital, Boston, MA 02115, USA
- Department of Pediatrics, Harvard Medical School, Boston, MA 02115, USA
- Graduate School of Education, Harvard, Cambridge, MA 02138, USA
- Richard N Aslin
- Haskins Laboratories, 300 George Street, New Haven, CT 06511, USA
- Psychological Sciences Department, University of Connecticut, Storrs, CT 06269, USA
- Department of Psychology, Yale University, New Haven, CT 06511, USA
- Yale Child Study Center, School of Medicine, New Haven, CT 06519, USA
- Laurie Bayet
- Department of Neuroscience, American University, Washington, DC 20016, USA
- Center for Neuroscience and Behavior, American University, Washington, DC 20016, USA
22
Shi R, Zhao Y, Cao Z, Liu C, Kang Y, Zhang J. Categorizing objects from MEG signals using EEGNet. Cogn Neurodyn 2021; 16:365-377. [PMID: 35401863 PMCID: PMC8934895 DOI: 10.1007/s11571-021-09717-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2021] [Revised: 08/09/2021] [Accepted: 09/02/2021] [Indexed: 11/25/2022] Open
Abstract
Magnetoencephalography (MEG) signals have demonstrated their practical applicability to reading human minds. Current neural decoding studies have made great progress in building subject-wise decoding models that extract and discriminate temporal/spatial features in neural signals. In this paper, we used a compact convolutional neural network, EEGNet, to build a common decoder across subjects, which deciphered the categories of objects (faces, tools, animals, and scenes) from MEG data. This study investigated the influence of the spatiotemporal structure of the MEG data on EEGNet's classification performance. Furthermore, we replaced EEGNet's convolution layers with two sets of parallel convolution structures to extract spatial and temporal features simultaneously. Our results showed that the organization of the MEG data fed into EEGNet affects classification accuracy, and that the parallel convolution structures are beneficial for extracting and fusing spatial and temporal MEG features. The classification accuracy demonstrated that EEGNet succeeds in building a common decoder model across subjects and outperforms several state-of-the-art feature fusion methods.
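The parallel spatial/temporal convolution idea can be illustrated framework-free at a toy level: one branch convolves along time (temporal features), the other along sensors (spatial features), and the branch outputs are fused by concatenation. The kernel sizes and single-kernel branches below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy MEG epoch: 16 sensors x 100 time points
epoch = rng.standard_normal((16, 100))

temporal_kernel = np.ones(5) / 5.0   # 1-D smoothing kernel along time
spatial_kernel = np.ones(3) / 3.0    # 1-D mixing kernel along sensors

# Temporal branch: convolve each sensor's time series along axis 1
temporal_branch = np.apply_along_axis(
    lambda row: np.convolve(row, temporal_kernel, mode="valid"), 1, epoch)

# Spatial branch: convolve each time sample across sensors along axis 0
spatial_branch = np.apply_along_axis(
    lambda col: np.convolve(col, spatial_kernel, mode="valid"), 0, epoch)

# Fusion: concatenate both feature maps into one vector for a classifier
features = np.concatenate([temporal_branch.ravel(), spatial_branch.ravel()])
```

In an actual EEGNet-style model the two branches would be learned convolution layers with multiple kernels, but the data flow is the same: the two feature types are computed in parallel from the same input and fused downstream.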
Affiliation(s)
- Ran Shi
- School of Artificial Intelligence, Beijing Normal University, Beijing, 100875, China
- Yanyu Zhao
- School of Artificial Intelligence, Beijing Normal University, Beijing, 100875, China
- Zhiyuan Cao
- School of Artificial Intelligence, Beijing Normal University, Beijing, 100875, China
- Chunyu Liu
- School of Artificial Intelligence, Beijing Normal University, Beijing, 100875, China
- Yi Kang
- School of Artificial Intelligence, Beijing Normal University, Beijing, 100875, China
- Jiacai Zhang
- School of Artificial Intelligence, Beijing Normal University, Beijing, 100875, China
- Engineering Research Center of Intelligent Technology and Educational Application, Ministry of Education, Beijing, 100875, China
23
Rybář M, Poli R, Daly I. Decoding of semantic categories of imagined concepts of animals and tools in fNIRS. J Neural Eng 2021; 18:046035. [PMID: 33780916 DOI: 10.1088/1741-2552/abf2e5] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2020] [Accepted: 03/29/2021] [Indexed: 11/11/2022]
Abstract
Objective. Semantic decoding refers to the identification of semantic concepts from recordings of an individual's brain activity. It has been previously reported in functional magnetic resonance imaging and electroencephalography. We investigate whether semantic decoding is possible with functional near-infrared spectroscopy (fNIRS). Specifically, we attempt to differentiate between the semantic categories of animals and tools. We also identify suitable mental tasks for potential brain-computer interface (BCI) applications. Approach. We explore the feasibility of a silent naming task, for the first time in fNIRS, and propose three novel intuitive mental tasks based on imagining concepts using three sensory modalities: visual, auditory, and tactile. Participants are asked to visualize an object in their minds, imagine the sounds made by the object, and imagine the feeling of touching the object. A general linear model is used to extract hemodynamic responses that are then classified via logistic regression in a univariate and multivariate manner. Main results. We successfully classify all tasks with mean accuracies of 76.2% for the silent naming task, 80.9% for the visual imagery task, 72.8% for the auditory imagery task, and 70.4% for the tactile imagery task. Furthermore, we show that consistent neural representations of semantic categories exist by applying classifiers across tasks. Significance. These findings show that semantic decoding is possible in fNIRS. The study is the first step toward the use of semantic decoding for intuitive BCI applications for communication.
Affiliation(s)
- Milan Rybář
- Brain-Computer Interfacing and Neural Engineering Laboratory, School of Computer Science and Electronic Engineering, University of Essex, Colchester, United Kingdom
- Riccardo Poli
- Brain-Computer Interfacing and Neural Engineering Laboratory, School of Computer Science and Electronic Engineering, University of Essex, Colchester, United Kingdom
- Ian Daly
- Brain-Computer Interfacing and Neural Engineering Laboratory, School of Computer Science and Electronic Engineering, University of Essex, Colchester, United Kingdom
24
Argiris G, Rumiati RI, Crepaldi D. No fruits without color: Cross-modal priming and EEG reveal different roles for different features across semantic categories. PLoS One 2021; 16:e0234219. [PMID: 33852575 PMCID: PMC8046255 DOI: 10.1371/journal.pone.0234219] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2020] [Accepted: 03/22/2021] [Indexed: 11/18/2022] Open
Abstract
Category-specific impairments witnessed in patients with semantic deficits have broadly dissociated into natural and artificial kinds. However, how the category of food (more specifically, fruits and vegetables) fits into this distinction has been difficult to interpret, given a pattern of deficit that has inconsistently mapped onto either kind, despite its intuitive membership in the natural domain. The present study explores the effects of a manipulation of a visual sensory (i.e., color) or functional (i.e., orientation) feature on the subsequent semantic processing of fruits and vegetables (and tools, by comparison), first at the behavioral and then at the neural level. The categorization of natural (i.e., fruits/vegetables) and artificial (i.e., utensils) entities was investigated via cross-modal priming. Reaction time analysis indicated a reduction in priming for color-modified natural entities and orientation-modified artificial entities. Standard event-related potential (ERP) analysis was performed, in addition to linear classification. For natural entities, an N400 effect at central channel sites was observed for the color-modified condition relative to the normal and orientation conditions, with this difference confirmed by classification analysis. Conversely, there was no significant difference between conditions for the artificial category in either analysis. These findings provide strong evidence that color is an integral property for the categorization of fruits/vegetables, thus substantiating the claim that feature-based processing guides categorization as a function of semantic category.
Affiliation(s)
- Davide Crepaldi - International School for Advanced Studies (SISSA), Trieste, Italy
25
Liu C, Kang Y, Zhang L, Zhang J. Rapidly Decoding Image Categories From MEG Data Using a Multivariate Short-Time FC Pattern Analysis Approach. IEEE J Biomed Health Inform 2021; 25:1139-1150. [PMID: 32750957 DOI: 10.1109/jbhi.2020.3008731] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Recent advances in the development of multivariate analysis methods have led to the application of multivariate pattern analysis (MVPA) to investigate the interactions between brain regions using graph theory (functional connectivity, FC) and to decode visual categories from functional magnetic resonance imaging (fMRI) data in a continuous multicategory paradigm. To estimate stable FC patterns from fMRI data, previous studies required long recording periods, on the order of several minutes, whereas the human brain categorizes visual stimuli within hundreds of milliseconds. Constructing short-time dynamic FC patterns on the order of milliseconds and decoding visual categories from them is a relatively novel concept. In this study, we developed a multivariate decoding algorithm based on FC patterns and applied it to magnetoencephalography (MEG) data. MEG data were recorded from participants presented with image stimuli in four categories (faces, scenes, animals, and tools). MEG data from 17 participants demonstrate that short-time dynamic FC patterns yield brain activity patterns that can be used to decode visual categories with high accuracy. Our results show that FC patterns change over the time window, and the FC patterns extracted in the time window of 0∼200 ms after stimulus onset were the most stable. Further, categorization accuracy peaked (mean binary accuracy above 78.6% at the individual level) for FC patterns estimated within the 0∼200 ms interval. These findings elucidate the underlying connectivity information during visual category processing on a relatively small time scale and demonstrate that the contribution of FC patterns to categorization fluctuates over time.
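The short-time FC decoding idea summarized above — estimate a connectivity pattern per trial within a brief window, then classify the vectorized patterns — can be sketched as follows. All numbers (sensor count, window length, correlation strengths) are hypothetical, and a nearest-class-mean rule stands in for the paper's classifier.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_samp = 8, 50          # e.g. a 0-200 ms window at 250 Hz (hypothetical)

def trial(c):
    """One trial whose sensors share a class-specific common signal (strength c)."""
    shared = rng.standard_normal(n_samp)
    return c * shared + np.sqrt(1 - c ** 2) * rng.standard_normal((n_sensors, n_samp))

def fc_pattern(x):
    """Vectorized upper triangle of the sensor-by-sensor correlation matrix."""
    return np.corrcoef(x)[np.triu_indices(n_sensors, k=1)]

X = np.array([fc_pattern(trial(0.7)) for _ in range(40)] +
             [fc_pattern(trial(0.2)) for _ in range(40)])
y = np.repeat([0, 1], 40)

# Leave-one-out nearest-class-mean decoding of the short-time FC patterns.
correct = 0
for i in range(len(y)):
    tr = np.arange(len(y)) != i
    m = [X[tr & (y == k)].mean(axis=0) for k in (0, 1)]
    pred = int(np.linalg.norm(X[i] - m[1]) < np.linalg.norm(X[i] - m[0]))
    correct += (pred == y[i])
acc = correct / len(y)
print(acc)
```

The point of the sketch is that even a 50-sample window yields connectivity estimates stable enough to separate the two classes.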
26
van Driel J, Olivers CNL, Fahrenfort JJ. High-pass filtering artifacts in multivariate classification of neural time series data. J Neurosci Methods 2021; 352:109080. [PMID: 33508412 DOI: 10.1016/j.jneumeth.2021.109080] [Citation(s) in RCA: 34] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2019] [Revised: 01/13/2021] [Accepted: 01/15/2021] [Indexed: 12/11/2022]
Abstract
BACKGROUND Traditionally, EEG/MEG data are high-pass filtered and baseline-corrected to remove slow drifts. Minor deleterious effects of high-pass filtering in traditional time-series analysis have been well-documented, including temporal displacements. However, its effects on time-resolved multivariate pattern classification analyses (MVPA) are largely unknown. NEW METHOD To prevent potential displacement effects, we extend an alternative method of removing slow drift noise - robust detrending - with a procedure in which we mask out all cortical events from each trial. We refer to this method as trial-masked robust detrending. RESULTS In both real and simulated EEG data of a working memory experiment, we show that both high-pass filtering and standard robust detrending create artifacts that result in the displacement of multivariate patterns into activity silent periods, particularly apparent in temporal generalization analyses, and especially in combination with baseline correction. We show that trial-masked robust detrending is free from such displacements. COMPARISON WITH EXISTING METHOD(S) Temporal displacement may emerge even with modest filter cut-off settings such as 0.05 Hz, and even in regular robust detrending. However, trial-masked robust detrending results in artifact-free decoding without displacements. Baseline correction may unwittingly obfuscate spurious decoding effects and displace them to the rest of the trial. CONCLUSIONS Decoding analyses benefit from trial-masked robust detrending, without the unwanted side effects introduced by filtering or regular robust detrending. However, for sufficiently clean data sets and sufficiently strong signals, no filtering or detrending at all may work adequately. Implications for other types of data are discussed, followed by a number of recommendations.
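The core of trial-masked robust detrending as described in this abstract — fit a slow trend while excluding the samples containing events, then subtract it — can be illustrated on a toy signal. Everything here is hypothetical (drift shape, event timing, polynomial order); the paper's actual implementation is more elaborate (robust, per-trial fitting).

```python
import numpy as np

rng = np.random.default_rng(2)
fs, n = 100, 3000                                   # hypothetical 100 Hz, 30 s of "EEG"
t = np.arange(n) / fs
drift = 0.8 * np.sin(2 * np.pi * 0.01 * t)          # slow drift to be removed
evoked = np.zeros(n)
for o in range(300, n - 200, 500):                  # five 1 s "cognitive events"
    evoked[o:o + 100] += np.hanning(100)
eeg = drift + evoked + 0.1 * rng.standard_normal(n)

def detrend(x, order, mask=None):
    """Polynomial detrend; samples where mask is True are excluded from the fit."""
    basis = np.polynomial.polynomial.polyvander(np.linspace(-1, 1, len(x)), order)
    keep = np.ones(len(x), bool) if mask is None else ~mask
    coef, *_ = np.linalg.lstsq(basis[keep], x[keep], rcond=None)
    return x - basis @ coef

naive = detrend(eeg, 10)                       # events leak into the fitted trend
masked = detrend(eeg, 10, mask=evoked > 0)     # trial-masked fit leaves them out
err_naive = np.mean((naive - evoked) ** 2)
err_masked = np.mean((masked - evoked) ** 2)
print(err_naive, err_masked)
```

Masking the event samples keeps the evoked response out of the trend estimate, so the masked variant recovers the evoked signal with lower error than the naive fit.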
Affiliation(s)
- Joram van Driel - Institute for Brain and Behaviour Amsterdam, Vrije Universiteit Amsterdam, the Netherlands; Department of Experimental and Applied Psychology - Cognitive Psychology, Vrije Universiteit Amsterdam, the Netherlands; Faculty of Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, the Netherlands
- Christian N L Olivers - Institute for Brain and Behaviour Amsterdam, Vrije Universiteit Amsterdam, the Netherlands; Department of Experimental and Applied Psychology - Cognitive Psychology, Vrije Universiteit Amsterdam, the Netherlands; Faculty of Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, the Netherlands
- Johannes J Fahrenfort - Institute for Brain and Behaviour Amsterdam, Vrije Universiteit Amsterdam, the Netherlands; Department of Experimental and Applied Psychology - Cognitive Psychology, Vrije Universiteit Amsterdam, the Netherlands; Faculty of Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, the Netherlands; Department of Psychology, University of Amsterdam, Amsterdam 1001 NK, the Netherlands; Amsterdam Brain and Cognition (ABC), University of Amsterdam, Amsterdam 1001 NK, the Netherlands
27
Poncet M, Fabre‐Thorpe M, Chakravarthi R. A simple rule to describe interactions between visual categories. Eur J Neurosci 2020; 52:4639-4666. [DOI: 10.1111/ejn.14890] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2019] [Revised: 06/12/2020] [Accepted: 06/24/2020] [Indexed: 11/27/2022]
Affiliation(s)
- Marlene Poncet - CerCo, Université de Toulouse, CNRS, UPS, Toulouse, France; School of Psychology, University of St Andrews, St Andrews, UK
28
Li R, Johansen JS, Ahmed H, Ilyevsky TV, Wilbur RB, Bharadwaj HM, Siskind JM. The Perils and Pitfalls of Block Design for EEG Classification Experiments. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2020; PP:1-1. [PMID: 33211652 DOI: 10.1109/tpami.2020.2973153] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
A recent paper [31] claims to classify brain processing evoked in subjects watching ImageNet stimuli as measured with EEG and to employ a representation derived from this processing to construct a novel object classifier. That paper, together with a series of subsequent papers [11, 18, 20, 24, 25, 30, 34], claims to achieve successful results on a wide variety of computer-vision tasks, including object classification, transfer learning, and generation of images depicting human perception and thought using brain-derived representations measured through EEG. Our novel experiments and analyses demonstrate that their results crucially depend on the block design that they employ, where all stimuli of a given class are presented together, and fail with a rapid-event design, where stimuli of different classes are randomly intermixed. The block design leads to classification of arbitrary brain states based on block-level temporal correlations that are known to exist in all EEG data, rather than stimulus-related activity. Because every trial in their test sets comes from the same block as many trials in the corresponding training sets, their block design thus leads to classifying arbitrary temporal artifacts of the data instead of stimulus-related activity. This invalidates all subsequent analyses performed on this data in multiple published papers and calls into question all of the reported results. We further show that a novel object classifier constructed with a random codebook performs as well as or better than a novel object classifier constructed with the representation extracted from EEG data, suggesting that the performance of their classifier constructed with a representation extracted from EEG data does not benefit from the brain-derived representation. Together, our results illustrate the far-reaching implications of the temporal autocorrelations that exist in all neuroimaging data for classification experiments. Further, our results calibrate the underlying difficulty of the tasks involved and caution against overly optimistic, but incorrect, claims to the contrary.
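The confound this paper describes — slow temporal autocorrelations masquerading as stimulus decoding under a block design — can be reproduced with data that carry no stimulus information at all. A hypothetical sketch (the features contain only a session drift plus noise; all parameters are invented):

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_feat = 200, 16
# Features carry NO stimulus information: only a slow session drift plus noise.
drift = np.cumsum(0.2 * rng.standard_normal((n_trials, n_feat)), axis=0)
X = drift + rng.standard_normal((n_trials, n_feat))

y_block = np.repeat([0, 1], n_trials // 2)   # block design: all class-0 trials first
y_rapid = np.tile([0, 1], n_trials // 2)     # rapid-event design: classes intermixed

def loo_acc(X, y):
    """Leave-one-out nearest-class-mean classification accuracy."""
    hits = 0
    for i in range(len(y)):
        tr = np.arange(len(y)) != i
        m = [X[tr & (y == k)].mean(axis=0) for k in (0, 1)]
        hits += int(np.linalg.norm(X[i] - m[1]) < np.linalg.norm(X[i] - m[0])) == y[i]
    return hits / len(y)

acc_block = loo_acc(X, y_block)   # far above chance despite zero stimulus signal
acc_rapid = loo_acc(X, y_rapid)   # near chance, as it should be
print(acc_block, acc_rapid)
```

Because the drift correlates with block position, the block-design labels are decodable from drift alone, while the intermixed labels are not — the paper's core argument in miniature.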
29
Reaction times predict dynamic brain representations measured with MEG for only some object categorisation tasks. Neuropsychologia 2020; 151:107687. [PMID: 33212137 DOI: 10.1016/j.neuropsychologia.2020.107687] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2020] [Revised: 09/29/2020] [Accepted: 11/10/2020] [Indexed: 11/21/2022]
Abstract
Behavioural categorisation reaction times (RTs) provide a useful way to link behaviour to brain representations measured with neuroimaging. In this framework, objects are assumed to be represented in a multidimensional activation space, with the distances between object representations indicating their degree of neural similarity. Faster RTs have been reported to correlate with greater distances from a classification decision boundary for animacy. Objects inherently belong to more than one category, yet it is not known whether the RT-distance relationship, and its evolution over the time-course of the neural response, is similar across different categories. Here we used magnetoencephalography (MEG) to address this question. Our stimuli included typically animate and inanimate objects, as well as more ambiguous examples (i.e., robots and toys). We conducted four semantic categorisation tasks on the same stimulus set, assessing animacy, living, moving, and human-similarity concepts, and linked the categorisation RTs to MEG time-series decoding data. Our results show a sustained RT-distance relationship throughout the time course of object processing not only for animacy but also for categorisation according to human-similarity. Interestingly, this sustained RT-distance relationship was not observed for the living and moving category organisations, despite comparable classification accuracy of the MEG data across all four category organisations. Our findings show that behavioural RTs predict representational distance for an organisational principle other than animacy; however, further research is needed to determine why this relationship is observed only for some category organisations and not others.
30
Lui KFH, Lo JCM, Maurer U, Ho CSH, McBride C. Electroencephalography decoding of Chinese characters in primary school children and its prediction for word reading performance and development. Dev Sci 2020; 24:e13060. [PMID: 33159696 DOI: 10.1111/desc.13060] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2019] [Revised: 10/12/2020] [Accepted: 10/30/2020] [Indexed: 11/30/2022]
Abstract
Research on what neural mechanisms facilitate word reading development in non-alphabetic scripts is relatively rare. The present study was among the first to adopt a multivariate pattern classification analysis to decode electroencephalographic signals recorded for primary school children (N = 236) while performing a Chinese character decision task. Chinese is an ideal script for studying the relationship between neural discriminability (i.e., decodability) of the orthography and behavioral word reading skills since the mapping from orthography to phonology is relatively arbitrary in Chinese. This was also among the first empirical attempts to examine the extent to which decoding performance can predict current and subsequent word reading skills using a longitudinal design. Results showed that neural activation patterns of real characters can be distinguished from activation patterns for pseudo-characters, non-characters, and random stroke combinations in both younger and older children. Topography of the transformed classifier weights revealed two distinct cognitive sub-processes underlying single character recognition, but temporal generalization analysis suggested common neural mechanisms between the distinct cognitive sub-processes. Suggestive evidence from correlational and hierarchical regression analyses showed that decoding performance, assessed on average 2 months before the year 2 behavioral testing, predicted both year 1 word reading performance and the development of word reading fluency over the year. Results demonstrate that decoding performance, one indicator of how the neural system is functionally organized in processing characters and character-like stimuli, can serve as a useful neural marker in predicting current word reading skills and the capacity to learn to read.
Affiliation(s)
- Kelvin F H Lui - Department of Psychology, The Chinese University of Hong Kong, Hong Kong
- Jason C M Lo - Department of Psychology, The Chinese University of Hong Kong, Hong Kong
- Urs Maurer - Department of Psychology, The Chinese University of Hong Kong, Hong Kong; Brain and Mind Institute, The Chinese University of Hong Kong, Hong Kong
- Connie S-H Ho - Department of Psychology, The University of Hong Kong, Hong Kong
- Catherine McBride - Department of Psychology, The Chinese University of Hong Kong, Hong Kong; Brain and Mind Institute, The Chinese University of Hong Kong, Hong Kong
31
Dynamic Time-Locking Mechanism in the Cortical Representation of Spoken Words. eNeuro 2020; 7:ENEURO.0475-19.2020. [PMID: 32513662 PMCID: PMC7470935 DOI: 10.1523/eneuro.0475-19.2020] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2019] [Revised: 05/15/2020] [Accepted: 06/01/2020] [Indexed: 11/21/2022] Open
Abstract
Human speech has a unique capacity to carry and communicate rich meanings. However, it is not known how the highly dynamic and variable perceptual signal is mapped to existing linguistic and semantic representations. In this novel approach, we used the natural acoustic variability of sounds and mapped them to magnetoencephalography (MEG) data using physiologically-inspired machine-learning models. We aimed at determining how well the models, differing in their representation of temporal information, serve to decode and reconstruct spoken words from MEG recordings in 16 healthy volunteers. We discovered that dynamic time-locking of the cortical activation to the unfolding speech input is crucial for the encoding of the acoustic-phonetic features of speech. In contrast, time-locking was not highlighted in cortical processing of non-speech environmental sounds that conveyed the same meanings as the spoken words, including human-made sounds with temporal modulation content similar to speech. The amplitude envelope of the spoken words was particularly well reconstructed based on cortical evoked responses. Our results indicate that speech is encoded cortically with especially high temporal fidelity. This speech tracking by evoked responses may partly reflect the same underlying neural mechanism as the frequently reported entrainment of the cortical oscillations to the amplitude envelope of speech. Furthermore, the phoneme content was reflected in cortical evoked responses simultaneously with the spectrotemporal features, pointing to an instantaneous transformation of the unfolding acoustic features into linguistic representations during speech processing.
32
Cudlenco N, Popescu N, Leordeanu M. Reading into the mind’s eye: Boosting automatic visual recognition with EEG signals. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.12.076] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
33
Yang T, Kim SP. Group-Level Neural Responses to Service-to-Service Brand Extension. Front Neurosci 2019; 13:676. [PMID: 31316343 PMCID: PMC6610219 DOI: 10.3389/fnins.2019.00676] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2019] [Accepted: 06/13/2019] [Indexed: 11/26/2022] Open
Abstract
Brand extension is a marketing strategy that leverages a well-established brand to promote new offerings as goods or services. Previous neurophysiological studies on goods-to-goods brand extension have proposed that categorization and semantic memory processes are involved in brand extension evaluation. However, it is unknown whether these same processes also underlie service-to-service brand extension. The present study therefore aims to investigate the neural processes in consumers underlying their judgment of service-to-service brand extension. Specifically, we investigated human electroencephalographic responses to extended services that were commonly considered to fit well or badly with the parent brand among consumers. For this purpose, we proposed a new stimulus grouping method to find commonly acceptable or unacceptable service extensions. In the experiment, participants reported the acceptability of 56 brand extension pairs, consisting of a parent brand name (S1) and an extended service name (S2). From individual acceptability responses, we assigned each pair to one of three fit levels: high- (i.e., highly acceptable), low-, and mid-fit. Next, we selected stimuli that received high/low-fit evaluations from a majority of participants (i.e., >85%) and assigned them to a high/low population-fit group. A comparison of event-related potentials (ERPs) between population-fit groups through a paired t-test showed significant differences in the fronto-central N2 and fronto-parietal P300 amplitudes. We further evaluated the inter-subject variability of these ERP components by a decoding analysis that classified N2 and/or P300 amplitudes into a high or low population-fit class using a support vector machine. Leave-one-subject-out validation revealed classification accuracies of 60.35% with N2 amplitudes, 78.95% with P300, and 73.68% with both, indicating relatively high inter-subject variability for N2 but low for P300. This validation showed that the fronto-parietal P300 reflected neural processes that were more consistent across subjects in service-to-service brand extension. We further observed that the left frontal P300 amplitude increased as fit level increased across stimuli, indicating a semantic retrieval process that evaluates the semantic link between S1 and S2. The parietal P300 showed a higher amplitude in the high population-fit group, reflecting a similarity-based categorization process. In sum, our results suggest that service-to-service brand extension evaluation may share similar neural processes with goods-to-goods brand extension.
Affiliation(s)
- Taeyang Yang - Brain-Computer Interface Laboratory, Department of Human Factors Engineering, Ulsan National Institute of Science and Technology, Ulsan, South Korea
- Sung-Phil Kim - Brain-Computer Interface Laboratory, Department of Human Factors Engineering, Ulsan National Institute of Science and Technology, Ulsan, South Korea
34
Tuckute G, Hansen ST, Pedersen N, Steenstrup D, Hansen LK. Single-Trial Decoding of Scalp EEG under Natural Conditions. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2019; 2019:9210785. [PMID: 31143206 PMCID: PMC6501266 DOI: 10.1155/2019/9210785] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/21/2018] [Revised: 02/12/2019] [Accepted: 02/24/2019] [Indexed: 12/04/2022]
Abstract
There is significant current interest in decoding mental states from electroencephalography (EEG) recordings. EEG signals are subject-specific, are sensitive to disturbances, and have a low signal-to-noise ratio, which has been mitigated by the use of laboratory-grade EEG acquisition equipment under highly controlled conditions. In the present study, we investigate single-trial decoding of natural, complex stimuli based on scalp EEG acquired with a portable, 32 dry-electrode sensor system in a typical office setting. We probe generalizability by a leave-one-subject-out cross-validation approach. We demonstrate that support vector machine (SVM) classifiers trained on a relatively small set of denoised (averaged) pseudotrials perform on par with classifiers trained on a large set of noisy single-trial samples. We propose a novel method for computing sensitivity maps of EEG-based SVM classifiers for visualization of EEG signatures exploited by the SVM classifiers. Moreover, we apply an NPAIRS resampling framework for estimation of map uncertainty, and thus show that effect sizes of sensitivity maps for classifiers trained on small samples of denoised data and large samples of noisy data are similar. Finally, we demonstrate that the average pseudotrial classifier can successfully predict the class of single trials from withheld subjects, which allows for fast classifier training, parameter optimization, and unbiased performance evaluation in machine learning approaches for brain decoding.
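The pseudotrial idea summarized above — average groups of noisy single trials into denoised training samples, then test on withheld single trials — can be sketched as follows. Class patterns, noise level, and the nearest-class-mean rule (standing in for the paper's SVM) are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
n_feat, noise = 20, 2.0
mu = {0: np.zeros(n_feat),
      1: np.linspace(0.5, 1.0, n_feat)}   # toy class patterns (hypothetical)

def trials(k, n):
    """Simulate n noisy single trials of class k."""
    return mu[k] + noise * rng.standard_normal((n, n_feat))

def pseudotrials(x, size=10):
    """Average non-overlapping groups of single trials into denoised pseudotrials."""
    m = len(x) // size * size
    return x[:m].reshape(-1, size, x.shape[1]).mean(axis=1)

# Train a nearest-class-mean classifier on a small set of denoised pseudotrials ...
train = {k: pseudotrials(trials(k, 300)) for k in (0, 1)}
means = {k: train[k].mean(axis=0) for k in (0, 1)}

# ... and test it on fresh, noisy single trials, mimicking withheld data.
test_x = np.vstack([trials(0, 100), trials(1, 100)])
test_y = np.repeat([0, 1], 100)
pred = np.array([int(np.linalg.norm(x - means[1]) < np.linalg.norm(x - means[0]))
                 for x in test_x])
acc = (pred == test_y).mean()
print(acc)
```

The sketch mirrors the abstract's claim that a classifier trained on a small denoised set can still predict the class of noisy single trials well above chance.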
Affiliation(s)
- Greta Tuckute - Department of Applied Mathematics and Computer Science (DTU Compute), Technical University of Denmark, DK-2800 Kongens Lyngby, Denmark
- Sofie Therese Hansen - Department of Applied Mathematics and Computer Science (DTU Compute), Technical University of Denmark, DK-2800 Kongens Lyngby, Denmark
- Nicolai Pedersen - Department of Applied Mathematics and Computer Science (DTU Compute), Technical University of Denmark, DK-2800 Kongens Lyngby, Denmark
- Dea Steenstrup - Department of Applied Mathematics and Computer Science (DTU Compute), Technical University of Denmark, DK-2800 Kongens Lyngby, Denmark
- Lars Kai Hansen - Department of Applied Mathematics and Computer Science (DTU Compute), Technical University of Denmark, DK-2800 Kongens Lyngby, Denmark
35
The representational dynamics of visual objects in rapid serial visual processing streams. Neuroimage 2019; 188:668-679. [DOI: 10.1016/j.neuroimage.2018.12.046] [Citation(s) in RCA: 41] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2018] [Revised: 12/17/2018] [Accepted: 12/22/2018] [Indexed: 11/15/2022] Open
36
Leonardelli E, Fait E, Fairhall SL. Temporal dynamics of access to amodal representations of category-level conceptual information. Sci Rep 2019; 9:239. [PMID: 30659237 PMCID: PMC6338759 DOI: 10.1038/s41598-018-37429-2] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2018] [Accepted: 12/06/2018] [Indexed: 11/08/2022] Open
Abstract
Categories describe semantic divisions between classes of objects, and category-based models are widely used for investigation of the conceptual system. One critical issue in this endeavour is the isolation of conceptual from perceptual contributions to category differences. An unambiguous way to address this confound is to combine multiple input modalities. To this end, we showed participants person/place stimuli in both name and picture modalities. Using multivariate methods, we searched for category-sensitive neural patterns shared across input modalities and thus independent of perceptual properties. The millisecond temporal resolution of magnetoencephalography (MEG) allowed us to consider the precise timing of conceptual access and, by comparing latencies between the two modalities ("time generalization"), how the latency of processing depends on the input modality. Our results identified category-sensitive conceptual representations common between modalities at three stages, and showed that conceptual access for words was delayed by about 90 msec with respect to pictures. We also show that for pictures, the first conceptual pattern of activity (shared between both words and pictures) occurs as early as 110 msec. Collectively, our results indicate that conceptual access at the category level is a multistage process and that different delays in access across these two input modalities determine when these representations are activated.
Affiliation(s)
- Elisa Leonardelli - Center for Mind/Brain Sciences, University of Trento, Trento, 38068, Italy
- Elisa Fait - Center for Mind/Brain Sciences, University of Trento, Trento, 38068, Italy
- Scott L Fairhall - Center for Mind/Brain Sciences, University of Trento, Trento, 38068, Italy
37
Fahimi Hnazaee M, Khachatryan E, Van Hulle MM. Semantic Features Reveal Different Networks During Word Processing: An EEG Source Localization Study. Front Hum Neurosci 2018; 12:503. [PMID: 30618684 PMCID: PMC6300518 DOI: 10.3389/fnhum.2018.00503] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2018] [Accepted: 11/29/2018] [Indexed: 11/29/2022] Open
Abstract
The neural principles behind semantic category representation are still under debate. Dominant theories mostly focus on distinguishing concrete from abstract concepts, but in such theories, divisions into categories of concrete concepts are more developed than for their abstract counterparts. An encompassing theory of semantic category representation could be within reach when charting the semantic attributes that are capable of describing both concept types. Good candidates are the three semantic dimensions defined by Osgood (potency, valence, arousal). However, to show to what extent they affect semantic processing, specific neuroimaging tools are required. Electroencephalography (EEG) has a temporal resolution on par with cognitive behavior and allows source reconstruction. Using high-density set-ups, it is able to yield a spatial resolution on the scale of millimeters, sufficient to identify anatomical brain parcellations that could differentially contribute to semantic category representation. Cognitive neuroscientists traditionally focus on scalp-domain analysis and turn to source reconstruction when an effect in the scalp domain has been detected. Traditional methods will potentially miss out on the fine-grained effects of semantic features, as these are possibly obscured by the mixing of source activity due to volume conduction. For this reason, we have developed a mass-univariate analysis in the source domain using a linear mixed-effects model. Our analyses reveal distinct networks of sources for different semantic features that are active during different stages of lexico-semantic processing of single words. With our method we identified differences in the spatio-temporal activation patterns of abstract and concrete words, high- and low-potency words, high- and low-valence words, and high- and low-arousal words, and in this way shed light on how word categories are represented in the brain.
Affiliation(s)
- Mansoureh Fahimi Hnazaee - Laboratory for Neuro- and Psychophysiology, Department of Neurosciences, KU Leuven, Leuven, Belgium
38
Words affect visual perception by activating object shape representations. Sci Rep 2018; 8:14156. [PMID: 30237542 PMCID: PMC6148044 DOI: 10.1038/s41598-018-32483-2] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2018] [Accepted: 09/07/2018] [Indexed: 11/08/2022] Open
Abstract
Linguistic labels are known to facilitate object recognition, yet the mechanism of this facilitation is not well understood. Previous psychophysical studies have suggested that words guide visual perception by activating information about visual object shape. Here we aimed to test this hypothesis at the neural level, and to tease apart the visual and semantic contribution of words to visual object recognition. We created a set of object pictures from two semantic categories with varying shapes, and obtained subjective ratings of their shape and category similarity. We then conducted a word-picture matching experiment, while recording participants’ EEG, and tested if the shape or the category similarity between the word’s referent and target picture explained the spatiotemporal pattern of the picture-evoked responses. The results show that hearing a word activates representations of its referent’s shape, which interacts with the visual processing of a subsequent picture within 100 ms from its onset. Furthermore, non-visual categorical information, carried by the word, affects the visual processing at later stages. These findings advance our understanding of the interaction between language and visual perception and provide insights into how the meanings of words are represented in the brain.
39
Attending to Visual Stimuli versus Performing Visual Imagery as a Control Strategy for EEG-based Brain-Computer Interfaces. Sci Rep 2018; 8:13222. [PMID: 30185802 PMCID: PMC6125597 DOI: 10.1038/s41598-018-31472-9] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2018] [Accepted: 08/08/2018] [Indexed: 11/08/2022] Open
Abstract
Currently the most common imagery task used in Brain-Computer Interfaces (BCIs) is motor imagery, in which a user imagines moving a part of the body. This study investigates the possibility of building BCIs on another kind of mental imagery, namely "visual imagery". We study to what extent the mental processes of observing a visual stimulus and imagining it can be distinguished in EEG for BCI use. In each trial, we instructed each of the 26 users who participated in the study to observe a visual cue showing one of two predefined images (a flower or a hammer) and then to imagine the same cue, followed by rest. We investigated whether the different subtrial types, as well as which image was shown in the trial, could be identified from the EEG alone. We obtained the following classifier performances: (i) visual imagery vs. visual observation (71% classification accuracy), (ii) one observation cue vs. the other (61% accuracy), and (iii) rest vs. observation/imagery (77% accuracy for imagery vs. rest and 75% for observation vs. rest). Our results show that the presence of visual imagery, and specifically the related alpha-power changes, is useful for broadening the range of BCI control strategies.
|
40
|
Zafar R, Kamel N, Naufal M, Malik AS, Dass SC, Ahmad RF, Abdullah JM, Reza F. A study of decoding human brain activities from simultaneous data of EEG and fMRI using MVPA. AUSTRALASIAN PHYSICAL & ENGINEERING SCIENCES IN MEDICINE 2018; 41:633-645. [PMID: 29948968 DOI: 10.1007/s13246-018-0656-5] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/27/2017] [Accepted: 06/05/2018] [Indexed: 10/14/2022]
Abstract
Neuroscientists have investigated the functionality of the brain in detail and achieved remarkable results, but this area still needs further research. Functional magnetic resonance imaging (fMRI) is considered the most reliable and accurate technique for decoding human brain activity, whereas electroencephalography (EEG) is a portable and low-cost option in brain research. The purpose of this study is to find out whether EEG can be used to decode brain activity patterns as fMRI does. In fMRI, data from a very specific brain region are enough to decode brain activity patterns because of the quality of the data. EEG, on the other hand, can measure rapid changes in neuronal activity patterns, which mostly occur across different brain regions, thanks to its millisecond-scale temporal resolution. In this study, multivariate pattern analysis (MVPA) is applied to both the EEG and the fMRI data, and information is extracted from distributed activation patterns of the brain. Significant information among the different classes is extracted using a two-sample t-test in both data sets, and classification is then performed with a support vector machine. A fair comparison of the two data sets is made by applying the same analysis techniques to simultaneously collected EEG and fMRI data. The final analysis, based on data from eight participants, yields an average accuracy across conditions of 65.7% for the EEG data set and 64.1% for the fMRI data set. We conclude that EEG is capable of brain decoding with data from multiple brain regions; in other words, decoding accuracy with EEG MVPA is as good as with fMRI MVPA and is above chance level.
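The pipeline this abstract describes (per-feature two-sample t-tests to select significant features, then SVM classification) can be sketched as follows. The data are synthetic and every parameter choice (feature count, kernel, fold count) is an illustrative assumption, not the study's actual setting. For two classes, scikit-learn's `f_classif` score is equivalent to a two-sample t-test (F = t²), and wrapping selection in a pipeline keeps it inside each cross-validation fold:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic trials-by-features data for two stimulus classes
X = rng.normal(size=(80, 500))
y = np.repeat([0, 1], 40)
X[y == 1, :20] += 0.8  # inject a class difference into 20 features

# Select the most discriminative features (for two classes, the
# f_classif F-score is the squared two-sample t-statistic), then
# classify with a linear SVM; selection happens inside each CV fold
clf = make_pipeline(SelectKBest(f_classif, k=50), SVC(kernel="linear"))
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```

Selecting features on the full data before cross-validation would leak class information into the test folds; the pipeline form avoids that.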
Affiliation(s)
- Raheel Zafar
- Department of Engineering, National University of Modern Languages, Islamabad, Pakistan
| | - Nidal Kamel
- Centre for Intelligent Signal and Imaging Research (CISIR), Universiti Teknologi PETRONAS, Perak, Malaysia
| | - Mohamad Naufal
- Centre for Intelligent Signal and Imaging Research (CISIR), Universiti Teknologi PETRONAS, Perak, Malaysia
| | - Aamir Saeed Malik
- Centre for Intelligent Signal and Imaging Research (CISIR), Universiti Teknologi PETRONAS, Perak, Malaysia.
| | - Sarat C Dass
- Centre for Intelligent Signal and Imaging Research (CISIR), Universiti Teknologi PETRONAS, Perak, Malaysia
| | - Rana Fayyaz Ahmad
- Centre for Intelligent Signal and Imaging Research (CISIR), Universiti Teknologi PETRONAS, Perak, Malaysia
| | - Jafri M Abdullah
- Center for Neuroscience Services and Research, Universiti Sains Malaysia, Kubang Kerian, 16150, Kota Bharu, Kelantan, Malaysia.,Department of Neurosciences, School of Medical Sciences, Universiti Sains Malaysia, Kubang Kerian, 16150, Kota Bharu, Kelantan, Malaysia
| | - Faruque Reza
- Center for Neuroscience Services and Research, Universiti Sains Malaysia, Kubang Kerian, 16150, Kota Bharu, Kelantan, Malaysia.,Department of Neurosciences, School of Medical Sciences, Universiti Sains Malaysia, Kubang Kerian, 16150, Kota Bharu, Kelantan, Malaysia
|
41
|
Segaert K, Mazaheri A, Hagoort P. Binding language: structuring sentences through precisely timed oscillatory mechanisms. Eur J Neurosci 2018; 48:2651-2662. [DOI: 10.1111/ejn.13816] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2017] [Revised: 12/06/2017] [Accepted: 12/14/2017] [Indexed: 01/22/2023]
Affiliation(s)
- Katrien Segaert
- School of Psychology; University of Birmingham; Edgbaston Birmingham UK
- Centre for Human Brain Health; University of Birmingham; Birmingham UK
- Max Planck Institute for Psycholinguistics; Nijmegen The Netherlands
| | - Ali Mazaheri
- School of Psychology; University of Birmingham; Edgbaston Birmingham UK
- Centre for Human Brain Health; University of Birmingham; Birmingham UK
| | - Peter Hagoort
- Max Planck Institute for Psycholinguistics; Nijmegen The Netherlands
- Centre for Cognitive Neuroimaging; Donders Institute for Brain; Cognition and Behaviour; Radboud University Nijmegen; Nijmegen The Netherlands
|
42
|
van Vliet M, Van Hulle MM, Salmelin R. Exploring the Organization of Semantic Memory through Unsupervised Analysis of Event-related Potentials. J Cogn Neurosci 2017; 30:381-392. [PMID: 29211653 DOI: 10.1162/jocn_a_01211] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Modern multivariate methods have enabled the application of unsupervised techniques to analyze neurophysiological data without strict adherence to predefined experimental conditions. We demonstrate a multivariate method that leverages priming effects on the evoked potential to perform hierarchical clustering on a set of word stimuli. The current study focuses on the semantic relationships that play a key role in the organization of our mental lexicon of words and concepts. The N400 component of the event-related potential is considered a reliable neurophysiological response that is indicative of whether accessing one concept facilitates subsequent access to another (i.e., one "primes" the other). To further our understanding of the organization of the human mental lexicon, we propose to utilize the N400 component to drive a clustering algorithm that can uncover, given a set of words, which particular subsets of words show mutual priming. Such a scheme requires a reliable measurement of the amplitude of the N400 component without averaging across many trials, which was here achieved using a recently developed multivariate analysis method based on beamforming. We validated our method by demonstrating that it can reliably detect, without any prior information about the nature of the stimuli, a well-known feature of the organization of our semantic memory: the distinction between animate and inanimate concepts. These results motivate further application of our method to data-driven exploration of disputed or unknown relationships between stimuli.
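As a sketch of the clustering step described above: given pairwise priming strengths between words (here synthetic; the paper derives them from beamformer-based single-trial N400 amplitude estimates), hierarchical clustering can recover group structure such as the animate/inanimate distinction. The word list and distance values below are illustrative assumptions:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(1)
words = ["dog", "cat", "horse", "table", "chair", "lamp"]
n = len(words)

# Hypothetical pairwise priming distances: lower values mean
# stronger mutual priming (animates prime animates, etc.)
dist = rng.uniform(0.8, 1.0, size=(n, n))
dist[:3, :3] = rng.uniform(0.1, 0.3, size=(3, 3))  # animate block
dist[3:, 3:] = rng.uniform(0.1, 0.3, size=(3, 3))  # inanimate block
dist = (dist + dist.T) / 2
np.fill_diagonal(dist, 0.0)

# Hierarchical clustering on the condensed distance matrix
Z = linkage(squareform(dist), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
print(dict(zip(words, labels)))
```

With the within-group distances well below the between-group distances, the two-cluster cut separates the animate from the inanimate words without any prior labels, which is the data-driven property the study exploits.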
|
43
|
Hramov AE, Maksimenko VA, Pchelintseva SV, Runnova AE, Grubov VV, Musatov VY, Zhuravlev MO, Koronovskii AA, Pisarchik AN. Classifying the Perceptual Interpretations of a Bistable Image Using EEG and Artificial Neural Networks. Front Neurosci 2017; 11:674. [PMID: 29255403 PMCID: PMC5722852 DOI: 10.3389/fnins.2017.00674] [Citation(s) in RCA: 68] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2017] [Accepted: 11/20/2017] [Indexed: 01/04/2023] Open
Abstract
In order to classify different human brain states related to the visual perception of ambiguous images, we use an artificial neural network (ANN) to analyze multichannel EEG. The classifier, built on a multilayer perceptron, achieves up to 95% accuracy in classifying EEG patterns corresponding to two different interpretations of the Necker cube. An important feature of our classifier is that, once trained on one subject, it can be used to classify the EEG traces of other subjects. This result suggests the existence of common features in the EEG structure associated with distinct interpretations of bistable objects. We believe the significance of our results is not limited to visual perception of Necker cube images; the proposed experimental approach and the computational technique based on ANNs can also be applied to study and classify different brain states using neurophysiological recordings. This may open new directions for future research on cognitive and pathological brain activity and for the development of brain-computer interfaces.
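A minimal sketch of the cross-subject idea, using scikit-learn's `MLPClassifier` on synthetic feature vectors: train a multilayer perceptron on one "subject" and test it on another whose data share the class signature but carry a small global offset. The architecture, feature dimensionality, and simulated between-subject shift are all assumptions, not the authors' setup:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)

def subject_data(n=100, shift=0.0):
    """Synthetic EEG feature vectors for two perceptual interpretations."""
    X = rng.normal(shift, 1.0, size=(n, 64))
    y = rng.integers(0, 2, size=n)
    X[y == 1, :8] += 1.5  # class signature shared across subjects
    return X, y

X_train, y_train = subject_data()          # "subject 1"
X_test, y_test = subject_data(shift=0.2)   # "subject 2", slightly offset

# Multilayer perceptron trained on one subject, tested on another
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
mlp.fit(X_train, y_train)
acc = mlp.score(X_test, y_test)
print(f"cross-subject accuracy: {acc:.2f}")
```

Transfer works here because the discriminative pattern is common to both subjects and larger than the between-subject offset, which mirrors the "common EEG features across subjects" interpretation in the abstract.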
Affiliation(s)
- Alexander E Hramov
- REC "Artificial Intelligence Systems and Neurotechnology", Yuri Gagarin State Technical University of Saratov, Saratov, Russia.,Faculty of Nonlinear Processes, Saratov State University, Saratov, Russia
| | - Vladimir A Maksimenko
- REC "Artificial Intelligence Systems and Neurotechnology", Yuri Gagarin State Technical University of Saratov, Saratov, Russia
| | - Svetlana V Pchelintseva
- REC "Artificial Intelligence Systems and Neurotechnology", Yuri Gagarin State Technical University of Saratov, Saratov, Russia
| | - Anastasiya E Runnova
- REC "Artificial Intelligence Systems and Neurotechnology", Yuri Gagarin State Technical University of Saratov, Saratov, Russia
| | - Vadim V Grubov
- REC "Artificial Intelligence Systems and Neurotechnology", Yuri Gagarin State Technical University of Saratov, Saratov, Russia
| | - Vyacheslav Yu Musatov
- REC "Artificial Intelligence Systems and Neurotechnology", Yuri Gagarin State Technical University of Saratov, Saratov, Russia
| | - Maksim O Zhuravlev
- REC "Artificial Intelligence Systems and Neurotechnology", Yuri Gagarin State Technical University of Saratov, Saratov, Russia.,Faculty of Nonlinear Processes, Saratov State University, Saratov, Russia
| | - Alexey A Koronovskii
- REC "Artificial Intelligence Systems and Neurotechnology", Yuri Gagarin State Technical University of Saratov, Saratov, Russia.,Faculty of Nonlinear Processes, Saratov State University, Saratov, Russia
| | - Alexander N Pisarchik
- REC "Artificial Intelligence Systems and Neurotechnology", Yuri Gagarin State Technical University of Saratov, Saratov, Russia.,Center for Biomedical Technology, Technical University of Madrid, Madrid, Spain
|
44
|
Mensen A, Marshall W, Tononi G. EEG Differentiation Analysis and Stimulus Set Meaningfulness. Front Psychol 2017; 8:1748. [PMID: 29056921 PMCID: PMC5635725 DOI: 10.3389/fpsyg.2017.01748] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2017] [Accepted: 09/21/2017] [Indexed: 11/13/2022] Open
Abstract
A set of images can be considered as meaningfully different for an observer if they can be distinguished phenomenally from one another. Each phenomenal difference must be supported by some neurophysiological differences. Differentiation analysis aims to quantify neurophysiological differentiation evoked by a given set of stimuli to assess its meaningfulness to the individual observer. As a proof of concept using high-density EEG, we show increased neurophysiological differentiation for a set of natural, meaningfully different images in contrast to another set of artificially generated, meaninglessly different images in nine participants. Stimulus-evoked neurophysiological differentiation (over 257 channels, 800 ms) was systematically greater for meaningful vs. meaningless stimulus categories both at the group level and for individual subjects. Spatial breakdown showed a central-posterior peak of differentiation, consistent with the visual nature of the stimulus sets. Temporal breakdown revealed an early peak of differentiation around 110 ms, prominent in the central-posterior region; and a later, longer-lasting peak at 300-500 ms that was spatially more distributed. The early peak of differentiation was not accompanied by changes in mean ERP amplitude, whereas the later peak was associated with a higher amplitude ERP for meaningful images. An ERP component similar to visual-awareness-negativity occurred during the nadir of differentiation across all image types. Control stimulus sets and further analysis indicate that changes in neurophysiological differentiation between meaningful and meaningless stimulus sets could not be accounted for by spatial properties of the stimuli or by stimulus novelty and predictability.
Affiliation(s)
- Armand Mensen
- Center for Sleep and Consciousness, University of Wisconsin-Madison, Madison, WI, United States.,Department of Neurology, Inselspital Bern, Bern, Switzerland
| | - William Marshall
- Center for Sleep and Consciousness, University of Wisconsin-Madison, Madison, WI, United States
| | - Giulio Tononi
- Center for Sleep and Consciousness, University of Wisconsin-Madison, Madison, WI, United States
|
45
|
Turner WF, Johnston P, de Boer K, Morawetz C, Bode S. Multivariate pattern analysis of event-related potentials predicts the subjective relevance of everyday objects. Conscious Cogn 2017; 55:46-58. [DOI: 10.1016/j.concog.2017.07.006] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2017] [Revised: 06/09/2017] [Accepted: 07/17/2017] [Indexed: 12/31/2022]
|
46
|
Alizadeh S, Jamalabadi H, Schönauer M, Leibold C, Gais S. Decoding cognitive concepts from neuroimaging data using multivariate pattern analysis. Neuroimage 2017; 159:449-458. [DOI: 10.1016/j.neuroimage.2017.07.058] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2016] [Revised: 05/26/2017] [Accepted: 07/28/2017] [Indexed: 12/01/2022] Open
|
47
|
Kuo BC, Li CH, Lin SH, Hu SH, Yeh YY. Top-down modulation of alpha power and pattern similarity for threatening representations in visual short-term memory. Neuropsychologia 2017; 106:21-30. [PMID: 28887064 DOI: 10.1016/j.neuropsychologia.2017.09.001] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2017] [Revised: 09/01/2017] [Accepted: 09/01/2017] [Indexed: 12/22/2022]
Abstract
Recent studies have shown that top-down attention biases task-relevant representations in visual short-term memory (VSTM). Accumulating evidence has also revealed the modulatory effects of emotional arousal on attentional processing. However, it remains unclear how top-down attention interacts with emotional memoranda in VSTM. In this study, we investigated the mechanisms of alpha oscillations and their spatiotemporal characteristics that underlie top-down attention to threatening representations during VSTM maintenance with electroencephalography. Participants were instructed to remember a threatening object and a neutral object in a cued variant delayed response task. Retrospective cues (retro-cues) were presented to direct attention to the hemifield of a threatening object (i.e., cue-to-threat trials) or a neutral object (i.e., cue-to-neutral trials) during a retention interval prior to the probe test. We found a significant retro-cue-related alpha lateralisation over posterior regions during VSTM maintenance. The novel finding was that the magnitude of alpha lateralisation was greater for cue-to-threat objects compared to cue-to-neutral ones. These results indicated that directing attention towards threatening representations compared to neutral representations could result in greater regulation of alpha activity contralateral to the cued hemifield. Importantly, we estimated the spatiotemporal pattern similarity in alpha activity and found significantly higher similarity indexes for the posterior regions relative to the anterior regions and for the cue-to-threat objects relative to cue-to-neutral objects over the posterior regions. Together, our findings provided the oscillatory evidence of greater top-down modulations of alpha lateralisation and spatiotemporal pattern similarity for attending to threatening representations in VSTM.
Affiliation(s)
- Bo-Cheng Kuo
- Department of Psychology, National Taiwan University, Taiwan.
| | - Chun-Hui Li
- Department of Psychology, National Taiwan University, Taiwan; Research Center for Information Technology Innovation, Academia Sinica, Taiwan
| | - Szu-Hung Lin
- Department of Psychology, National Taiwan University, Taiwan
| | - Sheng-Hung Hu
- Department of Psychology, National Taiwan University, Taiwan
| | - Yei-Yu Yeh
- Department of Psychology, National Taiwan University, Taiwan.
|
48
|
Abstract
We live our lives surrounded by symbols (e.g., road signs, logos, but especially words and numbers), and throughout our life we use them to evoke, communicate and reflect upon ideas and things that are not currently present to our senses. Symbols are represented in our brains at different levels of complexity: at the first and most simple level, as physical entities, in the corresponding primary and secondary sensory cortices. The crucial property of symbols, however, is that, despite the simplicity of their surface forms, they have the power of evoking higher order multifaceted representations that are implemented in distributed neural networks spanning a large portion of the cortex. The rich internal states that reflect our knowledge of the meaning of symbols are what we call semantic representations. In this review paper, we summarize our current knowledge of both the cognitive and neural substrates of semantic representations, focusing on concrete words (i.e., nouns or verbs referring to concrete objects and actions), which, together with numbers, are the most-studied and well defined classes of symbols. Following a systematic descriptive approach, we will organize this literature review around two key questions: what is the content of semantic representations? And, how are semantic representations implemented in the brain, in terms of localization and dynamics? While highlighting the main current opposing perspectives on these topics, we propose that a fruitful way to make substantial progress in this domain would be to adopt a geometrical view of semantic representations as points in high dimensional space, and to operationally partition the space of concrete word meaning into motor-perceptual and conceptual dimensions. By giving concrete examples of the kinds of research that can be done within this perspective, we illustrate how we believe this framework will foster theoretical speculations as well as empirical research.
Affiliation(s)
- Valentina Borghesani
- École Doctorale Cerveau-Cognition-Comportement, Université Pierre et Marie Curie - Paris 6, 75005 Paris, France; Cognitive Neuroimaging Unit, Institut National de la Santé et de la Recherche Médicale, U992, F-91191 Gif/Yvette, France; Center for Mind/Brain Sciences, University of Trento, 38068 Rovereto, Italy.
| | - Manuela Piazza
- Cognitive Neuroimaging Unit, Institut National de la Santé et de la Recherche Médicale, U992, F-91191 Gif/Yvette, France; Center for Mind/Brain Sciences, University of Trento, 38068 Rovereto, Italy
|
49
|
Zafar R, Dass SC, Malik AS. Electroencephalogram-based decoding cognitive states using convolutional neural network and likelihood ratio based score fusion. PLoS One 2017; 12:e0178410. [PMID: 28558002 PMCID: PMC5448783 DOI: 10.1371/journal.pone.0178410] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2016] [Accepted: 05/13/2017] [Indexed: 11/18/2022] Open
Abstract
Decoding human brain activity from the electroencephalogram (EEG) is challenging owing to the low spatial resolution of EEG. Nevertheless, EEG is an important technique, especially for brain-computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, a convolutional neural network is modified for feature extraction, a t-test is used to select significant features, and likelihood-ratio-based score fusion is used to predict brain activity. The proposed algorithm takes multichannel EEG time series as input, an approach also known as multivariate pattern analysis. A comprehensive analysis was conducted using data from 30 participants, and the results were compared with recognized feature-extraction and classification/prediction techniques. The most popular current method, wavelet-transform feature extraction with a support vector machine, showed an accuracy of 65.7%, whereas the proposed method predicts novel data with an improved accuracy of 79.9%. In conclusion, the proposed algorithm outperformed the current feature-extraction and prediction methods.
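The likelihood-ratio score-fusion stage can be illustrated in isolation: fit one Gaussian per class to each score source on calibration data, convert test scores to log-likelihood ratios, and sum them across sources. The Gaussian score model and every distribution below are illustrative assumptions; in the paper the scores come from CNN-extracted, t-test-selected features:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def calibrate(scores_a, scores_b):
    """Fit one Gaussian per class; return a log-likelihood-ratio function."""
    pa = norm(scores_a.mean(), scores_a.std())
    pb = norm(scores_b.mean(), scores_b.std())
    return lambda s: pa.logpdf(s) - pb.logpdf(s)

# Two score sources calibrated on synthetic stand-ins for the
# outputs of two classifiers/feature sets (class A vs. class B)
llr1 = calibrate(rng.normal(1.0, 1.0, 200), rng.normal(-1.0, 1.0, 200))
llr2 = calibrate(rng.normal(0.5, 1.0, 200), rng.normal(-0.5, 1.0, 200))

# Fusion on held-out class-A test scores: sum the LLRs, decide A if > 0
s1 = rng.normal(1.0, 1.0, 500)
s2 = rng.normal(0.5, 1.0, 500)
recall_a = np.mean(llr1(s1) + llr2(s2) > 0)
print(f"class-A recall after fusion: {recall_a:.2f}")
```

Summing log-likelihood ratios is the Bayes-optimal fusion rule when the score sources are conditionally independent given the class, which is the usual motivation for this kind of fusion.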
Affiliation(s)
- Raheel Zafar
- Centre for Intelligent Signal and Imaging Research (CISIR), Universiti Teknologi PETRONAS, Bandar Seri Iskandar, Perak, Malaysia
| | - Sarat C. Dass
- Centre for Intelligent Signal and Imaging Research (CISIR), Universiti Teknologi PETRONAS, Bandar Seri Iskandar, Perak, Malaysia
| | - Aamir Saeed Malik
- Centre for Intelligent Signal and Imaging Research (CISIR), Universiti Teknologi PETRONAS, Bandar Seri Iskandar, Perak, Malaysia
|
50
|
Roldan SM. Object Recognition in Mental Representations: Directions for Exploring Diagnostic Features through Visual Mental Imagery. Front Psychol 2017; 8:833. [PMID: 28588538 PMCID: PMC5441390 DOI: 10.3389/fpsyg.2017.00833] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2017] [Accepted: 05/08/2017] [Indexed: 11/13/2022] Open
Abstract
One of the fundamental goals of object recognition research is to understand how a cognitive representation produced from the output of filtered and transformed sensory information facilitates efficient viewer behavior. Given that mental imagery strongly resembles perceptual processes in both cortical regions and subjective visual qualities, it is reasonable to question whether mental imagery facilitates cognition in a manner similar to that of perceptual viewing: via the detection and recognition of distinguishing features. Categorizing the feature content of mental imagery holds potential as a reverse pathway by which to identify the components of a visual stimulus which are most critical for the creation and retrieval of a visual representation. This review will examine the likelihood that the information represented in visual mental imagery reflects distinctive object features thought to facilitate efficient object categorization and recognition during perceptual viewing. If it is the case that these representational features resemble their sensory counterparts in both spatial and semantic qualities, they may well be accessible through mental imagery as evaluated through current investigative techniques. In this review, methods applied to mental imagery research and their findings are reviewed and evaluated for their efficiency in accessing internal representations, and implications for identifying diagnostic features are discussed. An argument is made for the benefits of combining mental imagery assessment methods with diagnostic feature research to advance the understanding of visual perceptive processes, with suggestions for avenues of future investigation.
Affiliation(s)
- Stephanie M. Roldan
- Virginia Tech Visual Neuroscience Laboratory, Psychology Department, Virginia Polytechnic Institute and State University, Blacksburg, VA, United States
|