1
Liu W, Pan X, Zhou X. The Temporal Dynamics of Stop Consonant Perception: Evidence from Context Effects. Lang Speech 2023; 66:1046-1055. [PMID: 36775903] [DOI: 10.1177/00238309231153355]
Abstract
Empirical evidence and theoretical models suggest that phonetic category perception involves two stages of auditory and phonetic processing. However, few studies have examined the time course of these two stages. Using brief stop consonant segments as context stimuli, this study examined the temporal dynamics of stop consonant perception by varying the inter-stimulus interval between context and target stimuli. The results suggest that phonetic category activation of stop consonants may emerge within the first 100 ms of processing. Furthermore, phonetic category activation produced contrastive context effects on identification of the target stop continuum, whereas the auditory processing of stop consonants produced a context effect distinct from those caused by phonetic category activation. The findings provide further evidence for the two-stage model of speech perception and help delineate the time course of auditory and phonetic processing.
Affiliation(s)
- Wenli Liu
- Department of Social Psychology, Zhou Enlai School of Government, Nankai University, China
- Xiaoguang Pan
- Department of Social Psychology, Zhou Enlai School of Government, Nankai University, China
- Xiang Zhou
- Department of Social Psychology, Zhou Enlai School of Government, Nankai University, China
2
Park JJ, Baek SC, Suh MW, Choi J, Kim SJ, Lim Y. The effect of topic familiarity and volatility of auditory scene on selective auditory attention. Hear Res 2023; 433:108770. [PMID: 37104990] [DOI: 10.1016/j.heares.2023.108770]
Abstract
Selective auditory attention has been shown to modulate the cortical representation of speech, an effect that is well documented in acoustically challenging environments. However, the influence of top-down factors, in particular topic familiarity, on this process remains unclear, despite evidence that semantic information can promote speech-in-noise perception. Moreover, beyond the individual features that form a static listening condition, dynamic and irregular changes of the auditory scene (volatile listening environments) have been less studied. To address these gaps, we explored the influence of topic familiarity and volatile listening on selective auditory attention during dichotic listening using electroencephalography. When stories with unfamiliar topics were presented, participants' comprehension was severely degraded; nevertheless, their cortical activity still selectively tracked the speech of the target story. This implies that topic familiarity hardly influences the neural index of speech tracking, at least when bottom-up information is sufficient. However, when the listening environment was volatile and listeners had to re-engage with new speech whenever the auditory scene changed, the neural correlates of the attended speech were degraded. In particular, the cortical response to the attended speech and the spatial asymmetry of responses to leftward versus rightward attention were significantly attenuated around 100-200 ms after speech onset. These findings suggest that volatile listening environments can adversely affect the modulatory effect of selective attention, possibly by hampering proper attention through increased perceptual load.
Affiliation(s)
- Jonghwa Jeonglok Park
- Center for Intelligent & Interactive Robotics, Artificial Intelligence and Robot Institute, Korea Institute of Science and Technology, Seoul 02792, South Korea
- Department of Electrical and Computer Engineering, College of Engineering, Seoul National University, Seoul 08826, South Korea
- Seung-Cheol Baek
- Center for Intelligent & Interactive Robotics, Artificial Intelligence and Robot Institute, Korea Institute of Science and Technology, Seoul 02792, South Korea
- Research Group Neurocognition of Music and Language, Max Planck Institute for Empirical Aesthetics, Grüneburgweg 14, Frankfurt am Main 60322, Germany
- Myung-Whan Suh
- Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University Hospital, Seoul 03080, South Korea
- Jongsuk Choi
- Center for Intelligent & Interactive Robotics, Artificial Intelligence and Robot Institute, Korea Institute of Science and Technology, Seoul 02792, South Korea
- Department of AI Robotics, KIST School, Korea University of Science and Technology, Seoul 02792, South Korea
- Sung June Kim
- Department of Electrical and Computer Engineering, College of Engineering, Seoul National University, Seoul 08826, South Korea
- Yoonseob Lim
- Center for Intelligent & Interactive Robotics, Artificial Intelligence and Robot Institute, Korea Institute of Science and Technology, Seoul 02792, South Korea
- Department of HY-KIST Bio-convergence, Hanyang University, Seoul 04763, South Korea
3
Centanni TM, Beach SD, Ozernov-Palchik O, May S, Pantazis D, Gabrieli JDE. Categorical perception and influence of attention on neural consistency in response to speech sounds in adults with dyslexia. Ann Dyslexia 2022; 72:56-78. [PMID: 34495457] [PMCID: PMC8901776] [DOI: 10.1007/s11881-021-00241-1]
Abstract
Developmental dyslexia is a common neurodevelopmental disorder associated with alterations in the behavioral and neural processing of speech sounds, but the scope and nature of that association are uncertain. It has been proposed that more variable auditory processing could underlie some of the core deficits of the disorder. In the current study, magnetoencephalography (MEG) data were acquired from adults with and without dyslexia while they passively listened to, or actively categorized, tokens from a /ba/-/da/ consonant continuum. We observed no significant group difference in active categorical perception of this continuum in either of our two behavioral assessments. During passive listening, adults with dyslexia exhibited neural responses that were as consistent as those of typically reading adults in six cortical regions associated with auditory perception, language, and reading. However, they exhibited significantly less consistency in the left supramarginal gyrus, where greater inconsistency correlated significantly with worse decoding skills in the group with dyslexia. The group difference in the left supramarginal gyrus emerged only when neural data were binned at high temporal resolution, and only during the passive condition. Interestingly, consistency improved significantly in both groups during active categorization relative to passive listening. These findings suggest that adults with dyslexia exhibit typical levels of neural consistency in response to speech sounds, with the exception of the left supramarginal gyrus, and that this consistency increases similarly in both groups during active versus passive perception of speech sounds.
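The inter-trial consistency measure described above can be illustrated with a toy computation. This is a hypothetical sketch, not the authors' MEG pipeline: the trial counts, noise level, and bin sizes below are invented. The idea is to bin each trial's time course and take the mean pairwise correlation across trials, at fine and coarse temporal resolution.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_samples = 30, 600            # e.g. 600 ms of data at 1 kHz (assumed)
shared = np.sin(np.linspace(0, 4 * np.pi, n_samples))   # stimulus-driven component
trials = shared + 0.5 * rng.normal(size=(n_trials, n_samples))  # add trial-specific noise

def consistency(trials, bin_size):
    """Mean pairwise Pearson r between trials after temporal binning."""
    n_bins = trials.shape[1] // bin_size
    binned = trials[:, : n_bins * bin_size].reshape(len(trials), n_bins, bin_size).mean(axis=2)
    r = np.corrcoef(binned)                      # trials x trials correlation matrix
    iu = np.triu_indices(len(trials), k=1)       # upper triangle: unique trial pairs
    return r[iu].mean()

fine = consistency(trials, bin_size=10)      # high temporal resolution
coarse = consistency(trials, bin_size=200)   # coarse binning
```

With a strong shared component, both values are high; group differences of the kind reported above would appear as lower mean pairwise correlation in one group, and may be visible only at the finer bin size.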
Affiliation(s)
- T M Centanni
- McGovern Institute for Brain Research and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Psychology, Texas Christian University, Fort Worth, TX, USA
- S D Beach
- McGovern Institute for Brain Research and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA, USA
- O Ozernov-Palchik
- McGovern Institute for Brain Research and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- S May
- McGovern Institute for Brain Research and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Boston College, Boston, MA, USA
- D Pantazis
- McGovern Institute for Brain Research and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- J D E Gabrieli
- McGovern Institute for Brain Research and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
4
Nagata K, Kunii N, Shimada S, Fujitani S, Takasago M, Saito N. Spatiotemporal target selection for intracranial neural decoding of abstract and concrete semantics. Cereb Cortex 2022; 32:5544-5554. [PMID: 35169837] [PMCID: PMC9753048] [DOI: 10.1093/cercor/bhac034]
Abstract
Decoding the inner representation of a word's meaning from human cortical activity is a substantial challenge in the development of speech brain-machine interfaces (BMIs). The semantic aspect of speech is a novel decoding target that may enable versatile communication platforms for individuals with impaired speech ability; however, there is a paucity of electrocorticography studies in this field. We decoded the semantic representation of a word from single-trial cortical activity during an imageability-based property identification task that required participants to discriminate between abstract and concrete words. Using high gamma activity in the language-dominant hemisphere, a support vector machine classifier discriminated the two word categories with significantly above-chance accuracy (73.1 ± 7.5%). Activity in specific time components from two brain regions was identified as a significant predictor of the abstract-concrete dichotomy. Classification using these feature components revealed that comparable prediction accuracy could be obtained with a spatiotemporally targeted decoding approach. Our study demonstrates that mental representations of abstract and concrete word processing can be decoded from cortical high gamma activity, and that the coverage of implanted electrodes and the time window of analysis can be successfully minimized. These findings lay the foundation for the future development of semantic-based speech BMIs.
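The kind of single-trial classification described above can be sketched in a few lines. This is an illustrative toy on synthetic data, not the authors' pipeline: the trial count, feature count, and injected effect size are invented, and "features" stand in for high-gamma power values per electrode and time bin.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_features = 120, 40          # trials x (electrode, time-bin) features (made up)
y = rng.integers(0, 2, n_trials)        # 0 = abstract, 1 = concrete (hypothetical labels)
X = rng.normal(size=(n_trials, n_features))
X[y == 1, :5] += 1.0                    # inject a weak class difference in 5 features

# Standardize features, then fit a linear SVM under 5-fold cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)   # chance level is ~0.5 for balanced classes
```

Cross-validated accuracy well above 0.5 indicates decodable category information; restricting `X` to a subset of columns would mimic the spatiotemporally targeted decoding described above.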
Affiliation(s)
- Keisuke Nagata
- Department of Neurosurgery, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Naoto Kunii
- Corresponding author: Department of Neurosurgery, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Seijiro Shimada
- Department of Neurosurgery, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Shigeta Fujitani
- Department of Neurosurgery, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Megumi Takasago
- Department of Neurosurgery, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Nobuhito Saito
- Department of Neurosurgery, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
5
Kaestner E, Wu X, Friedman D, Dugan P, Devinsky O, Carlson C, Doyle W, Thesen T, Halgren E. The Precentral Gyrus Contributions to the Early Time-Course of Grapheme-to-Phoneme Conversion. Neurobiol Lang 2022; 3:18-45. [PMID: 37215328] [PMCID: PMC10158576] [DOI: 10.1162/nol_a_00047]
Abstract
In models of silent reading, visual orthographic information is transduced into an auditory phonological code through grapheme-to-phoneme conversion (GPC). This process is often identified with lateral temporal-parietal regions associated with auditory phoneme encoding. However, the role of articulatory phonemic representations and of the precentral gyrus in GPC is ambiguous. Although the precentral gyrus is implicated in many functional MRI studies of reading, it is not clear whether the time course of activity in this region is consistent with its involvement in GPC. We recorded cortical electrophysiology from eight patients with perisylvian subdural electrodes during a bimodal match/mismatch task that necessitated GPC: patients made a match/mismatch decision between a three-letter string and the following auditory bi-phoneme. We characterized the distribution and timing of evoked broadband high gamma activity (70-170 Hz) as well as phase-locking between electrodes. The precentral gyrus showed a high concentration of broadband high gamma responses to visual and auditory language as well as mismatch effects; the pars opercularis, supramarginal gyrus, and superior temporal gyrus were also involved. The precentral gyrus showed strong phase-locking with the caudal fusiform gyrus during letter-string presentation and with surrounding perisylvian cortex during the bimodal visual-auditory comparison period. These findings hint at a role for precentral cortex in transducing visual into auditory codes during silent reading.
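Broadband high gamma amplitude of the kind analyzed above is commonly estimated by band-pass filtering and taking the analytic-signal envelope. The following is a sketch on a synthetic signal, not the authors' exact pipeline: the sampling rate and burst parameters are invented, and a 120 Hz burst stands in for a task-evoked high-gamma response.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                              # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)
# Synthetic trace: a 120 Hz burst centered at t = 1.0 s (sigma = 0.1 s)
sig = np.sin(2 * np.pi * 120 * t) * np.exp(-((t - 1.0) ** 2) / 0.02)

# Band-pass 70-170 Hz (the broadband high gamma range cited above)
b, a = butter(4, [70 / (fs / 2), 170 / (fs / 2)], btype="band")
hg = filtfilt(b, a, sig)                 # zero-phase filtering, no group delay
envelope = np.abs(hilbert(hg))           # analytic amplitude of the band-passed signal
peak_time = t[np.argmax(envelope)]       # when the high-gamma response peaks
```

Because `filtfilt` is zero-phase, the envelope peak recovers the latency of the burst, which is the quantity needed for the timing analyses described above.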
Affiliation(s)
- Erik Kaestner
- Center for Multimodal Imaging and Genetics, University of California, San Diego, USA
- Xiaojing Wu
- Department of Neurology, NYU Langone School of Medicine, New York, USA
- Daniel Friedman
- Department of Neurology, NYU Langone School of Medicine, New York, USA
- Patricia Dugan
- Department of Neurology, NYU Langone School of Medicine, New York, USA
- Orrin Devinsky
- Department of Neurology, NYU Langone School of Medicine, New York, USA
- Chad Carlson
- Department of Neurology, Medical College of Wisconsin, Milwaukee, USA
- Werner Doyle
- Department of Neurology, NYU Langone School of Medicine, New York, USA
- Department of Neurosurgery, NYU Langone School of Medicine, New York, USA
- Thomas Thesen
- Department of Neurology, NYU Langone School of Medicine, New York, USA
- Eric Halgren
- Department of Neurosciences, University of California at San Diego, La Jolla, USA
- Department of Radiology, University of California at San Diego, La Jolla, USA
6
Paulk AC, Yang JC, Cleary DR, Soper DJ, Halgren M, O’Donnell AR, Lee SH, Ganji M, Ro YG, Oh H, Hossain L, Lee J, Tchoe Y, Rogers N, Kiliç K, Ryu SB, Lee SW, Hermiz J, Gilja V, Ulbert I, Fabó D, Thesen T, Doyle WK, Devinsky O, Madsen JR, Schomer DL, Eskandar EN, Lee JW, Maus D, Devor A, Fried SI, Jones PS, Nahed BV, Ben-Haim S, Bick SK, Richardson RM, Raslan AM, Siler DA, Cahill DP, Williams ZM, Cosgrove GR, Dayeh SA, Cash SS. Microscale Physiological Events on the Human Cortical Surface. Cereb Cortex 2021; 31:3678-3700. [PMID: 33749727] [PMCID: PMC8258438] [DOI: 10.1093/cercor/bhab040]
Abstract
Despite ongoing advances in our understanding of local single-cell and network-level activity of neuronal populations in the human brain, extraordinarily little is known about their "intermediate" microscale local circuit dynamics. Here, we used ultra-high-density microelectrode arrays and a rare opportunity to perform intracranial recordings across multiple cortical areas in human participants to identify three distinct classes of cortical activity that are not locked to ongoing natural brain rhythms. The first comprised fast waveforms similar to extracellular single-unit activity. The other two types were discrete events with slower waveform dynamics, found preferentially in upper cortical layers. These second and third types were also observed in rodents, nonhuman primates, and semi-chronic recordings from humans via laminar and Utah array microelectrodes. The rates of all three event types were selectively modulated by auditory and electrical stimuli, pharmacological manipulation, and cold saline application, and the events showed small causal co-occurrences. These results suggest that the proper combination of high-resolution microelectrodes and analytic techniques can capture neuronal dynamics that lie between somatic action potentials and aggregate population activity. Understanding these intermediate microscale dynamics in relation to single-cell and network dynamics may reveal important details about activity in the full cortical circuit.
Affiliation(s)
- Angelique C Paulk
- Department of Neurology, Massachusetts General Hospital, Boston, MA 02114, USA
- Jimmy C Yang
- Department of Neurology, Massachusetts General Hospital, Boston, MA 02114, USA
- Department of Neurosurgery, Massachusetts General Hospital, Boston, MA 02114, USA
- Daniel R Cleary
- Departments of Neurosciences and Radiology, University of California San Diego, La Jolla, CA 92093, USA
- Department of Physics, University of California San Diego, La Jolla, CA 92093, USA
- Department of Neurosurgery, University of California San Diego, La Jolla, CA 92093, USA
- Daniel J Soper
- Department of Neurology, Massachusetts General Hospital, Boston, MA 02114, USA
- Mila Halgren
- Department of Neurology, Massachusetts General Hospital, Boston, MA 02114, USA
- McGovern Institute for Brain Research and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Sang Heon Lee
- Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA 92093, USA
- Mehran Ganji
- Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA 92093, USA
- Yun Goo Ro
- Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA 92093, USA
- Hongseok Oh
- Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA 92093, USA
- Lorraine Hossain
- Materials Science and Engineering Program, University of California San Diego, La Jolla, CA 92093, USA
- Jihwan Lee
- Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA 92093, USA
- Youngbin Tchoe
- Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA 92093, USA
- Nicholas Rogers
- Department of Physics, University of California San Diego, La Jolla, CA 92093, USA
- Kivilcim Kiliç
- Departments of Neurosciences and Radiology, University of California San Diego, La Jolla, CA 92093, USA
- Sang Baek Ryu
- Department of Neurosurgery, Massachusetts General Hospital, Boston, MA 02114, USA
- Seung Woo Lee
- Department of Neurosurgery, Massachusetts General Hospital, Boston, MA 02114, USA
- John Hermiz
- Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA 92093, USA
- Vikash Gilja
- Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA 92093, USA
- István Ulbert
- Research Centre for Natural Sciences, Institute of Cognitive Neuroscience and Psychology, 1519 Budapest, Hungary
- Pázmány Péter Catholic University, Faculty of Information Technology and Bionics, H-1444 Budapest, Hungary
- Daniel Fabó
- Epilepsy Centrum, National Institute of Clinical Neurosciences, 1145 Budapest, Hungary
- Thomas Thesen
- Department of Biomedical Sciences, University of Houston College of Medicine, Houston, TX 77204, USA
- Comprehensive Epilepsy Center, New York University School of Medicine, New York City, NY 10016, USA
- Werner K Doyle
- Comprehensive Epilepsy Center, New York University School of Medicine, New York City, NY 10016, USA
- Orrin Devinsky
- Comprehensive Epilepsy Center, New York University School of Medicine, New York City, NY 10016, USA
- Joseph R Madsen
- Departments of Neurosurgery, Boston Children's Hospital and Harvard Medical School, Boston, MA 02115, USA
- Donald L Schomer
- Department of Neurology, Beth Israel Deaconess Medical Center, Boston, MA 02215, USA
- Emad N Eskandar
- Department of Neurosurgery, Massachusetts General Hospital, Boston, MA 02114, USA
- Albert Einstein College of Medicine, Montefiore Medical Center, Department of Neurosurgery, Bronx, NY 10467, USA
- Jong Woo Lee
- Department of Neurology, Brigham and Women's Hospital, Boston, MA 02115, USA
- Douglas Maus
- Department of Neurology, Massachusetts General Hospital, Boston, MA 02114, USA
- Anna Devor
- Departments of Neurosciences and Radiology, University of California San Diego, La Jolla, CA 92093, USA
- Shelley I Fried
- Department of Neurosurgery, Massachusetts General Hospital, Boston, MA 02114, USA
- Boston VA Healthcare System, 150 South Huntington Avenue, Boston, MA 02130, USA
- Pamela S Jones
- Department of Neurosurgery, Massachusetts General Hospital, Boston, MA 02114, USA
- Brian V Nahed
- Department of Neurosurgery, Massachusetts General Hospital, Boston, MA 02114, USA
- Sharona Ben-Haim
- Department of Neurosurgery, University of California San Diego, La Jolla, CA 92093, USA
- Sarah K Bick
- Department of Neurosurgery, Massachusetts General Hospital, Boston, MA 02114, USA
- Ahmed M Raslan
- Department of Neurological Surgery, Oregon Health and Science University, Portland, OR 97239, USA
- Dominic A Siler
- Department of Neurological Surgery, Oregon Health and Science University, Portland, OR 97239, USA
- Daniel P Cahill
- Department of Neurosurgery, Massachusetts General Hospital, Boston, MA 02114, USA
- Ziv M Williams
- Department of Neurosurgery, Massachusetts General Hospital, Boston, MA 02114, USA
- G Rees Cosgrove
- Department of Neurosurgery, Brigham and Women's Hospital, Boston, MA 02115, USA
- Shadi A Dayeh
- Department of Neurosurgery, University of California San Diego, La Jolla, CA 92093, USA
- Materials Science and Engineering Program, University of California San Diego, La Jolla, CA 92093, USA
- Department of Nanoengineering, University of California San Diego, La Jolla, CA 92093, USA
- Sydney S Cash
- Department of Neurology, Massachusetts General Hospital, Boston, MA 02114, USA
7
Zheng W, Minama Reddy GK, Dai F, Chandramani A, Brang D, Hunter S, Kohrman MH, Rose S, Rossi M, Tao J, Wu S, Byrne R, Frim DM, Warnke P, Towle VL. Chasing language through the brain: Successive parallel networks. Clin Neurophysiol 2020; 132:80-93. [PMID: 33360179] [DOI: 10.1016/j.clinph.2020.10.007]
Abstract
OBJECTIVE: To describe the spatio-temporal dynamics and interactions of cortical language areas during linguistic and memory tasks.
METHODS: Event-related electrocorticographic (ECoG) spectral patterns obtained during cognitive tasks from 26 epilepsy patients (aged 9-60 y) were analyzed to examine the spatio-temporal patterns of activation of cortical language areas. ECoGs (1024 Hz/channel) were recorded from 1567 subdural electrodes and 510 depth electrodes chronically implanted over or within the frontal, parietal, occipital and/or temporal lobes as part of the patients' surgical work-up for intractable seizures. Six language/memory tasks were performed, each requiring a verbal response to auditory or visual word stimuli. Detailed analysis of electrode locations allowed results to be combined across patients.
RESULTS: Transient increases in induced ECoG gamma power (70-100 Hz) were observed in response to hearing words (central superior temporal gyrus), reading text and naming pictures (occipital and fusiform cortex), and speaking (pre-central, post-central and sub-central cortex).
CONCLUSIONS: Between these activations there was widespread spatial divergence followed by convergence of gamma activity that reliably identified cortical areas associated with task-specific processes.
SIGNIFICANCE: The combined dataset supports the concept of functionally specific, locally parallel language networks that are widely distributed and partially interact in succession to serve the cognitive and behavioral demands of the tasks.
Affiliation(s)
- Weili Zheng
- Department of Engineering, The University of Illinois, Chicago, IL, USA
- Falcon Dai
- Department of Neurology, The University of Chicago, Chicago, IL, USA
- David Brang
- Department of Psychology, University of Michigan, Ann Arbor, MI, USA
- Scott Hunter
- Department of Psychiatry and Behavioral Neuroscience, The University of Chicago, Chicago, IL, USA
- Michael H Kohrman
- Department of Pediatrics, The University of Chicago, Chicago, IL 60487, USA
- Sandra Rose
- Department of Neurology, The University of Chicago, Chicago, IL, USA
- Marvin Rossi
- Department of Neurology, Rush University, Chicago, IL, USA
- James Tao
- Department of Neurology, The University of Chicago, Chicago, IL, USA
- Shasha Wu
- Department of Neurology, The University of Chicago, Chicago, IL, USA
- Richard Byrne
- Department of Surgery, Rush University, Chicago, IL, USA
- David M Frim
- Department of Surgery, The University of Chicago, 5841 S. Maryland Ave, Chicago, IL 60487, USA
- Peter Warnke
- Department of Surgery, The University of Chicago, 5841 S. Maryland Ave, Chicago, IL 60487, USA
- Vernon L Towle
- Department of Neurology, The University of Chicago, Chicago, IL, USA
8
Broderick MP, Anderson AJ, Lalor EC. Semantic Context Enhances the Early Auditory Encoding of Natural Speech. J Neurosci 2019; 39:7564-7575. [PMID: 31371424] [PMCID: PMC6750931] [DOI: 10.1523/jneurosci.0584-19.2019]
Abstract
Speech perception involves the integration of sensory input with expectations based on the context of that speech. Much debate surrounds the question of whether prior knowledge feeds back to affect early auditory encoding at lower levels of the speech processing hierarchy, or whether perception is best explained as a purely feedforward process. Although there has been compelling evidence on both sides of this debate, experiments addressing these questions with naturalistic speech stimuli have been lacking. Here, we use a recently introduced method for quantifying the semantic context of speech and relate it to a commonly used method for indexing low-level auditory encoding of speech. The relationship between these measures is taken as an indication of how the semantic context leading up to a word influences how its low-level acoustic and phonetic features are processed. We record EEG from human participants (both male and female) listening to continuous natural speech and find that the early cortical tracking of a word's speech envelope is enhanced by the word's semantic similarity to its sentential context. Using a forward modeling approach, we find that prediction accuracy of the EEG signal also shows the same effect. Furthermore, this effect shows distinct temporal patterns of correlation depending on the type of speech input representation (acoustic or phonological) used for the model, implicating top-down propagation of information through the processing hierarchy. These results suggest a mechanism that links top-down prior information with the early cortical entrainment of words in natural, continuous speech.
SIGNIFICANCE STATEMENT: During natural speech comprehension, we use semantic context when processing information about new incoming words. However, precisely how the neural processing of bottom-up sensory information is affected by top-down context-based predictions remains controversial. We address this question using a novel approach that indexes a word's similarity to its context and how well the word's acoustic and phonetic features are processed by the brain at the time of its utterance. We relate these two measures and show that lower-level auditory tracking of speech improves for words that are more related to their preceding context. These results suggest a mechanism that links top-down prior information with bottom-up sensory processing in the context of natural, narrative speech listening.
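The forward modeling approach mentioned above can be sketched as a lagged linear regression (a temporal response function). This is an illustrative toy, not the authors' pipeline: the sampling rate, number of lags, ridge parameter, and the synthetic "envelope" below are all invented.

```python
import numpy as np
from numpy.linalg import solve

rng = np.random.default_rng(2)
fs, n = 64, 4096                        # sampling rate (Hz) and sample count (assumed)
env = np.abs(rng.normal(size=n))        # stand-in for a speech amplitude envelope
true_trf = np.array([0.0, 0.5, 1.0, 0.5, 0.0, -0.3, 0.0, 0.0])  # ground-truth kernel
eeg = np.convolve(env, true_trf)[:n] + 0.5 * rng.normal(size=n)  # simulated channel

# Lagged design matrix: column j holds the envelope delayed by j samples.
lags = len(true_trf)
X = np.column_stack([np.r_[np.zeros(j), env[: n - j]] for j in range(lags)])

lam = 1.0                               # ridge regularization (arbitrary here)
w = solve(X.T @ X + lam * np.eye(lags), X.T @ eeg)   # estimated TRF weights
pred = X @ w                                          # forward-model prediction
r = np.corrcoef(pred, eeg)[0, 1]                      # prediction accuracy
```

In an analysis like the one described above, `r` computed per word or per condition is the quantity compared against semantic similarity; higher `r` for context-congruent words would reflect enhanced envelope tracking.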
Affiliation(s)
- Michael P Broderick
- School of Engineering, Trinity Centre for Bioengineering and Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin 2, Ireland
- Andrew J Anderson
- Department of Biomedical Engineering and Department of Neuroscience and Del Monte Institute for Neuroscience, University of Rochester, Rochester, New York 14627
- Edmund C Lalor
- School of Engineering, Trinity Centre for Bioengineering and Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin 2, Ireland
- Department of Biomedical Engineering and Department of Neuroscience and Del Monte Institute for Neuroscience, University of Rochester, Rochester, New York 14627
9
Centanni TM, Pantazis D, Truong DT, Gruen JR, Gabrieli JDE, Hogan TP. Increased variability of stimulus-driven cortical responses is associated with genetic variability in children with and without dyslexia. Dev Cogn Neurosci 2018; 34:7-17. [PMID: 29894888] [PMCID: PMC6969288] [DOI: 10.1016/j.dcn.2018.05.008]
Abstract
Individuals with dyslexia exhibit increased brainstem variability in response to sound. It is unknown whether this increased variability extends to neocortical regions associated with audition and reading, whether it extends to visual stimuli, and whether it characterizes all children with dyslexia or, instead, a specific subset. We evaluated the consistency of stimulus-evoked neural responses, measured by magnetoencephalography (MEG), in children with (N = 20) or without (N = 12) dyslexia. Approximately half of the children with dyslexia had significantly more variable cortical responses to both auditory and visual stimuli in multiple nodes of the reading network. Across all participants, there was a significant positive relationship between the number of risk alleles at rs6935076 in the dyslexia-susceptibility gene KIAA0319 and the degree of neural variability in primary auditory cortex. This gene has previously been linked with neural variability in rodents and in typical readers. These findings indicate that unstable representations of auditory and visual stimuli in auditory and other reading-related neocortical regions are present in a subset of children with dyslexia, and they support the link between KIAA0319 and auditory neural variability in children with or without dyslexia.
Collapse
Affiliation(s)
- T M Centanni
- McGovern Institute for Brain Research and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; Department of Psychology, Texas Christian University, Fort Worth, TX, USA.
| | - D Pantazis
- McGovern Institute for Brain Research and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - D T Truong
- Departments of Pediatrics and Genetics, Yale University, New Haven, CT, USA
| | - J R Gruen
- Departments of Pediatrics and Genetics, Yale University, New Haven, CT, USA
| | - J D E Gabrieli
- McGovern Institute for Brain Research and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - T P Hogan
- Communication Sciences and Disorders, MGH Institute of Health Professions, Boston, MA, USA
| |
Collapse
|
10
|
Rauschecker JP. Where did language come from? Precursor mechanisms in nonhuman primates. Curr Opin Behav Sci 2018; 21:195-204. [PMID: 30778394 PMCID: PMC6377164 DOI: 10.1016/j.cobeha.2018.06.003] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
At first glance, the monkey brain looks like a smaller version of the human brain. Indeed, the anatomical and functional architecture of the cortical auditory system in monkeys is very similar to that of humans, with dual pathways segregated into a ventral and a dorsal processing stream. Yet, monkeys do not speak. Repeated attempts to pin this inability on one particular cause have failed. A closer look at the necessary components of language, according to Darwin, reveals that all of them got a significant boost during evolution from nonhuman to human primates. The vocal-articulatory system, in particular, has developed into the most sophisticated of all human sensorimotor systems with about a dozen effectors that, in combination with each other, result in an auditory communication system like no other. This sensorimotor network possesses all the ingredients of an internal model system that permits the emergence of sequence processing, as required for phonology and syntax in modern languages.
Collapse
Affiliation(s)
- Josef P Rauschecker
- Department of Neuroscience, Georgetown University, Washington, DC 20057, USA
| |
Collapse
|
11
|
Hermiz J, Rogers N, Kaestner E, Ganji M, Cleary DR, Carter BS, Barba D, Dayeh SA, Halgren E, Gilja V. Sub-millimeter ECoG pitch in human enables higher fidelity cognitive neural state estimation. Neuroimage 2018; 176:454-464. [PMID: 29678760 DOI: 10.1016/j.neuroimage.2018.04.027] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2017] [Revised: 03/09/2018] [Accepted: 04/11/2018] [Indexed: 10/17/2022] Open
Abstract
Electrocorticography (ECoG), electrophysiological recording from the pial surface of the brain, is a critical measurement technique for clinical neurophysiology and basic neurophysiology studies, and shows great promise for the development of neural prosthetic devices for assistive applications and the treatment of neurological disorders. Recent advances in device engineering are poised to enable an orders-of-magnitude increase in the resolution of ECoG without compromised measurement quality. This enhancement in cortical sensing enables the observation of neural dynamics from the cortical surface at the micrometer scale. While these technical capabilities may be enabling, the extent to which finer spatial scale recording enhances functionally relevant neural state inference is unclear. We examine this question by employing a high-density and low-impedance 400 μm pitch microECoG (μECoG) grid to record neural activity from the human cortical surface during cognitive tasks. By applying machine learning techniques to classify task conditions from the envelope of high-frequency band (70–170 Hz) neural activity collected from two study participants, we demonstrate that higher density grids can lead to more accurate binary task condition classification. When controlling for grid area and selecting task-informative sub-regions of the complete grid, we observed a consistent increase in mean classification accuracy with higher grid density; in particular, 400 μm pitch grids outperformed spatially sub-sampled lower-density grids by up to 23%. We also introduce a modeling framework to provide intuition for how spatial properties of measurements affect the performance gap between high- and low-density grids.
To our knowledge, this work is the first quantitative demonstration of human sub-millimeter pitch cortical surface recording yielding higher-fidelity state estimation relative to devices at the millimeter-scale, motivating the development and testing of μECoG for basic and clinical neurophysiology as well as towards the realization of high-performance neural prostheses.
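As a toy illustration of the kind of classification described above (not the authors' actual pipeline; all feature values and condition labels below are invented), a nearest-centroid classifier over per-channel high-gamma envelope power might look like:

```python
def nearest_centroid_accuracy(train, test):
    """Classify each test trial by Euclidean distance to the per-class
    mean (centroid) of the training trials; return fraction correct."""
    sums, counts = {}, {}
    for x, y in train:
        counts[y] = counts.get(y, 0) + 1
        sums[y] = [s + v for s, v in zip(sums.get(y, [0.0] * len(x)), x)]
    centroids = {y: [s / counts[y] for s in sums[y]] for y in sums}
    correct = 0
    for x, y in test:
        pred = min(centroids,
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(x, centroids[c])))
        correct += pred == y
    return correct / len(test)

# Invented per-channel high-gamma envelope power for two task conditions.
train = [([1.0, 0.9, 1.1], "speech"), ([1.1, 1.0, 0.9], "speech"),
         ([0.2, 0.1, 0.3], "rest"), ([0.1, 0.3, 0.2], "rest")]
test = [([0.95, 1.0, 1.0], "speech"), ([0.2, 0.2, 0.2], "rest")]
accuracy = nearest_centroid_accuracy(train, test)
```

With denser grids, each trial simply contributes a longer feature vector, which is one intuition for why higher electrode density can raise accuracy.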
Collapse
Affiliation(s)
- John Hermiz
- Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA 92093, USA
| | - Nicholas Rogers
- Department of Physics, University of California San Diego, La Jolla, CA, 92161, USA
| | - Erik Kaestner
- Neurosciences Program, University of California San Diego, La Jolla, CA, 92096, USA
| | - Mehran Ganji
- Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA 92093, USA
| | - Daniel R Cleary
- Department of Neurosurgery, University of California San Diego, La Jolla, CA, 92103, USA
| | - Bob S Carter
- Department of Neurosurgery, University of California San Diego, La Jolla, CA, 92103, USA
| | - David Barba
- Department of Neurosurgery, University of California San Diego, La Jolla, CA, 92103, USA
| | - Shadi A Dayeh
- Department of Nanoengineering, University of California San Diego, La Jolla, CA, 92093, USA; Department of Materials Science and Engineering, University of California San Diego, La Jolla, CA, 92093, USA
| | - Eric Halgren
- Department of Radiology, University of California San Diego, La Jolla, CA, 92103, USA; Department of Neurosciences, University of California San Diego, La Jolla, CA, 92103, USA
| | - Vikash Gilja
- Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA 92093, USA.
| |
Collapse
|
12
|
Magnuson JS, Mirman D, Luthra S, Strauss T, Harris HD. Interaction in Spoken Word Recognition Models: Feedback Helps. Front Psychol 2018; 9:369. [PMID: 29666593 PMCID: PMC5891609 DOI: 10.3389/fpsyg.2018.00369] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2017] [Accepted: 03/06/2018] [Indexed: 11/13/2022] Open
Abstract
Human perception, cognition, and action require fast integration of bottom-up signals with top-down knowledge and context. A key theoretical perspective in cognitive science is the interactive activation hypothesis: forward and backward flow in bidirectionally connected neural networks allows humans and other biological systems to approximate optimal integration of bottom-up and top-down information under real-world constraints. An alternative view is that online feedback is neither necessary nor helpful; purely feedforward alternatives can be constructed for any feedback system, and online feedback could not improve processing and would preclude veridical perception. In the domain of spoken word recognition, the latter view was apparently supported by simulations using the interactive activation model, TRACE, with and without feedback: as many words were recognized more quickly without feedback as were recognized faster with feedback. However, these simulations used only a small set of words and did not address a primary motivation for interaction: making a model robust in noise. We conducted simulations using hundreds of words, and found that the majority were recognized more quickly with feedback than without. More importantly, as we added noise to inputs, accuracy and recognition times were better with feedback than without. We follow these simulations with a critical review of recent arguments that online feedback in interactive activation models like TRACE is distinct from other potentially helpful forms of feedback. We conclude that in addition to providing the benefits demonstrated in our simulations, online feedback provides a plausible means of implementing putatively distinct forms of feedback, supporting the interactive activation hypothesis.
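A heavily simplified, hypothetical sketch of the interactive-activation idea (not TRACE itself; the two-word lexicon, gain, and evidence values are invented) shows how word-to-phoneme feedback can amplify a weak bottom-up advantage under noisy input:

```python
WORDS = {"dog": ["d", "o", "g"], "fog": ["f", "o", "g"]}
PHONEMES = ["d", "f", "o", "g"]

def simulate(input_evidence, feedback_gain, steps=10):
    """One bottom-up / pooling / top-down cycle per step."""
    phon = {p: 0.0 for p in PHONEMES}
    word = {w: 0.0 for w in WORDS}
    for _ in range(steps):
        # Bottom-up: phoneme units accumulate their input evidence.
        for p in PHONEMES:
            phon[p] += input_evidence.get(p, 0.0)
        # Word units pool activation from their constituent phonemes.
        for w, ps in WORDS.items():
            word[w] = sum(phon[p] for p in ps)
        # Top-down feedback: each word reinforces its own phonemes.
        for w, ps in WORDS.items():
            for p in ps:
                phon[p] += feedback_gain * word[w]
    return word

# Noisy, ambiguous input only slightly favoring /d/ over /f/.
noisy = {"d": 0.12, "f": 0.10, "o": 0.2, "g": 0.2}
no_fb = simulate(noisy, feedback_gain=0.0)
with_fb = simulate(noisy, feedback_gain=0.05)
margin_no_fb = no_fb["dog"] - no_fb["fog"]
margin_fb = with_fb["dog"] - with_fb["fog"]
```

Because "dog" and "fog" feed /o/ and /g/ equally, feedback selectively widens the /d/-versus-/f/ gap, so the correct word's advantage grows faster with feedback than without.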
Collapse
Affiliation(s)
- James S. Magnuson
- Department of Psychological Sciences, University of Connecticut, Storrs, CT, United States
- Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, Storrs, CT, United States
| | - Daniel Mirman
- Department of Psychology, University of Alabama at Birmingham, Birmingham, AL, United States
| | - Sahil Luthra
- Department of Psychological Sciences, University of Connecticut, Storrs, CT, United States
- Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, Storrs, CT, United States
| | - Ted Strauss
- McConnell Brain Imaging Centre, McGill University, Montreal, QC, Canada
| | - Harlan D. Harris
- Department of Psychological Sciences, University of Connecticut, Storrs, CT, United States
- Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, Storrs, CT, United States
| |
Collapse
|
13
|
Khachatryan E, Brouwer H, Staljanssens W, Carrette E, Meurs A, Boon P, Van Roost D, Van Hulle MM. A new insight into sentence comprehension: The impact of word associations in sentence processing as shown by invasive EEG recording. Neuropsychologia 2018; 108:103-116. [PMID: 29203203 DOI: 10.1016/j.neuropsychologia.2017.12.002] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2017] [Revised: 11/20/2017] [Accepted: 12/01/2017] [Indexed: 11/26/2022]
|
14
|
Norris D, McQueen JM, Cutler A. Prediction, Bayesian inference and feedback in speech recognition. LANGUAGE, COGNITION AND NEUROSCIENCE 2016; 31:4-18. [PMID: 26740960 PMCID: PMC4685608 DOI: 10.1080/23273798.2015.1081703] [Citation(s) in RCA: 60] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/18/2015] [Accepted: 08/05/2015] [Indexed: 05/19/2023]
Abstract
Speech perception involves prediction, but how is that prediction implemented? In cognitive models prediction has often been taken to imply that there is feedback of activation from lexical to pre-lexical processes as implemented in interactive-activation models (IAMs). We show that simple activation feedback does not actually improve speech recognition. However, other forms of feedback can be beneficial. In particular, feedback can enable the listener to adapt to changing input, and can potentially help the listener to recognise unusual input, or recognise speech in the presence of competing sounds. The common feature of these helpful forms of feedback is that they are all ways of optimising the performance of speech recognition using Bayesian inference. That is, listeners make predictions about speech because speech recognition is optimal in the sense captured in Bayesian models.
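The Bayesian view summarized above can be illustrated with a minimal, hypothetical example (the candidate words and probabilities are invented): prediction enters as a prior over words, which is combined with the acoustic likelihood:

```python
def bayes_word_posterior(prior, likelihood):
    """Combine a contextual prior over candidate words with an acoustic
    likelihood; return the normalized posterior P(word | acoustics)."""
    unnormalized = {w: prior[w] * likelihood[w] for w in prior}
    z = sum(unnormalized.values())
    return {w: p / z for w, p in unnormalized.items()}

# Context predicts "cat" is likely, but the acoustics favor "cap".
prior = {"cat": 0.6, "cap": 0.3, "cab": 0.1}
likelihood = {"cat": 0.2, "cap": 0.7, "cab": 0.1}
posterior = bayes_word_posterior(prior, likelihood)
```

The posterior trades the two sources off against each other: "cap" wins on these numbers, but "cat" ends up more probable than the acoustics alone would suggest, which is the sense in which prediction helps without lexical-to-prelexical activation feedback.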
Collapse
Affiliation(s)
- Dennis Norris
- MRC Cognition and Brain Sciences Unit, Cambridge, UK
| | - James M. McQueen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
| | - Anne Cutler
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- MARCS Institute, University of Western Sydney, Penrith South, NSW2751, Australia
| |
Collapse
|
15
|
Schlaffke L, Rüther NN, Heba S, Haag LM, Schultz T, Rosengarth K, Tegenthoff M, Bellebaum C, Schmidt‐Wilcke T. From perceptual to lexico-semantic analysis--cortical plasticity enabling new levels of processing. Hum Brain Mapp 2015; 36:4512-28. [PMID: 26304153 PMCID: PMC5049624 DOI: 10.1002/hbm.22939] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2015] [Revised: 07/23/2015] [Accepted: 07/30/2015] [Indexed: 11/09/2022] Open
Abstract
Certain kinds of stimuli can be processed on multiple levels. While the neural correlates of different levels of processing (LOPs) have been investigated to some extent, most studies involve skills and/or knowledge already present when performing the task. In this study we specifically sought to identify neural correlates of an evolving skill that allows the transition from a perceptual to a lexico-semantic stimulus analysis. Eighteen participants were trained to decode 12 letters of Morse code that were presented acoustically inside and outside of the scanner environment. Morse code was presented in trains of three letters while brain activity was assessed with fMRI. Participants either attended to the stimulus length (perceptual analysis), or evaluated its meaning, distinguishing words from nonwords (lexico-semantic analysis). Perceptual and lexico-semantic analyses shared a mutual network comprising the left premotor cortex, the supplementary motor area (SMA) and the inferior parietal lobule (IPL). Perceptual analysis was associated with strong brain activation in the SMA and the superior temporal gyrus (STG) bilaterally, which remained unaltered from pre- to post-training. In the lexico-semantic analysis after learning, study participants showed additional activation in the left inferior frontal cortex (IFC) and in the left occipitotemporal cortex (OTC), regions known to be critically involved in lexical processing. Our data provide evidence for cortical plasticity evolving with a learning process, enabling the transition from perceptual to lexico-semantic stimulus analysis. Importantly, the activation pattern remains dependent on the task-defined LOP and is thus the result of a decision process as to which LOP to engage in.
Collapse
Affiliation(s)
- Lara Schlaffke
- Department of Neurology, BG‐University Hospital Bergmannsheil, Ruhr‐University Bochum, Bochum, Germany
| | - Naima N. Rüther
- Department of Neuropsychology, Ruhr‐University Bochum, Bochum, Germany
| | - Stefanie Heba
- Department of Neurology, BG‐University Hospital Bergmannsheil, Ruhr‐University Bochum, Bochum, Germany
| | - Lauren M. Haag
- Department of Neurology, BG‐University Hospital Bergmannsheil, Ruhr‐University Bochum, Bochum, Germany
| | | | | | - Martin Tegenthoff
- Department of Neurology, BG‐University Hospital Bergmannsheil, Ruhr‐University Bochum, Bochum, Germany
| | - Christian Bellebaum
- Department of Neuropsychology, Ruhr‐University Bochum, Bochum, Germany
- Department of Psychology, Heinrich‐Heine University Düsseldorf, Germany
| | - Tobias Schmidt‐Wilcke
- Department of Neurology, BG‐University Hospital Bergmannsheil, Ruhr‐University Bochum, Bochum, Germany
| |
Collapse
|
16
|
Halgren E, Kaestner E, Marinkovic K, Cash SS, Wang C, Schomer DL, Madsen JR, Ulbert I. Laminar profile of spontaneous and evoked theta: Rhythmic modulation of cortical processing during word integration. Neuropsychologia 2015; 76:108-24. [PMID: 25801916 PMCID: PMC4575841 DOI: 10.1016/j.neuropsychologia.2015.03.021] [Citation(s) in RCA: 38] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2014] [Revised: 03/18/2015] [Accepted: 03/18/2015] [Indexed: 01/01/2023]
Abstract
Theta may play a central role during language understanding and other extended cognitive processing, providing an envelope for widespread integration of participating cortical areas. We used linear microelectrode arrays in epileptics to define the circuits generating theta in inferotemporal, perirhinal, entorhinal, prefrontal and anterior cingulate cortices. In all locations, theta was generated by excitatory current sinks in middle layers which receive predominantly feedforward inputs, alternating with sinks in superficial layers which receive mainly feedback/associative inputs. Baseline and event-related theta were generated by indistinguishable laminar profiles of transmembrane currents and unit-firing. Word presentation could reset theta phase, permitting theta to contribute to late event-related potentials, even when theta power decreases relative to baseline. Limited recordings during sentence reading are consistent with rhythmic theta activity entrained by a given word modulating the neural background for the following word. These findings show that theta occurs spontaneously, and can be momentarily suppressed, reset and synchronized by words. Theta represents an alternation between feedforward/divergent and associative/convergent processing modes that may temporally organize sustained processing and optimize the timing of memory formation. We suggest that words are initially encoded via a ventral feedforward stream which is lexicosemantic in the anteroventral temporal lobe; its arrival may trigger a widespread theta rhythm which integrates the word within a larger context.
Collapse
Affiliation(s)
- Eric Halgren
- Departments of Radiology and Neurosciences, University of California at San Diego, La Jolla, CA 92069, USA.
| | - Erik Kaestner
- Interdepartmental Neurosciences Program, University of California at San Diego, La Jolla, CA 92069, USA
| | - Ksenija Marinkovic
- Department of Psychology, San Diego State University, San Diego, CA, USA
| | - Sydney S Cash
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02129, USA
| | - Chunmao Wang
- Departments of Radiology and Neurosciences, University of California at San Diego, La Jolla, CA 92069, USA; Interdepartmental Neurosciences Program, University of California at San Diego, La Jolla, CA 92069, USA; Department of Psychology, San Diego State University, San Diego, CA, USA; Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02129, USA; Department of Neurology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA; Department of Neurosurgery, Children's Hospital, Harvard Medical School, Boston, MA, USA; Institute of Cognitive Neuroscience and Psychology, Research Center for Natural Sciences, Hungarian Academy of Sciences, Budapest-1117, Hungary
| | - Donald L Schomer
- Department of Neurology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
| | - Joseph R Madsen
- Department of Neurosurgery, Children's Hospital, Harvard Medical School, Boston, MA, USA
| | - Istvan Ulbert
- Institute of Cognitive Neuroscience and Psychology, Research Center for Natural Sciences, Hungarian Academy of Sciences, Budapest-1117, Hungary
| |
Collapse
|
17
|
Abstract
Sensory processing involves identification of stimulus features, but also integration with the surrounding sensory and cognitive context. Previous work in animals and humans has shown fine-scale sensitivity to context in the form of learned knowledge about the statistics of the sensory environment, including relative probabilities of discrete units in a stream of sequential auditory input. These statistics are a defining characteristic of one of the most important sequential signals humans encounter: speech. For speech, extensive exposure to a language tunes listeners to the statistics of sound sequences. To address how speech sequence statistics are neurally encoded, we used high-resolution direct cortical recordings from human lateral superior temporal cortex as subjects listened to words and nonwords with varying transition probabilities between sound segments. In addition to their sensitivity to acoustic features (including contextual features, such as coarticulation), we found that neural responses dynamically encoded the language-level probability of both preceding and upcoming speech sounds. Transition probability first negatively modulated neural responses, followed by positive modulation of neural responses, consistent with coordinated predictive and retrospective recognition processes, respectively. Furthermore, transition probability encoding was different for real English words compared with nonwords, providing evidence for online interactions with high-order linguistic knowledge. These results demonstrate that sensory processing of deeply learned stimuli involves integrating physical stimulus features with their contextual sequential structure. Despite not being consciously aware of phoneme sequence statistics, listeners use this information to process spoken input and to link low-level acoustic representations with linguistic information about word identity and meaning.
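The transition-probability statistic manipulated in this study can be estimated from a lexicon by simple bigram counting; the sketch below (with an invented mini-lexicon written as phone strings) shows the idea:

```python
from collections import defaultdict

def transition_probs(words):
    """Estimate P(next phone | current phone) from a list of phone strings
    by normalizing conditional bigram counts."""
    counts = defaultdict(lambda: defaultdict(int))
    for phones in words:
        for a, b in zip(phones, phones[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(nexts.values()) for b, n in nexts.items()}
            for a, nexts in counts.items()}

# Hypothetical mini-lexicon: /s/ is followed by /t/ three times and /k/ once.
lexicon = ["st", "sta", "sto", "sk"]
probs = transition_probs(lexicon)
```

In the study's terms, a segment like /t/ after /s/ in this toy lexicon would be a high-probability transition and /k/ a lower-probability one, the kind of contrast the cortical responses were found to track.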
Collapse
|
18
|
Correia JM, Jansma B, Hausfeld L, Kikkert S, Bonte M. EEG decoding of spoken words in bilingual listeners: from words to language invariant semantic-conceptual representations. Front Psychol 2015; 6:71. [PMID: 25705197 PMCID: PMC4319403 DOI: 10.3389/fpsyg.2015.00071] [Citation(s) in RCA: 83] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2014] [Accepted: 01/13/2015] [Indexed: 11/13/2022] Open
Abstract
Spoken word recognition and production require fast transformations between acoustic, phonological, and conceptual neural representations. Bilinguals perform these transformations in native and non-native languages, deriving unified semantic concepts from equivalent, but acoustically different, words. Here we exploit this capacity of bilinguals to investigate input-invariant semantic representations in the brain. We acquired EEG data while Dutch subjects, highly proficient in English, listened to four monosyllabic and acoustically distinct animal words in both languages (e.g., “paard”–“horse”). Multivariate pattern analysis (MVPA) was applied to identify EEG response patterns that discriminate between individual words within one language (within-language discrimination) and generalize meaning across two languages (across-language generalization). Furthermore, employing two EEG feature selection approaches, we assessed the contribution of temporal and oscillatory EEG features to our classification results. MVPA revealed that within-language discrimination was possible in a broad time-window (~50–620 ms) after word onset, probably reflecting acoustic-phonetic and semantic-conceptual differences between the words. Most interestingly, significant across-language generalization was possible around 550–600 ms, suggesting the activation of common semantic-conceptual representations from the Dutch and English nouns. Both types of classification showed a strong contribution of oscillations below 12 Hz, indicating the importance of low-frequency oscillations in the neural representation of individual words and concepts. This study demonstrates the feasibility of MVPA to decode individual spoken words from EEG responses and to assess the spectro-temporal dynamics of their language-invariant semantic-conceptual representations. We discuss how this method and these results could be relevant for tracking the neural mechanisms underlying conceptual encoding in comprehension and production.
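A minimal sketch of the across-language generalization logic (not the authors' MVPA pipeline; the response patterns and word pairs are invented): templates derived from Dutch trials are used to classify English trials of the same concepts by pattern correlation:

```python
def pearson(a, b):
    """Pearson correlation between two equal-length pattern vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

# Hypothetical EEG response patterns per word.
dutch = {"paard": [0.9, 0.1, 0.8, 0.2], "hond": [0.1, 0.9, 0.2, 0.8]}
english = {"horse": [0.8, 0.2, 0.9, 0.1], "dog": [0.2, 0.8, 0.1, 0.9]}
translation = {"horse": "paard", "dog": "hond"}

def classify_across_language(test_pattern):
    """Assign the Dutch template whose pattern correlates best."""
    return max(dutch, key=lambda w: pearson(test_pattern, dutch[w]))

hits = sum(classify_across_language(english[w]) == translation[w] for w in english)
```

Above-chance hits under this train-on-one-language, test-on-the-other scheme is what licenses the inference to shared, language-invariant concept representations.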
Collapse
Affiliation(s)
- João M Correia
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht Brain Imaging Center (M-BIC), Maastricht University, Maastricht, Netherlands
| | - Bernadette Jansma
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht Brain Imaging Center (M-BIC), Maastricht University, Maastricht, Netherlands
| | - Lars Hausfeld
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht Brain Imaging Center (M-BIC), Maastricht University, Maastricht, Netherlands
| | - Sanne Kikkert
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht Brain Imaging Center (M-BIC), Maastricht University, Maastricht, Netherlands
| | - Milene Bonte
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht Brain Imaging Center (M-BIC), Maastricht University, Maastricht, Netherlands
| |
Collapse
|
19
|
Abstract
Learning to read requires the acquisition of an efficient visual procedure for quickly recognizing fine print. Thus, reading practice could induce a perceptual learning effect in early vision. Using functional magnetic resonance imaging (fMRI) in literate and illiterate adults, we previously demonstrated an impact of reading acquisition on both high- and low-level occipitotemporal visual areas, but could not resolve the time course of these effects. To clarify whether literacy affects early vs. late stages of visual processing, we measured event-related potentials to various categories of visual stimuli in healthy adults with variable levels of literacy, including completely illiterate subjects, early-schooled literate subjects, and subjects who learned to read in adulthood (ex-illiterates). The stimuli included written letter strings forming pseudowords, on which literacy is expected to have a major impact, as well as faces, houses, tools, checkerboards, and false fonts. To evaluate the precision with which these stimuli were encoded, we studied repetition effects by presenting the stimuli in pairs composed of repeated, mirrored, or unrelated pictures from the same category. The results indicate that reading ability is correlated with a broad enhancement of early visual processing, including increased repetition suppression, suggesting better exemplar discrimination, and increased mirror discrimination, as early as ∼ 100-150 ms in the left occipitotemporal region. These effects were found with letter strings and false fonts, but also were partially generalized to other visual categories. Thus, learning to read affects the magnitude, precision, and invariance of early visual processing.
Collapse
|
20
|
Ferjan Ramirez N, Leonard MK, Torres C, Hatrak M, Halgren E, Mayberry RI. Neural language processing in adolescent first-language learners. Cereb Cortex 2014; 24:2772-83. [PMID: 23696277 PMCID: PMC4153811 DOI: 10.1093/cercor/bht137] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
The relation between the timing of language input and development of neural organization for language processing in adulthood has been difficult to tease apart because language is ubiquitous in the environment of nearly all infants. However, within the congenitally deaf population are individuals who do not experience language until after early childhood. Here, we investigated the neural underpinnings of American Sign Language (ASL) in 2 adolescents who had no sustained language input until they were approximately 14 years old. Using anatomically constrained magnetoencephalography, we found that recently learned signed words mainly activated right superior parietal, anterior occipital, and dorsolateral prefrontal areas in these 2 individuals. This spatiotemporal activity pattern was significantly different from the left fronto-temporal pattern observed in young deaf adults who acquired ASL from birth, and from that of hearing young adults learning ASL as a second language for a similar length of time as the cases. These results provide direct evidence that the timing of language experience over human development affects the organization of neural language processing.
Collapse
Affiliation(s)
| | | | | | | | - Eric Halgren
- Multimodal Imaging Laboratory
- Department of Radiology
- Department of Neurosciences
- Kavli Institute for Brain and Mind, University of California, San Diego, USA
| | | |
Collapse
|
21
|
Steinschneider M, Nourski KV, Rhone AE, Kawasaki H, Oya H, Howard MA. Differential activation of human core, non-core and auditory-related cortex during speech categorization tasks as revealed by intracranial recordings. Front Neurosci 2014; 8:240. [PMID: 25157216 PMCID: PMC4128221 DOI: 10.3389/fnins.2014.00240] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2014] [Accepted: 07/22/2014] [Indexed: 11/21/2022] Open
Abstract
Speech perception requires that sounds be transformed into speech-related objects with lexical and semantic meaning. It is unclear at what level in the auditory pathways this transformation emerges. Primary auditory cortex has been implicated in both representation of acoustic sound attributes and sound objects. While non-primary auditory cortex located on the posterolateral superior temporal gyrus (PLST) is clearly involved in acoustic-to-phonetic pre-lexical representations, it is unclear what role this region plays in auditory object formation. Additional data support the importance of prefrontal cortex in the formation of auditory objects, while other data would implicate this region in auditory object selection. To help clarify the respective roles of auditory and auditory-related cortex in the formation and selection of auditory objects, we examined high gamma activity simultaneously recorded directly from Heschl's gyrus (HG), PLST and prefrontal cortex, while subjects performed auditory semantic detection tasks. Subjects were patients undergoing evaluation for treatment of medically intractable epilepsy. We found that activity in posteromedial HG and early activity on PLST was robust to sound stimuli regardless of their context, and minimally modulated by tasks. Later activity on PLST could be strongly modulated by semantic context, but not by behavioral performance. Activity within prefrontal cortex also was related to semantic context, and did co-vary with behavior. We propose that activity in posteromedial HG and early activity on PLST primarily reflect the representation of spectrotemporal sound attributes. Later activity on PLST represents a pre-lexical processing stage and is an intermediate step in the formation of word objects. Activity in prefrontal cortex appears directly involved in word object selection. The roles of other auditory and auditory-related cortical areas in the formation of word objects remain to be explored.
Collapse
Affiliation(s)
- Mitchell Steinschneider
- Departments of Neurology and Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
| | - Kirill V. Nourski
- Human Brain Research Laboratory, Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
| | - Ariane E. Rhone
- Human Brain Research Laboratory, Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
| | - Hiroto Kawasaki
- Human Brain Research Laboratory, Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
| | - Hiroyuki Oya
- Human Brain Research Laboratory, Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
| | - Matthew A. Howard
- Human Brain Research Laboratory, Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
| |
Collapse
|
22
|
Kadipasaoglu CM, Baboyan VG, Conner CR, Chen G, Saad ZS, Tandon N. Surface-based mixed effects multilevel analysis of grouped human electrocorticography. Neuroimage 2014; 101:215-24. [PMID: 25019677 DOI: 10.1016/j.neuroimage.2014.07.006] [Citation(s) in RCA: 35] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2014] [Revised: 06/21/2014] [Accepted: 07/06/2014] [Indexed: 10/25/2022] Open
Abstract
Electrocorticography (ECoG) in humans yields data with unmatched spatio-temporal resolution that provides novel insights into cognitive operations. However, the broader application of ECoG has been confounded by difficulties in accurately depicting individual data and performing statistically valid population-level analyses. To overcome these limitations, we developed methods for accurately registering ECoG data to individual cortical topology. We integrated this technique with surface-based co-registration and a mixed-effects multilevel analysis (MEMA) to control for variable cortical surface anatomy and sparse coverage across patients, as well as intra- and inter-subject variability. We applied this surface-based MEMA (SB-MEMA) technique to a face-recognition task dataset (n=22). Compared against existing techniques, SB-MEMA yielded results much more consistent with individual data and with meta-analyses of face-specific activation studies. We anticipate that SB-MEMA will greatly expand the role of ECoG in studies of human cognition, and will enable the generation of population-level brain activity maps and accurate multimodal comparisons.
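The abstract does not give the SB-MEMA equations, but the core of a mixed-effects multilevel group analysis of this kind is a random-effects, inverse-variance-weighted combination of per-subject effect estimates at each surface node, with a between-subject variance term absorbing inter-subject variability. The sketch below is an illustrative reconstruction of that general idea (using the standard DerSimonian-Laird moment estimator), not the authors' actual SB-MEMA implementation; the function name and interface are hypothetical.

```python
import numpy as np

def group_effect_mema(betas, ses):
    """Random-effects group estimate at one surface node.

    betas: per-subject effect estimates (e.g. high-gamma power change)
    ses:   their per-subject standard errors
    Returns (group effect, its SE, a z-like statistic).
    """
    betas = np.asarray(betas, dtype=float)
    var_within = np.asarray(ses, dtype=float) ** 2
    k = betas.size

    # Fixed-effect step: weight each subject by inverse within-subject variance
    w = 1.0 / var_within
    beta_fe = np.sum(w * betas) / np.sum(w)

    # DerSimonian-Laird moment estimate of between-subject variance tau^2
    q = np.sum(w * (betas - beta_fe) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)

    # Random-effects step: fold between-subject variability into the weights
    w_re = 1.0 / (var_within + tau2)
    beta_re = np.sum(w_re * betas) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    return beta_re, se_re, beta_re / se_re
```

In a surface-based pipeline this estimator would be applied independently at every node of the registered cortical mesh after each subject's electrode data have been mapped to the common surface.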
Collapse
Affiliation(s)
- C M Kadipasaoglu
- Vivian Smith Department of Neurosurgery, Univ. of Texas Medical School at Houston, 6431 Fannin Street, Suite G.550D, Houston, TX 77030, USA
| | - V G Baboyan
- Vivian Smith Department of Neurosurgery, Univ. of Texas Medical School at Houston, 6431 Fannin Street, Suite G.550D, Houston, TX 77030, USA
| | - C R Conner
- Vivian Smith Department of Neurosurgery, Univ. of Texas Medical School at Houston, 6431 Fannin Street, Suite G.550D, Houston, TX 77030, USA
| | - G Chen
- Scientific and Statistical Computing Core, NIMH/NIH/DHHS, 9000 Rockville Pike, Bethesda, MD 20892, USA
| | - Z S Saad
- Scientific and Statistical Computing Core, NIMH/NIH/DHHS, 9000 Rockville Pike, Bethesda, MD 20892, USA
| | - N Tandon
- Vivian Smith Department of Neurosurgery, Univ. of Texas Medical School at Houston, 6431 Fannin Street, Suite G.550D, Houston, TX 77030, USA; Memorial Hermann Hospital, Texas Medical Center, Houston, TX 77030, USA.
| |
Collapse
|
23
|
Leonard MK, Chang EF. Dynamic speech representations in the human temporal lobe. Trends Cogn Sci 2014; 18:472-9. [PMID: 24906217 DOI: 10.1016/j.tics.2014.05.001] [Citation(s) in RCA: 56] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2013] [Revised: 04/30/2014] [Accepted: 05/06/2014] [Indexed: 11/20/2022]
Abstract
Speech perception requires rapid integration of acoustic input with context-dependent knowledge. Recent methodological advances have allowed researchers to identify underlying information representations in primary and secondary auditory cortex and to examine how context modulates these representations. We review recent studies that focus on contextual modulations of neural activity in the superior temporal gyrus (STG), a major hub for spectrotemporal encoding. Recent findings suggest a highly interactive flow of information processing through the auditory ventral stream, including influences of higher-level linguistic and metalinguistic knowledge, even within individual areas. Such mechanisms may give rise to more abstract representations, such as those for words. We discuss the importance of characterizing representations of context-dependent and dynamic patterns of neural activity in the approach to speech perception research.
Collapse
Affiliation(s)
- Matthew K Leonard
- Department of Neurological Surgery, University of California, San Francisco, 675 Nelson Rising Lane, Room 535, San Francisco, CA 94158, USA
| | - Edward F Chang
- Department of Neurological Surgery, University of California, San Francisco, 675 Nelson Rising Lane, Room 535, San Francisco, CA 94158, USA.
| |
Collapse
|
24
|
Leonard MK, Ferjan Ramirez N, Torres C, Hatrak M, Mayberry RI, Halgren E. Neural stages of spoken, written, and signed word processing in beginning second language learners. Front Hum Neurosci 2013; 7:322. [PMID: 23847496 PMCID: PMC3698463 DOI: 10.3389/fnhum.2013.00322] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2013] [Accepted: 06/11/2013] [Indexed: 11/23/2022] Open
Abstract
We combined magnetoencephalography (MEG) and magnetic resonance imaging (MRI) to examine how sensory modality, language type, and language proficiency interact during two fundamental stages of word processing: (1) an early word encoding stage, and (2) a later supramodal lexico-semantic stage. Adult native English speakers who were learning American Sign Language (ASL) performed a semantic task for spoken and written English words, and ASL signs. During the early time window, written words evoked responses in left ventral occipitotemporal cortex, and spoken words in left superior temporal cortex. Signed words evoked activity in right intraparietal sulcus that was marginally greater than for written words. During the later time window, all three types of words showed significant activity in the classical left fronto-temporal language network, the first demonstration of such activity in individuals with so little second language (L2) instruction in sign. In addition, a dissociation between semantic congruity effects and overall MEG response magnitude for ASL responses suggested shallower and more effortful processing, presumably reflecting novice L2 learning. Consistent with previous research on non-dominant language processing in spoken languages, the L2 ASL learners also showed recruitment of right hemisphere and lateral occipital cortex. These results demonstrate that late lexico-semantic processing utilizes a common substrate, independent of modality, and that proficiency effects in sign language are comparable to those in spoken language.
Collapse
Affiliation(s)
- Matthew K Leonard
- Department of Radiology, University of California San Diego, La Jolla, CA, USA; Multimodal Imaging Laboratory, Department of Radiology, University of California San Diego, La Jolla, CA, USA
| | | | | | | | | | | |
Collapse
|