1. Ten Oever S, Titone L, te Rietmolen N, Martin AE. Phase-dependent word perception emerges from region-specific sensitivity to the statistics of language. Proc Natl Acad Sci U S A 2024; 121:e2320489121. PMID: 38805278; PMCID: PMC11161766; DOI: 10.1073/pnas.2320489121.
Abstract
Neural oscillations reflect fluctuations in excitability, which bias the perception of ambiguous sensory input. Why this bias occurs is still not fully understood. We hypothesized that neural populations representing likely events are more sensitive, and thereby become active at earlier oscillatory phases, when the ensemble itself is less excitable. Perception of ambiguous input presented during less-excitable phases should therefore be biased toward frequent or predictable stimuli, which have lower activation thresholds. Here, we show such a frequency bias in spoken word recognition using psychophysics, magnetoencephalography (MEG), and computational modeling. With MEG, we found a double dissociation: the phase of oscillations in the superior temporal gyrus and the middle temporal gyrus biased word-identification behavior based on phoneme and lexical frequencies, respectively. This finding was reproduced in a computational model. These results demonstrate that oscillations provide a temporal ordering of neural activity based on the sensitivity of separable neural populations.
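The proposed mechanism can be illustrated with a compact simulation. Below is a minimal sketch (our illustration, not the authors' published model): two word-selective populations with frequency-dependent activation thresholds ride a shared excitability oscillation, and the lower-threshold (more frequent) word crosses threshold at an earlier, less excitable phase. All parameter values are hypothetical.

```python
# Minimal sketch (ours, not the authors' model): two word populations with
# frequency-dependent thresholds riding a shared excitability oscillation.
# The more frequent word (lower threshold) activates at an earlier phase.
import numpy as np

f_osc = 5.0                                   # oscillation frequency (Hz), hypothetical
t = np.arange(0, 0.2, 1e-4)                   # one cycle of 5 Hz = 200 ms
excitability = np.sin(2 * np.pi * f_osc * t)  # shared ongoing excitability

input_drive = 0.6                             # constant sensory evidence, hypothetical
thresholds = {"frequent word": 1.2, "rare word": 1.5}  # lower = more sensitive

for word, theta in thresholds.items():
    total = excitability + input_drive
    crossing = np.flatnonzero(total >= theta)
    if crossing.size:
        phase = 2 * np.pi * f_osc * t[crossing[0]]
        print(f"{word}: activates at phase {phase:.2f} rad")
# The frequent word crosses threshold earlier in the cycle, so ambiguous
# input arriving at less excitable phases is resolved in its favor.
```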
Affiliation(s)
- Sanne Ten Oever
- Language and Computation in Neural Systems group, Max Planck Institute for Psycholinguistics, Nijmegen 6525 XD, The Netherlands
- Language and Computation in Neural Systems group, Donders Centre for Cognitive Neuroimaging, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen 6525 EN, The Netherlands
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht 6229 EV, The Netherlands
- Lorenzo Titone
- Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig D-04303, Germany
- Noémie te Rietmolen
- Language and Computation in Neural Systems group, Max Planck Institute for Psycholinguistics, Nijmegen 6525 XD, The Netherlands
- Language and Computation in Neural Systems group, Donders Centre for Cognitive Neuroimaging, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen 6525 EN, The Netherlands
- Andrea E. Martin
- Language and Computation in Neural Systems group, Max Planck Institute for Psycholinguistics, Nijmegen 6525 XD, The Netherlands
- Language and Computation in Neural Systems group, Donders Centre for Cognitive Neuroimaging, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen 6525 EN, The Netherlands
2. Tolkacheva V, Brownsett SLE, McMahon KL, de Zubicaray GI. Perceiving and misperceiving speech: lexical and sublexical processing in the superior temporal lobes. Cereb Cortex 2024; 34:bhae087. PMID: 38494418; PMCID: PMC10944697; DOI: 10.1093/cercor/bhae087.
Abstract
Listeners can use prior knowledge to predict the content of noisy speech signals, enhancing perception. However, this process can also elicit misperceptions. For the first time, we employed a prime-probe paradigm and transcranial magnetic stimulation to investigate causal roles for the left and right posterior superior temporal gyri (pSTG) in the perception and misperception of degraded speech. Listeners were presented with spectrotemporally degraded probe sentences preceded by a clear prime. To produce misperceptions, we created partially mismatched pseudo-sentence probes via homophonic nonword transformations (e.g., "The little girl was excited to lose her first tooth" → "Tha fittle girmn wam expited du roos har derst cooth"). Compared to a control site (vertex), inhibitory stimulation of the left pSTG selectively disrupted priming of real but not pseudo-sentences. Conversely, inhibitory stimulation of the right pSTG enhanced priming of misperceptions with pseudo-sentences but did not influence perception of real sentences. These results indicate qualitatively different causal roles for the left and right pSTG in perceiving degraded speech, supporting bilateral models that propose engagement of the right pSTG in sublexical processing.
Affiliation(s)
- Valeriya Tolkacheva
- Queensland University of Technology, School of Psychology and Counselling, O Block, Kelvin Grove, Queensland, 4059, Australia
- Sonia L E Brownsett
- Queensland Aphasia Research Centre, School of Health and Rehabilitation Sciences, University of Queensland, Surgical Treatment and Rehabilitation Services, Herston, Queensland, 4006, Australia
- Centre of Research Excellence in Aphasia Recovery and Rehabilitation, La Trobe University, Melbourne, Health Sciences Building 1, 1 Kingsbury Drive, Bundoora, Victoria, 3086, Australia
- Katie L McMahon
- Herston Imaging Research Facility, Building 71/918, Royal Brisbane & Women's Hospital, Herston, Queensland, 4006, Australia
- Queensland University of Technology, School of Clinical Sciences and Centre for Biomedical Technologies, 60 Musk Avenue, Kelvin Grove, Queensland, 4059, Australia
- Greig I de Zubicaray
- Queensland University of Technology, School of Psychology and Counselling, O Block, Kelvin Grove, Queensland, 4059, Australia
3. Arjmandi MK, Behroozmand R. On the interplay between speech perception and production: insights from research and theories. Front Neurosci 2024; 18:1347614. PMID: 38332858; PMCID: PMC10850291; DOI: 10.3389/fnins.2024.1347614.
Abstract
The study of spoken communication has long been entrenched in a debate surrounding the interdependence of speech production and perception. This mini-review summarizes findings from prior studies to elucidate the reciprocal relationships between speech production and perception. We also discuss key theoretical perspectives relevant to the speech perception-production loop, including hyper-articulation and hypo-articulation (H&H) theory, speech motor theory, direct realism theory, articulatory phonology, the Directions into Velocities of Articulators (DIVA) and Gradient Order DIVA (GODIVA) models, and predictive coding. Building on prior findings, we propose a revised auditory-motor integration model of speech and provide insights for future research in speech perception and production, focusing on the effects of impaired peripheral auditory systems.
Affiliation(s)
- Meisam K. Arjmandi
- Translational Auditory Neuroscience Lab, Department of Communication Sciences and Disorders, Arnold School of Public Health, University of South Carolina, Columbia, SC, United States
- Roozbeh Behroozmand
- Speech Neuroscience Lab, Department of Speech, Language, and Hearing, Callier Center for Communication Disorders, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
4. Guilleminot P, Graef C, Butters E, Reichenbach T. Audiotactile Stimulation Can Improve Syllable Discrimination through Multisensory Integration in the Theta Frequency Band. J Cogn Neurosci 2023; 35:1760-1772. PMID: 37677062; DOI: 10.1162/jocn_a_02045.
Abstract
Syllables are an essential building block of speech. We recently showed that tactile stimuli linked to the perceptual centers of syllables in continuous speech can improve speech comprehension. The rate of syllables lies in the theta frequency range, between 4 and 8 Hz, and the behavioral effect appears linked to multisensory integration in this frequency band. Because this neural activity may be oscillatory, we hypothesized that a behavioral effect may occur not only while this activity is being evoked or entrained through vibrotactile pulses, but also afterward. Here, we show that audiotactile integration in the perception of single syllables, at both the neural and the behavioral level, is consistent with this hypothesis. We first stimulated participants with a series of vibrotactile pulses and then presented them with a syllable in background noise. We show that, at a delay of 200 msec after the last vibrotactile pulse, audiotactile integration still occurred in the theta band and syllable discrimination was enhanced. Moreover, the dependence of both the neural multisensory integration and the behavioral discrimination on the delay of the audio signal with respect to the last tactile pulse was consistent with a damped oscillation. In addition, the multisensory gain was correlated with the syllable discrimination score. Our results therefore demonstrate the role of the theta band in audiotactile integration and provide evidence that these effects may involve oscillatory activity that persists after the tactile stimulation has ended.
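The damped-oscillation account above is, in essence, a curve-fitting claim: discrimination plotted against audio-tactile delay should follow a decaying cosine in the theta range. A minimal sketch of such a fit (ours, on synthetic data, not the authors' analysis):

```python
# Minimal sketch (ours, not the authors' analysis): fit a damped oscillation
# to syllable-discrimination scores measured at different delays between the
# last vibrotactile pulse and the syllable. All data here are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def damped(t, a, tau, f, phi, c):
    """a * exp(-t/tau) * cos(2*pi*f*t + phi) + c"""
    return a * np.exp(-t / tau) * np.cos(2 * np.pi * f * t + phi) + c

rng = np.random.default_rng(0)
delays = np.linspace(0.0, 0.4, 17)            # seconds after the last pulse
scores = damped(delays, 0.08, 0.15, 6.0, 0.0, 0.70) + 0.01 * rng.standard_normal(17)

p0 = [0.05, 0.2, 5.0, 0.0, 0.7]               # start the search in the theta range
params, _ = curve_fit(damped, delays, scores, p0=p0)
print(f"fitted frequency: {params[2]:.1f} Hz, decay tau: {params[1]*1e3:.0f} ms")
```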
5. Kawakami N, Kanno S, Ota S, Morihara K, Ogawa N, Suzuki K. Auditory phonological identification impairment in primary progressive aphasia. Cortex 2023; 168:130-142. PMID: 37714069; DOI: 10.1016/j.cortex.2023.08.007.
Abstract
OBJECTIVE: To examine the audiological characteristics and neuroanatomical regions associated with auditory phonological identification impairment in primary progressive aphasia (PPA).
METHODS: Twenty-seven patients with PPA [13 non-fluent/agrammatic variant PPA (nfvPPA), three logopenic variant PPA (lvPPA), seven semantic variant PPA (svPPA), and four mixed-type PPA] were included in the study. Neuropsychological, language, audiological, and neuroradiological examinations were performed. The auditory function examinations consisted of a pure-tone threshold test, a phonological identification task, and temporal auditory acuity tests such as click counting or click fusion. As a measure of phonological identification ability, we calculated discrepancy scores: for each ear, the difference between the measured phonological identification score and the score expected from the pure-tone threshold, taking the smaller of the left- and right-ear discrepancies. For the neuroradiological examination, we evaluated regional cerebral blood flow using 123I-iodoamphetamine single-photon emission computed tomography.
RESULTS: Eight of the 27 patients were allocated to the impaired phonological identification group, and four were considered to have significant impairment on further analysis. Two of these patients, one with lvPPA and one with a mixed type of lvPPA and nfvPPA, showed phonological identification deficits apparent in daily life. The discrepancy scores were not significantly related to the results of the neuropsychological, language, or other auditory examinations, except for the click counting score in the left ear. Voxel-based correlation analyses revealed that regional cerebral blood flow in the bilateral superior temporal gyrus and bilateral primary auditory cortex was significantly and positively correlated with phonological identification ability.
CONCLUSIONS: Our results suggest that progressive dysfunction of the bilateral superior temporal gyrus and bilateral primary auditory cortex due to neurodegenerative disease leads to phonological identification impairment in PPA syndrome.
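The discrepancy score described in the METHODS can be made concrete with a small sketch. The mapping from pure-tone threshold to expected identification score below is a hypothetical placeholder; the paper derives expected values from its own normative data.

```python
# Minimal sketch of the discrepancy score described above, under the
# assumption (ours, not stated in the paper) that expected scores are
# predicted from the pure-tone average by a simple reference function.
import numpy as np

def expected_score(pure_tone_avg_db):
    # Hypothetical monotone mapping from pure-tone average (dB HL) to
    # expected phonological identification score (%); placeholder only.
    return np.clip(100.0 - 1.2 * max(pure_tone_avg_db - 20.0, 0.0), 0.0, 100.0)

def discrepancy_score(measured, pta):
    """measured/pta: dicts with 'left' and 'right' ear values.
    Returns the smaller (left vs right) expected-minus-measured gap."""
    gaps = {ear: expected_score(pta[ear]) - measured[ear] for ear in ("left", "right")}
    return min(gaps.values())

print(discrepancy_score({"left": 62.0, "right": 80.0}, {"left": 35.0, "right": 30.0}))
```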
Affiliation(s)
- Nobuko Kawakami
- Department of Behavioral Neurology and Cognitive Neuroscience, Tohoku University Graduate School of Medicine, Sendai, Japan
- Shigenori Kanno
- Department of Behavioral Neurology and Cognitive Neuroscience, Tohoku University Graduate School of Medicine, Sendai, Japan
- Shoko Ota
- Department of Behavioral Neurology and Cognitive Neuroscience, Tohoku University Graduate School of Medicine, Sendai, Japan
- Keisuke Morihara
- Department of Behavioral Neurology and Cognitive Neuroscience, Tohoku University Graduate School of Medicine, Sendai, Japan; Department of Neurology and Stroke Medicine, Yokohama City University, Yokohama, Japan
- Nanayo Ogawa
- Department of Behavioral Neurology and Cognitive Neuroscience, Tohoku University Graduate School of Medicine, Sendai, Japan
- Kyoko Suzuki
- Department of Behavioral Neurology and Cognitive Neuroscience, Tohoku University Graduate School of Medicine, Sendai, Japan
6. Hinkley LBN, Thompson M, Miller ZA, Borghesani V, Mizuiri D, Shwe W, Licata A, Ninomiya S, Lauricella M, Mandelli ML, Miller BL, Houde J, Gorno-Tempini ML, Nagarajan SS. Distinct neurophysiology during nonword repetition in logopenic and non-fluent variants of primary progressive aphasia. Hum Brain Mapp 2023; 44:4833-4847. PMID: 37516916; PMCID: PMC10472914; DOI: 10.1002/hbm.26408.
Abstract
Overlapping clinical presentations in primary progressive aphasia (PPA) variants present challenges for diagnosis and for understanding pathophysiology, particularly in the early stages of the disease when behavioral (speech) symptoms are not clearly evident. Divergent atrophy patterns (temporoparietal degeneration in the logopenic variant, lvPPA; frontal degeneration in the nonfluent variant, nfvPPA) can partially account for the differential speech production errors in the two groups in the later stages of the disease. While the existing dogma states that neurodegeneration is the root cause of compromised behavior and cortical activity in PPA, the extent to which neurophysiological signatures of speech dysfunction manifest independently of these divergent atrophy patterns remains unknown. We test the hypothesis that nonword deficits in lvPPA and nfvPPA arise from distinct patterns of neural oscillations that are unrelated to atrophy. We use a novel structure-function imaging approach integrating magnetoencephalographic imaging of neural oscillations during a nonword repetition task with voxel-based morphometry-derived measures of gray matter volume to isolate neural oscillation abnormalities independent of atrophy. We find reduced beta-band neural activity in left temporal regions associated with the late stages of auditory encoding unique to patients with lvPPA, and reduced high-gamma neural activity over left frontal regions associated with the early stages of motor preparation in patients with nfvPPA. Neither pattern of reduced cortical oscillations was explained by cortical atrophy in our statistical model. These findings highlight the importance of structure-function imaging in revealing neurophysiological sequelae in the early stages of dementia, when neither structural atrophy nor behavioral deficits are clinically distinct.
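The key statistical move, testing an oscillatory group difference while covarying out gray matter volume, can be sketched as a simple regression (our illustration with simulated data, not the authors' pipeline):

```python
# Minimal sketch (ours) of the structure-function logic described above:
# test a group difference in task-related oscillatory power while
# covarying out gray matter volume (GMV). All values are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 40
group = rng.integers(0, 2, n)                  # 0 = control, 1 = patient (hypothetical)
gmv = rng.normal(0.6, 0.05, n) - 0.03 * group  # gray matter volume, reduced in patients
beta_power = 1.0 - 0.4 * group + 0.5 * (gmv - 0.6) + rng.normal(0, 0.1, n)

X = sm.add_constant(np.column_stack([group, gmv]))
fit = sm.OLS(beta_power, X).fit()
# If the group coefficient stays significant with GMV in the model, the
# oscillatory reduction is not explained by atrophy alone.
print(fit.params, fit.pvalues)
```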
Affiliation(s)
- Leighton B. N. Hinkley
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, California, USA
- Megan Thompson
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, California, USA
- Zachary A. Miller
- Department of Neurology, University of California San Francisco, San Francisco, California, USA
- Danielle Mizuiri
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, California, USA
- Wendy Shwe
- Department of Neurology, University of California San Francisco, San Francisco, California, USA
- Abigail Licata
- Department of Neurology, University of California San Francisco, San Francisco, California, USA
- Seigo Ninomiya
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, California, USA
- Michael Lauricella
- Department of Neurology, University of California San Francisco, San Francisco, California, USA
- Bruce L. Miller
- Department of Neurology, University of California San Francisco, San Francisco, California, USA
- John Houde
- Department of Otolaryngology – Head and Neck Surgery, University of California San Francisco, San Francisco, California, USA
- Srikantan S. Nagarajan
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, California, USA
7. Elmer S, Kurthen I, Meyer M, Giroud N. A multidimensional characterization of the neurocognitive architecture underlying age-related temporal speech processing. Neuroimage 2023; 278:120285. PMID: 37481009; DOI: 10.1016/j.neuroimage.2023.120285.
Abstract
Healthy aging is often associated with speech comprehension difficulties in everyday life despite pure-tone hearing thresholds in the normative range. Against this background, we used a multidimensional approach to assess the functional and structural neural correlates underlying age-related temporal speech processing while controlling for pure-tone hearing acuity. We combined structural magnetic resonance imaging and electroencephalography, and collected behavioral data while younger and older adults completed a phonetic categorization and discrimination task with consonant-vowel syllables varying along a voice-onset time continuum. The behavioral results confirmed age-related temporal speech processing singularities, reflected in a shift of the boundary of the psychometric categorization function: older adults perceived more syllables with a short voice-onset time as /ta/ compared to younger adults. Furthermore, despite the absence of between-group differences in phonetic discrimination abilities, older adults demonstrated longer N100/P200 latencies as well as increased P200 amplitudes while processing the consonant-vowel syllables varying in voice-onset time. Finally, older adults also exhibited a divergent gray matter infrastructure in bilateral auditory-related and frontal brain regions, manifested in reduced cortical thickness and surface area. Notably, in the younger but not the older cohort, cortical surface area in these two anatomical clusters correlated with the categorization of consonant-vowel syllables with a short voice-onset time, suggesting the existence of a critical gray matter threshold that is crucial for consistent mapping of phonetic categories varying along the temporal dimension. Taken together, our results highlight the multifaceted dimensions of age-related temporal speech processing, and pave the way toward a better understanding of the relationships between hearing, speech, and the brain in older age.
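The reported boundary shift is a parameter of a fitted psychometric function. A minimal sketch (ours, with hypothetical response proportions) of estimating the categorization boundary along the VOT continuum:

```python
# Minimal sketch (ours, not the authors' code) of estimating a categorical
# boundary along a voice-onset-time continuum by fitting a logistic
# psychometric function to /ta/-response proportions.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(vot, boundary, slope):
    """P(/ta/ response) as a logistic function of VOT (ms)."""
    return 1.0 / (1.0 + np.exp(-slope * (vot - boundary)))

vot = np.array([0, 10, 20, 30, 40], dtype=float)   # ms, hypothetical continuum
p_ta = np.array([0.05, 0.20, 0.55, 0.90, 0.97])    # hypothetical proportions

(boundary, slope), _ = curve_fit(psychometric, vot, p_ta, p0=[20.0, 0.3])
print(f"category boundary: {boundary:.1f} ms VOT")
# A boundary shift between age groups would show up as a difference in
# this fitted parameter.
```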
Affiliation(s)
- Stefan Elmer
- Department of Computational Linguistics, Computational Neuroscience of Speech & Hearing, University of Zurich, Zurich, Switzerland; Competence Center Language & Medicine, University of Zurich, Zurich, Switzerland
- Ira Kurthen
- Department of Computational Linguistics, Computational Neuroscience of Speech & Hearing, University of Zurich, Zurich, Switzerland
- Martin Meyer
- Department of Comparative Language Science, University of Zurich, Zurich, Switzerland; Center for Neuroscience Zurich, University and ETH Zurich, Zurich, Switzerland; Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland; Cognitive Psychology Unit, Alpen-Adria University, Klagenfurt, Austria
- Nathalie Giroud
- Department of Computational Linguistics, Computational Neuroscience of Speech & Hearing, University of Zurich, Zurich, Switzerland; Center for Neuroscience Zurich, University and ETH Zurich, Zurich, Switzerland; Competence Center Language & Medicine, University of Zurich, Zurich, Switzerland
8. Le Stanc L, Youssov K, Giavazzi M, Sliwinski A, Bachoud-Lévi AC, Jacquemot C. Language disorders in patients with striatal lesions: deciphering the role of the striatum in language performance. Cortex 2023; 166:91-106. PMID: 37354871; DOI: 10.1016/j.cortex.2023.04.016.
Abstract
The classical neural model of language refers to a cortical network involving frontal, parietal and temporal regions. However, patients with subcortical lesions of the striatum have language difficulties. We investigated whether the striatum is directly involved in language or whether its role in decision-making has an indirect effect on language performance, by testing carriers of Huntington's disease (HD) mutations and controls. HD is a genetic neurodegenerative disease primarily affecting the striatum and causing language disorders. We asked carriers of the HD mutation in the premanifest (before clinical diagnosis) and early disease stages, and controls to perform two discrimination tasks, one involving linguistic and the other non-linguistic stimuli. We used the hierarchical drift diffusion model (HDDM) to analyze the participants' responses and to assess the decision and non-decision parameters separately. We hypothesized that any language deficits related to decision-making impairments would be reflected in the decision parameters of linguistic and non-linguistic tasks. We also assessed the relative contributions of both HDDM decision and non-decision parameters to the participants' behavioral data (response time and discriminability). Finally, we investigated whether the decision and non-decision parameters of the HDDM were correlated with brain atrophy. The HDDM analysis showed that patients with early HD have impaired decision parameters relative to controls, regardless of the task. In both tasks, decision parameters better explained the variance of response time and discriminability performance than non-decision parameters. In the linguistic task, decision parameters were positively correlated with gray matter volume in the ventral striatum and putamen, whereas non-decision parameters were not. Language impairment in patients with striatal atrophy is better explained by a deficit of decision-making than by a deficit of core linguistic processing. These results suggest that the striatum is involved in language through the modulation of decision-making, presumably by regulating the process of choice between linguistic alternatives.
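The decision/non-decision distinction at the heart of the HDDM analysis can be illustrated with a bare-bones drift diffusion simulation (ours, with hypothetical parameters, not the authors' hierarchical Bayesian fit):

```python
# Minimal sketch (ours) of the decision/non-decision split in a drift
# diffusion model: response time = non-decision time + time for noisy
# evidence to reach a boundary. Parameter values are hypothetical.
import numpy as np

def simulate_ddm(n_trials, drift, boundary, ndt, dt=1e-3, noise=1.0, seed=0):
    """Return response times (s) and choices (1 = upper, 0 = lower bound)."""
    rng = np.random.default_rng(seed)
    rts, choices = np.empty(n_trials), np.empty(n_trials, dtype=int)
    for i in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:                 # accumulate until a bound is hit
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts[i], choices[i] = t + ndt, int(x > 0)  # add non-decision time
    return rts, choices

# Slower decision parameters (lower drift) mimic the early-HD pattern the
# abstract describes, independent of non-decision (encoding/motor) time.
rt_ctrl, _ = simulate_ddm(500, drift=1.5, boundary=1.0, ndt=0.30)
rt_hd, _ = simulate_ddm(500, drift=0.8, boundary=1.0, ndt=0.30)
print(rt_ctrl.mean(), rt_hd.mean())
```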
Affiliation(s)
- Lorna Le Stanc
- Département d'Études Cognitives, École Normale Supérieure-PSL, Paris, France; Institut Mondor de Recherche Biomédicale, Inserm U955, Equipe E01 Neuropsychologie Interventionnelle, Créteil, France; Université Paris-Est Créteil, Faculté de Médecine, Créteil, France; Université Paris Cité, LaPsyDÉ, CNRS, Paris, France
- Katia Youssov
- Département d'Études Cognitives, École Normale Supérieure-PSL, Paris, France; Institut Mondor de Recherche Biomédicale, Inserm U955, Equipe E01 Neuropsychologie Interventionnelle, Créteil, France; Université Paris-Est Créteil, Faculté de Médecine, Créteil, France; AP-HP, Centre de Référence Maladie de Huntington, Service de Neurologie, Hôpital Henri Mondor-Albert Chenevier, Créteil, France
- Maria Giavazzi
- Département d'Études Cognitives, École Normale Supérieure-PSL, Paris, France; Institut Mondor de Recherche Biomédicale, Inserm U955, Equipe E01 Neuropsychologie Interventionnelle, Créteil, France; Université Paris-Est Créteil, Faculté de Médecine, Créteil, France
- Agnès Sliwinski
- Département d'Études Cognitives, École Normale Supérieure-PSL, Paris, France; Institut Mondor de Recherche Biomédicale, Inserm U955, Equipe E01 Neuropsychologie Interventionnelle, Créteil, France; Université Paris-Est Créteil, Faculté de Médecine, Créteil, France; AP-HP, Centre de Référence Maladie de Huntington, Service de Neurologie, Hôpital Henri Mondor-Albert Chenevier, Créteil, France
- Anne-Catherine Bachoud-Lévi
- Département d'Études Cognitives, École Normale Supérieure-PSL, Paris, France; Institut Mondor de Recherche Biomédicale, Inserm U955, Equipe E01 Neuropsychologie Interventionnelle, Créteil, France; Université Paris-Est Créteil, Faculté de Médecine, Créteil, France; AP-HP, Centre de Référence Maladie de Huntington, Service de Neurologie, Hôpital Henri Mondor-Albert Chenevier, Créteil, France
- Charlotte Jacquemot
- Département d'Études Cognitives, École Normale Supérieure-PSL, Paris, France; Institut Mondor de Recherche Biomédicale, Inserm U955, Equipe E01 Neuropsychologie Interventionnelle, Créteil, France; Université Paris-Est Créteil, Faculté de Médecine, Créteil, France
9. Hamadelseed O, Chan MKS, Wong MBF, Skutella T. Distinct neuroanatomical and neuropsychological features of Down syndrome compared to related neurodevelopmental disorders: a systematic review. Front Neurosci 2023; 17:1225228. PMID: 37600012; PMCID: PMC10436105; DOI: 10.3389/fnins.2023.1225228.
Abstract
Objectives: We critically review research findings on the unique changes in brain structure and cognitive function characteristic of Down syndrome (DS) and summarize the similarities and differences with other neurodevelopmental disorders such as Williams syndrome, 22q11.2 deletion syndrome, and fragile X syndrome.
Methods: We conducted a meta-analysis and systematic literature review of 84 studies identified by searching PubMed, Google Scholar, and Web of Science from 1977 to October 2022. The review focuses on the following issues: (1) specific neuroanatomic and histopathological features of DS as revealed by autopsy and modern neuroimaging modalities, (2) language and memory deficits in DS, (3) the relationships between these neuroanatomical and neuropsychological features, and (4) neuroanatomic and neuropsychological differences between DS and related neurodevelopmental syndromes.
Results: Numerous post-mortem and morphometric neuroimaging investigations of individuals with DS have reported complex changes in regional brain volumes, most notably in the hippocampal formation, temporal lobe, frontal lobe, parietal lobe, and cerebellum. Moreover, neuropsychological assessments have revealed deficits in language development, emotional regulation, and memory that reflect these structural changes and are more severe than expected from general cognitive dysfunction. Individuals with DS also show relative preservation of multiple cognitive, linguistic, and social domains compared to typically developing controls and individuals with other neurodevelopmental disorders. However, all of these neurodevelopmental disorders exhibit substantial heterogeneity among individuals.
Conclusion: People with Down syndrome demonstrate unique neurodevelopmental abnormalities but cannot be regarded as a homogeneous group. A comprehensive evaluation of individual intellectual skills is essential for all individuals with neurodevelopmental disorders in order to develop personalized care programs.
Affiliation(s)
- Osama Hamadelseed
- Department of Neuroanatomy, Institute of Anatomy and Cell Biology, University of Heidelberg, Heidelberg, Germany
- Mike K. S. Chan
- EW European Wellness Academy GmbH, Edenkoben, Germany
- Baden R&D Laboratories GmbH, Edenkoben, Germany
- Michelle B. F. Wong
- EW European Wellness Academy GmbH, Edenkoben, Germany
- Baden R&D Laboratories GmbH, Edenkoben, Germany
- Stellar Biomolecular Research GmbH, Edenkoben, Germany
- Thomas Skutella
- Department of Neuroanatomy, Institute of Anatomy and Cell Biology, University of Heidelberg, Heidelberg, Germany
10. Wang C, Zhang Y, Lim LG, Cao W, Zhang W, Wan X, Fan L, Liu Y, Zhang X, Tian Z, Liu X, Pan X, Zheng Y, Pan R, Tan Y, Zhang Z, McIntyre RS, Li Z, Ho RCM, Tang TB. An fNIRS investigation of novel expressed emotion stimulations in schizophrenia. Sci Rep 2023; 13:11141. PMID: 37429942; DOI: 10.1038/s41598-023-38057-1.
Abstract
Living in high expressed emotion (EE) environments tends to increase the relapse rate in schizophrenia (SZ). At present, the neural substrates responsible for responses to high EE in SZ remain poorly understood. Functional near-infrared spectroscopy (fNIRS) can quantitatively assess cortical hemodynamics and help elucidate the pathophysiology of psychiatric disorders. In this study, we designed novel low-EE (positivity and warmth) and high-EE (criticism, negative emotion, and hostility) stimulations, in the form of audio recordings, to investigate cortical hemodynamics. We used fNIRS to measure hemodynamic signals while participants listened to the recorded audio. Healthy controls (HCs) showed increased hemodynamic activation in the major language centers across EE stimulations, with stronger activation in Wernicke's area during the processing of negative emotional language. Compared to HCs, people with SZ exhibited smaller hemodynamic activation in the major language centers across EE stimulations. In addition, people with SZ showed weaker or insignificant hemodynamic deactivation in the medial prefrontal cortex. Notably, hemodynamic activation in SZ was negatively correlated with the negative syndrome scale score at high EE. Our findings suggest that the neural mechanisms in SZ are altered and disrupted, especially during negative emotional language processing. This supports the feasibility of using the designed EE stimulations to assess people who are vulnerable to high-EE environments, such as those with SZ. Furthermore, our findings provide preliminary evidence for future research on functional neuroimaging biomarkers for people with psychiatric disorders.
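For readers unfamiliar with fNIRS preprocessing, the step from optical measurements to the hemodynamic signals analyzed here typically uses the modified Beer-Lambert law. A minimal sketch follows (ours; the extinction coefficients, pathlength factor, and distance are illustrative placeholders, and the paper does not detail its own conversion):

```python
# Minimal sketch (ours) of the modified Beer-Lambert law step commonly used
# to turn fNIRS optical-density changes at two wavelengths into HbO/HbR
# concentration changes. Constants below are hypothetical placeholders.
import numpy as np

# rows: wavelengths (e.g., 760 nm, 850 nm); cols: [HbO, HbR] extinction coeffs
E = np.array([[1486.0, 3843.0],
              [2526.0, 1798.0]])   # hypothetical units: 1/(mM*cm)
d, dpf = 3.0, 6.0                  # source-detector distance (cm), differential pathlength factor

def mbll(delta_od):
    """delta_od: optical-density changes at the two wavelengths -> [dHbO, dHbR] (mM)."""
    return np.linalg.solve(E, delta_od / (d * dpf))

print(mbll(np.array([0.012, 0.018])))
```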
Affiliation(s)
- Lam Ghai Lim
- Department of Electrical and Robotics Engineering, School of Engineering, Monash University Malaysia, Jalan Lagoon Selatan, 47500 Bandar Sunway, Selangor, Malaysia
- Weiqi Cao
- Huaibei Normal University, Huaibei, China
- Wei Zhang
- Huaibei Mental Health Center, Huaibei, China
- Lijun Fan
- Huaibei Normal University, Huaibei, China
- Ying Liu
- Huaibei Normal University, Huaibei, China
- Xi Zhang
- Huaibei Mental Health Center, Huaibei, China
- Xiuzhi Pan
- Huaibei Normal University, Huaibei, China
- Yuan Zheng
- Huaibei Normal University, Huaibei, China
- Riyu Pan
- Anqing Normal University, Anqing, China
- Yilin Tan
- Huaibei Normal University, Huaibei, China
- Roger S McIntyre
- Mood Disorders Psychopharmacology Unit, Poul Hansen Family Centre for Depression, Toronto, Canada
- Department of Pharmacology and Toxicology, University of Toronto, Toronto, Canada
- Department of Psychiatry, University of Toronto, Toronto, Canada
- Brain and Cognition Discovery Foundation, Toronto, Canada
- Zhifei Li
- Institute for Health Innovation and Technology (iHealthtech), National University of Singapore, Singapore 117599, Singapore
- Roger C M Ho
- Institute for Health Innovation and Technology (iHealthtech), National University of Singapore, Singapore 117599, Singapore
- Department of Psychological Medicine, Yong Loo Lin School of Medicine, National University of Singapore, Singapore 119228, Singapore
- Tong Boon Tang
- Centre for Intelligent Signal and Imaging Research (CISIR), Universiti Teknologi PETRONAS, 32610 Seri Iskandar, Perak, Malaysia
11. Aveni K, Ahmed J, Borovsky A, McRae K, Jenkins ME, Sprengel K, Fraser JA, Orange JB, Knowles T, Roberts AC. Predictive language comprehension in Parkinson's disease. PLoS One 2023; 18:e0262504. PMID: 36753529; PMCID: PMC9907838; DOI: 10.1371/journal.pone.0262504.
Abstract
Verb and action knowledge deficits are reported in persons with Parkinson's disease (PD), even in the absence of dementia or mild cognitive impairment. However, the impact of these deficits on combinatorial semantic processing is less well understood. Following on previous verb and action knowledge findings, we tested the hypothesis that PD impairs the ability to integrate event-based thematic fit information during online sentence processing. Specifically, we anticipated that persons with PD with age-typical cognitive abilities would perform more poorly than healthy controls during a visual world paradigm task requiring participants to predict a target object constrained by the thematic fit of the agent-verb combination. Twenty-four PD and 24 healthy age-matched participants completed comprehensive neuropsychological assessments. We recorded participants' eye movements as they heard predictive sentences ("The fisherman rocks the boat") alongside target, agent-related, verb-related, and unrelated images. We tested effects of group (PD/control) on gaze using growth curve models. There were no significant differences between PD and control participants, suggesting that PD participants successfully and rapidly use combinatory thematic fit information to predict upcoming language. Baseline sentences with no predictive information (e.g., "Look at the drum") confirmed that the groups showed equivalent sentence processing and eye movement patterns. Additionally, we conducted an exploratory analysis contrasting PD and control performance on low-motion-content versus high-motion-content verbs. This analysis revealed fewer predictive fixations in high-motion sentences only for healthy older adults. PD participants may adapt to their disease by relying on spared, non-action-simulation-based language processing mechanisms, although this conclusion is speculative, as the analysis of high- versus low-motion items was highly limited by the study design. These findings provide novel evidence that individuals with PD match healthy adults in their ability to use verb meaning to predict upcoming nouns, despite previous findings of verb semantic impairment in PD across a variety of tasks.
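Growth curve analysis of this kind models fixation proportions over time with orthogonal polynomial time terms and participant random effects. A minimal sketch (ours, on simulated data, not the authors' model specification):

```python
# Minimal sketch (ours) of a growth curve analysis of fixation proportions:
# orthogonal polynomial time terms, group as a fixed effect, and participants
# as random effects. All data below are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
time = np.tile(np.arange(50), 48)                 # 50 time bins x 48 participants
subj = np.repeat(np.arange(48), 50)
group = (subj >= 24).astype(int)                  # 0 = control, 1 = PD (hypothetical)

# Orthogonal polynomial basis (linear, quadratic) via QR decomposition.
T = np.column_stack([time, time**2]).astype(float)
ot = np.linalg.qr(T - T.mean(axis=0))[0]

df = pd.DataFrame({
    "fix": 0.3 + 0.5 * ot[:, 0] - 0.2 * ot[:, 1] + 0.02 * rng.standard_normal(len(time)),
    "ot1": ot[:, 0], "ot2": ot[:, 1], "group": group, "subj": subj,
})

m = smf.mixedlm("fix ~ (ot1 + ot2) * group", df, groups=df["subj"]).fit()
print(m.summary())  # a group x time interaction would indicate divergent gaze curves
```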
Affiliation(s)
- Katharine Aveni
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States of America
- Juweiriya Ahmed
- Department of Psychology, Western University, London, ON, Canada
- Arielle Borovsky
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN, United States of America
- Ken McRae
- Department of Psychology, Western University, London, ON, Canada
- Mary E. Jenkins
- Department of Clinical Neurological Sciences, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
- Katherine Sprengel
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States of America
- J. Alexander Fraser
- Department of Clinical Neurological Sciences, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
- Department of Ophthalmology, Western University, St. Joseph's Health Care, London, ON, Canada
- Joseph B. Orange
- School of Communication Sciences and Disorders, Western University, London, ON, Canada
- Canadian Centre for Activity and Aging, Western University, London, ON, Canada
- Thea Knowles
- Department of Psychology, Western University, London, ON, Canada
- Angela C. Roberts
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States of America
- School of Communication Sciences and Disorders, Western University, London, ON, Canada
12. Murai SA, Riquimaroux H. Long-term changes in cortical representation through perceptual learning of spectrally degraded speech. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2023; 209:163-172. PMID: 36464716; DOI: 10.1007/s00359-022-01593-8.
Abstract
Listeners can adapt to acoustically degraded speech with perceptual training. This learning process over long periods underlies the rehabilitation of patients with hearing aids or cochlear implants. Perceptual learning of acoustically degraded speech has been associated with the frontotemporal cortices. However, the neural processes during and after long-term perceptual learning remain unclear. Here we conducted perceptual training with noise-vocoded speech sounds (NVSS), which are spectrally degraded signals, and measured cortical activity with functional magnetic resonance imaging across seven experimental days and at follow-up testing approximately 1 year later to investigate changes in neural activation patterns. We demonstrated that young adult participants (n = 5) improved their performance across the seven experimental days, and the gains were maintained after 10 months or more. Representational similarity analysis showed that the neural activation patterns for NVSS relative to clear speech in the left posterior superior temporal sulcus (pSTS) differed significantly across the seven training days, accompanied by neural changes in frontal cortices. Moreover, the distinct activation patterns for NVSS in the frontotemporal cortices were still observed 10-13 months after training. We therefore propose that perceptual training can induce plastic changes and long-term effects on neural representations of the trained degraded speech in the frontotemporal cortices. These behavioral improvements and neural changes provide insights into the cortical mechanisms underlying adaptive processes in difficult listening situations and the long-term rehabilitation of auditory disorders.
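Representational similarity analysis compares geometry rather than raw activation: condition-by-condition dissimilarity matrices (RDMs) are built per session and then correlated. A minimal sketch (ours, with random patterns standing in for fMRI data):

```python
# Minimal sketch (ours) of the representational similarity logic described
# above: build representational dissimilarity matrices (RDMs) from condition
# response patterns and compare RDMs across training days.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_conditions, n_voxels = 8, 200

def rdm(patterns):
    """Condition x voxel patterns -> condensed RDM (1 - Pearson r)."""
    return pdist(patterns, metric="correlation")

day1 = rng.standard_normal((n_conditions, n_voxels))           # hypothetical patterns
day7 = day1 + 0.8 * rng.standard_normal((n_conditions, n_voxels))

rho, p = spearmanr(rdm(day1), rdm(day7))
print(f"RDM similarity day1 vs day7: rho={rho:.2f}, p={p:.3f}")
# A change in this similarity across days would indicate reorganized
# neural representations of the trained degraded speech.
```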
Affiliation(s)
- Shota A Murai
- Faculty of Life and Medical Sciences, Doshisha University, 1-3 Miyakodani, Tatara, Kyotanabe, Kyoto, 610-0321, Japan; International Research Center for Neurointelligence (WPI-IRCN), The University of Tokyo Institutes for Advanced Study, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
- Hiroshi Riquimaroux
- Faculty of Life and Medical Sciences, Doshisha University, 1-3 Miyakodani, Tatara, Kyotanabe, Kyoto, 610-0321, Japan
13. Moinuddin KA, Havugimana F, Al-Fahad R, Bidelman GM, Yeasin M. Unraveling Spatial-Spectral Dynamics of Speech Categorization Speed Using Convolutional Neural Networks. Brain Sci 2022; 13:75. PMID: 36672055; PMCID: PMC9856675; DOI: 10.3390/brainsci13010075.
Abstract
The process of categorizing sounds into distinct phonetic categories is known as categorical perception (CP). Response times (RTs) provide a measure of perceptual difficulty during labeling decisions (i.e., categorization). RT is quasi-stochastic in nature due to individuality and variations in perceptual tasks. To identify the sources of RT variation in CP, we built models to decode the brain regions and frequency bands driving fast, medium, and slow response decision speeds. In particular, we implemented a parameter-optimized convolutional neural network (CNN) to classify listeners' behavioral RTs from their neural EEG data. We adopted visual interpretation of model responses using Guided-GradCAM to identify the spatial-spectral correlates of RT. Our framework includes (but is not limited to): (i) a data augmentation technique designed to reduce noise and control the overall variance of the EEG dataset; (ii) bandpower topomaps to learn the spatial-spectral representation using the CNN; (iii) large-scale Bayesian hyperparameter optimization to find the best-performing CNN model; and (iv) ANOVA and post hoc analysis of Guided-GradCAM activation values to measure the effect of neural regions and frequency bands on behavioral responses. Using this framework, we observe that α-β (10-20 Hz) activity over left frontal, right prefrontal/frontal, and right cerebellar regions is correlated with RT variation. Our results indicate that attention, template matching, temporal prediction of acoustics, motor control, and decision uncertainty are the most probable factors in RT variation.
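A CNN over bandpower topomaps treats each frequency band as an input channel over a scalp map. A minimal sketch of such an architecture (ours, far simpler than the paper's Bayesian-optimized model; layer sizes are arbitrary):

```python
# Minimal sketch (ours) of a CNN that classifies bandpower topomaps into
# fast/medium/slow RT classes; one input channel per frequency band.
import torch
import torch.nn as nn

class TopomapCNN(nn.Module):
    def __init__(self, n_bands=5, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_bands, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):                    # x: (batch, bands, H, W)
        return self.classifier(self.features(x).flatten(1))

model = TopomapCNN()
logits = model(torch.randn(8, 5, 32, 32))   # 8 hypothetical topomap stacks
print(logits.shape)                          # -> torch.Size([8, 3])
```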
Affiliation(s)
- Felix Havugimana
- Department of EECE, University of Memphis, Memphis, TN 38152, USA
- Rakib Al-Fahad
- Department of EECE, University of Memphis, Memphis, TN 38152, USA
- Gavin M. Bidelman
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN 47408, USA
- Mohammed Yeasin
- Department of EECE, University of Memphis, Memphis, TN 38152, USA
14. Chen Y, Tang E, Ding H, Zhang Y. Auditory Pitch Perception in Autism Spectrum Disorder: A Systematic Review and Meta-Analysis. J Speech Lang Hear Res 2022; 65:4866-4886. PMID: 36450443; DOI: 10.1044/2022_jslhr-22-00254.
Abstract
PURPOSE: Pitch plays an important role in auditory perception of music and language. This study provides a systematic review with meta-analysis to investigate whether individuals with autism spectrum disorder (ASD) have enhanced pitch processing ability and to identify the potential factors associated with processing differences between ASD and neurotypicals.
METHOD: We conducted a systematic search through six major electronic databases focusing on studies that used nonspeech stimuli to provide a qualitative and quantitative assessment across existing studies on pitch perception in autism. We identified potential participant- and methodology-related moderators and conducted metaregression analyses using mixed-effects models.
RESULTS: On the basis of 22 studies with a total of 464 participants with ASD, we obtained a small-to-medium positive effect size (g = 0.26) in support of enhanced pitch perception in ASD. Moreover, the mean age and nonverbal IQ of participants were found to significantly moderate the between-studies heterogeneity.
CONCLUSIONS: Our study provides the first meta-analysis on auditory pitch perception in ASD and demonstrates the existence of different developmental trajectories between autistic individuals and neurotypicals. In addition to age, nonverbal ability is found to be a significant contributor to the lower-level/local processing bias in ASD. We highlight the need for further investigation of pitch perception in ASD under challenging listening conditions. Future neurophysiological and brain imaging studies with a longitudinal design are also needed to better understand the underlying neural mechanisms of atypical pitch processing in ASD and to help guide auditory-based interventions for improving language and social functioning.
SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.21614271
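The meta-analytic quantities in the RESULTS (a pooled Hedges' g under a random-effects model) can be reproduced mechanically. A minimal sketch (ours, with hypothetical study summaries; the paper's actual model is a mixed-effects metaregression):

```python
# Minimal sketch (ours) of the meta-analytic machinery described above:
# Hedges' g per study and a DerSimonian-Laird random-effects pooled effect.
import numpy as np

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference with small-sample correction; returns (g, var)."""
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)           # Hedges' correction factor
    v = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    return j * d, (j**2) * v

def random_effects(g, v):
    """DerSimonian-Laird pooled estimate from per-study effects and variances."""
    w = 1 / v
    q = np.sum(w * (g - np.sum(w * g) / np.sum(w))**2)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(g) - 1)) / c)   # between-study variance
    w_star = 1 / (v + tau2)
    return np.sum(w_star * g) / np.sum(w_star)

# Three hypothetical studies: (mean, sd, n) for ASD vs neurotypical groups.
studies = [(75, 12, 20, 70, 12, 22), (68, 10, 15, 66, 11, 15), (82, 9, 30, 76, 10, 28)]
gv = np.array([hedges_g(*s) for s in studies])
print(random_effects(gv[:, 0], gv[:, 1]))
```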
Affiliation(s)
- Yu Chen
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Enze Tang
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Hongwei Ding
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Yang Zhang
- Department of Speech-Language-Hearing Sciences and Masonic Institute for the Developing Brain, University of Minnesota, Minneapolis
15. Fatić S, Stanojević N, Stokić M, Nenadović V, Jeličić L, Bilibajkić R, Gavrilović A, Maksimović S, Adamović T, Subotić M. Electroencephalography correlates of word and non-word listening in children with specific language impairment: An observational study. Medicine (Baltimore) 2022; 101:e31840. PMID: 36401430; PMCID: PMC9678566; DOI: 10.1097/md.0000000000031840.
Abstract
Auditory processing in children diagnosed with specific language impairment (SLI) is atypical and characterized by reduced brain activation compared to typically developing (TD) children. In typical speech and language development, frontal, temporal, and posterior regions are engaged during single-word listening, whereas non-words are not perceived or produced often enough for the associated neuronal activation to form stable network connections. This study aimed to investigate electrophysiological cortical activity in the alpha rhythm while listening to words and non-words in children with SLI compared to TD children. The participants were 50 children with SLI, aged 4 to 6 years, and 50 age-matched TD children. The groups were divided into two subgroups: children aged 4.0 to 5.0 years (E = 25, C = 25) and children aged 5.0 to 6.0 years (E = 25, C = 25). The younger subgroup did not show statistically significant differences in alpha spectral power during word or non-word listening. In contrast, in the older subgroup, differences for word and non-word listening were present in the prefrontal, temporal, and parieto-occipital regions bilaterally. Children with SLI showed reduced alpha desynchronization during word and non-word listening compared with TD children. Non-word perception engages more brain regions because the stimuli are unfamiliar. The lack of adequate alpha desynchronization is consistent with established difficulties in lexical and phonological processing at the behavioral level in children with SLI.
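The core measure here, alpha spectral power and its event-related desynchronization (ERD) relative to baseline, can be sketched in a few lines (ours, on placeholder signals; the study's exact preprocessing is not reproduced):

```python
# Minimal sketch (ours) of alpha-band (8-12 Hz) spectral power during
# listening, and event-related desynchronization (ERD) versus baseline.
import numpy as np
from scipy.signal import welch

fs = 250                                     # sampling rate (Hz), hypothetical
rng = np.random.default_rng(3)
baseline = rng.standard_normal(fs * 2)       # 2 s of baseline EEG (placeholder)
listening = rng.standard_normal(fs * 2)      # 2 s during word listening (placeholder)

def alpha_power(x):
    f, pxx = welch(x, fs=fs, nperseg=fs)
    band = (f >= 8) & (f <= 12)
    return pxx[band].mean()

erd = 100 * (alpha_power(listening) - alpha_power(baseline)) / alpha_power(baseline)
print(f"alpha ERD: {erd:.1f} %")             # negative values = desynchronization;
# the abstract reports that this desynchronization is weaker in children with SLI
```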
Affiliation(s)
- Saška Fatić
- Department for Cognitive Neuroscience, Research and Development Institute "Life Activities Advancement Center", Belgrade, Serbia
- Department of Speech, Language, and Hearing Sciences, Institute for Experimental Phonetics and Speech Pathology "Đorđe Kostić", Belgrade, Serbia
- Correspondence: Saška Fatić, Department for Cognitive Neuroscience, Research and Development Institute "Life Activities Advancement Center", Gospodar Jovanova 35, Belgrade 11000, Serbia
- Nina Stanojević
- Department for Cognitive Neuroscience, Research and Development Institute "Life Activities Advancement Center", Belgrade, Serbia
- Department of Speech, Language, and Hearing Sciences, Institute for Experimental Phonetics and Speech Pathology "Đorđe Kostić", Belgrade, Serbia
- Miodrag Stokić
- University of Belgrade, Faculty of Biology, Belgrade, Serbia
- Vanja Nenadović
- Department of Speech, Language, and Hearing Sciences, Institute for Experimental Phonetics and Speech Pathology "Đorđe Kostić", Belgrade, Serbia
- Ljiljana Jeličić
- Department for Cognitive Neuroscience, Research and Development Institute "Life Activities Advancement Center", Belgrade, Serbia
- Department of Speech, Language, and Hearing Sciences, Institute for Experimental Phonetics and Speech Pathology "Đorđe Kostić", Belgrade, Serbia
- Ružica Bilibajkić
- Department for Cognitive Neuroscience, Research and Development Institute "Life Activities Advancement Center", Belgrade, Serbia
- Aleksandar Gavrilović
- Faculty of Medical Sciences, Department of Neurology, University of Kragujevac, Kragujevac, Serbia
- Clinic of Neurology, Clinical Center Kragujevac, Kragujevac, Serbia
- Slavica Maksimović
- Department for Cognitive Neuroscience, Research and Development Institute "Life Activities Advancement Center", Belgrade, Serbia
- Department of Speech, Language, and Hearing Sciences, Institute for Experimental Phonetics and Speech Pathology "Đorđe Kostić", Belgrade, Serbia
- Tatjana Adamović
- Department for Cognitive Neuroscience, Research and Development Institute "Life Activities Advancement Center", Belgrade, Serbia
- Department of Speech, Language, and Hearing Sciences, Institute for Experimental Phonetics and Speech Pathology "Đorđe Kostić", Belgrade, Serbia
- Miško Subotić
- Department for Cognitive Neuroscience, Research and Development Institute "Life Activities Advancement Center", Belgrade, Serbia
16. Specific disruption of the ventral anterior temporo-frontal network reveals key implications for language comprehension and cognition. Commun Biol 2022; 5:1077. PMID: 36217017; PMCID: PMC9551096; DOI: 10.1038/s42003-022-03983-9.
Abstract
Recent investigations have raised the question of the role of the anterior lateral temporal cortex in language processing (the ventral language network). Here we present the language and overall cognitive performance of a rare male patient with a chronic middle cerebral artery cerebrovascular accident and a well-documented lesion restricted to the anterior temporal cortex and its connections, via the extreme capsule, with the pars triangularis of the inferior frontal gyrus (i.e., Broca's region). The performance of this unique patient is compared with that of two male patients with chronic middle cerebral artery cerebrovascular accidents damaging the classic dorsal posterior temporo-parietal language system. Diffusion tensor imaging is used to reconstruct the relevant white matter tracts of the three patients, which are also compared with those of 10 healthy individuals. The patient with the anterior temporo-frontal lesion presents with flawless and fluent speech but selective impairment in accessing lexico-semantic information, in sharp contrast to the impairments in speech, sentence comprehension, and repetition observed after lesions to the classic dorsal language system. The present results underline the contribution of the ventral language stream to lexico-semantic processing and higher cognitive functions, such as active selective controlled retrieval. Neuropsychological profiling and clinical DTI of three stroke patients highlight the importance of the ventral language system in normal language comprehension and cognition.
17. Just give it time: Differential effects of disruption and delay on perceptual learning. Atten Percept Psychophys 2022; 84:960-980. PMID: 35277847; DOI: 10.3758/s13414-022-02463-w.
Abstract
Speech perception and production are critical skills when acquiring a new language. However, the nature of the relationship between these two processes is unclear, particularly for non-native speech sound contrasts. Although it has been assumed that perception and production are supportive, recent evidence has demonstrated that, under some circumstances, production can disrupt perceptual learning. Specifically, producing the to-be-learned contrast on each trial can disrupt perceptual learning of that contrast. Here, we treat speech perception and speech production as separate tasks. From this perspective, perceptual learning studies that include a production component on each trial create a task switch. We report two experiments that test how task switching can disrupt perceptual learning. One experiment demonstrates that the disruption caused by switching to production is sensitive to time delays: Increasing the delay between perception and production on a trial can reduce and even eliminate disruption of perceptual learning. The second experiment shows that if a task other than producing the to-be-learned contrast is imposed, the task-switching component of disruption is not influenced by a delay. These experiments provide a new understanding of the relationship between speech perception and speech production, and clarify conditions under which the two cooperate or compete.
18. Enhancement of speech-in-noise comprehension through vibrotactile stimulation at the syllabic rate. Proc Natl Acad Sci U S A 2022; 119:e2117000119. PMID: 35312362; PMCID: PMC9060510; DOI: 10.1073/pnas.2117000119.
Abstract
Syllables are important building blocks of speech. They occur at a rate between 4 and 8 Hz, corresponding to the theta frequency range of neural activity in the cerebral cortex. When listening to speech, theta activity becomes aligned to the syllabic rhythm, presumably aiding in parsing a speech signal into distinct syllables. However, this neural activity can be influenced not only by sound but also by somatosensory information. Here, we show that the presentation of vibrotactile signals at the syllabic rate can enhance the comprehension of speech in background noise. We further provide evidence that this multisensory enhancement of speech comprehension reflects the multisensory integration of auditory and tactile information in the auditory cortex.

Speech unfolds over distinct temporal scales, in particular those related to the rhythm of phonemes, syllables, and words. When a person listens to continuous speech, the syllabic rhythm is tracked by neural activity in the theta frequency range. This tracking plays a functional role in speech processing: influencing the theta activity through transcranial current stimulation, for instance, can impact speech perception. The theta-band activity in the auditory cortex can also be modulated through the somatosensory system, but the effect on speech processing has remained unclear. Here, we show that vibrotactile feedback presented at the rate of syllables can modulate and, in fact, enhance the comprehension of a speech signal in background noise. The enhancement occurs when vibrotactile pulses occur at the perceptual centers of the syllables, whereas a temporal delay between the vibrotactile signals and the speech stream can lead to a lower level of speech comprehension. We further investigate the neural mechanisms underlying the audiotactile integration through electroencephalographic (EEG) recordings. We find that the audiotactile stimulation modulates the neural response to the speech rhythm, as well as the neural response to the vibrotactile pulses. The modulations of these neural activities reflect the behavioral effects on speech comprehension. Moreover, we demonstrate that speech comprehension can be predicted by particular aspects of the neural responses. Our results evidence a role of vibrotactile information in speech processing and may have applications in future auditory prostheses.
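One practical question the paradigm raises is how to derive syllabic-rate pulse timings from speech. A minimal sketch of a simple envelope-based approach (ours; the study itself aligned pulses to perceptual centers of syllables, which is a different computation):

```python
# Minimal sketch (ours) of deriving syllabic-rate (theta, 4-8 Hz) timing
# signals from a speech waveform for vibrotactile presentation.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, resample_poly

fs = 16000
t = np.arange(0, 2.0, 1 / fs)
speech = np.sin(2 * np.pi * 150 * t) * (1 + np.sin(2 * np.pi * 5 * t))  # toy signal

envelope = np.abs(hilbert(speech))                  # broadband amplitude envelope
env_fs = 100
env = resample_poly(envelope, 1, fs // env_fs)      # downsample envelope to 100 Hz
b, a = butter(2, [4, 8], btype="band", fs=env_fs)   # theta-band filter
theta_env = filtfilt(b, a, env)

# Candidate pulse times: local maxima of the theta-band envelope.
peaks = np.flatnonzero((theta_env[1:-1] > theta_env[:-2]) &
                       (theta_env[1:-1] > theta_env[2:])) + 1
print(peaks / env_fs)                               # pulse times in seconds
```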
Collapse
|
19
|
Tamura S, Hirose N, Mitsudo T, Hoaki N, Nakamura I, Onitsuka T, Hirano Y. Multi-modal imaging of the auditory-larynx motor network for voicing perception. Neuroimage 2022; 251:118981. [PMID: 35150835 DOI: 10.1016/j.neuroimage.2022.118981] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2021] [Revised: 12/20/2021] [Accepted: 02/07/2022] [Indexed: 10/19/2022] Open
Abstract
Voicing is one of the most important characteristics of phonetic speech sounds. Despite its importance, the mechanisms of voicing perception remain largely unknown. To explore auditory-motor networks associated with voicing perception, we first examined the brain regions that showed common activity for voicing production and perception using functional magnetic resonance imaging. The results indicated that the auditory and speech motor areas, together with the operculum parietale 4 (OP4), were activated during both voicing production and perception. Second, we used magnetoencephalography to examine the dynamic functional connectivity of the auditory-motor networks during a perceptual categorization task on /da/-/ta/ continuum stimuli varying in voice onset time (VOT) from 0 to 40 ms in 10-ms steps. Significant functional connectivity from the auditory cortical regions to the larynx motor area via OP4 was observed only when perceiving the stimulus with a VOT of 30 ms. In addition, regional activity analysis showed that the neural representation of VOT in the auditory cortical regions was mostly correlated with categorical perception of voicing but did not reflect the perception of the stimulus with a VOT of 30 ms. We suggest that the larynx motor area, which is considered to play a crucial role in voicing production, contributes to categorical perception of voicing by complementing the temporal processing in the auditory cortical regions.
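The categorical perception implied by the /da/-/ta/ continuum is conventionally summarized by fitting a logistic psychometric function to identification rates across VOT steps. The sketch below does this with invented response proportions; only the 0-40 ms continuum in 10-ms steps follows the abstract.

# Sketch: fit a logistic psychometric function to /da/-/ta/ categorization
# rates across the VOT continuum. Response proportions are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

vot = np.array([0., 10., 20., 30., 40.])          # ms, per the abstract
p_ta = np.array([0.05, 0.10, 0.45, 0.80, 0.95])   # invented /ta/ rates

def logistic(x, x0, k):
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

(x0, k), _ = curve_fit(logistic, vot, p_ta, p0=[20.0, 0.3])
print(f"category boundary ~ {x0:.1f} ms VOT, slope {k:.2f}")
# A stimulus near the fitted boundary is the ambiguous one on which
# motor-area contributions would be most likely to matter.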
Collapse
Affiliation(s)
- Shunsuke Tamura
- Department of Neuropsychiatry, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashiku, Fukuoka 812-8582, Japan.
| | - Nobuyuki Hirose
- Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan
| | - Takako Mitsudo
- Department of Neuropsychiatry, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashiku, Fukuoka 812-8582, Japan
| | | | - Itta Nakamura
- Department of Neuropsychiatry, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashiku, Fukuoka 812-8582, Japan
| | - Toshiaki Onitsuka
- Department of Neuropsychiatry, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashiku, Fukuoka 812-8582, Japan
| | - Yoji Hirano
- Department of Neuropsychiatry, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashiku, Fukuoka 812-8582, Japan; Neural Dynamics Laboratory, Research Service, VA Boston Healthcare System, and Department of Psychiatry, Harvard Medical School, Boston, United States
| |
Collapse
|
20
|
Meier EL. The role of disrupted functional connectivity in aphasia. HANDBOOK OF CLINICAL NEUROLOGY 2022; 185:99-119. [PMID: 35078613 DOI: 10.1016/b978-0-12-823384-9.00005-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Language is one of the most complex and specialized higher cognitive processes. Brain damage to the distributed, primarily left-lateralized language network can result in aphasia, a neurologic disorder characterized by receptive and/or expressive deficits in spoken and/or written language. Most often, aphasia is the consequence of stroke, termed poststroke aphasia (PSA); yet aphasia can also manifest due to neurodegenerative disease, specifically, a disorder called primary progressive aphasia (PPA). In recent years, functional connectivity neuroimaging studies have provided emerging evidence supporting theories regarding the relationships between language impairments, structural brain damage, and functional network properties in these two disorders. This chapter reviews the current evidence for the "network phenotype of stroke injury" hypothesis (Siegel et al., 2016) as it pertains to PSA and the "network degeneration hypothesis" (Seeley et al., 2009) as it pertains to PPA. Methodologic considerations for functional connectivity studies, limitations of the current functional connectivity literature in aphasia, and future directions are also discussed.
Collapse
Affiliation(s)
- Erin L Meier
- Department of Communication Sciences and Disorders, Northeastern University, Boston, MA, United States.
| |
Collapse
|
21
|
Benetti S, Collignon O. Cross-modal integration and plasticity in the superior temporal cortex. HANDBOOK OF CLINICAL NEUROLOGY 2022; 187:127-143. [PMID: 35964967 DOI: 10.1016/b978-0-12-823493-8.00026-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
In congenitally deaf people, temporal regions typically believed to be primarily auditory enhance their response to nonauditory information. The neural mechanisms and functional principles underlying this phenomenon, as well as its impact on auditory recovery after sensory restoration, remain debated. In this chapter, we demonstrate that the cross-modal recruitment of temporal regions by visual inputs in congenitally deaf people follows organizational principles known to be present in the hearing brain. We propose that the functional and structural mechanisms allowing optimal convergence of multisensory information in the temporal cortex of hearing people also provide the neural scaffolding for feeding visual or tactile information into the deafened temporal areas. Innate in nature, such anatomo-functional links between the auditory and other sensory systems would represent the common substrate of both early multisensory integration and the expression of selective cross-modal plasticity in the superior temporal cortex.
Collapse
Affiliation(s)
- Stefania Benetti
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Trento, Italy
| | - Olivier Collignon
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Trento, Italy; Institute for Research in Psychology and Neuroscience, Faculty of Psychology and Educational Science, UC Louvain, Louvain-la-Neuve, Belgium.
| |
Collapse
|
22
|
Hierarchical cortical networks of "voice patches" for processing voices in human brain. Proc Natl Acad Sci U S A 2021; 118:2113887118. [PMID: 34930846 DOI: 10.1073/pnas.2113887118] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/11/2021] [Indexed: 12/26/2022] Open
Abstract
Humans have an extraordinary ability to recognize and differentiate voices. It remains unclear whether voices are uniquely processed in the human brain. To explore the underlying neural mechanisms of voice processing, we recorded electrocorticographic signals from intracranial electrodes in epilepsy patients while they listened to six different categories of voice and nonvoice sounds. Subregions in the temporal lobe exhibited preferences for distinct voice stimuli; these were defined as "voice patches." Latency analyses suggested a dual hierarchical organization of the voice patches. We also found that the voice patches were functionally connected under both task-engaged and resting states. Furthermore, the left motor areas were coactivated and correlated with the temporal voice patches during the sound-listening task. Taken together, this work reveals hierarchical cortical networks in the human brain for processing human voices.
Collapse
|
23
|
Zeng HH, Huang JF, Li JR, Shen Z, Gong N, Wen YQ, Wang L, Poo MM. Distinct neuron populations for simple and compound calls in the primary auditory cortex of awake marmosets. Natl Sci Rev 2021; 8:nwab126. [PMID: 34876995 PMCID: PMC8645005 DOI: 10.1093/nsr/nwab126] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2021] [Revised: 06/11/2021] [Accepted: 07/04/2021] [Indexed: 11/12/2022] Open
Abstract
Marmosets are highly social non-human primates that live in families. They exhibit rich vocalization, but the neural basis underlying this complex vocal communication is largely unknown. Here we report the existence of specific neuron populations in marmoset A1 that respond selectively to distinct simple or compound calls made by conspecific marmosets. These neurons were spatially dispersed within A1 but distinct from those responsive to pure tones. Call-selective responses were markedly diminished when individual domains of the call were deleted or the domain sequence was altered, indicating the importance of the global rather than local spectral-temporal properties of the sound. Compound call-selective responses also disappeared when the sequence of the two simple-call components was reversed or their interval was extended beyond 1 s. Light anesthesia largely abolished call-selective responses. Our findings demonstrate extensive inhibitory and facilitatory interactions among call-evoked responses, and provide the basis for further study of circuit mechanisms underlying vocal communication in awake non-human primates.
Collapse
Affiliation(s)
- Huan-huan Zeng
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, State Key Laboratory of Neuroscience, CAS Key Laboratory of Primate Neurobiology, Chinese Academy of Sciences, Shanghai 200031, China
- Shanghai Center for Brain Science and Brain-Inspired Intelligence Technology, Shanghai 200031, China
| | - Jun-feng Huang
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, State Key Laboratory of Neuroscience, CAS Key Laboratory of Primate Neurobiology, Chinese Academy of Sciences, Shanghai 200031, China
- University of Chinese Academy of Sciences, Beijing 100086, China
- Shanghai Center for Brain Science and Brain-Inspired Intelligence Technology, Shanghai 200031, China
| | - Jun-ru Li
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, State Key Laboratory of Neuroscience, CAS Key Laboratory of Primate Neurobiology, Chinese Academy of Sciences, Shanghai 200031, China
- Shanghai Center for Brain Science and Brain-Inspired Intelligence Technology, Shanghai 200031, China
| | - Zhiming Shen
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, State Key Laboratory of Neuroscience, CAS Key Laboratory of Primate Neurobiology, Chinese Academy of Sciences, Shanghai 200031, China
- Shanghai Center for Brain Science and Brain-Inspired Intelligence Technology, Shanghai 200031, China
| | - Neng Gong
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, State Key Laboratory of Neuroscience, CAS Key Laboratory of Primate Neurobiology, Chinese Academy of Sciences, Shanghai 200031, China
- Shanghai Center for Brain Science and Brain-Inspired Intelligence Technology, Shanghai 200031, China
| | - Yun-qing Wen
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, State Key Laboratory of Neuroscience, CAS Key Laboratory of Primate Neurobiology, Chinese Academy of Sciences, Shanghai 200031, China
- Shanghai Center for Brain Science and Brain-Inspired Intelligence Technology, Shanghai 200031, China
| | | | | |
Collapse
|
24
|
Lee DY, Lee M, Lee SW. Decoding Imagined Speech Based on Deep Metric Learning for Intuitive BCI Communication. IEEE Trans Neural Syst Rehabil Eng 2021; 29:1363-1374. [PMID: 34255630 DOI: 10.1109/tnsre.2021.3096874] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Imagined speech is a highly promising paradigm due to its intuitive application and multiclass scalability in the field of brain-computer interfaces. However, optimal feature extraction and classifiers have not yet been established. Furthermore, retraining still requires a large number of trials when new classes are added. The aims of this study were (i) to increase the classification performance for imagined speech and (ii) to apply a new class to a pretrained classifier with a small number of trials. We propose a novel framework based on deep metric learning that learns the distance by comparing the similarity between samples. We also applied the instantaneous frequency and spectral entropy, features used for speech signals, to electroencephalography signals recorded during imagined speech. The method was evaluated on two public datasets (6-class Coretto DB and 5-class BCI Competition DB). We achieved a 6-class accuracy of 45.00 ± 3.13% and a 5-class accuracy of 48.10 ± 3.68% using the proposed method, which significantly outperformed state-of-the-art methods. Additionally, we verified that the new class could be detected through incremental learning with a small number of trials. As a result, the average accuracy was 44.50 ± 0.26% for Coretto DB and 47.12 ± 0.27% for BCI Competition DB, similar to the baseline accuracy without incremental learning. Our results show that accuracy can be greatly improved, even with a small number of trials, by selecting appropriate features from imagined speech. The proposed framework could be used directly to help construct an extensible, intuitive communication system based on brain-computer interfaces.
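The two signal features the abstract borrows from speech analysis, instantaneous frequency and spectral entropy, can be computed from a single EEG channel as sketched below. The sampling rate and the random placeholder data are assumptions; the deep-metric-learning classifier itself is not shown.

# Sketch: instantaneous frequency (from the analytic signal) and spectral
# entropy for one EEG channel. Windowing and band choices are omitted.
import numpy as np
from scipy.signal import hilbert, welch

fs = 250.0
eeg = np.random.randn(int(5 * fs))              # placeholder single-channel EEG

analytic = hilbert(eeg)
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)   # Hz, sample-by-sample

f, psd = welch(eeg, fs=fs, nperseg=256)
p = psd / psd.sum()                             # normalize PSD to a distribution
spectral_entropy = -np.sum(p * np.log2(p + 1e-12)) / np.log2(len(p))

features = [inst_freq.mean(), inst_freq.std(), spectral_entropy]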
Collapse
|
25
|
Yao B, Taylor JR, Banks B, Kotz SA. Reading direct speech quotes increases theta phase-locking: Evidence for cortical tracking of inner speech? Neuroimage 2021; 239:118313. [PMID: 34175425 DOI: 10.1016/j.neuroimage.2021.118313] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2021] [Revised: 05/28/2021] [Accepted: 06/24/2021] [Indexed: 11/25/2022] Open
Abstract
Growing evidence shows that theta-band (4-7 Hz) activity in the auditory cortex phase-locks to rhythms of overt speech. Does theta activity also encode the rhythmic dynamics of inner speech? Previous research established that silent reading of direct speech quotes (e.g., Mary said: "This dress is lovely!") elicits more vivid inner speech than indirect speech quotes (e.g., Mary said that the dress was lovely). As we cannot directly track the phase alignment between theta activity and inner speech over time, we used EEG to measure the brain's phase-locked responses to the onset of speech quote reading. We found that direct (vs. indirect) quote reading was associated with increased theta phase synchrony over trials at 250-500 ms post-reading onset, with sources of the evoked activity estimated in the speech processing network. An eye-tracking control experiment confirmed that increased theta phase synchrony in direct quote reading was not driven by eye movement patterns, and more likely reflects synchronous phase resetting at the onset of inner speech. These findings suggest a functional role of theta phase modulation in reading-induced inner speech.
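Inter-trial theta phase synchrony of the kind contrasted here between direct and indirect quotes is typically quantified as the length of the mean unit phasor across trials. A minimal sketch, assuming band-pass-filtered epochs and placeholder data:

# Sketch: inter-trial theta phase coherence (ITPC) around reading onset.
# Epoch shapes and the 4-7 Hz band edges are assumptions for illustration.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0
n_trials, n_samples = 60, int(1.0 * fs)          # 1 s epochs, placeholder
epochs = np.random.randn(n_trials, n_samples)    # trials x time

b, a = butter(4, [4 / (fs / 2), 7 / (fs / 2)], btype="band")  # theta band
theta = filtfilt(b, a, epochs, axis=1)
phase = np.angle(hilbert(theta, axis=1))

# Phase-locking across trials at each time point: |mean of unit phasors|.
itpc = np.abs(np.exp(1j * phase).mean(axis=0))
window = slice(int(0.25 * fs), int(0.50 * fs))   # 250-500 ms post-onset
print("mean theta ITPC, 250-500 ms:", itpc[window].mean())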
Collapse
Affiliation(s)
- Bo Yao
- Division of Neuroscience and Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester M13 9PL, United Kingdom.
| | - Jason R Taylor
- Division of Neuroscience and Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester M13 9PL, United Kingdom
| | - Briony Banks
- Department of Psychology, Lancaster University, Lancaster LA1 4YF, United Kingdom
| | - Sonja A Kotz
- Department of Neuropsychology & Psychopharmacology, Maastricht University, Maastricht 6211 LK, Netherlands; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
| |
Collapse
|
26
|
Fairs A, Michelas A, Dufour S, Strijkers K. The Same Ultra-Rapid Parallel Brain Dynamics Underpin the Production and Perception of Speech. Cereb Cortex Commun 2021; 2:tgab040. [PMID: 34296185 PMCID: PMC8262084 DOI: 10.1093/texcom/tgab040] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2021] [Revised: 05/28/2021] [Accepted: 06/03/2021] [Indexed: 11/20/2022] Open
Abstract
The temporal dynamics by which linguistic information becomes available is one of the key properties to understand how language is organized in the brain. An unresolved debate between different brain language models is whether words, the building blocks of language, are activated in a sequential or parallel manner. In this study, we approached this issue from a novel perspective by directly comparing the time course of word component activation in speech production versus perception. In an overt object naming task and a passive listening task, we analyzed with mixed linear models at the single-trial level the event-related brain potentials elicited by the same lexico-semantic and phonological word knowledge in the two language modalities. Results revealed that both word components manifested simultaneously as early as 75 ms after stimulus onset in production and perception; differences between the language modalities only became apparent after 300 ms of processing. The data provide evidence for ultra-rapid parallel dynamics of language processing and are interpreted within a neural assembly framework where words recruit the same integrated cell assemblies across production and perception. These word assemblies ignite early on in parallel and only later on reverberate in a behavior-specific manner.
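Single-trial mixed linear models of ERP amplitude, as named in the abstract, can be set up along the following lines. The column names, predictors, and simulated data are hypothetical stand-ins for the study's lexico-semantic and phonological variables.

# Sketch: a single-trial mixed model of ERP amplitude with word-knowledge
# predictors, language modality, and random intercepts per participant.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "amplitude": rng.normal(size=n),              # ERP at one channel/time
    "lex_freq": rng.normal(size=n),               # lexical frequency (z)
    "phon_nd": rng.normal(size=n),                # phonological density (z)
    "modality": rng.choice(["production", "perception"], size=n),
    "subject": rng.integers(0, 20, size=n).astype(str),
})

model = smf.mixedlm("amplitude ~ lex_freq * modality + phon_nd * modality",
                    df, groups=df["subject"])
print(model.fit().summary())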
Collapse
Affiliation(s)
- Amie Fairs
- Aix-Marseille University & CNRS, LPL, 13100 Aix-en-Provence, France
| | | | - Sophie Dufour
- Aix-Marseille University & CNRS, LPL, 13100 Aix-en-Provence, France
| | | |
Collapse
|
27
|
O'Sullivan AE, Crosse MJ, Liberto GMD, de Cheveigné A, Lalor EC. Neurophysiological Indices of Audiovisual Speech Processing Reveal a Hierarchy of Multisensory Integration Effects. J Neurosci 2021; 41:4991-5003. [PMID: 33824190 PMCID: PMC8197638 DOI: 10.1523/jneurosci.0906-20.2021] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2020] [Revised: 03/16/2021] [Accepted: 03/22/2021] [Indexed: 12/27/2022] Open
Abstract
Seeing a speaker's face benefits speech comprehension, especially in challenging listening conditions. This perceptual benefit is thought to stem from the neural integration of visual and auditory speech at multiple stages of processing, whereby movement of a speaker's face provides temporal cues to auditory cortex, and articulatory information from the speaker's mouth can aid recognizing specific linguistic units (e.g., phonemes, syllables). However, it remains unclear how the integration of these cues varies as a function of listening conditions. Here, we sought to provide insight on these questions by examining EEG responses in humans (males and females) to natural audiovisual (AV), audio, and visual speech in quiet and in noise. We represented our speech stimuli in terms of their spectrograms and their phonetic features and then quantified the strength of the encoding of those features in the EEG using canonical correlation analysis (CCA). The encoding of both spectrotemporal and phonetic features was shown to be more robust in AV speech responses than what would have been expected from the summation of the audio and visual speech responses, suggesting that multisensory integration occurs at both spectrotemporal and phonetic stages of speech processing. We also found evidence to suggest that the integration effects may change with listening conditions; however, this was an exploratory analysis and future work will be required to examine this effect using a within-subject design. These findings demonstrate that integration of audio and visual speech occurs at multiple stages along the speech processing hierarchy.SIGNIFICANCE STATEMENT During conversation, visual cues impact our perception of speech. Integration of auditory and visual speech is thought to occur at multiple stages of speech processing and vary flexibly depending on the listening conditions. Here, we examine audiovisual (AV) integration at two stages of speech processing using the speech spectrogram and a phonetic representation, and test how AV integration adapts to degraded listening conditions. We find significant integration at both of these stages regardless of listening conditions. These findings reveal neural indices of multisensory interactions at different stages of processing and provide support for the multistage integration framework.
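The additivity test described above, comparing the encoding of stimulus features in AV responses against the summed audio and visual responses using canonical correlation analysis, can be sketched as follows. All arrays are random placeholders; the real analysis used spectrogram and phonetic-feature representations of natural speech.

# Sketch: CCA-based encoding scores for AV speech vs the A+V additive model.
# Arrays stand in for (time x feature) stimuli and (time x channel) EEG.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
T, n_feat, n_chan = 2000, 16, 32
stim = rng.normal(size=(T, n_feat))        # spectrogram/phonetic features
eeg_av = rng.normal(size=(T, n_chan))      # response to AV speech
eeg_a = rng.normal(size=(T, n_chan))       # audio-only response
eeg_v = rng.normal(size=(T, n_chan))       # visual-only response

def cca_score(X, Y, k=4):
    cca = CCA(n_components=k).fit(X, Y)
    Xc, Yc = cca.transform(X, Y)
    return np.mean([np.corrcoef(Xc[:, i], Yc[:, i])[0, 1] for i in range(k)])

av = cca_score(stim, eeg_av)
summed = cca_score(stim, eeg_a + eeg_v)    # "A + V" additive prediction
print(f"AV {av:.3f} vs A+V {summed:.3f}; AV > A+V suggests integration")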
Collapse
Affiliation(s)
- Aisling E O'Sullivan
- School of Engineering, Trinity Centre for Biomedical Engineering and Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin 2, Ireland
| | - Michael J Crosse
- X, The Moonshot Factory, Mountain View, CA and Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York 10461
| | - Giovanni M Di Liberto
- Laboratoire des Systèmes Perceptifs, Département d'Études Cognitives, École Normale Supérieure, Paris Sciences et Lettres University, Centre National de la Recherche Scientifique, Paris 75005, France
| | - Alain de Cheveigné
- Laboratoire des Systèmes Perceptifs, Département d'Études Cognitives, École Normale Supérieure, Paris Sciences et Lettres University, Centre National de la Recherche Scientifique, Paris 75005, France
- University College London Ear Institute, University College London, London WC1X 8EE, United Kingdom
| | - Edmund C Lalor
- School of Engineering, Trinity Centre for Biomedical Engineering and Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin 2, Ireland
- Department of Biomedical Engineering and Department of Neuroscience, University of Rochester, Rochester, New York 14627
| |
Collapse
|
28
|
Holmes E, Johnsrude IS. Speech-evoked brain activity is more robust to competing speech when it is spoken by someone familiar. Neuroimage 2021; 237:118107. [PMID: 33933598 DOI: 10.1016/j.neuroimage.2021.118107] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2020] [Revised: 04/19/2021] [Accepted: 04/25/2021] [Indexed: 10/21/2022] Open
Abstract
When speech is masked by competing sound, people are better at understanding what is said if the talker is familiar compared to unfamiliar. The benefit is robust, but how does processing of familiar voices facilitate intelligibility? We combined high-resolution fMRI with representational similarity analysis to quantify the difference in distributed activity between clear and masked speech. We demonstrate that brain representations of spoken sentences are less affected by a competing sentence when they are spoken by a friend or partner than by someone unfamiliar, effectively showing a cortical signal-to-noise ratio (SNR) enhancement for familiar voices. This effect correlated with the familiar-voice intelligibility benefit. We functionally parcellated auditory cortex and found that the most prominent familiar-voice advantage was manifest along the posterior superior and middle temporal gyri. Overall, our results demonstrate that experience-driven improvements in intelligibility are associated with enhanced multivariate pattern activity in posterior temporal cortex.
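The core representational similarity step, building dissimilarity matrices for clear and masked speech and correlating them, might look like the sketch below, with random activity patterns standing in for voxel data.

# Sketch: RDMs for clear vs masked speech and their rank correlation.
# Patterns are random stand-ins for (sentences x voxels) activity.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_sent, n_vox = 24, 200
clear = rng.normal(size=(n_sent, n_vox))      # patterns, clear speech
masked = rng.normal(size=(n_sent, n_vox))     # patterns, + competing talker

rdm_clear = pdist(clear, metric="correlation")    # condensed RDMs
rdm_masked = pdist(masked, metric="correlation")

# Higher clear-masked RDM agreement = representation less disrupted by the
# masker; the study compares this for familiar vs unfamiliar voices.
rho, _ = spearmanr(rdm_clear, rdm_masked)
print(f"clear-vs-masked representational similarity: rho = {rho:.3f}")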
Collapse
Affiliation(s)
- Emma Holmes
- The Brain and Mind Institute, University of Western Ontario, London, Ontario, N6A 3K7, Canada.
| | - Ingrid S Johnsrude
- The Brain and Mind Institute, University of Western Ontario, London, Ontario, N6A 3K7, Canada; School of Communication Sciences and Disorders, University of Western Ontario, London, Ontario, London, N6G 1H1, Canada
| |
Collapse
|
29
|
Chen YC, Yong W, Xing C, Feng Y, Haidari NA, Xu JJ, Gu JP, Yin X, Wu Y. Directed functional connectivity of the hippocampus in patients with presbycusis. Brain Imaging Behav 2021; 14:917-926. [PMID: 31270776 DOI: 10.1007/s11682-019-00162-z] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Abstract
Presbycusis, associated with a diminished quality of life and characterized by bilateral sensorineural hearing loss at high frequencies, has become an increasingly critical public health problem. This study aimed to identify directed functional connectivity (FC) of the hippocampus in patients with presbycusis and to explore whether, and why, the directed functional connections of the hippocampus were disrupted. Presbycusis patients (n = 32) and age-, sex-, and education-matched healthy controls (n = 40) were included in this study. The bilateral hippocampus was selected as the seed region to identify directed FC in patients with presbycusis using a Granger causality analysis (GCA) approach. Correlation analyses were conducted to detect associations between disrupted directed FC of the hippocampus and clinical measures of presbycusis. Compared to healthy controls, presbycusis patients showed decreased directed FC between the inferior parietal lobule, insula, right supplementary motor area, middle temporal gyrus, and hippocampus. Furthermore, a negative correlation between TMB score and the decline of directed FC from the left inferior parietal lobule to the left hippocampus (r = -0.423, p = 0.025) and from the right inferior parietal lobule to the right hippocampus (r = -0.516, p = 0.005) was also observed. The decreased directed functional connections of the hippocampus detected in patients with presbycusis were associated with specific cognitive performance. This study emphasizes the crucial role of the hippocampus in presbycusis and will enhance our understanding of the neuropathological mechanisms of presbycusis.
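A pairwise Granger causality test of the kind used here to define directed FC can be run with statsmodels, as sketched below on simulated series; in the study the inputs would be preprocessed fMRI ROI time courses.

# Sketch: does a seed (e.g., hippocampus) Granger-cause a target region?
# Series are simulated so that the seed drives the target with a lag.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(3)
n = 300
seed = rng.normal(size=n)
target = np.roll(seed, 2) * 0.5 + rng.normal(size=n)  # seed "causes" target

# Column order matters: the test asks whether the 2nd column
# Granger-causes the 1st column.
data = pd.DataFrame({"target": target, "seed": seed})
res = grangercausalitytests(data[["target", "seed"]], maxlag=3)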
Collapse
Affiliation(s)
- Yu-Chen Chen
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, No.68, Changle Road, Nanjing, 210006, China
| | - Wei Yong
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, No.68, Changle Road, Nanjing, 210006, China
| | - Chunhua Xing
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, No.68, Changle Road, Nanjing, 210006, China
| | - Yuan Feng
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, No.68, Changle Road, Nanjing, 210006, China
| | - Nasir Ahmad Haidari
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, No.68, Changle Road, Nanjing, 210006, China
| | - Jin-Jing Xu
- Department of Otolaryngology, Nanjing First Hospital, Nanjing Medical University, No.68, Changle Road, Nanjing, 210006, China
| | - Jian-Ping Gu
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, No.68, Changle Road, Nanjing, 210006, China
| | - Xindao Yin
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, No.68, Changle Road, Nanjing, 210006, China.
| | - Yuanqing Wu
- Department of Otolaryngology, Nanjing First Hospital, Nanjing Medical University, No.68, Changle Road, Nanjing, 210006, China.
| |
Collapse
|
30
|
Burton H, Reeder RM, Holden T, Agato A, Firszt JB. Cortical Regions Activated by Spectrally Degraded Speech in Adults With Single Sided Deafness or Bilateral Normal Hearing. Front Neurosci 2021; 15:618326. [PMID: 33897343 PMCID: PMC8058229 DOI: 10.3389/fnins.2021.618326] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2020] [Accepted: 03/04/2021] [Indexed: 11/13/2022] Open
Abstract
Those with profound sensorineural hearing loss from single-sided deafness (SSD) generally experience greater cognitive effort and fatigue in adverse sound environments. We studied cases with right-ear SSD compared to normal-hearing (NH) individuals. SSD cases were significantly less accurate at naming the last words in spectrally degraded 8- and 16-band vocoded sentences, despite high semantic predictability. Group differences were not significant for the less intelligible 4-band sentences, irrespective of predictability. SSD cases also had diminished BOLD percent signal changes to these same sentences in left-hemisphere (LH) cortical regions: early auditory, association auditory, inferior frontal, premotor, inferior parietal, dorsolateral prefrontal, posterior cingulate, temporal-parietal-occipital junction, and posterior opercular cortex. The cortical regions with lower-amplitude responses in SSD than NH were mostly components of a LH language network previously noted as concerned with speech recognition. Recorded BOLD signal magnitudes were averages from all vertices within predefined parcels from these cortical regions. In SSD, parcels from different regions showed significantly larger signal magnitudes to sentences of greater intelligibility (e.g., 8- or 16- vs. 4-band) in all except early auditory and posterior cingulate cortex. Significantly lower response magnitudes occurred in SSD than NH in regions that prior studies found responsible for the phonetics and phonology of speech, cognitive extraction of meaning, controlled retrieval of word meaning, and semantics. The findings suggest that reduced activation of a LH fronto-temporo-parietal network in SSD contributed to difficulty processing speech for word meaning and sentence semantics. The effortful listening experienced with SSD might reflect diminished activation to degraded speech in the affected LH language network parcels. SSD cases showed no compensatory activity in matched right-hemisphere parcels.
Collapse
Affiliation(s)
- Harold Burton
- Department of Neuroscience, Washington University School of Medicine, Saint Louis, MO, United States
| | - Ruth M Reeder
- Department of Otolaryngology-Head and Neck Surgery, Washington University School of Medicine, Saint Louis, MO, United States
| | - Tim Holden
- Department of Otolaryngology-Head and Neck Surgery, Washington University School of Medicine, Saint Louis, MO, United States
| | - Alvin Agato
- Department of Neuroscience, Washington University School of Medicine, Saint Louis, MO, United States
| | - Jill B Firszt
- Department of Otolaryngology-Head and Neck Surgery, Washington University School of Medicine, Saint Louis, MO, United States
| |
Collapse
|
31
|
Lawrence RJ, Wiggins IM, Hodgson JC, Hartley DEH. Evaluating cortical responses to speech in children: A functional near-infrared spectroscopy (fNIRS) study. Hear Res 2021; 401:108155. [PMID: 33360183 PMCID: PMC7937787 DOI: 10.1016/j.heares.2020.108155] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/30/2020] [Revised: 10/20/2020] [Accepted: 12/10/2020] [Indexed: 10/28/2022]
Abstract
Functional neuroimaging of speech processing has both research and clinical potential. This work is facilitating an ever-increasing understanding of the complex neural mechanisms involved in the processing of speech. Neural correlates of speech understanding also have potential clinical value, especially for infants and children, in whom behavioural assessments can be unreliable. Such measures would benefit not only normally hearing children experiencing speech and language delay, but also hearing-impaired children with and without hearing devices. In the current study, we examined cortical correlates of speech intelligibility in normally hearing paediatric listeners. Cortical responses were measured using functional near-infrared spectroscopy (fNIRS), a non-invasive neuroimaging technique that is fully compatible with hearing devices, including cochlear implants. In nineteen normally hearing children (aged 6-13 years), we measured activity in temporal and frontal cortex bilaterally whilst participants listened to both clear and noise-vocoded sentences targeting four levels of speech intelligibility. Cortical activation in superior temporal and inferior frontal cortex was generally stronger in the left hemisphere than in the right. Activation in left superior temporal cortex grew monotonically with increasing speech intelligibility. In the same region, we identified a trend towards greater activation on correctly vs. incorrectly perceived trials, suggesting a possible sensitivity to speech intelligibility per se, beyond sensitivity to changing acoustic properties across stimulation conditions. Outside superior temporal cortex, we identified other regions in which fNIRS responses varied with speech intelligibility. For example, channels overlying posterior middle temporal regions in the right hemisphere exhibited relative deactivation during sentence processing (compared to a silent baseline condition), with the amplitude of that deactivation being greater in more difficult listening conditions. This finding may represent sensitivity to components of the default mode network in lateral temporal regions, and hence effortful listening, in normally hearing paediatric listeners. Our results indicate that fNIRS has the potential to provide an objective marker of speech intelligibility in normally hearing children. Should these results be found to apply to individuals experiencing language delay or to those listening through a hearing device, such as a cochlear implant, fNIRS may form the basis of a clinically useful measure of speech understanding.
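The noise-vocoded sentences used to manipulate intelligibility can be produced with a standard channel vocoder: band-pass the speech, extract each band's envelope, and reimpose the envelopes on band-limited noise. A minimal sketch, with generic filter choices that are not the study's:

# Sketch: a minimal noise vocoder; varying n_channels manipulates
# intelligibility (fewer channels = more degraded speech).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, n_channels=8, lo=100.0, hi=7000.0):
    edges = np.geomspace(lo, hi, n_channels + 1)   # log-spaced band edges
    rng = np.random.default_rng(0)
    out = np.zeros_like(x)
    for lo_f, hi_f in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo_f, hi_f], btype="band", fs=fs, output="sos")
        env = np.abs(hilbert(sosfiltfilt(sos, x)))           # band envelope
        carrier = sosfiltfilt(sos, rng.normal(size=x.size))  # band noise
        out += env * carrier
    return out / (np.max(np.abs(out)) + 1e-12)

fs = 16000
speech = np.random.randn(fs)          # stand-in for a recorded sentence
vocoded_4ch = noise_vocode(speech, fs, n_channels=4)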
Collapse
Affiliation(s)
- Rachael J Lawrence
- National Institute for Health Research (NIHR), Nottingham Biomedical Research Centre, Ropewalk House, 113 The Ropewalk, Nottingham NG1 5DU, United Kingdom; Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham NG7 2UH, United Kingdom; Nottingham University Hospitals NHS Trust, Derby Road, Nottingham NG7 2UH, United Kingdom.
| | - Ian M Wiggins
- National Institute for Health Research (NIHR), Nottingham Biomedical Research Centre, Ropewalk House, 113 The Ropewalk, Nottingham NG1 5DU, United Kingdom; Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham NG7 2UH, United Kingdom
| | - Jessica C Hodgson
- Lincoln Medical School - Universities of Nottingham and Lincoln, Charlotte Scott Building, University of Lincoln, Lincoln LN6 7TS, United Kingdom
| | - Douglas E H Hartley
- National Institute for Health Research (NIHR), Nottingham Biomedical Research Centre, Ropewalk House, 113 The Ropewalk, Nottingham NG1 5DU, United Kingdom; Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham NG7 2UH, United Kingdom; Nottingham University Hospitals NHS Trust, Derby Road, Nottingham NG7 2UH, United Kingdom
| |
Collapse
|
32
|
Kim S, Schwalje AT, Liu AS, Gander PE, McMurray B, Griffiths TD, Choi I. Pre- and post-target cortical processes predict speech-in-noise performance. Neuroimage 2021; 228:117699. [PMID: 33387631 PMCID: PMC8291856 DOI: 10.1016/j.neuroimage.2020.117699] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2020] [Revised: 11/06/2020] [Accepted: 12/23/2020] [Indexed: 12/19/2022] Open
Abstract
Understanding speech in noise (SiN) is a complex task that recruits multiple cortical subsystems. There is variance in individuals' ability to understand SiN that cannot be explained by simple hearing profiles, which suggests that central factors may underlie the variance in SiN ability. Here, we elucidated cortical functions involved in a SiN task and their contributions to individual variance using both within- and across-subject approaches. Through our within-subject analysis of source-localized electroencephalography, we investigated how the acoustic signal-to-noise ratio (SNR) alters cortical evoked responses to a target word across the speech recognition areas, finding stronger responses in the left supramarginal gyrus (SMG, BA40; the dorsal lexicon area) with quieter noise. Through an individual-differences approach, we found that listeners show different neural sensitivity to the background noise and the target speech, reflected in the amplitude ratio of earlier auditory-cortical responses to speech and noise, termed the internal SNR. Listeners with a better internal SNR showed better SiN performance. Further, we found that post-speech SMG activity explains additional variance in SiN performance that is not accounted for by the internal SNR. This result demonstrates that at least two cortical processes contribute to SiN performance independently: pre-target processing that attenuates the neural representation of background noise, and post-target processing that extracts information from speech sounds.
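The internal SNR described above is an amplitude ratio of early evoked responses to target speech versus background noise, which is then related to behavior across listeners. A sketch on simulated values:

# Sketch: "internal SNR" as a speech/noise evoked-amplitude ratio, then
# its across-listener correlation with SiN accuracy. All values simulated.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
n_subj = 30
speech_evoked = np.abs(rng.normal(2.0, 0.5, n_subj))  # e.g., N1 to target
noise_evoked = np.abs(rng.normal(1.0, 0.4, n_subj))   # response to noise onset

internal_snr = 20 * np.log10(speech_evoked / noise_evoked)
sin_accuracy = 0.6 + 0.01 * internal_snr + rng.normal(0, 0.05, n_subj)

r, p = pearsonr(internal_snr, sin_accuracy)
print(f"internal SNR vs speech-in-noise accuracy: r = {r:.2f}, p = {p:.3f}")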
Collapse
Affiliation(s)
- Subong Kim
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN 47907, USA
| | - Adam T Schwalje
- Department of Otolaryngology - Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
| | - Andrew S Liu
- Department of Otolaryngology - Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
| | - Phillip E Gander
- Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
| | - Bob McMurray
- Department of Otolaryngology - Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA; Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA 52242, USA; Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA 52242, USA
| | - Timothy D Griffiths
- Biosciences Institute, Newcastle University, Newcastle upon Tyne NE1 7RU, UK
| | - Inyong Choi
- Department of Otolaryngology - Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA; Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA 52242, USA.
| |
Collapse
|
33
|
Herrmann B, Johnsrude IS. A model of listening engagement (MoLE). Hear Res 2020; 397:108016. [DOI: 10.1016/j.heares.2020.108016] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/21/2019] [Revised: 04/28/2020] [Accepted: 06/02/2020] [Indexed: 12/30/2022]
|
34
|
Dobri SGJ, Ross B. Total GABA level in human auditory cortex is associated with speech-in-noise understanding in older age. Neuroimage 2020; 225:117474. [PMID: 33099004 DOI: 10.1016/j.neuroimage.2020.117474] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2020] [Revised: 10/05/2020] [Accepted: 10/13/2020] [Indexed: 12/19/2022] Open
Abstract
Speech-in-noise (SIN) understanding often becomes difficult for older adults because of impaired hearing and aging-related changes in central auditory processing. Central auditory processing depends on a fine balance between excitatory and inhibitory neural mechanisms, which may be upset in older age by a change in the level of the inhibitory neurotransmitter gamma-aminobutyric acid (GABA). In this study, we used MEGA-PRESS magnetic resonance spectroscopy (MRS) to estimate GABA levels in both the left and right auditory cortices of young and older adults. We found that total auditory GABA levels were lower in older compared to young adults. To understand the relationship between GABA and hearing function, we correlated GABA levels with hearing loss and SIN performance. In older adults, the GABA level in the right auditory cortex was correlated with age and SIN performance. The relationship between chronological age and SIN loss was partially mediated by the GABA level in the right auditory cortex. These findings support the hypothesis that inhibitory mechanisms in the auditory system are reduced in aging, and this reduction relates to functional impairments.
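The partial mediation reported here (age on SIN loss via right auditory GABA) follows the standard product-of-coefficients logic, sketched below with a bootstrap confidence interval on simulated data; the study's actual GABA values came from MEGA-PRESS MRS.

# Sketch: indirect effect of age on SIN loss through GABA, with a
# percentile bootstrap CI. Variables are simulated placeholders.
import numpy as np

rng = np.random.default_rng(5)
n = 40
age = rng.uniform(60, 85, n)
gaba = 3.0 - 0.02 * age + rng.normal(0, 0.15, n)        # declines with age
sin_loss = 5.0 - 1.5 * gaba + 0.03 * age + rng.normal(0, 0.4, n)

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                   # path a: x -> mediator
    X = np.column_stack([x, m, np.ones(len(x))])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]  # path b: mediator -> y | x
    return a * b

boot = [indirect_effect(*(arr[idx] for arr in (age, gaba, sin_loss)))
        for idx in (rng.integers(0, n, n) for _ in range(2000))]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.4f}, {hi:.4f}]")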
Collapse
Affiliation(s)
- Simon G J Dobri
- Rotman Research Institute, Baycrest Centre, 3560 Bathurst Street, Toronto, ON M6A 2E1, Canada; Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada.
| | - Bernhard Ross
- Rotman Research Institute, Baycrest Centre, 3560 Bathurst Street, Toronto, ON M6A 2E1, Canada; Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada
| |
Collapse
|
35
|
Dwyer K, David AS, McCarthy R, McKenna P, Peters E. Linguistic alignment and theory of mind impairments in schizophrenia patients' dialogic interactions. Psychol Med 2020; 50:2194-2202. [PMID: 31500678 DOI: 10.1017/s0033291719002289] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
BACKGROUND Impairments of contextual processing and theory of mind (ToM) have both been offered as accounts of the deviant language characterising formal thought disorder (FTD) in schizophrenia. This study investigated these processes in patients' dialogue. We predicted that FTD patients would show a decrement in linguistic alignment, associated with impaired ToM in dialogue. METHODS Speech samples were elicited via participation in an interactive computer-based task and a semi-structured interview to assess contextual processing abilities and ToM skills in dialogue, respectively, and from an interactive card-sorting task to measure syntactic alignment. Degree of alignment in dialogue and in the syntactic task, and evidence of ToM in (i) dialogue and (ii) a traditional ToM task, were compared across schizophrenia patients with FTD (n = 21), non-FTD patients (n = 22), and healthy controls (n = 21). RESULTS FTD patients showed less alignment than the other two groups in dialogue, and less than healthy controls on the syntactic task. FTD patients performed more poorly on the ToM task than the other two groups, but in dialogue only relative to the healthy controls. The FTD group's degree of alignment in dialogue was correlated with ToM performance in dialogue but not with the traditional ToM task or with syntactic alignment. CONCLUSIONS In dialogue, FTD patients demonstrate an impairment in employing available contextual information to facilitate their own subsequent production, which is associated with a ToM deficit. These findings indicate that a contextual processing deficit impairs the exploitation of representations via the production system, impoverishing the ability to make predictions about upcoming utterances in dialogue.
Collapse
Affiliation(s)
- Karen Dwyer
- Department of English Language and Literature, University College London, London, UK
- Department of Psychology, King's College London, Institute of Psychiatry, Psychology & Neuroscience, London, UK
| | - Anthony S David
- Department of Psychology, King's College London, Institute of Psychiatry, Psychology & Neuroscience, London, UK
- Institute of Mental Health, University College London, London, UK
| | - Rosaleen McCarthy
- Department of Psychology, University of Southampton, Southampton, UK
- Wessex Neurological Centre, Southampton General Hospital, Southampton University Hospital Trust, Tremona Road, Southampton, UK
| | - Peter McKenna
- FIDMAG Research Foundation, Germanes Hospitalàries, Barcelona, Spain
- CIBERSAM, Madrid , Spain
| | - Emmanuelle Peters
- Department of Psychology, King's College London, Institute of Psychiatry, Psychology & Neuroscience, London, UK
- South London and Maudsley NHS Foundation Trust, Bethlem Royal Hospital, Monks Orchard Road, Beckenham, Kent, BR3 3BX, UK
| |
Collapse
|
36
|
Leblanc R. White matter-Maximien Parchappe and the integration of articulate language. JOURNAL OF THE HISTORY OF THE NEUROSCIENCES 2020; 29:399-417. [PMID: 32243766 DOI: 10.1080/0964704x.2020.1738838] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
The Imperial Academy of Medicine of Paris met in the spring of 1865 to discuss the localization of speech. One of the participants was Maximien Parchappe (1800-1866), an alienist whose research interests lay in the cerebral cortex. This article addresses Maximien Parchappe's concept that the cognitive elements of language (such as the translation of thoughts into words, the will to express them, and the means to do so) reside within the cortical gray matter, and that they are integrated through white-matter fibers. In so doing, Parchappe anticipated Carl Wernicke's linking of the posterior aspects of the dominant frontal and temporal lobes in verbal expression, and Jules Dejerine's linking of the angular gyrus and Wernicke's area in the understanding of written language. Functional imaging has revived interest in language as a network of neuronal aggregates and has given new relevance to Parchappe's concept of the functional organization of language.
Collapse
Affiliation(s)
- R Leblanc
- Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University , Montreal, Quebec, Canada
| |
Collapse
|
37
|
Kessler M, Schierholz I, Mamach M, Wilke F, Hahne A, Büchner A, Geworski L, Bengel FM, Sandmann P, Berding G. Combined Brain-Perfusion SPECT and EEG Measurements Suggest Distinct Strategies for Speech Comprehension in CI Users With Higher and Lower Performance. Front Neurosci 2020; 14:787. [PMID: 32848560 PMCID: PMC7431776 DOI: 10.3389/fnins.2020.00787] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2020] [Accepted: 07/06/2020] [Indexed: 11/29/2022] Open
Abstract
Cochlear implantation constitutes a successful therapy for inner ear deafness, with the majority of patients showing good outcomes. There is, however, still some unexplained variability in outcomes, with a number of cochlear-implant (CI) users showing major limitations in speech comprehension. The current study used a multimodal diagnostic approach combining single-photon emission computed tomography (SPECT) and electroencephalography (EEG) to examine the mechanisms underlying speech processing in postlingually deafened CI users (N = 21). In one session, the participants performed a speech discrimination task, during which a 96-channel EEG was recorded and the perfusion marker 99mTc-HMPAO was injected intravenously. The SPECT scan was acquired 1.5 h after injection to measure the cortical activity during the speech task. The second session included a SPECT scan after injection without stimulation, at rest. Analysis of the EEG and SPECT data showed N400 and P600 event-related potentials (ERPs), evoked particularly by semantic violations in the sentences, and enhanced perfusion during the task compared to rest in a temporo-frontal network involving the auditory cortex bilaterally and Broca's area. Moreover, higher performance in tests of word recognition and verbal intelligence correlated strongly with activation in this network during the speech task. However, comparing CI users with lower and higher speech intelligibility [median split with a cutoff of +7.6 dB signal-to-noise ratio (SNR) in the Göttinger sentence test] revealed additional activation of parietal and occipital regions in the higher performers and stronger activation of superior frontal areas in the lower performers. Furthermore, SPECT activity was tightly coupled with the EEG and cognitive abilities, as indicated by correlations (1) between cortical activation and the EEG amplitudes of the N400 (temporal and occipital areas) and P600 (parietal and occipital areas) and (2) between cortical activation in left-sided temporal and bilateral occipital/parietal areas and working memory capacity. These results suggest the recruitment of a temporo-frontal network in CI users during speech processing and a close connection between ERP effects and cortical activation in CI users. The observed differences in speech-evoked cortical activation patterns for CI users with higher and lower speech intelligibility suggest distinct processing strategies during speech rehabilitation with a CI.
Collapse
Affiliation(s)
- Mariella Kessler
- Department of Nuclear Medicine, Hannover Medical School, Hanover, Germany
- Cluster of Excellence Hearing4all, Hannover Medical School, University of Oldenburg, Oldenburg, Germany
| | - Irina Schierholz
- Cluster of Excellence Hearing4all, Hannover Medical School, University of Oldenburg, Oldenburg, Germany
- Department of Otorhinolaryngology, Hannover Medical School, Hanover, Germany
- Department of Otorhinolaryngology, University of Cologne, Cologne, Germany
| | - Martin Mamach
- Cluster of Excellence Hearing4all, Hannover Medical School, University of Oldenburg, Oldenburg, Germany
- Department of Medical Physics and Radiation Protection, Hannover Medical School, Hanover, Germany
| | - Florian Wilke
- Department of Medical Physics and Radiation Protection, Hannover Medical School, Hanover, Germany
| | - Anja Hahne
- Department of Otorhinolaryngology, Faculty of Medicine Carl Gustav Carus, Saxonian Cochlear Implant Center, Technical University Dresden, Dresden, Germany
| | - Andreas Büchner
- Cluster of Excellence Hearing4all, Hannover Medical School, University of Oldenburg, Oldenburg, Germany
- Department of Otorhinolaryngology, Hannover Medical School, Hanover, Germany
| | - Lilli Geworski
- Department of Medical Physics and Radiation Protection, Hannover Medical School, Hanover, Germany
| | - Frank M. Bengel
- Department of Nuclear Medicine, Hannover Medical School, Hanover, Germany
| | - Pascale Sandmann
- Department of Otorhinolaryngology, University of Cologne, Cologne, Germany
| | - Georg Berding
- Department of Nuclear Medicine, Hannover Medical School, Hanover, Germany
- Cluster of Excellence Hearing4all, Hannover Medical School, University of Oldenburg, Oldenburg, Germany
| |
Collapse
|
38
|
Getz LM, Toscano JC. The time-course of speech perception revealed by temporally-sensitive neural measures. WILEY INTERDISCIPLINARY REVIEWS. COGNITIVE SCIENCE 2020; 12:e1541. [PMID: 32767836 DOI: 10.1002/wcs.1541] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/30/2019] [Revised: 05/28/2020] [Accepted: 06/26/2020] [Indexed: 11/07/2022]
Abstract
Recent advances in cognitive neuroscience have provided a detailed picture of the early time-course of speech perception. In this review, we highlight this work, placing it within the broader context of research on the neurobiology of speech processing, and discuss how these data point us toward new models of speech perception and spoken language comprehension. We focus, in particular, on temporally-sensitive measures that allow us to directly measure early perceptual processes. Overall, the data provide support for two key principles: (a) speech perception is based on gradient representations of speech sounds and (b) speech perception is interactive and receives input from higher-level linguistic context at the earliest stages of cortical processing. Implications for models of speech processing and the neurobiology of language more broadly are discussed. This article is categorized under: Psychology > Language; Psychology > Perception and Psychophysics; Neuroscience > Cognition.
Collapse
Affiliation(s)
- Laura M Getz
- Department of Psychological Sciences, University of San Diego, San Diego, California, USA
| | - Joseph C Toscano
- Department of Psychological and Brain Sciences, Villanova University, Villanova, Pennsylvania, USA
| |
Collapse
|
39
|
Kassuba T, Pinsk MA, Kastner S. Distinct auditory and visual tool regions with multisensory response properties in human parietal cortex. Prog Neurobiol 2020; 195:101889. [PMID: 32707071 DOI: 10.1016/j.pneurobio.2020.101889] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2020] [Revised: 05/12/2020] [Accepted: 07/17/2020] [Indexed: 12/14/2022]
Abstract
Left parietal cortex has been associated with the human-specific ability of sophisticated tool use. Yet, it is unclear how tool information is represented across senses. Here, we compared auditory and visual tool-specific activations within healthy human subjects to probe the relation of tool-specific networks, uni- and multisensory response properties, and functional and structural connectivity using functional and diffusion-weighted MRI. In each subject, we identified an auditory tool network with regions in left anterior inferior parietal cortex (aud-aIPL), bilateral posterior lateral sulcus, and left inferior precentral sulcus, and a visual tool network with regions in left aIPL (vis-aIPL) and bilateral inferior temporal gyrus. Aud-aIPL was largely separate from vis-aIPL, lying anterior/inferior to it, with varying degrees of overlap across subjects. Both regions displayed a strong preference for tools versus other stimuli presented within the same modality. Despite their modality preference, aud-aIPL, vis-aIPL, and a region in left inferior precentral sulcus displayed multisensory response properties, as revealed by multivariate analyses. Thus, two largely separate tool networks are engaged by the visual and auditory modalities, with nodes in parietal and prefrontal cortex potentially integrating information across senses. The diversification of tool processing in human parietal cortex underpins its critical role in complex object processing.
Collapse
Affiliation(s)
- Tanja Kassuba
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
| | - Mark A Pinsk
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
| | - Sabine Kastner
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA; Department of Psychology, Princeton University, Princeton, NJ 08544, USA.
| |
Collapse
|
40
|
Ullas S, Hausfeld L, Cutler A, Eisner F, Formisano E. Neural Correlates of Phonetic Adaptation as Induced by Lexical and Audiovisual Context. J Cogn Neurosci 2020; 32:2145-2158. [PMID: 32662723 DOI: 10.1162/jocn_a_01608] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
When speech perception is difficult, one way listeners adjust is by reconfiguring phoneme category boundaries, drawing on contextual information. Both lexical knowledge and lipreading cues are used in this way, but it remains unknown whether these two differing forms of perceptual learning are similar at a neural level. This study compared phoneme boundary adjustments driven by lexical or audiovisual cues, using ultra-high-field 7-T fMRI. During imaging, participants heard exposure stimuli and test stimuli. Exposure stimuli for lexical retuning were audio recordings of words, and those for audiovisual recalibration were audio-video recordings of lip movements during utterances of pseudowords. Test stimuli were ambiguous phonetic strings presented without context, and listeners reported what phoneme they heard. Reports reflected phoneme biases in preceding exposure blocks (e.g., more reported /p/ after /p/-biased exposure). Analysis of corresponding brain responses indicated that both forms of cue use were associated with a network of activity across the temporal cortex, plus parietal, insula, and motor areas. Audiovisual recalibration also elicited significant occipital cortex activity despite the lack of visual stimuli. Activity levels in several ROIs also covaried with strength of audiovisual recalibration, with greater activity accompanying larger recalibration shifts. Similar activation patterns appeared for lexical retuning, but here, no significant ROIs were identified. Audiovisual and lexical forms of perceptual learning thus induce largely similar brain response patterns. However, audiovisual recalibration involves additional visual cortex contributions, suggesting that previously acquired visual information (on lip movements) is retrieved and deployed to disambiguate auditory perception.
Collapse
Affiliation(s)
- Shruti Ullas
- Maastricht University; Maastricht Brain Imaging Centre
| | - Lars Hausfeld
- Maastricht University; Maastricht Brain Imaging Centre
| | | | | | - Elia Formisano
- Maastricht University; Maastricht Brain Imaging Centre; Maastricht Centre for Systems Biology
| |
Collapse
|
41
|
Ylinen S, Nora A, Service E. Better Phonological Short-Term Memory Is Linked to Improved Cortical Memory Representations for Word Forms and Better Word Learning. Front Hum Neurosci 2020; 14:209. [PMID: 32581751 PMCID: PMC7291706 DOI: 10.3389/fnhum.2020.00209] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2019] [Accepted: 05/08/2020] [Indexed: 11/13/2022] Open
Abstract
Language learning relies on both short-term and long-term memory. Phonological short-term memory (pSTM) is thought to play an important role in the learning of novel word forms. However, language learners may differ in their ability to maintain word representations in pSTM during interfering auditory input. We used magnetoencephalography (MEG) to investigate how pSTM capacity in better and poorer pSTM groups is linked to language learning and the maintenance of pseudowords in pSTM. In particular, MEG was recorded while participants maintained pseudowords in pSTM by covert speech rehearsal, and while these brain representations were probed by presenting auditory pseudowords with first or third syllables matching or mismatching the rehearsed item. A control condition included identical stimuli but no rehearsal. Differences in response strength between matching and mismatching syllables were interpreted as the phonological mapping negativity (PMN). While PMN for the first syllable was found in both groups, it was observed for the third syllable only in the group with better pSTM. This suggests that individuals with better pSTM maintained representations of trisyllabic pseudowords more accurately during interference than individuals with poorer pSTM. Importantly, the group with better pSTM learned words faster in a paired-associate word learning task, linking the PMN findings to language learning.
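The PMN is operationalized here as a difference in response strength between matching and mismatching syllables. A schematic computation follows; the sampling rate, latency window, and simulated time courses are assumptions for illustration only.

import numpy as np

fs = 1000                                  # sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.6, 1 / fs)           # epoch time axis in seconds

# Hypothetical trial-averaged MEG time courses for one sensor or ROI:
# the mismatch response carries an extra deflection around 300 ms.
evoked_match = np.random.randn(t.size) * 0.05
evoked_mismatch = np.random.randn(t.size) * 0.05 \
    + 0.3 * np.exp(-((t - 0.3) ** 2) / 0.002)

# PMN-like index: mean mismatch-minus-match difference in an a priori window.
win = (t >= 0.25) & (t <= 0.35)            # assumed PMN latency window
pmn = (evoked_mismatch - evoked_match)[win].mean()
print(f"PMN amplitude (arbitrary units): {pmn:.3f}")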
Collapse
Affiliation(s)
- Sari Ylinen
- CICERO Learning, Faculty of Educational Sciences, University of Helsinki, Helsinki, Finland; Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland; BioMag Laboratory, Helsinki University Central Hospital, Helsinki, Finland
| | - Anni Nora
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
| | - Elisabet Service
- ARiEAL Research Centre, Department of Linguistics and Languages, McMaster University, Hamilton, ON, Canada
| |
Collapse
|
42
|
Teng X, Ma M, Yang J, Blohm S, Cai Q, Tian X. Constrained Structure of Ancient Chinese Poetry Facilitates Speech Content Grouping. Curr Biol 2020; 30:1299-1305.e7. [PMID: 32142700 DOI: 10.1016/j.cub.2020.01.059] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2019] [Revised: 11/08/2019] [Accepted: 01/17/2020] [Indexed: 11/19/2022]
Abstract
Ancient Chinese poetry is constituted by structured language that deviates from ordinary language usage [1, 2]; its poetic genres impose unique combinatory constraints on linguistic elements [3]. How does the constrained poetic structure facilitate speech segmentation when common linguistic [4-8] and statistical cues [5, 9] are unreliable to listeners in poems? We generated artificial Jueju, which arguably has the most constrained structure in ancient Chinese poetry, and presented each poem twice as an isochronous sequence of syllables to native Mandarin speakers while recording magnetoencephalography (MEG). We found that listeners deployed their prior knowledge of Jueju to build the line structure and to establish the conceptual flow of Jueju. Unprecedentedly, we found a phase precession phenomenon indicating predictive processes of speech segmentation: the neural phase advanced faster after listeners had acquired knowledge of the incoming speech. The statistical co-occurrence of monosyllabic words in Jueju negatively correlated with speech segmentation, which provides an alternative perspective on how statistical cues facilitate speech segmentation. Our findings suggest that constrained poetic structures serve as a temporal map for listeners to group speech contents and to predict incoming speech signals. Listeners can parse speech streams by using not only grammatical and statistical cues but also their prior knowledge of the form of language. VIDEO ABSTRACT.
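Phase precession of the kind reported here can be assessed, in simplified form, by extracting the instantaneous phase of activity band-passed around the line rate and asking whether the phase leads on the second presentation. The sketch below assumes roughly one poetic line per second and synthetic single-channel signals; the filter settings and all names are illustrative, not the study's analysis.

import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 200                        # MEG sampling rate in Hz (assumed)

def instantaneous_phase(signal, fs, f_lo, f_hi):
    """Band-pass around the line rate, then extract phase via the Hilbert transform."""
    sos = butter(3, [f_lo, f_hi], btype="band", fs=fs, output="sos")
    return np.angle(hilbert(sosfiltfilt(sos, signal)))

# Hypothetical single-channel recordings of the two poem presentations.
first_pass = np.random.randn(fs * 20)
second_pass = np.random.randn(fs * 20)

# Assuming roughly one poetic line per second, track phase near 1 Hz.
ph1 = instantaneous_phase(first_pass, fs, 0.8, 1.2)
ph2 = instantaneous_phase(second_pass, fs, 0.8, 1.2)

# Phase precession would appear as ph2 leading ph1 at matched time points
# (circular mean of the phase difference).
lead = np.angle(np.mean(np.exp(1j * (ph2 - ph1))))
print(f"mean phase lead on second presentation: {lead:.3f} rad")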
Collapse
Affiliation(s)
- Xiangbin Teng
- Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, Frankfurt 60322, Germany
| | - Min Ma
- Google Inc., 111 8th Avenue, New York, NY 10010, United States
| | - Jinbiao Yang
- Division of Arts and Sciences, New York University Shanghai, Shanghai 200122, China; NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai 200062, China; Max Planck Institute for Psycholinguistics, Wundtlaan 1, Nijmegen 6525 XD, the Netherlands; Centre for Language Studies, Radboud University, Erasmusplein 1, Nijmegen 6525 HT, the Netherlands
| | - Stefan Blohm
- Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, Frankfurt 60322, Germany
| | - Qing Cai
- NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai 200062, China; Key Laboratory of Brain Functional Genomics (MOE & STCSM), Shanghai Changning-ECNU Mental Health Center, Institute of Cognitive Neuroscience, School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China
| | - Xing Tian
- Division of Arts and Sciences, New York University Shanghai, Shanghai 200122, China; NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai 200062, China; Key Laboratory of Brain Functional Genomics (MOE & STCSM), Shanghai Changning-ECNU Mental Health Center, Institute of Cognitive Neuroscience, School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China.
| |
Collapse
|
43
|
Abstract
There are functional and anatomical distinctions between the neural systems involved in the recognition of sounds in the environment and those involved in the sensorimotor guidance of sound production and the spatial processing of sound. Evidence for the separation of these processes has historically come from disparate literatures on the perception and production of speech, music and other sounds. More recent evidence indicates that there are computational distinctions between the rostral and caudal primate auditory cortex that may underlie functional differences in auditory processing. These functional differences may originate from differences in the response times and temporal profiles of neurons in the rostral and caudal auditory cortex, suggesting that computational accounts of primate auditory pathways should focus on the implications of these temporal response differences.
Collapse
|
44
|
Das P, Brodbeck C, Simon JZ, Babadi B. Neuro-current response functions: A unified approach to MEG source analysis under the continuous stimuli paradigm. Neuroimage 2020; 211:116528. [PMID: 31945510 DOI: 10.1016/j.neuroimage.2020.116528] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2019] [Revised: 11/16/2019] [Accepted: 01/07/2020] [Indexed: 11/25/2022] Open
Abstract
Characterizing the neural dynamics underlying sensory processing is one of the central areas of investigation in systems and cognitive neuroscience. Neuroimaging techniques such as magnetoencephalography (MEG) and electroencephalography (EEG) have provided significant insights into the neural processing of continuous stimuli, such as speech, thanks to their high temporal resolution. Existing work in the context of auditory processing suggests that certain features of speech, such as the acoustic envelope, can be used as reliable linear predictors of the neural response manifested in M/EEG. The corresponding linear filters are referred to as temporal response functions (TRFs). While the functional roles of specific components of the TRF are well studied and linked to behavioral attributes such as attention, the cortical origins of the underlying neural processes are not as well understood. In this work, we address this issue by estimating a linear filter representation of cortical sources directly from neuroimaging data in the context of continuous speech processing. To this end, we introduce Neuro-Current Response Functions (NCRFs), a set of linear filters, spatially distributed throughout the cortex, that predict the cortical currents giving rise to the observed ongoing MEG (or EEG) data in response to continuous speech. NCRF estimation is cast within a Bayesian framework, which allows unification of the TRF and source estimation problems, and also facilitates the incorporation of prior information on the structural properties of the NCRFs. To generalize this analysis to M/EEG recordings which lack individual structural magnetic resonance (MR) scans, NCRFs are extended to free-orientation dipoles and a novel regularizing scheme is put forward to lessen reliance on fine-tuned coordinate co-registration. We present a fast estimation algorithm, which we refer to as the Champ-Lasso algorithm, by leveraging recent advances in optimization, and demonstrate its utility through application to simulated and experimentally recorded MEG data from auditory experiments. Our simulation studies reveal significant improvements over existing methods that typically operate in a two-stage fashion, in terms of spatial resolution, response function reconstruction, and recovering dipole orientations. The analysis of experimentally recorded MEG data without MR scans corroborates existing findings, but also delineates the distinct cortical distribution of the underlying neural processes at high spatiotemporal resolution. In summary, we provide a principled modeling and estimation paradigm for MEG source analysis tailored to extracting the cortical origin of electrophysiological responses to continuous stimuli.
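NCRF estimation itself is a Bayesian source-space method; as a point of reference, the sensor-space TRF it generalizes can be sketched as ridge regression on a lag-expanded speech envelope. Everything below (sampling rate, lag range, regularization strength, synthetic data) is an illustrative assumption, not the Champ-Lasso algorithm.

import numpy as np

fs = 100                                   # sampling rate in Hz (assumed)
lags = np.arange(int(0.4 * fs))            # TRF lags covering 0-400 ms

# Synthetic stimulus envelope and a single-channel response that follows it.
rng = np.random.default_rng(1)
env = np.abs(rng.standard_normal(fs * 60))
meg = np.convolve(env, np.exp(-np.arange(30) / 10.0), mode="full")[: env.size]
meg = meg + 0.5 * rng.standard_normal(env.size)

# Lag-expanded design matrix: column k holds the envelope delayed by k samples.
X = np.column_stack([np.roll(env, k) for k in lags])
X[: lags.max()] = 0                        # discard wrap-around samples

# Ridge-regularized least squares: trf = (X'X + lam*I)^(-1) X'y.
lam = 1e2
trf = np.linalg.solve(X.T @ X + lam * np.eye(lags.size), X.T @ meg)
print("TRF peak lag (ms):", 1000 * lags[np.argmax(trf)] / fs)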
Collapse
Affiliation(s)
- Proloy Das
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, 20742, USA; Institute for Systems Research, University of Maryland, College Park, MD, 20742, USA.
| | - Christian Brodbeck
- Institute for Systems Research, University of Maryland, College Park, MD, 20742, USA.
| | - Jonathan Z Simon
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, 20742, USA; Institute for Systems Research, University of Maryland, College Park, MD, 20742, USA; Department of Biology, University of Maryland, College Park, MD, 20742, USA.
| | - Behtash Babadi
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, 20742, USA; Institute for Systems Research, University of Maryland, College Park, MD, 20742, USA.
| |
Collapse
|
45
|
Rachman L, Dubal S, Aucouturier JJ. Happy you, happy me: expressive changes on a stranger's voice recruit faster implicit processes than self-produced expressions. Soc Cogn Affect Neurosci 2020; 14:559-568. [PMID: 31044241 PMCID: PMC6545538 DOI: 10.1093/scan/nsz030] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2018] [Revised: 04/09/2019] [Accepted: 04/21/2019] [Indexed: 01/09/2023] Open
Abstract
In social interactions, people have to pay attention both to the ‘what’ and the ‘who’. In particular, expressive changes heard on speech signals have to be integrated with speaker identity, differentiating e.g. self- and other-produced signals. While previous research has shown that self-related visual information processing is facilitated compared to non-self stimuli, evidence in the auditory modality remains mixed. Here, we compared electroencephalography (EEG) responses to expressive changes in sequences of self- or other-produced speech sounds using a mismatch negativity (MMN) passive oddball paradigm. Critically, to control for speaker differences, we used programmable acoustic transformations to create voice deviants that differed from standards in exactly the same manner, making EEG responses to such deviations comparable between sequences. Our results indicate that expressive changes on a stranger’s voice are highly prioritized in auditory processing compared to identical changes on the self-voice. Other-voice deviants generate earlier MMN onset responses and involve stronger cortical activations in a left motor and somatosensory network, suggestive of an increased recruitment of resources for less internally predictable, and therefore perhaps more socially relevant, signals.
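The MMN contrast here is the deviant-minus-standard difference wave, compared between self-voice and other-voice sequences, with onset latency as the key measure. A minimal sketch under assumed arrays and an assumed onset threshold; none of the values come from the study.

import numpy as np

fs = 500                                   # EEG sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.5, 1 / fs)

# Hypothetical trial-averaged EEG (one fronto-central channel) per sequence.
std_self, dev_self = np.random.randn(2, t.size) * 0.05
std_other, dev_other = np.random.randn(2, t.size) * 0.05

mmn_self = dev_self - std_self             # deviant-minus-standard difference waves
mmn_other = dev_other - std_other

def onset_latency(mmn, thresh=-0.1):
    """First post-stimulus sample where the (negative) difference crosses threshold."""
    idx = np.where((t > 0) & (mmn < thresh))[0]
    return t[idx[0]] if idx.size else float("nan")

print("self-voice MMN onset (s):", onset_latency(mmn_self))
print("other-voice MMN onset (s):", onset_latency(mmn_other))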
Collapse
Affiliation(s)
- Laura Rachman
- Inserm U, CNRS UMR, Sorbonne Université UMR S, Institut du Cerveau et de la Moelle épinière, Social and Affective Neuroscience Lab, Paris, France; Science & Technology of Music and Sound, UMR (CNRS/IRCAM/Sorbonne Université), Paris, France
| | - Stéphanie Dubal
- Inserm U, CNRS UMR, Sorbonne Université UMR S, Institut du Cerveau et de la Moelle épinière, Social and Affective Neuroscience Lab, Paris, France
| | - Jean-Julien Aucouturier
- Science & Technology of Music and Sound, UMR (CNRS/IRCAM/Sorbonne Université), Paris, France
| |
Collapse
|
46
|
Longcamp M, Hupé JM, Ruiz M, Vayssière N, Sato M. Shared premotor activity in spoken and written communication. BRAIN AND LANGUAGE 2019; 199:104694. [PMID: 31586790 DOI: 10.1016/j.bandl.2019.104694] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/07/2018] [Revised: 09/12/2019] [Accepted: 09/15/2019] [Indexed: 06/10/2023]
Abstract
The aim of the present study was to uncover a possible common neural organizing principle in spoken and written communication, through the coupling of perceptual and motor representations. To identify possible shared neural substrates for processing the basic units of spoken and written language, a sparse-sampling fMRI acquisition protocol was performed on the same subjects in two experimental sessions, with matched sets of letters being read and written and of phonemes being heard and orally produced. We found evidence of common premotor regions activated in spoken and written language, both in perception and in production. These brain regions were confined to the left lateral and medial frontal cortices, at locations corresponding to the premotor cortex, inferior frontal cortex, and supplementary motor area. Interestingly, the speaking and writing tasks also appeared to be controlled by largely overlapping networks, possibly indicating some domain-general cognitive processing. Finally, the spatial distribution of individual activation peaks further showed more dorsal and more left-lateralized premotor activations in written than in spoken language.
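Common activation across tasks of this kind is often established with a conjunction (minimum-statistic) analysis over thresholded statistical maps. The sketch below illustrates that logic on synthetic t-maps; it is one standard reading of "shared" activity, not necessarily the authors' exact procedure.

import numpy as np

# Hypothetical voxelwise t-maps (flattened) for the four conditions.
t_read, t_write = np.random.randn(2, 10000) + 0.5
t_hear, t_speak = np.random.randn(2, 10000) + 0.5

t_crit = 3.1                               # assumed voxelwise threshold

# Conjunction via the minimum statistic: a voxel counts as shared only
# if it exceeds the threshold in every condition.
shared = np.minimum.reduce([t_read, t_write, t_hear, t_speak]) > t_crit
print("voxels jointly active across all four conditions:", int(shared.sum()))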
Collapse
Affiliation(s)
| | - Jean-Michel Hupé
- CNRS, Université de Toulouse Paul Sabatier, CerCo, Toulouse, France
| | - Mathieu Ruiz
- CNRS, Université de Toulouse Paul Sabatier, CerCo, Toulouse, France
| | - Nathalie Vayssière
- CNRS, Université de Toulouse Paul Sabatier, CerCo, Toulouse, France; Toulouse Mind and Brain Institute, France
| | - Marc Sato
- CNRS, Aix-Marseille Univ, LPL, Aix-en-Provence, France
| |
Collapse
|
47
|
Feng G, Gan Z, Wang S, Wong PCM, Chandrasekaran B. Task-General and Acoustic-Invariant Neural Representation of Speech Categories in the Human Brain. Cereb Cortex 2019; 28:3241-3254. [PMID: 28968658 DOI: 10.1093/cercor/bhx195] [Citation(s) in RCA: 42] [Impact Index Per Article: 8.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2016] [Accepted: 07/13/2017] [Indexed: 11/14/2022] Open
Abstract
A significant neural challenge in speech perception is extracting discrete phonetic categories from continuous and multidimensional signals, despite varying task demands and surface-acoustic variability. While neural representations of speech categories have previously been identified in frontal and posterior temporal-parietal regions, the task dependency and dimensional specificity of these neural representations are still unclear. Here, we asked native Mandarin participants to listen to speech syllables carrying 4 distinct lexical tone categories across passive listening, repetition, and categorization tasks while they underwent functional magnetic resonance imaging (fMRI). We used searchlight classification and representational similarity analysis (RSA) to identify the dimensional structure underlying neural representation across tasks and surface-acoustic properties. Searchlight classification analyses revealed significant "cross-task" lexical tone decoding within the bilateral superior temporal gyrus (STG) and left inferior parietal lobule (LIPL). RSA revealed that the LIPL and left STG, in contrast to the right STG, relate to 2 critical dimensions (pitch height, pitch direction) underlying tone perception. Outside this core representational network, we found greater activation in inferior frontal and parietal regions for stimuli that are more perceptually similar during tone categorization. Our findings reveal the specific characteristics of fronto-temporo-parietal regions that support speech representation and categorization processing.
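RSA as used here compares a neural representational dissimilarity matrix (RDM) against model RDMs built from pitch height and pitch direction. A schematic version follows; the activation patterns and the tone feature values are assumed for illustration, not taken from the study.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Hypothetical ROI activation patterns: 4 tone categories x 150 voxels.
patterns = np.random.randn(4, 150)
neural_rdm = pdist(patterns, metric="correlation")   # 6 pairwise dissimilarities

# Assumed feature values for the 4 Mandarin tones (illustrative only).
pitch_height = np.array([[3.0], [2.0], [1.0], [2.5]])
pitch_direction = np.array([[0.0], [1.0], [-0.5], [-1.0]])

for name, feature in [("height", pitch_height), ("direction", pitch_direction)]:
    model_rdm = pdist(feature, metric="euclidean")
    rho, p = spearmanr(neural_rdm, model_rdm)
    print(f"pitch {name}: Spearman rho = {rho:.2f} (p = {p:.2f})")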
Collapse
Affiliation(s)
- Gangyi Feng
- Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong SAR, China; Brain and Mind Institute, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong SAR, China; Department of Communication Sciences & Disorders, Moody College of Communication, The University of Texas at Austin, 2504A Whitis Avenue (A1100), Austin, TX, USA
| | - Zhenzhong Gan
- Center for the Study of Applied Psychology and School of Psychology, South China Normal University, Guangzhou, China
| | - Suiping Wang
- Center for the Study of Applied Psychology and School of Psychology, South China Normal University, Guangzhou, China; Guangdong Provincial Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, China
| | - Patrick C M Wong
- Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong SAR, China; Brain and Mind Institute, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong SAR, China
| | - Bharath Chandrasekaran
- Department of Communication Sciences & Disorders, Moody College of Communication, The University of Texas at Austin, 2504A Whitis Avenue (A1100), Austin, TX, USA; Department of Psychology, The University of Texas at Austin, 108 E. Dean Keeton Stop, Austin, TX, USA; Department of Linguistics, The University of Texas at Austin, 305 E. 23rd Street STOP, Austin, TX, USA; Institute for Mental Health Research, College of Liberal Arts, The University of Texas at Austin, 305 E. 23rd St. Stop, Austin, TX, USA; The Institute for Neuroscience, The University of Texas at Austin, 1 University Station Stop, Austin, TX, USA
| |
Collapse
|
48
|
Rudner M, Orfanidou E, Kästner L, Cardin V, Woll B, Capek CM, Rönnberg J. Neural Networks Supporting Phoneme Monitoring Are Modulated by Phonology but Not Lexicality or Iconicity: Evidence From British and Swedish Sign Language. Front Hum Neurosci 2019; 13:374. [PMID: 31695602 PMCID: PMC6817460 DOI: 10.3389/fnhum.2019.00374] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2018] [Accepted: 10/03/2019] [Indexed: 11/18/2022] Open
Abstract
Sign languages are natural languages in the visual domain. Because they lack a written form, they provide a sharper tool than spoken languages for investigating lexicality effects that may be confounded by orthographic processing. In a previous study, we showed that the neural networks supporting phoneme monitoring in deaf British Sign Language (BSL) users are modulated by phonology but not lexicality or iconicity. In the present study, we investigated whether this pattern generalizes to deaf Swedish Sign Language (SSL) users. BSL and SSL have largely overlapping phoneme inventories but are mutually unintelligible because lexical overlap is small. This is important because it means that even when signs lexicalized in BSL are unintelligible to users of SSL, they are usually still phonologically acceptable. During fMRI scanning, deaf users of the two different sign languages monitored signs that were lexicalized in either one or both of those languages for phonologically contrastive elements. Neural activation patterns relating to different linguistic levels of processing were similar across the two sign languages; in particular, we found no effect of lexicality, supporting the notion that apparent lexicality effects on sublexical processing of speech may be driven by orthographic strategies. As expected, we found an effect of phonology but not iconicity. Further, there was a difference in neural activation between the two groups in a motion-processing region of the left occipital cortex, possibly driven by cultural differences, such as education. Importantly, this difference was not modulated by the linguistic characteristics of the material, underscoring the robustness of the neural activation patterns relating to different linguistic levels of processing.
Collapse
Affiliation(s)
- Mary Rudner
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
| | - Eleni Orfanidou
- Deafness, Cognition and Language Research Centre, Department of Experimental Psychology, University College London, London, United Kingdom; School of Psychology, University of Crete, Rethymno, Greece
| | - Lena Kästner
- Deafness, Cognition and Language Research Centre, Department of Experimental Psychology, University College London, London, United Kingdom; Department of Philosophy, Saarland University, Saarbrücken, Germany
| | - Velia Cardin
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Deafness, Cognition and Language Research Centre, Department of Experimental Psychology, University College London, London, United Kingdom; School of Psychology, University of East Anglia, Norwich, United Kingdom
| | - Bencie Woll
- Deafness, Cognition and Language Research Centre, Department of Experimental Psychology, University College London, London, United Kingdom
| | - Cheryl M Capek
- Division of Neuroscience & Experimental Psychology, School of Biological Sciences, University of Manchester, Manchester, United Kingdom
| | - Jerker Rönnberg
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
| |
Collapse
|
49
|
Manca AD, Di Russo F, Sigona F, Grimaldi M. Electrophysiological evidence of phonemotopic representations of vowels in the primary and secondary auditory cortex. Cortex 2019; 121:385-398. [PMID: 31678684 DOI: 10.1016/j.cortex.2019.09.016] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2018] [Revised: 05/18/2019] [Accepted: 09/20/2019] [Indexed: 11/25/2022]
Abstract
How the brain encodes the speech acoustic signal into phonological representations is a fundamental question for the neurobiology of language. Determining whether this process is characterized by tonotopic maps in primary or secondary auditory areas, with bilateral or leftward activity, remains a long-standing challenge. Previous magnetoencephalographic studies have failed to show clear hierarchical and asymmetric signatures of speech processing. We employed high-density electroencephalography to map the Salento Italian vowel system onto cortical sources using the N1 auditory evoked component. We found evidence that the N1 is characterized by hierarchical and asymmetrical indexes in primary and secondary auditory areas structuring vowel representations. Importantly, the N1 was characterized by early and late phases. The early N1 peaked at 125-135 msec and was localized in the primary auditory cortex; the late N1 peaked at 145-155 msec and was localized in the left superior temporal gyrus. We showed that, early in the primary auditory cortex, the cortical spatial arrangements (along the lateral-medial and anterior-posterior gradients) are broadly warped by phonemotopic patterns according to the distinctive-feature principle. These phonemotopic patterns are further refined in the superior temporal gyrus along the inferior-superior and anterior-posterior gradients. The dynamic and hierarchical interface between primary and secondary auditory areas, and the interaction effects between Height and Place features, generate the categorical representation of the Salento Italian vowels.
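The early/late N1 distinction above rests on peak latencies in two windows (125-135 and 145-155 msec). A minimal peak-picking sketch on a synthetic evoked response; all signals and window bounds are illustrative assumptions, not the study's data.

import numpy as np

fs = 1000                                  # sampling rate in Hz (assumed)
t = np.arange(0.0, 0.3, 1 / fs)            # post-stimulus time axis in seconds

# Synthetic evoked response with two overlapping negative deflections.
erp = -np.exp(-((t - 0.130) ** 2) / 1e-4) - 0.8 * np.exp(-((t - 0.150) ** 2) / 1e-4)

def n1_peak_latency(erp, t, lo, hi):
    """Latency of the most negative sample inside a window (the N1 is negative)."""
    win = (t >= lo) & (t <= hi)
    return t[win][np.argmin(erp[win])]

print("early N1 peak (ms):", 1000 * n1_peak_latency(erp, t, 0.125, 0.135))
print("late N1 peak (ms):", 1000 * n1_peak_latency(erp, t, 0.145, 0.155))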
Collapse
Affiliation(s)
- Anna Dora Manca
- Centro di Ricerca Interdisciplinare sul Linguaggio (CRIL), University of Salento, Lecce, Italy; Laboratorio Diffuso di Ricerca interdisciplinare Applicata alla Medicina (DReAM), Lecce, Italy
| | - Francesco Di Russo
- Dipartimento di Scienze Motorie, Umane e della Salute, University of Rome "Foro Italico", Rome, Italy; IRCCS Fondazione Santa Lucia, Rome, Italy
| | - Francesco Sigona
- Centro di Ricerca Interdisciplinare sul Linguaggio (CRIL), University of Salento, Lecce, Italy; Laboratorio Diffuso di Ricerca interdisciplinare Applicata alla Medicina (DReAM), Lecce, Italy
| | - Mirko Grimaldi
- Centro di Ricerca Interdisciplinare sul Linguaggio (CRIL), University of Salento, Lecce, Italy; Laboratorio Diffuso di Ricerca interdisciplinare Applicata alla Medicina (DReAM), Lecce, Italy.
| |
Collapse
|
50
|
Discourse management during speech perception: A functional magnetic resonance imaging (fMRI) study. Neuroimage 2019; 202:116047. [DOI: 10.1016/j.neuroimage.2019.116047] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2018] [Revised: 07/09/2019] [Accepted: 07/22/2019] [Indexed: 11/22/2022] Open
|