1. Brisson V, Tremblay P. Assessing the Impact of Transcranial Magnetic Stimulation on Speech Perception in Noise. J Cogn Neurosci 2024;36:2184-2207. PMID: 39023366. DOI: 10.1162/jocn_a_02224.
Abstract
Healthy aging is associated with reduced speech perception in noise (SPiN) abilities. The etiology of these difficulties remains elusive, which prevents the development of new strategies to optimize the speech processing network and reduce these difficulties. The objective of this study was to determine if sublexical SPiN performance can be enhanced by applying TMS to three regions involved in processing speech: the left posterior temporal sulcus, the left superior temporal gyrus, and the left ventral premotor cortex. The second objective was to assess the impact of several factors (age, baseline performance, target, brain structure, and activity) on post-TMS SPiN improvement. The results revealed that participants with lower baseline performance were more likely to improve. Moreover, in older adults, cortical thickness within the target areas was negatively associated with performance improvement, whereas this association was null in younger individuals. No differences between the targets were found. This study suggests that TMS can modulate sublexical SPiN performance, but that the strength and direction of the effects depend on a complex combination of contextual and individual factors.
Affiliation(s)
- Valérie Brisson
- Université Laval, School of Rehabilitation Sciences, Québec, Canada
- Centre de recherche CERVO, Québec, Canada
- Pascale Tremblay
- Université Laval, School of Rehabilitation Sciences, Québec, Canada
- Centre de recherche CERVO, Québec, Canada
2. Cui AX, Kraeutner SN, Kepinska O, Motamed Yeganeh N, Hermiston N, Werker JF, Boyd LA. Musical Sophistication and Multilingualism: Effects on Arcuate Fasciculus Characteristics. Hum Brain Mapp 2024;45:e70035. PMID: 39360580. PMCID: PMC11447524. DOI: 10.1002/hbm.70035.
Abstract
The processing of auditory stimuli which are structured in time is thought to involve the arcuate fasciculus, the white matter tract which connects the temporal cortex and the inferior frontal gyrus. Research has indicated effects of both musical and language experience on the structural characteristics of the arcuate fasciculus. Here, we investigated in a sample of n = 84 young adults whether continuous conceptualizations of musical and multilingual experience related to structural characteristics of the arcuate fasciculus, measured using diffusion tensor imaging. Probabilistic tractography was used to identify the dorsal and ventral parts of the white matter tract. Linear regressions indicated that different aspects of musical sophistication related to the arcuate fasciculus' volume (emotional engagement with music), volumetric asymmetry (musical training and music perceptual abilities), and fractional anisotropy (music perceptual abilities). Our conceptualization of multilingual experience, accounting for participants' proficiency in reading, writing, understanding, and speaking different languages, was not related to the structural characteristics of the arcuate fasciculus. We discuss our results in the context of other research on hemispheric specializations and a dual-stream model of auditory processing.
Affiliation(s)
- Anja-Xiaoxing Cui
- Department of Musicology, University of Vienna, Vienna, Austria
- Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada
- Vienna Cognitive Science Hub, University of Vienna, Vienna, Austria
- Sarah N Kraeutner
- Department of Psychology, University of British Columbia Okanagan, Kelowna, British Columbia, Canada
- Olga Kepinska
- Vienna Cognitive Science Hub, University of Vienna, Vienna, Austria
- Department of Behavioral and Cognitive Biology, Faculty of Life Sciences, University of Vienna, Vienna, Austria
- Negin Motamed Yeganeh
- Brain Behaviour Lab, Department of Physical Therapy, University of British Columbia, Vancouver, British Columbia, Canada
- Nancy Hermiston
- School of Music, University of British Columbia, Vancouver, British Columbia, Canada
- Janet F Werker
- Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada
- Lara A Boyd
- Brain Behaviour Lab, Department of Physical Therapy, University of British Columbia, Vancouver, British Columbia, Canada
3. Arya R, Ervin B, Greiner HM, Buroker J, Byars AW, Tenney JR, Arthur TM, Fong SL, Lin N, Frink C, Rozhkov L, Scholle C, Skoch J, Leach JL, Mangano FT, Glauser TA, Hickok G, Holland KD. Emotional facial expression and perioral motor functions of the human auditory cortex. Clin Neurophysiol 2024;163:102-111. PMID: 38729074. PMCID: PMC11176009. DOI: 10.1016/j.clinph.2024.04.017.
Abstract
OBJECTIVE We investigated the role of transverse temporal gyrus and adjacent cortex (TTG+) in facial expressions and perioral movements. METHODS In 31 patients undergoing stereo-electroencephalography monitoring, we describe behavioral responses elicited by electrical stimulation within the TTG+. Task-induced high-gamma modulation (HGM), auditory evoked responses, and resting-state connectivity were used to investigate the cortical sites having different types of responses to electrical stimulation. RESULTS Changes in facial expressions and perioral movements were elicited by electrical stimulation within TTG+ in 9 (29%) and 10 (32%) patients, respectively, in addition to the more common language responses (naming interruptions, auditory hallucinations, paraphasic errors). All functional sites showed auditory task-induced HGM and evoked responses, validating their location within the auditory cortex; however, motor sites showed lower peak amplitudes and longer peak latencies compared to language sites. Significant first-degree connections for motor sites included precentral, anterior cingulate, parahippocampal, and anterior insular gyri, whereas those for language sites included posterior superior temporal, posterior middle temporal, inferior frontal, supramarginal, and angular gyri. CONCLUSIONS Multimodal data suggest that TTG+ may participate in auditory-motor integration. SIGNIFICANCE TTG+ likely participates in facial expressions in response to emotional cues during an auditory discourse.
Affiliation(s)
- Ravindra Arya
- Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA; Department of Electrical Engineering and Computer Science, University of Cincinnati, Cincinnati, OH, USA.
- Brian Ervin
- Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Electrical Engineering and Computer Science, University of Cincinnati, Cincinnati, OH, USA
- Hansel M Greiner
- Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Jason Buroker
- Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Anna W Byars
- Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Jeffrey R Tenney
- Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Todd M Arthur
- Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Susan L Fong
- Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Nan Lin
- Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Clayton Frink
- Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Leonid Rozhkov
- Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Craig Scholle
- Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Jesse Skoch
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA; Division of Pediatric Neurosurgery, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- James L Leach
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA; Division of Pediatric Neuro-radiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Francesco T Mangano
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA; Division of Pediatric Neurosurgery, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Tracy A Glauser
- Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Gregory Hickok
- Department of Cognitive Sciences, Department of Language Science, University of California, Irvine, CA, USA
- Katherine D Holland
- Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
4. Liu Y, Wang S, Lu J, Ding J, Chen Y, Yang L, Wang S. Neural processing of speech comprehension in noise predicts individual age using fNIRS-based brain-behavior models. Cereb Cortex 2024;34:bhae178. PMID: 38715408. DOI: 10.1093/cercor/bhae178.
Abstract
Speech comprehension in noise depends on complex interactions between peripheral sensory and central cognitive systems. Despite having normal peripheral hearing, older adults show difficulties in speech comprehension. It remains unclear whether the brain's neural responses could indicate aging. The current study examined whether individual brain activation during speech perception in different listening environments could predict age. We applied functional near-infrared spectroscopy to 93 normal-hearing human adults (20 to 70 years old) during a sentence listening task, which contained a quiet condition and four noisy conditions with different signal-to-noise ratios (SNR = 10, 5, 0, -5 dB). A data-driven approach, region-based brain-age predictive modeling, was adopted. We observed a significant behavioral decrease with age under the four noisy conditions, but not under the quiet condition. Brain activations in the SNR = 10 dB listening condition successfully predicted individual age. Moreover, we found that the bilateral visual sensory cortex, left dorsal speech pathway, left cerebellum, right temporal-parietal junction area, right homolog Wernicke's area, and right middle temporal gyrus contributed most to prediction performance. These results demonstrate that the activations of regions involved in sensory-motor mapping of sound, especially in noisy conditions, could be more sensitive measures for age prediction than external behavior measures.
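For readers unfamiliar with brain-age predictive modeling as described in the abstract above, the sketch below illustrates the general idea on simulated data: regional activation values serve as features in a cross-validated regression that predicts chronological age, and model coefficients give a rough sense of which regions contribute most. This is a generic illustration under stated assumptions; the feature definitions, regression model, and all variable names are illustrative, not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_predict
from scipy.stats import pearsonr

# Hypothetical inputs: one activation value per region per participant, plus age in years.
rng = np.random.default_rng(1)
n_subjects, n_regions = 93, 40
age = rng.uniform(20, 70, n_subjects)
activation = rng.standard_normal((n_subjects, n_regions)) + 0.02 * age[:, None]

# Cross-validated brain-age prediction: regress age on regional activations.
model = Ridge(alpha=1.0)
cv = KFold(n_splits=10, shuffle=True, random_state=0)
predicted_age = cross_val_predict(model, activation, age, cv=cv)

r, _ = pearsonr(age, predicted_age)
mae = np.mean(np.abs(age - predicted_age))
print(f"Prediction accuracy: r = {r:.2f}, MAE = {mae:.1f} years")

# Approximate region contributions from a model fit on all data.
coefs = model.fit(activation, age).coef_
print("Most contributing regions (indices):", np.argsort(np.abs(coefs))[::-1][:5])
```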
Affiliation(s)
- Yi Liu
- Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, No. 17, Hougou Hutong, Dongcheng District, Beijing 100005, China
- Songjian Wang
- Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, No. 17, Hougou Hutong, Dongcheng District, Beijing 100005, China
- Jing Lu
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, No. 19, Xinjiekou Wai Street, Haidian District, Beijing 100875, China
- Junhua Ding
- Department of Psychology, University of Edinburgh, 15 Kings Buildings, Edinburgh EH8 9JZ, United Kingdom
- Younuo Chen
- Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, No. 17, Hougou Hutong, Dongcheng District, Beijing 100005, China
- Liu Yang
- School of Biomedical Engineering, Capital Medical University, No. 10, Xitoutiao, YouAnMen, Fengtai District, Beijing 100069, China
- Shuo Wang
- Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, No. 17, Hougou Hutong, Dongcheng District, Beijing 100005, China
5. Tilsen S. Internal speech is faster than external speech: Evidence for feedback-based temporal control. Cognition 2024;244:105713. PMID: 38176155. DOI: 10.1016/j.cognition.2023.105713.
Abstract
A recent model of temporal control in speech holds that speakers use sensory feedback to control speech rate and articulatory timing. An experiment was conducted to assess whether there is evidence in support of this hypothesis by comparing durations of phrases in external speech (with sensory feedback) and internal speech (without sensory feedback). Phrase lengths were varied by including one to three disyllabic nouns in a target phrase that was always surrounded by overt speech. An inferred duration method was used to estimate the durations of target phrases produced internally. The results showed that internal speech is faster than external speech, supporting the hypothesis. In addition, the results indicate that there is a slow-down associated with transitioning between internal and external modes of production.
Affiliation(s)
- Sam Tilsen
- Department of Linguistics, Cornell University, Ithaca, NY, USA.
6. Tolkacheva V, Brownsett SLE, McMahon KL, de Zubicaray GI. Perceiving and misperceiving speech: lexical and sublexical processing in the superior temporal lobes. Cereb Cortex 2024;34:bhae087. PMID: 38494418. PMCID: PMC10944697. DOI: 10.1093/cercor/bhae087.
Abstract
Listeners can use prior knowledge to predict the content of noisy speech signals, enhancing perception. However, this process can also elicit misperceptions. For the first time, we employed a prime-probe paradigm and transcranial magnetic stimulation to investigate causal roles for the left and right posterior superior temporal gyri (pSTG) in the perception and misperception of degraded speech. Listeners were presented with spectrotemporally degraded probe sentences preceded by a clear prime. To produce misperceptions, we created partially mismatched pseudo-sentence probes via homophonic nonword transformations (e.g. The little girl was excited to lose her first tooth-Tha fittle girmn wam expited du roos har derst cooth). Compared to a control site (vertex), inhibitory stimulation of the left pSTG selectively disrupted priming of real but not pseudo-sentences. Conversely, inhibitory stimulation of the right pSTG enhanced priming of misperceptions with pseudo-sentences, but did not influence perception of real sentences. These results indicate qualitatively different causal roles for the left and right pSTG in perceiving degraded speech, supporting bilateral models that propose engagement of the right pSTG in sublexical processing.
Affiliation(s)
- Valeriya Tolkacheva
- Queensland University of Technology, School of Psychology and Counselling, O Block, Kelvin Grove, Queensland, 4059, Australia
- Sonia L E Brownsett
- Queensland Aphasia Research Centre, School of Health and Rehabilitation Sciences, University of Queensland, Surgical Treatment and Rehabilitation Services, Herston, Queensland, 4006, Australia
- Centre of Research Excellence in Aphasia Recovery and Rehabilitation, La Trobe University, Melbourne, Health Sciences Building 1, 1 Kingsbury Drive, Bundoora, Victoria, 3086, Australia
- Katie L McMahon
- Herston Imaging Research Facility, Royal Brisbane & Women’s Hospital, Building 71/918, Royal Brisbane & Women’s Hospital, Herston, Queensland, 4006, Australia
- Queensland University of Technology, School of Clinical Sciences and Centre for Biomedical Technologies, 60 Musk Avenue, Kelvin Grove, Queensland, 4059, Australia
- Greig I de Zubicaray
- Queensland University of Technology, School of Psychology and Counselling, O Block, Kelvin Grove, Queensland, 4059, Australia
7. Silva Pereira S, Özer EE, Sebastian-Galles N. Complexity of STG signals and linguistic rhythm: a methodological study for EEG data. Cereb Cortex 2024;34:bhad549. PMID: 38236741. DOI: 10.1093/cercor/bhad549.
Abstract
The superior temporal and Heschl's gyri of the human brain play a fundamental role in speech processing. Neurons synchronize their activity to the amplitude envelope of the speech signal to extract acoustic and linguistic features, a process known as neural tracking/entrainment. Electroencephalography has been extensively used in language-related research due to its high temporal resolution and reduced cost, but it does not allow for precise source localization. Motivated by the lack of a unified methodology for the interpretation of source-reconstructed signals, we propose a method based on modularity and signal complexity. The procedure was tested on data from an experiment in which we investigated the impact of native language on tracking of linguistic rhythms in two groups: English natives and Spanish natives. In the experiment, we found no effect of native language but an effect of language rhythm. Here, we compare source-projected signals in the auditory areas of both hemispheres for the different conditions using nonparametric permutation tests, modularity, and a dynamical complexity measure. We found increasing values of complexity for decreased regularity in the stimuli, allowing us to conclude that languages with less complex rhythms are easier for the auditory cortex to track.
Affiliation(s)
- Silvana Silva Pereira
- Center for Brain and Cognition, Department of Information and Communications Technologies, Universitat Pompeu Fabra, 08005 Barcelona, Spain
- Ege Ekin Özer
- Center for Brain and Cognition, Department of Information and Communications Technologies, Universitat Pompeu Fabra, 08005 Barcelona, Spain
- Nuria Sebastian-Galles
- Center for Brain and Cognition, Department of Information and Communications Technologies, Universitat Pompeu Fabra, 08005 Barcelona, Spain
8. Khoshhal Mollasaraei Z, Behroozmand R. Impairment of the internal forward model and feedback mechanisms for vocal sensorimotor control in post-stroke aphasia: evidence from directional responses to altered auditory feedback. Exp Brain Res 2024;242:225-239. PMID: 37999725. PMCID: PMC10849397. DOI: 10.1007/s00221-023-06743-1.
Abstract
The present study examined opposing and following vocal responses to altered auditory feedback (AAF) to determine how damage to left-hemisphere brain networks impairs the internal forward model and feedback mechanisms in post-stroke aphasia. Forty-nine subjects with aphasia and sixty age-matched controls performed speech vowel production tasks while their auditory feedback was altered using randomized ± 100 cents upward and downward pitch-shift stimuli. Data analysis revealed that when vocal responses were averaged across all trials (i.e., opposing and following), the overall magnitude of vocal compensation was significantly reduced in the aphasia group compared with controls. In addition, when vocal responses were analyzed separately for opposing and following trials, subjects in the aphasia group showed a significantly lower percentage of opposing and higher percentage of following vocal response trials compared with controls, particularly for the upward pitch-shift stimuli. However, there was no significant difference in the magnitude of opposing and following vocal responses between the two groups. These findings further support previous evidence on the impairment of vocal sensorimotor control in aphasia and provide new insights into the distinctive impact of left-hemisphere stroke on the internal forward model and feedback mechanisms. In this context, we propose that the lower percentage of opposing responses in aphasia may be accounted for by deficits in feedback-dependent mechanisms of audio-vocal integration and motor control. In addition, the higher percentage of following responses may reflect aberrantly increased reliance of the speech system on the internal forward model for generating sensory predictions during vocal error detection and motor control.
Affiliation(s)
- Zeinab Khoshhal Mollasaraei
- NeuroSyntax Lab, Department of Communication Sciences and Disorders, Arnold School of Public Health, University of South Carolina, 915 Greene Street, Columbia, SC, 29208, USA
- Roozbeh Behroozmand
- Speech Neuroscience Lab, Department of Speech, Language, and Hearing, Callier Center for Communication Disorders, School of Behavioral and Brain Sciences, The University of Texas at Dallas, 2811 N. Floyd Rd, Richardson, TX, 75080, USA.
9. Nerland S, Slapø NB, Barth C, Mørch-Johnsen L, Jørgensen KN, Beck D, Wortinger LA, Westlye LT, Jönsson EG, Andreassen OA, Maximov II, Geier OM, Agartz I. Current Auditory Hallucinations Are Not Associated With Specific White Matter Diffusion Alterations in Schizophrenia. Schizophrenia Bulletin Open 2024;5:sgae008. PMID: 39144116. PMCID: PMC11207682. DOI: 10.1093/schizbullopen/sgae008.
Abstract
Background and Hypothesis Studies have linked auditory hallucinations (AH) in schizophrenia spectrum disorders (SCZ) to altered cerebral white matter microstructure within the language and auditory processing circuitry (LAPC). However, the specificity to the LAPC remains unclear. Here, we investigated the relationship between AH and white matter microstructure among patients with SCZ using diffusion tensor imaging (DTI). Study Design We included patients with SCZ with (AH+; n = 59) and without (AH-; n = 81) current AH, and 140 age- and sex-matched controls. Fractional anisotropy (FA), mean diffusivity (MD), radial diffusivity (RD), and axial diffusivity (AD) were extracted from 39 fiber tracts. We used principal component analysis (PCA) to identify general factors of variation across fiber tracts and DTI metrics. Regression models adjusted for sex, age, and age² were used to compare tract-wise DTI metrics and PCA factors between AH+, AH-, and healthy controls and to assess associations with clinical characteristics. Study Results Widespread differences relative to controls were observed for MD and RD in patients without current AH. Only limited differences in 2 fiber tracts were observed between AH+ and controls. Unimodal PCA factors based on MD, RD, and AD, as well as multimodal PCA factors, differed significantly relative to controls for AH-, but not AH+. We did not find any significant associations between PCA factors and clinical characteristics. Conclusions Contrary to previous studies, DTI metrics differed mainly in patients without current AH compared to controls, indicating a widespread neuroanatomical distribution. This challenges the notion that altered DTI metrics within the LAPC are a specific feature underlying AH.
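The analysis summarized above (PCA across tract-wise DTI metrics followed by group comparisons adjusted for sex, age, and age²) can be illustrated schematically as follows. The sketch runs on simulated data; the column names, the single-component choice, and the group labels are assumptions for illustration only and do not reproduce the study's actual pipeline.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
import statsmodels.formula.api as smf

# Hypothetical table: one row per participant, one FA value per fiber tract,
# plus group membership, age, and sex.
rng = np.random.default_rng(2)
n = 280
tracts = [f"tract_{i}" for i in range(39)]
df = pd.DataFrame(rng.standard_normal((n, len(tracts))), columns=tracts)
df["group"] = rng.choice(["AHpos", "AHneg", "control"], n)
df["age"] = rng.uniform(18, 65, n)
df["sex"] = rng.choice(["F", "M"], n)

# General factor of variation across tracts for one metric (standardized FA here).
pca = PCA(n_components=1)
df["fa_factor"] = pca.fit_transform(StandardScaler().fit_transform(df[tracts])).ravel()

# Group comparison adjusted for sex, age, and age^2, with controls as reference.
model = smf.ols("fa_factor ~ C(group, Treatment('control')) + sex + age + I(age**2)",
                data=df).fit()
print(model.summary().tables[1])
```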
Affiliation(s)
- Stener Nerland
- Department of Psychiatric Research, Diakonhjemmet Hospital, Oslo, Norway
- Norwegian Center for Mental Disorders Research (NORMENT), Institute of Clinical Medicine, University of Oslo, Oslo, Norway
- Nora Berz Slapø
- Norwegian Center for Mental Disorders Research (NORMENT), Institute of Clinical Medicine, University of Oslo, Oslo, Norway
- Claudia Barth
- Department of Psychiatric Research, Diakonhjemmet Hospital, Oslo, Norway
- Norwegian Center for Mental Disorders Research (NORMENT), Institute of Clinical Medicine, University of Oslo, Oslo, Norway
- Lynn Mørch-Johnsen
- Norwegian Center for Mental Disorders Research (NORMENT), Institute of Clinical Medicine, University of Oslo, Oslo, Norway
- Department of Psychiatry, Østfold Hospital, Grålum, Norway
- Department of Clinical Research, Østfold Hospital, Grålum, Norway
- Kjetil Nordbø Jørgensen
- Norwegian Center for Mental Disorders Research (NORMENT), Institute of Clinical Medicine, University of Oslo, Oslo, Norway
- Dani Beck
- Department of Psychiatric Research, Diakonhjemmet Hospital, Oslo, Norway
- Norwegian Center for Mental Disorders Research (NORMENT), Division of Mental Health and Addiction, Oslo University Hospital, Oslo, Norway
- Department of Psychology, University of Oslo, Oslo, Norway
- Laura A Wortinger
- Department of Psychiatric Research, Diakonhjemmet Hospital, Oslo, Norway
- Norwegian Center for Mental Disorders Research (NORMENT), Institute of Clinical Medicine, University of Oslo, Oslo, Norway
- Lars T Westlye
- Norwegian Center for Mental Disorders Research (NORMENT), Division of Mental Health and Addiction, Oslo University Hospital, Oslo, Norway
- Department of Psychology, University of Oslo, Oslo, Norway
- Erik G Jönsson
- Norwegian Center for Mental Disorders Research (NORMENT), Institute of Clinical Medicine, University of Oslo, Oslo, Norway
- Centre for Psychiatry Research, Department of Clinical Neuroscience, Karolinska Institutet and Stockholm Health Care Services, Stockholm Region, Stockholm, Sweden
- Ole A Andreassen
- Department of Psychiatric Research, Diakonhjemmet Hospital, Oslo, Norway
- Norwegian Center for Mental Disorders Research (NORMENT), Institute of Clinical Medicine, University of Oslo, Oslo, Norway
- Norwegian Center for Mental Disorders Research (NORMENT), Division of Mental Health and Addiction, Oslo University Hospital, Oslo, Norway
- Ivan I Maximov
- Norwegian Center for Mental Disorders Research (NORMENT), Division of Mental Health and Addiction, Oslo University Hospital, Oslo, Norway
- Department of Health and Functioning, Western Norway University of Applied Sciences, Bergen, Norway
- Oliver M Geier
- Department of Computational Radiology and Physics, Division of Radiology and Nuclear Medicine, Oslo University Hospital, Oslo, Norway
- Center for Lifespan Changes in Brain and Cognition (LCBC), Department of Psychology, University of Oslo, Oslo, Norway
- Ingrid Agartz
- Department of Psychiatric Research, Diakonhjemmet Hospital, Oslo, Norway
- Norwegian Center for Mental Disorders Research (NORMENT), Institute of Clinical Medicine, University of Oslo, Oslo, Norway
- Centre for Psychiatry Research, Department of Clinical Neuroscience, Karolinska Institutet and Stockholm Health Care Services, Stockholm Region, Stockholm, Sweden
10. Zhou X, Wang L, Hong X, Wong PCM. Infant-directed speech facilitates word learning through attentional mechanisms: An fNIRS study of toddlers. Dev Sci 2024;27:e13424. PMID: 37322865. DOI: 10.1111/desc.13424.
Abstract
The speech register that adults, especially caregivers, use when interacting with infants and toddlers, that is, infant-directed speech (IDS) or baby talk, has been reported to facilitate language development throughout the early years. However, the neural mechanisms, as well as why IDS results in such a developmental facilitatory effect, remain to be investigated. The current study uses functional near-infrared spectroscopy (fNIRS) to evaluate two alternative hypotheses of such a facilitative effect: that IDS serves to enhance linguistic contrastiveness or to attract the child's attention. Behavioral and fNIRS data were acquired from twenty-seven Cantonese-learning toddlers 15-20 months of age when their parents spoke to them in either an IDS or adult-directed speech (ADS) register in a naturalistic task in which the child learned four disyllabic pseudowords. fNIRS results showed significantly greater neural responses to the IDS than the ADS register in the left dorsolateral prefrontal cortex (L-dlPFC), but opposite response patterns in the bilateral inferior frontal gyrus (IFG). The differences in fNIRS responses to IDS and to ADS in the L-dlPFC and the left parietal cortex (L-PC) showed significantly positive correlations with the differences in the behavioral word-learning performance of toddlers. The same fNIRS measures in the L-dlPFC and right PC (R-PC) of toddlers were significantly correlated with pitch range differences of parents between the two speech conditions. Together, our results suggest that the dynamic prosody in IDS increased toddlers' attention through greater involvement of the left frontoparietal network, which facilitated word learning compared to ADS. RESEARCH HIGHLIGHTS: This study for the first time examined the neural mechanisms of how infant-directed speech (IDS) facilitates word learning in toddlers. Using fNIRS, we identified the cortical regions that were directly involved in IDS processing. Our results suggest that IDS facilitates word learning by engaging right-lateralized prosody processing and top-down attentional mechanisms in the left frontoparietal networks. The language network, including the inferior frontal gyrus and temporal cortex, was not directly involved in IDS processing to support word learning.
Affiliation(s)
- Xin Zhou
- Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Hong Kong SAR, China
- Brain and Mind Institute, The Chinese University of Hong Kong, Hong Kong SAR, China
- Luchang Wang
- Department of Applied Linguistics, Xi'an Jiaotong-Liverpool University, Suzhou, China
- Xuancu Hong
- Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Hong Kong SAR, China
- Patrick C M Wong
- Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Hong Kong SAR, China
- Brain and Mind Institute, The Chinese University of Hong Kong, Hong Kong SAR, China
11. Hovsepyan S, Olasagasti I, Giraud AL. Rhythmic modulation of prediction errors: A top-down gating role for the beta-range in speech processing. PLoS Comput Biol 2023;19:e1011595. PMID: 37934766. PMCID: PMC10655987. DOI: 10.1371/journal.pcbi.1011595.
Abstract
Natural speech perception requires processing the ongoing acoustic input while keeping in mind the preceding one and predicting the next. This complex computational problem could be handled by a dynamic multi-timescale hierarchical inferential process that coordinates the information flow up and down the language network hierarchy. Using a predictive coding computational model (Precoss-β) that identifies online individual syllables from continuous speech, we address the advantage of a rhythmic modulation of up and down information flows, and whether beta oscillations could be optimal for this. In the model, and consistent with experimental data, theta and low-gamma neural frequency scales ensure syllable-tracking and phoneme-level speech encoding, respectively, while the beta rhythm is associated with inferential processes. We show that a rhythmic alternation of bottom-up and top-down processing regimes improves syllable recognition, and that optimal efficacy is reached when the alternation of bottom-up and top-down regimes, via oscillating prediction error precisions, is in the beta range (around 20-30 Hz). These results not only demonstrate the advantage of a rhythmic alternation of up- and down-going information, but also that the low-beta range is optimal given sensory analysis at theta and low-gamma scales. While specific to speech processing, the notion of alternating bottom-up and top-down processes with frequency multiplexing might generalize to other cognitive architectures.
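As a purely schematic illustration of the gating idea described above (prediction-error precision oscillating at a beta rate so that reliance on bottom-up evidence and top-down predictions alternates), the toy example below applies a sinusoidal gain to a simple error-correction loop. It is not the Precoss-β model; the input signal, noise level, and 25 Hz gating frequency are arbitrary assumptions chosen only to make the mechanism concrete.

```python
import numpy as np

fs = 1000                                           # sampling rate (Hz), assumed
t = np.arange(0, 1.0, 1 / fs)
f_beta = 25                                         # gating frequency in the low-beta range (Hz)
gate = 0.5 * (1 + np.sin(2 * np.pi * f_beta * t))   # 0..1 weight on bottom-up errors

rng = np.random.default_rng(0)
true_signal = np.sin(2 * np.pi * 5 * t)             # slow, syllable-rate input (assumed)
sensory = true_signal + 0.3 * rng.standard_normal(t.size)

# Internal estimate updated by precision-weighted prediction errors.
estimate = np.zeros_like(t)
for i in range(1, t.size):
    prediction = estimate[i - 1]                    # trivial top-down prediction
    error = sensory[i] - prediction                 # bottom-up prediction error
    # High gate: trust sensory evidence; low gate: trust the prediction.
    estimate[i] = prediction + gate[i] * error

rmse = np.sqrt(np.mean((estimate - true_signal) ** 2))
print(f"RMSE with beta-gated updates: {rmse:.3f}")
```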
Affiliation(s)
- Sevada Hovsepyan
- Department of Basic Neurosciences, University of Geneva, Biotech Campus, Genève, Switzerland
- Itsaso Olasagasti
- Department of Basic Neurosciences, University of Geneva, Biotech Campus, Genève, Switzerland
- Anne-Lise Giraud
- Department of Basic Neurosciences, University of Geneva, Biotech Campus, Genève, Switzerland
- Institut Pasteur, Université Paris Cité, Inserm, Institut de l’Audition, France
12. Zheng Y, Gao P, Li X. The modulating effect of musical expertise on lexical-semantic prediction in speech-in-noise comprehension: Evidence from an EEG study. Psychophysiology 2023;60:e14371. PMID: 37350401. DOI: 10.1111/psyp.14371.
Abstract
Musical expertise has been proposed to facilitate speech perception and comprehension in noisy environments. This study further examined the open question of whether musical expertise modulates high-level lexical-semantic prediction to aid online speech comprehension in noisy backgrounds. Musicians and nonmusicians listened to semantically strongly/weakly constraining sentences during EEG recording. At verbs prior to target nouns, both groups showed a positivity-ERP effect (Strong vs. Weak) associated with the predictability of incoming nouns; this correlation effect was stronger in musicians than in nonmusicians. After the target nouns appeared, both groups showed an N400 reduction effect (Strong vs. Weak) associated with noun predictability, but musicians exhibited an earlier onset latency and stronger effect size of this correlation effect than nonmusicians. To determine whether musical expertise enhances anticipatory semantic processing in general, the same group of participants participated in a control reading comprehension experiment. The results showed that, compared with nonmusicians, musicians demonstrated more delayed ERP correlation effects of noun predictability at words preceding the target nouns; musicians also exhibited more delayed and reduced N400 decrease effects correlated with noun predictability at the target nouns. Taken together, these results suggest that musical expertise enhances lexical-semantic predictive processing in speech-in-noise comprehension. This musical-expertise effect may be related to the strengthened hierarchical speech processing in particular.
Affiliation(s)
- Yuanyi Zheng
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Panke Gao
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Xiaoqing Li
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Jiangsu Collaborative Innovation Center for Language Ability, Jiangsu Normal University, Xuzhou, China
13. Wang R, Chen X, Khalilian-Gourtani A, Yu L, Dugan P, Friedman D, Doyle W, Devinsky O, Wang Y, Flinker A. Distributed feedforward and feedback cortical processing supports human speech production. Proc Natl Acad Sci U S A 2023;120:e2300255120. PMID: 37819985. PMCID: PMC10589651. DOI: 10.1073/pnas.2300255120.
Abstract
Speech production is a complex human function requiring continuous feedforward commands together with reafferent feedback processing. These processes are carried out by distinct frontal and temporal cortical networks, but the degree and timing of their recruitment and dynamics remain poorly understood. We present a deep learning architecture that translates neural signals recorded directly from the cortex to an interpretable representational space that can reconstruct speech. We leverage learned decoding networks to disentangle feedforward vs. feedback processing. Unlike prevailing models, we find a mixed cortical architecture in which frontal and temporal networks each process both feedforward and feedback information in tandem. We elucidate the timing of feedforward and feedback-related processing by quantifying the derived receptive fields. Our approach provides evidence for a surprisingly mixed cortical architecture of speech circuitry together with decoding advances that have important implications for neural prosthetics.
Affiliation(s)
- Ran Wang
- Electrical and Computer Engineering Department, New York University, New York, NY11201
- Xupeng Chen
- Electrical and Computer Engineering Department, New York University, New York, NY11201
- Leyao Yu
- Neurology Department, New York University, New York, NY10016
- Biomedical Engineering Department, New York University, New York, NY11201
- Patricia Dugan
- Neurology Department, New York University, New York, NY10016
- Daniel Friedman
- Neurology Department, New York University, New York, NY10016
- Werner Doyle
- Neurosurgery Department, New York University, New York, NY10016
- Orrin Devinsky
- Neurology Department, New York University, New York, NY10016
- Yao Wang
- Electrical and Computer Engineering Department, New York University, New York, NY11201
- Biomedical Engineering Department, New York University, New York, NY11201
- Adeen Flinker
- Neurology Department, New York University, New York, NY10016
- Biomedical Engineering Department, New York University, New York, NY11201
14. Kocsis Z, Jenison RL, Taylor PN, Calmus RM, McMurray B, Rhone AE, Sarrett ME, Deifelt Streese C, Kikuchi Y, Gander PE, Berger JI, Kovach CK, Choi I, Greenlee JD, Kawasaki H, Cope TE, Griffiths TD, Howard MA, Petkov CI. Immediate neural impact and incomplete compensation after semantic hub disconnection. Nat Commun 2023;14:6264. PMID: 37805497. PMCID: PMC10560235. DOI: 10.1038/s41467-023-42088-7.
Abstract
The human brain extracts meaning using an extensive neural system for semantic knowledge. Whether broadly distributed systems depend on or can compensate after losing a highly interconnected hub is controversial. We report intracranial recordings from two patients during a speech prediction task, obtained minutes before and after neurosurgical treatment requiring disconnection of the left anterior temporal lobe (ATL), a candidate semantic knowledge hub. Informed by modern diaschisis and predictive coding frameworks, we tested hypotheses ranging from solely neural network disruption to complete compensation by the indirectly affected language-related and speech-processing sites. Immediately after ATL disconnection, we observed neurophysiological alterations in the recorded frontal and auditory sites, providing direct evidence for the importance of the ATL as a semantic hub. We also obtained evidence for rapid, albeit incomplete, attempts at neural network compensation, with neural impact largely in the forms stipulated by the predictive coding framework specifically, and by the modern diaschisis framework more generally. The overall results validate these frameworks and reveal an immediate impact and capability of the human brain to adjust after losing a brain hub.
Affiliation(s)
- Zsuzsanna Kocsis
- Department of Neurosurgery, University of Iowa, Iowa City, IA, USA.
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK.
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA.
- Rick L Jenison
- Departments of Neuroscience and Psychology, University of Wisconsin, Madison, WI, USA
- Peter N Taylor
- CNNP Lab, Interdisciplinary Computing and Complex BioSystems Group, School of Computing, Newcastle University, Newcastle upon Tyne, UK
- UCL Institute of Neurology, Queen Square, London, UK
- Ryan M Calmus
- Department of Neurosurgery, University of Iowa, Iowa City, IA, USA
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK
- Bob McMurray
- Department of Psychological and Brain Science, University of Iowa, Iowa City, IA, USA
- Ariane E Rhone
- Department of Neurosurgery, University of Iowa, Iowa City, IA, USA
- Yukiko Kikuchi
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK
- Phillip E Gander
- Department of Neurosurgery, University of Iowa, Iowa City, IA, USA
- Department of Radiology, University of Iowa, Iowa City, IA, USA
- Iowa Neuroscience Institute, University of Iowa, Iowa City, IA, USA
- Joel I Berger
- Department of Neurosurgery, University of Iowa, Iowa City, IA, USA
- Inyong Choi
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, USA
- Hiroto Kawasaki
- Department of Neurosurgery, University of Iowa, Iowa City, IA, USA
- Thomas E Cope
- Department of Clinical Neurosciences, Cambridge University, Cambridge, UK
- MRC Cognition and Brain Sciences Unit, Cambridge University, Cambridge, UK
- Timothy D Griffiths
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK
- Matthew A Howard
- Department of Neurosurgery, University of Iowa, Iowa City, IA, USA
- Christopher I Petkov
- Department of Neurosurgery, University of Iowa, Iowa City, IA, USA.
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK.
15. Sarmukadam K, Behroozmand R. Neural oscillations reveal disrupted functional connectivity associated with impaired speech auditory feedback control in post-stroke aphasia. Cortex 2023;166:258-274. PMID: 37437320. PMCID: PMC10527672. DOI: 10.1016/j.cortex.2023.05.015.
Abstract
Oscillatory brain activity reflects neuro-computational processes that are critical for speech production and sensorimotor control. In the present study, we used neural oscillations in left-hemisphere stroke survivors with aphasia as a model to investigate network-level functional connectivity deficits associated with disrupted speech auditory feedback control. Electroencephalography signals were recorded from 40 post-stroke aphasia and 39 neurologically intact control participants while they performed speech vowel production and listening tasks under pitch-shifted altered auditory feedback (AAF) conditions. Using the weighted phase-lag index, we calculated broadband (1-70 Hz) functional neural connectivity between electrode pairs covering the frontal, pre- and post-central, and parietal regions. Results revealed reduced fronto-central delta and theta band and centro-parietal low-beta band connectivity in left-hemisphere electrodes associated with diminished speech AAF compensation responses in post-stroke aphasia compared with controls. Lesion-mapping analysis demonstrated that stroke-induced damage to multi-modal brain networks within the inferior frontal gyrus, Rolandic operculum, inferior parietal lobule, angular gyrus, and supramarginal gyrus predicted the reduced functional neural connectivity within the delta and low-beta bands during both tasks in aphasia. These results provide evidence that disrupted neural connectivity due to left-hemisphere brain damage can result in network-wide dysfunctions associated with impaired sensorimotor integration mechanisms for speech auditory feedback control.
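The weighted phase-lag index (WPLI) used above for functional connectivity has a compact definition: the magnitude of the trial-averaged imaginary part of the cross-spectrum between two signals, normalized by the trial average of its absolute value. The sketch below computes it with SciPy on synthetic epochs; the sampling rate, segment length, and toy signals are assumptions for illustration and do not reflect the study's recording or preprocessing settings.

```python
import numpy as np
from scipy.signal import csd

def wpli(x, y, fs, nperseg=256):
    """Weighted phase-lag index between two signals across epochs.

    x, y: arrays of shape (n_epochs, n_samples).
    Returns frequencies and WPLI (0 = no consistently lagged coupling, 1 = maximal).
    """
    f, sxy = csd(x, y, fs=fs, nperseg=nperseg, axis=-1)  # cross-spectrum per epoch
    im = np.imag(sxy)
    num = np.abs(np.mean(im, axis=0))       # |E[Im(Sxy)]| across epochs
    den = np.mean(np.abs(im), axis=0)       # E[|Im(Sxy)|]
    return f, num / np.maximum(den, 1e-12)

# Synthetic example: two noisy channels sharing a lagged 6 Hz component.
rng = np.random.default_rng(0)
fs, n_epochs, n_samples = 250, 40, 1000
t = np.arange(n_samples) / fs
common = np.sin(2 * np.pi * 6 * t)
x = common + 0.5 * rng.standard_normal((n_epochs, n_samples))
y = np.roll(common, 10) + 0.5 * rng.standard_normal((n_epochs, n_samples))

f, w = wpli(x, y, fs)
print(f"WPLI near 6 Hz: {w[np.argmin(np.abs(f - 6))]:.2f}")
```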
Affiliation(s)
- Kimaya Sarmukadam
- Speech Neuroscience Lab, Department of Communication Sciences and Disorders, Arnold School of Public Health, University of South Carolina, Columbia, SC, United States.
- Roozbeh Behroozmand
- Speech Neuroscience Lab, Department of Communication Sciences and Disorders, Arnold School of Public Health, University of South Carolina, Columbia, SC, United States.
16. Oancea AA, Saleh C, Cordier D, Schoepfer R, Lieb J. Does Functional Imaging Play a Role in Pre-Operative Diagnosis of Brain Tumours? Fortschritte der Neurologie-Psychiatrie 2023;91:366-368. PMID: 37327815. DOI: 10.1055/a-2089-3425.
Abstract
Although there is large variability in the neural organization of language function between individuals, there is still an ongoing debate about whether functional imaging should be a standard procedure in the preoperative work-up of brain tumors. Brain mapping of the language centers differs from individual to individual in multilingual patients, and changes in its architecture may occur as a result of neuroplasticity induced by a mass lesion. This article discusses the role of functional imaging in the preoperative setting.
Affiliation(s)
- Alexandra A Oancea
- REHAB Basel, Clinic for Neurorehabilitation and Paraplegiology, Basel, Switzerland
- Christian Saleh
- REHAB Basel, Clinic for Neurorehabilitation and Paraplegiology, Basel, Switzerland
- University of Basel, Basel, Switzerland
- Dominik Cordier
- Department of Neurosurgery, University Hospital of Basel, Basel, Switzerland
- Raphaela Schoepfer
- REHAB Basel, Clinic for Neurorehabilitation and Paraplegiology, Basel, Switzerland
- Johanna Lieb
- University of Basel, Basel, Switzerland
- Division of Neuroradiology, Clinic of Radiology & Nuclear Medicine, Department of Theragnostics, University Hospital of Basel, Basel, Switzerland
17. Abbasi O, Steingräber N, Chalas N, Kluger DS, Gross J. Spatiotemporal dynamics characterise spectral connectivity profiles of continuous speaking and listening. PLoS Biol 2023;21:e3002178. PMID: 37478152. DOI: 10.1371/journal.pbio.3002178.
Abstract
Speech production and perception are fundamental processes of human cognition that both rely on intricate processing mechanisms that are still poorly understood. Here, we study these processes by using magnetoencephalography (MEG) to comprehensively map connectivity of regional brain activity within the brain and to the speech envelope during continuous speaking and listening. Our results reveal not only a partly shared neural substrate for both processes but also a dissociation in space, delay, and frequency. Neural activity in motor and frontal areas is coupled to succeeding speech in delta band (1 to 3 Hz), whereas coupling in the theta range follows speech in temporal areas during speaking. Neural connectivity results showed a separation of bottom-up and top-down signalling in distinct frequency bands during speaking. Here, we show that frequency-specific connectivity channels for bottom-up and top-down signalling support continuous speaking and listening. These findings further shed light on the complex interplay between different brain regions involved in speech production and perception.
Affiliation(s)
- Omid Abbasi
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Nadine Steingräber
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Nikos Chalas
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
- Daniel S Kluger
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
- Joachim Gross
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
18. Medeiros W, Barros T, Caixeta FV. Bibliometric mapping of non-invasive brain stimulation techniques (NIBS) for fluent speech production. Front Hum Neurosci 2023;17:1164890. PMID: 37425291. PMCID: PMC10323431. DOI: 10.3389/fnhum.2023.1164890.
Abstract
Introduction Language production is a finely regulated process, with many aspects that still elude comprehension. From a motor perspective, speech involves over a hundred different muscles functioning in coordination. As science and technology evolve, new approaches are used to study speech production and treat its disorders, and there is growing interest in the use of non-invasive modulation by means of transcranial magnetic stimulation (TMS) and transcranial direct current stimulation (tDCS). Methods Here we analyzed data obtained from Scopus (Elsevier) using VOSviewer to provide an overview of bibliographic mapping of citation, co-occurrence of keywords, co-citation, and bibliographic coupling in research on non-invasive brain stimulation (NIBS) for speech. Results In total, 253 documents were found, 55% of them from only three countries (USA, Germany, and Italy), with emerging economies such as Brazil and China becoming relevant in this topic recently. Most documents were published in the last decade, with 2022 being the most productive year yet, showing that brain stimulation has untapped potential for the speech research field. Discussion Keyword analysis indicates a move away from basic research on motor control in healthy speech toward clinical applications such as stuttering and aphasia treatment. We also observe a recent trend toward cerebellar modulation for clinical treatment. Finally, we discuss how NIBS techniques have become established over the years and gained prominence as tools in speech therapy and research, and highlight potential methodological possibilities for future research.
19. Behroozmand R, Sarmukadam K, Fridriksson J. Aberrant modulation of broadband neural oscillations reflects vocal sensorimotor deficits in post-stroke aphasia. Clin Neurophysiol 2023;149:100-112. PMID: 36934601. PMCID: PMC10101924. DOI: 10.1016/j.clinph.2023.02.176.
Abstract
OBJECTIVE The present study investigated the neural oscillatory correlates of impaired vocal sensorimotor control in left-hemisphere stroke. METHODS Electroencephalography (EEG) signals were recorded from 34 stroke and 46 control subjects during speech vowel vocalization and listening tasks under normal and pitch-shifted auditory feedback. RESULTS Time-frequency analyses revealed aberrantly decreased theta (4-8 Hz) and increased gamma band (30-80 Hz) power in frontal and posterior parieto-occipital regions as well as reduced alpha (8-13 Hz) and beta (13-30 Hz) desynchronization over sensorimotor areas before speech vowel vocalization in left-hemisphere stroke compared with controls. Subjects with stroke also presented with aberrant modulation of broadband (4-80 Hz) neural oscillations over sensorimotor regions after speech vowel onset during vocalization and listening under normal and altered auditory feedback. We found that the atypical pattern of broadband neural oscillatory modulation was correlated with diminished vocal feedback error compensation behavior and the severity of co-existing language-related aphasia symptoms associated with left-hemisphere stroke. CONCLUSIONS These findings indicate complex interplays between the underlying mechanisms of speech and language and their deficits in post-stroke aphasia. SIGNIFICANCE Our data motivate the notion of studying neural oscillatory dynamics as a critical component of the examination of speech and language disorders in post-stroke aphasia.
Collapse
Affiliation(s)
- Roozbeh Behroozmand
- Speech Neuroscience Lab, Department of Communication Sciences and Disorders, Arnold School of Public Health, University of South Carolina, 915 Greene Street, Columbia, SC 29208, USA.
| | - Kimaya Sarmukadam
- Speech Neuroscience Lab, Department of Communication Sciences and Disorders, Arnold School of Public Health, University of South Carolina, 915 Greene Street, Columbia, SC 29208, USA
| | - Julius Fridriksson
- The Aphasia Lab, Department of Communication Sciences and Disorders, Arnold School of Public Health, University of South Carolina, 915 Greene St, Columbia, SC 29208, USA; Center for the Study of Aphasia Recovery (C-STAR), Arnold School of Public Health, University of South Carolina, 915 Greene St, Columbia, SC 29208, USA
| |
Collapse
|
20
|
Meyer AM, Snider SF, Tippett DC, Saloma R, Turkeltaub PE, Hillis AE, Friedman RB. Baseline Conceptual-Semantic Impairment Predicts Longitudinal Treatment Effects for Anomia in Primary Progressive Aphasia and Alzheimer's Disease. APHASIOLOGY 2023; 38:205-236. [PMID: 38283767 PMCID: PMC10809875 DOI: 10.1080/02687038.2023.2183075] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/09/2022] [Accepted: 02/16/2023] [Indexed: 01/30/2024]
Abstract
Background An individual's diagnostic subtype may fail to predict the efficacy of a given type of treatment for anomia. Classification by conceptual-semantic impairment may be more informative. Aims This study examined the effects of conceptual-semantic impairment and diagnostic subtype on anomia treatment effects in primary progressive aphasia (PPA) and Alzheimer's disease (AD). Methods & Procedures At baseline, the picture and word versions of the Pyramids and Palm Trees and Kissing and Dancing tests were used to measure conceptual-semantic processing. Based on norming that was conducted with unimpaired older adults, participants were classified as being impaired on both the picture and word versions (i.e., modality-general conceptual-semantic impairment), the picture version (Objects or Actions) only (i.e., visual-conceptual impairment), the word version (Nouns or Verbs) only (i.e., lexical-semantic impairment), or neither the picture nor the word version (i.e., no impairment). Following baseline testing, a lexical treatment and a semantic treatment were administered to all participants. The treatment stimuli consisted of nouns and verbs that were consistently named correctly at baseline (Prophylaxis items) and/or nouns and verbs that were consistently named incorrectly at baseline (Remediation items). Naming accuracy was measured at baseline and at 3, 7, 11, 14, 18, and 21 months. Outcomes & Results Compared to baseline naming performance, lexical and semantic treatments both improved naming accuracy for treated Remediation nouns and verbs. For Prophylaxis items, lexical treatment was effective for both nouns and verbs, and semantic treatment was effective for verbs, but the pattern of results was different for nouns: the effect of semantic treatment was initially nonsignificant or marginally significant, but it became significant beginning at 11 months, suggesting that the effects of prophylactic semantic treatment may become more apparent as the disorder progresses. Furthermore, the interaction between baseline Conceptual-Semantic Impairment and the Treatment Condition (Lexical vs. Semantic) was significant for verb Prophylaxis items at 3 and 18 months, and it was significant for noun Prophylaxis items at 14 and 18 months. Conclusions The pattern of results suggested that individuals who have modality-general conceptual-semantic impairment at baseline are more likely to benefit from lexical treatment, while individuals who have unimpaired conceptual-semantic processing at baseline are more likely to benefit from semantic treatment as the disorder progresses. In contrast to conceptual-semantic impairment, diagnostic subtype did not typically predict the treatment effects.
Collapse
Affiliation(s)
- Aaron M. Meyer
- Center for Aphasia Research and Rehabilitation, Georgetown University Medical Center
| | - Sarah F. Snider
- Center for Aphasia Research and Rehabilitation, Georgetown University Medical Center
| | | | - Ryan Saloma
- Center for Aphasia Research and Rehabilitation, Georgetown University Medical Center
| | - Peter E. Turkeltaub
- Center for Aphasia Research and Rehabilitation, Georgetown University Medical Center
| | | | - Rhonda B. Friedman
- Center for Aphasia Research and Rehabilitation, Georgetown University Medical Center
| |
Collapse
|
21
|
Giorgobiani T, Binkofski F. TPJ in speech and praxis: Comment on "Left and right temporal-parietal junctions (TPJs) as "match/mismatch" hedonic machines: A unifying account of TPJ function" by Doricchi et al. Phys Life Rev 2023; 44:4-5. [PMID: 36455474 DOI: 10.1016/j.plrev.2022.11.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2022] [Accepted: 11/21/2022] [Indexed: 11/25/2022]
Affiliation(s)
- Tamar Giorgobiani
- Faculty of Psychology and Educational Sciences, Ivane Javakhishvili Tbilisi State University, Tbilisi, Georgia.
| | - Ferdinand Binkofski
- Division for Clinical Cognitive Sciences, University Hospital RWTH Aachen, Aachen, Germany; Institute for Neuroscience and Medicine (INM-4), Research Center Jülich GmbH, Jülich, Germany
| |
Collapse
|
22
|
Sidat SM, Giannakopoulou A, Hand CJ, Ingram J. Dual-task decrements in mono-, bi- and multilingual participants: Evidence for multilingual advantage. Laterality 2023:1-23. [PMID: 36803667 DOI: 10.1080/1357650x.2023.2178061] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/22/2023]
Abstract
Evidence suggests that language processing in bilinguals is less left-lateralized than in monolinguals. We explored dual-task decrement (DTD) for mono-, bi- and multilinguals in a verbal-motor dual-task paradigm. We expected monolinguals to show greater DTD than bilingual participants, who would show greater DTD than multilingual participants. Fifty right-handed participants (18 monolingual, 16 bilingual, 16 multilingual) completed verbal fluency and manual motor tasks in isolation and concurrently. Tasks were completed twice in isolation (left-handed, right-handed) and twice as dual-tasks (left-handed, right-handed); participants' motor-executing hands served as a proxy for hemispheric activation. Results supported the hypotheses. Completing dual-tasks incurred a greater cost for manual motor tasks than for verbal fluency tasks. The negative cost of performing dual-tasks diminished as the number of languages spoken increased; in fact, multilingual individuals demonstrated a dual-task advantage in both tasks when using the right hand, strongest in the verbal task. Dual-tasking had the greatest negative impact on the verbal fluency of monolingual participants when the motor task was completed with the right hand; for bi- and multilingual participants, the greatest negative impact on verbal fluency was seen when the motor task was completed with the left hand. Results provide support for the bilateralization of language function in bi- and multilingual individuals.
Collapse
Affiliation(s)
| | | | | | - Joanne Ingram
- Division of Psychology, University of the West of Scotland, Paisley, UK
| |
Collapse
|
23
|
Wu S, Wen Z, Yang W, Jiang C, Zhou Y, Zhao Z, Zhou A, Liu X, Wang X, Wang Y, Wang H, Lin F. Potential dynamic regional brain biomarkers for early discrimination of autism and language development delay in toddlers. Front Neurosci 2023; 16:1097244. [PMID: 36699523 PMCID: PMC9869111 DOI: 10.3389/fnins.2022.1097244] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2022] [Accepted: 12/19/2022] [Indexed: 01/11/2023] Open
Abstract
Background The early diagnosis of autism in children is particularly important. However, there are no obvious objective indices for the diagnosis of autism spectrum disorder (ASD), especially in toddlers aged 1-3 years with language development delay (LDD). The early differential diagnosis of ASD is challenging. Objective To examine differences in the dynamic characteristics of regional neural activity in toddlers with ASD and LDD, and whether these differences can be used as an imaging biomarker for the early differential diagnosis of ASD and LDD. Methods Dynamic regional homogeneity (dReHo) and dynamic amplitude of low-frequency fluctuations (dALFF) in 55 children with ASD and 31 with LDD, aged 1-3 years, were compared. The correlations between ASD symptoms and the values of dReHo/dALFF within regions showing significant between-group differences were analyzed in the ASD group. We further assessed the accuracy of dynamic regional neural activity alterations in distinguishing ASD from LDD using receiver operating characteristic (ROC) analysis. Results Compared with the LDD group, the ASD group showed increased dReHo in the left cerebellum_8/Crust2 and right cerebellum_Crust2, and decreased dReHo in the right middle frontal gyrus (MFG) and post-central gyrus. Patients with ASD also exhibited decreased dALFF in the right middle temporal gyrus (MTG) and right precuneus. Moreover, the Childhood Autism Rating Scale score was negatively correlated with the dReHo of the left cerebellum_8/crust2 and right cerebellum_crust2. The dReHo value of the right MFG was negatively correlated with the social self-help score of the Autism Behavior Checklist. Conclusion The pattern of resting-state regional neural activity variability was different between toddlers with ASD and those with LDD. Dynamic regional indices might be novel neuroimaging biomarkers that allow differentiation of ASD from LDD in toddlers.
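The discrimination step reported above is a standard receiver operating characteristic analysis; the sketch below reproduces its logic with scikit-learn on simulated regional dReHo values (the group sizes match the abstract, but the numbers themselves are invented).

```python
# Sketch of an ROC analysis on a regional dynamic metric (simulated values,
# not the study's dReHo/dALFF data).
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
dreho_asd = rng.normal(loc=0.8, scale=0.2, size=55)    # 55 toddlers with ASD (simulated)
dreho_ldd = rng.normal(loc=0.6, scale=0.2, size=31)    # 31 toddlers with LDD (simulated)

values = np.concatenate([dreho_asd, dreho_ldd])
labels = np.concatenate([np.ones(55), np.zeros(31)])   # 1 = ASD, 0 = LDD

auc = roc_auc_score(labels, values)
fpr, tpr, thresholds = roc_curve(labels, values)
best = np.argmax(tpr - fpr)                            # Youden index picks a cutoff
print(f"AUC = {auc:.2f}, optimal threshold = {thresholds[best]:.2f}")
```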
Collapse
Affiliation(s)
- Shengjuan Wu
- Department of Child Health Care, Maternal and Child Health Hospital of Hubei Province, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Zhi Wen
- Department of Radiology, Renmin Hospital of Wuhan University, Wuhan, China
| | - Wenzhong Yang
- Department of Radiology, Maternal and Child Health Hospital of Hubei Province, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Chengcheng Jiang
- Department of Radiology, Maternal and Child Health Hospital of Hubei Province, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Yurong Zhou
- Department of Radiology, Zhongnan Hospital of Wuhan University, Wuhan, China
| | - Zhiwei Zhao
- Department of Child Health Care, Maternal and Child Health Hospital of Hubei Province, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Aiqin Zhou
- Department of Child Health Care, Maternal and Child Health Hospital of Hubei Province, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Xinglian Liu
- Department of Child Health Care, Maternal and Child Health Hospital of Hubei Province, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Xiaoyan Wang
- Department of Child Health Care, Maternal and Child Health Hospital of Hubei Province, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Yue Wang
- Department of Child Health Care, Maternal and Child Health Hospital of Hubei Province, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Hong Wang
- Department of Child Health Care, Maternal and Child Health Hospital of Hubei Province, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Fuchun Lin
- State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, National Center for Magnetic Resonance in Wuhan, Wuhan Institute of Physics and Mathematics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan, China
| |
Collapse
|
24
|
Humphreys GF, Tibon R. Dual-axes of functional organisation across lateral parietal cortex: the angular gyrus forms part of a multi-modal buffering system. Brain Struct Funct 2023; 228:341-352. [PMID: 35670844 PMCID: PMC9813060 DOI: 10.1007/s00429-022-02510-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2021] [Accepted: 05/08/2022] [Indexed: 01/09/2023]
Abstract
Decades of neuropsychological and neuroimaging evidence have implicated the lateral parietal cortex (LPC) in a myriad of cognitive domains, generating numerous influential theoretical models. However, these theories fail to explain why distinct cognitive activities appear to implicate common neural regions. Here we discuss a unifying model in which the angular gyrus forms part of a wider LPC system with a core underlying neurocomputational function: the multi-sensory buffering of spatio-temporally extended representations. We review the principles derived from computational modelling with neuroimaging task data and functional and structural connectivity measures that underpin the unified neurocomputational framework. We propose that although a variety of cognitive activities might draw on shared underlying machinery, variations in task preference across the angular gyrus, and the wider LPC, arise from graded changes in the underlying structural connectivity of the region to different input/output information sources. More specifically, we propose two primary axes of organisation: a dorsal-ventral axis and an anterior-posterior axis, with variations in task preference arising from underlying connectivity to different core cognitive networks (e.g. the executive, language, visual, or episodic memory networks).
Collapse
Affiliation(s)
- Gina F Humphreys
- MRC Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge, CB2 7EF, UK.
| | - Roni Tibon
- MRC Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge, CB2 7EF, UK.
- School of Psychology, University of Nottingham, Nottingham, UK.
| |
Collapse
|
25
|
Hervais-Adelman A, Kumar U, Mishra RK, Tripathi VA, Guleria A, Singh JP, Huettig F. How Does Literacy Affect Speech Processing? Not by Enhancing Cortical Responses to Speech, But by Promoting Connectivity of Acoustic-Phonetic and Graphomotor Cortices. J Neurosci 2022; 42:8826-8841. [PMID: 36253084 PMCID: PMC9698677 DOI: 10.1523/jneurosci.1125-21.2022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2021] [Revised: 09/01/2022] [Accepted: 09/06/2022] [Indexed: 12/29/2022] Open
Abstract
Previous research suggests that literacy, specifically learning alphabetic letter-to-phoneme mappings, modifies online speech processing and enhances brain responses, as indexed by the BOLD signal, to speech in auditory areas associated with phonological processing (Dehaene et al., 2010). However, alphabets are not the only orthographic systems in use in the world, and hundreds of millions of individuals speak languages that are not written using alphabets. In order to make claims that literacy per se has broad and general consequences for brain responses to speech, one must seek confirmatory evidence from nonalphabetic literacy. To this end, we conducted a longitudinal fMRI study in India probing the effect of literacy in Devanagari, an abugida, on functional connectivity and cerebral responses to speech in 91 variously literate Hindi-speaking male and female human participants. Twenty-two completely illiterate participants underwent 6 months of reading and writing training. Devanagari literacy increases functional connectivity between acoustic-phonetic and graphomotor brain areas, but we find no evidence that literacy changes brain responses to speech, either in cross-sectional or longitudinal analyses. These findings show that a dramatic reconfiguration of the neurofunctional substrates of online speech processing may not be a universal result of learning to read, and suggest that the influence of writing on speech processing should also be investigated. SIGNIFICANCE STATEMENT It is widely claimed that a consequence of being able to read is enhanced auditory processing of speech, reflected by increased cortical responses in areas associated with phonological processing. Here we find no relationship between literacy and the magnitude of brain response to speech stimuli in individuals who speak Hindi, which is written using a nonalphabetic script, Devanagari, an abugida. We propose that the exact nature of the script under examination must be considered before making sweeping claims about the consequences of literacy for the brain. Further, we find evidence that literacy enhances functional connectivity between auditory processing areas and graphomotor areas, suggesting a mechanism whereby learning to write might influence speech perception.
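Functional connectivity between two regions is, at its core, the correlation of their BOLD time courses; the sketch below illustrates a seed-pair connectivity estimate and a literate-versus-illiterate group comparison on simulated time series (the region names and group sizes are illustrative only, not the study's design).

```python
# Sketch of pairwise ROI functional connectivity and a literate-vs-illiterate
# group comparison (simulated time series; region labels are placeholders).
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)

def roi_connectivity(n_subjects, coupling):
    """Fisher z-transformed correlation between two simulated ROI time courses."""
    z_values = []
    for _ in range(n_subjects):
        shared = rng.normal(size=200)                        # common signal drives coupling
        acoustic = coupling * shared + rng.normal(size=200)  # "acoustic-phonetic" ROI
        graphomotor = coupling * shared + rng.normal(size=200)  # "graphomotor" ROI
        r = np.corrcoef(acoustic, graphomotor)[0, 1]
        z_values.append(np.arctanh(r))                       # Fisher z for group statistics
    return np.array(z_values)

literate = roi_connectivity(n_subjects=40, coupling=0.8)
illiterate = roi_connectivity(n_subjects=22, coupling=0.3)
t, p = ttest_ind(literate, illiterate)
print(f"group difference in connectivity: t = {t:.2f}, p = {p:.4f}")
```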
Collapse
Affiliation(s)
- Alexis Hervais-Adelman
- Max Planck Institute for Psycholinguistics, Nijmegen, 6525 XD, The Netherlands
- Neurolinguistics and Department of Psychology, University of Zurich, 8050, Zurich, Switzerland
- Neuroscience Center Zurich, University of Zurich and Eidgenössische Technische Hochschule Zurich, Zurich, 8057, Switzerland
| | - Uttam Kumar
- Centre of Biomedical Research, Lucknow 226014, Uttar Pradesh, India
| | - Ramesh K Mishra
- University of Hyderabad, Gachibowli 500046, Telangana, India
| | - Vivek A Tripathi
- Centre for Behavioural and Cognitive Sciences, University of Allahabad, Old Katra 211002, Uttar Pradesh, India
| | - Anupam Guleria
- Centre of Biomedical Research, Lucknow 226014, Uttar Pradesh, India
| | - Jay P Singh
- Centre for Behavioural and Cognitive Sciences, University of Allahabad, Old Katra 211002, Uttar Pradesh, India
| | - Falk Huettig
- Max Planck Institute for Psycholinguistics, Nijmegen, 6525 XD, The Netherlands
- Centre for Language Studies, Radboud University, 6525 HT Nijmegen, The Netherlands
| |
Collapse
|
26
|
Satoer D, De Witte E, Bulté B, Bastiaanse R, Smits M, Vincent A, Mariën P, Visch-Brink E. Dutch Diagnostic Instrument for Mild Aphasia (DIMA): standardisation and a first clinical application in two brain tumour patients. CLINICAL LINGUISTICS & PHONETICS 2022; 36:929-953. [PMID: 35899484 DOI: 10.1080/02699206.2021.1992797] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/16/2021] [Revised: 08/17/2021] [Accepted: 10/06/2021] [Indexed: 06/15/2023]
Abstract
Brain tumour patients with mild language disturbances are typically underdiagnosed due to a lack of sensitive tests, leading to negative effects on daily communicative and social life. We aimed to develop a Dutch standardised test battery, the Diagnostic Instrument for Mild Aphasia (DIMA), to detect characteristics of mild aphasia at the main linguistic levels of phonology, semantics and (morpho-)syntax in production and comprehension. We designed four DIMA subtests: 1) repetition (words, non-words, compounds and sentences), 2) semantic odd-picture-out (objects and actions), 3) sentence completion and 4) sentence judgment (accuracy and reaction time). A normative study was carried out in a healthy Dutch-speaking population (N = 211) divided into groups by gender, age and education. Clinical application of the DIMA was demonstrated in two brain tumour patients (glioma and meningioma). Standard language tests were also administered: object naming, verbal fluency (category and letter), and the Token Test. In the normative sample, performance was at ceiling on all subtests except semantic odd-picture-out actions, with an effect of age and education on most subtests. In the clinical application of the DIMA, repetition was impaired in both cases, and reaction time (but not accuracy) in the sentence judgment test (phonology and syntax) was impaired in one patient. On the standard language tests, category fluency was impaired in both cases and object naming in one patient; the Token Test was not able to detect language disturbances in either case. The DIMA seems sensitive enough to capture mild aphasic deficits and is expected to have great potential for the standard assessment of language functions in patients with neurological diseases other than brain tumours.
Collapse
Affiliation(s)
- Djaina Satoer
- Department of Neurosurgery, Erasmus MC - University Medical Center, Rotterdam, The Netherlands
| | - Elke De Witte
- Department of Neurosurgery, Erasmus MC - University Medical Center, Rotterdam, The Netherlands
- Department of Clinical and Experimental Linguistics, Vrije Universiteit Brussel, Brussels, Belgium
| | - Bram Bulté
- Centre for Linguistics, Vrije Universiteit Brussel, Brussels, Belgium
| | - Roelien Bastiaanse
- Center for Language and Cognition Groningen, University of Groningen, Groningen, The Netherlands
- Center for Language and Brain, National Research University Higher School of Economics, Moscow, Russian Federation
| | - Marion Smits
- Department of Nuclear Medicine and Radiology, Erasmus MC - University Medical Center, Rotterdam, The Netherlands
| | - Arnaud Vincent
- Department of Neurosurgery, Erasmus MC - University Medical Center, Rotterdam, The Netherlands
| | | | - Evy Visch-Brink
- Department of Neurosurgery, Erasmus MC - University Medical Center, Rotterdam, The Netherlands
- Department of Neurology, Erasmus MC - University Medical Center, Rotterdam, The Netherlands
| |
Collapse
|
27
|
Language Development in Preschool Duchenne Muscular Dystrophy Boys. Brain Sci 2022; 12:brainsci12091252. [PMID: 36138988 PMCID: PMC9497138 DOI: 10.3390/brainsci12091252] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2022] [Revised: 08/31/2022] [Accepted: 09/13/2022] [Indexed: 11/17/2022] Open
Abstract
BACKGROUND The present study aims to assess language in preschool-aged Duchenne muscular dystrophy (DMD) boys with normal cognitive quotients, and to establish whether language difficulties are related to attentional aspects or to the involvement of brain dystrophin isoforms. METHODS Twenty children aged between 48 and 72 months were assessed with language and attention assessments for preschool children. Nine had a mutation upstream of exon 44, five between exons 44 and 51, four between exons 51 and 63, and two after exon 63. A control group comprising 20 age-matched boys with a speech and language disorder and normal IQ was also included. RESULTS Lexical and syntactic comprehension and denomination were normal in 90% of the boys with Duchenne, whereas the articulation and repetition of long words and sentence repetition frequently showed abnormal results (80%). Abnormal results were also found in tests assessing selective and sustained auditory attention. Language difficulties were less frequent in patients with mutations not involving the isoforms Dp140 and Dp71. The profile in Duchenne boys was different from the one observed in boys with a speech and language disorder and no cognitive impairment. CONCLUSION The results of our observational cross-sectional study suggest that early language abilities are frequently abnormal in preschool Duchenne boys and should be assessed regardless of their global neurodevelopmental quotient.
Collapse
|
28
|
Yu M, Song Y, Liu J. The posterior middle temporal gyrus serves as a hub in syntactic comprehension: A model on the syntactic neural network. BRAIN AND LANGUAGE 2022; 232:105162. [PMID: 35908340 DOI: 10.1016/j.bandl.2022.105162] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/24/2021] [Revised: 06/18/2022] [Accepted: 07/21/2022] [Indexed: 06/15/2023]
Abstract
Neuroimaging studies have revealed a distributed neural network involving multiple fronto-temporal regions that are active during syntactic processing. Here, we investigated how these regions work collaboratively to support syntactic comprehension by examining the behavioral relevance of the global functional integration of the syntax network (SN). We found that individuals with stronger resting-state within-network integration in the left posterior middle temporal gyrus (lpMTG) were better at syntactic comprehension. Furthermore, the pair-wise functional connectivity between the lpMTG and Broca's area, the middle frontal gyrus, and the angular and supramarginal gyri was positively correlated with participants' syntactic processing ability. In short, our study reveals the behavioral significance of intrinsic functional integration of the SN in syntactic comprehension, and provides empirical evidence for the hub-like role of the lpMTG. We propose a neural model for syntactic comprehension highlighting the hub of the SN and its interactions with other regions in the network.
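Within-network integration of a hub such as the lpMTG can be summarized as its mean connectivity to the other nodes of the network; the sketch below shows one way such a brain-behavior correlation could be computed, using simulated connectivity matrices and behavioral scores rather than the study's data.

```python
# Sketch of node "integration" (mean within-network connectivity) and its
# correlation with a behavioral score (all data simulated).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n_subjects, n_nodes, hub = 60, 8, 0            # node 0 stands in for the lpMTG

integration, behavior = [], []
for _ in range(n_subjects):
    ts = rng.normal(size=(n_nodes, 240))       # simulated node time courses
    fc = np.corrcoef(ts)                       # within-network connectivity matrix
    strength = (fc[hub].sum() - 1) / (n_nodes - 1)   # mean connectivity of the hub
    integration.append(strength)
    behavior.append(0.5 * strength + rng.normal(scale=0.1))  # simulated syntax score

r, p = pearsonr(integration, behavior)
print(f"integration-behavior correlation: r = {r:.2f}, p = {p:.4f}")
```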
Collapse
Affiliation(s)
- Mengxia Yu
- Bilingual Cognition and Development Lab, Center for Linguistics and Applied Linguistics, Guangdong University of Foreign Studies, Guangzhou 510420, China
| | - Yiying Song
- Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing 100875, China.
| | - Jia Liu
- Department of Psychology & Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing 100084, China
| |
Collapse
|
29
|
Laricchiuta D, Termine A, Fabrizio C, Passarello N, Greco F, Piras F, Picerni E, Cutuli D, Marini A, Mandolesi L, Spalletta G, Petrosini L. Only Words Count; the Rest Is Mere Chattering: A Cross-Disciplinary Approach to the Verbal Expression of Emotional Experience. Behav Sci (Basel) 2022; 12:bs12080292. [PMID: 36004863 PMCID: PMC9404916 DOI: 10.3390/bs12080292] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2022] [Revised: 08/11/2022] [Accepted: 08/12/2022] [Indexed: 12/12/2022] Open
Abstract
The analysis of sequences of words and of prosody, meter, and rhythm in an interview addressing the capacity to identify and describe emotions represents a powerful tool to reveal emotional processing. The ability to express and identify emotions was analyzed by means of the Toronto Structured Interview for Alexithymia (TSIA), and TSIA transcripts were analyzed with Natural Language Processing to shed light on verbal features. The brain correlates of the capacity to translate emotional experience into words were determined through cortical thickness measures. A machine learning methodology showed that, in comparison to individuals without deficits (n = 7), individuals with deficits in identifying and describing emotions (n = 7) produced language distortions, frequently used the present tense of auxiliary verbs, used few possessive determiners, and produced poorly connected speech. Interestingly, they showed high cortical thickness in the left temporal pole and low cortical thickness in the isthmus of the right cingulate cortex. Overall, we identified the neuro-linguistic pattern of the expression of emotional experience.
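The machine-learning step described above pairs hand-crafted linguistic features with a classifier over very few participants; the sketch below illustrates the general approach with leave-one-out cross-validation in scikit-learn, using a toy feature table whose feature names are only loosely inspired by the abstract and are not the authors' actual NLP output.

```python
# Sketch of classifying interview transcripts from simple linguistic features
# (simulated feature table; columns are illustrative, not the study's features).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
# Columns: auxiliary-verb rate, possessive-determiner rate, connective rate.
X = np.vstack([
    rng.normal([0.10, 0.02, 0.03], 0.01, size=(7, 3)),   # group with deficits (simulated)
    rng.normal([0.06, 0.05, 0.07], 0.01, size=(7, 3)),   # group without deficits (simulated)
])
y = np.array([1] * 7 + [0] * 7)

clf = make_pipeline(StandardScaler(), LogisticRegression())
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()  # leave-one-out given n = 14
print(f"leave-one-out accuracy: {acc:.2f}")
```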
Collapse
Affiliation(s)
- Daniela Laricchiuta
- IRCCS Fondazione Santa Lucia, 00143 Rome, Italy
- Correspondence: ; Tel.: +39-065-0170-3077
| | | | | | - Noemi Passarello
- IRCCS Fondazione Santa Lucia, 00143 Rome, Italy
- Department of Humanities, Federico II University of Naples, 80138 Naples, Italy
| | - Francesca Greco
- Department of Communication and Social Research, Sapienza University of Rome, 00198 Rome, Italy
| | | | | | - Debora Cutuli
- IRCCS Fondazione Santa Lucia, 00143 Rome, Italy
- Department of Psychology, University “Sapienza” of Rome, 00185 Rome, Italy
| | - Andrea Marini
- Department of Languages, Literatures, Communication, Education and Society, University of Udine, 33100 Udine, Italy
| | - Laura Mandolesi
- Department of Humanities, Federico II University of Naples, 80138 Naples, Italy
| | | | | |
Collapse
|
30
|
Aberrant Beta-band Brain Connectivity Predicts Speech Motor Planning Deficits in Post-Stroke Aphasia. Cortex 2022; 155:75-89. [DOI: 10.1016/j.cortex.2022.07.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2022] [Revised: 05/24/2022] [Accepted: 07/06/2022] [Indexed: 11/22/2022]
|
31
|
Wang S, Zhang X, Hong T, Tzeng OJL, Aslin R. Top-down sensory prediction in the infant brain at 6 months is correlated with language development at 12 and 18 months. BRAIN AND LANGUAGE 2022; 230:105129. [PMID: 35576737 DOI: 10.1016/j.bandl.2022.105129] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/17/2021] [Revised: 04/18/2022] [Accepted: 04/21/2022] [Indexed: 06/15/2023]
Abstract
Previous research has suggested that top-down sensory prediction facilitates, and may be necessary for, efficient transmission of information in the brain. Here we related infants' vocabulary development to the top-down sensory prediction indexed by occipital cortex activation to the unexpected absence of a visual stimulus previously paired with an auditory stimulus. The magnitude of the neural response to the unexpected omission of a visual stimulus was assessed at the age of 6 months with functional near-infrared spectroscopy (fNIRS) and vocabulary scores were obtained using the MacArthur-Bates Communicative Development Inventory (MCDI) when infants reached the age of 12 months and 18 months, respectively. Results indicated significant positive correlations between this predictive neural signal at 6 months and MCDI expressive vocabulary scores at 12 and 18 months. These findings provide additional and robust support for the hypothesis that top-down prediction at the neural level plays a key role in infants' language development.
Collapse
Affiliation(s)
- Shinmin Wang
- Department of Human Development and Family Studies, National Taiwan Normal University, Taipei, Taiwan.
| | - Xian Zhang
- Department of Psychiatry, Yale School of Medicine,New Haven, CT, USA.
| | - Tian Hong
- Haskins Laboratories, New Haven, CT, USA.
| | - Ovid J L Tzeng
- Department of Educational Psychology and Counseling, National Taiwan Normal University, Taipei, Taiwan; Taipei Medical University, Taipei, Taiwan; Linguistic Institute, Academia Sinica, Taipei, Taiwan.
| | - Richard Aslin
- Haskins Laboratories, New Haven, CT, USA; Department of Psychology and Child Study Center, Yale University, New Haven, CT, USA; Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA.
| |
Collapse
|
32
|
Johansson C, Folgerø PO. Is Reduced Visual Processing the Price of Language? Brain Sci 2022; 12:brainsci12060771. [PMID: 35741656 PMCID: PMC9221435 DOI: 10.3390/brainsci12060771] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2022] [Revised: 06/06/2022] [Accepted: 06/08/2022] [Indexed: 02/01/2023] Open
Abstract
We suggest a later timeline for full language capabilities in Homo sapiens, placing the emergence of language over 200,000 years after the emergence of our species. The late Paleolithic period saw several significant changes. Homo sapiens became more gracile and gradually lost significant brain volume. Detailed realistic cave paintings disappeared completely, and iconic/symbolic ones appeared at other sites. This may indicate a shift in perceptual abilities, away from an accurate perception of the present. Language in modern humans interacts with vision; one example is the McGurk effect. Studies show that artistic abilities may improve when language-related brain areas are damaged or temporarily knocked out. Language relies on many pre-existing non-linguistic functions. We suggest that an overwhelming flow of perceptual information, vision in particular, was an obstacle to language, as is sometimes implied in autism with relative language impairment. We systematically review the recent research literature investigating the relationship between language and perception. We see homologues of language-relevant brain functions predating language. Recent findings show brain lateralization for communicative gestures in other primates without language, supporting the idea that a language-ready brain may be overwhelmed by raw perception, thus blocking overt language from evolving. We find support in converging evidence for a change in neural organization away from raw perception, thus pushing the emergence of language closer in time. A recent origin of language makes it possible to investigate the genetic origins of language.
Collapse
|
33
|
Neural correlates of impaired vocal feedback control in post-stroke aphasia. Neuroimage 2022; 250:118938. [PMID: 35092839 PMCID: PMC8920755 DOI: 10.1016/j.neuroimage.2022.118938] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2021] [Revised: 12/31/2021] [Accepted: 01/25/2022] [Indexed: 01/16/2023] Open
Abstract
We used left-hemisphere stroke as a model to examine how damage to sensorimotor brain networks impairs vocal auditory feedback processing and control. Individuals with post-stroke aphasia and matched neurotypical control subjects vocalized speech vowel sounds and listened to the playback of their self-produced vocalizations under normal (NAF) and pitch-shifted altered auditory feedback (AAF) while their brain activity was recorded using electroencephalography (EEG) signals. Event-related potentials (ERPs) were utilized as a neural index to probe the effect of vocal production on auditory feedback processing with high temporal resolution, while lesion data in the stroke group was used to determine how brain abnormality accounted for the impairment of such mechanisms. Results revealed that ERP activity was aberrantly modulated during vocalization vs. listening in aphasia, and this effect was accompanied by the reduced magnitude of compensatory vocal responses to pitch-shift alterations in the auditory feedback compared with control subjects. Lesion-mapping revealed that the aberrant pattern of ERP modulation in response to NAF was accounted for by damage to sensorimotor networks within the left-hemisphere inferior frontal, precentral, inferior parietal, and superior temporal cortices. For responses to AAF, neural deficits were predicted by damage to a distinguishable network within the inferior frontal and parietal cortices. These findings define the left-hemisphere sensorimotor networks implicated in auditory feedback processing, error detection, and vocal motor control. Our results provide translational synergy to inform the theoretical models of sensorimotor integration while having clinical applications for diagnosis and treatment of communication disabilities in individuals with stroke and other neurological conditions.
Collapse
|
34
|
Yada T, Kawasaki T. Circumscribed supplementary motor area injury with gait apraxia including freezing of gait and shuffling gait: a case report. Neurocase 2022; 28:231-234. [PMID: 35491765 DOI: 10.1080/13554794.2022.2071628] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Abstract
There is no consensus on the clinical findings in cases of circumscribed supplementary motor area (SMA) injury. We report the case of a 60-year-old male with circumscribed SMA injury who showed freezing of gait and shuffling gait. Twenty-one days after onset, the patient showed difficulties with the left leg swing in gait initiation (freezing of gait). In steady-state gait, the stride of the left leg swing was short (shuffling gait). Thirty-four days after onset, this phenomenon was no longer observed during gait. Circumscribed SMA injury can therefore cause gait apraxia, including freezing of gait and shuffling gait, as can extensive SMA injury in the medial frontal cortex.
Collapse
Affiliation(s)
- Takuya Yada
- Division of Physical Therapy, Department of Rehabilitation, Tokyo Metropolitan Rehabilitation Hospital, Tokyo, Japan
| | - Tsubasa Kawasaki
- Department of Physical Therapy, School of Health Sciences, Tokyo International University, Kawagoe, Japan
| |
Collapse
|
35
|
Rovetti J, Copelli F, Russo FA. Audio and visual speech emotion activate the left pre-supplementary motor area. COGNITIVE, AFFECTIVE & BEHAVIORAL NEUROSCIENCE 2022; 22:291-303. [PMID: 34811708 DOI: 10.3758/s13415-021-00961-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 10/03/2021] [Indexed: 06/13/2023]
Abstract
Sensorimotor brain areas have been implicated in the recognition of emotion expressed on the face and through nonverbal vocalizations. However, no previous study has assessed whether sensorimotor cortices are recruited during the perception of emotion in speech, a signal that includes both audio (speech sounds) and visual (facial speech movements) components. To address this gap in the literature, we recruited 24 participants to attend to speech clips produced with a happy, sad, or neutral expression. These stimuli were presented in one of three modalities: audio-only (hearing the voice but not seeing the face), video-only (seeing the face but not hearing the voice), or audiovisual. Brain activity was recorded using electroencephalography, subjected to independent component analysis, and source-localized. We found that the left presupplementary motor area was more active in response to happy and sad stimuli than to neutral stimuli, as indexed by greater mu event-related desynchronization. This effect did not differ by the sensory modality of the stimuli. Activity levels in other sensorimotor brain areas did not differ by emotion, although they were greatest in response to visual-only and audiovisual stimuli. One possible explanation for the pre-SMA result is that this brain area may actively support speech emotion recognition by using our extensive experience expressing emotion to generate sensory predictions that in turn guide our perception.
Collapse
Affiliation(s)
- Joseph Rovetti
- Department of Psychology, Ryerson University, Toronto, ON, M5B 2K3, Canada
- Department of Psychology, Western University, London, ON, Canada
| | - Fran Copelli
- Department of Psychology, Ryerson University, Toronto, ON, M5B 2K3, Canada
| | - Frank A Russo
- Department of Psychology, Ryerson University, Toronto, ON, M5B 2K3, Canada.
| |
Collapse
|
36
|
Phonetic Effects in the Perception of VOT in a Prevoicing Language. Brain Sci 2022; 12:brainsci12040427. [PMID: 35447959 PMCID: PMC9025303 DOI: 10.3390/brainsci12040427] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2022] [Revised: 03/12/2022] [Accepted: 03/18/2022] [Indexed: 11/17/2022] Open
Abstract
Previous production studies have reported differential amounts of closure voicing in plosives depending on the location of the oral constriction (anterior vs. posterior), vocalic context (high vs. low vowels), and speaker sex. Such differences have been attributed to the aerodynamic factors related to the configuration of the cavity behind the oral constriction, with certain articulations and physiological characteristics of the speaker facilitating vocal fold vibration during closure. The current study used perceptual identification tasks to examine whether similar effects of consonantal posteriority, adjacent vowel height, and speaker sex exist in the perception of voicing. The language of investigation was Russian, a prevoicing language that uses negative VOT to signal the voicing contrast in plosives. The study used both original and resynthesized tokens for speaker sex, which allowed it to focus on the role of differences in VOT specifically. Results indicated that listeners’ judgments were significantly affected by consonantal place of articulation, with listeners accepting less voicing in velar plosives. Speaker sex showed only a marginally significant difference in the expected direction, and vowel height had no effect on perceptual responses. These findings suggest that certain phonetic factors can affect both the initial production and subsequent perception of closure voicing.
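Identification data of this kind are usually summarized with a psychometric function over the VOT continuum; the sketch below fits a logistic curve to simulated voiced/voiceless responses and reads off the category boundary, which is where place-of-articulation or vowel-context effects would appear as boundary shifts (the data points and parameter values are invented, not the study's results).

```python
# Sketch of fitting a psychometric (logistic) function to voiced/voiceless
# identification responses along a prevoicing VOT continuum (simulated data).
import numpy as np
from scipy.optimize import curve_fit

def logistic(vot, boundary, slope):
    return 1.0 / (1.0 + np.exp(-slope * (vot - boundary)))

vot_steps = np.linspace(-100, 0, 11)                  # negative VOT (prevoicing), in ms
rng, true_boundary = np.random.default_rng(5), -45.0
p_voiced = logistic(vot_steps, true_boundary, -0.15)  # more prevoicing -> more "voiced"
responses = rng.binomial(20, p_voiced) / 20           # proportion "voiced" out of 20 trials

(boundary, slope), _ = curve_fit(logistic, vot_steps, responses, p0=[-50, -0.1])
print(f"estimated category boundary: {boundary:.1f} ms VOT")
```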
Collapse
|
37
|
Xue K, Chen J, Wei Y, Chen Y, Han S, Wang C, Zhang Y, Song X, Cheng J. Altered dynamic functional connectivity of auditory cortex and medial geniculate nucleus in first-episode, drug-naïve schizophrenia patients with and without auditory verbal hallucinations. Front Psychiatry 2022; 13:963634. [PMID: 36159925 PMCID: PMC9489854 DOI: 10.3389/fpsyt.2022.963634] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/07/2022] [Accepted: 08/18/2022] [Indexed: 11/25/2022] Open
Abstract
BACKGROUND AND OBJECTIVE Auditory verbal hallucination (AVH), a key feature of schizophrenia, is a growing concern. Altered dynamic functional connectivity (dFC) patterns involving auditory-related regions have rarely been reported in schizophrenia patients with AVH. The goal of this research was to identify dFC abnormalities of auditory-related regions in first-episode, drug-naïve schizophrenia patients with and without AVH using resting-state functional magnetic resonance imaging (rs-fMRI). METHODS A total of 107 schizophrenia patients with AVH and 85 schizophrenia patients without AVH (NAVH) underwent rs-fMRI examinations, and 104 matched healthy controls (HC) were included. Seed-based dFC of the primary auditory cortex (Heschl's gyrus, HES), auditory association cortex (AAC, including Brodmann's areas 22 and 42), and medial geniculate nucleus (MGN) was computed to build a whole-brain dFC map, and between-group comparisons and correlation analyses were then performed. RESULTS In comparison to the NAVH and HC groups, the AVH group showed increased dFC from the left AAC to the right middle temporal gyrus and right middle occipital gyrus, decreased dFC from the left HES to the left superior occipital gyrus, left cuneus gyrus and left precuneus gyrus, decreased dFC from the right HES to the posterior cingulate gyrus, and decreased dFC from the left MGN to the bilateral calcarine gyrus, bilateral cuneus gyrus and bilateral lingual gyrus. The Auditory Hallucination Rating Scale (AHRS) score was significantly positively correlated with the dFC values of cluster 1 (bilateral calcarine gyrus, cuneus gyrus, lingual gyrus, superior occipital gyrus, precuneus gyrus, and posterior cingulate gyrus) using the left AAC seed, cluster 2 (right middle temporal gyrus and right middle occipital gyrus) using the left AAC seed, cluster 1 (bilateral calcarine gyrus, cuneus gyrus, lingual gyrus, superior occipital gyrus, precuneus gyrus and posterior cingulate gyrus) using the right AAC seed, and cluster 2 (posterior cingulate gyrus) using the right HES seed in the AVH group. In both the AVH and NAVH groups, a significant negative correlation was also found between the dFC values of cluster 2 (posterior cingulate gyrus) using the right HES seed and the PANSS negative sub-scores. CONCLUSIONS The present findings demonstrate that schizophrenia patients with AVH show abnormal dFC in multiple regions when auditory-related cortex and nuclei are used as seeds, particularly involving the occipital lobe, the default mode network (DMN) and the middle temporal lobe, implying that these distinct dFC patterns of auditory-related areas could point to a neural mechanism of AVH in schizophrenia.
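Sliding-window seed-based dFC, the core measure here, is the correlation between a seed and a target time course computed within successive windows; the sketch below illustrates the computation and its usual variability summary on simulated time series (the window length, step and TR are assumptions, not the study's parameters).

```python
# Sketch of sliding-window (dynamic) functional connectivity between a seed
# (e.g., Heschl's gyrus) and a target region (simulated time series).
import numpy as np

rng = np.random.default_rng(6)
n_volumes, window_len, step = 240, 30, 1          # window of 30 volumes (assumed)

shared = rng.normal(size=n_volumes)
seed = shared + rng.normal(size=n_volumes)
target = shared + rng.normal(size=n_volumes)

dfc = []
for start in range(0, n_volumes - window_len + 1, step):
    sl = slice(start, start + window_len)
    dfc.append(np.corrcoef(seed[sl], target[sl])[0, 1])
dfc = np.array(dfc)

# The mean and variability of windowed correlations are the usual dFC summaries
# compared between groups (e.g., AVH vs. NAVH vs. controls).
print(f"mean dFC = {dfc.mean():.2f}, dFC variability (SD) = {dfc.std():.2f}")
```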
Collapse
Affiliation(s)
- Kangkang Xue
- Department of Magnetic Resonance Imaging, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
| | - Jingli Chen
- Department of Magnetic Resonance Imaging, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
| | - Yarui Wei
- Department of Magnetic Resonance Imaging, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
| | - Yuan Chen
- Department of Magnetic Resonance Imaging, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
| | - Shaoqiang Han
- Department of Magnetic Resonance Imaging, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
| | - Caihong Wang
- Department of Magnetic Resonance Imaging, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
| | - Yong Zhang
- Department of Magnetic Resonance Imaging, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
| | - Xueqin Song
- Department of Psychiatry, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
| | - Jingliang Cheng
- Department of Magnetic Resonance Imaging, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
| |
Collapse
|
38
|
Shao X, Li M, Yang Y, Li X, Han Z. The Neural Basis of Semantic Prediction in Sentence Comprehension. J Cogn Neurosci 2021; 34:236-257. [PMID: 34813653 DOI: 10.1162/jocn_a_01793] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Although prediction plays an important role in language comprehension, its precise neural basis remains unclear. This fMRI study investigated whether and how semantic-category-specific and common cerebral areas are recruited in predictive semantic processing during sentence comprehension. We manipulated the semantic constraint of sentence contexts, upon which a tool-related, a building-related, or no specific category of noun is highly predictable. This noun-predictability effect was measured not only over the target nouns but also over their preceding transitive verbs. Both before and after the appearance of target nouns, the left anterior supramarginal gyrus was specifically activated for tool-related nouns and the left parahippocampal place area was activated specifically for building-related nouns. The semantic-category common areas included a subset of the left inferior frontal gyrus during the anticipation of incoming target nouns (activity enhancement for high predictability) and a widespread set of areas (bilateral inferior frontal gyrus, left superior/middle temporal gyrus, left medial pFC, and left TPJ) during the integration of actually perceived nouns (activity reduction for high predictability). These results indicated that the human brain recruits fine divisions of cortical areas to distinguish different semantic categories of predicted words, and that anticipatory semantic processing relies, at least partially, on top-down prediction conducted in higher-level cortical areas.
Collapse
|
39
|
Brisson V, Tremblay P. Improving speech perception in noise in young and older adults using transcranial magnetic stimulation. BRAIN AND LANGUAGE 2021; 222:105009. [PMID: 34425411 DOI: 10.1016/j.bandl.2021.105009] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/28/2020] [Revised: 08/06/2021] [Accepted: 08/12/2021] [Indexed: 06/13/2023]
Abstract
Normal aging is associated with speech perception in noise (SPiN) difficulties. The objective of this study was to determine if SPiN performance can be enhanced by intermittent theta-burst stimulation (iTBS) in young and older adults. METHOD We developed a sub-lexical SPiN test to evaluate the contribution of age, hearing, and cognition to SPiN performance in young and older adults. iTBS was applied to the left posterior superior temporal sulcus (pSTS) and the left ventral premotor cortex (PMv) to examine its impact on SPiN performance. RESULTS Aging was associated with reduced SPiN accuracy. TMS-induced performance gain was greater after stimulation of the PMv compared to the pSTS. Participants with lower scores in the baseline condition improved the most. DISCUSSION SPiN difficulties can be reduced by enhancing activity within the left speech-processing network in adults. This study paves the way for the development of TMS-based interventions to reduce SPiN difficulties in adults.
Collapse
Affiliation(s)
- Valérie Brisson
- Département de réadaptation, Université Laval, Québec, Canada; Centre de recherche CERVO, Québec, Canada
| | - Pascale Tremblay
- Département de réadaptation, Université Laval, Québec, Canada; Centre de recherche CERVO, Québec, Canada.
| |
Collapse
|
40
|
Zheng Y, Zhao Z, Yang X, Li X. The impact of musical expertise on anticipatory semantic processing during online speech comprehension: An electroencephalography study. BRAIN AND LANGUAGE 2021; 221:105006. [PMID: 34392023 DOI: 10.1016/j.bandl.2021.105006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/05/2021] [Revised: 07/29/2021] [Accepted: 07/30/2021] [Indexed: 06/13/2023]
Abstract
Musical experience has been found to aid speech perception. This electroencephalography study further examined whether and how musical expertise affects high-level predictive semantic processing in speech comprehension. Musicians and non-musicians listened to semantically strongly/weakly constraining sentences, with each sentence being primed by a congruent/incongruent sentence-prosody. At the target nouns, a N400 reduction effect (strongly vs. weakly constraining) was observed in both groups, with the onset-latency of this effect being delayed for incongruent (vs. congruent) priming. At the transitive verbs preceding these target nouns, musicians' event-related-potential amplitude (in incongruent-priming) and beta-band oscillatory power (in congruent- and incongruent-priming) showed a semantic-constraint effect, and were correlated with the predictability of incoming nouns; non-musicians only demonstrated an event-related-potential semantic-constraint effect, which was correlated with the predictability of current verbs. These results indicate that musical expertise enhances the tendency toward semantic prediction in speech comprehension, and that this effect might not be just an aftereffect of facilitated acoustic/phonological processing.
Collapse
Affiliation(s)
- Yuanyi Zheng
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100149, China
| | - Zitong Zhao
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100149, China
| | - Xiaohong Yang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100149, China
| | - Xiaoqing Li
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100149, China.
| |
Collapse
|
41
|
Palaniyappan L. Dissecting the neurobiology of linguistic disorganisation and impoverishment in schizophrenia. Semin Cell Dev Biol 2021; 129:47-60. [PMID: 34507903 DOI: 10.1016/j.semcdb.2021.08.015] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2021] [Revised: 08/13/2021] [Accepted: 05/06/2021] [Indexed: 12/16/2022]
Abstract
Schizophrenia provides a quintessential disease model of how disturbances in the molecular mechanisms of neurodevelopment lead to disruptions in the emergence of cognition. The central and often persistent feature of this illness is the disorganisation and impoverishment of language and related expressive behaviours. Though clinically more prominent, the periodic perceptual distortions characterised as psychosis are non-specific and often episodic. While several insights into psychosis have been gained based on study of the dopaminergic system, the mechanistic basis of linguistic disorganisation and impoverishment is still elusive. Key findings from cellular to systems-level studies highlight the role of ubiquitous, inhibitory processes in language production. Dysregulation of these processes at critical time periods, in key brain areas, provides a surprisingly parsimonious account of linguistic disorganisation and impoverishment in schizophrenia. This review links the notion of excitatory/inhibitory (E/I) imbalance at cortical microcircuits to the expression of language behaviour characteristic of schizophrenia, through the building blocks of neurochemistry, neurophysiology, and neurocognition.
Collapse
Affiliation(s)
- Lena Palaniyappan
- Department of Psychiatry,University of Western Ontario, London, Ontario, Canada; Robarts Research Institute,University of Western Ontario, London, Ontario, Canada; Lawson Health Research Institute, London, Ontario, Canada.
| |
Collapse
|
42
|
Ghaleh M, Lacey EH, Fama ME, Anbari Z, DeMarco AT, Turkeltaub PE. Dissociable Mechanisms of Verbal Working Memory Revealed through Multivariate Lesion Mapping. Cereb Cortex 2021; 30:2542-2554. [PMID: 31701121 DOI: 10.1093/cercor/bhz259] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022] Open
Abstract
Two maintenance mechanisms with separate neural systems have been suggested for verbal working memory: articulatory-rehearsal and non-articulatory maintenance. Although lesion data would be key to understanding the essential neural substrates of these systems, there is little evidence from lesion studies that the two proposed mechanisms crucially rely on different neuroanatomical substrates. We examined 39 healthy adults and 71 individuals with chronic left-hemisphere stroke to determine if verbal working memory tasks with varying demands would rely on dissociable brain structures. Multivariate lesion-symptom mapping was used to identify the brain regions involved in each task, controlling for spatial working memory scores. Maintenance of verbal information relied on distinct brain regions depending on task demands: sensorimotor cortex under higher demands and superior temporal gyrus (STG) under lower demands. Inferior parietal cortex and posterior STG were involved under both low and high demands. These results suggest that maintenance of auditory information preferentially relies on auditory-phonological storage in the STG via a nonarticulatory maintenance mechanism when demands are low. Under higher demands, sensorimotor regions are crucial for the articulatory rehearsal process, which reduces the reliance on the STG for maintenance. Lesions to either of these regions impair maintenance of verbal information preferentially under the appropriate task conditions.
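Multivariate lesion-symptom mapping relates whole lesion patterns to behavior in a single model, commonly via support vector regression over voxel-wise lesion status; the sketch below shows the core idea on simulated binary lesion maps and scores, and is not the specific toolbox or feature set the authors used.

```python
# Sketch of multivariate lesion-symptom mapping with support vector regression
# (simulated binary lesion maps; voxel count kept tiny for illustration).
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(7)
n_patients, n_voxels = 71, 500
lesions = rng.binomial(1, 0.15, size=(n_patients, n_voxels))  # 1 = voxel lesioned

critical = np.zeros(n_voxels)
critical[:25] = -1.0                       # damage to the first 25 voxels lowers scores
scores = 10 + lesions @ critical + rng.normal(scale=0.5, size=n_patients)

model = SVR(kernel="linear").fit(lesions, scores)
weights = model.coef_.ravel()              # per-voxel weight ~ lesion-behavior contribution
print("voxels most associated with lower scores:", np.argsort(weights)[:5])
```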
Collapse
Affiliation(s)
- Maryam Ghaleh
- Department of Neurology, Georgetown University Medical Center, Washington, DC 20057, USA
- Elizabeth H Lacey
- Department of Neurology, Georgetown University Medical Center, Washington, DC 20057, USA; Research Division, MedStar National Rehabilitation Hospital, Washington, DC 20010, USA
- Mackenzie E Fama
- Department of Neurology, Georgetown University Medical Center, Washington, DC 20057, USA; Department of Speech-Language Pathology and Audiology, Towson University, Towson, MD 21252, USA
- Zainab Anbari
- Department of Neurology, Georgetown University Medical Center, Washington, DC 20057, USA
- Andrew T DeMarco
- Department of Neurology, Georgetown University Medical Center, Washington, DC 20057, USA
- Peter E Turkeltaub
- Department of Neurology, Georgetown University Medical Center, Washington, DC 20057, USA; Research Division, MedStar National Rehabilitation Hospital, Washington, DC 20010, USA
43
Handedness Development: A Model for Investigating the Development of Hemispheric Specialization and Interhemispheric Coordination. Symmetry (Basel) 2021. [DOI: 10.3390/sym13060992] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2023] Open
Abstract
The author presents his perspective on the character of science, development, and handedness and relates these to his investigations of the early development of handedness. After presenting some ideas on what hemispheric specialization of function might mean for neural processing and how handedness should be assessed, the author examines the neuroscience of arm/hand control and of interhemispheric communication and coordination, asking how developmental processes can affect these mechanisms. The author's work on the development of early handedness is reviewed and placed within a context of cascading events in which different forms of handedness emerge from earlier forms, but not in a deterministic manner. This approach supports a continuous rather than categorical distribution of handedness and accounts for the predominance of right-handedness while maintaining a minority of left-handedness. Finally, the relation of handedness development to the development of several language and cognitive skills is examined.
44
Wang S, Chen B, Yu Y, Yang H, Cui W, Fan G, Li J. Altered resting-state functional network connectivity in profound sensorineural hearing loss infants within an early sensitive period: A group ICA study. Hum Brain Mapp 2021; 42:4314-4326. [PMID: 34060682 PMCID: PMC8356983 DOI: 10.1002/hbm.25548] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Revised: 04/29/2021] [Accepted: 05/20/2021] [Indexed: 12/21/2022] Open
Abstract
Data from both animal models and deaf children provide evidence that the maturation of the auditory cortex has a sensitive period during the first 2-4 years of life. During this period, auditory stimulation can affect the development of cortical function to the greatest extent. Thus far, little is known about the trajectory of brain development after early auditory deprivation within this period. In this study, independent component analysis (ICA) was used to characterize brain network development in children with bilateral profound sensorineural hearing loss (SNHL) before 3 years of age. Seven resting-state networks (RSNs) were identified in 50 SNHL infants and 36 healthy controls using ICA, and their intra- and inter-network functional connectivity (FC) was compared between the two groups. Compared with the control group, the SNHL group showed decreased FC within the default mode network and enhanced FC within the auditory network (AUN) and salience network. No significant FC changes were found in the visual network (VN) or sensorimotor network (SMN). Furthermore, inter-network FC between the SMN and AUN, the frontal network and AUN, the SMN and VN, and the frontal network and VN was significantly increased in the SNHL group. These results indicate that loss and compensatory reorganization of brain network FC coexist in SNHL infants, providing a network basis for understanding the trajectory of brain development after hearing loss within the early sensitive period.
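To make the group comparison of ICA-derived network connectivity concrete, here is a minimal sketch on simulated data. It is not the authors' pipeline: a real analysis would use subject-specific (e.g., back-reconstructed or dual-regression) component time courses, nuisance regression, and multiple-comparison correction; the group sizes match the abstract, but the time courses, network indices, and the single tested edge are assumptions.

```python
# Minimal sketch of comparing inter-network functional connectivity between groups,
# given per-subject network time courses; synthetic data, not the authors' pipeline.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_snhl, n_ctrl, n_timepoints, n_networks = 50, 36, 200, 7

def network_fc(n_subjects):
    """Return per-subject Fisher-z connectivity matrices between network time courses."""
    fc = np.empty((n_subjects, n_networks, n_networks))
    for s in range(n_subjects):
        ts = rng.normal(size=(n_timepoints, n_networks))   # stand-in for ICA time courses
        r = np.corrcoef(ts, rowvar=False)
        fc[s] = np.arctanh(np.clip(r, -0.999, 0.999))      # Fisher z-transform
    return fc

fc_snhl, fc_ctrl = network_fc(n_snhl), network_fc(n_ctrl)

# Two-sample t-test on one network pair (hypothetical auditory vs. sensorimotor edge);
# in practice, p-values would be corrected across all tested edges.
i, j = 0, 1
t, p = stats.ttest_ind(fc_snhl[:, i, j], fc_ctrl[:, i, j])
print(f"edge ({i},{j}): t = {t:.2f}, p = {p:.3f}")
```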
Affiliation(s)
- Shanshan Wang
- Department of Radiology, The First Hospital, China Medical University, Shenyang, Liaoning, China
- Boyu Chen
- Department of Radiology, The First Hospital, China Medical University, Shenyang, Liaoning, China
- Yalian Yu
- Department of Otorhinolaryngology, The First Hospital, China Medical University, Shenyang, Liaoning, China
- Huaguang Yang
- Department of Radiology, Renmin Hospital, Wuhan University, Wuhan, China
- Wenzhuo Cui
- Department of Radiology, The First Hospital, China Medical University, Shenyang, Liaoning, China
- Guoguang Fan
- Department of Radiology, The First Hospital, China Medical University, Shenyang, Liaoning, China
- Jian Li
- Department of Radiology, The First Hospital, China Medical University, Shenyang, Liaoning, China
45
Lumaca M, Baggio G, Vuust P. White matter variability in auditory callosal pathways contributes to variation in the cultural transmission of auditory symbolic systems. Brain Struct Funct 2021; 226:1943-1959. [PMID: 34050791 DOI: 10.1007/s00429-021-02302-y] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2020] [Accepted: 05/17/2021] [Indexed: 12/11/2022]
Abstract
The cultural transmission of spoken language and music relies on human capacities for encoding and recalling auditory patterns. In this experiment, we show that interindividual differences in this ability are associated with variation in the organization of cross-callosal white matter pathways. First, high-angular-resolution diffusion MRI (dMRI) data were analyzed in a large participant sample (N = 51). Subsequently, these participants underwent a behavioral test that models in the laboratory the cultural transmission of auditory symbolic systems: the signaling game. Cross-callosal and intrahemispheric (arcuate fasciculus) pathways were reconstructed and analyzed using conventional diffusion tensor imaging (DTI) as well as a more advanced dMRI technique: fixel-based analysis (FBA). The DTI metric of fractional anisotropy (FA) in auditory callosal pathways predicted, weeks after scanning, the fidelity of transmission of an artificial tone system. The ability to coherently transmit auditory signals in one signaling game, irrespective of the signals learned during the previous game, was predicted by morphological properties of the fiber bundles in the most anterior portions of the corpus callosum. The current study is the first application of dMRI in the field of cultural transmission, and the first to connect individual characteristics of callosal pathways to core behaviors in the transmission of auditory symbolic systems.
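The brain-behaviour association reported here is, at its core, a regression of a behavioural score on a tract-averaged diffusion metric. The sketch below shows that logic on simulated numbers only; the FA values, the transmission-fidelity scores, and the linear form of the relationship are assumptions, and the published analysis additionally used fixel-based metrics and appropriate covariates.

```python
# Minimal sketch of relating tract-averaged FA to a behavioural transmission-fidelity
# score with a simple linear regression; values are simulated, not the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 51
fa_callosal = rng.normal(0.55, 0.05, n)                      # mean FA in a callosal tract
fidelity = 0.4 + 0.8 * fa_callosal + rng.normal(0, 0.05, n)  # simulated game score

slope, intercept, r, p, se = stats.linregress(fa_callosal, fidelity)
print(f"FA predicts fidelity: r = {r:.2f}, p = {p:.4f}")
```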
Affiliation(s)
- Massimo Lumaca
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music, Aarhus/Aalborg, 8000, Aarhus C, Denmark.
- Giosuè Baggio
- Language Acquisition and Language Processing Lab, Department of Language and Literature, Norwegian University of Science and Technology, 7941, Trondheim, Norway
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music, Aarhus/Aalborg, 8000, Aarhus C, Denmark
46
Guediche S, de Bruin A, Caballero-Gaudes C, Baart M, Samuel AG. Second-language word recognition in noise: Interdependent neuromodulatory effects of semantic context and crosslinguistic interactions driven by word form similarity. Neuroimage 2021; 237:118168. [PMID: 34000398 DOI: 10.1016/j.neuroimage.2021.118168] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2020] [Revised: 05/05/2021] [Accepted: 05/12/2021] [Indexed: 11/17/2022] Open
Abstract
Spoken language comprehension is a fundamental component of our cognitive skills. We are quite proficient at deciphering words from the auditory input even though the speech we hear is often masked by noise, such as background babble from talkers other than the one we are attending to. To perceive spoken language as intended, we rely on prior linguistic knowledge and context. Prior knowledge includes all sounds and words that are familiar to a listener and depends on linguistic experience. For bilinguals, the phonetic and lexical repertoire encompasses two languages, and the degree of overlap between word forms across languages affects how strongly the two languages influence one another during auditory word recognition. To support spoken word recognition, listeners often rely on semantic information (i.e., the words we hear are usually related in a meaningful way). Although the number of multilinguals across the globe is increasing, little is known about how crosslinguistic effects (i.e., word overlap) interact with semantic context and affect the flexible neural systems that support accurate word recognition. The current multi-echo functional magnetic resonance imaging (fMRI) study addresses this question by examining how the semantic relationship between prime and target words interacts with the target word's form similarity (cognate status) to its translation equivalent in the dominant language (L1) during accurate word recognition in a non-dominant language (L2). We tested 26 early-proficient Spanish-Basque (L1-L2) bilinguals. When L2 targets whose phonological word forms matched their L1 translation equivalents were preceded by unrelated semantic contexts, which drive lexical competition, a flexible language-control (fronto-parietal-subcortical) network was upregulated; when they were preceded by related semantic contexts, which reduce lexical competition, it was downregulated. We conclude that an interplay between semantic and crosslinguistic effects regulates flexible control mechanisms of speech processing to facilitate L2 word recognition in noise.
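The design is a two-by-two crossing of semantic context (related vs. unrelated prime) with target cognate status, and the key effect is their interaction on activation in the language-control network. The sketch below illustrates how such an interaction contrast could be tested on region-of-interest estimates; the beta values, the condition ordering, and the one-sample t-test on the interaction contrast are illustrative assumptions, not the study's multi-echo fMRI analysis.

```python
# Minimal sketch of testing a semantic-context (related/unrelated) by cognate-status
# (cognate/non-cognate) interaction on ROI activation estimates; synthetic betas only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_subjects = 26
# Simulated ROI betas per condition: columns = [related_cognate, related_noncognate,
#                                               unrelated_cognate, unrelated_noncognate]
betas = rng.normal([0.2, 0.3, 0.9, 0.4], 0.3, size=(n_subjects, 4))

# Interaction contrast: (unrelated - related) difference for cognates vs. non-cognates
interaction = (betas[:, 2] - betas[:, 0]) - (betas[:, 3] - betas[:, 1])
t, p = stats.ttest_1samp(interaction, 0.0)
print(f"context x cognate interaction: t({n_subjects - 1}) = {t:.2f}, p = {p:.4f}")
```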
Affiliation(s)
- Sara Guediche
- Basque Center on Cognition, Brain and Language, Donostia-San Sebastian 20009, Spain.
- Martijn Baart
- Basque Center on Cognition, Brain and Language, Donostia-San Sebastian 20009, Spain; Department of Cognitive Neuropsychology, Tilburg University, P.O. Box 90153, 5000 LE Tilburg, the Netherlands
- Arthur G Samuel
- Basque Center on Cognition, Brain and Language, Donostia-San Sebastian 20009, Spain; Stony Brook University, NY 11794-2500, United States; Ikerbasque Foundation, Spain
47
Similar activation patterns in the bilateral dorsal inferior frontal gyrus for monolingual and bilingual contexts in second language production. Neuropsychologia 2021; 156:107857. [PMID: 33857531 DOI: 10.1016/j.neuropsychologia.2021.107857] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2020] [Revised: 04/08/2021] [Accepted: 04/09/2021] [Indexed: 11/24/2022]
Abstract
Language production is a vital component of communication. Although many studies have investigated the neural mechanisms of language production in bilinguals, they have mainly focused on the mechanisms of cognitive control during language switching. It therefore remains unclear how naming context influences the neural representation of linguistic information during language production in bilinguals. To address this question, the present study used representational similarity analysis (RSA) to investigate neural pattern similarity (PS) between the monolingual and bilingual contexts, separately for the native and second languages. Consistent with previous findings, bilinguals performed worse behaviorally and showed greater activation in brain regions supporting cognitive control, including the anterior cingulate cortex and dorsolateral prefrontal cortex, in the bilingual context relative to the monolingual context. More importantly, RSA revealed that bilinguals exhibited similar neural activation patterns in the bilateral dorsal inferior frontal gyrus across the monolingual and bilingual contexts during production of the second language. Moreover, higher cross-context PS in the right inferior frontal gyrus was associated with smaller differences in second-language naming speed between the monolingual and bilingual contexts. These results suggest that similar linguistic representations are encoded for the monolingual and bilingual contexts in the production of the non-dominant language.
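Cross-context pattern similarity of the kind computed here can be sketched as a voxel-pattern correlation per subject, Fisher-transformed and then related to a behavioural measure. The code below is an illustrative stand-in on simulated patterns; the ROI size, the subject count, the simulated reaction-time difference, and the Spearman correlation at the group level are assumptions rather than the study's RSA pipeline.

```python
# Minimal sketch of a cross-context neural pattern similarity (RSA-style) measure:
# correlate an ROI's multivoxel pattern for L2 naming in a monolingual context with
# its pattern in a bilingual context; data are simulated, not the study's.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_subjects, n_voxels = 30, 200
cross_context_ps = np.empty(n_subjects)
naming_rt_diff = np.empty(n_subjects)

for s in range(n_subjects):
    shared = rng.normal(size=n_voxels)                  # shared representational signal
    mono = shared + rng.normal(0, 1.0, n_voxels)        # pattern in monolingual context
    bili = shared + rng.normal(0, 1.0, n_voxels)        # pattern in bilingual context
    r = np.corrcoef(mono, bili)[0, 1]
    cross_context_ps[s] = np.arctanh(r)                 # Fisher z-transformed similarity
    naming_rt_diff[s] = 300 - 200 * cross_context_ps[s] + rng.normal(0, 40)

# Higher cross-context similarity should track smaller RT differences between contexts
rho, p = stats.spearmanr(cross_context_ps, naming_rt_diff)
print(f"similarity vs. RT difference: rho = {rho:.2f}, p = {p:.4f}")
```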
48
Olaru M, Nillo RM, Mukherjee P, Sugrue LP. A quantitative approach for measuring laterality in clinical fMRI for preoperative language mapping. Neuroradiology 2021; 63:1489-1500. [PMID: 33772347 PMCID: PMC8376727 DOI: 10.1007/s00234-021-02685-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2020] [Accepted: 03/01/2021] [Indexed: 11/16/2022]
Abstract
Purpose: fMRI is increasingly used for presurgical language mapping, but the lack of standard methodology has made it difficult to combine or compare data across institutions or to determine the relative efficacy of different approaches. Here, we describe a quantitative analytic framework for determining language laterality in clinical fMRI that addresses these concerns.
Methods: We retrospectively analyzed fMRI data from 59 patients who underwent presurgical language mapping at our institution with identical imaging and behavioral protocols. First, we compared the efficacy of different regional masks in capturing language activations. Then, we systematically explored how laterality indices (LIs) computed from these masks vary as a function of task and activation threshold. Finally, we determined the percentile threshold that maximized the correlation between the results of our LI approach and the laterality assessments from the original clinical radiology reports.
Results: A regional mask derived from a meta-analysis of the fMRI literature better captured language task activations than masks based on anatomically defined language areas. An LI approach based on this functional mask and percentile thresholding of subject activation quantified the relative ability of different language tasks to lateralize language function at the population level. The 92nd percentile of subject-level activation provided the optimal LI threshold with which to reproduce the original clinical reports.
Conclusion: A quantitative framework for determining language laterality that uses a functionally derived language mask and percentile thresholding of subject activation can combine and compare results across tasks and patients and reproduce clinical assessments of language laterality.
Supplementary Information: The online version contains supplementary material available at 10.1007/s00234-021-02685-z.
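The laterality index itself is typically computed as LI = (L - R) / (L + R) over suprathreshold voxels within a language mask, with the threshold set at a subject-specific percentile of activation (the 92nd percentile in this study). The sketch below is an illustrative reimplementation on a synthetic activation map, not the authors' code; the use of voxel counts (rather than summed statistics), the coordinate-based hemisphere split, and the toy data are assumptions.

```python
# Minimal sketch of a laterality index with percentile thresholding, following the
# general LI = (L - R) / (L + R) convention; the map, mask, and coordinates are toys.
import numpy as np

def laterality_index(tmap, mask, x_mm, percentile=92.0):
    """Compute LI from suprathreshold voxel counts within a language mask.

    tmap: 1-D array of voxel statistics; mask: boolean array for the language ROI;
    x_mm: voxel x-coordinates in mm (negative = left hemisphere).
    """
    thr = np.percentile(tmap[mask], percentile)      # subject-specific percentile threshold
    supra = mask & (tmap >= thr)
    left = np.count_nonzero(supra & (x_mm < 0))
    right = np.count_nonzero(supra & (x_mm > 0))
    if left + right == 0:
        return np.nan
    return (left - right) / (left + right)           # +1 fully left, -1 fully right

# Toy example: a left-lateralized synthetic map
rng = np.random.default_rng(5)
x = rng.uniform(-70, 70, 5000)
stat = rng.normal(0, 1, 5000) + np.where(x < 0, 1.5, 0.0)   # stronger "activation" on the left
roi = np.abs(x) < 60
print(f"LI = {laterality_index(stat, roi, x):.2f}")
```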
Affiliation(s)
- Maria Olaru
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA, USA
- Ryan M Nillo
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA, USA
- Pratik Mukherjee
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA, USA
- Leo P Sugrue
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA, USA.
49
MEG Intersubject Phase Locking of Stimulus-Driven Activity during Naturalistic Speech Listening Correlates with Musical Training. J Neurosci 2021; 41:2713-2722. [PMID: 33536196 DOI: 10.1523/jneurosci.0932-20.2020] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2020] [Revised: 11/13/2020] [Accepted: 11/17/2020] [Indexed: 12/26/2022] Open
Abstract
Musical training is associated with increased structural and functional connectivity between auditory sensory areas and higher-order brain networks involved in speech and motor processing. Whether such changed connectivity patterns facilitate the cortical propagation of speech information in musicians remains poorly understood. We here used magnetoencephalography (MEG) source imaging and a novel seed-based intersubject phase-locking approach to investigate the effects of musical training on the interregional synchronization of stimulus-driven neural responses during listening to naturalistic continuous speech presented in silence. MEG data were obtained from 20 young human subjects (both sexes) with different degrees of musical training. Our data show robust bilateral patterns of stimulus-driven interregional phase synchronization between auditory cortex and frontotemporal brain regions previously associated with speech processing. Stimulus-driven phase locking was maximal in the delta band, but was also observed in the theta and alpha bands. The individual duration of musical training was positively associated with the magnitude of stimulus-driven alpha-band phase locking between auditory cortex and parts of the dorsal and ventral auditory processing streams. These findings provide evidence for a positive relationship between musical training and the propagation of speech-related information between auditory sensory areas and higher-order processing networks, even when speech is presented in silence. We suggest that the increased synchronization of higher-order cortical regions to auditory cortex may contribute to the previously described musician advantage in processing speech in background noise. SIGNIFICANCE STATEMENT: Musical training has been associated with widespread structural and functional brain plasticity. It has been suggested that these changes benefit the production and perception of music but can also translate to other domains of auditory processing, such as speech. We developed a new magnetoencephalography intersubject analysis approach to study the cortical synchronization of stimulus-driven neural responses during the perception of continuous natural speech and its relationship to individual musical training. Our results provide evidence that musical training is associated with higher synchronization of stimulus-driven activity between brain regions involved in early auditory sensory and higher-order processing. We suggest that the increased synchronized propagation of speech information may contribute to the previously described musician advantage in processing speech in background noise.
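Intersubject phase locking of stimulus-driven activity can be summarised with a phase-locking value (PLV) between two listeners' band-limited responses to the same stimulus. The sketch below computes an alpha-band PLV for a pair of simulated signals sharing a stimulus-driven component; the filter settings, the simulated 10 Hz drive, and the pairwise (rather than seed-based, source-level) formulation are assumptions, not the authors' MEG pipeline.

```python
# Minimal sketch of an intersubject phase-locking value (PLV) between two subjects'
# band-limited responses to the same stimulus; signals are simulated, and the band
# edges below are illustrative (alpha, 8-12 Hz).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs, dur = 250, 60                                    # sampling rate (Hz), duration (s)
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(6)

stimulus_drive = np.sin(2 * np.pi * 10 * t)          # shared stimulus-driven 10 Hz component
subj_a = stimulus_drive + rng.normal(0, 1.0, t.size)
subj_b = stimulus_drive + rng.normal(0, 1.0, t.size)

def band_phase(x, low=8.0, high=12.0):
    """Band-pass filter and return the instantaneous phase via the Hilbert transform."""
    b, a = butter(4, [low, high], btype="bandpass", fs=fs)
    return np.angle(hilbert(filtfilt(b, a, x)))

phase_diff = band_phase(subj_a) - band_phase(subj_b)
plv = np.abs(np.mean(np.exp(1j * phase_diff)))       # 1 = perfect phase locking, 0 = none
print(f"intersubject alpha-band PLV = {plv:.2f}")
```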
50
Jiang J, Benhamou E, Waters S, Johnson JCS, Volkmer A, Weil RS, Marshall CR, Warren JD, Hardy CJD. Processing of Degraded Speech in Brain Disorders. Brain Sci 2021; 11:394. [PMID: 33804653 PMCID: PMC8003678 DOI: 10.3390/brainsci11030394] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2021] [Revised: 03/15/2021] [Accepted: 03/18/2021] [Indexed: 11/30/2022] Open
Abstract
The speech we hear every day is typically "degraded" by competing sounds and the idiosyncratic vocal characteristics of individual speakers. While the comprehension of "degraded" speech is normally automatic, it depends on dynamic and adaptive processing across distributed neural networks. This presents the brain with an immense computational challenge, making degraded speech processing vulnerable to a range of brain disorders. It is therefore likely to be a sensitive marker of neural circuit dysfunction and an index of retained neural plasticity. After considering experimental methods for studying degraded speech and the factors that affect its processing in healthy individuals, we review the evidence for altered degraded speech processing in major neurodegenerative diseases, traumatic brain injury and stroke. We develop a predictive coding framework for understanding deficits of degraded speech processing in these disorders, focussing on the "language-led dementias", the primary progressive aphasias. We conclude by considering prospects for using degraded speech as a probe of language network pathophysiology, a diagnostic tool and a target for therapeutic intervention.
Affiliation(s)
- Jessica Jiang
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, London WC1N 3BG, UK; (J.J.); (E.B.); (J.C.S.J.); (R.S.W.); (C.R.M.); (J.D.W.)
- Elia Benhamou
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, London WC1N 3BG, UK; (J.J.); (E.B.); (J.C.S.J.); (R.S.W.); (C.R.M.); (J.D.W.)
- Sheena Waters
- Preventive Neurology Unit, Wolfson Institute of Preventive Medicine, Queen Mary University of London, London EC1M 6BQ, UK
- Jeremy C. S. Johnson
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, London WC1N 3BG, UK; (J.J.); (E.B.); (J.C.S.J.); (R.S.W.); (C.R.M.); (J.D.W.)
- Anna Volkmer
- Division of Psychology and Language Sciences, University College London, London WC1H 0AP, UK
- Rimona S. Weil
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, London WC1N 3BG, UK; (J.J.); (E.B.); (J.C.S.J.); (R.S.W.); (C.R.M.); (J.D.W.)
- Charles R. Marshall
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, London WC1N 3BG, UK; (J.J.); (E.B.); (J.C.S.J.); (R.S.W.); (C.R.M.); (J.D.W.)
- Preventive Neurology Unit, Wolfson Institute of Preventive Medicine, Queen Mary University of London, London EC1M 6BQ, UK
- Jason D. Warren
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, London WC1N 3BG, UK; (J.J.); (E.B.); (J.C.S.J.); (R.S.W.); (C.R.M.); (J.D.W.)
- Chris J. D. Hardy
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, London WC1N 3BG, UK; (J.J.); (E.B.); (J.C.S.J.); (R.S.W.); (C.R.M.); (J.D.W.)