1
Blanco B, Molnar M, Arrieta I, Caballero-Gaudes C, Carreiras M. Functional Brain Adaptations During Speech Processing in 4-Month-Old Bilingual Infants. Dev Sci 2024:e13572. PMID: 39340440; DOI: 10.1111/desc.13572.
Abstract
Language learning is influenced by both neural development and environmental experiences. This work investigates the influence of early bilingual experience on the neural mechanisms underlying speech processing in 4-month-old infants. We study how an early environmental factor such as bilingualism interacts with neural development by comparing monolingual and bilingual infants' brain responses to speech. We used functional near-infrared spectroscopy (fNIRS) to measure 4-month-old Spanish-Basque bilingual and Spanish monolingual infants' brain responses while they listened to forward (FW) and backward (BW) speech stimuli in Spanish. We reveal distinct neural signatures associated with bilingual adaptations, including increased engagement of bilateral inferior frontal and temporal regions during speech processing in bilingual infants, as opposed to the left-hemispheric functional specialization observed in monolingual infants. This study provides compelling evidence of bilingualism-induced brain adaptations during speech processing in infants as young as 4 months. These findings emphasize the role of early language experience in shaping neural plasticity during infancy, suggesting that bilingual exposure at this young age profoundly influences the neural mechanisms underlying speech processing.
Affiliation(s)
- Borja Blanco
- Department of Psychology, University of Cambridge, Cambridge, UK
- Basque Center on Cognition, Brain and Language (BCBL), Donostia/San Sebastián, Spain
- Monika Molnar
- Department of Speech-Language Pathology, Faculty of Medicine, University of Toronto, Toronto, Canada
- Irene Arrieta
- Basque Center on Cognition, Brain and Language (BCBL), Donostia/San Sebastián, Spain
- Manuel Carreiras
- Basque Center on Cognition, Brain and Language (BCBL), Donostia/San Sebastián, Spain
- Ikerbasque, Basque Foundation for Science, Bilbao, Spain
- Department of Basque Language and Communication, University of the Basque Country (UPV/EHU), Donostia/San Sebastián, Spain
2
Burunat I, Levitin DJ, Toiviainen P. Breaking (musical) boundaries by investigating brain dynamics of event segmentation during real-life music-listening. Proc Natl Acad Sci U S A 2024; 121:e2319459121. PMID: 39186645; PMCID: PMC11388323; DOI: 10.1073/pnas.2319459121.
Abstract
The perception of musical phrase boundaries is a critical aspect of human musical experience: It allows us to organize, understand, derive pleasure from, and remember music. Identifying boundaries is a prerequisite for segmenting music into meaningful chunks, facilitating efficient processing and storage while providing an enjoyable, fulfilling listening experience through the anticipation of upcoming musical events. Expanding on Sridharan et al.'s [Neuron 55, 521-532 (2007)] work on coarse musical boundaries between symphonic movements, we examined finer-grained boundaries. We measured the fMRI responses of 18 musicians and 18 nonmusicians during music listening. Using a general linear model, independent component analysis, and Granger causality, we observed heightened auditory integration in anticipation of musical boundaries, and an extensive decrease within the fronto-temporal-parietal network during and immediately following boundaries. Notably, responses were modulated by musicianship. Findings uncover the intricate interplay between musical structure, expertise, and cognitive processing, advancing our knowledge of how the brain makes sense of music.
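The general linear model step named in this abstract can be illustrated with a toy regression. This is a minimal, hypothetical sketch rather than the authors' pipeline: the TR, the double-gamma HRF parameters, the boundary onsets, and the noise level are all invented for illustration.

```python
import numpy as np
from scipy.stats import gamma

# Toy fMRI-style GLM: convolve an event regressor (e.g., musical boundaries)
# with a canonical double-gamma HRF, then fit a voxel time course by least squares.
TR = 2.0                                   # repetition time in seconds (assumed)
n_scans = 150
t = np.arange(0, 30, TR)                   # HRF support in seconds
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)   # canonical double-gamma shape
hrf /= hrf.sum()

events = np.zeros(n_scans)
events[[20, 50, 80, 110]] = 1.0            # invented boundary onsets (in scans)
regressor = np.convolve(events, hrf)[:n_scans]

rng = np.random.default_rng(0)
true_beta = 2.5                            # invented "true" response amplitude
y = true_beta * regressor + 0.05 * rng.standard_normal(n_scans)  # synthetic voxel

X = np.column_stack([regressor, np.ones(n_scans)])  # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(round(beta[0], 2))                   # recovered amplitude, close to true_beta
```

In real analyses the regressor is built at stimulus resolution and nuisance terms (drift, motion) are added; the point here is only the convolve-then-regress structure.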
Affiliation(s)
- Iballa Burunat
- Centre of Excellence in Music, Mind, Body and Brain, Department of Music, Arts and Culture Studies, University of Jyväskylä, Jyväskylä 40014, Finland
- Daniel J Levitin
- School of Social Sciences, Minerva University, San Francisco, CA 94103
- Department of Psychology, McGill University, Montreal, QC H3A 1G1, Canada
- Petri Toiviainen
- Centre of Excellence in Music, Mind, Body and Brain, Department of Music, Arts and Culture Studies, University of Jyväskylä, Jyväskylä 40014, Finland
3
Perron M, Vuong V, Grassi MW, Imran A, Alain C. Engagement of the speech motor system in challenging speech perception: Activation likelihood estimation meta-analyses. Hum Brain Mapp 2024; 45:e70023. PMID: 39268584; PMCID: PMC11393483; DOI: 10.1002/hbm.70023.
Abstract
The relationship between speech production and perception is a topic of ongoing debate. Some argue that there is little interaction between the two, while others claim they share representations and processes. One perspective suggests increased recruitment of the speech motor system in demanding listening situations to facilitate perception. However, uncertainties persist regarding the specific regions involved and the listening conditions influencing its engagement. This study used activation likelihood estimation in coordinate-based meta-analyses to investigate the neural overlap between speech production and three speech perception conditions: speech-in-noise, spectrally degraded speech and linguistically complex speech. Neural overlap was observed in the left frontal, insular and temporal regions. Key nodes included the left frontal operculum (FOC), left posterior lateral part of the inferior frontal gyrus (IFG), left planum temporale (PT), and left pre-supplementary motor area (pre-SMA). The left IFG activation was consistently observed during linguistic processing, suggesting sensitivity to the linguistic content of speech. In comparison, the left pre-SMA activation was observed when processing degraded and noisy signals, indicating sensitivity to signal quality. Activation of the left PT and FOC was noted in all conditions, with the posterior FOC area overlapping across all three. Our meta-analysis reveals context-independent (FOC, PT) and context-dependent (pre-SMA, posterior lateral IFG) regions within the speech motor system during challenging speech perception. These regions could contribute to sensorimotor integration and executive cognitive control for perception and production.
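The core combination rule of activation likelihood estimation can be sketched in a few lines. The example below is a hedged, 1-D toy: real ALE uses 3-D Gaussian kernels whose width depends on each study's sample size, plus permutation-based thresholding. The coordinates and the 0.8 probability cap are invented.

```python
import numpy as np

grid = np.arange(100)                     # a 1-D stand-in for the brain volume

def focus_probability(center, sigma=3.0):
    # Per-focus activation probability; capped below 1 for illustration
    return 0.8 * np.exp(-((grid - center) ** 2) / (2 * sigma ** 2))

def modeled_activation(foci):
    # Union of per-focus probabilities within one experiment:
    # MA = 1 - prod(1 - p_focus)
    p = np.ones(grid.size)
    for c in foci:
        p *= 1.0 - focus_probability(c)
    return 1.0 - p

experiments = [[30, 70], [32], [29, 68]]  # invented peak coordinates per study
ma_maps = [modeled_activation(f) for f in experiments]

# ALE score: union of the modeled activation maps across experiments
ale = 1.0 - np.prod([1.0 - ma for ma in ma_maps], axis=0)
print(int(np.argmax(ale)))                # strongest convergence, near voxel 30
```

The union rule is why ALE rewards spatial convergence across studies rather than raw focus counts within one study.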
Affiliation(s)
- Maxime Perron
- Rotman Research Institute, Baycrest Academy for Research and Education, Toronto, Ontario, Canada
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Veronica Vuong
- Rotman Research Institute, Baycrest Academy for Research and Education, Toronto, Ontario, Canada
- Institute of Medical Sciences, Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Music and Health Science Research Collaboratory, Faculty of Music, University of Toronto, Toronto, Ontario, Canada
- Madison W Grassi
- Rotman Research Institute, Baycrest Academy for Research and Education, Toronto, Ontario, Canada
- Ashna Imran
- Rotman Research Institute, Baycrest Academy for Research and Education, Toronto, Ontario, Canada
- Claude Alain
- Rotman Research Institute, Baycrest Academy for Research and Education, Toronto, Ontario, Canada
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Institute of Medical Sciences, Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Music and Health Science Research Collaboratory, Faculty of Music, University of Toronto, Toronto, Ontario, Canada
4
Lamekina Y, Titone L, Maess B, Meyer L. Speech Prosody Serves Temporal Prediction of Language via Contextual Entrainment. J Neurosci 2024; 44:e1041232024. PMID: 38839302; PMCID: PMC11236583; DOI: 10.1523/jneurosci.1041-23.2024.
Abstract
Temporal prediction assists language comprehension. In a series of recent behavioral studies, we have shown that listeners specifically employ rhythmic modulations of prosody to estimate the duration of upcoming sentences, thereby speeding up comprehension. In the current human magnetoencephalography (MEG) study on participants of either sex, we show that the human brain achieves this function through a mechanism termed entrainment. Through entrainment, electrophysiological brain activity maintains and continues contextual rhythms beyond their offset. Our experiment combined exposure to repetitive prosodic contours with the subsequent presentation of visual sentences that either matched or mismatched the duration of the preceding contour. During exposure to prosodic contours, we observed MEG coherence with the contours, which was source-localized to right-hemispheric auditory areas. During the processing of the visual targets, activity at the frequency of the preceding contour was still detectable in the MEG; yet sources shifted to the (left) frontal cortex, in line with a functional inheritance of the rhythmic acoustic context for prediction. Strikingly, when the target sentence was shorter than expected from the preceding contour, an omission response appeared in the evoked potential record. We conclude that prosodic entrainment is a functional mechanism of temporal prediction in language comprehension. In general, acoustic rhythms appear to enable language to employ the brain's electrophysiological mechanisms of temporal prediction.
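The coherence measure used during contour exposure can be illustrated on synthetic data. This sketch assumes nothing about the authors' MEG pipeline; the sampling rate, contour rate, phase lag, and noise level are all invented.

```python
import numpy as np
from scipy.signal import coherence

fs = 256.0                          # sampling rate in Hz (invented)
t = np.arange(0, 60, 1 / fs)        # 60 s of data
f_contour = 1.5                     # prosodic contour repetition rate, Hz (invented)

contour = np.sin(2 * np.pi * f_contour * t)
rng = np.random.default_rng(1)
# A "brain" signal that tracks the contour with a phase lag, buried in noise
meg = 0.6 * np.sin(2 * np.pi * f_contour * t + 0.5) + rng.standard_normal(t.size)

# Magnitude-squared coherence, averaged over 4 s Welch segments
f, coh = coherence(contour, meg, fs=fs, nperseg=1024)
peak = f[np.argmax(coh)]
print(peak)                         # coherence peaks at the contour rate, 1.5 Hz
```

Because coherence discards the constant phase lag and keeps only phase consistency across segments, it detects entrainment even when the tracking signal is delayed relative to the stimulus.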
Affiliation(s)
- Yulia Lamekina
- Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Lorenzo Titone
- Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Burkhard Maess
- Methods and Development Group Brain Networks, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Lars Meyer
- Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- University Clinic Münster, Münster 48149, Germany
5
Wu M, Wang Y, Zhao X, Xin T, Wu K, Liu H, Wu S, Liu M, Chai X, Li J, Wei C, Zhu C, Liu Y, Zhang YX. Anti-phasic oscillatory development for speech and noise processing in cochlear implanted toddlers. Child Dev 2024. PMID: 38742715; DOI: 10.1111/cdev.14105.
Abstract
The human brain demonstrates amazing readiness for speech and language learning at birth, but the auditory development preceding such readiness remains unknown. Cochlear implanted (CI) children (n = 67; mean age 2.77 years ± 1.31 SD; 28 females) with prelingual deafness provide a unique opportunity to study this stage. Using functional near-infrared spectroscopy, it was revealed that the brain of CI children was unresponsive to sounds at CI hearing onset. With increasing CI experience up to 32 months, the brain demonstrated function-, region- and hemisphere-specific development. Most strikingly, the left anterior temporal lobe showed an oscillatory trajectory, changing in opposite phases for speech and noise. The study provides the first longitudinal brain imaging evidence for early auditory development preceding speech acquisition.
Affiliation(s)
- Meiyun Wu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Yuyang Wang
- Department of Otolaryngology Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Department of Otolaryngology Head and Neck Surgery, Hunan Provincial People's Hospital (First Affiliated Hospital of Hunan Normal University), Changsha, China
- Xue Zhao
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Tianyu Xin
- Department of Otolaryngology Head and Neck Surgery, Peking University First Hospital, Beijing, China
- Kun Wu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Haotian Liu
- Department of Otolaryngology Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Department of Otolaryngology Head and Neck Surgery, West China Hospital of Sichuan University, Chengdu, China
- Shinan Wu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Min Liu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Xiaoke Chai
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Jinhong Li
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Chaogang Wei
- Department of Otolaryngology Head and Neck Surgery, Peking University First Hospital, Beijing, China
- Chaozhe Zhu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Yuhe Liu
- Department of Otolaryngology Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Yu-Xuan Zhang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
6
Luo Q, Gao L, Yang Z, Chen S, Yang J, Lu S. Integrated sentence-level speech perception evokes strengthened language networks and facilitates early speech development. Neuroimage 2024; 289:120544. PMID: 38365164; DOI: 10.1016/j.neuroimage.2024.120544.
Abstract
Natural poetic speech (i.e., proverbs, nursery rhymes, and commercial ads) with strong prosodic regularities is easily memorized by children, and its harmonious acoustic patterns are thought to facilitate integrated sentence processing. Do children have specific neural pathways for perceiving such poetic utterances, and does their speech development benefit from them? We recorded the task-induced hemodynamic changes of 94 children aged 2 to 12 years using functional near-infrared spectroscopy (fNIRS) while they listened to poetic and non-poetic natural sentences. Seventy-three adults were recruited as controls to investigate the developmental specificity of the child group. The results indicated that perceiving poetic sentences is a highly integrated process featuring a lower brain workload in both groups. However, an early-activated large-scale network, coordinated by hubs with diverse connectivity, was induced only in the child group. Additionally, poetic speech evoked activation in phonological encoding regions in the child group but not in the adult controls, an effect that decreased with children's age. The neural responses to poetic speech were positively linked to children's speech communication performance, especially its fluency and semantic aspects. These results reveal children's neural sensitivity to integrated speech perception, which facilitates early speech development by strengthening more sophisticated language networks and the perception-production circuit.
Affiliation(s)
- Qinqin Luo
- Neurolinguistics Laboratory, College of International Studies, Shenzhen University, Shenzhen, China; Department of Chinese Language and Literature, The Chinese University of Hong Kong, Shatin, Hong Kong
- Leyan Gao
- Neurolinguistics Laboratory, College of International Studies, Shenzhen University, Shenzhen, China
- Zhirui Yang
- Neurolinguistics Laboratory, College of International Studies, Shenzhen University, Shenzhen, China; Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Shatin, Hong Kong
- Sihui Chen
- Department of Chinese Language and Literature, Sun Yat-sen University, Guangzhou, China
- Jingwen Yang
- Neurolinguistics Laboratory, College of International Studies, Shenzhen University, Shenzhen, China
- Shuo Lu
- Neurolinguistics Laboratory, College of International Studies, Shenzhen University, Shenzhen, China; Department of Clinical Neurolinguistics Research, Mental and Neurological Diseases Research Center, The Third Affiliated Hospital of Sun Yat-Sen University, Guangzhou, China
7
Zioga I, Zhou YJ, Weissbart H, Martin AE, Haegens S. Alpha and Beta Oscillations Differentially Support Word Production in a Rule-Switching Task. eNeuro 2024; 11:ENEURO.0312-23.2024. PMID: 38490743; PMCID: PMC10988358; DOI: 10.1523/eneuro.0312-23.2024.
Abstract
Research into the role of brain oscillations in basic perceptual and cognitive functions has suggested that the alpha rhythm reflects functional inhibition while the beta rhythm reflects neural ensemble (re)activation. However, little is known regarding the generalization of these proposed fundamental operations to linguistic processes, such as speech comprehension and production. Here, we recorded magnetoencephalography in participants performing a novel rule-switching paradigm. Specifically, Dutch native speakers had to produce either an alternative exemplar from the same category or a feature of a given target word embedded in spoken sentences (e.g., for the word "tuna", an exemplar from the same category, "seafood", would be "shrimp", and a feature would be "pink"). A cue indicated the task rule (exemplar or feature) either before (pre-cue) or after (retro-cue) listening to the sentence. Alpha power during the working memory delay was lower for retro-cue than for pre-cue trials in left-hemispheric language-related regions. Critically, alpha power negatively correlated with reaction times, suggestive of alpha facilitating task performance by regulating inhibition in regions linked to lexical retrieval. Furthermore, we observed a different spatiotemporal pattern of beta activity for exemplars versus features in right temporoparietal regions, in line with the proposed role of beta in recruiting neural networks for the encoding of distinct categories. Overall, our study provides evidence for the generalizability of the role of alpha and beta oscillations from perceptual to more complex linguistic processes, and offers a novel task to investigate links between rule-switching, working memory, and word production.
Affiliation(s)
- Ioanna Zioga
- Donders Centre for Cognitive Neuroimaging, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen 6525 EN, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen 6525 XD, The Netherlands
- Ying Joey Zhou
- Donders Centre for Cognitive Neuroimaging, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen 6525 EN, The Netherlands
- Department of Psychiatry, Oxford Centre for Human Brain Activity, Oxford, United Kingdom
- Hugo Weissbart
- Donders Centre for Cognitive Neuroimaging, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen 6525 EN, The Netherlands
- Andrea E Martin
- Donders Centre for Cognitive Neuroimaging, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen 6525 EN, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen 6525 XD, The Netherlands
- Saskia Haegens
- Donders Centre for Cognitive Neuroimaging, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen 6525 EN, The Netherlands
- Department of Psychiatry, Columbia University, New York, New York 10032
- Division of Systems Neuroscience, New York State Psychiatric Institute, New York, New York 10032
8
Harris I, Niven EC, Griffin A, Scott SK. Is song processing distinct and special in the auditory cortex? Nat Rev Neurosci 2023; 24:711-722. PMID: 37783820; DOI: 10.1038/s41583-023-00743-4.
Abstract
Is the singing voice processed distinctively in the human brain? In this Perspective, we discuss what might distinguish song processing from speech processing in light of recent work suggesting that some cortical neuronal populations respond selectively to song, and we outline the implications for our understanding of auditory processing. We review the literature regarding the neural and physiological mechanisms of song production and perception and show that this provides evidence for key differences between song and speech processing. We conclude by discussing the significance of the notion that song processing is special in terms of how this might contribute to theories of the neurobiological origins of vocal communication and to our understanding of the neural circuitry underlying sound processing in the human cortex.
Affiliation(s)
- Ilana Harris
- Institute of Cognitive Neuroscience, University College London, London, UK
- Efe C Niven
- Institute of Cognitive Neuroscience, University College London, London, UK
- Alex Griffin
- Department of Psychology, University of Cambridge, Cambridge, UK
- Sophie K Scott
- Institute of Cognitive Neuroscience, University College London, London, UK
9
Zioga I, Weissbart H, Lewis AG, Haegens S, Martin AE. Naturalistic Spoken Language Comprehension Is Supported by Alpha and Beta Oscillations. J Neurosci 2023; 43:3718-3732. PMID: 37059462; PMCID: PMC10198453; DOI: 10.1523/jneurosci.1500-22.2023.
Abstract
Brain oscillations are prevalent in all species and are involved in numerous perceptual operations. α oscillations are thought to facilitate processing through the inhibition of task-irrelevant networks, while β oscillations are linked to the putative reactivation of content representations. Can the proposed functional roles of α and β oscillations be generalized from low-level operations to higher-level cognitive processes? Here we address this question focusing on naturalistic spoken language comprehension. Twenty-two (18 female) Dutch native speakers listened to stories in Dutch and French while MEG was recorded. We used dependency parsing to identify three dependency states at each word: the number of (1) newly opened dependencies, (2) dependencies that remained open, and (3) resolved dependencies. We then constructed forward models to predict α and β power from the dependency features. Results showed that dependency features predict α and β power in language-related regions beyond low-level linguistic features. In the α band, left temporal, fundamental language regions are involved in comprehension, whereas in the β band, frontal and parietal higher-order language regions and motor regions are involved. Critically, α- and β-band dynamics appear to subserve language comprehension, tapping into syntactic structure building and semantic composition by providing low-level mechanistic operations for inhibition and reactivation. Because of the temporal similarity of the α-β responses, their potential functional dissociation remains to be elucidated. Overall, this study sheds light on the role of α and β oscillations during naturalistic spoken language comprehension, providing evidence for the generalizability of these dynamics from perceptual to complex linguistic processes.

SIGNIFICANCE STATEMENT: It remains unclear whether the proposed functional role of α and β oscillations in perceptual and motor function generalizes to higher-level cognitive processes, such as spoken language comprehension. We found that syntactic features predict α and β power in language-related regions beyond low-level linguistic features when listening to naturalistic speech in a known language. We offer experimental findings that integrate a neuroscientific framework on the role of brain oscillations as "building blocks" with spoken language comprehension. This supports the view of a domain-general role of oscillations across the hierarchy of cognitive functions, from low-level sensory operations to abstract linguistic processes.
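The three per-word dependency states described above can be computed directly from a set of arcs. The sketch below uses an invented toy parse of "the cat the dog chased ran" (word indices 0-5, arcs as (head, dependent) pairs); it is illustrative only, not the authors' parser or feature pipeline.

```python
def dependency_states(arcs, n_words):
    """Per-word counts of newly opened, still-open, and resolved dependencies."""
    # An arc spans min..max of its head/dependent indices
    spans = [(min(h, d), max(h, d)) for h, d in arcs]
    opened, still_open, resolved = [], [], []
    for i in range(n_words):
        opened.append(sum(lo == i and hi > i for lo, hi in spans))   # start here
        still_open.append(sum(lo < i < hi for lo, hi in spans))      # span over here
        resolved.append(sum(hi == i for lo, hi in spans))            # close here
    return opened, still_open, resolved

# Invented (head, dependent) arcs: det, subj, relative clause, subj, det
arcs = [(1, 0), (5, 1), (1, 4), (4, 3), (3, 2)]
opened, still_open, resolved = dependency_states(arcs, 6)
print(opened, still_open, resolved)
```

Note how the object-relative center embedding shows up as a sustained still-open count over words 2-3, the kind of word-by-word load signal the forward models regress against power.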
Affiliation(s)
- Ioanna Zioga
- Donders Institute for Brain, Cognition and Behaviour, Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, 6525 EN, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, 6525 XD, The Netherlands
- Hugo Weissbart
- Donders Institute for Brain, Cognition and Behaviour, Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, 6525 EN, The Netherlands
- Ashley G Lewis
- Donders Institute for Brain, Cognition and Behaviour, Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, 6525 EN, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, 6525 XD, The Netherlands
- Saskia Haegens
- Donders Institute for Brain, Cognition and Behaviour, Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, 6525 EN, The Netherlands
- Department of Psychiatry, Columbia University, New York, New York 10032
- Division of Systems Neuroscience, New York State Psychiatric Institute, New York, New York 10032
- Andrea E Martin
- Donders Institute for Brain, Cognition and Behaviour, Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, 6525 EN, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, 6525 XD, The Netherlands
10
Schwab S, Mouthon M, Jost LB, Salvadori J, Stefanos-Yakoub I, da Silva EF, Giroud N, Perriard B, Annoni JM. Neural correlates of lexical stress processing in a foreign free-stress language. Brain Behav 2023; 13:e2854. PMID: 36573037; PMCID: PMC9847599; DOI: 10.1002/brb3.2854.
Abstract
INTRODUCTION: The paper examines the discrimination of lexical stress contrasts in a foreign language from a neural perspective. The aim of the study was to identify the areas associated with word stress processing (in comparison with vowel processing) when listeners of a fixed-stress language have to process stress in a foreign free-stress language.
METHODS: We asked French-speaking participants to process stress and vowel contrasts in Spanish, a foreign language that the participants did not know. Participants performed a discrimination task on Spanish word pairs differing either with respect to word stress (penultimate or final stressed word) or with respect to the final vowel while functional magnetic resonance imaging data were acquired.
RESULTS: Behavioral results showed lower accuracy and longer reaction times for discriminating stress contrasts than vowel contrasts. The contrast Stress > Vowel revealed increased bilateral activation of regions shown to be associated with stress processing (i.e., supplementary motor area, insula, middle/superior temporal gyrus), as well as a stronger involvement of areas related to more domain-general cognitive control functions (i.e., bilateral inferior frontal gyrus). The contrast Vowel > Stress showed increased activation in regions typically associated with the default mode network (known for decreasing its activity during attentionally more demanding tasks).
CONCLUSION: When processing Spanish stress contrasts as compared to vowel contrasts, native listeners of French activated anterior networks, including regions related to cognitive control, to a higher degree, and showed decreased activity in regions related to the default mode network. These findings, together with the behavioral results, reflect the higher cognitive demand, and therefore the larger difficulty, for French-speaking listeners during stress processing as compared to vowel processing.
Affiliation(s)
- Sandra Schwab
- Department of French, University of Fribourg, Fribourg, Switzerland
- Michael Mouthon
- Neurology-Laboratory for Cognitive and Neurological Sciences, University of Fribourg, Fribourg, Switzerland
- Lea B Jost
- Neurology-Laboratory for Cognitive and Neurological Sciences, University of Fribourg, Fribourg, Switzerland
- Nathalie Giroud
- Computational Neuroscience of Speech & Hearing, Department of Computational Linguistics, University of Zurich, Zürich, Switzerland
- Benoit Perriard
- Department of French, University of Fribourg, Fribourg, Switzerland
- Jean-Marie Annoni
- Neurology-Laboratory for Cognitive and Neurological Sciences, University of Fribourg, Fribourg, Switzerland
11
Marchina S, Norton A, Schlaug G. Effects of melodic intonation therapy in patients with chronic nonfluent aphasia. Ann N Y Acad Sci 2023; 1519:173-185. PMID: 36349876; PMCID: PMC10262915; DOI: 10.1111/nyas.14927.
Abstract
Patients with large left-hemisphere lesions and post-stroke aphasia often remain nonfluent. Melodic intonation therapy (MIT) may be an effective alternative to traditional speech therapy for facilitating recovery of fluency in those patients. In an open-label, proof-of-concept study, 14 subjects with nonfluent aphasia and large left-hemisphere lesions (171 ± 76 cc) underwent two speech/language assessments before, one at the midpoint of, and two after the end of 75 sessions (1.5 h/session) of MIT. Functional MR imaging was performed before and after therapy while subjects vocalized the same set of 10 bi-syllabic words. We found significant improvements in speech output after the period of intensive MIT (75 sessions, 112.5 h in total) compared to the two pre-therapy assessments. Therapy-induced gains were maintained 4 weeks post-treatment. Imaging changes were seen in a right-hemisphere network that included the posterior superior temporal and inferior frontal gyri, inferior pre- and postcentral gyri, pre-supplementary motor area, and supramarginal gyrus. Functional changes in the posterior right inferior frontal gyrus significantly correlated with changes in a measure of fluency. Intense training of intonation-supported auditory-motor coupling, engaging feedforward/feedback control regions in the unaffected hemisphere, improves speech-motor functions in subjects with nonfluent aphasia and large left-hemisphere lesions.
Affiliation(s)
- Sarah Marchina
- Department of Neurology, Beth Israel Deaconess Medical Center/Harvard Medical School, Boston, Massachusetts, USA
- Andrea Norton
- Department of Neurology, Beth Israel Deaconess Medical Center/Harvard Medical School, Boston, Massachusetts, USA
- Gottfried Schlaug
- Department of Neurology, Beth Israel Deaconess Medical Center/Harvard Medical School, Boston, Massachusetts, USA
- Department of Neurology, Music, Neuroimaging and Stroke Recovery Laboratories, University of Massachusetts Chan Medical School – Baystate Campus, Springfield, Massachusetts, USA
- Department of Biomedical Engineering and Institute of Applied Life Sciences, University of Massachusetts, Amherst, Amherst, Massachusetts, USA

12
Li T, Zhu X, Wu X, Gong Y, Jones JA, Liu P, Chang Y, Yan N, Chen X, Liu H. Continuous theta burst stimulation over left and right supramarginal gyri demonstrates their involvement in auditory feedback control of vocal production. Cereb Cortex 2022; 33:11-22. [PMID: 35174862 DOI: 10.1093/cercor/bhac049]
Abstract
The supramarginal gyrus (SMG) has been implicated in auditory-motor integration for vocal production. However, whether the SMG is bilaterally or unilaterally involved in auditory feedback control of vocal production in a causal manner remains unclear. The present event-related potential (ERP) study investigated the causal roles of the left and right SMG in auditory-vocal integration using neuronavigated continuous theta burst stimulation (c-TBS). Twenty-four young adults produced sustained vowel phonations and heard their voice unexpectedly pitch-shifted by ±200 cents after receiving active or sham c-TBS over the left or right SMG. As compared to sham stimulation, c-TBS over the left or right SMG led to significantly smaller vocal compensations for pitch perturbations, accompanied by smaller cortical P2 responses. Moreover, no significant differences were found in the vocal and ERP responses when comparing active c-TBS over the left vs. right SMG. These findings provide neurobehavioral evidence for a causal influence of both the left and right SMG on auditory feedback control of vocal production. Decreased vocal compensations paralleled by reduced P2 responses following c-TBS over the bilateral SMG support their role in auditory-motor transformation in a bottom-up manner: receiving auditory feedback information and mediating vocal compensations for feedback errors.
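The ±200-cent perturbation used in this paradigm maps onto a frequency ratio via the standard relation ratio = 2^(cents/1200) (100 cents = 1 semitone). A minimal sketch of that conversion; the 220 Hz baseline voice is an illustrative value, not taken from the study:

```python
def cents_to_ratio(cents: float) -> float:
    """Frequency ratio for a pitch shift in cents
    (100 cents = 1 semitone, 1200 cents = 1 octave)."""
    return 2.0 ** (cents / 1200.0)

def shift_f0(f0_hz: float, cents: float) -> float:
    """Apply a pitch shift of `cents` to a fundamental frequency in Hz."""
    return f0_hz * cents_to_ratio(cents)

# A +/-200-cent (two-semitone) perturbation of an illustrative 220 Hz voice:
up = shift_f0(220.0, +200)    # ~246.94 Hz
down = shift_f0(220.0, -200)  # ~196.00 Hz
```

So a ±200-cent shift moves the perceived voice pitch by a factor of about 1.12 in either direction, well above typical pitch-discrimination thresholds.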
Affiliation(s)
- Tingni Li
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, China
- Xiaoxia Zhu
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, China
- Xiuqin Wu
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, China
- Yulai Gong
- Department of Neurological Rehabilitation, Affiliated Sichuan Provincial Rehabilitation Hospital of Chengdu University of Traditional Chinese Medicine, Chengdu, 611135, China
- Jeffery A Jones
- Psychology Department and Laurier Centre for Cognitive Neuroscience, Wilfrid Laurier University, Waterloo, Ontario, N2L 3C5, Canada
- Peng Liu
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, China
- Yichen Chang
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, China
- Nan Yan
- CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Xi Chen
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, China
- Hanjun Liu
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, China
- Guangdong Provincial Key Laboratory of Brain Function and Disease, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, 510080, China

13
Miyagawa S, Arévalo A, Nóbrega VA. On the representation of hierarchical structure: Revisiting Darwin's musical protolanguage. Front Hum Neurosci 2022; 16:1018708. [PMID: 36438635 PMCID: PMC9692108 DOI: 10.3389/fnhum.2022.1018708]
Abstract
In this article, we address the tenability of Darwin's musical protolanguage, arguing that a more compelling evolutionary scenario is one where a prosodic protolanguage is taken to be the preliminary step to represent the hierarchy involved in linguistic structures within a linear auditory signal. We hypothesize that the establishment of a prosodic protolanguage results from an enhancement of a rhythmic system that transformed linear signals into speech prosody, which in turn can mark syntactic hierarchical relations. To develop this claim, we explore the role of prosodic cues on the parsing of syntactic structures, as well as neuroscientific evidence connecting the evolutionary development of music and linguistic capacities. Finally, we entertain the assumption that the capacity to generate hierarchical structure might have developed as part of tool-making in human prehistory, and hence was established prior to the enhancement of a prosodic protolinguistic system.
Affiliation(s)
- Shigeru Miyagawa
- Department of Linguistics and Philosophy, Massachusetts Institute of Technology, Cambridge, MA, United States
- Institute of Biosciences, University of São Paulo, São Paulo, Brazil
- Analía Arévalo
- School of Medicine, University of São Paulo, São Paulo, Brazil
- Vitor A. Nóbrega
- Institute of Romance Studies, University of Hamburg, Hamburg, Germany

14
Chen Y, Luo Q, Liang M, Gao L, Yang J, Feng R, Liu J, Qiu G, Li Y, Zheng Y, Lu S. Children's Neural Sensitivity to Prosodic Features of Natural Speech and Its Significance to Speech Development in Cochlear Implanted Children. Front Neurosci 2022; 16:892894. [PMID: 35903806 PMCID: PMC9315047 DOI: 10.3389/fnins.2022.892894]
Abstract
Catchy utterances, such as proverbs, verses, and nursery rhymes (e.g., "No pain, no gain" in English), contain strong-prosodic (SP) features and are easy for children to repeat and memorize; yet how those prosodic features are encoded by neural activity, and how they influence speech development in children, is still largely unknown. Using functional near-infrared spectroscopy (fNIRS), this study investigated the cortical responses to the perception of natural speech sentences with strong/weak-prosodic (SP/WP) features and evaluated speech communication ability in 21 pre-lingually deaf children with cochlear implantation (CI) and 25 normal-hearing (NH) children. A comprehensive evaluation of speech communication ability was conducted on all participants to explore the potential correlations between neural activities and children's speech development. The SP information evoked right-lateralized cortical responses across a broad brain network in NH children and facilitated the early integration of linguistic information, highlighting children's neural sensitivity to natural SP sentences. In contrast, children with CI showed significantly weaker cortical activation and characteristic deficits in the perception of speech with SP features, suggesting that hearing loss early in life significantly impairs sensitivity to the prosodic features of sentences. Importantly, the level of neural sensitivity to SP sentences was significantly related to the speech behaviors of all child participants. These findings demonstrate the significance of prosodic features in children's speech development.
Affiliation(s)
- Yuebo Chen
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Qinqin Luo
- Department of Chinese Language and Literature, The Chinese University of Hong Kong, Hong Kong, Hong Kong SAR, China
- School of Foreign Languages, Shenzhen University, Shenzhen, China
- Maojin Liang
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Leyan Gao
- Neurolinguistics Teaching Laboratory, Department of Chinese Language and Literature, Sun Yat-sen University, Guangzhou, China
- Jingwen Yang
- Department of Neurology, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Department of Clinical Neurolinguistics Research, Mental and Neurological Diseases Research Center, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Ruiyan Feng
- Neurolinguistics Teaching Laboratory, Department of Chinese Language and Literature, Sun Yat-sen University, Guangzhou, China
- Jiahao Liu
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Hearing and Speech Science Department, Guangzhou Xinhua University, Guangzhou, China
- Guoxin Qiu
- Department of Clinical Neurolinguistics Research, Mental and Neurological Diseases Research Center, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Yi Li
- School of Foreign Languages, Shenzhen University, Shenzhen, China
- Yiqing Zheng
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Hearing and Speech Science Department, Guangzhou Xinhua University, Guangzhou, China
- Shuo Lu
- School of Foreign Languages, Shenzhen University, Shenzhen, China
- Department of Clinical Neurolinguistics Research, Mental and Neurological Diseases Research Center, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China

15
Muñetón-Ayala M, De Vega M, Ochoa-Gómez JF, Beltrán D. The Brain Dynamics of Syllable Duration and Semantic Predictability in Spanish. Brain Sci 2022; 12:458. [PMID: 35447989 PMCID: PMC9030985 DOI: 10.3390/brainsci12040458]
Abstract
This study examines the neural dynamics underlying the prosodic (duration) and semantic dimensions of Spanish sentence perception. Specifically, we investigated whether adult listeners are aware of changes in the duration of a pretonic syllable of words that were either semantically predictable or unpredictable from the preceding sentential context. Participants listened to the sentences with instructions to make prosodic or semantic judgments while their EEG was recorded. For both accuracy and RTs, the results revealed an interaction between duration and semantics. ERP analysis revealed an interaction between task, duration, and semantics, showing that both processes share neural resources. There was an enhanced negativity for the semantic process (N400) and an extended positivity associated with anomalous duration. Source estimation for the N400 component revealed activations in the frontal gyrus for the semantic contrast and in the parietal postcentral gyrus for the duration contrast in the metric task, while activation in the sub-lobar insula was observed for the semantic task. The source of the late positive components was located in the posterior cingulate. Hence, the ERP data support the idea that the semantic and prosodic levels are processed by similar neural networks, and that the two linguistic dimensions influence each other during the decision-making stage in the metric and semantic judgment tasks.
Affiliation(s)
- Mercedes Muñetón-Ayala
- Programa de Filología Hispánica, Facultad de Comunicaciones y Filología, Universidad de Antioquia, Calle 70 N° 52-21, Medellín 050010, Colombia
- Manuel De Vega
- Instituto Universitario de Neurociencia, Universidad de la Laguna, 38200 Tenerife, Spain
- John Fredy Ochoa-Gómez
- Programa de Bioingeniería, Facultad de Ingeniería, Universidad de Antioquia, Medellín 050010, Colombia
- Laboratorio de Neurofisiología, GRUNECO-GNA, Universidad de Antioquia, Medellín 050010, Colombia
- David Beltrán
- Instituto Universitario de Neurociencia, Universidad de la Laguna, 38200 Tenerife, Spain
- Departamento de Psicología Básica, Universidad Nacional de Educación a Distancia, 28040 Madrid, Spain

16
Vanden Bosch der Nederlanden CM, Joanisse MF, Grahn JA, Snijders TM, Schoffelen JM. Familiarity modulates neural tracking of sung and spoken utterances. Neuroimage 2022; 252:119049. [PMID: 35248707 DOI: 10.1016/j.neuroimage.2022.119049]
Abstract
Music is often described in the laboratory and in the classroom as a beneficial tool for memory encoding and retention, with a particularly strong effect when words are sung to familiar compared to unfamiliar melodies. However, the neural mechanisms underlying this memory benefit, especially for familiar music, are not well understood. The current study examined whether neural tracking of the slow syllable rhythms of speech and song is modulated by melody familiarity. Participants became familiar with twelve novel melodies over four days prior to MEG testing. Neural tracking of the same utterances spoken and sung revealed greater cerebro-acoustic phase coherence for sung compared to spoken utterances, but showed no effect of melody familiarity when stimuli were grouped by their assigned (trained) familiarity. However, when participants' subjective ratings of perceived familiarity were used to group stimuli, a large effect of familiarity was observed. This effect was not specific to song, as it was observed for both sung and spoken utterances. Exploratory analyses revealed some in-session learning of unfamiliar and spoken utterances, with increased neural tracking for untrained stimuli by the end of the MEG testing session. Our results indicate that top-down factors like familiarity are strong modulators of neural tracking for music and language. Participants' neural tracking was related to their perception of familiarity, which was likely driven by a combination of effects from repeated listening, stimulus-specific melodic simplicity, and individual differences. Beyond the acoustic features of music alone, top-down factors built into the music listening experience, like repetition and familiarity, play a large role in the way we attend to and encode information presented in a musical context.
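Cerebro-acoustic phase coherence of the kind reported here is commonly quantified as a phase-locking value between the instantaneous phase of the neural signal and that of the speech envelope. A minimal sketch of the statistic, assuming the phase series have already been extracted (e.g., by band-pass filtering at the syllable rate and applying a Hilbert transform, not shown here):

```python
import cmath

def phase_locking_value(phases_a, phases_b):
    """Phase-locking value between two equal-length sequences of
    instantaneous phases (radians): |mean(exp(1j * (a - b)))|.
    1.0 indicates a perfectly constant phase lag; values near 0
    indicate no consistent phase relation."""
    assert len(phases_a) == len(phases_b) and phases_a
    acc = sum(cmath.exp(1j * (a - b)) for a, b in zip(phases_a, phases_b))
    return abs(acc) / len(phases_a)

# A constant lag gives PLV ~= 1 regardless of the lag itself:
neural = [0.0, 0.5, 1.0, 1.5, 2.0]
envelope = [p + 0.3 for p in neural]
plv = phase_locking_value(neural, envelope)  # ~1.0
```

Because only the consistency of the phase difference matters, the measure captures tracking of the syllable rhythm independently of any fixed neural delay.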
Affiliation(s)
- Marc F Joanisse
- The Brain and Mind Institute, The University of Western Ontario, London, Ontario, Canada; Psychology Department, The University of Western Ontario, London, Ontario, Canada
- Jessica A Grahn
- The Brain and Mind Institute, The University of Western Ontario, London, Ontario, Canada; Psychology Department, The University of Western Ontario, London, Ontario, Canada
- Tineke M Snijders
- Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands; Radboud University, Donders Institute for Brain, Cognition and Behaviour, the Netherlands
- Jan-Mathijs Schoffelen
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, the Netherlands

17
Cortical activity evoked by voice pitch changes: a combined fNIRS and EEG study. Hear Res 2022; 420:108483. [DOI: 10.1016/j.heares.2022.108483]
18
Sheppard SM, Meier EL, Kim KT, Breining BL, Keator LM, Tang B, Caffo BS, Hillis AE. Neural correlates of syntactic comprehension: A longitudinal study. Brain Lang 2022; 225:105068. [PMID: 34979477 PMCID: PMC9232253 DOI: 10.1016/j.bandl.2021.105068]
Abstract
Broca's area is frequently implicated in sentence comprehension but its specific role is debated. Most lesion studies have investigated deficits at the chronic stage. We aimed (1) to use acute imaging to predict which left hemisphere stroke patients will recover sentence comprehension; and (2) to better understand the role of Broca's area in sentence comprehension by investigating acute deficits prior to functional reorganization. We assessed comprehension of canonical and noncanonical sentences in 15 patients with left hemisphere stroke at acute and chronic stages. LASSO regression was used to conduct lesion symptom mapping analyses. Patients with more severe word-level comprehension deficits and a greater proportion of damage to supramarginal gyrus and superior longitudinal fasciculus were likely to experience acute deficits prior to functional reorganization. Broca's area was only implicated in chronic deficits. We propose that when temporoparietal regions are damaged, intact Broca's area can support syntactic processing after functional reorganization occurs.
Affiliation(s)
- Shannon M Sheppard
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, United States; Department of Communication Sciences & Disorders, Chapman University, Irvine, CA 92618, United States.
- Erin L Meier
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, United States
- Kevin T Kim
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, United States
- Bonnie L Breining
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, United States
- Lynsey M Keator
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, United States
- Bohao Tang
- Department of Biostatistics, Johns Hopkins School of Public Health, Baltimore, MD 21287, United States
- Brian S Caffo
- Department of Biostatistics, Johns Hopkins School of Public Health, Baltimore, MD 21287, United States
- Argye E Hillis
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, United States; Department of Physical Medicine and Rehabilitation, Johns Hopkins University School of Medicine, Baltimore, MD 21287, United States; Department of Cognitive Science, Krieger School of Arts and Sciences, Johns Hopkins University, Baltimore, MD 21218, United States

19
Ruthig P, Schönwiesner M. Common principles in the lateralisation of auditory cortex structure and function for vocal communication in primates and rodents. Eur J Neurosci 2022; 55:827-845. [PMID: 34984748 DOI: 10.1111/ejn.15590]
Abstract
This review summarises recent findings on the lateralisation of communicative sound processing in the auditory cortex (AC) of humans, non-human primates, and rodents. Functional imaging in humans has demonstrated a left hemispheric preference for some acoustic features of speech, but it is unclear to which degree this is caused by bottom-up acoustic feature selectivity or top-down modulation from language areas. Although non-human primates show a less pronounced functional lateralisation in AC, the properties of AC fields and behavioral asymmetries are qualitatively similar. Rodent studies demonstrate microstructural circuits that might underlie bottom-up acoustic feature selectivity in both hemispheres. Functionally, the left AC in the mouse appears to be specifically tuned to communication calls, whereas the right AC may have a more 'generalist' role. Rodents also show anatomical AC lateralisation, such as differences in size and connectivity. Several of these functional and anatomical characteristics are also lateralized in human AC. Thus, complex vocal communication processing shares common features among rodents and primates. We argue that a synthesis of results from humans, non-human primates, and rodents is necessary to identify the neural circuitry of vocal communication processing. However, data from different species and methods are often difficult to compare. Recent advances may enable better integration of methods across species. Efforts to standardise data formats and analysis tools would benefit comparative research and enable synergies between psychological and biological research in the area of vocal communication processing.
Affiliation(s)
- Philip Ruthig
- Faculty of Life Sciences, Leipzig University, Leipzig, Sachsen
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig

20
Di Cesare G, Cuccio V, Marchi M, Sciutti A, Rizzolatti G. Communicative and Affective Components in Processing Auditory Vitality Forms: An fMRI Study. Cereb Cortex 2021; 32:909-918. [PMID: 34428292 PMCID: PMC8889944 DOI: 10.1093/cercor/bhab255]
Abstract
In previous studies on auditory vitality forms, we found that listening to action verbs pronounced gently or rudely produced, relative to a neutral robotic voice, activation of the dorso-central insula. One might wonder whether this insular activation depends on the conjunction of action verbs and auditory vitality forms, or whether auditory vitality forms are sufficient per se to activate the insula. To address this issue, we presented words not related to actions, such as concrete nouns (e.g., “ball”), pronounced gently or rudely. No activation of the dorso-central insula was found. As a further step, we examined whether interjections, i.e., speech stimuli conveying communicative intention (e.g., “hello”), pronounced with different vitality forms, would activate the insula relative to control. The results showed that stimuli conveying a communicative intention and pronounced with different auditory vitality forms activate the dorso-central insula. These data deepen our understanding of vitality form processing, showing that insular activation is not specific to action verbs but can also be elicited by speech acts conveying communicative intention, such as interjections. These findings also show the intrinsic social nature of vitality forms, because activation of the insula was not observed in the absence of a communicative intention.
Affiliation(s)
- G Di Cesare
- Italian Institute of Technology, Cognitive Architecture for Collaborative Technologies Unit, Genova, Italy
- V Cuccio
- Department of Cognitive Science, Psychology, Education and Cultural Studies, University of Messina, Messina, Italy
- M Marchi
- Department of Computer Science, University of Milan, Milan, Italy
- A Sciutti
- Italian Institute of Technology, Cognitive Architecture for Collaborative Technologies Unit, Genova, Italy
- G Rizzolatti
- Istituto di Neuroscienze, Consiglio Nazionale delle Ricerche, Parma, Italy

21
The Melody of Speech: What the Melodic Perception of Speech Reveals about Language Performance and Musical Abilities. Languages 2021. [DOI: 10.3390/languages6030132]
Abstract
Research has shown that melody plays a crucial role not only in music but also in language acquisition. Evidence shows that melody helps in retrieving, remembering, and memorizing new language material, while relatively little is known about whether individuals who perceive speech as more melodic than others also benefit in the acquisition of oral languages. In this investigation, we wanted to show what impact the subjective melodic perception of speech has on the pronunciation of unfamiliar foreign languages. We tested 86 participants for how melodic they perceived five unfamiliar languages, for their ability to repeat and pronounce the respective five languages, for their musical abilities, and for their short-term memory (STM). The results revealed that 59 percent of the variance in the language pronunciation tasks could be explained by five predictors: the number of foreign languages spoken, short-term memory capacity, tonal aptitude, melodic singing ability, and how melodic the languages appeared to the participants. Group comparisons showed that individuals who perceived languages as more melodic performed significantly better in all language tasks than those who did not. However, even though we expected the musical measures to be related to the melodic perception of foreign languages, we could only detect some correlations with rhythmical and tonal musical aptitude. Overall, the findings of this investigation add a new dimension to language research, showing that individuals who perceive natural languages to be more melodic than others also retrieve and pronounce utterances more accurately.
22
Multiple prosodic meanings are conveyed through separate pitch ranges: Evidence from perception of focus and surprise in Mandarin Chinese. Cogn Affect Behav Neurosci 2021; 21:1164-1175. [PMID: 34331268 DOI: 10.3758/s13415-021-00930-9]
Abstract
F0 variation is a crucial feature in speech prosody, which can convey linguistic information such as focus and paralinguistic meanings such as surprise. How can multiple layers of information be represented with F0 in speech: are they divided into discrete layers of pitch or overlapped without clear divisions? We investigated this question by assessing pitch perception of focus and surprise in Mandarin Chinese. Seventeen native Mandarin listeners rated the strength of focus and surprise conveyed by the same set of synthetically manipulated sentences. An fMRI experiment was conducted to assess neural correlates of the listeners' perceptual response to the stimuli. The results showed that behaviourally, the perceptual threshold for focus was 3 semitones and that for surprise was 5 semitones above the baseline. Moreover, the pitch range of 5-12 semitones above the baseline signalled both focus and surprise, suggesting a considerable overlap between the two types of prosodic information within this range. The neuroimaging data positively correlated with the variations in behavioural data. Also, a ceiling effect was found as no significant behavioural differences or neural activities were shown after reaching a certain pitch level for the perception of focus and surprise respectively. Together, the results suggest that different layers of prosodic information are represented in F0 through different pitch ranges: paralinguistic information is represented at a pitch range beyond that used by linguistic information. Meanwhile, the representation of paralinguistic information is achieved without obscuring linguistic prosody, thus allowing F0 to represent the two layers of information in parallel.
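The semitone thresholds reported here translate to frequencies via the equal-tempered relation f = f0 · 2^(st/12). A minimal sketch of the conversion; the 200 Hz baseline is an illustrative value, not taken from the study:

```python
import math

def semitones_above(f0_hz: float, st: float) -> float:
    """Frequency `st` semitones above f0_hz (equal-tempered: 2**(st/12))."""
    return f0_hz * 2.0 ** (st / 12.0)

def interval_in_semitones(f_low: float, f_high: float) -> float:
    """Inverse mapping: the interval between two frequencies, in semitones."""
    return 12.0 * math.log2(f_high / f_low)

# With an illustrative 200 Hz baseline, the reported perceptual thresholds
# (3 st for focus, 5 st for surprise) correspond to:
focus_threshold = semitones_above(200.0, 3)     # ~237.8 Hz
surprise_threshold = semitones_above(200.0, 5)  # ~267.0 Hz
```

Because the scale is logarithmic, the same semitone offsets correspond to different absolute Hz excursions for speakers with different baseline F0.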
23
Hartwigsen G, Bengio Y, Bzdok D. How does hemispheric specialization contribute to human-defining cognition? Neuron 2021; 109:2075-2090. [PMID: 34004139 PMCID: PMC8273110 DOI: 10.1016/j.neuron.2021.04.024]
Abstract
Uniquely human cognitive faculties arise from flexible interplay between specific local neural modules, with hemispheric asymmetries in functional specialization. Here, we discuss how these computational design principles provide a scaffold that enables some of the most advanced cognitive operations, such as semantic understanding of world structure, logical reasoning, and communication via language. We draw parallels to dual-processing theories of cognition by placing a focus on Kahneman's System 1 and System 2. We propose integration of these ideas with the global workspace theory to explain dynamic relay of information products between both systems. Deepening the current understanding of how neurocognitive asymmetry makes humans special can ignite the next wave of neuroscience-inspired artificial intelligence.
Affiliation(s)
- Gesa Hartwigsen
- Max Planck Institute for Human Cognitive and Brain Sciences, Lise Meitner Research Group Cognition and Plasticity, Leipzig, Germany.
- Yoshua Bengio
- Mila, Montreal, QC, Canada; University of Montreal, Montreal, QC, Canada
- Danilo Bzdok
- Mila, Montreal, QC, Canada; Montreal Neurological Institute, McConnell Brain Imaging Centre, Faculty of Medicine, McGill University, Montreal, QC, Canada; Department of Biomedical Engineering, Faculty of Medicine, and School of Computer Science, McGill University, Montreal, QC, Canada

24
Zainaee S, Mahdipour R, Mahdavi Rashed M, Sobhani-Rad D. Dysgraphia and dysprosody in a patient with arteriovenous malformation: a case report. Neurocase 2021; 27:259-265. [PMID: 34106816 DOI: 10.1080/13554794.2021.1929332]
Abstract
Arteriovenous malformation (AVM) results from the development of abnormal connections between veins and arteries. This study reports an AVM case presenting with dysgraphia and dysprosody. After the trauma, the patient's handwriting was identified as macrographic and illegible, and written letters and verbs were omitted in free writing and dictation. Moreover, the prosody of the patient's utterances was altered. Finally, an intervention was conducted to improve the writing impairments, which eventually improved. AVM can adversely affect communication and working life because of these impairments; thus, referring the patient to a speech and language pathologist seems sensible and necessary.
Affiliation(s)
- Shahryar Zainaee
- Department of Speech Therapy, School of Paramedical Sciences, Mashhad University of Medical Sciences
- Ramin Mahdipour
- Department of Anatomy and Cell Biology, School of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran
- Davood Sobhani-Rad
- Department of Speech Therapy, School of Paramedical Sciences, Mashhad University of Medical Sciences
25
Sheppard SM, Meier EL, Zezinka Durfee A, Walker A, Shea J, Hillis AE. Characterizing subtypes and neural correlates of receptive aprosodia in acute right hemisphere stroke. Cortex 2021;141:36-54. [PMID: 34029857] [DOI: 10.1016/j.cortex.2021.04.003]
Abstract
INTRODUCTION Speakers naturally produce prosodic variations depending on their emotional state. Receptive prosody has several processing stages. We aimed to conduct lesion-symptom mapping to determine whether damage (core infarct or hypoperfusion) to specific brain areas was associated with receptive aprosodia or with impairment at different processing stages in individuals with acute right hemisphere stroke. We also aimed to determine whether different subtypes of receptive aprosodia exist that are characterized by distinctive behavioral performance patterns. METHODS Twenty patients with receptive aprosodia following right hemisphere ischemic stroke were enrolled within five days of stroke; clinical imaging was acquired. Participants completed tests of receptive emotional prosody, and tests of each stage of prosodic processing (Stage 1: acoustic analysis; Stage 2: analyzing abstract representations of acoustic characteristics that convey emotion; Stage 3: semantic processing). Emotional facial recognition was also assessed. LASSO regression was used to identify predictors of performance on each behavioral task. Predictors entered into each model included 14 right hemisphere regions, hypoperfusion in four vascular territories as measured using FLAIR hyperintense vessel ratings, lesion volume, age, and education. A k-medoid cluster analysis was used to identify different subtypes of receptive aprosodia based on performance on the behavioral tasks. RESULTS Impaired receptive emotional prosody and impaired emotional facial expression recognition were both predicted by greater percent damage to the caudate. The k-medoid cluster analysis identified three different subtypes of aprosodia. One group was primarily impaired on Stage 1 processing and primarily had frontotemporal lesions. The second group had a domain-general emotion recognition impairment and maximal lesion overlap in subcortical areas. 
Finally, the third group was characterized by a Stage 2 processing deficit and had lesion overlap in posterior regions. CONCLUSIONS Subcortical structures, particularly the caudate, play an important role in emotional prosody comprehension. Receptive aprosodia can result from impairments at different processing stages.
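The subtyping step in this abstract rests on k-medoid clustering of behavioral task scores. As a rough illustration only (this is not the authors' code; the farthest-point initialization, Euclidean distance, and synthetic data are assumptions), a minimal k-medoids routine might look like this:

```python
import numpy as np

def k_medoids(X, k, n_iter=50, seed=0):
    """Minimal k-medoids (Voronoi iteration) with farthest-point initialization.

    X: (n_samples, n_features) array of, e.g., behavioral scores.
    Returns medoid indices and a cluster label for every sample.
    """
    rng = np.random.default_rng(seed)
    # Pairwise Euclidean distance matrix between all samples.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Farthest-point init: each new medoid is maximally distant from the chosen ones.
    medoids = [int(rng.integers(len(X)))]
    for _ in range(k - 1):
        medoids.append(int(np.argmax(D[:, medoids].min(axis=1))))
    medoids = np.asarray(medoids)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)  # assign each sample to nearest medoid
        new = np.array([
            # New medoid = cluster member minimizing summed within-cluster distance.
            np.flatnonzero(labels == c)[np.argmin(D[np.ix_(labels == c, labels == c)].sum(axis=1))]
            for c in range(k)
        ])
        if np.array_equal(new, medoids):
            break  # converged
        medoids = new
    return medoids, labels
```

With each patient represented by scores on the Stage 1-3 tasks, the three resulting clusters would correspond to candidate aprosodia subtypes.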
Affiliation(s)
- Shannon M Sheppard
- Department of Communication Sciences & Disorders, Chapman University, Irvine, CA, USA; Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA.
- Erin L Meier
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Alex Walker
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Jennifer Shea
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Argye E Hillis
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA; Department of Physical Medicine and Rehabilitation, Johns Hopkins University School of Medicine, Baltimore, MD, USA; Department of Cognitive Science, Krieger School of Arts and Sciences, Johns Hopkins University, Baltimore, MD, USA
26
Murai SA, Riquimaroux H. Neural correlates of subjective comprehension of noise-vocoded speech. Hear Res 2021;405:108249. [PMID: 33894680] [DOI: 10.1016/j.heares.2021.108249]
Abstract
Under an acoustically degraded condition, the degree of speech comprehension fluctuates within individuals. Understanding the relationship between such fluctuations in comprehension and neural responses might reveal perceptual processing for distorted speech. In this study we investigated the cerebral activity associated with the degree of subjective comprehension of noise-vocoded speech sounds (NVSS) using functional magnetic resonance imaging. Our results indicate that higher comprehension of NVSS sentences was associated with greater activation in the right superior temporal cortex, and that activity in the left inferior frontal gyrus (Broca's area) was increased when a listener recognized words in a sentence they did not fully comprehend. In addition, results of laterality analysis demonstrated that recognition of words in an NVSS sentence led to less lateralized responses in the temporal cortex, though a left-lateralization was observed when no words were recognized. The data suggest that variation in comprehension within individuals can be associated with changes in lateralization in the temporal auditory cortex.
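The laterality analysis mentioned above is conventionally based on a laterality index contrasting left- and right-hemisphere responses. A common convention is sketched below (one of several in the literature; the +/-0.2 threshold in the comment is a customary rule of thumb, not taken from this paper):

```python
def laterality_index(left, right):
    """Classic laterality index: +1 = fully left-lateralized, -1 = fully right.

    `left` and `right` are summary activation measures (e.g., mean beta values
    or suprathreshold voxel counts) for homologous regions of interest.
    """
    return (left - right) / (left + right)

# A response is often called left-lateralized when LI > 0.2,
# right-lateralized when LI < -0.2, and bilateral in between.
```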
Affiliation(s)
- Shota A Murai
- Faculty of Life and Medical Sciences, Doshisha University, 1-3 Miyakodani, Tatara, Kyotanabe 610-0321, Kyoto, Japan
- Hiroshi Riquimaroux
- Faculty of Life and Medical Sciences, Doshisha University, 1-3 Miyakodani, Tatara, Kyotanabe 610-0321, Kyoto, Japan.
27
Bilateral age-related atrophy in the planum temporale is associated with vowel discrimination difficulty in healthy older adults. Hear Res 2021;406:108252. [PMID: 33951578] [DOI: 10.1016/j.heares.2021.108252]
Abstract
In this study we investigated the association between age-related brain atrophy and behavioural as well as electrophysiological markers of vowel perception in a sample of healthy younger and older adults with normal pure-tone hearing. Twenty-three older adults and 27 younger controls discriminated a set of vowels with altered second formants embedded in consonant-vowel syllables. Additionally, mismatch negativity (MMN) responses were recorded in a separate oddball paradigm with the same set of stimuli. A structural magnetic resonance scan was obtained for each participant to determine the cortical architecture of the left and right planum temporale (PT). The PT was chosen for its function as a major processor of auditory cues and speech. Results suggested that older adults performed worse in vowel discrimination despite normal-for-age pure-tone hearing. In the older group, we found evidence that those with greater age-related cortical atrophy (i.e., lower cortical surface area and cortical volume) in the left and right PT also showed weaker vowel discrimination. In comparison, we found a lateralized correlation in the younger group suggesting that those with greater cortical thickness in only the left PT performed worse in the vowel discrimination task. We did not find any associations between macroanatomical traits of the PT and MMN responses. We conclude that deficient vowel processing is not only caused by pure-tone hearing loss but is also influenced by atrophy-related changes in the ageing auditory-related cortices. Furthermore, our results suggest that auditory processing might become more bilateral across the lifespan.
28
Hsieh IH, Yeh WT. The Interaction Between Timescale and Pitch Contour at Pre-attentive Processing of Frequency-Modulated Sweeps. Front Psychol 2021;12:637289. [PMID: 33833720] [PMCID: PMC8021897] [DOI: 10.3389/fpsyg.2021.637289]
Abstract
Speech comprehension across languages depends on encoding the pitch variations in frequency-modulated (FM) sweeps at different timescales and frequency ranges. While timescale and spectral contour of FM sweeps play important roles in differentiating acoustic speech units, relatively little work has been done to understand the interaction between the two acoustic dimensions during early cortical processing. An auditory oddball paradigm was employed to examine the interaction of timescale and pitch contour at pre-attentive processing of FM sweeps. Event-related potentials to frequency sweeps that vary in linguistically relevant pitch contour (fundamental frequency F0 vs. first formant frequency F1) and timescale (local vs. global) in Mandarin Chinese were recorded. Mismatch negativities (MMNs) were elicited by all types of sweep deviants. For the local timescale, FM sweeps with F0 contours yielded larger MMN amplitudes than F1 contours. A reversed MMN amplitude pattern was obtained with respect to F0/F1 contours for global-timescale stimuli. An interhemispheric asymmetry of MMN topography was observed corresponding to local- and global-timescale contours. In the difference waveforms, falling but not rising sweep contours elicited right-hemispheric dominance. Results showed that timescale and pitch contour interact with each other in pre-attentive auditory processing of FM sweeps. Findings suggest that FM sweeps, a type of non-speech signal, are processed at an early stage with reference to their linguistic function. That the dynamic interaction between timescale and spectral pattern is processed during early cortical processing of non-speech frequency sweep signals may be critical to facilitating speech encoding at a later stage.
Affiliation(s)
- I-Hui Hsieh
- Institute of Cognitive Neuroscience, National Central University, Taoyuan City, Taiwan
- Wan-Ting Yeh
- Institute of Cognitive Neuroscience, National Central University, Taoyuan City, Taiwan
29
Right Broca's area is hyperactive in right-handed subjects during meditation: Possible clinical implications? Med Hypotheses 2021;150:110556. [PMID: 33812300] [DOI: 10.1016/j.mehy.2021.110556]
Abstract
Broca's area, conventionally located in the left (categorical) hemisphere of the brain, is responsible for integrating linguistic and non-linguistic processing; however, the functionality of its right homolog remains only partly understood and explored. This perception is based on the fact that in 96% of right-handed individuals, who constitute 91% of the human population, the left hemisphere is the dominant or categorical hemisphere. Here, we introduce the novel hypothesis that the right homolog of Broca's region, which we observed to be hyperactive during attention-focused meditation, might play an important role in patients with attention deficits and language and speech disorders. Meditation includes self-regulation practices that focus on attention and awareness to achieve better control of mental processes. Positron emission tomography of the brain in twelve apparently healthy, male, right-handed, long-term meditators showed that the right Broca's area was significantly hyperactive (p = 0.002) during meditation vs. baseline, while there was only a subtle increase in the activity of the left Broca's area. Our results suggest that the hitherto only partly explored and understood right homolog of Broca's area may play an important role, especially during meditation, which needs to be explored further.
30
Early institutionalized care disrupts the development of emotion processing in prosody. Dev Psychopathol 2021;33:421-430. [PMID: 33583457] [DOI: 10.1017/s0954579420002023]
Abstract
Millions of children worldwide are raised in institutionalized settings. Unfortunately, institutionalized rearing is often characterized by psychosocial deprivation, leading to difficulties in numerous social, emotional, physical, and cognitive skills. One such skill is the ability to recognize emotional facial expressions. Children with a history of institutional rearing tend to be worse at recognizing emotions in facial expressions than their peers, and this deficit likely affects social interactions. However, emotional information is also conveyed vocally, and neither prosodic information processing nor the cross-modal integration of facial and prosodic emotional expressions have been investigated in these children to date. We recorded electroencephalograms (EEG) while 47 children under institutionalized care (IC) (n = 24) or biological family care (BFC) (n = 23) viewed angry, happy, or neutral facial expressions while listening to pseudowords with angry, happy, or neutral prosody. The results indicate that 20- to 40-month-olds living in IC have event-related potentials (ERPs) over midfrontal brain regions that are less sensitive to incongruent facial and prosodic emotions relative to children under BFC, and that their brain responses to prosody are less lateralized. Children under IC also showed midfrontal ERP differences in processing of angry prosody, indicating that institutionalized rearing may specifically affect the processing of anger.
31
Asymmetry of Auditory-Motor Speech Processing is Determined by Language Experience. J Neurosci 2021;41:1059-1067. [PMID: 33298537] [PMCID: PMC7880293] [DOI: 10.1523/jneurosci.1977-20.2020]
Abstract
Speech processing relies on interactions between auditory and motor systems and is asymmetrically organized in the human brain. The left auditory system is specialized for processing of phonemes, whereas the right is specialized for processing of pitch changes in speech affecting prosody. In speakers of tonal languages, however, processing of pitch (i.e., tone) changes that alter word meaning is left-lateralized indicating that linguistic function and language experience shape speech processing asymmetries. Here, we investigated the asymmetry of motor contributions to auditory speech processing in male and female speakers of tonal and non-tonal languages. We temporarily disrupted the right or left speech motor cortex using transcranial magnetic stimulation (TMS) and measured the impact of these disruptions on auditory discrimination (mismatch negativity; MMN) responses to phoneme and tone changes in sequences of syllables using electroencephalography (EEG). We found that the effect of motor disruptions on processing of tone changes differed between language groups: disruption of the right speech motor cortex suppressed responses to tone changes in non-tonal language speakers, whereas disruption of the left speech motor cortex suppressed responses to tone changes in tonal language speakers. In non-tonal language speakers, the effects of disruption of left speech motor cortex on responses to tone changes were inconclusive. For phoneme changes, disruption of left but not right speech motor cortex suppressed responses in both language groups. We conclude that the contributions of the right and left speech motor cortex to auditory speech processing are determined by the functional roles of acoustic cues in the listener's native language.SIGNIFICANCE STATEMENT The principles underlying hemispheric asymmetries of auditory speech processing remain debated. 
The asymmetry of processing of speech sounds is affected by low-level acoustic cues, but also by their linguistic function. By combining transcranial magnetic stimulation (TMS) and electroencephalography (EEG), we investigated the asymmetry of motor contributions to auditory speech processing in tonal and non-tonal language speakers. We provide causal evidence that the functional role of the acoustic cues in the listener's native language affects the asymmetry of motor influences on auditory speech discrimination ability [indexed by mismatch negativity (MMN) responses]. Lateralized top-down motor influences can affect asymmetry of speech processing in the auditory system.
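The MMN responses used as the readout in this study are computed as a difference wave: the averaged ERP to deviant stimuli minus the averaged ERP to standards. A minimal sketch (assuming epoch arrays of shape epochs x samples; this is the generic computation, not the study's pipeline):

```python
import numpy as np

def mismatch_negativity(deviant_epochs, standard_epochs):
    """MMN difference wave: average deviant ERP minus average standard ERP.

    Both inputs are 2-D arrays (n_epochs, n_samples); the result is the
    per-sample difference of the two epoch-averaged waveforms.
    """
    return deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)
```

The MMN amplitude is then typically quantified as the mean or peak of this difference wave in a window around 100-250 ms after stimulus onset.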
32
Kherif F, Muller S. Neuro-Clinical Signatures of Language Impairments: A Theoretical Framework for Function-to-structure Mapping in Clinics. Curr Top Med Chem 2021;20:800-811. [PMID: 32116193] [DOI: 10.2174/1568026620666200302111130]
Abstract
In the past decades, neuroscientists and clinicians have collected a considerable amount of data and drastically increased our knowledge about the mapping of language in the brain. The emerging picture from the accumulated knowledge is that there are complex and combinatorial relationships between language functions and anatomical brain regions. Understanding the underlying principles of this complex mapping is of paramount importance for the identification of the brain signature of language and of Neuro-Clinical signatures that explain language impairments and predict language recovery after stroke. We review recent attempts to address this question of language-brain mapping. We introduce the different concepts of mapping (from diffeomorphic one-to-one mapping to many-to-many mapping). We build on those different forms of mapping to derive a theoretical framework in which current principles of brain architecture, including redundancy, degeneracy, pluripotentiality, and bow-tie networks, are described.
Affiliation(s)
- Ferath Kherif
- Laboratory for Research in Neuroimaging, Department of Clinical Neuroscience, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Sandrine Muller
- Laboratory for Research in Neuroimaging, Department of Clinical Neuroscience, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
33
Giroud N, Pichora-Fuller MK, Mick P, Wittich W, Al-Yawer F, Rehan S, Orange JB, Phillips NA. Hearing loss is associated with gray matter differences in older adults at risk for and with Alzheimer's disease. Aging Brain 2021;1:100018. [PMID: 36911511] [PMCID: PMC9997162] [DOI: 10.1016/j.nbas.2021.100018]
Abstract
Using data from the COMPASS-ND study we investigated associations between hearing loss and hippocampal volume as well as cortical thickness in older adults with subjective cognitive decline (SCD), mild cognitive impairment (MCI), and Alzheimer's dementia (AD). SCD participants with greater pure-tone hearing loss exhibited lower hippocampal volume, but more cortical thickness in the left superior temporal gyrus and right pars opercularis. Greater speech-in-noise reception thresholds were associated with lower cortical thickness bilaterally across much of the cortex in AD. The AD group also showed a trend towards worse speech-in-noise thresholds compared to the SCD group.
Affiliation(s)
- N Giroud
- Department of Psychology, Centre for Research in Human Development, Concordia University, Montréal, Québec, Canada; Centre for Research on Brain, Language, and Music, Montréal, Québec, Canada
- M K Pichora-Fuller
- Department of Psychology, University of Toronto, Mississauga, Ontario, Canada
- P Mick
- Department of Surgery, College of Medicine, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
- W Wittich
- School of Optometry, Université de Montréal, Montreal, Quebec, Canada
- F Al-Yawer
- Department of Psychology, Centre for Research in Human Development, Concordia University, Montréal, Québec, Canada
- S Rehan
- Department of Psychology, Centre for Research in Human Development, Concordia University, Montréal, Québec, Canada
- J B Orange
- School of Communication Sciences and Disorders, Western University, London, Canada
- N A Phillips
- Department of Psychology, Centre for Research in Human Development, Concordia University, Montréal, Québec, Canada; Centre for Research on Brain, Language, and Music, Montréal, Québec, Canada; Lady Davis Institute for Medical Research, Jewish General Hospital, Montréal, Québec, Canada
34
Trettenbrein PC, Papitto G, Friederici AD, Zaccarella E. Functional neuroanatomy of language without speech: An ALE meta-analysis of sign language. Hum Brain Mapp 2020;42:699-712. [PMID: 33118302] [PMCID: PMC7814757] [DOI: 10.1002/hbm.25254]
Abstract
Sign language (SL) conveys linguistic information using gestures instead of sounds. Here, we apply a meta‐analytic estimation approach to neuroimaging studies (N = 23; subjects = 316) and ask whether SL comprehension in deaf signers relies on the same primarily left‐hemispheric cortical network implicated in spoken and written language (SWL) comprehension in hearing speakers. We show that: (a) SL recruits bilateral fronto‐temporo‐occipital regions with strong left‐lateralization in the posterior inferior frontal gyrus known as Broca's area, mirroring functional asymmetries observed for SWL. (b) Within this SL network, Broca's area constitutes a hub which attributes abstract linguistic information to gestures. (c) SL‐specific voxels in Broca's area are also crucially involved in SWL, as confirmed by meta‐analytic connectivity modeling using an independent large‐scale neuroimaging database. This strongly suggests that the human brain evolved a lateralized language network with a supramodal hub in Broca's area which computes linguistic information independent of speech.
Affiliation(s)
- Patrick C Trettenbrein
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; International Max Planck Research School on Neuroscience of Communication: Structure, Function, and Plasticity (IMPRS NeuroCom), Leipzig, Germany
- Giorgio Papitto
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; International Max Planck Research School on Neuroscience of Communication: Structure, Function, and Plasticity (IMPRS NeuroCom), Leipzig, Germany
- Angela D Friederici
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Emiliano Zaccarella
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
35
Chien PJ, Friederici AD, Hartwigsen G, Sammler D. Intonation processing increases task-specific fronto-temporal connectivity in tonal language speakers. Hum Brain Mapp 2020;42:161-174. [PMID: 32996647] [PMCID: PMC7721241] [DOI: 10.1002/hbm.25214]
Abstract
Language comprehension depends on tight functional interactions between distributed brain regions. While these interactions are established for semantic and syntactic processes, the functional network of speech intonation – the linguistic variation of pitch – has been scarcely defined. Particularly little is known about intonation in tonal languages, in which pitch not only serves intonation but also expresses meaning via lexical tones. The present study used psychophysiological interaction analyses of functional magnetic resonance imaging data to characterise the neural networks underlying intonation and tone processing in native Mandarin Chinese speakers. Participants categorised either intonation or tone of monosyllabic Mandarin words that gradually varied between statement and question and between Tone 2 and Tone 4. Intonation processing induced bilateral fronto‐temporal activity and increased functional connectivity between left inferior frontal gyrus and bilateral temporal regions, likely linking auditory perception and labelling of intonation categories in a phonological network. Tone processing induced bilateral temporal activity, associated with the auditory representation of tonal (phonemic) categories. Together, the present data demonstrate the breadth of the functional intonation network in a tonal language including higher‐level phonological processes in addition to auditory representations common to both intonation and tone.
Affiliation(s)
- Pei-Ju Chien
- International Max Planck Research School NeuroCom, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Otto Hahn Group 'Neural Bases of Intonation in Speech and Music', Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Lise Meitner Research Group 'Cognition and Plasticity', Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Angela D Friederici
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Gesa Hartwigsen
- Lise Meitner Research Group 'Cognition and Plasticity', Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Daniela Sammler
- Otto Hahn Group 'Neural Bases of Intonation in Speech and Music', Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
36
Chien PJ, Friederici AD, Hartwigsen G, Sammler D. Neural correlates of intonation and lexical tone in tonal and non-tonal language speakers. Hum Brain Mapp 2020;41:1842-1858. [PMID: 31957928] [PMCID: PMC7268089] [DOI: 10.1002/hbm.24916]
Abstract
Intonation, the modulation of pitch in speech, is a crucial aspect of language that is processed in right‐hemispheric regions, beyond the classical left‐hemispheric language system. Whether or not this notion generalises across languages remains, however, unclear. Particularly, tonal languages are an interesting test case because of the dual linguistic function of pitch that conveys lexical meaning in form of tone, in addition to intonation. To date, only few studies have explored how intonation is processed in tonal languages, how this compares to tone and between tonal and non‐tonal language speakers. The present fMRI study addressed these questions by testing Mandarin and German speakers with Mandarin material. Both groups categorised mono‐syllabic Mandarin words in terms of intonation, tone, and voice gender. Systematic comparisons of brain activity of the two groups between the three tasks showed large cross‐linguistic commonalities in the neural processing of intonation in left fronto‐parietal, right frontal, and bilateral cingulo‐opercular regions. These areas are associated with general phonological, specific prosodic, and controlled categorical decision‐making processes, respectively. Tone processing overlapped with intonation processing in left fronto‐parietal areas, in both groups, but evoked additional activity in bilateral temporo‐parietal semantic regions and subcortical areas in Mandarin speakers only. Together, these findings confirm cross‐linguistic commonalities in the neural implementation of intonation processing but dissociations for semantic processing of tone only in tonal language speakers.
Affiliation(s)
- Pei-Ju Chien
- International Max Planck Research School NeuroCom, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Otto Hahn Group "Neural Bases of Intonation in Speech and Music", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Lise Meitner Research Group "Cognition and Plasticity", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Angela D Friederici
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Gesa Hartwigsen
- Lise Meitner Research Group "Cognition and Plasticity", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Daniela Sammler
- Otto Hahn Group "Neural Bases of Intonation in Speech and Music", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
37
Gao J, Zhang D, Wang L, Wang W, Fan Y, Tang M, Zhang X, Lei X, Wang Y, Yang J, Zhang X. Altered Effective Connectivity in Schizophrenic Patients With Auditory Verbal Hallucinations: A Resting-State fMRI Study With Granger Causality Analysis. Front Psychiatry 2020;11:575. [PMID: 32670108] [PMCID: PMC7327618] [DOI: 10.3389/fpsyt.2020.00575]
Abstract
PURPOSE Auditory verbal hallucinations (AVH) are among the most common and prominent symptoms of schizophrenia. Although abnormal functional connectivity associated with AVH has been reported in multiple regions, the changes in information flow remain unclear. In this study, we aimed to elucidate causal influences related to AVH in key regions of auditory, language, and memory networks, by using Granger causality analysis (GCA). PATIENTS AND METHODS Eighteen patients with schizophrenia with AVH and eighteen matched patients without AVH who received resting-state fMRI scans were enrolled in the study. The bilateral superior temporal gyrus (STG), Broca's area, Wernicke's area, putamen, and hippocampus were selected as regions of interest. RESULTS Granger causality (GC) increased from Broca's area to the left STG, and decreased from the right homolog of Wernicke's area to the right homolog of Broca's area, and from the right STG to the right hippocampus in the AVH group compared with the non-AVH group. Correlation analysis showed that the normalized GC ratios from the left STG to Broca's area, from the left STG to the right homolog of Broca's area, and from the right STG to the right homolog of Broca's area were negatively correlated with severity of AVH, and the normalized GC ratios from Broca's area to the left hippocampus and from Broca's area to the right STG were positively correlated with severity of AVH. CONCLUSION Our findings indicate a causal influence of pivotal regions involving the auditory, language, and memory networks in schizophrenia with AVH, which provide a deeper understanding of the neural mechanisms underlying AVH.
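Granger causality, as used in this study, asks whether the past of one region's time series improves prediction of another's beyond that region's own past. A bare-bones bivariate version on two time courses is sketched below (the lag order, OLS estimator, and log-variance-ratio index are generic illustrative choices, not the study's actual pipeline):

```python
import numpy as np

def _residual_variance(Z, y):
    """Variance of OLS residuals when regressing y on design matrix Z."""
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    return resid.var()

def granger_index(x, y, lag=2):
    """Bivariate Granger causality from x to y: ln(var_restricted / var_full).

    The restricted model predicts y from its own past; the full model adds
    past samples of x. Positive values mean x's past helps predict y.
    """
    n = len(y)
    target = y[lag:]
    ones = np.ones(n - lag)
    # Column i holds the series shifted back by (i + 1) samples.
    y_lags = np.column_stack([y[lag - 1 - i : n - 1 - i] for i in range(lag)])
    x_lags = np.column_stack([x[lag - 1 - i : n - 1 - i] for i in range(lag)])
    restricted = np.column_stack([ones, y_lags])
    full = np.column_stack([ones, y_lags, x_lags])
    return float(np.log(_residual_variance(restricted, target)
                        / _residual_variance(full, target)))
```

Applied to, say, time courses from Broca's area and the left STG, an asymmetry such as granger_index(broca, stg) > granger_index(stg, broca) would indicate a net directed influence from Broca's area, in the spirit of the group differences reported above.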
Collapse
Affiliation(s)
- Jie Gao, Department of MRI, Shaanxi Provincial People's Hospital, Xi'an, China
- Dongsheng Zhang, Department of MRI, Shaanxi Provincial People's Hospital, Xi'an, China
- Lei Wang, Department of Radiology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, China
- Wei Wang, Department of Psychiatry, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, China
- Yajuan Fan, Department of Psychiatry, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, China
- Min Tang, Department of MRI, Shaanxi Provincial People's Hospital, Xi'an, China
- Xin Zhang, Department of MRI, Shaanxi Provincial People's Hospital, Xi'an, China
- Xiaoyan Lei, Department of MRI, Shaanxi Provincial People's Hospital, Xi'an, China
- Yarong Wang, Department of Radiology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, China
- Jian Yang, Department of Radiology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, China
- Xiaoling Zhang, Department of MRI, Shaanxi Provincial People's Hospital, Xi'an, China
|
38
|
Sheppard SM, Love T, Midgley KJ, Shapiro LP, Holcomb PJ. Using prosody during sentence processing in aphasia: Evidence from temporal neural dynamics. Neuropsychologia 2019; 134:107197. [PMID: 31542361] [PMCID: PMC6911311] [DOI: 10.1016/j.neuropsychologia.2019.107197]
Affiliation(s)
- Shannon M Sheppard, San Diego State University, USA; University of California, San Diego, USA
- Tracy Love, San Diego State University, USA; University of California, San Diego, USA
- Lewis P Shapiro, San Diego State University, USA; University of California, San Diego, USA
|
39
|
Karlsson EM, Johnstone LT, Carey DP. The depth and breadth of multiple perceptual asymmetries in right handers and non-right handers. Laterality 2019; 24:707-739. [PMID: 31399020] [DOI: 10.1080/1357650x.2019.1652308]
Abstract
Several non-verbal perceptual and attentional processes have been linked with specialization of the right cerebral hemisphere. Given that most people have a left-hemispheric specialization for language, it is tempting to assume that these two classes of dominance are related. Unfortunately, such models of complementarity are notoriously hard to test. Here we suggest a method that compares the frequency of a particular perceptual asymmetry with the known frequencies of left-hemispheric language dominance in right-handed and non-right-handed groups. We illustrate this idea using the greyscales and colourscales tasks, chimeric faces, emotional dichotic listening, and a consonant-vowel (CV) dichotic listening task. Results show a substantial "breadth" of leftward bias on the right-hemispheric tasks and rightward bias on verbal dichotic listening. Right handers and non-right handers did not differ in the proportions of people who were left biased for greyscales/colourscales. Support for reduced typical biases in non-right handers was found for chimeric faces and for CV dichotic listening. Results are discussed in terms of complementary theories of cerebral asymmetries, and in terms of how this type of method could be used to create a taxonomy of lateralized functions, each categorized as related to speech and language dominance or not.
Affiliation(s)
- Emma M Karlsson, Perception, Action and Memory Research Group, School of Psychology, Bangor University, Bangor, UK
- David P Carey, Perception, Action and Memory Research Group, School of Psychology, Bangor University, Bangor, UK
|
40
|
Intonation guides sentence processing in the left inferior frontal gyrus. Cortex 2019; 117:122-134. [DOI: 10.1016/j.cortex.2019.02.011]
|
41
|
Giroud N, Keller M, Hirsiger S, Dellwo V, Meyer M. Bridging the brain structure—brain function gap in prosodic speech processing in older adults. Neurobiol Aging 2019; 80:116-126. [DOI: 10.1016/j.neurobiolaging.2019.04.017]
|
42
|
Teoh ES, Cappelloni MS, Lalor EC. Prosodic pitch processing is represented in delta-band EEG and is dissociable from the cortical tracking of other acoustic and phonetic features. Eur J Neurosci 2019; 50:3831-3842. [PMID: 31287601] [DOI: 10.1111/ejn.14510]
Abstract
Speech is central to communication among humans. Meaning is largely conveyed by the selection of linguistic units such as words, phrases and sentences. However, prosody, that is, the variation of acoustic cues that tie linguistic segments together, adds another layer of meaning. There are various features underlying prosody, one of the most important being pitch and how it is modulated. Recent fMRI and ECoG studies have suggested that there are cortical regions for pitch which respond primarily to resolved harmonics, and that high-gamma cortical activity encodes intonation as represented by relative pitch. Importantly, this latter result was shown to be independent of the cortical tracking of the acoustic energy of speech, a commonly used measure. Here, we investigate whether we can isolate low-frequency EEG indices of pitch processing of continuous narrative speech from those reflecting the tracking of other acoustic and phonetic features. Harmonic resolvability was found to contain unique predictive power in delta and theta phase, but it was highly correlated with the envelope and was tracked even when stimuli were pitch-impoverished. As such, we are circumspect about whether its contribution is truly pitch-specific. Crucially, however, we found a unique contribution of relative pitch to EEG delta-phase prediction, and this tracking was absent when subjects listened to pitch-impoverished stimuli. This finding suggests the possibility of a separate processing stream for prosody that might operate in parallel to acoustic-linguistic processing. Furthermore, it provides a novel neural index that could be useful for testing prosodic encoding in populations with speech processing deficits and for improving cognitively controlled hearing aids.
Affiliation(s)
- Emily S Teoh, School of Engineering, Trinity Centre for Biomedical Engineering and Trinity College Institute of Neuroscience, Trinity College Dublin, University of Dublin, Dublin, Ireland
- Edmund C Lalor, School of Engineering, Trinity Centre for Biomedical Engineering and Trinity College Institute of Neuroscience, Trinity College Dublin, University of Dublin, Dublin, Ireland; Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA; Department of Neuroscience and Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY, USA
|
43
|
Keller M, Neuschwander P, Meyer M. When right becomes less right: Neural dedifferentiation during suprasegmental speech processing in the aging brain. Neuroimage 2019; 189:886-895. [DOI: 10.1016/j.neuroimage.2019.01.050]
|
44
|
Rahul DR, Ponniah RJ. Decoding the biology of language and its implications in language acquisition. J Biosci 2019; 44:25. [PMID: 30837376]
Abstract
Associating human genetic makeup with the faculty of language has long been a goal of biolinguistics. This has stimulated the idea that language is attributed to genes and that language disabilities are caused by genetic mutations. However, the application of genetic knowledge to language intervention remains a gap in the existing literature. In an effort to bridge this gap, this article presents an account of the genetic and neural associations of language and synthesizes the genetic, neural, epigenetic and environmental facets involved in language. In addition to describing the association of genes with language, the neural and epigenetic aspects of language are also explored. Further, environmental aspects of language such as language input, emotion and cognition are traced back to gene expression. Therefore, effective intervention for language learning difficulties must offer genetics-informed solutions, both linguistic and medical.
Affiliation(s)
- D R Rahul, National Institute of Technology, Tiruchirappalli, Tamil Nadu, India
|
45
|
Fló A, Brusini P, Macagno F, Nespor M, Mehler J, Ferry AL. Newborns are sensitive to multiple cues for word segmentation in continuous speech. Dev Sci 2019; 22:e12802. [PMID: 30681763] [DOI: 10.1111/desc.12802]
Abstract
Before infants can learn words, they must identify those words in continuous speech. Yet, the speech signal lacks obvious boundary markers, which poses a potential problem for language acquisition (Swingley, Philos Trans R Soc Lond. Series B, Biol Sci 364(1536), 3617-3632, 2009). By the middle of the first year, infants seem to have solved this problem (Bergelson & Swingley, Proc Natl Acad Sci 109(9), 3253-3258, 2012; Jusczyk & Aslin, Cogn Psychol 29, 1-23, 1995), but it is unknown if segmentation abilities are present from birth, or if they only emerge after sufficient language exposure and/or brain maturation. Here, in two independent experiments, we looked at two cues known to be crucial for the segmentation of human speech: the computation of statistical co-occurrences between syllables and the use of the language's prosody. After a brief familiarization of about 3 min with continuous speech, using functional near-infrared spectroscopy, neonates showed differential brain responses on a recognition test to words that violated either the statistical (Experiment 1) or prosodic (Experiment 2) boundaries of the familiarization, compared to words that conformed to those boundaries. Importantly, word recognition in Experiment 2 occurred even in the absence of prosodic information at test, meaning that newborns encoded the phonological content independently of its prosody. These data indicate that humans are born with operational language processing and memory capacities and can use at least two types of cues to segment otherwise continuous speech, a key first step in language acquisition.
Affiliation(s)
- Ana Fló, Language, Cognition, and Development Laboratory, Scuola Internazionale di Studi Avanzati, Trieste, Italy; Cognitive Neuroimaging Unit, Commissariat à l'Energie Atomique (CEA), Institut National de la Santé et de la Recherche Médicale (INSERM) U992, NeuroSpin Center, Gif-sur-Yvette, France
- Perrine Brusini, Language, Cognition, and Development Laboratory, Scuola Internazionale di Studi Avanzati, Trieste, Italy; Institute of Psychology Health and Society, University of Liverpool, Liverpool, UK
- Francesco Macagno, Neonatology Unit, Azienda Ospedaliera Santa Maria della Misericordia, Udine, Italy
- Marina Nespor, Language, Cognition, and Development Laboratory, Scuola Internazionale di Studi Avanzati, Trieste, Italy
- Jacques Mehler, Language, Cognition, and Development Laboratory, Scuola Internazionale di Studi Avanzati, Trieste, Italy
- Alissa L Ferry, Language, Cognition, and Development Laboratory, Scuola Internazionale di Studi Avanzati, Trieste, Italy; Division of Human Communication, Hearing, and Development, University of Manchester, Manchester, UK
|
47
|
Walenski M, Europa E, Caplan D, Thompson CK. Neural networks for sentence comprehension and production: An ALE-based meta-analysis of neuroimaging studies. Hum Brain Mapp 2019; 40:2275-2304. [PMID: 30689268] [DOI: 10.1002/hbm.24523]
Abstract
Comprehending and producing sentences is a complex endeavor requiring the coordinated activity of multiple brain regions. We examined three issues related to the brain networks underlying sentence comprehension and production in healthy individuals: First, which regions are recruited for sentence comprehension and sentence production? Second, are there differences for auditory sentence comprehension vs. visual sentence comprehension? Third, which regions are specifically recruited for the comprehension of syntactically complex sentences? Results from activation likelihood estimation (ALE) analyses (from 45 studies) implicated a sentence comprehension network occupying bilateral frontal and temporal lobe regions. Regions implicated in production (from 15 studies) overlapped with the set of regions associated with sentence comprehension in the left hemisphere, but did not include inferior frontal cortex, and did not extend to the right hemisphere. Modality differences between auditory and visual sentence comprehension were found principally in the temporal lobes. Results from the analysis of complex syntax (from 37 studies) showed engagement of left inferior frontal and posterior temporal regions, as well as the right insula. The involvement of the right hemisphere in the comprehension of these structures has potentially important implications for language treatment and recovery in individuals with agrammatic aphasia following left hemisphere brain damage.
Affiliation(s)
- Matthew Walenski, Center for the Neurobiology of Language Recovery, Northwestern University, Evanston, Illinois; Department of Communication Sciences and Disorders, School of Communication, Northwestern University, Evanston, Illinois
- Eduardo Europa, Department of Neurology, University of California, San Francisco
- David Caplan, Department of Neurology, Harvard Medical School, Massachusetts General Hospital, Boston, Massachusetts
- Cynthia K Thompson, Center for the Neurobiology of Language Recovery, Northwestern University, Evanston, Illinois; Department of Communication Sciences and Disorders, School of Communication, Northwestern University, Evanston, Illinois; Department of Neurology, Feinberg School of Medicine, Northwestern University, Evanston, Illinois
|
48
|
Nourski KV, Steinschneider M, Rhone AE, Kovach CK, Kawasaki H, Howard MA. Differential responses to spectrally degraded speech within human auditory cortex: An intracranial electrophysiology study. Hear Res 2018; 371:53-65. [PMID: 30500619] [DOI: 10.1016/j.heares.2018.11.009]
Abstract
Understanding cortical processing of spectrally degraded speech in normal-hearing subjects may provide insights into how sound information is processed by cochlear implant (CI) users. This study investigated electrocorticographic (ECoG) responses to noise-vocoded speech and related these responses to behavioral performance in a phonemic identification task. Subjects were neurosurgical patients undergoing chronic invasive monitoring for medically refractory epilepsy. Stimuli were utterances /aba/ and /ada/, spectrally degraded using a noise vocoder (1-4 bands). ECoG responses were obtained from Heschl's gyrus (HG) and superior temporal gyrus (STG), and were examined within the high gamma frequency range (70-150 Hz). All subjects performed at chance accuracy with speech degraded to 1 and 2 spectral bands, and at or near ceiling for clear speech. Inter-subject variability was observed in the 3- and 4-band conditions. High gamma responses in posteromedial HG (auditory core cortex) were similar for all vocoded conditions and clear speech. A progressive preference for clear speech emerged in anterolateral segments of HG, regardless of behavioral performance. On the lateral STG, responses to all vocoded stimuli were larger in subjects with better task performance. In contrast, both behavioral and neural responses to clear speech were comparable across subjects regardless of their ability to identify degraded stimuli. Findings highlight differences in representation of spectrally degraded speech across cortical areas and their relationship to perception. The results are in agreement with prior non-invasive results. The data provide insight into the neural mechanisms associated with variability in perception of degraded speech and potentially into sources of such variability in CI users.
Affiliation(s)
- Kirill V Nourski, Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, USA
- Mitchell Steinschneider, Departments of Neurology and Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- Ariane E Rhone, Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
- Hiroto Kawasaki, Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
- Matthew A Howard, Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, USA; Pappajohn Biomedical Institute, The University of Iowa, Iowa City, IA, USA
|
49
|
Lei M, Miyoshi T, Niwa Y, Dan I, Sato H. Comprehension-Dependent Cortical Activation During Speech Comprehension Tasks with Multiple Languages: Functional Near-Infrared Spectroscopy Study. Japanese Psychological Research 2018. [DOI: 10.1111/jpr.12218]
|
50
|
Aglieri V, Chaminade T, Takerkart S, Belin P. Functional connectivity within the voice perception network and its behavioural relevance. Neuroimage 2018; 183:356-365. [PMID: 30099078] [PMCID: PMC6215333] [DOI: 10.1016/j.neuroimage.2018.08.011]
Abstract
Recognizing who is speaking is a cognitive ability characterized by considerable individual differences, which could relate to the inter-individual variability observed in voice-elicited BOLD activity. Voice perception is sustained by a complex brain network involving temporal voice areas (TVAs) and, albeit less consistently, extra-temporal regions such as the frontal cortices. We therefore computed functional connectivity (FC) during an fMRI voice localizer (passive listening to voices vs non-voices) within twelve temporal and frontal voice-sensitive regions ("voice patches") defined individually for each subject (N = 90) to account for inter-individual variability. Results revealed that voice patches were positively co-activated during voice listening and were characterized by different FC patterns depending on their location (anterior/posterior) and hemisphere. Importantly, FC between right frontal and temporal voice patches was behaviorally relevant: FC increased significantly with voice recognition abilities as measured in a voice recognition test performed outside the scanner. Hence, this study highlights the importance of frontal regions in voice perception and supports the idea that examining FC between stimulus-specific and higher-order frontal regions can help explain individual differences in the processing of social stimuli such as voices.
Affiliation(s)
- Virginia Aglieri, Institut des Neurosciences de la Timone, UMR 7289, CNRS and Université Aix-Marseille, Marseille, France
- Thierry Chaminade, Institut des Neurosciences de la Timone, UMR 7289, CNRS and Université Aix-Marseille, Marseille, France; Institute of Language, Communication and the Brain, Marseille, France
- Sylvain Takerkart, Institut des Neurosciences de la Timone, UMR 7289, CNRS and Université Aix-Marseille, Marseille, France; Institute of Language, Communication and the Brain, Marseille, France
- Pascal Belin, Institut des Neurosciences de la Timone, UMR 7289, CNRS and Université Aix-Marseille, Marseille, France; Institute of Language, Communication and the Brain, Marseille, France; International Laboratories for Brain, Music and Sound, Department of Psychology, Université de Montréal, McGill University, Montreal, QC, Canada
|