1. Lamekina Y, Titone L, Maess B, Meyer L. Speech Prosody Serves Temporal Prediction of Language via Contextual Entrainment. J Neurosci 2024; 44:e1041232024. PMID: 38839302; PMCID: PMC11236583; DOI: 10.1523/jneurosci.1041-23.2024.
Abstract
Temporal prediction assists language comprehension. In a series of recent behavioral studies, we have shown that listeners specifically employ rhythmic modulations of prosody to estimate the duration of upcoming sentences, thereby speeding up comprehension. In the current human magnetoencephalography (MEG) study on participants of either sex, we show that the human brain achieves this function through a mechanism termed entrainment. Through entrainment, electrophysiological brain activity maintains and continues contextual rhythms beyond their offset. Our experiment combined exposure to repetitive prosodic contours with the subsequent presentation of visual sentences that either matched or mismatched the duration of the preceding contour. During exposure to prosodic contours, we observed MEG coherence with the contours, which was source-localized to right-hemispheric auditory areas. During the processing of the visual targets, activity at the frequency of the preceding contour was still detectable in the MEG; yet sources shifted to the (left) frontal cortex, in line with a functional inheritance of the rhythmic acoustic context for prediction. Strikingly, when the target sentence was shorter than expected from the preceding contour, an omission response appeared in the evoked potential record. We conclude that prosodic entrainment is a functional mechanism of temporal prediction in language comprehension. In general, acoustic rhythms appear to endow language for employing the brain's electrophysiological mechanisms of temporal prediction.
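As a toy illustration of the coherence measure used in the exposure phase, magnitude-squared coherence between a rhythmic stimulus and a noisy signal that follows it can be computed with SciPy. All signals, frequencies, and parameters below are invented for illustration; this is not the authors' data or analysis pipeline:

```python
import numpy as np
from scipy import signal

np.random.seed(0)

# Synthetic example: a 1.5 Hz "prosodic contour" and a noisy
# MEG-like channel that partially follows it (entrainment).
fs = 250                                        # sampling rate in Hz (assumed)
t = np.arange(0, 20, 1 / fs)
contour = np.sin(2 * np.pi * 1.5 * t)           # repetitive prosodic rhythm
meg = 0.6 * contour + np.random.randn(t.size)   # entrained channel + noise

# Magnitude-squared coherence between stimulus and brain signal;
# a peak at the contour frequency indicates entrainment.
f, coh = signal.coherence(contour, meg, fs=fs, nperseg=fs * 4)
peak_freq = f[np.argmax(coh)]
print(peak_freq)  # expected near 1.5 Hz
```

In the study itself, coherence was computed between the MEG recordings and the prosodic contours and then source-localized; the sketch only shows the core stimulus-to-signal coherence step.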
Affiliation(s)
- Yulia Lamekina: Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Lorenzo Titone: Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Burkhard Maess: Methods and Development Group Brain Networks, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Lars Meyer: Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany; University Clinic Münster, Münster 48149, Germany
2. Janes A, McClay E, Gurm M, Boucher TQ, Yeung HH, Iarocci G, Scheerer NE. Predicting Social Competence in Autistic and Non-Autistic Children: Effects of Prosody and the Amount of Speech Input. J Autism Dev Disord 2024. PMID: 38703251; DOI: 10.1007/s10803-024-06363-w.
Abstract
PURPOSE: Autistic individuals often face challenges perceiving and expressing emotions, potentially stemming from differences in speech prosody. Here we explore how autism diagnoses between groups, and measures of social competence within groups, may be related to, first, children's speech characteristics (both prosodic features and amount of spontaneous speech), and second, to these two factors in mothers' speech to their children.
METHODS: Autistic (n = 21) and non-autistic (n = 18) children, aged 7-12 years, participated in a Lego-building task with their mothers, while conversational speech was recorded. Mean F0, pitch range, pitch variability, and amount of spontaneous speech were calculated for each child and their mother.
RESULTS: The results indicated no differences in speech characteristics across autistic and non-autistic children, or across their mothers, suggesting that conversational context may have large effects on whether differences between autistic and non-autistic populations are found. However, variability in social competence within the group of non-autistic children (but not within autistic children) was predictive of children's mean F0, pitch range and pitch variability. The amount of spontaneous speech produced by mothers (but not their prosody) predicted their autistic children's social competence, which may suggest a heightened impact of scaffolding for mothers of autistic children.
CONCLUSION: Together, results suggest complex interactions between context, social competence, and adaptive parenting strategies in driving prosodic differences in children's speech.
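The prosodic measures named in the methods (mean F0, pitch range, pitch variability) can be summarized from a single F0 track. The definitions below are common operationalizations and an assumption on my part, not the paper's exact formulas; the F0 values are invented:

```python
import numpy as np

def pitch_features(f0_hz):
    """Summarize a voiced-frame F0 track (Hz).

    Assumed conventions: unvoiced frames are coded as 0 and excluded;
    range is max minus min; variability is the standard deviation.
    """
    f0 = np.asarray(f0_hz, dtype=float)
    f0 = f0[f0 > 0]                      # drop unvoiced frames (F0 = 0)
    return {
        "mean_f0": f0.mean(),
        "pitch_range": f0.max() - f0.min(),
        "pitch_variability": f0.std(),   # one common operationalization
    }

# Hypothetical F0 track (Hz per frame) for one child speaker
track = [0, 250, 260, 0, 270, 240, 255, 0]
print(pitch_features(track))
```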
Affiliation(s)
- Alyssa Janes: Graduate Program in Health and Rehabilitation Sciences, Western University, 1151 Richmond Street, London, ON, N6A 3K7, Canada; School of Communication Sciences and Disorders, Western University, 1151 Richmond Street, London, ON, N6A 3K7, Canada
- Elise McClay: Department of Linguistics, Simon Fraser University, 8888 University Drive, Burnaby, BC, V5A 1S6, Canada
- Mandeep Gurm: Department of Psychology, Simon Fraser University, 8888 University Drive, Burnaby, BC, V5A 1S6, Canada
- Troy Q Boucher: Department of Psychology, Simon Fraser University, 8888 University Drive, Burnaby, BC, V5A 1S6, Canada
- H Henny Yeung: Department of Linguistics, Simon Fraser University, 8888 University Drive, Burnaby, BC, V5A 1S6, Canada
- Grace Iarocci: Department of Psychology, Simon Fraser University, 8888 University Drive, Burnaby, BC, V5A 1S6, Canada
- Nichole E Scheerer: Psychology Department, Wilfrid Laurier University, 75 University Ave W, Waterloo, ON, N2L3C5, Canada
3. Guldner S, Lavan N, Lally C, Wittmann L, Nees F, Flor H, McGettigan C. Human talkers change their voices to elicit specific trait percepts. Psychon Bull Rev 2024; 31:209-222. PMID: 37507647; PMCID: PMC10866754; DOI: 10.3758/s13423-023-02333-y.
Abstract
The voice is a variable and dynamic social tool with functional relevance for self-presentation, for example, during a job interview or courtship. Talkers adjust their voices flexibly to their situational or social environment. Here, we investigated how effectively intentional voice modulations can evoke trait impressions in listeners (Experiment 1), whether these trait impressions are recognizable (Experiment 2), and whether they meaningfully influence social interactions (Experiment 3). We recorded 40 healthy adult speakers whilst speaking neutrally and whilst producing vocal expressions of six social traits (e.g., likeability, confidence). Multivariate ratings of 40 listeners showed that vocal modulations amplified specific trait percepts (Experiments 1 and 2), which could be explained by two principal components relating to perceived affiliation and competence. Moreover, vocal modulations increased the likelihood of listeners choosing the voice to be suitable for corresponding social goals (i.e., a confident rather than likeable voice to negotiate a promotion, Experiment 3). These results indicate that talkers modulate their voice along a common trait space for social navigation. Moreover, beyond reactive voice changes, vocal behaviour can be strategically used by talkers to communicate subtle information about themselves to listeners. These findings advance our understanding of non-verbal vocal behaviour for social communication.
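The reduction reported here, two principal components underlying multivariate trait ratings, can be illustrated with an SVD-based PCA on a toy rating matrix. The data, trait labels, and factor structure below are invented for illustration; the study used 40 listeners' ratings of six traits:

```python
import numpy as np

# Toy recording-by-trait rating matrix: rows are 6 voice recordings,
# columns are mean ratings on six traits. Ratings are simulated from
# two latent factors ("affiliation", "competence") plus small noise.
rng = np.random.default_rng(1)
affiliation = rng.normal(size=6)
competence = rng.normal(size=6)
traits = np.column_stack([
    affiliation + 0.1 * rng.normal(size=6),   # e.g., likeable
    affiliation + 0.1 * rng.normal(size=6),   # e.g., trustworthy
    affiliation + 0.1 * rng.normal(size=6),   # e.g., friendly
    competence + 0.1 * rng.normal(size=6),    # e.g., confident
    competence + 0.1 * rng.normal(size=6),    # e.g., dominant
    competence + 0.1 * rng.normal(size=6),    # e.g., intelligent
])

# PCA via SVD of the centered matrix; explained variance per component
X = traits - traits.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)
print(explained[:2])  # two components should dominate, as in the study
```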
Affiliation(s)
- Stella Guldner: Department of Child and Adolescent Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany; Institute of Cognitive and Clinical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Nadine Lavan: Department of Psychology, Queen Mary University of London, London, UK
- Clare Lally: Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK
- Lisa Wittmann: Institute of Psychology, University of Regensburg, Regensburg, Germany
- Frauke Nees: Institute of Medical Psychology and Medical Sociology, University Medical Centre Schleswig Holstein, Kiel University, Kiel, Germany
- Herta Flor: Institute of Cognitive and Clinical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Carolyn McGettigan: Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK
4. Hauptman M, Blank I, Fedorenko E. Non-literal language processing is jointly supported by the language and theory of mind networks: Evidence from a novel meta-analytic fMRI approach. Cortex 2023; 162:96-114. PMID: 37023480; PMCID: PMC10210011; DOI: 10.1016/j.cortex.2023.01.013.
Abstract
Going beyond the literal meaning of language is key to communicative success. However, the mechanisms that support non-literal inferences remain debated. Using a novel meta-analytic approach, we evaluate the contribution of linguistic, social-cognitive, and executive mechanisms to non-literal interpretation. We identified 74 fMRI experiments (n = 1,430 participants) from 2001 to 2021 that contrasted non-literal language comprehension with a literal control condition, spanning ten phenomena (e.g., metaphor, irony, indirect speech). Applying the activation likelihood estimation approach to the 825 activation peaks yielded six left-lateralized clusters. We then evaluated the locations of both the individual-study peaks and the clusters against probabilistic functional atlases (cf. anatomical locations, as is typically done) for three candidate brain networks-the language-selective network (Fedorenko, Behr, & Kanwisher, 2011), which supports language processing, the Theory of Mind (ToM) network (Saxe & Kanwisher, 2003), which supports social inferences, and the domain-general Multiple-Demand (MD) network (Duncan, 2010), which supports executive control. These atlases were created by overlaying individual activation maps of participants who performed robust and extensively validated 'localizer' tasks that selectively target each network in question (n = 806 for language; n = 198 for ToM; n = 691 for MD). We found that both the individual-study peaks and the ALE clusters fell primarily within the language network and the ToM network. These results suggest that non-literal processing is supported by both i) mechanisms that process literal linguistic meaning, and ii) mechanisms that support general social inference. They thus undermine a strong divide between literal and non-literal aspects of language and challenge the claim that non-literal processing requires additional executive resources.
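The atlas-based evaluation described here, locating activation peaks within probabilistic functional maps rather than anatomical labels, reduces at its core to a per-peak lookup. The grid size, threshold, and coordinates below are hypothetical simplifications, not the study's actual atlases or parameters:

```python
import numpy as np

def fraction_in_network(peaks_ijk, prob_map, threshold=0.1):
    """Fraction of activation peaks inside a thresholded network mask.

    peaks_ijk: (n, 3) integer voxel indices on the same grid as prob_map.
    prob_map: 3-D probabilistic network map (values in [0, 1]).
    threshold: assumed cutoff for counting a voxel as "in network".
    """
    peaks = np.asarray(peaks_ijk)
    inside = prob_map[peaks[:, 0], peaks[:, 1], peaks[:, 2]] > threshold
    return inside.mean()

# Hypothetical 4x4x4 "language network" probability map and three peaks
prob = np.zeros((4, 4, 4))
prob[0:2, 0:2, 0:2] = 0.8                    # high-probability region
peaks = [(0, 0, 0), (1, 1, 1), (3, 3, 3)]    # two inside, one outside
print(fraction_in_network(peaks, prob))      # 2 of 3 peaks inside
```

In practice this lookup would be done in a common stereotactic space (e.g., after transforming peak coordinates to the atlas grid), which the sketch omits.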
Affiliation(s)
- Miriam Hauptman: Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, USA; McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, USA; Department of Psychological & Brain Sciences, Johns Hopkins University, Baltimore, MD 21218, USA
- Idan Blank: Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, USA; McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, USA; Department of Psychology, UCLA, Los Angeles, CA 90095, USA; Department of Linguistics, UCLA, Los Angeles, CA 90095, USA
- Evelina Fedorenko: Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, USA; McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, USA; Program in Speech and Hearing in Bioscience and Technology, Harvard University, Boston, MA 02114, USA
5. Benetti S, Ferrari A, Pavani F. Multimodal processing in face-to-face interactions: A bridging link between psycholinguistics and sensory neuroscience. Front Hum Neurosci 2023; 17:1108354. PMID: 36816496; PMCID: PMC9932987; DOI: 10.3389/fnhum.2023.1108354.
Abstract
In face-to-face communication, humans are faced with multiple layers of discontinuous multimodal signals, such as head, face, hand gestures, speech and non-speech sounds, which need to be interpreted as coherent and unified communicative actions. This implies a fundamental computational challenge: optimally binding only signals belonging to the same communicative action while segregating signals that are not connected by the communicative content. How do we achieve such an extraordinary feat, reliably, and efficiently? To address this question, we need to further move the study of human communication beyond speech-centred perspectives and promote a multimodal approach combined with interdisciplinary cooperation. Accordingly, we seek to reconcile two explanatory frameworks recently proposed in psycholinguistics and sensory neuroscience into a neurocognitive model of multimodal face-to-face communication. First, we introduce a psycholinguistic framework that characterises face-to-face communication at three parallel processing levels: multiplex signals, multimodal gestalts and multilevel predictions. Second, we consider the recent proposal of a lateral neural visual pathway specifically dedicated to the dynamic aspects of social perception and reconceive it from a multimodal perspective ("lateral processing pathway"). Third, we reconcile the two frameworks into a neurocognitive model that proposes how multiplex signals, multimodal gestalts, and multilevel predictions may be implemented along the lateral processing pathway. Finally, we advocate a multimodal and multidisciplinary research approach, combining state-of-the-art imaging techniques, computational modelling and artificial intelligence for future empirical testing of our model.
Affiliation(s)
- Stefania Benetti: Centre for Mind/Brain Sciences, University of Trento, Trento, Italy; Interuniversity Research Centre “Cognition, Language, and Deafness”, CIRCLeS, Catania, Italy
- Ambra Ferrari: Max Planck Institute for Psycholinguistics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, Netherlands
- Francesco Pavani: Centre for Mind/Brain Sciences, University of Trento, Trento, Italy; Interuniversity Research Centre “Cognition, Language, and Deafness”, CIRCLeS, Catania, Italy
6. Tomasello R. Linguistic signs in action: The neuropragmatics of speech acts. Brain Lang 2023; 236:105203. PMID: 36470125; PMCID: PMC9856589; DOI: 10.1016/j.bandl.2022.105203.
Abstract
What makes human communication exceptional is the ability to grasp speaker's intentions beyond what is said verbally. How the brain processes communicative functions is one of the central concerns of the neurobiology of language and pragmatics. Linguistic-pragmatic theories define these functions as speech acts, and various pragmatic traits characterise them at the levels of propositional content, action sequence structure, related commitments and social aspects. Here I discuss recent neurocognitive studies, which have shown that the use of identical linguistic signs in conveying different communicative functions elicits distinct and ultra-rapid neural responses. Interestingly, cortical areas show differential involvement underlying various pragmatic features related to theory-of-mind, emotion and action for specific speech acts expressed with the same utterances. Drawing on a neurocognitive model, I posit that understanding speech acts involves the expectation of typical partner follow-up actions and that this predictive knowledge is immediately reflected in mind and brain.
Affiliation(s)
- Rosario Tomasello: Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, 14195 Berlin, Germany; Cluster of Excellence 'Matters of Activity. Image Space Material', Humboldt Universität zu Berlin, 10099 Berlin, Germany
7. Chang W, Wang L, Yang R, Wang X, Gao Z, Zhou X. Representing linguistic communicative functions in the premotor cortex. Cereb Cortex 2022; 33:5671-5689. PMID: 36437790; DOI: 10.1093/cercor/bhac451.
Abstract
Linguistic communication is often regarded as an action that serves a function to convey the speaker's goal to the addressee. Here, with a functional magnetic resonance imaging (fMRI) study and a lesion study, we demonstrated that communicative functions are represented in the human premotor cortex. Participants read scripts involving 2 interlocutors. Each script contained a critical sentence said by the speaker with a communicative function of either making a Promise, a Request, or a Reply to the addressee's query. With various preceding contexts, the critical sentences were supposed to induce neural activities associated with communicative functions rather than specific actions literally described by these sentences. The fMRI results showed that the premotor cortex contained more information, as revealed by multivariate analyses, on communicative functions and relevant interlocutors' attitudes than the perisylvian language regions. The lesion study results showed that, relative to healthy controls, the understanding of communicative functions was impaired in patients with lesions in the premotor cortex, whereas no reliable difference was observed between the healthy controls and patients with lesions in other brain regions. These findings convergently suggest the crucial role of the premotor cortex in representing the functions of linguistic communications, supporting that linguistic communication can be seen as an action.
Affiliation(s)
- Wenshuo Chang: Institute of Linguistics, Shanghai International Studies University, 1550 Wenxiang Road, Shanghai 201620, China; Beijing Key Laboratory of Behavior and Mental Health, School of Psychological and Cognitive Sciences, Peking University, 5 Yiheyuan Road, Beijing 100871, China
- Lihui Wang: Institute of Psychology and Behavioral Science, Shanghai Jiao Tong University, 1954 Huashan Road, Shanghai 200030, China; Shanghai Key Laboratory of Psychotic Disorders, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, 600 Wan Ping Nan Road, Shanghai 200030, China; Shanghai Center for Brain Science and Brain-Inspired Intelligence Technology, 555 Qiangye Road, Shanghai 200125, China
- Ruolin Yang: Beijing Key Laboratory of Behavior and Mental Health, School of Psychological and Cognitive Sciences, Peking University, 5 Yiheyuan Road, Beijing 100871, China; Beijing Neurosurgical Institute, Capital Medical University, 119 South Fourth Ring West Road, Beijing 100070, China; Peking-Tsinghua Center for Life Sciences, Peking University, 5 Yiheyuan Road, Beijing 100871, China; IDG/McGovern Institute for Brain Research, Peking University, 5 Yiheyuan Road, Beijing 100871, China
- Xingchao Wang: Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, 119 South Fourth Ring West Road, Beijing 100070, China; China National Clinical Research Center for Neurological Diseases, 119 South Fourth Ring West Road, Beijing 100070, China
- Zhixian Gao: Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, 119 South Fourth Ring West Road, Beijing 100070, China; China National Clinical Research Center for Neurological Diseases, 119 South Fourth Ring West Road, Beijing 100070, China
- Xiaolin Zhou: Institute of Linguistics, Shanghai International Studies University, 1550 Wenxiang Road, Shanghai 201620, China; Beijing Key Laboratory of Behavior and Mental Health, School of Psychological and Cognitive Sciences, Peking University, 5 Yiheyuan Road, Beijing 100871, China; IDG/McGovern Institute for Brain Research, Peking University, 5 Yiheyuan Road, Beijing 100871, China; Shanghai Key Laboratory of Mental Health and Psychological Crisis Intervention, School of Psychology and Cognitive Science, East China Normal University, 3663 North Zhongshan Road, Shanghai 200062, China
8. Bendtz K, Ericsson S, Schneider J, Borg J, Bašnáková J, Uddén J. Individual Differences in Indirect Speech Act Processing Found Outside the Language Network. Neurobiol Lang (Camb) 2022; 3:287-317. PMID: 37215561; PMCID: PMC10158615; DOI: 10.1162/nol_a_00066.
Abstract
Face-to-face communication requires skills that go beyond core language abilities. In dialogue, we routinely make inferences beyond the literal meaning of utterances and distinguish between different speech acts based on, e.g., contextual cues. It is, however, not known whether such communicative skills potentially overlap with core language skills or other capacities, such as theory of mind (ToM). In this functional magnetic resonance imaging (fMRI) study we investigate these questions by capitalizing on individual variation in pragmatic skills in the general population. Based on behavioral data from 199 participants, we selected participants with higher vs. lower pragmatic skills for the fMRI study (N = 57). In the scanner, participants listened to dialogues including a direct or an indirect target utterance. The paradigm allowed participants at the whole group level to (passively) distinguish indirect from direct speech acts, as evidenced by a robust activity difference between these speech acts in an extended language network including ToM areas. Individual differences in pragmatic skills modulated activation in two additional regions outside the core language regions (one cluster in the left lateral parietal cortex and intraparietal sulcus and one in the precuneus). The behavioral results indicate segregation of pragmatic skill from core language and ToM. In conclusion, contextualized and multimodal communication requires a set of interrelated pragmatic processes that are neurocognitively segregated: (1) from core language and (2) partly from ToM.
Affiliation(s)
- Julia Borg: Department of Psychology, Stockholm University, Sweden
- Jana Bašnáková: Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands; Institute of Experimental Psychology, Centre of Social and Psychological Sciences SAS, Slovakia
- Julia Uddén: Department of Psychology, Stockholm University, Sweden; Department of Linguistics, Stockholm University, Sweden
9. Tomasello R, Grisoni L, Boux I, Sammler D, Pulvermüller F. Instantaneous neural processing of communicative functions conveyed by speech prosody. Cereb Cortex 2022; 32:4885-4901. PMID: 35136980; PMCID: PMC9626830; DOI: 10.1093/cercor/bhab522.
Abstract
During conversations, speech prosody provides important clues about the speaker’s communicative intentions. In many languages, a rising vocal pitch at the end of a sentence typically expresses a question function, whereas a falling pitch suggests a statement. Here, the neurophysiological basis of intonation and speech act understanding were investigated with high-density electroencephalography (EEG) to determine whether prosodic features are reflected at the neurophysiological level. Already approximately 100 ms after the sentence-final word, which differed in prosody, questions and statements expressed with the same sentences led to different neurophysiological activity recorded in the event-related potential. Interestingly, low-pass filtered sentences and acoustically matched nonvocal musical signals failed to show any neurophysiological dissociations, thus suggesting that the physical intonation alone cannot explain this modulation. Our results show rapid neurophysiological indexes of prosodic communicative information processing that emerge only when pragmatic and lexico-semantic information are fully expressed. The early enhancement of question-related activity compared with statements was due to sources in the articulatory-motor region, which may reflect the richer action knowledge immanent to questions, namely the expectation of the partner action of answering the question. The present findings demonstrate a neurophysiological correlate of prosodic communicative information processing, which enables humans to rapidly detect and understand speaker intentions in linguistic interactions.
Affiliation(s)
- Rosario Tomasello: Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, Habelschwerdter Allee 45, 14195 Berlin, Germany
- Luigi Grisoni: Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, 14195 Berlin, Germany; Cluster of Excellence ‘Matters of Activity. Image Space Material’, Humboldt Universität zu Berlin, 10099 Berlin, Germany
- Isabella Boux: Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, 14195 Berlin, Germany; Berlin School of Mind and Brain, Humboldt Universität zu Berlin, 10117 Berlin, Germany; Einstein Center for Neurosciences, 10117 Berlin, Germany
- Daniela Sammler: Research Group ‘Neurocognition of Music and Language’, Max Planck Institute for Empirical Aesthetics, 60322 Frankfurt am Main, Germany; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, 04103 Leipzig, Germany
- Friedemann Pulvermüller: Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, 14195 Berlin, Germany; Cluster of Excellence ‘Matters of Activity. Image Space Material’, Humboldt Universität zu Berlin, 10099 Berlin, Germany; Berlin School of Mind and Brain, Humboldt Universität zu Berlin, 10117 Berlin, Germany; Einstein Center for Neurosciences, 10117 Berlin, Germany
10. Di Cesare G, Cuccio V, Marchi M, Sciutti A, Rizzolatti G. Communicative and Affective Components in Processing Auditory Vitality Forms: An fMRI Study. Cereb Cortex 2021; 32:909-918. PMID: 34428292; PMCID: PMC8889944; DOI: 10.1093/cercor/bhab255.
Abstract
In previous studies on auditory vitality forms, we found that listening to action verbs pronounced gently or rudely produced, relative to a neutral robotic voice, activation of the dorso-central insula. One might wonder whether this insular activation depends on the conjunction of action verbs and auditory vitality forms, or whether auditory vitality forms are sufficient per se to activate the insula. To solve this issue, we presented words not related to actions, such as concrete nouns (e.g., “ball”), pronounced gently or rudely. No activation of the dorso-central insula was found. As a further step, we examined whether interjections, i.e., speech stimuli conveying communicative intention (e.g., “hello”), pronounced with different vitality forms, would be able to activate, relative to control, the insula. The results showed that stimuli conveying a communicative intention, pronounced with different auditory vitality forms, activate the dorso-central insula. These data deepen our understanding of vitality forms processing, showing that insular activation is not specific to action verbs, but can also be elicited by speech acts conveying communicative intention such as interjections. These findings also show the intrinsic social nature of vitality forms, because activation of the insula was not observed in the absence of a communicative intention.
Affiliation(s)
- G Di Cesare: Italian Institute of Technology, Cognitive Architecture for Collaborative Technologies Unit, Genova, Italy
- V Cuccio: Department of Cognitive Science, Psychology, Education and Cultural Studies, University of Messina, Messina, Italy
- M Marchi: Department of Computer Science, University of Milan, Milan, Italy
- A Sciutti: Italian Institute of Technology, Cognitive Architecture for Collaborative Technologies Unit, Genova, Italy
- G Rizzolatti: Istituto di Neuroscienze, Consiglio Nazionale delle Ricerche, Parma, Italy
11. Kroczek LOH, Gunter TC. The time course of speaker-specific language processing. Cortex 2021; 141:311-321. PMID: 34118750; DOI: 10.1016/j.cortex.2021.04.017.
Abstract
Listeners are sensitive to a speaker's individual language use and generate expectations for particular speakers. It is unclear, however, how such expectations affect online language processing. In the present EEG study, we presented thirty-two participants with auditory sentence stimuli of two speakers. Speakers differed in their use of two particular syntactic structures, easy subject-initial SOV structures and more difficult object-initial OSV structures. One speaker, the SOV-Speaker, had a high proportion of SOV sentences (75%) and a low proportion of OSV sentences (25%), and vice-versa for the OSV-Speaker. Participants were exposed to the speakers' individual language use in a training session followed by a test session on the consecutive day. ERP-results show that early stages of sentence processing are driven by syntactic processing only and are unaffected by speaker-specific expectations. In a late stage, however, an interaction between speaker and syntax information was observed. For the SOV-Speaker condition, the classical P600-effect reflected the effort of processing difficult and unexpected sentence structures. For the OSV-Speaker condition, both structures elicited different responses on frontal electrodes, possibly indexing effort to switch from a local speaker model to a global model of language use. Overall, the study identifies distinct neural mechanisms related to speaker-specific expectations.
Affiliation(s)
- Leon O H Kroczek
- Department of Clinical Psychology and Psychotherapy, University of Regensburg, Germany.
- Thomas C Gunter
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Germany
12
Arioli M, Basso G, Poggi P, Canessa N. Fronto-temporal brain activity and connectivity track implicit attention to positive and negative social words in a novel socio-emotional Stroop task. Neuroimage 2020; 226:117580. PMID: 33221447. DOI: 10.1016/j.neuroimage.2020.117580.
Abstract
Previous inconsistencies on the effects of implicitly processing positively - vs. negatively - connotated emotional words might reflect the influence of uncontrolled psycholinguistic dimensions, and/or social facets inherent in putative "emotional" stimuli. Based on the relevance of social features in semantic cognition, we developed a socio-emotional Stroop task to assess the influence of social vs. individual (non-social) emotional content, besides negative vs. positive valence, on implicit word processing. The effect of these variables was evaluated in terms of performance and RTs, alongside associated brain activity/connectivity. We matched conditions for several psycholinguistic variables, and assessed a modulation of brain activity/connectivity by trial-wise RT, to characterize the maximum of condition- and subject-specific variability. RTs were tracked by insular and anterior cingulate activations likely reflecting implicit attention to stimuli, interfering with task-performance based on condition-specific processing of their subjective salience. Slower performance for negative than neutral/positive words was tracked by left-hemispheric structures processing negative stimuli and emotions, such as fronto-insular cortex, while the lack of specific activations for positively-connotated words supported their marginal facilitatory effect. The speeding/slowing effects of processing positive/negative individual emotional stimuli were enhanced by social words, reflecting in specific activations of the right anterior temporal and orbitofrontal cortex, respectively. RTs to social positive and negative words modulated connectivity from these regions to fronto-striatal and sensorimotor structures, respectively, likely promoting approach vs. avoidance dispositions shaping their facilitatory vs. inhibitory effect. These results might help assessing the neural correlates of impaired social cognition and emotional regulation, and the effects of rehabilitative interventions.
Affiliation(s)
- Maria Arioli
- NEtS center, Scuola Universitaria Superiore IUSS, Pavia 27100, Italy; Cognitive Neuroscience Laboratory, Istituti Clinici Scientifici Maugeri IRCCS, Pavia 27100, Italy
- Gianpaolo Basso
- Cognitive Neuroscience Laboratory, Istituti Clinici Scientifici Maugeri IRCCS, Pavia 27100, Italy; University of Milano-Bicocca, Milan 20126, Italy
- Paolo Poggi
- Radiology Unit, Istituti Clinici Scientifici Maugeri IRCCS, Pavia 27100, Italy
- Nicola Canessa
- NEtS center, Scuola Universitaria Superiore IUSS, Pavia 27100, Italy; Cognitive Neuroscience Laboratory, Istituti Clinici Scientifici Maugeri IRCCS, Pavia 27100, Italy.
13
Guldner S, Nees F, McGettigan C. Vocomotor and Social Brain Networks Work Together to Express Social Traits in Voices. Cereb Cortex 2020; 30:6004-6020. PMID: 32577719. DOI: 10.1093/cercor/bhaa175.
Abstract
Voice modulation is important when navigating social interactions-tone of voice in a business negotiation is very different from that used to comfort an upset child. While voluntary vocal behavior relies on a cortical vocomotor network, social voice modulation may require additional social cognitive processing. Using functional magnetic resonance imaging, we investigated the neural basis for social vocal control and whether it involves an interplay of vocal control and social processing networks. Twenty-four healthy adult participants modulated their voice to express social traits along the dimensions of the social trait space (affiliation and competence) or to express body size (control for vocal flexibility). Naïve listener ratings showed that vocal modulations were effective in evoking social trait ratings along the two primary dimensions of the social trait space. Whereas basic vocal modulation engaged the vocomotor network, social voice modulation specifically engaged social processing regions including the medial prefrontal cortex, superior temporal sulcus, and precuneus. Moreover, these regions showed task-relevant modulations in functional connectivity to the left inferior frontal gyrus, a core vocomotor control network area. These findings highlight the impact of the integration of vocal motor control and social information processing for socially meaningful voice modulation.
Affiliation(s)
- Stella Guldner
- Department of Cognitive and Clinical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim 68159, Germany; Graduate School of Economic and Social Sciences, University of Mannheim, Mannheim 68159, Germany; Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK
- Frauke Nees
- Department of Cognitive and Clinical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim 68159, Germany; Institute of Medical Psychology and Medical Sociology, University Medical Center Schleswig Holstein, Kiel University, Kiel 24105, Germany
- Carolyn McGettigan
- Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK; Department of Psychology, Royal Holloway, University of London, Egham TW20 0EX, UK
14
Chien PJ, Friederici AD, Hartwigsen G, Sammler D. Intonation processing increases task-specific fronto-temporal connectivity in tonal language speakers. Hum Brain Mapp 2020; 42:161-174. PMID: 32996647. PMCID: PMC7721241. DOI: 10.1002/hbm.25214.
Abstract
Language comprehension depends on tight functional interactions between distributed brain regions. While these interactions are established for semantic and syntactic processes, the functional network of speech intonation – the linguistic variation of pitch – has been scarcely defined. Particularly little is known about intonation in tonal languages, in which pitch not only serves intonation but also expresses meaning via lexical tones. The present study used psychophysiological interaction analyses of functional magnetic resonance imaging data to characterise the neural networks underlying intonation and tone processing in native Mandarin Chinese speakers. Participants categorised either intonation or tone of monosyllabic Mandarin words that gradually varied between statement and question and between Tone 2 and Tone 4. Intonation processing induced bilateral fronto‐temporal activity and increased functional connectivity between left inferior frontal gyrus and bilateral temporal regions, likely linking auditory perception and labelling of intonation categories in a phonological network. Tone processing induced bilateral temporal activity, associated with the auditory representation of tonal (phonemic) categories. Together, the present data demonstrate the breadth of the functional intonation network in a tonal language including higher‐level phonological processes in addition to auditory representations common to both intonation and tone.
Affiliation(s)
- Pei-Ju Chien
- International Max Planck Research School NeuroCom, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Otto Hahn Group 'Neural Bases of Intonation in Speech and Music', Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Lise Meitner Research Group 'Cognition and Plasticity', Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Angela D Friederici
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Gesa Hartwigsen
- Lise Meitner Research Group 'Cognition and Plasticity', Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Daniela Sammler
- Otto Hahn Group 'Neural Bases of Intonation in Speech and Music', Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
15
Chien PJ, Friederici AD, Hartwigsen G, Sammler D. Neural correlates of intonation and lexical tone in tonal and non-tonal language speakers. Hum Brain Mapp 2020; 41:1842-1858. PMID: 31957928. PMCID: PMC7268089. DOI: 10.1002/hbm.24916.
Abstract
Intonation, the modulation of pitch in speech, is a crucial aspect of language that is processed in right‐hemispheric regions, beyond the classical left‐hemispheric language system. Whether or not this notion generalises across languages remains, however, unclear. Particularly, tonal languages are an interesting test case because of the dual linguistic function of pitch that conveys lexical meaning in form of tone, in addition to intonation. To date, only few studies have explored how intonation is processed in tonal languages, how this compares to tone and between tonal and non‐tonal language speakers. The present fMRI study addressed these questions by testing Mandarin and German speakers with Mandarin material. Both groups categorised mono‐syllabic Mandarin words in terms of intonation, tone, and voice gender. Systematic comparisons of brain activity of the two groups between the three tasks showed large cross‐linguistic commonalities in the neural processing of intonation in left fronto‐parietal, right frontal, and bilateral cingulo‐opercular regions. These areas are associated with general phonological, specific prosodic, and controlled categorical decision‐making processes, respectively. Tone processing overlapped with intonation processing in left fronto‐parietal areas, in both groups, but evoked additional activity in bilateral temporo‐parietal semantic regions and subcortical areas in Mandarin speakers only. Together, these findings confirm cross‐linguistic commonalities in the neural implementation of intonation processing but dissociations for semantic processing of tone only in tonal language speakers.
Affiliation(s)
- Pei-Ju Chien
- International Max Planck Research School NeuroCom, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Otto Hahn Group "Neural Bases of Intonation in Speech and Music", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Lise Meitner Research Group "Cognition and Plasticity", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Angela D Friederici
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Gesa Hartwigsen
- Lise Meitner Research Group "Cognition and Plasticity", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Daniela Sammler
- Otto Hahn Group "Neural Bases of Intonation in Speech and Music", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
16
Ohtsubo Y, Matsunaga M, Himichi T, Suzuki K, Shibata E, Hori R, Umemura T, Ohira H. Costly group apology communicates a group's sincere "intention". Soc Neurosci 2019; 15:244-254. PMID: 31762397. DOI: 10.1080/17470919.2019.1697745.
Abstract
Groups, such as governments and organizations, apologize for their misconduct. In the interpersonal context, the forgiveness-fostering effect of apologies is pronounced when apologizing entails some cost (e.g., compensating damage, canceling a favorite activity to prioritize the apology) because costly apologies tend to be perceived as more sincere than non-costly apologies (e.g., merely saying "sorry"). Since groups lack a mental state (e.g., sincere intention), this could arguably render a group apology ineffective. This research investigated the possibility that people ascribe intention to group agents and that offering a costly group apology is an effective means of fostering perceived sincerity. A vignette study (Pilot Study) showed that costly group apologies tend to be perceived as more sincere than non-costly group apologies. A subsequent functional magnetic resonance imaging study revealed that costly group apologies engaged the bilateral temporoparietal junction and precuneus more so than non-costly group apologies and no apology did. The bilateral temporoparietal junction and precuneus have been implicated in the reasoning of social/communicative intention. Therefore, these results suggest that although a group mind does not exist, people ascribe a mental state (i.e., sincere intention) to a group especially when the group issues a costly apology after committing some transgression.
Affiliation(s)
- Yohsuke Ohtsubo
- Department of Psychology, Graduate School of Humanities, Kobe University, Kobe, Japan
- Masahiro Matsunaga
- Department of Health and Psychosocial Medicine, Aichi Medical University School of Medicine, Nagakute, Japan
- Toshiyuki Himichi
- School of Economics and Management, Kochi University of Technology, Kochi, Japan
- Kohta Suzuki
- Department of Health and Psychosocial Medicine, Aichi Medical University School of Medicine, Nagakute, Japan
- Eiji Shibata
- Department of Health and Psychosocial Medicine, Aichi Medical University School of Medicine, Nagakute, Japan
- Reiko Hori
- Department of Health and Psychosocial Medicine, Aichi Medical University School of Medicine, Nagakute, Japan
- Tomohiro Umemura
- Department of Health and Psychosocial Medicine, Aichi Medical University School of Medicine, Nagakute, Japan
- Hideki Ohira
- Department of Psychology, Graduate School of Informatics, Nagoya University, Nagoya, Japan
17
Reybrouck M, Podlipniak P. Preconceptual Spectral and Temporal Cues as a Source of Meaning in Speech and Music. Brain Sci 2019; 9:E53. PMID: 30832292. PMCID: PMC6468545. DOI: 10.3390/brainsci9030053.
Abstract
This paper explores the importance of preconceptual meaning in speech and music, stressing the role of affective vocalizations as a common ancestral instrument in communicative interactions. Speech and music are sensory rich stimuli, both at the level of production and perception, which involve different body channels, mainly the face and the voice. However, this bimodal approach has been challenged as being too restrictive. A broader conception argues for an action-oriented embodied approach that stresses the reciprocity between multisensory processing and articulatory-motor routines. There is, however, a distinction between language and music, with the latter being largely unable to function referentially. Contrary to the centrifugal tendency of language to direct the attention of the receiver away from the text or speech proper, music is centripetal in directing the listener's attention to the auditory material itself. Sound, therefore, can be considered as the meeting point between speech and music and the question can be raised as to the shared components between the interpretation of sound in the domain of speech and music. In order to answer these questions, this paper elaborates on the following topics: (i) The relationship between speech and music with a special focus on early vocalizations in humans and non-human primates; (ii) the transition from sound to meaning in speech and music; (iii) the role of emotion and affect in early sound processing; (iv) vocalizations and nonverbal affect burst in communicative sound comprehension; and (v) the acoustic features of affective sound with a special emphasis on temporal and spectrographic cues as parts of speech prosody and musical expressiveness.
Affiliation(s)
- Mark Reybrouck
- Musicology Research Group, KU Leuven–University of Leuven, 3000 Leuven, Belgium and IPEM–Department of Musicology, Ghent University, 9000 Ghent, Belgium.
- Piotr Podlipniak
- Institute of Musicology, Adam Mickiewicz University in Poznań, ul. Umultowska 89D, 61-614 Poznań, Poland.
18
Jiang X, Sanford R, Pell MD. Neural architecture underlying person perception from in-group and out-group voices. Neuroimage 2018; 181:582-597. DOI: 10.1016/j.neuroimage.2018.07.042.