1
Zai AT, Stepien AE, Giret N, Hahnloser RHR. Goal-directed vocal planning in a songbird. eLife 2024; 12:RP90445. [PMID: 38959057] [PMCID: PMC11221833] [DOI: 10.7554/elife.90445]
Abstract
Songbirds' vocal mastery is impressive, but to what extent is it a result of practice? Can they, based on an experienced mismatch with a known target, plan the necessary changes to recover the target in a practice-free manner, without intermittently singing? In adult zebra finches, we drive the pitch of a song syllable away from its stable (baseline) variant acquired from a tutor, then we withdraw reinforcement and subsequently deprive them of singing experience by muting or deafening. In this deprived state, birds do not recover their baseline song. However, they revert their songs toward the target by about one standard deviation of their recent practice, provided the sensory feedback during that recent practice signaled a pitch mismatch with the target. Thus, targeted vocal plasticity does not require immediate sensory experience, showing that zebra finches are capable of goal-directed vocal planning.
Affiliation(s)
- Anja T Zai
- Neuroscience Center Zurich (ZNZ), University of Zurich and ETH Zurich, Zurich, Switzerland
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Anna E Stepien
- Neuroscience Center Zurich (ZNZ), University of Zurich and ETH Zurich, Zurich, Switzerland
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Nicolas Giret
- Institut des Neurosciences Paris-Saclay, UMR 9197 CNRS, Université Paris-Saclay, Saclay, France
- Richard HR Hahnloser
- Neuroscience Center Zurich (ZNZ), University of Zurich and ETH Zurich, Zurich, Switzerland
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
2
Dang Q, Ma F, Yuan Q, Fu Y, Chen K, Zhang Z, Lu C, Guo T. Processing negative emotion in two languages of bilinguals: Accommodation and assimilation of the neural pathways based on a meta-analysis. Cereb Cortex 2023:7133665. [PMID: 37083264] [DOI: 10.1093/cercor/bhad121]
Abstract
Numerous functional magnetic resonance imaging (fMRI) studies have examined the neural mechanisms of negative emotional words, but scarce evidence is available for the interactions among related brain regions from the functional brain connectivity perspective. Moreover, few studies have addressed the neural networks for negative word processing in bilinguals. To fill this gap, the current study examined the brain networks for processing negative words in the first language (L1) and the second language (L2) with Chinese-English bilinguals. To identify objective indicators associated with negative word processing, we first conducted a coordinate-based meta-analysis on contrasts between negative and neutral words (including 32 contrasts from 1589 participants) using the activation likelihood estimation method. Results showed that the left medial prefrontal cortex (mPFC), the left inferior frontal gyrus (IFG), the left posterior cingulate cortex (PCC), the left amygdala, the left inferior temporal gyrus (ITG), and the left thalamus were involved in processing negative words. Next, these six clusters were used as regions of interest in effective connectivity analyses using extended unified structural equation modeling to pinpoint the brain networks for bilingual negative word processing. Brain network results revealed two pathways for negative word processing in L1: a dorsal pathway consisting of the left IFG, the left mPFC, and the left PCC, and a ventral pathway involving the left amygdala, the left ITG, and the left thalamus. We further investigated the similarity and difference between brain networks for negative word processing in L1 and L2. The findings revealed similarities in the dorsal pathway, as well as differences primarily in the ventral pathway, indicating both neural assimilation and accommodation across processing negative emotion in two languages of bilinguals.
Affiliation(s)
- Qinpu Dang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Fengyang Ma
- School of Education, University of Cincinnati, Cincinnati, OH 45219, USA
- Qiming Yuan
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Yongben Fu
- The Psychological Education and Counseling Center, Huazhong Agricultural University, Wuhan 430070, China
- Keyue Chen
- Division of Psychology and Language Sciences, University College London, London WC1E 6BT, UK
- Zhaoqi Zhang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Chunming Lu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Center for Collaboration and Innovation in Brain and Learning Sciences, Beijing Normal University, Beijing 100875, China
- Taomei Guo
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Center for Collaboration and Innovation in Brain and Learning Sciences, Beijing Normal University, Beijing 100875, China
3
Westermann B, Lotze M, Varra L, Versteeg N, Domin M, Nicolet L, Obrist M, Klepzig K, Marbot L, Lämmler L, Fiedler K, Wattendorf E. When laughter arrests speech: fMRI-based evidence. Philos Trans R Soc Lond B Biol Sci 2022; 377:20210182. [PMID: 36126674] [PMCID: PMC9489293] [DOI: 10.1098/rstb.2021.0182]
Abstract
Who has not experienced that sensation of losing the power of speech owing to an involuntary bout of laughter? An investigation of this phenomenon affords an insight into the neuronal processes that underlie laughter. In our functional magnetic resonance imaging study, participants were made to laugh by tickling in a first condition; in a second, they were requested to produce vocal utterances while laughter was provoked by tickling. This investigation reveals increased neuronal activity in the sensorimotor cortex, the anterior cingulate gyrus, the insula, the nucleus accumbens, the hypothalamus and the periaqueductal grey for both conditions, thereby replicating the results of previous studies on ticklish laughter. However, further analysis indicates the activity in the emotion-associated regions to be lower when tickling is accompanied by voluntary vocalization. Here, a typical pattern of activation is identified, including the primary sensory cortex, a ventral area of the anterior insula and the ventral tegmental field, to which belongs the nucleus ambiguus, namely, the common effector organ for voluntary and involuntary vocalizations. During the conflictual voluntary-vocalization versus laughter experience, the laughter-triggering network appears to rely heavily on a sensory and a deep interoceptive analysis, as well as on motor effectors in the brainstem. This article is part of the theme issue ‘Cracking the laugh code: laughter through the lens of biology, psychology and neuroscience’.
Affiliation(s)
- B Westermann
- Department of Neurosurgery, University Hospital Basel, Basel, Switzerland
- M Lotze
- Faculty of Medicine, University of Greifswald, Greifswald, Germany
- L Varra
- Faculty of Science and Medicine, University of Fribourg, Fribourg, Switzerland
- N Versteeg
- Faculty of Science and Medicine, University of Fribourg, Fribourg, Switzerland
- M Domin
- Faculty of Medicine, University of Greifswald, Greifswald, Germany
- L Nicolet
- College of Health Sciences Fribourg, Fribourg, Switzerland
- M Obrist
- College of Health Sciences Fribourg, Fribourg, Switzerland
- K Klepzig
- College of Health Sciences Fribourg, Fribourg, Switzerland
- L Marbot
- Faculty of Science and Medicine, University of Fribourg, Fribourg, Switzerland
- L Lämmler
- Faculty of Science and Medicine, University of Fribourg, Fribourg, Switzerland
- K Fiedler
- Faculty of Science and Medicine, University of Fribourg, Fribourg, Switzerland
- E Wattendorf
- Faculty of Science and Medicine, University of Fribourg, Fribourg, Switzerland; College of Health Sciences Fribourg, Fribourg, Switzerland
4
Abstract
The human voice carries socially relevant information such as how authoritative, dominant, and attractive the speaker sounds. However, some speakers may be able to manipulate listeners by modulating the shape and size of their vocal tract to exaggerate certain characteristics of their voice. We analysed the veridical size of speakers’ vocal tracts using real-time magnetic resonance imaging as they volitionally modulated their voice to sound larger or smaller, the corresponding changes to the size implied by the acoustics of their voice, and their influence over the perceptions of listeners. Individual differences in this ability were marked, spanning from nearly incapable to nearly perfect vocal modulation, and were consistent across modalities of measurement. Further research is needed to determine whether speakers who are effective at vocal size exaggeration are better able to manipulate their social environment, and whether this variation is an inherited quality of the individual, or the result of life experiences such as vocal training.
5
Pisanski K, Bryant GA, Cornec C, Anikin A, Reby D. Form follows function in human nonverbal vocalisations. Ethol Ecol Evol 2022. [DOI: 10.1080/03949370.2022.2026482]
Affiliation(s)
- Katarzyna Pisanski
- ENES Sensory Neuro-Ethology Lab, CRNL, Jean Monnet University of Saint Étienne, UMR 5293, St-Étienne 42023, France
- CNRS French National Centre for Scientific Research, DDL Dynamics of Language Lab, University of Lyon 2, Lyon 69007, France
- Gregory A. Bryant
- Department of Communication, Center for Behavior, Evolution, and Culture, University of California, Los Angeles, California, USA
- Clément Cornec
- ENES Sensory Neuro-Ethology Lab, CRNL, Jean Monnet University of Saint Étienne, UMR 5293, St-Étienne 42023, France
- Andrey Anikin
- ENES Sensory Neuro-Ethology Lab, CRNL, Jean Monnet University of Saint Étienne, UMR 5293, St-Étienne 42023, France
- Division of Cognitive Science, Lund University, Lund 22100, Sweden
- David Reby
- ENES Sensory Neuro-Ethology Lab, CRNL, Jean Monnet University of Saint Étienne, UMR 5293, St-Étienne 42023, France
6
Human larynx motor cortices coordinate respiration for vocal-motor control. Neuroimage 2021; 239:118326. [PMID: 34216772] [DOI: 10.1016/j.neuroimage.2021.118326]
Abstract
Vocal flexibility is a hallmark of the human species, most particularly the capacity to speak and sing. This ability is supported in part by the evolution of a direct neural pathway linking the motor cortex to the brainstem nucleus that controls the larynx, the primary sound source for communication. Early brain imaging studies demonstrated that the larynx motor cortex at the dorsal end of the orofacial division of motor cortex (dLMC) integrated laryngeal and respiratory control, thereby coordinating two major muscular systems that are necessary for vocalization. Neurosurgical studies have since demonstrated the existence of a second larynx motor area at the ventral extent of the orofacial motor division (vLMC) of motor cortex. The vLMC has been presumed to be less relevant to speech motor control, but its functional role remains unknown. We employed a novel ultra-high field (7T) magnetic resonance imaging paradigm that combined singing and whistling simple melodies to localise the larynx motor cortices and test their involvement in respiratory motor control. Surprisingly, whistling activated both 'larynx areas' more strongly than singing, despite the reduced involvement of the larynx during whistling. We provide further evidence for the existence of two larynx motor areas in the human brain, and the first evidence that laryngeal-respiratory integration is a shared property of both larynx motor areas. We outline explicit predictions about the descending motor pathways that give these cortical areas access to both the laryngeal and respiratory systems and discuss the implications for the evolution of speech.
7
Kasaba R, Shimada K, Tomoda A. Neural Mechanisms of Parental Communicative Adjustments in Spoken Language. Neuroscience 2020; 457:206-217. [PMID: 33346117] [DOI: 10.1016/j.neuroscience.2020.12.002]
Abstract
During cultural transmission, caregivers typically adjust their form of speech according to the presumed characteristics of an infant/child, a phenomenon known as infant/child-directed speech (IDS/CDS) or "parentese." Although ventromedial prefrontal cortex (vmPFC) damage was previously found to be associated with failure in adjusting non-verbal communicative behaviors, little is known about the neural mechanisms of verbal communicative adjustments, such as IDS/CDS. In the current study, 30 healthy mothers with preschool-age children underwent functional magnetic resonance imaging (fMRI) while performing a picture naming task which required them to name an object for either a child or an adult. In the picture naming task, mothers exhibited a longer naming duration in the toward-child condition than in the toward-adult control condition. Naming an object for a child, compared with naming it for an adult, resulted in greater involvement of the vmPFC and other regions (e.g., cerebellum) in the global caregiving network. In particular, the vmPFC exhibited task-related deactivation and decreased functional connectivity with the supplementary motor, precentral, postcentral, and supramarginal regions. These findings suggest that the vmPFC, which is included in the default mode network, is involved in optimizing communicative behaviors for the inter-generational transmission of knowledge. This function of the vmPFC may be considered a prosocial drive that promotes prosocial communicative behaviors depending on the context. This study provides a better understanding of the neural mechanisms involved in communicative adjustments for children and insight into related applied research fields such as parenting, pedagogy, and education.
Affiliation(s)
- Ryoko Kasaba
- Division of Developmental Higher Brain Functions, United Graduate School of Child Development, University of Fukui, 23-3 Matsuoka-Shimoaizuki, Eiheiji-cho, Yoshida-gun, Fukui 910-1193, Japan
- Koji Shimada
- Division of Developmental Higher Brain Functions, United Graduate School of Child Development, University of Fukui, 23-3 Matsuoka-Shimoaizuki, Eiheiji-cho, Yoshida-gun, Fukui 910-1193, Japan; Research Center for Child Mental Development, University of Fukui, 23-3 Matsuoka-Shimoaizuki, Eiheiji-cho, Yoshida-gun, Fukui 910-1193, Japan; Biomedical Imaging Research Center, University of Fukui, 23-3 Matsuoka-Shimoaizuki, Eiheiji-cho, Yoshida-gun, Fukui 910-1193, Japan
- Akemi Tomoda
- Division of Developmental Higher Brain Functions, United Graduate School of Child Development, University of Fukui, 23-3 Matsuoka-Shimoaizuki, Eiheiji-cho, Yoshida-gun, Fukui 910-1193, Japan; Research Center for Child Mental Development, University of Fukui, 23-3 Matsuoka-Shimoaizuki, Eiheiji-cho, Yoshida-gun, Fukui 910-1193, Japan; Department of Child and Adolescent Psychological Medicine, University of Fukui Hospital, 23-3 Matsuoka-Shimoaizuki, Eiheiji-cho, Yoshida-gun, Fukui 910-1193, Japan
8
Guldner S, Nees F, McGettigan C. Vocomotor and Social Brain Networks Work Together to Express Social Traits in Voices. Cereb Cortex 2020; 30:6004-6020. [PMID: 32577719] [DOI: 10.1093/cercor/bhaa175]
Abstract
Voice modulation is important when navigating social interactions: tone of voice in a business negotiation is very different from that used to comfort an upset child. While voluntary vocal behavior relies on a cortical vocomotor network, social voice modulation may require additional social cognitive processing. Using functional magnetic resonance imaging, we investigated the neural basis for social vocal control and whether it involves an interplay of vocal control and social processing networks. Twenty-four healthy adult participants modulated their voice to express social traits along the dimensions of the social trait space (affiliation and competence) or to express body size (a control for vocal flexibility). Naïve listener ratings showed that vocal modulations were effective in evoking social trait ratings along the two primary dimensions of the social trait space. Whereas basic vocal modulation engaged the vocomotor network, social voice modulation specifically engaged social processing regions including the medial prefrontal cortex, superior temporal sulcus, and precuneus. Moreover, these regions showed task-relevant modulations in functional connectivity to the left inferior frontal gyrus, a core vocomotor control network area. These findings highlight the impact of the integration of vocal motor control and social information processing for socially meaningful voice modulation.
Affiliation(s)
- Stella Guldner
- Department of Cognitive and Clinical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim 68159, Germany; Graduate School of Economic and Social Sciences, University of Mannheim, Mannheim 68159, Germany; Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK
- Frauke Nees
- Department of Cognitive and Clinical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim 68159, Germany; Institute of Medical Psychology and Medical Sociology, University Medical Center Schleswig Holstein, Kiel University, Kiel 24105, Germany
- Carolyn McGettigan
- Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK; Department of Psychology, Royal Holloway, University of London, Egham TW20 0EX, UK
9
Wattendorf E, Westermann B, Fiedler K, Ritz S, Redmann A, Pfannmöller J, Lotze M, Celio MR. Laughter is in the air: involvement of key nodes of the emotional motor system in the anticipation of tickling. Soc Cogn Affect Neurosci 2020; 14:837-847. [PMID: 31393979] [PMCID: PMC6847157] [DOI: 10.1093/scan/nsz056]
Abstract
In analogy to the appreciation of humor, that of tickling is based upon the re-interpretation of an anticipated emotional situation. Hence, the anticipation of tickling contributes to the final outburst of ticklish laughter. To localize the neuronal substrates of this process, functional magnetic resonance imaging (fMRI) was conducted on 31 healthy volunteers. The state of anticipation was simulated by generating uncertainty regarding the onset of manual foot tickling. Anticipation was characterized by an augmented fMRI signal in the anterior insula, the hypothalamus, the nucleus accumbens and the ventral tegmental area, as well as by an attenuated one in the internal globus pallidus. Furthermore, anticipatory activity in the anterior insula correlated positively with the degree of laughter that was produced during tickling. These findings are consistent with an encoding of the expected emotional consequences of tickling and suggest that early regulatory mechanisms automatically influence the laughter circuitry at the level of affective and sensory processing. Tickling activated not only those regions of the brain that were involved during anticipation, but also the posterior insula, the anterior cingulate cortex and the periaqueductal gray matter. Sequential or combined anticipatory and tickling-related neuronal activities may adjust emotional and sensorimotor pathways in preparation for the impending laughter response.
Affiliation(s)
- Elise Wattendorf
- Faculty of Science and Medicine, Department of Neuroscience, Anatomy, University of Fribourg, 1700 Fribourg, Switzerland
- Birgit Westermann
- Department of Neurosurgery, University Hospital, University of Basel, 4031 Basel, Switzerland
- Klaus Fiedler
- Faculty of Science and Medicine, Department of Neuroscience, Anatomy, University of Fribourg, 1700 Fribourg, Switzerland
- Simone Ritz
- Faculty of Science and Medicine, Department of Neuroscience, Anatomy, University of Fribourg, 1700 Fribourg, Switzerland
- Annetta Redmann
- Faculty of Science and Medicine, Department of Neuroscience, Anatomy, University of Fribourg, 1700 Fribourg, Switzerland
- Jörg Pfannmöller
- Functional Imaging, Center for Diagnostic Radiology and Neuroradiology, University Medicine Greifswald, Walther-Rathenau-Straße 46, 17475 Greifswald, Germany
- Martin Lotze
- Functional Imaging, Center for Diagnostic Radiology and Neuroradiology, University Medicine Greifswald, Walther-Rathenau-Straße 46, 17475 Greifswald, Germany
- Marco R Celio
- Faculty of Science and Medicine, Department of Neuroscience, Anatomy, University of Fribourg, 1700 Fribourg, Switzerland
10
Caletti E, Delvecchio G, Andreella A, Finos L, Perlini C, Tavano A, Lasalvia A, Bonetto C, Cristofalo D, Lamonaca D, Ceccato E, Pileggi F, Mazzi F, Santonastaso P, Ruggeri M, Bellani M, Brambilla P. Prosody abilities in a large sample of affective and non-affective first episode psychosis patients. Compr Psychiatry 2018; 86:31-38. [PMID: 30056363] [DOI: 10.1016/j.comppsych.2018.07.004]
Abstract
OBJECTIVE: Prosody comprehension deficits have been reported in major psychoses. It is still not clear whether these deficits occur at early psychosis stages. The aims of our study were to investigate a) linguistic and emotional prosody comprehension abilities in First Episode Psychosis (FEP) patients compared to healthy controls (HC); b) performance differences between non-affective (FEP-NA) and affective (FEP-A) patients; and c) the association between symptom severity and prosodic features.
METHODS: A total of 208 FEP patients (156 FEP-NA and 52 FEP-A) and 77 HC were enrolled and assessed with the Italian version of the "Protocole Montréal d'Evaluation de la Communication" to evaluate linguistic and emotional prosody comprehension. Clinical variables were assessed with a comprehensive set of standardized measures.
RESULTS: FEP patients displayed significant linguistic and emotional prosody deficits compared to HC, with FEP-NA showing greater impairment than FEP-A. Significant correlations between symptom severity and prosodic features were also found in FEP patients.
CONCLUSIONS: Our results suggest that prosodic impairments occur at the onset of psychosis and are more prominent in FEP-NA patients and in those with severe psychopathology. These findings further support the hypothesis that aprosodia is a core feature of psychosis.
Affiliation(s)
- Elisabetta Caletti
- Department of Neurosciences and Mental Health, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, University of Milan, Milan, Italy
- Giuseppe Delvecchio
- Department of Neurosciences and Mental Health, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, University of Milan, Milan, Italy
- Livio Finos
- Department of Developmental Psychology and Socialization, University of Padua, Italy
- Cinzia Perlini
- Department of Neurosciences, Biomedicine and Movement Sciences, Section of Clinical Psychology, University of Verona, Verona, Italy
- Alessandro Tavano
- Department of Neurosciences, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Antonio Lasalvia
- UOC Psychiatry, University Hospital Integrated Trust of Verona (AOUI), Italy
- Chiara Bonetto
- Section of Psychiatry, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Verona, Italy
- Doriana Cristofalo
- Section of Psychiatry, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Verona, Italy
- Dario Lamonaca
- Department of Psychiatry, CSM AULSS 21 Legnago, Verona, Italy
- Enrico Ceccato
- Department of Mental Health, Azienda ULSS 8 Berica, Vicenza, Italy
- Mirella Ruggeri
- UOC Psychiatry, University Hospital Integrated Trust of Verona (AOUI), Italy; Department of Public Health and Community Medicine, Section of Clinical Psychology, University of Verona, Verona, Italy
- Marcella Bellani
- UOC Psychiatry, University Hospital Integrated Trust of Verona (AOUI), Italy
- Paolo Brambilla
- Scientific Institute IRCCS "E.Medea", Bosisio Parini, Italy; Department of Pathophysiology and Transplantation, University of Milan, Milan, Italy
11
Belyk M, Lee YS, Brown S. How does human motor cortex regulate vocal pitch in singers? R Soc Open Sci 2018; 5:172208. [PMID: 30224990] [PMCID: PMC6124115] [DOI: 10.1098/rsos.172208]
Abstract
Vocal pitch is used as an important communicative device by humans, as found in the melodic dimension of both speech and song. Vocal pitch is determined by the degree of tension in the vocal folds of the larynx, which itself is influenced by complex and nonlinear interactions among the laryngeal muscles. The relationship between these muscles and vocal pitch has been described by a mathematical model in the form of a set of 'control rules'. We searched for the biological implementation of these control rules in the larynx motor cortex of the human brain. We scanned choral singers with functional magnetic resonance imaging as they produced discrete pitches at four different levels across their vocal range. While the locations of the larynx motor activations varied across singers, the activation peaks for the four pitch levels were highly consistent within each individual singer. This result was corroborated using multi-voxel pattern analysis, which demonstrated an absence of patterned activations differentiating any pairing of pitch levels. The complex and nonlinear relationships between the multiple laryngeal muscles that control vocal pitch may obscure the neural encoding of vocal pitch in the brain.
Affiliation(s)
- Michel Belyk
- Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Toronto, Ontario, Canada
- Yune S. Lee
- Department of Speech and Hearing Sciences and Center for Brain Injury, The Ohio State University, Columbus, OH, USA
- Steven Brown
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario, Canada
12
Liang B, Du Y. The Functional Neuroanatomy of Lexical Tone Perception: An Activation Likelihood Estimation Meta-Analysis. Front Neurosci 2018; 12:495. [PMID: 30087589] [PMCID: PMC6066585] [DOI: 10.3389/fnins.2018.00495]
Abstract
In tonal languages such as Chinese, lexical tone serves as a phonemic feature in determining word meaning. At the same time, it is close to prosody in terms of suprasegmental pitch variations and larynx-based articulation. The important yet mixed nature of lexical tone has motivated considerable research, but no consensus has been reached on its functional neuroanatomy. This meta-analysis aimed at uncovering the neural network of lexical tone perception in comparison with those of phoneme and prosody perception in a unified framework. Independent Activation Likelihood Estimation meta-analyses were conducted for different linguistic elements: lexical tone by native tonal language speakers, lexical tone by non-tonal language speakers, phoneme, word-level prosody, and sentence-level prosody. Results showed that lexical tone and prosody studies demonstrated more extensive activations in the right than the left auditory cortex, whereas the opposite pattern was found for phoneme studies. Only tonal language speakers consistently recruited the left anterior superior temporal gyrus (STG) for processing lexical tone, an area implicated in phoneme processing and word-form recognition. Moreover, an anterior-lateral to posterior-medial gradient of activation as a function of element timescale was revealed in the right STG, in which the activation for lexical tone lay between that for phoneme and that for prosody. Another topological pattern was shown on the left precentral gyrus (preCG), with the activation for lexical tone overlapping with that for prosody but ventral to that for phoneme. These findings provide evidence that the neural network for lexical tone perception is hybrid with those for phoneme and prosody. That is, resembling prosody, lexical tone perception, regardless of language experience, involved the right auditory cortex, with activation localized between sites engaged by phonemic and prosodic processing, suggesting a hierarchical organization of representations in the right auditory cortex. For tonal language speakers, lexical tone additionally engaged the left STG lexical mapping network, consistent with a phonemic representation. Similarly, when processing lexical tone, only tonal language speakers engaged the left preCG site implicated in prosody perception, consistent with tonal language speakers having stronger articulatory representations for lexical tone in the laryngeal sensorimotor network. A dynamic dual-stream model for lexical tone perception was proposed and discussed.
Affiliation(s)
- Baishen Liang
- CAS Key Laboratory of Behavioral Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Yi Du
- CAS Key Laboratory of Behavioral Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
13
Dichter BK, Breshears JD, Leonard MK, Chang EF. The Control of Vocal Pitch in Human Laryngeal Motor Cortex. Cell 2018; 174:21-31.e9. [PMID: 29958109 PMCID: PMC6084806 DOI: 10.1016/j.cell.2018.05.016] [Citation(s) in RCA: 74] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2018] [Revised: 03/23/2018] [Accepted: 05/08/2018] [Indexed: 11/24/2022]
Abstract
In speech, the highly flexible modulation of vocal pitch creates intonation patterns that speakers use to convey linguistic meaning. This human ability is unique among primates. Here, we used high-density cortical recordings directly from the human brain to determine the encoding of vocal pitch during natural speech. We found neural populations in bilateral dorsal laryngeal motor cortex (dLMC) that selectively encoded produced pitch but not non-laryngeal articulatory movements. This neural population controlled short pitch accents to express prosodic emphasis on a word in a sentence. Other larynx cortical representations controlling voicing and longer pitch phrase contours were found at separate sites. dLMC sites also encoded vocal pitch during a non-speech singing task. Finally, direct focal stimulation of dLMC evoked laryngeal movements and involuntary vocalization, confirming its causal role in feedforward control. Together, these results reveal the neural basis for the voluntary control of vocal pitch in human speech. VIDEO ABSTRACT.
Affiliation(s)
- Benjamin K Dichter
- Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA 94158, USA; Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA 94143, USA; UC Berkeley and UCSF Joint Program in Bioengineering, Berkeley, CA 94720, USA
- Jonathan D Breshears
- Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA 94158, USA; Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA 94143, USA
- Matthew K Leonard
- Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA 94158, USA; Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA 94143, USA
- Edward F Chang
- Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA 94158, USA; Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA 94143, USA; UC Berkeley and UCSF Joint Program in Bioengineering, Berkeley, CA 94720, USA
14
Belyk M, Johnson JF, Kotz SA. Poor neuro-motor tuning of the human larynx: a comparison of sung and whistled pitch imitation. ROYAL SOCIETY OPEN SCIENCE 2018; 5:171544. [PMID: 29765635 PMCID: PMC5936900 DOI: 10.1098/rsos.171544] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/04/2017] [Accepted: 03/13/2018] [Indexed: 06/08/2023]
Abstract
Vocal imitation is a hallmark of human communication that underlies the capacity to learn to speak and sing. Even so, poor vocal imitation abilities are surprisingly common in the general population, and even expert vocalists cannot match the precision of a musical instrument. Although humans have evolved a greater degree of control over the laryngeal muscles that govern voice production, this ability may be underdeveloped compared with control over the articulatory muscles, such as the tongue and lips, volitional control of which emerged earlier in primate evolution. Human participants imitated simple melodies by either singing (i.e. producing pitch with the larynx) or whistling (i.e. producing pitch with the lips and tongue). Sung notes were systematically biased towards each individual's habitual pitch, which we hypothesize may act to conserve muscular effort. Furthermore, while participants who sang more precisely also whistled more precisely, sung imitations were less precise than whistled imitations. Thus, the laryngeal muscles that control voice production are under less precise control than the oral muscles involved in whistling. This imprecision may reflect the relatively recent evolution of volitional laryngeal motor control in humans, which may be tuned just well enough for the coarse modulation of vocal pitch in speech.
Affiliation(s)
- Michel Belyk
- Bloorview Research Institute, 150 Kilgour Road, Toronto, Canada M4G 1R8
- Faculty of Psychology and Neuroscience, University of Maastricht, Maastricht, The Netherlands
- Joseph F. Johnson
- Faculty of Psychology and Neuroscience, University of Maastricht, Maastricht, The Netherlands
- Sonja A. Kotz
- Faculty of Psychology and Neuroscience, University of Maastricht, Maastricht, The Netherlands
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
15
Klasen M, von Marschall C, Isman G, Zvyagintsev M, Gur RC, Mathiak K. Prosody production networks are modulated by sensory cues and social context. Soc Cogn Affect Neurosci 2018. [PMID: 29514331 PMCID: PMC5928400 DOI: 10.1093/scan/nsy015] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022] Open
Abstract
The neurobiology of emotional prosody production is not well investigated. In particular, the effects of cues and social context are unknown. The present study sought to differentiate cued from free emotion generation and to assess the effect of social feedback from a human listener. Online speech filtering enabled functional magnetic resonance imaging during prosodic communication in 30 participants. Emotional vocalizations were (i) free, (ii) auditorily cued, (iii) visually cued, or (iv) produced with interactive feedback. In addition to distributed language networks, cued emotions increased activity in the auditory cortex and, in the case of visual stimuli, in the visual cortex. Responses were larger in the right posterior superior temporal gyrus and in the ventral striatum when participants were listened to and received feedback from the experimenter. Sensory, language, and reward networks thus contributed to prosody production and were modulated by cues and social context. The right posterior superior temporal gyrus is a central hub for communication in social interactions, in particular for the interpersonal evaluation of vocal emotions.
Affiliation(s)
- Martin Klasen
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany; JARA - Translational Brain Medicine, 52074 Aachen, Germany
- Clara von Marschall
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany; JARA - Translational Brain Medicine, 52074 Aachen, Germany
- Güldehen Isman
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany
- Mikhail Zvyagintsev
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany; JARA - Translational Brain Medicine, 52074 Aachen, Germany
- Ruben C Gur
- Department of Psychiatry, University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104, USA
- Klaus Mathiak
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany; JARA - Translational Brain Medicine, 52074 Aachen, Germany
16
Convergence of semantics and emotional expression within the IFG pars orbitalis. Neuroimage 2017; 156:240-248. [DOI: 10.1016/j.neuroimage.2017.04.020] [Citation(s) in RCA: 34] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2017] [Revised: 03/16/2017] [Accepted: 04/07/2017] [Indexed: 10/19/2022] Open
17
Belyk M, Brown S, Kotz SA. Demonstration and validation of Kernel Density Estimation for spatial meta-analyses in cognitive neuroscience using simulated data. Data Brief 2017; 13:346-352. [PMID: 28664169 PMCID: PMC5480230 DOI: 10.1016/j.dib.2017.06.003] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2017] [Revised: 05/11/2017] [Accepted: 06/01/2017] [Indexed: 11/15/2022] Open
Abstract
The data presented in this article are related to the research article entitled "Convergence of semantics and emotional expression within the IFG pars orbitalis" (Belyk et al., 2017) [1]. The research article reports a spatial meta-analysis of brain imaging experiments on the perception of semantic compared to emotional communicative signals in humans. This Data in Brief article demonstrates and validates the use of Kernel Density Estimation (KDE) as a novel statistical approach to neuroimaging data. First, we performed a side-by-side comparison of KDE with a previously published meta-analysis that applied activation likelihood estimation, which is the predominant approach to meta-analyses in cognitive neuroscience. Second, we analyzed data simulated with known spatial properties to test the sensitivity of KDE to varying degrees of spatial separation. KDE successfully detected true spatial differences in simulated data and displayed few false positives when no true differences were present. R code to simulate and analyze these data is made publicly available to facilitate the further evaluation of KDE for neuroimaging data and its dissemination to cognitive neuroscientists.
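The KDE approach described above can be illustrated with a minimal sketch: simulate activation foci around two cluster centers of known separation, then estimate each condition's spatial density with an isotropic Gaussian kernel and check that it peaks near its own generating center. All values here (centers, spread, bandwidth) are illustrative assumptions, and plain Python stands in for the authors' published R code.

```python
import math
import random

def gaussian_kde(points, query, bandwidth=6.0):
    """Kernel density estimate at a 3D query point, using an isotropic
    Gaussian kernel with the given bandwidth (in mm)."""
    d = 3
    norm = (2.0 * math.pi * bandwidth ** 2) ** (d / 2.0)
    total = 0.0
    for p in points:
        sq = sum((q - x) ** 2 for q, x in zip(query, p))
        total += math.exp(-sq / (2.0 * bandwidth ** 2))
    return total / (len(points) * norm)

def simulate_foci(center, spread=6.0, n=40):
    """Draw n simulated activation foci scattered around a 3D center."""
    return [tuple(random.gauss(c, spread) for c in center) for _ in range(n)]

random.seed(0)

# Two hypothetical conditions whose cluster centers are a known 12 mm apart.
center_a, center_b = (48.0, -20.0, 8.0), (60.0, -20.0, 8.0)
foci_a = simulate_foci(center_a)
foci_b = simulate_foci(center_b)

# Each condition's estimated density should be higher at its own
# generating center than at the other condition's center.
print(gaussian_kde(foci_a, center_a) > gaussian_kde(foci_a, center_b))
print(gaussian_kde(foci_b, center_b) > gaussian_kde(foci_b, center_a))
```

A full meta-analytic comparison would additionally build a permutation null (as the article does for simulated data) to decide whether such density differences exceed chance.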
Affiliation(s)
- Michel Belyk
- Faculty of Psychology and Neuroscience, Department of Neuropsychology and Psychopharmacology, University of Maastricht, Maastricht, The Netherlands; Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario, Canada
- Steven Brown
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario, Canada
- Sonja A Kotz
- Faculty of Psychology and Neuroscience, Department of Neuropsychology and Psychopharmacology, University of Maastricht, Maastricht, The Netherlands; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
18
The origins of the vocal brain in humans. Neurosci Biobehav Rev 2017; 77:177-193. [DOI: 10.1016/j.neubiorev.2017.03.014] [Citation(s) in RCA: 76] [Impact Index Per Article: 10.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2016] [Revised: 02/15/2017] [Accepted: 03/22/2017] [Indexed: 01/13/2023]