1. Taheri A. The partial upward migration of the laryngeal motor cortex: A window to the human brain evolution. Brain Res 2024; 1834:148892. PMID: 38554798. DOI: 10.1016/j.brainres.2024.148892.
Abstract
The pioneering cortical electrical stimulation studies of the last century did not explicitly mark the location of the human laryngeal motor cortex (LMC), but only a "vocalization area" in the lower half of the lateral motor cortex. In the late 2010s, neuroimaging studies demonstrated two human cortical laryngeal representations, located at the opposing ends of the orofacial motor zone and therefore termed the dorsal (LMCd) and ventral laryngeal motor cortex (LMCv). Since then, there has been a continuing debate regarding the origin, function, and evolutionary significance of these areas. The "local duplication model" posits that the LMCd evolved by duplication of an adjacent region of the motor cortex. The "duplication and migration model" assumes that the LMCd arose by duplication of motor regions related to vocalization, such as the ancestral LMC, followed by migration into the orofacial region of the motor cortex. This paper reviews the basic arguments of these viewpoints and suggests a new explanation: that the human LMCd instead arose through division of the unitary LMC of nonhuman primates, with an upward shift and relocation of its motor part driven by the disproportionate growth of the head, face, mouth, lip, and tongue motor areas in the ventral part of the human motor homunculus. This explanation may be called the "expansion-division and relocation model".
Affiliation(s)
- Abbas Taheri
- Neuroscience Razi, Berlin, Germany; Former Assistant Professor of Neurosurgery, Humboldt University, Berlin, Germany
2. Silva AB, Littlejohn KT, Liu JR, Moses DA, Chang EF. The speech neuroprosthesis. Nat Rev Neurosci 2024. PMID: 38745103. DOI: 10.1038/s41583-024-00819-9.
Abstract
Loss of speech after paralysis is devastating, but circumventing motor-pathway injury by directly decoding speech from intact cortical activity has the potential to restore natural communication and self-expression. Recent discoveries have defined how key features of speech production are facilitated by the coordinated activity of vocal-tract articulatory and motor-planning cortical representations. In this Review, we highlight such progress and how it has led to successful speech decoding, first in individuals implanted with intracranial electrodes for clinical epilepsy monitoring and subsequently in individuals with paralysis as part of early feasibility clinical trials to restore speech. We discuss high-spatiotemporal-resolution neural interfaces and the adaptation of state-of-the-art speech computational algorithms that have driven rapid and substantial progress in decoding neural activity into text, audible speech, and facial movements. Although restoring natural speech is a long-term goal, speech neuroprostheses already have performance levels that surpass communication rates offered by current assistive-communication technology. Given this accelerated rate of progress in the field, we propose key evaluation metrics for speed and accuracy, among others, to help standardize across studies. We finish by highlighting several directions to more fully explore the multidimensional feature space of speech and language, which will continue to accelerate progress towards a clinically viable speech neuroprosthesis.
Affiliation(s)
- Alexander B Silva
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
- Kaylo T Littlejohn
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA, USA
- Jessie R Liu
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
- David A Moses
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
- Edward F Chang
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
3. Harris I, Niven EC, Griffin A, Scott SK. Is song processing distinct and special in the auditory cortex? Nat Rev Neurosci 2023; 24:711-722. PMID: 37783820. DOI: 10.1038/s41583-023-00743-4.
Abstract
Is the singing voice processed distinctively in the human brain? In this Perspective, we discuss what might distinguish song processing from speech processing in light of recent work suggesting that some cortical neuronal populations respond selectively to song, and we outline the implications for our understanding of auditory processing. We review the literature on the neural and physiological mechanisms of song production and perception and show that it provides evidence for key differences between song and speech processing. We conclude by discussing the significance of the notion that song processing is special in terms of how this might contribute to theories of the neurobiological origins of vocal communication and to our understanding of the neural circuitry underlying sound processing in the human cortex.
Affiliation(s)
- Ilana Harris
- Institute of Cognitive Neuroscience, University College London, London, UK
- Efe C Niven
- Institute of Cognitive Neuroscience, University College London, London, UK
- Alex Griffin
- Department of Psychology, University of Cambridge, Cambridge, UK
- Sophie K Scott
- Institute of Cognitive Neuroscience, University College London, London, UK
4. Lu J, Li Y, Zhao Z, Liu Y, Zhu Y, Mao Y, Wu J, Chang EF. Neural control of lexical tone production in human laryngeal motor cortex. Nat Commun 2023; 14:6917. PMID: 37903780. PMCID: PMC10616086. DOI: 10.1038/s41467-023-42175-9.
Abstract
In tonal languages, which are spoken by nearly one-third of the world's population, speakers precisely control the tension of the vocal folds in the larynx to modulate pitch in order to distinguish words with completely different meanings. The specific pitch trajectories for a given tonal language are called lexical tones. Here, we used high-density direct cortical recordings to determine the neural basis of lexical tone production in native Mandarin-speaking participants. We found that instead of tone category-selective coding, local populations in the bilateral laryngeal motor cortex (LMC) encode articulatory kinematic information to generate the pitch dynamics of lexical tones. Using a computational model of tone production, we discovered two distinct patterns of population activity in LMC commanding pitch rising and lowering. Finally, we showed that direct electrocortical stimulation of different local populations in LMC evoked pitch rising and lowering during tone production, respectively. Together, these results reveal the neural basis of vocal pitch control of lexical tones in tonal languages.
Affiliation(s)
- Junfeng Lu
- Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai, 200040, China
- Shanghai Key Laboratory of Brain Function Restoration and Neural Regeneration, Shanghai, 200040, China
- National Center for Neurological Disorders, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai, 200040, China
- Yuanning Li
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, 201210, China
- Department of Neurological Surgery, University of California, San Francisco, CA, 94143, USA
- Weill Institute for Neurosciences, University of California, San Francisco, CA, 94158, USA
- State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, 201210, China
- Zehao Zhao
- Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai, 200040, China
- Shanghai Key Laboratory of Brain Function Restoration and Neural Regeneration, Shanghai, 200040, China
- National Center for Neurological Disorders, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai, 200040, China
- Yan Liu
- Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai, 200040, China
- Shanghai Key Laboratory of Brain Function Restoration and Neural Regeneration, Shanghai, 200040, China
- National Center for Neurological Disorders, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai, 200040, China
- Yanming Zhu
- Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai, 200040, China
- Speech and Hearing Bioscience & Technology Program, Division of Medical Sciences, Harvard University, Boston, MA, 02215, USA
- Ying Mao
- Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai, 200040, China
- Shanghai Key Laboratory of Brain Function Restoration and Neural Regeneration, Shanghai, 200040, China
- National Center for Neurological Disorders, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai, 200040, China
- Jinsong Wu
- Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai, 200040, China
- Shanghai Key Laboratory of Brain Function Restoration and Neural Regeneration, Shanghai, 200040, China
- National Center for Neurological Disorders, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai, 200040, China
- Edward F Chang
- Department of Neurological Surgery, University of California, San Francisco, CA, 94143, USA
- Weill Institute for Neurosciences, University of California, San Francisco, CA, 94158, USA
5. Manes JL, Kurani AS, Herschel E, Roberts AC, Tjaden K, Parrish T, Corcos DM. Premotor cortex is hypoactive during sustained vowel production in individuals with Parkinson's disease and hypophonia. Front Hum Neurosci 2023; 17:1250114. PMID: 37941570. PMCID: PMC10629592. DOI: 10.3389/fnhum.2023.1250114.
Abstract
Introduction: Hypophonia is a common feature of Parkinson's disease (PD); however, the contribution of motor cortical activity to reduced phonatory scaling in PD is still not clear.
Methods: In this study, we employed a sustained vowel production task during functional magnetic resonance imaging to compare brain activity between individuals with PD and hypophonia and an older healthy control (OHC) group.
Results: When comparing vowel production versus rest, the PD group showed fewer regions with significant BOLD activity compared to OHCs. Within the motor cortices, both OHC and PD groups showed bilateral activation of the laryngeal/phonatory area (LPA) of the primary motor cortex as well as activation of the supplementary motor area. The OHC group also recruited additional activity in the bilateral trunk motor area and right dorsal premotor cortex (PMd). A voxel-wise comparison of the PD and OHC groups showed that activity in right PMd was significantly lower in the PD group than in the OHC group (p < 0.001, uncorrected). Right PMd activity was positively correlated with maximum phonation time in the PD group and negatively correlated with perceptual severity ratings of loudness and pitch.
Discussion: Our findings suggest that hypoactivation of PMd may be associated with abnormal phonatory control in PD.
Affiliation(s)
- Jordan L. Manes
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, MA, United States
- Ajay S. Kurani
- Ken and Ruth Davee Department of Neurology, Northwestern University, Chicago, IL, United States
- Department of Radiology, Northwestern University, Chicago, IL, United States
- Ellen Herschel
- Brain and Creativity Institute, University of Southern California, Los Angeles, CA, United States
- Angela C. Roberts
- School of Communication Sciences and Disorders, Western University, London, ON, Canada
- Canadian Centre for Activity and Aging, Western University, London, ON, Canada
- Department of Computer Science, Western University, London, ON, Canada
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States
- Kris Tjaden
- Department of Communicative Disorders and Sciences, University at Buffalo, Buffalo, NY, United States
- Todd Parrish
- Department of Radiology, Northwestern University, Chicago, IL, United States
- Daniel M. Corcos
- Department of Physical Therapy and Human Movement Sciences, Northwestern University, Chicago, IL, United States
6. Brown S, Phillips E. The vocal origin of musical scales: the Interval Spacing model. Front Psychol 2023; 14:1261218. PMID: 37868594. PMCID: PMC10587400. DOI: 10.3389/fpsyg.2023.1261218.
Affiliation(s)
- Steven Brown
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, ON, Canada
7. Patel B, Zhang Z, McGettigan C, Belyk M. Speech With Pauses Sounds Deceptive to Listeners With and Without Hearing Impairment. J Speech Lang Hear Res 2023; 66:3735-3744. PMID: 37672786. DOI: 10.1044/2023_jslhr-22-00618.
Abstract
Purpose: Communication is as much persuasion as it is the transfer of information. This creates a tension between the interests of the speaker and those of the listener, as dishonest speakers naturally attempt to hide deceptive speech and listeners are faced with the challenge of sorting truths from lies. Listeners with hearing impairment in particular may have differing levels of access to the acoustical cues that give away deceptive speech. A greater tendency toward speech pauses has been hypothesized to result from the cognitive demands of lying convincingly. Higher vocal pitch has also been hypothesized to mark the increased anxiety of a dishonest speaker.
Method: Listeners with or without hearing impairments heard short utterances from natural conversations, some of which had been digitally manipulated to contain either increased pausing or raised vocal pitch. Listeners were asked to guess whether each statement was a lie in a two-alternative forced-choice task. Participants were also asked explicitly which cues they believed had influenced their decisions.
Results: Statements were more likely to be perceived as a lie when they contained pauses, but not when vocal pitch was raised. This pattern held regardless of hearing ability. In contrast, both groups of listeners self-reported using vocal pitch cues to identify deceptive statements, though at lower rates than pauses.
Conclusions: Listeners may have only partial awareness of the cues that influence their impression of dishonesty. Listeners with hearing impairment may place greater weight on acoustical cues according to the differing degrees of access provided by hearing aids.
Supplemental material: https://doi.org/10.23641/asha.24052446
Affiliation(s)
- Bindiya Patel
- Department of Audiological Sciences, University College London, United Kingdom
- Ziyun Zhang
- Department of Speech Hearing and Phonetic Sciences, University College London, United Kingdom
- Carolyn McGettigan
- Department of Speech Hearing and Phonetic Sciences, University College London, United Kingdom
- Michel Belyk
- Department of Psychology, Edge Hill University, Ormskirk, United Kingdom
8. Liang B, Li Y, Zhao W, Du Y. Bilateral human laryngeal motor cortex in perceptual decision of lexical tone and voicing of consonant. Nat Commun 2023; 14:4710. PMID: 37543659. PMCID: PMC10404239. DOI: 10.1038/s41467-023-40445-0.
Abstract
Speech perception is believed to recruit the left motor cortex. However, the exact role of the laryngeal subregion and its right counterpart in speech perception, as well as their temporal patterns of involvement, remain unclear. To address these questions, we conducted a hypothesis-driven study, applying transcranial magnetic stimulation to the left or right dorsal laryngeal motor cortex (dLMC) while participants performed perceptual decisions on Mandarin lexical tones or consonants (voicing contrast) presented with or without noise. We used psychometric functions and a hierarchical drift-diffusion model to disentangle perceptual sensitivity and dynamic decision-making parameters. Results showed that bilateral dLMCs were engaged with effector specificity, and this engagement was left-lateralized with right upregulation in noise. Furthermore, the dLMC contributed to various decision stages depending on the hemisphere and task difficulty. These findings substantially advance our understanding of the hemispheric lateralization and temporal dynamics of bilateral dLMC in sensorimotor integration during speech perceptual decision-making.
Affiliation(s)
- Baishen Liang
- Institute of Psychology, CAS Key Laboratory of Behavioral Science, Chinese Academy of Sciences, Beijing, 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China
- Yanchang Li
- Institute of Psychology, CAS Key Laboratory of Behavioral Science, Chinese Academy of Sciences, Beijing, 100101, China
- Wanying Zhao
- Institute of Psychology, CAS Key Laboratory of Behavioral Science, Chinese Academy of Sciences, Beijing, 100101, China
- Yi Du
- Institute of Psychology, CAS Key Laboratory of Behavioral Science, Chinese Academy of Sciences, Beijing, 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China
- CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai, 200031, China
- Chinese Institute for Brain Research, Beijing, 102206, China
9. Zamorano AM, Zatorre RJ, Vuust P, Friberg A, Birbaumer N, Kleber B. Singing training predicts increased insula connectivity with speech and respiratory sensorimotor areas at rest. Brain Res 2023:148418. PMID: 37217111. DOI: 10.1016/j.brainres.2023.148418.
Abstract
The insula contributes to the detection of salient events during goal-directed behavior and participates in the coordination of motor, multisensory, and cognitive systems. Recent task-fMRI studies with trained singers suggest that singing experience can enhance access to these resources. However, the long-term effects of vocal training on insula-based networks are still unknown. In this study, we employed resting-state fMRI to assess experience-dependent differences in insula co-activation patterns between conservatory-trained singers and non-singers. Results indicate enhanced bilateral anterior insula connectivity in singers relative to non-singers with constituents of the speech sensorimotor network, specifically the cerebellum (lobules V-VI) and the superior parietal lobes. The reverse comparison showed no effects. The amount of accumulated singing training predicted enhanced bilateral insula co-activation with primary sensorimotor areas representing the diaphragm and the larynx/phonation area, crucial regions for cortico-motor control of complex vocalizations, as well as with the bilateral thalamus and the left putamen. Together, these findings highlight the neuroplastic effect of expert singing training on insula-based networks, as evidenced by the association between enhanced insula co-activation profiles in singers and the components of the brain's speech motor system.
Affiliation(s)
- A M Zamorano
- Center for Neuroplasticity and Pain (CNAP), Department of Health Science and Technology, Aalborg University, Aalborg, Denmark
- R J Zatorre
- McGill University-Montreal Neurological Institute, Neuropsychology and Cognitive Neuroscience, Montreal, Canada; International Laboratory for Brain, Music and Sound research (BRAMS), Montreal, Canada
- P Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University, & The Royal Academy of Music Aarhus/Aalborg, Denmark
- A Friberg
- Speech, Music and Hearing, KTH Royal Institute of Technology, Stockholm, Sweden
- N Birbaumer
- Institute for Medical Psychology and Behavioral Neurobiology, University of Tübingen, Germany
- B Kleber
- Institute for Medical Psychology and Behavioral Neurobiology, University of Tübingen, Germany; Center for Music in the Brain, Department of Clinical Medicine, Aarhus University, & The Royal Academy of Music Aarhus/Aalborg, Denmark
10. Lameira AR, Moran S. Life of p: A consonant older than speech. Bioessays 2023; 45:e2200246. PMID: 36811380. DOI: 10.1002/bies.202200246.
Abstract
Which sounds composed the first spoken languages? Archetypal sounds are not phylogenetically or archeologically recoverable, but comparative linguistics and primatology provide an alternative approach. Labial articulations are the most common speech sound, being virtually universal across the world's languages. Of all labials, the plosive 'p' sound, as in 'Pablo Picasso', transcribed /p/, is the most predominant voiceless sound globally and one of the first sounds to emerge in human infant canonical babbling. Global omnipresence and ontogenetic precocity imply that /p/-like sounds could predate the first major linguistic diversification event(s) in humans. Indeed, great ape vocal data support this view, namely, the only cultural sound shared across all great ape genera is articulatorily homologous to a rolling or trilled /p/, the 'raspberry'. /p/-like labial sounds represent an 'articulatory attractor' among living hominids and are likely among the oldest phonological features to have ever emerged in linguistic systems.
Affiliation(s)
- Steven Moran
- Department of Anthropology, University of Miami, Coral Gables, Florida, USA
- Institute of Biology, University of Neuchatel, Neuchatel, Switzerland
11. Leipold S, Abrams DA, Karraker S, Menon V. Neural decoding of emotional prosody in voice-sensitive auditory cortex predicts social communication abilities in children. Cereb Cortex 2023; 33:709-728. PMID: 35296892. PMCID: PMC9890475. DOI: 10.1093/cercor/bhac095.
Abstract
During social interactions, speakers signal information about their emotional state through their voice, which is known as emotional prosody. Little is known regarding the precise brain systems underlying emotional prosody decoding in children and whether accurate neural decoding of these vocal cues is linked to social skills. Here, we address critical gaps in the developmental literature by investigating neural representations of prosody and their links to behavior in children. Multivariate pattern analysis revealed that representations in the bilateral middle and posterior superior temporal sulcus (STS) divisions of voice-sensitive auditory cortex decode emotional prosody information in children. Crucially, emotional prosody decoding in middle STS was correlated with standardized measures of social communication abilities; more accurate decoding of prosody stimuli in the STS was predictive of greater social communication abilities in children. Moreover, social communication abilities were specifically related to decoding sadness, highlighting the importance of tuning in to negative emotional vocal cues for strengthening social responsiveness and functioning. Findings bridge an important theoretical gap by showing that the ability of the voice-sensitive cortex to detect emotional cues in speech is predictive of a child's social skills, including the ability to relate and interact with others.
Affiliation(s)
- Simon Leipold
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
- Daniel A Abrams
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
- Shelby Karraker
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
- Vinod Menon
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
- Department of Neurology and Neurological Sciences, Stanford University, Stanford, CA, USA
- Stanford Neurosciences Institute, Stanford University, Stanford, CA, USA
12. Miyagawa S, Arévalo A, Nóbrega VA. On the representation of hierarchical structure: Revisiting Darwin's musical protolanguage. Front Hum Neurosci 2022; 16:1018708. PMID: 36438635. PMCID: PMC9692108. DOI: 10.3389/fnhum.2022.1018708.
Abstract
In this article, we address the tenability of Darwin's musical protolanguage, arguing that a more compelling evolutionary scenario is one where a prosodic protolanguage is taken to be the preliminary step to represent the hierarchy involved in linguistic structures within a linear auditory signal. We hypothesize that the establishment of a prosodic protolanguage results from an enhancement of a rhythmic system that transformed linear signals into speech prosody, which in turn can mark syntactic hierarchical relations. To develop this claim, we explore the role of prosodic cues on the parsing of syntactic structures, as well as neuroscientific evidence connecting the evolutionary development of music and linguistic capacities. Finally, we entertain the assumption that the capacity to generate hierarchical structure might have developed as part of tool-making in human prehistory, and hence was established prior to the enhancement of a prosodic protolinguistic system.
Affiliation(s)
- Shigeru Miyagawa
- Department of Linguistics and Philosophy, Massachusetts Institute of Technology, Cambridge, MA, United States
- Institute of Biosciences, University of São Paulo, São Paulo, Brazil
- Analía Arévalo
- School of Medicine, University of São Paulo, São Paulo, Brazil
- Vitor A. Nóbrega
- Institute of Romance Studies, University of Hamburg, Hamburg, Germany
13. Westermann B, Lotze M, Varra L, Versteeg N, Domin M, Nicolet L, Obrist M, Klepzig K, Marbot L, Lämmler L, Fiedler K, Wattendorf E. When laughter arrests speech: fMRI-based evidence. Philos Trans R Soc Lond B Biol Sci 2022; 377:20210182. PMID: 36126674. PMCID: PMC9489293. DOI: 10.1098/rstb.2021.0182.
Abstract
Who has not experienced that sensation of losing the power of speech owing to an involuntary bout of laughter? An investigation of this phenomenon affords an insight into the neuronal processes that underlie laughter. In our functional magnetic resonance imaging study, participants were made to laugh by tickling in a first condition; in a second, they were requested to produce vocal utterances while laughter was provoked by tickling. This investigation reveals increased neuronal activity in the sensorimotor cortex, the anterior cingulate gyrus, the insula, the nucleus accumbens, the hypothalamus and the periaqueductal grey for both conditions, thereby replicating the results of previous studies on ticklish laughter. However, further analysis indicates that activity in the emotion-associated regions is lower when tickling is accompanied by voluntary vocalization. Here, a typical pattern of activation is identified, including the primary sensory cortex, a ventral area of the anterior insula and the ventral tegmental field, to which belongs the nucleus ambiguus, the common effector organ for voluntary and involuntary vocalizations. During the conflictual voluntary-vocalization versus laughter experience, the laughter-triggering network appears to rely heavily on a sensory and a deep interoceptive analysis, as well as on motor effectors in the brainstem. This article is part of the theme issue 'Cracking the laugh code: laughter through the lens of biology, psychology and neuroscience'.
Affiliation(s)
- B Westermann
- Department of Neurosurgery, University Hospital Basel, Basel, Switzerland
- M Lotze
- Faculty of Medicine, University of Greifswald, Greifswald, Germany
- L Varra
- Faculty of Science and Medicine, University of Fribourg, Fribourg, Switzerland
- N Versteeg
- Faculty of Science and Medicine, University of Fribourg, Fribourg, Switzerland
- M Domin
- Faculty of Medicine, University of Greifswald, Greifswald, Germany
- L Nicolet
- College of Health Sciences Fribourg, Fribourg, Switzerland
- M Obrist
- College of Health Sciences Fribourg, Fribourg, Switzerland
- K Klepzig
- College of Health Sciences Fribourg, Fribourg, Switzerland
- L Marbot
- Faculty of Science and Medicine, University of Fribourg, Fribourg, Switzerland
- L Lämmler
- Faculty of Science and Medicine, University of Fribourg, Fribourg, Switzerland
- K Fiedler
- Faculty of Science and Medicine, University of Fribourg, Fribourg, Switzerland
- E Wattendorf
- Faculty of Science and Medicine, University of Fribourg, Fribourg, Switzerland; College of Health Sciences Fribourg, Fribourg, Switzerland
14
Bono D, Belyk M, Longo MR, Dick F. Beyond language: The unspoken sensory-motor representation of the tongue in non-primates, non-human and human primates. Neurosci Biobehav Rev 2022; 139:104730. [PMID: 35691470 DOI: 10.1016/j.neubiorev.2022.104730] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2021] [Revised: 04/06/2022] [Accepted: 06/06/2022] [Indexed: 11/28/2022]
Abstract
The English idiom "on the tip of my tongue" commonly acknowledges that something is known, but it cannot be immediately brought to mind. This phrase accurately describes sensorimotor functions of the tongue, which are fundamental for many tongue-related behaviors (e.g., speech), but often neglected by scientific research. Here, we review a wide range of studies conducted on non-primates, non-human and human primates with the aim of providing a comprehensive description of the cortical representation of the tongue's somatosensory inputs and motor outputs across different phylogenetic domains. First, we summarize how the properties of passive non-noxious mechanical stimuli are encoded in the putative somatosensory tongue area, which has a conserved location in the ventral portion of the somatosensory cortex across mammals. Second, we review how complex self-generated actions involving the tongue are represented in more anterior regions of the putative somato-motor tongue area. Finally, we describe multisensory response properties of the primate and non-primate tongue area by also defining how the cytoarchitecture of this area is affected by experience and deafferentation.
Affiliation(s)
- Davide Bono
- Birkbeck/UCL Centre for Neuroimaging, 26 Bedford Way, London WC1H 0AP, UK; Department of Experimental Psychology, UCL Division of Psychology and Language Sciences, 26 Bedford Way, London WC1H 0AP, UK
- Michel Belyk
- Department of Speech, Hearing, and Phonetic Sciences, UCL Division of Psychology and Language Sciences, 2 Wakefield Street, London WC1N 1PJ, UK
- Matthew R Longo
- Department of Psychological Sciences, Birkbeck College, University of London, Malet St, London WC1E 7HX, UK
- Frederic Dick
- Birkbeck/UCL Centre for Neuroimaging, 26 Bedford Way, London WC1H 0AP, UK; Department of Experimental Psychology, UCL Division of Psychology and Language Sciences, 26 Bedford Way, London WC1H 0AP, UK; Department of Psychological Sciences, Birkbeck College, University of London, Malet St, London WC1E 7HX, UK
15
Lameira AR, Santamaría-Bonfil G, Galeone D, Gamba M, Hardus ME, Knott CD, Morrogh-Bernard H, Nowak MG, Campbell-Smith G, Wich SA. Sociality predicts orangutan vocal phenotype. Nat Ecol Evol 2022; 6:644-652. [PMID: 35314786 PMCID: PMC9085614 DOI: 10.1038/s41559-022-01689-z] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2021] [Accepted: 02/02/2022] [Indexed: 11/09/2022]
Abstract
In humans, individuals' social setting determines which language is acquired and how. Social seclusion experiments show that sociality also guides vocal development in songbirds and marmoset monkeys, but the absence of similar great ape data has been interpreted as support for saltational notions of language origin, even though such laboratory protocols would be unethical with great apes. Here we characterize the repertoire entropy of orangutan individuals and show that in the wild, different degrees of sociality across populations are associated with different 'vocal personalities' in the form of distinct regimes of alarm call variants. In high-density populations, individuals are vocally more original and acoustically unpredictable but new call variants are short lived, whereas individuals in low-density populations are more conformative and acoustically consistent but also exhibit more complex call repertoires. These findings provide non-invasive evidence that sociality predicts vocal phenotype in a wild great ape. They falsify hypotheses that discredit great apes as having hardwired vocal development programmes and non-plastic vocal behaviour. Social settings mould vocal output in hominids besides humans.
Affiliation(s)
- Adriano R Lameira
- Department of Psychology, University of Warwick, Coventry, UK; School of Psychology and Neuroscience, University of St Andrews, St Andrews, UK
- Guillermo Santamaría-Bonfil
- Instituto Nacional de Electricidad y Energías Limpias, Gerencia de Tecnologías de la Información, Cuernavaca, México
- Deborah Galeone
- Department of Life Sciences and Systems Biology, University of Torino, Turin, Italy
- Marco Gamba
- Department of Life Sciences and Systems Biology, University of Torino, Turin, Italy
- Cheryl D Knott
- Department of Anthropology, Boston University, Boston, MA, USA
- Helen Morrogh-Bernard
- Borneo Nature Foundation, Palangka Raya, Indonesia; College of Life and Environmental Sciences, University of Exeter, Penryn, UK
- Matthew G Nowak
- The PanEco Foundation-Sumatran Orangutan Conservation Programme, Berg am Irchel, Switzerland; Department of Anthropology, Southern Illinois University, Carbondale, IL, USA
- Gail Campbell-Smith
- Yayasan Inisiasi Alam Rehabilitasi Indonesia, International Animal Rescue, Ketapang, Indonesia
- Serge A Wich
- School of Natural Sciences and Psychology, Liverpool John Moores University, Liverpool, UK; Faculty of Science, University of Amsterdam, Amsterdam, Netherlands
16
Abstract
The human voice carries socially relevant information such as how authoritative, dominant, and attractive the speaker sounds. However, some speakers may be able to manipulate listeners by modulating the shape and size of their vocal tract to exaggerate certain characteristics of their voice. We analysed the veridical size of speakers’ vocal tracts using real-time magnetic resonance imaging as they volitionally modulated their voice to sound larger or smaller, corresponding changes to the size implied by the acoustics of their voice, and their influence over the perceptions of listeners. Individual differences in this ability were marked, spanning from nearly incapable to nearly perfect vocal modulation, and were consistent across modalities of measurement. Further research is needed to determine whether speakers who are effective at vocal size exaggeration are better able to manipulate their social environment, and whether this variation is an inherited quality of the individual, or the result of life experiences such as vocal training.
17
Zhang Z, McGettigan C, Belyk M. Speech timing cues reveal deceptive speech in social deduction board games. PLoS One 2022; 17:e0263852. [PMID: 35148352 PMCID: PMC8836341 DOI: 10.1371/journal.pone.0263852] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2021] [Accepted: 01/27/2022] [Indexed: 11/18/2022] Open
Abstract
The faculty of language allows humans to state falsehoods in their choice of words. However, while what is said might easily uphold a lie, how it is said may reveal deception. Hence, some features of the voice that are difficult for liars to control may keep speech mostly, if not always, honest. Previous research has identified that speech timing and voice pitch cues can predict the truthfulness of speech, but this evidence has come primarily from laboratory experiments, which sacrifice ecological validity for experimental control. We obtained ecologically valid recordings of deceptive speech while observing natural utterances from players of a popular social deduction board game, in which players are assigned roles that induce either honest or dishonest interactions. When speakers chose to lie, they were prone to longer and more frequent pauses in their speech. This finding is in line with theoretical predictions that lying is more cognitively demanding. However, lying was not reliably associated with vocal pitch. This contradicts predictions that increased physiological arousal from lying might increase muscular tension in the larynx, but is consistent with human specialisations that grant Homo sapiens sapiens an unusual degree of control over the voice relative to other primates. The present study demonstrates the utility of social deduction board games as a means of making naturalistic observations of human behaviour from semi-structured social interactions.
Affiliation(s)
- Ziyun Zhang
- Department of Speech, Hearing and Phonetic Sciences, University College London, London, United Kingdom
- Carolyn McGettigan
- Department of Speech, Hearing and Phonetic Sciences, University College London, London, United Kingdom
- Michel Belyk
- Department of Speech, Hearing and Phonetic Sciences, University College London, London, United Kingdom
- Department of Psychology, Edge Hill University, Ormskirk, United Kingdom
18
Iwasaki SI, Yoshimura K, Asami T, Erdoğan S. Comparative morphology and physiology of the vocal production apparatus and the brain in the extant primates. Ann Anat 2022; 240:151887. [PMID: 35032565 DOI: 10.1016/j.aanat.2022.151887] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2021] [Revised: 12/26/2021] [Accepted: 12/28/2021] [Indexed: 01/04/2023]
Abstract
Objective data, mainly from the comparative anatomy of various organs related to human speech and language, are considered to unearth clues about the mechanisms behind language development. The two organs of the larynx and hyoid bone are considered to have evolved towards suitable positions and forms in preparation for the occurrence of the large repertoire of vocalization necessary for human speech. However, some researchers have asserted that there is no significant difference in these organs between humans and non-human primates. Speech production is dependent on the voluntary control of the respiratory, laryngeal, and vocal tract musculature. Such control is fully present in humans but only partially so in non-human primates, which appear to be able to voluntarily control only supralaryngeal articulators. Both humans and non-human primates have direct cortical innervation of motor neurons controlling the supralaryngeal vocal tract, but only humans appear to have direct cortical innervation of motor neurons controlling the larynx. In this review, we investigate the comparative morphology and function of the wide range of components involved in vocal production, including the larynx, the hyoid bone, the tongue, and the vocal brain. We would like to emphasize the importance of the tongue in the primary development of human speech and language. It is now time to reconsider the possibility of the tongue playing a definitive role in the emergence of human speech.
Affiliation(s)
- Shin-Ichi Iwasaki
- Faculty of Health Science, Gunma Paz University, Takasaki, Japan; The Nippon Dental University, Tokyo and Niigata, Japan
- Ken Yoshimura
- Department of Anatomy, The Nippon Dental University School of Life Dentistry at Niigata, Niigata, Japan
- Tomoichiro Asami
- Faculty of Rehabilitation, Gunma Paz University, Takasaki, Japan
- Serkan Erdoğan
- Department of Anatomy, Faculty of Veterinary Medicine, Tekirdağ Namık Kemal University, Tekirdağ, Turkey
19

20
Torres Borda L, Jadoul Y, Rasilo H, Salazar Casals A, Ravignani A. Vocal plasticity in harbour seal pups. Philos Trans R Soc Lond B Biol Sci 2021; 376:20200456. [PMID: 34719248 PMCID: PMC8558775 DOI: 10.1098/rstb.2020.0456] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 06/28/2021] [Indexed: 12/22/2022] Open
Abstract
Vocal plasticity can occur in response to environmental and biological factors, including conspecifics' vocalizations and noise. Pinnipeds are one of the few mammalian groups capable of vocal learning, and are therefore relevant to understanding the evolution of vocal plasticity in humans and other animals. Here, we investigate the vocal plasticity of harbour seals (Phoca vitulina), a species with vocal learning abilities observed in adulthood but not puppyhood. To evaluate early mammalian vocal development, we tested 1-3-week-old seal pups. We tailored noise playbacks to this species and age to induce seal pups to shift their fundamental frequency (f0), rather than adapt call amplitude or temporal characteristics. We exposed individual pups to low- and high-intensity bandpass-filtered noise, which spanned, and masked, their typical range of f0; simultaneously, we recorded pups' spontaneous calls. Unlike most mammals, pups modified their vocalizations by lowering their f0 in response to increased noise. This modulation was precise and adapted to the particular experimental manipulation of the noise condition. In addition, higher levels of noise induced less dispersion around the mean f0, suggesting that pups may have actively focused their phonatory efforts to target lower frequencies. Noise did not seem to affect call amplitude. However, one seal showed two characteristics of the Lombard effect known for human speech in noise: a significant increase in call amplitude and a flattening of spectral tilt. Our relatively low noise levels may have favoured f0 modulation while inhibiting amplitude adjustments. This lowering of f0 is unusual, as most animals commonly display no such f0 shift. Our data represent a relatively rare case in mammalian neonates, and have implications for the evolution of vocal plasticity and vocal learning across species, including humans. This article is part of the theme issue 'Voice modulation: from origin and mechanism to social impact (Part I)'.
Affiliation(s)
- Laura Torres Borda
- Comparative Bioacoustics Group, Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, The Netherlands
- Research Department, Sealcentre Pieterburen, Hoofdstraat 94-A, 9968 AG Pieterburen, The Netherlands
- Yannick Jadoul
- Comparative Bioacoustics Group, Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, The Netherlands
- Artificial Intelligence Lab, Vrije Universiteit Brussel, 1050 Elsene/Ixelles, Belgium
- Heikki Rasilo
- Artificial Intelligence Lab, Vrije Universiteit Brussel, 1050 Elsene/Ixelles, Belgium
- Anna Salazar Casals
- Research Department, Sealcentre Pieterburen, Hoofdstraat 94-A, 9968 AG Pieterburen, The Netherlands
- Andrea Ravignani
- Comparative Bioacoustics Group, Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, The Netherlands
- Research Department, Sealcentre Pieterburen, Hoofdstraat 94-A, 9968 AG Pieterburen, The Netherlands
21
Belyk M, Eichert N, McGettigan C. A dual larynx motor networks hypothesis. Philos Trans R Soc Lond B Biol Sci 2021; 376:20200392. [PMID: 34719252 PMCID: PMC8558777 DOI: 10.1098/rstb.2020.0392] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/05/2021] [Indexed: 01/14/2023] Open
Abstract
Humans are vocal modulators par excellence. This ability is supported in part by the dual representation of the laryngeal muscles in the motor cortex. Movement, however, is not the product of motor cortex alone but of a broader motor network. This network consists of brain regions that contain somatotopic maps that parallel the organization in motor cortex. We therefore present a novel hypothesis that the dual laryngeal representation is repeated throughout the broader motor network. In support of the hypothesis, we review existing literature that demonstrates the existence of network-wide somatotopy and present initial evidence for the hypothesis' plausibility. Understanding how this uniquely human phenotype in motor cortex interacts with broader brain networks is an important step toward understanding how humans evolved the ability to speak. We further suggest that this system may provide a means to study how individual components of the nervous system evolved within the context of neuronal networks. This article is part of the theme issue 'Voice modulation: from origin and mechanism to social impact (Part I)'.
Affiliation(s)
- Michel Belyk
- Department of Speech, Hearing and Phonetic Sciences, University College London, London WC1N 1PJ, UK
- Department of Psychology, Edge Hill University, Ormskirk L39 4QP, UK
- Nicole Eichert
- Wellcome Centre for Integrative Neuroimaging, Centre for Functional MRI of the Brain (FMRIB), Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, Oxford OX3 9DU, UK
- Carolyn McGettigan
- Department of Speech, Hearing and Phonetic Sciences, University College London, London WC1N 1PJ, UK
22
Waters S, Kanber E, Lavan N, Belyk M, Carey D, Cartei V, Lally C, Miquel M, McGettigan C. Singers show enhanced performance and neural representation of vocal imitation. Philos Trans R Soc Lond B Biol Sci 2021; 376:20200399. [PMID: 34719245 PMCID: PMC8558773 DOI: 10.1098/rstb.2020.0399] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/06/2021] [Indexed: 12/17/2022] Open
Abstract
Humans have a remarkable capacity to finely control the muscles of the larynx, via distinct patterns of cortical topography and innervation that may underpin our sophisticated vocal capabilities compared with non-human primates. Here, we investigated the behavioural and neural correlates of laryngeal control, and their relationship to vocal expertise, using an imitation task that required adjustments of larynx musculature during speech. Highly trained human singers and non-singer control participants modulated voice pitch and vocal tract length (VTL) to mimic auditory speech targets, while undergoing real-time anatomical scans of the vocal tract and functional scans of brain activity. Multivariate analyses of speech acoustics, larynx movements and brain activation data were used to quantify vocal modulation behaviour and to search for neural representations of the two modulated vocal parameters during the preparation and execution of speech. We found that singers showed more accurate task-relevant modulations of speech pitch and VTL (i.e. larynx height, as measured with vocal tract MRI) during speech imitation; this was accompanied by stronger representation of VTL within a region of the right somatosensory cortex. Our findings suggest a common neural basis for enhanced vocal control in speech and song. This article is part of the theme issue 'Voice modulation: from origin and mechanism to social impact (Part I)'.
Affiliation(s)
- Sheena Waters
- Department of Psychology, Royal Holloway, University of London, Egham TW20 0EX, UK
- Wolfson Institute of Preventive Medicine, Barts and The London School of Medicine and Dentistry, Charterhouse Square, London EC1M 6BQ, UK
- Elise Kanber
- Department of Psychology, Royal Holloway, University of London, Egham TW20 0EX, UK
- Speech, Hearing and Phonetic Sciences, University College London, 2 Wakefield Street, London WC1N 1PF, UK
- Nadine Lavan
- Speech, Hearing and Phonetic Sciences, University College London, 2 Wakefield Street, London WC1N 1PF, UK
- Department of Biological and Experimental Psychology, Queen Mary University of London, Mile End Road, Bethnal Green, London E1 4NS, UK
- Michel Belyk
- Speech, Hearing and Phonetic Sciences, University College London, 2 Wakefield Street, London WC1N 1PF, UK
- Daniel Carey
- Department of Psychology, Royal Holloway, University of London, Egham TW20 0EX, UK
- Data & AI, Novartis Pharmaceuticals, Novartis Global Service Center, 203 Merrion Road, Dublin 4 D04 NN12, Ireland
- Valentina Cartei
- Equipe de Neuro-Ethologie Sensorielle (ENES), Centre de Recherche en Neurosciences de Lyon, Université de Lyon/Saint-Etienne, 21 rue du Docteur Paul Michelon, 42100 Saint-Etienne, France
- Department of Psychology, Institute of Education, Health and Social Sciences, University of Chichester, College Lane, Chichester, West Sussex PO19 6PE, UK
- Clare Lally
- Department of Psychology, Royal Holloway, University of London, Egham TW20 0EX, UK
- Speech, Hearing and Phonetic Sciences, University College London, 2 Wakefield Street, London WC1N 1PF, UK
- Marc Miquel
- Department of Clinical Physics, Barts Health NHS Trust, London EC1A 7BE, UK
- William Harvey Research Institute, Queen Mary University of London, London EC1M 6BQ, UK
- Carolyn McGettigan
- Department of Psychology, Royal Holloway, University of London, Egham TW20 0EX, UK
- Speech, Hearing and Phonetic Sciences, University College London, 2 Wakefield Street, London WC1N 1PF, UK
23
Hughes SM, Puts DA. Vocal modulation in human mating and competition. Philos Trans R Soc Lond B Biol Sci 2021; 376:20200388. [PMID: 34719246 DOI: 10.1098/rstb.2020.0388] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
The human voice is dynamic, and people modulate their voices across different social interactions. This article presents a review of the literature examining natural vocal modulation in social contexts relevant to human mating and intrasexual competition. Altering acoustic parameters during speech, particularly pitch, in response to mating and competitive contexts can influence social perception and indicate certain qualities of the speaker. For instance, a lowered voice pitch is often used to exert dominance, display status and compete with rivals. Changes in voice can also serve as a salient medium for signalling a person's attraction to another, and there is evidence to support the notion that attraction and/or romantic interest can be distinguished through vocal tones alone. Individuals can purposely change their vocal behaviour in an attempt to sound more attractive and to facilitate courtship success. Several findings also point to the effectiveness of vocal change as a mechanism for communicating relationship status. As future studies continue to explore vocal modulation in the arena of human mating, we will gain a better understanding of how and why vocal modulation varies across social contexts and its impact on receiver psychology. This article is part of the theme issue 'Voice modulation: from origin and mechanism to social impact (Part I)'.
Affiliation(s)
- Susan M Hughes
- Psychology Department, Albright College, Reading, PA 19612, USA
- David A Puts
- Department of Anthropology, Pennsylvania State University, University Park, PA 16802, USA
24
Mekki Y, Guillemot V, Lemaitre H, Carrion-Castillo A, Forkel S, Frouin V, Philippe C. The genetic architecture of language functional connectivity. Neuroimage 2021; 249:118795. [PMID: 34929384 DOI: 10.1016/j.neuroimage.2021.118795] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2021] [Revised: 11/11/2021] [Accepted: 12/08/2021] [Indexed: 02/08/2023] Open
Abstract
Language is a unique trait of the human species, of which the genetic architecture remains largely unknown. Through studies of language disorders, many candidate genes have been identified. However, such a complex and multifactorial trait is unlikely to be driven by only a few genes, and case-control studies, suffering from a lack of power, struggle to uncover significant variants. In parallel, neuroimaging has significantly contributed to the understanding of structural and functional aspects of language in the human brain, and the recent availability of large-scale cohorts like UK Biobank has made it possible to study language via image-derived endophenotypes in the general population. Because of its strong relationship with task-based fMRI (tbfMRI) activations and its ease of acquisition, resting-state functional MRI (rsfMRI) has become widely used, making it a good surrogate of functional neuronal processes. Taking advantage of such a synergistic system by aggregating effects across spatially distributed traits, we performed a multivariate genome-wide association study (mvGWAS) between genetic variations and resting-state functional connectivity (FC) of classical brain language areas in the inferior frontal (pars opercularis, triangularis and orbitalis), temporal and inferior parietal lobes (angular and supramarginal gyri), in 32,186 participants from UK Biobank. Twenty genomic loci were found associated with language FCs, out of which three were replicated in an independent replication sample. A locus in 3p11.1, regulating EPHA3 gene expression, is found associated with FCs of the semantic component of the language network, while a locus in 15q14, regulating THBS1 gene expression, is found associated with FCs of perceptual-motor language processing, bringing novel insights into the neurobiology of language.
Affiliation(s)
- Yasmina Mekki
- NeuroSpin, Institut Joliot, CEA - Université Paris-Saclay, Gif-sur-Yvette, 91191, France
- Vincent Guillemot
- Hub de Bioinformatique et Biostatistique, Département Biologie Computationnelle, Institut Pasteur, USR 3756 CNRS, Paris, France
- Hervé Lemaitre
- Groupe d'Imagerie Neurofonctionnelle, Institut des Maladies Neurodégénératives, CNRS UMR 5293, Université de Bordeaux, Centre Broca Nouvelle-Aquitaine, Bordeaux, France
- Stephanie Forkel
- Groupe d'Imagerie Neurofonctionnelle, Institut des Maladies Neurodégénératives, CNRS UMR 5293, Université de Bordeaux, Centre Broca Nouvelle-Aquitaine, Bordeaux, France; Brain Connectivity and Behaviour Laboratory, Sorbonne Universities, Paris, France; Department of Neuroimaging, Institute of Psychiatry, Psychology and Neurosciences, King's College London, UK
- Vincent Frouin
- NeuroSpin, Institut Joliot, CEA - Université Paris-Saclay, Gif-sur-Yvette, 91191, France
- Cathy Philippe
- NeuroSpin, Institut Joliot, CEA - Université Paris-Saclay, Gif-sur-Yvette, 91191, France
25
Dheerendra P, Baumann S, Joly O, Balezeau F, Petkov CI, Thiele A, Griffiths TD. The Representation of Time Windows in Primate Auditory Cortex. Cereb Cortex 2021; 32:3568-3580. [PMID: 34875029 PMCID: PMC9376871 DOI: 10.1093/cercor/bhab434] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2020] [Revised: 11/04/2021] [Accepted: 11/05/2021] [Indexed: 11/13/2022] Open
Abstract
Whether human and nonhuman primates process the temporal dimension of sound similarly remains an open question. We examined the brain basis for the processing of acoustic time windows in rhesus macaques using stimuli simulating the spectrotemporal complexity of vocalizations. We conducted functional magnetic resonance imaging in awake macaques to identify the functional anatomy of response patterns to different time windows. We then contrasted it against the responses to identical stimuli used previously in humans. Despite a similar overall pattern, ranging from the processing of shorter time windows in core areas to longer time windows in lateral belt and parabelt areas, monkeys exhibited lower sensitivity to longer time windows than humans. This difference in neuronal sensitivity might be explained by a specialization of the human brain for processing longer time windows in speech.
Affiliation(s)
- Pradeep Dheerendra
- Biosciences Institute, Newcastle University, Newcastle upon Tyne NE2 4HH, UK; Institute of Neuroscience and Psychology, University of Glasgow, Glasgow G12 8QB, UK
- Simon Baumann
- National Institute of Mental Health, NIH, Bethesda, MD 20892-1148, USA; Department of Psychology, University of Turin, Torino 10124, Italy
- Olivier Joly
- Biosciences Institute, Newcastle University, Newcastle upon Tyne NE2 4HH, UK
- Fabien Balezeau
- Biosciences Institute, Newcastle University, Newcastle upon Tyne NE2 4HH, UK
- Alexander Thiele
- Biosciences Institute, Newcastle University, Newcastle upon Tyne NE2 4HH, UK
- Timothy D Griffiths
- Biosciences Institute, Newcastle University, Newcastle upon Tyne NE2 4HH, UK
26
Venezia JH, Richards VM, Hickok G. Speech-Driven Spectrotemporal Receptive Fields Beyond the Auditory Cortex. Hear Res 2021; 408:108307. [PMID: 34311190 PMCID: PMC8378265 DOI: 10.1016/j.heares.2021.108307] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/08/2021] [Revised: 06/15/2021] [Accepted: 06/30/2021] [Indexed: 10/20/2022]
Abstract
We recently developed a method to estimate speech-driven spectrotemporal receptive fields (STRFs) using fMRI. The method uses spectrotemporal modulation filtering, a form of acoustic distortion that renders speech sometimes intelligible and sometimes unintelligible. Using this method, we found significant STRF responses only in classic auditory regions throughout the superior temporal lobes. However, our analysis was not optimized to detect small clusters of STRFs as might be expected in non-auditory regions. Here, we re-analyze our data using a more sensitive multivariate statistical test for cross-subject alignment of STRFs, and we identify STRF responses in non-auditory regions including the left dorsal premotor cortex (dPM), left inferior frontal gyrus (IFG), and bilateral calcarine sulcus (calcS). All three regions responded more to intelligible than unintelligible speech, but left dPM and calcS responded significantly to vocal pitch and demonstrated strong functional connectivity with early auditory regions. Left dPM's STRF generated the best predictions of activation on trials rated as unintelligible by listeners, a hallmark auditory profile. IFG, on the other hand, responded almost exclusively to intelligible speech and was functionally connected with classic speech-language regions in the superior temporal sulcus and middle temporal gyrus. IFG's STRF was also (weakly) able to predict activation on unintelligible trials, suggesting the presence of a partial 'acoustic trace' in the region. We conclude that left dPM is part of the human dorsal laryngeal motor cortex, a region previously shown to be capable of operating in an 'auditory mode' to encode vocal pitch. Further, given previous observations that IFG is involved in syntactic working memory and/or processing of linear order, we conclude that IFG is part of a higher-order speech circuit that exerts a top-down influence on processing of speech acoustics. Finally, because calcS is modulated by emotion, we speculate that changes in the quality of vocal pitch may have contributed to its response.
Collapse
Affiliation(s)
- Jonathan H Venezia
- VA Loma Linda Healthcare System, Loma Linda, CA, United States; Dept. of Otolaryngology, Loma Linda University School of Medicine, Loma Linda, CA, United States.
| | - Virginia M Richards
- Depts. of Cognitive Sciences and Language Science, University of California, Irvine, Irvine, CA, United States
| | - Gregory Hickok
- Depts. of Cognitive Sciences and Language Science, University of California, Irvine, Irvine, CA, United States
| |
Collapse
|
27
|
Human larynx motor cortices coordinate respiration for vocal-motor control. Neuroimage 2021; 239:118326. [PMID: 34216772 DOI: 10.1016/j.neuroimage.2021.118326] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2021] [Revised: 05/22/2021] [Accepted: 06/29/2021] [Indexed: 11/23/2022] Open
Abstract
Vocal flexibility is a hallmark of the human species, most particularly the capacity to speak and sing. This ability is supported in part by the evolution of a direct neural pathway linking the motor cortex to the brainstem nucleus that controls the larynx, the primary sound source for communication. Early brain imaging studies demonstrated that the larynx motor cortex at the dorsal end of the orofacial division of motor cortex (dLMC) integrated laryngeal and respiratory control, thereby coordinating two major muscular systems that are necessary for vocalization. Neurosurgical studies have since demonstrated the existence of a second larynx motor area at the ventral extent of the orofacial motor division (vLMC) of motor cortex. The vLMC has been presumed to be less relevant to speech motor control, but its functional role remains unknown. We employed a novel ultra-high field (7T) magnetic resonance imaging paradigm that combined singing and whistling simple melodies to localise the larynx motor cortices and test their involvement in respiratory motor control. Surprisingly, whistling activated both 'larynx areas' more strongly than singing, despite the reduced involvement of the larynx during whistling. We provide further evidence for the existence of two larynx motor areas in the human brain, and the first evidence that laryngeal-respiratory integration is a shared property of both larynx motor areas. We outline explicit predictions about the descending motor pathways that give these cortical areas access to both the laryngeal and respiratory systems, and discuss the implications for the evolution of speech.
Collapse
|
28
|
Veit L, Tian LY, Monroy Hernandez CJ, Brainard MS. Songbirds can learn flexible contextual control over syllable sequencing. eLife 2021; 10:61610. [PMID: 34060473 PMCID: PMC8169114 DOI: 10.7554/elife.61610] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2020] [Accepted: 04/25/2021] [Indexed: 11/23/2022] Open
Abstract
The flexible control of sequential behavior is a fundamental aspect of speech, enabling endless reordering of a limited set of learned vocal elements (syllables or words). Songbirds are phylogenetically distant from humans but share both the capacity for vocal learning and neural circuitry for vocal control that includes direct pallial-brainstem projections. Based on these similarities, we hypothesized that songbirds might likewise be able to learn flexible, moment-by-moment control over vocalizations. Here, we demonstrate that Bengalese finches (Lonchura striata domestica), which sing variable syllable sequences, can learn to rapidly modify the probability of specific sequences (e.g. ‘ab-c’ versus ‘ab-d’) in response to arbitrary visual cues. Moreover, once learned, this modulation of sequencing occurs immediately following changes in contextual cues and persists without external reinforcement. Our findings reveal a capacity in songbirds for learned contextual control over syllable sequencing that parallels human cognitive control over syllable sequencing in speech. Human speech and birdsong share numerous parallels. Both humans and birds learn their vocalizations during critical phases early in life, and both learn by imitating adults. Moreover, both humans and songbirds possess specific circuits in the brain that connect the forebrain to midbrain vocal centers. Humans can flexibly control what they say and how by reordering a fixed set of syllables into endless combinations, an ability critical to human speech and language. Birdsongs also vary depending on their context, and melodies to seduce a mate will be different from aggressive songs to warn other males to stay away. However, so far it was unclear whether songbirds are also capable of modifying songs independent of social or other naturally relevant contexts. To test whether birds can control their songs in a purposeful way, Veit et al. trained adult male Bengalese finches to change the sequence of their songs in response to random colored lights that had no natural meaning to the birds. A specific computer program was used to detect different variations on a theme that the bird naturally produced (for example, “ab-c” versus “ab-d”), and rewarded birds for singing one sequence when the light was yellow, and the other when it was green. Gradually, the finches learned to modify their songs and were able to switch between the appropriate sequences as soon as the light cues changed. This ability persisted for days, even without any further training. This suggests that songbirds can learn to flexibly and purposefully modify the way in which they sequence the notes in their songs, in a manner that parallels how humans control syllable sequencing in speech. Moreover, birds can learn to do this ‘on command’ in response to an arbitrarily chosen signal, even if it is not something that would impact their song in nature. Songbirds are an important model to study brain circuits involved in vocal learning. They are one of the few animals that, like humans, learn their vocalizations by imitating conspecifics. The finding that they can also flexibly control vocalizations may help shed light on the interactions between cognitive processing and sophisticated vocal learning abilities.
Collapse
Affiliation(s)
- Lena Veit
- Center for Integrative Neuroscience and Howard Hughes Medical Institute, University of California, San Francisco, San Francisco, United States
| | - Lucas Y Tian
- Center for Integrative Neuroscience and Howard Hughes Medical Institute, University of California, San Francisco, San Francisco, United States
| | - Christian J Monroy Hernandez
- Center for Integrative Neuroscience and Howard Hughes Medical Institute, University of California, San Francisco, San Francisco, United States
| | - Michael S Brainard
- Center for Integrative Neuroscience and Howard Hughes Medical Institute, University of California, San Francisco, San Francisco, United States
| |
Collapse
|
29
|
Choe HN, Jarvis ED. The role of sex chromosomes and sex hormones in vocal learning systems. Horm Behav 2021; 132:104978. [PMID: 33895570 DOI: 10.1016/j.yhbeh.2021.104978] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/16/2021] [Revised: 03/22/2021] [Accepted: 03/23/2021] [Indexed: 12/12/2022]
Abstract
Vocal learning is the ability to imitate and modify sounds through auditory experience, a rare trait found in only a few lineages of mammals and birds. It is a critical component of human spoken language, allowing us to verbally transmit speech repertoires and knowledge across generations. In many vocal learning species, the vocal learning trait is sexually dimorphic, where it is either limited to males or present in both sexes to different degrees. In humans, recent findings have revealed subtle sexual dimorphism in vocal learning/spoken language brain regions and some associated disorders. For songbirds, where the neural mechanisms of vocal learning have been well studied, vocal learning appears to have been present in both sexes at the origin of the lineage and was then independently lost in females of some subsequent lineages. This loss is associated with an interplay between sex chromosomes and sex steroid hormones. Even in species with little dimorphism, like humans, sex chromosomes and hormones still have some influence on learned vocalizations. Here we present a brief synthesis of these studies, in the context of sex determination broadly, and identify areas of needed investigation to further understand how sex chromosomes and sex steroid hormones help establish sexually dimorphic neural structures for vocal learning.
Collapse
Affiliation(s)
- Ha Na Choe
- Duke University Medical Center, The Rockefeller University, Howard Hughes Medical Institute, United States of America.
| | - Erich D Jarvis
- Duke University Medical Center, The Rockefeller University, Howard Hughes Medical Institute, United States of America.
| |
Collapse
|
30
|
Asano R. The evolution of hierarchical structure building capacity for language and music: a bottom-up perspective. Primates 2021; 63:417-428. [PMID: 33839984 PMCID: PMC9463250 DOI: 10.1007/s10329-021-00905-x] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2020] [Accepted: 03/26/2021] [Indexed: 12/27/2022]
Abstract
A central property of human language is its hierarchical structure. Humans can flexibly combine elements to build a hierarchical structure expressing rich semantics. A hierarchical structure is also considered as playing a key role in many other human cognitive domains. In music, auditory-motor events are combined into hierarchical pitch and/or rhythm structure expressing affect. How did such a hierarchical structure building capacity evolve? This paper investigates this question from a bottom-up perspective based on a set of action-related components as a shared basis underlying cognitive capacities of nonhuman primates and humans. In particular, I argue that the evolution of hierarchical structure building capacity for language and music is tractable for comparative evolutionary study once we focus on the gradual elaboration of shared brain architecture: the cortico-basal ganglia-thalamocortical circuits for hierarchical control of goal-directed action and the dorsal pathways for hierarchical internal models. I suggest that this gradual elaboration of the action-related brain architecture in the context of vocal control and tool-making went hand in hand with amplification of working memory, and made the brain ready for hierarchical structure building in language and music.
Collapse
Affiliation(s)
- Rie Asano
- Systematic Musicology, Institute of Musicology, University of Cologne, Cologne, Germany.
| |
Collapse
|
31
|
Rocchi F, Oya H, Balezeau F, Billig AJ, Kocsis Z, Jenison RL, Nourski KV, Kovach CK, Steinschneider M, Kikuchi Y, Rhone AE, Dlouhy BJ, Kawasaki H, Adolphs R, Greenlee JDW, Griffiths TD, Howard MA, Petkov CI. Common fronto-temporal effective connectivity in humans and monkeys. Neuron 2021; 109:852-868.e8. [PMID: 33482086 PMCID: PMC7927917 DOI: 10.1016/j.neuron.2020.12.026] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2020] [Revised: 10/02/2020] [Accepted: 12/30/2020] [Indexed: 01/24/2023]
Abstract
Human brain pathways supporting language and declarative memory are thought to have differentiated substantially during evolution. However, cross-species comparisons are missing on site-specific effective connectivity between regions important for cognition. We harnessed functional imaging to visualize the effects of direct electrical brain stimulation in macaque monkeys and human neurosurgery patients. We discovered comparable effective connectivity between caudal auditory cortex and both ventro-lateral prefrontal cortex (VLPFC, including area 44) and parahippocampal cortex in both species. Human-specific differences were clearest in the form of stronger hemispheric lateralization effects. In humans, electrical tractography revealed remarkably rapid evoked potentials in VLPFC following auditory cortex stimulation, and speech sounds drove VLPFC, consistent with prior evidence in monkeys of direct auditory cortex projections to homologous vocalization-responsive regions. The results identify a common effective connectivity signature in human and nonhuman primates, which from auditory cortex appears equally direct to VLPFC and indirect to the hippocampus.
Collapse
Affiliation(s)
- Francesca Rocchi
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK.
| | - Hiroyuki Oya
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, USA.
| | - Fabien Balezeau
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK
| | | | - Zsuzsanna Kocsis
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK; Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
| | - Rick L Jenison
- Department of Neuroscience, University of Wisconsin - Madison, Madison, WI, USA
| | - Kirill V Nourski
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, USA
| | | | - Mitchell Steinschneider
- Departments of Neurology and Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
| | - Yukiko Kikuchi
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK
| | - Ariane E Rhone
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
| | - Brian J Dlouhy
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, USA
| | - Hiroto Kawasaki
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
| | - Ralph Adolphs
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
| | - Jeremy D W Greenlee
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, USA
| | - Timothy D Griffiths
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK; Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA; Wellcome Centre for Human Neuroimaging, University College London, London, UK
| | - Matthew A Howard
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, USA; Pappajohn Biomedical Institute, The University of Iowa, Iowa City, IA, USA
| | - Christopher I Petkov
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK.
| |
Collapse
|
32
|
Pisanski K, Reby D. Efficacy in deceptive vocal exaggeration of human body size. Nat Commun 2021; 12:968. [PMID: 33579910 PMCID: PMC7881139 DOI: 10.1038/s41467-021-21008-7] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2020] [Accepted: 01/05/2021] [Indexed: 11/10/2022] Open
Abstract
How can deceptive communication signals exist in an evolutionarily stable signalling system? To resolve this age-old honest signalling paradox, researchers must first establish whether deception benefits deceivers. However, while vocal exaggeration is widespread in the animal kingdom and presumably adaptive, its effectiveness in biasing listeners has not been established. Here, we show that human listeners can detect deceptive vocal signals produced by vocalisers who volitionally shift their voice frequencies to exaggerate or attenuate their perceived size. Listeners can also judge the relative heights of cheaters, whose deceptive signals retain reliable acoustic cues to interindividual height. Importantly, although vocal deception biases listeners' absolute height judgments, listeners recalibrate their height assessments for vocalisers they correctly and concurrently identify as deceptive, particularly men judging men. Thus, while size exaggeration can fool listeners, benefiting the deceiver, its detection can reduce bias and mitigate costs for listeners, underscoring an unremitting arms-race between signallers and receivers in animal communication.
Collapse
Affiliation(s)
- Katarzyna Pisanski
- Equipe de Neuro-Ethologie Sensorielle (ENES), Centre de Recherche en Neurosciences de Lyon (CRNL), CNRS, INSERM, University of Lyon/Saint-Étienne, Saint-Étienne, France; Institute of Psychology, University of Wrocław, Wrocław, Poland.
| | - David Reby
- Equipe de Neuro-Ethologie Sensorielle (ENES), Centre de Recherche en Neurosciences de Lyon (CRNL), CNRS, INSERM, University of Lyon/Saint-Étienne, Saint-Étienne, France
| |
Collapse
|
33
|
Neef NE, Primaßin A, von Gudenberg AW, Dechent P, Riedel C, Paulus W, Sommer M. Two cortical representations of voice control are differentially involved in speech fluency. Brain Commun 2021; 3:fcaa232. [PMID: 33959707 PMCID: PMC8088816 DOI: 10.1093/braincomms/fcaa232] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2020] [Revised: 11/29/2020] [Accepted: 12/01/2020] [Indexed: 01/01/2023] Open
Abstract
Recent studies have identified two distinct cortical representations of voice control in humans: the ventral and the dorsal laryngeal motor cortex. Strikingly, while persistent developmental stuttering has been linked to a white-matter deficit in the ventral laryngeal motor cortex, intensive fluency-shaping intervention modulated the functional connectivity of the dorsal laryngeal motor cortical network. Currently, it is unknown whether the underlying structural network organization of these two laryngeal representations is distinct or differently shaped by stuttering intervention. Using probabilistic diffusion tractography in 22 individuals who stutter and participated in a fluency-shaping intervention, in 18 individuals who stutter and did not participate in the intervention, and in 28 control participants, we here compare structural networks of the dorsal and ventral laryngeal motor cortex and test intervention-related white-matter changes. We show (i) that all participants have weaker ventral laryngeal motor cortex connections compared to the dorsal laryngeal motor cortex network, regardless of speech fluency, (ii) that connections of the ventral laryngeal motor cortex were stronger in fluent speakers, (iii) that the connectivity profile of the ventral laryngeal motor cortex predicted stuttering severity, and (iv) that the ventral laryngeal motor cortex network is resistant to a fluency-shaping intervention. Our findings substantiate a weaker structural organization of the ventral laryngeal motor cortical network in developmental stuttering and imply that assisted recovery supports neural compensation rather than normalization. Moreover, the resulting dissociation provides evidence for functionally segregated roles of the ventral and dorsal laryngeal motor cortical networks.
Collapse
Affiliation(s)
- Nicole E Neef
- Department of Clinical Neurophysiology, Georg August University, Göttingen 37075, Germany
- Department of Diagnostic and Interventional Neuroradiology, Georg August University, Göttingen 37075, Germany
| | - Annika Primaßin
- Department of Clinical Neurophysiology, Georg August University, Göttingen 37075, Germany
| | | | - Peter Dechent
- Department of Cognitive Neurology, MR Research in Neurosciences, Georg August University, Göttingen 37075, Germany
| | - Christian Riedel
- Department of Diagnostic and Interventional Neuroradiology, Georg August University, Göttingen 37075, Germany
| | - Walter Paulus
- Department of Clinical Neurophysiology, Georg August University, Göttingen 37075, Germany
| | - Martin Sommer
- Department of Clinical Neurophysiology, Georg August University, Göttingen 37075, Germany
- Department of Neurology, Georg August University, Göttingen 37075, Germany
| |
Collapse
|
34
|
Belkhir JR, Fitch WT, Garcea FE, Chernoff BL, Sims MH, Navarrete E, Haber S, Paul DA, Smith SO, Pilcher WH, Mahon BZ. Direct electrical stimulation evidence for a dorsal motor area with control of the larynx. Brain Stimul 2021; 14:110-112. [PMID: 33217608 DOI: 10.1016/j.brs.2020.11.013] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2020] [Revised: 10/19/2020] [Accepted: 11/12/2020] [Indexed: 11/17/2022] Open
Affiliation(s)
- J Raouf Belkhir
- Department of Psychology, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA, 15213, USA; Carnegie Mellon Neuroscience Institute, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA, 15213, USA
| | - W Tecumseh Fitch
- Department of Behavioral and Cognitive Biology, Faculty of Life Sciences, University of Vienna, Althanstrasse 14, 1090, Vienna, Austria
| | - Frank E Garcea
- Department of Neurosurgery, University of Rochester Medical Center, 601 Elmwood Avenue, Rochester, NY, 14642, USA
| | - Benjamin L Chernoff
- Department of Psychology, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA, 15213, USA
| | - Max H Sims
- Department of Neurosurgery, University of Rochester Medical Center, 601 Elmwood Avenue, Rochester, NY, 14642, USA
| | - Eduardo Navarrete
- Dipartimento di Psicologia Dello Sviluppo e Della Socializzazione, Università di Padova, Via Venezia 8, 35131, Padova, Italy
| | - Sam Haber
- Department of Neurosurgery, University of Rochester Medical Center, 601 Elmwood Avenue, Rochester, NY, 14642, USA
| | - David A Paul
- Department of Neurosurgery, University of Rochester Medical Center, 601 Elmwood Avenue, Rochester, NY, 14642, USA
| | - Susan O Smith
- Department of Neurosurgery, University of Rochester Medical Center, 601 Elmwood Avenue, Rochester, NY, 14642, USA
| | - Webster H Pilcher
- Department of Neurosurgery, University of Rochester Medical Center, 601 Elmwood Avenue, Rochester, NY, 14642, USA
| | - Bradford Z Mahon
- Department of Psychology, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA, 15213, USA; Carnegie Mellon Neuroscience Institute, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA, 15213, USA; Department of Neurosurgery, University of Rochester Medical Center, 601 Elmwood Avenue, Rochester, NY, 14642, USA; Department of Neurology, University of Rochester Medical Center, 601 Elmwood Avenue, Rochester, NY, 14642, USA.
| |
Collapse
|
35
|
Eichert N, Watkins KE, Mars RB, Petrides M. Morphological and functional variability in central and subcentral motor cortex of the human brain. Brain Struct Funct 2020; 226:263-279. [PMID: 33355695 PMCID: PMC7817568 DOI: 10.1007/s00429-020-02180-w] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2020] [Accepted: 11/16/2020] [Indexed: 11/30/2022]
Abstract
There is a long-established link between anatomy and function in the somatomotor system in the mammalian cerebral cortex. The morphology of the central sulcus is predictive of the location of functional activation peaks relating to movement of different effectors in individuals. By contrast, morphological variation in the subcentral region and its relationship to function is, as yet, unknown. Investigating the subcentral region is particularly important in the context of speech, since control of the larynx during human speech production is related to activity in this region. Here, we examined the relationship between morphology in the central and subcentral region and the location of functional activity during movement of the hand, lips, tongue, and larynx at the individual participant level. We provide a systematic description of the sulcal patterns of the subcentral and adjacent opercular cortex, including the inter-individual variability in sulcal morphology. We show that, in the majority of participants, the anterior subcentral sulcus is not continuous, but consists of two distinct segments. A robust relationship between morphology of the central and subcentral sulcal segments and movement of different effectors is demonstrated. Inter-individual variability of underlying anatomy might thus explain previous inconsistent findings, in particular regarding the ventral larynx area in subcentral cortex. A surface registration based on sulcal labels indicated that such anatomical information can improve the alignment of functional data for group studies.
Collapse
Affiliation(s)
- Nicole Eichert
- Wellcome Centre for Integrative Neuroimaging, Centre for Functional MRI of the Brain (FMRIB), Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, Oxford, OX3 9DU, UK.
| | - Kate E Watkins
- Wellcome Centre for Integrative Neuroimaging, Department of Experimental Psychology, University of Oxford, Oxford, OX2 6GG, UK
| | - Rogier B Mars
- Wellcome Centre for Integrative Neuroimaging, Centre for Functional MRI of the Brain (FMRIB), Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, Oxford, OX3 9DU, UK; Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, 6525 AJ, Nijmegen, The Netherlands
| | - Michael Petrides
- Department of Neurology and Neurosurgery, Montreal Neurological Institute and Hospital, McGill University, 3801 University Street, Montreal, QC, H3A 2B4, Canada; Department of Psychology, McGill University, 1205 Dr. Penfield Avenue, Montreal, QC, H3A 1B1, Canada
| |
Collapse
|
36
|
Speech frequency-following response in human auditory cortex is more than a simple tracking. Neuroimage 2020; 226:117545. [PMID: 33186711 DOI: 10.1016/j.neuroimage.2020.117545] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2020] [Revised: 10/29/2020] [Accepted: 11/02/2020] [Indexed: 11/20/2022] Open
Abstract
The human auditory cortex has recently been found to contribute to the frequency-following response (FFR), and the cortical component has been shown to be more relevant to speech perception. However, it is not clear how the cortical FFR may contribute to the processing of the speech fundamental frequency (F0) and dynamic pitch. Using intracranial EEG recordings, we observed a significant FFR at F0 for both speech and speech-like harmonic complex stimuli in the human auditory cortex, even in the missing-fundamental condition. Both the spectral amplitude and phase coherence of the cortical FFR showed a significant harmonic preference and attenuated from the primary auditory cortex to the surrounding associative auditory cortex. The phase coherence of the speech FFR was significantly higher than that of the harmonic complex stimuli, especially in the left hemisphere, showing high timing fidelity of the cortical FFR in tracking the dynamic F0 of speech. Spectrally, the frequency band of the cortical FFR largely overlapped with the range of human vocal pitch. Taken together, our study parsed the intrinsic properties of the cortical FFR and revealed a preference for speech-like sounds, supporting its potential role in processing speech intonation and lexical tones.
Collapse
|
37
|
Eichert N, Papp D, Mars RB, Watkins KE. Mapping Human Laryngeal Motor Cortex during Vocalization. Cereb Cortex 2020; 30:6254-6269. [PMID: 32728706 PMCID: PMC7610685 DOI: 10.1093/cercor/bhaa182] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2020] [Revised: 06/01/2020] [Accepted: 06/06/2020] [Indexed: 01/17/2023] Open
Abstract
The representations of the articulators involved in human speech production are organized somatotopically in primary motor cortex. The neural representation of the larynx, however, remains debated. Both a dorsal and a ventral larynx representation have been previously described. It is unknown, however, whether both representations are located in primary motor cortex. Here, we mapped the motor representations of the human larynx using functional magnetic resonance imaging and characterized the cortical microstructure underlying the activated regions. We isolated brain activity related to laryngeal activity during vocalization while controlling for breathing. We also mapped the articulators (the lips and tongue) and the hand area. We found two separate activations during vocalization: a dorsal and a ventral larynx representation. Structural and quantitative neuroimaging revealed that myelin content and cortical thickness underlying the dorsal, but not the ventral larynx representation, are similar to those of other primary motor representations. This finding confirms that the dorsal larynx representation is located in primary motor cortex and that the ventral one is not. We further speculate that the location of the ventral larynx representation is in premotor cortex, as seen in other primates. It remains unclear, however, whether and how these two representations differentially contribute to laryngeal motor control.
Collapse
Affiliation(s)
- Nicole Eichert
- Centre for Functional MRI of the Brain (FMRIB), Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, Oxford, UK
| | - Daniel Papp
- Centre for Functional MRI of the Brain (FMRIB), Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, Oxford, UK
| | - Rogier B. Mars
- Centre for Functional MRI of the Brain (FMRIB), Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, Oxford, UK
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, the Netherlands
| | - Kate E. Watkins
- Department of Experimental Psychology, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, UK
| |
Collapse
|
38
|
Affiliation(s)
- Andrea Ravignani
- Comparative Bioacoustics Group, Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands;
| | - Sonja A Kotz
- Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands;
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, 04103 Leipzig, Germany
| |
Collapse
|
39
|
Brown S, Yuan Y, Belyk M. Evolution of the speech-ready brain: The voice/jaw connection in the human motor cortex. J Comp Neurol 2020; 529:1018-1028. [PMID: 32720701 DOI: 10.1002/cne.24997] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2020] [Revised: 07/07/2020] [Accepted: 07/19/2020] [Indexed: 12/18/2022]
Abstract
A prominent model of the origins of speech, known as the "frame/content" theory, posits that oscillatory lowering and raising of the jaw provided an evolutionary scaffold for the development of syllable structure in speech. Because such oscillations are nonvocal in most nonhuman primates, the evolution of speech required the addition of vocalization onto this scaffold in order to turn such jaw oscillations into vocalized syllables. In the present functional MRI study, we demonstrate overlapping somatotopic representations between the larynx and the jaw muscles in the human primary motor cortex. This proximity between the larynx and jaw in the brain might support the coupling between vocalization and jaw oscillations to generate syllable structure. This model suggests that humans inherited voluntary control of jaw oscillations from ancestral species, but added voluntary control of vocalization onto this via the evolution of a new brain area that came to be situated near the jaw region in the human motor cortex.
Affiliation(s)
- Steven Brown
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario, Canada
- Ye Yuan
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario, Canada
- Michel Belyk
- Department of Speech Hearing and Phonetic Sciences, University College London, London, UK
40
Shi ER, Zhang Q. A domain-general perspective on the role of the basal ganglia in language and music: Benefits of music therapy for the treatment of aphasia. Brain Lang 2020; 206:104811. [PMID: 32442810 DOI: 10.1016/j.bandl.2020.104811] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/27/2019] [Revised: 03/19/2020] [Accepted: 05/04/2020] [Indexed: 06/11/2023]
Abstract
In addition to cortical lesions, mounting evidence on the links between language and the subcortical regions suggests that subcortical lesions may also lead to the emergence of aphasic symptoms. In this paper, by emphasizing the domain-general function of the basal ganglia in both language and music, we highlight that rhythm processing, the function of temporal prediction, motor programming and execution, is an important shared mechanism underlying the treatment of non-fluent aphasia with music therapy. In support of this, we conducted a literature review of music therapy for aphasia. The results show that rhythm processing plays a key role in Melodic Intonation Therapy in the rehabilitation of non-fluent aphasia patients with lesions in the basal ganglia. This paper strengthens the correlation between basal ganglia lesions and language deficits, and provides support for using rhythm as an important element of music therapy in clinical studies.
Affiliation(s)
- Edward Ruoyang Shi
- Department of Catalan Philology and General Linguistics, University of Barcelona, Gran Via de Les Corts Catalanes, 585, 08007 Barcelona, Spain
- Qing Zhang
- Department of Psychology, Sun Yat-Sen University, Waihuan East Road, No. 132, Guangzhou 510006, China
41
Archakov D, DeWitt I, Kuśmierek P, Ortiz-Rios M, Cameron D, Cui D, Morin EL, VanMeter JW, Sams M, Jääskeläinen IP, Rauschecker JP. Auditory representation of learned sound sequences in motor regions of the macaque brain. Proc Natl Acad Sci U S A 2020; 117:15242-15252. [PMID: 32541016 PMCID: PMC7334521 DOI: 10.1073/pnas.1915610117] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022] Open
Abstract
Human speech production requires the ability to couple motor actions with their auditory consequences. Nonhuman primates might not have speech because they lack this ability. To address this question, we trained macaques to perform an auditory-motor task producing sound sequences via hand presses on a newly designed device ("monkey piano"). Catch trials were interspersed to ascertain that the monkeys were listening to the sounds they produced. Functional MRI was then used to map brain activity while the animals listened attentively to the sound sequences they had learned to produce and to two control sequences, which were either completely unfamiliar or familiar through passive exposure only. All sounds activated auditory midbrain and cortex, but listening to the sequences that were learned by self-production additionally activated the putamen and the hand and arm regions of motor cortex. These results indicate that, in principle, monkeys are capable of forming internal models linking sound perception and production in motor regions of the brain, so this ability is not special to speech in humans. However, the coupling of sounds and actions in nonhuman primates (and the availability of an internal model supporting it) seems not to extend to the upper vocal tract, that is, the supralaryngeal articulators, which are key for the production of speech sounds in humans. The origin of speech may have required the evolution of a "command apparatus" similar to the control of the hand, which was crucial for the evolution of tool use.
Affiliation(s)
- Denis Archakov
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, FI-02150 Espoo, Finland
- Iain DeWitt
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057
- Paweł Kuśmierek
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057
- Michael Ortiz-Rios
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057
- Daniel Cameron
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057
- Ding Cui
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057
- Elyse L Morin
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057
- John W VanMeter
- Center for Functional and Molecular Imaging, Georgetown University Medical Center, Washington, DC 20057
- Mikko Sams
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, FI-02150 Espoo, Finland
- Iiro P Jääskeläinen
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, FI-02150 Espoo, Finland
- Josef P Rauschecker
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057
42
Correia JM, Caballero-Gaudes C, Guediche S, Carreiras M. Phonatory and articulatory representations of speech production in cortical and subcortical fMRI responses. Sci Rep 2020; 10:4529. [PMID: 32161310 PMCID: PMC7066132 DOI: 10.1038/s41598-020-61435-y] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2019] [Accepted: 02/24/2020] [Indexed: 11/25/2022] Open
Abstract
Speaking involves coordination of multiple neuromotor systems, including respiration, phonation and articulation. Developing non-invasive imaging methods to study how the brain controls these systems is critical for understanding the neurobiology of speech production. Recent models and animal research suggest that regions beyond the primary motor cortex (M1), including cortical and sub-cortical regions, help orchestrate the neuromotor control needed for speaking. Using contrasts between speech conditions with controlled respiratory behavior, this fMRI study investigates articulatory gestures involving the tongue, lips and velum (i.e., alveolars versus bilabials, and nasals versus orals), and phonatory gestures (i.e., voiced versus whispered speech). Multivariate pattern analysis (MVPA) was used to decode articulatory gestures in M1, cerebellum and basal ganglia. Furthermore, apart from confirming the role of a mid-M1 region for phonation, we found that a dorsal M1 region, linked to respiratory control, showed significant differences for voiced compared to whispered speech despite matched lung volume observations. This region was also functionally connected to tongue and lip M1 seed regions, underscoring its importance in the coordination of speech. Our study confirms and extends current knowledge regarding the neural mechanisms underlying neuromotor speech control, which holds promise for studying neural dysfunctions involved in motor-speech disorders non-invasively.
Affiliation(s)
- Joao M Correia
- BCBL, Basque Center on Cognition Brain and Language, San Sebastian, Spain; Centre for Biomedical Research (CBMR)/Department of Psychology, University of Algarve, Faro, Portugal
- Sara Guediche
- BCBL, Basque Center on Cognition Brain and Language, San Sebastian, Spain
- Manuel Carreiras
- BCBL, Basque Center on Cognition Brain and Language, San Sebastian, Spain; Ikerbasque, Basque Foundation for Science, Bilbao, Spain; University of the Basque Country (UPV/EHU), Bilbao, Spain
43
Wattendorf E, Westermann B, Fiedler K, Ritz S, Redmann A, Pfannmöller J, Lotze M, Celio MR. Laughter is in the air: involvement of key nodes of the emotional motor system in the anticipation of tickling. Soc Cogn Affect Neurosci 2020; 14:837-847. [PMID: 31393979 PMCID: PMC6847157 DOI: 10.1093/scan/nsz056] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2018] [Revised: 06/27/2019] [Accepted: 07/10/2019] [Indexed: 12/22/2022] Open
Abstract
In analogy to the appreciation of humor, that of tickling is based upon the re-interpretation of an anticipated emotional situation. Hence, the anticipation of tickling contributes to the final outburst of ticklish laughter. To localize the neuronal substrates of this process, functional magnetic resonance imaging (fMRI) was conducted on 31 healthy volunteers. The state of anticipation was simulated by generating an uncertainty respecting the onset of manual foot tickling. Anticipation was characterized by an augmented fMRI signal in the anterior insula, the hypothalamus, the nucleus accumbens and the ventral tegmental area, as well as by an attenuated one in the internal globus pallidus. Furthermore, anticipatory activity in the anterior insula correlated positively with the degree of laughter that was produced during tickling. These findings are consistent with an encoding of the expected emotional consequences of tickling and suggest that early regulatory mechanisms influence, automatically, the laughter circuitry at the level of affective and sensory processing. Tickling activated not only those regions of the brain that were involved during anticipation, but also the posterior insula, the anterior cingulate cortex and the periaqueductal gray matter. Sequential or combined anticipatory and tickling-related neuronal activities may adjust emotional and sensorimotor pathways in preparation for the impending laughter response.
Affiliation(s)
- Elise Wattendorf
- Faculty of Science and Medicine, Department of Neuroscience, Anatomy, University of Fribourg, 1700 Fribourg, Switzerland
- Birgit Westermann
- Department of Neurosurgery, University Hospital, University of Basel, 4031 Basel, Switzerland
- Klaus Fiedler
- Faculty of Science and Medicine, Department of Neuroscience, Anatomy, University of Fribourg, 1700 Fribourg, Switzerland
- Simone Ritz
- Faculty of Science and Medicine, Department of Neuroscience, Anatomy, University of Fribourg, 1700 Fribourg, Switzerland
- Annetta Redmann
- Faculty of Science and Medicine, Department of Neuroscience, Anatomy, University of Fribourg, 1700 Fribourg, Switzerland
- Jörg Pfannmöller
- Functional Imaging, Center for Diagnostic Radiology and Neuroradiology, University Medicine Greifswald, Walther-Rathenau-Straße 46, 17475 Greifswald, Germany
- Martin Lotze
- Functional Imaging, Center for Diagnostic Radiology and Neuroradiology, University Medicine Greifswald, Walther-Rathenau-Straße 46, 17475 Greifswald, Germany
- Marco R Celio
- Faculty of Science and Medicine, Department of Neuroscience, Anatomy, University of Fribourg, 1700 Fribourg, Switzerland
44
Lameira AR, Call J. Understanding Language Evolution: Beyond Pan-Centrism. Bioessays 2020; 42:e1900102. [PMID: 31994246 DOI: 10.1002/bies.201900102] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2019] [Revised: 12/18/2019] [Indexed: 12/20/2022]
Abstract
Language does not fossilize, but this does not mean that language's evolutionary timeline is lost forever. Great apes provide a window back in time on our last prelinguistic ancestor's communication and cognition. Phylogeny and cladistics implicitly conjure Pan (chimpanzees, bonobos) as a superior (often the only) model for language evolution compared with earlier diverging lineages, Gorilla and Pongo (orangutans). Here, in reviewing the literature, it is shown that Pan do not surpass other great apes along genetic, cognitive, ecological, or vocal traits that are putatively paramount for language onset and evolution. Instead, revived herein is the idea that only by abandoning single-species models and learning about the variation among great apes might there be a chance to retrieve lost fragments of the evolutionary timeline of language.
Affiliation(s)
- Adriano R Lameira
- School of Psychology and Neuroscience, University of St Andrews, South Street, St Andrews KY16 9JP, UK; Department of Psychology, University of Warwick, University Road, Coventry CV4 7AL, UK
- Josep Call
- School of Psychology and Neuroscience, University of St Andrews, South Street, St Andrews KY16 9JP, UK
45
Chang SE, Guenther FH. Involvement of the Cortico-Basal Ganglia-Thalamocortical Loop in Developmental Stuttering. Front Psychol 2020; 10:3088. [PMID: 32047456 PMCID: PMC6997432 DOI: 10.3389/fpsyg.2019.03088] [Citation(s) in RCA: 59] [Impact Index Per Article: 14.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2019] [Accepted: 12/31/2019] [Indexed: 01/14/2023] Open
Abstract
Stuttering is a complex neurodevelopmental disorder that has to date eluded a clear explication of its pathophysiological bases. In this review, we utilize the Directions Into Velocities of Articulators (DIVA) neurocomputational modeling framework to mechanistically interpret relevant findings from the behavioral and neurological literatures on stuttering. Within this theoretical framework, we propose that the primary impairment underlying stuttering behavior is malfunction in the cortico-basal ganglia-thalamocortical (hereafter, cortico-BG) loop that is responsible for initiating speech motor programs. This theoretical perspective predicts three possible loci of impaired neural processing within the cortico-BG loop that could lead to stuttering behaviors: impairment within the basal ganglia proper; impairment of axonal projections between cerebral cortex, basal ganglia, and thalamus; and impairment in cortical processing. These theoretical perspectives are presented in detail, followed by a review of empirical data that make reference to these three possibilities. We also highlight any differences that are present in the literature based on examining adults versus children, which give important insights into potential core deficits associated with stuttering versus compensatory changes that occur in the brain as a result of having stuttered for many years in the case of adults who stutter. We conclude with outstanding questions in the field and promising areas for future studies that have the potential to further advance mechanistic understanding of neural deficits underlying persistent developmental stuttering.
Affiliation(s)
- Soo-Eun Chang
- Department of Psychiatry, University of Michigan, Ann Arbor, MI, United States
- Department of Radiology, Cognitive Imaging Research Center, Michigan State University, East Lansing, MI, United States
- Department of Communicative Sciences and Disorders, Michigan State University, East Lansing, MI, United States
- Frank H. Guenther
- Department of Speech, Language and Hearing Sciences, Sargent College of Health and Rehabilitation Sciences, Boston University, Boston, MA, United States
- Department of Biomedical Engineering, Boston University, Boston, MA, United States
- Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, MA, United States
- Department of Radiology, Massachusetts General Hospital, Charlestown, MA, United States
46
Nieder A, Mooney R. The neurobiology of innate, volitional and learned vocalizations in mammals and birds. Philos Trans R Soc Lond B Biol Sci 2020; 375:20190054. [PMID: 31735150 PMCID: PMC6895551 DOI: 10.1098/rstb.2019.0054] [Citation(s) in RCA: 60] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/15/2019] [Indexed: 11/12/2022] Open
Abstract
Vocalization is an ancient vertebrate trait essential to many forms of communication, ranging from courtship calls to free verse. Vocalizations may be entirely innate and evoked by sexual cues or emotional state, as with many types of calls made in primates, rodents and birds; volitional, as with innate calls that, following extensive training, can be evoked by arbitrary sensory cues in non-human primates and corvid songbirds; or learned, acoustically flexible and complex, as with human speech and the courtship songs of oscine songbirds. This review compares and contrasts the neural mechanisms underlying innate, volitional and learned vocalizations, with an emphasis on functional studies in primates, rodents and songbirds. This comparison reveals both highly conserved and convergent mechanisms of vocal production in these different groups, despite their often vast phylogenetic separation. This similarity of central mechanisms for different forms of vocal production presents experimentalists with useful avenues for gaining detailed mechanistic insight into how vocalizations are employed for social and sexual signalling, and how they can be modified through experience to yield new vocal repertoires customized to the individual's social group. This article is part of the theme issue 'What can animal communication teach us about human language?'
Affiliation(s)
- Andreas Nieder
- Animal Physiology Unit, Institute of Neurobiology, University Tübingen, Auf der Morgenstelle 28, 72076 Tübingen, Germany
- Richard Mooney
- Department of Neurobiology, Duke University School of Medicine, Durham, NC 27710, USA
47
The Neuroethology of Vocal Communication in Songbirds: Production and Perception of a Call Repertoire. The Neuroethology of Birdsong 2020. [DOI: 10.1007/978-3-030-34683-6_7] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
48
Yuan S, Li H, Xie J, Sun X. Quantitative Trait Module-Based Genetic Analysis of Alzheimer's Disease. Int J Mol Sci 2019; 20:E5912. [PMID: 31775305 PMCID: PMC6928939 DOI: 10.3390/ijms20235912] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2019] [Revised: 11/21/2019] [Accepted: 11/22/2019] [Indexed: 01/02/2023] Open
Abstract
The pathological features of Alzheimer's Disease (AD) first appear in the medial temporal lobe and then in other brain structures with the development of the disease. In this work, we investigated the association between genetic loci and subcortical structure volumes of AD on 393 samples in the Alzheimer's Disease Neuroimaging Initiative (ADNI) cohort. Brain subcortical structures were clustered into modules using Pearson's correlation coefficient of volumes across all samples. Module volumes were used as quantitative traits to identify not only the main effect loci but also the interactive effect loci for each module. Thirty-five subcortical structures were clustered into five modules, each corresponding to a particular brain structure/area, including the limbic system (module I), the corpus callosum (module II), thalamus-cerebellum-brainstem-pallidum (module III), the basal ganglia neostriatum (module IV), and the ventricular system (module V). Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment results indicate that the gene annotations of the five modules were distinct, with few overlaps between different modules. We identified several main effect loci and interactive effect loci for each module. All these loci are related to the function of module structures and basic biological processes such as material transport and signal transduction.
Affiliation(s)
- Xiao Sun
- State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
49
Abstract
Although language, and therefore spoken language or speech, is often considered unique to humans, the past several decades have seen a surge in nonhuman animal studies that inform us about human spoken language. Here, I present a modern, evolution-based synthesis of these studies, from behavioral to molecular levels of analyses. Among the key concepts drawn are that components of spoken language are continuous between species, and that the vocal learning component is the most specialized and rarest and evolved by brain pathway duplication from an ancient motor learning pathway. These concepts have important implications for understanding brain mechanisms and disorders of spoken language.
Affiliation(s)
- Erich D Jarvis
- Laboratory of Neurogenetics of Language, The Rockefeller University, New York, NY, USA; Howard Hughes Medical Institute, Chevy Chase, MD, USA
50
Fitch WT. Sequence and hierarchy in vocal rhythms and phonology. Ann N Y Acad Sci 2019; 1453:29-46. [PMID: 31410865 PMCID: PMC6790714 DOI: 10.1111/nyas.14215] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2019] [Revised: 07/16/2019] [Accepted: 07/23/2019] [Indexed: 11/30/2022]
Abstract
I explore the neural and evolutionary origins of phonological hierarchy, building on Peter MacNeilage's frame/content model, which suggests that human speech evolved from primate nonvocal jaw oscillations, for example, lip smack displays, combined with phonation. Considerable recent data, reviewed here, support this proposition. I argue that the evolution of speech motor control required two independent components. The first, identified by MacNeilage, is the diversification of phonetic "content" within a simple sequential "frame," and would be within reach of nonhuman primates, by simply intermittently activating phonation during lip smack displays. Such voicing control requires laryngeal control, hypothesized to necessitate direct corticomotor connections to the nucleus ambiguus. The second component, proposed here, involves imposing additional hierarchical rhythmic structure upon the "flat" control sequences typifying mammalian vocal tract oscillations and is required for the flexible combinatorial capacity observed in modern phonology. I hypothesize that phonological hierarchy resulted from a marriage of a preexisting capacity for sequential structure seen in other primates, with novel hierarchical motor control circuitry (potentially evolved in tool use and/or musical contexts). In turn, this phonological hierarchy paved the way for phrasal syntactic hierarchy. I support these arguments using comparative and neural data from nonhuman primates and birdsong.