1. MacDonald A, Hebling A, Wei XP, Yackle K. The breath shape controls intonation of mouse vocalizations. eLife 2024; 13:RP93079. PMID: 38963785; PMCID: PMC11223766; DOI: 10.7554/elife.93079
Abstract
Intonation in speech is the control of vocal pitch to layer expressive meaning to communication, like increasing pitch to indicate a question. Also, stereotyped patterns of pitch are used to create distinct sounds with different denotations, like in tonal languages and, perhaps, the 10 sounds in the murine lexicon. A basic tone is created by exhalation through a constricted laryngeal voice box, and it is thought that more complex utterances are produced solely by dynamic changes in laryngeal tension. But perhaps, the shifting pitch also results from altering the swiftness of exhalation. Consistent with the latter model, we describe that intonation in most vocalization types follows deviations in exhalation that appear to be generated by the re-activation of the cardinal breathing muscle for inspiration. We also show that the brainstem vocalization central pattern generator, the iRO, can create this breath pattern. Consequently, ectopic activation of the iRO not only induces phonation, but also the pitch patterns that compose most of the vocalizations in the murine lexicon. These results reveal a novel brainstem mechanism for intonation.
Affiliation(s)
- Alastair MacDonald
- Department of Physiology, University of California-San Francisco, San Francisco, United States
- Alina Hebling
- Neuroscience Graduate Program, University of California-San Francisco, San Francisco, United States
- Xin Paul Wei
- Department of Physiology, University of California-San Francisco, San Francisco, United States
- Biomedical Sciences Graduate Program, University of California-San Francisco, San Francisco, United States
- Kevin Yackle
- Department of Physiology, University of California-San Francisco, San Francisco, United States
2. Montgomery JC. Roles for cerebellum and subsumption architecture in central pattern generation. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2024; 210:315-324. PMID: 37130955; PMCID: PMC10994996; DOI: 10.1007/s00359-023-01634-w
Abstract
Within vertebrates, central pattern generators drive rhythmical behaviours, such as locomotion and ventilation. Their pattern generation is also influenced by sensory input and various forms of neuromodulation. These capabilities arose early in vertebrate evolution, preceding the evolution of the cerebellum in jawed vertebrates. This later evolution of the cerebellum is suggestive of subsumption architecture that adds functionality to a pre-existing network. From a central-pattern-generator perspective, what additional functionality might the cerebellum provide? The suggestion is that the adaptive filter capabilities of the cerebellum may be able to use error learning to appropriately repurpose pattern output. Examples may include head and eye stabilization during locomotion, song learning, and context-dependent alternation between learnt motor-control sequences.
Affiliation(s)
- John C Montgomery
- Institute of Marine Science, University of Auckland, Auckland, New Zealand.
3. Banerjee A, Chen F, Druckmann S, Long MA. Temporal scaling of motor cortical dynamics reveals hierarchical control of vocal production. Nat Neurosci 2024; 27:527-535. PMID: 38291282; DOI: 10.1038/s41593-023-01556-5
Abstract
Neocortical activity is thought to mediate voluntary control over vocal production, but the underlying neural mechanisms remain unclear. In a highly vocal rodent, the male Alston's singing mouse, we investigate neural dynamics in the orofacial motor cortex (OMC), a structure critical for vocal behavior. We first describe neural activity that is modulated by component notes (~100 ms), probably representing sensory feedback. At longer timescales, however, OMC neurons exhibit diverse and often persistent premotor firing patterns that stretch or compress with song duration (~10 s). Using computational modeling, we demonstrate that such temporal scaling, acting through downstream motor production circuits, can enable vocal flexibility. These results provide a framework for studying hierarchical control circuits, a common design principle across many natural and artificial systems.
Affiliation(s)
- Arkarup Banerjee
- NYU Neuroscience Institute, New York University Langone Health, New York, NY, USA.
- Department of Otolaryngology, New York University Langone Health, New York, NY, USA.
- Center for Neural Science, New York University, New York, NY, USA.
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA.
- Feng Chen
- Department of Applied Physics, Stanford University, Stanford, CA, USA
- Shaul Druckmann
- Department of Neurobiology, Stanford University, Stanford, CA, USA
- Michael A Long
- NYU Neuroscience Institute, New York University Langone Health, New York, NY, USA.
- Department of Otolaryngology, New York University Langone Health, New York, NY, USA.
- Center for Neural Science, New York University, New York, NY, USA.
4. Zhang Y, Shen SX, Bibic A, Wang X. Evolutionary continuity and divergence of auditory dorsal and ventral pathways in primates revealed by ultra-high field diffusion MRI. Proc Natl Acad Sci U S A 2024; 121:e2313831121. PMID: 38377216; PMCID: PMC10907247; DOI: 10.1073/pnas.2313831121
Abstract
Auditory dorsal and ventral pathways in the human brain play important roles in supporting speech and language processing. However, the evolutionary root of the dual auditory pathways in the primate brain is unclear. By parcellating the auditory cortex of marmosets (a New World monkey species), macaques (an Old World monkey species), and humans using the same individual-based analysis method and tracking the pathways from the auditory cortex based on multi-shell diffusion-weighted MRI (dMRI), homologous auditory dorsal and ventral fiber tracts were identified in these primate species. The ventral pathway was found to be well conserved in all three primate species analyzed but to extend to more anterior temporal regions in humans. In contrast, the dorsal pathway showed a divergence between monkey and human brains. First, frontal regions in the human brain have stronger connections to the higher-level auditory regions than to the lower-level auditory regions along the dorsal pathway, while frontal regions in the monkey brain show opposite connection patterns along the dorsal pathway. Second, the left lateralization of the dorsal pathway is only found in humans. Moreover, the connectivity strength of the dorsal pathway in marmosets is more similar to that of humans than macaques. These results demonstrate the continuity and divergence of the dual auditory pathways in the primate brains along the evolutionary path, suggesting that the putative neural networks supporting human speech and language processing might have emerged early in primate evolution.
Affiliation(s)
- Yang Zhang
- Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD 21205
- Sherry Xinyi Shen
- Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD 21205
- Adnan Bibic
- Department of Radiology, Johns Hopkins University School of Medicine, Baltimore, MD 21205
- Kirby Research Center for Functional Brain Imaging, Kennedy Krieger Institute, F. M. Kirby Center, Baltimore, MD 21205
- Xiaoqin Wang
- Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD 21205
5. Neef NE, Chang SE. Knowns and unknowns about the neurobiology of stuttering. PLoS Biol 2024; 22:e3002492. PMID: 38386639; PMCID: PMC10883586; DOI: 10.1371/journal.pbio.3002492
Abstract
Stuttering occurs in early childhood during a dynamic phase of brain and behavioral development. The latest studies examining children at ages close to this critical developmental period have identified early brain alterations that are most likely linked to stuttering, while spontaneous recovery appears related to increased inter-area connectivity. By contrast, therapy-driven improvement in adults is associated with a functional reorganization within and beyond the speech network. The etiology of stuttering, however, remains enigmatic. This Unsolved Mystery highlights critical questions and points to neuroimaging findings that could inspire future research to uncover how genetics, interacting neural hierarchies, social context, and reward circuitry contribute to the many facets of stuttering.
Affiliation(s)
- Nicole E. Neef
- Institute for Diagnostic and Interventional Neuroradiology, University Medical Center Göttingen, Göttingen, Germany
- Soo-Eun Chang
- Department of Psychiatry, University of Michigan, Ann Arbor, Michigan, United States of America
- Department of Communication Disorders, Ewha Womans University, Seoul, Korea
6. Schuppe ER, Ballagh I, Akbari N, Fang W, Perelmuter JT, Radtke CH, Marchaterre MA, Bass AH. Midbrain node for context-specific vocalisation in fish. Nat Commun 2024; 15:189. PMID: 38167237; PMCID: PMC10762186; DOI: 10.1038/s41467-023-43794-y
Abstract
Vocalizations communicate information indicative of behavioural state across divergent social contexts. Yet, how brain regions actively pattern the acoustic features of context-specific vocal signals remains largely unexplored. The midbrain periaqueductal gray (PAG) is a major site for initiating vocalization among mammals, including primates. We show that PAG neurons in a highly vocal fish species (Porichthys notatus) are activated in distinct patterns during agonistic versus courtship calling by males, with few co-activated during a non-vocal behaviour, foraging. Pharmacological manipulations within vocally active PAG, but not hindbrain, sites evoke vocal network output to sonic muscles matching the temporal features of courtship and agonistic calls, showing that a balance of inhibitory and excitatory dynamics is likely necessary for patterning different call types. Collectively, these findings support the hypothesis that vocal species of fish and mammals share functionally comparable PAG nodes that in some species can influence the acoustic structure of social context-specific vocal signals.
Affiliation(s)
- Eric R Schuppe
- Department of Neurobiology and Behavior, Cornell University, Ithaca, NY, 14853, USA
- Department of Physiology, University of California San Francisco School of Medicine, San Francisco, CA, 94305, USA
- Irene Ballagh
- Department of Neurobiology and Behavior, Cornell University, Ithaca, NY, 14853, USA
- Department of Zoology, The University of British Columbia, Vancouver, V6T 1Z4, BC, Canada
- Najva Akbari
- Department of Applied and Engineering Physics, Cornell University, Ithaca, NY, 14853, USA
- Department of Biology, Stanford University, Palo Alto, CA, 94305, USA
- Wenxuan Fang
- Department of Neurobiology and Behavior, Cornell University, Ithaca, NY, 14853, USA
- Graduate Program in Neuroscience, The University of British Columbia, Vancouver, V6T 1Z4, BC, Canada
- Caleb H Radtke
- Department of Neurobiology and Behavior, Cornell University, Ithaca, NY, 14853, USA
- Andrew H Bass
- Department of Neurobiology and Behavior, Cornell University, Ithaca, NY, 14853, USA.
7. Varella TT, Takahashi DY, Ghazanfar AA. Active Sampling in Primate Vocal Interactions. bioRxiv [Preprint] 2023:2023.12.05.570161. PMID: 38106107; PMCID: PMC10723297; DOI: 10.1101/2023.12.05.570161
Abstract
Active sensing is a behavioral strategy for exploring the environment. In this study, we show that contact vocal behaviors can be an active sensing mechanism that uses sampling to gain information about the social environment, in particular, the vocal behavior of others. With a focus on the real-time vocal interactions of marmoset monkeys, we contrast active sampling to a vocal accommodation framework in which vocalizations are adjusted simply to maximize responses. We conducted simulations of a vocal accommodation and an active sampling policy and compared them with real vocal exchange data. Our findings support active sampling as the best model for marmoset monkey vocal exchanges. In some cases, the active sampling model was even able to predict the distribution of vocal durations for individuals. These results suggest a new function for primate vocal interactions in which they are used by animals to seek information from social environments.
Affiliation(s)
- Thiago T Varella
- Princeton Neuroscience Institute & Department of Psychology, Princeton University, Princeton, NJ 08544, USA
- Daniel Y Takahashi
- Brain Institute, Federal University of Rio Grande do Norte (UFRN), Av. Nascimento de Castro, 2155 - Morro Branco, Natal, RN 59056-450, Brazil
- Asif A Ghazanfar
- Princeton Neuroscience Institute & Department of Psychology, Princeton University, Princeton, NJ 08544, USA
8. Engelen T, Solcà M, Tallon-Baudry C. Interoceptive rhythms in the brain. Nat Neurosci 2023; 26:1670-1684. PMID: 37697110; DOI: 10.1038/s41593-023-01425-1
Abstract
Sensing internal bodily signals, or interoception, is fundamental to maintain life. However, interoception should not be viewed as an isolated domain, as it interacts with exteroception, cognition and action to ensure the integrity of the organism. Focusing on cardiac, respiratory and gastric rhythms, we review evidence that interoception is anatomically and functionally intertwined with the processing of signals from the external environment. Interactions arise at all stages, from the peripheral transduction of interoceptive signals to sensory processing and cortical integration, in a network that extends beyond core interoceptive regions. Interoceptive rhythms contribute to functions ranging from perceptual detection up to sense of self, or conversely compete with external inputs. Renewed interest in interoception revives long-standing issues on how the brain integrates and coordinates information in distributed regions, by means of oscillatory synchrony, predictive coding or multisensory integration. Considering interoception and exteroception in the same framework paves the way for biological modes of information processing specific to living organisms.
Affiliation(s)
- Tahnée Engelen
- Cognitive and Computational Neuroscience Laboratory, Inserm, Ecole Normale Supérieure PSL University, Paris, France
- Marco Solcà
- Cognitive and Computational Neuroscience Laboratory, Inserm, Ecole Normale Supérieure PSL University, Paris, France
- Catherine Tallon-Baudry
- Cognitive and Computational Neuroscience Laboratory, Inserm, Ecole Normale Supérieure PSL University, Paris, France.
9. Grijseels DM, Prendergast BJ, Gorman JC, Miller CT. The neurobiology of vocal communication in marmosets. Ann N Y Acad Sci 2023; 1528:13-28. PMID: 37615212; PMCID: PMC10592205; DOI: 10.1111/nyas.15057
Abstract
An increasingly popular animal model for studying the neural basis of social behavior, cognition, and communication is the common marmoset (Callithrix jacchus). Interest in this New World primate across neuroscience is now being driven by their proclivity for prosociality across their repertoire, high volubility, and rapid development, as well as their amenability to naturalistic testing paradigms and freely moving neural recording and imaging technologies. The complement of these characteristics set marmosets up to be a powerful model of the primate social brain in the years to come. Here, we focus on vocal communication because it is the area that has both made the most progress and illustrates the prodigious potential of this species. We review the current state of the field with a focus on the various brain areas and networks involved in vocal perception and production, comparing the findings from marmosets to other animals, including humans.
Affiliation(s)
- Dori M Grijseels
- Cortical Systems and Behavior Laboratory, University of California, San Diego, La Jolla, California, USA
- Brendan J Prendergast
- Cortical Systems and Behavior Laboratory, University of California, San Diego, La Jolla, California, USA
- Julia C Gorman
- Cortical Systems and Behavior Laboratory, University of California, San Diego, La Jolla, California, USA
- Neurosciences Graduate Program, University of California, San Diego, La Jolla, California, USA
- Cory T Miller
- Cortical Systems and Behavior Laboratory, University of California, San Diego, La Jolla, California, USA
- Neurosciences Graduate Program, University of California, San Diego, La Jolla, California, USA
10. Yamoah EN, Pavlinkova G, Fritzsch B. The Development of Speaking and Singing in Infants May Play a Role in Genomics and Dementia in Humans. Brain Sci 2023; 13:1190. PMID: 37626546; PMCID: PMC10452560; DOI: 10.3390/brainsci13081190
Abstract
The development of the central auditory system, including the auditory cortex and other areas involved in processing sound, is shaped by genetic and environmental factors, enabling infants to learn how to speak. Before explaining hearing in humans, a short overview of auditory dysfunction is provided. Environmental factors such as exposure to sound and language can impact the development and function of the auditory system's sound processing, including discernment in speech perception, singing, and language processing. Infants can hear before birth, and sound exposure sculpts their developing auditory system structure and functions. Exposing infants to singing and speaking can support their auditory and language development. In aging humans, the hippocampus and auditory nuclear centers are affected by neurodegenerative diseases such as Alzheimer's, resulting in memory and auditory processing difficulties. As the disease progresses, overt auditory nuclear center damage occurs, leading to problems in processing auditory information. In conclusion, combined memory and auditory processing difficulties significantly impact people's ability to communicate and engage with society.
Affiliation(s)
- Ebenezer N. Yamoah
- Department of Physiology and Cell Biology, School of Medicine, University of Nevada, Reno, NV 89557, USA
- Bernd Fritzsch
- Department of Neurological Sciences, University of Nebraska Medical Center, Omaha, NE 68198, USA
11. Choi D, Yeung HH, Werker JF. Sensorimotor foundations of speech perception in infancy. Trends Cogn Sci 2023; S1364-6613(23)00124-9. PMID: 37302917; DOI: 10.1016/j.tics.2023.05.007
Abstract
The perceptual system for speech is highly organized from early infancy. This organization bootstraps young human learners' ability to acquire their native speech and language from speech input. Here, we review behavioral and neuroimaging evidence that perceptual systems beyond the auditory modality are also specialized for speech in infancy, and that motor and sensorimotor systems can influence speech perception even in infants too young to produce speech-like vocalizations. These investigations complement existing literature on infant vocal development and on the interplay between speech perception and production systems in adults. We conclude that a multimodal speech and language network is present before speech-like vocalizations emerge.
Affiliation(s)
- Dawoon Choi
- Department of Psychology, Yale University, New Haven, CT, USA
- H Henny Yeung
- Department of Linguistics, Simon Fraser University, Burnaby, BC, Canada
- Janet F Werker
- Department of Psychology, University of British Columbia, Vancouver, BC, Canada
12. Jourjine N, Woolfolk ML, Sanguinetti-Scheck JI, Sabatini JE, McFadden S, Lindholm AK, Hoekstra HE. Two pup vocalization types are genetically and functionally separable in deer mice. Curr Biol 2023; 33:1237-1248.e4. PMID: 36893759; DOI: 10.1016/j.cub.2023.02.045
Abstract
Vocalization is a widespread social behavior in vertebrates that can affect fitness in the wild. Although many vocal behaviors are highly conserved, heritable features of specific vocalization types can vary both within and between species, raising the questions of why and how some vocal behaviors evolve. Here, using new computational tools to automatically detect and cluster vocalizations into distinct acoustic categories, we compare pup isolation calls across neonatal development in eight taxa of deer mice (genus Peromyscus) and compare them with laboratory mice (C57BL/6J strain) and free-living, wild house mice (Mus musculus domesticus). Whereas both Peromyscus and Mus pups produce ultrasonic vocalizations (USVs), Peromyscus pups also produce a second call type with acoustic features, temporal rhythms, and developmental trajectories that are distinct from those of USVs. In deer mice, these lower frequency "cries" are predominantly emitted in postnatal days 1 through 9, whereas USVs are primarily made after day 9. Using playback assays, we show that cries result in a more rapid approach by Peromyscus mothers than USVs, suggesting a role for cries in eliciting parental care early in neonatal development. Using a genetic cross between two sister species of deer mice exhibiting large, innate differences in the acoustic structure of cries and USVs, we find that variation in vocalization rate, duration, and pitch displays different degrees of genetic dominance and that cry and USV features can be uncoupled in second-generation hybrids. Taken together, this work shows that vocal behavior can evolve quickly between closely related rodent species in which vocalization types, likely serving distinct functions in communication, are controlled by distinct genetic loci.
Affiliation(s)
- Nicholas Jourjine
- Department of Molecular & Cellular Biology, Department of Organismic & Evolutionary Biology, Center for Brain Science, Museum of Comparative Zoology, Harvard University and the Howard Hughes Medical Institute, 16 Divinity Avenue, Cambridge, MA 02138, USA
- Maya L Woolfolk
- Department of Molecular & Cellular Biology, Department of Organismic & Evolutionary Biology, Center for Brain Science, Museum of Comparative Zoology, Harvard University and the Howard Hughes Medical Institute, 16 Divinity Avenue, Cambridge, MA 02138, USA
- Juan I Sanguinetti-Scheck
- Department of Molecular & Cellular Biology, Department of Organismic & Evolutionary Biology, Center for Brain Science, Museum of Comparative Zoology, Harvard University and the Howard Hughes Medical Institute, 16 Divinity Avenue, Cambridge, MA 02138, USA
- John E Sabatini
- Department of Molecular & Cellular Biology, Department of Organismic & Evolutionary Biology, Center for Brain Science, Museum of Comparative Zoology, Harvard University and the Howard Hughes Medical Institute, 16 Divinity Avenue, Cambridge, MA 02138, USA
- Sade McFadden
- Department of Molecular & Cellular Biology, Department of Organismic & Evolutionary Biology, Center for Brain Science, Museum of Comparative Zoology, Harvard University and the Howard Hughes Medical Institute, 16 Divinity Avenue, Cambridge, MA 02138, USA
- Anna K Lindholm
- Department of Evolutionary Biology & Environmental Studies, University of Zürich, Winterthurerstrasse 190, 8057 Zürich, Switzerland
- Hopi E Hoekstra
- Department of Molecular & Cellular Biology, Department of Organismic & Evolutionary Biology, Center for Brain Science, Museum of Comparative Zoology, Harvard University and the Howard Hughes Medical Institute, 16 Divinity Avenue, Cambridge, MA 02138, USA.
13.
Abstract
Bass describes the fascinating life history, behavior, and neurobiology of the California singing fish, including its remarkable vocal abilities.
Affiliation(s)
- Andrew H Bass
- Department of Neurobiology and Behavior, Cornell University, Ithaca, NY 14850, USA.
14. Banerjee A, Chen F, Druckmann S, Long MA. Neural dynamics in the rodent motor cortex enables flexible control of vocal timing. bioRxiv [Preprint] 2023:2023.01.23.525252. PMID: 36747850; PMCID: PMC9900850; DOI: 10.1101/2023.01.23.525252
Abstract
Neocortical activity is thought to mediate voluntary control over vocal production, but the underlying neural mechanisms remain unclear. In a highly vocal rodent, the Alston's singing mouse, we investigate neural dynamics in the orofacial motor cortex (OMC), a structure critical for vocal behavior. We first describe neural activity that is modulated by component notes (approx. 100 ms), likely representing sensory feedback. At longer timescales, however, OMC neurons exhibit diverse and often persistent premotor firing patterns that stretch or compress with song duration (approx. 10 s). Using computational modeling, we demonstrate that such temporal scaling, acting via downstream motor production circuits, can enable vocal flexibility. These results provide a framework for studying hierarchical control circuits, a common design principle across many natural and artificial systems.
Affiliation(s)
- Arkarup Banerjee
- NYU Neuroscience Institute, New York University Langone Health, New York, NY 10016, USA
- Department of Otolaryngology, New York University Langone Health, New York, NY 10016, USA
- Center for Neural Science, New York University, New York, NY 10003, USA
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY 11724, USA
- Feng Chen
- Department of Applied Physics, Stanford University, Stanford, CA 94305, USA
- Shaul Druckmann
- Department of Neuroscience, Stanford University, Stanford, CA 94304, USA
- Michael A Long
- NYU Neuroscience Institute, New York University Langone Health, New York, NY 10016, USA
- Department of Otolaryngology, New York University Langone Health, New York, NY 10016, USA
- Center for Neural Science, New York University, New York, NY 10003, USA
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY 11724, USA
15. Marschik PB, Widmann CAA, Lang S, Kulvicius T, Boterberg S, Nielsen-Saines K, Bölte S, Esposito G, Nordahl-Hansen A, Roeyers H, Wörgötter F, Einspieler C, Poustka L, Zhang D. Emerging Verbal Functions in Early Infancy: Lessons from Observational and Computational Approaches on Typical Development and Neurodevelopmental Disorders. Adv Neurodev Disord 2022; 6:369-388. PMID: 36540761; PMCID: PMC9762685; DOI: 10.1007/s41252-022-00300-7
Abstract
OBJECTIVES: Research on typically developing (TD) children and those with neurodevelopmental disorders and genetic syndromes was targeted. Specifically, studies on autism spectrum disorder, Down syndrome, Rett syndrome, fragile X syndrome, cerebral palsy, Angelman syndrome, tuberous sclerosis complex, Williams-Beuren syndrome, Cri-du-chat syndrome, Prader-Willi syndrome, and West syndrome were searched. The objectives are to review observational and computational studies on the emergence of (pre-)babbling vocalisations and outline findings on acoustic characteristics of early verbal functions.
METHODS: A comprehensive review of the literature was performed, including observational and computational studies focusing on spontaneous infant vocalisations at the pre-babbling age in TD children and individuals with genetic or neurodevelopmental disorders.
RESULTS: While there is substantial knowledge about early vocal development in TD infants, the pre-babbling phase in infants with neurodevelopmental and genetic syndromes is scarcely scrutinised. Related approaches, paradigms, and definitions vary substantially, and insights into the onset and characteristics of early verbal functions in most of the above-mentioned disorders are missing. Most studies focused on acoustic low-level descriptors (e.g. fundamental frequency), which bore limited clinical relevance. This calls for computational approaches to analyse features of typical and atypical infant verbal development.
CONCLUSIONS: Pre-babbling vocalisations, as precursors of future speech-language functions, may reveal valuable signs for identifying infants at risk of atypical development. Observational studies should be complemented by computational approaches to enable in-depth understanding of the developing speech-language functions. By disentangling features of typical and atypical early verbal development, computational approaches may support clinical screening and evaluation.
Affiliation(s)
- Peter B. Marschik
- Child and Adolescent Psychiatry and Psychotherapy, Göttingen, Germany and Leibniz ScienceCampus Primate Cognition, University Medical Center Göttingen, Göttingen, Germany
- iDN - Interdisciplinary Developmental Neuroscience, Division of Phoniatrics, Medical University of Graz, Graz, Austria
- Claudius A. A. Widmann
- Child and Adolescent Psychiatry and Psychotherapy, Göttingen, Germany and Leibniz ScienceCampus Primate Cognition, University Medical Center Göttingen, Göttingen, Germany
- Sigrun Lang
- Child and Adolescent Psychiatry and Psychotherapy, Göttingen, Germany and Leibniz ScienceCampus Primate Cognition, University Medical Center Göttingen, Göttingen, Germany
- Tomas Kulvicius
- Child and Adolescent Psychiatry and Psychotherapy, Göttingen, Germany and Leibniz ScienceCampus Primate Cognition, University Medical Center Göttingen, Göttingen, Germany
- Sofie Boterberg
- Research in Developmental Disorders Lab, Department of Experimental Clinical and Health Psychology, Faculty of Psychology and Educational Sciences, Ghent University, Ghent, Belgium
- Karin Nielsen-Saines
- Department of Pediatrics, David Geffen UCLA School of Medicine, Los Angeles, CA, USA
- Sven Bölte
- Center of Neurodevelopmental Disorders (KIND), Centre for Psychiatry Research, Department of Women’s and Children’s Health, Child and Adolescent Psychiatry, Region Stockholm, Karolinska Institutet & Stockholm Health Care Services, Stockholm, Sweden
- Curtin Autism Research Group, Curtin School of Allied Health, Curtin University, Perth, WA, Australia
- Gianluca Esposito
- Affiliative Behavior and Physiology Lab, Department of Psychology and Cognitive Science, University of Trento, Trento, Italy
- Anders Nordahl-Hansen
- Department of Education, ICT and Learning, Østfold University College, Halden, Norway
- Herbert Roeyers
- Research in Developmental Disorders Lab, Department of Experimental Clinical and Health Psychology, Faculty of Psychology and Educational Sciences, Ghent University, Ghent, Belgium
- Florentin Wörgötter
- Third Institute of Physics-Biophysics, Georg-August University Göttingen, Göttingen, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
- Christa Einspieler
- iDN - Interdisciplinary Developmental Neuroscience, Division of Phoniatrics, Medical University of Graz, Graz, Austria
- Luise Poustka
- Child and Adolescent Psychiatry and Psychotherapy, Göttingen, Germany and Leibniz ScienceCampus Primate Cognition, University Medical Center Göttingen, Göttingen, Germany
- Dajie Zhang
- Child and Adolescent Psychiatry and Psychotherapy, Göttingen, Germany and Leibniz ScienceCampus Primate Cognition, University Medical Center Göttingen, Göttingen, Germany
- iDN - Interdisciplinary Developmental Neuroscience, Division of Phoniatrics, Medical University of Graz, Graz, Austria
16
Cognitive control of song production by humpback whales. Anim Cogn 2022; 25:1133-1149. PMID: 36058997; DOI: 10.1007/s10071-022-01675-9.
Abstract
Singing humpback whales are highly versatile vocalizers, producing complex sequences of sounds that they vary throughout adulthood. Past analyses of humpback whale song have emphasized yearly variations in structural features of songs made collectively by singers within a population with comparatively little attention given to the ways that individual singers vary consecutive songs. As a result, many researchers describe singing by humpback whales as a process in which singers produce sequences of repeating sound patterns. Here, we show that such characterizations misrepresent the degree to which humpback whales flexibly and dynamically control the production of sounds and sound patterns within song sessions. Singers recorded off the coast of Hawaii continuously morphed units along multiple acoustic dimensions, with the degree and direction of morphing varying across parallel streams of successive units. Individual singers also produced multiple phrase variants (structurally similar, but acoustically distinctive sequences) within song sessions. The precision with which individual singers maintained some acoustic properties of phrases and morphing trajectories while flexibly changing others suggests that singing humpback whales actively select and adjust acoustic elements of their songs in real time rather than simply repeating stereotyped sound patterns within song sessions.
17
Modulation transfer functions for audiovisual speech. PLoS Comput Biol 2022; 18:e1010273. PMID: 35852989; PMCID: PMC9295967; DOI: 10.1371/journal.pcbi.1010273.
Abstract
Temporal synchrony between facial motion and acoustic modulations is a hallmark feature of audiovisual speech. The moving face and mouth during natural speech are known to be correlated with low-frequency acoustic envelope fluctuations (below 10 Hz), but the precise rates at which envelope information is synchronized with motion in different parts of the face are less clear. Here, we used regularized canonical correlation analysis (rCCA) to learn speech envelope filters whose outputs correlate with motion in different parts of the speaker's face. We leveraged recent advances in video-based 3D facial landmark estimation allowing us to examine statistical envelope-face correlations across a large number of speakers (∼4000). Specifically, rCCA was used to learn modulation transfer functions (MTFs) for the speech envelope that significantly predict correlation with facial motion across different speakers. The AV analysis revealed bandpass speech envelope filters at distinct temporal scales. A first set of MTFs showed peaks around 3-4 Hz and were correlated with mouth movements. A second set of MTFs captured envelope fluctuations in the 1-2 Hz range correlated with more global face and head motion. These two distinctive timescales emerged only as a property of natural AV speech statistics across many speakers. A similar analysis of fewer speakers performing a controlled speech task highlighted only the well-known temporal modulations around 4 Hz correlated with orofacial motion. The different bandpass ranges of AV correlation align notably with the average rates at which syllables (3-4 Hz) and phrases (1-2 Hz) are produced in natural speech. Whereas periodicities at the syllable rate are evident in the envelope spectrum of the speech signal itself, slower 1-2 Hz regularities thus only become prominent when considering crossmodal signal statistics. This may indicate a motor origin of temporal regularities at the timescales of syllables and phrases in natural speech.
18
Echolocation-related reversal of information flow in a cortical vocalization network. Nat Commun 2022; 13:3642. PMID: 35752629; PMCID: PMC9233670; DOI: 10.1038/s41467-022-31230-6.
Abstract
The mammalian frontal and auditory cortices are important for vocal behavior. Here, using local-field potential recordings, we demonstrate that the timing and spatial patterns of oscillations in the fronto-auditory network of vocalizing bats (Carollia perspicillata) predict the purpose of vocalization: echolocation or communication. Transfer entropy analyses revealed predominant top-down (frontal-to-auditory cortex) information flow during spontaneous activity and pre-vocal periods. The dynamics of information flow depend on the behavioral role of the vocalization and on the timing relative to vocal onset. We observed the emergence of predominant bottom-up (auditory-to-frontal) information transfer during the post-vocal period specific to echolocation pulse emission, leading to self-directed acoustic feedback. Electrical stimulation of frontal areas selectively enhanced responses to sounds in auditory cortex. These results reveal unique changes in information flow across sensory and frontal cortices, potentially driven by the purpose of the vocalization in a highly vocal mammalian model.
19
Zhang Y, Alvarez JL, Ghazanfar AA. Arousal elevation drives the development of oscillatory vocal output. J Neurophysiol 2022; 127:1519-1531. PMID: 35475704; DOI: 10.1152/jn.00007.2022.
Abstract
Adult behaviors, such as vocal production, often exhibit temporal regularity. In contrast, their immature forms are more irregular. We ask whether the coupling of motor behaviors with arousal changes gives rise to temporal regularity. Does this coupling drive the transition from variable to regular motor output over the course of development? We used marmoset monkey vocal production to explore this putative influence of arousal on the nonlinear changes in their developing vocal output patterns. Based on a detailed analysis of vocal and arousal dynamics in marmosets, we put forth a general model incorporating arousal and auditory-feedback loops for spontaneous vocal production. Using this model, we show that a stable oscillation can emerge as the baseline arousal increases, predicting the transition from stochastic to periodic oscillations observed during marmoset vocal development. We further provide a solution for how this model can explain vocal development as the joint consequence of energetic growth and social feedback. Together, we put forth a plausible mechanism for the development of arousal-mediated adaptive behavior.
Affiliation(s)
- Yisi Zhang
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- John Luis Alvarez
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Asif A Ghazanfar
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Department of Psychology, Princeton University, Princeton, NJ, USA
- Department of Ecology and Evolutionary Biology, Princeton University, Princeton, NJ, USA
20
Zheng DJ, Okobi DE, Shu R, Agrawal R, Smith SK, Long MA, Phelps SM. Mapping the vocal circuitry of Alston's singing mouse with pseudorabies virus. J Comp Neurol 2022; 530:2075-2099. PMID: 35385140; DOI: 10.1002/cne.25321.
Abstract
Vocalizations are often elaborate, rhythmically structured behaviors. Vocal motor patterns require close coordination of neural circuits governing the muscles of the larynx, jaw, and respiratory system. In the elaborate vocalization of Alston's singing mouse (Scotinomys teguina), each note of its rapid, frequency-modulated trill is accompanied by equally rapid modulation of breath and gape. To elucidate the neural circuitry underlying this behavior, we introduced the polysynaptic retrograde neuronal tracer pseudorabies virus (PRV) into the cricothyroid and digastricus muscles, which control frequency modulation and jaw opening, respectively. Each virus singly labels ipsilateral motoneurons (nucleus ambiguus for cricothyroid, and motor trigeminal nucleus for digastricus). We find that the two isogenic viruses heavily and bilaterally colabel neurons in the gigantocellular reticular formation, a putative central pattern generator. The viruses also show strong colabeling in compartments of the midbrain including the ventrolateral periaqueductal gray and the parabrachial nucleus, two structures strongly implicated in vocalizations. In the forebrain, regions important to social cognition and energy balance both exhibit extensive colabeling. This includes the paraventricular and arcuate nuclei of the hypothalamus, the lateral hypothalamus, preoptic area, extended amygdala, central amygdala, and the bed nucleus of the stria terminalis. Finally, we find doubly labeled neurons in M1 motor cortex previously described as laryngeal, as well as in the prelimbic cortex, indicating that these cortical regions play a role in vocal production. The progress of both viruses is broadly consistent with vertebrate-general patterns of vocal circuitry, as well as with circuit models derived from the primate literature.
Affiliation(s)
- Da-Jiang Zheng
- Department of Integrative Biology, The University of Texas at Austin, Austin, Texas, USA
- Daniel E Okobi
- Department of Neurology, University of California Los Angeles, Los Angeles, California, USA
- Ryan Shu
- Department of Integrative Biology, The University of Texas at Austin, Austin, Texas, USA
- Rania Agrawal
- Department of Integrative Biology, The University of Texas at Austin, Austin, Texas, USA
- Samantha K Smith
- Department of Integrative Biology, The University of Texas at Austin, Austin, Texas, USA
- Michael A Long
- NYU Neuroscience Institute and Department of Otolaryngology, Langone Medical Center, New York University, New York City, New York, USA
- Steven M Phelps
- Department of Integrative Biology, The University of Texas at Austin, Austin, Texas, USA
21
Banerjee A, Vallentin D. Convergent behavioral strategies and neural computations during vocal turn-taking across diverse species. Curr Opin Neurobiol 2022; 73:102529. DOI: 10.1016/j.conb.2022.102529.
22
Wass S, Perapoch Amadó M, Ives J. How the ghost learns to drive the machine? Oscillatory entrainment to our early social or physical environment and the emergence of volitional control. Dev Cogn Neurosci 2022; 54:101102. PMID: 35398645; PMCID: PMC9010552; DOI: 10.1016/j.dcn.2022.101102.
Abstract
An individual’s early interactions with their environment are thought to be largely passive; through the early years, the capacity for volitional control develops. Here, we consider: how is the emergence of volitional control characterised by changes in the entrainment observed between internal activity (behaviour, physiology and brain activity) and the sights and sounds in our everyday environment (physical and social)? We differentiate between contingent responsiveness (entrainment driven by evoked responses to external events) and oscillatory entrainment (driven by internal oscillators becoming temporally aligned with external oscillators). We conclude that ample evidence suggests that children show behavioural, physiological and neural entrainment to their physical and social environment, irrespective of volitional attention control; however, evidence for oscillatory entrainment beyond contingent responsiveness is currently lacking. Evidence for how oscillatory entrainment changes over developmental time is also lacking. Finally, we suggest a mechanism through which periodic environmental rhythms might facilitate both sensory processing and the development of volitional control even in the absence of oscillatory entrainment.
23
Wei XP, Collie M, Dempsey B, Fortin G, Yackle K. A novel reticular node in the brainstem synchronizes neonatal mouse crying with breathing. Neuron 2022; 110:644-657.e6. PMID: 34998469; PMCID: PMC8857054; DOI: 10.1016/j.neuron.2021.12.014.
Abstract
Human speech can be divided into short, rhythmically timed elements, similar to syllables within words. Even our cries and laughs, as well as the vocalizations of other species, are periodic. However, the cellular and molecular mechanisms underlying the tempo of mammalian vocalizations remain unknown. Furthermore, even the core cells that produce vocalizations remain ill-defined. Here, we describe rhythmically timed neonatal mouse vocalizations that occur within single breaths and identify a brainstem node that is necessary for and sufficient to structure these cries, which we name the intermediate reticular oscillator (iRO). We show that the iRO acts autonomously and sends direct inputs to key muscles and the respiratory rhythm generator in order to coordinate neonatal vocalizations with breathing, as well as paces and patterns these cries. These results reveal that a novel mammalian brainstem oscillator embedded within the conserved breathing circuitry plays a central role in the production of neonatal vocalizations.
Affiliation(s)
- Xin Paul Wei
- Department of Physiology, University of California, San Francisco, San Francisco, CA 94143, USA
- Biomedical Sciences Graduate Program, University of California, San Francisco, San Francisco, CA 94143, USA
- Matthew Collie
- Department of Physiology, University of California, San Francisco, San Francisco, CA 94143, USA
- Bowen Dempsey
- Institut de Biologie de l'École Normale Supérieure (IBENS), École Normale Supérieure, CNRS, INSERM, PSL Research University, Paris, France
- Gilles Fortin
- Institut de Biologie de l'École Normale Supérieure (IBENS), École Normale Supérieure, CNRS, INSERM, PSL Research University, Paris, France
- Kevin Yackle
- Department of Physiology, University of California, San Francisco, San Francisco, CA 94143, USA
24
Vocalization and physiological hyperarousal in infant-caregiver dyads where the caregiver has elevated anxiety. Dev Psychopathol 2022; 35:459-470. PMID: 35105411; DOI: 10.1017/s095457942100153x.
Abstract
Co-regulation of physiological arousal within the caregiver-child dyad precedes later self-regulation within the individual. Despite the importance of unimpaired self-regulatory development for later adjustment outcomes, little is understood about how early co-regulatory processes can become dysregulated during early life. Aspects of caregiver behavior, such as patterns of anxious speech, may be one factor influencing infant arousal dysregulation. To address this, we made day-long, naturalistic biobehavioral recordings in home settings in caregiver-infant dyads using wearable autonomic devices and miniature microphones. We examined the association between arousal, vocalization intensity, and caregiver anxiety. We found that moments of high physiological arousal in infants were more likely to be accompanied by high caregiver arousal when caregivers had high self-reported trait anxiety. Anxious caregivers were also more likely to vocalize intensely at states of high arousal and produce intense vocalizations that occurred in clusters. High-intensity vocalizations were associated with more sustained increases in autonomic arousal for both anxious caregivers and their infants. Findings indicate that caregiver vocal behavior differs in anxious parents, cooccurs with dyadic arousal dysregulation, and could contribute to physiological arousal transmission. Implications for caregiver vocalization as an intervention target are discussed.
25

26
Risueno-Segovia C, Koç O, Champéroux P, Hage SR. Cardiovascular mechanisms underlying vocal behavior in freely moving macaque monkeys. iScience 2022; 25:103688. PMID: 35036873; PMCID: PMC8749184; DOI: 10.1016/j.isci.2021.103688.
Abstract
Communication is a keystone of animal behavior. However, the physiological states underlying natural vocal signaling are still largely unknown. In this study, we investigated the correlation of affective vocal utterances with concomitant cardiorespiratory mechanisms. We telemetrically recorded electrocardiography, blood pressure, and physical activity in six freely moving and interacting cynomolgus monkeys (Macaca fascicularis). Our results demonstrate that vocal onsets are strengthened during states of sympathetic activation, and are phase locked to a slower Mayer wave and a faster heart rate signal at ∼2.5 Hz. Vocalizations are coupled with a distinct peri-vocal physiological signature based on which we were able to predict the onset of vocal output using three machine learning classification models. These findings emphasize the role of cardiorespiratory mechanisms correlated with vocal onsets in optimizing arousal levels and minimizing energy expenditure during natural vocal production.
Highlights:
- Cardiovascular signals are measured telemetrically in freely moving macaques
- A distinct cardiovascular physiological signature is present before vocal onset
- Vocal onsets are phase locked to the Mayer wave and heart rate signals
- Vocal onset prediction is performed using machine learning classification models
Affiliation(s)
- Cristina Risueno-Segovia
- Neurobiology of Social Communication, Department of Otolaryngology-Head and Neck Surgery, Hearing Research Centre, University of Tübingen, Medical Center, Elfriede-Aulhorn-Strasse 5, 72076 Tübingen, Germany
- Werner Reichardt Centre for Integrative Neuroscience, University of Tübingen, Otfried-Müller-Street 25, 72076 Tübingen, Germany
- Graduate School of Neural and Behavioural Sciences-International Max Planck Research School, University of Tübingen, Österberg-Street 3, 72074 Tübingen, Germany
- Okan Koç
- Werner Reichardt Centre for Integrative Neuroscience, University of Tübingen, Otfried-Müller-Street 25, 72076 Tübingen, Germany
- Pascal Champéroux
- European Research Biology Center, ERBC, Chemin de Montifault, 18800 Baugy, France
- Steffen R Hage
- Neurobiology of Social Communication, Department of Otolaryngology-Head and Neck Surgery, Hearing Research Centre, University of Tübingen, Medical Center, Elfriede-Aulhorn-Strasse 5, 72076 Tübingen, Germany
- Werner Reichardt Centre for Integrative Neuroscience, University of Tübingen, Otfried-Müller-Street 25, 72076 Tübingen, Germany
27
Wass S, Phillips E, Smith C, Fatimehin EOOB, Goupil L. Vocal communication is tied to interpersonal arousal coupling in caregiver-infant dyads. eLife 2022; 11:77399. PMID: 36537657; PMCID: PMC9833822; DOI: 10.7554/elife.77399.
Abstract
It has been argued that a necessary condition for the emergence of speech in humans is the ability to vocalise irrespective of underlying affective states, but when and how this happens during development remains unclear. To examine this, we used wearable microphones and autonomic sensors to collect multimodal naturalistic datasets from 12-month-olds and their caregivers. We observed that, across the day, clusters of vocalisations occur during elevated infant and caregiver arousal. This relationship is stronger in infants than caregivers: caregivers' vocalisations show greater decoupling from their own states of arousal, and their vocal production is more influenced by the infant's arousal than by their own. Different types of vocalisation elicit different patterns of change across the dyad. Cries occur following reduced infant arousal stability and lead to increased child-caregiver arousal coupling and decreased infant arousal. Speech-like vocalisations also occur at elevated arousal, but lead to longer-lasting increases in arousal and elicit more parental verbal responses. Our results suggest that 12-month-old infants' vocalisations are strongly contingent on their arousal state (for both cries and speech-like vocalisations), whereas adults' vocalisations are more flexibly tied to their own arousal; that cries and speech-like vocalisations alter the intra-dyadic dynamics of arousal in different ways, which may be an important factor driving speech development; and that this selection mechanism driving vocal development is anchored in our stress physiology.
Affiliation(s)
- Sam Wass
- Department of Psychology, University of East London, London, United Kingdom
- Emily Phillips
- Department of Psychology, University of East London, London, United Kingdom
- Celia Smith
- Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, United Kingdom
28
The maturational gradient of infant vocalizations: Developmental stages and functional modules. Infant Behav Dev 2021; 66:101682. PMID: 34920296; DOI: 10.1016/j.infbeh.2021.101682.
Abstract
Stage models have been influential in characterizing infant vocalizations in the first year of life. These models are basically descriptive and do not explain why certain types of vocal behaviors occur within a particular stage or why successive patterns of vocalization occur. This review paper summarizes and elaborates a theory of Developmental Functional Modules (DFMs) and discusses how maturational gradients in the DFMs explain age typical vocalizations as well as the transitions between successive stages or other static forms. Maturational gradients are based on biological processes that effect the reconfiguration and remodeling of the respiratory, laryngeal, and craniofacial systems during infancy. From a dynamic systems perspective, DFMs are part of a complex system with multiple degrees of freedom that can achieve stable performance with relatively few control variables by relying on principles such as synergies, self-organization, nonlinear performance, and movement variability.
29
Patel AD. Vocal learning as a preadaptation for the evolution of human beat perception and synchronization. Philos Trans R Soc Lond B Biol Sci 2021; 376:20200326. PMID: 34420384; PMCID: PMC8380969; DOI: 10.1098/rstb.2020.0326.
Abstract
The human capacity to synchronize movements to an auditory beat is central to musical behaviour and to debates over the evolution of human musicality. Have humans evolved any neural specializations for music processing, or does music rely entirely on brain circuits that evolved for other reasons? The vocal learning and rhythmic synchronization hypothesis proposes that our ability to move in time with an auditory beat in a precise, predictive and tempo-flexible manner originated in the neural circuitry for complex vocal learning. In the 15 years since the hypothesis was proposed, a variety of studies have supported it. However, one study has provided a significant challenge to the hypothesis. Furthermore, it is increasingly clear that vocal learning is not a binary trait animals have or lack, but varies more continuously across species. In the light of these developments and of recent progress in the neurobiology of beat processing and of vocal learning, the current paper revises the vocal learning hypothesis. It argues that an advanced form of vocal learning acts as a preadaptation for sporadic beat perception and synchronization (BPS), providing intrinsic rewards for predicting the temporal structure of complex acoustic sequences. It further proposes that in humans, mechanisms of gene-culture coevolution transformed this preadaptation into a genuine neural adaptation for sustained BPS. The larger significance of this proposal is that it outlines a hypothesis of cognitive gene-culture coevolution which makes testable predictions for neuroscience, cross-species studies and genetics. This article is part of the theme issue 'Synchrony and rhythm interaction: from the brain to behavioural ecology'.
Affiliation(s)
- Aniruddh D. Patel
- Department of Psychology, Tufts University, Medford, MA, USA
- Program in Brain, Mind, and Consciousness, Canadian Institute for Advanced Research, Toronto, Canada
30
Golesorkhi M, Gomez-Pilar J, Zilio F, Berberian N, Wolff A, Yagoub MCE, Northoff G. The brain and its time: intrinsic neural timescales are key for input processing. Commun Biol 2021; 4:970. PMID: 34400800; PMCID: PMC8368044; DOI: 10.1038/s42003-021-02483-6.
Abstract
We process and integrate multiple timescales into one meaningful whole. Recent evidence suggests that the brain displays a complex multiscale temporal organization. Different regions exhibit different timescales as described by the concept of intrinsic neural timescales (INT); however, their function and neural mechanisms remain unclear. We review recent literature on INT and propose that they are key for input processing. Specifically, they are shared across different species, i.e., input sharing. This suggests a role of INT in encoding inputs through matching the inputs' stochastics with the ongoing temporal statistics of the brain's neural activity, i.e., input encoding. Following simulation and empirical data, we point out input integration versus segregation and input sampling as key temporal mechanisms of input processing. This deeply grounds the brain within its environmental and evolutionary context. It carries major implications for understanding mental features and psychiatric disorders, as well as for going beyond the brain in integrating timescales into artificial intelligence.
Affiliation(s)
- Mehrshad Golesorkhi
- grid.28046.380000 0001 2182 2255School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, Canada ,grid.28046.380000 0001 2182 2255Mind, Brain Imaging and Neuroethics Research Unit, Institute of Mental Health, Royal Ottawa Mental Health Centre and University of Ottawa, Ottawa, Canada
| | - Javier Gomez-Pilar
- Biomedical Engineering Group, University of Valladolid, Valladolid, Spain
- Centro de Investigación Biomédica en Red en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Madrid, Spain
| | - Federico Zilio
- Department of Philosophy, Sociology, Education and Applied Psychology, University of Padova, Padua, Italy
| | - Nareg Berberian
- Mind, Brain Imaging and Neuroethics Research Unit, Institute of Mental Health, Royal Ottawa Mental Health Centre and University of Ottawa, Ottawa, Canada
| | - Annemarie Wolff
- Mind, Brain Imaging and Neuroethics Research Unit, Institute of Mental Health, Royal Ottawa Mental Health Centre and University of Ottawa, Ottawa, Canada
| | - Mustapha C. E. Yagoub
- School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, Canada
| | - Georg Northoff
- Mind, Brain Imaging and Neuroethics Research Unit, Institute of Mental Health, Royal Ottawa Mental Health Centre and University of Ottawa, Ottawa, Canada
- Centre for Cognition and Brain Disorders, Hangzhou Normal University, Hangzhou, China
- Mental Health Centre, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
| |
Collapse
|
31
|
Vocal learning and flexible rhythm pattern perception are linked: Evidence from songbirds. Proc Natl Acad Sci U S A 2021; 118:2026130118. [PMID: 34272278 PMCID: PMC8307534 DOI: 10.1073/pnas.2026130118] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022] Open
Abstract
We can recognize the cadence of a friend’s voice or the rhythm of a familiar song across a wide range of tempi. This shows that our perception of temporal patterns relies strongly on the relative timing of events rather than on specific absolute durations. This tendency is foundational to speech and music perception, but to what extent is it shared by other species? We hypothesize that animals that learn their vocalizations are more likely to share this tendency. Here, we show that a vocal learning songbird robustly recognizes a basic rhythmic pattern independent of rate. Our findings pave the way for neurobiological studies to identify how the brain represents and perceives the temporal structure of auditory sequences. Rhythm perception is fundamental to speech and music. Humans readily recognize a rhythmic pattern, such as that of a familiar song, independently of the tempo at which it occurs. This shows that our perception of auditory rhythms is flexible, relying on global relational patterns more than on the absolute durations of specific time intervals. Given that auditory rhythm perception in humans engages a complex auditory–motor cortical network even in the absence of movement and that the evolution of vocal learning is accompanied by strengthening of forebrain auditory–motor pathways, we hypothesize that vocal learning species share our perceptual facility for relational rhythm processing. We test this by asking whether the best-studied animal model for vocal learning, the zebra finch, can recognize a fundamental rhythmic pattern—equal timing between event onsets (isochrony)—based on temporal relations between intervals rather than on absolute durations. Prior work suggests that vocal nonlearners (pigeons and rats) are quite limited in this regard and are biased to attend to absolute durations when listening to rhythmic sequences. In contrast, using naturalistic sounds at multiple stimulus rates, we show that male zebra finches robustly recognize isochrony independent of absolute time intervals, even at rates distant from those used in training. Our findings highlight the importance of comparative studies of rhythmic processing and suggest that vocal learning species are promising animal models for key aspects of human rhythm perception. Such models are needed to understand the neural mechanisms behind the positive effect of rhythm on certain speech and movement disorders.
Collapse
|
32
|
Kent RD. Developmental Functional Modules in Infant Vocalizations. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2021; 64:1581-1604. [PMID: 33861626 DOI: 10.1044/2021_jslhr-20-00703] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
Abstract
Purpose Developmental functional modules (DFMs) are biological modules that are defined by their structural (morphological), functional, or developmental elements, and, in some cases, all three of these. This review article considers the hypothesis that vocal development in the first year of life can be understood in large part with respect to DFMs that characterize the speech production system. Method Literature is reviewed on relevant embryology, orofacial reflexes, craniofacial muscle properties, stages of vocal development, and related topics to identify candidates for DFMs. Results The following DFMs are identified and described: laryngeal, pharyngo-laryngeal, mandibular, velopharyngeal, labial complex, and lingual complex. These DFMs and their submodules, considered along with phenomena such as rhythmic movements, account for several well-documented features of vocal development in the first year of life. The proposed DFMs, rooted in embryologic, histologic, and kinematic properties, serve as low-dimensional control variables for the developing vocal tract. Each DFM is semi-autonomous but interacts with other DFMs to produce patterns of vocal behavior. Discussion Considered in relation to contemporary profiles and models of vocal development in the first year of life, DFMs have interpretive and explanatory value. DFMs complement other approaches in the study of infant vocalizations and are grounded in biology.
Collapse
Affiliation(s)
- Ray D Kent
- Department of Communication Sciences & Disorders, University of Wisconsin-Madison
| |
Collapse
|
33
|
Tripp JA, Feng NY, Bass AH. To hum or not to hum: Neural transcriptome signature of male courtship vocalization in a teleost fish. GENES, BRAIN, AND BEHAVIOR 2021; 20:e12740. [PMID: 33960645 DOI: 10.1111/gbb.12740] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/23/2020] [Revised: 01/31/2021] [Accepted: 05/04/2021] [Indexed: 11/28/2022]
Abstract
For many animal species, vocal communication is a critical social behavior and often a necessary component of reproductive success. Additionally, vocalizations are often demanding motor acts. Wanting to know whether a specific molecular toolkit might be required for vocalization, we used RNA-sequencing to investigate neural gene expression underlying the performance of an extreme vocal behavior, the courtship hum of the plainfin midshipman fish (Porichthys notatus). Single hums can last up to 2 h and may be repeated throughout an evening of courtship activity. We asked whether vocal behavioral states are associated with specific gene expression signatures in key brain regions that regulate vocalization by comparing transcript expression levels in humming versus non-humming males. We find that the circadian-related genes period3 and Clock are significantly upregulated in the vocal motor nucleus and preoptic area-anterior hypothalamus, respectively, in humming compared with non-humming males, indicating that internal circadian clocks may differ between these divergent behavioral states. In addition, we identify suites of differentially expressed genes related to synaptic transmission, ion channels and transport, neuropeptide and hormone signaling, and metabolism and antioxidant activity that together may support the neural and energetic demands of humming behavior. Comparisons of transcript expression across regions stress regional differences in brain gene expression, while also showing coordinated gene regulation in the vocal motor circuit in preparation for courtship behavior. These results underscore the role of differential gene expression in shifts between behavioral states, in this case neuroendocrine, motor and circadian control of courtship vocalization.
Collapse
Affiliation(s)
- Joel A Tripp
- Department of Neurobiology and Behavior, Cornell University, Ithaca, New York, USA
- Department of Integrative Biology, University of Texas-Austin, Austin, Texas, USA
| | - Ni Y Feng
- Department of Neurobiology and Behavior, Cornell University, Ithaca, New York, USA
- Department of Cellular and Molecular Physiology, Yale University School of Medicine, New Haven, Connecticut, USA
| | - Andrew H Bass
- Department of Neurobiology and Behavior, Cornell University, Ithaca, New York, USA
| |
Collapse
|
34
|
Garcia M, Manser M. Bound for Specific Sounds: Vocal Predisposition in Animal Communication. Trends Cogn Sci 2020; 24:690-693. [PMID: 32595086 DOI: 10.1016/j.tics.2020.05.013] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2020] [Revised: 05/20/2020] [Accepted: 05/22/2020] [Indexed: 11/16/2022]
Abstract
Mechanical constraints imposed by anatomical adaptations are a ubiquitous feature of animal sound production. They can give rise to 'vocal predispositions' (i.e., acoustic structures strictly determined by vocal anatomy). Such predispositions are crucial to the investigation of the cognitive and evolutionary processes underlying acoustic communication in vertebrates, including human speech.
Collapse
Affiliation(s)
- Maxime Garcia
- Animal Behaviour, Department of Evolutionary Biology and Environmental Studies, University of Zurich, Winterthurerstrasse 190, 8051 Zurich, Switzerland; Center for the Interdisciplinary Study of Language Evolution, University of Zurich, 8032 Zurich, Switzerland
| | - Marta Manser
- Animal Behaviour, Department of Evolutionary Biology and Environmental Studies, University of Zurich, Winterthurerstrasse 190, 8051, Zurich, Switzerland; Center for the Interdisciplinary Study of Language Evolution, University of Zurich, 8032 Zurich, Switzerland
| |
Collapse
|
35
|
Balezeau F, Wilson B, Gallardo G, Dick F, Hopkins W, Anwander A, Friederici AD, Griffiths TD, Petkov CI. Primate auditory prototype in the evolution of the arcuate fasciculus. Nat Neurosci 2020; 23:611-614. [PMID: 32313267 PMCID: PMC7195223 DOI: 10.1038/s41593-020-0623-9] [Citation(s) in RCA: 37] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2019] [Accepted: 03/16/2020] [Indexed: 12/27/2022]
Abstract
The human arcuate fasciculus pathway is crucial for language, interconnecting posterior temporal and inferior frontal areas. Whether a monkey homolog exists is controversial and the nature of human-specific specialization unclear. Using monkey, ape and human auditory functional fields and diffusion-weighted MRI, we identified homologous pathways originating from the auditory cortex. This discovery establishes a primate auditory prototype for the arcuate fasciculus, reveals an earlier phylogenetic origin and illuminates its remarkable transformation.
Collapse
Affiliation(s)
- Fabien Balezeau
- Newcastle University Medical School, Newcastle upon Tyne, UK.
| | - Benjamin Wilson
- Newcastle University Medical School, Newcastle upon Tyne, UK.
- Department of Psychology and Yerkes Primate Research Center, Emory University, Atlanta, GA, USA.
| | - Guillermo Gallardo
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Fred Dick
- Birkbeck-UCL Centre for NeuroImaging, Birkbeck University of London, London, UK
| | - William Hopkins
- Keeling Center for Comparative Medicine and Research, University of Texas MD Anderson Cancer Center, Bastrop, TX, USA
| | - Alfred Anwander
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Angela D Friederici
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Timothy D Griffiths
- Newcastle University Medical School, Newcastle upon Tyne, UK
- Wellcome Trust Centre for Neuroimaging, University College London, London, UK
- Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
| | | |
Collapse
|