1
Colquitt BM, Li K, Green F, Veline R, Brainard MS. Neural circuit-wide analysis of changes to gene expression during deafening-induced birdsong destabilization. eLife 2023; 12:e85970. PMID: 37284822; PMCID: PMC10259477; DOI: 10.7554/eLife.85970.
Abstract
Sensory feedback is required for the stable execution of learned motor skills, and its loss can severely disrupt motor performance. The neural mechanisms that mediate sensorimotor stability have been extensively studied at systems and physiological levels, yet relatively little is known about how disruptions to sensory input alter the molecular properties of associated motor systems. Songbird courtship song, a model for skilled behavior, is a learned and highly structured vocalization that is destabilized following deafening. Here, we sought to determine how the loss of auditory feedback modifies gene expression and its coordination across the birdsong sensorimotor circuit. To facilitate this system-wide analysis of transcriptional responses, we developed a gene expression profiling approach that enables the construction of hundreds of spatially-defined RNA-sequencing libraries. Using this method, we found that deafening preferentially alters gene expression across birdsong neural circuitry relative to surrounding areas, particularly in premotor and striatal regions. Genes with altered expression are associated with synaptic transmission, neuronal spines, and neuromodulation and show a bias toward expression in glutamatergic neurons and Pvalb/Sst-class GABAergic interneurons. We also found that connected song regions exhibit correlations in gene expression that were reduced in deafened birds relative to hearing birds, suggesting that song destabilization alters the inter-region coordination of transcriptional states. Finally, lesioning LMAN, a forebrain afferent of RA required for deafening-induced song plasticity, had the largest effect on groups of genes that were also most affected by deafening. Combined, this integrated transcriptomics analysis demonstrates that the loss of peripheral sensory input drives a distributed gene expression response throughout associated sensorimotor neural circuitry and identifies specific candidate molecular and cellular mechanisms that support the stability and plasticity of learned motor skills.
Affiliation(s)
- Bradley M Colquitt
- Howard Hughes Medical Institute, Chevy Chase, United States
- Department of Physiology, University of California, San Francisco, San Francisco, United States
- Kelly Li
- Howard Hughes Medical Institute, Chevy Chase, United States
- Department of Physiology, University of California, San Francisco, San Francisco, United States
- Foad Green
- Howard Hughes Medical Institute, Chevy Chase, United States
- Department of Physiology, University of California, San Francisco, San Francisco, United States
- Robert Veline
- Howard Hughes Medical Institute, Chevy Chase, United States
- Department of Physiology, University of California, San Francisco, San Francisco, United States
- Michael S Brainard
- Howard Hughes Medical Institute, Chevy Chase, United States
- Department of Physiology, University of California, San Francisco, San Francisco, United States
2
Rodríguez-Saltos CA, Bhise A, Karur P, Khan RN, Lee S, Ramsay G, Maney DL. Song preferences predict the quality of vocal learning in zebra finches. Sci Rep 2023; 13:605. PMID: 36635470; PMCID: PMC9837092; DOI: 10.1038/s41598-023-27708-y.
Abstract
In songbirds, learning to sing is a highly social process that likely involves social reward. Here, we tested the hypothesis that during song learning, the reward value of hearing a particular song predicts the degree to which that song will ultimately be learned. We measured the early song preferences of young male zebra finches (Taeniopygia guttata) in an operant key-pressing assay; each of two keys was associated with a higher likelihood of playing the song of the father or that of another familiar adult ("neighbor"). To minimize the effects of exposure on learning, we implemented a novel reinforcement schedule that allowed us to detect preferences while balancing exposure to each song. On average, the juveniles significantly preferred the father's song early during song learning, before actual singing occurs in this species. When they reached adulthood, all the birds copied the father's song. The accuracy with which the father's song was imitated was positively correlated with the peak strength of the preference for the father's song during the sensitive period of song learning. Our results show that preference for the song of a chosen tutor, in this case the father, predicted vocal learning during development.
Affiliation(s)
- Aditya Bhise
- Department of Psychology, Emory University, 36 Eagle Row, Atlanta, GA, 30322, USA
- Prasanna Karur
- Department of Psychology, Emory University, 36 Eagle Row, Atlanta, GA, 30322, USA
- Ramsha Nabihah Khan
- Department of Psychology, Emory University, 36 Eagle Row, Atlanta, GA, 30322, USA
- Sumin Lee
- Department of Psychology, Emory University, 36 Eagle Row, Atlanta, GA, 30322, USA
- Gordon Ramsay
- Marcus Autism Center, Children's Healthcare of Atlanta, Atlanta, GA, 30329, USA; Department of Pediatrics, Emory University, Atlanta, GA, 30329, USA
- Donna L Maney
- Department of Psychology, Emory University, 36 Eagle Row, Atlanta, GA, 30322, USA
3
Cohen Y, Nicholson DA, Sanchioni A, Mallaber EK, Skidanova V, Gardner TJ. Automated annotation of birdsong with a neural network that segments spectrograms. eLife 2022; 11:e63853. PMID: 35050849; PMCID: PMC8860439; DOI: 10.7554/eLife.63853.
Abstract
Songbirds provide a powerful model system for studying sensory-motor learning. However, many analyses of birdsong require time-consuming, manual annotation of its elements, called syllables. Automated methods for annotation have been proposed, but these methods assume that audio can be cleanly segmented into syllables, or they require carefully tuning multiple statistical models. Here we present TweetyNet: a single neural network model that learns how to segment spectrograms of birdsong into annotated syllables. We show that TweetyNet mitigates limitations of methods that rely on segmented audio. We also show that TweetyNet performs well across multiple individuals from two species of songbirds, Bengalese finches and canaries. Lastly, we demonstrate that using TweetyNet we can accurately annotate very large datasets containing multiple days of song, and that these predicted annotations replicate key findings from behavioral studies. In addition, we provide open-source software to assist other researchers, and a large dataset of annotated canary song that can serve as a benchmark. We conclude that TweetyNet makes it possible to address a wide range of new questions about birdsong.
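The core idea behind this style of annotation — the network emits one label per spectrogram time bin, and contiguous runs of identical labels are then collapsed into labeled syllable segments — can be sketched in a few lines. This is an illustrative post-processing step only, not TweetyNet's actual code; the function name, the `"silence"` label, and both thresholds are assumptions.

```python
def frames_to_segments(frame_labels, frame_dur=0.002, min_dur=0.01):
    """Collapse a per-frame label sequence (one label per spectrogram
    time bin, as a TweetyNet-style network would emit) into labeled
    segments (onset_s, offset_s, label), dropping sub-threshold blips."""
    segments = []
    start = 0
    for i in range(1, len(frame_labels) + 1):
        # A segment ends at a label change or at the end of the sequence.
        if i == len(frame_labels) or frame_labels[i] != frame_labels[start]:
            label = frame_labels[start]
            dur = (i - start) * frame_dur
            if label != "silence" and dur >= min_dur:
                segments.append((start * frame_dur, i * frame_dur, label))
            start = i
    return segments
```

With 2 ms bins, a run of ten "a" frames becomes a 20 ms segment, while a two-frame blip falls below `min_dur` and is discarded — a simple stand-in for the cleanup that learned segmentation makes possible on noisy audio.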
Affiliation(s)
- Yarden Cohen
- Department of Brain Sciences, Weizmann Institute of Science, Rehovot, Israel
- Alexa Sanchioni
- Department of Biology, Boston University, Boston, United States
- Timothy J Gardner
- Phil and Penny Knight Campus for Accelerating Scientific Impact, University of Oregon, Eugene, United States
4
Sainburg T, Gentner TQ. Toward a Computational Neuroethology of Vocal Communication: From Bioacoustics to Neurophysiology, Emerging Tools and Future Directions. Front Behav Neurosci 2021; 15:811737. PMID: 34987365; PMCID: PMC8721140; DOI: 10.3389/fnbeh.2021.811737.
Abstract
Recently developed methods in computational neuroethology have enabled increasingly detailed and comprehensive quantification of animal movements and behavioral kinematics. Vocal communication behavior is well poised for application of similar large-scale quantification methods in the service of physiological and ethological studies. This review describes emerging techniques that can be applied to acoustic and vocal communication signals with the goal of enabling study beyond a small number of model species. We review a range of modern computational methods for bioacoustics, signal processing, and brain-behavior mapping. Along with a discussion of recent advances and techniques, we include challenges and broader goals in establishing a framework for the computational neuroethology of vocal communication.
Affiliation(s)
- Tim Sainburg
- Department of Psychology, University of California, San Diego, La Jolla, CA, United States
- Center for Academic Research & Training in Anthropogeny, University of California, San Diego, La Jolla, CA, United States
- Timothy Q. Gentner
- Department of Psychology, University of California, San Diego, La Jolla, CA, United States
- Neurosciences Graduate Program, University of California, San Diego, La Jolla, CA, United States
- Neurobiology Section, Division of Biological Sciences, University of California, San Diego, La Jolla, CA, United States
- Kavli Institute for Brain and Mind, University of California, San Diego, La Jolla, CA, United States
5
Mobbs D, Wise T, Suthana N, Guzmán N, Kriegeskorte N, Leibo JZ. Promises and challenges of human computational ethology. Neuron 2021; 109:2224-2238. PMID: 34143951; PMCID: PMC8769712; DOI: 10.1016/j.neuron.2021.05.021.
Abstract
The movements an organism makes provide insights into its internal states and motives. This principle is the foundation of the new field of computational ethology, which links rich automatic measurements of natural behaviors to motivational states and neural activity. Computational ethology has proven transformative for animal behavioral neuroscience. This success raises the question of whether rich automatic measurements of behavior can similarly drive progress in human neuroscience and psychology. New technologies for capturing and analyzing complex behaviors in real and virtual environments enable us to probe the human brain during naturalistic dynamic interactions with the environment that so far were beyond experimental investigation. Inspired by nonhuman computational ethology, we explore how these new tools can be used to test important questions in human neuroscience. We argue that application of this methodology will help human neuroscience and psychology extend limited behavioral measurements such as reaction time and accuracy, permit novel insights into how the human brain produces behavior, and ultimately reduce the growing measurement gap between human and animal neuroscience.
Affiliation(s)
- Dean Mobbs
- Department of Humanities and Social Sciences, California Institute of Technology, 1200 E. California Blvd., HSS 228-77, Pasadena, CA 91125, USA; Computation and Neural Systems Program, California Institute of Technology, 1200 E. California Blvd., HSS 228-77, Pasadena, CA 91125, USA
- Toby Wise
- Department of Humanities and Social Sciences, California Institute of Technology, 1200 E. California Blvd., HSS 228-77, Pasadena, CA 91125, USA; Wellcome Centre for Human Neuroimaging, University College London, London, UK; Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, London, UK
- Nanthia Suthana
- Department of Psychiatry and Biobehavioral Sciences, Jane and Terry Semel Institute for Neuroscience and Human Behavior, University of California, Los Angeles, Los Angeles, CA, USA; Departments of Neurosurgery, Psychology, and Bioengineering, University of California, Los Angeles, Los Angeles, CA, USA
- Noah Guzmán
- Computation and Neural Systems Program, California Institute of Technology, 1200 E. California Blvd., HSS 228-77, Pasadena, CA 91125, USA
- Nikolaus Kriegeskorte
- Department of Psychology, Columbia University, New York, NY, USA; Department of Neuroscience, Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
6
Moorman S, Ahn JR, Kao MH. Plasticity of stereotyped birdsong driven by chronic manipulation of cortical-basal ganglia activity. Curr Biol 2021; 31:2619-2632.e4. PMID: 33974850; PMCID: PMC8222193; DOI: 10.1016/j.cub.2021.04.030.
Abstract
Cortical-basal ganglia (CBG) circuits are critical for motor learning and performance, and are a major site of pathology. In songbirds, a CBG circuit regulates moment-by-moment variability in song and also enables song plasticity. Studies have shown that variable burst firing in LMAN, the output nucleus of this CBG circuit, actively drives acute song variability, but whether and how LMAN drives long-lasting changes in song remains unclear. Here, we ask whether chronic pharmacological augmentation of LMAN bursting is sufficient to drive plasticity in birds singing stereotyped songs. We show that altered LMAN activity drives cumulative changes in acoustic structure, timing, and sequencing over multiple days, and induces repetitions and silent pauses reminiscent of human stuttering. Changes persisted when LMAN was subsequently inactivated, indicating plasticity in song motor regions. Following cessation of pharmacological treatment, acoustic features and song sequence gradually recovered to their baseline values over a period of days to weeks. Together, our findings show that augmented bursting in CBG circuitry drives plasticity in well-learned motor skills, and may inform treatments for basal ganglia movement disorders.
Affiliation(s)
- Sanne Moorman
- Psychology Department, Utrecht University, Yalelaan 2, 3584 CM Utrecht, the Netherlands; Biology Department, Tufts University, 200 Boston Avenue, Medford, MA 02155, USA
- Jae-Rong Ahn
- Biology Department, Tufts University, 200 Boston Avenue, Medford, MA 02155, USA
- Mimi H Kao
- Biology Department, Tufts University, 200 Boston Avenue, Medford, MA 02155, USA; Neuroscience Graduate Program, Tufts University, Boston, MA 02111, USA
7
Goffinet J, Brudner S, Mooney R, Pearson J. Low-dimensional learned feature spaces quantify individual and group differences in vocal repertoires. eLife 2021; 10:e67855. PMID: 33988503; PMCID: PMC8213406; DOI: 10.7554/eLife.67855.
Abstract
Increases in the scale and complexity of behavioral data pose an increasing challenge for data analysis. A common strategy involves replacing entire behaviors with small numbers of handpicked, domain-specific features, but this approach suffers from several crucial limitations. For example, handpicked features may miss important dimensions of variability, and correlations among them complicate statistical testing. Here, by contrast, we apply the variational autoencoder (VAE), an unsupervised learning method, to learn features directly from data and quantify the vocal behavior of two model species: the laboratory mouse and the zebra finch. The VAE converges on a parsimonious representation that outperforms handpicked features on a variety of common analysis tasks, enables the measurement of moment-by-moment vocal variability on the timescale of tens of milliseconds in the zebra finch, provides strong evidence that mouse ultrasonic vocalizations do not cluster as is commonly believed, and captures the similarity of tutor and pupil birdsong with qualitatively higher fidelity than previous approaches. In all, we demonstrate the utility of modern unsupervised learning approaches to the quantification of complex and high-dimensional vocal behavior.
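Once vocalizations are embedded in a learned low-dimensional feature space, downstream quantities such as moment-by-moment variability reduce to simple geometry on the latent vectors. Below is a minimal sketch of one such metric, assuming latent vectors have already been extracted by a trained VAE; the function name and the mean-pairwise-distance definition are illustrative assumptions, not the paper's exact measure.

```python
import math

def latent_variability(latents):
    """Mean pairwise Euclidean distance among latent feature vectors —
    a simple proxy for vocal variability in a learned feature space."""
    pairs = [(a, b) for i, a in enumerate(latents) for b in latents[i + 1:]]
    return sum(math.dist(a, b) for a, b in pairs) / len(pairs)
```

The same geometry supports the paper's other analyses: tutor-pupil song similarity, for instance, can be framed as a distance between the two birds' latent distributions rather than between handpicked acoustic features.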
Affiliation(s)
- Jack Goffinet
- Department of Computer Science, Duke University, Durham, United States
- Center for Cognitive Neurobiology, Duke University, Durham, United States
- Department of Neurobiology, Duke University, Durham, United States
- Samuel Brudner
- Department of Neurobiology, Duke University, Durham, United States
- Richard Mooney
- Department of Neurobiology, Duke University, Durham, United States
- John Pearson
- Center for Cognitive Neurobiology, Duke University, Durham, United States
- Department of Neurobiology, Duke University, Durham, United States
- Department of Biostatistics & Bioinformatics, Duke University, Durham, United States
- Department of Electrical and Computer Engineering, Duke University, Durham, United States
8
An avian cortical circuit for chunking tutor song syllables into simple vocal-motor units. Nat Commun 2020; 11:5029. PMID: 33024101; PMCID: PMC7538968; DOI: 10.1038/s41467-020-18732-x.
Abstract
How are brain circuits constructed to achieve complex goals? The brains of young songbirds develop motor circuits that achieve the goal of imitating a specific tutor song to which they are exposed. Here, we set out to examine how song-generating circuits may be influenced early in song learning by a cortical region (NIf) at the interface between auditory and motor systems. Single-unit recordings reveal that, during juvenile babbling, NIf neurons burst at syllable onsets, with some neurons exhibiting selectivity for particular emerging syllable types. When juvenile birds listen to their tutor, NIf neurons are also activated at tutor syllable onsets, and are often selective for particular syllable types. We examine a simple computational model in which tutor exposure imprints the correct number of syllable patterns as ensembles in an interconnected NIf network. These ensembles are then reactivated during singing to train a set of syllable sequences in the motor network.

Young songbirds learn to imitate their parents' songs. Here, the authors find that, in baby birds, neurons in a brain region at the interface of auditory and motor circuits signal the onsets of song syllables during both tutoring and babbling, suggesting a specific neural mechanism for vocal imitation.
9
Mets DG, Brainard MS. Learning is enhanced by tailoring instruction to individual genetic differences. eLife 2019; 8:e47216. PMID: 31526480; PMCID: PMC6748825; DOI: 10.7554/eLife.47216.
Abstract
It is widely argued that personalized instruction based on individual differences in learning styles or genetic predispositions could improve learning outcomes. However, this proposition has resisted clear demonstration in human studies, where it is difficult to control experience and quantify outcomes. Here, we take advantage of the tractable nature of vocal learning in songbirds (Lonchura striata domestica) to test the idea that matching instruction to individual genetic predispositions can enhance learning. We use both cross-fostering and computerized instruction with synthetic songs to demonstrate that matching the tutor song to individual predispositions can improve learning across genetic backgrounds. Moreover, we find that optimizing instruction in this fashion can equalize learning differences across individuals that might otherwise be construed as genetically determined. Our results demonstrate potent, synergistic interactions between experience and genetics in shaping song, and indicate the likely importance of such interactions for other complex learned behaviors.
Affiliation(s)
- David G Mets
- Center for Integrative Neuroscience, University of California, San Francisco, San Francisco, United States
- Howard Hughes Medical Institute, University of California, San Francisco, San Francisco, United States
- Michael S Brainard
- Center for Integrative Neuroscience, University of California, San Francisco, San Francisco, United States
- Howard Hughes Medical Institute, University of California, San Francisco, San Francisco, United States
- Department of Physiology, University of California, San Francisco, San Francisco, United States
- Department of Psychiatry, University of California, San Francisco, San Francisco, United States
10
Miller MN, Cheung CYJ, Brainard MS. Vocal learning promotes patterned inhibitory connectivity. Nat Commun 2017; 8:2105. PMID: 29235480; PMCID: PMC5727387; DOI: 10.1038/s41467-017-01914-5.
Abstract
Skill learning is instantiated by changes to functional connectivity within premotor circuits, but whether the specificity of learning depends on structured changes to inhibitory circuitry remains unclear. We used slice electrophysiology to measure connectivity changes associated with song learning in the avian analog of primary motor cortex (robust nucleus of the arcopallium, RA) in Bengalese finches. Before song learning, fast-spiking interneurons (FSIs) densely innervated glutamatergic projection neurons (PNs) with apparently random connectivity. After learning, there was a profound reduction in the overall strength and number of inhibitory connections, but this was accompanied by a more than two-fold enrichment in reciprocal FSI–PN connections. Moreover, in singing birds, we found that pharmacological manipulations of RA's inhibitory circuitry drove large shifts in learned vocal features, such as pitch and amplitude, without grossly disrupting the song. Our results indicate that skill learning establishes nonrandom inhibitory connectivity, and implicates this patterning in encoding specific features of learned movements.

Complex motor behaviors such as birdsong are learned through practice and are thought to depend on specific excitatory connectivity in premotor circuits. Here the authors show that song learning in Bengalese finches is associated with enrichment of inhibitory network connectivity that can affect specific song features.
Affiliation(s)
- Mark N Miller
- Howard Hughes Medical Institute and Departments of Physiology and Psychiatry, University of California-San Francisco, San Francisco, CA, 94158, USA
- Chung Yan J Cheung
- Neuroscience Graduate Program, University of California-San Francisco, San Francisco, CA, 94158, USA
- Michael S Brainard
- Howard Hughes Medical Institute and Departments of Physiology and Psychiatry, University of California-San Francisco, San Francisco, CA, 94158, USA