1. Zucca S, La Rosa C, Fellin T, Peretto P, Bovetti S. Developmental encoding of natural sounds in the mouse auditory cortex. Cereb Cortex 2024; 34:bhae438. PMID: 39503245; PMCID: PMC11538960; DOI: 10.1093/cercor/bhae438.
Abstract
Mice communicate through high-frequency ultrasonic vocalizations (USVs), which are crucial for social interactions such as courtship and aggression. Although USV representation has been found in adult brain areas along the auditory pathway, including the auditory cortex (ACx), no evidence is available on the neuronal representation of USVs early in life. Using in vivo two-photon calcium imaging, we analyzed ACx layer 2/3 neuronal responses to USVs, pure tones (4 to 90 kHz), and high-frequency modulated sweeps from postnatal day 12 (P12) to P21. We found that ACx neurons are tuned to respond to USV syllables as early as P12 to P13, with an increasing number of responsive cells as mice age. By P14, while pure tone responses showed a frequency preference, no syllable preference was observed. Additionally, at P14, USVs, pure tones, and modulated sweeps activate clusters of largely nonoverlapping responsive neurons. Finally, we show that while cell correlation decreases with increasing processing of peripheral auditory stimuli, neurons responding to the same stimulus maintain highly correlated spontaneous activity after circuits have attained mature organization, forming neuronal subnetworks sharing similar functional properties.
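The final claim, that cells responding to the same stimulus keep highly correlated spontaneous activity, reduces to comparing pairwise correlations within versus across response groups. Below is a minimal sketch of that comparison in Python, assuming hypothetical dF/F traces and per-cell stimulus-preference labels; it is an illustration of the general idea, not the authors' analysis pipeline.

```python
import numpy as np

def group_correlations(dff, labels):
    """Compare pairwise correlations of spontaneous activity within vs.
    across stimulus-response groups.

    dff    : (n_cells, n_frames) array of spontaneous dF/F traces
    labels : (n_cells,) array of group ids (e.g. preferred stimulus per cell)
    """
    corr = np.corrcoef(dff)                 # cell-by-cell correlation matrix
    iu = np.triu_indices(len(labels), k=1)  # indices of unique cell pairs
    same = labels[iu[0]] == labels[iu[1]]   # pairs sharing a preferred stimulus
    return corr[iu][same].mean(), corr[iu][~same].mean()

# toy usage with simulated data
rng = np.random.default_rng(0)
dff = rng.standard_normal((50, 2000))
labels = rng.integers(0, 3, size=50)
within, across = group_correlations(dff, labels)
print(f"within-group r = {within:.3f}, across-group r = {across:.3f}")
```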
Affiliation(s)
- Stefano Zucca: Department of Life Sciences and Systems Biology (DBIOS), University of Turin, via Accademia Albertina 13, 10123 Turin, Italy; Neuroscience Institute Cavalieri Ottolenghi (NICO), University of Turin, Regione Gonzole 10, 10143 Orbassano, Italy
- Chiara La Rosa: Department of Life Sciences and Systems Biology (DBIOS), University of Turin, via Accademia Albertina 13, 10123 Turin, Italy; Neuroscience Institute Cavalieri Ottolenghi (NICO), University of Turin, Regione Gonzole 10, 10143 Orbassano, Italy
- Tommaso Fellin: Optical Approaches to Brain Function Laboratory, Istituto Italiano di Tecnologia, via Morego 30, 16163 Genoa, Italy
- Paolo Peretto: Department of Life Sciences and Systems Biology (DBIOS), University of Turin, via Accademia Albertina 13, 10123 Turin, Italy; Neuroscience Institute Cavalieri Ottolenghi (NICO), University of Turin, Regione Gonzole 10, 10143 Orbassano, Italy
- Serena Bovetti: Department of Life Sciences and Systems Biology (DBIOS), University of Turin, via Accademia Albertina 13, 10123 Turin, Italy; Neuroscience Institute Cavalieri Ottolenghi (NICO), University of Turin, Regione Gonzole 10, 10143 Orbassano, Italy
2. Pedapati EV, Ethridge LE, Liu Y, Liu R, Sweeney JA, DeStefano LA, Miyakoshi M, Razak K, Schmitt LM, Moore DR, Gilbert DL, Wu SW, Smith E, Shaffer RC, Dominick KC, Horn PS, Binder D, Erickson CA. Frontal Cortex Hyperactivation and Gamma Desynchrony in Fragile X Syndrome: Correlates of Auditory Hypersensitivity. bioRxiv [Preprint] 2024:2024.06.13.598957. PMID: 38915683; PMCID: PMC11195233; DOI: 10.1101/2024.06.13.598957.
Abstract
Fragile X syndrome (FXS) is an X-linked disorder that often leads to intellectual disability, anxiety, and sensory hypersensitivity. While sound sensitivity (hyperacusis) is a distressing symptom in FXS, its neural basis is not well understood. It is postulated that hyperacusis may stem from temporal lobe hyperexcitability or dysregulation in top-down modulation. Studying the neural mechanisms underlying sound sensitivity in FXS using scalp electroencephalography (EEG) is challenging because the temporal and frontal regions have overlapping neural projections that are difficult to differentiate. To overcome this challenge, we conducted EEG source analysis on a group of 36 individuals with FXS and 39 matched healthy controls. Our goal was to characterize the spatial and temporal properties of the response to an auditory chirp stimulus. Our results showed that males with FXS exhibit excessive activation in the frontal cortex in response to the stimulus onset, which may reflect changes in top-down modulation of auditory processing. Additionally, during the chirp stimulus, individuals with FXS demonstrated a reduction in typical gamma phase synchrony, along with an increase in asynchronous gamma power, across multiple regions, most strongly in temporal cortex. Consistent with these findings, we observed a decrease in the signal-to-noise ratio, estimated by the ratio of synchronous to asynchronous gamma activity, in individuals with FXS. Furthermore, this ratio was highly correlated with performance in an auditory attention task. Compared to controls, males with FXS demonstrated elevated bidirectional frontotemporal information flow at chirp onset. The evidence indicates that both temporal lobe hyperexcitability and disruptions in top-down regulation play a role in auditory sensitivity disturbances in FXS. These findings have the potential to guide the development of therapeutic targets and back-translation strategies.
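The synchronous-to-asynchronous gamma ratio described above can be approximated from epoched data by splitting total gamma power into an evoked (phase-locked) part and an induced remainder. The sketch below makes that split with plain NumPy on a hypothetical trials-by-samples array; it illustrates the general idea only and is not the authors' source-localized pipeline.

```python
import numpy as np

def gamma_snr(trials, fs, band=(30.0, 80.0)):
    """Ratio of phase-locked (synchronous) to induced (asynchronous) gamma power.

    trials : (n_trials, n_samples) array of epoched EEG from one channel/source
    fs     : sampling rate in Hz
    """
    freqs = np.fft.rfftfreq(trials.shape[1], d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])

    evoked = np.abs(np.fft.rfft(trials.mean(axis=0))) ** 2            # power of the trial average
    total = (np.abs(np.fft.rfft(trials, axis=1)) ** 2).mean(axis=0)   # mean single-trial power
    induced = np.clip(total - evoked, a_min=1e-12, a_max=None)        # asynchronous remainder

    return evoked[in_band].sum() / induced[in_band].sum()

# toy usage: 100 trials, 1 s epochs sampled at 500 Hz
rng = np.random.default_rng(1)
trials = rng.standard_normal((100, 500))
print(f"gamma SNR ~ {gamma_snr(trials, fs=500):.3f}")
```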
Affiliation(s)
- Ernest V Pedapati: Division of Child and Adolescent Psychiatry, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States; Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States; Department of Psychiatry, University of Cincinnati College of Medicine, Cincinnati, OH, United States
- Lauren E Ethridge: Department of Pediatrics, Section on Developmental and Behavioral Pediatrics, University of Oklahoma Health Sciences Center, Oklahoma City, OK, United States; Department of Psychology, University of Oklahoma, Norman, OK, United States
- Yanchen Liu: Division of Child and Adolescent Psychiatry, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States
- Rui Liu: Division of Child and Adolescent Psychiatry, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States
- John A Sweeney: Department of Psychiatry, University of Cincinnati College of Medicine, Cincinnati, OH, United States
- Lisa A DeStefano: Division of Developmental and Behavioral Pediatrics, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States
- Makoto Miyakoshi: Division of Child and Adolescent Psychiatry, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States
- Khaleel Razak: Department of Psychology, University of California, Riverside, Riverside, CA, United States
- Lauren M Schmitt: Division of Developmental and Behavioral Pediatrics, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, United States
- David R Moore: Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States; Manchester Centre for Audiology and Deafness, University of Manchester, Manchester, UK
- Donald L Gilbert: Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, United States
- Steve W Wu: Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, United States
- Elizabeth Smith: Division of Developmental and Behavioral Pediatrics, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, United States
- Rebecca C Shaffer: Division of Developmental and Behavioral Pediatrics, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, United States
- Kelli C Dominick: Division of Child and Adolescent Psychiatry, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States; Department of Psychiatry, University of Cincinnati College of Medicine, Cincinnati, OH, United States
- Paul S Horn: Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States
- Devin Binder: Division of Biomedical Sciences, School of Medicine, University of California, Riverside, United States
- Craig A Erickson: Division of Child and Adolescent Psychiatry, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States; Department of Psychiatry, University of Cincinnati College of Medicine, Cincinnati, OH, United States
3. Peng F, Harper NS, Mishra AP, Auksztulewicz R, Schnupp JWH. Dissociable Roles of the Auditory Midbrain and Cortex in Processing the Statistical Features of Natural Sound Textures. J Neurosci 2024; 44:e1115232023. PMID: 38267259; PMCID: PMC10919253; DOI: 10.1523/jneurosci.1115-23.2023.
Abstract
Sound texture perception takes advantage of a hierarchy of time-averaged statistical features of acoustic stimuli, but much remains unclear about how these statistical features are processed along the auditory pathway. Here, we compared the neural representation of sound textures in the inferior colliculus (IC) and auditory cortex (AC) of anesthetized female rats. We recorded responses to texture morph stimuli that gradually add statistical features of increasingly higher complexity. For each texture, several different exemplars were synthesized using different random seeds. An analysis of transient and ongoing multiunit responses showed that the IC units were sensitive to every type of statistical feature, albeit to a varying extent. In contrast, only a small proportion of AC units were overtly sensitive to any statistical features. Differences in texture types explained more of the variance of IC neural responses than did differences in exemplars, indicating a degree of "texture type tuning" in the IC, but the same was, perhaps surprisingly, not the case for AC responses. We also evaluated the accuracy of texture type classification from single-trial population activity and found that IC responses became more informative as more summary statistics were included in the texture morphs, while for AC population responses, classification performance remained consistently very low. These results argue against the idea that AC neurons encode sound type via an overt sensitivity in neural firing rate to fine-grain spectral and temporal statistical features.
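The single-trial texture-type classification reported for IC and AC populations is, in outline, a cross-validated population-decoding analysis: a classifier is trained on trial-wise population response vectors and tested on held-out trials. Below is a minimal sketch with scikit-learn on simulated spike counts; the data shapes and choice of decoder are illustrative assumptions, not the authors' exact method.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# hypothetical data: single-trial population responses (trials x units)
# and the texture type presented on each trial
rng = np.random.default_rng(2)
n_trials, n_units, n_textures = 200, 60, 5
X = rng.poisson(lam=5.0, size=(n_trials, n_units)).astype(float)  # spike counts
y = rng.integers(0, n_textures, size=n_trials)                    # texture labels

# cross-validated decoding accuracy of texture type from population activity
decoder = LinearDiscriminantAnalysis()
acc = cross_val_score(decoder, X, y, cv=5)
print(f"decoding accuracy: {acc.mean():.2f} (chance = {1 / n_textures:.2f})")
```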
Affiliation(s)
- Fei Peng: Department of Neuroscience, City University of Hong Kong, Hong Kong, China
- Nicol S Harper: Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford OX1 2JD, United Kingdom
- Ambika P Mishra: Department of Neuroscience, City University of Hong Kong, Hong Kong, China
- Ryszard Auksztulewicz: Department of Neuroscience, City University of Hong Kong, Hong Kong, China; Center for Cognitive Neuroscience Berlin, Free University Berlin, Berlin 14195, Germany
- Jan W H Schnupp: Department of Neuroscience, City University of Hong Kong, Hong Kong, China
4. Reichert MS, Bolek MG, McCullagh EA. Parasite effects on receivers in animal communication: Hidden impacts on behavior, ecology, and evolution. Proc Natl Acad Sci U S A 2023; 120:e2300186120. PMID: 37459523; PMCID: PMC10372545; DOI: 10.1073/pnas.2300186120.
Abstract
Parasites exert a profound effect on biological processes. In animal communication, parasite effects on signalers are well-known drivers of the evolution of communication systems. Receivers' behavior is also likely to be altered when they are parasitized or at risk of parasitism, but these effects have received much less attention. Here, we present a broad framework for understanding the consequences of parasitism on receivers for behavioral, ecological, and evolutionary processes. First, we outline the different kinds of effects parasites can have on receivers, including effects on signal processing from the many parasites that inhabit, occlude, or damage the sensory periphery and the central nervous system or that affect physiological processes that support these organs, and effects on receiver response strategies. We then demonstrate how understanding parasite effects on receivers could answer important questions about the mechanistic causes and functional consequences of variation in animal communication systems. Variation in parasitism levels is a likely source of among-individual differences in response to signals, which can affect receiver fitness and, through effects on signaler fitness, impact population levels of signal variability. The prevalence of parasitic effects on specific sensory organs may be an important selective force for the evolution of elaborate and multimodal signals. Finally, host-parasite coevolution across heterogeneous landscapes will generate geographic variation in communication systems, which could ultimately lead to evolutionary divergence. We discuss applications of experimental techniques to manipulate parasitism levels and point the way forward by calling for integrative research collaborations between parasitologists, neurobiologists, and behavioral and evolutionary ecologists.
Affiliation(s)
- Michael S. Reichert: Department of Integrative Biology, Oklahoma State University, Stillwater, OK 74078
- Matthew G. Bolek: Department of Integrative Biology, Oklahoma State University, Stillwater, OK 74078
5. Salles A, Neunuebel J. What do mammals have to say about the neurobiology of acoustic communication? Molecular Psychology: Brain, Behavior, and Society 2023; 2:5. PMID: 38827277; PMCID: PMC11141777; DOI: 10.12688/molpsychol.17539.1.
Abstract
Auditory communication is crucial across taxa, including humans, because it enables individuals to convey information about threats, food sources, mating opportunities, and other social cues necessary for survival. Comparative approaches to auditory communication will help bridge gaps across taxa and facilitate our understanding of the neural mechanisms underlying this complex task. In this work, we briefly review the field of auditory communication processing and the classical champion animal, the songbird. In addition, we discuss other mammalian species that are advancing the field. In particular, we emphasize mice and bats, highlighting the characteristics that may inform how we think about communication processing.
Affiliation(s)
- Angeles Salles: Biological Sciences, University of Illinois Chicago, Chicago, Illinois, USA
- Joshua Neunuebel: Psychological and Brain Sciences, University of Delaware, Newark, Delaware, USA
6. Rivera M, Edwards JA, Hauber ME, Woolley SMN. Machine learning and statistical classification of birdsong link vocal acoustic features with phylogeny. Sci Rep 2023; 13:7076. PMID: 37127781; PMCID: PMC10151348; DOI: 10.1038/s41598-023-33825-5.
Abstract
Birdsong is a longstanding model system for studying evolution and biodiversity. Here, we collected and analyzed high quality song recordings from seven species in the family Estrildidae. We measured the acoustic features of syllables and then used dimensionality reduction and machine learning classifiers to identify features that accurately assigned syllables to species. Species differences were captured by the first 3 principal components, corresponding to basic frequency, power distribution, and spectrotemporal features. We then identified the measured features underlying classification accuracy. We found that fundamental frequency, mean frequency, spectral flatness, and syllable duration were the most informative features for species identification. Next, we tested whether specific acoustic features of species' songs predicted phylogenetic distance. We found significant phylogenetic signal in syllable frequency features, but not in power distribution or spectrotemporal features. Results suggest that frequency features are more constrained by species' genetics than are other features, and are the best signal features for identifying species from song recordings. The absence of phylogenetic signal in power distribution and spectrotemporal features suggests that these song features are labile, reflecting learning processes and individual recognition.
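The classification step described here follows a common recipe: standardize per-syllable acoustic features, reduce dimensionality with PCA, and score a classifier's ability to assign syllables to species by cross-validation. The scikit-learn sketch below runs that recipe on simulated features; the data shapes and classifier choice are illustrative assumptions, and the phylogenetic-signal tests would require separate comparative-methods tools.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# hypothetical data: acoustic features per syllable (e.g. fundamental frequency,
# mean frequency, spectral flatness, duration, ...) and the species of the singer
rng = np.random.default_rng(3)
n_syllables, n_features, n_species = 700, 12, 7
X = rng.standard_normal((n_syllables, n_features))
y = rng.integers(0, n_species, size=n_syllables)

# standardize -> PCA -> classifier, scored by cross-validation
clf = make_pipeline(StandardScaler(), PCA(n_components=3),
                    RandomForestClassifier(random_state=0))
acc = cross_val_score(clf, X, y, cv=5)
print(f"species classification accuracy: {acc.mean():.2f} (chance = {1 / n_species:.2f})")
```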
Affiliation(s)
- Moises Rivera: Department of Psychology, Hunter College and the Graduate Center, City University of New York, New York, NY, 10065, USA; Mortimer B. Zuckerman Mind, Brain, and Behavior Institute, Columbia University, New York, NY, 10027, USA
- Jacob A Edwards: Mortimer B. Zuckerman Mind, Brain, and Behavior Institute, Columbia University, New York, NY, 10027, USA; Department of Psychology, Columbia University, New York, NY, 10027, USA
- Mark E Hauber: Department of Evolution, Ecology, and Behavior, School of Biological Sciences, University of Illinois at Urbana-Champaign, Urbana, IL, 61801, USA
- Sarah M N Woolley: Mortimer B. Zuckerman Mind, Brain, and Behavior Institute, Columbia University, New York, NY, 10027, USA; Department of Psychology, Columbia University, New York, NY, 10027, USA; Zuckerman Institute at Columbia University, Jerome L. Greene Science Center, 3227 Broadway, L3.028, New York, NY, 10027, USA
7. Janse van Rensburg EO, Botha RA, von Solms R. Utility indicator for emotion detection in a speaker authentication system. Information and Computer Security 2022. DOI: 10.1108/ics-07-2021-0097.
Abstract
Purpose
Authenticating an individual through voice can prove convenient, as nothing needs to be stored and a voice cannot easily be stolen. However, if an individual authenticates under duress, the coerced attempt must be recognized and appropriate warnings issued. Furthermore, because duress may entail multiple combinations of emotions, the standard f-score evaluation does not accommodate the fact that multiple selected classes possess similar levels of importance. Thus, this study aims to demonstrate an approach to identifying duress within a voice-based authentication system.
Design/methodology/approach
Measuring the value that a classifier presents is often done using an f-score. However, the f-score does not effectively portray the proposed value when multiple classes could be grouped as one. The f-score also provides no information when several classes are often incorrectly identified as one another. Therefore, the proposed approach uses the confusion matrix, aggregates the selected classes into another matrix, and calculates a more precise representation of the selected classifier's value. The utility of the proposed approach is demonstrated through multiple tests, conducted as follows. The value of the initial tests is presented as an f-score, which does not reflect the value of the individual emotions. This shortcoming is then remedied in further tests, which include a confusion matrix. Final tests are then conducted that aggregate selected emotions within the confusion matrix to present a more precise utility value.
Findings
Two tests within the set of experiments achieved an f-score difference of 1%, indicating that the two tests provided similar value. The confusion matrix used to calculate the f-score indicated that some emotions, all of which could be considered closely related, are often confused. Although the f-score can represent an accuracy value, these tests' value is not accurately portrayed when often-confused emotions are not considered. Deciding which approach to take based on the f-score did not prove beneficial, as it did not address the confused emotions. When aggregating the confusion matrix of these two tests based on selected emotions, the newly calculated utility value showed a difference of 4%, indicating that the two tests may not provide the similar value previously suggested.
Research limitations/implications
This approach’s performance is dependent on the data presented to it. If the classifier is presented with incomplete or degraded data, the results obtained from the classifier will reflect that. Additionally, the grouping of emotions is not based on psychological evidence, and this was purely done to demonstrate the implementation of an aggregated confusion matrix.
Originality/value
The f-score offers a value that represents a classifier's ability to classify a class correctly. This paper demonstrates that aggregating a confusion matrix can provide more value than a single f-score in the context of classifying an emotion that may consist of a combination of emotions. This approach can similarly be applied to different combinations of classifiers to extract a more accurate value of the performance a selected classifier provides.
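The central move, collapsing selected, easily confused classes of a confusion matrix into aggregate classes before recomputing a score, can be illustrated in a few lines. The sketch below uses scikit-learn with a hypothetical emotion-to-group mapping (the paper itself notes its grouping was not psychologically motivated); it is not the authors' implementation.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score

def aggregate_confusion(y_true, y_pred, groups):
    """Map fine-grained labels onto coarser groups and rebuild the confusion matrix.

    groups : dict mapping each original label to a group label,
             e.g. {"fear": "duress", "anger": "duress", ...}  (hypothetical grouping)
    """
    yt = [groups[y] for y in y_true]
    yp = [groups[y] for y in y_pred]
    labels = sorted(set(groups.values()))
    return confusion_matrix(yt, yp, labels=labels), f1_score(yt, yp, average="macro")

# toy usage with hypothetical emotion labels: fine-grained confusions within a
# group (fear vs. anger, calm vs. happy) vanish after aggregation
y_true = ["fear", "anger", "calm", "happy", "fear", "calm"]
y_pred = ["anger", "fear", "calm", "happy", "fear", "happy"]
groups = {"fear": "duress", "anger": "duress", "calm": "neutral", "happy": "neutral"}
cm, macro_f1 = aggregate_confusion(y_true, y_pred, groups)
print(cm)
print(f"aggregated macro F1 = {macro_f1:.2f}")
```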
8. Distinct integration of spectrally complex sounds in mouse primary auditory cortices. Hear Res 2022; 417:108455. DOI: 10.1016/j.heares.2022.108455.
9. Homma NY, Bajo VM. Lemniscal Corticothalamic Feedback in Auditory Scene Analysis. Front Neurosci 2021; 15:723893. PMID: 34489635; PMCID: PMC8417129; DOI: 10.3389/fnins.2021.723893.
Abstract
Sound information is transmitted from the ear to central auditory stations of the brain via several nuclei. In addition to these ascending pathways, there exist descending projections that can influence information processing at each of these nuclei. A major descending pathway in the auditory system is the feedback projection from layer VI of the primary auditory cortex (A1) to the ventral division of the medial geniculate body (MGBv) in the thalamus. The corticothalamic axons have small glutamatergic terminals that can modulate thalamic processing and thalamocortical information transmission. Corticothalamic neurons also provide input to GABAergic neurons of the thalamic reticular nucleus (TRN), which receives collaterals from the ascending thalamic axons. The balance of corticothalamic and TRN inputs has been shown to refine frequency tuning, firing patterns, and gating of MGBv neurons. Therefore, the thalamus is not merely a relay stage in the chain of auditory nuclei but participates in complex aspects of sound processing that include top-down modulation. In this review, we aim (i) to examine how lemniscal corticothalamic feedback modulates responses in MGBv neurons and (ii) to explore how this feedback contributes to auditory scene analysis, particularly frequency and harmonic perception. Finally, we discuss potential implications of the role of corticothalamic feedback in music and speech perception, where precise spectral and temporal processing is essential.
Affiliation(s)
- Natsumi Y. Homma: Center for Integrative Neuroscience, University of California, San Francisco, San Francisco, CA, United States; Coleman Memorial Laboratory, Department of Otolaryngology – Head and Neck Surgery, University of California, San Francisco, San Francisco, CA, United States
- Victoria M. Bajo: Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
10. Spool JA, Macedo-Lima M, Scarpa G, Morohashi Y, Yazaki-Sugiyama Y, Remage-Healey L. Genetically identified neurons in avian auditory pallium mirror core principles of their mammalian counterparts. Curr Biol 2021; 31:2831-2843.e6. PMID: 33989528; DOI: 10.1016/j.cub.2021.04.039.
Abstract
In vertebrates, advanced cognitive abilities are typically associated with the telencephalic pallium. In mammals, the pallium is a layered mixture of excitatory and inhibitory neuronal populations with distinct molecular, physiological, and network phenotypes. This cortical architecture is proposed to support efficient, high-level information processing. Comparative perspectives across vertebrates provide a lens to understand the common features of pallium that are important for advanced cognition. Studies in songbirds have established strikingly parallel features of neuronal types between mammalian and avian pallium. However, lack of genetic access to defined pallial cell types in non-mammalian vertebrates has hindered progress in resolving connections between molecular and physiological phenotypes. A definitive mapping of the physiology of pallial cells onto their molecular identities in birds is critical for understanding how synaptic and computational properties depend on underlying molecular phenotypes. Using viral tools to target excitatory versus inhibitory neurons in the zebra finch auditory association pallium (calmodulin-dependent kinase alpha [CaMKIIα] and glutamate decarboxylase 1 [GAD1] promoters, respectively), we systematically tested predictions derived from mammalian pallium. We identified two genetically distinct neuronal populations that exhibit profound physiological and computational similarities with mammalian excitatory and inhibitory pallial cells, definitively aligning putative cell types in avian caudal nidopallium with these molecular identities. Specifically, genetically identified CaMKIIα and GAD1 cell types in avian auditory association pallium exhibit distinct intrinsic physiological parameters, distinct auditory coding principles, and inhibitory-dependent pallial synchrony, gamma oscillations, and local suppression. The retention, or convergence, of these molecular and physiological features in both birds and mammals clarifies the characteristics of pallial circuits for advanced cognitive abilities.
Affiliation(s)
- Jeremy A Spool: Neuroscience and Behavior, Center for Neuroendocrine Studies, University of Massachusetts, Amherst, MA 01003, USA
- Matheus Macedo-Lima: Neuroscience and Behavior, Center for Neuroendocrine Studies, University of Massachusetts, Amherst, MA 01003, USA; CAPES Foundation, Ministry of Education of Brazil, Brasília 70040-020, Brazil
- Garrett Scarpa: Neuroscience and Behavior, Center for Neuroendocrine Studies, University of Massachusetts, Amherst, MA 01003, USA
- Yuichi Morohashi: Okinawa Institute of Science and Technology (OIST) Graduate University, Okinawa, Japan
- Yoko Yazaki-Sugiyama: Okinawa Institute of Science and Technology (OIST) Graduate University, Okinawa, Japan
- Luke Remage-Healey: Neuroscience and Behavior, Center for Neuroendocrine Studies, University of Massachusetts, Amherst, MA 01003, USA