1
Kar M, Pernia M, Williams K, Parida S, Schneider NA, McAndrew M, Kumbam I, Sadagopan S. Vocalization categorization behavior explained by a feature-based auditory categorization model. eLife 2022; 11:e78278. PMID: 36226815; PMCID: PMC9633061; DOI: 10.7554/eLife.78278. Open access.
Abstract
Vocal animals produce multiple categories of calls with high between- and within-subject variability, over which listeners must generalize to accomplish call categorization. The behavioral strategies and neural mechanisms that support this ability to generalize are largely unexplored. We previously proposed a theoretical model that accomplished call categorization by detecting features of intermediate complexity that best contrasted each call category from all other categories. We further demonstrated that some neural responses in the primary auditory cortex were consistent with such a model. Here, we asked whether a feature-based model could predict call categorization behavior. We trained both the model and guinea pigs (GPs) on call categorization tasks using natural calls. We then tested categorization by the model and GPs using temporally and spectrally altered calls. Both the model and GPs were surprisingly resilient to temporal manipulations, but sensitive to moderate frequency shifts. Critically, the model predicted about 50% of the variance in GP behavior. By adopting different model training strategies and examining features that contributed to solving specific tasks, we could gain insight into possible strategies used by animals to categorize calls. Our results validate a model that uses the detection of intermediate-complexity contrastive features to accomplish call categorization.
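The core computation the abstract describes, detecting a small set of intermediate-complexity spectrotemporal features that contrast each call category against all others and classifying by accumulated feature evidence, can be illustrated with a short sketch. This is a minimal illustration of the general idea under stated assumptions, not the authors' implementation: the normalized-correlation template matching, the detection thresholds, and the per-feature weights are all placeholders, and in the paper the feature banks are learned to maximize category contrast rather than supplied by hand.

```python
import numpy as np

def feature_evidence(spectrogram, template):
    """Slide an intermediate-complexity spectrotemporal fragment over a
    call's spectrogram; return the maximum normalized correlation, i.e.
    the strength of the best 'detection' of that feature in the call."""
    fh, fw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-9)
    best = -np.inf
    for i in range(spectrogram.shape[0] - fh + 1):
        for j in range(spectrogram.shape[1] - fw + 1):
            patch = spectrogram[i:i + fh, j:j + fw]
            p = (patch - patch.mean()) / (patch.std() + 1e-9)
            best = max(best, float((p * t).mean()))
    return best

def classify_call(spectrogram, feature_banks, thresholds, weights):
    """Each category has its own bank of contrastive features. Evidence
    for a category is the weighted count of detected features; the
    category with the highest total evidence wins."""
    scores = {}
    for cat, bank in feature_banks.items():
        scores[cat] = sum(
            w
            for tmpl, th, w in zip(bank, thresholds[cat], weights[cat])
            if feature_evidence(spectrogram, tmpl) >= th
        )
    return max(scores, key=scores.get)
```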
Affiliation(s)
- Manaswini Kar
- Center for Neuroscience at the University of Pittsburgh, Pittsburgh, United States
- Center for the Neural Basis of Cognition, Pittsburgh, United States
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, United States
- Marianny Pernia
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, United States
- Kayla Williams
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, United States
- Satyabrata Parida
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, United States
- Nathan Alan Schneider
- Center for Neuroscience at the University of Pittsburgh, Pittsburgh, United States
- Center for the Neural Basis of Cognition, Pittsburgh, United States
- Madelyn McAndrew
- Center for the Neural Basis of Cognition, Pittsburgh, United States
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, United States
- Isha Kumbam
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, United States
- Srivatsun Sadagopan
- Center for Neuroscience at the University of Pittsburgh, Pittsburgh, United States
- Center for the Neural Basis of Cognition, Pittsburgh, United States
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, United States
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, United States
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, United States
2
Montes-Lourido P, Kar M, Pernia M, Parida S, Sadagopan S. Updates to the guinea pig animal model for in-vivo auditory neuroscience in the low-frequency hearing range. Hear Res 2022; 424:108603. PMID: 36099806; PMCID: PMC9922531; DOI: 10.1016/j.heares.2022.108603.
Abstract
For gaining insight into general principles of auditory processing, it is critical to choose model organisms whose set of natural behaviors encompasses the processes being investigated. This reasoning has led to the development of a variety of animal models for auditory neuroscience research, such as guinea pigs, gerbils, chinchillas, rabbits, and ferrets. In recent years, however, the availability of cutting-edge molecular tools and other methodologies in the mouse model has led to waning interest in these unique model species. As laboratories increasingly look to include in-vivo components in their research programs, a comprehensive description of procedures and techniques for applying some of these modern neuroscience tools to a non-mouse small animal model would enable researchers to leverage unique model species that may be best suited for testing their specific hypotheses. In this manuscript, we describe in detail the methods we have developed to apply these tools to the guinea pig animal model to answer questions regarding the neural processing of complex sounds, such as vocalizations. We describe techniques for vocalization acquisition; behavioral testing; recording of auditory brainstem responses and frequency-following responses; recording of intracranial neural signals, including local field potentials and single-unit activity; and expression of transgenes allowing optogenetic manipulation of neural activity, all in awake and head-fixed guinea pigs. We demonstrate the rich behavioral and electrophysiological datasets that can be obtained using these techniques, underscoring the guinea pig as a versatile animal model for studying complex auditory processing. More generally, the methods described here are applicable to a broad range of small mammals, enabling investigators to address specific auditory processing questions in the model organisms best suited to answering them.
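As one concrete example of the kind of dataset these methods yield, stimulus-locked averaging is the standard way to recover an auditory brainstem response from a continuous recording. The sketch below is a generic version of that computation, assuming a single-channel trace in microvolts and a list of stimulus-onset sample indices; the epoch window and artifact-rejection threshold are illustrative defaults, not the authors' pipeline settings.

```python
import numpy as np

def average_abr(trace_uv, onsets, fs, win_ms=(-2.0, 10.0), reject_uv=35.0):
    """Recover an ABR by averaging stimulus-locked epochs of a continuous
    recording. `trace_uv` is a 1-D voltage trace in microvolts, `onsets`
    holds stimulus-onset sample indices, `fs` is the sampling rate in Hz."""
    pre = int(abs(win_ms[0]) * 1e-3 * fs)
    post = int(win_ms[1] * 1e-3 * fs)
    epochs = []
    for t in onsets:
        if t - pre < 0 or t + post > trace_uv.size:
            continue                       # epoch runs off the recording
        ep = trace_uv[t - pre : t + post].astype(float)
        ep -= ep[:pre].mean()              # baseline-correct on pre-stimulus span
        if np.ptp(ep) <= reject_uv:        # reject movement/muscle artifacts
            epochs.append(ep)
    if not epochs:
        raise ValueError("every epoch was rejected")
    return np.mean(epochs, axis=0), len(epochs)
```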
Affiliation(s)
- Pilar Montes-Lourido
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
- Manaswini Kar
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA
- Marianny Pernia
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
- Satyabrata Parida
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
- Srivatsun Sadagopan
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA; Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA
3
Cortical activation patterns evoked by temporally asymmetric sounds and their modulation by learning. eNeuro 2017; 4:ENEURO.0241-16.2017. PMID: 28451640; PMCID: PMC5399754; DOI: 10.1523/ENEURO.0241-16.2017. Open access.
Abstract
When complex sounds are reversed in time, the original and reversed versions are perceived differently in the spectral and temporal dimensions despite their identical duration and long-term spectrum-power profiles. Spatiotemporal activation patterns evoked by temporally asymmetric sound pairs demonstrate how the temporal envelope determines the readout of the spectrum. We examined the patterns of activation evoked by a temporally asymmetric sound pair in the primary auditory field (AI) of anesthetized guinea pigs and determined how discrimination training modified these patterns. Optical imaging with a voltage-sensitive dye revealed that a forward, ramped-down natural sound (F) consistently evoked much stronger responses than its time-reversed, ramped-up counterpart (revF). The spatiotemporal maximum peak (maxP) of F-evoked activation was always greater than that of revF-evoked activation, and the two maxPs were significantly separated within the AI. Although discrimination training did not affect the absolute magnitude of these maxPs, the revF-to-F ratio of activation peaks, calculated at the location where the hemispheres were maximally activated (i.e., the F-evoked maxP), was significantly smaller in the trained group. F-evoked activation propagated across the AI along the temporal axis to the ventroanterior belt field (VA), and the local activation peak within the VA was significantly larger in the trained than in the naïve group. These results suggest that the innate network is more responsive to natural sounds with ramped-down envelopes than to their time-reversed, unnatural counterparts. Activation of the VA belt field might play an important role in emotional learning of sounds through the VA's connections with the amygdala.
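The premise that a sound and its time-reversed version share a long-term power spectrum follows directly from Fourier theory: reversing a real signal conjugates its spectrum up to a unit-magnitude phase factor, leaving the magnitude unchanged. A few lines verify this numerically; the random waveform is merely a stand-in for a recorded natural call.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)   # stand-in for a natural call waveform
x_rev = x[::-1]                 # time-reversed ("revF") version

# For a real signal, time reversal conjugates the DFT up to a unit-magnitude
# phase term, so the magnitude (and hence power) spectrum is unchanged.
assert np.allclose(np.abs(np.fft.rfft(x)), np.abs(np.fft.rfft(x_rev)))
```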