1. Lawlor J, Wohlgemuth MJ, Moss CF, Kuchibhotla KV. Spatially clustered neurons encode vocalization categories in the bat midbrain. bioRxiv 2023:2023.06.14.545029. PMID: 37398454; PMCID: PMC10312733; DOI: 10.1101/2023.06.14.545029
Abstract
Rapid categorization of vocalizations enables adaptive behavior across species. While categorical perception is thought to arise in the neocortex, humans and other animals could benefit from functional organization of ethologically relevant sounds at earlier stages in the auditory hierarchy. Here, we developed two-photon calcium imaging in the awake echolocating bat (Eptesicus fuscus) to study encoding of sound meaning in the inferior colliculus (IC), which is as few as two synapses from the inner ear. Echolocating bats produce and interpret frequency sweep-based vocalizations for social communication and navigation. Auditory playback experiments demonstrated that individual neurons responded selectively to social or navigation calls, enabling robust population-level decoding across categories. Strikingly, category-selective neurons formed spatial clusters, independent of tonotopy within the IC. These findings support a revised view of categorical processing in which specified channels for ethologically relevant sounds are spatially segregated early in the auditory hierarchy, enabling rapid subcortical organization of call meaning.
Affiliation(s)
- Jennifer Lawlor
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, 21218, USA
- Johns Hopkins Kavli Neuroscience Discovery Institute, Baltimore, MD, 21218, USA
- Cynthia F. Moss
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, 21218, USA
- Johns Hopkins Kavli Neuroscience Discovery Institute, Baltimore, MD, 21218, USA
- The Solomon Snyder Department of Neuroscience, Johns Hopkins University School of Medicine, Baltimore, MD, 21205, USA
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, 21218, USA
- Kishore V. Kuchibhotla
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, 21218, USA
- Johns Hopkins Kavli Neuroscience Discovery Institute, Baltimore, MD, 21218, USA
- The Solomon Snyder Department of Neuroscience, Johns Hopkins University School of Medicine, Baltimore, MD, 21205, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, 21218, USA
- Lead contact
2. Robotka H, Thomas L, Yu K, Wood W, Elie JE, Gahr M, Theunissen FE. Sparse ensemble neural code for a complete vocal repertoire. Cell Rep 2023; 42:112034. PMID: 36696266; PMCID: PMC10363576; DOI: 10.1016/j.celrep.2023.112034
Abstract
The categorization of animal vocalizations into distinct behaviorally relevant groups for communication is an essential operation that must be performed by the auditory system. This auditory object recognition is a difficult task that requires selectivity to the group-identifying acoustic features and invariance to renditions within each group. We find that small ensembles of auditory neurons in the forebrain of a social songbird can encode the bird's entire vocal repertoire (∼10 call types). Ensemble neural discrimination is not, however, correlated with single unit selectivity, but instead with how well the joint single unit tunings to characteristic spectro-temporal modulations span the acoustic subspace optimized for the discrimination of call types. Thus, akin to face recognition in the visual system, call type recognition in the auditory system is based on a sparse code representing a small number of high-level features and not on highly selective grandmother neurons.
Affiliation(s)
- H Robotka
- Max Planck Institute for Ornithology, Seewiesen, Germany
- L Thomas
- University of California, Berkeley, Helen Wills Neuroscience Institute, Berkeley, CA, USA
- K Yu
- University of California, Berkeley, Helen Wills Neuroscience Institute, Berkeley, CA, USA
- W Wood
- University of California, Berkeley, Helen Wills Neuroscience Institute, Berkeley, CA, USA
- J E Elie
- University of California, Berkeley, Helen Wills Neuroscience Institute, Berkeley, CA, USA
- M Gahr
- Max Planck Institute for Ornithology, Seewiesen, Germany
- F E Theunissen
- Max Planck Institute for Ornithology, Seewiesen, Germany; University of California, Berkeley, Helen Wills Neuroscience Institute, Berkeley, CA, USA; Department of Psychology and Integrative Biology, University of California, Berkeley, Berkeley, CA, USA
3. Kocsor F, Ferencz T, Kisander Z, Tizedes G, Schaadt B, Kertész R, Kozma L, Vincze O, Láng A. The mental representation of occupational stereotypes is driven as much by their affective as by their semantic content. BMC Psychol 2022; 10:222. PMID: 36131295; PMCID: PMC9494850; DOI: 10.1186/s40359-022-00928-z
Abstract
BACKGROUND Studies on person perception showed that stereotypes can be activated by presenting either characteristic traits of group members or labels associated with these groups. However, it is not clear whether these pieces of semantic information activate negative and positive stereotypes directly, or via an indirect cognitive pathway leading through brain regions responsible for affective responses. Our main objective with this study was to disentangle the effects of semantic and affective contents. To this end, we intended to scrutinize whether the representation of occupational labels is independent of the emotions they evoke. METHODS Participants (N = 73, mean age = 27.0, SD = 9.1; 31 men, 42 women) were asked to complete two tasks presented online. In the first task they had to arrange 20 occupational labels, randomly chosen from a pool of 60 items, in a two-dimensional space, moving the mouse pointer along two undefined axes. In a second task the axes' names were defined a priori. Subjects were asked to arrange the labels according to valence, the extent to which the word evoked pleasant or unpleasant feelings, and arousal, the extent to which the word evoked excitement or calmness. RESULTS Based on the final coordinates of the labels, two cluster analyses were carried out separately for the two tasks. The two cluster structures were compared with Fisher's exact test, which revealed that they overlap significantly. CONCLUSIONS The results suggest that the spontaneous categorization and the semantic representation of occupations rely largely on the affective state they evoke. We propose that affective content might have a primacy over detailed semantic information in many aspects of person perception, including social categorization.
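The two-step analysis described in METHODS and RESULTS (cluster the 2-D label coordinates from each task separately, then test whether the two cluster structures overlap) can be sketched as follows. This is a toy reconstruction with simulated coordinates, not the authors' code; the two-cluster solution, the noise levels, and all numbers are assumptions.

```python
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.stats import fisher_exact

rng = np.random.default_rng(0)

# Simulated final 2-D coordinates of 20 occupational labels in each task.
# The affective (valence/arousal) arrangement is modeled as a noisy copy of
# the spontaneous one, mimicking overlapping mental representations.
free_sort = np.vstack([rng.normal(0, 1, (10, 2)),
                       rng.normal(4, 1, (10, 2))])
affective = free_sort + rng.normal(0, 0.3, (20, 2))

# Separate two-cluster solutions for each task
_, c_free = kmeans2(free_sort, 2, seed=1, minit='++')
_, c_aff = kmeans2(affective, 2, seed=1, minit='++')

# 2x2 contingency table: cluster membership in task 1 vs. task 2
table = np.array([[np.sum((c_free == i) & (c_aff == j)) for j in (0, 1)]
                  for i in (0, 1)])
odds_ratio, p = fisher_exact(table)  # small p => the two structures overlap
```

Fisher's exact test is indifferent to how the cluster indices happen to be numbered in each task; any consistent association between the two partitions yields a small p-value.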
Affiliation(s)
- Ferenc Kocsor
- Institute of Psychology, Faculty of Humanities and Social Sciences, University of Pécs, Pécs, Hungary
- Tas Ferencz
- Institute of Psychology, Faculty of Humanities and Social Sciences, University of Pécs, Pécs, Hungary
- Zsolt Kisander
- Department of Behavioral Sciences, Medical School, University of Pécs, Pécs, Hungary
- Gitta Tizedes
- Institute of Psychology, Faculty of Humanities and Social Sciences, University of Pécs, Pécs, Hungary
- Blanka Schaadt
- Institute of Psychology, Faculty of Humanities and Social Sciences, University of Pécs, Pécs, Hungary
- Rita Kertész
- Institute of Psychology, Faculty of Humanities and Social Sciences, University of Pécs, Pécs, Hungary
- Luca Kozma
- Institute of Psychology, Faculty of Humanities and Social Sciences, University of Pécs, Pécs, Hungary
- Orsolya Vincze
- Institute of Psychology, Faculty of Humanities and Social Sciences, University of Pécs, Pécs, Hungary
- András Láng
- Institute of Psychology, Faculty of Humanities and Social Sciences, University of Pécs, Pécs, Hungary
4.
Abstract
Many concepts in mathematics are not fully defined, and their properties are implicit, which leads to paradoxes. New foundations of mathematics were formulated based on the concept of innate programs of behavior and thinking. The basic axiom of mathematics is proposed, according to which any mathematical object has a physical carrier. This carrier can store and process only a finite amount of information. As a result of the D-procedure (encoding of any mathematical objects and operations on them in the form of qubits), a mathematical object is digitized. As a consequence, the basis of mathematics is the interaction of brain qubits, which can only implement arithmetic operations on numbers. A proof in mathematics is an algorithm for finding the correct statement from a list of already-existing statements. Some mathematical paradoxes (e.g., Banach–Tarski and Russell) and Smale’s 18th problem are solved by means of the D-procedure. The axiom of choice is a consequence of the equivalence of physical states, the choice among which can be made randomly. The proposed mathematics is constructive in the sense that any mathematical object exists if it is physically realized. The consistency of mathematics is due to directed evolution, which results in effective structures. Computing with qubits is based on the nontrivial quantum effects of biologically important molecules in neurons and the brain.
5. Mahmud MS, Yeasin M, Bidelman GM. Data-driven machine learning models for decoding speech categorization from evoked brain responses. J Neural Eng 2021; 18. PMID: 33690177; PMCID: PMC8738965; DOI: 10.1088/1741-2552/abecf0
Abstract
Objective. Categorical perception (CP) of audio is critical to understand how the human brain perceives speech sounds despite widespread variability in acoustic properties. Here, we investigated the spatiotemporal characteristics of auditory neural activity that reflects CP for speech (i.e. differentiates phonetic prototypes from ambiguous speech sounds). Approach. We recorded 64-channel electroencephalograms as listeners rapidly classified vowel sounds along an acoustic-phonetic continuum. We used support vector machine classifiers and stability selection to determine when and where in the brain CP was best decoded across space and time via source-level analysis of the event-related potentials. Main results. We found that early (120 ms) whole-brain data decoded speech categories (i.e. prototypical vs. ambiguous tokens) with 95.16% accuracy (area under the curve 95.14%; F1-score 95.00%). Separate analyses on left hemisphere (LH) and right hemisphere (RH) responses showed that LH decoding was more accurate and earlier than RH (89.03% vs. 86.45% accuracy; 140 ms vs. 200 ms). Stability (feature) selection identified 13 regions of interest (ROIs) out of 68 brain regions [including auditory cortex, supramarginal gyrus, and inferior frontal gyrus (IFG)] that showed categorical representation during stimulus encoding (0-260 ms). In contrast, 15 ROIs (including fronto-parietal regions, IFG, motor cortex) were necessary to describe later decision stages (300-800 ms) of categorization, and these areas were highly associated with the strength of listeners' categorical hearing (i.e. slope of behavioral identification functions). Significance. Our data-driven multivariate models demonstrate that abstract categories emerge surprisingly early (∼120 ms) in the time course of speech processing and are dominated by engagement of a relatively compact fronto-temporal-parietal brain network.
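The time-resolved decoding logic summarized above (train a classifier at each latency and ask when category information first exceeds chance) can be illustrated with a toy simulation. A nearest-class-mean decoder stands in here for the paper's SVM and stability-selection pipeline, and the trial counts, onset bin, and effect size are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_times, n_train = 200, 50, 150
labels = rng.integers(0, 2, n_trials)  # prototypical vs. ambiguous token

# Simulated single-feature "ERP" per trial: the two classes separate
# only from time bin 12 onward, mimicking a neural onset of category info.
X = rng.normal(0.0, 1.0, (n_trials, n_times))
X[labels == 1, 12:] += 2.0

def accuracy_at(t):
    """Nearest-class-mean decoder at time bin t, scored on held-out trials."""
    tr, te = slice(0, n_train), slice(n_train, None)
    mu0 = X[tr][labels[tr] == 0, t].mean()
    mu1 = X[tr][labels[tr] == 1, t].mean()
    pred = (np.abs(X[te, t] - mu1) < np.abs(X[te, t] - mu0)).astype(int)
    return float((pred == labels[te]).mean())

acc = np.array([accuracy_at(t) for t in range(n_times)])
onset = int(np.argmax(acc > 0.7))  # first clearly above-chance bin
```

Plotting `acc` against time reproduces the qualitative result: accuracy hovers near chance before the simulated onset and rises well above it afterward, which is how a decoding latency like the paper's ~120 ms is read off.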
Affiliation(s)
- Md Sultan Mahmud
- Department of Electrical and Computer Engineering, University of Memphis, 3815 Central Avenue, Memphis, TN 38152, United States of America
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States of America
- Mohammed Yeasin
- Department of Electrical and Computer Engineering, University of Memphis, 3815 Central Avenue, Memphis, TN 38152, United States of America
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States of America
- Gavin M Bidelman
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States of America
- School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States of America
- Department of Anatomy and Neurobiology, University of Tennessee Health Science Center, Memphis, TN, United States of America
7. Liu RC, Anandakumar DB, Lu K. Parent TRAP: Discriminating Infant Cries Requires a Higher-Order Auditory Association Area in Mice. Neuron 2020; 107:399-401. PMID: 32758444; DOI: 10.1016/j.neuron.2020.07.022
Abstract
A circuit understanding of how perception links to response requires integrating neural connectivity, activity, and behavior. In this issue of Neuron, Tasaka et al. (2020) target neurons activated by ultrasonic pup vocalizations and discover a functional synaptic network embedded through acoustically selective TeA neurons that help link the calls to a discriminative maternal behavioral response.
Affiliation(s)
- Robert C Liu
- Department of Biology, Emory University, Atlanta, GA, USA; Silvio O. Conte Center for Oxytocin and Social Cognition and Center for Translational Social Neuroscience, Atlanta, GA, USA
- Dakshitha B Anandakumar
- Department of Biology, Emory University, Atlanta, GA, USA; Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Kai Lu
- Department of Biology, Emory University, Atlanta, GA, USA
8. Banno T, Lestang JH, Cohen YE. Computational and neurophysiological principles underlying auditory perceptual decisions. Curr Opin Physiol 2020; 18:20-24. PMID: 32832744; PMCID: PMC7437958; DOI: 10.1016/j.cophys.2020.07.001
Abstract
A fundamental scientific goal in auditory neuroscience is identifying what mechanisms allow the brain to transform an unlabeled mixture of auditory stimuli into distinct perceptual representations. This transformation is accomplished by a complex interaction of multiple neurocomputational processes, including Gestalt grouping mechanisms, categorization, attention, and perceptual decision-making. Despite a great deal of scientific energy devoted to understanding these principles of hearing, we still do not understand either how auditory perception arises from neural activity or the causal relationship between neural activity and auditory perception. Here, we review the contributions of cortical and subcortical regions to auditory perceptual decisions with an emphasis on those studies that simultaneously measure behavior and neural activity. We also put forth challenges to the field that must be faced if we are to further our understanding of the relationship between neural activity and auditory perception.
Affiliation(s)
- Taku Banno (co-first author)
- Department of Otorhinolaryngology, University of Pennsylvania, G12A Stemmler, 3450 Hamilton Walk, Philadelphia, PA 19104, United States
- Jean-Hugues Lestang (co-first author)
- Department of Otorhinolaryngology, University of Pennsylvania, G12A Stemmler, 3450 Hamilton Walk, Philadelphia, PA 19104, United States
- Yale E Cohen
- Departments of Otorhinolaryngology, Bioengineering, and Neuroscience, University of Pennsylvania, G12A Stemmler, 3450 Hamilton Walk, Philadelphia, PA 19104, United States
9. Mysore SP, Kothari NB. Mechanisms of competitive selection: A canonical neural circuit framework. eLife 2020; 9:e51473. PMID: 32431293; PMCID: PMC7239658; DOI: 10.7554/eLife.51473
Abstract
Competitive selection, the transformation of multiple competing sensory inputs and internal states into a unitary choice, is a fundamental component of animal behavior. Selection behaviors have been studied under several intersecting umbrellas including decision-making, action selection, perceptual categorization, and attentional selection. Neural correlates of these behaviors and computational models have been investigated extensively. However, specific, identifiable neural circuit mechanisms underlying the implementation of selection remain elusive. Here, we employ a first principles approach to map competitive selection explicitly onto neural circuit elements. We decompose selection into six computational primitives, identify demands that their execution places on neural circuit design, and propose a canonical neural circuit framework. The resulting framework has several links to neural literature, indicating its biological feasibility, and has several common elements with prominent computational models, suggesting its generality. We propose that this framework can help catalyze experimental discovery of the neural circuit underpinnings of competitive selection.
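One circuit motif commonly invoked for competitive selection, and plausibly among the computational primitives the authors map onto neural circuit elements, is winner-take-all dynamics through mutual inhibition. The sketch below is a generic rate-model illustration, not the framework proposed in the paper, and all parameters are arbitrary.

```python
import numpy as np

def winner_take_all(inputs, w_inh=1.2, steps=200, dt=0.1):
    """Iterate a rate model in which each unit is driven by its own input
    and inhibited by the pooled activity of the other units."""
    r = np.zeros_like(inputs, dtype=float)
    for _ in range(steps):
        drive = inputs - w_inh * (r.sum() - r)   # lateral (mutual) inhibition
        r += dt * (-r + np.maximum(drive, 0.0))  # rectified leaky integration
    return r

# Three competing options; the strongest input should suppress the others.
rates = winner_take_all(np.array([1.0, 3.0, 2.0]))
```

With the inhibitory gain above 1, the only stable state has a single active unit (the one with the strongest input), which is the unitary-choice behavior the review decomposes into primitives.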
Affiliation(s)
- Shreesh P Mysore
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, United States
- The Solomon H. Snyder Department of Neuroscience, Johns Hopkins University, Baltimore, United States
- Ninad B Kothari
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, United States
10. Dynamics and Hierarchical Encoding of Non-compact Acoustic Categories in Auditory and Frontal Cortex. Curr Biol 2020; 30:1649-1663.e5. PMID: 32220317; DOI: 10.1016/j.cub.2020.02.047
Abstract
Categorical perception is a fundamental cognitive function enabling animals to flexibly assign sounds into behaviorally relevant categories. This study investigates the nature of acoustic category representations, their emergence in an ascending series of ferret auditory and frontal cortical fields, and the dynamics of this representation during passive listening to task-relevant stimuli and during active retrieval from memory while engaging in learned categorization tasks. Ferrets were trained on two auditory Go-NoGo categorization tasks to discriminate two non-compact sound categories (composed of tones or amplitude-modulated noise). Neuronal responses became progressively more categorical in higher cortical fields, especially during task performance. The dynamics of the categorical responses exhibited a cascading top-down modulation pattern that began earliest in the frontal cortex and subsequently flowed downstream to the secondary auditory cortex, followed by the primary auditory cortex. In a subpopulation of neurons, categorical responses persisted even during the passive listening condition, demonstrating memory for task categories and their enhanced categorical boundaries.
11. Kastanenka KV, Moreno-Bote R, De Pittà M, Perea G, Eraso-Pichot A, Masgrau R, Poskanzer KE, Galea E. A roadmap to integrate astrocytes into Systems Neuroscience. Glia 2020; 68:5-26. PMID: 31058383; PMCID: PMC6832773; DOI: 10.1002/glia.23632
Abstract
Systems neuroscience is still mainly a neuronal field, despite the plethora of evidence supporting the fact that astrocytes modulate local neural circuits, networks, and complex behaviors. In this article, we sought to identify which types of studies are necessary to establish whether astrocytes, beyond their well-documented homeostatic and metabolic functions, perform computations implementing mathematical algorithms that subserve coding and higher-brain functions. First, we reviewed Systems-like studies that include astrocytes in order to identify computational operations that these cells may perform, using Ca2+ transients as their encoding language. The analysis suggests that astrocytes may carry out canonical computations in a time scale of subseconds to seconds in sensory processing, neuromodulation, brain state, memory formation, fear, and complex homeostatic reflexes. Next, we propose a list of actions to gain insight into the outstanding question of which variables are encoded by such computations. The application of statistical analyses based on machine learning, such as dimensionality reduction and decoding in the context of complex behaviors, combined with connectomics of astrocyte-neuronal circuits, is, in our view, a fundamental undertaking. We also discuss technical and analytical approaches to study neuronal and astrocytic populations simultaneously, and the inclusion of astrocytes in advanced modeling of neural circuits, as well as in theories currently under exploration such as predictive coding and energy-efficient coding. Clarifying the relationship between astrocytic Ca2+ and brain coding may represent a leap forward toward novel approaches in the study of astrocytes in health and disease.
Affiliation(s)
- Ksenia V. Kastanenka
- Department of Neurology, MassGeneral Institute for Neurodegenerative Diseases, Massachusetts General Hospital and Harvard Medical School, Massachusetts 02129, USA
- Rubén Moreno-Bote
- Department of Information and Communications Technologies, Center for Brain and Cognition and Universitat Pompeu Fabra, 08018 Barcelona, Spain
- ICREA, 08010 Barcelona, Spain
- Abel Eraso-Pichot
- Departament de Bioquímica, Institut de Neurociències i Universitat Autònoma de Barcelona, Bellaterra, 08193 Barcelona, Spain
- Roser Masgrau
- Departament de Bioquímica, Institut de Neurociències i Universitat Autònoma de Barcelona, Bellaterra, 08193 Barcelona, Spain
- Kira E. Poskanzer
- Department of Biochemistry & Biophysics, Neuroscience Graduate Program, and Kavli Institute for Fundamental Neuroscience, University of California, San Francisco, San Francisco, California 94143, USA
- Equally contributing authors
- Elena Galea
- ICREA, 08010 Barcelona, Spain
- Departament de Bioquímica, Institut de Neurociències i Universitat Autònoma de Barcelona, Bellaterra, 08193 Barcelona, Spain
- Equally contributing authors
12. Elie JE, Theunissen FE. Invariant neural responses for sensory categories revealed by the time-varying information for communication calls. PLoS Comput Biol 2019; 15:e1006698. PMID: 31557151; PMCID: PMC6762074; DOI: 10.1371/journal.pcbi.1006698
Abstract
Although information theoretic approaches have been used extensively in the analysis of the neural code, they have yet to be used to describe how information is accumulated in time while sensory systems are categorizing dynamic sensory stimuli such as speech sounds or visual objects. Here, we present a novel method to estimate the cumulative information for stimuli or categories. We further define a time-varying categorical information index that, by comparing the information obtained for stimuli versus categories of these same stimuli, quantifies invariant neural representations. We use these methods to investigate the dynamic properties of avian cortical auditory neurons recorded in zebra finches that were listening to a large set of call stimuli sampled from the complete vocal repertoire of this species. We found that the time-varying rates carry 5 times more information than the mean firing rates even in the first 100 ms. We also found that cumulative information has slow time constants (100–600 ms) relative to the typical integration time of single neurons, reflecting the fact that the behaviorally informative features of auditory objects are time-varying sound patterns. When we correlated firing rates and information values, we found that average information correlates with average firing rate but that higher-rates found at the onset response yielded similar information values as the lower-rates found in the sustained response: the onset and sustained response of avian cortical auditory neurons provide similar levels of independent information about call identity and call-type. Finally, our information measures allowed us to rigorously define categorical neurons; these categorical neurons show a high degree of invariance for vocalizations within a call-type. Peak invariance is found around 150 ms after stimulus onset. Surprisingly, call-type invariant neurons were found in both primary and secondary avian auditory areas. 
Just as the recognition of faces requires neural representations that are invariant to scale and rotation, the recognition of behaviorally relevant auditory objects, such as spoken words, requires neural representations that are invariant to the speaker uttering the word and to his or her location. Here, we used information theory to investigate the time course of the neural representation of bird communication calls and of behaviorally relevant categories of these same calls: the call-types of the bird’s repertoire. We found that neurons in both the primary and secondary avian auditory cortex exhibit invariant responses to call renditions within a call-type, suggestive of a potential role for extracting the meaning of these communication calls. We also found that time plays an important role: first, neural responses carry significantly more information when represented by temporal patterns calculated at the small time scale of 10 ms than when measured as average rates and, second, this information accumulates in a non-redundant fashion up to long integration times of 600 ms. This rich temporal neural representation is matched to the temporal richness found in the communication calls of this species.
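The core quantity behind both summaries is the mutual information between a neural response and the stimulus (or category) that evoked it. A toy version, using a single spike-count window rather than the paper's cumulative, time-varying estimator, and with invented firing rates and trial counts, looks like this:

```python
import numpy as np

def mutual_info(x, y):
    """Plug-in estimate of mutual information (bits) between two discrete arrays."""
    xs, ys = np.unique(x), np.unique(y)
    pxy = np.array([[np.mean((x == a) & (y == b)) for b in ys] for a in xs])
    px = pxy.sum(axis=1, keepdims=True)   # marginal over responses
    py = pxy.sum(axis=0, keepdims=True)   # marginal over categories
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px * py)[nz])).sum())

rng = np.random.default_rng(0)
n = 2000
category = rng.integers(0, 2, n)             # two call categories
rate = np.where(category == 0, 1.0, 4.0)     # category-dependent mean spike count
counts = rng.poisson(rate)                   # spike counts in one time window

info = mutual_info(counts, category)                        # well above 0 bits
shuffled = mutual_info(counts, rng.permutation(category))   # near 0 bits
```

The paper's cumulative information extends this idea by computing information jointly over successive time bins, which is what reveals the slow (100-600 ms), non-redundant accumulation described above.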
Collapse
Affiliation(s)
- Julie E. Elie
- Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, California, United States of America
- Department of Bioengineering, University of California Berkeley, Berkeley, California, United States of America
- * E-mail:
| | - Frédéric E. Theunissen
- Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, California, United States of America
- Department of Psychology, University of California Berkeley, Berkeley, California, United States of America
13
Training Humans to Categorize Monkey Calls: Auditory Feature- and Category-Selective Neural Tuning Changes. Neuron 2018; 98:405-416.e4. [PMID: 29673483 DOI: 10.1016/j.neuron.2018.03.014] [Citation(s) in RCA: 38] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2017] [Revised: 01/18/2018] [Accepted: 03/08/2018] [Indexed: 11/23/2022]
Abstract
Grouping auditory stimuli into common categories is essential for a variety of auditory tasks, including speech recognition. We trained human participants to categorize auditory stimuli from a large novel set of morphed monkey vocalizations. Using fMRI-rapid adaptation (fMRI-RA) and multi-voxel pattern analysis (MVPA) techniques, we gained evidence that categorization training results in two distinct sets of changes: sharpened tuning to monkey call features (without explicit category representation) in left auditory cortex and category selectivity for different types of calls in lateral prefrontal cortex. In addition, the sharpness of neural selectivity in left auditory cortex, as estimated with both fMRI-RA and MVPA, predicted the steepness of the categorical boundary, whereas categorical judgment correlated with release from adaptation in the left inferior frontal gyrus. These results support the theory that auditory category learning follows a two-stage model analogous to the visual domain, suggesting general principles of perceptual category learning in the human brain.
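The MVPA logic this abstract relies on can be illustrated with a minimal stand-in (this is not the study's pipeline, which applied trained classifiers and fMRI-RA to real voxel data; all names and parameters below are hypothetical): a leave-one-out nearest-centroid decoder tests whether category identity can be read out from multi-voxel activity patterns.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated voxel patterns: n_trials x n_voxels, two call categories (toy data)
n_trials, n_voxels = 40, 50
labels = np.repeat([0, 1], n_trials // 2)
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[labels == 1, :5] += 1.5  # category signal carried by a few voxels

def nearest_centroid_loo(X, y):
    """Leave-one-out nearest-centroid decoding accuracy (a minimal MVPA stand-in)."""
    correct = 0
    for i in range(len(y)):
        train = np.ones(len(y), bool)
        train[i] = False  # hold out trial i
        centroids = [X[train & (y == c)].mean(axis=0) for c in np.unique(y)]
        pred = int(np.argmin([np.linalg.norm(X[i] - c) for c in centroids]))
        correct += pred == y[i]
    return correct / len(y)

acc = nearest_centroid_loo(patterns, labels)  # above-chance decoding implies pattern information
```

Above-chance cross-validated accuracy in a region is the evidence that its activity patterns distinguish the categories, the same inference the study draws from auditory and prefrontal cortex.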
14
Selezneva E, Gorkin A, Budinger E, Brosch M. Neuronal correlates of auditory streaming in the auditory cortex of behaving monkeys. Eur J Neurosci 2018; 48:3234-3245. [PMID: 30070745 DOI: 10.1111/ejn.14098] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2017] [Revised: 06/27/2018] [Accepted: 07/20/2018] [Indexed: 11/29/2022]
Abstract
This study tested the hypothesis that spiking activity in the primary auditory cortex of monkeys is related to auditory stream formation. Evidence for this hypothesis was previously obtained in animals that were passively exposed to stimuli and in which differences in the streaming percept were confounded with differences between the stimuli. In this study, monkeys performed an operant task on sequences that were composed of light flashes and tones. The tones alternated between a high and a low frequency and could be perceived either as one auditory stream or two auditory streams. The flashes promoted either a one-stream percept or a two-stream percept. Comparison of different types of sequences revealed that the neuronal responses to the alternating tones were more similar when the flashes promoted auditory stream integration, and were more dissimilar when the flashes promoted auditory stream segregation. Thus our findings show that the spiking activity in the monkey primary auditory cortex is related to auditory stream formation.
Affiliation(s)
- Eike Budinger
- Leibniz Institut für Neurobiologie, Magdeburg, Germany
15
Moreno A, Gumaste A, Adams GK, Chong KK, Nguyen M, Shepard KN, Liu RC. Familiarity with social sounds alters c-Fos expression in auditory cortex and interacts with estradiol in locus coeruleus. Hear Res 2018; 366:38-49. [PMID: 29983289 PMCID: PMC6470399 DOI: 10.1016/j.heares.2018.06.020] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/15/2017] [Revised: 06/21/2018] [Accepted: 06/26/2018] [Indexed: 12/21/2022]
Abstract
When a social sound category initially gains behavioral significance to an animal, plasticity events presumably enhance the ability to recognize that sound category in the future. In the context of learning natural social stimuli, neuromodulators such as norepinephrine and estrogen have been associated with experience-dependent plasticity and processing of newly salient social cues, yet continued plasticity once stimuli are familiar could disrupt the stability of sensorineural representations. Here we employed a maternal mouse model of natural sensory cortical plasticity for infant vocalizations to ask whether the engagement of the noradrenergic locus coeruleus (LC) by the playback of pup-calls is affected by either prior experience with the sounds or estrogen availability, using a well-studied cellular activity and plasticity marker, the immediate early gene c-Fos. We counted call-induced c-Fos immunoreactive (cFos-IR) cells in both LC and physiologically validated fields within the auditory cortex (AC) of estradiol or blank-implanted virgin female mice with either 0 or 5-days prior experience caring for vocalizing pups. Estradiol and pup experience interacted both in the induction of c-Fos-IR in the LC, as well as in behavioral measures of locomotion during playback, consistent with the neuromodulatory center’s activity being an online reflection of both hormonal and experience-dependent influences on arousal. Throughout core AC, as well as in a high frequency sub-region of AC and in secondary AC, a main effect of pup experience was to reduce call-induced c-Fos-IR, irrespective of estradiol availability. This is consistent with the hypothesis that sound familiarity leads to less c-Fos-mediated plasticity, and less disrupted sensory representations of a meaningful call category. 
Taken together, our data support the view that any coupling between these sensory and neuromodulatory areas is situationally dependent, and their engagement depends differentially on both internal state factors like hormones and external state factors like prior experience.
Affiliation(s)
- Amielle Moreno
- Neuroscience Graduate Program, Emory University, 1462 Clifton Road, Atlanta, GA, 30322, USA; Department of Biology, Emory University, 1510 Clifton Road, Atlanta, GA, 30322, USA.
- Ankita Gumaste
- Department of Biology, Emory University, 1510 Clifton Road, Atlanta, GA, 30322, USA; Neuroscience and Behavior Biology Program, Emory University, 1462 Clifton Road, Atlanta, GA, 30322, USA.
- Geoff K Adams
- Department of Biology, Emory University, 1510 Clifton Road, Atlanta, GA, 30322, USA.
- Kelly K Chong
- Department of Biology, Emory University, 1510 Clifton Road, Atlanta, GA, 30322, USA; Biomedical Engineering Graduate Program, Georgia Institute of Technology, North Ave NW, Atlanta, GA, 30332, USA.
- Michael Nguyen
- Department of Biology, Emory University, 1510 Clifton Road, Atlanta, GA, 30322, USA; Neuroscience and Behavior Biology Program, Emory University, 1462 Clifton Road, Atlanta, GA, 30322, USA.
- Kathryn N Shepard
- Neuroscience Graduate Program, Emory University, 1462 Clifton Road, Atlanta, GA, 30322, USA; Department of Biology, Emory University, 1510 Clifton Road, Atlanta, GA, 30322, USA.
- Robert C Liu
- Department of Biology, Emory University, 1510 Clifton Road, Atlanta, GA, 30322, USA; Center for Translational Social Neuroscience, Emory University, Atlanta, GA, 30322, USA.
16
Hausfeld L, Riecke L, Formisano E. Acoustic and higher-level representations of naturalistic auditory scenes in human auditory and frontal cortex. Neuroimage 2018. [DOI: 10.1016/j.neuroimage.2018.02.065] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022] Open
17
Ten Oever S, Hausfeld L, Correia J, Van Atteveldt N, Formisano E, Sack A. A 7T fMRI study investigating the influence of oscillatory phase on syllable representations. Neuroimage 2016; 141:1-9. [DOI: 10.1016/j.neuroimage.2016.07.011] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2016] [Revised: 06/06/2016] [Accepted: 07/06/2016] [Indexed: 01/01/2023] Open
18
Elie JE, Theunissen FE. Meaning in the avian auditory cortex: neural representation of communication calls. Eur J Neurosci 2015; 41:546-67. [PMID: 25728175 DOI: 10.1111/ejn.12812] [Citation(s) in RCA: 35] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2014] [Revised: 11/26/2014] [Accepted: 11/28/2014] [Indexed: 11/28/2022]
Abstract
Understanding how the brain extracts the behavioral meaning carried by specific vocalization types that can be emitted by various vocalizers and in different conditions is a central question in auditory research. This semantic categorization is a fundamental process required for acoustic communication, and presupposes discriminative and invariance properties of the auditory system for conspecific vocalizations. Songbirds have been used extensively to study vocal learning, but the communicative function of all their vocalizations and their neural representation has yet to be examined. In this study, we first generated a library containing almost the entire zebra finch vocal repertoire, and organised communication calls along nine different categories according to their behavioral meaning. We then investigated the neural representations of these semantic categories in the primary and secondary auditory areas of six anesthetised zebra finches. To analyse how single units encode these call categories, we described neural responses in terms of their discrimination, selectivity and invariance properties. Quantitative measures for these neural properties were obtained with an optimal decoder using both spike counts and spike patterns. Information theoretic metrics show that almost half of the single units encode semantic information. Neurons achieve higher discrimination of these semantic categories by being more selective and more invariant. These results demonstrate that computations necessary for semantic categorization of meaningful vocalizations are already present in the auditory cortex, and emphasise the value of a neuro-ethological approach to understand vocal communication.
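The abstract's decoding comparison between spike counts and spike patterns can be sketched with simulated data (all parameters are illustrative, and a simple nearest-centroid rule stands in for the optimal decoder used in the study): two call categories with identical mean firing rates but different temporal profiles are separable only from the time-binned patterns.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated responses: two call categories that differ in the *timing* of
# spikes but not in total spike count (toy parameters).
n_trials, n_bins = 60, 20           # e.g. 20 bins of 10 ms = 200 ms window
labels = np.repeat([0, 1], n_trials // 2)
rates = np.full((n_trials, n_bins), 0.5)
rates[labels == 0, :5]  = 2.0       # category 0: early response
rates[labels == 1, 15:] = 2.0       # category 1: late response
spikes = rng.poisson(rates)

def loo_accuracy(features, y):
    """Leave-one-out nearest-centroid decoding accuracy."""
    correct = 0
    for i in range(len(y)):
        train = np.ones(len(y), bool)
        train[i] = False
        cents = [features[train & (y == c)].mean(axis=0) for c in np.unique(y)]
        pred = int(np.argmin([np.linalg.norm(features[i] - c) for c in cents]))
        correct += pred == y[i]
    return correct / len(y)

acc_rate    = loo_accuracy(spikes.sum(axis=1, keepdims=True), labels)  # spike-count code
acc_pattern = loo_accuracy(spikes, labels)                             # temporal-pattern code
```

Here the count-based decoder hovers near chance while the pattern-based decoder succeeds, mirroring the finding that temporal spike patterns carry information about call category beyond mean firing rate.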
Affiliation(s)
- Julie E Elie
- Helen Wills Neuroscience Institute and Psychology Department, University of California Berkeley, 3210 Tolman Hall, CA-94720, Berkeley, CA, USA
19
Iannilli E, Sorokowska A, Zhigang Z, Hähner A, Warr J, Hummel T. Source localization of event-related brain activity elicited by food and nonfood odors. Neuroscience 2015; 289:99-105. [DOI: 10.1016/j.neuroscience.2014.12.044] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2014] [Revised: 12/22/2014] [Accepted: 12/27/2014] [Indexed: 01/25/2023]
20
Ohl FW. Role of cortical neurodynamics for understanding the neural basis of motivated behavior - lessons from auditory category learning. Curr Opin Neurobiol 2014; 31:88-94. [PMID: 25241212 DOI: 10.1016/j.conb.2014.08.014] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2014] [Revised: 08/26/2014] [Accepted: 08/28/2014] [Indexed: 11/25/2022]
Abstract
Rhythmic activity appears in the auditory cortex in both microscopic and macroscopic observables and is modulated by both bottom-up and top-down processes. How this activity serves both types of processes is largely unknown. Here we review studies that have recently improved our understanding of potential functional roles of large-scale global dynamic activity patterns in auditory cortex. The experimental paradigm of auditory category learning allowed critical testing of the hypothesis that global auditory cortical activity states are associated with endogenous cognitive states mediating the meaning associated with an acoustic stimulus rather than with activity states that merely represent the stimulus for further processing.
Affiliation(s)
- Frank W Ohl
- Leibniz Institute for Neurobiology, Department of Systems Physiology of Learning, Brenneckestr. 6, D-39118 Magdeburg, Germany.