1
Hakonen M, Dahmani L, Lankinen K, Ren J, Barbaro J, Blazejewska A, Cui W, Kotlarz P, Li M, Polimeni JR, Turpin T, Uluç I, Wang D, Liu H, Ahveninen J. Individual connectivity-based parcellations reflect functional properties of human auditory cortex. bioRxiv 2024:2024.01.20.576475. [PMID: 38293021] [PMCID: PMC10827228] [DOI: 10.1101/2024.01.20.576475]
Abstract
Neuroimaging studies of the functional organization of human auditory cortex have focused on group-level analyses to identify tendencies that represent the typical brain. Here, we mapped auditory areas of the human superior temporal cortex (STC) in 30 participants by combining functional network analysis and 1-mm isotropic resolution 7T functional magnetic resonance imaging (fMRI). Two resting-state fMRI sessions, and one or two auditory and audiovisual speech localizer sessions, were collected on 3-4 separate days. We generated a set of functional network-based parcellations from these data. Solutions with 4, 6, and 11 networks were selected for closer examination based on local maxima of Dice and Silhouette values. The resulting parcellation of auditory cortices showed high intraindividual reproducibility both between resting state sessions (Dice coefficient: 69-78%) and between resting state and task sessions (Dice coefficient: 62-73%). This demonstrates that auditory areas in STC can be reliably segmented into functional subareas. The interindividual variability was significantly larger than intraindividual variability (Dice coefficient: 57-68%, p<0.001), indicating that the parcellations also captured meaningful interindividual variability. The individual-specific parcellations yielded the highest alignment with task response topographies, suggesting that individual variability in parcellations reflects individual variability in auditory function. Connectional homogeneity within networks was also highest for the individual-specific parcellations. Furthermore, the similarity in the functional parcellations was not explainable by the similarity of macroanatomical properties of auditory cortex. Our findings suggest that individual-level parcellations capture meaningful idiosyncrasies in auditory cortex organization.
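The Dice coefficients this abstract uses to quantify reproducibility measure vertex-wise overlap between two labelings. A minimal sketch of a per-network Dice computation (not the authors' code; the arrays, labels, and sizes below are invented toy examples):

```python
import numpy as np

def dice_per_network(parc_a, parc_b, labels):
    """Dice overlap between two labeled parcellations, per network label.

    parc_a, parc_b : 1-D integer arrays of vertex labels (same length).
    labels         : iterable of network labels to score.
    """
    scores = {}
    for k in labels:
        a = parc_a == k
        b = parc_b == k
        denom = a.sum() + b.sum()
        scores[k] = 2.0 * np.logical_and(a, b).sum() / denom if denom else np.nan
    return scores

# Two toy 10-vertex parcellations with networks 1 and 2.
p1 = np.array([1, 1, 1, 1, 2, 2, 2, 2, 2, 2])
p2 = np.array([1, 1, 1, 2, 2, 2, 2, 2, 2, 2])
d = dice_per_network(p1, p2, labels=[1, 2])
```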
Affiliation(s)
- M Hakonen: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
- L Dahmani: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
- K Lankinen: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
- J Ren: Division of Brain Sciences, Changping Laboratory, Beijing, China
- J Barbaro: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- A Blazejewska: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
- W Cui: Division of Brain Sciences, Changping Laboratory, Beijing, China
- P Kotlarz: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- M Li: Division of Brain Sciences, Changping Laboratory, Beijing, China
- J R Polimeni: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA; Harvard-MIT Program in Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, USA
- T Turpin: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- I Uluç: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
- D Wang: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
- H Liu: Division of Brain Sciences, Changping Laboratory, Beijing, China; Biomedical Pioneering Innovation Center (BIOPIC), Peking University, Beijing, China
- J Ahveninen: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
2
van den Berg MM, Busscher E, Borst JGG, Wong AB. Neuronal responses in mouse inferior colliculus correlate with behavioral detection of amplitude-modulated sound. J Neurophysiol 2023; 130:524-546. [PMID: 37465872] [DOI: 10.1152/jn.00048.2023]
Abstract
Amplitude modulation (AM) is a common feature of natural sounds, including speech and animal vocalizations. Here, we used operant conditioning and in vivo electrophysiology to determine the AM detection threshold of mice as well as its underlying neuronal encoding. Mice were trained in a Go-NoGo task to detect the transition to AM within a noise stimulus designed to prevent the use of spectral side-bands or a change in intensity as alternative cues. Our results indicate that mice, compared with other species, detect high modulation frequencies up to 512 Hz well, but show much poorer performance at low frequencies. Our in vivo multielectrode recordings in the inferior colliculus (IC) of both anesthetized and awake mice revealed a few single units with remarkable phase-locking ability to 512 Hz modulation, but not sufficient to explain the good behavioral detection at that frequency. Using a model of the population response that combined dimensionality reduction with threshold detection, we reproduced the general band-pass characteristics of behavioral detection based on a subset of neurons showing the largest firing rate change (both increase and decrease) in response to AM, suggesting that these neurons are instrumental in the behavioral detection of AM stimuli by the mice.

NEW & NOTEWORTHY The amplitude of natural sounds, including speech and animal vocalizations, often shows characteristic modulations. We examined the relationship between neuronal responses in the mouse inferior colliculus and the behavioral detection of amplitude modulation (AM) in sound and modeled how the former can give rise to the latter. Our model suggests that behavioral detection can be well explained by the activity of a subset of neurons showing the largest firing rate changes in response to AM.
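Phase locking of the kind probed here is conventionally quantified by vector strength (1 = every spike at the same modulation phase, 0 = no phase preference). A toy illustration, not the paper's analysis code; the spike trains and jitter level are invented:

```python
import numpy as np

def vector_strength(spike_times, mod_freq):
    """Vector strength of phase locking to a modulation frequency (Hz)."""
    phases = 2.0 * np.pi * mod_freq * np.asarray(spike_times)
    return np.abs(np.exp(1j * phases).mean())

fm = 512.0                                  # modulation frequency in Hz
period = 1.0 / fm
locked = np.arange(100) * period            # one spike per cycle, fixed phase
rng = np.random.default_rng(0)
jittered = locked + rng.normal(0.0, period / 4.0, size=100)

vs_locked = vector_strength(locked, fm)
vs_jittered = vector_strength(jittered, fm)
```

Jitter comparable to a quarter period largely destroys locking, which is why sustaining vector strength at 512 Hz (a ~2 ms period) demands sub-millisecond spike precision.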
Affiliation(s)
- Maurits M van den Berg: Department of Neuroscience, Erasmus MC, University Medical Center Rotterdam, Rotterdam, The Netherlands
- Esmée Busscher: Department of Neuroscience, Erasmus MC, University Medical Center Rotterdam, Rotterdam, The Netherlands
- J Gerard G Borst: Department of Neuroscience, Erasmus MC, University Medical Center Rotterdam, Rotterdam, The Netherlands
- Aaron B Wong: Department of Neuroscience, Erasmus MC, University Medical Center Rotterdam, Rotterdam, The Netherlands
3
Sadagopan S, Kar M, Parida S. Quantitative models of auditory cortical processing. Hear Res 2023; 429:108697. [PMID: 36696724] [PMCID: PMC9928778] [DOI: 10.1016/j.heares.2023.108697]
Abstract
To generate insight from experimental data, it is critical to understand the inter-relationships between individual data points and place them in context within a structured framework. Quantitative modeling can provide the scaffolding for such an endeavor. Our main objective in this review is to provide a primer on the range of quantitative tools available to experimental auditory neuroscientists. Quantitative modeling is advantageous because it can provide a compact summary of observed data, make underlying assumptions explicit, and generate predictions for future experiments. Quantitative models may be developed to characterize or fit observed data, to test theories of how a task may be solved by neural circuits, to determine how observed biophysical details might contribute to measured activity patterns, or to predict how an experimental manipulation would affect neural activity. In complexity, quantitative models can range from those that are highly biophysically realistic and that include detailed simulations at the level of individual synapses, to those that use abstract and simplified neuron models to simulate entire networks. Here, we survey the landscape of recently developed models of auditory cortical processing, highlighting a small selection of models to demonstrate how they help generate insight into the mechanisms of auditory processing. We discuss examples ranging from models that use details of synaptic properties to explain the temporal pattern of cortical responses to those that use modern deep neural networks to gain insight into human fMRI data. We conclude by discussing a biologically realistic and interpretable model that our laboratory has developed to explore aspects of vocalization categorization in the auditory pathway.
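At the "compact summary of observed data" end of the spectrum this review surveys sits the classic linear-nonlinear (LN) neuron: a spectro-temporal receptive field (STRF) filter followed by a static nonlinearity. A minimal sketch under assumed array shapes (a toy implementation for illustration, not any specific model from the review):

```python
import numpy as np

def ln_response(spectrogram, strf, threshold=0.0):
    """Minimal linear-nonlinear (LN) model neuron.

    spectrogram : (n_freq, n_time) array of stimulus energy.
    strf        : (n_freq, n_lag) filter; strf[:, l] weights the
                  spectrogram column l time steps in the past.
    Returns a nonnegative firing-rate trace of length n_time.
    """
    n_freq, n_time = spectrogram.shape
    _, n_lag = strf.shape
    rate = np.zeros(n_time)
    for t in range(n_time):
        drive = 0.0
        for lag in range(min(n_lag, t + 1)):
            drive += np.dot(strf[:, lag], spectrogram[:, t - lag])
        rate[t] = max(drive - threshold, 0.0)   # half-wave rectification
    return rate

spec = np.zeros((3, 5))
spec[1, 2] = 2.0          # a burst of energy in channel 1 at time step 2
strf = np.zeros((3, 2))
strf[1, 0] = 1.0          # model neuron listens to channel 1 at zero lag
rate = ln_response(spec, strf)
```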
Affiliation(s)
- Srivatsun Sadagopan: Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA; Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA
- Manaswini Kar: Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA
- Satyabrata Parida: Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
4
Souffi S, Varnet L, Zaidi M, Bathellier B, Huetz C, Edeline JM. Reduction in sound discrimination in noise is related to envelope similarity and not to a decrease in envelope tracking abilities. J Physiol 2023; 601:123-149. [PMID: 36373184] [DOI: 10.1113/jp283526]
Abstract
Humans and animals constantly face challenging acoustic environments, such as various background noises, that impair the detection, discrimination and identification of behaviourally relevant sounds. Here, we disentangled the role of temporal envelope tracking in the reduction in neuronal and behavioural discrimination between communication sounds in situations of acoustic degradations. By collecting neuronal activity from six different levels of the auditory system, from the auditory nerve up to the secondary auditory cortex, in anaesthetized guinea-pigs, we found that tracking of slow changes of the temporal envelope is a general functional property of auditory neurons for encoding communication sounds in quiet conditions and in adverse, challenging conditions. Results from a go/no-go sound discrimination task in mice support the idea that the loss of distinct slow envelope cues in noisy conditions impacted the discrimination performance. Together, these results suggest that envelope tracking is potentially a universal mechanism operating in the central auditory system, which allows the detection of any between-stimulus difference in the slow envelope and thus copes with degraded conditions.

KEY POINTS:
- In quiet conditions, envelope tracking in the low amplitude modulation range (<20 Hz) is correlated with the neuronal discrimination between communication sounds as quantified by mutual information from the cochlear nucleus up to the auditory cortex.
- At each level of the auditory system, auditory neurons retain their abilities to track the communication sound envelopes in situations of acoustic degradation, such as vocoding and the addition of masking noises up to a signal-to-noise ratio of -10 dB.
- In noisy conditions, the increase in between-stimulus envelope similarity explains the reduction in both behavioural and neuronal discrimination in the auditory system.
- Envelope tracking can be viewed as a universal mechanism that allows neural and behavioural discrimination as long as the temporal envelope of communication sounds displays some differences.
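The between-stimulus envelope similarity invoked here can be approximated as the correlation between slow temporal envelopes. A crude sketch (not the authors' pipeline; the rectify-and-smooth envelope extractor, window length, carrier and modulation frequencies are all invented for the example):

```python
import numpy as np

def slow_envelope(x, fs, win_s=0.05):
    """Crude slow envelope: full-wave rectify, then 50 ms moving average."""
    n = max(int(fs * win_s), 1)
    return np.convolve(np.abs(x), np.ones(n) / n, mode="same")

def envelope_similarity(x, y, fs):
    """Pearson correlation between the slow envelopes of two sounds."""
    ex, ey = slow_envelope(x, fs), slow_envelope(y, fs)
    return float(np.corrcoef(ex, ey)[0, 1])

fs = 4000
t = np.arange(0, 1, 1 / fs)
env4 = 1 + 0.9 * np.sin(2 * np.pi * 4 * t)   # 4 Hz envelope
env7 = 1 + 0.9 * np.sin(2 * np.pi * 7 * t)   # 7 Hz envelope
a = env4 * np.sin(2 * np.pi * 400 * t)       # same envelope, 400 Hz carrier
b = env4 * np.sin(2 * np.pi * 600 * t)       # same envelope, 600 Hz carrier
c = env7 * np.sin(2 * np.pi * 600 * t)       # different envelope

sim_same = envelope_similarity(a, b, fs)     # high: envelopes match
sim_diff = envelope_similarity(a, c, fs)     # low: envelopes differ
```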
Affiliation(s)
- Samira Souffi: Paris-Saclay Institute of Neuroscience (Neuro-PSI, UMR 9197), CNRS - Université Paris-Saclay, Saclay, France
- Léo Varnet: Laboratoire des systèmes perceptifs, UMR CNRS 8248, Département d'Etudes Cognitives, Ecole Normale Supérieure, Université Paris Sciences & Lettres, Paris, France
- Meryem Zaidi: Paris-Saclay Institute of Neuroscience (Neuro-PSI, UMR 9197), CNRS - Université Paris-Saclay, Saclay, France
- Brice Bathellier: Institut de l'Audition, Institut Pasteur, Université de Paris, INSERM, Paris, France
- Chloé Huetz: Paris-Saclay Institute of Neuroscience (Neuro-PSI, UMR 9197), CNRS - Université Paris-Saclay, Saclay, France
- Jean-Marc Edeline: Paris-Saclay Institute of Neuroscience (Neuro-PSI, UMR 9197), CNRS - Université Paris-Saclay, Saclay, France
5
Liu XP, Wang X. Distinct neuronal types contribute to hybrid temporal encoding strategies in primate auditory cortex. PLoS Biol 2022; 20:e3001642. [PMID: 35613218] [PMCID: PMC9132345] [DOI: 10.1371/journal.pbio.3001642]
Abstract
Studies of the encoding of sensory stimuli by the brain often consider recorded neurons as a pool of identical units. Here, we report divergence in stimulus-encoding properties between subpopulations of cortical neurons that are classified based on spike timing and waveform features. Neurons in auditory cortex of the awake marmoset (Callithrix jacchus) encode temporal information with either stimulus-synchronized or nonsynchronized responses. When we classified single-unit recordings using either a criteria-based or an unsupervised classification method into regular-spiking, fast-spiking, and bursting units, a subset of intrinsically bursting neurons formed the most highly synchronized group, with strong phase-locking to sinusoidal amplitude modulation (SAM) that extended well above 20 Hz. In contrast with other unit types, these bursting neurons fired primarily on the rising phase of SAM or the onset of unmodulated stimuli, and preferred rapid stimulus onset rates. Such differentiating behavior has been previously reported in bursting neuron models and may reflect specializations for detection of acoustic edges. These units responded to natural stimuli (vocalizations) with brief and precise spiking at particular time points that could be decoded with high temporal stringency. Regular-spiking units better reflected the shape of slow modulations and responded more selectively to vocalizations with overall firing rate increases. Population decoding using time-binned neural activity found that decoding behavior differed substantially between regular-spiking and bursting units. A relatively small pool of bursting units was sufficient to identify the stimulus with high accuracy in a manner that relied on the temporal pattern of responses. These unit type differences may contribute to parallel and complementary neural codes.

Neurons in auditory cortex show highly diverse responses to sounds. This study suggests that neuronal type inferred from baseline firing properties accounts for much of this diversity, with a subpopulation of bursting units being specialized for precise temporal encoding.
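A criteria-based split of the kind mentioned above can be sketched from inter-spike intervals (ISIs) alone. This is a toy heuristic (the 4 ms burst ISI, 20% threshold, and spike trains are invented; the paper's classifiers also used waveform features):

```python
import numpy as np

def burst_fraction(spike_times, burst_isi=0.004):
    """Fraction of inter-spike intervals shorter than `burst_isi` seconds."""
    isis = np.diff(np.sort(np.asarray(spike_times)))
    return float(np.mean(isis < burst_isi)) if isis.size else 0.0

def classify_unit(spike_times, burst_threshold=0.2):
    """Toy criteria-based split into 'bursting' vs 'regular-spiking'."""
    return "bursting" if burst_fraction(spike_times) > burst_threshold else "regular-spiking"

regular = np.arange(0.0, 1.0, 0.05)      # one spike every 50 ms
bursty = np.concatenate(                 # triplets of spikes 2 ms apart, every 100 ms
    [[s, s + 0.002, s + 0.004] for s in np.arange(0.0, 1.0, 0.1)]
)
```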
Affiliation(s)
- Xiao-Ping Liu: Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, Maryland, United States of America
- Xiaoqin Wang: Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, Maryland, United States of America
6
Ren J, Xu T, Wang D, Li M, Lin Y, Schoeppe F, Ramirez JSB, Han Y, Luan G, Li L, Liu H, Ahveninen J. Individual Variability in Functional Organization of the Human and Monkey Auditory Cortex. Cereb Cortex 2020; 31:2450-2465. [PMID: 33350445] [DOI: 10.1093/cercor/bhaa366]
Abstract
Accumulating evidence shows that the auditory cortex (AC) of humans and other primates is involved in cognitive processes more complex than feature segregation alone, processes that are shaped by experience-dependent plasticity and thus likely show substantial individual variability. However, thus far, individual variability of ACs has been considered a methodological impediment rather than a phenomenon of theoretical importance. Here, we examined the variability of ACs using intrinsic functional connectivity patterns in humans and macaques. Our results demonstrate that in humans, interindividual variability is greater near the nonprimary than primary ACs, indicating that variability dramatically increases across the processing hierarchy. ACs are also more variable than comparable visual areas and show higher variability in the left than in the right hemisphere, which may be related to the left lateralization of auditory-related functions such as language. Intriguingly, remarkably similar modality differences and lateralization of variability were also observed in macaques. These connectivity-based findings are consistent with a confirmatory task-based functional magnetic resonance imaging analysis. The quantification of variability in auditory function, and the similar findings in both humans and macaques, will have strong implications for understanding the evolution of advanced auditory functions in humans.
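One common way to score interindividual variability of connectivity (in the spirit of, though not identical to, the metric used in this line of work) is one minus the mean pairwise correlation between subjects' connectivity profiles. A toy sketch with invented array shapes:

```python
import numpy as np

def intersubject_variability(conn):
    """Variability of a connectivity profile across subjects.

    conn : (n_subjects, n_targets) array, one connectivity map per subject
           (e.g., correlations from one seed vertex to all target regions).
    Returns 1 minus the mean pairwise inter-subject correlation.
    """
    r = np.corrcoef(conn)                       # subject-by-subject similarity
    n = r.shape[0]
    return float(1.0 - r[~np.eye(n, dtype=bool)].mean())

rng = np.random.default_rng(0)
base = rng.normal(size=50)
identical = np.tile(base, (4, 1))               # four subjects, same profile
idiosyncratic = rng.normal(size=(4, 50))        # four unrelated profiles
```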
Affiliation(s)
- Jianxun Ren: National Engineering Laboratory for Neuromodulation, School of Aerospace Engineering, Tsinghua University, 100084 Beijing, China; Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA 02129, USA
- Ting Xu: Center for the Developing Brain, Child Mind Institute, New York, NY 10022, USA
- Danhong Wang: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA 02129, USA
- Meiling Li: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA 02129, USA
- Yuanxiang Lin: Department of Neurosurgery, First Affiliated Hospital, Fujian Medical University, 350108 Fuzhou, China
- Franziska Schoeppe: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA 02129, USA
- Julian S B Ramirez: Department of Behavioral Neuroscience, Oregon Health and Science University, Portland, OR 97239, USA
- Ying Han: Department of Neurology, Xuanwu Hospital of Capital Medical University, 100053 Beijing, China
- Guoming Luan: Department of Neurosurgery, Comprehensive Epilepsy Center, Sanbo Brain Hospital, Capital Medical University, 100093 Beijing, China
- Luming Li: National Engineering Laboratory for Neuromodulation, School of Aerospace Engineering, Tsinghua University, 100084 Beijing, China; Precision Medicine & Healthcare Research Center, Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, 518055 Shenzhen, China; IDG/McGovern Institute for Brain Research, Tsinghua University, 100084 Beijing, China
- Hesheng Liu: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA 02129, USA; Department of Neuroscience, Medical University of South Carolina, Charleston, SC 29425, USA
- Jyrki Ahveninen: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA 02129, USA
7
Macias S, Bakshi K, Garcia-Rosales F, Hechavarria JC, Smotherman M. Temporal coding of echo spectral shape in the bat auditory cortex. PLoS Biol 2020; 18:e3000831. [PMID: 33170833] [PMCID: PMC7678962] [DOI: 10.1371/journal.pbio.3000831]
Abstract
Echolocating bats rely upon spectral interference patterns in echoes to reconstruct fine details of a reflecting object’s shape. However, the acoustic modulations required to do this are extremely brief, raising questions about how their auditory cortex encodes and processes such rapid and fine spectrotemporal details. Here, we tested the hypothesis that biosonar target shape representation in the primary auditory cortex (A1) is more reliably encoded by changes in spike timing (latency) than spike rates and that latency is sufficiently precise to support a synchronization-based ensemble representation of this critical auditory object feature space. To test this, we measured how the spatiotemporal activation patterns of A1 changed when naturalistic spectral notches were inserted into echo mimic stimuli. Neurons tuned to notch frequencies were predicted to exhibit longer latencies and lower mean firing rates due to lower signal amplitudes at their preferred frequencies, and both were found to occur. Comparative analyses confirmed that significantly more information was recoverable from changes in spike times relative to concurrent changes in spike rates. With this data, we reconstructed spatiotemporal activation maps of A1 and estimated the level of emerging neuronal spike synchrony between cortical neurons tuned to different frequencies. The results support existing computational models, indicating that spectral interference patterns may be efficiently encoded by a cascading tonotopic sequence of neural synchronization patterns within an ensemble of network activity that relates to the physical features of the reflecting object surface.

This study shows that the latency shifts induced by spectral notch patterns can provide the foundation for an avalanche of neuronal synchrony that is sufficient to support encoding of auditory object shape features during active biosonar.
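Inserting a spectral notch into an echo-mimic stimulus can be done crudely by zeroing a band in the frequency domain. A hedged sketch (not the authors' stimulus code; the brick-wall FFT mask, sampling rate, and notch parameters are invented for the example):

```python
import numpy as np

def insert_spectral_notch(x, fs, center, width):
    """Zero a band of frequencies via FFT masking (a crude brick-wall notch).

    Returns the notched signal and the boolean mask of zeroed rFFT bins.
    """
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mask = np.abs(freqs - center) <= width / 2.0
    spec[mask] = 0.0
    return np.fft.irfft(spec, n=len(x)), mask

rng = np.random.default_rng(1)
fs = 8000
echo = rng.normal(size=fs)                 # 1 s of white noise as an echo mimic
notched, mask = insert_spectral_notch(echo, fs, center=1000.0, width=200.0)
```

A real stimulus would use a tapered (e.g., Gaussian) notch to avoid ringing from the sharp spectral edges; the brick-wall mask keeps the example short.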
Affiliation(s)
- Silvio Macias: Department of Biology, Texas A&M University, College Station, Texas, United States of America
- Kushal Bakshi: Department of Biology, Texas A&M University, College Station, Texas, United States of America
- Julio C. Hechavarria: Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Frankfurt/M., Germany
- Michael Smotherman: Department of Biology, Texas A&M University, College Station, Texas, United States of America
8
Johnson JS, Niwa M, O'Connor KN, Sutter ML. Amplitude modulation encoding in the auditory cortex: comparisons between the primary and middle lateral belt regions. J Neurophysiol 2020; 124:1706-1726. [PMID: 33026929] [DOI: 10.1152/jn.00171.2020]
Abstract
In macaques, the middle lateral auditory cortex (ML) is a belt region adjacent to the primary auditory cortex (A1) and believed to be at a hierarchically higher level. Although ML single-unit responses have been studied for several auditory stimuli, the ability of ML cells to encode amplitude modulation (AM)-an ability that has been widely studied in A1-has not yet been characterized. Here, we compared the responses of A1 and ML neurons to AM noise in awake macaques. Although several of the basic properties of A1 and ML responses to AM noise were similar, we found several key differences. ML neurons were less likely to phase lock, did not phase lock as strongly, and were more likely to respond in a nonsynchronized fashion than A1 cells, consistent with a temporal-to-rate transformation as information ascends the auditory hierarchy. ML neurons tended to have lower temporally (phase-locking) based best modulation frequencies than A1 neurons. Neurons that decreased their firing rate in response to AM noise relative to their firing rate in response to unmodulated noise became more common at the level of ML than they were in A1. In both A1 and ML, we found a prevalent class of neurons that usually have enhanced rate responses relative to responses to the unmodulated noise at lower modulation frequencies and suppressed rate responses relative to responses to the unmodulated noise at middle modulation frequencies.

NEW & NOTEWORTHY ML neurons synchronized less than A1 neurons, consistent with a hierarchical temporal-to-rate transformation. Both A1 and ML had a class of modulation transfer functions previously unreported in the cortex with a low-modulation-frequency (MF) peak, a middle-MF trough, and responses similar to unmodulated noise responses at high MFs. The results support a hierarchical shift toward a two-pool opponent code, where subtraction of neural activity between two populations of oppositely tuned neurons encodes AM.
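The two-pool opponent readout described above amounts to subtracting the mean rate of an AM-suppressed pool from an AM-enhanced pool; one appeal of the scheme is that fluctuations common to both pools (e.g., loudness- or state-driven drift) cancel. A toy sketch with invented rates, not data from the paper:

```python
import numpy as np

def opponent_readout(enhanced_rates, suppressed_rates):
    """Two-pool opponent code: difference of mean rates between a pool
    whose firing increases with AM and a pool whose firing decreases."""
    return float(np.mean(enhanced_rates) - np.mean(suppressed_rates))

base, am_signal = 10.0, 5.0                # spikes/s; invented toy numbers
readouts = []
for common_drift in (0.0, 3.0):            # shared fluctuation hitting both pools
    enhanced = np.full(8, base + am_signal + common_drift)
    suppressed = np.full(8, base - am_signal + common_drift)
    readouts.append(opponent_readout(enhanced, suppressed))
```

The readout is unchanged by the shared drift: only the opponent (AM-driven) component survives the subtraction.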
Affiliation(s)
- Jeffrey S Johnson: Center for Neuroscience, University of California, Davis, California
- Mamiko Niwa: Center for Neuroscience, University of California, Davis, California
- Kevin N O'Connor: Center for Neuroscience, University of California, Davis, California; Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
- Mitchell L Sutter: Center for Neuroscience, University of California, Davis, California; Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
9
Elie JE, Theunissen FE. Invariant neural responses for sensory categories revealed by the time-varying information for communication calls. PLoS Comput Biol 2019; 15:e1006698. [PMID: 31557151] [PMCID: PMC6762074] [DOI: 10.1371/journal.pcbi.1006698]
Abstract
Although information theoretic approaches have been used extensively in the analysis of the neural code, they have yet to be used to describe how information is accumulated in time while sensory systems are categorizing dynamic sensory stimuli such as speech sounds or visual objects. Here, we present a novel method to estimate the cumulative information for stimuli or categories. We further define a time-varying categorical information index that, by comparing the information obtained for stimuli versus categories of these same stimuli, quantifies invariant neural representations. We use these methods to investigate the dynamic properties of avian cortical auditory neurons recorded in zebra finches that were listening to a large set of call stimuli sampled from the complete vocal repertoire of this species. We found that the time-varying rates carry 5 times more information than the mean firing rates even in the first 100 ms. We also found that cumulative information has slow time constants (100–600 ms) relative to the typical integration time of single neurons, reflecting the fact that the behaviorally informative features of auditory objects are time-varying sound patterns. When we correlated firing rates and information values, we found that average information correlates with average firing rate but that higher-rates found at the onset response yielded similar information values as the lower-rates found in the sustained response: the onset and sustained response of avian cortical auditory neurons provide similar levels of independent information about call identity and call-type. Finally, our information measures allowed us to rigorously define categorical neurons; these categorical neurons show a high degree of invariance for vocalizations within a call-type. Peak invariance is found around 150 ms after stimulus onset. Surprisingly, call-type invariant neurons were found in both primary and secondary avian auditory areas. 
Just as the recognition of faces requires neural representations that are invariant to scale and rotation, the recognition of behaviorally relevant auditory objects, such as spoken words, requires neural representations that are invariant to the speaker uttering the word and to his or her location. Here, we used information theory to investigate the time course of the neural representation of bird communication calls and of behaviorally relevant categories of these same calls: the call-types of the bird’s repertoire. We found that neurons in both the primary and secondary avian auditory cortex exhibit invariant responses to call renditions within a call-type, suggestive of a potential role for extracting the meaning of these communication calls. We also found that time plays an important role: first, neural responses carry significantly more information when represented by temporal patterns calculated at the small time scale of 10 ms than when measured as average rates and, second, this information accumulates in a non-redundant fashion up to long integration times of 600 ms. This rich temporal neural representation is matched to the temporal richness found in the communication calls of this species.
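The comparison of stimulus information versus category information that underpins the categorical index can be illustrated with discrete plug-in mutual information: a perfectly categorical neuron carries as much information about the category as about the stimulus, while a stimulus-specific neuron carries more about the stimulus. A toy sketch (invented stimuli and responses, not the paper's cumulative-information estimator):

```python
import numpy as np
from collections import Counter

def mutual_information(x, y):
    """Plug-in mutual information (bits) between two discrete sequences."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    # p(a,b) / (p(a) p(b)) simplifies to c*n / (count_x * count_y)
    return sum((c / n) * np.log2(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

stimuli = list(range(4)) * 25                 # 4 calls, 25 trials each
categories = [s // 2 for s in stimuli]        # 2 call-types, 2 calls each
resp_stimulus_specific = stimuli              # discriminates individual calls
resp_categorical = categories                 # invariant within a call-type
```

For the categorical response, stimulus and category information coincide (index = 1); for the stimulus-specific response, category information is only half the stimulus information (index = 0.5).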
Affiliation(s)
- Julie E. Elie
- Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, California, United States of America
- Department of Bioengineering, University of California Berkeley, Berkeley, California, United States of America
- Frédéric E. Theunissen
- Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, California, United States of America
- Department of Psychology, University of California Berkeley, Berkeley, California, United States of America
10
Xu N, Luo L, Wang Q, Li L. Binaural unmasking of the accuracy of envelope-signal representation in rat auditory cortex but not auditory midbrain. Hear Res 2019; 377:224-233. [PMID: 30991272 DOI: 10.1016/j.heares.2019.04.003] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/26/2018] [Revised: 03/25/2019] [Accepted: 04/03/2019] [Indexed: 01/16/2023]
Abstract
Accurate neural representations of acoustic signals under noisy conditions are critical for animals' survival. Detecting a signal against background noise can be improved by binaural hearing, particularly when an interaural-time-difference (ITD) disparity is introduced between the signal and the noise, a phenomenon known as binaural unmasking. Previous studies have mainly focused on the binaural unmasking effect on response magnitudes, and it is not clear whether binaural unmasking affects the accuracy of central representations of target acoustic signals and the relative contributions of different central auditory structures to this accuracy. Frequency following responses (FFRs), which are sustained phase-locked neural activities, can be used for measuring the accuracy of the representation of signals. Using intracranial recordings of local field potentials, this study aimed to assess whether the binaural unmasking effects include an improvement of the accuracy of neural representations of sound-envelope signals in the rat inferior colliculus (IC) and/or auditory cortex (AC). The results showed that (1) when a narrow-band noise was presented binaurally, the stimulus-response (S-R) coherence of the FFRs to the envelope (FFRenvelope) of the narrow-band noise recorded in the IC was higher than that recorded in the AC; (2) presenting a broad-band masking noise caused a larger reduction of the S-R coherence for FFRenvelope in the IC than in the AC; (3) introducing an ITD disparity between the narrow-band signal noise and the broad-band masking noise did not affect the IC S-R coherence, but enhanced both the AC S-R coherence and the coherence between the IC FFRenvelope and AC FFRenvelope. Thus, although the accuracy of representing envelope signals in the AC is lower than that in the IC, it can be binaurally unmasked, indicating a binaural-unmasking mechanism that is formed during signal transmission from the IC to the AC.
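The stimulus-response (S-R) coherence used here is, in essence, magnitude-squared coherence between the sound envelope and the recorded potential. A sketch with simulated signals, using scipy.signal.coherence; the envelope model, noise level, and band limit below are invented for illustration:

```python
import numpy as np
from scipy.signal import coherence

fs = 2000.0
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(1)

# Hypothetical sound envelope: slow amplitude fluctuations (smoothed noise).
envelope = np.abs(np.convolve(rng.standard_normal(t.size),
                              np.ones(50) / 50, mode="same"))

# Simulated FFR that tracks the envelope, plus additive neural noise.
ffr_tracking = envelope + 0.3 * rng.standard_normal(t.size)
# Control "response" unrelated to the stimulus.
ffr_unrelated = rng.standard_normal(t.size)

def mean_coherence(x, y, fs, fmax=50.0):
    """Average magnitude-squared coherence below fmax (Hz)."""
    f, cxy = coherence(x, y, fs=fs, nperseg=512)
    return float(cxy[f <= fmax].mean())

c_tracking = mean_coherence(envelope, ffr_tracking, fs)
c_control = mean_coherence(envelope, ffr_unrelated, fs)
```

A response that phase-locks to the envelope yields high low-frequency coherence; an unrelated response stays near the estimator's bias floor.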
Affiliation(s)
- Na Xu
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, 100080, China
- Lu Luo
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, 100080, China
- Qian Wang
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, 100080, China; Beijing Key Laboratory of Epilepsy, Epilepsy Center, Department of Functional Neurosurgery, Sanbo Brain Hospital, Capital Medical University, Beijing, 100093, China
- Liang Li
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, 100080, China; Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing, 100871, China; Beijing Institute for Brain Disorders, Beijing, 100096, China.
11
Insanally MN, Carcea I, Field RE, Rodgers CC, DePasquale B, Rajan K, DeWeese MR, Albanna BF, Froemke RC. Spike-timing-dependent ensemble encoding by non-classically responsive cortical neurons. eLife 2019; 8:42409. [PMID: 30688649 PMCID: PMC6391134 DOI: 10.7554/elife.42409] [Citation(s) in RCA: 30] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2018] [Accepted: 01/27/2019] [Indexed: 12/02/2022] Open
Abstract
Neurons recorded in behaving animals often do not discernibly respond to sensory input and are not overtly task-modulated. These non-classically responsive neurons are difficult to interpret and are typically excluded from analysis, confounding attempts to connect neural activity to perception and behavior. Here, we describe a trial-by-trial, spike-timing-based algorithm to reveal the coding capacities of these neurons in auditory and frontal cortex of behaving rats. Classically responsive and non-classically responsive cells contained significant information about sensory stimuli and behavioral decisions. Stimulus category was more accurately represented in frontal cortex than auditory cortex, via ensembles of non-classically responsive cells coordinating the behavioral meaning of spike timings on correct but not error trials. This unbiased approach allows the contribution of all recorded neurons – particularly those without obvious task-related, trial-averaged firing rate modulation – to be assessed for behavioral relevance on single trials. Neurons encode information in the form of electrical signals called spikes. Certain neurons increase the rate at which they produce spikes under specific circumstances, e.g., whenever an animal hears a particular sound. These neurons are said to be 'classically responsive'. But not all neurons behave in this way. Others produce spikes at a variable rate that does not obviously relate to the animal's behavior. These neurons are said to be 'non-classically responsive'. They are often omitted from analyses, despite typically outnumbering their classically responsive counterparts. So, what are these neurons doing? To find out, Insanally et al. trained rats to respond to sounds. The animals learned to poke their nose into a window whenever they heard a specific tone, and to avoid responding whenever they heard any other tone. As the rats performed the task, Insanally et al. recorded from neurons in two areas of the brain, the frontal cortex and the auditory cortex. A computer then analyzed the activity of individual neurons during each trial. As expected, the firing rate of non-classically responsive cells did not relate to the animals' behavior. But the timing of this firing did. The interval between spikes contained information about which tone the animals had heard and/or how they had responded. The cells worked together in groups to encode this information. Over the course of each trial, every neuron in the group varied the interval between its spikes. Eventually, the group reached a consensus, with all neurons using the same interval to represent information relevant to the task. Groups of neurons in the frontal cortex encoded more information about the category of the tone than those in the auditory cortex. By including all neurons – both classically and non-classically responsive – this model offers a more comprehensive view of how neural activity relates to behavior. This may in turn help us understand the variable and complex neural activity seen in people with sensory and cognitive disorders.
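The spike-timing decoding idea can be caricatured with interspike-interval (ISI) likelihoods: two toy stimuli evoke the same mean firing rate but different ISI statistics, so rate alone is uninformative while timing is decodable. This is an illustrative stand-in, not the authors' Bayesian single-trial algorithm; the gamma ISI models and trial counts are invented:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(6)
MEAN_ISI = 0.02  # both stimuli evoke ~50 spikes/s, so rate cannot decode

def trial_isis(stim, n=50):
    """ISIs for one trial: regular spiking (stim 0) vs irregular (stim 1),
    with identical mean ISI."""
    shape = 10.0 if stim == 0 else 1.0
    return rng.gamma(shape, MEAN_ISI / shape, size=n)

# Fit an ISI density per stimulus from training trials.
train0 = np.concatenate([trial_isis(0) for _ in range(20)])
train1 = np.concatenate([trial_isis(1) for _ in range(20)])
kde0, kde1 = gaussian_kde(train0), gaussian_kde(train1)

def decode(isis):
    """Pick the stimulus whose ISI density explains the trial better."""
    ll0 = np.sum(np.log(kde0(isis) + 1e-300))
    ll1 = np.sum(np.log(kde1(isis) + 1e-300))
    return 0 if ll0 > ll1 else 1

tests = [(s, trial_isis(s)) for s in [0, 1] * 25]
accuracy = np.mean([decode(isis) == s for s, isis in tests])
```

Because both ISI distributions share the same mean, a rate decoder sits at chance here, while the interval likelihoods separate the two stimuli almost perfectly.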
Affiliation(s)
- Michele N Insanally
- Skirball Institute for Biomolecular Medicine, New York University School of Medicine, New York, United States.,Neuroscience Institute, New York University School of Medicine, New York, United States.,Department of Otolaryngology, New York University School of Medicine, New York, United States.,Department of Neuroscience and Physiology, New York University School of Medicine, New York, United States.,Center for Neural Science, New York University, New York, United States
- Ioana Carcea
- Skirball Institute for Biomolecular Medicine, New York University School of Medicine, New York, United States.,Neuroscience Institute, New York University School of Medicine, New York, United States.,Department of Otolaryngology, New York University School of Medicine, New York, United States.,Department of Neuroscience and Physiology, New York University School of Medicine, New York, United States.,Center for Neural Science, New York University, New York, United States
- Rachel E Field
- Skirball Institute for Biomolecular Medicine, New York University School of Medicine, New York, United States.,Neuroscience Institute, New York University School of Medicine, New York, United States.,Department of Otolaryngology, New York University School of Medicine, New York, United States.,Department of Neuroscience and Physiology, New York University School of Medicine, New York, United States.,Center for Neural Science, New York University, New York, United States
- Chris C Rodgers
- Department of Neuroscience, Columbia University, New York, United States.,Kavli Institute of Brain Science, Columbia University, New York, United States
- Brian DePasquale
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Kanaka Rajan
- Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, United States.,Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, United States
- Michael R DeWeese
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, United States.,Department of Physics, University of California, Berkeley, Berkeley, United States
- Badr F Albanna
- Department of Natural Sciences, Fordham University, New York, United States
- Robert C Froemke
- Skirball Institute for Biomolecular Medicine, New York University School of Medicine, New York, United States.,Neuroscience Institute, New York University School of Medicine, New York, United States.,Department of Neuroscience and Physiology, New York University School of Medicine, New York, United States.,Center for Neural Science, New York University, New York, United States.,Howard Hughes Medical Institute, New York University School of Medicine, New York, United States
12
Peng F, Innes-Brown H, McKay CM, Fallon JB, Zhou Y, Wang X, Hu N, Hou W. Temporal Coding of Voice Pitch Contours in Mandarin Tones. Front Neural Circuits 2018; 12:55. [PMID: 30087597 PMCID: PMC6066958 DOI: 10.3389/fncir.2018.00055] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2017] [Accepted: 06/27/2018] [Indexed: 11/13/2022] Open
Abstract
Accurate perception of time-variant pitch is important for speech recognition, particularly for tonal languages such as Mandarin, in which different lexical tones convey different semantic information. Previous studies reported that the auditory nerve and cochlear nucleus can encode different pitches through phase-locked neural activities. However, little is known about how the inferior colliculus (IC) encodes the time-variant periodicity pitch of natural speech. In this study, the Mandarin syllable /ba/ pronounced with four lexical tones (flat, rising, falling then rising, and falling) served as stimuli. Local field potentials (LFPs) and single neuron activity were simultaneously recorded from 90 sites within the contralateral IC of six urethane-anesthetized and decerebrate guinea pigs in response to the four stimuli. Analysis of the temporal information of LFPs showed that 93% of the LFPs exhibited robust encoding of periodicity pitch. Pitch strength of LFPs derived from the autocorrelogram was significantly (p < 0.001) stronger for rising tones than for flat and falling tones. Pitch strength also increased significantly (p < 0.05) with characteristic frequency (CF). On the other hand, only 47% (42 of 90) of single neuron activities were significantly synchronized to the fundamental frequency of the stimulus, suggesting that the temporal spiking pattern of a single IC neuron could encode the time-variant periodicity pitch of speech robustly. The difference between the number of LFPs and single neurons that encode the time-variant F0 voice pitch supports the notion of a transition at the level of the IC from direct temporal coding in the spike trains of individual neurons to other forms of neural representation.
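Autocorrelogram-based pitch strength of the kind reported here can be sketched as the height of the largest normalized autocorrelation peak in the candidate-F0 lag range. This is a common operationalization on toy signals, not necessarily the paper's exact estimator:

```python
import numpy as np

def pitch_strength(x, fs, f0_min=60.0, f0_max=400.0):
    """Pitch strength from the autocorrelation function (ACF): height of
    the largest normalized ACF peak in the candidate F0 lag range.
    Returns (strength in [0, 1]-ish, estimated F0 in Hz)."""
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]
    acf = acf / acf[0]                       # normalize so ACF(0) = 1
    lo = int(fs / f0_max)                    # shortest candidate period
    hi = int(fs / f0_min)                    # longest candidate period
    lag = lo + int(np.argmax(acf[lo:hi]))
    return float(acf[lag]), fs / lag

fs = 8000.0
t = np.arange(int(0.1 * fs)) / fs
periodic = np.sin(2 * np.pi * 150.0 * t)     # strong 150-Hz periodicity
rng = np.random.default_rng(5)
noise = rng.standard_normal(t.size)          # aperiodic control

ps_periodic, f0_hat = pitch_strength(periodic, fs)
ps_noise, _ = pitch_strength(noise, fs)
```

The periodic signal yields a strength near 1 with a peak lag at roughly the 150-Hz period, while white noise yields a value near zero.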
Affiliation(s)
- Fei Peng
- Key Laboratory of Biorheological Science and Technology of Ministry of Education, Bioengineering College of Chongqing University, Chongqing, China
- Collaborative Innovation Center for Brain Science, Chongqing University, Chongqing, China
- Hamish Innes-Brown
- Bionics Institute, East Melbourne, VIC, Australia
- Department of Medical Bionics, University of Melbourne, Melbourne, VIC, Australia
- Colette M. McKay
- Bionics Institute, East Melbourne, VIC, Australia
- Department of Medical Bionics, University of Melbourne, Melbourne, VIC, Australia
- James B. Fallon
- Bionics Institute, East Melbourne, VIC, Australia
- Department of Medical Bionics, University of Melbourne, Melbourne, VIC, Australia
- Department of Otolaryngology, University of Melbourne, Melbourne, VIC, Australia
- Yi Zhou
- Chongqing Key Laboratory of Neurobiology, Department of Neurobiology, Third Military Medical University, Chongqing, China
- Xing Wang
- Key Laboratory of Biorheological Science and Technology of Ministry of Education, Bioengineering College of Chongqing University, Chongqing, China
- Chongqing Medical Electronics Engineering Technology Research Center, Chongqing University, Chongqing, China
- Ning Hu
- Key Laboratory of Biorheological Science and Technology of Ministry of Education, Bioengineering College of Chongqing University, Chongqing, China
- Collaborative Innovation Center for Brain Science, Chongqing University, Chongqing, China
- Wensheng Hou
- Key Laboratory of Biorheological Science and Technology of Ministry of Education, Bioengineering College of Chongqing University, Chongqing, China
- Collaborative Innovation Center for Brain Science, Chongqing University, Chongqing, China
- Chongqing Medical Electronics Engineering Technology Research Center, Chongqing University, Chongqing, China
13
Abstract
How the cerebral cortex encodes auditory features of biologically important sounds, including speech and music, is one of the most important questions in auditory neuroscience. The pursuit to understand related neural coding mechanisms in the mammalian auditory cortex can be traced back several decades to the early exploration of the cerebral cortex. Significant progress in this field has been made in the past two decades with new technical and conceptual advances. This article reviews the progress and challenges in this area of research.
Affiliation(s)
- Xiaoqin Wang
- Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21205, USA
14
Kostal L, D'Onofrio G. Coordinate invariance as a fundamental constraint on the form of stimulus-specific information measures. BIOLOGICAL CYBERNETICS 2018; 112:13-23. [PMID: 28856427 DOI: 10.1007/s00422-017-0729-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/20/2017] [Accepted: 08/16/2017] [Indexed: 06/07/2023]
Abstract
The value of Shannon's mutual information is commonly used to describe the total amount of information that the neural code transfers between the ensemble of stimuli and the ensemble of neural responses. In addition, it is often desirable to know which features of the stimulus or response are most informative. The literature offers several different decompositions of the mutual information into its stimulus or response-specific components, such as the specific surprise or the uncertainty reduction, but the number of mutually distinct measures is in fact infinite. We resolve this ambiguity by requiring the specific information measures to be invariant under invertible coordinate transformations of the stimulus and the response ensembles. We prove that the Kullback-Leibler divergence is then the only suitable measure of the specific information. On a more general level, we discuss the necessity and the fundamental aspects of the coordinate invariance as a selection principle. We believe that our results will encourage further research into invariant statistical methods for the analysis of neural coding.
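The result is easy to check numerically for a discrete ensemble: the specific information D(p(r|s) || p(r)) is non-negative for every stimulus, and its average weighted by p(s) recovers Shannon's mutual information. A minimal sketch with an arbitrary random joint distribution (the 3x4 ensemble size is invented):

```python
import numpy as np

# Arbitrary joint distribution p(s, r) over 3 stimuli and 4 responses.
rng = np.random.default_rng(2)
p_sr = rng.random((3, 4))
p_sr /= p_sr.sum()

p_s = p_sr.sum(axis=1)                 # marginal over stimuli
p_r = p_sr.sum(axis=0)                 # marginal over responses
p_r_given_s = p_sr / p_s[:, None]      # conditional p(r | s)

def specific_information(p_r_given_s_row, p_r):
    """Kullback-Leibler divergence D(p(r|s) || p(r)) in bits: the
    stimulus-specific information singled out by coordinate invariance."""
    pr = p_r_given_s_row
    return float(np.sum(pr * np.log2(pr / p_r)))

i_spec = np.array([specific_information(p_r_given_s[s], p_r)
                   for s in range(len(p_s))])

# Shannon's mutual information I(S;R) for comparison.
mi = float(np.sum(p_sr * np.log2(p_sr / np.outer(p_s, p_r))))
```

The p(s)-weighted sum of the specific informations equals I(S;R) exactly, by the chain of definitions rather than by approximation.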
Affiliation(s)
- Lubomir Kostal
- Institute of Physiology, Czech Academy of Sciences, Videnska 1083, 14220, Prague 4, Czech Republic.
- Giuseppe D'Onofrio
- Institute of Physiology, Czech Academy of Sciences, Videnska 1083, 14220, Prague 4, Czech Republic
15
Hoglen NEG, Larimer P, Phillips EAK, Malone BJ, Hasenstaub AR. Amplitude modulation coding in awake mice and squirrel monkeys. J Neurophysiol 2018; 119:1753-1766. [PMID: 29364073 DOI: 10.1152/jn.00101.2017] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Both mice and primates are used to model the human auditory system. The primate order possesses unique cortical specializations that govern auditory processing. Given the power of molecular and genetic tools available in the mouse model, it is essential to understand the similarities and differences in auditory cortical processing between mice and primates. To address this issue, we directly compared temporal encoding properties of neurons in the auditory cortex of awake mice and awake squirrel monkeys (SQMs). Stimuli were drawn from a sinusoidal amplitude modulation (SAM) paradigm, which has been used previously both to characterize temporal precision and to model the envelopes of natural sounds. Neural responses were analyzed with linear template-based decoders. In both species, spike timing information supported better modulation frequency discrimination than rate information, and multiunit responses generally supported more accurate discrimination than single-unit responses from the same site. However, cortical responses in SQMs supported better discrimination overall, reflecting superior temporal precision and greater rate modulation relative to the spontaneous baseline and suggesting that spiking activity in mouse cortex was less strictly regimented by incoming acoustic information. The quantitative differences we observed between SQM and mouse cortex support the idea that SQMs offer advantages for modeling precise responses to fast envelope dynamics relevant to human auditory processing. Nevertheless, our results indicate that cortical temporal processing is qualitatively similar in mice and SQMs and thus recommend the mouse model for mechanistic questions, such as development and circuit function, where its substantial methodological advantages can be exploited. NEW & NOTEWORTHY To understand the advantages of different model organisms, it is necessary to directly compare sensory responses across species. 
Contrasting temporal processing in auditory cortex of awake squirrel monkeys and mice, with parametrically matched amplitude-modulated tone stimuli, reveals a similar role of timing information in stimulus encoding. However, disparities in response precision and strength suggest that anatomical and biophysical differences between squirrel monkeys and mice produce quantitative but not qualitative differences in processing strategy.
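For reference, the SAM stimulus used in such paradigms has a closed form, s(t) = (1 + m sin(2π f_m t)) · sin(2π f_c t), where m is the modulation depth, f_m the modulation frequency, and f_c the carrier frequency. A generation sketch (all parameter values invented for illustration):

```python
import numpy as np

def sam_tone(fc, fm, depth, dur, fs):
    """Sinusoidally amplitude-modulated (SAM) tone:
    s(t) = (1 + depth * sin(2*pi*fm*t)) * sin(2*pi*fc*t).
    Returns (time axis, envelope, waveform)."""
    t = np.arange(int(dur * fs)) / fs
    envelope = 1.0 + depth * np.sin(2 * np.pi * fm * t)
    return t, envelope, envelope * np.sin(2 * np.pi * fc * t)

fs = 48000.0
t, env, s = sam_tone(fc=1000.0, fm=16.0, depth=0.5, dur=1.0, fs=fs)

# Modulation depth recovered from the envelope extrema:
# m = (max - min) / (max + min).
depth_hat = (env.max() - env.min()) / (env.max() + env.min())
```

The extrema formula inverts the definition, which is convenient when verifying the calibration of a delivered stimulus.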
Affiliation(s)
- Nerissa E G Hoglen
- Center for Integrative Neuroscience, University of California , San Francisco, California.,Department of Otolaryngology-Head and Neck Surgery, University of California , San Francisco, California.,Coleman Memorial Laboratory, University of California , San Francisco, California.,Kavli Institute for Fundamental Neuroscience, University of California , San Francisco, California.,Department of Psychiatry, University of California , San Francisco, California.,Neuroscience Graduate Program, University of California , San Francisco, California
- Phillip Larimer
- Center for Integrative Neuroscience, University of California , San Francisco, California.,Coleman Memorial Laboratory, University of California , San Francisco, California.,Department of Neurology, University of California , San Francisco, California
- Elizabeth A K Phillips
- Center for Integrative Neuroscience, University of California , San Francisco, California.,Department of Otolaryngology-Head and Neck Surgery, University of California , San Francisco, California.,Coleman Memorial Laboratory, University of California , San Francisco, California.,Neuroscience Graduate Program, University of California , San Francisco, California
- Brian J Malone
- Department of Otolaryngology-Head and Neck Surgery, University of California , San Francisco, California.,Coleman Memorial Laboratory, University of California , San Francisco, California.,Kavli Institute for Fundamental Neuroscience, University of California , San Francisco, California
- Andrea R Hasenstaub
- Center for Integrative Neuroscience, University of California , San Francisco, California.,Department of Otolaryngology-Head and Neck Surgery, University of California , San Francisco, California.,Coleman Memorial Laboratory, University of California , San Francisco, California.,Kavli Institute for Fundamental Neuroscience, University of California , San Francisco, California
16
Organization of auditory areas in the superior temporal gyrus of marmoset monkeys revealed by real-time optical imaging. Brain Struct Funct 2017; 223:1599-1614. [DOI: 10.1007/s00429-017-1574-0] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2017] [Accepted: 11/18/2017] [Indexed: 11/25/2022]
17
Luo M, Li Y, Zhong W. Do dorsal raphe 5-HT neurons encode “beneficialness”? Neurobiol Learn Mem 2016; 135:40-49. [DOI: 10.1016/j.nlm.2016.08.008] [Citation(s) in RCA: 39] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2016] [Revised: 08/15/2016] [Accepted: 08/17/2016] [Indexed: 10/21/2022]
18
Overton JA, Recanzone GH. Effects of aging on the response of single neurons to amplitude-modulated noise in primary auditory cortex of rhesus macaque. J Neurophysiol 2016; 115:2911-23. [PMID: 26936987 DOI: 10.1152/jn.01098.2015] [Citation(s) in RCA: 37] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2015] [Accepted: 03/02/2016] [Indexed: 12/13/2022] Open
Abstract
Temporal envelope processing is critical for speech comprehension, which is known to be affected by normal aging. Whereas the macaque is an excellent animal model for human cerebral cortical function, few studies have investigated neural processing in the auditory cortex of aged, nonhuman primates. Therefore, we investigated age-related changes in the spiking activity of neurons in primary auditory cortex (A1) of two aged macaque monkeys using amplitude-modulated (AM) noise and compared these responses with data from a similar study in young monkeys (Yin P, Johnson JS, O'Connor KN, Sutter ML. J Neurophysiol 105: 582-600, 2011). For each neuron, we calculated firing rate (rate code) and phase-locking using phase-projected vector strength (temporal code). We made several key findings where neurons in old monkeys differed from those in young monkeys. Old monkeys had higher spontaneous and driven firing rates, fewer neurons that synchronized with the AM stimulus, and fewer neurons that had differential responses to AM stimuli with both a rate and temporal code. Finally, whereas rate and temporal tuning functions were positively correlated in young monkeys, this relationship was lost in older monkeys at both the population and single neuron levels. These results are consistent with considerable evidence from rodents and primates of an age-related decrease in inhibition throughout the auditory pathway. Furthermore, this dual coding in A1 is thought to underlie the capacity to encode multiple features of an acoustic stimulus. The apparent loss of ability to encode AM with both rate and temporal codes may have consequences for stream segregation and effective speech comprehension in complex listening environments.
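The temporal code here is quantified with phase-projected vector strength, a variant (Yin et al. 2011) of the classical vector strength that additionally penalizes trials whose mean phase disagrees with the across-trial mean phase. The classical quantity it builds on is simple; a sketch with toy spike trains (the 16-Hz modulation frequency and spike counts are invented):

```python
import numpy as np

def vector_strength(spike_times, mod_freq):
    """Classical vector strength: 1 = spikes perfectly phase-locked to the
    modulation cycle, ~0 = spikes uniformly spread across the cycle."""
    phases = 2 * np.pi * mod_freq * np.asarray(spike_times)
    return float(np.abs(np.mean(np.exp(1j * phases))))

fm = 16.0                                  # Hz, AM modulation frequency
# Perfectly locked train: one spike at the same phase of every cycle.
locked = np.arange(64) / fm
# Unlocked train: the same number of spikes at random times over 4 s.
rng = np.random.default_rng(3)
unlocked = rng.uniform(0, 64 / fm, size=64)

vs_locked = vector_strength(locked, fm)
vs_unlocked = vector_strength(unlocked, fm)
```

Locked spiking gives a vector strength of 1, whereas random spiking gives a value near zero; the phase-projected variant behaves similarly but cannot be inflated by spurious phase agreement at low spike counts.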
Affiliation(s)
- Gregg H Recanzone
- Center for Neuroscience, University of California, Davis, California; and Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
19
Lee CM, Osman AF, Volgushev M, Escabí MA, Read HL. Neural spike-timing patterns vary with sound shape and periodicity in three auditory cortical fields. J Neurophysiol 2016; 115:1886-904. [PMID: 26843599 DOI: 10.1152/jn.00784.2015] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2015] [Accepted: 01/29/2016] [Indexed: 11/22/2022] Open
Abstract
Mammals perceive a wide range of temporal cues in natural sounds, and the auditory cortex is essential for their detection and discrimination. The rat primary (A1), ventral (VAF), and caudal suprarhinal (cSRAF) auditory cortical fields have separate thalamocortical pathways that may support unique temporal cue sensitivities. To explore this, we record responses of single neurons in the three fields to variations in envelope shape and modulation frequency of periodic noise sequences. Spike rate, relative synchrony, and first-spike latency metrics have previously been used to quantify neural sensitivities to temporal sound cues; however, such metrics do not measure absolute spike timing of sustained responses to sound shape. To address this, in this study we quantify two forms of spike-timing precision, jitter, and reliability. In all three fields, we find that jitter decreases logarithmically with increase in the basis spline (B-spline) cutoff frequency used to shape the sound envelope. In contrast, reliability decreases logarithmically with increase in sound envelope modulation frequency. In A1, jitter and reliability vary independently, whereas in ventral cortical fields, jitter and reliability covary. Jitter time scales increase (A1 < VAF < cSRAF) and modulation frequency upper cutoffs decrease (A1 > VAF > cSRAF) with ventral progression from A1. These results suggest a transition from independent encoding of shape and periodicity sound cues on short time scales in A1 to a joint encoding of these same cues on longer time scales in ventral nonprimary cortices.
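The two precision measures can be operationalized simply for a single response event: jitter as the across-trial standard deviation of the event time, and reliability as the fraction of trials in which the event occurs at all. The paper's estimators work on full spike trains, so the sketch below, with invented toy numbers, is only illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy raster: a response event nominally at 50 ms, occurring with ~2 ms
# timing scatter on 18 of 20 trials.
event_times = 0.050 + 0.002 * rng.standard_normal(18)
trials = [np.array([t]) for t in event_times] + [np.array([])] * 2

# Jitter: s.d. of the event time across the trials that responded (s).
observed = np.concatenate([tr for tr in trials if tr.size > 0])
jitter = float(np.std(observed))

# Reliability: fraction of trials in which the event occurred at all.
reliability = float(np.mean([tr.size > 0 for tr in trials]))
```

The two quantities are deliberately independent: a neuron can fire on every trial but at scattered times (high reliability, high jitter), or rarely but precisely (low reliability, low jitter), which is why the paper tracks them separately against envelope shape and modulation frequency.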
Affiliation(s)
- Christopher M Lee
- Department of Psychology, University of Connecticut, Storrs, Connecticut
- Ahmad F Osman
- Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut; and
- Maxim Volgushev
- Department of Psychology, University of Connecticut, Storrs, Connecticut
- Monty A Escabí
- Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut; and Department of Electrical and Computer Engineering, University of Connecticut, Storrs, Connecticut
- Heather L Read
- Department of Psychology, University of Connecticut, Storrs, Connecticut; Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut; and
20
Cai R, Caspary DM. GABAergic inhibition shapes SAM responses in rat auditory thalamus. Neuroscience 2015; 299:146-55. [PMID: 25943479 PMCID: PMC4457678 DOI: 10.1016/j.neuroscience.2015.04.062] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2014] [Revised: 04/27/2015] [Accepted: 04/27/2015] [Indexed: 01/03/2023]
Abstract
Auditory thalamus (medial geniculate body [MGB]) receives ascending inhibitory GABAergic inputs from the inferior colliculus (IC) and descending GABAergic projections from the thalamic reticular nucleus (TRN), with both inputs postulated to play a role in shaping temporal responses. Previous studies suggested that enhanced processing of temporally rich stimuli occurs at the level of the MGB, with our recent study demonstrating enhanced GABA sensitivity in MGB compared to IC. The present study used sinusoidal amplitude-modulated (SAM) stimuli to generate modulation transfer functions (MTFs), to examine the role of GABAergic inhibition in shaping the response properties of MGB single units in anesthetized rats. Rate MTFs (rMTFs) were parsed into "bandpass (BP)", "mixed (Mixed)", "highpass (HP)" or "atypical" response types, with most units showing the Mixed response type. GABAA receptor blockade with iontophoretic application of the GABAA receptor (GABAAR) antagonist gabazine (GBZ) selectively altered the response properties of most MGB neurons examined. Mixed and HP units showed significant GABAAR-mediated SAM-evoked rate response changes at higher modulation frequencies (fms), which were also altered by N-methyl-d-aspartic acid (NMDA) receptor blockade with (2R)-amino-5-phosphonopentanoate (AP5). BP units, and the lower arm of Mixed units, responded to GABAAR blockade with increased responses to SAM stimuli at or near the rate best modulation frequency (rBMF). The ability of GABA circuits to shape responses at higher modulation frequencies is an emergent property of MGB units, not observed at lower levels of the auditory pathway, and may reflect activation of MGB NMDA receptors (Rabang and Bartlett, 2011; Rabang et al., 2012). Together, GABAARs exert selective rate control over selected fms, generally without changing the units' response type.
These results showed that coding of modulated stimuli at the level of the auditory thalamus is, at least in part, strongly controlled by GABA neurotransmission, in delicate balance with glutamatergic neurotransmission.
Affiliation(s)
- R Cai
- Southern Illinois University School of Medicine, Department of Pharmacology, Springfield, IL, United States
- D M Caspary
- Southern Illinois University School of Medicine, Department of Pharmacology, Springfield, IL, United States.
21
Bendor D. The role of inhibition in a computational model of an auditory cortical neuron during the encoding of temporal information. PLoS Comput Biol 2015; 11:e1004197. [PMID: 25879843 PMCID: PMC4400160 DOI: 10.1371/journal.pcbi.1004197] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2014] [Accepted: 02/12/2015] [Indexed: 11/19/2022] Open
Abstract
In auditory cortex, temporal information within a sound is represented by two complementary neural codes: a temporal representation based on stimulus-locked firing and a rate representation, where discharge rate co-varies with the timing between acoustic events but lacks a stimulus-synchronized response. Using a computational neuronal model, we find that stimulus-locked responses are generated when sound-evoked excitation is combined with strong, delayed inhibition. In contrast to this, a non-synchronized rate representation is generated when the net excitation evoked by the sound is weak, which occurs when excitation is coincident and balanced with inhibition. Using single-unit recordings from awake marmosets (Callithrix jacchus), we validate several model predictions, including differences in the temporal fidelity, discharge rates and temporal dynamics of stimulus-evoked responses between neurons with rate and temporal representations. Together these data suggest that feedforward inhibition provides a parsimonious explanation of the neural coding dichotomy observed in auditory cortex.
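The model's central dichotomy can be sketched with conductance waveforms alone: excitation followed by strong delayed inhibition leaves a transient window of net drive (supporting stimulus-locked spiking), whereas coincident balanced inhibition cancels the drive (consistent with a weak, non-synchronized response). The alpha-function kernels, delays, and gains below are invented for illustration, not taken from the paper's model:

```python
import numpy as np

fs = 1000.0                       # 1-ms resolution
t = np.arange(0, 0.2, 1 / fs)     # 200 ms around one acoustic event

def alpha_kernel(t, tau):
    """Peak-normalized alpha-function conductance waveform."""
    g = (t / tau) * np.exp(1 - t / tau)
    return np.where(t >= 0, g, 0.0)

exc = alpha_kernel(t - 0.010, 0.005)               # excitation at 10 ms

# Strong, delayed inhibition: a transient window of net excitation
# survives before inhibition arrives -> stimulus-locked response.
inh_delayed = 1.5 * alpha_kernel(t - 0.015, 0.005)
net_delayed = exc - inh_delayed

# Coincident, balanced inhibition: net drive never becomes strongly
# positive -> weak, non-synchronized response.
inh_coincident = 1.0 * alpha_kernel(t - 0.010, 0.005)
net_coincident = exc - inh_coincident
```

With the 5-ms inhibitory delay the net drive briefly reaches the full excitatory peak before being clamped; with coincident balanced inhibition it is cancelled throughout, which is the qualitative distinction the model uses to separate synchronized from non-synchronized cortical responses.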
Affiliation(s)
- Daniel Bendor, Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London, United Kingdom
22
Niwa M, O'Connor KN, Engall E, Johnson JS, Sutter ML. Hierarchical effects of task engagement on amplitude modulation encoding in auditory cortex. J Neurophysiol 2014; 113:307-27. [PMID: 25298387] [DOI: 10.1152/jn.00458.2013]
Abstract
We recorded from middle lateral belt (ML) and primary (A1) auditory cortical neurons while animals discriminated amplitude-modulated (AM) sounds and also while they sat passively. Engagement in AM discrimination improved ML and A1 neurons' ability to discriminate AM with both firing rate and phase-locking; however, task engagement affected neural AM discrimination differently in the two fields. The results suggest that these two areas utilize different AM coding schemes: a "single mode" in A1 that relies on increased activity for AM relative to unmodulated sounds and a "dual-polar mode" in ML that uses both increases and decreases in neural activity to encode modulation. In the dual-polar ML code, nonsynchronized responses might play a special role. The results are consistent with findings in the primary and secondary somatosensory cortices during discrimination of vibrotactile modulation frequency, implicating a common scheme in the hierarchical processing of temporal information among different modalities. The time course of activity differences between behaving and passive conditions was also distinct in A1 and ML and may have implications for auditory attention. At modulation depths ≥ 16% (approximately behavioral threshold), A1 neurons' improvement in distinguishing AM from unmodulated noise is relatively constant or improves slightly with increasing modulation depth. In ML, improvement during engagement is most pronounced near threshold and disappears at highly suprathreshold depths. This ML effect is evident later in the stimulus, and mainly in nonsynchronized responses. This suggests that attention-related increases in activity are stronger or longer-lasting for more difficult stimuli in ML.
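Phase-locking of the kind quantified in this abstract is commonly measured with vector strength. A minimal sketch of the standard formula follows; this is a generic measure, not necessarily the exact statistic used in this study.

```python
import cmath

def vector_strength(spike_times_s, mod_freq_hz):
    """Vector strength of spike times relative to the AM cycle:
    1.0 = perfect phase locking, values near 0 = no locking."""
    if not spike_times_s:
        return 0.0
    # Each spike contributes a unit phasor at its phase within the modulation cycle.
    phasors = [cmath.exp(2j * cmath.pi * mod_freq_hz * t) for t in spike_times_s]
    return abs(sum(phasors)) / len(phasors)
```

For a 10 Hz modulation, spikes at exact cycle multiples give a vector strength near 1, while spikes spread evenly across the cycle give a value near 0.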
Affiliation(s)
- Mamiko Niwa, Kevin N O'Connor, Elizabeth Engall, Jeffrey S Johnson, and M L Sutter: Center for Neuroscience and Department of Neurobiology, Physiology, and Behavior, University of California, Davis, California
23
Luo L, Wen Q, Ren J, Hendricks M, Gershow M, Qin Y, Greenwood J, Soucy ER, Klein M, Smith-Parker HK, Calvo AC, Colón-Ramos DA, Samuel ADT, Zhang Y. Dynamic encoding of perception, memory, and movement in a C. elegans chemotaxis circuit. Neuron 2014; 82:1115-28. [PMID: 24908490] [DOI: 10.1016/j.neuron.2014.05.010]
Abstract
Brain circuits endow behavioral flexibility. Here, we study circuits encoding flexible chemotaxis in C. elegans, where the animal navigates up or down NaCl gradients (positive or negative chemotaxis) to reach the salt concentration of previous growth (the set point). The ASER sensory neuron mediates positive and negative chemotaxis by regulating the frequency and direction of reorientation movements in response to salt gradients. Both salt gradients and set point memory are encoded in ASER temporal activity patterns. Distinct temporal activity patterns in interneurons immediately downstream of ASER encode chemotactic movement decisions. Different interneuron combinations regulate positive versus negative chemotaxis. We conclude that sensorimotor pathways are segregated immediately after the primary sensory neuron in the chemotaxis circuit, and sensory representation is rapidly transformed to motor representation at the first interneuron layer. Our study reveals compact encoding of perception, memory, and locomotion in an experience-dependent navigational behavior in C. elegans.
Affiliation(s)
- Linjiao Luo, Key Laboratory of Modern Acoustics, Ministry of Education, Department of Physics, Nanjing University, China; Center for Brain Science, Harvard University, Cambridge, MA 02138, USA; Department of Physics, Harvard University, Cambridge, MA 02138, USA
- Quan Wen, Center for Brain Science, Harvard University, Cambridge, MA 02138, USA; Department of Physics, Harvard University, Cambridge, MA 02138, USA; Department of Neurobiology and Biophysics, School of Life Sciences, University of Science and Technology of China, China
- Jing Ren, Center for Brain Science, Harvard University, Cambridge, MA 02138, USA; Department of Organismic and Evolutionary Biology, Harvard University, Cambridge, MA 02138, USA
- Michael Hendricks, Center for Brain Science, Harvard University, Cambridge, MA 02138, USA; Department of Organismic and Evolutionary Biology, Harvard University, Cambridge, MA 02138, USA
- Marc Gershow, Center for Brain Science, Harvard University, Cambridge, MA 02138, USA; Department of Physics, Harvard University, Cambridge, MA 02138, USA
- Yuqi Qin, Center for Brain Science, Harvard University, Cambridge, MA 02138, USA; Department of Organismic and Evolutionary Biology, Harvard University, Cambridge, MA 02138, USA
- Joel Greenwood, Center for Brain Science, Harvard University, Cambridge, MA 02138, USA
- Edward R Soucy, Center for Brain Science, Harvard University, Cambridge, MA 02138, USA
- Mason Klein, Center for Brain Science, Harvard University, Cambridge, MA 02138, USA; Department of Physics, Harvard University, Cambridge, MA 02138, USA
- Heidi K Smith-Parker, Department of Biochemistry and Molecular Biophysics, Columbia University, New York, NY 10032, USA
- Ana C Calvo, Program in Cellular Neuroscience, Neurodegeneration and Repair, Department of Cell Biology, Yale University School of Medicine, New Haven, CT 06536, USA
- Daniel A Colón-Ramos, Program in Cellular Neuroscience, Neurodegeneration and Repair, Department of Cell Biology, Yale University School of Medicine, New Haven, CT 06536, USA
- Aravinthan D T Samuel, Center for Brain Science, Harvard University, Cambridge, MA 02138, USA; Department of Physics, Harvard University, Cambridge, MA 02138, USA
- Yun Zhang, Center for Brain Science, Harvard University, Cambridge, MA 02138, USA; Department of Organismic and Evolutionary Biology, Harvard University, Cambridge, MA 02138, USA
24
Schafer PB, Jin DZ. Noise-Robust Speech Recognition Through Auditory Feature Detection and Spike Sequence Decoding. Neural Comput 2014; 26:523-56. [DOI: 10.1162/neco_a_00557]
Abstract
Speech recognition in noisy conditions is a major challenge for computer systems, but the human brain performs it routinely and accurately. Automatic speech recognition (ASR) systems that are inspired by neuroscience can potentially bridge the performance gap between humans and machines. We present a system for noise-robust isolated word recognition that works by decoding sequences of spikes from a population of simulated auditory feature-detecting neurons. Each neuron is trained to respond selectively to a brief spectrotemporal pattern, or feature, drawn from the simulated auditory nerve response to speech. The neural population conveys the time-dependent structure of a sound by its sequence of spikes. We compare two methods for decoding the spike sequences—one using a hidden Markov model–based recognizer, the other using a novel template-based recognition scheme. In the latter case, words are recognized by comparing their spike sequences to template sequences obtained from clean training data, using a similarity measure based on the length of the longest common sub-sequence. Using isolated spoken digits from the AURORA-2 database, we show that our combined system outperforms a state-of-the-art robust speech recognizer at low signal-to-noise ratios. Both the spike-based encoding scheme and the template-based decoding offer gains in noise robustness over traditional speech recognition methods. Our system highlights potential advantages of spike-based acoustic coding and provides a biologically motivated framework for robust ASR development.
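The template-based decoder's similarity measure rests on the length of the longest common subsequence, which can be computed with standard dynamic programming. A minimal sketch over symbol sequences follows; how spike sequences are mapped to symbols is abstracted away here.

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of sequences a and b
    (classic O(len(a)*len(b)) dynamic program)."""
    m, n = len(a), len(b)
    # dp[i][j] = LCS length of a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1   # extend a common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]
```

A word would then be recognized by the template whose spike-label sequence attains the largest `lcs_length` with the test sequence.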
Affiliation(s)
- Phillip B. Schafer and Dezhe Z. Jin: Department of Physics and Center for Neural Engineering, The Pennsylvania State University, University Park, PA 16802, U.S.A.
25
Abstract
To understand the strategies used by the brain to analyze complex environments, we must first characterize how the features of sensory stimuli are encoded in the spiking of neuronal populations. Characterizing a population code requires identifying the temporal precision of spiking and the extent to which spiking is correlated, both between cells and over time. In this study, we characterize the population code for speech in the gerbil inferior colliculus (IC), the hub of the auditory system where inputs from parallel brainstem pathways are integrated for transmission to the cortex. We find that IC spike trains can carry information about speech with sub-millisecond precision, and, consequently, that the temporal correlations imposed by refractoriness can play a significant role in shaping spike patterns. We also find that, in contrast to most other brain areas, the noise correlations between IC cells are extremely weak, indicating that spiking in the population is conditionally independent. These results demonstrate that the problem of understanding the population coding of speech can be reduced to the problem of understanding the stimulus-driven spiking of individual cells, suggesting that a comprehensive model of the subcortical processing of speech may be attainable in the near future.
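The noise correlations referred to in this abstract are typically operationalized as the Pearson correlation of trial-to-trial spike-count fluctuations for a repeated stimulus. A minimal sketch under that assumption:

```python
def noise_correlation(counts_a, counts_b):
    """Pearson correlation of trial-to-trial spike-count fluctuations of two
    cells recorded over repeated presentations of the same stimulus."""
    n = len(counts_a)
    mean_a = sum(counts_a) / n
    mean_b = sum(counts_b) / n
    # Residuals: per-trial deviations from each cell's mean response.
    dev_a = [x - mean_a for x in counts_a]
    dev_b = [y - mean_b for y in counts_b]
    cov = sum(x * y for x, y in zip(dev_a, dev_b))
    var_a = sum(x * x for x in dev_a)
    var_b = sum(y * y for y in dev_b)
    return cov / (var_a * var_b) ** 0.5
```

Weak noise correlations, as reported here for IC, mean this value sits near zero across cell pairs, so trial-to-trial fluctuations are approximately conditionally independent.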
26
Abstract
The encoding of sensory information by populations of cortical neurons forms the basis for perception but remains poorly understood. To understand the constraints of cortical population coding we analyzed neural responses to natural sounds recorded in auditory cortex of primates (Macaca mulatta). We estimated stimulus information while varying the composition and size of the considered population. Consistent with previous reports, we found that when choosing subpopulations randomly from the recorded ensemble, the average population information increases steadily with population size. This scaling was explained by a model assuming that each neuron carried equal amounts of information, and that any overlap between the information carried by each neuron arises purely from random sampling within the stimulus space. However, when studying subpopulations selected to optimize information for each given population size, the scaling of information was strikingly different: a small fraction of temporally precise cells carried the vast majority of information. This scaling could be explained by an extended model, assuming that the amount of information carried by individual neurons was highly nonuniform, with few neurons carrying large amounts of information. Importantly, these optimal populations can be determined by a single biophysical marker, the neuron's encoding time scale, allowing their detection and readout within biologically realistic circuits. These results show that extrapolations of population information based on random ensembles may overestimate the population size required for stimulus encoding, and that sensory cortical circuits may process information using small but highly informative ensembles.
27
Abolafia JM, Martinez-Garcia M, Deco G, Sanchez-Vives MV. Variability and information content in auditory cortex spike trains during an interval-discrimination task. J Neurophysiol 2013; 110:2163-74. [PMID: 23945780] [DOI: 10.1152/jn.00381.2013]
Abstract
The processing of temporal information is central to audition. In this study, we recorded single-unit activity from the auditory cortex of rats while the animals performed an interval-discrimination task. The animals had to decide whether two auditory stimuli were separated by either 150 or 300 ms and nose-poke to the left or to the right accordingly. The spike firing of single neurons in the auditory cortex was then compared in engaged vs. idle brain states. We found that spike firing variability measured with the Fano factor was markedly reduced, not only during stimulation, but also in between stimuli in engaged trials. We next explored whether this decrease in variability was associated with increased information encoding. Our information theory analysis revealed increased information content in auditory responses during engagement compared with idle states, in particular in the responses to task-relevant stimuli. Altogether, we demonstrate that task engagement significantly modulates the coding properties of auditory cortical neurons during an interval-discrimination task.
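The Fano factor used in this abstract as the variability measure is the variance of spike counts across repeated trials divided by their mean; a minimal sketch:

```python
def fano_factor(spike_counts):
    """Fano factor = variance / mean of spike counts across repeated trials.
    Equals 1 for a Poisson process; lower values indicate more regular firing,
    as reported here for engaged trials."""
    n = len(spike_counts)
    mean = sum(spike_counts) / n
    var = sum((c - mean) ** 2 for c in spike_counts) / n  # population variance
    return var / mean
```
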
Affiliation(s)
- Juan M Abolafia, Institut d'Investigacions Biomèdiques August Pi i Sunyer, Barcelona, Spain
28
Zheng Y, Escabí MA. Proportional spike-timing precision and firing reliability underlie efficient temporal processing of periodicity and envelope shape cues. J Neurophysiol 2013; 110:587-606. [PMID: 23636724] [DOI: 10.1152/jn.01080.2010]
Abstract
Temporal sound cues are essential for sound recognition, pitch, rhythm, and timbre perception, yet how auditory neurons encode such cues is the subject of ongoing debate. Rate coding theories propose that temporal sound features are represented by rate-tuned modulation filters. However, overwhelming evidence also suggests that precise spike timing is an essential attribute of the neural code. Here we demonstrate that single neurons in the auditory midbrain employ a proportional code in which spike-timing precision and firing reliability covary with the sound envelope cues to provide an efficient representation of the stimulus. Spike-timing precision varied systematically with the timescale and shape of the sound envelope and yet was largely independent of the sound modulation frequency, a prominent cue for pitch. In contrast, spike-count reliability was strongly affected by the modulation frequency. Spike-timing precision extended from sub-millisecond for brief transient sounds up to tens of milliseconds for sounds with slowly varying envelopes. Information theoretic analysis further confirmed that spike-timing precision depends strongly on the sound envelope shape, while firing reliability was strongly affected by the sound modulation frequency. Both the information efficiency and the total information were limited by the firing reliability and spike-timing precision in a manner that reflected the sound structure. This result supports a temporal coding strategy in the auditory midbrain in which proportional changes in spike-timing precision and firing reliability can efficiently signal shape and periodicity temporal cues.
Affiliation(s)
- Y Zheng, Department of Biomedical Engineering, University of Connecticut, Storrs, CT 06269, USA
29
Wang N, Bo L, Zhang F, Tan X, Yang X, Xiao Z. An approach to identify the functional transduction and transmission of an activated pathway. Chinese Science Bulletin 2013. [DOI: 10.1007/s11434-012-5452-0]
30
Abstract
Pitch, our perception of how high or low a sound is on a musical scale, is a fundamental perceptual attribute of sounds and is important for both music and speech. After more than a century of research, the exact mechanisms used by the auditory system to extract pitch are still being debated. Theoretically, pitch can be computed using either spectral or temporal acoustic features of a sound. We have investigated how cues derived from the temporal envelope and spectrum of an acoustic signal are used for pitch extraction in the common marmoset (Callithrix jacchus), a vocal primate species, by measuring pitch discrimination behaviorally and examining pitch-selective neuronal responses in auditory cortex. We find that pitch is extracted by marmosets using temporal envelope cues for lower pitch sounds composed of higher-order harmonics, whereas spectral cues are used for higher pitch sounds with lower-order harmonics. Our data support dual-pitch processing mechanisms, originally proposed by psychophysicists based on human studies, whereby pitch is extracted using a combination of temporal envelope and spectral cues.
31
Tonotopic-column-dependent variability of neural encoding in the auditory cortex of rats. Neuroscience 2012; 223:377-87. [DOI: 10.1016/j.neuroscience.2012.07.051]
32
de la Mothe LA, Blumell S, Kajikawa Y, Hackett TA. Cortical connections of auditory cortex in marmoset monkeys: lateral belt and parabelt regions. Anat Rec (Hoboken) 2012; 295:800-21. [PMID: 22461313] [DOI: 10.1002/ar.22451]
Abstract
The current working model of primate auditory cortex is constructed from a number of studies of both New World and Old World monkeys. It includes three levels of processing. A primary level, the core region, is surrounded both medially and laterally by a secondary belt region. A third level of processing, the parabelt region, is located lateral to the belt. The marmoset monkey (Callithrix jacchus jacchus) has become an important model system to study auditory processing, but its anatomical organization has not been fully established. In previous studies, we focused on the architecture and connections of the core and medial belt areas (de la Mothe et al., 2006a, J Comp Neurol 496:27-71; de la Mothe et al., 2006b, J Comp Neurol 496:72-96). In this study, the corticocortical connections of the lateral belt and parabelt were examined in the marmoset. Tracers were injected into both rostral and caudal portions of the lateral belt and parabelt. Both regions revealed topographic connections along the rostrocaudal axis, where caudal areas of injection had stronger connections with caudal areas, and rostral areas of injection with rostral areas. The lateral belt had strong connections with the core, belt, and parabelt, whereas the parabelt had strong connections with the belt but not the core. Label in the core from injections in the parabelt was significantly reduced or absent, consistent with the idea that the parabelt relies mainly on the belt for its cortical input. In addition, the present and previous studies indicate hierarchical principles of anatomical organization in the marmoset that are consistent with those observed in other primates.
Affiliation(s)
- Lisa A de la Mothe, Department of Psychology, Tennessee State University, Nashville, Tennessee 37209, USA
33
Johnson JS, Yin P, O'Connor KN, Sutter ML. Ability of primary auditory cortical neurons to detect amplitude modulation with rate and temporal codes: neurometric analysis. J Neurophysiol 2012; 107:3325-41. [PMID: 22422997] [DOI: 10.1152/jn.00812.2011]
Abstract
Amplitude modulation (AM) is a common feature of natural sounds, and its detection is biologically important. Even though most sounds are not fully modulated, the majority of physiological studies have focused on fully modulated (100% modulation depth) sounds. We presented AM noise at a range of modulation depths to awake macaque monkeys while recording from neurons in primary auditory cortex (A1). The ability of neurons to detect partial AM with rate and temporal codes was assessed with signal detection methods. On average, single-cell synchrony was as or more sensitive than spike count in modulation detection. Cells are less sensitive to modulation depth if tested away from their best modulation frequency, particularly for temporal measures. Mean neural modulation detection thresholds in A1 are not as sensitive as behavioral thresholds, but with phase locking the most sensitive neurons are more sensitive, suggesting that for temporal measures the lower-envelope principle cannot account for thresholds. Three methods of preanalysis pooling of spike trains (multiunit, similar to convergence from a cortical column; within cell, similar to convergence of cells with matched response properties; across cell, similar to indiscriminate convergence of cells) all result in an increase in neural sensitivity to modulation depth for both temporal and rate codes. For the across-cell method, pooling of a few dozen cells can result in detection thresholds that approximate those of the behaving animal. With synchrony measures, indiscriminate pooling results in sensitive detection of modulation frequencies between 20 and 60 Hz, suggesting that differences in AM response phase are minor in A1.
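Neurometric thresholds of the kind derived in this abstract come from signal detection analysis; one standard ingredient is the ROC area computed from spike-count distributions on modulated versus unmodulated trials. The sketch below is a generic rank-based AUC, not the authors' exact pipeline.

```python
def neurometric_auc(counts_modulated, counts_unmodulated):
    """Area under the ROC curve for discriminating modulated from unmodulated
    trials by spike count: the probability that a randomly chosen
    modulated-trial count exceeds a randomly chosen unmodulated-trial count,
    with ties counting one half. 0.5 = chance, 1.0 = perfect detection."""
    wins = 0.0
    for m in counts_modulated:
        for u in counts_unmodulated:
            if m > u:
                wins += 1.0
            elif m == u:
                wins += 0.5
    return wins / (len(counts_modulated) * len(counts_unmodulated))
```

A neuron's detection threshold can then be read off as the shallowest modulation depth at which this AUC exceeds a criterion (e.g., 0.75); the same recipe applies to pooled multi-cell counts.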
Affiliation(s)
- Jeffrey S Johnson, Center for Neuroscience, University of California at Davis, Davis, CA 95618, USA
34
Amarasingham A, Harrison MT, Hatsopoulos NG, Geman S. Conditional modeling and the jitter method of spike resampling. J Neurophysiol 2011; 107:517-31. [PMID: 22031767] [DOI: 10.1152/jn.00633.2011]
Abstract
The existence and role of fine-temporal structure in the spiking activity of central neurons is the subject of an enduring debate among physiologists. To a large extent, the problem is a statistical one: what inferences can be drawn from neurons monitored in the absence of full control over their presynaptic environments? In principle, properly crafted resampling methods can still produce statistically correct hypothesis tests. We focus on the approach to resampling known as jitter. We review a wide range of jitter techniques, illustrated by both simulation experiments and selected analyses of spike data from motor cortical neurons. We rely on an intuitive and rigorous statistical framework known as conditional modeling to reveal otherwise hidden assumptions and to support precise conclusions. Among other applications, we review statistical tests for exploring any proposed limit on the rate of change of spiking probabilities, exact tests for the significance of repeated fine-temporal patterns of spikes, and the construction of acceptance bands for testing any purported relationship between sensory or motor variables and synchrony or other fine-temporal events.
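One member of the jitter family reviewed in this abstract redraws each spike time uniformly within a fixed jitter bin, preserving the per-bin spike count while destroying finer temporal structure. A minimal sketch follows; the bin width and interface are illustrative, not the authors' specific implementation.

```python
import random

def interval_jitter(spike_times, bin_width, rng):
    """Resample a spike train by redrawing each spike time uniformly within
    its own jitter bin of width bin_width. Per-bin spike counts (and hence
    slow rate modulations) are preserved; sub-bin timing is randomized."""
    jittered = []
    for t in spike_times:
        bin_start = (t // bin_width) * bin_width   # left edge of the spike's bin
        jittered.append(bin_start + rng.random() * bin_width)
    return jittered
```

Repeating this resampling many times yields a surrogate distribution for any fine-temporal statistic (e.g., a synchrony count); the observed value is then compared against that distribution to test significance.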
Affiliation(s)
- Asohan Amarasingham, Department of Mathematics, The City College of New York, and Program in Cognitive Neuroscience, The Graduate Center, City University of New York, New York, New York, USA
35
Abolafia JM, Martinez-Garcia M, Deco G, Sanchez-Vives MV. Slow Modulation of Ongoing Discharge in the Auditory Cortex during an Interval-Discrimination Task. Front Integr Neurosci 2011; 5:60. [PMID: 22022308] [PMCID: PMC3197084] [DOI: 10.3389/fnint.2011.00060]
Abstract
In this study, we recorded single-unit activity from rat auditory cortex while the animals performed an interval-discrimination task. The animals had to decide whether two auditory stimuli were separated by either 150 or 300 ms, and go to the left or right nose poke accordingly. Spontaneous firing in between auditory responses was compared in the attentive versus non-attentive brain states. We describe the firing rate modulation detected during intervals in which there was no auditory stimulation. Nearly 18% of neurons (n = 14) showed a prominent discharge during the interstimulus interval, in the form of an upward or downward ramp towards the second auditory stimulus. These patterns of spontaneous activity were often modulated in attentive versus passive trials. Modulation of the spontaneous firing rate during the task was observed not only between auditory stimuli, but also in the interval preceding the stimulus. These slow modulatory components could be locally generated or could result from a top-down influence originating in higher-order association areas. Such a discharge may be related to the computation of the interval time and contribute to the perception of the auditory stimulus.
Affiliation(s)
- Juan M Abolafia, Institut d'Investigacions Biomèdiques August Pi i Sunyer, Barcelona, Spain
36
Brasselet R, Johansson RS, Arleo A. Quantifying Neurotransmission Reliability Through Metrics-Based Information Analysis. Neural Comput 2011; 23:852-81. [DOI: 10.1162/neco_a_00099]
Abstract
We set forth an information-theoretical measure to quantify neurotransmission reliability while taking into full account the metrical properties of the spike train space. This parametric information analysis relies on similarity measures induced by the metrical relations between neural responses as spikes flow in. Thus, in order to assess the entropy, the conditional entropy, and the overall information transfer, this method does not require any a priori decoding algorithm to partition the space into equivalence classes. It therefore allows the optimal parameters of a class of distances to be determined with respect to information transmission. To validate the proposed information-theoretical approach, we study precise temporal decoding of human somatosensory signals recorded using microneurography experiments. For this analysis, we employ a similarity measure based on the Victor-Purpura spike train metrics. We show that with appropriate parameters of this distance, the relative spike times of the mechanoreceptors’ responses convey enough information to perform optimal discrimination—defined as maximum metrical information and zero conditional entropy—of 81 distinct stimuli within 40 ms of the first afferent spike. The proposed information-theoretical measure proves to be a suitable generalization of Shannon mutual information in order to consider the metrics of temporal codes explicitly. It allows neurotransmission reliability to be assessed in the presence of large spike train spaces (e.g., neural population codes) with high temporal precision.
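The Victor-Purpura metric on which this abstract's similarity measure is based assigns cost 1 to inserting or deleting a spike and cost q*|dt| to shifting a spike by dt; it can be computed with an edit-distance dynamic program. A minimal sketch:

```python
def victor_purpura(s1, s2, q):
    """Victor-Purpura distance between two sorted spike trains (times in s):
    the minimal total cost of transforming s1 into s2, where inserting or
    deleting a spike costs 1 and shifting a spike by dt costs q*|dt|."""
    n1, n2 = len(s1), len(s2)
    # dp[i][j] = distance between the first i spikes of s1 and first j of s2.
    dp = [[0.0] * (n2 + 1) for _ in range(n1 + 1)]
    for i in range(1, n1 + 1):
        dp[i][0] = float(i)               # delete all i spikes
    for j in range(1, n2 + 1):
        dp[0][j] = float(j)               # insert all j spikes
    for i in range(1, n1 + 1):
        for j in range(1, n2 + 1):
            dp[i][j] = min(dp[i - 1][j] + 1.0,                       # delete
                           dp[i][j - 1] + 1.0,                       # insert
                           dp[i - 1][j - 1]
                           + q * abs(s1[i - 1] - s2[j - 1]))         # shift
    return dp[n1][n2]
```

The parameter q sets the temporal precision of the code: q = 0 reduces the distance to a pure spike-count comparison, while large q makes even small timing differences as costly as deleting and reinserting the spike.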
Affiliation(s)
- Romain Brasselet, Centre National de la Recherche Scientifique, Université Pierre et Marie Curie, UMR 7102, F75005 Paris, France
- Roland S. Johansson, Umeå University, Department of Integrative Medical Biology, SE-901 87 Umeå, Sweden
- Angelo Arleo, Centre National de la Recherche Scientifique, Université Pierre et Marie Curie, UMR 7102, F75005 Paris, France
37
Distributed representation of tone frequency in highly decodable spatio-temporal activity in the auditory cortex. Neural Netw 2011; 24:321-32. [PMID: 21277165] [DOI: 10.1016/j.neunet.2010.12.010]
Abstract
Although the place code of tone frequency, or tonotopic map, has been widely accepted in the auditory cortex, tone-evoked activation becomes less frequency-specific at moderate or high sound pressure levels. This implies that sound frequency is not represented by a simple place code but that the information is distributed spatio-temporally irrespective of the focal activation. In this study, using a decoding-based analysis, we investigated multi-unit activities in the auditory cortices of anesthetized rats to elucidate how a tone frequency is represented in the spatio-temporal neural pattern. We attempted sequential dimensionality reduction (SDR), a specific implementation of recursive feature elimination (RFE) with support vector machine (SVM), to identify the optimal spatio-temporal window patterns for decoding test frequency. SDR selected approximately a quarter of the windows, and SDR-identified window patterns led to significantly better decoding than spatial patterns, in which temporal structures were eliminated, or high-spike-rate patterns, in which windows with high spike rates were selectively extracted. Thus, the test frequency is also encoded in temporal as well as spatial structures of neural activities and low-spike-rate windows. Yet, SDR recruited more high-spike-rate windows than low-spike-rate windows, resulting in a highly dispersive pattern that probably offers an advantage of discrimination ability. Further investigation of SVM weights suggested that low-spike-rate windows play significant roles in fine frequency differentiation. These findings support the hypothesis that the auditory cortex adopts a distributed code in tone frequency representation, in which high- and low-spike-rate activities play mutually complementary roles.
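SDR is described in this abstract as an implementation of recursive feature elimination; its greedy backward-elimination skeleton can be sketched as below. This is a generic stand-in with an arbitrary scoring callback, not the paper's SVM-based implementation with spatio-temporal windows.

```python
def recursive_feature_elimination(score, features, keep):
    """Greedy backward elimination: repeatedly drop the feature whose removal
    hurts the score least, until only `keep` features remain.
    `score` maps a feature list to a decoding-quality number (higher = better)."""
    selected = list(features)
    while len(selected) > keep:
        best_drop, best_score = None, None
        for f in selected:
            trial = [g for g in selected if g != f]   # candidate subset without f
            s = score(trial)
            if best_score is None or s > best_score:
                best_drop, best_score = f, s
        selected.remove(best_drop)
    return selected
```

In the paper's setting, `features` would be the spatio-temporal response windows and `score` the cross-validated SVM decoding accuracy for the test frequencies.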
38
Tardif SD, Mansfield KG, Ratnam R, Ross CN, Ziegler TE. The marmoset as a model of aging and age-related diseases. ILAR J 2011; 52:54-65. [PMID: 21411858] [PMCID: PMC3775658] [DOI: 10.1093/ilar.52.1.54]
Abstract
The common marmoset (Callithrix jacchus) is poised to become a standard nonhuman primate aging model. With an average lifespan of 5 to 7 years and a maximum lifespan of 16½ years, marmosets are the shortest-lived anthropoid primates. They display age-related changes in pathologies that mirror those seen in humans, such as cancer, amyloidosis, diabetes, and chronic renal disease. They also display predictable age-related differences in lean mass, calf circumference, circulating albumin, hemoglobin, and hematocrit. Features of spontaneous sensory and neurodegenerative change--for example, reduced neurogenesis, β-amyloid deposition in the cerebral cortex, loss of calbindin D(28k) binding, and evidence of presbycusis--appear between the ages of 7 and 10 years. Variation among colonies in the age at which neurodegenerative change occurs suggests the interesting possibility that marmosets could be specifically managed to produce earlier versus later occurrence of degenerative conditions associated with differing rates of damage accumulation. In addition to the established value of the marmoset as a model of age-related neurodegenerative change, this primate can serve as a model of the integrated effects of aging and obesity on metabolic dysfunction, as it displays evidence of such dysfunction associated with high body weight as early as 6 to 8 years of age.
Affiliation(s)
- Suzette D Tardif, Barshop Institute for Longevity and Aging Studies, University of Texas Health Science Center at San Antonio, 15355 Lambda Drive, STCBM Bldg 2.200.08, San Antonio, TX 78245, USA.
39
Arleo A, Nieus T, Bezzi M, D'Errico A, D'Angelo E, Coenen OJMD. How synaptic release probability shapes neuronal transmission: information-theoretic analysis in a cerebellar granule cell. Neural Comput 2010; 22:2031-58. [PMID: 20438336] [DOI: 10.1162/neco_a_00006-arleo]
Abstract
A nerve cell receives multiple inputs from upstream neurons by way of its synapses. Neuron processing functions are thus influenced by changes in the biophysical properties of the synapse, such as long-term potentiation (LTP) or depression (LTD). This observation has opened new perspectives on the biophysical basis of learning and memory, but its quantitative impact on the information transmission of a neuron remains only partially understood. One major obstacle is the high dimensionality of the neuronal input-output space, which makes it unfeasible to perform a thorough computational analysis of a neuron with multiple synaptic inputs. In this work, information theory was employed to characterize the information transmission of a cerebellar granule cell over a region of its excitatory input space following synaptic changes. Granule cells have a small dendritic tree (on average, they receive only four mossy fiber afferents), which greatly bounds the input combinatorial space, reducing the complexity of information-theoretic calculations. Numerical simulations and LTP experiments quantified how changes in neurotransmitter release probability (p) modulated information transmission of a cerebellar granule cell. Numerical simulations showed that p shaped the neurotransmission landscape in unexpected ways. As p increased, the optimality of the information transmission of most stimuli did not increase strictly monotonically; instead it reached a plateau at intermediate p levels. Furthermore, our results showed that the spatiotemporal characteristics of the inputs determine the effect of p on neurotransmission, thus permitting the selection of distinctive preferred stimuli for different p values. These selective mechanisms may have important consequences on the encoding of cerebellar mossy fiber inputs and the plasticity and computation at the next circuit stage, including the parallel fiber-Purkinje cell synapses.
Affiliation(s)
- Angelo Arleo, CNRS, UPMC, UMR 7102 Neurobiology of Adaptive Processes, Paris, France.
40
Bendor D, Wang X. Neural coding of periodicity in marmoset auditory cortex. J Neurophysiol 2010; 103:1809-22. [PMID: 20147419] [DOI: 10.1152/jn.00281.2009]
Abstract
Pitch, our perception of how high or low a sound is on a musical scale, crucially depends on a sound's periodicity. If an acoustic signal is temporally jittered so that it becomes aperiodic, the pitch will no longer be perceivable even though other acoustical features that normally covary with pitch are unchanged. Previous electrophysiological studies investigating pitch have typically used only periodic acoustic stimuli, and as such these studies cannot distinguish between a neural representation of pitch and an acoustical feature that only correlates with pitch. In this report, we examine in the auditory cortex of awake marmoset monkeys (Callithrix jacchus) the neural coding of a periodicity's repetition rate, an acoustic feature that covaries with pitch. We first examine if individual neurons show similar repetition rate tuning for different periodic acoustic signals. We next measure how sensitive these neural representations are to the temporal regularity of the acoustic signal. We find that neurons throughout auditory cortex covary their firing rate with the repetition rate of an acoustic signal. However, similar repetition rate tuning across acoustic stimuli and sensitivity to temporal regularity were generally only observed in a small group of neurons found near the anterolateral border of primary auditory cortex, the location of a previously identified putative pitch processing center. These results suggest that although the encoding of repetition rate is a general component of auditory cortical processing, the neural correlate of periodicity is confined to a special class of pitch-selective neurons within the putative pitch processing center of auditory cortex.
Affiliation(s)
- Daniel Bendor, Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Bldg. 46, Rm. 5233, 43 Vassar St., Cambridge, MA, USA.
41
Kayser C, Logothetis NK, Panzeri S. Visual enhancement of the information representation in auditory cortex. Curr Biol 2009; 20:19-24. [PMID: 20036538] [DOI: 10.1016/j.cub.2009.10.068]
Abstract
Combining information across different sensory modalities can greatly facilitate our ability to detect, discriminate, or recognize sensory stimuli. Although this process of sensory integration has usually been attributed to classical association cortices, recent work has demonstrated that neuronal activity in early sensory cortices can also be influenced by cross-modal inputs. Here we demonstrate that such "early" multisensory influences enhance the information carried by neurons about multisensory stimuli. By recording in auditory cortex of alert monkeys watching naturalistic audiovisual stimuli, we quantified the effect of visual influences on the trial-to-trial response variability and on the amount of information carried by neural responses. We found that firing rates and precisely timed spike patterns of individual units became more reliable across trials and time when multisensory stimuli were presented, leading to greater encoded stimulus information. Importantly, this multisensory information enhancement was much reduced when the visual stimulus did not match the sound. These results demonstrate that multisensory influences enhance information processing already at early stages in cortex, suggesting that sensory integration is a distributed process, commencing in lower sensory areas and continuing in higher association cortices.
Affiliation(s)
- Christoph Kayser, Max Planck Institute for Biological Cybernetics, Spemannstrasse 38, 72076 Tübingen, Germany.
42
Gourévitch B, Eggermont JJ. Maximum decoding abilities of temporal patterns and synchronized firings: application to auditory neurons responding to click trains and amplitude modulated white noise. J Comput Neurosci 2009; 29:253-277. [PMID: 19373548] [DOI: 10.1007/s10827-009-0149-3]
Abstract
Simultaneous recordings of an increasing number of neurons have recently become available, but few methods have been proposed to handle this activity. Here, we extract and investigate all the possible temporal neural activity patterns based on synchronized firings of neurons recorded on multiple electrodes, or based on bursts of single-electrode activity in cat primary auditory cortex. We apply this to responses to periodic click trains or sinusoidal amplitude modulated noise by obtaining for each pattern its temporal modulation transfer function. An algorithm that maximizes the mutual information between all patterns and stimuli subsequently leads to the identification of patterns that optimally decode modulation frequency (MF). We show that stimulus information contained in multi-electrode synchronized firing is not redundant with single-electrode firings and leads to improved efficiency of MF decoding. We also show that the combined use of firing rate and temporal codes leads to a better discrimination of the MF.
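As an illustration of the selection criterion described above (maximizing the mutual information between response patterns and stimuli), a minimal plug-in MI estimate over paired discrete labels might look as follows; the stimulus set and pattern labelings are invented for the example, and the authors' actual algorithm and binning are not reproduced:

```python
import numpy as np

def mutual_information(stim, resp):
    """Plug-in estimate of I(S;R) in bits from paired discrete labels."""
    stim, resp = np.asarray(stim), np.asarray(resp)
    _, s_idx = np.unique(stim, return_inverse=True)
    _, r_idx = np.unique(resp, return_inverse=True)
    joint = np.zeros((s_idx.max() + 1, r_idx.max() + 1))
    np.add.at(joint, (s_idx, r_idx), 1)          # joint count table
    p = joint / joint.sum()
    ps = p.sum(axis=1, keepdims=True)            # marginal over stimuli
    pr = p.sum(axis=0, keepdims=True)            # marginal over responses
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (ps @ pr)[nz])).sum())

# Invented example: four equiprobable modulation frequencies and two
# candidate response "patterns"; the pattern with maximal I(S;R) is kept.
stim = np.array([1, 1, 2, 2, 4, 4, 8, 8])        # MF labels (Hz)
pattern_a = np.array([0, 0, 0, 1, 1, 1, 1, 1])   # coarse, lossy labeling
pattern_b = stim.copy()                          # perfectly informative labeling
best_name, best_resp = max([("A", pattern_a), ("B", pattern_b)],
                           key=lambda kv: mutual_information(stim, kv[1]))
print(best_name, mutual_information(stim, best_resp))
```

Here pattern B recovers the full 2 bits of stimulus entropy, so the selection keeps it over the lossy pattern A.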
Affiliation(s)
- Boris Gourévitch and Jos J Eggermont, Department of Physiology and Biophysics and Department of Psychology, University of Calgary, 2500 University Drive N.W., Calgary, AB, T2N 1N4, Canada.
44
Wang X, Lu T, Bendor D, Bartlett E. Neural coding of temporal information in auditory thalamus and cortex. Neuroscience 2008; 154:294-303. [PMID: 18555164] [DOI: 10.1016/j.neuroscience.2008.03.065]
Abstract
How the brain processes temporal information embedded in sounds is a core question in auditory research. This article synthesizes recent studies from our laboratory regarding neural representations of time-varying signals in auditory cortex and thalamus in awake marmoset monkeys. Findings from these studies show that 1) the primary auditory cortex (A1) uses a temporal representation to encode slowly varying acoustic signals and a firing rate-based representation to encode rapidly changing acoustic signals, 2) the dual temporal-rate representations in A1 represent a progressive transformation from the auditory thalamus, 3) firing rate-based representations in the form of monotonic rate-code are also found to encode slow temporal repetitions in the range of acoustic flutter in A1 and more prevalently in the cortical fields rostral to A1 in the core region of marmoset auditory cortex, suggesting further temporal-to-rate transformations in higher cortical areas. These findings indicate that the auditory cortex forms internal representations of temporal characteristics of sounds that are no longer faithful replicas of their acoustic structures. We suggest that such transformations are necessary for the auditory cortex to perform a wide range of functions including sound segmentation, object processing and multi-sensory integration.
Affiliation(s)
- X Wang, Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University School of Medicine, 720 Rutland Avenue, Traylor 410, Baltimore, MD 21205, USA.
45
Gourévitch B, Le Bouquin Jeannès R, Faucon G, Liégeois-Chauvel C. Temporal envelope processing in the human auditory cortex: response and interconnections of auditory cortical areas. Hear Res 2008; 237:1-18. [DOI: 10.1016/j.heares.2007.12.003]
46
Kajikawa Y, de la Mothe LA, Blumell S, Sterbing-D'Angelo SJ, D'Angelo W, Camalier CR, Hackett TA. Coding of FM sweep trains and twitter calls in area CM of marmoset auditory cortex. Hear Res 2008; 239:107-25. [PMID: 18342463] [DOI: 10.1016/j.heares.2008.01.015]
Abstract
The primate auditory cortex contains three interconnected regions (core, belt, parabelt), which are further subdivided into discrete areas. The caudomedial area (CM) is one of about seven areas in the belt region that has been the subject of recent anatomical and physiological studies conducted to define the functional organization of auditory cortex. The main goal of the present study was to examine temporal coding in area CM of marmoset monkeys using two related classes of acoustic stimuli: (1) marmoset twitter calls; and (2) frequency-modulated (FM) sweep trains modeled after the twitter call. The FM sweep trains were presented at repetition rates between 1 and 24 Hz, overlapping the natural phrase frequency of the twitter call (6-8 Hz). Multiunit recordings in CM revealed robust phase-locked responses to twitter calls and FM sweep trains. For the latter, phase-locking quantified by vector strength (VS) was best at repetition rates between 2 and 8 Hz, with a mean of about 5 Hz. Temporal response patterns were not strictly phase-locked, but exhibited dynamic features that varied with the repetition rate. To examine these properties, classification of the repetition rate from the temporal response pattern evoked by twitter calls and FM sweep trains was examined by Fisher's linear discrimination analysis (LDA). Response classification by LDA revealed that information was encoded not only by phase-locking, but also other components of the temporal response pattern. For FM sweep trains, classification was best for repetition rates from 2 to 8 Hz. Thus, the majority of neurons in CM can accurately encode the envelopes of temporally complex stimuli over the behaviorally-relevant range of the twitter call. This suggests that CM could be engaged in processing that requires relatively precise temporal envelope discrimination, and supports the hypothesis that CM is positioned at an early stage of processing in the auditory cortex of primates.
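Vector strength, the phase-locking measure quantified above, has a standard Goldberg-Brown definition: the length of the mean resultant vector of spike times converted to phases of the repetition rate. A minimal sketch with synthetic spike trains (the repetition rate, jitter, and spike counts are illustrative assumptions, not data from the study):

```python
import numpy as np

def vector_strength(spike_times, rep_rate):
    """Goldberg-Brown vector strength: 1 = perfect phase locking, ~0 = none."""
    phases = 2.0 * np.pi * rep_rate * np.asarray(spike_times)
    return float(np.abs(np.exp(1j * phases).mean()))

rng = np.random.default_rng(1)

# Spikes locked to a 6 Hz train (one spike per cycle, 2 ms jitter) versus
# spikes unrelated to the train, over the same 10 s span.
cycles = np.arange(60) / 6.0
locked = cycles + rng.normal(0.0, 0.002, cycles.size)
unlocked = rng.uniform(0.0, 10.0, 60)

print(vector_strength(locked, 6.0))    # near 1
print(vector_strength(unlocked, 6.0))  # near 0
```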
47
Thivierge JP. Higher derivatives of ERP responses to cross-modality processing. Neuroinformatics 2008; 6:35-46. [PMID: 18193398] [DOI: 10.1007/s12021-007-9007-5]
Abstract
Determining the links between cognitive processes and neuroelectrical brain activity (i.e., event-related potentials, ERPs) depends strongly on our understanding of how this activity fluctuates in response to stimuli; however, the way in which changes in ERP amplitudes can accelerate and decelerate over time has received only scant attention. The present study demonstrates that moment-to-moment changes (i.e., derivatives) of ERP responses convey information that is not readily accessible from the amplitude of response. Subjects exposed to visual and auditory stimuli either alone (unimodal) or combined (crossmodal) yielded different responses according to particular derivatives of ERP activation. In particular, an effect of cross-modality integration (stronger activation for crossmodal compared to unimodal stimuli) was detected in the higher derivatives of activation of a number of electrode sites spanning a fronto-centro-parietal distribution; in most sites, no such effect was detected in the amplitude of waveforms itself. These results suggest that information may be carried by the higher derivatives of ERP responses, and that distinct topographic distributions are associated with different derivatives of response. These different derivatives of response may in turn relate to different strategies for sensory processing in the brain, and in particular reflect a fundamental mode of information processing by time derivatives previously reported in cortex.
Affiliation(s)
- Jean-Philippe Thivierge, Department of Psychological and Brain Sciences, Indiana University, 1101 East Tenth Street, Bloomington, IN 47405, USA.
48
Tan X, Wang X, Yang W, Xiao Z. First spike latency and spike count as functions of tone amplitude and frequency in the inferior colliculus of mice. Hear Res 2007; 235:90-104. [PMID: 18037595] [DOI: 10.1016/j.heares.2007.10.002]
Abstract
Spike count (SC), or spike rate, and first spike latency (FSL) are both used to evaluate the responses of neurons to the amplitudes and frequencies of acoustic stimuli. However, it is unclear which is more suitable for this purpose, since systematic comparisons between SC and FSL tuning across amplitudes and frequencies are scarce. This study systematically compared the precision and stability (i.e., the resolution and the coefficient of variation, CV) of SC and FSL as functions of frequency and amplitude in the inferior colliculus of mice. The results showed that: (1) the SC-amplitude functions were of diverse shape (monotonic, nonmonotonic, and saturated), whereas the FSL-amplitude functions were in close registration, with FSL decreasing as amplitude increased; no paradoxical (FSL increasing with amplitude) or constant (FSL independent of amplitude) neurons were observed; (2) the discriminability (resolution) of differences in amplitude and frequency based on FSL was higher than that based on SC; (3) the CVs of FSL for low-amplitude stimuli were smaller than those of SC; (4) the fraction of neurons for which BF = CF (within ±500 Hz) obtained from FSL was higher than that from SC at any sound amplitude. Therefore, SC and FSL may vary independently of each other and represent different parameters of an acoustic stimulus, but FSL, with its precision and stability, appears to be the better parameter for evaluating a neuron's response to frequency and amplitude in the mouse inferior colliculus.
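For concreteness, both measures compared above can be read directly off a list of spike times relative to stimulus onset. A minimal sketch, with an invented analysis window and invented example trials (not data from the study):

```python
import numpy as np

def first_spike_latency(spike_times, onset=0.0):
    """Latency (s) of the first spike at or after stimulus onset; NaN if none."""
    t = np.asarray(spike_times, dtype=float)
    after = t[t >= onset]
    return float(after.min() - onset) if after.size else float("nan")

def spike_count(spike_times, onset=0.0, window=0.1):
    """Number of spikes in [onset, onset + window)."""
    t = np.asarray(spike_times, dtype=float)
    return int(((t >= onset) & (t < onset + window)).sum())

# Invented single-trial spike times (s) at two tone amplitudes: the louder
# tone shortens FSL even though SC barely changes.
quiet = [0.032, 0.055, 0.090]
loud = [0.012, 0.040, 0.061, 0.095]
print(first_spike_latency(quiet), spike_count(quiet))  # 0.032 3
print(first_spike_latency(loud), spike_count(loud))    # 0.012 4
```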
Affiliation(s)
- Xiaodong Tan, Physiology Department, Basic Medical School, Southern Medical University, Guangzhou 510515, China.
49
Lenarz T, Lim HH, Reuter G, Patrick JF, Lenarz M. The auditory midbrain implant: a new auditory prosthesis for neural deafness-concept and device description. Otol Neurotol 2007; 27:838-43. [PMID: 16936570] [DOI: 10.1097/01.mao.0000232010.01116.e9]
Abstract
The auditory midbrain implant (AMI) is a new central auditory prosthesis designed for penetrating stimulation of the human inferior colliculus. The major group of candidates for the AMI consists of neurofibromatosis type 2 (NF2) patients who develop neural deafness because of growth and/or surgical removal of bilateral acoustic neuromas. Because of the absence of a viable auditory nerve, these patients cannot benefit from cochlear implants. An alternative solution has been the auditory brainstem implant (ABI), which stimulates the cochlear nucleus. However, speech perception performance in NF2 ABI patients has been limited. The fact that the ABI is able to produce high levels of speech perception in nontumor patients (with inaccessible cochleae or posttraumatic damage to the cochlear nerve) suggests that limitations in ABI performance in NF2 patients may be associated with cochlear nucleus damage caused by the tumors or the tumor removal process. Thus, stimulation of the auditory midbrain proximal to the damaged cochlear nucleus may be a better alternative for hearing restoration in NF2 patients. We propose the central nucleus of the inferior colliculus (ICC) as the potential site. A penetrating electrode array aligned along the well-defined tonotopic gradient of the ICC should selectively activate different frequency regions, which is an important element for supporting good speech understanding. The goal of this article is to present the ICC as an alternative site for an auditory implant for NF2 patients and to describe the design of the first human prototype AMI. Practical considerations for implementation of the AMI will also be discussed.
Affiliation(s)
- Thomas Lenarz, Otorhinolaryngology Department, Medical University of Hannover, Germany.
50
Malone BJ, Scott BH, Semple MN. Dynamic amplitude coding in the auditory cortex of awake rhesus macaques. J Neurophysiol 2007; 98:1451-74. [PMID: 17615123] [DOI: 10.1152/jn.01203.2006]
Abstract
In many animals, the information most important for processing communication sounds, including speech, consists of temporal envelope cues below approximately 20 Hz. Physiological studies, however, have typically emphasized the upper limits of modulation encoding. Responses to sinusoidal AM (SAM) are generally summarized by modulation transfer functions (MTFs), which emphasize tuning to modulation frequency rather than the representation of the instantaneous stimulus amplitude. Unfortunately, MTFs fail to capture important but nonlinear aspects of amplitude coding in the central auditory system. We focus on an alternative data representation, the modulation period histogram (MPH), which depicts the spike train folded on the modulation period of the SAM stimulus. At low modulation frequencies, the fluctuations of stimulus amplitude in decibels are robustly encoded by the cycle-by-cycle response dynamics evident in the MPH. We show that all of the parameters that define a SAM stimulus (carrier frequency, carrier level, modulation frequency, and modulation depth) are reflected in the shape of cortical MPHs. In many neurons that are nonmonotonically tuned for sound amplitude, the representation of modulation frequency is typically sacrificed to preserve the mapping between the instantaneous discharge rate and the instantaneous stimulus amplitude, resulting in two response modes per modulation cycle. This behavior, as well as the relatively poor tuning of cortical MTFs, suggests that auditory cortical neurons are not well suited for operating as a "modulation filterbank." Instead, our results suggest that, below approximately 20 Hz, the processing of modulated signals is better described as envelope shape discrimination rather than modulation frequency extraction.
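The modulation period histogram discussed above amounts to folding the spike train on the modulation period and histogramming the resulting phases. A minimal sketch with a synthetic response (the bin count, modulation frequency, jitter, and spike pattern are illustrative assumptions, not data from the study):

```python
import numpy as np

def modulation_period_histogram(spike_times, mod_freq, n_bins=16):
    """Fold spike times on the SAM modulation period and histogram the phases."""
    phase = (np.asarray(spike_times) * mod_freq) % 1.0   # cycles in [0, 1)
    return np.histogram(phase, bins=n_bins, range=(0.0, 1.0))

rng = np.random.default_rng(2)

# Synthetic response to 5 Hz SAM: 3 spikes per cycle near a fixed envelope
# phase (50 ms after each cycle start, 10 ms jitter), over 20 cycles.
spikes = np.concatenate([k / 5.0 + rng.normal(0.05, 0.01, 3) for k in range(20)])
hist, edges = modulation_period_histogram(spikes, 5.0)
print(hist)  # counts pile up near phase 0.25
```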
Affiliation(s)
- Brian J Malone, Center for Neural Science, New York University, New York, NY 10003, USA.