1. Lopez Espejo M, Schwartz ZP, David SV. Spectral tuning of adaptation supports coding of sensory context in auditory cortex. PLoS Comput Biol 2019; 15:e1007430. PMID: 31626624; PMCID: PMC6821137; DOI: 10.1371/journal.pcbi.1007430.
Abstract
Perception of vocalizations and other behaviorally relevant sounds requires integrating acoustic information over hundreds of milliseconds. Sound-evoked activity in auditory cortex typically has much shorter latency, but the acoustic context, i.e., sound history, can modulate sound-evoked activity over longer periods. Contextual effects are attributed to modulatory phenomena, such as stimulus-specific adaptation and contrast gain control. However, an encoding model that links context to natural sound processing has yet to be established. We tested whether a model in which spectrally tuned inputs undergo adaptation mimicking short-term synaptic plasticity (STP) can account for contextual effects during natural sound processing. Single-unit activity was recorded from primary auditory cortex of awake ferrets during presentation of noise with natural temporal dynamics and fully natural sounds. Encoding properties were characterized by a standard linear-nonlinear spectro-temporal receptive field (LN) model and variants that incorporated STP-like adaptation. In the adapting models, STP was applied either globally across all input spectral channels or locally to subsets of channels. For most neurons, models incorporating local STP predicted neural activity as well as or better than LN and global STP models. The strength of nonlinear adaptation varied across neurons. Within neurons, adaptation was generally stronger for spectral channels with excitatory than inhibitory gain. Neurons showing improved STP model performance also tended to undergo stimulus-specific adaptation, suggesting a common mechanism for these phenomena. When STP models were compared between passive and active behavior conditions, response gain often changed, but average STP parameters were stable.
Thus, spectrally and temporally heterogeneous adaptation, subserved by a mechanism with STP-like dynamics, may support representation of the complex spectro-temporal patterns that comprise natural sounds across wide-ranging sensory contexts.
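As a rough illustration of the model family this abstract describes (not the authors' actual implementation), the sketch below applies STP-like depression independently to each spectral channel of a stimulus spectrogram, then filters and rectifies the result as in a standard LN model. The depression dynamics, the parameter names `u` (release fraction) and `tau` (recovery time constant), and the rectifying nonlinearity are all simplifying assumptions.

```python
import numpy as np

def stp_adapt(u, tau, x, dt=0.01):
    """STP-like depression on one spectral channel: available
    'resources' d are depleted in proportion to the input (release
    fraction u) and recover with time constant tau (seconds)."""
    d = np.ones(len(x))
    for t in range(1, len(x)):
        dd = (1.0 - d[t - 1]) / tau - u * d[t - 1] * x[t - 1]
        d[t] = np.clip(d[t - 1] + dt * dd, 0.0, 1.0)
    return d * x  # adapted channel output

def ln_stp_predict(spectrogram, strf, u, tau, dt=0.01):
    """Local-STP LN model sketch: adapt each channel, convolve each
    channel with its STRF row, sum over frequency, then rectify."""
    n_chan, n_time = spectrogram.shape
    adapted = np.stack([stp_adapt(u[c], tau[c], spectrogram[c], dt)
                        for c in range(n_chan)])
    drive = sum(np.convolve(adapted[c], strf[c], mode="full")[:n_time]
                for c in range(n_chan))
    return np.maximum(drive, 0.0)  # rectified firing-rate prediction
```

For a sustained input, the adapted output decays from its onset value toward a depressed steady state, which is the qualitative behavior the contextual-adaptation account relies on.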
Affiliation(s)
- Mateo Lopez Espejo, Neuroscience Graduate Program, Oregon Health and Science University, Portland, OR, United States of America
- Zachary P. Schwartz, Neuroscience Graduate Program, Oregon Health and Science University, Portland, OR, United States of America
- Stephen V. David, Oregon Hearing Research Center, Oregon Health and Science University, Portland, OR, United States of America
2. Banks MI, Moran NS, Krause BM, Grady SM, Uhlrich DJ, Manning KA. Altered stimulus representation in rat auditory cortex is not causal for loss of consciousness under general anaesthesia. Br J Anaesth 2018; 121:605-615. PMID: 30115259; DOI: 10.1016/j.bja.2018.05.054.
Abstract
BACKGROUND: Current concepts suggest that impaired representation of information in cortical networks contributes to loss of consciousness under anaesthesia. We tested this idea in rat auditory cortex using information theory analysis of multiunit responses recorded under three anaesthetic agents with different molecular targets: isoflurane, propofol, and dexmedetomidine. We reasoned that if changes in the representation of sensory stimuli are causal for loss of consciousness, they should occur regardless of the specific anaesthetic agent.
METHODS: Spiking responses were recorded with chronically implanted microwire arrays in response to acoustic stimuli incorporating varied temporal and spectral dynamics. Experiments consisted of four drug conditions: awake (pre-drug), sedation (i.e. intact righting reflex), loss of consciousness (LOC; a dose just sufficient to cause loss of righting reflex), and recovery. Measures of firing rate, spike timing, and mutual information were analysed as a function of drug condition.
RESULTS: All three drugs decreased spontaneous and evoked spiking activity and modulated spike timing. However, changes in mutual information were inconsistent with altered stimulus representation being causal for loss of consciousness. First, the direction of change in mutual information was agent-specific, increasing under dexmedetomidine and decreasing under isoflurane and propofol. Second, mutual information did not decrease at the transition between sedation and LOC for any agent. Changes in mutual information under anaesthesia correlated strongly with changes in precision and reliability of spike timing, consistent with the importance of temporal stimulus features in driving auditory cortical activity.
CONCLUSIONS: The primary sensory cortex is not the locus for changes in representation of information causal for loss of consciousness under anaesthesia.
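Mutual information between stimulus identity and a discrete neural response, as analysed here, is commonly computed with a plug-in estimator over the empirical joint distribution. The following is a minimal generic version for illustration (not the paper's pipeline, and without the bias corrections a real analysis would need):

```python
import numpy as np
from collections import Counter

def mutual_information(stimuli, responses):
    """Plug-in estimate of I(S; R) in bits from paired lists of
    stimulus labels and discrete responses (e.g. spike counts or
    binned spike patterns)."""
    n = len(stimuli)
    ps = Counter(stimuli)                 # marginal over stimuli
    pr = Counter(responses)               # marginal over responses
    psr = Counter(zip(stimuli, responses))  # joint distribution
    mi = 0.0
    for (s, r), c in psr.items():
        p_joint = c / n
        # p(s,r) * log2( p(s,r) / (p(s) p(r)) ), with counts expanded
        mi += p_joint * np.log2(p_joint * n * n / (ps[s] * pr[r]))
    return mi
```

If the response perfectly identifies which of two equiprobable stimuli occurred, this returns 1 bit; if the response is independent of the stimulus, it returns 0.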
Affiliation(s)
- M I Banks, Department of Anesthesiology, University of Wisconsin, Madison, WI, USA
- N S Moran, Neuroscience Training Program, University of Wisconsin, Madison, WI, USA
- B M Krause, Department of Anesthesiology, University of Wisconsin, Madison, WI, USA
- S M Grady, Department of Anesthesiology, University of Wisconsin, Madison, WI, USA
- D J Uhlrich, Department of Neuroscience, University of Wisconsin, Madison, WI, USA
- K A Manning, Department of Neuroscience, University of Wisconsin, Madison, WI, USA
3. Aushana Y, Souffi S, Edeline JM, Lorenzi C, Huetz C. Robust Neuronal Discrimination in Primary Auditory Cortex Despite Degradations of Spectro-temporal Acoustic Details: Comparison Between Guinea Pigs with Normal Hearing and Mild Age-Related Hearing Loss. J Assoc Res Otolaryngol 2018; 19:163-180. PMID: 29302822; PMCID: PMC5878150; DOI: 10.1007/s10162-017-0649-1.
Abstract
This study investigated the extent to which the primary auditory cortex of young normal-hearing animals and of aged animals with mild hearing impairment is able to maintain invariant representation of critical temporal-modulation features when sounds are submitted to degradations of fine spectro-temporal acoustic details. This was achieved by recording ensembles of cortical responses to conspecific vocalizations in guinea pigs with either normal hearing or mild age-related sensorineural hearing loss. The vocalizations were degraded using a tone vocoder. The neuronal responses and their discrimination capacities (estimated by mutual information) were analyzed at the single-recording and population levels. For normal-hearing animals, the neuronal responses decreased as a function of the number of vocoder frequency bands, and so did their discriminative capacities at the single-recording level. However, small neuronal populations were found to be robust to the degradations induced by the vocoder. Similar robustness was obtained when broadband noise was added to further exacerbate the spectro-temporal distortions produced by the vocoder. A comparable pattern of robustness to degradations in fine spectro-temporal details was found for hearing-impaired animals. However, the latter showed an overall decrease in neuronal discrimination capacities between vocalizations in noisy conditions. Consistent with previous studies, these results demonstrate that the primary auditory cortex maintains robust neural representation of temporal envelope features for communication sounds under a large range of spectro-temporal degradations.
Affiliation(s)
- Yonane Aushana, Paris-Saclay Institute of Neurosciences (Neuro-PSI), CNRS UMR 9197, Orsay, France; Université Paris-Sud, 91405 Orsay cedex, France; Université Paris-Saclay, 91405 Orsay cedex, France
- Samira Souffi, Paris-Saclay Institute of Neurosciences (Neuro-PSI), CNRS UMR 9197, Orsay, France; Université Paris-Sud, 91405 Orsay cedex, France; Université Paris-Saclay, 91405 Orsay cedex, France
- Jean-Marc Edeline, Paris-Saclay Institute of Neurosciences (Neuro-PSI), CNRS UMR 9197, Orsay, France; Université Paris-Sud, 91405 Orsay cedex, France; Université Paris-Saclay, 91405 Orsay cedex, France
- Christian Lorenzi, Laboratoire des Systèmes Perceptifs, UMR CNRS 8248, Département d’Etudes Cognitives, Ecole Normale Supérieure (ENS), Paris Sciences & Lettres Research University, 75005 Paris, France
- Chloé Huetz, Paris-Saclay Institute of Neurosciences (Neuro-PSI), CNRS UMR 9197, Orsay, France; Université Paris-Sud, 91405 Orsay cedex, France; Université Paris-Saclay, 91405 Orsay cedex, France
4.
5. Higgins I, Stringer S, Schnupp J. Unsupervised learning of temporal features for word categorization in a spiking neural network model of the auditory brain. PLoS One 2017; 12:e0180174. PMID: 28797034; PMCID: PMC5552261; DOI: 10.1371/journal.pone.0180174.
Abstract
The nature of the code used in the auditory cortex to represent complex auditory stimuli, such as naturally spoken words, remains a matter of debate. Here we argue that such representations are encoded by stable spatio-temporal patterns of firing within cell assemblies known as polychronous groups, or PGs. We develop a physiologically grounded, unsupervised spiking neural network model of the auditory brain with local, biologically realistic, spike-timing-dependent plasticity (STDP) learning, and show that the plastic cortical layers of the network develop PGs which convey substantially more information about the speaker-independent identity of two naturally spoken word stimuli than does rate encoding that ignores the precise spike timings. We furthermore demonstrate that such informative PGs can only develop if the input spatio-temporal spike patterns to the plastic cortical areas of the model are relatively stable.
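The STDP learning rule this model relies on is conventionally written as an exponential function of the pre-post spike-time difference. A generic sketch of that window follows; the parameter values (`a_plus`, `a_minus`, 20 ms time constants) are textbook-style illustrations, not the values used in the paper.

```python
import numpy as np

def stdp_weight_change(dt_ms, a_plus=0.01, a_minus=0.012,
                       tau_plus=20.0, tau_minus=20.0):
    """Classic exponential STDP window, dt_ms = t_post - t_pre (ms).
    Pre-before-post (dt >= 0) potentiates; post-before-pre depresses,
    with the effect decaying exponentially in |dt|."""
    dt_ms = np.asarray(dt_ms, dtype=float)
    return np.where(dt_ms >= 0,
                    a_plus * np.exp(-dt_ms / tau_plus),
                    -a_minus * np.exp(dt_ms / tau_minus))
```

Because the weight update depends on precise relative spike times, repeated stable spatio-temporal input patterns get reinforced, which is the property the abstract argues is necessary for informative PGs to emerge.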
Affiliation(s)
- Irina Higgins, Department of Experimental Psychology, University of Oxford, Oxford, England
- Simon Stringer, Department of Experimental Psychology, University of Oxford, Oxford, England
- Jan Schnupp, Department of Physiology, Anatomy and Genetics (DPAG), University of Oxford, Oxford, England
6. Single Neurons in the Avian Auditory Cortex Encode Individual Identity and Propagation Distance in Naturally Degraded Communication Calls. J Neurosci 2017; 37:3491-3510. PMID: 28235893; PMCID: PMC5373131; DOI: 10.1523/jneurosci.2220-16.2017.
Abstract
One of the most complex tasks performed by sensory systems is "scene analysis": the interpretation of complex signals as behaviorally relevant objects. The study of this problem, universal to species and sensory modalities, is particularly challenging in audition, where sounds from various sources and locations, degraded by propagation through the environment, sum to form a single acoustical signal. Here we investigated in a songbird model, the zebra finch, the neural substrate for ranging and identifying a single source. We relied on ecologically and behaviorally relevant stimuli, contact calls, to investigate the neural discrimination of individual vocal signature as well as sound source distance when calls have been degraded through propagation in a natural environment. Performing electrophysiological recordings in anesthetized birds, we found neurons in the auditory forebrain that discriminate individual vocal signatures despite long-range degradation, as well as neurons discriminating propagation distance, with varying degrees of multiplexing between both information types. Moreover, the neural discrimination performance of individual identity was not affected by propagation-induced degradation beyond what was induced by the decreased intensity. For the first time, neurons with distance-invariant identity discrimination properties as well as distance-discriminant neurons are revealed in the avian auditory cortex. Because these neurons were recorded in animals that had prior experience neither with the vocalizers of the stimuli nor with long-range propagation of calls, we suggest that this neural population is part of a general-purpose system for vocalizer discrimination and ranging.
SIGNIFICANCE STATEMENT: Understanding how the brain makes sense of the multitude of stimuli that it continually receives in natural conditions is a challenge for scientists.
Here we provide a new understanding of how the auditory system extracts behaviorally relevant information, the vocalizer identity and its distance to the listener, from acoustic signals that have been degraded by long-range propagation in natural conditions. We show, for the first time, that single neurons, in the auditory cortex of zebra finches, are capable of discriminating the individual identity and sound source distance in conspecific communication calls. The discrimination of identity in propagated calls relies on a neural coding that is robust to intensity changes, signals' quality, and decreases in the signal-to-noise ratio.
7. Fiáth R, Beregszászi P, Horváth D, Wittner L, Aarts AAA, Ruther P, Neves HP, Bokor H, Acsády L, Ulbert I. Large-scale recording of thalamocortical circuits: in vivo electrophysiology with the two-dimensional electronic depth control silicon probe. J Neurophysiol 2016; 116:2312-2330. PMID: 27535370; DOI: 10.1152/jn.00318.2016.
Abstract
Recording simultaneous activity of a large number of neurons in distributed neuronal networks is crucial to understand higher order brain functions. We demonstrate the in vivo performance of a recently developed electrophysiological recording system comprising a two-dimensional, multi-shank, high-density silicon probe with integrated complementary metal-oxide semiconductor electronics. The system implements the concept of electronic depth control (EDC), which enables the electronic selection of a limited number of recording sites on each of the probe shafts. This innovative feature of the system permits simultaneous recording of local field potentials (LFP) and single- and multiple-unit activity (SUA and MUA, respectively) from multiple brain sites with high quality and without the actual physical movement of the probe. To evaluate the in vivo recording capabilities of the EDC probe, we recorded LFP, MUA, and SUA in acute experiments from cortical and thalamic brain areas of anesthetized rats and mice. The advantages of large-scale recording with the EDC probe are illustrated by investigating the spatiotemporal dynamics of pharmacologically induced thalamocortical slow-wave activity in rats and by the two-dimensional tonotopic mapping of the auditory thalamus. In mice, spatial distribution of thalamic responses to optogenetic stimulation of the neocortex was examined. Utilizing the benefits of the EDC system may result in a higher yield of useful data from a single experiment compared with traditional passive multielectrode arrays, and thus in the reduction of animals needed for a research study.
Affiliation(s)
- Richárd Fiáth, Group of Comparative Psychophysiology, Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, Hungary; Faculty of Information Technology and Bionics, Pázmány Péter Catholic University, Budapest, Hungary; School of Ph.D. Studies, Semmelweis University, Budapest, Hungary
- Patrícia Beregszászi, Faculty of Information Technology and Bionics, Pázmány Péter Catholic University, Budapest, Hungary
- Domonkos Horváth, Group of Comparative Psychophysiology, Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, Hungary; Faculty of Information Technology and Bionics, Pázmány Péter Catholic University, Budapest, Hungary; School of Ph.D. Studies, Semmelweis University, Budapest, Hungary
- Lucia Wittner, Group of Comparative Psychophysiology, Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, Hungary
- Patrick Ruther, Department of Microsystems Engineering (IMTEK), University of Freiburg, Freiburg, Germany; BrainLinks-BrainTools Cluster of Excellence, University of Freiburg, Freiburg, Germany
- Hercules P Neves, Unitec Semicondutores, Ribeirão das Neves, Brazil; Solid State Electronics, Department of Engineering Sciences, Uppsala University, Uppsala, Sweden
- Hajnalka Bokor, Laboratory of Thalamus Research, Institute of Experimental Medicine, Hungarian Academy of Sciences, Budapest, Hungary
- László Acsády, Laboratory of Thalamus Research, Institute of Experimental Medicine, Hungarian Academy of Sciences, Budapest, Hungary
- István Ulbert, Group of Comparative Psychophysiology, Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, Hungary; Faculty of Information Technology and Bionics, Pázmány Péter Catholic University, Budapest, Hungary
8.
Abstract
Vertebrate audition is a dynamic process, capable of exhibiting both short- and long-term adaptations to varying listening conditions. Precise spike timing has long been known to play an important role in auditory encoding, but its role in sensory plasticity remains largely unexplored. We addressed this issue in Gambel's white-crowned sparrow (Zonotrichia leucophrys gambelii), a songbird that shows pronounced seasonal fluctuations in circulating levels of sex-steroid hormones, which are known to be potent neuromodulators of auditory function. We recorded extracellular single-unit activity in the auditory forebrain of males and females under different breeding conditions and used a computational approach to explore two potential strategies for the neural discrimination of sound level: one based on spike counts and one based on spike timing reliability. We report that breeding condition has robust sex-specific effects on spike timing. Specifically, in females, breeding condition increases the proportion of cells that rely solely on spike timing information and increases the temporal resolution required for optimal intensity encoding. Furthermore, in a functionally distinct subset of cells that are particularly well suited for amplitude encoding, female breeding condition enhances spike timing-based discrimination accuracy. No effects of breeding condition were observed in males. Our results suggest that high-resolution temporal discharge patterns may provide a plastic neural substrate for sensory coding.
9. Carrasco A, Brown TA, Lomber SG. Spectral and Temporal Acoustic Features Modulate Response Irregularities within Primary Auditory Cortex Columns. PLoS One 2014; 9:e114550. PMID: 25494365; PMCID: PMC4262427; DOI: 10.1371/journal.pone.0114550.
Abstract
Assemblies of vertically connected neurons in the cerebral cortex form information processing units (columns) that participate in the distribution and segregation of sensory signals. Despite well-accepted models of columnar architecture, functional mechanisms of inter-laminar communication remain poorly understood. Hence, the purpose of the present investigation was to examine the effects of sensory information features on columnar response properties. Using acute recording techniques, extracellular response activity was collected from the right hemisphere of eight mature cats (Felis catus). Recordings were conducted with multichannel electrodes that permitted the simultaneous acquisition of neuronal activity within primary auditory cortex columns. Neuronal responses to simple (pure tones), complex (noise bursts and frequency-modulated sweeps), and ecologically relevant (conspecific vocalizations) acoustic signals were measured. Collectively, the present investigation demonstrates that despite consistencies in neuronal tuning (characteristic frequency), irregularities in discharge activity between neurons of individual A1 columns increase as a function of spectral (signal complexity) and temporal (duration) acoustic variations.
Affiliation(s)
- Andres Carrasco, Cerebral Systems Laboratory, Brain and Mind Institute, and Department of Physiology and Pharmacology, University of Western Ontario, London, Ontario, Canada
- Trecia A. Brown, Cerebral Systems Laboratory, Brain and Mind Institute, and Department of Physiology and Pharmacology, University of Western Ontario, London, Ontario, Canada
- Stephen G. Lomber, Cerebral Systems Laboratory, Brain and Mind Institute, Department of Physiology and Pharmacology, Department of Psychology, and National Centre for Audiology, University of Western Ontario, London, Ontario, Canada
10. Kohashi T, Carlson BA. A fast BK-type KCa current acts as a postsynaptic modulator of temporal selectivity for communication signals. Front Cell Neurosci 2014; 8:286. PMID: 25278836; PMCID: PMC4166317; DOI: 10.3389/fncel.2014.00286.
Abstract
Temporal patterns of spiking often convey behaviorally relevant information. Various synaptic mechanisms and intrinsic membrane properties can influence neuronal selectivity to temporal patterns of input. However, little is known about how synaptic mechanisms and intrinsic properties together determine the temporal selectivity of neuronal output. We tackled this question by recording from midbrain electrosensory neurons in mormyrid fish, in which the processing of temporal intervals between communication signals can be studied in a reduced in vitro preparation. Mormyrids communicate by varying interpulse intervals (IPIs) between electric pulses. Within the midbrain posterior exterolateral nucleus (ELp), the temporal patterns of afferent spike trains are filtered to establish single-neuron IPI tuning. We performed whole-cell recording from ELp neurons in a whole-brain preparation and examined the relationship between intrinsic excitability and IPI tuning. We found that spike frequency adaptation of ELp neurons was highly variable. Postsynaptic potentials (PSPs) of strongly adapting (phasic) neurons were more sharply tuned to IPIs than weakly adapting (tonic) neurons. Further, the synaptic filtering of IPIs by tonic neurons was more faithfully converted into variation in spiking output, particularly at short IPIs. Pharmacological manipulation under current- and voltage-clamp revealed that tonic firing is mediated by a fast, large-conductance Ca(2+)-activated K(+) (KCa) current (BK) that speeds up action potential repolarization. These results suggest that BK currents can shape the temporal filtering of sensory inputs by modifying both synaptic responses and PSP-to-spike conversion. Slow SK-type KCa currents have previously been implicated in temporal processing. Thus, both fast and slow KCa currents can fine-tune temporal selectivity.
Affiliation(s)
- Tsunehiko Kohashi, Department of Biology, Washington University in St. Louis, St. Louis, MO, USA; Division of Biological Science, Graduate School of Science, Nagoya University, Nagoya, Japan
- Bruce A Carlson, Department of Biology, Washington University in St. Louis, St. Louis, MO, USA
11. Menardy F, Giret N, Del Negro C. The presence of an audience modulates responses to familiar call stimuli in the male zebra finch forebrain. Eur J Neurosci 2014; 40:3338-50. PMID: 25145963; DOI: 10.1111/ejn.12696.
Abstract
The ability to recognize familiar individuals is crucial for establishing social relationships. The zebra finch, a highly social songbird species that forms lifelong pair bonds, uses a vocalization, the distance call, to identify its mate. However, in males, this ability depends on social conditions, requiring the presence of an audience. To evaluate whether the presence of bystanders modulates the auditory processing underlying recognition abilities, we assessed, by using a lightweight telemetry system, whether electrophysiological responses driven by familiar and unfamiliar female calls in a high-level auditory area [the caudomedial nidopallium (NCM)] were modulated by the presence of conspecific males. Males had experienced the call of their mate for several months and the call of a familiar female for several days. When they were exposed to female calls in the presence of two male conspecifics, NCM neurons showed greater responses to the playback of familiar female calls, including the mate's call, than to unfamiliar ones. In contrast, no such discrimination was observed in males when they were alone or when call-evoked responses were collected under anaesthesia. Together, these results suggest that NCM neuronal activity is profoundly influenced by social conditions, providing new evidence that the properties of NCM neurons are not simply determined by the acoustic structure of auditory stimuli. They also show that neurons in the NCM form part of a network that can be shaped by experience and that probably plays an important role in the emergence of communication sound recognition.
Affiliation(s)
- F Menardy, CNPS, UMR CNRS 8195, University Paris-Sud, 91405, Orsay, France
12.
13. Huetz C, Guedin M, Edeline JM. Neural correlates of moderate hearing loss: time course of response changes in the primary auditory cortex of awake guinea-pigs. Front Syst Neurosci 2014; 8:65. PMID: 24808831; PMCID: PMC4009414; DOI: 10.3389/fnsys.2014.00065.
Abstract
Over the last decade, the consequences of acoustic trauma on the functional properties of auditory cortex neurons have received growing attention. Changes in spontaneous and evoked activity, shifts of characteristic frequency (CF), and map reorganizations have extensively been described in anesthetized animals (e.g., Noreña and Eggermont, 2003, 2005). Here, we examined how the functional properties of cortical cells are modified after partial hearing loss in awake guinea pigs. Single unit activity was chronically recorded in awake, restrained guinea pigs from 3 days before up to 15 days after an acoustic trauma induced by a 5 kHz 110 dB tone delivered for 1 h. Auditory brainstem response (ABR) audiograms indicated that this exposure produced a mean ABR threshold shift of 20 dB SPL at, and one octave above, the trauma frequency. When tested with pure tones, cortical cells showed on average a 25 dB increase in threshold at CF the day following the trauma. Over days, this increase progressively stabilized at only 10 dB above control values, indicating a progressive recovery of cortical thresholds, probably reflecting a progressive shift from temporary threshold shift (TTS) to permanent threshold shift (PTS). There was an increase in response latency and in response variability the day following the trauma, but these parameters returned to control values within 3 days. When tested with conspecific vocalizations, cortical neurons also displayed an increase in response latency and in response duration the day after the acoustic trauma, but there was no effect on the average firing rate elicited by the vocalization. These findings suggest that, in cases of moderate hearing loss, the temporal precision of neuronal responses to natural stimuli is impaired despite the fact that the firing rate showed little or no change.
Affiliation(s)
- Chloé Huetz, Centre de Neurosciences Paris-Sud, CNRS, UMR 8195, Université Paris-Sud, Orsay, France
- Maud Guedin, Centre de Neurosciences Paris-Sud, CNRS, UMR 8195, Université Paris-Sud, Orsay, France
- Jean-Marc Edeline, Centre de Neurosciences Paris-Sud, CNRS, UMR 8195, Université Paris-Sud, Orsay, France
14. Dimitrov AG, Cummins GI, Mayko ZM, Portfors CV. Inhibition does not affect the timing code for vocalizations in the mouse auditory midbrain. Front Physiol 2014; 5:140. PMID: 24795640; PMCID: PMC3997027; DOI: 10.3389/fphys.2014.00140.
Abstract
Many animals use a diverse repertoire of complex acoustic signals to convey different types of information to other animals. The information in each vocalization therefore must be coded by neurons in the auditory system. One way in which the auditory system may discriminate among different vocalizations is by having highly selective neurons, where only one or two different vocalizations evoke a strong response from a single neuron. Another strategy is to have specific spike timing patterns for particular vocalizations such that each neural response can be matched to a specific vocalization. Both of these strategies seem to occur in the auditory midbrain of mice. The neural mechanisms underlying rate and time coding are unclear; however, it is likely that inhibition plays a role. Here, we examined whether inhibition is involved in shaping neural selectivity to vocalizations via rate and/or time coding in the mouse inferior colliculus (IC). We examined extracellular single unit responses to vocalizations before and after iontophoretically blocking GABAA and glycine receptors in the IC of awake mice. We then applied a number of neurometrics to examine the rate and timing information of individual neurons. We initially evaluated the neuronal responses using inspection of the raster plots, spike-counting measures of response rate and stimulus preference, and a measure of maximum available stimulus-response mutual information. Subsequently, we used two different event sequence distance measures, one based on vector space embedding and one derived from the Victor/Purpura Dq metric, to direct hierarchical clustering of responses. In general, we found that the most salient feature of pharmacologically blocking inhibitory receptors in the IC was the lack of major effects on the functional properties of IC neurons. Blocking inhibition did increase response rate to vocalizations, as expected. However, it did not significantly affect spike timing or stimulus selectivity of the studied neurons. We observed two main effects when inhibition was locally blocked: (1) highly selective neurons maintained their selectivity and the information about the stimuli did not change, but response rate increased slightly; (2) neurons that responded to multiple vocalizations in the control condition also responded to the same stimuli in the test condition, with similar timing and pattern, but with a greater number of spikes. For some neurons the information rate increased, but the information per spike decreased. In many of these neurons, vocalizations that generated no responses in the control condition generated some response in the test condition. Overall, we found that inhibition in the IC does not play a substantial role in creating the distinguishable and reliable neuronal temporal spike patterns in response to different vocalizations.
Affiliation(s)
- Alexander G Dimitrov
- Department of Mathematics, Washington State University Vancouver, Vancouver, WA, USA
- Graham I Cummins
- Department of Mathematics, Washington State University Vancouver, Vancouver, WA, USA
- Zachary M Mayko
- School of Biological Sciences, Washington State University Vancouver, Vancouver, WA, USA
- Christine V Portfors
- School of Biological Sciences, Washington State University Vancouver, Vancouver, WA, USA
15
Rode T, Hartmann T, Hubka P, Scheper V, Lenarz M, Lenarz T, Kral A, Lim HH. Neural representation in the auditory midbrain of the envelope of vocalizations based on a peripheral ear model. Front Neural Circuits 2013; 7:166. PMID: 24155694; PMCID: PMC3800787; DOI: 10.3389/fncir.2013.00166.
Abstract
The auditory midbrain implant (AMI) consists of a single-shank array (20 sites) for stimulation along the tonotopic axis of the central nucleus of the inferior colliculus (ICC) and has been safely implanted in deaf patients who cannot benefit from a cochlear implant (CI). The AMI improves lip-reading abilities and environmental awareness in the implanted patients. However, the AMI cannot achieve the high levels of speech perception possible with the CI. It appears the AMI can transmit sufficient spectral cues, but only limited temporal cues, of the kind required for speech understanding. Currently, the AMI uses a CI-based strategy, which was originally designed to stimulate each frequency region along the cochlea with amplitude-modulated pulse trains matching the envelope of the bandpass-filtered sound components. However, it is unclear if this type of stimulation with only a single site within each frequency lamina of the ICC can elicit sufficient temporal cues for speech perception. Speech understanding in quiet, at least, is still possible with envelope cues as low as 50 Hz. Therefore, we investigated how ICC neurons follow the bandpass-filtered envelope structure of natural stimuli in ketamine-anesthetized guinea pigs. We identified a subset of ICC neurons that could closely follow the envelope structure (up to ~100 Hz) of a diverse set of species-specific calls, which was revealed by using a peripheral ear model to estimate the true bandpass-filtered envelopes observed by the brain. Although previous studies have suggested a complex neural transformation from the auditory nerve to the ICC, our data suggest that the brain maintains a robust temporal code in a subset of ICC neurons matching the envelope structure of natural stimuli. Clinically, these findings suggest that a CI-based strategy may still be effective for the AMI if the appropriate neurons are entrained to the envelope of the acoustic stimulus and can transmit sufficient temporal cues to higher centers.
Affiliation(s)
- Thilo Rode
- Department of Otorhinolaryngology, Hannover Medical University, Hannover, Germany
16
Yu C, Horev G, Rubin N, Derdikman D, Haidarliu S, Ahissar E. Coding of object location in the vibrissal thalamocortical system. Cereb Cortex 2015; 25:563-77. PMID: 24062318; DOI: 10.1093/cercor/bht241.
Abstract
In whisking rodents, object location is encoded at the receptor level by a combination of motor- and sensory-related signals. Recoding of the encoded signals can result in various forms of internal representations. Here, we examined the coding schemes occurring at the first forebrain level that receives inputs necessary for generating such internal representations: the thalamocortical network. Single units were recorded in 8 thalamic and cortical stations in artificially whisking anesthetized rats. Neuronal representations of object location generated across these stations and expressed in response latency and magnitude were classified based on graded and binary coding schemes. Both graded and binary coding schemes occurred across the entire thalamocortical network, with a general tendency of graded-to-binary transformation from thalamus to cortex. Overall, 63% of the neurons of the thalamocortical network coded object position in their firing. Thalamocortical responses exhibited slow dynamics during which the amount of coded information increased across 4-5 whisking cycles and then stabilized. Taken together, the results indicate that the thalamocortical network contains dynamic mechanisms that can converge over time on multiple coding schemes of object location, schemes that essentially transform temporal coding to rate coding and graded coding to labeled-line coding.
Affiliation(s)
- Chunxiu Yu
- Current address: Department of Psychology and Neuroscience, Center for Cognitive Neuroscience, Duke University, Durham, NC 27708, USA
- Guy Horev
- Current address: Cold Spring Harbor Laboratory, Cold Spring Harbor, NY 11724, USA
- Naama Rubin
- Department of Neurobiology, Weizmann Institute of Science, Rehovot 76100, Israel
- Dori Derdikman
- Department of Neurobiology, Weizmann Institute of Science, Rehovot 76100, Israel
- Sebastian Haidarliu
- Department of Neurobiology, Weizmann Institute of Science, Rehovot 76100, Israel
- Ehud Ahissar
- Department of Neurobiology, Weizmann Institute of Science, Rehovot 76100, Israel
17
Cortical inhibition reduces information redundancy at presentation of communication sounds in the primary auditory cortex. J Neurosci 2013; 33:10713-28. PMID: 23804094; DOI: 10.1523/jneurosci.0079-13.2013.
Abstract
In all sensory modalities, intracortical inhibition not only shapes the functional properties of cortical neurons but also influences their responses to natural stimuli. Studies performed in various species have revealed that auditory cortex neurons respond to conspecific vocalizations with temporal spike patterns displaying a high trial-to-trial reliability, which might result from precise timing between excitation and inhibition. Studying the guinea pig auditory cortex, we show that partial blockade of GABAA receptors by gabazine (GBZ) application (10 μM, a concentration that promotes expansion of cortical receptive fields) increased the evoked firing rate and the spike-timing reliability during presentation of communication sounds (conspecific and heterospecific vocalizations), whereas GABAB receptor antagonists (10 μM saclofen; 10-50 μM CGP55845) had nonsignificant effects. Computing mutual information (MI) from the responses to vocalizations using either the evoked firing rate or the temporal spike patterns revealed that GBZ application increased the MI derived from the activity of a single cortical site but did not change the MI derived from population activity. In addition, quantification of information redundancy showed that GBZ significantly increased redundancy at the population level. This result suggests that a potential role of intracortical inhibition is to reduce information redundancy during the processing of natural stimuli.
18
Hertrich I, Dietrich S, Ackermann H. How can audiovisual pathways enhance the temporal resolution of time-compressed speech in blind subjects? Front Psychol 2013; 4:530. PMID: 23966968; PMCID: PMC3745084; DOI: 10.3389/fpsyg.2013.00530.
Abstract
In blind people, the visual channel cannot assist face-to-face communication via lipreading or visual prosody. Nevertheless, the visual system may enhance the evaluation of auditory information due to its cross-links to (1) the auditory system, (2) supramodal representations, and (3) frontal action-related areas. Apart from feedback or top-down support of, for example, the processing of spatial or phonological representations, experimental data have shown that the visual system can impact auditory perception at more basic computational stages such as temporal signal resolution. For example, blind as compared to sighted subjects are more resistant to backward masking, and this ability appears to be associated with activity in visual cortex. Regarding the comprehension of continuous speech, blind subjects can learn to use accelerated text-to-speech systems for "reading" texts at ultra-fast speaking rates (>16 syllables/s), exceeding by far the normal rate of about 6 syllables/s. A functional magnetic resonance imaging study has shown that this ability significantly covaries with BOLD responses in several brain regions, including bilateral pulvinar, right visual cortex, and left supplementary motor area. Furthermore, magnetoencephalographic measurements revealed a particular component in right occipital cortex phase-locked to the syllable onsets of accelerated speech. In sighted people, the "bottleneck" for understanding time-compressed speech seems related to higher demands for buffering phonological material and is, presumably, linked to frontal brain structures. On the other hand, the neurophysiological correlates of functions overcoming this bottleneck seem to depend upon early visual cortex activity. The present Hypothesis and Theory paper outlines a model that aims at binding these data together, based on early cross-modal pathways that are already known from various audiovisual experiments on cross-modal adjustments during space, time, and object recognition.
Affiliation(s)
- Ingo Hertrich
- Department of General Neurology, Center of Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
19
Ranasinghe KG, Vrana WA, Matney CJ, Kilgard MP. Increasing diversity of neural responses to speech sounds across the central auditory pathway. Neuroscience 2013; 252:80-97. PMID: 23954862; DOI: 10.1016/j.neuroscience.2013.08.005.
Abstract
Neurons at higher stations of each sensory system are responsive to feature combinations not present at lower levels. As a result, the activity of these neurons becomes less redundant than that at lower levels. We recorded responses to speech sounds from inferior colliculus and primary auditory cortex neurons of rats, and tested the hypothesis that primary auditory cortex neurons are more sensitive to combinations of multiple acoustic parameters than inferior colliculus neurons. We independently eliminated periodicity information, spectral information, and temporal information in each consonant and vowel sound using a noise vocoder. This technique made it possible to test several key hypotheses about speech sound processing. Our results demonstrate that inferior colliculus responses are spatially arranged and primarily determined by the spectral energy and the fundamental frequency of speech, whereas primary auditory cortex neurons generate widely distributed responses to multiple acoustic parameters and are not strongly influenced by the fundamental frequency of speech. We found no evidence that inferior colliculus or primary auditory cortex was specialized for speech features such as voice onset time or formants. The greater diversity of responses in primary auditory cortex compared with inferior colliculus may help explain how the auditory system can identify a wide range of speech sounds across a wide range of conditions without relying on any single acoustic cue.
Affiliation(s)
- K G Ranasinghe
- The University of Texas at Dallas, School of Behavioral and Brain Sciences, 800 West Campbell Road, GR41, Richardson, TX 75080-3021, United States
20
Gaucher Q, Huetz C, Gourévitch B, Laudanski J, Occelli F, Edeline JM. How do auditory cortex neurons represent communication sounds? Hear Res 2013; 305:102-12. PMID: 23603138; DOI: 10.1016/j.heares.2013.03.011.
Abstract
A major goal in auditory neuroscience is to characterize how communication sounds are represented at the cortical level. The present review investigates the role of auditory cortex in the processing of speech, bird songs, and other vocalizations, all of which are spectrally and temporally highly structured sounds. Whereas earlier studies simply looked for neurons exhibiting higher firing rates to particular conspecific vocalizations than to modified, artificially synthesized versions, more recent studies have determined the coding capacity of temporal spike patterns, which are prominent in primary and non-primary areas (and also in non-auditory cortical areas). In several cases, this information seems to be correlated with the behavioral performance of human or animal subjects, suggesting that spike-timing-based coding strategies might set the foundations of our perceptive abilities. Also, it is now clear that the responses of auditory cortex neurons are highly nonlinear and that their responses to natural stimuli cannot be predicted from their responses to artificial stimuli such as moving ripples and broadband noises. Because auditory cortex neurons cannot follow rapid fluctuations of the vocalization envelope, they respond only at specific time points during communication sounds, which can serve as temporal markers for integrating the temporal and spectral processing taking place at subcortical relays. Thus, the temporally sparse code of auditory cortex neurons can be considered a first step toward generating high-level representations of communication sounds independent of the acoustic characteristics of these sounds. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives".
Affiliation(s)
- Quentin Gaucher
- Centre de Neurosciences Paris-Sud (CNPS), CNRS UMR 8195, Université Paris-Sud, Bâtiment 446, 91405 Orsay cedex, France
21
Profant O, Burianová J, Syka J. The response properties of neurons in different fields of the auditory cortex in the rat. Hear Res 2013; 296:51-9. DOI: 10.1016/j.heares.2012.11.021.
22
Grimsley JMS, Shanbhag SJ, Palmer AR, Wallace MN. Processing of communication calls in guinea pig auditory cortex. PLoS One 2012; 7:e51646. PMID: 23251604; PMCID: PMC3520958; DOI: 10.1371/journal.pone.0051646.
Abstract
Vocal communication is an important aspect of guinea pig behaviour and a large contributor to their acoustic environment. We postulated that some cortical areas have distinctive roles in processing conspecific calls. To test this hypothesis, we presented exemplars of all ten of their main adult vocalizations to urethane-anesthetised animals while recording from each of the eight areas of the auditory cortex. We demonstrate that the primary area (AI) and three adjacent auditory belt areas contain many units that give isomorphic responses to vocalizations. These are the ventrorostral belt (VRB), the transitional belt area (T) that is ventral to AI, and the small area (area S) that is rostral to AI. Area VRB has a denser representation than any other area of cells that are good at discriminating among calls by using either a rate code or a temporal code. Furthermore, 10% of VRB cells responded to communication calls but did not respond to stimuli such as clicks, broadband noise, or pure tones. Area S has a sparse distribution of call-responsive cells that showed excellent temporal locking, 31% of which responded selectively to a single call. AI responded well to all vocalizations and was much more responsive to vocalizations than the adjacent dorsocaudal core area. Areas VRB, AI, and S contained units with the highest levels of mutual information about call stimuli. Area T also responded well to some calls but seems to be specialized for low sound levels. The two dorsal belt areas are comparatively unresponsive to vocalizations and contain little information about the calls. AI projects to areas S, VRB, and T, so there may be both rostral and ventral pathways for processing vocalizations in the guinea pig.
Affiliation(s)
- Jasmine M. S. Grimsley
- Institute of Hearing Research, Medical Research Council, Nottingham, United Kingdom
- Department of Anatomy and Neurobiology, Northeast Ohio Medical University, Rootstown, Ohio, United States of America
- Sharad J. Shanbhag
- Department of Anatomy and Neurobiology, Northeast Ohio Medical University, Rootstown, Ohio, United States of America
- Alan R. Palmer
- Institute of Hearing Research, Medical Research Council, Nottingham, United Kingdom
- Mark N. Wallace
- Institute of Hearing Research, Medical Research Council, Nottingham, United Kingdom
23
Alliende J, Lehongre K, Del Negro C. A species-specific view of song representation in a sensorimotor nucleus. J Physiol Paris 2013; 107:193-202. PMID: 22960663; DOI: 10.1016/j.jphysparis.2012.08.004.
Abstract
Songbirds constitute a powerful model system for the investigation of how complex vocal communication sounds are represented and generated, offering a neural system in which the brain areas involved in auditory, motor and auditory-motor integration are well known. One brain area of considerable interest is the nucleus HVC. Neurons in the HVC respond vigorously to the presentation of the bird's own song and display song-related motor activity. In the present paper, we present a synthesis of neurophysiological studies performed in the HVC of one songbird species, the canary (Serinus canaria). These studies, by taking advantage of the singing behavior and song characteristics of the canary, have examined the neuronal representation of the bird's own song in the HVC. They suggest that breeding cues influence the degree of auditory selectivity of HVC neurons for the bird's own song over its time-reversed version, without affecting the contribution of spike timing to the information carried by these two song stimuli. Also, while HVC neurons are collectively more responsive to forward playback of the bird's own song than to its temporally or spectrally modified versions, some are more broadly tuned, with an auditory responsiveness that extends beyond the bird's own song. Lastly, because the HVC is also involved in song production, we discuss the peripheral control of song production, and suggest that interspecific variations in song production mechanisms could be exploited to improve our understanding of the functional role of the HVC in respiratory-vocal coordination.
24
Edeline JM. Beyond traditional approaches to understanding the functional role of neuromodulators in sensory cortices. Front Behav Neurosci 2012; 6:45. PMID: 22866031; PMCID: PMC3407859; DOI: 10.3389/fnbeh.2012.00045.
Abstract
Over the last two decades, a vast literature has described the influence of neuromodulatory systems on the responses of sensory cortex neurons (reviewed in Gu, 2002; Edeline, 2003; Weinberger, 2003; Metherate, 2004, 2011). At the single-cell level, facilitation of evoked responses, increases in signal-to-noise ratio, and improved functional properties of sensory cortex neurons have been reported in the visual, auditory, and somatosensory modalities. At the map level, massive cortical reorganizations have been described when repeated activation of a neuromodulatory system is associated with a particular sensory stimulus. In reviewing our knowledge concerning the way the noradrenergic and cholinergic systems control sensory cortices, I will point out that the differences between the protocols used to reveal these effects most likely reflect different assumptions concerning the role of the neuromodulators. More importantly, a gap still exists between the descriptions of neuromodulatory effects and the concepts that are currently applied to decipher the neural code operating in sensory cortices. Key examples that bring this gap into focus are the concept of cell assemblies and the role played by spike-timing precision (i.e., by the temporal organization of spike trains at the millisecond time scale), which are now recognized as essential in sensory physiology but are rarely considered in experiments describing the role of neuromodulators in sensory cortices. Thus, I will suggest that several lines of research, particularly in the field of computational neuroscience, should help us go beyond traditional approaches and, ultimately, understand how neuromodulators impact the cortical mechanisms underlying our perceptual abilities.
Affiliation(s)
- Jean-Marc Edeline
- Centre de Neurosciences Paris-Sud, CNRS UMR 8195, Université Paris-Sud, Bâtiment Orsay Cedex, France
25
Shetake JA, Wolf JT, Cheung RJ, Engineer CT, Ram SK, Kilgard MP. Cortical activity patterns predict robust speech discrimination ability in noise. Eur J Neurosci 2011; 34:1823-38. PMID: 22098331; DOI: 10.1111/j.1460-9568.2011.07887.x.
Abstract
The neural mechanisms that support speech discrimination in noisy conditions are poorly understood. In quiet conditions, spike timing information appears to be used in the discrimination of speech sounds. In this study, we evaluated the hypothesis that spike timing is also used to distinguish between speech sounds in noisy conditions that significantly degrade neural responses to speech sounds. We tested speech sound discrimination in rats and recorded primary auditory cortex (A1) responses to speech sounds in background noise of different intensities and spectral compositions. Our behavioral results indicate that rats, like humans, are able to accurately discriminate consonant sounds even in the presence of background noise that is as loud as the speech signal. Our neural recordings confirm that speech sounds evoke degraded but detectable responses in noise. Finally, we developed a novel neural classifier that mimics behavioral discrimination. The classifier discriminates between speech sounds by comparing the A1 spatiotemporal activity patterns evoked on single trials with the average spatiotemporal patterns evoked by known sounds. Unlike classifiers in most previous studies, this classifier is not provided with the stimulus onset time. Neural activity analyzed with the use of relative spike timing was well correlated with behavioral speech discrimination in quiet and in noise. Spike timing information integrated over longer intervals was required to accurately predict rat behavioral speech discrimination in noisy conditions. The similarity of neural and behavioral discrimination of speech in noise suggests that humans and rats may employ similar brain mechanisms to solve this problem.
Affiliation(s)
- Jai A Shetake
- The University of Texas at Dallas, School of Behavioral and Brain Sciences, 800 West Campbell Road, GR41, Richardson, TX 75080-3021, USA