1
Joris PX. Use of reverse noise to measure ongoing delay. J Acoust Soc Am 2023; 154:926-937. PMID: 37578194. DOI: 10.1121/10.0020657.
Abstract
Counts of spike coincidences provide a powerful means to compare responses to different stimuli or of different neurons, particularly with regard to temporal factors. A drawback is that these methods do not provide an absolute measure of latency, i.e., the temporal interval between stimulus features and response. It is desirable to have such a measure within the analysis framework of coincidence counting. Single-neuron responses were obtained from 130 fibers in several tracts (auditory nerve, trapezoid body, lateral lemniscus) to a broadband noise and its polarity-inverted version. The spike trains in response to these stimuli are the "forward-noise" responses. The same stimuli were also played time-reversed, and the resulting spike trains were then again time-reversed: these are the "reverse-noise" responses. The forward and reverse responses were then analyzed with the coincidence-count methods introduced earlier. Correlograms between forward- and reverse-noise responses show maxima at values consistent with latencies measured with other methods; the pattern of latencies with characteristic frequency, sound pressure level, and recording location was also consistent. At low characteristic frequencies, correlograms were well predicted by reverse-correlation functions. We conclude that reverse noise provides an easy and reliable means to estimate the latency of auditory nerve and brainstem neurons.
Affiliation(s)
- Philip X Joris
- Laboratory of Auditory Neurophysiology, KU Leuven, Leuven B-3000, Belgium
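The correlogram peak-picking idea summarized in this abstract can be illustrated with a small sketch. This is not the paper's analysis code: the spike trains, bin width, and lag window below are all hypothetical, and the function shows only the generic method of histogramming pairwise spike-time differences between two trains and reading off the lag of the maximum as the delay estimate.

```python
import numpy as np

def correlogram_peak_delay(spikes_a, spikes_b, binwidth=5e-5, maxlag=5e-3):
    """Cross-correlogram of two spike trains (times in seconds);
    returns the lag at which coincidence counts peak."""
    lags = np.arange(-maxlag, maxlag + binwidth, binwidth)
    counts = np.zeros(len(lags) - 1)
    for t in spikes_a:
        d = spikes_b - t                  # all pairwise delays from this spike
        d = d[np.abs(d) <= maxlag]
        counts += np.histogram(d, bins=lags)[0]
    centers = 0.5 * (lags[:-1] + lags[1:])
    return centers[np.argmax(counts)]

# Toy check: train B is train A shifted by 2 ms, a crude stand-in for the
# delay between forward- and reverse-noise responses.
rng = np.random.default_rng(0)
a = np.sort(rng.uniform(0.0, 1.0, 200))
b = a + 0.002
print(correlogram_peak_delay(a, b))       # peak lag near 0.002 s
```

Real latency estimation would average such correlograms over many stimulus repetitions; this sketch only shows where the peak-lag readout comes from.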
2
Gao F, Chen L, Zhang J. Nonuniform impacts of forward suppression on neural responses to preferred stimuli and nonpreferred stimuli in the rat auditory cortex. Eur J Neurosci 2018; 47:1320-1338. PMID: 29761576. DOI: 10.1111/ejn.13943.
Abstract
Under natural conditions, humans and animals need to extract target sound information from noisy acoustic environments for communication and survival. However, how contextual environmental sounds affect the tuning of central auditory neurons to target sound-source azimuth over a wide range of sound levels is not fully understood. Here, we determined the azimuth-level response areas (ALRAs) of rat auditory cortex neurons by recording their responses to probe tones varying in level and sound-source azimuth under both quiet (probe alone) and forward-masking (preceding noise + probe) conditions. In quiet, cortical neurons responded more strongly to their preferred stimuli than to their nonpreferred stimuli. Under forward masking, an effective preceding noise reduced the extent of the ALRAs and suppressed neural responses across the ALRAs by decreasing response strength and lengthening first-spike latency. The forward suppressive effect on response strength increased with masker level and decreased as the time interval between masker and probe was prolonged. For a portion of the cortical neurons studied, the effect of forward suppression on the response strength to preferred stimuli was weaker than that on nonpreferred stimuli, and the response strength to preferred stimuli recovered from forward suppression earlier than that to nonpreferred stimuli. We suggest that this nonuniform forward suppression of neural responses to preferred and nonpreferred stimuli is important for cortical neurons to maintain relatively stable preferences for target sound-source azimuth and level in noisy acoustic environments.
Affiliation(s)
- Fei Gao
- Key Laboratory of Brain Functional Genomics, Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics, NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, School of Life Sciences, East China Normal University, Shanghai, China
- Liang Chen
- Key Laboratory of Brain Functional Genomics, Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics, NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, School of Life Sciences, East China Normal University, Shanghai, China
- Jiping Zhang
- Key Laboratory of Brain Functional Genomics, Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics, NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, School of Life Sciences, East China Normal University, Shanghai, China
3
Spence C, Lee J, Van der Stoep N. Responding to sounds from unseen locations: crossmodal attentional orienting in response to sounds presented from the rear. Eur J Neurosci 2017; 51:1137-1150. PMID: 28973789. DOI: 10.1111/ejn.13733.
Abstract
To date, most of the research on spatial attention has focused on probing people's responses to stimuli presented in frontal space. That is, few researchers have attempted to assess what happens in the space that is currently unseen (essentially rear space). In a sense, then, 'out of sight' is, very much, 'out of mind'. In this review, we highlight what is presently known about the perception and processing of sensory stimuli (focusing on sounds) whose source is not currently visible. We briefly summarize known differences in the localizability of sounds presented from different locations in 3D space, and discuss the consequences for the crossmodal attentional and multisensory perceptual interactions taking place in various regions of space. The latest research now clearly shows that the kinds of crossmodal interactions that take place in rear space are very often different in kind from those that have been documented in frontal space. Developing a better understanding of how people respond to unseen sound sources in naturalistic environments by integrating findings emerging from multiple fields of research will likely lead to the design of better warning signals in the future. This review highlights the need for neuroscientists interested in spatial attention to spend more time researching what happens (in terms of the covert and overt crossmodal orienting of attention) in rear space.
Affiliation(s)
- Charles Spence
- Crossmodal Research Laboratory, Department of Experimental Psychology, Oxford University, Oxford, OX1 3UD, UK
- Jae Lee
- Crossmodal Research Laboratory, Department of Experimental Psychology, Oxford University, Oxford, OX1 3UD, UK
- Nathan Van der Stoep
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
4
Representation of Multidimensional Stimuli: Quantifying the Most Informative Stimulus Dimension from Neural Responses. J Neurosci 2017; 37:7332-7346. PMID: 28663198. DOI: 10.1523/jneurosci.0318-17.2017.
Abstract
A common way to assess the function of sensory neurons is to measure the number of spikes produced by individual neurons while systematically varying a given dimension of the stimulus. Such measured tuning curves can then be used to quantify the accuracy of the neural representation of the stimulus dimension under study, which can in turn be related to behavioral performance. However, tuning curves often change shape when other dimensions of the stimulus are varied, reflecting the simultaneous sensitivity of neurons to multiple stimulus features. Here we illustrate how one-dimensional information analyses are misleading in this context, and propose a framework derived from Fisher information that allows the quantification of information carried by neurons in multidimensional stimulus spaces. We use this method to probe the representation of sound localization in auditory neurons of chinchillas and guinea pigs of both sexes, and show how heterogeneous tuning properties contribute to a representation of sound source position that is robust to changes in sound level.

SIGNIFICANCE STATEMENT: Sensory neurons' responses are typically modulated simultaneously by numerous stimulus properties, which can result in an overestimation of neural acuity with existing one-dimensional neural information transmission measures. To overcome this limitation, we develop new, compact expressions of Fisher information-derived measures that bound the robust encoding of separate stimulus dimensions in the context of multidimensional stimuli. We apply this method to the problem of the representation of sound source location in the face of changes in sound source level by neurons of the auditory midbrain.
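For orientation, the one-dimensional baseline that this framework generalizes is the standard Poisson-spiking result I(θ) = T·f′(θ)²/f(θ) for a neuron with tuning curve f and counting window T. The sketch below evaluates it for a purely hypothetical Gaussian azimuth tuning curve (all parameter values invented); it is not the paper's multidimensional measure, only the familiar 1-D quantity the abstract says can overestimate acuity.

```python
import numpy as np

def fisher_information(theta, f, df, T=1.0):
    """1-D Fisher information for Poisson spiking:
    I(theta) = T * f'(theta)^2 / f(theta)."""
    return T * df(theta) ** 2 / f(theta)

# Hypothetical Gaussian azimuth tuning curve (rates in spikes/s).
peak, center, width, base = 50.0, 0.0, 20.0, 2.0
f  = lambda az: base + peak * np.exp(-0.5 * ((az - center) / width) ** 2)
df = lambda az: -peak * (az - center) / width ** 2 \
                * np.exp(-0.5 * ((az - center) / width) ** 2)

az = np.linspace(-90, 90, 181)
I = fisher_information(az, f, df)
# Information peaks on the flanks of the tuning curve, not at its center,
# where the slope (and hence local discriminability) is zero.
print(az[np.argmax(I)])
```

The multidimensional extension in the paper bounds how much of this single-dimension information survives when nuisance dimensions (e.g. sound level) also modulate the rate; the code above shows only the quantity being generalized.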
5
Razak KA. Functional segregation of monaural and binaural selectivity in the pallid bat auditory cortex. Hear Res 2016; 337:35-45. PMID: 27233917. DOI: 10.1016/j.heares.2016.05.005.
Abstract
Different fields of the auditory cortex can be distinguished by the extent and level tolerance of spatial selectivity. The mechanisms underlying the range of spatial tuning properties observed across cortical fields are unclear. Here, this issue was addressed in the pallid bat because its auditory cortex contains two segregated regions of response selectivity that serve two different behaviors: echolocation for obstacle avoidance and localization of prey-generated noise. This provides the unique opportunity to examine mechanisms of spatial properties in two functionally distinct regions. Previous studies have shown that spatial selectivity of neurons in the region selective for noise (noise-selective region, NSR) is level tolerant and shaped by interaural level difference (ILD) selectivity. In contrast, spatial selectivity of neurons in the echolocation region ('FM sweep-selective region' or FMSR) is strongly level dependent with many neurons responding to multiple distinct spatial locations for louder sounds. To determine the mechanisms underlying such level dependence, frequency, azimuth, rate-level responses and ILD selectivity were measured from the same FMSR neurons. The majority (∼75%) of FMSR neurons were monaural (ILD insensitive). Azimuth tuning curves expanded or split into multiple peaks with increasing sound level in a manner that was predicted by the rate-level response of neurons. These data suggest that azimuth selectivity of FMSR neurons depends more on monaural ear directionality and rate-level responses. The pallid bat cortex utilizes segregated monaural and binaural regions to process echoes and prey-generated noise. Together the pallid bat FMSR/NSR data provide mechanistic explanations for a broad range of spatial tuning properties seen across species.
Affiliation(s)
- Khaleel A Razak
- Department of Psychology and the Graduate Neuroscience Program, University of California, 900 University Avenue, Riverside, CA 92521, USA.
6
Abstract
The auditory cortex is necessary for sound localization. The mechanisms that shape bicoordinate spatial representation in the auditory cortex remain unclear. Here, we addressed this issue by quantifying spatial receptive fields (SRFs) in two functionally distinct cortical regions in the pallid bat. The pallid bat uses echolocation for obstacle avoidance and listens to prey-generated noise to localize prey. Its cortex contains two segregated regions of response selectivity that serve echolocation and localization of prey-generated noise. The main aim of this study was to compare 2D SRFs between neurons in the noise-selective region (NSR) and the echolocation region [frequency-modulated sweep-selective region (FMSR)]. The data reveal the following major differences between these two regions: (1) compared with NSR neurons, SRF properties of FMSR neurons were more strongly dependent on sound level; (2) as a population, NSR neurons represent a broad region of contralateral space, while FMSR selectivity was focused near the midline at sound levels near threshold and expanded considerably with increasing sound levels; and (3) the SRF size and centroid elevation were correlated with the characteristic frequency in the NSR, but not the FMSR. These data suggest different mechanisms of sound localization for two different behaviors. Previously, we reported that azimuth is represented by predictable changes in the extent of activated cortex. The present data indicate how elevation constrains this activity pattern. These data suggest a novel model for bicoordinate spatial representation that is based on the extent of activated cortex resulting from the overlap of binaural and tonotopic maps.

SIGNIFICANCE STATEMENT: Unlike the visual and somatosensory systems, spatial information is not directly represented at the sensory receptor epithelium in the auditory system. Spatial locations are computed by integrating neural binaural properties and frequency-dependent pinna filtering, providing a useful model to study how neural properties and peripheral structures are adapted for sensory encoding. Although auditory cortex is necessary for sound localization, our understanding of how the cortex represents space remains rudimentary. Here we show that two functionally distinct regions of the pallid bat auditory cortex represent 2D space using different mechanisms. In addition, we suggest a novel hypothesis on how the nature of overlap between systematic maps of binaural and frequency selectivity leads to representation of both azimuth and elevation.
7
Lui LL, Mokri Y, Reser DH, Rosa MGP, Rajan R. Responses of neurons in the marmoset primary auditory cortex to interaural level differences: comparison of pure tones and vocalizations. Front Neurosci 2015; 9:132. PMID: 25941469. PMCID: PMC4403308. DOI: 10.3389/fnins.2015.00132.
Abstract
Interaural level differences (ILDs) are the dominant cue for localizing the sources of high frequency sounds that differ in azimuth. Neurons in the primary auditory cortex (A1) respond differentially to ILDs of simple stimuli such as tones and noise bands, but the extent to which this applies to complex natural sounds, such as vocalizations, is not known. In sufentanil/N2O anesthetized marmosets, we compared the responses of 76 A1 neurons to three vocalizations (Ock, Tsik, and Twitter) and to pure tones at each cell's characteristic frequency. Each stimulus was presented with ILDs ranging from 20 dB favoring the contralateral ear to 20 dB favoring the ipsilateral ear, to cover most of the frontal azimuthal space. The response to each stimulus was tested at three average binaural levels (ABLs). Most neurons were sensitive to ILDs of vocalizations and pure tones. For all stimuli, the majority of cells had monotonic ILD sensitivity functions favoring the contralateral ear, but we also observed ILD sensitivity functions that peaked near the midline and functions favoring the ipsilateral ear. Representation of ILD in A1 was better for pure tones and the Ock vocalization than for the Tsik and Twitter calls; this was reflected by higher discrimination indices and greater modulation ranges. ILD sensitivity was heavily dependent on ABL: changes in ABL by ±20 dB SPL from the optimal level for ILD sensitivity led to significant decreases in ILD sensitivity for all stimuli, although ILD sensitivity to pure tones and Ock calls was most robust to such ABL changes. Our results demonstrate differences in ILD coding for pure tones and vocalizations, showing that ILD sensitivity in A1 to complex sounds cannot simply be extrapolated from that to pure tones. They also show that A1 neurons do not maintain a level-invariant representation of ILD, suggesting that such a representation of auditory space is likely to require population coding and further processing at subsequent hierarchical stages.
Affiliation(s)
- Leo L Lui
- Department of Physiology, Monash University, Clayton, VIC, Australia; Australian Research Council Centre of Excellence for Integrative Brain Function, Monash University, Clayton, VIC, Australia
- Yasamin Mokri
- Department of Physiology, Monash University, Clayton, VIC, Australia
- David H Reser
- Department of Physiology, Monash University, Clayton, VIC, Australia
- Marcello G P Rosa
- Department of Physiology, Monash University, Clayton, VIC, Australia; Australian Research Council Centre of Excellence for Integrative Brain Function, Monash University, Clayton, VIC, Australia
- Ramesh Rajan
- Department of Physiology, Monash University, Clayton, VIC, Australia; Australian Research Council Centre of Excellence for Integrative Brain Function, Monash University, Clayton, VIC, Australia; Ear Sciences Institute of Australia, Subiaco, WA, Australia
8
Zhou Y, Wang X. Level dependence of spatial processing in the primate auditory cortex. J Neurophysiol 2012; 108:810-826. PMID: 22592309. DOI: 10.1152/jn.00500.2011.
Abstract
Sound localization in both humans and monkeys is tolerant to changes in sound levels. The underlying neural mechanism, however, is not well understood. This study reports the level dependence of individual neurons' spatial receptive fields (SRFs) in the primary auditory cortex (A1) and the adjacent caudal field in awake marmoset monkeys. We found that most neurons' excitatory SRF components were spatially confined in response to broadband noise stimuli delivered from the upper frontal sound field. Approximately half the recorded neurons exhibited little change in spatial tuning width over a ~20-dB change in sound level, whereas the remaining neurons showed either expansion or contraction in their tuning widths. Increased sound levels did not alter the percent distribution of tuning width for neurons collected in either cortical field. The population-averaged responses remained tuned between 30- and 80-dB sound pressure levels for neuronal groups preferring contralateral, midline, and ipsilateral locations. We further investigated the spatial extent and level dependence of the suppressive component of SRFs using a pair of sequentially presented stimuli. Forward suppression was observed when the stimuli were delivered from "far" locations, distant to the excitatory center of an SRF. In contrast to spatially confined excitation, the strength of suppression typically increased with stimulus level at both the excitatory center and far regions of an SRF. These findings indicate that although the spatial tuning of individual neurons varied with stimulus levels, their ensemble responses were level tolerant. Widespread spatial suppression may play an important role in limiting the sizes of SRFs at high sound levels in the auditory cortex.
Affiliation(s)
- Yi Zhou
- Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, School of Medicine, Johns Hopkins University, Baltimore, Maryland 21205-2195, USA.
9
Abstract
Auditory signals are decomposed into discrete frequency elements early in the transduction process, yet somehow these signals are recombined into the rich acoustic percepts that we readily identify and are familiar with. The cerebral cortex is necessary for the perception of these signals, and studies from several laboratories over the past decade have made significant advances in our understanding of the neuronal mechanisms underlying auditory perception. This review will concentrate on recent studies in the macaque monkey indicating that the activity of populations of neurons accounts for perceptual abilities better than the activity of single neurons does. The best examples address whether acoustic space is represented along the "where" pathway in the caudal regions of auditory cortex. Our current understanding of how such population activity could also underlie the perception of the nonspatial features of acoustic stimuli is reviewed, as is how multisensory interactions can influence our auditory perception.
Affiliation(s)
- Gregg H Recanzone
- Center for Neuroscience and Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
10
Sun HY, Sun XD, Zhang JP. Encoding of Sound Spatial Information by Neurons in the Rat Primary Auditory Cortex. Prog Biochem Biophys 2009. DOI: 10.3724/sp.j.1206.2008.00829.
11
Recanzone GH, Cohen YE. Serial and parallel processing in the primate auditory cortex revisited. Behav Brain Res 2009; 206:1-7. PMID: 19686779. DOI: 10.1016/j.bbr.2009.08.015.
Abstract
Over a decade ago it was proposed that the primate auditory cortex is organized in a serial and parallel manner, with a dorsal stream processing spatial information and a ventral stream processing non-spatial information. This organization is similar to the "what"/"where" processing of the primate visual cortex. This review will examine several key studies, primarily electrophysiological, that have tested this hypothesis. We also review several human-imaging studies that have attempted to define these processing streams in the human auditory cortex. While there is good evidence that spatial information is processed along a particular series of cortical areas, the support for a non-spatial processing stream is not as strong. Why this should be the case, and how to better test this hypothesis, are also discussed.
Affiliation(s)
- Gregg H Recanzone
- Center for Neuroscience and Department of Neurobiology, Physiology and Behavior, University of California at Davis, One Shields Avenue, Davis, CA 95616, USA.
12
Populations of auditory cortical neurons can accurately encode acoustic space across stimulus intensity. Proc Natl Acad Sci U S A 2009; 106:5931-5935. PMID: 19321750. DOI: 10.1073/pnas.0901023106.
Abstract
The auditory cortex is critical for perceiving a sound's location. However, there is no topographic representation of acoustic space, and individual auditory cortical neurons are often broadly tuned to stimulus location. It thus remains unclear how acoustic space is represented in the mammalian cerebral cortex and how it could contribute to sound localization. This report tests whether the firing rates of populations of neurons in different auditory cortical fields in the macaque monkey carry sufficient information to account for horizontal sound localization ability. We applied an optimal neural decoding technique, based on maximum likelihood estimation, to populations of neurons from 6 different cortical fields encompassing core and belt areas. We found that the firing rate of neurons in the caudolateral area contain enough information to account for sound localization ability, but neurons in other tested core and belt cortical areas do not. These results provide a detailed and plausible population model of how acoustic space could be represented in the primate cerebral cortex and support a dual stream processing model of auditory cortical processing.
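The decoding technique named in this abstract, maximum likelihood estimation over population firing rates, can be sketched under simple textbook assumptions: independent Poisson neurons with known tuning curves, evaluated over a grid of candidate azimuths. Everything below (the sigmoidal tuning family, neuron count, and rate ranges) is hypothetical and is not the study's fitted model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population: N neurons with sigmoidal azimuth tuning, a crude
# stand-in for broadly/monotonically tuned cortical neurons.
azimuths = np.linspace(-90, 90, 37)        # candidate source locations (deg)
N = 40
slopes  = rng.uniform(0.02, 0.08, N) * rng.choice([-1, 1], N)
offsets = rng.uniform(-45, 45, N)

def rates(az):
    """Mean spike count of each neuron for azimuth(s) az; shape (N, n_az)."""
    az = np.atleast_1d(az)
    return 5 + 20 / (1 + np.exp(-slopes[:, None] * (az[None, :] - offsets[:, None])))

def ml_decode(counts):
    """Maximum-likelihood azimuth under independent Poisson firing:
    argmax_az  sum_i [ n_i * log f_i(az) - f_i(az) ]."""
    f = rates(azimuths)                    # (N, n_azimuths)
    loglik = counts[:, None] * np.log(f) - f
    return azimuths[np.argmax(loglik.sum(axis=0))]

true_az = 30.0
counts = rng.poisson(rates(true_az)[:, 0])
print(ml_decode(counts))                   # decoded azimuth from one noisy draw
```

With noise-free input (counts equal to the mean rates) the decoder recovers the true grid location exactly; with Poisson draws its scatter around the true azimuth is what the study compares against behavioral localization accuracy.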
13
A rate code for sound azimuth in monkey auditory cortex: implications for human neuroimaging studies. J Neurosci 2008; 28:3747-3758. PMID: 18385333. DOI: 10.1523/jneurosci.5044-07.2008.
Abstract
Is sound location represented in the auditory cortex of humans and monkeys? Human neuroimaging experiments have had only mixed success at demonstrating sound location sensitivity in primary auditory cortex. This is in apparent conflict with studies in monkeys and other animals, in which single-unit recording studies have found stronger evidence for spatial sensitivity. Does this apparent discrepancy reflect a difference between humans and animals, or does it reflect differences in the sensitivity of the methods used for assessing the representation of sound location? The sensitivity of imaging methods such as functional magnetic resonance imaging depends on the following two key aspects of the underlying neuronal population: (1) what kind of spatial sensitivity individual neurons exhibit and (2) whether neurons with similar response preferences are clustered within the brain. To address this question, we conducted a single-unit recording study in monkeys. First, we investigated the nature of spatial sensitivity in individual auditory cortical neurons to determine whether they have receptive fields (place code) or monotonic (rate code) sensitivity to sound azimuth. Second, we tested how strongly the population of neurons favors contralateral locations. We report here that the majority of neurons show predominantly monotonic azimuthal sensitivity, forming a rate code for sound azimuth, but that at the population level the degree of contralaterality is modest. This suggests that the weakness of the evidence for spatial sensitivity in human neuroimaging studies of auditory cortex may be attributable to limited lateralization at the population level, despite what may be considerable spatial sensitivity in individual neurons.
14
Qiu Q, Tang J, Yu Z, Zhang J, Zhou Y, Xiao Z, Shen J. Latency represents sound frequency in mouse IC. ACTA ACUST UNITED AC 2007; 50:258-264. PMID: 17447034. DOI: 10.1007/s11427-007-0020-6.
Abstract
Frequency is one of the fundamental parameters of sound. The frequency of an acoustic stimulus can be represented by a neural response measure such as the spike rate and/or first-spike latency (FSL) of a given neuron. The spike rate/frequency function of most neurons changes with acoustic amplitude, whereas the FSL/frequency function is highly stable. This implies that FSL might represent the frequency of a sound stimulus more efficiently than spike rate. This study examined the representation of acoustic frequency by the spike rate and FSL of central inferior colliculus (IC) neurons responding to free-field pure-tone stimuli. We found that the FSLs of neurons responding to the characteristic frequency (CF) of the sound stimulus were usually the shortest, regardless of sound intensity, and that the spike rate/frequency functions of most neurons varied in shape, especially at high intensities. These results strongly suggest that the FSL of auditory IC neurons can represent sound frequency more precisely than spike rate.
Affiliation(s)
- Qiang Qiu
- State Key Laboratory of Brain and Cognitive Sciences, Institute of Biophysics, Chinese Academy of Sciences, Beijing 100101, China
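First-spike latency, the quantity at the center of this abstract, is straightforward to compute from a spike train. The toy below (all numbers hypothetical, not the study's data) also mimics the claimed dissociation: across simulated levels, the spike count grows while the FSL stays fixed.

```python
import numpy as np

def first_spike_latency(spike_times, onset):
    """First-spike latency (FSL): time from stimulus onset to the first
    spike at or after onset (spike_times sorted, in s); NaN if none."""
    post = spike_times[spike_times >= onset]
    return post[0] - onset if post.size else np.nan

# Toy trials: the first spike always comes 8 ms after a 100-ms onset,
# but the number of later spikes grows with level, so rate is
# level-dependent while FSL is level-tolerant.
rng = np.random.default_rng(2)
for level in (40, 60, 80):                      # dB SPL (hypothetical)
    n = level // 10                             # extra spikes at higher levels
    train = np.sort(np.concatenate(([0.108], 0.11 + rng.uniform(0, 0.05, n))))
    print(level, train.size, first_spike_latency(train, onset=0.1))
```

In real recordings FSL would be averaged over repetitions and corrected for acoustic travel time; this sketch only defines the measurement.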
15
King AJ, Bajo VM, Bizley JK, Campbell RAA, Nodal FR, Schulz AL, Schnupp JWH. Physiological and behavioral studies of spatial coding in the auditory cortex. Hear Res 2007; 229:106-115. PMID: 17314017. PMCID: PMC7116512. DOI: 10.1016/j.heares.2007.01.001.
Abstract
Despite extensive subcortical processing, the auditory cortex is believed to be essential for normal sound localization. However, we still have a poor understanding of how auditory spatial information is encoded in the cortex and of the relative contribution of different cortical areas to spatial hearing. We investigated the behavioral consequences of inactivating ferret primary auditory cortex (A1) on auditory localization by implanting a sustained release polymer containing the GABA(A) agonist muscimol bilaterally over A1. Silencing A1 led to a reversible deficit in the localization of brief noise bursts in both the horizontal and vertical planes. In other ferrets, large bilateral lesions of the auditory cortex, which extended beyond A1, produced more severe and persistent localization deficits. To investigate the processing of spatial information by high-frequency A1 neurons, we measured their binaural-level functions and used individualized virtual acoustic space stimuli to record their spatial receptive fields (SRFs) in anesthetized ferrets. We observed the existence of a continuum of response properties, with most neurons preferring contralateral sound locations. In many cases, the SRFs could be explained by a simple linear interaction between the acoustical properties of the head and external ears and the binaural frequency tuning of the neurons. Azimuth response profiles recorded in awake ferrets were very similar, and further analysis suggested that the slopes of these functions and location-dependent variations in spike timing are the main information-bearing parameters. Studies of sensory plasticity can also provide valuable insights into the role of different brain areas and the way in which information is represented within them. For example, stimulus-specific training allows accurate auditory localization by adult ferrets to be relearned after manipulating binaural cues by occluding one ear. Reversible inactivation of A1 resulted in slower and less complete adaptation than in normal controls, whereas selective lesions of the descending corticocollicular pathway prevented any improvement in performance. These results reveal a role for auditory cortex in training-induced plasticity of auditory localization, which could be mediated by descending cortical pathways.
Affiliation(s)
- Andrew J King
- Department of Physiology, Anatomy and Genetics, Sherrington Building, University of Oxford, Oxford, UK.
16
Woods TM, Lopez SE, Long JH, Rahman JE, Recanzone GH. Effects of stimulus azimuth and intensity on the single-neuron activity in the auditory cortex of the alert macaque monkey. J Neurophysiol 2006; 96:3323-37. [PMID: 16943318 DOI: 10.1152/jn.00392.2006] [Citation(s) in RCA: 132] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
It has been hypothesized that the primate auditory cortex is composed of at least two processing streams, one of which is believed to selectively process spatial information. To test whether spatial information is differentially encoded in different auditory cortical fields, we recorded the responses of single neurons in the auditory cortex of alert macaque monkeys to broadband noise stimuli presented from 360 degrees in azimuth at four different absolute intensities. Cortical areas tested were core areas A1 and rostral (R), caudal belt fields caudomedial and caudolateral, and more rostral belt fields middle lateral and middle medial (MM). We found that almost all neurons encountered showed some spatial tuning. However, spatial selectivity measures showed that the caudal belt fields had the sharpest spatial tuning, A1 had intermediate spatial tuning, and areas R and MM had the least spatial tuning. Although most neurons showed their best responses to contralateral space, best azimuths were observed across the entire 360 degrees of tested space. We also noted that although the responses of many neurons were significantly influenced by eye position, eye position did not systematically influence any of the spatially dependent responses that we measured. These data are consistent with the hypothesis that caudal auditory cortical fields in the primate process spatial features more accurately than the core and more rostral belt fields.
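A spatial selectivity measure of the kind used above to compare cortical fields can be sketched as the fraction of tested azimuths that drive a unit above half its peak rate; this is an illustrative index with invented tuning curves, not necessarily the exact metric used in the study:

```python
import numpy as np

def tuning_width_deg(rates, criterion=0.5):
    """Spatial tuning width: fraction of tested azimuths whose response
    exceeds `criterion` x the peak rate, expressed in degrees of the full
    circle. (Illustrative index only; the study's measure may differ.)"""
    rates = np.asarray(rates, dtype=float)
    above = rates >= criterion * rates.max()
    return 360.0 * above.mean()

az = np.arange(0, 360, 10)                        # 36 tested azimuths
sharp = np.exp(-0.5 * ((az - 270) / 20.0) ** 2)   # hypothetical narrowly tuned unit
broad = np.exp(-0.5 * ((az - 270) / 80.0) ** 2)   # hypothetical broadly tuned unit

w_sharp = tuning_width_deg(sharp)   # small width -> sharp spatial tuning
w_broad = tuning_width_deg(broad)   # large width -> broad spatial tuning
```

On this index, a "caudal-belt-like" sharply tuned unit yields a much smaller width than a broadly tuned one, which is the direction of the field differences reported above.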
Affiliation(s)
- Timothy M Woods
- Center for Neuroscience, University of California-Davis, 1544 Newton Court, Davis, CA 95616, USA
17
Nelken I, Chechik G, Mrsic-Flogel TD, King AJ, Schnupp JWH. Encoding Stimulus Information by Spike Numbers and Mean Response Time in Primary Auditory Cortex. J Comput Neurosci 2005; 19:199-221. [PMID: 16133819 DOI: 10.1007/s10827-005-1739-3] [Citation(s) in RCA: 101] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2005] [Revised: 03/27/2005] [Accepted: 04/27/2005] [Indexed: 10/25/2022]
Abstract
Neurons can transmit information about sensory stimuli via their firing rate, spike latency, or the occurrence of complex spike patterns. Identifying which aspects of the neural responses actually encode sensory information remains a fundamental question in neuroscience. Here we compared various approaches for estimating the information transmitted by neurons in auditory cortex in two very different experimental paradigms, one measuring spatial tuning and the other responses to complex natural stimuli. We demonstrate that, in both cases, spike counts and mean response times jointly carry essentially all the available information about the stimuli. Thus, in auditory cortex, whereas spike counts carry only partial information about stimulus identity or location, the additional availability of relatively coarse temporal information is sufficient to extract essentially all the sensory information available in the spike discharge pattern, at least for the relatively short stimuli (<~100 ms) commonly used in auditory research.
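The count-versus-joint-code comparison can be sketched with a toy plug-in information estimate; the stimulus statistics below (two classes, Poisson counts, a coarse early/late timing bin) are invented for illustration and are not the study's stimuli or estimator:

```python
import numpy as np

def mi_bits(s, r):
    """Plug-in (maximum-likelihood) estimate of I(S; R) in bits
    from paired samples of two discrete variables."""
    s_vals, s_idx = np.unique(s, return_inverse=True)
    r_vals, r_idx = np.unique(r, return_inverse=True)
    joint = np.zeros((s_vals.size, r_vals.size))
    np.add.at(joint, (s_idx, r_idx), 1)
    joint /= joint.sum()
    indep = joint.sum(1, keepdims=True) * joint.sum(0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / indep[nz])))

rng = np.random.default_rng(0)
n = 4000
stim = rng.integers(0, 2, n)                           # two stimulus classes
counts = rng.poisson(np.where(stim == 0, 2.0, 4.0))    # overlapping spike counts
# coarse early/late mean-response-time bin, flipped on 15% of trials
early = (stim + (rng.random(n) < 0.15)) % 2

i_count = mi_bits(stim, counts)               # counts alone: partial information
i_joint = mi_bits(stim, counts * 2 + early)   # counts + coarse timing: more information
```

Because the joint code `counts * 2 + early` is invertible back to the pair (count, timing bin), the data-processing inequality guarantees it carries at least as much information as the counts alone, mirroring the abstract's conclusion that coarse timing added to counts recovers the remaining stimulus information.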
Affiliation(s)
- Israel Nelken
- Department of Neurobiology and the Interdisciplinary Center for Neural Computation, Hebrew University, Jerusalem, Israel.
18
Mickey BJ, Middlebrooks JC. Sensitivity of Auditory Cortical Neurons to the Locations of Leading and Lagging Sounds. J Neurophysiol 2005; 94:979-89. [PMID: 15817648 DOI: 10.1152/jn.00580.2004] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
We recorded unit activity in the auditory cortex (fields A1, A2, and PAF) of anesthetized cats while presenting paired clicks with variable locations and interstimulus delays (ISDs). In human listeners, such sounds elicit the precedence effect, in which localization of the lagging sound is impaired at ISDs ≲10 ms. In the present study, neurons typically responded to the leading stimulus with a brief burst of spikes, followed by suppression lasting 100–200 ms. At an ISD of 20 ms, at which listeners report a distinct lagging sound, only 12% of units showed discrete lagging responses. Long-lasting suppression was found in all sampled cortical fields, for all leading and lagging locations, and at all sound levels. Recordings from awake cats confirmed this long-lasting suppression in the absence of anesthesia, although recovery from suppression was faster in the awake state. Despite the lack of discrete lagging responses at delays of 1–20 ms, the spike patterns of 40% of units varied systematically with ISD, suggesting that many neurons represent lagging sounds implicitly in their temporal firing patterns rather than explicitly in discrete responses. We estimated the amount of location-related information transmitted by spike patterns at delays of 1–16 ms under conditions in which we varied only the leading location or only the lagging location. Consistent with human psychophysical results, transmission of information about the leading location was high at all ISDs. Unlike listeners, however, transmission of information about the lagging location remained low, even at ISDs of 12–16 ms.
Affiliation(s)
- Brian J Mickey
- Kresge Hearing Research Institute, University of Michigan, Ann Arbor, MI 48109-0506, USA
19
Recanzone GH, Beckerman NS. Effects of intensity and location on sound location discrimination in macaque monkeys. Hear Res 2005; 198:116-24. [PMID: 15567608 DOI: 10.1016/j.heares.2004.07.017] [Citation(s) in RCA: 33] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/08/2004] [Accepted: 07/27/2004] [Indexed: 11/28/2022]
Abstract
Sound localization performance is degraded at low stimulus intensities in humans, and while the sound localization ability of humans and macaque monkeys appears similar, the effects of intensity have yet to be described in the macaque. We therefore measured the ability of four macaque monkeys to localize broadband noise stimuli at four different absolute intensities and six different starting locations in azimuth. Performance was poorest at the lowest intensity tested (25 dB SPL), intermediate at 35 dB SPL, and equivalent at 55 and 75 dB SPL. Localization was best at 0 degrees (directly in front of the animal), was systematically degraded at more peripheral locations (+/-30 and 90 degrees), and was worst at the location directly behind the animal. Reaction times showed the same trends, increasing with decreasing stimulus intensity, even when the monkey discriminated the location change equally well. These results indicate that sound level as well as position profoundly influences sound localization ability.
Affiliation(s)
- Gregg H Recanzone
- Center for Neuroscience and Section of Neurobiology, Physiology and Behavior, University of California at Davis, Davis, CA 95616, USA.
20
Sabin AT, Macpherson EA, Middlebrooks JC. Human sound localization at near-threshold levels. Hear Res 2005; 199:124-34. [PMID: 15574307 DOI: 10.1016/j.heares.2004.08.001] [Citation(s) in RCA: 51] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/21/2004] [Accepted: 08/01/2004] [Indexed: 10/26/2022]
Abstract
Physiological studies of spatial hearing show that the spatial receptive fields of cortical neurons typically are narrow at near-threshold levels, broadening at moderate levels. The apparent loss of neuronal spatial selectivity at increasing sound levels conflicts with the accurate performance of human subjects localizing at moderate sound levels. In the present study, human sound localization was evaluated across a wide range of sensation levels, extending down to the detection threshold. Listeners reported whether they heard each target sound and, if the target was audible, turned their heads to face the apparent source direction. Head orientation was tracked electromagnetically. At near-threshold levels, the lateral (left/right) components of responses were highly variable and slightly biased towards the midline, and front vertical components consistently exhibited a strong bias towards the horizontal plane. Stimulus levels were specified relative to the detection threshold for a front-positioned source, so low-level rear targets often were inaudible. As the sound level increased, first lateral and then vertical localization neared asymptotic levels. The improvement of localization over a range of increasing levels, in which neural spatial receptive fields presumably are broadening, indicates that sound localization does not depend on narrow spatial receptive fields of cortical neurons.
Affiliation(s)
- Andrew T Sabin
- Central Systems Laboratory, Kresge Hearing Research Institute, 1301 E. Ann Street, Ann Arbor, MI 48109-0506, USA
21
Mrsic-Flogel TD, King AJ, Schnupp JWH. Encoding of virtual acoustic space stimuli by neurons in ferret primary auditory cortex. J Neurophysiol 2005; 93:3489-503. [PMID: 15659534 DOI: 10.1152/jn.00748.2004] [Citation(s) in RCA: 59] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Recent studies from our laboratory have indicated that the spatial response fields (SRFs) of neurons in the ferret primary auditory cortex (A1) with best frequencies ≥4 kHz may arise from a largely linear processing of binaural level and spectral localization cues. Here we extend this analysis to investigate how well the linear model can predict the SRFs of neurons with different binaural response properties and the manner in which SRFs change with increases in sound level. We also consider whether temporal features of the response (e.g., response latency) vary with sound direction and whether such variations can be explained by linear processing. In keeping with previous studies, we show that A1 SRFs, which we measured with individualized virtual acoustic space stimuli, expand and shift in direction with increasing sound level. We found that these changes are, in most cases, in good agreement with predictions from a linear threshold model. However, changes in spatial tuning with increasing sound level were generally less well predicted for neurons whose binaural frequency-time receptive field (FTRF) exhibited strong excitatory inputs from both ears than for those in which the binaural FTRF revealed either a predominantly inhibitory effect or no clear contribution from the ipsilateral ear. Finally, we found (in agreement with other authors) that many A1 neurons exhibit systematic response latency shifts as a function of sound-source direction, although these temporal details could usually not be predicted from the neuron's binaural FTRF.
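A linear threshold model of this kind can be sketched as a rectified weighted sum of the direction-dependent sound levels at the two ears. All weights and ear gains below are made-up stand-ins for a measured FTRF and real ferret head/pinna acoustics:

```python
import numpy as np

rng = np.random.default_rng(1)
n_freq, n_az = 16, 72
az = np.linspace(-180, 180, n_az, endpoint=False)   # tested azimuths (deg)

# Hypothetical binaural frequency weights: contralateral ear excitatory,
# ipsilateral ear inhibitory (one of the FTRF patterns described above).
w_contra = rng.uniform(0.5, 1.0, n_freq)
w_ipsi = -rng.uniform(0.0, 0.5, n_freq)

# Hypothetical direction-dependent ear levels (dB) standing in for the
# head/external-ear acoustics, peaking toward each ear at +/-90 deg.
lvl_contra = 60 + 10 * np.cos(np.deg2rad(az - 90))[None, :] * np.ones((n_freq, 1))
lvl_ipsi = 60 + 10 * np.cos(np.deg2rad(az + 90))[None, :] * np.ones((n_freq, 1))

# Linear stage: sum the weighted ear levels across frequency...
drive = (w_contra[:, None] * lvl_contra + w_ipsi[:, None] * lvl_ipsi).sum(0)
# ...then apply an output threshold and rectify to predict the SRF.
theta = np.percentile(drive, 40)
srf = np.maximum(drive - theta, 0.0)

best_az = az[srf.argmax()]   # lands in the contralateral field under these weights
```

Raising the stimulus level in this sketch (adding a constant to both ear levels) pushes more of `drive` above `theta`, so the predicted SRF expands, which is the qualitative level dependence reported above.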
22
Abstract
The product-moment correlation coefficient is often viewed as a natural measure of dependence. However, this equivalence applies only in the context of elliptical distributions, most commonly the multivariate gaussian, where linear correlation indeed sufficiently describes the underlying dependence structure. Should the true probability distributions deviate from those with elliptical contours, linear correlation may convey misleading information on the actual underlying dependencies. It is often the case that probability distributions other than the gaussian distribution are necessary to properly capture the stochastic nature of single neurons, which as a consequence greatly complicates the construction of a flexible model of covariance. We show how arbitrary probability densities can be coupled to allow greater flexibility in the construction of multivariate neural population models.
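The coupling idea can be sketched with a Gaussian copula joining two arbitrary marginals, here Poisson spike counts for two hypothetical neurons. The construction and all parameters are illustrative, not necessarily those used in the paper:

```python
import numpy as np
from math import erf, exp

def gauss_cdf(z):
    """Standard normal CDF, applied elementwise."""
    return np.vectorize(lambda v: 0.5 * (1.0 + erf(v / 1.4142135623730951)))(z)

def poisson_ppf(u, mu, kmax=100):
    """Poisson inverse CDF: smallest k with CDF(k) >= u."""
    pmf = np.empty(kmax + 1)
    pmf[0] = exp(-mu)
    for k in range(1, kmax + 1):
        pmf[k] = pmf[k - 1] * mu / k   # recurrence p(k) = p(k-1) * mu / k
    return np.searchsorted(np.cumsum(pmf), u)

rng = np.random.default_rng(0)
n, rho = 5000, 0.7   # illustrative sample size and latent correlation

# 1. Latent bivariate gaussian carries the dependence structure.
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
# 2. Copula step: map each margin to Uniform(0, 1).
u = gauss_cdf(z)
# 3. Arbitrary (non-gaussian) marginals via inverse CDFs.
counts_a = poisson_ppf(u[:, 0], mu=3.0)
counts_b = poisson_ppf(u[:, 1], mu=12.0)

r = np.corrcoef(counts_a, counts_b)[0, 1]   # dependence survives the Poisson margins
```

The two count variables keep their exact Poisson marginals while inheriting dependence from the latent Gaussian, which is the flexibility the abstract argues plain linear correlation on non-elliptical distributions cannot provide.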
Affiliation(s)
- Rick L Jenison
- Departments of Psychology and Physiology and the Waisman Center, University of Wisconsin-Madison, Madison, WI 53706, USA.
23
Abstract
We evaluated the spatial selectivity of auditory cortical neurons in awake cats. Single- and multiunit activity was recorded in primary auditory cortex as the animals performed a nonspatial auditory discrimination or sat idly. Their heads were unrestrained, and head position was tracked. Broadband sounds were delivered from locations throughout 360 degrees on the horizontal plane, and source locations were expressed in head-centered coordinates. As in anesthetized animals, the firing rates of most units were modulated by sound location, and most units responded best to sounds in the contralateral hemifield. Tuning was sharper than in anesthetized cats, in part because of suppression at nonoptimal locations. Nonetheless, spatial receptive fields typically spanned 150-180 degrees. Units exhibited diverse temporal response patterns that often depended on sound location. An information-theoretic analysis showed that information transmission was reduced by approximately 10% when the precision of spike timing was disrupted by 16-32 msec, and by approximately 50% when all location-related variation of spike timing was removed. Spikes occurring within 60 msec of stimulus onset transmitted the most location-related information, but later spikes also carried information. The amount of information transmitted by ensembles of units increased with the number of units, indicating some degree of mutual independence in the spatial information transmitted by various units. Spatial tuning and information transmission were changed little by an increase in sound level of 20-30 dB. For the vast majority of units, receptive fields showed no significant change with the cat's head position or level of participation in the auditory task.
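The jitter manipulation in that information-theoretic analysis can be sketched as follows, with an invented latency code for location and a toy plug-in information estimate rather than the study's actual data or estimator:

```python
import numpy as np

def mi_bits(s, r):
    """Plug-in estimate of transmitted information I(S; R) in bits."""
    sv, si = np.unique(s, return_inverse=True)
    rv, ri = np.unique(r, return_inverse=True)
    p = np.zeros((sv.size, rv.size))
    np.add.at(p, (si, ri), 1)
    p /= p.sum()
    indep = p.sum(1, keepdims=True) * p.sum(0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / indep[nz])))

rng = np.random.default_rng(0)
n = 6000
loc = rng.integers(0, 8, n)                      # 8 hypothetical source locations
latency = 20 + 4 * loc + rng.normal(0, 2, n)     # location coded in spike timing (ms)

bin_ms = 8                                       # read the code out at 8 ms precision
i_clean = mi_bits(loc, np.round(latency / bin_ms))
# disrupt spike-timing precision with 16 ms of added jitter
i_jittered = mi_bits(loc, np.round((latency + rng.normal(0, 16, n)) / bin_ms))
```

As in the analysis above, degrading timing precision on the order of the jitter scale removes a large share of the location-related information that the intact temporal pattern carried.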