51.
Processing of natural sounds: characterization of multipeak spectral tuning in human auditory cortex. J Neurosci 2013; 33:11888-98. [PMID: 23864678] [DOI: 10.1523/jneurosci.5306-12.2013]
Abstract
We examine the mechanisms by which the human auditory cortex processes the frequency content of natural sounds. Through mathematical modeling of ultra-high field (7 T) functional magnetic resonance imaging responses to natural sounds, we derive frequency-tuning curves of cortical neuronal populations. With a data-driven analysis, we divide the auditory cortex into five spatially distributed clusters, each characterized by a spectral tuning profile. Beyond neuronal populations with simple single-peaked spectral tuning (grouped into two clusters), we observe that ∼60% of auditory populations are sensitive to multiple frequency bands. Specifically, we observe sensitivity to multiple frequency bands (1) at exactly one octave distance from each other, (2) at multiple harmonically related frequency intervals, and (3) with no apparent relationship to each other. We propose that beyond the well known cortical tonotopic organization, multipeaked spectral tuning amplifies selected combinations of frequency bands. Such selective amplification might serve to detect behaviorally relevant and complex sound features, aid in segregating auditory scenes, and explain prominent perceptual phenomena such as octave invariance.
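A tuning profile with multiple octave-related peaks, as described above, can be caricatured as a sum of Gaussian bumps on a log2-frequency axis. This is an illustrative sketch only; the function name and all parameter values are assumptions, not taken from the study.

```python
import numpy as np

def multipeak_tuning(freqs_hz, peak_hz, n_peaks=3, octave_spacing=1.0, bw_oct=0.3):
    """Toy multipeak spectral tuning curve: Gaussian peaks on a log2-frequency
    axis, spaced octave_spacing octaves apart starting at peak_hz."""
    x = np.log2(np.asarray(freqs_hz, dtype=float))
    centers = np.log2(peak_hz) + octave_spacing * np.arange(n_peaks)
    # Sum the Gaussian bumps; the response is maximal at each octave-related peak.
    return sum(np.exp(-0.5 * ((x - c) / bw_oct) ** 2) for c in centers)

freqs = np.linspace(200, 4000, 2000)
resp = multipeak_tuning(freqs, peak_hz=440.0)  # peaks near 440, 880, and 1760 Hz
```

Such a profile responds strongly to a tone and its octave transpositions while remaining only weakly driven by intermediate frequencies.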
52.
Abstract
After hearing a tone, the human auditory system becomes more sensitive to similar tones than to other tones. Current auditory models explain this phenomenon by a simple bandpass attention filter. Here, we demonstrate that auditory attention involves multiple pass-bands around octave-related frequencies above and below the cued tone. Intriguingly, this "octave effect" not only occurs for physically presented tones, but even persists for the missing fundamental in complex tones, and for imagined tones. Our results suggest neural interactions combining octave-related frequencies, likely located in nonprimary cortical regions. We speculate that this connectivity scheme evolved from exposure to natural vibrations containing octave-related spectral peaks, e.g., as produced by vocal cords.
53.
Aksentijevic A, Northeast A, Canty D, Elliott MA. The oscillatory entrainment of virtual pitch perception. Front Psychol 2013; 4:210. [PMID: 23630515] [PMCID: PMC3635022] [DOI: 10.3389/fpsyg.2013.00210]
Abstract
Evidence suggests that synchronized brain oscillations in the low gamma range (around 33 Hz) are involved in the perceptual integration of harmonic complex tones. This process involves the binding of harmonic components into “harmonic templates” – neural structures responsible for pitch coding in the brain. We investigated the hypothesis that oscillatory harmonic binding promotes a change in pitch perception style from spectral (frequency) to virtual (relational). Using oscillatory priming, we asked 24 participants to judge, as rapidly as possible, the direction of an ambiguous target with an ascending spectral and a descending virtual contour. They made significantly more virtual responses when primed at 29, 31, and 33 Hz and when the first target tone was harmonically related to the prime, suggesting that neural synchronization in the low gamma range could facilitate a shift toward virtual pitch processing.
54.
Abstract
Receptive fields (RFs) of neurons in primary visual cortex have traditionally been subdivided into two major classes: "simple" and "complex" cells. Simple cells were originally defined by the existence of segregated subregions within their RF that respond to either the on- or offset of a light bar and by spatial summation within each of these regions, whereas complex cells had ON and OFF regions that were coextensive in space [Hubel DH, et al. (1962) J Physiol 160:106-154]. Although other definitions based on the linearity of response modulation have been proposed later [Movshon JA, et al. (1978) J Physiol 283:53-77; Skottun BC, et al. (1991) Vision Res 31(7-8):1079-1086], the segregation of ON and OFF subregions has remained an important criterion for the distinction between simple and complex cells. Here we report that response profiles of neurons in primary auditory cortex of monkeys show a similar distinction: one group of cells has segregated ON and OFF subregions in frequency space; and another group shows ON and OFF responses within largely overlapping response profiles. This observation is intriguing for two reasons: (i) spectrotemporal dissociation in the auditory domain provides a basic neural mechanism for the segregation of sounds, a fundamental prerequisite for auditory figure-ground discrimination; and (ii) the existence of similar types of RF organization in visual and auditory cortex would support the existence of a common canonical processing algorithm within cortical columns.
55.
Abstract
The mammalian neocortex is a six-layered structure organized into radial columns. Within sensory cortical areas, information enters in the thalamorecipient layer and is further processed in supragranular and infragranular layers. Within the neocortex, topographic maps of stimulus features are present, but whether topographic patterns of active neurons change between laminae is unknown. Here, we used in vivo two-photon Ca(2+) imaging to probe the organization of the mouse primary auditory cortex and show that the spatial organization of neural response properties (frequency tuning) within the thalamorecipient layer (L3b/4) is more homogeneous than in supragranular layers (L2/3). Moreover, stimulus-related correlations between pairs of neurons are higher in the thalamorecipient layer, whereas stimulus-independent trial-to-trial covariance is higher in supragranular neurons. These findings reveal a transformation of sensory representations that occurs between layers within the auditory cortex, which could generate sequentially more complex analysis of the acoustic scene incorporating a broad range of spectrotemporal sound features.
56.

57.
Pasley BN, Knight RT. Decoding speech for understanding and treating aphasia. Prog Brain Res 2013; 207:435-56. [PMID: 24309265] [DOI: 10.1016/b978-0-444-63327-9.00018-7]
Abstract
Aphasia is an acquired language disorder with a diverse set of symptoms that can affect virtually any linguistic modality across both the comprehension and production of spoken language. Partial recovery of language function after injury is common but typically incomplete. Rehabilitation strategies focus on behavioral training to induce plasticity in underlying neural circuits to maximize linguistic recovery. Understanding the different neural circuits underlying diverse language functions is a key to developing more effective treatment strategies. This chapter discusses a systems identification analytic approach to the study of linguistic neural representation. The focus of this framework is a quantitative, model-based characterization of speech and language neural representations that can be used to decode, or predict, speech representations from measured brain activity. Recent results of this approach are discussed in the context of applications to understanding the neural basis of aphasia symptoms and the potential to optimize plasticity during the rehabilitation process.
Collapse
Affiliation(s)
- Brian N Pasley
- Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, CA, USA.
| | | |
Collapse
58.
Rajan R, Dubaj V, Reser DH, Rosa MGP. Auditory cortex of the marmoset monkey - complex responses to tones and vocalizations under opiate anaesthesia in core and belt areas. Eur J Neurosci 2012; 37:924-41. [PMID: 23278961] [DOI: 10.1111/ejn.12092]
Abstract
Many anaesthetics commonly used in auditory research severely depress cortical responses, particularly in the supragranular layers of the primary auditory cortex and in non-primary areas. This is particularly true when stimuli other than simple tones are presented. Although awake preparations allow better preservation of the neuronal responses, there is an inherent limitation to this approach whenever the physiological data need to be combined with histological reconstruction or anatomical tracing. Here we tested the efficacy of an opiate-based anaesthetic regime to study physiological responses in the primary auditory cortex and middle lateral belt area. Adult marmosets were anaesthetized using a combination of sufentanil (8 μg/kg/h, i.v.) and N2O (70%). Unit activity was recorded throughout the cortical layers, in response to auditory stimuli presented binaurally. Stimuli consisted of a battery of tones presented at different intensities, as well as two marmoset calls ('Tsik' and 'Twitter'). In addition to robust monotonic and non-monotonic responses to tones, we found that the neuronal activity reflected various aspects of the calls, including 'on' and 'off' components, and temporal fluctuations. Both phasic and tonic activities, as well as excitatory and inhibitory components, were observed. Furthermore, a late component (100-250 ms post-offset) was apparent. Our results indicate that the sufentanil/N2O combination allows better preservation of response patterns in both the core and belt auditory cortex, in comparison with anaesthetics usually employed in auditory physiology. This anaesthetic regime holds promise in enabling the physiological study of complex auditory responses in acute preparations, combined with detailed anatomical and histological investigation.
Affiliation: Ramesh Rajan, Department of Physiology, Monash University, Clayton, Vic. 3800, Australia.
59.
Remington ED, Osmanski MS, Wang X. An operant conditioning method for studying auditory behaviors in marmoset monkeys. PLoS One 2012; 7:e47895. [PMID: 23110123] [PMCID: PMC3480461] [DOI: 10.1371/journal.pone.0047895]
Abstract
The common marmoset (Callithrix jacchus) is a small New World primate that has increasingly been used as a non-human model in the fields of sensory, motor, and cognitive neuroscience. However, little knowledge exists regarding behavioral methods in this species. Developing an understanding of the neural basis of perception and cognition in an animal model requires measurement of both brain activity and behavior. Here we describe an operant conditioning behavioral training method developed to allow controlled psychoacoustic measurements in marmosets. We demonstrate that marmosets can be trained to consistently perform a Go/No-Go auditory task in which a subject licks at a feeding tube when it detects a sound. Correct responses result in delivery of a food reward. Crucially, this operant conditioning task generates little body movement and is well suited for pairing behavior with single-unit electrophysiology. Successful implementation of an operant conditioning behavior opens the door to a wide range of new studies in the field of auditory neuroscience using the marmoset as a model system.
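Detection performance in a Go/No-Go task of this kind is conventionally summarized with the sensitivity index d′, derived from hit and false-alarm rates. A minimal sketch; the helper name and the rate correction used here are illustrative, not from the paper's methods.

```python
from statistics import NormalDist

def dprime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' for a Go/No-Go detection task.
    Rates are nudged away from 0 and 1 so the z-transform stays finite."""
    n_go = hits + misses
    n_nogo = false_alarms + correct_rejections
    hit_rate = (hits + 0.5) / (n_go + 1.0)
    fa_rate = (false_alarms + 0.5) / (n_nogo + 1.0)
    z = NormalDist().inv_cdf  # inverse standard normal CDF
    return z(hit_rate) - z(fa_rate)
```

A subject at chance (equal hit and false-alarm rates) yields d′ = 0; better separation of Go and No-Go trials yields larger d′.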
Affiliation: Evan D Remington, Department of Biomedical Engineering, The Johns Hopkins University School of Medicine, Baltimore, Maryland, United States of America.
60.
Wang X, Walker KMM. Neural mechanisms for the abstraction and use of pitch information in auditory cortex. J Neurosci 2012; 32:13339-42. [PMID: 23015423] [PMCID: PMC3752151] [DOI: 10.1523/jneurosci.3814-12.2012]
Abstract
Experiments in animals have provided an important complement to human studies of pitch perception by revealing how the activity of individual neurons represents harmonic complex and periodic sounds. Such studies have shown that the acoustical parameters associated with pitch are represented by the spiking responses of neurons in A1 (primary auditory cortex) and various higher auditory cortical fields. The responses of these neurons are also modulated by the timbre of sounds. In marmosets, a distinct region on the low-frequency border of primary and non-primary auditory cortex may provide pitch tuning that generalizes across timbre classes.
Affiliation: Xiaoqin Wang, Tsinghua-Johns Hopkins Joint Center for Biomedical Engineering Research and Department of Biomedical Engineering, Tsinghua University, Beijing 100084, China.
61.
Searching for optimal stimuli: ascending a neuron's response function. J Comput Neurosci 2012; 33:449-73. [PMID: 22580579] [DOI: 10.1007/s10827-012-0395-7]
Abstract
Many methods used to analyze neuronal response assume that neuronal activity has a fundamentally linear relationship to the stimulus. However, some neurons are strongly sensitive to multiple directions in stimulus space and have a highly nonlinear response. It can be difficult to find optimal stimuli for these neurons. We demonstrate how successive linear approximations of neuronal response can effectively carry out gradient ascent and move through stimulus space towards local maxima of the response. We demonstrate search results for a simple model neuron and two models of a highly selective neuron.
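The idea of successive linear approximation can be sketched as follows: estimate the local gradient of the response function by probing small stimulus perturbations, then step uphill. The toy version below uses a central finite-difference gradient on a Gaussian-tuned model neuron; it is a sketch of the general approach, not the authors' algorithm, and all names and parameter values are made up.

```python
import numpy as np

def ascend_response(response_fn, s0, step=0.1, eps=1e-3, n_iters=200):
    """Hill-climb a (possibly nonlinear) response function by estimating the
    local gradient from responses to small stimulus perturbations."""
    s = np.asarray(s0, dtype=float)
    for _ in range(n_iters):
        grad = np.zeros_like(s)
        for i in range(s.size):
            d = np.zeros_like(s)
            d[i] = eps
            # Central difference approximates the partial derivative.
            grad[i] = (response_fn(s + d) - response_fn(s - d)) / (2 * eps)
        s = s + step * grad  # move uphill along the local linear approximation
    return s

# Toy "neuron" with Gaussian tuning peaked at the stimulus [1.0, -0.5].
peak = np.array([1.0, -0.5])
response = lambda s: float(np.exp(-np.sum((s - peak) ** 2)))
s_opt = ascend_response(response, s0=[0.0, 0.0])
```

Starting from a non-optimal stimulus, the search converges toward the stimulus that maximizes this model neuron's response; for neurons with multiple response peaks, it finds only a local maximum.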
62.
Fishman YI, Micheyl C, Steinschneider M. Neural mechanisms of rhythmic masking release in monkey primary auditory cortex: implications for models of auditory scene analysis. J Neurophysiol 2012; 107:2366-82. [PMID: 22323627] [DOI: 10.1152/jn.01010.2011]
Abstract
The ability to detect and track relevant acoustic signals embedded in a background of other sounds is crucial for hearing in complex acoustic environments. This ability is exemplified by a perceptual phenomenon known as "rhythmic masking release" (RMR). To demonstrate RMR, a sequence of tones forming a target rhythm is intermingled with physically identical "Distracter" sounds that perceptually mask the rhythm. The rhythm can be "released from masking" by adding "Flanker" tones in adjacent frequency channels that are synchronous with the Distracters. RMR represents a special case of auditory stream segregation, whereby the target rhythm is perceptually segregated from the background of Distracters when they are accompanied by the synchronous Flankers. The neural basis of RMR is unknown. Previous studies suggest the involvement of primary auditory cortex (A1) in the perceptual organization of sound patterns. Here, we recorded neural responses to RMR sequences in A1 of awake monkeys in order to identify neural correlates and potential mechanisms of RMR. We also tested whether two current models of stream segregation, when applied to these responses, could account for the perceptual organization of RMR sequences. Results suggest a key role for suppression of Distracter-evoked responses by the simultaneous Flankers in the perceptual restoration of the target rhythm in RMR. Furthermore, predictions of stream segregation models paralleled the psychoacoustics of RMR in humans. These findings reinforce the view that preattentive or "primitive" aspects of auditory scene analysis may be explained by relatively basic neural mechanisms at the cortical level.
Affiliation: Yonatan I Fishman, Department of Neurology, Albert Einstein College of Medicine, Kennedy Center, 1410 Pelham Parkway, Bronx, NY 10461, USA.
63.
Abstract
Combination sensitivity in central auditory neurons is a form of spectrotemporal integration in which excitatory responses to sounds at one frequency are facilitated by sounds within a distinctly different frequency band. Combination-sensitive neurons respond selectively to acoustic elements of sonar echoes or social vocalizations. In mustached bats, this response property originates in high-frequency representations of the inferior colliculus (IC) and depends on low and high frequency-tuned glycinergic inputs. To identify the source of these inputs, we combined glycine immunohistochemistry with retrograde tract tracing. Tracers were deposited at high-frequency (>56 kHz), combination-sensitive recording sites in IC. Most glycine-immunopositive, retrogradely labeled cells were in ipsilateral ventral and intermediate nuclei of the lateral lemniscus (VNLL and INLL), with some double labeling in ipsilateral lateral and medial superior olivary nuclei (LSO and MSO). Generally, double-labeled cells were in expected high-frequency tonotopic areas, but some VNLL and INLL labeling appeared to be in low-frequency representations. To test whether these nuclei provide low frequency-tuned input to the high-frequency IC, we combined retrograde tracing from IC combination-sensitive sites with anterograde tracing from low frequency-tuned sites in the anteroventral cochlear nucleus (AVCN). Only VNLL and INLL contained retrogradely labeled cells near (≤50 μm) anterogradely labeled boutons. These cells likely receive excitatory low-frequency input from AVCN. Results suggest that combination-sensitive facilitation arises through convergence of high-frequency glycinergic inputs from VNLL, INLL, or MSO and low-frequency glycinergic inputs from VNLL or INLL. This work establishes an anatomical basis for spectrotemporal integration in the auditory midbrain and a functional role for monaural nuclei of the lateral lemniscus.
64.
Bartlett EL, Wang X. Correlation of neural response properties with auditory thalamus subdivisions in the awake marmoset. J Neurophysiol 2011; 105:2647-67. [PMID: 21411564] [PMCID: PMC3295207] [DOI: 10.1152/jn.00238.2010]
Abstract
As the information bottleneck of nearly all auditory input that reaches the cortex, the auditory thalamus serves as the basis for establishing auditory cortical processing streams. The functional organization of the primary and nonprimary subdivisions of the auditory thalamus is not well characterized, particularly in awake primates. We have recorded from neurons in the auditory thalamus of awake marmoset monkeys and tested their responses to tones, band-pass noise, and temporally modulated stimuli. We analyzed the spectral and temporal response properties of recorded neurons and correlated those properties with their locations in the auditory thalamus, thereby forming the basis for parallel output channels. Three medial geniculate body (MGB) subdivisions were identified and studied physiologically and anatomically, although other medial subdivisions were also identified anatomically. Neurons in the ventral subdivision (MGV) were sharply tuned for frequency, preferred narrowband stimuli, and were able to synchronize to rapid temporal modulations. Anterodorsal subdivision (MGAD) neurons appeared well suited for temporal processing, responding similarly to tone or noise stimuli but able to synchronize to the highest modulation frequencies and with the highest temporal precision among MGB subdivisions. Posterodorsal subdivision (MGPD) neurons differed substantially from the other two subdivisions, with many neurons preferring broadband stimuli and signaling changes in modulation frequency with nonsynchronized changes in firing rate. Most neurons in all subdivisions responded to increases in tone sound level with nonmonotonic changes in firing rate. MGV and MGAD neurons exhibited responses consistent with provision of thalamocortical input to core regions, whereas MGPD neurons were consistent with provision of input to belt regions.
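Synchronization of spiking to a temporal modulation, as assessed above, is commonly quantified with the vector-strength metric (1 = perfect phase locking, near 0 = none). A minimal illustration with fabricated spike times; this is the standard metric, not code from the study.

```python
import math

def vector_strength(spike_times_s, mod_freq_hz):
    """Vector strength of spike-time phase locking to a periodic modulation:
    the mean resultant length of the unit phase vectors."""
    if not spike_times_s:
        return 0.0
    phases = [2 * math.pi * mod_freq_hz * t for t in spike_times_s]
    x = sum(math.cos(p) for p in phases) / len(phases)
    y = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(x, y)

locked = [i * 0.1 for i in range(20)]        # one spike per 10 Hz cycle, same phase
random_ish = [i * 0.013 for i in range(20)]  # no fixed phase relation to 10 Hz
```

A synchronized neuron (like the MGAD cells described above) yields a vector strength near 1 at the modulation frequency, whereas a nonsynchronized rate code yields a low value.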
Affiliation: Edward L Bartlett, Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA.
65.
Bendor D. Understanding how neural circuits measure pitch. J Neurosci 2011; 31:3141-2. [PMID: 21368024] [PMCID: PMC6623939] [DOI: 10.1523/jneurosci.6077-10.2011]
Affiliation: Daniel Bendor, Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA.
66.
Wenstrup JJ, Portfors CV. Neural processing of target distance by echolocating bats: functional roles of the auditory midbrain. Neurosci Biobehav Rev 2011; 35:2073-83. [PMID: 21238485] [DOI: 10.1016/j.neubiorev.2010.12.015]
Abstract
Using their biological sonar, bats estimate distance to avoid obstacles and capture moving prey. The primary distance cue is the delay between the bat's emitted echolocation pulse and the return of an echo. The mustached bat's auditory midbrain (inferior colliculus, IC) is crucial to the analysis of pulse-echo delay. IC neurons are selective for certain delays between frequency modulated (FM) elements of the pulse and echo. One role of the IC is to create these "delay-tuned", "FM-FM" response properties through a series of spectro-temporal integrative interactions. A second major role of the midbrain is to project target distance information to many parts of the brain. Pathways through auditory thalamus undergo radical reorganization to create highly ordered maps of pulse-echo delay in auditory cortex, likely contributing to perceptual features of target distance analysis. FM-FM neurons in IC also project strongly to pre-motor centers including the pretectum and the pontine nuclei. These pathways may contribute to rapid adjustments in flight, body position, and sonar vocalizations that occur as a bat closes in on a target.
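The underlying range computation is simple arithmetic: the echo travels out to the target and back, so distance equals the speed of sound times the pulse-echo delay, divided by two. A one-line sketch (the speed-of-sound constant is for air at roughly 20 degrees C; the function name is illustrative):

```python
SPEED_OF_SOUND_M_PER_S = 343.0  # air at ~20 degrees C

def target_distance_m(pulse_echo_delay_s, c=SPEED_OF_SOUND_M_PER_S):
    """Range from pulse-echo delay: the sound covers the distance twice."""
    return c * pulse_echo_delay_s / 2.0
```

A 10 ms delay thus corresponds to a target about 1.7 m away, which is why delay-tuned neurons effectively encode target distance.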
Affiliation: Jeffrey J Wenstrup, Department of Anatomy and Neurobiology, Northeastern Ohio Universities Colleges of Medicine and Pharmacy, 4209 State Route 44, Rootstown, OH 44272, United States.
67.
Tardif SD, Mansfield KG, Ratnam R, Ross CN, Ziegler TE. The marmoset as a model of aging and age-related diseases. ILAR J 2011; 52:54-65. [PMID: 21411858] [PMCID: PMC3775658] [DOI: 10.1093/ilar.52.1.54]
Abstract
The common marmoset (Callithrix jacchus) is poised to become a standard nonhuman primate aging model. With an average lifespan of 5 to 7 years and a maximum lifespan of 16½ years, marmosets are the shortest-lived anthropoid primates. They display age-related changes in pathologies that mirror those seen in humans, such as cancer, amyloidosis, diabetes, and chronic renal disease. They also display predictable age-related differences in lean mass, calf circumference, circulating albumin, hemoglobin, and hematocrit. Features of spontaneous sensory and neurodegenerative change--for example, reduced neurogenesis, β-amyloid deposition in the cerebral cortex, loss of calbindin D(28k) binding, and evidence of presbycusis--appear between the ages of 7 and 10 years. Variation among colonies in the age at which neurodegenerative change occurs suggests the interesting possibility that marmosets could be specifically managed to produce earlier versus later occurrence of degenerative conditions associated with differing rates of damage accumulation. In addition to the established value of the marmoset as a model of age-related neurodegenerative change, this primate can serve as a model of the integrated effects of aging and obesity on metabolic dysfunction, as it displays evidence of such dysfunction associated with high body weight as early as 6 to 8 years of age.
Affiliation: Suzette D Tardif, Barshop Institute for Longevity and Aging Studies, University of Texas Health Science Center at San Antonio, 15355 Lambda Drive, STCBM Bldg 2.200.08, San Antonio, TX 78245, USA.
68.
O'Connor KN, Yin P, Petkov CI, Sutter ML. Complex spectral interactions encoded by auditory cortical neurons: relationship between bandwidth and pattern. Front Syst Neurosci 2010; 4:145. [PMID: 21152347] [PMCID: PMC2998047] [DOI: 10.3389/fnsys.2010.00145]
Abstract
The focus of most research on auditory cortical neurons has concerned the effects of rather simple stimuli, such as pure tones or broad-band noise, or the modulation of a single acoustic parameter. Extending these findings to feature coding in more complex stimuli such as natural sounds may be difficult, however. Generalizing results from the simple to more complex case may be complicated by non-linear interactions occurring between multiple, simultaneously varying acoustic parameters in complex sounds. To examine this issue in the frequency domain, we performed a parametric study of the effects of two global features, spectral pattern (here ripple frequency) and bandwidth, on primary auditory (A1) neurons in awake macaques. Most neurons were tuned for one or both variables and most also displayed an interaction between bandwidth and pattern implying that their effects were conditional or interdependent. A spectral linear filter model was able to qualitatively reproduce the basic effects and interactions, indicating that a simple neural mechanism may be able to account for these interdependencies. Our results suggest that the behavior of most A1 neurons is likely to depend on multiple parameters, and so most are unlikely to respond independently or invariantly to specific acoustic features.
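A spectral linear filter model of the kind invoked above amounts to a weighted sum over the stimulus spectrum: a filter whose weights ripple at a given spectral frequency responds best to stimuli with a matching ripple. A toy sketch with made-up spectra and weights, not the authors' fitted model:

```python
import numpy as np

N_BINS = 64  # frequency bins spanning the toy spectrum

def ripple_spectrum(ripple_cyc, n_bins=N_BINS):
    """Toy ripple stimulus: sinusoidal spectral envelope with ripple_cyc
    cycles across the full band (non-negative energy per bin)."""
    k = np.arange(n_bins)
    return 1.0 + np.cos(2 * np.pi * ripple_cyc * k / n_bins)

def linear_filter_response(spectrum, weights):
    """Linear spectral filter: response is the weighted sum of spectral energy."""
    return float(np.dot(spectrum, weights))

# Filter weights rippling at 2 cycles prefer the matching ripple frequency.
weights = np.cos(2 * np.pi * 2.0 * np.arange(N_BINS) / N_BINS)
r_match = linear_filter_response(ripple_spectrum(2.0), weights)
r_mismatch = linear_filter_response(ripple_spectrum(5.0), weights)
```

Bandwidth effects can be probed the same way, by zeroing spectrum bins outside a band before taking the dot product; interdependence of pattern and bandwidth then emerges naturally because truncating the band changes which ripple cycles the filter sees.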
Affiliation: Kevin N O'Connor, Center for Neuroscience, University of California Davis, Davis, CA, USA.
69.
Spectral integration in primary auditory cortex attributable to temporally precise convergence of thalamocortical and intracortical input. J Neurosci 2010; 30:11114-27. [PMID: 20720119] [DOI: 10.1523/jneurosci.0689-10.2010]
Abstract
Primary sensory cortex integrates sensory information from afferent feedforward thalamocortical projection systems and convergent intracortical microcircuits. Both input systems have been demonstrated to provide different aspects of sensory information. Here we have used high-density recordings of laminar current source density (CSD) distributions in primary auditory cortex of Mongolian gerbils in combination with pharmacological silencing of cortical activity and analysis of the residual CSD, to dissociate the feedforward thalamocortical contribution and the intracortical contribution to spectral integration. We found a temporally highly precise integration of both types of inputs when the stimulation frequency was in close spectral neighborhood of the best frequency of the measurement site, in which the overlap between both inputs is maximal. Local intracortical connections provide both directly feedforward excitatory and modulatory input from adjacent cortical sites, which determine how concurrent afferent inputs are integrated. Through separate excitatory horizontal projections, terminating in cortical layers II/III, information about stimulus energy in greater spectral distance is provided even over long cortical distances. These projections effectively broaden spectral tuning width. Based on these data, we suggest a mechanism of spectral integration in primary auditory cortex that is based on temporally precise interactions of afferent thalamocortical inputs and different short- and long-range intracortical networks. The proposed conceptual framework allows integration of different and partly controversial anatomical and physiological models of spectral integration in the literature.
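One-dimensional CSD analysis of the kind used here estimates net transmembrane current as the negative second spatial derivative of the laminar LFP profile across equally spaced contacts. A minimal sketch; the spacing and conductivity values are illustrative, not the study's recording parameters.

```python
import numpy as np

def csd_1d(lfp_by_depth, spacing_mm=0.1, conductivity=0.3):
    """One-dimensional current source density: -sigma * d2(phi)/dz2,
    approximated by the second difference across electrode contacts.
    The two edge contacts are lost to the differencing."""
    phi = np.asarray(lfp_by_depth, dtype=float)
    d2 = phi[:-2] - 2.0 * phi[1:-1] + phi[2:]
    return -conductivity * d2 / spacing_mm ** 2

# A negative LFP deflection at one depth shows up as a current sink (CSD < 0).
profile = np.array([0.0, 0.0, -1.0, 0.0, 0.0])
est = csd_1d(profile)
```

In this toy profile, the deflected contact appears as a sink flanked by return sources, which is the laminar signature used to localize synaptic input.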
70.
Yavuzoglu A, Schofield BR, Wenstrup JJ. Substrates of auditory frequency integration in a nucleus of the lateral lemniscus. Neuroscience 2010; 169:906-19. [PMID: 20451586] [PMCID: PMC2904423] [DOI: 10.1016/j.neuroscience.2010.04.073]
Abstract
In the intermediate nucleus of the lateral lemniscus (INLL), some neurons display a form of spectral integration in which excitatory responses to sounds at their best frequency are inhibited by sounds within a frequency band at least one octave lower. Previous work showed that this response property depends on low-frequency-tuned glycinergic input. To identify all sources of inputs to these INLL neurons, and in particular the low-frequency glycinergic input, we combined retrograde tracing with immunohistochemistry for the neurotransmitter glycine. We deposited a retrograde tracer at recording sites displaying either high best frequencies (>75 kHz) in conjunction with combination-sensitive inhibition, or at sites displaying low best frequencies (23-30 kHz). Most retrogradely labeled cells were located in the ipsilateral medial nucleus of the trapezoid body (MNTB) and contralateral anteroventral cochlear nucleus. Consistent labeling, but in fewer numbers, was observed in the ipsilateral lateral nucleus of the trapezoid body (LNTB), contralateral posteroventral cochlear nucleus, and a few other brainstem nuclei. When tracer deposits were combined with glycine immunohistochemistry, most double-labeled cells were observed in the ipsilateral MNTB (84%), with fewer in LNTB (13%). After tracer deposits at combination-sensitive recording sites, a striking result was that MNTB labeling occurred in both medial and lateral regions. This labeling appeared to overlap the MNTB labeling that resulted from tracer deposits in low-frequency recording sites of INLL. These findings suggest that MNTB is the most likely source of low-frequency glycinergic input to INLL neurons with high best frequencies and combination-sensitive inhibition. This work establishes an anatomical basis for frequency integration in the auditory brainstem.
Affiliation(s)
- A Yavuzoglu
- Department of Anatomy and Neurobiology, Northeastern Ohio Universities Colleges of Medicine and Pharmacy, Rootstown, Ohio 44272, USA
71
Terashima H, Hosoya H. Sparse codes of harmonic sound and their interaction explain harmony-related response of auditory cortex. BMC Neurosci 2010. [PMCID: PMC3090787 DOI: 10.1186/1471-2202-11-s1-o19] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022] Open
72
Bizley JK, Walker KMM. Sensitivity and selectivity of neurons in auditory cortex to the pitch, timbre, and location of sounds. Neuroscientist 2010; 16:453-69. [PMID: 20530254 DOI: 10.1177/1073858410371009] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
We are able to rapidly recognize and localize the many sounds in our environment. We can describe any of these sounds in terms of various independent "features" such as their loudness, pitch, or position in space. However, we still know surprisingly little about how neurons in the auditory brain, specifically the auditory cortex, might form representations of these perceptual characteristics from the information that the ear provides about sound acoustics. In this article, the authors examine evidence that the auditory cortex is necessary for processing the pitch, timbre, and location of sounds, and document how neurons across multiple auditory cortical fields might represent these as trains of action potentials. They conclude by asking whether neurons in different regions of the auditory cortex are not simply sensitive to each of these three sound features, but selective for one of them. The few studies that have examined neural sensitivity to multiple sound attributes provide only limited support for neural selectivity within auditory cortex. Providing an explanation of the neural basis of feature invariance thus remains one of the major challenges for sensory neuroscience in attaining the ultimate goal of understanding how neural firing patterns in the brain give rise to perception.
Affiliation(s)
- Jennifer K Bizley
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom.
73
McDermott JH, Lehr AJ, Oxenham AJ. Individual differences reveal the basis of consonance. Curr Biol 2010; 20:1035-41. [PMID: 20493704 DOI: 10.1016/j.cub.2010.04.019] [Citation(s) in RCA: 127] [Impact Index Per Article: 9.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2010] [Revised: 04/07/2010] [Accepted: 04/08/2010] [Indexed: 11/15/2022]
Abstract
Some combinations of musical notes are consonant (pleasant), whereas others are dissonant (unpleasant), a distinction central to music. Explanations of consonance in terms of acoustics, auditory neuroscience, and enculturation have been debated for centuries. We utilized individual differences to distinguish the candidate theories. We measured preferences for musical chords as well as nonmusical sounds that isolated particular acoustic factors, specifically the beating and the harmonic relationships between frequency components, two factors that have long been thought to potentially underlie consonance. Listeners preferred stimuli without beats and with harmonic spectra, but across more than 250 subjects, only the preference for harmonic spectra was consistently correlated with preferences for consonant over dissonant chords. Harmonicity preferences were also correlated with the number of years subjects had spent playing a musical instrument, suggesting that exposure to music amplifies preferences for harmonic frequencies because of their musical importance. Harmonic spectra are prominent features of natural sounds, and our results indicate that they also underlie the perception of consonance.
Affiliation(s)
- Josh H McDermott
- Center for Neural Science, New York University, New York, NY 10003, USA.
74
Cortical encoding of pitch: recent results and open questions. Hear Res 2010; 271:74-87. [PMID: 20457240 PMCID: PMC3098378 DOI: 10.1016/j.heares.2010.04.015] [Citation(s) in RCA: 32] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/03/2009] [Revised: 04/30/2010] [Accepted: 04/30/2010] [Indexed: 11/16/2022]
Abstract
It is widely appreciated that the key predictor of the pitch of a sound is its periodicity. Neural structures which support pitch perception must therefore be able to reflect the repetition rate of a sound, but this alone is not sufficient. Since pitch is a psychoacoustic property, a putative cortical code for pitch must also be able to account for the relationship between the amount to which a sound is periodic (i.e. its temporal regularity) and the perceived pitch salience, as well as limits in our ability to detect pitch changes or to discriminate rising from falling pitch. Pitch codes must also be robust in the presence of nuisance variables such as loudness or timbre. Here, we review a large body of work on the cortical basis of pitch perception, which illustrates that the distribution of cortical processes that give rise to pitch perception is likely to depend on both the acoustical features and functional relevance of a sound. While previous studies have greatly advanced our understanding, we highlight several open questions regarding the neural basis of pitch perception. These questions can begin to be addressed through a cooperation of investigative efforts across species and experimental techniques, and, critically, by examining the responses of single neurons in behaving animals.
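The review's premise that periodicity is the key predictor of pitch can be made concrete with a toy autocorrelation pitch estimator. This is our own minimal sketch, not code from the paper; the function name, sampling rate, and stimulus are illustrative assumptions.

```python
import numpy as np

def estimate_pitch(signal, sr, fmin=50.0, fmax=1000.0):
    """Return the frequency (Hz) of the strongest periodicity in `signal`."""
    sig = signal - signal.mean()
    ac = np.correlate(sig, sig, mode="full")[len(sig) - 1:]  # lags >= 0
    lo, hi = int(sr / fmax), int(sr / fmin)                  # lag search range
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

sr = 16000
t = np.arange(int(0.1 * sr)) / sr
# A 200 Hz complex tone built only from harmonics 2-4: the fundamental is
# physically absent, yet the autocorrelation still peaks at a 1/200 s lag.
tone = sum(np.sin(2 * np.pi * 200 * h * t) for h in (2, 3, 4))
print(round(estimate_pitch(tone, sr)))  # -> 200
```

A repetition-rate code of this kind recovers the "missing fundamental", but, as the abstract notes, it says nothing by itself about pitch salience or discrimination limits.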
75
Pienkowski M, Eggermont JJ. Intermittent exposure with moderate-level sound impairs central auditory function of mature animals without concomitant hearing loss. Hear Res 2010; 261:30-5. [DOI: 10.1016/j.heares.2009.12.025] [Citation(s) in RCA: 48] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/27/2009] [Revised: 12/16/2009] [Accepted: 12/18/2009] [Indexed: 11/25/2022]
76
Pienkowski M, Eggermont JJ. Nonlinear cross-frequency interactions in primary auditory cortex spectrotemporal receptive fields: a Wiener-Volterra analysis. J Comput Neurosci 2010; 28:285-303. [PMID: 20072806 DOI: 10.1007/s10827-009-0209-8] [Citation(s) in RCA: 13] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2009] [Revised: 12/11/2009] [Accepted: 12/22/2009] [Indexed: 11/28/2022]
Abstract
The effects of nonlinear interactions between different sound frequencies on the responses of neurons in primary auditory cortex (AI) have only been investigated using two-tone paradigms. Here we stimulated with relatively dense, Poisson-distributed trains of tone pips (with frequency ranges spanning five octaves, 16 frequencies/octave, and mean rates of 20 or 120 pips/s), and examined within-frequency (or auto-frequency) and cross-frequency interactions in three types of AI unit responses by computing second-order "Poisson-Wiener" auto- and cross-kernels. Units were classified on the basis of their spectrotemporal receptive fields (STRFs) as "double-peaked", "single-peaked" or "peak-valley". Second-order interactions were investigated between the two bands of excitatory frequencies on double-peaked STRFs, between an excitatory band and various non-excitatory bands on single-peaked STRFs, and between an excitatory band and an inhibitory sideband on peak-valley STRFs. We found that auto-frequency interactions (i.e., those within a single excitatory band) were always characterized by a strong depression of (first-order) excitation that decayed with the interstimulus lag up to approximately 200 ms. That depression was weaker in cross-frequency than in auto-frequency interactions for approximately 25% of double-peaked STRFs, evidence of "combination sensitivity" for the two bands. Non-excitatory and inhibitory frequencies (on single-peaked and peak-valley STRFs, respectively) typically weakly depressed the excitatory response at short interstimulus lags (<50 ms), but weakly facilitated it at longer lags (approximately 50-200 ms). Both the depression and especially the facilitation were stronger for interactions with inhibitory frequencies than with merely non-excitatory ones. Finally, facilitation in single-peaked and peak-valley units decreased with increasing stimulus density. Our results indicate that the strong combination sensitivity and cross-frequency facilitation suggested by previous two-tone-paradigm studies are much less pronounced when more temporally dense stimuli are used.
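The kernel idea can be illustrated with a deliberately simplified sketch (ours, not the authors' Poisson-Wiener estimator): a second-order cross-kernel approximated by spike-triggered averaging of products of two mean-subtracted pip trains. A model unit that fires only when pips in the two bands coincide yields a kernel peak at zero lag in both bands, the signature of combination sensitivity.

```python
import numpy as np

def second_order_cross_kernel(stim_a, stim_b, spikes, max_lag):
    """k2[i, j] ~ mean over spikes of a(t - i) * b(t - j), stimuli mean-subtracted."""
    a = stim_a - stim_a.mean()
    b = stim_b - stim_b.mean()
    k2 = np.zeros((max_lag, max_lag))
    spike_times = np.nonzero(spikes)[0]
    spike_times = spike_times[spike_times >= max_lag]
    for t in spike_times:
        seg_a = a[t - max_lag + 1:t + 1][::-1]   # lags 0..max_lag-1, band A
        seg_b = b[t - max_lag + 1:t + 1][::-1]   # lags 0..max_lag-1, band B
        k2 += np.outer(seg_a, seg_b)
    return k2 / max(len(spike_times), 1)

rng = np.random.default_rng(1)
n = 20000
pips_low = (rng.random(n) < 0.1).astype(float)    # low-frequency pip train
pips_high = (rng.random(n) < 0.1).astype(float)   # high (BF) pip train
spikes = pips_low * pips_high                     # coincidence-detecting model unit
k2 = second_order_cross_kernel(pips_low, pips_high, spikes, max_lag=10)
idx = np.unravel_index(np.argmax(k2), k2.shape)
print(int(idx[0]), int(idx[1]))                   # -> 0 0: facilitation at coincidence
```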
Affiliation(s)
- Martin Pienkowski
- Department of Physiology and Pharmacology, University of Calgary, Calgary, AB, Canada
77
Moeller CK, Kurt S, Happel MFK, Schulze H. Long-range effects of GABAergic inhibition in gerbil primary auditory cortex. Eur J Neurosci 2009; 31:49-59. [PMID: 20092555 DOI: 10.1111/j.1460-9568.2009.07039.x] [Citation(s) in RCA: 26] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
Abstract
Throughout the literature, the effects of iontophoretically applied neurotransmitter agonists or antagonists on the local activity of neurons are typically studied at the site of drug application. Recently, we have demonstrated long-range inhibitory interactions within the primary auditory cortex (AI) that are effective in complex acoustic situations. To further characterize this long-range functional connectivity, we here report the effects of the inhibitory neurotransmitter gamma-aminobutyric acid (GABA) and the GABA(A) antagonist gabazine (SR 95531) on neuronal activity as a function of distance from the application site reaching beyond the diffusion radius of the applied drug. Neuronal responses to pure tone stimulation were simultaneously recorded at the application site and four additional sites, at distances between 300 and 1350 μm from the application site. We found that whereas application of GABA during best frequency (BF) stimulation in general led to a decrease, and gabazine to an increase, in neuronal activity at the application site, a considerable number of units at remote recording sites showed effects opposite to these local, drug-induced effects. These effects were seen both in spiking activity and in amplitudes of local field potentials. At all locations, the effects varied as a function of pure tone stimulation frequency, pointing to a Mexican-hat-like input function resulting from thalamic inputs to the BF region of the cortical neurons and intracortical interconnections projecting to off-BF regions of the neurons. These data demonstrate the existence of long-range, inhibitory interactions within the gerbil AI, realized either by long-range inhibitory projections or by long-range excitatory projections to local inhibitory interneurons.
Affiliation(s)
- Christoph K Moeller
- Experimental Otolaryngology, University of Erlangen-Nuremberg, Waldstr. 1, 91054 Erlangen, Germany
78
Peterson DC, Nataraj K, Wenstrup J. Glycinergic inhibition creates a form of auditory spectral integration in nuclei of the lateral lemniscus. J Neurophysiol 2009; 102:1004-16. [PMID: 19515958 DOI: 10.1152/jn.00040.2009] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
For analyses of complex sounds, many neurons integrate information across different spectral elements via suppressive effects that are distant from the neurons' excitatory tuning. In the mustached bat, suppression evoked by sounds within the first sonar harmonic (23-30 kHz) or in the subsonar band (<23 kHz) alters responsiveness to the higher best frequencies of many neurons. This study examined features and mechanisms associated with low-frequency (LF) suppression among neurons of the lateral lemniscal nuclei (NLL). We obtained extracellular recordings from neurons in the intermediate and ventral nuclei of the lateral lemniscus, observing different forms of LF suppression related to the two above-cited frequency bands. To understand the mechanisms underlying this suppression in NLL neurons, we examined the roles of glycinergic and GABAergic input through local microiontophoretic application of strychnine, an antagonist to glycine receptors (GlyRs), or bicuculline, an antagonist to gamma-aminobutyric acid type A receptors (GABA(A)Rs). With blockade of GABA(A)Rs, neurons showed an increase in firing rate to best frequency (BF) and/or LF tones but retained LF suppression of BF sounds. For neurons that displayed LF suppression tuned to 23-30 kHz, the suppression was eliminated or nearly eliminated by GlyR blockade. In contrast, GABA(A)R blockade neither eliminated nor had any consistent effect on suppression tuned to these frequencies. We conclude that LF suppression tuned in the 23- to 30-kHz range results from neuronal inhibition within the NLL via glycinergic inputs. For neurons displaying suppression tuned <23 kHz, neither GlyR nor GABA(A)R blockade altered LF suppression. We conclude that such suppression originates at a lower auditory level, perhaps as a result of cochlear mechanisms. These findings demonstrate that neuronal interactions within NLL create a particular form of LF suppression that contributes to the analysis of complex acoustic signals.
Affiliation(s)
- Diana Coomes Peterson
- Department of Anatomy and Neurobiology, Northeastern Ohio Universities College of Medicine, Rootstown, Ohio 44272, USA
79
Gans D, Sheykholeslami K, Peterson DC, Wenstrup J. Temporal features of spectral integration in the inferior colliculus: effects of stimulus duration and rise time. J Neurophysiol 2009; 102:167-80. [PMID: 19403742 DOI: 10.1152/jn.91300.2008] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
This report examines temporal features of facilitation and suppression that underlie spectrally integrative responses to complex vocal signals. Auditory responses were recorded from 160 neurons in the inferior colliculus (IC) of awake mustached bats. Sixty-two neurons showed combination-sensitive facilitation: responses to best frequency (BF) signals were facilitated by well-timed signals at least an octave lower in frequency, in the range 16-31 kHz. Temporal features and strength of facilitation were generally unaffected by changes in duration of facilitating signals from 4 to 31 ms. Changes in stimulus rise time from 0.5 to 5.0 ms had little effect on facilitatory strength. These results suggest that low frequency facilitating inputs to high BF neurons have phasic-on temporal patterns and are responsive to stimulus rise times over the tested range. We also recorded from 98 neurons showing low-frequency (11-32 kHz) suppression of higher BF responses. Effects of changing duration were related to the frequency of suppressive signals. Signals <23 kHz usually evoked suppression sustained throughout signal duration. This and other features of such suppression are consistent with a cochlear origin that results in masking of responses to higher, near-BF signal frequencies. Signals in the 23- to 30-kHz range (frequencies in the first sonar harmonic) generally evoked phasic suppression of BF responses. This may result from neural inhibitory interactions within and below IC. In many neurons, we observed two or more forms of the spectral interactions described here. Thus IC neurons display temporally and spectrally complex responses to sound that result from multiple spectral interactions at different levels of the ascending auditory pathway.
Affiliation(s)
- Donald Gans
- Department of Anatomy and Neurobiology, Northeastern Ohio University College of Medicine, 4209 State Route 44, PO Box 95, Rootstown, OH 44272, USA
80
Temporally dynamic frequency tuning of population responses in monkey primary auditory cortex. Hear Res 2009; 254:64-76. [PMID: 19389466 DOI: 10.1016/j.heares.2009.04.010] [Citation(s) in RCA: 43] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/05/2008] [Revised: 03/20/2009] [Accepted: 04/10/2009] [Indexed: 11/20/2022]
Abstract
Frequency tuning of auditory cortical neurons is typically determined by integrating spikes over the entire duration of a tone stimulus. However, this approach may mask functionally significant variations in tuning over the time course of the response. To explore this possibility, frequency response functions (FRFs) based on population multiunit activity evoked by pure tones of 175 or 200 ms duration were examined within four time windows relative to stimulus onset corresponding to "on" (10-30 ms), "early sustained" (30-100 ms), "late sustained" (100-175 ms), and "off" (185-235 or 210-260 ms) portions of responses in primary auditory cortex (A1) of 5 awake macaques. FRFs of "on" and "early sustained" responses displayed a good concordance, with best frequencies (BFs) differing, on average, by less than 0.25 octaves. In contrast, FRFs of "on" and "late sustained" responses differed considerably, with a mean difference in BF of 0.68 octaves. At many sites, tuning of "off" responses was inversely related to that of "on" responses, with "off" FRFs displaying a trough at the BF of "on" responses. Inversely correlated "on" and "off" FRFs were more common at sites with a higher "on" BF, thus suggesting functional differences between sites with low and high "on" BF. These results indicate that frequency tuning of population responses in A1 may vary considerably over the course of the response to a tone, thus revealing a temporal dimension to the representation of sound spectrum in A1.
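The windowed-FRF analysis can be sketched with synthetic spike counts. This is our own toy construction; the bin size, windows, and count values are illustrative assumptions, not the study's data.

```python
import numpy as np

def best_frequency(counts, freqs, window_ms, bin_ms=5):
    """Best frequency from an FRF computed within one post-onset time window.

    counts: (n_freqs, n_bins) spike counts; window_ms: (start, end) in ms."""
    b0, b1 = window_ms[0] // bin_ms, window_ms[1] // bin_ms
    frf = counts[:, b0:b1].sum(axis=1)          # FRF: spikes per frequency in window
    return float(freqs[int(np.argmax(frf))])

freqs = np.array([0.5, 1.0, 2.0, 4.0, 8.0])     # kHz, octave-spaced tones
counts = np.zeros((5, 50), dtype=int)           # 50 bins of 5 ms = 250 ms
counts[2, 2:6] = 10                             # "on" burst driven by 2 kHz
counts[3, 20:35] = 6                            # late sustained firing driven by 4 kHz
on_bf = best_frequency(counts, freqs, (10, 30))
late_bf = best_frequency(counts, freqs, (100, 175))
print(on_bf, late_bf)                           # -> 2.0 4.0: BF shifts with the window
```

Integrating over the whole response would blur these two peaks together, which is exactly the masking of temporal tuning dynamics the abstract warns about.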
81
Abstract
Auditory perception depends on the coding and organization of the information-bearing acoustic features of sounds by auditory neurons. We report here that auditory neurons can be classified into functional groups, each of which plays a specific role in extracting distinct complex sound features. We recorded the electrophysiological responses of single auditory neurons in the songbird midbrain and forebrain to conspecific song, measured their tuning by calculating spectrotemporal receptive fields (STRFs), and classified them using multiple cluster analysis methods. Based on STRF shape, cells clustered into functional groups that divided the space of acoustical features into regions that represent cues for the fundamental acoustic percepts of pitch, timbre, and rhythm. Four major groups were found in the midbrain, and five major groups were found in the forebrain. Comparing STRFs in midbrain and forebrain neurons suggested that both inheritance and emergence of tuning properties occur as information ascends the auditory processing stream.
82
Terashima H, Hosoya H. Sparse codes of harmonic natural sounds and their modulatory interactions. NETWORK (BRISTOL, ENGLAND) 2009; 20:253-267. [PMID: 19919283 DOI: 10.3109/09548980903447751] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/28/2023]
Abstract
Sparse coding and related theories have been successful in explaining various response properties of early stages of sensory information processing, such as the primary visual cortex and the peripheral auditory system, which suggests that the emergence of such properties results from adaptation of the nervous system to natural stimuli. The present study continues this line of research in a higher stage of auditory processing, focusing on the harmonic structures that are often found in behaviourally important natural sounds such as animal vocalizations. It has been shown physiologically that monkey primary auditory cortex (A1) contains neurons with response properties capturing such harmonic structures: their response and modulation peaks are often found at frequencies that are harmonically related to each other. We hypothesize that such relations emerge from sparse coding of harmonic natural sounds. Our simulation shows that similar harmonic relations emerge from frequency-domain sparse codes of harmonic sounds, namely piano performance and human speech. Moreover, the modulatory behaviours can be explained by competitive interactions of model neurons that capture partially common harmonic structures.
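A minimal sketch of the frequency-domain sparse-coding idea (our own toy, not the authors' model; the dictionary, penalty, and iteration count are arbitrary assumptions): ISTA infers a sparse coefficient vector, and a harmonic input spectrum is captured by the single matching harmonic atom rather than spread across many atoms.

```python
import numpy as np

def ista(D, x, lam=0.1, n_iter=200):
    """Minimise 0.5*||x - D a||^2 + lam*||a||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(D, 2) ** 2           # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        a = a - grad / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(0)
n_bins = 64
D = np.abs(rng.normal(size=(n_bins, 10)))   # atoms 1..9: random spectra
D[:, 0] = 0.0
D[[4, 8, 12], 0] = 1.0                      # atom 0: harmonic stack (bins 4, 8, 12)
D /= np.linalg.norm(D, axis=0)              # unit-norm dictionary columns
x = D[:, 0] * 2.0                           # a purely harmonic input spectrum
a = ista(D, x)
print(int(np.argmax(np.abs(a))))            # -> 0: the harmonic atom wins the code
```

The competition between overlapping atoms in the L1-penalised solution is a crude analogue of the modulatory interactions between model neurons described in the abstract.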
Affiliation(s)
- Hiroki Terashima
- Department of Complexity Science and Engineering, The University of Tokyo, Japan.
83
Increasing spectrotemporal sound density reveals an octave-based organization in cat primary auditory cortex. J Neurosci 2008; 28:8885-96. [PMID: 18768682 DOI: 10.1523/jneurosci.2693-08.2008] [Citation(s) in RCA: 40] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Auditory neurons are likely adapted to process complex stimuli, such as vocalizations, which contain spectrotemporal modulations. However, basic properties of auditory neurons are often derived from tone pips presented in isolation, which lack spectrotemporal modulations. In this context, it is unclear how to deduce the functional role of auditory neurons from their tone pip-derived tuning properties. In this study, spectrotemporal receptive fields (STRFs) were obtained from responses to multi-tone stimulus ensembles differing in their average spectrotemporal density (i.e., number of tone pips per second). STRFs for different stimulus densities were derived from multiple single-unit activity (MUA) and local field potentials (LFPs), simultaneously recorded in primary auditory cortex of cats. Consistent with earlier studies, we found that the spectral bandwidth was narrower for MUA compared with LFPs. Both neural firing rate and LFP amplitude were reduced when the density of the stimulus ensemble increased. Surprisingly, we found that increasing the spectrotemporal sound density revealed with increasing clarity an over-representation of response peaks at frequencies of approximately 3, 5, 10, and 20 kHz, in both MUA- and LFP-derived STRFs. Although the decrease in spectral bandwidth and neural activity with increasing stimulus density can likely be accounted for by forward suppression, the mechanisms underlying the over-representation of the octave-spaced response peaks are unclear. Plausibly, the over-representation may be a functional correlate of the periodic pattern of corticocortical connections observed along the tonotopic axis of cat auditory cortex.
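As simple arithmetic on the peak frequencies quoted above (our check, using the approximate values ~3, 5, 10, and 20 kHz):

```python
import math

peaks_khz = [3, 5, 10, 20]
# Interval between successive over-represented peaks, in octaves (log2 of the ratio).
intervals = [round(math.log2(hi / lo), 2) for lo, hi in zip(peaks_khz, peaks_khz[1:])]
print(intervals)  # -> [0.74, 1.0, 1.0]: the 5, 10, and 20 kHz peaks are octave-spaced
```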
84
Yin P, Mishkin M, Sutter M, Fritz JB. Early stages of melody processing: stimulus-sequence and task-dependent neuronal activity in monkey auditory cortical fields A1 and R. J Neurophysiol 2008; 100:3009-29. [PMID: 18842950 DOI: 10.1152/jn.00828.2007] [Citation(s) in RCA: 43] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
To explore the effects of acoustic and behavioral context on neuronal responses in the core of auditory cortex (fields A1 and R), two monkeys were trained on a go/no-go discrimination task in which they learned to respond selectively to a four-note target (S+) melody and withhold response to a variety of other nontarget (S-) sounds. We analyzed evoked activity from 683 units in A1/R of the trained monkeys during task performance and from 125 units in A1/R of two naive monkeys. We characterized two broad classes of neural activity that were modulated by task performance. Class I consisted of tone-sequence-sensitive enhancement and suppression responses. Enhanced or suppressed responses to specific tonal components of the S+ melody were frequently observed in trained monkeys, but enhanced responses were rarely seen in naive monkeys. Both facilitatory and suppressive responses in the trained monkeys showed a temporal pattern different from that observed in naive monkeys. Class II consisted of nonacoustic activity, characterized by a task-related component that correlated with bar release, the behavioral response leading to reward. We observed a significantly higher percentage of both Class I and Class II neurons in field R than in A1. Class I responses may help encode a long-term representation of the behaviorally salient target melody. Class II activity may reflect a variety of nonacoustic influences, such as attention, reward expectancy, somatosensory inputs, and/or motor set and may help link auditory perception and behavioral response. Both types of neuronal activity are likely to contribute to the performance of the auditory task.
Affiliation(s)
- Pingbo Yin
- Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Bethesda, Maryland, USA
85
Bendor D, Wang X. Neural response properties of primary, rostral, and rostrotemporal core fields in the auditory cortex of marmoset monkeys. J Neurophysiol 2008; 100:888-906. [PMID: 18525020 DOI: 10.1152/jn.00884.2007] [Citation(s) in RCA: 162] [Impact Index Per Article: 10.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The core region of primate auditory cortex contains a primary and two primary-like fields (AI, primary auditory cortex; R, rostral field; RT, rostrotemporal field). Although it is reasonable to assume that multiple core fields provide an advantage for auditory processing over a single primary field, the differential roles these fields play and whether they form a functional pathway collectively such as for the processing of spectral or temporal information are unknown. In this report we compare the response properties of neurons in the three core fields to pure tones and sinusoidally amplitude modulated tones in awake marmoset monkeys (Callithrix jacchus). The main observations are as follows. (1) All three fields are responsive to spectrally narrowband sounds and are tonotopically organized. (2) Field AI responds more strongly to pure tones than fields R and RT. (3) Field RT neurons have lower best sound levels than those of neurons in fields AI and R. In addition, rate-level functions in field RT are more commonly nonmonotonic than in fields AI and R. (4) Neurons in fields RT and R have longer minimum latencies than those of field AI neurons. (5) Fields RT and R have poorer stimulus synchronization than that of field AI to amplitude-modulated tones. (6) Between the three core fields the more rostral regions (R and RT) have narrower firing-rate-based modulation transfer functions than that of AI. This effect was seen only for the nonsynchronized neurons. Synchronized neurons showed no such trend.
Affiliation(s)
- Daniel Bendor
- Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, The Johns Hopkins University, 720 Rutland Avenue, Traylor 410, Baltimore, MD 21205, USA
86
Peterson DC, Voytenko S, Gans D, Galazyuk A, Wenstrup J. Intracellular recordings from combination-sensitive neurons in the inferior colliculus. J Neurophysiol 2008; 100:629-45. [PMID: 18497365 DOI: 10.1152/jn.90390.2008] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
In vertebrate auditory systems, specialized combination-sensitive neurons analyze complex vocal signals by integrating information across multiple frequency bands. We studied combination-sensitive interactions in neurons of the inferior colliculus (IC) of awake mustached bats, using intracellular somatic recording with sharp electrodes. Facilitated combinatorial neurons are coincidence detectors, showing maximum facilitation when excitation from low- and high-frequency stimuli coincide. Previous work showed that facilitatory interactions originate in the IC, require both low and high frequency-tuned glycinergic inputs, and are independent of glutamatergic inputs. These results suggest that glycinergic inputs evoke facilitation through either postinhibitory rebound or direct depolarizing mechanisms. However, in 35 of 36 facilitated neurons, we observed no evidence of low frequency-evoked transient hyperpolarization or depolarization that was closely related to response facilitation. Furthermore, we observed no evidence of shunting inhibition that might conceal inhibitory inputs. Since these facilitatory interactions originate in IC neurons, the results suggest that inputs underlying facilitation are electrically segregated from the soma. We also recorded inhibitory combinatorial interactions, in which low frequency sounds suppress responses to higher frequency signals. In 43% of 118 neurons, we observed low frequency-evoked hyperpolarizations associated with combinatorial inhibition. For these neurons, we conclude that low frequency-tuned inhibitory inputs terminate on neurons primarily excited by high-frequency signals; these inhibitory inputs may create or enhance inhibitory combinatorial interactions. In the remainder of inhibited combinatorial neurons (57%), we observed no evidence of low frequency-evoked hyperpolarizations, consistent with observations that inhibitory combinatorial responses may originate in lateral lemniscal nuclei.
Affiliation(s)
- Diana Coomes Peterson
- Department of Neurobiology, Northeastern Ohio Universities, College of Medicine, Rootstown, Ohio 44272, USA
87
Kalluri S, Depireux DA, Shamma SA. Perception and cortical neural coding of harmonic fusion in ferrets. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2008; 123:2701-16. [PMID: 18529189 PMCID: PMC2677325 DOI: 10.1121/1.2902178] [Citation(s) in RCA: 15] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/11/2023]
Abstract
This study examined the perception and cortical representation of harmonic complex tones, from the perspective of the spectral fusion evoked by such sounds. Experiment 1 tested whether ferrets spontaneously distinguish harmonic from inharmonic tones. In baseline sessions, ferrets detected a pure tone terminating a sequence of inharmonic tones. After they reached proficiency, a small fraction of the inharmonic tones were replaced with harmonic tones. Some of the animals confused the harmonic tones with the pure tones at twice the false-alarm rate. Experiment 2 sought correlates of harmonic fusion in single neurons of primary auditory cortex and anterior auditory field, by comparing responses to harmonic tones with those to inharmonic tones in the awake alert ferret. The effects of spectro-temporal filtering were accounted for by using the measured spectrotemporal receptive field to predict responses and by seeking correlates of fusion in the predictability of responses. Only 12% of units sampled distinguished harmonic tones from inharmonic tones, a small percentage that is consistent with the relatively weak ability of the ferrets to spontaneously discriminate harmonic tones from inharmonic tones in Experiment 1.
Affiliation(s)
- Sridhar Kalluri
- Institute for Systems Research, University of Maryland, College Park, Maryland 20742, USA.
88
Sadagopan S, Wang X. Level invariant representation of sounds by populations of neurons in primary auditory cortex. J Neurosci 2008; 28:3415-26. [PMID: 18367608 PMCID: PMC6670591 DOI: 10.1523/jneurosci.2743-07.2008]
Abstract
A fundamental feature of auditory perception is the constancy of sound recognition over a large range of intensities. Although this invariance has been described in behavioral studies, the underlying neural mechanism is essentially unknown. Here we show a putative level-invariant representation of sounds by populations of neurons in primary auditory cortex (A1) that may provide a neural basis for the behavioral observations. Previous studies reported that pure-tone frequency tuning of most A1 neurons widens with increasing sound level. In sharp contrast, we found that a large proportion of neurons in A1 of awake marmosets were narrowly and separably tuned to both frequency and sound level. Tuning characteristics and firing rates of the neural population were preserved across all tested sound levels. These response properties lead to a level-invariant representation of sounds over the population of A1 neurons. Such a representation is an important step for robust feature recognition in natural environments.
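The population mechanism proposed here (units narrowly and separably tuned to both frequency and level) can be illustrated with a toy simulation. This is not the authors' analysis: the Gaussian tuning shapes, grid spacing, and widths below are all assumed, illustrative values.

```python
import numpy as np

# Toy population tiling log-frequency (octaves) x sound level (dB) with
# separable Gaussian tuning, i.e. nonmonotonic "O-shaped" response areas.
bf, bl = np.meshgrid(np.linspace(0, 4, 17), np.arange(0, 90, 5))
bf, bl = bf.ravel(), bl.ravel()
sigma_f, sigma_l = 0.3, 10.0          # assumed tuning widths (octaves, dB)

def population_response(freq, level):
    """Firing of every unit to a tone at (freq, level)."""
    return np.exp(-0.5 * ((freq - bf) / sigma_f) ** 2
                  - 0.5 * ((level - bl) / sigma_l) ** 2)

# Read out frequency as the best frequency of the most active unit. The
# decoded value does not drift with level, because level variation
# re-centers activity across level-tuned units rather than broadening
# each unit's frequency tuning.
decoded = [float(bf[population_response(2.0, L).argmax()]) for L in (20, 50, 80)]
print(decoded)   # [2.0, 2.0, 2.0]
```

In a population of level-monotonic, broadening units, the same readout would shift with level; separable tuning is what keeps it constant.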
Affiliation(s)
- Srivatsun Sadagopan
- Laboratory of Auditory Neurophysiology, Departments of Neuroscience and Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, Maryland 21205
- Xiaoqin Wang
- Laboratory of Auditory Neurophysiology, Departments of Neuroscience and Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, Maryland 21205
89

90
Christianson GB, Sahani M, Linden JF. The consequences of response nonlinearities for interpretation of spectrotemporal receptive fields. J Neurosci 2008; 28:446-55. [PMID: 18184787 PMCID: PMC6670552 DOI: 10.1523/jneurosci.1775-07.2007]
Abstract
Neurons in the central auditory system are often described by the spectrotemporal receptive field (STRF), conventionally defined as the best linear fit between the spectrogram of a sound and the spike rate it evokes. An STRF is often assumed to provide an estimate of the receptive field of a neuron, i.e., the spectral and temporal range of stimuli that affect the response. However, when the true stimulus-response function is nonlinear, the STRF will be stimulus dependent, and changes in the stimulus properties can alter estimates of the sign and spectrotemporal extent of receptive field components. We demonstrate analytically and in simulations that, even when uncorrelated stimuli are used, interactions between simple neuronal nonlinearities and higher-order structure in the stimulus can produce STRFs that show contributions from time-frequency combinations to which the neuron is actually insensitive. Only when spectrotemporally independent stimuli are used does the STRF reliably indicate features of the underlying receptive field, and even then it provides only a conservative estimate. One consequence of these observations, illustrated using natural stimuli, is that a stimulus-induced change in an STRF could arise from a consistent but nonlinear neuronal response to stimulus ensembles with differing higher-order dependencies. Thus, although the responses of higher auditory neurons may well involve adaptation to the statistics of different stimulus ensembles, stimulus dependence of STRFs alone, or indeed of any overly constrained stimulus-response mapping, cannot demonstrate the nature or magnitude of such effects.
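The STRF definition used here (the best linear fit from a spectrogram to the spike rate it evokes) is easy to state concretely. The sketch below is a minimal illustration under assumed parameters, not the paper's simulations: it builds a unit with a known receptive field plus a static half-wave-rectifying nonlinearity, fits the STRF by ridge-regularized least squares on a lagged design matrix, and recovers the true time-frequency bin in the benign case the paper identifies (spectrotemporally independent stimuli).

```python
import numpy as np

rng = np.random.default_rng(0)
n_t, n_f, n_lags = 20000, 6, 4        # time bins, frequency channels, STRF lags

# Spectrotemporally independent Gaussian "spectrogram" -- the benign case
# in which the STRF reliably reflects the underlying receptive field.
stim = rng.normal(size=(n_t, n_f))

# Ground-truth receptive field: sensitivity to channel 2 at a one-bin lag,
# followed by a static half-wave-rectifying output nonlinearity.
drive = np.roll(stim[:, 2], 1)
rate = np.maximum(drive, 0.0)

# STRF = best linear map from stimulus history to firing rate, fit by
# ridge-regularized least squares on a lagged design matrix.
X = np.column_stack([np.roll(stim, lag, axis=0) for lag in range(n_lags)])
w = np.linalg.solve(X.T @ X + 1e-3 * np.eye(X.shape[1]), X.T @ rate)
strf = w.reshape(n_lags, n_f)

# With independent Gaussian stimuli the estimate concentrates on the true
# time-frequency bin despite the nonlinearity (a Bussgang-type result).
peak = tuple(int(i) for i in np.unravel_index(np.abs(strf).argmax(), strf.shape))
print(peak)   # (1, 2)
```

Replacing `stim` with an ensemble that has higher-order cross-channel dependencies (e.g. natural sounds) is exactly where, per the paper, the fitted `strf` becomes stimulus-dependent and can show spurious structure.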
Affiliation(s)
- Jennifer F. Linden
- UCL Ear Institute
- Department of Anatomy and Developmental Biology, University College London, London, WC1E 6BT, United Kingdom
91
Bar-Yosef O, Nelken I. The effects of background noise on the neural responses to natural sounds in cat primary auditory cortex. Front Comput Neurosci 2007; 1:3. [PMID: 18946525 PMCID: PMC2525935 DOI: 10.3389/neuro.10.003.2007]
Abstract
Animal vocalizations in natural settings are invariably accompanied by an acoustic background with a complex statistical structure. We have previously demonstrated that neuronal responses in primary auditory cortex of halothane-anesthetized cats depend strongly on the natural background. Here, we study in detail the neuronal responses to the background sounds and their relationships to the responses to the foreground sounds. Natural bird chirps as well as modifications of these chirps were used. The chirps were decomposed into three components: the clean chirps, their echoes, and the background noise. The last two were weaker than the clean chirp by 13 and 29 dB on average, respectively. The test stimuli consisted of the full natural stimulus, the three basic components, and their three pairwise combinations. When the level of the background components (echoes and background noise) presented alone was sufficiently loud to evoke neuronal activity, these background components had an unexpectedly strong effect on the responses of the neurons to the main bird chirp. In particular, the responses to the original chirps were more similar on average to the responses evoked by the two background components than to the responses evoked by the clean chirp, both in terms of the evoked spike count and in terms of the temporal pattern of the responses. These results suggest that some of the neurons responded specifically to the acoustic background even when presented together with the substantially louder main chirp, and may imply that neurons in A1 already participate in auditory source segregation.
Affiliation(s)
- Omer Bar-Yosef
- Department of Pediatrics, Safra Children's Hospital, Sheba Medical Center, Israel.
92
Felix RA, Portfors CV. Excitatory, inhibitory and facilitatory frequency response areas in the inferior colliculus of hearing impaired mice. Hear Res 2007; 228:212-29. [PMID: 17412539 PMCID: PMC1950695 DOI: 10.1016/j.heares.2007.02.009]
Abstract
Individuals with age-related hearing loss often have difficulty understanding complex sounds such as speech. The C57BL/6 mouse suffers from progressive sensorineural hearing loss and thus is an effective tool for dissecting the neural mechanisms underlying changes in complex sound processing observed in humans. Neural mechanisms important for processing complex sounds include multiple tuning and combination sensitivity, and these responses are common in the inferior colliculus (IC) of normal hearing mice. We recorded neural responses in the IC of C57BL/6 mice to single tones and combinations of tones to examine the extent of spectral integration in the IC after age-related high frequency hearing loss. Ten percent of the neurons were tuned to multiple frequency bands and an additional 10% displayed non-linear facilitation to the combination of two different tones (combination sensitivity). No combination-sensitive inhibition was observed. By comparing these findings to spectral integration properties in the IC of normal hearing CBA/CaJ mice, we suggest that high frequency hearing loss affects some of the neural mechanisms in the IC that underlie the processing of complex sounds. The loss of spectral integration properties in the IC during aging likely impairs the central auditory system's ability to process complex sounds such as speech.
Affiliation(s)
- Richard A Felix
- School of Biological Sciences, Washington State University, 14204 NE Salmon Creek Avenue, Vancouver, WA 98686, United States
93
Roberts B, Holmes SD. Grouping and the pitch of a mistuned fundamental component: Effects of applying simultaneous multiple mistunings to the other harmonics. Hear Res 2006; 222:79-88. [PMID: 17055676 DOI: 10.1016/j.heares.2006.08.013]
Abstract
Mistuning a harmonic produces an exaggerated change in its pitch. This occurs because the component becomes inconsistent with the regular pattern that causes the other harmonics (constituting the spectral frame) to integrate perceptually. These pitch shifts were measured when the fundamental (F0) component of a complex tone (nominal F0 frequency = 200 Hz) was mistuned by +8% and -8%. The pitch-shift gradient was defined as the difference between these values and its magnitude was used as a measure of frame integration. An independent and random perturbation (spectral jitter) was applied simultaneously to most or all of the frame components. The gradient magnitude declined gradually as the degree of jitter increased from 0% to +/-40% of F0. The component adjacent to the mistuned target made the largest contribution to the gradient, but more distant components also contributed. The stimuli were passed through an auditory model, and the height of the F0-period peak in the averaged summary autocorrelation function correlated well with the gradient magnitude. The fit improved when the weighting on more distant channels was attenuated by a factor of three per octave. The results are consistent with a grouping mechanism that computes a weighted average of periodicity strength across several components.
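The modeling correlate used here, the height of the F0-period peak in a summary autocorrelation, can be illustrated with a bare-bones sketch. A single broadband autocorrelation stands in for the averaged multichannel summary ACF of a full auditory model, and the sample rate, duration, and harmonic count are illustrative choices, not the paper's.

```python
import numpy as np

fs, dur, f0 = 16000, 0.2, 200.0        # sample rate (Hz), duration (s), nominal F0
t = np.arange(0.0, dur, 1.0 / fs)

def complex_tone(mistune_f0=0.0, n_harm=10):
    """Harmonic complex in which only the fundamental component is mistuned."""
    sig = np.zeros_like(t)
    for h in range(1, n_harm + 1):
        f = f0 * (1 + mistune_f0) if h == 1 else f0 * h
        sig += np.sin(2 * np.pi * f * t)
    return sig

def acf(sig):
    """One-sided, unnormalized autocorrelation (stand-in for the summary ACF)."""
    full = np.correlate(sig, sig, mode="full")
    return full[len(sig) - 1:]

period = round(fs / f0)                # 80 samples at 200 Hz

in_tune = acf(complex_tone(0.0))
mistuned = acf(complex_tone(0.08))     # fundamental mistuned by +8%

# The F0-period peak shrinks once the fundamental no longer fits the frame.
print(in_tune[period] > mistuned[period])   # True
```

Applying independent jitter to the other harmonics, as in the experiments, would erode the same peak further, which is the modeled counterpart of the declining pitch-shift gradient.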
Affiliation(s)
- Brian Roberts
- School of Psychology, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK.
94
Bendor D, Wang X. Cortical representations of pitch in monkeys and humans. Curr Opin Neurobiol 2006; 16:391-9. [PMID: 16842992 PMCID: PMC4325365 DOI: 10.1016/j.conb.2006.07.001]
Abstract
Pitch perception is crucial for vocal communication, music perception, and auditory object processing in a complex acoustic environment. How pitch is represented in the cerebral cortex has long remained an unanswered question in auditory neuroscience. Several lines of evidence now point to a distinct non-primary region of auditory cortex in primates that contains a cortical representation of pitch.
Affiliation(s)
- Daniel Bendor
- Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
95
Noreña AJ, Gourévitch B, Aizawa N, Eggermont JJ. Spectrally enhanced acoustic environment disrupts frequency representation in cat auditory cortex. Nat Neurosci 2006; 9:932-9. [PMID: 16783369 DOI: 10.1038/nn1720]
Abstract
Sensory environments are known to shape nervous system organization. Here we show that passive long-term exposure to a spectrally enhanced acoustic environment (EAE) causes reorganization of the tonotopic map in juvenile cat auditory cortex without inducing any hearing loss. The EAE consisted of tone pips of 32 different frequencies (5-20 kHz), presented in random order at an average rate of 96 Hz. The EAE caused a strong reduction of the representation of EAE frequencies and an over-representation of frequencies neighboring those of the EAE. This is in sharp contrast with earlier developmental studies showing an enlargement of the cortical representation of EAEs consisting of a narrow frequency band. We observed fewer appropriately tuned short-latency responses to EAE frequencies than normal, together with more frequent long-latency responses tuned to EAE-neighboring frequencies.
Affiliation(s)
- Arnaud J Noreña
- Neurosciences and Sensory Systems Laboratory, UMR Centre National de la Recherche Scientifique 5020, Université Claude Bernard, Lyon, France
96
Affiliation(s)
- Mitchell L Sutter
- Center for Neuroscience and Section of Neurobiology, Physiology, and Behavior, University of California Davis, Davis, California 95616, USA
97
Moshitch D, Las L, Ulanovsky N, Bar-Yosef O, Nelken I. Responses of neurons in primary auditory cortex (A1) to pure tones in the halothane-anesthetized cat. J Neurophysiol 2006; 95:3756-69. [PMID: 16554513 DOI: 10.1152/jn.00822.2005]
Abstract
The responses of primary auditory cortex (A1) neurons to pure tones in anesthetized animals are usually described as having mostly narrow, unimodal frequency tuning and phasic responses. Thus A1 neurons are believed not to carry much information about pure tones beyond sound onset. In awake cats, however, tuning may be wider and responses may have substantially longer duration. Here we analyze frequency-response areas (FRAs) and temporal-response patterns of 1,828 units in A1 of halothane-anesthetized cats. Tuning was generally wide: the total bandwidth at 40 dB above threshold was 4 octaves on average. FRA shapes were highly variable and many were diffuse, not fitting into standard classification schemes. Analyzing the temporal patterns of the largest responses of each unit revealed that only 9% of the units had pure onset responses. About 40% of the units had sustained responses throughout stimulus duration (115 ms) and 13% of the units had significant and informative responses lasting 300 ms and more after stimulus offset. We conclude that under halothane anesthesia, neural responses show many of the characteristics of awake responses. Furthermore, A1 units maintain sensory information in their activity not only throughout sound presentation but also for hundreds of milliseconds after stimulus offset, thus possibly playing a role in sensory memory.
Affiliation(s)
- Dina Moshitch
- Department of Neurobiology, The Alexander Silberman Institute of Life Sciences, Faculty of Sciences, Hadassah Medical School, The Hebrew University, Edmund Safra Campus, Givat Ram, Jerusalem 91904, Israel
98
Nataraj K, Wenstrup JJ. Roles of inhibition in complex auditory responses in the inferior colliculus: inhibited combination-sensitive neurons. J Neurophysiol 2005; 95:2179-92. [PMID: 16371455 PMCID: PMC1471895 DOI: 10.1152/jn.01148.2005]
Abstract
We studied the functional properties and underlying neural mechanisms associated with inhibitory combination-sensitive neurons in the mustached bat's inferior colliculus (IC). In these neurons, the excitatory response to best frequency tones was suppressed by lower frequency signals (usually in the range of 12-30 kHz) in a time-dependent manner. Of 143 inhibitory units, the majority (71%) were type I, in which low-frequency sounds evoked inhibition only. In the remainder, however, the low-frequency inhibitory signal also evoked excitation. Of these, excitation preceded the inhibition in type E/I units (16%), whereas in type I/E units (13%), excitation followed the inhibition. Type E/I and I/E units were distinct in the tuning and threshold sensitivity of low-frequency responses, whereas type I units overlapped the other types in these features. In 71 neurons, antagonists to receptors for glycine [strychnine (STRY)] or GABA [bicuculline (BIC)] were applied microiontophoretically. These antagonists failed to eliminate combination-sensitive inhibition in 92% (STRY), 93% (BIC), and 87% (BIC + STRY) of the type I units tested. However, inhibition was reduced in many neurons. Results were similar for type E/I and I/E inhibitory neurons. The results indicate that there are distinct populations of combination-sensitive inhibited neurons in the IC and that these populations are at least partly independent of glycine or GABAA receptors in the IC. We propose that these populations originate in different brain stem auditory nuclei, that they may be modified by interactions within the IC, and that they may perform different spectrotemporal analyses of vocal signals.
Affiliation(s)
- Kiran Nataraj
- Department of Neurobiology, Northeastern Ohio Universities College of Medicine, Rootstown, OH 44272, USA
99
Chang EF, Bao S, Imaizumi K, Schreiner CE, Merzenich MM. Development of spectral and temporal response selectivity in the auditory cortex. Proc Natl Acad Sci U S A 2005; 102:16460-5. [PMID: 16263924 PMCID: PMC1283465 DOI: 10.1073/pnas.0508239102]
Abstract
The mechanisms by which hearing selectivity is elaborated and refined in early development remain incompletely determined. In this study, we documented contributions of progressively maturing inhibitory influences to the refinement of spectral and temporal response properties in the primary auditory cortex. Inhibitory receptive fields (IRFs) of infant rat auditory cortical neurons were spectrally far broader and extended over a far longer duration than those of adults. The selective refinement of IRFs was delayed relative to that of excitatory receptive fields by an approximately 2-week period that corresponded to the critical period for plasticity. Local application of a GABA(A) receptor antagonist revealed that intracortical inhibition contributes to this progressive receptive field maturation for response selectivity in frequency. In contrast, it had no effect on the duration of IRFs or on successive-signal cortical response recovery times. The importance of exposure to patterned acoustic inputs was suggested when both spectral and temporal IRF maturation were disrupted in rat pups reared in continuous, moderate-intensity noise; both were subsequently renormalized when the animals were returned to standard housing conditions as adults.
Affiliation(s)
- Edward F Chang
- Coleman Memorial Laboratory, Department of Otolaryngology, W. M. Keck Center for Integrative Neuroscience, University of California, San Francisco, CA 94143-0444, USA.
100
Portfors CV, Felix RA. Spectral integration in the inferior colliculus of the CBA/CaJ mouse. Neuroscience 2005; 136:1159-70. [PMID: 16216422 DOI: 10.1016/j.neuroscience.2005.08.031]
Abstract
The inferior colliculus receives a massive convergence of inputs, and in the mustached bat this convergence leads to the creation of neurons in the inferior colliculus that integrate information across multiple frequency bands. These neurons are tuned to multiple frequency bands or are combination-sensitive, responding best to the combination of two signals of different frequency composition. The importance of combination-sensitive neurons in processing echolocation signals is well described, and it has been thought that combination sensitivity is a neural specialization for echolocation behaviors. Combination sensitivity and other response properties indicative of spectral integration have not been thoroughly examined in the inferior colliculus of non-echolocating mammals. In this study, we tested the hypothesis that integration across frequencies occurs in the inferior colliculus of mice. We tested excitatory frequency response areas in the inferior colliculus of unanesthetized mice by varying the frequency of a single tone between 6 and 100 kHz. We then tested combination-sensitive responses by holding one tone at the unit's best frequency and varying the frequency and intensity of a second tone. Thirty-two percent of the neurons were tuned to multiple frequency bands, 16% showed combination-sensitive facilitation, and another 12% showed combination-sensitive inhibition. These findings suggest that the neural mechanisms underlying processing of complex sounds in the inferior colliculus share some common features among mammals as different as the bat and the mouse.
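The two-tone test described here reduces to a simple classification rule on spike counts. The sketch below is a generic facilitation/inhibition index, not the authors' exact analysis: the 20% criterion is an assumed, illustrative threshold (criteria vary across studies), and the spike counts are hypothetical.

```python
def combination_sensitivity(r_bf, r_second, r_combined, criterion=0.20):
    """Classify a unit's two-tone interaction from mean spike counts.

    Facilitation: the combined response exceeds the sum of the two
    single-tone responses by more than `criterion`. Inhibition: the
    combined response falls below the best-frequency response alone
    by more than `criterion`. Otherwise the interaction is ~linear.
    """
    linear_sum = r_bf + r_second
    if r_combined > linear_sum * (1 + criterion):
        return "facilitation"
    if r_combined < r_bf * (1 - criterion):
        return "inhibition"
    return "linear"

# Hypothetical spike counts for one IC unit under three outcomes:
print(combination_sensitivity(r_bf=20, r_second=2, r_combined=40))  # facilitation
print(combination_sensitivity(r_bf=20, r_second=2, r_combined=10))  # inhibition
print(combination_sensitivity(r_bf=20, r_second=2, r_combined=23))  # linear
```

Sweeping the second tone's frequency and intensity while holding the first at best frequency, as in the study, would apply this rule at every point of the two-tone response area.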
Affiliation(s)
- C V Portfors
- School of Biological Sciences, 14204 Northeast Salmon Creek Avenue, Washington State University, Vancouver, WA 98686, USA.