1
Undurraga JA, Luke R, Van Yper L, Monaghan JJM, McAlpine D. The neural representation of an auditory spatial cue in the primate cortex. Curr Biol 2024; 34:2162-2174.e5. [PMID: 38718798] [DOI: 10.1016/j.cub.2024.04.034]
Abstract
Humans make use of small differences in the timing of sounds at the two ears (interaural time differences, ITDs) to locate their sources. Despite extensive investigation, however, the neural representation of ITDs in the human brain is contentious, particularly the range of ITDs explicitly represented by dedicated neural detectors. Here, using magneto- and electro-encephalography (MEG and EEG), we demonstrate evidence of a sparse neural representation of ITDs in the human cortex. The magnitude of cortical activity to sounds presented via insert earphones oscillated as a function of increasing ITD, within and beyond auditory cortical regions, and listeners rated the perceptual quality of these sounds according to the same oscillating pattern. This pattern was accurately described by a population of model neurons with preferred ITDs constrained to the narrow, sound-frequency-dependent range evident in other mammalian species. When scaled for head size, the distribution of ITD detectors in the human cortex is remarkably like that recorded in vivo from the cortex of rhesus monkeys, another large primate that uses ITDs for source localization. The data solve a long-standing issue concerning the neural representation of ITDs in humans and suggest a representation that scales for head size and sound frequency in an optimal manner.
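Editor's note: as an illustration of how a sparse, frequency-dependent code could yield responses that oscillate with ITD, here is a minimal Python sketch of two hemispheric channels with cosine-shaped interaural-phase tuning. The best-IPD value of pi/4 radians and the rate scaling are assumptions for illustration, not parameters from the paper.

    import numpy as np

    def population_response(itd_s, freq_hz, best_ipd_rad=np.pi / 4):
        """Rates of two hemispheric channels with cosine IPD tuning.

        Assumes (illustratively) best interaural phase differences of
        +/- pi/4 radians, so the best ITD scales inversely with frequency.
        """
        ipd = 2 * np.pi * freq_hz * itd_s        # stimulus IPD in radians
        left = 1 + np.cos(ipd - best_ipd_rad)    # channel tuned to +pi/4
        right = 1 + np.cos(ipd + best_ipd_rad)   # channel tuned to -pi/4
        return left, right

    # The response difference oscillates as ITD increases, qualitatively
    # like the MEG/EEG response magnitudes described in the abstract.
    itds = np.linspace(-3e-3, 3e-3, 601)         # -3 ms to +3 ms
    l, r = population_response(itds, freq_hz=500.0)
    print((l - r)[:5])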
Affiliation(s)
- Jaime A Undurraga
- Department of Linguistics, Macquarie University, 16 University Avenue, Sydney, NSW 2109, Australia; Interacoustics Research Unit, Technical University of Denmark, Ørsteds Plads, Building 352, 2800 Kgs. Lyngby, Denmark
- Robert Luke
- Department of Linguistics, Macquarie University, 16 University Avenue, Sydney, NSW 2109, Australia; The Bionics Institute, 384-388 Albert St., East Melbourne, VIC 3002, Australia
- Lindsey Van Yper
- Department of Linguistics, Macquarie University, 16 University Avenue, Sydney, NSW 2109, Australia; Institute of Clinical Research, University of Southern Denmark, 5230 Odense, Denmark; Research Unit for ORL, Head & Neck Surgery and Audiology, Odense University Hospital & University of Southern Denmark, 5230 Odense, Denmark
- Jessica J M Monaghan
- Department of Linguistics, Macquarie University, 16 University Avenue, Sydney, NSW 2109, Australia; National Acoustic Laboratories, Australian Hearing Hub, 16 University Avenue, Sydney, NSW 2109, Australia
- David McAlpine
- Department of Linguistics, Macquarie University, 16 University Avenue, Sydney, NSW 2109, Australia; Macquarie University Hearing and the Australian Hearing Hub, Macquarie University, 16 University Avenue, Sydney, NSW 2109, Australia
2
Singh R, Bharadwaj HM. Cortical temporal integration can account for limits of temporal perception: investigations in the binaural system. Commun Biol 2023; 6:981. [PMID: 37752215] [PMCID: PMC10522716] [DOI: 10.1038/s42003-023-05361-5]
Abstract
The auditory system has exquisite temporal coding in the periphery, which is transformed into a rate-based code in central auditory structures such as auditory cortex. However, the cortex is still able to synchronize, albeit at lower modulation rates, to acoustic fluctuations. The perceptual significance of this cortical synchronization is unknown. We estimated physiological synchronization limits of cortex (in humans with electroencephalography) and brainstem neurons (in chinchillas) to dynamic binaural cues using a novel system-identification technique, along with parallel perceptual measurements. We find that cortex can synchronize to dynamic binaural cues up to approximately 10 Hz, which aligns well with our measured limits of perceiving dynamic spatial information and utilizing dynamic binaural cues for spatial unmasking, i.e., measures of binaural sluggishness. We also find that the tracking limit for frequency modulation (FM) is similar to the limit for spatial tracking, demonstrating that this sluggish tracking is a more general perceptual limit that can be accounted for by cortical temporal integration limits.
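Editor's note: the paper's system-identification technique is novel and not reproduced here; the sketch below shows the generic frequency-domain transfer-function estimate such analyses build on. The sampling rate, the 10 Hz corner, and the simulated "EEG" are all assumptions for illustration.

    import numpy as np
    from scipy.signal import butter, csd, filtfilt, welch

    fs = 250.0                                    # assumed EEG sampling rate
    t = np.arange(0, 120, 1 / fs)
    rng = np.random.default_rng(0)

    cue = rng.standard_normal(t.size)             # broadband binaural-cue trajectory
    b, a = butter(2, 10 / (fs / 2))               # fake a sluggish ~10 Hz system
    eeg = filtfilt(b, a, cue) + 0.5 * rng.standard_normal(t.size)

    # Transfer-function estimate H(f) = Sxy(f) / Sxx(f)
    f, sxy = csd(cue, eeg, fs=fs, nperseg=1024)
    _, sxx = welch(cue, fs=fs, nperseg=1024)
    gain = np.abs(sxy / sxx)
    corner = f[np.argmax(gain < gain.max() / 2)]  # crude -6 dB point
    print(round(float(corner), 1))                # near the built-in 10 Hz limit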
Affiliation(s)
- Ravinderjit Singh
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- Hari M Bharadwaj
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA.
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN, USA.
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA.
3
Shamsi E, Ahmadi-Pajouh MA, Seifi Ala T. Higuchi fractal dimension: An efficient approach to detection of brain entrainment to theta binaural beats. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102580]
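Editor's note: since this entry names a concrete algorithm, a compact reference implementation may be useful. This is the standard Higuchi construction (curve length versus scale); the choice k_max = 10 is arbitrary.

    import numpy as np

    def higuchi_fd(x, k_max=10):
        """Higuchi fractal dimension of a 1-D signal (standard algorithm)."""
        x = np.asarray(x, dtype=float)
        n = x.size
        lk = []
        for k in range(1, k_max + 1):
            lm = []
            for m in range(k):
                idx = np.arange(m, n, k)               # subsampled series
                if idx.size < 2:
                    continue
                length = np.abs(np.diff(x[idx])).sum()
                norm = (n - 1) / ((idx.size - 1) * k)  # curve-length normalization
                lm.append(length * norm / k)
            lk.append(np.mean(lm))
        k = np.arange(1, k_max + 1)
        slope, _ = np.polyfit(np.log(1.0 / k), np.log(lk), 1)
        return slope                                   # FD = slope of log L(k) vs log(1/k)

    rng = np.random.default_rng(0)
    print(higuchi_fd(rng.standard_normal(2000)))       # white noise: FD near 2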
4
Synthesis of Hemispheric ITD Tuning from the Readout of a Neural Map: Commonalities of Proposed Coding Schemes in Birds and Mammals. J Neurosci 2019; 39:9053-9061. [PMID: 31570537] [DOI: 10.1523/jneurosci.0873-19.2019]
Abstract
A major cue to infer sound direction is the difference in arrival time of the sound at the left and right ears, called interaural time difference (ITD). The neural coding of ITD and its similarity across species have been strongly debated. In the barn owl, an auditory specialist relying on sound localization to capture prey, ITDs within the physiological range determined by the head width are topographically represented at each frequency. The topographic representation suggests that sound direction may be inferred from the location of maximal neural activity within the map. Such topographical representation of ITD, however, is not evident in mammals. Instead, the preferred ITD of neurons in the mammalian brainstem often lies outside the physiological range and depends on the neuron's best frequency. Because of these disparities, it has been assumed that how spatial hearing is achieved in birds and mammals is fundamentally different. However, recent studies reveal ITD responses in the owl's forebrain and midbrain premotor area that are consistent with coding schemes proposed in mammals. Particularly, sound location in owls could be decoded from the relative firing rates of two broadly and inversely ITD-tuned channels. This evidence suggests that, at downstream stages, the code for ITD may not be qualitatively different across species. Thus, while experimental evidence continues to support the notion of differences in ITD representation across species and brain regions, the latest results indicate notable commonalities, suggesting that codes driving orienting behavior in mammals and birds may be comparable.
5
Zuk NJ, Delgutte B. Neural coding and perception of auditory motion direction based on interaural time differences. J Neurophysiol 2019; 122:1821-1842. [PMID: 31461376] [DOI: 10.1152/jn.00081.2019]
Abstract
While motion is important for parsing a complex auditory scene into perceptual objects, how it is encoded in the auditory system is unclear. Perceptual studies suggest that the ability to identify the direction of motion is limited by the duration of the moving sound, yet we can detect changes in interaural differences at even shorter durations. To understand the source of these distinct temporal limits, we recorded from single units in the inferior colliculus (IC) of unanesthetized rabbits in response to noise stimuli containing a brief segment with linearly time-varying interaural time difference ("ITD sweep") temporally embedded in interaurally uncorrelated noise. We also tested the ability of human listeners to either detect the ITD sweeps or identify the motion direction. Using a point-process model to separate the contributions of stimulus dependence and spiking history to single-neuron responses, we found that the neurons respond primarily by following the instantaneous ITD rather than exhibiting true direction selectivity. Furthermore, using an optimal classifier to decode the single-neuron responses, we found that neural threshold durations of ITD sweeps for both direction identification and detection overlapped with human threshold durations even though the average response of the neurons could track the instantaneous ITD beyond psychophysical limits. Our results suggest that the IC does not explicitly encode motion direction, but internal neural noise may limit the speed at which we can identify the direction of motion.
NEW & NOTEWORTHY Recognizing motion and identifying an object's trajectory are important for parsing a complex auditory scene, but how we do so is unclear. We show that neurons in the auditory midbrain do not exhibit direction selectivity as found in the visual system but instead follow the trajectory of the motion in their temporal firing patterns. Our results suggest that the inherent variability in neural firings may limit our ability to identify motion direction at short durations.
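Editor's note: a sketch of how an "ITD sweep" stimulus of this kind can be synthesized, using a linear-interpolation fractional delay. The durations and the +/-300 µs excursion are illustrative, not the paper's values.

    import numpy as np

    fs = 44100
    dur_noise, dur_sweep = 0.5, 0.05              # seconds (illustrative)
    rng = np.random.default_rng(0)

    def itd_sweep_segment(fs, dur, itd_start, itd_end):
        """Noise whose right channel carries a linearly changing ITD
        (fractional delay via linear interpolation)."""
        n = int(fs * dur)
        t = np.arange(n) / fs
        x = rng.standard_normal(n + 256)          # extra samples cover the delay
        itd = np.linspace(itd_start, itd_end, n)  # e.g. -300 us -> +300 us
        left = x[:n]
        right = np.interp(t - itd, np.arange(x.size) / fs, x)
        return np.column_stack([left, right])

    sweep = itd_sweep_segment(fs, dur_sweep, -300e-6, 300e-6)
    # Embed the sweep in interaurally uncorrelated noise, as in the design:
    uncorr = rng.standard_normal((int(fs * dur_noise), 2))
    stim = np.concatenate([uncorr, sweep, uncorr])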
Affiliation(s)
- Nathaniel J Zuk
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, Massachusetts
- Bertrand Delgutte
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, Massachusetts; Department of Otolaryngology, Harvard Medical School, Boston, Massachusetts
6
Spectrotemporal window of binaural integration in auditory object formation. Hear Res 2018; 370:155-167. [PMID: 30388573] [DOI: 10.1016/j.heares.2018.10.013]
Abstract
Binaural integration of interaural temporal information is essential for sound source localization and segregation. Current models of binaural interaction have shown that accurate sound localization in the horizontal plane depends on the resolution of phase ambiguous information by across-frequency integration. However, as such models are mostly static, it is not clear how proximate in time binaural events in different frequency channels should occur to form an auditory object with a unique lateral position. The present study examined the spectrotemporal window required for effective integration of binaural cues across frequency to form the perception of a stationary position. In Experiment 1, listeners judged whether dichotic frequency-modulated (FM) sweeps with a constant large nominal interaural delay (1500 μs), whose perceived laterality was ambiguous depending on the sweep rate (1500, 3000, 6000, and 12,000 Hz/s), produced a percept of continuous motion or a stationary image. Motion detection performance, indexed by d-prime (d') values, showed a clear effect of sweep rate, with auditory motion effects most pronounced for low sweep rates, and a punctate stationary image at high rates. Experiment 2 examined the effect of modulation rate (0.5, 3, 20, and 50 Hz) on lateralizing sinusoidally frequency-modulated (SFM) tones to confirm the effect of sweep rate on motion detection, independent of signal duration. Lateralization accuracy increased with increasing modulation rate up to 20 Hz and saturated at 50 Hz, with poorest performance occurring below 3 Hz depending on modulator phase. Using the transition point where percepts changed from motion to stationary images, we estimated a spectrotemporal integration window of approximately 150 ms per octave required for effective integration of interaural temporal cues across frequency channels. A Monte Carlo simulation based on a cross-correlation model of binaural interaction predicted 90% of the variance in perceptual motion detection performance as a function of FM sweep rate. Findings suggest that the rate of frequency channel convergence of binaural cues is essential to binaural lateralization.
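Editor's note: for reference, d' in a yes/no detection task is z(hit rate) minus z(false-alarm rate). The sketch below includes a common correction that keeps rates off 0 and 1; all numbers are invented, not taken from the study.

    import numpy as np
    from scipy.stats import norm

    def d_prime(hit_rate, fa_rate, n_trials=50):
        """d' = z(H) - z(FA), with the 1/(2N) rule for perfect rates."""
        lo, hi = 1 / (2 * n_trials), 1 - 1 / (2 * n_trials)
        h = np.clip(hit_rate, lo, hi)
        f = np.clip(fa_rate, lo, hi)
        return norm.ppf(h) - norm.ppf(f)

    # Example: detection performance falling with FM sweep rate (made-up numbers)
    for rate, (h, f) in {1500: (0.95, 0.10), 12000: (0.55, 0.45)}.items():
        print(rate, "Hz/s ->", round(float(d_prime(h, f)), 2))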
7
Beauchene C, Abaid N, Moran R, Diana RA, Leonessa A. The effect of binaural beats on verbal working memory and cortical connectivity. J Neural Eng 2017; 14:026014. [DOI: 10.1088/1741-2552/aa5d67]
8
Beauchene C, Abaid N, Moran R, Diana RA, Leonessa A. The Effect of Binaural Beats on Visuospatial Working Memory and Cortical Connectivity. PLoS One 2016; 11:e0166630. [PMID: 27893766] [PMCID: PMC5125618] [DOI: 10.1371/journal.pone.0166630]
Abstract
Binaural beats utilize a phenomenon that occurs within the cortex when two different frequencies are presented separately to each ear. This procedure produces a third phantom binaural beat, whose frequency is equal to the difference of the two presented tones and which can be manipulated for non-invasive brain stimulation. The effects of binaural beats on working memory, the system in control of temporary retention and online organization of thoughts for successful goal directed behavior, have not been well studied. Furthermore, no studies have evaluated the effects of binaural beats on brain connectivity during working memory tasks. In this study, we determined the effects of different acoustic stimulation conditions on participant response accuracy and cortical network topology, as measured by EEG recordings, during a visuospatial working memory task. Three acoustic stimulation control conditions and three binaural beat stimulation conditions were used: None, Pure Tone, Classical Music, 5 Hz binaural beats, 10 Hz binaural beats, and 15 Hz binaural beats. We found that listening to 15 Hz binaural beats during a visuospatial working memory task not only increased the response accuracy, but also modified the strengths of the cortical networks during the task. The three auditory control conditions and the 5 Hz and 10 Hz binaural beats all decreased accuracy. Based on graphical network analyses, the cortical activity during 15 Hz binaural beats produced networks characteristic of high information transfer with consistent connection strengths throughout the visuospatial working memory task.
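Editor's note: a minimal sketch of binaural-beat stimulus synthesis for the 15 Hz condition. The 240 Hz carrier and the level scaling are assumptions, since the abstract does not give the carrier frequency.

    import numpy as np

    fs = 44100
    dur = 5.0
    t = np.arange(int(fs * dur)) / fs

    f_carrier, f_beat = 240.0, 15.0                # carrier assumed for illustration
    left = np.sin(2 * np.pi * f_carrier * t)
    right = np.sin(2 * np.pi * (f_carrier + f_beat) * t)
    stereo = 0.2 * np.column_stack([left, right])  # scale to a safe playback level

    # Each ear receives only a pure tone; the 15 Hz beat exists solely in
    # the binaural interaction, not in either acoustic waveform.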
Affiliation(s)
- Christine Beauchene
- Center for Dynamic Systems Modeling and Control, Department of Mechanical Engineering, Virginia Polytechnic Institute and State University, Blacksburg, Virginia, United States of America
- Nicole Abaid
- Department of Biomedical Engineering and Mechanics, Virginia Polytechnic Institute and State University, Blacksburg, Virginia, United States of America
- Rosalyn Moran
- Department of Engineering Mathematics, University of Bristol, Clifton, Bristol, United Kingdom
- Rachel A. Diana
- Department of Psychology, Virginia Polytechnic Institute and State University, Blacksburg, Virginia, United States of America
- Alexander Leonessa
- Center for Dynamic Systems Modeling and Control, Department of Mechanical Engineering, Virginia Polytechnic Institute and State University, Blacksburg, Virginia, United States of America
9
The neural representation of interaural time differences in gerbils is transformed from midbrain to cortex. J Neurosci 2014; 34:16796-808. [PMID: 25505332] [DOI: 10.1523/jneurosci.2432-14.2014]
Abstract
Interaural time differences (ITDs) are the dominant cue for the localization of low-frequency sounds. While much is known about the processing of ITDs in the auditory brainstem and midbrain, there have been relatively few studies of ITD processing in auditory cortex. In this study, we compared the neural representation of ITDs in the inferior colliculus (IC) and primary auditory cortex (A1) of gerbils. Our IC results were largely consistent with previous studies, with most cells responding maximally to ITDs that correspond to the contralateral edge of the physiological range. In A1, however, we found that preferred ITDs were distributed evenly throughout the physiological range without any contralateral bias. This difference in the distribution of preferred ITDs in IC and A1 had a major impact on the coding of ITDs at the population level: while a labeled-line decoder that considered the tuning of individual cells performed well on both IC and A1 responses, a two-channel decoder based on the overall activity in each hemisphere performed poorly on A1 responses relative to either labeled-line decoding of A1 responses or two-channel decoding of IC responses. These results suggest that the neural representation of ITDs in gerbils is transformed from IC to A1 and have important implications for how spatial location may be combined with other acoustic features for the analysis of complex auditory scenes.
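Editor's note: a toy contrast between the two decoders discussed here, under assumed Gaussian ITD tuning and Poisson spiking (the real analysis used recorded responses). A labeled-line readout works whether preferred ITDs tile the range (A1-like) or cluster at the contralateral edge (IC-like); a two-channel readout needs the clustered arrangement.

    import numpy as np

    rng = np.random.default_rng(0)
    itd_grid = np.linspace(-160e-6, 160e-6, 33)    # gerbil-scale ITD range

    prefs_a1 = rng.uniform(-160e-6, 160e-6, 40)    # A1-like: tile the range
    prefs_ic = rng.normal(135e-6, 20e-6, 40)       # IC-like: contralateral edge

    def rates(prefs, itd, sigma=80e-6):
        """Assumed Gaussian ITD tuning, peak 20 spikes per trial."""
        return 20 * np.exp(-0.5 * ((itd - prefs) / sigma) ** 2)

    def labeled_line_decode(prefs, counts):
        """Poisson maximum-likelihood readout using each cell's own tuning."""
        loglik = [np.sum(counts * np.log(rates(prefs, itd) + 1e-9)
                         - rates(prefs, itd)) for itd in itd_grid]
        return itd_grid[int(np.argmax(loglik))]

    true_itd = 50e-6
    print(labeled_line_decode(prefs_a1, rng.poisson(rates(prefs_a1, true_itd))))
    print(labeled_line_decode(prefs_ic, rng.poisson(rates(prefs_ic, true_itd))))
    # A two-channel decoder would instead collapse each population to one
    # summed rate per hemisphere and map the left-right difference onto ITD
    # through a single monotonic function; that mapping is ambiguous when
    # preferred ITDs tile the range, as the paper reports for A1.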
10
Malone BJ, Scott BH, Semple MN. Diverse cortical codes for scene segmentation in primate auditory cortex. J Neurophysiol 2015; 113:2934-52. [PMID: 25695655] [DOI: 10.1152/jn.01054.2014]
Abstract
The temporal coherence of amplitude fluctuations is a critical cue for segmentation of complex auditory scenes. The auditory system must accurately demarcate the onsets and offsets of acoustic signals. We explored how and how well the timing of onsets and offsets of gated tones are encoded by auditory cortical neurons in awake rhesus macaques. Temporal features of this representation were isolated by presenting otherwise identical pure tones of differing durations. Cortical response patterns were diverse, including selective encoding of onset and offset transients, tonic firing, and sustained suppression. Spike train classification methods revealed that many neurons robustly encoded tone duration despite substantial diversity in the encoding process. Excellent discrimination performance was achieved by neurons whose responses were primarily phasic at tone offset and by those that responded robustly while the tone persisted. Although diverse cortical response patterns converged on effective duration discrimination, this diversity significantly constrained the utility of decoding models referenced to a spiking pattern averaged across all responses or averaged within the same response category. Using maximum likelihood-based decoding models, we demonstrated that the spike train recorded in a single trial could support direct estimation of stimulus onset and offset. Comparisons between different decoding models established the substantial contribution of bursts of activity at sound onset and offset to demarcating the temporal boundaries of gated tones. Our results indicate that relatively few neurons suffice to provide temporally precise estimates of such auditory "edges," particularly for models that assume and exploit the heterogeneity of neural responses in awake cortex.
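Editor's note: a sketch of maximum-likelihood estimation of tone onset and offset from a single spike train, in the spirit of the decoding described. The rate profile, rate values, and search grid are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    dt, t_max = 1e-3, 0.5
    t = np.arange(0, t_max, dt)

    def rate_profile(onset, offset, base=5.0, driven=80.0, burst=150.0):
        """Toy rate: tonic elevation during the tone plus onset/offset
        bursts (one of several response types described in the paper)."""
        r = np.full(t.size, base)
        r[(t >= onset) & (t < offset)] = driven
        r[(t >= onset) & (t < onset + 0.01)] = burst
        r[(t >= offset) & (t < offset + 0.01)] = burst
        return r

    spikes = rng.poisson(rate_profile(0.1, 0.3) * dt)   # one simulated trial

    # ML estimate of onset/offset from the single trial
    candidates = np.arange(0.05, 0.45, 0.01)
    def loglik(on, off):
        lam = rate_profile(on, off) * dt
        return np.sum(spikes * np.log(lam) - lam)
    best = max(((on, off) for on in candidates for off in candidates
                if off > on + 0.02), key=lambda p: loglik(*p))
    print(best)                                          # close to (0.1, 0.3)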
Affiliation(s)
- Brian J Malone
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California
- Brian H Scott
- Laboratory of Neuropsychology, National Institute of Mental Health/National Institutes of Health, Bethesda, Maryland
- Malcolm N Semple
- Center for Neural Science at New York University, New York, New York
11
The neural code for auditory space depends on sound frequency and head size in an optimal manner. PLoS One 2014; 9:e108154. [PMID: 25372405] [PMCID: PMC4220907] [DOI: 10.1371/journal.pone.0108154]
Abstract
A major cue to the location of a sound source is the interaural time difference (ITD), the difference in sound arrival time at the two ears. The neural representation of this auditory cue is unresolved. The classic model of ITD coding, dominant for a half-century, posits that the distribution of best ITDs (the ITD evoking a neuron's maximal response) is unimodal and largely within the range of ITDs permitted by head size. This is often interpreted as a place code for source location. An alternative model, based on neurophysiology in small mammals, posits a bimodal distribution of best ITDs with exquisite sensitivity to ITDs generated by means of relative firing rates between the distributions. Recently, an optimal-coding model was proposed, unifying the disparate features of these two models under the framework of efficient coding by neural populations. The optimal-coding model predicts that distributions of best ITDs depend on head size and sound frequency: for high frequencies and large heads it resembles the classic model, for low frequencies and small head sizes it resembles the bimodal model. The optimal-coding model makes key, yet unobserved, predictions: for many species, including humans, both forms of neural representation are employed, depending on sound frequency. Furthermore, novel representations are predicted for intermediate frequencies. Here, we examine these predictions in neurophysiological data from five mammalian species: macaque, guinea pig, cat, gerbil and kangaroo rat. We present the first evidence supporting these untested predictions, and demonstrate that different representations appear to be employed at different sound frequencies in the same species.
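Editor's note: the physiological ITD range scales with head size; a common first approximation is Woodworth's spherical-head formula, sketched below. The head radii are rough illustrative values, and measured ranges (especially for small mammals) differ from the spherical model.

    import numpy as np

    def max_itd_s(head_radius_m, c=343.0):
        """Woodworth's formula; the maximum ITD occurs for a source at
        90 degrees azimuth: ITD = (r/c) * (theta + sin(theta))."""
        theta = np.pi / 2
        return head_radius_m / c * (theta + np.sin(theta))

    for species, r in {"human": 0.0875, "cat": 0.035, "gerbil": 0.007}.items():
        print(f"{species}: +/-{max_itd_s(r) * 1e6:.0f} us")
    # human ~ +/-650 us, consistent with the classic textbook figure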
12
Ross B, Miyazaki T, Thompson J, Jamali S, Fujioka T. Human cortical responses to slow and fast binaural beats reveal multiple mechanisms of binaural hearing. J Neurophysiol 2014; 112:1871-84. [PMID: 25008412] [DOI: 10.1152/jn.00224.2014]
Abstract
When two tones with slightly different frequencies are presented, one to each ear, they interact in the central auditory system and induce the sensation of a beating sound. At low difference frequencies, we perceive a single sound, which is moving across the head between the left and right ears. The percept changes to loudness fluctuation, roughness, and pitch with increasing beat rate. To examine the neural representations underlying these different perceptions, we recorded neuromagnetic cortical responses while participants listened to binaural beats at a continuously varying rate between 3 Hz and 60 Hz. Binaural beat responses were analyzed as neuromagnetic oscillations following the trajectory of the stimulus rate. Responses were largest in the 40-Hz gamma range and at low frequencies. Binaural beat responses at 3 Hz showed opposite polarity in the left and right auditory cortices. We suggest that this difference in polarity reflects the opponent neural population code for representing sound location. Binaural beats at any rate induced gamma oscillations. However, the responses were largest at 40-Hz stimulation. We propose that the neuromagnetic gamma oscillations reflect postsynaptic modulation that allows for precise timing of cortical neural firing. Systematic phase differences between bilateral responses suggest that separate sound representations of a sound object exist in the left and right auditory cortices. We conclude that binaural processing at the cortical level occurs with the same temporal acuity as monaural processing, whereas the identification of sound location requires further interpretation and is limited by the rate of object representations.
Affiliation(s)
- Bernhard Ross
- Rotman Research Institute, Baycrest Centre, Toronto, Ontario, Canada; Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
- Takahiro Miyazaki
- Rotman Research Institute, Baycrest Centre, Toronto, Ontario, Canada
- Jessica Thompson
- International Laboratory for Brain, Music and Sound Research, Department of Psychology, University of Montreal, Montreal, Quebec, Canada
- Shahab Jamali
- Rotman Research Institute, Baycrest Centre, Toronto, Ontario, Canada
- Takako Fujioka
- Center for Computer Research in Music and Acoustics, Stanford University, Stanford, California
13
Vonderschen K, Wagner H. Detecting interaural time differences and remodeling their representation. Trends Neurosci 2014; 37:289-300. [DOI: 10.1016/j.tins.2014.03.002]
14
Malone BJ, Scott BH, Semple MN. Encoding frequency contrast in primate auditory cortex. J Neurophysiol 2014; 111:2244-63. [PMID: 24598525] [DOI: 10.1152/jn.00878.2013]
Abstract
Changes in amplitude and frequency jointly determine much of the communicative significance of complex acoustic signals, including human speech. We have previously described responses of neurons in the core auditory cortex of awake rhesus macaques to sinusoidal amplitude modulation (SAM) signals. Here we report a complementary study of sinusoidal frequency modulation (SFM) in the same neurons. Responses to SFM were analogous to SAM responses in that changes in multiple parameters defining SFM stimuli (e.g., modulation frequency, modulation depth, carrier frequency) were robustly encoded in the temporal dynamics of the spike trains. For example, changes in the carrier frequency produced highly reproducible changes in shapes of the modulation period histogram, consistent with the notion that the instantaneous probability of discharge mirrors the moment-by-moment spectrum at low modulation rates. The upper limit for phase locking was similar across SAM and SFM within neurons, suggesting shared biophysical constraints on temporal processing. Using spike train classification methods, we found that neural thresholds for modulation depth discrimination are typically far lower than would be predicted from frequency tuning to static tones. This "dynamic hyperacuity" suggests a substantial central enhancement of the neural representation of frequency changes relative to the auditory periphery. Spike timing information was superior to average rate information when discriminating among SFM signals, and even when discriminating among static tones varying in frequency. This finding held even when differences in total spike count across stimuli were normalized, indicating both the primacy and generality of temporal response dynamics in cortical auditory processing.
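Editor's note: for concreteness, an SFM tone is a carrier whose instantaneous frequency varies sinusoidally, obtained by integrating the frequency trajectory into the phase. The 1 kHz carrier, 10 Hz modulation rate, and 200 Hz excursion below are assumed values, not the study's parameters.

    import numpy as np

    fs = 44100
    t = np.arange(int(fs * 1.0)) / fs

    f_c = 1000.0     # carrier frequency, Hz
    f_m = 10.0       # modulation frequency, Hz
    df = 200.0       # peak frequency excursion (modulation depth), Hz

    # Instantaneous frequency f_c + df*cos(2*pi*f_m*t); integrating gives
    # the phase below, with modulation index beta = df / f_m.
    beta = df / f_m
    sfm = np.sin(2 * np.pi * f_c * t + beta * np.sin(2 * np.pi * f_m * t))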
Affiliation(s)
- Brian J Malone
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California
- Brian H Scott
- Laboratory of Neuropsychology, National Institute of Mental Health/National Institutes of Health, Bethesda, Maryland
- Malcolm N Semple
- Center for Neural Science, New York University, New York, New York
15
Keating P, Nodal FR, King AJ. Behavioural sensitivity to binaural spatial cues in ferrets: evidence for plasticity in the duplex theory of sound localization. Eur J Neurosci 2014; 39:197-206. [PMID: 24256073] [PMCID: PMC4063341] [DOI: 10.1111/ejn.12402]
Abstract
For over a century, the duplex theory has guided our understanding of human sound localization in the horizontal plane. According to this theory, the auditory system uses interaural time differences (ITDs) and interaural level differences (ILDs) to localize low-frequency and high-frequency sounds, respectively. Whilst this theory successfully accounts for the localization of tones by humans, some species show very different behaviour. Ferrets are widely used for studying both clinical and fundamental aspects of spatial hearing, but it is not known whether the duplex theory applies to this species or, if so, to what extent the frequency range over which each binaural cue is used depends on acoustical or neurophysiological factors. To address these issues, we trained ferrets to lateralize tones presented over earphones and found that the frequency dependence of ITD and ILD sensitivity broadly paralleled that observed in humans. Compared with humans, however, the transition between ITD and ILD sensitivity was shifted toward higher frequencies. We found that the frequency dependence of ITD sensitivity in ferrets can partially be accounted for by acoustical factors, although neurophysiological mechanisms are also likely to be involved. Moreover, we show that binaural cue sensitivity can be shaped by experience, as training ferrets on a 1-kHz ILD task resulted in significant improvements in thresholds that were specific to the trained cue and frequency. Our results provide new insights into the factors limiting the use of different sound localization cues and highlight the importance of sensory experience in shaping the underlying neural mechanisms.
Affiliation(s)
- Peter Keating
- Department of Physiology, Anatomy and Genetics, University of Oxford, Parks Road, Oxford, OX1 3PT, UK
16
Magezi DA, Buetler KA, Chouiter L, Annoni JM, Spierer L. Electrical neuroimaging during auditory motion aftereffects reveals that auditory motion processing is motion sensitive but not direction selective. J Neurophysiol 2012; 109:321-31. [PMID: 23076114] [DOI: 10.1152/jn.00625.2012]
Abstract
Following prolonged exposure to adaptor sounds moving in a single direction, participants may perceive stationary-probe sounds as moving in the opposite direction [direction-selective auditory motion aftereffect (aMAE)] and be less sensitive to motion of any probe sounds that are actually moving (motion-sensitive aMAE). The neural mechanisms of aMAEs, and notably whether they are due to adaptation of direction-selective motion detectors, as found in vision, is presently unknown and would provide critical insight into auditory motion processing. We measured human behavioral responses and auditory evoked potentials to probe sounds following four types of moving-adaptor sounds: leftward and rightward unidirectional, bidirectional, and stationary. Behavioral data replicated both direction-selective and motion-sensitive aMAEs. Electrical neuroimaging analyses of auditory evoked potentials to stationary probes revealed no significant difference in either global field power (GFP) or scalp topography between leftward and rightward conditions, suggesting that aMAEs are not based on adaptation of direction-selective motion detectors. By contrast, the bidirectional and stationary conditions differed significantly in the stationary-probe GFP at 200 ms poststimulus onset without concomitant topographic modulation, indicative of a difference in the response strength between statistically indistinguishable intracranial generators. The magnitude of this GFP difference was positively correlated with the magnitude of the motion-sensitive aMAE, supporting the functional relevance of the neurophysiological measures. Electrical source estimations revealed that the GFP difference followed from a modulation of activity in predominantly right hemisphere frontal-temporal-parietal brain regions previously implicated in auditory motion processing. Our collective results suggest that auditory motion processing relies on motion-sensitive, but, in contrast to vision, non-direction-selective mechanisms.
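Editor's note: global field power, the summary statistic used here, is simply the spatial standard deviation across electrodes at each time point. A minimal implementation follows; the 64-channel random input is placeholder data.

    import numpy as np

    def global_field_power(eeg):
        """GFP: spatial standard deviation across electrodes at each time
        point, computed on average-referenced data (Lehmann & Skrandies)."""
        eeg = np.asarray(eeg)                      # shape (n_channels, n_times)
        avg_ref = eeg - eeg.mean(axis=0, keepdims=True)
        return np.sqrt((avg_ref ** 2).mean(axis=0))

    rng = np.random.default_rng(0)
    print(global_field_power(rng.standard_normal((64, 500)))[:5])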
Affiliation(s)
- David A Magezi
- Neurology Unit, Department of Medicine, Faculty of Sciences, University of Fribourg, Fribourg, Switzerland.
17
Sarro EC, Rosen MJ, Sanes DH. Taking advantage of behavioral changes during development and training to assess sensory coding mechanisms. Ann N Y Acad Sci 2011; 1225:142-54. [PMID: 21535001] [DOI: 10.1111/j.1749-6632.2011.06023.x]
Abstract
The relationship between behavioral and neural performance has been explored in adult animals, but rarely during the developmental period when perceptual abilities emerge. We used these naturally occurring changes in auditory perception to evaluate underlying encoding mechanisms. Performance of juvenile and adult gerbils on an amplitude modulation (AM) detection task was compared with response properties from auditory cortex of age-matched animals. When tested with an identical behavioral procedure, juveniles display poorer AM detection thresholds than adults. Two neurometric analyses indicate that the most sensitive juvenile and adult neurons have equivalent AM thresholds. However, a pooling neurometric revealed that adult cortex encodes smaller AM depths. By each measure, neural sensitivity was superior to psychometric thresholds. However, juvenile training improved adult behavioral thresholds, such that they verged on the best sensitivity of adult neurons. Thus, periods of training may allow an animal to use the encoded information already present in cortex.
Affiliation(s)
- Emma C Sarro
- Center for Neural Science, New York University, New York, New York, USA.
18
Akeroyd MA. A binaural beat constructed from a noise (L). J Acoust Soc Am 2010; 128:3301-3304. [PMID: 21218863] [PMCID: PMC3515796] [DOI: 10.1121/1.3505122]
Abstract
The binaural beat has been used for over 100 years as a stimulus for generating the percept of motion. Classically the beat consists of a pure tone at one ear (e.g., 500 Hz) and the same pure tone at the other ear but shifted upward or downward in frequency (e.g., 501 Hz). An experiment and binaural computational analysis are reported which demonstrate that a more powerful motion percept can be obtained by applying the concept of the frequency shift to a noise, via an upward or downward shift in the frequency of the Fourier components of its spectrum.
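Editor's note: a sketch of the stimulus concept, shifting every Fourier component of a noise upward by a fixed amount in one ear. The 1 s duration and 1 Hz shift are illustrative, and practical details such as band-limiting and windowing are omitted.

    import numpy as np

    fs, dur, shift_hz = 44100, 1.0, 1.0
    n = int(fs * dur)
    rng = np.random.default_rng(0)

    noise = rng.standard_normal(n)
    spec = np.fft.rfft(noise)
    # With a 1 s buffer the FFT bin spacing is exactly 1 Hz, so a one-bin
    # roll shifts every Fourier component up by shift_hz.
    bins = int(round(shift_hz * n / fs))
    shifted = np.roll(spec, bins)
    shifted[:bins] = 0                            # drop wrapped components

    left = noise
    right = np.fft.irfft(shifted, n)
    stereo = 0.2 * np.column_stack([left, right])
    # Every component now beats at shift_hz between the ears, producing
    # the stronger motion percept reported for the noise-based beat.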
Affiliation(s)
- Michael A Akeroyd
- MRC Institute of Hearing Research, Scottish Section, Glasgow Royal Infirmary, Alexandra Parade, Glasgow G31 2ER, United Kingdom.
19
Scott BH, Malone BJ, Semple MN. Transformation of temporal processing across auditory cortex of awake macaques. J Neurophysiol 2010; 105:712-30. [PMID: 21106896] [DOI: 10.1152/jn.01120.2009]
Abstract
The anatomy and connectivity of the primate auditory cortex has been modeled as a core region receiving direct thalamic input surrounded by a belt of secondary fields. The core contains multiple tonotopic fields (including the primary auditory cortex, AI, and the rostral field, R), but available data only partially address the degree to which those fields are functionally distinct. This report, based on single-unit recordings across four hemispheres in awake macaques, argues that the functional organization of auditory cortex is best understood in terms of temporal processing. Frequency tuning, response threshold, and strength of activation are similar between AI and R, validating their inclusion as a unified core, but the temporal properties of the fields clearly differ. Onset latencies to pure tones are longer in R (median, 33 ms) than in AI (20 ms); moreover, synchronization of spike discharges to dynamic modulations of stimulus amplitude and frequency, similar to those present in macaque and human vocalizations, suggest distinctly different windows of temporal integration in AI (20-30 ms) and R (100 ms). Incorporating data from the adjacent auditory belt reveals that the divergence of temporal properties within the core is in some cases greater than the temporal differences between core and belt.
Affiliation(s)
- Brian H Scott
- Center for Neural Science, New York University, New York, New York, USA.
20
Wallace MN, Coomber B, Sumner CJ, Grimsley JMS, Shackleton TM, Palmer AR. Location of cells giving phase-locked responses to pure tones in the primary auditory cortex. Hear Res 2010; 274:142-51. [PMID: 20630479] [DOI: 10.1016/j.heares.2010.05.012]
Abstract
Phase-locked responses to pure tones have previously been described in the primary auditory cortex (AI) of the guinea pig. They are interesting because they show that some cells may use a temporal code for representing sounds of 60-300 Hz rather than the rate or place mechanisms used over most of AI. Our previous study had shown that the phase-locked responses were grouped together, but it was not clear whether they were in separate minicolumns or a larger macrocolumn. We now show that the phase-locked cells are arranged in a macrocolumn within AI that forms a subdivision of the isofrequency bands. Phase-locked responses were recorded from 158 multiunits using silicon based multiprobes with four shanks. The phase-locked units gave the strongest response in layers III/IV but phase-locked units were also recorded in layers II, V and VI. The column included cells with characteristic frequencies of 80 Hz-1.3 kHz (0.5-0.8 mm long) and was about 0.5 mm wide. It was located at a constant position at the intersection of the coronal plane 1 mm caudal to bregma and the suture that forms the lateral edge of the parietal bone.
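Editor's note: phase locking of this kind is conventionally quantified by vector strength (Goldberg and Brown); a minimal implementation with synthetic, tightly locked spikes follows. The 200 Hz frequency and 0.3 ms jitter are arbitrary.

    import numpy as np

    def vector_strength(spike_times_s, freq_hz):
        """Vector strength: resultant length of spike phases on the unit
        circle (1 = perfect locking; significance via Rayleigh 2*N*VS^2)."""
        phases = 2 * np.pi * freq_hz * np.asarray(spike_times_s)
        return np.abs(np.exp(1j * phases).mean())

    rng = np.random.default_rng(0)
    # Spikes jittered around one phase of a 200 Hz tone -> strong locking
    spikes = np.arange(0, 1, 1 / 200) + rng.normal(0, 0.3e-3, 200)
    print(vector_strength(spikes, 200.0))          # close to 1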
Affiliation(s)
- M N Wallace
- MRC Institute of Hearing Research, University Park, Nottingham NG7 2RD, UK.
21
Malone BJ, Scott BH, Semple MN. Temporal codes for amplitude contrast in auditory cortex. J Neurosci 2010; 30:767-84. [PMID: 20071542] [PMCID: PMC3551278] [DOI: 10.1523/jneurosci.4170-09.2010]
Abstract
The encoding of sound level is fundamental to auditory signal processing, and the temporal information present in amplitude modulation is crucial to the complex signals used for communication sounds, including human speech. The modulation transfer function, which measures the minimum detectable modulation depth across modulation frequency, has been shown to predict speech intelligibility performance in a range of adverse listening conditions and hearing impairments, and even for users of cochlear implants. We presented sinusoidal amplitude modulation (SAM) tones of varying modulation depths to awake macaque monkeys while measuring the responses of neurons in the auditory core. Using spike train classification methods, we found that thresholds for modulation depth detection and discrimination in the most sensitive units are comparable to psychophysical thresholds when precise temporal discharge patterns rather than average firing rates are considered. Moreover, spike timing information was also superior to average rate information when discriminating static pure tones varying in level but with similar envelopes. The limited utility of average firing rate information in many units also limited the utility of standard measures of sound level tuning, such as the rate level function (RLF), in predicting cortical responses to dynamic signals like SAM. Response modulation typically exceeded that predicted by the slope of the RLF by large factors. The decoupling of the cortical encoding of SAM and static tones indicates that enhancing the representation of acoustic contrast is a cardinal feature of the ascending auditory pathway.
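Editor's note: for reference, a SAM tone with modulation depth m is (1 + m*sin(2*pi*f_m*t)) * sin(2*pi*f_c*t), and depth is often quoted as 20*log10(m) dB. The parameter values in this sketch are arbitrary, not those of the study.

    import numpy as np

    fs = 44100
    t = np.arange(int(fs * 1.0)) / fs
    f_c, f_m, m = 1000.0, 30.0, 0.25     # carrier, modulation rate, depth

    sam = (1 + m * np.sin(2 * np.pi * f_m * t)) * np.sin(2 * np.pi * f_c * t)
    print(20 * np.log10(m))              # modulation depth in dB (-12 dB here)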
Affiliation(s)
- Brian J Malone
- Center for Neural Science at New York University, New York, New York 10003, USA.
22
Temporally dynamic frequency tuning of population responses in monkey primary auditory cortex. Hear Res 2009; 254:64-76. [PMID: 19389466] [DOI: 10.1016/j.heares.2009.04.010]
Abstract
Frequency tuning of auditory cortical neurons is typically determined by integrating spikes over the entire duration of a tone stimulus. However, this approach may mask functionally significant variations in tuning over the time course of the response. To explore this possibility, frequency response functions (FRFs) based on population multiunit activity evoked by pure tones of 175 or 200 ms duration were examined within four time windows relative to stimulus onset corresponding to "on" (10-30 ms), "early sustained" (30-100 ms), "late sustained" (100-175 ms), and "off" (185-235 or 210-260 ms) portions of responses in primary auditory cortex (A1) of 5 awake macaques. FRFs of "on" and "early sustained" responses displayed a good concordance, with best frequencies (BFs) differing, on average, by less than 0.25 octaves. In contrast, FRFs of "on" and "late sustained" responses differed considerably, with a mean difference in BF of 0.68 octaves. At many sites, tuning of "off" responses was inversely related to that of "on" responses, with "off" FRFs displaying a trough at the BF of "on" responses. Inversely correlated "on" and "off" FRFs were more common at sites with a higher "on" BF, thus suggesting functional differences between sites with low and high "on" BF. These results indicate that frequency tuning of population responses in A1 may vary considerably over the course of the response to a tone, thus revealing a temporal dimension to the representation of sound spectrum in A1.
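Editor's note: a sketch of the windowed analysis described: count spikes per tone frequency within each response window, take the best frequency (BF) of each windowed FRF, and express BF shifts in octaves. The window edges follow the abstract; the spike data are placeholders.

    import numpy as np

    windows = {"on": (0.010, 0.030), "early": (0.030, 0.100),
               "late": (0.100, 0.175), "off": (0.185, 0.235)}

    def windowed_frf(spike_times_by_freq, window):
        """Spike counts per tone frequency inside one response window."""
        lo, hi = window
        return {f: sum(lo <= s < hi for s in spikes)
                for f, spikes in spike_times_by_freq.items()}

    def best_frequency(frf):
        return max(frf, key=frf.get)

    def octave_diff(bf1, bf2):
        """BF difference between two windows, in octaves."""
        return abs(np.log2(bf1 / bf2))

    toy = {500.0: [0.015, 0.02, 0.12], 1000.0: [0.012, 0.05, 0.06, 0.07]}
    on_bf = best_frequency(windowed_frf(toy, windows["on"]))
    late_bf = best_frequency(windowed_frf(toy, windows["late"]))
    print(on_bf, late_bf, octave_diff(on_bf, late_bf))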