1.
Tobin M, Sheth J, Wood KC, Michel EK, Geffen MN. Distinct inhibitory neurons differently shape neuronal codes for sound intensity in the auditory cortex. bioRxiv 2024:2023.02.01.526470. [PMID: 36778269; PMCID: PMC9915672; DOI: 10.1101/2023.02.01.526470]
Abstract
Cortical circuits contain multiple types of inhibitory neurons, which shape how information is processed within neuronal networks. Here, we asked whether somatostatin-expressing (SST) and vasoactive intestinal peptide-expressing (VIP) inhibitory neurons have distinct effects on population neuronal responses to noise bursts of varying intensities. We optogenetically stimulated SST or VIP neurons while simultaneously measuring the calcium responses of populations of hundreds of neurons in the auditory cortex of awake, head-fixed male and female mice listening to sounds. Upon SST neuronal activation, representations of noise bursts became more discrete for different intensity levels, relying on cell identity rather than response strength. By contrast, upon VIP neuronal activation, noise bursts of different intensity levels activated overlapping neuronal populations, albeit at different response strengths. At the single-cell level, SST and VIP neuronal activation differentially modulated the response-level curves of monotonic and nonmonotonic neurons. The effects of SST neuronal activation were consistent with a shift of the neuronal population responses toward a more localist code, with different cells responding to sounds of different intensities. By contrast, VIP neuronal activation shifted responses toward a more distributed code, in which sounds of different intensity levels are encoded in the relative responses of similar populations of cells. These results delineate how distinct inhibitory neurons in the auditory cortex dynamically control cortical population codes. Different inhibitory neuronal populations may be recruited under different behavioral demands, depending on whether categorical or invariant representations are advantageous for the task.
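The contrast between a localist and a distributed intensity code can be made concrete with a population-overlap measure. A minimal sketch (illustrative numbers, not the authors' analysis) uses cosine similarity between population response vectors evoked by two intensities: near zero when distinct cells carry each intensity, near one when the same cells respond at different strengths:

```python
import numpy as np

def overlap(pop_a, pop_b):
    """Cosine similarity between two population response vectors:
    ~0 for disjoint active cell sets, ~1 for proportional responses."""
    a = np.asarray(pop_a, dtype=float)
    b = np.asarray(pop_b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Localist code: different cells respond at different intensities.
localist_60db = np.array([1.0, 0.0, 0.0, 0.0])
localist_80db = np.array([0.0, 0.0, 1.0, 0.0])

# Distributed code: the same cells respond, only the strength changes.
distributed_60db = np.array([0.4, 0.6, 0.5, 0.3])
distributed_80db = np.array([0.8, 1.2, 1.0, 0.6])
```

On this toy measure, SST activation would push intensity pairs toward the localist (low-overlap) regime and VIP activation toward the distributed (high-overlap) regime.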
Affiliation(s)
- Melanie Tobin
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA 19104, United States
- Janaki Sheth
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA 19104, United States
- Katherine C. Wood
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA 19104, United States
- Erin K. Michel
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA 19104, United States
- Maria N. Geffen
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA 19104, United States
- Department of Neuroscience, University of Pennsylvania, Philadelphia, PA 19104, United States
- Department of Neurology, University of Pennsylvania, Philadelphia, PA 19104, United States
2.
van den Wildenberg MF, Bremen P. Heterogeneous spatial tuning in the auditory pathway of the Mongolian gerbil (Meriones unguiculatus). Eur J Neurosci 2024; 60:4954-4981. [PMID: 39085952; DOI: 10.1111/ejn.16472]
Abstract
Sound-source localization is based on spatial cues arising from interactions of sound waves with the torso, head and ears. Here, we evaluated neural responses to free-field sound sources in the central nucleus of the inferior colliculus (CIC), the medial geniculate body (MGB) and the primary auditory cortex (A1) of Mongolian gerbils. Using silicon probes, we recorded from anaesthetized gerbils positioned in the centre of a sound-attenuating, anechoic chamber. We measured rate-azimuth functions (RAFs) with broad-band noise of varying levels presented from loudspeakers spanning 210° in azimuth and characterized RAFs by calculating spatial centroids, equivalent rectangular receptive fields (ERRFs), steepest-slope locations and spatial-separation thresholds. To compare neuronal responses with behavioural discrimination thresholds from the literature, we performed a neurometric analysis based on signal-detection theory. All structures demonstrated heterogeneous spatial tuning with a clear dominance of contralateral tuning; however, the relative amount of contralateral tuning decreased from the CIC to A1. In all three structures, spatial tuning broadened with increasing sound level. This effect was strongest in the CIC and weakest in A1. Neurometric spatial-separation thresholds compared well with behavioural discrimination thresholds for locations directly in front of the animal. Our findings contrast with those reported for another rodent, the rat, which exhibits homogeneous and sharply delimited contralateral spatial tuning. Spatial tuning in gerbils more closely resembles the tuning reported in A1 of cats, ferrets and non-human primates. Interestingly, gerbils, in contrast to rats, share good low-frequency hearing with carnivores and non-human primates, which may account for the observed spatial tuning properties.
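The neurometric analysis referenced here rests on signal-detection theory. One standard ingredient is a spike-count d' between responses to two source positions; a minimal sketch (simulated Poisson counts, all parameters illustrative):

```python
import numpy as np

def dprime(counts_a, counts_b):
    """Discriminability of two spike-count distributions:
    difference in means over the pooled standard deviation."""
    mu_a, mu_b = np.mean(counts_a), np.mean(counts_b)
    pooled_sd = np.sqrt((np.var(counts_a) + np.var(counts_b)) / 2.0)
    return abs(mu_b - mu_a) / pooled_sd

rng = np.random.default_rng(0)
ref = rng.poisson(10, 200)    # counts evoked by a reference azimuth
probe = rng.poisson(16, 200)  # counts evoked by a displaced loudspeaker
```

A neurometric spatial-separation threshold is then the smallest displacement at which d' exceeds a criterion (commonly 1), which is the kind of quantity that can be compared directly with behavioural discrimination thresholds.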
Affiliation(s)
- Peter Bremen
- Department of Neuroscience, Erasmus MC, Rotterdam, The Netherlands
3.
van der Heijden K, Patel P, Bickel S, Herrero JL, Mehta AD, Mesgarani N. Joint population coding and temporal coherence link an attended talker's voice and location features in naturalistic multi-talker scenes. bioRxiv 2024:2024.05.13.593814. [PMID: 38798551; PMCID: PMC11118436; DOI: 10.1101/2024.05.13.593814]
Abstract
Listeners readily extract multi-dimensional auditory objects, such as a 'localized talker', from complex acoustic scenes with multiple talkers. Yet, the neural mechanisms underlying simultaneous encoding and linking of different sound features, for example a talker's voice and location, are poorly understood. We analyzed invasive intracranial recordings in neurosurgical patients attending to a localized talker in real-life cocktail party scenarios. We found that sensitivity to an individual talker's voice and location features was distributed throughout auditory cortex and that neural sites exhibited a gradient from sensitivity to a single feature to joint sensitivity to both features. On a population level, cortical response patterns of both dual-feature sensitive sites and single-feature sensitive sites revealed simultaneous encoding of an attended talker's voice and location features. However, for single-feature sensitive sites, the representation of the primary feature was more precise. Further, sites that selectively tracked an attended speech stream concurrently encoded the attended talker's voice and location features, indicating that such sites combine selective tracking of an attended auditory object with encoding of the object's features. Finally, we found that attending a localized talker selectively enhanced temporal coherence between single-feature voice-sensitive sites and single-feature location-sensitive sites, providing an additional mechanism for linking voice and location in multi-talker scenes. These results demonstrate that a talker's voice and location features are linked during multi-dimensional object formation in naturalistic multi-talker scenes by joint population coding as well as by temporal coherence between neural sites.
SIGNIFICANCE STATEMENT
Listeners effortlessly extract auditory objects from complex acoustic scenes consisting of multiple sound sources in naturalistic, spatial sound scenes. Yet, how the brain links different sound features to form a multi-dimensional auditory object is poorly understood. We investigated how neural responses encode and integrate an attended talker's voice and location features in spatial multi-talker sound scenes to elucidate which neural mechanisms underlie simultaneous encoding and linking of different auditory features. Our results show that joint population coding as well as temporal coherence mechanisms contribute to distributed multi-dimensional auditory object encoding. These findings shed new light on cortical functional specialization and multi-dimensional auditory object formation in complex, naturalistic listening scenes.
HIGHLIGHTS
- Cortical responses to a single talker exhibit a distributed gradient, ranging from sites that are sensitive to both a talker's voice and location (dual-feature sensitive sites) to sites that are sensitive to either voice or location (single-feature sensitive sites).
- Population response patterns of dual-feature sensitive sites encode voice and location features of the attended talker in multi-talker scenes jointly and with equal precision.
- Despite their sensitivity to a single feature at the level of individual cortical sites, population response patterns of single-feature sensitive sites also encode location and voice features of a talker jointly, but with higher precision for the feature they are primarily sensitive to.
- Neural sites that selectively track an attended speech stream concurrently encode the attended talker's voice and location features.
- Attention selectively enhances temporal coherence between voice- and location-selective sites over time.
- Joint population coding as well as temporal coherence mechanisms underlie distributed multi-dimensional auditory object encoding in auditory cortex.
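Temporal coherence between neural sites is commonly quantified as the correlation of their response time courses. A toy simulation (assumed envelopes and noise levels, not the paper's data) shows why sites driven by the same attended envelope cohere more than sites driven by different talkers' envelopes:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0, 500)
attended_env = np.abs(np.sin(2 * np.pi * 3 * t))   # attended talker's envelope
other_env = np.abs(np.sin(2 * np.pi * 5 * t + 1))  # unattended talker's envelope

# Two single-feature sites (one voice-sensitive, one location-sensitive)
# tracking the attended talker share its envelope plus independent noise.
voice_site = attended_env + 0.2 * rng.standard_normal(t.size)
loc_site = attended_env + 0.2 * rng.standard_normal(t.size)
other_site = other_env + 0.2 * rng.standard_normal(t.size)

coh_attended = np.corrcoef(voice_site, loc_site)[0, 1]
coh_across = np.corrcoef(voice_site, other_site)[0, 1]
```

The shared-envelope pair is strongly correlated while the across-talker pair is not, which is the signature attention is proposed to enhance when binding voice and location features.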
4.
Ying R, Stolzberg DJ, Caras ML. Neural correlates of flexible sound perception in the auditory midbrain and thalamus. bioRxiv 2024:2024.04.12.589266. [PMID: 38645241; PMCID: PMC11030403; DOI: 10.1101/2024.04.12.589266]
Abstract
Hearing is an active process in which listeners must detect and identify sounds, segregate and discriminate stimulus features, and extract their behavioral relevance. Adaptive changes in sound detection can emerge rapidly, during sudden shifts in acoustic or environmental context, or more slowly as a result of practice. Although we know that context- and learning-dependent changes in the spectral and temporal sensitivity of auditory cortical neurons support many aspects of flexible listening, the contribution of subcortical auditory regions to this process is less well understood. Here, we recorded single- and multi-unit activity from the central nucleus of the inferior colliculus (ICC) and the ventral subdivision of the medial geniculate nucleus (MGV) of Mongolian gerbils in two different behavioral contexts: as animals performed an amplitude modulation (AM) detection task and as they were passively exposed to AM sounds. Using a signal detection framework to estimate neurometric sensitivity, we found that neural thresholds in both regions improved during task performance, and this improvement was driven by changes in firing rate rather than phase locking. We also found that ICC and MGV neurometric thresholds improved and correlated with behavioral performance as animals learned to detect small AM depths during a multi-day perceptual training paradigm. Finally, we found that in the MGV, but not the ICC, context-dependent enhancements in AM sensitivity grew stronger during perceptual training, mirroring prior observations in the auditory cortex. Together, our results suggest that the auditory midbrain and thalamus contribute to flexible sound processing and perception over rapid and slow timescales.
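The firing-rate-versus-phase-locking distinction drawn above can be illustrated with vector strength, the standard phase-locking metric for AM responses (synthetic spike times, purely illustrative):

```python
import numpy as np

def vector_strength(spike_times, mod_freq):
    """Phase locking of spikes to the AM cycle: 1 = perfectly locked,
    ~0 = spikes spread uniformly over the modulation period."""
    phases = 2 * np.pi * mod_freq * np.asarray(spike_times, dtype=float)
    return float(np.abs(np.mean(np.exp(1j * phases))))

locked = np.arange(0.0, 1.0, 1.0 / 8.0)  # one spike per 8 Hz AM cycle, fixed phase
rng = np.random.default_rng(2)
random_spikes = rng.uniform(0.0, 1.0, 5000)  # spikes with no phase preference

vs_locked = vector_strength(locked, 8.0)
vs_random = vector_strength(random_spikes, 8.0)
```

A task-driven threshold improvement carried by firing rate would change spike counts while leaving a metric like this largely unchanged, which is the dissociation the authors report.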
Affiliation(s)
- Rose Ying
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland 20742
- Department of Biology, University of Maryland, College Park, Maryland 20742
- Center for Comparative and Evolutionary Biology of Hearing, University of Maryland, College Park, Maryland 20742
- Daniel J. Stolzberg
- Department of Biology, University of Maryland, College Park, Maryland 20742
- Melissa L. Caras
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland 20742
- Department of Biology, University of Maryland, College Park, Maryland 20742
- Center for Comparative and Evolutionary Biology of Hearing, University of Maryland, College Park, Maryland 20742
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742
5.
Mazo C, Baeta M, Petreanu L. Auditory cortex conveys non-topographic sound localization signals to visual cortex. Nat Commun 2024; 15:3116. [PMID: 38600132; PMCID: PMC11006897; DOI: 10.1038/s41467-024-47546-4]
Abstract
Spatiotemporally congruent sensory stimuli are fused into a unified percept. The auditory cortex (AC) sends projections to the primary visual cortex (V1), which could provide signals for binding spatially corresponding audio-visual stimuli. However, whether AC inputs in V1 encode sound location remains unknown. Using two-photon axonal calcium imaging and a speaker array, we measured the auditory spatial information transmitted from AC to layer 1 of V1. AC conveys information about the location of ipsilateral and contralateral sound sources to V1. Sound location could be accurately decoded by sampling AC axons in V1, providing a substrate for making location-specific audiovisual associations. However, AC inputs were not retinotopically arranged in V1, and audio-visual modulations of V1 neurons did not depend on the spatial congruency of the sound and light stimuli. The non-topographic sound localization signals provided by AC might allow the association of specific audiovisual spatial patterns in V1 neurons.
Affiliation(s)
- Camille Mazo
- Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal
- Margarida Baeta
- Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal
- Leopoldo Petreanu
- Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal
6.
Sun L, Li C, Wang S, Si Q, Lin M, Wang N, Sun J, Li H, Liang Y, Wei J, Zhang X, Zhang J. Left frontal eye field encodes sound locations during passive listening. Cereb Cortex 2023; 33:3067-3079. [PMID: 35858212; DOI: 10.1093/cercor/bhac261]
Abstract
Previous studies reported that the auditory cortices (AC) are mostly activated by sounds coming from the contralateral hemifield. As a result, sound locations could be encoded by integrating opposite activations from both sides of the AC ("opponent hemifield coding"). However, the human auditory "where" pathway also includes a series of parietal and prefrontal regions, and it remained unknown how sound locations are represented in these high-level regions during passive listening. Here, we investigated the neural representation of sound locations in high-level regions by voxel-level tuning analysis, region-of-interest-level (ROI-level) laterality analysis, and ROI-level multivariate pattern analysis. Functional magnetic resonance imaging data were collected while participants listened passively to sounds from various horizontal locations. We found that opponent hemifield coding of sound locations not only existed in the AC but also spanned the intraparietal sulcus, superior parietal lobule, and frontal eye field (FEF). Furthermore, multivariate pattern representation of sound locations in both hemifields could be observed in the left AC, right AC, and left FEF. Overall, our results demonstrate that the left FEF, a high-level region along the auditory "where" pathway, encodes sound locations during passive listening in two ways: a univariate opponent-hemifield activation representation and a multivariate full-field activation pattern representation.
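Opponent hemifield coding can be sketched as a read-out of the difference between the two hemispheres' responses. This toy model (hypothetical linear tuning, arbitrary units; not the paper's fitted model) shows why the left-minus-right difference is a monotonic, invertible code for azimuth:

```python
import numpy as np

# Each hemisphere responds more strongly to contralateral azimuths
# (positive azimuth = right hemifield).
azimuth = np.array([-90.0, -45.0, 0.0, 45.0, 90.0])
left_ac = 1.0 + azimuth / 180.0   # left AC prefers the right hemifield
right_ac = 1.0 - azimuth / 180.0  # right AC prefers the left hemifield

# The left-minus-right difference grows monotonically with azimuth,
# so inverting the toy tuning recovers the source location.
opponent = left_ac - right_ac
decoded = 90.0 * opponent
```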
Affiliation(s)
- Liwei Sun
- School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Capital Medical University, Beijing 100069, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing 100069, China
- Chunlin Li
- School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Capital Medical University, Beijing 100069, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing 100069, China
- Songjian Wang
- School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing 100069, China
- Qian Si
- School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Capital Medical University, Beijing 100069, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing 100069, China
- Meng Lin
- School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing 100069, China
- Ningyu Wang
- Department of Otorhinolaryngology, Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing 100020, China
- Jun Sun
- Department of Radiology, Beijing Youan Hospital, Capital Medical University, Beijing 100069, China
- Hongjun Li
- Department of Radiology, Beijing Youan Hospital, Capital Medical University, Beijing 100069, China
- Ying Liang
- School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Capital Medical University, Beijing 100069, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing 100069, China
- Jing Wei
- School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Capital Medical University, Beijing 100069, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing 100069, China
- Xu Zhang
- School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Capital Medical University, Beijing 100069, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing 100069, China
- Juan Zhang
- Department of Otorhinolaryngology, Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing 100020, China
7.
Lage-Castellanos A, De Martino F, Ghose GM, Gulban OF, Moerel M. Selective attention sharpens population receptive fields in human auditory cortex. Cereb Cortex 2022; 33:5395-5408. [PMID: 36336333; PMCID: PMC10152083; DOI: 10.1093/cercor/bhac427]
Abstract
Selective attention enables the preferential processing of relevant stimulus aspects. Invasive animal studies have shown that attending to a sound feature rapidly modifies neuronal tuning throughout the auditory cortex, and human neuroimaging studies have reported enhanced auditory cortical responses with selective attention. To date, it remains unclear how the results obtained with functional magnetic resonance imaging (fMRI) in humans relate to the electrophysiological findings in animal models. Here we aim to narrow the gap between animal and human research by combining a selective attention task similar in design to those used in animal electrophysiology with high-spatial-resolution ultra-high-field fMRI at 7 Tesla. Specifically, human participants performed a detection task in which the probability of target occurrence varied with sound frequency. Contrary to previous fMRI studies, we show that selective attention resulted in population receptive field sharpening, and consequently reduced responses, at the attended sound frequencies. The difference between our results and those of previous fMRI studies supports the notion that the influence of selective attention on auditory cortex is diverse and may depend on context, stimulus, and task.
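The link between pRF sharpening and reduced responses follows from the fact that a Gaussian pRF's aggregate response to a fixed stimulus set scales with its tuning width. A minimal sketch (hypothetical widths and stimulus grid, not the fitted model from the study):

```python
import numpy as np

def prf_response(stim_freqs, center, width):
    """Response of a Gaussian population receptive field on a log-frequency
    axis, summed over all components present in the stimulus set."""
    f = np.asarray(stim_freqs, dtype=float)
    return float(np.exp(-0.5 * ((f - center) / width) ** 2).sum())

octaves = np.linspace(-2.0, 2.0, 81)          # stimulus grid (log2 frequency)
unattended = prf_response(octaves, 0.0, 1.0)  # baseline pRF width
attended = prf_response(octaves, 0.0, 0.5)    # sharpened under attention
```

Halving the width roughly halves the summed response, so sharpening alone can produce the reduced BOLD responses at attended frequencies that the authors report.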
Affiliation(s)
- Agustin Lage-Castellanos
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), 6200 MD Maastricht, The Netherlands
- Department of NeuroInformatics, Cuban Neuroscience Center, Havana City 11600, Cuba
- Federico De Martino
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), 6200 MD Maastricht, The Netherlands
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, MN 55455, United States
- Geoffrey M Ghose
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, MN 55455, United States
- Michelle Moerel
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), 6200 MD Maastricht, The Netherlands
- Maastricht Centre for Systems Biology, Maastricht University, 6200 MD Maastricht, The Netherlands
8.
Interaction of bottom-up and top-down neural mechanisms in spatial multi-talker speech perception. Curr Biol 2022; 32:3971-3986.e4. [DOI: 10.1016/j.cub.2022.07.047]
Abstract
How the human auditory cortex represents spatially separated simultaneous talkers, and how talkers' locations and voices modulate the neural representations of attended and unattended speech, are unclear. Here, we measured the neural responses from electrodes implanted in neurosurgical patients as they performed single-talker and multi-talker speech perception tasks. We found that spatial separation between talkers caused a preferential encoding of the contralateral speech in Heschl's gyrus (HG), planum temporale (PT), and superior temporal gyrus (STG). Location and spectrotemporal features were encoded in different aspects of the neural response: the talker's location changed the mean response level, whereas the talker's spectrotemporal features altered the variation of the response around its baseline. These components were differentially modulated by the attended talker's voice or location, which improved the population decoding of attended speech features. Attentional modulation due to the talker's voice appeared only in auditory areas with longer latencies, whereas attentional modulation due to location was present throughout. Our results show that spatial multi-talker speech perception relies upon a separable pre-attentive neural representation, which can be further tuned by top-down attention to the location and voice of the talker.
9.
Klatt LI, Getzmann S, Schneider D. Attentional modulations of alpha power are sensitive to the task-relevance of auditory spatial information. Cortex 2022; 153:1-20. [DOI: 10.1016/j.cortex.2022.03.022]
10.
Auerbach BD, Gritton HJ. Hearing in complex environments: auditory gain control, attention, and hearing loss. Front Neurosci 2022; 16:799787. [PMID: 35221899; PMCID: PMC8866963; DOI: 10.3389/fnins.2022.799787]
Abstract
Listening in noisy or complex sound environments is difficult for individuals with normal hearing and can be a debilitating impairment for those with hearing loss. Extracting meaningful information from a complex acoustic environment requires the ability to accurately encode specific sound features under highly variable listening conditions and segregate distinct sound streams from multiple overlapping sources. The auditory system employs a variety of mechanisms to achieve this auditory scene analysis. First, neurons across levels of the auditory system exhibit compensatory adaptations to their gain and dynamic range in response to prevailing sound stimulus statistics in the environment. These adaptations allow for robust representations of sound features that are to a large degree invariant to the level of background noise. Second, listeners can selectively attend to a desired sound target in an environment with multiple sound sources. This selective auditory attention is another form of sensory gain control, enhancing the representation of an attended sound source while suppressing responses to unattended sounds. This review will examine both “bottom-up” gain alterations in response to changes in environmental sound statistics as well as “top-down” mechanisms that allow for selective extraction of specific sound features in a complex auditory scene. Finally, we will discuss how hearing loss interacts with these gain control mechanisms, and the adaptive and/or maladaptive perceptual consequences of this plasticity.
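The "bottom-up" gain adaptation described here is often modeled as a sigmoidal rate-level function whose half-saturation point shifts toward the prevailing background level, keeping the response to a fixed signal-to-background contrast invariant. A sketch under assumed sigmoid parameters (all values illustrative):

```python
import numpy as np

def rate(level_db, c50_db, slope=0.1, r_max=50.0):
    """Sigmoidal rate-level function; c50_db is the half-saturation level,
    which gain adaptation shifts toward the background sound level."""
    return r_max / (1.0 + np.exp(-slope * (level_db - c50_db)))

# A signal 10 dB above background evokes the same rate in quiet and in noise
# once c50 has adapted to track the background level.
quiet = rate(40.0, c50_db=30.0)   # quiet background, c50 near 30 dB SPL
noisy = rate(70.0, c50_db=60.0)   # loud background, c50 shifted up by 30 dB
```

Without the c50 shift, the 70 dB signal would simply drive the neuron harder; with it, the representation becomes approximately invariant to background level, which is the robustness the review describes.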
Affiliation(s)
- Benjamin D. Auerbach
- Department of Molecular and Integrative Physiology, Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Neuroscience Program, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Howard J. Gritton
- Neuroscience Program, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Department of Comparative Biosciences, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, IL, United States
11.
Veugen LCE, van Opstal AJ, van Wanrooij MM. Reaction time sensitivity to spectrotemporal modulations of sound. Trends Hear 2022; 26:23312165221127589. [PMID: 36172759; PMCID: PMC9523861; DOI: 10.1177/23312165221127589]
Abstract
We tested whether sensitivity to acoustic spectrotemporal modulations can be observed from reaction times under normal-hearing and impaired-hearing conditions. In a manual reaction-time task, normal-hearing listeners had to detect the onset of a ripple (with a density between 0 and 8 cycles/octave and a fixed modulation depth of 50%) that moved up or down the log-frequency axis at constant velocity (between 0 and 64 Hz) in otherwise-unmodulated broadband white noise. Spectral and temporal modulations elicited band-pass sensitivity characteristics, with the fastest detection rates around 1 cycle/octave and 32 Hz under normal-hearing conditions. These results closely resemble data from other studies that typically used the modulation-depth threshold as a sensitivity criterion. To simulate hearing impairment, stimuli were processed with a 6-channel cochlear-implant vocoder and with a hearing-aid simulation that introduced separate spectral smearing and low-pass filtering. Reaction times were always much slower than for normal hearing, especially at the highest spectral densities. Binaural performance was predicted well by the benchmark race model of binaural independence, which models statistical facilitation of independent monaural channels. For the impaired-hearing simulations this implied a "best-of-both-worlds" principle, in which listeners relied on the hearing-aid ear to detect spectral modulations and on the cochlear-implant ear for temporal-modulation detection. Although singular-value decomposition indicated that the joint spectrotemporal sensitivity matrix could be largely reconstructed from independent temporal and spectral sensitivity functions, in line with time-spectrum separability, a substantial inseparable spectral-temporal interaction was present in all hearing conditions. These results suggest that the reaction-time task yields a valid and effective objective measure of acoustic spectrotemporal-modulation sensitivity.
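The benchmark race model treats the two ears as independent channels, with the binaural reaction time set by whichever channel finishes first; statistical facilitation then predicts faster mean binaural RTs than either ear alone. A toy simulation (assumed RT distributions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000
rt_ha = rng.normal(450.0, 60.0, n)  # hearing-aid-ear RTs in ms (assumed)
rt_ci = rng.normal(480.0, 60.0, n)  # cochlear-implant-ear RTs in ms (assumed)

# Race model of binaural independence: the faster channel on each trial
# triggers the response.
rt_bin = np.minimum(rt_ha, rt_ci)
```

The mean of the minimum falls below both monaural means even though the channels never interact; this statistical facilitation is the benchmark against which measured binaural RTs were compared.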
Affiliation(s)
- Lidwien C E Veugen
- Department of Biophysics, Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, Netherlands
- A John van Opstal
- Department of Biophysics, Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, Netherlands
- Marc M van Wanrooij
- Department of Biophysics, Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, Netherlands
12.
Goal-driven, neurobiological-inspired convolutional neural network models of human spatial hearing. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2021.05.104]
13.
Willett SM, Groh JM. Multiple sounds degrade the frequency representation in monkey inferior colliculus. Eur J Neurosci 2021; 55:528-548. [PMID: 34844286; PMCID: PMC9267755; DOI: 10.1111/ejn.15545]
Abstract
How we distinguish multiple simultaneous stimuli is uncertain, particularly given that such stimuli sometimes recruit largely overlapping populations of neurons. One commonly proposed hypothesis is that the sharpness of tuning curves might change to limit the number of stimuli driving any given neuron when multiple stimuli are present. To test this hypothesis, we recorded the activity of neurons in the inferior colliculus while monkeys made saccades to either one or two simultaneous sounds differing in frequency and spatial location. Although monkeys easily distinguished simultaneous sounds (~90% correct performance), the frequency selectivity of inferior colliculus neurons on dual-sound trials did not improve in any obvious way. Instead, frequency selectivity was degraded on dual-sound trials compared to single-sound trials: neural response functions broadened, and frequency accounted for less of the variance in firing rate. These changes in neural firing led a maximum-likelihood decoder to perform worse on dual-sound trials than on single-sound trials. These results fail to support the hypothesis that changes in frequency response functions serve to reduce the overlap in the representation of simultaneous sounds. Instead, they suggest that alternative possibilities, such as recent evidence of alternations in firing rate between the rates corresponding to each of the two stimuli, offer a more promising approach.
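The decoder comparison can be illustrated with a toy sketch (not the study's decoder, tuning curves, or data): a Poisson maximum-likelihood decoder reads out stimulus frequency from simulated tuning curves, and broadening those tuning curves degrades decoding accuracy, mirroring the effect described above.

```python
import numpy as np

rng = np.random.default_rng(1)
freqs = np.arange(8)  # candidate frequency indices

def tuning(width):
    """Gaussian tuning curves (mean spike counts) for 20 model neurons
    whose preferred frequencies tile the frequency axis."""
    pref = np.linspace(0, 7, 20)
    return 1 + 10 * np.exp(-0.5 * ((freqs[None, :] - pref[:, None]) / width) ** 2)

def ml_decode(counts, rates):
    """Poisson maximum likelihood: choose the frequency whose predicted
    rates make the observed spike counts most probable."""
    loglik = counts[:, None] * np.log(rates) - rates  # up to a constant
    return np.argmax(loglik.sum(axis=0))

def accuracy(width, n_trials=500):
    rates = tuning(width)
    correct = 0
    for _ in range(n_trials):
        f = rng.integers(8)
        counts = rng.poisson(rates[:, f])
        correct += ml_decode(counts, rates) == f
    return correct / n_trials

# Sharp tuning decodes frequency better than broad tuning.
print(accuracy(width=0.5), accuracy(width=6.0))
```

With sharp curves (width 0.5) the decoder is near perfect; with broad, flattened curves (width 6.0) adjacent frequencies are confused, analogous to the broadened response functions on dual-sound trials.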
Affiliation(s)
- Shawn M Willett
- Department of Ophthalmology, University of Pittsburgh, Pittsburgh, Pennsylvania, USA; Department of Neurobiology, Center for Cognitive Neuroscience, Duke University, Durham, North Carolina, USA
- Jennifer M Groh
- Department of Neurobiology, Center for Cognitive Neuroscience, Duke University, Durham, North Carolina, USA

14
Amaro D, Ferreiro DN, Grothe B, Pecka M. Source identity shapes spatial preference in primary auditory cortex during active navigation. Curr Biol 2021; 31:3875-3883.e5. PMID: 34192513; DOI: 10.1016/j.cub.2021.06.025
Abstract
Information about the position of sensory objects, and about their concurrent behavioral relevance, is vital for navigating the environment. In the auditory system, spatial information is computed in the brain from the position of the sound source relative to the observer and is thus assumed to be egocentric throughout the auditory pathway. This assumption is largely based on studies conducted in either anesthetized or head-fixed, passively listening animals, which lack self-motion and selective listening. Yet these factors are fundamental components of natural sensing [1] that may crucially impact the nature of spatial coding and sensory object representation [2]. How individual objects are neuronally represented during unrestricted self-motion and active sensing remains mostly unexplored. Here, we trained gerbils on a behavioral foraging paradigm that required localization and identification of sound sources during free navigation. Chronic tetrode recordings in primary auditory cortex during task performance revealed previously unreported sensory object representations. Strikingly, the egocentric angle preference of the majority of spatially sensitive neurons changed significantly depending on the task-specific identity (outcome association) of the sound source. Spatial tuning also exhibited large temporal complexity. Moreover, we encountered egocentrically untuned neurons whose response magnitude differed between source identities. Using a neural network decoder, we show that, together, these neuronal response ensembles provide spatiotemporally coexistent information about both the egocentric location and the identity of individual sensory objects during self-motion, revealing a novel cortical computation principle for naturalistic sensing.
Affiliation(s)
- Diana Amaro
- Division of Neurobiology, Department Biology II, Ludwig-Maximilians-Universität München, Planegg-Martinsried, Germany; Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München, Planegg-Martinsried, Germany
- Dardo N Ferreiro
- Division of Neurobiology, Department Biology II, Ludwig-Maximilians-Universität München, Planegg-Martinsried, Germany; Department of General Psychology and Education, Ludwig-Maximilians-Universität München, Germany
- Benedikt Grothe
- Division of Neurobiology, Department Biology II, Ludwig-Maximilians-Universität München, Planegg-Martinsried, Germany; Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München, Planegg-Martinsried, Germany; Max Planck Institute of Neurobiology, Planegg-Martinsried, Germany
- Michael Pecka
- Division of Neurobiology, Department Biology II, Ludwig-Maximilians-Universität München, Planegg-Martinsried, Germany

15
AIM: A network model of attention in auditory cortex. PLoS Comput Biol 2021; 17:e1009356. PMID: 34449761; PMCID: PMC8462696; DOI: 10.1371/journal.pcbi.1009356
Abstract
Attentional modulation of cortical networks is critical for the cognitive flexibility required to process complex scenes. Current theoretical frameworks for attention are based almost exclusively on studies in visual cortex, where attentional effects are typically modest and excitatory. In contrast, attentional effects in auditory cortex can be large and suppressive. A theoretical framework for explaining attentional effects in auditory cortex is lacking, preventing a broader understanding of cortical mechanisms underlying attention. Here, we present a cortical network model of attention in primary auditory cortex (A1). A key mechanism in our network is attentional inhibitory modulation (AIM) of cortical inhibitory neurons. In this mechanism, top-down inhibitory neurons disinhibit bottom-up cortical circuits, a prominent circuit motif observed in sensory cortex. Our results reveal that the same underlying mechanisms in the AIM network can explain diverse attentional effects on both spatial and frequency tuning in A1. We find that a dominant effect of disinhibition on cortical tuning is suppressive, consistent with experimental observations. Functionally, the AIM network may play a key role in solving the cocktail party problem. We demonstrate how attention can guide the AIM network to monitor an acoustic scene, select a specific target, or switch to a different target, providing flexible outputs for solving the cocktail party problem. Selective attention plays a key role in how we navigate our everyday lives. For example, at a cocktail party, we can attend to a friend's speech amidst other speakers, music, and background noise. In stark contrast, hundreds of millions of people with hearing impairment and other disorders find such environments overwhelming and debilitating. Understanding the mechanisms underlying selective attention may lead to breakthroughs in improving the quality of life for those negatively affected.
Here, we propose a mechanistic network model of attention in primary auditory cortex based on attentional inhibitory modulation (AIM). In the AIM model, attention targets specific cortical inhibitory neurons, which then modulate local cortical circuits to emphasize a particular feature of sounds and suppress competing features. We show that the AIM model can account for experimental observations across different species and stimulus domains. We also demonstrate that the same mechanisms can enable listeners to flexibly switch between attending to specific target sounds and monitoring the environment in complex acoustic scenes, such as a cocktail party. The AIM network provides a theoretical framework that can work in tandem with new experiments to help unravel the cortical circuits underlying attention.
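The disinhibition motif described above can be caricatured in a toy steady-state rate model (purely illustrative; these are not the AIM network's equations or parameters): top-down inhibition targets the local inhibitory unit of the attended feature channel, releasing that channel's excitatory response while competing channels remain suppressed.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def steady_state_response(stim, attend=None, w_inh=1.5, top_down=2.0):
    """Toy steady-state rates for a bank of feature channels.
    Each excitatory channel is suppressed by a paired local inhibitory
    unit driven by the same input; attention sends top-down inhibition
    onto the inhibitory unit of the attended channel, disinhibiting it.
    """
    inh_drive = stim.copy()
    if attend is not None:
        inh_drive[attend] -= top_down   # top-down inhibits the inhibitor
    inh = relu(inh_drive)
    exc = relu(stim - w_inh * inh)      # excitatory output after inhibition
    return exc

stim = np.array([1.0, 1.0, 1.0, 1.0])   # four equal competing sounds
passive = steady_state_response(stim)
attended = steady_state_response(stim, attend=2)
print(passive)    # all channels suppressed by local inhibition
print(attended)   # only the attended channel is released
```

The net effect of attention here is suppressive overall with a single released channel, loosely echoing the large, suppressive attentional effects the model is built to explain.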
16
Souffi S, Nodal FR, Bajo VM, Edeline JM. When and How Does the Auditory Cortex Influence Subcortical Auditory Structures? New Insights About the Roles of Descending Cortical Projections. Front Neurosci 2021; 15:690223. PMID: 34413722; PMCID: PMC8369261; DOI: 10.3389/fnins.2021.690223
Abstract
For decades, the corticofugal descending projections have been anatomically well described, but their functional role remains a puzzling question. In this review, we first describe how neuronal networks represent communication sounds under various types of degraded acoustic conditions, from the cochlear nucleus to the primary and secondary auditory cortex. In such situations, the discrimination abilities of collicular and thalamic neurons are clearly better than those of cortical neurons, although the latter remain only slightly affected by degraded acoustic conditions. Second, we report the functional effects of activating or inactivating corticofugal projections on the properties of subcortical neurons. In general, modest effects have been observed in anesthetized and in awake, passively listening animals. In contrast, in behavioral tasks that include challenging conditions, performance was severely reduced by removing or transiently silencing the corticofugal descending projections. This suggests that the discriminative abilities of subcortical neurons may be sufficient in many acoustic situations; it is only in particularly challenging situations, whether due to task difficulty and/or degraded acoustic conditions, that the corticofugal descending connections confer additional abilities. Here, we propose that it is the top-down influences from the prefrontal cortex, together with those from the neuromodulatory systems, that allow the cortical descending projections to impact behavioral performance by reshaping the functional circuitry of subcortical structures. We propose potential scenarios to explain how, and under which circumstances, these projections affect subcortical processing and behavioral responses.
Affiliation(s)
- Samira Souffi
- Department of Integrative and Computational Neurosciences, Paris-Saclay Institute of Neuroscience (NeuroPSI), UMR CNRS 9197, Paris-Saclay University, Orsay, France
- Fernando R Nodal
- Department of Physiology, Anatomy and Genetics, Medical Sciences Division, University of Oxford, Oxford, United Kingdom
- Victoria M Bajo
- Department of Physiology, Anatomy and Genetics, Medical Sciences Division, University of Oxford, Oxford, United Kingdom
- Jean-Marc Edeline
- Department of Integrative and Computational Neurosciences, Paris-Saclay Institute of Neuroscience (NeuroPSI), UMR CNRS 9197, Paris-Saclay University, Orsay, France

17
Homma NY, Bajo VM. Lemniscal Corticothalamic Feedback in Auditory Scene Analysis. Front Neurosci 2021; 15:723893. PMID: 34489635; PMCID: PMC8417129; DOI: 10.3389/fnins.2021.723893
Abstract
Sound information is transmitted from the ear to the central auditory stations of the brain via several nuclei. In addition to these ascending pathways, there exist descending projections that can influence information processing at each of these nuclei. A major descending pathway in the auditory system is the feedback projection from layer VI of the primary auditory cortex (A1) to the ventral division of the medial geniculate body (MGBv) in the thalamus. The corticothalamic axons have small glutamatergic terminals that can modulate thalamic processing and thalamocortical information transmission. Corticothalamic neurons also provide input to GABAergic neurons of the thalamic reticular nucleus (TRN), which receives collaterals from the ascending thalamic axons. The balance of corticothalamic and TRN inputs has been shown to refine the frequency tuning, firing patterns, and gating of MGBv neurons. The thalamus is therefore not merely a relay stage in the chain of auditory nuclei but participates in complex aspects of sound processing, including top-down modulation. In this review, we aim (i) to examine how lemniscal corticothalamic feedback modulates responses in MGBv neurons and (ii) to explore how this feedback contributes to auditory scene analysis, particularly to frequency and harmonic perception. Finally, we discuss potential implications of corticothalamic feedback for music and speech perception, where precise spectral and temporal processing is essential.
Affiliation(s)
- Natsumi Y. Homma
- Center for Integrative Neuroscience, University of California, San Francisco, San Francisco, CA, United States
- Coleman Memorial Laboratory, Department of Otolaryngology – Head and Neck Surgery, University of California, San Francisco, San Francisco, CA, United States
- Victoria M. Bajo
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom

18
Middlebrooks JC. A Search for a Cortical Map of Auditory Space. J Neurosci 2021; 41:5772-5778. PMID: 34011526; PMCID: PMC8265804; DOI: 10.1523/jneurosci.0501-21.2021
Abstract
This is the story of a search for a cortical map of auditory space. The search began with a study that was reported in the first issue of The Journal of Neuroscience (Middlebrooks and Pettigrew, 1981). That paper described some unexpected features of spatial sensitivity in the auditory cortex while failing to demonstrate the expected map. In the ensuing 40 years, we have encountered the following: panoramic spatial coding by single neurons; a rich variety of response patterns that are unmasked in the absence of general anesthesia; sharpening of spatial sensitivity when an animal is engaged in a listening task; and reorganization of spatial sensitivity in the presence of competing sounds. We have not encountered a map, but not through lack of trying. On the basis of years of negative results by our group and others, and positive results that are inconsistent with static point-to-point topography, we are confident in concluding that there just ain't no map. Instead, we have come to appreciate the highly dynamic spatial properties of cortical neurons, which serve the needs of listeners in a changing sonic environment.
Affiliation(s)
- John C Middlebrooks
- Department of Otolaryngology
- Department of Neurobiology and Behavior
- Department of Cognitive Sciences
- Department of Biomedical Engineering, University of California at Irvine, Irvine, California 92697-5310

19
Reznik D, Guttman N, Buaron B, Zion-Golumbic E, Mukamel R. Action-locked Neural Responses in Auditory Cortex to Self-generated Sounds. Cereb Cortex 2021; 31:5560-5569. PMID: 34185837; DOI: 10.1093/cercor/bhab179
Abstract
Sensory perception is a product of interactions between the internal state of an organism and the physical attributes of a stimulus. It has been shown across the animal kingdom that perception and sensory-evoked physiological responses are modulated depending on whether or not the stimulus is the consequence of voluntary actions. These phenomena are often attributed to motor signals sent to relevant sensory regions that convey information about upcoming sensory consequences. However, the neurophysiological signature of action-locked modulations in sensory cortex, and its relationship with perception, is still unclear. In the current study, we recorded neurophysiological (magnetoencephalography) and behavioral responses from 16 healthy subjects performing an auditory detection task of faint tones. Tones were either generated by subjects' voluntary button presses or occurred predictably following a visual cue. By introducing a constant temporal delay between button press/cue and tone delivery, and applying source-level analysis, we decoupled action-locked and auditory-locked activity in auditory cortex. We show action-locked evoked responses in auditory cortex that follow sound-triggering actions and precede sound onset. Such evoked responses were not found for button presses that were not coupled with sounds, or for sounds delivered following a predictive visual cue. Our results provide evidence for efferent signals in human auditory cortex that are locked to voluntary actions coupled with future auditory consequences.
Affiliation(s)
- Daniel Reznik
- Max Planck Institute for Human Cognitive and Brain Sciences, Psychology Department, Leipzig, 04103, Germany
- Noa Guttman
- The Gonda Center for Multidisciplinary Brain Research, Bar-Ilan University, Ramat Gan, 5290002, Israel
- Batel Buaron
- Sagol School of Neuroscience and School of Psychological Sciences, Tel-Aviv University, 69978, Israel
- Elana Zion-Golumbic
- The Gonda Center for Multidisciplinary Brain Research, Bar-Ilan University, Ramat Gan, 5290002, Israel
- Roy Mukamel
- Sagol School of Neuroscience and School of Psychological Sciences, Tel-Aviv University, 69978, Israel

20
Yu L, Hu J, Shi C, Zhou L, Tian M, Zhang J, Xu J. The causal role of auditory cortex in auditory working memory. eLife 2021; 10:e64457. PMID: 33913809; PMCID: PMC8169109; DOI: 10.7554/elife.64457
Abstract
Working memory (WM), the ability to actively hold information in memory over a delay period of seconds, is a fundamental constituent of cognition. Delay-period activity in sensory cortices has been observed in WM tasks, but whether and when this activity plays a functional role in memory maintenance remains unclear. Here, we investigated the causal role of auditory cortex (AC) in memory maintenance in mice performing an auditory WM task. Electrophysiological recordings revealed that AC neurons were active not only during the presentation of the auditory stimulus but also early in the delay period. Furthermore, optogenetic suppression of neural activity in AC during the stimulus epoch and early delay period impaired WM performance, whereas suppression later in the delay period did not. Thus, AC is essential for information encoding and maintenance in the auditory WM task, especially during the early delay period.
Affiliation(s)
- Liping Yu, Jiawei Hu, Chenlin Shi, Li Zhou, Maozhi Tian, Jiping Zhang, Jinghong Xu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), Key Laboratory of Adolescent Health Assessment and Exercise Intervention of Ministry of Education, and School of Life Sciences, East China Normal University, Shanghai, China

21
Mohn JL, Downer JD, O'Connor KN, Johnson JS, Sutter ML. Choice-related activity and neural encoding in primary auditory cortex and lateral belt during feature-selective attention. J Neurophysiol 2021; 125:1920-1937. PMID: 33788616; DOI: 10.1152/jn.00406.2020
Abstract
Selective attention is necessary to sift through, form a coherent percept of, and make behavioral decisions on the vast amount of information present in most sensory environments. How and where selective attention is employed in cortex, and how this perceptual information then informs the relevant behavioral decisions, are still not well understood. Studies probing selective attention and decision-making in visual cortex have been enlightening as to how sensory attention might work in that modality; whether similar mechanisms are employed in auditory attention is not yet clear. Therefore, we trained rhesus macaques on a feature-selective attention task, in which they switched between reporting changes in temporal (amplitude modulation, AM) and spectral (carrier bandwidth) features of a broadband noise stimulus. We investigated how the encoding of these features by single neurons in primary (A1) and secondary (middle lateral belt, ML) auditory cortex was affected by the different attention conditions. We found that neurons in A1 and ML showed mixed selectivity to the sound and task features. We found no difference in AM encoding between the attention conditions, but choice-related activity in both A1 and ML neurons shifted between attentional conditions. This finding suggests that choice-related activity in auditory cortex does not simply reflect motor preparation or action, and it supports the relationship between reported choice-related activity and the decision and perceptual process.
NEW & NOTEWORTHY We recorded from primary and secondary auditory cortex while monkeys performed a nonspatial feature attention task. Both areas exhibited rate-based choice-related activity. The manifestation of choice-related activity was attention dependent, suggesting that choice-related activity in auditory cortex does not simply reflect arousal or motor influences but relates to the specific perceptual choice.
Affiliation(s)
- Jennifer L Mohn
- Center for Neuroscience, University of California, Davis, California; Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
- Joshua D Downer
- Center for Neuroscience, University of California, Davis, California; Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California
- Kevin N O'Connor
- Center for Neuroscience, University of California, Davis, California; Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
- Jeffrey S Johnson
- Center for Neuroscience, University of California, Davis, California; Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
- Mitchell L Sutter
- Center for Neuroscience, University of California, Davis, California; Department of Neurobiology, Physiology and Behavior, University of California, Davis, California

22
Homma NY, Hullett PW, Atencio CA, Schreiner CE. Auditory Cortical Plasticity Dependent on Environmental Noise Statistics. Cell Rep 2020; 30:4445-4458.e5. PMID: 32234479; PMCID: PMC7326484; DOI: 10.1016/j.celrep.2020.03.014
Abstract
During critical periods, neural circuits develop to form receptive fields that adapt to the sensory environment and enable optimal performance of relevant tasks. We hypothesized that early exposure to background noise can improve signal-in-noise processing, and that the resulting receptive field plasticity in the primary auditory cortex can reveal functional principles guiding that important task. We raised rat pups in different spectro-temporal noise statistics during their auditory critical period. As adults, they showed enhanced behavioral performance in detecting vocalizations in noise. Concomitantly, encoding of vocalizations in noise in the primary auditory cortex improved with noise-rearing. Significantly, spectro-temporal modulation plasticity shifted cortical preferences away from the exposed noise statistics, thus reducing noise interference with the foreground sound representation. Auditory cortical plasticity thus shapes receptive field preferences to optimally extract foreground information in noisy environments, and early noise exposure induces cortical circuits to implement efficient coding in the joint spectral and temporal modulation domain.
After rearing rats in moderately loud spectro-temporally modulated background noise, Homma et al. investigated signal-in-noise processing in the primary auditory cortex. Noise-rearing improved vocalization-in-noise performance in both behavioral testing and neural decoding. Cortical plasticity shifted neuronal spectro-temporal modulation preferences away from the exposed noise statistics.
Affiliation(s)
- Natsumi Y Homma, Patrick W Hullett, Craig A Atencio, Christoph E Schreiner
- Coleman Memorial Laboratory, Department of Otolaryngology - Head and Neck Surgery, University of California, San Francisco, San Francisco, CA 94143, USA; Center for Integrative Neuroscience, University of California, San Francisco, San Francisco, CA 94143, USA

23
Saderi D, Schwartz ZP, Heller CR, Pennington JR, David SV. Dissociation of task engagement and arousal effects in auditory cortex and midbrain. eLife 2021; 10:e60153. PMID: 33570493; PMCID: PMC7909948; DOI: 10.7554/elife.60153
Abstract
Both generalized arousal and engagement in a specific task influence sensory neural processing. To isolate the effects of these state variables in the auditory system, we recorded single-unit activity from primary auditory cortex (A1) and inferior colliculus (IC) of ferrets during a tone detection task, while monitoring arousal via changes in pupil size. We used a generalized linear model to assess the influence of task engagement and pupil size on sound-evoked activity. In both areas, these two variables affected independent neural populations. Pupil size effects were more prominent in IC, while pupil and task engagement effects were equally likely in A1. Task engagement was correlated with larger pupil size; thus, some apparent effects of task engagement should in fact be attributed to fluctuations in pupil size. These results indicate a hierarchy of auditory processing, where generalized arousal enhances activity in the midbrain, and effects specific to task engagement become more prominent in cortex.
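The statistical logic here, that task-engagement effects can masquerade as pupil effects unless both regressors enter the same model, can be sketched with simulated data. This is an illustrative least-squares toy with invented numbers, not the study's GLM, data, or parameters.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400

# Simulated state variables: task engagement (0/1) and a pupil signal
# that is correlated with engagement, as reported in the study.
task = rng.integers(0, 2, n).astype(float)
pupil = 0.5 * task + rng.normal(0, 1, n)

# Simulated neuron whose firing depends only on pupil size.
rate = 5 + 2.0 * pupil + rng.normal(0, 1, n)

def fit(X, y):
    """Ordinary least squares with an intercept (the study's GLM is
    richer; this only shows why both regressors must enter one model)."""
    coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), *X]), y, rcond=None)
    return coef

b_task = fit([task], rate)          # task-only model
b_joint = fit([task, pupil], rate)  # joint model

print(b_task[1])                # spurious nonzero "task engagement" effect
print(b_joint[1], b_joint[2])   # task effect near 0, pupil effect near 2
```

In the task-only model the engaged trials look more responsive simply because pupil is larger when engaged; the joint model correctly assigns the effect to pupil, which is why apparent task effects can be reattributed to pupil fluctuations.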
Affiliation(s)
- Daniela Saderi
- Oregon Hearing Research Center, Oregon Health and Science University, Portland, United States
- Neuroscience Graduate Program, Oregon Health and Science University, Portland, United States
- Zachary P Schwartz
- Oregon Hearing Research Center, Oregon Health and Science University, Portland, United States
- Neuroscience Graduate Program, Oregon Health and Science University, Portland, United States
- Charles R Heller
- Oregon Hearing Research Center, Oregon Health and Science University, Portland, United States
- Neuroscience Graduate Program, Oregon Health and Science University, Portland, United States
- Jacob R Pennington
- Department of Mathematics and Statistics, Washington State University, Vancouver, United States
- Stephen V David
- Oregon Hearing Research Center, Oregon Health and Science University, Portland, United States

24
Task Engagement Improves Neural Discriminability in the Auditory Midbrain of the Marmoset Monkey. J Neurosci 2021; 41:284-297. PMID: 33208469; DOI: 10.1523/jneurosci.1112-20.2020
Abstract
While task-dependent changes have been demonstrated in auditory cortex for a number of behavioral paradigms and mammalian species, less is known about how behavioral state can influence neural coding in the midbrain areas that provide auditory information to cortex. We measured single-unit activity in the inferior colliculus (IC) of common marmosets of both sexes while they performed a tone-in-noise detection task and during passive presentation of identical task stimuli. In contrast to our previous study in the ferret IC, task engagement had little effect on sound-evoked activity in the central (lemniscal) IC of the marmoset. However, activity was significantly modulated in noncentral fields, where responses were selectively enhanced for the target tone relative to the distractor noise. This led to an increase in neural discriminability between targets and distractors. The results confirm that task engagement can modulate sound coding in the auditory midbrain, and they support the hypothesis that subcortical pathways can mediate highly trained auditory behaviors.
SIGNIFICANCE STATEMENT While the cerebral cortex is widely viewed as playing an essential role in the learning and performance of complex auditory behaviors, relatively little attention has been paid to the role of brainstem and midbrain areas that process sound information before it reaches cortex. This study demonstrates that the auditory midbrain is also modulated during behavior. These modulations amplify task-relevant sensory information, a process that is traditionally attributed to cortex.
|
25
|
Ferreiro DN, Amaro D, Schmidtke D, Sobolev A, Gundi P, Belliveau L, Sirota A, Grothe B, Pecka M. Sensory Island Task (SIT): A New Behavioral Paradigm to Study Sensory Perception and Neural Processing in Freely Moving Animals. Front Behav Neurosci 2020; 14:576154. [PMID: 33100981 PMCID: PMC7546252 DOI: 10.3389/fnbeh.2020.576154] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2020] [Accepted: 08/27/2020] [Indexed: 11/17/2022] Open
Abstract
A central function of sensory systems is the gathering of information about dynamic interactions with the environment during self-motion. To determine whether modulation of a sensory cue was externally caused or a result of self-motion is fundamental to perceptual invariance and requires the continuous update of sensory processing about recent movements. This process is highly context-dependent and crucial for perceptual performances such as decision-making and sensory object formation. Yet despite its fundamental ecological role, voluntary self-motion is rarely incorporated in perceptual or neurophysiological investigations of sensory processing in animals. Here, we present the Sensory Island Task (SIT), a new freely moving search paradigm to study sensory processing and perception. In SIT, animals explore an open-field arena to find a sensory target relying solely on changes in the presented stimulus, which is controlled by closed-loop position tracking in real-time. Within a few sessions, animals are trained via positive reinforcement to search for a particular area in the arena (“target island”), which triggers the presentation of the target stimulus. The location of the target island is randomized across trials, making the modulated stimulus feature the only informative cue for task completion. Animals report detection of the target stimulus by remaining within the island for a defined time (“sit-time”). Multiple “non-target” islands can be incorporated to test psychometric discrimination and identification performance. We exemplify the suitability of SIT for rodents (Mongolian gerbil, Meriones unguiculatus) and small primates (mouse lemur, Microcebus murinus) and for studying various sensory perceptual performances (auditory frequency discrimination, sound source localization, visual orientation discrimination). 
Furthermore, we show that pairing SIT with chronic electrophysiological recordings reveals neuronal signatures of sensory processing under ecologically relevant conditions during goal-oriented behavior. In conclusion, SIT represents a flexible and easily implementable behavioral paradigm for mammals that combines self-motion and natural exploratory behavior to study sensory sensitivity and decision-making and their underlying neuronal processing.
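The closed-loop core of SIT can be sketched in a few lines: the animal's tracked position triggers the target stimulus whenever it falls inside the island, and the trial completes after an uninterrupted "sit-time". This is a simplified illustration, not the authors' implementation; the 100 x 100 arena coordinates, island radius, and sit-time in samples are assumptions:

```python
import math
import random

def in_island(pos, center, radius):
    """True if the tracked (x, y) position lies within a circular island."""
    return math.hypot(pos[0] - center[0], pos[1] - center[1]) <= radius

def run_trial(positions, center, radius, sit_time):
    """Closed-loop sketch: the target stimulus plays while the animal is
    inside the island; the trial completes once it has stayed there for
    sit_time consecutive position samples."""
    inside = 0
    for pos in positions:
        if in_island(pos, center, radius):
            inside += 1       # target stimulus would be presented here
            if inside >= sit_time:
                return True   # detection reported: deliver reward
        else:
            inside = 0        # leaving the island resets the sit clock
    return False

# Island location is re-randomized on every trial (assumed 100 x 100 arena)
random.seed(1)
center = (random.uniform(10, 90), random.uniform(10, 90))
print(run_trial([center] * 5, center, radius=10.0, sit_time=5))   # True
print(run_trial([(0.0, 0.0)] * 5, center, radius=10.0, sit_time=5))
```

Because the island moves between trials, only the stimulus modulation itself (not place memory) can inform the animal, which is the paradigm's key design point.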
Affiliation(s)
- Dardo N Ferreiro
- Division of Neurobiology, Department Biology II, Ludwig-Maximilians-Universität München, Munich, Germany.,Department of General Psychology and Education, Ludwig-Maximilians-Universität München, Munich, Germany
| | - Diana Amaro
- Division of Neurobiology, Department Biology II, Ludwig-Maximilians-Universität München, Munich, Germany.,Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München, Munich, Germany
| | - Daniel Schmidtke
- Institute of Zoology, University of Veterinary Medicine Hannover, Hanover, Germany
| | - Andrey Sobolev
- Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München, Munich, Germany
| | - Paula Gundi
- Division of Neurobiology, Department Biology II, Ludwig-Maximilians-Universität München, Munich, Germany.,Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München, Munich, Germany
| | - Lucile Belliveau
- Division of Neurobiology, Department Biology II, Ludwig-Maximilians-Universität München, Munich, Germany
| | - Anton Sirota
- Faculty of Medicine, Bernstein Center for Computational Neuroscience Munich, Munich Cluster of Systems Neurology (SyNergy), Ludwig-Maximilians-Universität München, Munich, Germany
| | - Benedikt Grothe
- Division of Neurobiology, Department Biology II, Ludwig-Maximilians-Universität München, Munich, Germany.,Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München, Munich, Germany
| | - Michael Pecka
- Division of Neurobiology, Department Biology II, Ludwig-Maximilians-Universität München, Munich, Germany
| |
|
26
|
Lambriks LJG, van Hoof M, Debruyne JA, Janssen M, Chalupper J, van der Heijden KA, Hof JR, Hellingman CA, George ELJ, Devocht EMJ. Evaluating hearing performance with cochlear implants within the same patient using daily randomization and imaging-based fitting - The ELEPHANT study. Trials 2020; 21:564. [PMID: 32576247 PMCID: PMC7310427 DOI: 10.1186/s13063-020-04469-x] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2020] [Accepted: 05/30/2020] [Indexed: 02/08/2023] Open
Abstract
Background Prospective research in the field of cochlear implants is hampered by methodological issues and small sample sizes. The ELEPHANT study presents an alternative clinical trial design with a daily-randomized approach to evaluating individualized tonotopic fitting of a cochlear implant (CI). Methods A single-blinded, daily-randomized clinical trial will be implemented to evaluate a new imaging-based CI mapping strategy. A minimum of 20 participants will be included from the start of the rehabilitation process, with a 1-year follow-up period. Based on a post-operative cone beam CT scan (CBCT), mapping of electrical input will be aligned to the natural place-pitch arrangement of the individual cochlea. The CI's frequency allocation table will be adjusted to match the electrical stimulation of frequencies as closely as possible to the corresponding acoustic locations in the cochlea. A randomization scheme will be implemented whereby the participant, blinded to the intervention allocation, crosses over between the experimental and standard fitting programs on a daily basis, and thus effectively acts as their own control; this is followed by a period of free choice between both maps to incorporate patient preference. This new approach addresses both first-order carryover effects and the limitations of a small sample size. Discussion The experimental fitting strategy is expected to give rise to a steeper learning curve, result in better performance in challenging listening situations, improve sound quality, better complement residual acoustic hearing in the contralateral ear, and be preferred by recipients of a CI. Concurrently, the suitability of the novel trial design for investigating these hypotheses will be considered. Trial registration ClinicalTrials.gov: NCT03892941. Registered 27 March 2019.
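The daily cross-over can be illustrated with a simple randomization sketch. The two map labels come from the abstract; the per-day coin-flip allocation and seeding are illustrative assumptions, not the trial's actual randomization procedure:

```python
import random

def daily_schedule(n_days, seed=None):
    """Blinded daily cross-over sketch: each day the active fitting map is
    drawn at random, so the participant serves as their own control."""
    rng = random.Random(seed)
    return [rng.choice(["experimental", "standard"]) for _ in range(n_days)]

# One-year follow-up, as in the protocol; seed is arbitrary
schedule = daily_schedule(365, seed=42)
print(schedule[:7])
print(schedule.count("experimental"))
```

Independent daily draws keep the allocation unpredictable to the participant while yielding a roughly balanced exposure to both maps over the year, which is what lets within-subject comparisons absorb carryover effects.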
Affiliation(s)
- L J G Lambriks
- Department of ENT/Audiology, School for Mental Health and Neuroscience (MHeNs), Maastricht University Medical Center, Maastricht, The Netherlands.
| | - M van Hoof
- Department of ENT/Audiology, School for Mental Health and Neuroscience (MHeNs), Maastricht University Medical Center, Maastricht, The Netherlands
| | - J A Debruyne
- Department of ENT/Audiology, School for Mental Health and Neuroscience (MHeNs), Maastricht University Medical Center, Maastricht, The Netherlands
| | - M Janssen
- Department of ENT/Audiology, School for Mental Health and Neuroscience (MHeNs), Maastricht University Medical Center, Maastricht, The Netherlands.,Department of Methodology and Statistics, School for Public Health and Primary Care (CAPHRI), Maastricht University Medical Center, Maastricht, The Netherlands
| | - J Chalupper
- Advanced Bionics European Research Centre (AB ERC), Hannover, Germany
| | - K A van der Heijden
- Department of ENT/Audiology, School for Mental Health and Neuroscience (MHeNs), Maastricht University Medical Center, Maastricht, The Netherlands
| | - J R Hof
- Department of ENT/Audiology, School for Mental Health and Neuroscience (MHeNs), Maastricht University Medical Center, Maastricht, The Netherlands
| | - C A Hellingman
- Department of ENT/Audiology, School for Mental Health and Neuroscience (MHeNs), Maastricht University Medical Center, Maastricht, The Netherlands
| | - E L J George
- Department of ENT/Audiology, School for Mental Health and Neuroscience (MHeNs), Maastricht University Medical Center, Maastricht, The Netherlands
| | - E M J Devocht
- Department of ENT/Audiology, School for Mental Health and Neuroscience (MHeNs), Maastricht University Medical Center, Maastricht, The Netherlands
| |
|
27
|
Baltzell LS, Cho AY, Swaminathan J, Best V. Spectro-temporal weighting of interaural time differences in speech. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2020; 147:3883. [PMID: 32611137 PMCID: PMC7297545 DOI: 10.1121/10.0001418] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/23/2019] [Revised: 05/06/2020] [Accepted: 05/18/2020] [Indexed: 05/19/2023]
Abstract
Numerous studies have demonstrated that the perceptual weighting of interaural time differences (ITDs) is non-uniform in time and frequency, leading to reports of spectral and temporal "dominance" regions. It is unclear, however, how these dominance regions apply to spectro-temporally complex stimuli such as speech. The authors report spectro-temporal weighting functions for ITDs in a pair of naturally spoken speech tokens ("two" and "eight"). Each speech token was composed of two phonemes and was partitioned into eight frequency regions over two time bins (one time bin for each phoneme). To derive lateralization weights, ITDs for each time-frequency bin were drawn independently from a normal distribution with a mean of 0 and a standard deviation of 200 μs, and listeners were asked to indicate whether the speech token was presented from the left or the right. ITD thresholds were also obtained for each of the 16 time-frequency bins in isolation. The results suggest that spectral dominance regions apply to speech, and that ITDs carried by phonemes in the first position of the syllable contribute more strongly to lateralization judgments than ITDs carried by phonemes in the second position. The results also show that lateralization judgments are partially accounted for by ITD sensitivity across time-frequency bins.
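The weighting-function logic (per-bin ITDs drawn from N(0, 200 μs), left/right responses regressed onto the bin ITDs) can be sketched on simulated data. The simulated listener's "true" weight profile and internal noise level are invented for illustration; the regression step is a generic observer-weight analysis, not necessarily the authors' exact estimator:

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_bins = 5000, 16  # 8 frequency regions x 2 time bins

# Per-bin ITDs drawn independently from N(0 us, 200 us), as in the study
itds = rng.normal(0.0, 200.0, size=(n_trials, n_bins))

# Simulated listener: internal weights fall off across bins (invented values)
true_w = np.linspace(1.0, 0.2, n_bins)
decision = itds @ true_w + rng.normal(0.0, 150.0, n_trials)  # internal noise
responses = np.sign(decision)  # +1 = "right", -1 = "left"

# Recover perceptual weights by regressing responses onto the per-bin ITDs
w_hat, *_ = np.linalg.lstsq(itds, responses, rcond=None)
w_hat /= w_hat.max()  # normalize; shape mirrors the true weighting profile
print(np.corrcoef(w_hat, true_w)[0, 1])
```

Because the ITD perturbations are independent across bins, each regression coefficient isolates that bin's contribution to the binary lateralization judgment.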
Affiliation(s)
- Lucas S Baltzell
- Department of Speech, Language, and Hearing Sciences, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215, USA
| | - Adrian Y Cho
- Department of Speech, Language, and Hearing Sciences, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215, USA
| | - Jayaganesh Swaminathan
- Department of Speech, Language, and Hearing Sciences, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215, USA
| | - Virginia Best
- Department of Speech, Language, and Hearing Sciences, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215, USA
| |
|
28
|
Ribic A. Stability in the Face of Change: Lifelong Experience-Dependent Plasticity in the Sensory Cortex. Front Cell Neurosci 2020; 14:76. [PMID: 32372915 PMCID: PMC7186337 DOI: 10.3389/fncel.2020.00076] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2019] [Accepted: 03/17/2020] [Indexed: 11/13/2022] Open
Abstract
Plasticity is a fundamental property of the nervous system that enables its adaptations to the ever-changing environment. Heightened plasticity typical for developing circuits facilitates their robust experience-dependent functional maturation. This plasticity wanes during adolescence to permit the stabilization of mature brain function, but abundant evidence supports that adult circuits exhibit both transient and long-term experience-induced plasticity. Cortical plasticity has been extensively studied throughout the life span in sensory systems and the main distinction between development and adulthood arising from these studies is the concept that passive exposure to relevant information is sufficient to drive robust plasticity early in life, while higher-order attentional mechanisms are necessary to drive plastic changes in adults. Recent work in the primary visual and auditory cortices began to define the circuit mechanisms that govern these processes and enable continuous adaptation to the environment, with transient circuit disinhibition emerging as a common prerequisite for both developmental and adult plasticity. Drawing from studies in visual and auditory systems, this review article summarizes recent reports on the circuit and cellular mechanisms of experience-driven plasticity in the developing and adult brains and emphasizes the similarities and differences between them. The benefits of distinct plasticity mechanisms used at different ages are discussed in the context of sensory learning, as well as their relationship to maladaptive plasticity and neurodevelopmental brain disorders. Knowledge gaps and avenues for future work are highlighted, and these will hopefully motivate future research in these areas, particularly those about the learning of complex skills during development.
Affiliation(s)
- Adema Ribic
- Department of Psychology, College and Graduate School of Arts and Sciences, University of Virginia, Charlottesville, VA, United States
| |
|
29
|
Schwartz ZP, Buran BN, David SV. Pupil-associated states modulate excitability but not stimulus selectivity in primary auditory cortex. J Neurophysiol 2019; 123:191-208. [PMID: 31721652 DOI: 10.1152/jn.00595.2019] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022] Open
Abstract
Recent research in mice indicates that luminance-independent fluctuations in pupil size predict variability in spontaneous and evoked activity of single neurons in auditory and visual cortex. These findings suggest that pupil is an indicator of large-scale changes in arousal state that affect sensory processing. However, it is not known whether pupil-related state also influences the selectivity of auditory neurons. We recorded pupil size and single-unit spiking activity in the primary auditory cortex (A1) of nonanesthetized male and female ferrets during presentation of natural vocalizations and tone stimuli that allow measurement of frequency and level tuning. Neurons showed a systematic increase in both spontaneous and sound-evoked activity when pupil was large, as well as desynchronization and a decrease in trial-to-trial variability. Relationships between pupil size and firing rate were nonmonotonic in some cells. In most neurons, several measurements of tuning, including acoustic threshold, spectral bandwidth, and best frequency, remained stable across large changes in pupil size. Across the population, however, there was a small but significant decrease in acoustic threshold when pupil was dilated. In some recordings, we observed rapid, saccade-like eye movements during sustained pupil constriction, which may indicate sleep. Including the presence of this state as a separate variable in a regression model of neural variability accounted for some, but not all, of the variability and nonmonotonicity associated with changes in pupil size.NEW & NOTEWORTHY Cortical neurons vary in their response to repeated stimuli, and some portion of the variability is due to fluctuations in network state. By simultaneously recording pupil and single-neuron activity in auditory cortex of ferrets, we provide new evidence that network state affects the excitability of auditory neurons, but not sensory selectivity. 
In addition, we report the occurrence of possible sleep states, adding to evidence that pupil provides an index of both sleep and physiological arousal.
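A regression model of the kind described, relating firing rate to pupil size (with a quadratic term to capture the nonmonotonic cells) plus an indicator for the putative sleep state, might look like the following sketch on simulated data. All coefficients and the rule linking the sleep-like state to small pupil are invented assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000
pupil = rng.uniform(0.0, 1.0, n)  # normalized pupil size per trial
# Putative sleep-like state: assumed here to occur only at small pupil
sleeplike = (pupil < 0.2) & (rng.random(n) < 0.5)

# Simulated firing rate: nonmonotonic in pupil, suppressed in the sleep state
rate = 5 + 20 * pupil - 12 * pupil**2 - 4 * sleeplike + rng.normal(0, 1, n)

# Design matrix: intercept, linear and quadratic pupil terms, state flag
X = np.column_stack([np.ones(n), pupil, pupil**2, sleeplike.astype(float)])
beta, *_ = np.linalg.lstsq(X, rate, rcond=None)
print(beta)  # roughly [5, 20, -12, -4]
```

Including the state indicator as its own regressor, as the abstract describes, lets the model separate sleep-related rate suppression from genuine pupil-size dependence, accounting for part of the apparent nonmonotonicity.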
Affiliation(s)
- Zachary P Schwartz
- Neuroscience Graduate Program, Oregon Health and Science University, Portland, Oregon
| | - Brad N Buran
- Oregon Hearing Research Center, Oregon Health and Science University, Portland, Oregon
| | - Stephen V David
- Oregon Hearing Research Center, Oregon Health and Science University, Portland, Oregon
| |
|
30
|
Bednar A, Lalor EC. Where is the cocktail party? Decoding locations of attended and unattended moving sound sources using EEG. Neuroimage 2019; 205:116283. [PMID: 31629828 DOI: 10.1016/j.neuroimage.2019.116283] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2019] [Revised: 10/08/2019] [Accepted: 10/14/2019] [Indexed: 11/18/2022] Open
Abstract
Recently, we showed that in a simple acoustic scene with one sound source, auditory cortex tracks the time-varying location of a continuously moving sound. Specifically, we found that both the delta phase and alpha power of the electroencephalogram (EEG) can be used to reconstruct the sound source azimuth. However, in natural settings, we are often presented with a mixture of multiple competing sounds and must focus our attention on the relevant source in order to segregate it from the competing sources (the 'cocktail party effect'). While many studies have examined this phenomenon in the context of sound envelope tracking by the cortex, it is unclear how we process and utilize spatial information in complex acoustic scenes with multiple sound sources. To test this, we created an experiment in which subjects listened over headphones to two concurrent sound stimuli that moved within the horizontal plane, while we recorded their EEG. Participants were tasked with paying attention to one of the two presented stimuli. The data were analyzed by deriving linear mappings, temporal response functions (TRFs), between the EEG data and the attended as well as the unattended sound source trajectories. Next, we used these TRFs to reconstruct both trajectories from previously unseen EEG data. In a first experiment, we used noise stimuli and a task that involved spatially localizing embedded targets. Then, in a second experiment, we employed speech stimuli and a non-spatial speech comprehension task. Results showed that the trajectory of an attended sound source can be reliably reconstructed from both the delta phase and alpha power of the EEG, even in the presence of distracting stimuli. Moreover, the reconstruction was robust to task and stimulus type. The cortical representation of the unattended source position was below detection level for the noise stimuli, but we observed weak tracking of the unattended source location for the speech stimuli in the delta phase of the EEG.
In addition, we demonstrated that the trajectory reconstruction method can in principle be used to decode selective attention on a single-trial basis; however, its performance was inferior to that of envelope-based decoders. These results suggest a possible dissociation of the delta phase and alpha power of EEG in the context of sound trajectory tracking. Moreover, the demonstrated ability to localize and determine the attended speaker in complex acoustic environments is particularly relevant for cognitively controlled hearing devices.
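A TRF of the sort used here is a regularized linear mapping between time-lagged EEG and the stimulus trajectory. The following toy sketch decodes a simulated azimuth trajectory from simulated EEG; the ridge penalty, lag range, channel count, and the 2-sample response latency are all illustrative assumptions, not parameters from the study:

```python
import numpy as np

def lagged_design(eeg, lags):
    """Stack time-lagged copies of each EEG channel into one design matrix."""
    n_times, n_chans = eeg.shape
    X = np.zeros((n_times, n_chans * len(lags)))
    for i, lag in enumerate(lags):
        shifted = np.roll(eeg, lag, axis=0)
        if lag > 0:
            shifted[:lag] = 0.0   # zero-pad instead of wrapping around
        elif lag < 0:
            shifted[lag:] = 0.0
        X[:, i * n_chans:(i + 1) * n_chans] = shifted
    return X

def fit_trf(eeg, trajectory, lags, alpha=1.0):
    """Ridge-regularized linear mapping (TRF) from lagged EEG to azimuth."""
    X = lagged_design(eeg, lags)
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]),
                           X.T @ trajectory)

# Toy data: 4 EEG channels reflect the azimuth trajectory 2 samples later
rng = np.random.default_rng(5)
traj = np.cumsum(rng.normal(0.0, 1.0, 1000))  # random-walk azimuth
eeg = np.column_stack([np.roll(traj, 2)] * 4) + rng.normal(0.0, 1.0, (1000, 4))

# Decoding uses EEG samples that follow each trajectory sample (negative lags)
lags = [-3, -2, -1, 0]
w = fit_trf(eeg, traj, lags)
recon = lagged_design(eeg, lags) @ w
print(np.corrcoef(recon, traj)[0, 1])
```

In practice the TRF would be fit on training trials and evaluated on held-out EEG, as the abstract describes; the toy reuses one segment only to keep the sketch short.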
Affiliation(s)
- Adam Bednar
- School of Engineering, Trinity College Dublin, Dublin, Ireland; Trinity Center for Bioengineering, Trinity College Dublin, Dublin, Ireland.
| | - Edmund C Lalor
- School of Engineering, Trinity College Dublin, Dublin, Ireland; Trinity Center for Bioengineering, Trinity College Dublin, Dublin, Ireland; Department of Biomedical Engineering, Department of Neuroscience, University of Rochester, Rochester, NY, USA.
| |
|
31
|
Movement and VIP Interneuron Activation Differentially Modulate Encoding in Mouse Auditory Cortex. eNeuro 2019; 6:ENEURO.0164-19.2019. [PMID: 31481397 PMCID: PMC6751373 DOI: 10.1523/eneuro.0164-19.2019] [Citation(s) in RCA: 32] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2019] [Revised: 08/02/2019] [Accepted: 08/14/2019] [Indexed: 11/22/2022] Open
Abstract
Information processing in sensory cortex is highly sensitive to nonsensory variables such as anesthetic state, arousal, and task engagement. Recent work in mouse visual cortex suggests that evoked firing rates, stimulus–response mutual information, and encoding efficiency increase when animals are engaged in movement. A disinhibitory circuit appears central to this change: inhibitory neurons expressing vasoactive intestinal peptide (VIP) are activated during movement and disinhibit pyramidal cells by suppressing other inhibitory interneurons. Paradoxically, although movement activates a similar disinhibitory circuit in auditory cortex (ACtx), most ACtx studies report reduced spiking during movement. It is unclear whether these changes in spike rate are accompanied by corresponding changes in stimulus–response mutual information. We examined ACtx responses evoked by tone cloud stimuli, in awake mice of both sexes, during spontaneous movement and still conditions. VIP+ cells were optogenetically activated on half of trials, permitting independent analysis of the consequences of movement and VIP activation, as well as their intersection. Movement decreased stimulus-related spike rates as well as mutual information and encoding efficiency. VIP interneuron activation tended to increase stimulus-evoked spike rates but not stimulus–response mutual information, thus reducing encoding efficiency. The intersection of movement and VIP activation was largely consistent with a linear combination of these main effects: VIP activation recovered movement-induced reduction in spike rates, but not information transfer.
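Stimulus–response mutual information for discrete stimulus categories and binned responses can be estimated with a plug-in estimator, and one common way to express encoding efficiency normalizes information by the response entropy (so that extra spikes carrying no stimulus information lower efficiency, as in the VIP result). A sketch on simulated tone-cloud trials; the category count and 80% response reliability are invented:

```python
import numpy as np

def mutual_information(stim, resp):
    """Plug-in estimate of I(stimulus; response) in bits for discrete labels."""
    stim, resp = np.asarray(stim), np.asarray(resp)
    mi = 0.0
    for s in np.unique(stim):
        p_s = np.mean(stim == s)
        for r in np.unique(resp):
            p_r = np.mean(resp == r)
            p_sr = np.mean((stim == s) & (resp == r))
            if p_sr > 0:
                mi += p_sr * np.log2(p_sr / (p_s * p_r))
    return mi

rng = np.random.default_rng(11)
stims = rng.integers(0, 4, 4000)  # four hypothetical tone-cloud categories
# Binned responses that follow the stimulus identity on ~80% of trials
counts = np.where(rng.random(4000) < 0.8, stims, rng.integers(0, 4, 4000))

mi = mutual_information(stims, counts)
p_r = np.bincount(counts) / counts.size
resp_entropy = -np.sum(p_r[p_r > 0] * np.log2(p_r[p_r > 0]))
efficiency = mi / resp_entropy  # fraction of response entropy that is informative
print(mi, efficiency)
```

Under this normalization, a manipulation that raises spike rates (response entropy) without raising mutual information necessarily reduces efficiency, which is the dissociation the study reports for VIP activation.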
|
32
|
Abstract
Humans and other animals use spatial hearing to rapidly localize events in the environment. However, neural encoding of sound location is a complex process involving the computation and integration of multiple spatial cues that are not represented directly in the sensory organ (the cochlea). Our understanding of these mechanisms has increased enormously in the past few years. Current research is focused on the contribution of animal models for understanding human spatial audition, the effects of behavioural demands on neural sound location encoding, the emergence of a cue-independent location representation in the auditory cortex, and the relationship between single-source and concurrent location encoding in complex auditory scenes. Furthermore, computational modelling seeks to unravel how neural representations of sound source locations are derived from the complex binaural waveforms of real-life sounds. In this article, we review and integrate the latest insights from neurophysiological, neuroimaging and computational modelling studies of mammalian spatial hearing. We propose that the cortical representation of sound location emerges from recurrent processing taking place in a dynamic, adaptive network of early (primary) and higher-order (posterior-dorsal and dorsolateral prefrontal) auditory regions. This cortical network accommodates changing behavioural requirements and is especially relevant for processing the location of real-life, complex sounds and complex auditory scenes.
|
33
|
Gleiss H, Encke J, Lingner A, Jennings TR, Brosel S, Kunz L, Grothe B, Pecka M. Cooperative population coding facilitates efficient sound-source separability by adaptation to input statistics. PLoS Biol 2019; 17:e3000150. [PMID: 31356637 PMCID: PMC6687189 DOI: 10.1371/journal.pbio.3000150] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2019] [Revised: 08/08/2019] [Accepted: 07/11/2019] [Indexed: 01/31/2023] Open
Abstract
Our sensory environment changes constantly. Accordingly, neural systems continually adapt to the concurrent stimulus statistics to remain sensitive over a wide range of conditions. Such dynamic range adaptation (DRA) is assumed to increase both the effectiveness of the neuronal code and perceptual sensitivity. However, direct demonstrations of DRA-based efficient neuronal processing that also produces perceptual benefits are lacking. Here, we investigated the impact of DRA on spatial coding in the rodent brain and the perception of human listeners. Complex spatial stimulation with dynamically changing source locations elicited prominent DRA already at the initial stage of spatial processing, the lateral superior olive (LSO) of gerbils. Surprisingly, at the level of individual neurons, DRA diminished spatial tuning because of large response variability across trials. However, when considering single-trial population averages of multiple neurons, DRA enhanced coding efficiency specifically for the concurrently most probable source locations. Intrinsic LSO population imaging of energy consumption combined with pharmacology revealed that a slow-acting LSO gain-control mechanism distributes activity across a group of neurons during DRA, thereby enhancing population coding efficiency. Strikingly, such "efficient cooperative coding" also improved neuronal source separability specifically for the locations that were most likely to occur. These location-specific enhancements in neuronal coding were paralleled by human listeners exhibiting a selective improvement in spatial resolution. We conclude that, contrary to canonical models of sensory encoding, the primary motive of early spatial processing is efficiency optimization of neural populations for enhanced source separability in the concurrent environment. The efficient coding hypothesis suggests that sensory processing adapts to the stimulus statistics to maximize information while minimizing energetic costs.
This study finds that an auditory spatial processing circuit distributes activity across neurons to enhance processing efficiency, focally improving spatial resolution both in neurons and in human listeners.
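The key observation that noisy, weakly tuned single neurons can still yield good source separability in single-trial population averages can be illustrated with a toy simulation. The tuning offset, noise level, and population size are invented, and noise is assumed independent across neurons, which is the regime where averaging helps most:

```python
import numpy as np

def dprime(a, b):
    """Separability of two response distributions (mean diff / pooled SD)."""
    return (a.mean() - b.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)

rng = np.random.default_rng(2)
n_trials, n_neurons = 500, 40

# Two nearby source locations; every neuron is weakly tuned and very noisy
resp_a = rng.normal(12.0, 6.0, size=(n_trials, n_neurons))
resp_b = rng.normal(10.0, 6.0, size=(n_trials, n_neurons))

single_d = dprime(resp_a[:, 0], resp_b[:, 0])                 # one neuron
population_d = dprime(resp_a.mean(axis=1), resp_b.mean(axis=1))  # single-trial average
print(single_d, population_d)
```

Averaging over N neurons with independent noise shrinks the trial-to-trial standard deviation by about √N, so the population-average d′ greatly exceeds any single neuron's, consistent with the cooperative-coding account.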
Affiliation(s)
- Helge Gleiss
- Division of Neurobiology, Department of Biology II, Ludwig-Maximilians-Universitaet Muenchen, Martinsried, Germany
| | - Jörg Encke
- Chair of Bio-Inspired Information Processing, Department of Electrical and Computer Engineering, Technical University of Munich, Garching, Germany
| | - Andrea Lingner
- Division of Neurobiology, Department of Biology II, Ludwig-Maximilians-Universitaet Muenchen, Martinsried, Germany
| | - Todd R. Jennings
- Division of Neurobiology, Department of Biology II, Ludwig-Maximilians-Universitaet Muenchen, Martinsried, Germany
| | - Sonja Brosel
- Division of Neurobiology, Department of Biology II, Ludwig-Maximilians-Universitaet Muenchen, Martinsried, Germany
| | - Lars Kunz
- Division of Neurobiology, Department of Biology II, Ludwig-Maximilians-Universitaet Muenchen, Martinsried, Germany
| | - Benedikt Grothe
- Division of Neurobiology, Department of Biology II, Ludwig-Maximilians-Universitaet Muenchen, Martinsried, Germany
| | - Michael Pecka
- Division of Neurobiology, Department of Biology II, Ludwig-Maximilians-Universitaet Muenchen, Martinsried, Germany
| |
|
34
|
Liu X, Wei F, Cheng Y, Zhang Y, Jia G, Zhou J, Zhu M, Shan Y, Sun X, Yu L, Merzenich MM, Lurie DI, Zheng Q, Zhou X. Auditory Training Reverses Lead (Pb)-Toxicity-Induced Changes in Sound-Azimuth Selectivity of Cortical Neurons. Cereb Cortex 2019; 29:3294-3304. [PMID: 30137254 DOI: 10.1093/cercor/bhy199] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2018] [Revised: 07/20/2018] [Accepted: 07/26/2018] [Indexed: 01/16/2023] Open
Abstract
Lead (Pb) causes significant adverse effects on the developing brain, resulting in cognitive and learning disabilities in children. The process by which lead produces these negative changes is largely unknown. That children with these syndromes also show deficits in central auditory processing, however, points to a speculative but disturbing link between lead exposure, impaired auditory processing, and behavioral dysfunction. Here we studied, in rats, how early lead exposure alters cortical spatial tuning and whether auditory training can restore it to normal. We found that animals exposed to lead early in life displayed significant behavioral impairments compared with naïve controls when performing a sound-azimuth discrimination task. Lead exposure also degraded the sound-azimuth selectivity of neurons in the primary auditory cortex. Subsequent sound-azimuth discrimination training, however, restored the lead-degraded cortical azimuth selectivity to nearly normal. This reversal of cortical spatial fidelity was paralleled by changes in the cortical expression of certain excitatory and inhibitory neurotransmitter receptor subunits. These results in a rodent model demonstrate the persistent neurotoxic effects of early lead exposure on behavioral and cortical neuronal processing of spatial information of sound. They also indicate that attention-demanding auditory training may remediate lead-induced cortical neurological deficits even after these deficits have occurred.
Affiliation(s)
- Xia Liu
- Key Laboratory of Brain Functional Genomics of Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics, Institute of Cognitive Neuroscience, Collaborative Innovation Center for Brain Science, School of Life Sciences, East China Normal University, Shanghai, China
- Fanfan Wei
- Key Laboratory of Brain Functional Genomics of Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics, Institute of Cognitive Neuroscience, Collaborative Innovation Center for Brain Science, School of Life Sciences, East China Normal University, Shanghai, China
- Yuan Cheng
- Key Laboratory of Brain Functional Genomics of Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics, Institute of Cognitive Neuroscience, Collaborative Innovation Center for Brain Science, School of Life Sciences, East China Normal University, Shanghai, China; New York University-East China Normal University Institute of Brain and Cognitive Science, New York University-Shanghai, Shanghai, China
- Yifan Zhang
- Key Laboratory of Brain Functional Genomics of Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics, Institute of Cognitive Neuroscience, Collaborative Innovation Center for Brain Science, School of Life Sciences, East China Normal University, Shanghai, China; New York University-East China Normal University Institute of Brain and Cognitive Science, New York University-Shanghai, Shanghai, China
- Guoqiang Jia
- Key Laboratory of Brain Functional Genomics of Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics, Institute of Cognitive Neuroscience, Collaborative Innovation Center for Brain Science, School of Life Sciences, East China Normal University, Shanghai, China; New York University-East China Normal University Institute of Brain and Cognitive Science, New York University-Shanghai, Shanghai, China
- Jie Zhou
- Key Laboratory of Brain Functional Genomics of Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics, Institute of Cognitive Neuroscience, Collaborative Innovation Center for Brain Science, School of Life Sciences, East China Normal University, Shanghai, China; New York University-East China Normal University Institute of Brain and Cognitive Science, New York University-Shanghai, Shanghai, China
- Min Zhu
- Key Laboratory of Brain Functional Genomics of Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics, Institute of Cognitive Neuroscience, Collaborative Innovation Center for Brain Science, School of Life Sciences, East China Normal University, Shanghai, China; New York University-East China Normal University Institute of Brain and Cognitive Science, New York University-Shanghai, Shanghai, China
- Ye Shan
- Key Laboratory of Brain Functional Genomics of Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics, Institute of Cognitive Neuroscience, Collaborative Innovation Center for Brain Science, School of Life Sciences, East China Normal University, Shanghai, China
- Xinde Sun
- Key Laboratory of Brain Functional Genomics of Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics, Institute of Cognitive Neuroscience, Collaborative Innovation Center for Brain Science, School of Life Sciences, East China Normal University, Shanghai, China
- Liping Yu
- Key Laboratory of Brain Functional Genomics of Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics, Institute of Cognitive Neuroscience, Collaborative Innovation Center for Brain Science, School of Life Sciences, East China Normal University, Shanghai, China
- Diana I Lurie
- Center for Structural and Functional Neuroscience, Center for Environmental Health Sciences, Department of Biomedical & Pharmaceutical Sciences, College of Health Professions and Biomedical Sciences, University of Montana, Missoula, MT, USA
- Qingyin Zheng
- Transformative Otology and Neuroscience Center, Binzhou Medical University, Yantai, China
- Xiaoming Zhou
- Key Laboratory of Brain Functional Genomics of Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics, Institute of Cognitive Neuroscience, Collaborative Innovation Center for Brain Science, School of Life Sciences, East China Normal University, Shanghai, China; New York University-East China Normal University Institute of Brain and Cognitive Science, New York University-Shanghai, Shanghai, China
35
Neurons in primary auditory cortex represent sound source location in a cue-invariant manner. Nat Commun 2019; 10:3019. [PMID: 31289272 PMCID: PMC6616358 DOI: 10.1038/s41467-019-10868-9] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2018] [Accepted: 06/07/2019] [Indexed: 02/04/2023] Open
Abstract
Auditory cortex is required for sound localisation, but how neural firing in auditory cortex underlies our perception of sound sources in space remains unclear. Specifically, whether neurons in auditory cortex represent spatial cues or an integrated representation of auditory space across cues is not known. Here, we measured the spatial receptive fields of neurons in primary auditory cortex (A1) while ferrets performed a relative localisation task. Manipulating the availability of binaural and spectral localisation cues had little impact on ferrets’ performance, or on neural spatial tuning. A subpopulation of neurons encoded spatial position consistently across localisation cue type. Furthermore, neural firing pattern decoders outperformed two-channel model decoders using population activity. Together, these observations suggest that A1 encodes the location of sound sources, as opposed to spatial cue values. The brain's auditory cortex is involved not just in detection of sounds, but also in localizing them. Here, the authors show that neurons in ferret primary auditory cortex (A1) encode the location of sound sources, as opposed to merely reflecting spatial cues.
36
Evoked Response Strength in Primary Auditory Cortex Predicts Performance in a Spectro-Spatial Discrimination Task in Rats. J Neurosci 2019; 39:6108-6121. [PMID: 31175214 DOI: 10.1523/jneurosci.0041-18.2019] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2018] [Revised: 04/19/2019] [Accepted: 05/12/2019] [Indexed: 11/21/2022] Open
Abstract
The extent to which the primary auditory cortex (A1) participates in instructing animal behavior remains debated. Although multiple studies have shown A1 activity to correlate with animals' perceptual judgments (Jaramillo and Zador, 2011; Bizley et al., 2013; Rodgers and DeWeese, 2014), others have found no relationship between A1 responses and reported auditory percepts (Lemus et al., 2009; Dong et al., 2011). To address this ambiguity, we performed chronic recordings of evoked local field potentials (eLFPs) in A1 of head-fixed female rats performing a two-alternative forced-choice auditory discrimination task. Rats were presented with two interleaved sequences of pure tones from opposite sides and had to indicate the side from which the higher-frequency target stimulus was played. Animal performance closely correlated (r_rm = 0.68) with the difference between the target and distractor eLFP responses: the more the target response exceeded the distractor response, the better the animals were at identifying the side of the target frequency. Reducing the evoked response of either frequency through stimulus-specific adaptation affected performance in the expected way: target localization accuracy was degraded when the target frequency was adapted and improved when the distractor frequency was adapted. Target frequency eLFPs were stronger on hit trials than on error trials. Our results suggest that the degree to which one stimulus stands out over others within A1 activity may determine its perceptual saliency for the animals and accordingly bias their behavioral choices.
SIGNIFICANCE STATEMENT: The brain must continuously calibrate the saliency of sensory percepts against their relevance to the current behavioral goal. The inability to ignore irrelevant distractors characterizes a spectrum of human attentional disorders. Meanwhile, the connection between the neural underpinnings of stimulus saliency and sensory decisions remains elusive. Here, we record local field potentials in the primary auditory cortex of rats engaged in auditory discrimination to investigate how the cortical representation of target and distractor stimuli impacts behavior. We find that the amplitude difference between target- and distractor-evoked activity predicts discrimination performance (r_rm = 0.68). Specific adaptation of the target or distractor frequency shifts performance below or above chance, respectively. It appears that recent auditory history profoundly influences stimulus saliency, biasing animals toward diametrically opposed decisions.
37
Bihemispheric anodal transcranial direct-current stimulation over temporal cortex enhances auditory selective spatial attention. Exp Brain Res 2019; 237:1539-1549. [PMID: 30927041 DOI: 10.1007/s00221-019-05525-y] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2019] [Accepted: 03/20/2019] [Indexed: 10/27/2022]
Abstract
The capacity to selectively focus on a particular speaker of interest in a complex acoustic environment with multiple persons speaking simultaneously-a so-called "cocktail-party" situation-is of decisive importance for human verbal communication. Here, the efficacy of single-dose transcranial direct-current stimulation (tDCS) in improving this ability was tested in young healthy adults (n = 24), using a spatial task that required the localization of a target word in a simulated "cocktail-party" situation. In a sham-controlled crossover design, offline bihemispheric double-monopolar anodal tDCS was applied for 30 min at 1 mA over auditory regions of temporal lobe, and the participant's performance was assessed prior to tDCS, immediately after tDCS, and 1 h after tDCS. A significant increase in the amount of correct localizations by on average 3.7 percentage points (d = 1.04) was found after active, relative to sham, tDCS, with only insignificant reduction of the effect within 1 h after tDCS offset. Thus, the method of bihemispheric tDCS could be a promising tool for enhancement of human auditory attentional functions that are relevant for spatial orientation and communication in everyday life.
38
Remington ED, Wang X. Neural Representations of the Full Spatial Field in Auditory Cortex of Awake Marmoset (Callithrix jacchus). Cereb Cortex 2019; 29:1199-1216. [PMID: 29420692 PMCID: PMC6373678 DOI: 10.1093/cercor/bhy025] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2017] [Accepted: 01/13/2018] [Indexed: 11/14/2022] Open
Abstract
Unlike visual signals, sound can reach the ears from any direction, and the ability to localize sounds from all directions is essential for survival in a natural environment. Previous studies have largely focused on the space in front of a subject that is also covered by vision and were often limited to measuring spatial tuning along the horizontal (azimuth) plane. As a result, we know relatively little about how the auditory cortex responds to sounds coming from spatial locations outside the frontal space where visual information is unavailable. By mapping single-neuron responses to the full spatial field in awake marmoset (Callithrix jacchus), an arboreal animal for which spatial processing is vital in its natural habitat, we show that spatial receptive fields in several auditory areas cover all spatial locations. Several complementary measures of spatial tuning showed that neurons were tuned to both frontal space and rear space (outside the coverage of vision), as well as the space above and below the horizontal plane. Together, these findings provide valuable new insights into the representation of all spatial locations by primate auditory cortex.
Affiliation(s)
- Evan D Remington
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Xiaoqin Wang
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
39
Motor output, neural states and auditory perception. Neurosci Biobehav Rev 2019; 96:116-126. [DOI: 10.1016/j.neubiorev.2018.10.021] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2018] [Revised: 10/26/2018] [Accepted: 10/29/2018] [Indexed: 12/12/2022]
40
Sound identity is represented robustly in auditory cortex during perceptual constancy. Nat Commun 2018; 9:4786. [PMID: 30429465 PMCID: PMC6235866 DOI: 10.1038/s41467-018-07237-3] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2018] [Accepted: 10/23/2018] [Indexed: 12/02/2022] Open
Abstract
Perceptual constancy requires neural representations that are selective for object identity, but also tolerant across identity-preserving transformations. How such representations arise in the brain and support perception remains unclear. Here, we study tolerant representation of sound identity in the auditory system by recording neural activity in auditory cortex of ferrets during perceptual constancy. Ferrets generalize vowel identity across variations in fundamental frequency, sound level and location, while neurons represent sound identity robustly across acoustic variations. Stimulus features are encoded with distinct time-courses in all conditions; however, encoding of sound identity is delayed when animals fail to generalize and during passive listening. Neurons also encode information about task-irrelevant sound features, as well as animals’ choices and accuracy, while population decoding out-performs animals’ behavior. Our results show that during perceptual constancy, sound identity is represented robustly in auditory cortex across widely varying conditions, and behavioral generalization requires conserved timing of identity information. Perceptual constancy requires neural representations selective for object identity, yet tolerant of identity-preserving transformations. Here, the authors show that sound identity is represented robustly in auditory cortex and that behavioral generalization requires precise timing of identity information.
41
Active Sound Localization Sharpens Spatial Tuning in Human Primary Auditory Cortex. J Neurosci 2018; 38:8574-8587. [PMID: 30126968 DOI: 10.1523/jneurosci.0587-18.2018] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2018] [Revised: 07/09/2018] [Accepted: 07/19/2018] [Indexed: 11/21/2022] Open
Abstract
Spatial hearing sensitivity in humans is dynamic and task-dependent, but the mechanisms in human auditory cortex that enable dynamic sound location encoding remain unclear. Using functional magnetic resonance imaging (fMRI), we assessed how active behavior affects encoding of sound location (azimuth) in primary auditory cortical areas and planum temporale (PT). According to the hierarchical model of auditory processing and cortical functional specialization, PT is implicated in sound location ("where") processing. Yet, our results show that spatial tuning profiles in primary auditory cortical areas (left primary core and right caudo-medial belt) sharpened during a sound localization ("where") task compared with a sound identification ("what") task. In contrast, spatial tuning in PT was sharp but did not vary with task performance. We further applied a population pattern decoder to the measured fMRI activity patterns, which confirmed the task-dependent effects in the left core: sound location estimates from fMRI patterns measured during active sound localization were most accurate. In PT, decoding accuracy was not modulated by task performance. These results indicate that changes of population activity in human primary auditory areas reflect dynamic and task-dependent processing of sound location. As such, our findings suggest that the hierarchical model of auditory processing may need to be revised to include an interaction between primary and functionally specialized areas depending on behavioral requirements.
SIGNIFICANCE STATEMENT: According to a purely hierarchical view, cortical auditory processing consists of a series of analysis stages from sensory (acoustic) processing in primary auditory cortex to specialized processing in higher-order areas. Posterior-dorsal cortical auditory areas, planum temporale (PT) in humans, are considered to be functionally specialized for spatial processing. However, this model is based mostly on passive listening studies.
Our results provide compelling evidence that active behavior (sound localization) sharpens spatial selectivity in primary auditory cortex, whereas spatial tuning in functionally specialized areas (PT) is narrow but task-invariant. These findings suggest that the hierarchical view of cortical functional specialization needs to be extended: our data indicate that active behavior involves feedback projections from higher-order regions to primary auditory cortex.
42
Schwartz ZP, David SV. Focal Suppression of Distractor Sounds by Selective Attention in Auditory Cortex. Cereb Cortex 2018; 28:323-339. [PMID: 29136104 PMCID: PMC6057511 DOI: 10.1093/cercor/bhx288] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2017] [Indexed: 11/15/2022] Open
Abstract
Auditory selective attention is required for parsing crowded acoustic environments, but cortical systems mediating the influence of behavioral state on auditory perception are not well characterized. Previous neurophysiological studies suggest that attention produces a general enhancement of neural responses to important target sounds versus irrelevant distractors. However, behavioral studies suggest that in the presence of masking noise, attention provides a focal suppression of distractors that compete with targets. Here, we compared effects of attention on cortical responses to masking versus non-masking distractors, controlling for effects of listening effort and general task engagement. We recorded single-unit activity from primary auditory cortex (A1) of ferrets during behavior and found that selective attention decreased responses to distractors masking targets in the same spectral band, compared with spectrally distinct distractors. This suppression enhanced neural target detection thresholds, suggesting that limited attention resources serve to focally suppress responses to distractors that interfere with target detection. Changing effort by manipulating target salience consistently modulated spontaneous but not evoked activity. Task engagement and changing effort tended to affect the same neurons, while attention affected an independent population, suggesting that distinct feedback circuits mediate effects of attention and effort in A1.
Affiliation(s)
- Zachary P Schwartz
- Neuroscience Graduate Program, Oregon Health and Science University, OR, USA
- Stephen V David
- Oregon Hearing Research Center, Oregon Health and Science University, OR, USA
- Address Correspondence to Stephen V. David, Oregon Hearing Research Center, Oregon Health and Science University, 3181 SW Sam Jackson Park Road, MC L335A, Portland, OR 97239, USA.
43
Yao JD, Sanes DH. Developmental deprivation-induced perceptual and cortical processing deficits in awake-behaving animals. eLife 2018; 7:33891. [PMID: 29873632 PMCID: PMC6005681 DOI: 10.7554/elife.33891] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2017] [Accepted: 06/04/2018] [Indexed: 01/02/2023] Open
Abstract
Sensory deprivation during development induces lifelong changes to central nervous system function that are associated with perceptual impairments. However, the relationship between neural and behavioral deficits is uncertain due to a lack of simultaneous measurements during task performance. Therefore, we telemetrically recorded from auditory cortex neurons in gerbils reared with developmental conductive hearing loss as they performed an auditory task in which rapid fluctuations in amplitude are detected. These data were compared to a measure of auditory brainstem temporal processing from each animal. We found that developmental hearing loss diminished behavioral performance but did not alter brainstem temporal processing. However, the simultaneous assessment of neural and behavioral processing revealed that perceptual deficits were associated with a degraded cortical population code that could be explained by greater trial-to-trial response variability. Our findings suggest that the perceptual limitations that attend early hearing loss are best explained by an encoding deficit in auditory cortex.
Affiliation(s)
- Justin D Yao
- Center for Neural Science, New York University, New York, United States
- Dan H Sanes
- Center for Neural Science, New York University, New York, United States; Department of Psychology, New York University, New York, United States; Department of Biology, New York University, New York, United States; Neuroscience Institute, NYU Langone Medical Center, New York, United States
44
Li WL, Chu MW, Wu A, Suzuki Y, Imayoshi I, Komiyama T. Adult-born neurons facilitate olfactory bulb pattern separation during task engagement. eLife 2018; 7:e33006. [PMID: 29533179 PMCID: PMC5912906 DOI: 10.7554/elife.33006] [Citation(s) in RCA: 45] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2017] [Accepted: 03/12/2018] [Indexed: 11/18/2022] Open
Abstract
The rodent olfactory bulb incorporates thousands of newly generated inhibitory neurons daily throughout adulthood, but the role of adult neurogenesis in olfactory processing is not fully understood. Here we adopted a genetic method to inducibly suppress adult neurogenesis and investigated its effect on behavior and bulbar activity. Mice without young adult-born neurons (ABNs) showed normal ability in discriminating very different odorants but were impaired in fine discrimination. Furthermore, two-photon calcium imaging of mitral cells (MCs) revealed that the ensemble odor representations of similar odorants were more ambiguous in the ablation animals. This increased ambiguity was primarily due to a decrease in MC suppressive responses. Intriguingly, these deficits in MC encoding were only observed during task engagement but not passive exposure. Our results indicate that young olfactory ABNs are essential for the enhancement of MC pattern separation in a task engagement-dependent manner, potentially functioning as a gateway for top-down modulation.
Affiliation(s)
- Wankun L Li
- Neurobiology Section, Center for Neural Circuits and Behavior, University of California, San Diego, San Diego, United States
- Department of Neurosciences, University of California, San Diego, San Diego, United States
- Monica W Chu
- Neurobiology Section, Center for Neural Circuits and Behavior, University of California, San Diego, San Diego, United States
- Department of Neurosciences, University of California, San Diego, San Diego, United States
- An Wu
- Neurobiology Section, Center for Neural Circuits and Behavior, University of California, San Diego, San Diego, United States
- Department of Neurosciences, University of California, San Diego, San Diego, United States
- Yusuke Suzuki
- Medical Innovation Center/SK Project, Graduate School of Medicine, Kyoto University, Kyoto, Japan
- Takaki Komiyama
- Neurobiology Section, Center for Neural Circuits and Behavior, University of California, San Diego, San Diego, United States
- Department of Neurosciences, University of California, San Diego, San Diego, United States
45
David SV. Incorporating behavioral and sensory context into spectro-temporal models of auditory encoding. Hear Res 2018; 360:107-123. [PMID: 29331232 PMCID: PMC6292525 DOI: 10.1016/j.heares.2017.12.021] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/27/2017] [Revised: 12/18/2017] [Accepted: 12/26/2017] [Indexed: 01/11/2023]
Abstract
For several decades, auditory neuroscientists have used spectro-temporal encoding models to understand how neurons in the auditory system represent sound. Derived from early applications of systems identification tools to the auditory periphery, the spectro-temporal receptive field (STRF) and more sophisticated variants have emerged as an efficient means of characterizing representation throughout the auditory system. Most of these encoding models describe neurons as static sensory filters. However, auditory neural coding is not static. Sensory context, reflecting the acoustic environment, and behavioral context, reflecting the internal state of the listener, can both influence sound-evoked activity, particularly in central auditory areas. This review explores recent efforts to integrate context into spectro-temporal encoding models. It begins with a brief tutorial on the basics of estimating and interpreting STRFs. Then it describes three recent studies that have characterized contextual effects on STRFs, emerging over a range of timescales, from many minutes to tens of milliseconds. An important theme of this work is not simply that context influences auditory coding, but also that contextual effects span a large continuum of internal states. The added complexity of these context-dependent models introduces new experimental and theoretical challenges that must be addressed in order to be used effectively. Several new methodological advances promise to address these limitations and allow the development of more comprehensive context-dependent models in the future.
Affiliation(s)
- Stephen V David
- Oregon Hearing Research Center, Oregon Health & Science University, 3181 SW Sam Jackson Park Rd, MC L335A, Portland, OR 97239, United States.
46
Golob EJ, Lewald J, Getzmann S, Mock JR. Numerical value biases sound localization. Sci Rep 2017; 7:17252. [PMID: 29222526 PMCID: PMC5722947 DOI: 10.1038/s41598-017-17429-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2016] [Accepted: 11/27/2017] [Indexed: 11/18/2022] Open
Abstract
Speech recognition starts with representations of basic acoustic perceptual features and ends by categorizing the sound based on long-term memory for word meaning. However, little is known about whether the reverse pattern of lexical influences on basic perception can occur. We tested for a lexical influence on auditory spatial perception by having subjects make spatial judgments of number stimuli. Four experiments used pointing or left/right 2-alternative forced choice tasks to examine perceptual judgments of sound location as a function of digit magnitude (1–9). The main finding was that for stimuli presented near the median plane there was a linear left-to-right bias for localizing smaller-to-larger numbers. At lateral locations there was a central-eccentric location bias in the pointing task, and either a bias restricted to the smaller numbers (left side) or no significant number bias (right side). Prior number location also biased subsequent number judgments towards the opposite side. Findings support a lexical influence on auditory spatial perception, with a linear mapping near midline and more complex relations at lateral locations. Results may reflect coding of dedicated spatial channels, with two representing lateral positions in each hemispace, and the midline area represented by either their overlap or a separate third channel.
Affiliation(s)
- Edward J Golob
- Department of Psychology, Tulane University, New Orleans, LA, USA; Program in Neuroscience, Tulane University, New Orleans, LA, USA; Department of Psychology, University of Texas, San Antonio, USA
- Jörg Lewald
- Faculty of Psychology, Ruhr University Bochum, D-44780, Bochum, Germany; Leibniz Research Centre for Working Environment and Human Factors, Ardeystrasse 67, D-44139, Dortmund, Germany
- Stephan Getzmann
- Faculty of Psychology, Ruhr University Bochum, D-44780, Bochum, Germany; Leibniz Research Centre for Working Environment and Human Factors, Ardeystrasse 67, D-44139, Dortmund, Germany
- Jeffrey R Mock
- Department of Psychology, Tulane University, New Orleans, LA, USA; Department of Psychology, University of Texas, San Antonio, USA
47
Cortical Processing of Level Cues for Spatial Hearing is Impaired in Children with Prelingual Deafness Despite Early Bilateral Access to Sound. Brain Topogr 2017; 31:270-287. [DOI: 10.1007/s10548-017-0596-5] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2017] [Accepted: 09/25/2017] [Indexed: 01/13/2023]
48
Spence C, Lee J, Van der Stoep N. Responding to sounds from unseen locations: crossmodal attentional orienting in response to sounds presented from the rear. Eur J Neurosci 2017; 51:1137-1150. [PMID: 28973789 DOI: 10.1111/ejn.13733] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2017] [Revised: 09/27/2017] [Accepted: 09/27/2017] [Indexed: 11/28/2022]
Abstract
To date, most of the research on spatial attention has focused on probing people's responses to stimuli presented in frontal space. That is, few researchers have attempted to assess what happens in the space that is currently unseen (essentially rear space). In a sense, then, 'out of sight' is, very much, 'out of mind'. In this review, we highlight what is presently known about the perception and processing of sensory stimuli (focusing on sounds) whose source is not currently visible. We briefly summarize known differences in the localizability of sounds presented from different locations in 3D space, and discuss the consequences for the crossmodal attentional and multisensory perceptual interactions taking place in various regions of space. The latest research now clearly shows that the kinds of crossmodal interactions that take place in rear space are very often different in kind from those that have been documented in frontal space. Developing a better understanding of how people respond to unseen sound sources in naturalistic environments by integrating findings emerging from multiple fields of research will likely lead to the design of better warning signals in the future. This review highlights the need for neuroscientists interested in spatial attention to spend more time researching what happens (in terms of the covert and overt crossmodal orienting of attention) in rear space.
Affiliation(s)
- Charles Spence
- Crossmodal Research Laboratory, Department of Experimental Psychology, Oxford University, Oxford, OX1 3UD, UK
- Jae Lee
- Crossmodal Research Laboratory, Department of Experimental Psychology, Oxford University, Oxford, OX1 3UD, UK
- Nathan Van der Stoep
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
49
Abstract
Practice sharpens our perceptual judgments, a process known as perceptual learning. Although several brain regions and neural mechanisms have been proposed to support perceptual learning, formal tests of causality are lacking. Furthermore, the temporal relationship between neural and behavioral plasticity remains uncertain. To address these issues, we recorded the activity of auditory cortical neurons as gerbils trained on a sound detection task. Training led to improvements in cortical and behavioral sensitivity that were closely matched in terms of magnitude and time course. Surprisingly, the degree of neural improvement was behaviorally gated. During task performance, cortical improvements were large and predicted behavioral outcomes. In contrast, during nontask listening sessions, cortical improvements were weak and uncorrelated with perceptual performance. Targeted reduction of auditory cortical activity during training diminished perceptual learning while leaving psychometric performance largely unaffected. Collectively, our findings suggest that training facilitates perceptual learning by strengthening both bottom-up sensory encoding and top-down modulation of auditory cortex.
Affiliation(s)
- Melissa L Caras
- Center for Neural Science, New York University, New York, NY 10003
- Dan H Sanes
- Center for Neural Science, New York University, New York, NY 10003
- Department of Psychology, New York University, New York, NY 10003
- Department of Biology, New York University, New York, NY 10003
- Neuroscience Institute, New York University Langone Medical Center, New York, NY 10016
|
50
|
Christison-Lagay KL, Bennur S, Cohen YE. Contribution of spiking activity in the primary auditory cortex to detection in noise. J Neurophysiol 2017; 118:3118-3131. [PMID: 28855294 DOI: 10.1152/jn.00521.2017] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2017] [Revised: 08/25/2017] [Accepted: 08/27/2017] [Indexed: 01/08/2023]
Abstract
A fundamental problem in hearing is detecting a "target" stimulus (e.g., a friend's voice) that is presented with a noisy background (e.g., the din of a crowded restaurant). Despite its importance to hearing, a relationship between spiking activity and behavioral performance during such a "detection-in-noise" task has yet to be fully elucidated. In this study, we recorded spiking activity in primary auditory cortex (A1) while rhesus monkeys detected a target stimulus that was presented with a noise background. Although some neurons were modulated, the response of the typical A1 neuron was not modulated by the stimulus- and task-related parameters of our task. In contrast, we found more robust representations of these parameters in population-level activity: small populations of neurons matched the monkeys' behavioral sensitivity. Overall, these findings are consistent with the hypothesis that the sensory evidence, which is needed to solve such detection-in-noise tasks, is represented in population-level A1 activity and may be available to be read out by downstream neurons that are involved in mediating this task. NEW & NOTEWORTHY This study examines the contribution of A1 to detecting a sound that is presented with a noisy background. We found that population-level A1 activity, but not single neurons, could provide the evidence needed to make this perceptual decision.
Collapse
Affiliation(s)
- Sharath Bennur
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, Pennsylvania
- Yale E Cohen
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, Pennsylvania; Department of Neuroscience, University of Pennsylvania, Philadelphia, Pennsylvania; and Department of Bioengineering, University of Pennsylvania, Philadelphia, Pennsylvania
|