1
Homma NY, See JZ, Atencio CA, Hu C, Downer JD, Beitel RE, Cheung SW, Najafabadi MS, Olsen T, Bigelow J, Hasenstaub AR, Malone BJ, Schreiner CE. Receptive-field nonlinearities in primary auditory cortex: a comparative perspective. Cereb Cortex 2024; 34:bhae364. [PMID: 39270676 PMCID: PMC11398879 DOI: 10.1093/cercor/bhae364]
Abstract
Cortical processing of auditory information can be affected by interspecies differences as well as brain states. Here we compare multifeature spectro-temporal receptive fields (STRFs) and associated input/output functions or nonlinearities (NLs) of neurons in primary auditory cortex (AC) of four mammalian species. Single-unit recordings were performed in awake animals (female squirrel monkeys, and female and male mice) and anesthetized animals (female squirrel monkeys, rats, and cats). Neuronal responses were modeled as consisting of two STRFs and their associated NLs. The NLs for the STRF with the highest information content show a broad distribution between linear and quadratic forms. In awake animals, we find a higher percentage of quadratic-like NLs than in anesthetized animals, where more linear NLs predominate. Moderate sex differences in the shape of NLs were observed between male and female unanesthetized mice. This indicates that the core AC possesses a rich variety of potential computations, particularly in awake animals, suggesting that multiple computational algorithms are at play to enable the auditory system's robust recognition of auditory events.
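For readers unfamiliar with this model class, the following is a minimal, illustrative sketch (not the authors' code) of a two-STRF model whose filter outputs pass through a more linear versus a quadratic output nonlinearity; the stimulus, filters, and parameter values are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical spectrogram stimulus: frequency channels x time bins
n_freq, n_time, n_lag = 30, 2000, 20
stim = rng.standard_normal((n_freq, n_time))

# Two hypothetical STRFs (frequency x time lag), e.g. recovered by spike-triggered analyses
strf1 = rng.standard_normal((n_freq, n_lag)) * 0.1
strf2 = rng.standard_normal((n_freq, n_lag)) * 0.1

def project(stimulus, strf):
    """Similarity between the recent stimulus history and an STRF at each time bin."""
    nf, nl = strf.shape
    out = np.zeros(stimulus.shape[1])
    for t in range(nl, stimulus.shape[1]):
        out[t] = np.sum(stimulus[:, t - nl:t] * strf)
    return out

x1, x2 = project(stim, strf1), project(stim, strf2)

# Associated output nonlinearities: half-wave-rectified (near-linear) vs. symmetric quadratic
rate = np.maximum(0.0, 5.0 + 2.0 * x1) + (5.0 + 1.5 * x2 ** 2)
print(rate[:5])
```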
Affiliation(s)
- Natsumi Y Homma
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
- Department of Physiology, Development and Neuroscience, University of Cambridge, Downing Street, Cambridge, UK
- Jermyn Z See
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
- Craig A Atencio
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
- Congcong Hu
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
- Joshua D Downer
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
- Center of Neuroscience, University of California Davis, Newton Ct, Davis, CA, USA
- Ralph E Beitel
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
- Steven W Cheung
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
- Mina Sadeghi Najafabadi
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
- Timothy Olsen
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
- James Bigelow
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
- Andrea R Hasenstaub
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
- Brian J Malone
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
- Center of Neuroscience, University of California Davis, Newton Ct, Davis, CA, USA
- Christoph E Schreiner
- John & Edward Coleman Memorial Laboratory, Kavli Institute for Fundamental Neuroscience, Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
2
Li YH, Joris PX. Case reopened: A temporal basis for harmonic pitch templates in the early auditory system? J Acoust Soc Am 2023; 154:3986-4003. [PMID: 38149819 DOI: 10.1121/10.0023969]
Abstract
A fundamental assumption of rate-place models of pitch is the existence of harmonic templates in the central nervous system (CNS). Shamma and Klein [(2000). J. Acoust. Soc. Am. 107, 2631-2644] hypothesized that these templates have a temporal basis. Coincidences in the temporal fine-structure of neural spike trains, even in response to nonharmonic, stochastic stimuli, would be sufficient for the development of harmonic templates. The physiological plausibility of this hypothesis is tested. Responses to pure tones, low-pass noise, and broadband noise from auditory nerve fibers and brainstem "high-sync" neurons are studied. Responses to tones simulate the output of fibers with infinitely sharp filters: for these responses, harmonic structure in a coincidence matrix comparing pairs of spike trains is indeed found. However, harmonic template structure is not observed in coincidences across responses to broadband noise, which are obtained from nerve fibers or neurons with enhanced synchronization. Using a computer model based on that of Shamma and Klein, it is shown that harmonic templates only emerge when consecutive processing steps (cochlear filtering, lateral inhibition, and temporal enhancement) are implemented in extreme, physiologically implausible form. It is concluded that current physiological knowledge does not support the hypothesis of Shamma and Klein (2000).
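As a rough illustration of the coincidence analysis described above (a toy simulation, not the authors' data or code): spike trains whose period histograms are sharply phase-locked to their characteristic frequency share Fourier components with harmonically related fibers, so a pairwise coincidence matrix can show harmonic structure.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, dur = 10000, 20.0                     # 0.1-ms bins, 20-s trains
t = np.arange(int(fs * dur)) / fs
cfs = np.arange(200, 2001, 100)           # hypothetical characteristic frequencies (Hz)

def phase_locked_train(cf, mean_rate=300.0, kappa=3.0):
    """Poisson spikes with a peaky (von Mises-like) period histogram at the CF."""
    rate = mean_rate * np.exp(kappa * (np.cos(2 * np.pi * cf * t) - 1.0))
    return rng.random(t.size) < rate / fs

T = np.array([phase_locked_train(cf) for cf in cfs]).astype(float)

# Coincidence matrix: spikes landing in the same bin, normalized by spike counts
counts = T @ T.T
coincidence = counts / np.sqrt(np.outer(T.sum(1), T.sum(1)))

# A harmonically related pair (400 vs 800 Hz) should stand out relative to 400 vs 700 Hz
i, j, k = np.searchsorted(cfs, [400, 800, 700])
print(coincidence[i, j], coincidence[i, k])
```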
Affiliation(s)
- Yi-Hsuan Li
- Laboratory of Auditory Neurophysiology, Medical School, Campus Gasthuisberg, University of Leuven, B-3000 Leuven, Belgium
- Philip X Joris
- Laboratory of Auditory Neurophysiology, Medical School, Campus Gasthuisberg, University of Leuven, B-3000 Leuven, Belgium
3
López Espejo M, David SV. A sparse code for natural sound context in auditory cortex. Curr Res Neurobiol 2023; 6:100118. [PMID: 38152461 PMCID: PMC10749876 DOI: 10.1016/j.crneur.2023.100118]
Abstract
Accurate sound perception can require integrating information over hundreds of milliseconds or even seconds. Spectro-temporal models of sound coding by single neurons in auditory cortex indicate that the majority of sound-evoked activity can be attributed to stimuli within the preceding few tens of milliseconds. It remains uncertain how the auditory system integrates information about sensory context on a longer timescale. Here we characterized long-lasting contextual effects in auditory cortex (AC) using a diverse set of natural sound stimuli. We measured context effects as the difference in a neuron's response to a single probe sound following two different context sounds. Many AC neurons showed context effects lasting longer than the temporal window of a traditional spectro-temporal receptive field. The duration and magnitude of context effects varied substantially across neurons and stimuli. This diversity of context effects formed a sparse code across the neural population that encoded a wider range of contexts than any constituent neuron. Encoding model analysis indicates that context effects can be explained by activity in the local neural population, suggesting that recurrent local circuits support a long-lasting representation of sensory context in auditory cortex.
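A minimal sketch of the core measurement described above, with made-up numbers: the context effect is the difference between trial-averaged responses to the same probe sound preceded by two different context sounds, assessed here against a trial-shuffled null.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_bins = 20, 40

# Hypothetical single-trial spike counts to the SAME probe after context A vs. context B
probe_after_a = rng.poisson(4.0, size=(n_trials, n_bins))
probe_after_b = rng.poisson(4.0, size=(n_trials, n_bins))
probe_after_b[:, :10] += rng.poisson(2.0, size=(n_trials, 10))  # lingering context effect

# Context effect: difference of trial-averaged probe responses
delta = probe_after_a.mean(axis=0) - probe_after_b.mean(axis=0)

# Shuffle control: how large would the difference be if context labels were random?
pooled = np.vstack([probe_after_a, probe_after_b])
null = np.empty(1000)
for i in range(null.size):
    perm = rng.permutation(pooled.shape[0])
    null[i] = np.abs(pooled[perm[:n_trials]].mean(0) - pooled[perm[n_trials:]].mean(0)).max()

print(f"max |context effect| = {np.abs(delta).max():.2f}, "
      f"95th pct of null = {np.quantile(null, 0.95):.2f}")
```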
Affiliation(s)
- Mateo López Espejo
- Neuroscience Graduate Program, Oregon Health & Science University, Portland, OR, USA
- Stephen V. David
- Otolaryngology, Oregon Health & Science University, Portland, OR, USA
4
Kline AM, Aponte DA, Kato HK. Distinct nonlinear spectrotemporal integration in primary and secondary auditory cortices. Sci Rep 2023; 13:7658. [PMID: 37169827 PMCID: PMC10175507 DOI: 10.1038/s41598-023-34731-6]
Abstract
Animals sense sounds through hierarchical neural pathways that ultimately reach higher-order cortices to extract complex acoustic features, such as vocalizations. Elucidating how spectrotemporal integration varies along the hierarchy from primary to higher-order auditory cortices is a crucial step in understanding this elaborate sensory computation. Here we used two-photon calcium imaging and two-tone stimuli with various frequency-timing combinations to compare spectrotemporal integration between primary (A1) and secondary (A2) auditory cortices in mice. Individual neurons showed mixed supralinear and sublinear integration in a frequency-timing combination-specific manner, and we found unique integration patterns in these two areas. Temporally asymmetric spectrotemporal integration in A1 neurons suggested their roles in discriminating frequency-modulated sweep directions. In contrast, temporally symmetric and coincidence-preferring integration in A2 neurons made them ideal spectral integrators of concurrent multifrequency sounds. Moreover, the ensemble neural activity in A2 was sensitive to two-tone timings, and coincident two-tones evoked distinct ensemble activity patterns from the linear sum of component tones. Together, these results demonstrate distinct roles of A1 and A2 in encoding complex acoustic features, potentially suggesting parallel rather than sequential information extraction between these regions.
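To make the supralinear/sublinear terminology concrete, here is an illustrative calculation (hypothetical response values, not the study's data): a linearity index per frequency-timing combination that is positive when the two-tone response exceeds the sum of the single-tone responses and negative when it falls short.

```python
import numpy as np

rng = np.random.default_rng(3)
n_freqs, n_delays = 8, 9          # hypothetical second-tone frequencies and onset asynchronies

# Hypothetical trial-averaged responses: tone A alone, each tone B alone, each A+B pair
r_a = 2.0
r_b = rng.uniform(0.5, 3.0, size=n_freqs)
r_ab = rng.uniform(0.0, 8.0, size=(n_freqs, n_delays))

linear_sum = r_a + r_b[:, None]
linearity_index = (r_ab - linear_sum) / (r_ab + linear_sum)   # >0 supralinear, <0 sublinear
print(np.round(linearity_index, 2))
```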
Affiliation(s)
- Amber M Kline
- Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA
- Neuroscience Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA
- Destinee A Aponte
- Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA
- Neuroscience Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA
- Hiroyuki K Kato
- Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA.
- Neuroscience Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA.
- Carolina Institute for Developmental Disabilities, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA.
5
Zhou B, Tomioka R, Song WJ. Temporal profiles of neuronal responses to repeated tone stimuli in the mouse primary auditory cortex. Hear Res 2023; 430:108710. [PMID: 36758331 DOI: 10.1016/j.heares.2023.108710]
Abstract
How the auditory system processes temporal information of sound has been investigated extensively using repeated stimuli. Recent studies on how the response of neurons in the primary auditory cortex (A1) changes with the progression of stimulus repetition, have reported response temporal profiles of two categories: "adaptation", i.e., gradual decrease, and "facilitation", i.e., gradual increase. To explore the existence of profiles of other categories and to examine the tone-frequency-dependence of the profile category in single neurons, here we studied the response of mouse A1 neurons to four or five tone-trains; each train comprised 10 identical tone pips, with 0.5-s inter-tone-intervals, and the four or five trains differed only in tone frequency. The response to each tone in a train was evaluated using the peak of the ON response, and how the peak response changed with the tone number in the train, i.e., the response temporal profile, was examined. We confirmed the existence of profiles of both "adaptation" and "facilitation" categories; "adaptation" could be further subcategorized into "slow adaptation" and "fast adaptation" profiles, with the latter being encountered more frequently. Moreover, two new categories of non-monotonic profiles were identified: an "adaptation with recovery" profile and a "facilitation followed by adaptation" profile. Examination of single neurons with trains of different tone frequencies revealed that some A1 neurons exhibited profiles of the same category to tone trains of different tone frequencies, whereas others exhibited profiles of different categories, depending on the tone frequency. These results demonstrate the variety in the response temporal profiles of mouse A1 neurons, which may benefit the encoding of individual tones in a train.
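The following toy function (illustrative thresholds and rules, not the authors' classification criteria) shows one way the response temporal profiles named above could be labelled from the per-tone peak responses of a 10-tone train.

```python
import numpy as np

def classify_profile(peaks, thr=0.25):
    """Label a 10-tone response profile from per-tone peak ON responses (toy heuristic)."""
    p = np.asarray(peaks, dtype=float) / peaks[0]   # normalize to the first tone
    early, late = p[1:5].mean(), p[5:].mean()
    if early < 1 - thr and late > early + thr:
        return "adaptation with recovery"
    if early > 1 + thr and late < early - thr:
        return "facilitation followed by adaptation"
    if late < 1 - thr:
        return "fast adaptation" if early < 1 - thr else "slow adaptation"
    if late > 1 + thr:
        return "facilitation"
    return "no clear trend"

print(classify_profile([10, 5, 4, 4, 3, 3, 3, 3, 3, 3]))             # fast adaptation
print(classify_profile([10, 11, 12, 13, 14, 15, 15, 16, 16, 17]))    # facilitation
```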
Affiliation(s)
- Bo Zhou
- Department of Sensory and Cognitive Physiology, Graduate School of Medical Sciences, Kumamoto University, Kumamoto 860-8556, Japan
- Ryohei Tomioka
- Department of Sensory and Cognitive Physiology, Graduate School of Medical Sciences, Kumamoto University, Kumamoto 860-8556, Japan
- Wen-Jie Song
- Department of Sensory and Cognitive Physiology, Graduate School of Medical Sciences, Kumamoto University, Kumamoto 860-8556, Japan; Center for Metabolic Regulation of Healthy Aging, Faculty of Life Sciences, Kumamoto University, Kumamoto 860-8556, Japan
6
Sadagopan S, Kar M, Parida S. Quantitative models of auditory cortical processing. Hear Res 2023; 429:108697. [PMID: 36696724 PMCID: PMC9928778 DOI: 10.1016/j.heares.2023.108697]
Abstract
To generate insight from experimental data, it is critical to understand the inter-relationships between individual data points and place them in context within a structured framework. Quantitative modeling can provide the scaffolding for such an endeavor. Our main objective in this review is to provide a primer on the range of quantitative tools available to experimental auditory neuroscientists. Quantitative modeling is advantageous because it can provide a compact summary of observed data, make underlying assumptions explicit, and generate predictions for future experiments. Quantitative models may be developed to characterize or fit observed data, to test theories of how a task may be solved by neural circuits, to determine how observed biophysical details might contribute to measured activity patterns, or to predict how an experimental manipulation would affect neural activity. In complexity, quantitative models can range from those that are highly biophysically realistic and that include detailed simulations at the level of individual synapses, to those that use abstract and simplified neuron models to simulate entire networks. Here, we survey the landscape of recently developed models of auditory cortical processing, highlighting a small selection of models to demonstrate how they help generate insight into the mechanisms of auditory processing. We discuss examples ranging from models that use details of synaptic properties to explain the temporal pattern of cortical responses to those that use modern deep neural networks to gain insight into human fMRI data. We conclude by discussing a biologically realistic and interpretable model that our laboratory has developed to explore aspects of vocalization categorization in the auditory pathway.
Affiliation(s)
- Srivatsun Sadagopan
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA; Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA.
- Manaswini Kar
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA
- Satyabrata Parida
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
7
Kline AM, Aponte DA, Kato HK. Distinct nonlinear spectrotemporal integration in primary and secondary auditory cortices. bioRxiv [Preprint] 2023:2023.01.25.525588. [PMID: 36747812 PMCID: PMC9900815 DOI: 10.1101/2023.01.25.525588]
Abstract
Animals sense sounds through hierarchical neural pathways that ultimately reach higher-order cortices to extract complex acoustic features, such as vocalizations. Elucidating how spectrotemporal integration varies along the hierarchy from primary to higher-order auditory cortices is a crucial step in understanding this elaborate sensory computation. Here we used two-photon calcium imaging and two-tone stimuli with various frequency-timing combinations to compare spectrotemporal integration between primary (A1) and secondary (A2) auditory cortices in mice. Individual neurons showed mixed supralinear and sublinear integration in a frequency-timing combination-specific manner, and we found unique integration patterns in these two areas. Temporally asymmetric spectrotemporal integration in A1 neurons enabled their discrimination of frequency-modulated sweep directions. In contrast, temporally symmetric and coincidence-preferring integration in A2 neurons made them ideal spectral integrators of concurrent multifrequency sounds. Moreover, the ensemble neural activity in A2 was sensitive to two-tone timings, and coincident two-tones evoked distinct ensemble activity patterns from the linear sum of component tones. Together, these results demonstrate distinct roles of A1 and A2 in encoding complex acoustic features, potentially suggesting parallel rather than sequential information extraction between these regions.
Affiliation(s)
- Amber M. Kline
- Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Neuroscience Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- These authors contributed equally
- Destinee A. Aponte
- Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Neuroscience Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- These authors contributed equally
- Hiroyuki K. Kato
- Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Neuroscience Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Institute for Developmental Disabilities, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Correspondence: Hiroyuki Kato, Mary Ellen Jones Building, Rm. 6212B, 116 Manning Dr., Chapel Hill, NC 27599-7250, USA, 919-843-8764
8
A Redundant Cortical Code for Speech Envelope. J Neurosci 2023; 43:93-112. [PMID: 36379706 PMCID: PMC9838705 DOI: 10.1523/jneurosci.1616-21.2022]
Abstract
Animal communication sounds exhibit complex temporal structure because of the amplitude fluctuations that comprise the sound envelope. In human speech, envelope modulations drive synchronized activity in auditory cortex (AC), which correlates strongly with comprehension (Giraud and Poeppel, 2012; Peelle and Davis, 2012; Haegens and Zion Golumbic, 2018). Studies of envelope coding in single neurons, performed in nonhuman animals, have focused on periodic amplitude modulation (AM) stimuli and use response metrics that are not easy to juxtapose with data from humans. In this study, we sought to bridge these fields. Specifically, we looked directly at the temporal relationship between stimulus envelope and spiking, and we assessed whether the apparent diversity across neurons' AM responses contributes to the population representation of speech-like sound envelopes. We gathered responses from single neurons to vocoded speech stimuli and compared them to sinusoidal AM responses in auditory cortex (AC) of alert, freely moving Mongolian gerbils of both sexes. While AC neurons displayed heterogeneous tuning to AM rate, their temporal dynamics were stereotyped. Preferred response phases accumulated near the onsets of sinusoidal AM periods for slower rates (<8 Hz), and an over-representation of amplitude edges was apparent in population responses to both sinusoidal AM and vocoded speech envelopes. Crucially, this encoding bias imparted a decoding benefit: a classifier could discriminate vocoded speech stimuli using summed population activity, while higher frequency modulations required a more sophisticated decoder that tracked spiking responses from individual cells. Together, our results imply that the envelope structure relevant to parsing an acoustic stream could be read out from a distributed, redundant population code. SIGNIFICANCE STATEMENT: Animal communication sounds have rich temporal structure and are often produced in extended sequences, including the syllabic structure of human speech. Although the auditory cortex (AC) is known to play a crucial role in representing speech syllables, the contribution of individual neurons remains uncertain. Here, we characterized the representations of both simple, amplitude-modulated sounds and complex, speech-like stimuli within a broad population of cortical neurons, and we found an overrepresentation of amplitude edges. Thus, a phasic, redundant code in auditory cortex can provide a mechanistic explanation for segmenting acoustic streams like human speech.
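As a schematic of the summed-population decoding idea tested above (synthetic data and a nearest-template classifier; none of this is the study's code), the decoder below only sees the population-summed response on each trial.

```python
import numpy as np

rng = np.random.default_rng(4)
n_cells, n_stim, n_trials, n_bins = 50, 4, 12, 200

# Synthetic envelopes; cells respond preferentially near rising amplitude edges
env = np.abs(rng.standard_normal((n_stim, n_bins)))
edge_drive = np.maximum(np.diff(env, axis=1, prepend=0.0), 0.0)
rates = 2.0 + 8.0 * edge_drive[None, :, None, :]
spikes = rng.poisson(rates, size=(n_cells, n_stim, n_trials, n_bins))

# Summed-population decoder: correlate each held-out summed response with templates
pop = spikes.sum(axis=0)                          # stimuli x trials x time bins
templates = pop[:, :6].mean(axis=1)               # templates from the first half of trials
test = pop[:, 6:]
correct = 0
for s in range(n_stim):
    for tr in range(test.shape[1]):
        r = [np.corrcoef(test[s, tr], templates[s2])[0, 1] for s2 in range(n_stim)]
        correct += int(np.argmax(r) == s)
print(f"accuracy = {correct / (n_stim * test.shape[1]):.2f} (chance = {1 / n_stim:.2f})")
```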
9
Lage-Castellanos A, De Martino F, Ghose GM, Gulban OF, Moerel M. Selective attention sharpens population receptive fields in human auditory cortex. Cereb Cortex 2022; 33:5395-5408. [PMID: 36336333 PMCID: PMC10152083 DOI: 10.1093/cercor/bhac427]
Abstract
Selective attention enables the preferential processing of relevant stimulus aspects. Invasive animal studies have shown that attending a sound feature rapidly modifies neuronal tuning throughout the auditory cortex. Human neuroimaging studies have reported enhanced auditory cortical responses with selective attention. To date, it remains unclear how the results obtained with functional magnetic resonance imaging (fMRI) in humans relate to the electrophysiological findings in animal models. Here we aim to narrow the gap between animal and human research by combining a selective attention task similar in design to those used in animal electrophysiology with high spatial resolution ultra-high field fMRI at 7 Tesla. Specifically, human participants perform a detection task in which the probability of target occurrence varies with sound frequency. Contrary to previous fMRI studies, we show that selective attention resulted in population receptive field sharpening, and consequently reduced responses, at the attended sound frequencies. The difference between our results and those of previous fMRI studies supports the notion that the influence of selective attention on auditory cortex is diverse and may depend on context, stimulus, and task.
Affiliation(s)
- Agustin Lage-Castellanos
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), 6200 MD Maastricht, The Netherlands
- Department of NeuroInformatics, Cuban Neuroscience Center, Havana City 11600, Cuba
- Federico De Martino
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), 6200 MD Maastricht, The Netherlands
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, MN 55455, United States
- Geoffrey M Ghose
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, MN 55455, United States
- Michelle Moerel
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), 6200 MD Maastricht, The Netherlands
- Maastricht Centre for Systems Biology, Maastricht University, 6200 MD Maastricht, The Netherlands
10
Suri H, Rothschild G. Enhanced stability of complex sound representations relative to simple sounds in the auditory cortex. eNeuro 2022; 9:ENEURO.0031-22.2022. [PMID: 35868858 PMCID: PMC9347310 DOI: 10.1523/eneuro.0031-22.2022]
Abstract
Typical everyday sounds, such as those of speech or running water, are spectrotemporally complex. The ability to recognize complex sounds (CxS) and their associated meaning is presumed to rely on their stable neural representations across time. The auditory cortex is critical for processing of CxS, yet little is known of the degree of stability of auditory cortical representations of CxS across days. Previous studies have shown that the auditory cortex represents CxS identity with a substantial degree of invariance to basic sound attributes such as frequency. We therefore hypothesized that auditory cortical representations of CxS are more stable across days than those of sounds that lack spectrotemporal structure such as pure tones (PTs). To test this hypothesis, we recorded responses of identified L2/3 auditory cortical excitatory neurons to both PTs and CxS across days using two-photon calcium imaging in awake mice. Auditory cortical neurons showed significant daily changes in responses to both types of sounds, yet responses to CxS exhibited significantly lower rates of daily change than those of PTs. Furthermore, daily changes in response profiles to PTs tended to be more stimulus-specific, reflecting changes in sound selectivity, as compared to changes of CxS responses. Lastly, the enhanced stability of responses to CxS was evident across longer time intervals as well. Together, these results suggest that spectrotemporally complex sounds are more stably represented in the auditory cortex across time than PTs. These findings support a role of the auditory cortex in representing CxS identity across time. SIGNIFICANCE STATEMENT: The ability to recognize everyday complex sounds such as those of speech or running water is presumed to rely on their stable neural representations. Yet, little is known of the degree of stability of single-neuron sound responses across days. As the auditory cortex is critical for complex sound perception, we hypothesized that the auditory cortical representations of complex sounds are relatively stable across days. To test this, we recorded sound responses of identified auditory cortical neurons across days in awake mice. We found that auditory cortical responses to complex sounds are significantly more stable across days as compared to those of simple pure tones. These findings support a role of the auditory cortex in representing complex sound identity across time.
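One simple way to quantify the kind of across-day stability compared above (toy numbers, not the study's analysis) is the correlation of a neuron's trial-averaged response profile across days.

```python
import numpy as np

rng = np.random.default_rng(5)
n_stim = 12

# Hypothetical trial-averaged responses of one neuron on two days:
# pure-tone tuning drifts more across days, complex-sound tuning less.
tuning_pt = rng.gamma(2.0, 2.0, n_stim)
tuning_cx = rng.gamma(2.0, 2.0, n_stim)
day1_pt, day2_pt = tuning_pt + rng.normal(0, 2.0, n_stim), tuning_pt + rng.normal(0, 2.0, n_stim)
day1_cx, day2_cx = tuning_cx + rng.normal(0, 0.5, n_stim), tuning_cx + rng.normal(0, 0.5, n_stim)

# Stability index: across-day correlation of the response profile (1 = identical tuning)
stab_pt = np.corrcoef(day1_pt, day2_pt)[0, 1]
stab_cx = np.corrcoef(day1_cx, day2_cx)[0, 1]
print(f"pure tones r = {stab_pt:.2f}, complex sounds r = {stab_cx:.2f}")
```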
Affiliation(s)
- Harini Suri
- Department of Psychology, University of Michigan, Ann Arbor, MI, 48109, USA
- Gideon Rothschild
- Department of Psychology, University of Michigan, Ann Arbor, MI, 48109, USA
- Kresge Hearing Research Institute and Department of Otolaryngology - Head and Neck Surgery, University of Michigan, Ann Arbor, MI 48109, USA
11
Francis NA, Mukherjee S, Koçillari L, Panzeri S, Babadi B, Kanold PO. Sequential transmission of task-relevant information in cortical neuronal networks. Cell Rep 2022; 39:110878. [PMID: 35649366 PMCID: PMC9387204 DOI: 10.1016/j.celrep.2022.110878]
Abstract
Cortical processing of task-relevant information enables recognition of behaviorally meaningful sensory events. It is unclear how task-related information is represented within cortical networks by the activity of individual neurons and their functional interactions. Here, we use two-photon imaging to record neuronal activity from the primary auditory cortex of mice during a pure-tone discrimination task. We find that a subset of neurons transiently encode sensory information used to inform behavioral choice. Using Granger causality analysis, we show that these neurons form functional networks in which information transmits sequentially. Network structures differ for target versus non-target tones, encode behavioral choice, and differ between correct versus incorrect behavioral choices. Correct behavioral choices are associated with shorter communication timescales, larger functional correlations, and greater information redundancy. In summary, specialized neurons in primary auditory cortex integrate task-related information and form functional networks whose structures encode both sensory input and behavioral choice. Francis et al. find that, as mice perform an auditory discrimination task, cortical neurons form functional networks in which task-relevant information transmits sequentially between neurons. Network structures encode behavioral choice, and correct behavioral choices are associated with shorter communication timescales, larger functional correlations, and greater information redundancy between neurons.
Affiliation(s)
- Nikolas A Francis
- Department of Biology & Brain and Behavior Institute, University of Maryland, College Park, MD 20742, USA
- Shoutik Mukherjee
- Department of Electrical and Computer Engineering & Institute for Systems Research, University of Maryland, College Park, MD 20742, USA
- Loren Koçillari
- Laboratory of Neural Computation, Istituto Italiano di Tecnologia, Rovereto 38068, Italy; Department of Excellence for Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), Falkenried 94, D-20251 Hamburg, Germany
- Stefano Panzeri
- Laboratory of Neural Computation, Istituto Italiano di Tecnologia, Rovereto 38068, Italy; Department of Excellence for Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), Falkenried 94, D-20251 Hamburg, Germany.
- Behtash Babadi
- Department of Electrical and Computer Engineering & Institute for Systems Research, University of Maryland, College Park, MD 20742, USA.
- Patrick O Kanold
- Department of Biology & Brain and Behavior Institute, University of Maryland, College Park, MD 20742, USA; Department of Biomedical Engineering & Kavli Neuroscience Discovery Institute, Johns Hopkins University, Baltimore, MD 21205, USA.
12
Ruthig P, Schönwiesner M. Common principles in the lateralisation of auditory cortex structure and function for vocal communication in primates and rodents. Eur J Neurosci 2022; 55:827-845. [PMID: 34984748 DOI: 10.1111/ejn.15590]
Abstract
This review summarises recent findings on the lateralisation of communicative sound processing in the auditory cortex (AC) of humans, non-human primates, and rodents. Functional imaging in humans has demonstrated a left hemispheric preference for some acoustic features of speech, but it is unclear to which degree this is caused by bottom-up acoustic feature selectivity or top-down modulation from language areas. Although non-human primates show a less pronounced functional lateralisation in AC, the properties of AC fields and behavioral asymmetries are qualitatively similar. Rodent studies demonstrate microstructural circuits that might underlie bottom-up acoustic feature selectivity in both hemispheres. Functionally, the left AC in the mouse appears to be specifically tuned to communication calls, whereas the right AC may have a more 'generalist' role. Rodents also show anatomical AC lateralisation, such as differences in size and connectivity. Several of these functional and anatomical characteristics are also lateralized in human AC. Thus, complex vocal communication processing shares common features among rodents and primates. We argue that a synthesis of results from humans, non-human primates, and rodents is necessary to identify the neural circuitry of vocal communication processing. However, data from different species and methods are often difficult to compare. Recent advances may enable better integration of methods across species. Efforts to standardise data formats and analysis tools would benefit comparative research and enable synergies between psychological and biological research in the area of vocal communication processing.
Affiliation(s)
- Philip Ruthig
- Faculty of Life Sciences, Leipzig University, Leipzig, Saxony, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
13
Downer JD, Verhein JR, Rapone BC, O'Connor KN, Sutter ML. An Emergent Population Code in Primary Auditory Cortex Supports Selective Attention to Spectral and Temporal Sound Features. J Neurosci 2021; 41:7561-7577. [PMID: 34210783 PMCID: PMC8425978 DOI: 10.1523/jneurosci.0693-20.2021]
Abstract
Textbook descriptions of primary sensory cortex (PSC) revolve around single neurons' representation of low-dimensional sensory features, such as visual object orientation in primary visual cortex (V1), location of somatic touch in primary somatosensory cortex (S1), and sound frequency in primary auditory cortex (A1). Typically, studies of PSC measure neurons' responses along few (one or two) stimulus and/or behavioral dimensions. However, real-world stimuli usually vary along many feature dimensions and behavioral demands change constantly. In order to illuminate how A1 supports flexible perception in rich acoustic environments, we recorded from A1 neurons while rhesus macaques (one male, one female) performed a feature-selective attention task. We presented sounds that varied along spectral and temporal feature dimensions (carrier bandwidth and temporal envelope, respectively). Within a block, subjects attended to one feature of the sound in a selective change detection task. We found that single neurons tend to be high-dimensional, in that they exhibit substantial mixed selectivity for both sound features, as well as task context. We found no overall enhancement of single-neuron coding of the attended feature, as attention could either diminish or enhance this coding. However, a population-level analysis reveals that ensembles of neurons exhibit enhanced encoding of attended sound features, and this population code tracks subjects' performance. Importantly, surrogate neural populations with intact single-neuron tuning but shuffled higher-order correlations among neurons fail to yield attention-related effects observed in the intact data. These results suggest that an emergent population code not measurable at the single-neuron level might constitute the functional unit of sensory representation in PSC. SIGNIFICANCE STATEMENT: The ability to adapt to a dynamic sensory environment promotes a range of important natural behaviors. We recorded from single neurons in monkey primary auditory cortex (A1), while subjects attended to either the spectral or temporal features of complex sounds. Surprisingly, we found no average increase in responsiveness to, or encoding of, the attended feature across single neurons. However, when we pooled the activity of the sampled neurons via targeted dimensionality reduction (TDR), we found enhanced population-level representation of the attended feature and suppression of the distractor feature. This dissociation of the effects of attention at the level of single neurons versus the population highlights the synergistic nature of cortical sound encoding and enriches our understanding of sensory cortical function.
Affiliation(s)
- Joshua D Downer
- Center for Neuroscience, University of California, Davis, Davis, California 95618
- Department of Otolaryngology, Head and Neck Surgery, University of California, San Francisco, California 94143
- Jessica R Verhein
- Center for Neuroscience, University of California, Davis, Davis, California 95618
- School of Medicine, Stanford University, Stanford, California 94305
- Brittany C Rapone
- Center for Neuroscience, University of California, Davis, Davis, California 95618
- School of Social Sciences, Oxford Brookes University, Oxford, OX4 0BP, United Kingdom
- Kevin N O'Connor
- Center for Neuroscience, University of California, Davis, Davis, California 95618
- Department of Neurobiology, Physiology and Behavior, University of California, Davis, Davis, California 95618
- Mitchell L Sutter
- Center for Neuroscience, University of California, Davis, Davis, California 95618
- Department of Neurobiology, Physiology and Behavior, University of California, Davis, Davis, California 95618
14
Montes-Lourido P, Kar M, David SV, Sadagopan S. Neuronal selectivity to complex vocalization features emerges in the superficial layers of primary auditory cortex. PLoS Biol 2021; 19:e3001299. [PMID: 34133413 PMCID: PMC8238193 DOI: 10.1371/journal.pbio.3001299]
Abstract
Early in auditory processing, neural responses faithfully reflect acoustic input. At higher stages of auditory processing, however, neurons become selective for particular call types, eventually leading to specialized regions of cortex that preferentially process calls at the highest auditory processing stages. We previously proposed that an intermediate step in how nonselective responses are transformed into call-selective responses is the detection of informative call features. But how neural selectivity for informative call features emerges from nonselective inputs, whether feature selectivity gradually emerges over the processing hierarchy, and how stimulus information is represented in nonselective and feature-selective populations remain open questions. In this study, using unanesthetized guinea pigs (GPs), a highly vocal and social rodent, as an animal model, we characterized the neural representation of calls in 3 auditory processing stages: the thalamus (ventral medial geniculate body (vMGB)), and thalamorecipient (L4) and superficial layers (L2/3) of primary auditory cortex (A1). We found that neurons in vMGB and A1 L4 did not exhibit call-selective responses and responded throughout the call durations. However, A1 L2/3 neurons showed high call selectivity with about a third of neurons responding to only 1 or 2 call types. These A1 L2/3 neurons only responded to restricted portions of calls, suggesting that they were highly selective for call features. Receptive fields of these A1 L2/3 neurons showed complex spectrotemporal structures that could underlie their high call feature selectivity. Information theoretic analysis revealed that in A1 L4, stimulus information was distributed over the population and was spread out over the call durations. In contrast, in A1 L2/3, individual neurons showed brief bursts of high stimulus-specific information and conveyed high levels of information per spike. These data demonstrate that a transformation in the neural representation of calls occurs between A1 L4 and A1 L2/3, leading to the emergence of a feature-based representation of calls in A1 L2/3. Our data thus suggest that observed cortical specializations for call processing emerge in A1 and set the stage for further mechanistic studies.
Affiliation(s)
- Pilar Montes-Lourido
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Manaswini Kar
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Center for Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Stephen V. David
- Department of Otolaryngology, Oregon Health and Science University, Portland, Oregon, United States of America
- Srivatsun Sadagopan
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Center for Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
15
Bandet MV, Dong B, Winship IR. Distinct patterns of activity in individual cortical neurons and local networks in primary somatosensory cortex of mice evoked by square-wave mechanical limb stimulation. PLoS One 2021; 16:e0236684. [PMID: 33914738 PMCID: PMC8084136 DOI: 10.1371/journal.pone.0236684]
Abstract
Artificial forms of mechanical limb stimulation are used within multiple fields of study to determine the level of cortical excitability and to map the trajectory of neuronal recovery from cortical damage or disease. Square-wave mechanical or electrical stimuli are often used in these studies, but a characterization of sensory-evoked response properties to square-waves with distinct fundamental frequencies but overlapping harmonics has not been performed. To distinguish between somatic stimuli, the primary somatosensory cortex must be able to represent distinct stimuli with unique patterns of activity, even if they have overlapping features. Thus, mechanical square-wave stimulation was used in conjunction with regional and cellular imaging to examine regional and cellular response properties evoked by different frequencies of stimulation. Flavoprotein autofluorescence imaging was used to map the somatosensory cortex of anaesthetized C57BL/6 mice, and in vivo two-photon Ca2+ imaging was used to define patterns of neuronal activation during mechanical square-wave stimulation of the contralateral forelimb or hindlimb at various frequencies (3, 10, 100, 200, and 300 Hz). The data revealed that neurons within the limb associated somatosensory cortex responding to various frequencies of square-wave stimuli exhibit stimulus-specific patterns of activity. Subsets of neurons were found to have sensory-evoked activity that is either primarily responsive to single stimulus frequencies or broadly responsive to multiple frequencies of limb stimulation. High frequency stimuli were shown to elicit more population activity, with a greater percentage of the population responding and greater percentage of cells with high amplitude responses. Stimulus-evoked cell-cell correlations within these neuronal networks varied as a function of frequency of stimulation, such that each stimulus elicited a distinct pattern that was more consistent across multiple trials of the same stimulus compared to trials at different frequencies of stimulation. The variation in cortical response to different square-wave stimuli can thus be represented by the population pattern of supra-threshold Ca2+ transients, the magnitude and temporal properties of the evoked activity, and the structure of the stimulus-evoked correlation between neurons.
Affiliation(s)
- Mischa V. Bandet
- Neuroscience and Mental Health Institute, University of Alberta, Edmonton, Alberta, Canada
- Neurochemical Research Unit, University of Alberta, Edmonton, Alberta, Canada
- Bin Dong
- Neurochemical Research Unit, University of Alberta, Edmonton, Alberta, Canada
- Department of Psychiatry, University of Alberta, Edmonton, Alberta, Canada
- Ian R. Winship
- Neuroscience and Mental Health Institute, University of Alberta, Edmonton, Alberta, Canada
- Neurochemical Research Unit, University of Alberta, Edmonton, Alberta, Canada
- Department of Psychiatry, University of Alberta, Edmonton, Alberta, Canada
16
Aponte DA, Handy G, Kline AM, Tsukano H, Doiron B, Kato HK. Recurrent network dynamics shape direction selectivity in primary auditory cortex. Nat Commun 2021; 12:314. [PMID: 33436635 PMCID: PMC7804939 DOI: 10.1038/s41467-020-20590-6]
Abstract
Detecting the direction of frequency modulation (FM) is essential for vocal communication in both animals and humans. Direction-selective firing of neurons in the primary auditory cortex (A1) has been classically attributed to temporal offsets between feedforward excitatory and inhibitory inputs. However, it remains unclear how cortical recurrent circuitry contributes to this computation. Here, we used two-photon calcium imaging and whole-cell recordings in awake mice to demonstrate that direction selectivity is not caused by temporal offsets between synaptic currents, but by an asymmetry in total synaptic charge between preferred and non-preferred directions. Inactivation of cortical somatostatin-expressing interneurons (SOM cells) reduced direction selectivity, revealing its cortical contribution. Our theoretical models showed that charge asymmetry arises due to broad spatial topography of SOM cell-mediated inhibition which regulates signal amplification in strongly recurrent circuitry. Together, our findings reveal a major contribution of recurrent network dynamics in shaping cortical tuning to behaviorally relevant complex sounds.
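For concreteness, a small illustrative calculation (made-up values, not measurements from the study) of the two quantities contrasted above: a spiking direction selectivity index, and an asymmetry in total synaptic charge between preferred and non-preferred sweep directions.

```python
import numpy as np

# Hypothetical trial-averaged spike counts for upward vs. downward FM sweeps
r_up, r_down = 12.0, 4.0
dsi = (r_up - r_down) / (r_up + r_down)              # direction selectivity index

# Hypothetical excitatory synaptic currents (pA), 1-ms bins, for each direction
t = np.arange(200)
epsc_pref = -80.0 * np.exp(-t / 30.0)
epsc_null = -50.0 * np.exp(-t / 30.0)

# Charge = integral of current; the asymmetry is in total charge rather than input timing
q_pref = np.abs(epsc_pref).sum() * 1e-3              # pA*s (Riemann sum over 1-ms bins)
q_null = np.abs(epsc_null).sum() * 1e-3
charge_asym = (q_pref - q_null) / (q_pref + q_null)
print(f"DSI = {dsi:.2f}, charge asymmetry = {charge_asym:.2f}")
```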
Affiliation(s)
- Destinee A Aponte
- Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA
- Neuroscience Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA
- Gregory Handy
- Departments of Neurobiology and Statistics, University of Chicago, Chicago, IL, USA
- Department of Mathematics, University of Pittsburgh, Pittsburgh, USA
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL, USA
- Amber M Kline
- Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA
- Neuroscience Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA
- Hiroaki Tsukano
- Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA
- Neuroscience Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA
- Brent Doiron
- Departments of Neurobiology and Statistics, University of Chicago, Chicago, IL, USA
- Department of Mathematics, University of Pittsburgh, Pittsburgh, USA
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL, USA
- Hiroyuki K Kato
- Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA.
- Neuroscience Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA.
- Carolina Institute for Developmental Disabilities, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA.
17
Pennington JR, David SV. Complementary Effects of Adaptation and Gain Control on Sound Encoding in Primary Auditory Cortex. eNeuro 2020; 7:ENEURO.0205-20.2020. [PMID: 33109632 PMCID: PMC7675144 DOI: 10.1523/eneuro.0205-20.2020]
Abstract
An important step toward understanding how the brain represents complex natural sounds is to develop accurate models of auditory coding by single neurons. A commonly used model is the linear-nonlinear spectro-temporal receptive field (STRF; LN model). The LN model accounts for many features of auditory tuning, but it cannot account for long-lasting effects of sensory context on sound-evoked activity. Two mechanisms that may support these contextual effects are short-term plasticity (STP) and contrast-dependent gain control (GC), which have inspired expanded versions of the LN model. Both models improve performance over the LN model, but they have never been compared directly. Thus, it is unclear whether they account for distinct processes or describe one phenomenon in different ways. To address this question, we recorded activity of neurons in primary auditory cortex (A1) of awake ferrets during presentation of natural sounds. We then fit models incorporating one nonlinear mechanism (GC or STP) or both (GC+STP) using this single dataset, and measured the correlation between the models' predictions and the recorded neural activity. Both the STP and GC models performed significantly better than the LN model, but the GC+STP model outperformed both individual models. We also quantified the equivalence of STP and GC model predictions and found only modest similarity. Consistent results were observed for a dataset collected in clean and noisy acoustic contexts. These results establish general methods for evaluating the equivalence of arbitrarily complex encoding models and suggest that the STP and GC models describe complementary processes in the auditory system.
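The comparison logic described above can be illustrated with a toy calculation (random surrogate data and simple linear regression; this is not the authors' model-fitting code): score each model by its correlation with held-out responses, then ask how similar the GC and STP predictions are once the part already captured by the LN model is removed.

```python
import numpy as np

rng = np.random.default_rng(6)
n_bins = 1000

# Surrogate held-out data: a recorded response and predictions from three models
resp = rng.gamma(2.0, 1.0, n_bins)
pred_ln = resp + rng.normal(0, 2.0, n_bins)
pred_gc = resp + rng.normal(0, 1.5, n_bins)
pred_stp = resp + rng.normal(0, 1.5, n_bins)

def r(a, b):
    return np.corrcoef(a, b)[0, 1]

# Prediction accuracy of each model on held-out data
print({name: round(r(resp, p), 3)
       for name, p in [("LN", pred_ln), ("GC", pred_gc), ("STP", pred_stp)]})

# Crude "equivalence" measure: correlate the GC and STP predictions after
# regressing out the LN prediction from each
def residual(y, x):
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

print(f"GC vs STP beyond LN: r = {r(residual(pred_gc, pred_ln), residual(pred_stp, pred_ln)):.2f}")
```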
Affiliation(s)
- Jacob R Pennington
- Department of Mathematics, Washington State University, Vancouver, WA, 98686
- Stephen V David
- Department of Otolaryngology, Oregon Health and Science University, Portland, OR, 97239
18
Differential Short-Term Plasticity of PV and SST Neurons Accounts for Adaptation and Facilitation of Cortical Neurons to Auditory Tones. J Neurosci 2020; 40:9224-9235. [PMID: 33097639 DOI: 10.1523/jneurosci.0686-20.2020]
Abstract
Cortical responses to sensory stimuli are strongly modulated by temporal context. One of the best studied examples of such modulation is sensory adaptation. We first show that in response to repeated tones, pyramidal (Pyr) neurons in male mouse auditory cortex (A1) exhibit facilitating and stable responses, in addition to adapting responses. To examine the potential mechanisms underlying these distinct temporal profiles, we developed a reduced spiking model of sensory cortical circuits that incorporated the signature short-term synaptic plasticity (STP) profiles of the inhibitory parvalbumin (PV) and somatostatin (SST) interneurons. The model accounted for all three temporal response profiles as the result of dynamic changes in excitatory/inhibitory balance produced by STP, primarily through shifts in the relative latency of Pyr and inhibitory neurons. Transition between the three response profiles was possible by changing the strength of the inhibitory PV→Pyr and SST→Pyr synapses. The model predicted that a unit's latency would be related to its temporal profile. Consistent with this prediction, the latency of stable units was significantly shorter than that of adapting and facilitating units. Furthermore, because of the history-dependence of STP, the model generated a paradoxical prediction: that inactivation of inhibitory neurons during one tone would decrease the response of A1 neurons to a subsequent tone. Indeed, we observed that optogenetic inactivation of PV neurons during one tone counterintuitively decreased the spiking of Pyr neurons to a subsequent tone 400 ms later. These results provide evidence that STP is critical to temporal context-dependent responses in the sensory cortex. SIGNIFICANCE STATEMENT: Our perception of speech and music depends strongly on temporal context, i.e., the significance of a stimulus depends on the preceding stimuli. Complementary neural mechanisms are needed to sometimes ignore repetitive stimuli (e.g., the tick of a clock) or detect meaningful repetition (e.g., consecutive tones in Morse code). We modeled a neural circuit that accounts for diverse experimentally observed response profiles in auditory cortex (A1) neurons, based on known forms of short-term synaptic plasticity (STP). Whether the simulated circuit reduced, maintained, or enhanced its response to repeated tones depended on the relative dominance of two different types of inhibitory cells. The model made novel predictions that were experimentally validated. Results define an important role for STP in temporal context-dependent perception.
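The kind of short-term synaptic plasticity rule such a model builds on can be sketched with the Tsodyks-Markram recursion below; the parameter values are illustrative stand-ins for depressing (PV-like) and facilitating (SST-like) synapses, not fitted values from the study.

```python
import numpy as np

def tm_efficacy(spike_times, U, tau_rec, tau_fac):
    """Per-spike synaptic efficacy u*R under a Tsodyks-Markram short-term plasticity
    recursion (u1 = U, R1 = 1); times and time constants in seconds."""
    eff = []
    u, R = U, 1.0
    for n, t in enumerate(spike_times):
        if n > 0:
            dt = t - spike_times[n - 1]
            R = 1.0 + (R_left - 1.0) * np.exp(-dt / tau_rec)   # resources recover toward 1
            u = U + u * (1.0 - U) * np.exp(-dt / tau_fac)      # facilitation decays, then jumps by U(1-u)
        eff.append(u * R)
        R_left = R * (1.0 - u)                                 # resources remaining after release
    return np.array(eff)

tone_times = np.arange(10) * 0.5                 # one spike per tone, 0.5-s inter-tone interval
pv_like = tm_efficacy(tone_times, U=0.5, tau_rec=0.8, tau_fac=0.02)   # depressing synapse
sst_like = tm_efficacy(tone_times, U=0.1, tau_rec=0.1, tau_fac=1.5)   # facilitating synapse
print(np.round(pv_like, 3))
print(np.round(sst_like, 3))
```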
Collapse
|
19
|
Sohoglu E, Kumar S, Chait M, Griffiths TD. Multivoxel codes for representing and integrating acoustic features in human cortex. Neuroimage 2020; 217:116661. [PMID: 32081785 PMCID: PMC7339141 DOI: 10.1016/j.neuroimage.2020.116661] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2019] [Revised: 02/13/2020] [Accepted: 02/15/2020] [Indexed: 10/25/2022] Open
Abstract
Using fMRI and multivariate pattern analysis, we determined whether spectral and temporal acoustic features are represented by independent or integrated multivoxel codes in human cortex. Listeners heard band-pass noise varying in frequency (spectral) and amplitude-modulation (AM) rate (temporal) features. In the superior temporal plane, changes in multivoxel activity due to frequency were largely invariant with respect to AM rate (and vice versa), consistent with an independent representation. In contrast, in posterior parietal cortex, multivoxel representation was exclusively integrated and tuned to specific conjunctions of frequency and AM features (albeit weakly). Direct between-region comparisons show that whereas independent coding of frequency weakened with increasing levels of the hierarchy, such a progression for AM and integrated coding was less fine-grained and only evident in the higher hierarchical levels from non-core to parietal cortex (with AM coding weakening and integrated coding strengthening). Our findings support the notion that primary auditory cortex can represent spectral and temporal acoustic features in an independent fashion and suggest a role for parietal cortex in feature integration and the structuring of sensory input.
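A minimal sketch of the cross-generalization logic that distinguishes independent from integrated multivoxel codes is given below: a classifier trained to decode frequency at one AM rate is tested at the other, and high transfer accuracy indicates an AM-invariant (independent) frequency code. The synthetic voxel patterns and classifier choice are illustrative assumptions, not the study's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Toy multivoxel patterns: voxels mix sensitivity to frequency and to AM rate
n_trials, n_vox = 40, 100
freq_axis = rng.normal(0, 1, n_vox)           # voxel weights for frequency
am_axis = rng.normal(0, 1, n_vox)             # voxel weights for AM rate

def make_patterns(freq, am):
    base = freq * freq_axis + am * am_axis
    return base + rng.normal(0, 2.0, size=(n_trials, n_vox))

# Train on frequency discrimination at AM rate A, test at AM rate B
X_train = np.vstack([make_patterns(-1, -1), make_patterns(+1, -1)])
X_test = np.vstack([make_patterns(-1, +1), make_patterns(+1, +1)])
y = np.repeat([0, 1], n_trials)

clf = LogisticRegression(max_iter=1000).fit(X_train, y)
# High transfer accuracy = frequency code largely invariant to AM rate (independent);
# chance-level transfer despite good within-condition decoding would suggest integration.
print("Cross-AM generalization accuracy:", clf.score(X_test, y))
```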
Collapse
Affiliation(s)
- Ediz Sohoglu
- School of Psychology, University of Sussex, Brighton, BN1 9QH, United Kingdom.
| | - Sukhbinder Kumar
- Institute of Neurobiology, Medical School, Newcastle University, Newcastle Upon Tyne, NE2 4HH, United Kingdom; Wellcome Trust Centre for Human Neuroimaging, University College London, London, WC1N 3BG, United Kingdom
| | - Maria Chait
- Ear Institute, University College London, London, United Kingdom
| | - Timothy D Griffiths
- Institute of Neurobiology, Medical School, Newcastle University, Newcastle Upon Tyne, NE2 4HH, United Kingdom; Wellcome Trust Centre for Human Neuroimaging, University College London, London, WC1N 3BG, United Kingdom
| |
Collapse
|
20
|
Issa JB, Tocker G, Hasselmo ME, Heys JG, Dombeck DA. Navigating Through Time: A Spatial Navigation Perspective on How the Brain May Encode Time. Annu Rev Neurosci 2020; 43:73-93. [PMID: 31961765 PMCID: PMC7351603 DOI: 10.1146/annurev-neuro-101419-011117] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
Interval timing, which operates on timescales of seconds to minutes, is distributed across multiple brain regions and may use distinct circuit mechanisms as compared to millisecond timing and circadian rhythms. However, its study has proven difficult, as timing on this scale is deeply entangled with other behaviors. Several circuit and cellular mechanisms could generate sequential or ramping activity patterns that carry timing information. Here we propose that a productive approach is to draw parallels between interval timing and spatial navigation, where direct analogies can be made between the variables of interest and the mathematical operations necessitated. Along with designing experiments that isolate or disambiguate timing behavior from other variables, new techniques will facilitate studies that directly address the neural mechanisms that are responsible for interval timing.
Collapse
Affiliation(s)
- John B Issa
- Department of Neurobiology, Northwestern University, Evanston, Illinois 60208, USA;
| | - Gilad Tocker
- Department of Neurobiology, Northwestern University, Evanston, Illinois 60208, USA;
| | - Michael E Hasselmo
- Center for Systems Neuroscience, Boston University, Boston, Massachusetts 02215, USA
| | - James G Heys
- Department of Neurobiology and Anatomy, University of Utah, Salt Lake City, Utah 84112, USA
| | - Daniel A Dombeck
- Department of Neurobiology, Northwestern University, Evanston, Illinois 60208, USA;
| |
Collapse
|
21
|
Keshishian M, Akbari H, Khalighinejad B, Herrero JL, Mehta AD, Mesgarani N. Estimating and interpreting nonlinear receptive field of sensory neural responses with deep neural network models. eLife 2020; 9:53445. [PMID: 32589140 PMCID: PMC7347387 DOI: 10.7554/elife.53445] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2019] [Accepted: 06/21/2020] [Indexed: 12/21/2022] Open
Abstract
Our understanding of nonlinear stimulus transformations by neural circuits is hindered by the lack of comprehensive yet interpretable computational modeling frameworks. Here, we propose a data-driven approach based on deep neural networks to directly model arbitrarily nonlinear stimulus-response mappings. Reformulating the exact function of a trained neural network as a collection of stimulus-dependent linear functions enables a locally linear receptive field interpretation of the neural network. Predicting the neural responses recorded invasively from the auditory cortex of neurosurgical patients as they listened to speech, this approach significantly improves the prediction accuracy of auditory cortical responses, particularly in nonprimary areas. Moreover, interpreting the functions learned by neural networks uncovered three distinct types of nonlinear transformations of speech that varied considerably from primary to nonprimary auditory regions. The ability of this framework to capture arbitrary stimulus-response mappings while maintaining model interpretability leads to a better understanding of cortical processing of sensory signals.
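The locally linear reading of a trained network can be illustrated with a tiny ReLU model: because such a network is piecewise linear, its response around any given stimulus is exactly a stimulus-dependent linear filter (the Jacobian). The sketch below, with random weights standing in for a trained encoding model, shows the idea; it is not the authors' architecture or code.

```python
import numpy as np

rng = np.random.default_rng(2)

# Tiny two-layer ReLU network standing in for a trained DNN encoding model
n_in, n_hidden = 50, 20
W1 = rng.normal(0, 0.3, (n_hidden, n_in))
b1 = rng.normal(0, 0.1, n_hidden)
w2 = rng.normal(0, 0.3, n_hidden)

def forward(x):
    h = np.maximum(W1 @ x + b1, 0.0)           # ReLU hidden layer
    return w2 @ h

def locally_linear_rf(x):
    """
    For a piecewise-linear (ReLU) network the response around stimulus x is
    exactly linear: r = w_eff(x) . x + b_eff(x). The stimulus-dependent weights
    w_eff are the network Jacobian, i.e. a 'dynamic' receptive field.
    """
    active = (W1 @ x + b1) > 0                 # which hidden units are on
    w_eff = (w2 * active) @ W1                 # Jacobian d r / d x
    b_eff = (w2 * active) @ b1
    return w_eff, b_eff

x = rng.normal(0, 1, n_in)                     # one stimulus frame
w_eff, b_eff = locally_linear_rf(x)
print("exact response      :", forward(x))
print("locally linear match:", w_eff @ x + b_eff)
```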
Collapse
Affiliation(s)
- Menoua Keshishian
- Department of Electrical Engineering, Columbia University, New York, United States; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
| | - Hassan Akbari
- Department of Electrical Engineering, Columbia University, New York, United States; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
| | - Bahar Khalighinejad
- Department of Electrical Engineering, Columbia University, New York, United States; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
| | - Jose L Herrero
- Feinstein Institute for Medical Research, Manhasset, United States; Department of Neurosurgery, Hofstra-Northwell School of Medicine and Feinstein Institute for Medical Research, Manhasset, United States
| | - Ashesh D Mehta
- Feinstein Institute for Medical Research, Manhasset, United States; Department of Neurosurgery, Hofstra-Northwell School of Medicine and Feinstein Institute for Medical Research, Manhasset, United States
| | - Nima Mesgarani
- Department of Electrical Engineering, Columbia University, New York, United States; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
| |
Collapse
|
22
|
Gaucher Q, Panniello M, Ivanov AZ, Dahmen JC, King AJ, Walker KM. Complexity of frequency receptive fields predicts tonotopic variability across species. eLife 2020; 9:53462. [PMID: 32420865 PMCID: PMC7269667 DOI: 10.7554/elife.53462] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2019] [Accepted: 05/18/2020] [Indexed: 12/17/2022] Open
Abstract
Primary cortical areas contain maps of sensory features, including sound frequency in primary auditory cortex (A1). Two-photon calcium imaging in mice has confirmed the presence of these global tonotopic maps, while uncovering an unexpected local variability in the stimulus preferences of individual neurons in A1 and other primary regions. Here we show that local heterogeneity of frequency preferences is not unique to rodents. Using two-photon calcium imaging in layers 2/3, we found that local variance in frequency preferences is equivalent in ferrets and mice. Neurons with multipeaked frequency tuning are less spatially organized than those tuned to a single frequency in both species. Furthermore, we show that microelectrode recordings may describe a smoother tonotopic arrangement due to a sampling bias towards neurons with simple frequency tuning. These results help explain previous inconsistencies in cortical topography across species and recording techniques.
Collapse
Affiliation(s)
- Quentin Gaucher
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, United Kingdom
| | - Mariangela Panniello
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, United Kingdom
| | - Aleksandar Z Ivanov
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, United Kingdom
| | - Johannes C Dahmen
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, United Kingdom
| | - Andrew J King
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, United Kingdom
- Kerry MM Walker
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, United Kingdom
| |
Collapse
|
23
|
Paton JJ, Buonomano DV. The Neural Basis of Timing: Distributed Mechanisms for Diverse Functions. Neuron 2019; 98:687-705. [PMID: 29772201 DOI: 10.1016/j.neuron.2018.03.045] [Citation(s) in RCA: 197] [Impact Index Per Article: 39.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2018] [Revised: 02/26/2018] [Accepted: 03/24/2018] [Indexed: 12/15/2022]
Abstract
Timing is critical to most forms of learning, behavior, and sensory-motor processing. Converging evidence supports the notion that, precisely because of its importance across a wide range of brain functions, timing relies on intrinsic and general properties of neurons and neural circuits; that is, the brain uses its natural cellular and network dynamics to solve a diversity of temporal computations. Many circuits have been shown to encode elapsed time in dynamically changing patterns of neural activity-so-called population clocks. But temporal processing encompasses a wide range of different computations, and just as there are different circuits and mechanisms underlying computations about space, there are a multitude of circuits and mechanisms underlying the ability to tell time and generate temporal patterns.
Collapse
Affiliation(s)
- Joseph J Paton
- Champalimaud Research, Champalimaud Centre for the Unknown, Lisbon, Portugal.
| | - Dean V Buonomano
- Departments of Neurobiology and Psychology and Brain Research Institute, Integrative Center for Learning and Memory, University of California, Los Angeles, Los Angeles, CA, USA.
| |
Collapse
|
24
|
Laboy-Juárez KJ, Langberg T, Ahn S, Feldman DE. Elementary motion sequence detectors in whisker somatosensory cortex. Nat Neurosci 2019; 22:1438-1449. [PMID: 31332375 PMCID: PMC6713603 DOI: 10.1038/s41593-019-0448-6] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2018] [Accepted: 06/11/2019] [Indexed: 01/09/2023]
Abstract
How somatosensory cortex (S1) encodes complex patterns of touch, as occur during tactile exploration, is poorly understood. In mouse whisker S1, temporally dense stimulation of local whisker pairs revealed that most neurons are not classical single-whisker feature detectors, but instead are strongly tuned to 2-whisker sequences involving the columnar whisker (CW) and one, specific surround whisker (SW), usually in SW-leading-CW order. Tuning was spatiotemporally precise and diverse across cells, generating a rate code for local motion vectors defined by SW-CW combinations. Spatially asymmetric, sublinear suppression for suboptimal combinations and near-linearity for preferred combinations sharpened combination tuning relative to linearly predicted tuning. This resembles computation of motion direction selectivity in vision. SW-tuned neurons, misplaced in the classical whisker map, had the strongest combination tuning. Thus, each S1 column contains a rate code for local motion sequences involving the CW, providing a basis for higher-order feature extraction.
Collapse
Affiliation(s)
- Keven J Laboy-Juárez
- Department of Molecular and Cell Biology and Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, CA, USA; Department of Organismic and Evolutionary Biology and Center for Brain Science, Harvard University, Cambridge, MA, USA
| | - Tomer Langberg
- Department of Molecular and Cell Biology and Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, CA, USA
| | - Seoiyoung Ahn
- Department of Molecular and Cell Biology and Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, CA, USA
| | - Daniel E Feldman
- Department of Molecular and Cell Biology and Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, CA, USA.
| |
Collapse
|
25
|
Crommett LE, Madala D, Yau JM. Multisensory perceptual interactions between higher-order temporal frequency signals. J Exp Psychol Gen 2019; 148:1124-1137. [PMID: 30335446 PMCID: PMC6472995 DOI: 10.1037/xge0000513] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Naturally occurring signals in audition and touch can be complex and marked by temporal variations in frequency and amplitude. Auditory frequency sweep processing has been studied extensively; however, much less is known about sweep processing in touch because studies have primarily focused on the perception of simple sinusoidal vibrations. Given the extensive interactions between audition and touch in the frequency processing of pure tone signals, we reasoned that these senses might also interact in the processing of higher-order frequency representations like sweeps. In a series of psychophysical experiments, we characterized the influence of auditory distractors on the ability of participants to discriminate tactile frequency sweeps. Auditory frequency sweeps systematically biased the tactile perception of sweep direction. Importantly, auditory cues exerted little influence on tactile sweep direction perception when the sounds and vibrations occupied different absolute frequency ranges or when the sounds consisted of intensity sweeps. Thus, audition and touch interact in frequency sweep perception in a frequency- and feature-specific manner. Our results demonstrate that audio-tactile interactions are not constrained to the processing of simple sinusoids. Because higher-order frequency representations may be synthesized from simpler representations, our findings imply that multisensory interactions in the temporal frequency domain span multiple hierarchical levels in sensory processing. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
Collapse
Affiliation(s)
- Lexi E. Crommett
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas 77030, USA
| | | | - Jeffrey M. Yau
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas 77030, USA
| |
Collapse
|
26
|
Xu N, Luo L, Wang Q, Li L. Binaural unmasking of the accuracy of envelope-signal representation in rat auditory cortex but not auditory midbrain. Hear Res 2019; 377:224-233. [PMID: 30991272 DOI: 10.1016/j.heares.2019.04.003] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/26/2018] [Revised: 03/25/2019] [Accepted: 04/03/2019] [Indexed: 01/16/2023]
Abstract
Accurate neural representations of acoustic signals under noisy conditions are critical for animals' survival. Detecting signal against background noise can be improved by binaural hearing particularly when an interaural-time-difference (ITD) disparity is introduced between the signal and the noise, a phenomenon known as binaural unmasking. Previous studies have mainly focused on the binaural unmasking effect on response magnitudes, and it is not clear whether binaural unmasking affects the accuracy of central representations of target acoustic signals and the relative contributions of different central auditory structures to this accuracy. Frequency following responses (FFRs), which are sustained phase-locked neural activities, can be used for measuring the accuracy of the representation of signals. Using intracranial recordings of local field potentials, this study aimed to assess whether the binaural unmasking effects include an improvement of the accuracy of neural representations of sound-envelope signals in the rat IC and/or auditory cortex (AC). The results showed that (1) when a narrow-band noise was presented binaurally, the stimulus-response (S-R) coherence of the FFRs to the envelope (FFRenvelope) of the narrow-band noise recorded in the IC was higher than that recorded in the AC. (2) Presenting a broad-band masking noise caused a larger reduction of the S-R coherence for FFRenvelope in the IC than that in the AC. (3) Introducing an ITD disparity between the narrow-band signal noise and the broad-band masking noise did not affect the IC S-R coherence, but enhanced both the AC S-R coherence and the coherence between the IC FFRenvelope and AC FFRenvelope. Thus, although the accuracy of representing envelope signals in the AC is lower than that in the IC, it can be binaurally unmasked, indicating a binaural-unmasking mechanism that is formed during the signal transmission from the IC to the AC.
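One common way to quantify the accuracy of an envelope representation is the stimulus-response coherence between the stimulus envelope and the recorded FFR, as used above. The sketch below computes such a coherence on simulated data with scipy; the sampling rate, envelope frequency, delay, and noise level are all illustrative assumptions rather than values from the study.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(3)
fs = 2000.0                                    # sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)

# A 40-Hz amplitude envelope, standing in for the envelope of a narrow-band noise
envelope = 1.0 + 0.8 * np.sin(2 * np.pi * 40 * t)

# Simulated FFR_envelope: a delayed, noisy copy of the stimulus envelope
resp = np.roll(envelope, int(0.01 * fs)) + 0.5 * rng.standard_normal(t.size)

# Stimulus-response (S-R) coherence, averaged around the envelope frequency
f, Cxy = coherence(envelope, resp, fs=fs, nperseg=512)
band = (f >= 30) & (f <= 50)
print("Mean S-R coherence, 30-50 Hz:", round(float(Cxy[band].mean()), 3))
```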
Collapse
Affiliation(s)
- Na Xu
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, 100080, China
| | - Lu Luo
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, 100080, China
| | - Qian Wang
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, 100080, China; Beijing Key Laboratory of Epilepsy, Epilepsy Center, Department of Functional Neurosurgery, Sanbo Brain Hospital, Capital Medical University, Beijing, 100093, China
| | - Liang Li
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, 100080, China; Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing, 100871, China; Beijing Institute for Brain Disorders, Beijing, 100096, China.
| |
Collapse
|
27
|
Moerel M, De Martino F, Uğurbil K, Yacoub E, Formisano E. Processing complexity increases in superficial layers of human primary auditory cortex. Sci Rep 2019; 9:5502. [PMID: 30940888 PMCID: PMC6445291 DOI: 10.1038/s41598-019-41965-w] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2018] [Accepted: 03/20/2019] [Indexed: 11/29/2022] Open
Abstract
The layers of the neocortex each have a unique anatomical connectivity and functional role. Their exploration in the human brain, however, has been severely restricted by the limited spatial resolution of non-invasive measurement techniques. Here, we exploit the sensitivity and specificity of ultra-high field fMRI at 7 Tesla to investigate responses to natural sounds at deep, middle, and superficial cortical depths of the human auditory cortex. Specifically, we compare the performance of computational models that represent different hypotheses on sound processing inside and outside the primary auditory cortex (PAC). We observe that while BOLD responses in deep and middle PAC layers are equally well represented by a simple frequency model and a more complex spectrotemporal modulation model, responses in superficial PAC are better represented by the more complex model. This indicates an increase in processing complexity in superficial PAC, which remains present throughout cortical depths in the non-primary auditory cortex. These results suggest that a relevant transformation in sound processing takes place between the thalamo-recipient middle PAC layers and superficial PAC. This transformation may be a first computational step towards sound abstraction and perception, serving to form an increasingly more complex representation of the physical input.
Collapse
Affiliation(s)
- Michelle Moerel
- Maastricht Centre for Systems Biology, Maastricht University, Universiteitssingel 60, 6229 ER, Maastricht, The Netherlands.
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Oxfordlaan 55, 6229 EV, Maastricht, The Netherlands.
- Maastricht Brain Imaging Center (MBIC), Oxfordlaan 55, 6229 EV, Maastricht, The Netherlands.
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, 2021 6th Street SE, Minneapolis, MN, 55455, USA.
| | - Federico De Martino
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Oxfordlaan 55, 6229 EV, Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), Oxfordlaan 55, 6229 EV, Maastricht, The Netherlands
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, 2021 6th Street SE, Minneapolis, MN, 55455, USA
| | - Kâmil Uğurbil
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, 2021 6th Street SE, Minneapolis, MN, 55455, USA
| | - Essa Yacoub
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, 2021 6th Street SE, Minneapolis, MN, 55455, USA
| | - Elia Formisano
- Maastricht Centre for Systems Biology, Maastricht University, Universiteitssingel 60, 6229 ER, Maastricht, The Netherlands
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Oxfordlaan 55, 6229 EV, Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), Oxfordlaan 55, 6229 EV, Maastricht, The Netherlands
| |
Collapse
|
28
|
Liu ST, Montes-Lourido P, Wang X, Sadagopan S. Optimal features for auditory categorization. Nat Commun 2019; 10:1302. [PMID: 30899018 PMCID: PMC6428858 DOI: 10.1038/s41467-019-09115-y] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2018] [Accepted: 02/20/2019] [Indexed: 01/13/2023] Open
Abstract
Humans and vocal animals use vocalizations to communicate with members of their species. A necessary function of auditory perception is to generalize across the high variability inherent in vocalization production and classify them into behaviorally distinct categories ('words' or 'call types'). Here, we demonstrate that detecting mid-level features in calls achieves production-invariant classification. Starting from randomly chosen marmoset call features, we use a greedy search algorithm to determine the most informative and least redundant features necessary for call classification. High classification performance is achieved using only 10-20 features per call type. Predictions of tuning properties of putative feature-selective neurons accurately match some observed auditory cortical responses. This feature-based approach also succeeds for call categorization in other species, and for other complex classification tasks such as caller identification. Our results suggest that high-level neural representations of sounds are based on task-dependent features optimized for specific computational goals.
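The greedy search described above can be sketched as forward feature selection that repeatedly adds the candidate feature giving the largest gain in cross-validated classification accuracy. The toy data, feature count, and classifier below are assumptions for illustration only, not the study's marmoset call features or code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

# Toy problem: 200 calls x 60 candidate features, few of which are informative
n_calls, n_feat = 200, 60
X = rng.normal(0, 1, (n_calls, n_feat))
y = (X[:, 3] + X[:, 17] - X[:, 42] + 0.5 * rng.standard_normal(n_calls)) > 0

def greedy_select(X, y, n_keep=5):
    """Greedy forward search: at each step add the feature giving the best
    cross-validated accuracy (most informative, least redundant given the set)."""
    chosen = []
    for _ in range(n_keep):
        best_f, best_score = None, -np.inf
        for f in range(X.shape[1]):
            if f in chosen:
                continue
            cols = chosen + [f]
            score = cross_val_score(LogisticRegression(max_iter=1000),
                                    X[:, cols], y, cv=5).mean()
            if score > best_score:
                best_f, best_score = f, score
        chosen.append(best_f)
        print(f"added feature {best_f:2d}, CV accuracy {best_score:.3f}")
    return chosen

greedy_select(X, y)
```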
Collapse
Affiliation(s)
- Shi Tong Liu
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, 15213, PA, USA
| | - Pilar Montes-Lourido
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, 15213, PA, USA
| | - Xiaoqin Wang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, 21205, MD, USA
| | - Srivatsun Sadagopan
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, 15213, PA, USA; Department of Neurobiology, University of Pittsburgh, Pittsburgh, 15213, PA, USA; Department of Otolaryngology, University of Pittsburgh, Pittsburgh, 15213, PA, USA.
| |
Collapse
|
29
|
Saunders JL, Wehr M. Mice can learn phonetic categories. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2019; 145:1168. [PMID: 31067917 PMCID: PMC6910010 DOI: 10.1121/1.5091776] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/04/2018] [Revised: 01/26/2019] [Accepted: 02/04/2019] [Indexed: 06/09/2023]
Abstract
Speech is perceived as a series of relatively invariant phonemes despite extreme variability in the acoustic signal. To be perceived as nearly-identical phonemes, speech sounds that vary continuously over a range of acoustic parameters must be perceptually discretized by the auditory system. Such many-to-one mappings of undifferentiated sensory information to a finite number of discrete categories are ubiquitous in perception. Although many mechanistic models of phonetic perception have been proposed, they remain largely unconstrained by neurobiological data. Current human neurophysiological methods lack the necessary spatiotemporal resolution to provide it: speech is too fast, and the neural circuitry involved is too small. This study demonstrates that mice are capable of learning generalizable phonetic categories, and can thus serve as a model for phonetic perception. Mice learned to discriminate consonants and generalized consonant identity across novel vowel contexts and speakers, consistent with true category learning. A mouse model, given the powerful genetic and electrophysiological tools for probing neural circuits available for them, has the potential to powerfully augment a mechanistic understanding of phonetic perception.
Collapse
Affiliation(s)
- Jonny L Saunders
- University of Oregon, Institute of Neuroscience and Department of Psychology, Eugene, Oregon 97403, USA
| | - Michael Wehr
- University of Oregon, Institute of Neuroscience and Department of Psychology, Eugene, Oregon 97403, USA
| |
Collapse
|
30
|
Neural processes of vocal social perception: Dog-human comparative fMRI studies. Neurosci Biobehav Rev 2019; 85:54-64. [PMID: 29287629 DOI: 10.1016/j.neubiorev.2017.11.017] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2017] [Revised: 11/20/2017] [Accepted: 11/23/2017] [Indexed: 11/20/2022]
Abstract
In this review we focus on the exciting new opportunities in comparative neuroscience to study neural processes of vocal social perception by comparing dog and human neural activity using fMRI methods. The dog is a relatively new addition to this research area; however, it has a large potential to become a standard species in such investigations. Although there has been great interest in the emergence of human language abilities, in case of fMRI methods, most research to date focused on homologue comparisons within Primates. By belonging to a very different clade of mammalian evolution, dogs could give such research agendas a more general mammalian foundation. In addition, broadening the scope of investigations into vocal communication in general can also deepen our understanding of human vocal skills. Being selected for and living in an anthropogenic environment, research with dogs may also be informative about the way in which human non-linguistic and linguistic signals are represented in a mammalian brain without skills for language production.
Collapse
|
31
|
Williamson RS, Polley DB. Parallel pathways for sound processing and functional connectivity among layer 5 and 6 auditory corticofugal neurons. eLife 2019; 8:e42974. [PMID: 30735128 PMCID: PMC6384027 DOI: 10.7554/elife.42974] [Citation(s) in RCA: 56] [Impact Index Per Article: 11.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2018] [Accepted: 02/06/2019] [Indexed: 11/27/2022] Open
Abstract
Cortical layers (L) 5 and 6 are populated by intermingled cell-types with distinct inputs and downstream targets. Here, we made optogenetically guided recordings from L5 corticofugal (CF) and L6 corticothalamic (CT) neurons in the auditory cortex of awake mice to discern differences in sensory processing and underlying patterns of functional connectivity. Whereas L5 CF neurons showed broad stimulus selectivity with sluggish response latencies and extended temporal non-linearities, L6 CTs exhibited sparse selectivity and rapid temporal processing. L5 CF spikes lagged behind neighboring units and imposed weak feedforward excitation within the local column. By contrast, L6 CT spikes drove robust and sustained activity, particularly in local fast-spiking interneurons. Our findings underscore a duality among sub-cortical projection neurons, where L5 CF units are canonical broadcast neurons that integrate sensory inputs for transmission to distributed downstream targets, while L6 CT neurons are positioned to regulate thalamocortical response gain and selectivity.
Collapse
Affiliation(s)
- Ross S Williamson
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, United States
- Department of Otolaryngology, Harvard Medical School, Boston, United States
| | - Daniel B Polley
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, United States
- Department of Otolaryngology, Harvard Medical School, Boston, United States
| |
Collapse
|
32
|
Bottjer SW, Ronald AA, Kaye T. Response properties of single neurons in higher level auditory cortex of adult songbirds. J Neurophysiol 2019; 121:218-237. [PMID: 30461366 PMCID: PMC6383665 DOI: 10.1152/jn.00751.2018] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2018] [Accepted: 11/08/2018] [Indexed: 01/28/2023] Open
Abstract
The caudomedial nidopallium (NCM) is a higher level region of auditory cortex in songbirds that has been implicated in encoding learned vocalizations and mediating perception of complex sounds. We made cell-attached recordings in awake adult male zebra finches ( Taeniopygia guttata) to characterize responses of single NCM neurons to playback of tones and songs. Neurons fell into two broad classes: narrow fast-spiking cells and broad sparsely firing cells. Virtually all narrow-spiking cells responded to playback of pure tones, compared with approximately half of broad-spiking cells. In addition, narrow-spiking cells tended to have lower thresholds and faster, less variable spike onset latencies than did broad-spiking cells, as well as higher firing rates. Tonal responses of narrow-spiking cells also showed broader ranges for both frequency and amplitude compared with broad-spiking neurons and were more apt to have V-shaped tuning curves compared with broad-spiking neurons, which tended to have complex (discontinuous), columnar, or O-shaped frequency response areas. In response to playback of conspecific songs, narrow-spiking neurons showed high firing rates and low levels of selectivity whereas broad-spiking neurons responded sparsely and selectively. Broad-spiking neurons in which tones failed to evoke a response showed greater song selectivity compared with those with a clear tuning curve. These results are consistent with the idea that narrow-spiking neurons represent putative fast-spiking interneurons, which may provide a source of intrinsic inhibition that contributes to the more selective tuning in broad-spiking cells. NEW & NOTEWORTHY The response properties of neurons in higher level regions of auditory cortex in songbirds are of fundamental interest because processing in such regions is essential for vocal learning and plasticity and for auditory perception of complex sounds. Within a region of secondary auditory cortex, neurons with narrow spikes exhibited high firing rates to playback of both tones and multiple conspecific songs, whereas broad-spiking neurons responded sparsely and selectively to both tones and songs.
Collapse
Affiliation(s)
- Sarah W Bottjer
- Section of Neurobiology, University of Southern California , Los Angeles, California
| | - Andrew A Ronald
- Section of Neurobiology, University of Southern California , Los Angeles, California
| | - Tiara Kaye
- Section of Neurobiology, University of Southern California , Los Angeles, California
| |
Collapse
|
33
|
Norman-Haignere SV, McDermott JH. Neural responses to natural and model-matched stimuli reveal distinct computations in primary and nonprimary auditory cortex. PLoS Biol 2018; 16:e2005127. [PMID: 30507943 PMCID: PMC6292651 DOI: 10.1371/journal.pbio.2005127] [Citation(s) in RCA: 51] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2017] [Revised: 12/13/2018] [Accepted: 11/08/2018] [Indexed: 11/19/2022] Open
Abstract
A central goal of sensory neuroscience is to construct models that can explain neural responses to natural stimuli. As a consequence, sensory models are often tested by comparing neural responses to natural stimuli with model responses to those stimuli. One challenge is that distinct model features are often correlated across natural stimuli, and thus model features can predict neural responses even if they do not in fact drive them. Here, we propose a simple alternative for testing a sensory model: we synthesize a stimulus that yields the same model response as each of a set of natural stimuli, and test whether the natural and "model-matched" stimuli elicit the same neural responses. We used this approach to test whether a common model of auditory cortex-in which spectrogram-like peripheral input is processed by linear spectrotemporal filters-can explain fMRI responses in humans to natural sounds. Prior studies have shown that this model has good predictive power throughout auditory cortex, but this finding could reflect feature correlations in natural stimuli. We observed that fMRI responses to natural and model-matched stimuli were nearly equivalent in primary auditory cortex (PAC) but that nonprimary regions, including those selective for music or speech, showed highly divergent responses to the two sound sets. This dissociation between primary and nonprimary regions was less clear from model predictions due to the influence of feature correlations across natural stimuli. Our results provide a signature of hierarchical organization in human auditory cortex, and suggest that nonprimary regions compute higher-order stimulus properties that are not well captured by traditional models. Our methodology enables stronger tests of sensory models and could be broadly applied in other domains.
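The logic of a model-matched stimulus is easiest to see for a purely linear model: starting from noise, the stimulus is adjusted so that the model response exactly matches that to a natural stimulus, while stimulus dimensions the model ignores remain random. The sketch below shows this for a toy linear filter bank; the actual study used a nonlinear spectrotemporal model and an iterative synthesis procedure, so this is only the underlying idea, not the published method.

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy linear encoding model: responses of 30 measurement channels
# to a 200-dimensional stimulus (e.g. a flattened cochleagram patch)
n_resp, n_dim = 30, 200
W = rng.normal(0, 1, (n_resp, n_dim))

natural = rng.normal(0, 1, n_dim)              # stands in for a natural sound
target = W @ natural                           # model response to be matched

# Start from noise and correct it with the pseudoinverse so the model response
# matches exactly, while everything the model does not measure stays random
noise = rng.normal(0, 1, n_dim)
matched = noise + np.linalg.pinv(W) @ (target - W @ noise)

print("response mismatch         :", np.max(np.abs(W @ matched - target)))
print("stimulus similarity (corr):", np.corrcoef(natural, matched)[0, 1])
```

If neural responses to `natural` and `matched` differ despite identical model responses, the model is missing features the neurons care about, which is the dissociation reported for nonprimary auditory cortex.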
Collapse
Affiliation(s)
- Sam V. Norman-Haignere
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Zuckerman Institute of Mind, Brain and Behavior, Columbia University, New York, New York, United States of America
- Laboratoire des Systèmes Perceptifs, Département d’Études Cognitives, ENS, PSL University, CNRS, Paris, France
| | - Josh H. McDermott
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Program in Speech and Hearing Biosciences and Technology, Harvard University, Cambridge, Massachusetts, United States of America
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
| |
Collapse
|
34
|
Zhu S, Allitt B, Samuel A, Lui L, Rosa MGP, Rajan R. Distributed representation of vocalization pitch in marmoset primary auditory cortex. Eur J Neurosci 2018; 49:179-198. [PMID: 30307660 DOI: 10.1111/ejn.14204] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2018] [Revised: 09/10/2018] [Accepted: 10/04/2018] [Indexed: 11/30/2022]
Abstract
The pitch of vocalizations is a key communication feature aiding recognition of individuals and separating sound sources in complex acoustic environments. The neural representation of the pitch of periodic sounds is well defined. However, many natural sounds, like complex vocalizations, contain rich, aperiodic or not strictly periodic frequency content and/or include high-frequency components, but still evoke a strong sense of pitch. Indeed, such sounds are the rule, not the exception but the cortical mechanisms for encoding pitch of such sounds are unknown. We investigated how neurons in the high-frequency representation of primary auditory cortex (A1) of marmosets encoded changes in pitch of four natural vocalizations, two centred around a dominant frequency similar to the neuron's best sensitivity and two around a much lower dominant frequency. Pitch was varied over a fine range that can be used by marmosets to differentiate individuals. The responses of most high-frequency A1 neurons were sensitive to pitch changes in all four vocalizations, with a smaller proportion of the neurons showing pitch-insensitive responses. Classically defined excitatory drive, from the neuron's monaural frequency response area, predicted responses to changes in vocalization pitch in <30% of neurons suggesting most pitch tuning observed is not simple frequency-level response. Moreover, 39% of A1 neurons showed call-invariant tuning of pitch. These results suggest that distributed activity across A1 can represent the pitch of natural sounds over a fine, functionally relevant range, and exhibits pitch tuning for vocalizations within and outside the classical neural tuning area.
Collapse
Affiliation(s)
- Shuyu Zhu
- Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, Victoria, Australia; Centre of Excellence in Integrative Brain Function, Australian Research Council, Clayton, Victoria, Australia
| | - Ben Allitt
- Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, Victoria, Australia
| | - Anil Samuel
- Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, Victoria, Australia
| | - Leo Lui
- Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, Victoria, Australia; Centre of Excellence in Integrative Brain Function, Australian Research Council, Clayton, Victoria, Australia
| | - Marcello G P Rosa
- Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, Victoria, Australia; Centre of Excellence in Integrative Brain Function, Australian Research Council, Clayton, Victoria, Australia
| | - Ramesh Rajan
- Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, Victoria, Australia
| |
Collapse
|
35
|
Steadman MA, Sumner CJ. Changes in Neuronal Representations of Consonants in the Ascending Auditory System and Their Role in Speech Recognition. Front Neurosci 2018; 12:671. [PMID: 30369863 PMCID: PMC6194309 DOI: 10.3389/fnins.2018.00671] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2018] [Accepted: 09/06/2018] [Indexed: 11/25/2022] Open
Abstract
A fundamental task of the ascending auditory system is to produce representations that facilitate the recognition of complex sounds. This is particularly challenging in the context of acoustic variability, such as that between different talkers producing the same phoneme. These representations are transformed as information is propagated throughout the ascending auditory system from the inner ear to the auditory cortex (AI). Investigating these transformations and their role in speech recognition is key to understanding hearing impairment and the development of future clinical interventions. Here, we obtained neural responses to an extensive set of natural vowel-consonant-vowel phoneme sequences, each produced by multiple talkers, in three stages of the auditory processing pathway. Auditory nerve (AN) representations were simulated using a model of the peripheral auditory system and extracellular neuronal activity was recorded in the inferior colliculus (IC) and primary auditory cortex (AI) of anaesthetized guinea pigs. A classifier was developed to examine the efficacy of these representations for recognizing the speech sounds. Individual neurons convey progressively less information from AN to AI. Nonetheless, at the population level, representations are sufficiently rich to facilitate recognition of consonants with a high degree of accuracy at all stages indicating a progression from a dense, redundant representation to a sparse, distributed one. We examined the timescale of the neural code for consonant recognition and found that optimal timescales increase throughout the ascending auditory system from a few milliseconds in the periphery to several tens of milliseconds in the cortex. Despite these longer timescales, we found little evidence to suggest that representations up to the level of AI become increasingly invariant to across-talker differences. Instead, our results support the idea that the role of the subcortical auditory system is one of dimensionality expansion, which could provide a basis for flexible classification of arbitrary speech sounds.
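The dependence of decoding on the timescale of the neural code can be illustrated by binning simulated spike trains at different resolutions and classifying with nearest-template matching, as in the sketch below. The rate profiles, bin sizes, and classifier are illustrative choices and not the study's analysis; the point is only that classes distinguished by spike timing rather than spike count become separable only at sufficiently fine timescales.

```python
import numpy as np

rng = np.random.default_rng(5)

def spike_counts(rate_profile, bin_ms, dur_ms=200):
    """Bin a Poisson-like spike train generated from a rate profile (spikes/s)."""
    edges = np.arange(0, dur_ms + bin_ms, bin_ms)
    t_fine = np.arange(dur_ms)
    rate = np.interp(t_fine, np.linspace(0, dur_ms, rate_profile.size), rate_profile)
    spikes = rng.random(dur_ms) < rate / 1000.0        # 1-ms Bernoulli approximation
    return np.histogram(np.where(spikes)[0], bins=edges)[0]

# Two 'consonants' that differ only in response timing, not total spike count
profiles = {"A": np.array([80, 10, 10, 10]), "B": np.array([10, 10, 10, 80])}

for bin_ms in (5, 20, 50, 200):
    templates = {k: np.mean([spike_counts(p, bin_ms) for _ in range(50)], axis=0)
                 for k, p in profiles.items()}
    correct = 0
    for k, p in profiles.items():
        for _ in range(100):
            trial = spike_counts(p, bin_ms)
            guess = min(templates, key=lambda c: np.sum((trial - templates[c]) ** 2))
            correct += (guess == k)
    print(f"bin = {bin_ms:3d} ms -> accuracy {correct / 200:.2f}")
```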
Collapse
Affiliation(s)
- Mark A. Steadman
- MRC Institute of Hearing Research, School of Medicine, The University of Nottingham, Nottingham, United Kingdom
- Department of Bioengineering, Imperial College London, London, United Kingdom
| | - Christian J. Sumner
- MRC Institute of Hearing Research, School of Medicine, The University of Nottingham, Nottingham, United Kingdom
| |
Collapse
|
36
|
Westö J, May PJC. Describing complex cells in primary visual cortex: a comparison of context and multifilter LN models. J Neurophysiol 2018; 120:703-719. [PMID: 29718805 PMCID: PMC6139451 DOI: 10.1152/jn.00916.2017] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2018] [Revised: 04/30/2018] [Accepted: 04/30/2018] [Indexed: 11/24/2022] Open
Abstract
Receptive field (RF) models are an important tool for deciphering neural responses to sensory stimuli. The two currently popular RF models are multifilter linear-nonlinear (LN) models and context models. Models are, however, never correct, and they rely on assumptions to keep them simple enough to be interpretable. As a consequence, different models describe different stimulus-response mappings, which may or may not be good approximations of real neural behavior. In the current study, we take up two tasks: 1) we introduce new ways to estimate context models with realistic nonlinearities, that is, with logistic and exponential functions, and 2) we evaluate context models and multifilter LN models in terms of how well they describe recorded data from complex cells in cat primary visual cortex. Our results, based on single-spike information and correlation coefficients, indicate that context models outperform corresponding multifilter LN models of equal complexity (measured in terms of number of parameters), with the best increase in performance being achieved by the novel context models. Consequently, our results suggest that the multifilter LN-model framework is suboptimal for describing the behavior of complex cells: the context-model framework is clearly superior while still providing interpretable quantizations of neural behavior. NEW & NOTEWORTHY We used data from complex cells in primary visual cortex to estimate a wide variety of receptive field models from two frameworks that have previously not been compared with each other. The models included traditionally used multifilter linear-nonlinear models and novel variants of context models. Using mutual information and correlation coefficients as performance measures, we showed that context models are superior for describing complex cells and that the novel context models performed the best.
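One of the performance measures mentioned above, single-spike information, can be computed from a time-varying firing rate following Brenner et al. (2000). The sketch below applies that formula to two synthetic rate profiles; the profiles are illustrative and this is not the authors' estimation code.

```python
import numpy as np

def single_spike_information(rate):
    """
    Information per spike (bits) carried by a time-varying firing rate r(t),
    following Brenner et al. (2000): I = < (r / r_mean) * log2(r / r_mean) >_t.
    """
    rate = np.asarray(rate, dtype=float)
    ratio = rate / rate.mean()
    terms = np.zeros_like(ratio)
    pos = ratio > 0
    terms[pos] = ratio[pos] * np.log2(ratio[pos])      # 0 * log(0) -> 0 by convention
    return terms.mean()

# A strongly modulated rate profile carries more bits per spike than a flat one
t = np.linspace(0, 1, 1000)
modulated = 40.0 * (1.0 + np.sin(2 * np.pi * 4 * t))
flat = np.full_like(t, 40.0)

print("bits/spike, modulated:", round(single_spike_information(modulated), 3))
print("bits/spike, flat     :", round(single_spike_information(flat), 3))
```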
Collapse
Affiliation(s)
- Johan Westö
- Department of Neuroscience and Biomedical Engineering Aalto University , Espoo , Finland
| | - Patrick J C May
- Department of Psychology, Lancaster University , Lancaster , United Kingdom
| |
Collapse
|
37
|
Abstract
How the cerebral cortex encodes auditory features of biologically important sounds, including speech and music, is one of the most important questions in auditory neuroscience. The pursuit to understand related neural coding mechanisms in the mammalian auditory cortex can be traced back several decades to the early exploration of the cerebral cortex. Significant progress in this field has been made in the past two decades with new technical and conceptual advances. This article reviews the progress and challenges in this area of research.
Collapse
Affiliation(s)
- Xiaoqin Wang
- Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21205, USA
| |
Collapse
|
38
|
Martin LM, García-Rosales F, Beetz MJ, Hechavarría JC. Processing of temporally patterned sounds in the auditory cortex of Seba's short-tailed bat, Carollia perspicillata. Eur J Neurosci 2018; 46:2365-2379. [PMID: 28921742 DOI: 10.1111/ejn.13702] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2017] [Revised: 09/06/2017] [Accepted: 09/07/2017] [Indexed: 11/29/2022]
Abstract
This article presents a characterization of cortical responses to artificial and natural temporally patterned sounds in the bat species Carollia perspicillata, a species that produces vocalizations at rates above 50 Hz. Multi-unit activity was recorded in three different experiments. In the first experiment, amplitude-modulated (AM) pure tones were used as stimuli to drive auditory cortex (AC) units. AC units of both ketamine-anesthetized and awake bats could lock their spikes to every cycle of the stimulus modulation envelope, but only if the modulation frequency was below 22 Hz. In the second experiment, two identical communication syllables were presented at variable intervals. Suppressed responses to the lagging syllable were observed, unless the second syllable followed the first one with a delay of at least 80 ms (i.e., 12.5 Hz repetition rate). In the third experiment, natural distress vocalization sequences were used as stimuli to drive AC units. Distress sequences produced by C. perspicillata contain bouts of syllables repeated at intervals of ~60 ms (16 Hz). Within each bout, syllables are repeated at intervals as short as 14 ms (~71 Hz). Cortical units could follow the slow temporal modulation flow produced by the occurrence of multisyllabic bouts, but not the fast acoustic flow created by rapid syllable repetition within the bouts. Taken together, our results indicate that even in fast vocalizing animals, such as bats, cortical neurons can only track the temporal structure of acoustic streams modulated at frequencies lower than 22 Hz.
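Locking of spikes to the amplitude-modulation envelope, as tested in the first experiment above, is commonly quantified with vector strength. The sketch below computes it for one phase-locked and one unlocked synthetic spike train; the measure and the simulated data are generic illustrations, not the paper's own analysis pipeline.

```python
import numpy as np

def vector_strength(spike_times, mod_freq):
    """Phase-locking of spikes to the amplitude-modulation cycle:
    1.0 = perfect locking to one phase, 0.0 = no locking."""
    phases = 2 * np.pi * mod_freq * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))

rng = np.random.default_rng(6)
dur, mod_freq = 2.0, 10.0                      # 2 s of a 10-Hz AM stimulus

# Locked unit: spikes jittered around one phase of every modulation cycle
locked = np.arange(0, dur, 1 / mod_freq) + rng.normal(0, 0.004, int(dur * mod_freq))
# Unlocked unit: spikes at random times
unlocked = rng.uniform(0, dur, int(dur * mod_freq))

print("VS, locked unit  :", round(float(vector_strength(locked, mod_freq)), 3))
print("VS, unlocked unit:", round(float(vector_strength(unlocked, mod_freq)), 3))
```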
Collapse
Affiliation(s)
- Lisa M Martin
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Max-von-Laue-Straße 13, 60438, Frankfurt/Main, Germany
| | - Francisco García-Rosales
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Max-von-Laue-Straße 13, 60438, Frankfurt/Main, Germany
| | - M Jerome Beetz
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Max-von-Laue-Straße 13, 60438, Frankfurt/Main, Germany
| | - Julio C Hechavarría
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Max-von-Laue-Straße 13, 60438, Frankfurt/Main, Germany
| |
Collapse
|
39
|
Higgins I, Stringer S, Schnupp J. A Computational Account of the Role of Cochlear Nucleus and Inferior Colliculus in Stabilizing Auditory Nerve Firing for Auditory Category Learning. Neural Comput 2018; 30:1801-1829. [PMID: 29652586 DOI: 10.1162/neco_a_01085] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
It is well known that auditory nerve (AN) fibers overcome bandwidth limitations through the volley principle, a form of multiplexing. What is less well known is that the volley principle introduces a degree of unpredictability into AN neural firing patterns that may be affecting even simple stimulus categorization learning. We use a physiologically grounded, unsupervised spiking neural network model of the auditory brain with spike time dependent plasticity learning to demonstrate that plastic auditory cortex is unable to learn even simple auditory object categories when exposed to the raw AN firing input without subcortical preprocessing. We then demonstrate the importance of nonplastic subcortical preprocessing within the cochlear nucleus and the inferior colliculus for stabilizing and denoising AN responses. Such preprocessing enables the plastic auditory cortex to learn efficient robust representations of the auditory object categories. The biological realism of our model makes it suitable for generating neurophysiologically testable hypotheses.
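The spike-timing-dependent plasticity rule at the heart of such models can be sketched as the standard pair-based exponential STDP window; the amplitudes and time constants below are generic textbook values, not those of the model in the paper.

```python
import numpy as np

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """
    Pair-based STDP rule, dt = t_post - t_pre (ms).
    Pre-before-post (dt > 0) potentiates, post-before-pre (dt < 0) depresses.
    """
    dt = np.asarray(dt_ms, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))

for dt in (-40, -10, 10, 40):
    print(f"dt = {dt:+3d} ms -> dw = {float(stdp_dw(dt)):+.4f}")
```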
Collapse
Affiliation(s)
- Irina Higgins
- Department of Experimental Psychology, University of Oxford, Oxford, OX2 6GG, U.K.
| | - Simon Stringer
- Department of Experimental Psychology, University of Oxford, Oxford, OX2 6GG, U.K.
| | - Jan Schnupp
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, OX1 3QX, U.K.
| |
Collapse
|
40
|
Chang KH, Thomas JM, Boynton GM, Fine I. Reconstructing Tone Sequences from Functional Magnetic Resonance Imaging Blood-Oxygen Level Dependent Responses within Human Primary Auditory Cortex. Front Psychol 2017; 8:1983. [PMID: 29184522 PMCID: PMC5694557 DOI: 10.3389/fpsyg.2017.01983] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2017] [Accepted: 10/30/2017] [Indexed: 01/12/2023] Open
Abstract
Here we show that, using functional magnetic resonance imaging (fMRI) blood-oxygen level dependent (BOLD) responses in human primary auditory cortex, it is possible to reconstruct the sequence of tones that a person has been listening to over time. First, we characterized the tonotopic organization of each subject’s auditory cortex by measuring auditory responses to randomized pure tone stimuli and modeling the frequency tuning of each fMRI voxel as a Gaussian in log frequency space. Then, we tested our model by examining its ability to work in reverse. Auditory responses were re-collected in the same subjects, except this time they listened to sequences of frequencies taken from simple songs (e.g., “Somewhere Over the Rainbow”). By finding the frequency that minimized the difference between the model’s prediction of BOLD responses and actual BOLD responses, we were able to reconstruct tone sequences, with mean frequency estimation errors of half an octave or less, and little evidence of systematic biases.
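The encoding-then-reverse procedure described above can be sketched as follows: each voxel is modeled as a Gaussian in log frequency, and a tone is reconstructed by choosing the candidate frequency whose predicted response pattern best matches the measured pattern. All tuning parameters, the candidate grid, and the noise level below are illustrative assumptions, not fitted values from the study.

```python
import numpy as np

rng = np.random.default_rng(7)

# Each voxel's tuning: a Gaussian in log2 frequency space
n_vox = 60
pref = rng.uniform(np.log2(200), np.log2(8000), n_vox)   # preferred log2 frequency
sigma = rng.uniform(0.5, 1.5, n_vox)                      # tuning width (octaves)
gain = rng.uniform(0.5, 2.0, n_vox)

def predicted_bold(freq_hz):
    lf = np.log2(freq_hz)
    return gain * np.exp(-0.5 * ((lf - pref) / sigma) ** 2)

# 'Measured' BOLD pattern for a true tone, plus noise
true_freq = 880.0
measured = predicted_bold(true_freq) + 0.1 * rng.standard_normal(n_vox)

# Reconstruct the tone: pick the frequency whose predicted pattern best
# matches the measured pattern (least squares over a grid of candidates)
candidates = np.logspace(np.log10(200), np.log10(8000), 500)
errors = [np.sum((predicted_bold(f) - measured) ** 2) for f in candidates]
est = candidates[int(np.argmin(errors))]
print(f"true {true_freq:.0f} Hz, estimated {est:.0f} Hz, "
      f"error {abs(np.log2(est / true_freq)):.2f} octaves")
```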
Collapse
Affiliation(s)
- Kelly H Chang
- Department of Psychology, University of Washington, Seattle, WA, United States
| | - Jessica M Thomas
- Department of Psychology, University of Washington, Seattle, WA, United States
| | - Geoffrey M Boynton
- Department of Psychology, University of Washington, Seattle, WA, United States
| | - Ione Fine
- Department of Psychology, University of Washington, Seattle, WA, United States
| |
Collapse
|
41
|
Beetz MJ, Kordes S, García-Rosales F, Kössl M, Hechavarría JC. Processing of Natural Echolocation Sequences in the Inferior Colliculus of Seba's Fruit Eating Bat, Carollia perspicillata. eNeuro 2017; 4:ENEURO.0314-17.2017. [PMID: 29242823 PMCID: PMC5729038 DOI: 10.1523/eneuro.0314-17.2017] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2017] [Revised: 11/17/2017] [Accepted: 11/25/2017] [Indexed: 11/21/2022] Open
Abstract
For the purpose of orientation, echolocating bats emit highly repetitive and spatially directed sonar calls. Echoes arising from call reflections are used to create an acoustic image of the environment. The inferior colliculus (IC) represents an important auditory stage for initial processing of echolocation signals. The present study addresses the following questions: (1) how does the temporal context of an echolocation sequence mimicking an approach flight of an animal affect neuronal processing of distance information to echo delays? (2) how does the IC process complex echolocation sequences containing echo information from multiple objects (multiobject sequence)? Here, we conducted neurophysiological recordings from the IC of ketamine-anaesthetized bats of the species Carollia perspicillata and compared the results from the IC with the ones from the auditory cortex (AC). Neuronal responses to an echolocation sequence were suppressed when compared to the responses to temporally isolated and randomized segments of the sequence. The neuronal suppression was weaker in the IC than in the AC. In contrast to the cortex, the time course of the acoustic events is reflected by IC activity. In the IC, suppression sharpens the neuronal tuning to specific call-echo elements and increases the signal-to-noise ratio in the units' responses. When presenting multiple-object sequences, despite collicular suppression, the neurons responded to each object-specific echo. The latter allows parallel processing of multiple echolocation streams at the IC level. Altogether, our data suggest that temporally-precise neuronal responses in the IC could allow fast and parallel processing of multiple acoustic streams.
Collapse
Affiliation(s)
- M. Jerome Beetz
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Frankfurt am Main 60438, Germany
- Department of Behavioral Physiology and Sociobiology, Biozentrum, University of Würzburg, Am Hubland, Würzburg 97074, Germany
| | - Sebastian Kordes
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Frankfurt am Main 60438, Germany
| | - Francisco García-Rosales
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Frankfurt am Main 60438, Germany
| | - Manfred Kössl
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Frankfurt am Main 60438, Germany
| | - Julio C. Hechavarría
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Frankfurt am Main 60438, Germany
| |
Collapse
|
42
|
Hoke KL, Hebets EA, Shizuka D. Neural Circuitry for Target Selection and Action Selection in Animal Behavior. Integr Comp Biol 2017; 57:808-819. [DOI: 10.1093/icb/icx109] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/04/2023] Open
43
Toarmino CR, Yen CCC, Papoti D, Bock NA, Leopold DA, Miller CT, Silva AC. Functional magnetic resonance imaging of auditory cortical fields in awake marmosets. Neuroimage 2017; 162:86-92. [PMID: 28830766 DOI: 10.1016/j.neuroimage.2017.08.052] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2017] [Revised: 08/14/2017] [Accepted: 08/18/2017] [Indexed: 11/25/2022] Open
Abstract
The primate auditory cortex is organized into a network of anatomically and functionally distinct processing fields. Because of its tonotopic properties, the auditory core has been the main target of neurophysiological studies ranging from sensory encoding to perceptual decision-making. By comparison, the auditory belt has been studied less extensively, in part because neurons in the belt areas prefer more complex stimuli and integrate over a wider frequency range than neurons in the core, which prefer pure tones of a single frequency. Complementary approaches, such as functional magnetic resonance imaging (fMRI), allow the anatomical identification of both the auditory core and belt and facilitate their functional characterization by rapidly testing a range of stimuli across multiple brain areas simultaneously, which can then guide subsequent neural recordings. Bridging these technologies in primates will further expand our understanding of primate audition. Here, we developed a novel preparation to test whether different areas of the auditory cortex could be identified using fMRI in common marmosets (Callithrix jacchus), a powerful model of the primate auditory system. We used two types of stimulation, band-pass noise and pure tones, to distinguish the auditory core from the surrounding secondary belt fields. In contrast to most auditory fMRI experiments in primates, we employed a continuous sampling paradigm to collect data rapidly with few deleterious effects. Using this preparation, we found robust bilateral auditory cortex activation in two marmosets and unilateral activation in a third. Furthermore, we confirmed results previously reported in electrophysiology experiments, such as the tonotopic organization of the auditory core and regions that respond preferentially to complex over simple stimuli. Overall, these data establish a key preparation for future research investigating the functional properties of marmoset auditory cortex.
Affiliation(s)
- Camille R Toarmino
- Cortical Systems and Behavior Laboratory, Department of Psychology and Neurosciences Graduate Program, The University of California at San Diego, La Jolla, CA, 92093-0109, USA
| | - Cecil C C Yen
- Cerebral Microcirculation Section, Laboratory of Functional and Molecular Imaging, National Institute of Neurological Disorders and Stroke, Bethesda, MD, 20892-4478, USA
| | - Daniel Papoti
- Cerebral Microcirculation Section, Laboratory of Functional and Molecular Imaging, National Institute of Neurological Disorders and Stroke, Bethesda, MD, 20892-4478, USA
| | - Nicholas A Bock
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, L8S 4K1, Canada
| | - David A Leopold
- Section on Cognitive Neurophysiology and Imaging, Laboratory of Neuropsychology, National Institute of Mental Health, Bethesda, MD, 20892-4400, USA
| | - Cory T Miller
- Cortical Systems and Behavior Laboratory, Department of Psychology and Neurosciences Graduate Program, The University of California at San Diego, La Jolla, CA, 92093-0109, USA
| | - Afonso C Silva
- Cerebral Microcirculation Section, Laboratory of Functional and Molecular Imaging, National Institute of Neurological Disorders and Stroke, Bethesda, MD, 20892-4478, USA.
44
Harmonic template neurons in primate auditory cortex underlying complex sound processing. Proc Natl Acad Sci U S A 2017; 114:E840-E848. [PMID: 28096341 DOI: 10.1073/pnas.1607519114] [Citation(s) in RCA: 46] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Harmonicity is a fundamental element of music, speech, and animal vocalizations. How the auditory system extracts harmonic structures embedded in complex sounds and uses them to form a coherent unitary entity is not fully understood. Despite the prevalence of sounds rich in harmonic structures in our everyday hearing environment, it has remained largely unknown what neural mechanisms are used by the primate auditory cortex to extract these biologically important acoustic structures. In this study, we discovered a unique class of harmonic template neurons in the core region of auditory cortex of a highly vocal New World primate, the common marmoset (Callithrix jacchus), across the entire hearing frequency range. Marmosets have a rich vocal repertoire and a similar hearing range to that of humans. Responses of these neurons show nonlinear facilitation to harmonic complex sounds over inharmonic sounds, selectivity for particular harmonic structures beyond two-tone combinations, and sensitivity to harmonic number and spectral regularity. Our findings suggest that the harmonic template neurons in auditory cortex may play an important role in processing sounds with harmonic structures, such as animal vocalizations, human speech, and music.
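The central stimulus contrast in this work is between harmonic complex tones, whose components sit at integer multiples of a fundamental frequency, and inharmonic complexes in which that regularity is broken. A minimal sketch of how such a stimulus pair might be synthesized, assuming a simple random-jitter scheme and illustrative parameter values rather than the authors' exact stimuli:

```python
import numpy as np

def complex_tone(f0, n_components=10, jitter=0.0, dur=0.2, fs=48000, seed=0):
    """Sum of sinusoids at (roughly) harmonic multiples of f0.

    jitter=0.0 gives a strictly harmonic complex; jitter>0 shifts each
    component by up to +/- jitter * f0, yielding an inharmonic complex
    with a similar overall spectral density.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur * fs)) / fs
    freqs = f0 * np.arange(1, n_components + 1)
    freqs = freqs + rng.uniform(-jitter, jitter, size=freqs.shape) * f0
    tone = np.sum([np.sin(2 * np.pi * f * t) for f in freqs], axis=0)
    return tone / np.max(np.abs(tone))  # peak-normalize

harmonic = complex_tone(f0=440.0)                # harmonic complex
inharmonic = complex_tone(f0=440.0, jitter=0.3)  # jittered, inharmonic
```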
45
Issa JB, Haeffele BD, Young ED, Yue DT. Multiscale mapping of frequency sweep rate in mouse auditory cortex. Hear Res 2016; 344:207-222. [PMID: 28011084 DOI: 10.1016/j.heares.2016.11.018] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/31/2016] [Revised: 11/23/2016] [Accepted: 11/28/2016] [Indexed: 11/25/2022]
Abstract
Functional organization is a key feature of the neocortex that often guides studies of sensory processing, development, and plasticity. Tonotopy, which arises from the transduction properties of the cochlea, is the most widely studied organizational feature in auditory cortex; however, in order to process complex sounds, cortical regions are likely specialized for higher order features. Here, motivated by the prevalence of frequency modulations in mouse ultrasonic vocalizations and aided by the use of a multiscale imaging approach, we uncover a functional organization across the extent of auditory cortex for the rate of frequency modulated (FM) sweeps. In particular, using two-photon Ca2+ imaging of layer 2/3 neurons, we identify a tone-insensitive region at the border of AI and AAF. This central sweep region behaves fundamentally differently from nearby neurons in AI and AII, responding preferentially to fast FM sweeps but not to tones or bandlimited noise. Together these findings define a second dimension of organization in the mouse auditory cortex for sweep rate complementary to that of tone frequency.
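The feature mapped here, FM sweep rate, is conventionally expressed in octaves per second, and a logarithmic sweep can be generated by integrating an exponentially changing instantaneous frequency to obtain the phase. A brief sketch under that assumption (sampling rate and sweep parameters are illustrative, not the study's stimulus set):

```python
import numpy as np

def log_fm_sweep(f_start, rate_oct_per_s, n_octaves, fs=192000):
    """Logarithmic FM sweep starting at f_start (Hz) and moving at
    rate_oct_per_s octaves/s until n_octaves have been covered.
    Positive rates sweep upward, negative rates downward."""
    dur = abs(n_octaves / rate_oct_per_s)
    t = np.arange(int(dur * fs)) / fs
    # Instantaneous frequency f(t) = f_start * 2**(rate * t); integrating
    # it gives a phase function with no discontinuities.
    phase = 2 * np.pi * f_start * (2 ** (rate_oct_per_s * t) - 1) \
        / (rate_oct_per_s * np.log(2))
    return np.sin(phase)

# Example: two-octave upward sweeps at a slow and a fast rate.
slow = log_fm_sweep(f_start=8000.0, rate_oct_per_s=10.0, n_octaves=2)
fast = log_fm_sweep(f_start=8000.0, rate_oct_per_s=80.0, n_octaves=2)
```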
Affiliation(s)
- John B Issa
- Department of Biomedical Engineering, The Johns Hopkins University School of Medicine, Ross Building, Room 713, 720 Rutland Avenue, Baltimore, MD 21205, USA.
| | - Benjamin D Haeffele
- Department of Biomedical Engineering, The Johns Hopkins University School of Medicine, Ross Building, Room 713, 720 Rutland Avenue, Baltimore, MD 21205, USA
| | - Eric D Young
- Department of Biomedical Engineering, The Johns Hopkins University School of Medicine, Ross Building, Room 713, 720 Rutland Avenue, Baltimore, MD 21205, USA; Solomon H. Snyder Department of Neuroscience, The Johns Hopkins University School of Medicine, 725 N. Wolfe Street, WBSB, Baltimore, MD 21205, USA
| | - David T Yue
- Department of Biomedical Engineering, The Johns Hopkins University School of Medicine, Ross Building, Room 713, 720 Rutland Avenue, Baltimore, MD 21205, USA; Center for Cell Dynamics, The Johns Hopkins University School of Medicine, 720 Rutland Avenue, Baltimore, MD 21205, USA; Solomon H. Snyder Department of Neuroscience, The Johns Hopkins University School of Medicine, 725 N. Wolfe Street, WBSB, Baltimore, MD 21205, USA
46
Johnson LA, Della Santina CC, Wang X. Selective Neuronal Activation by Cochlear Implant Stimulation in Auditory Cortex of Awake Primate. J Neurosci 2016; 36:12468-12484. [PMID: 27927962 PMCID: PMC5148231 DOI: 10.1523/jneurosci.1699-16.2016] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2016] [Revised: 10/05/2016] [Accepted: 10/10/2016] [Indexed: 11/21/2022] Open
Abstract
Despite the success of cochlear implants (CIs) in human populations, most users perform poorly in noisy environments and in music and tonal-language perception. How CI devices engage the brain at the single-neuron level has remained largely unknown, particularly in the primate brain. By comparing neuronal responses to acoustic and CI stimulation in marmoset monkeys unilaterally implanted with a CI electrode array, we discovered that CI stimulation was surprisingly ineffective at activating many neurons in auditory cortex, particularly in the hemisphere ipsilateral to the CI. Further analyses revealed that the CI-nonresponsive neurons were narrowly tuned to frequency and sound level when probed with acoustic stimuli; such neurons likely play a role in perceptual behaviors requiring fine frequency and level discrimination, tasks that CI users find especially challenging. These findings suggest potential deficits in central auditory processing of CI stimulation and provide important insights into factors responsible for poor CI user performance in a wide range of perceptual tasks. SIGNIFICANCE STATEMENT The cochlear implant (CI) is the most successful neural prosthetic device to date and has restored hearing in hundreds of thousands of deaf individuals worldwide. However, despite these successes, CI users still face many perceptual limitations, and the brain mechanisms involved in hearing through CI devices remain poorly understood. By directly comparing single-neuron responses to acoustic and CI stimulation in the auditory cortex of awake marmoset monkeys, we discovered that neurons unresponsive to CI stimulation were sharply tuned to frequency and sound level. Our results point to a major deficit in central auditory processing of CI stimulation and provide important insights into the mechanisms underlying poor CI user performance in a wide range of perceptual tasks.
Affiliation(s)
| | - Charles C Della Santina
- Departments of Biomedical Engineering and Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland 21205
47
Westö J, May PJC. Capturing contextual effects in spectro-temporal receptive fields. Hear Res 2016; 339:195-210. [PMID: 27473504 DOI: 10.1016/j.heares.2016.07.012] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/30/2016] [Revised: 06/16/2016] [Accepted: 07/24/2016] [Indexed: 11/25/2022]
Abstract
Spectro-temporal receptive fields (STRFs) are thought to provide descriptive images of the computations performed by neurons along the auditory pathway. However, their validity can be questioned because they rely on a set of assumptions that are probably not fulfilled by real neurons exhibiting contextual effects, that is, nonlinear interactions in the time or frequency dimension that cannot be described with a linear filter. We used a novel approach to investigate how a variety of contextual effects, due to facilitating nonlinear interactions and synaptic depression, affect different STRF models, and if these effects can be captured with a context field (CF). Contextual effects were incorporated in simulated networks of spiking neurons, allowing one to define the true STRFs of the neurons. This, in turn, made it possible to evaluate the performance of each STRF model by comparing the estimations with the true STRFs. We found that currently used STRF models are particularly poor at estimating inhibitory regions. Specifically, contextual effects make estimated STRFs dependent on stimulus density in a contrasting fashion: inhibitory regions are underestimated at lower densities while artificial inhibitory regions emerge at higher densities. The CF was found to provide a solution to this dilemma, but only when it is used together with a generalized linear model. Our results therefore highlight the limitations of the traditional STRF approach and provide useful recipes for how different STRF models and stimuli can be used to arrive at reliable quantifications of neural computations in the presence of contextual effects. The results therefore push the purpose of STRF analysis from simply finding an optimal stimulus toward describing context-dependent computations of neurons along the auditory pathway.
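For context, the baseline against which the context field (CF) is evaluated is a conventional STRF fit: each spike-count bin is modeled as a function of the recent stimulus spectrogram history, here via a regularized Poisson generalized linear model. The sketch below simulates such a fit on synthetic data; the CF extension itself is not shown, and all simulation parameters are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)
n_freq, n_lags, n_bins = 16, 12, 5000

# Random spectrogram stimulus (frequency x time) and a "true" STRF.
stim = rng.normal(size=(n_freq, n_bins))
true_strf = rng.normal(scale=0.05, size=(n_freq, n_lags))

# Design matrix: each row is the flattened stimulus history
# (n_freq x n_lags) preceding one time bin.
X = np.zeros((n_bins - n_lags, n_freq * n_lags))
for i in range(n_lags, n_bins):
    X[i - n_lags] = stim[:, i - n_lags:i].ravel()

# Simulate Poisson spike counts through an exponential nonlinearity.
drive = X @ true_strf.ravel()
y = rng.poisson(np.exp(drive - drive.mean()))

# Ridge-regularized Poisson GLM; the reshaped coefficients are the
# estimated STRF, to be compared against true_strf.
glm = PoissonRegressor(alpha=1.0, max_iter=300).fit(X, y)
strf_hat = glm.coef_.reshape(n_freq, n_lags)
```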
Affiliation(s)
- Johan Westö
- Department of Neuroscience and Biomedical Engineering, Aalto University, FI-00076 Espoo, Finland.
| | - Patrick J C May
- Special Laboratory Non-Invasive Brain Imaging, Leibniz Institute for Neurobiology, D-39118 Magdeburg, Germany.
48
Sloas DC, Zhuo R, Xue H, Chambers AR, Kolaczyk E, Polley DB, Sen K. Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex. eNeuro 2016; 3:ENEURO.0124-16.2016. [PMID: 27622211 PMCID: PMC5008244 DOI: 10.1523/eneuro.0124-16.2016] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2016] [Revised: 07/28/2016] [Accepted: 08/07/2016] [Indexed: 11/21/2022] Open
Abstract
Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore single or few dimensions due to the difficulty of determining optimal stimuli in a high dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates using multidimensional stimuli in auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices.
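A generalized additive model of the kind described expresses expected spike count as a sum of smooth functions of each stimulus dimension plus smooth interaction terms, so that a significant tensor-product term signals integration across dimensions. A minimal sketch using the third-party pyGAM package on simulated data (the two stimulus dimensions, the simulated ground truth, and the term structure are illustrative assumptions, not the authors' analysis):

```python
import numpy as np
from pygam import PoissonGAM, s, te  # assumes the third-party pyGAM package

rng = np.random.default_rng(1)
n_trials = 800

# Two hypothetical stimulus dimensions sampled by an evolutionary
# algorithm, e.g. center frequency (kHz) and modulation rate (Hz).
freq = rng.uniform(4, 64, n_trials)
rate = rng.uniform(1, 100, n_trials)

# Simulated spike counts with a built-in interaction between the two
# dimensions; this ground truth is purely illustrative.
lam = np.exp(0.5 * np.sin(freq / 10.0) + 0.01 * rate
             - 0.0005 * (freq - 30.0) * (rate - 50.0))
counts = rng.poisson(lam)

X = np.column_stack([freq, rate])
# Smooth main effects for each dimension plus a tensor-product
# interaction term; a significant te(0, 1) term indicates integration
# across stimulus dimensions.
gam = PoissonGAM(s(0) + s(1) + te(0, 1)).fit(X, counts)
gam.summary()
```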
Affiliation(s)
- David C. Sloas
- Hearing Research Center and Department of Biomedical Engineering, Boston University, Boston, Massachusetts 02215
| | - Ran Zhuo
- Department of Mathematics and Statistics, Boston University, Boston, Massachusetts 02215
| | - Hongbo Xue
- Department of Mathematics and Statistics, Boston University, Boston, Massachusetts 02215
| | - Anna R. Chambers
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, Massachusetts 02114, and
| | - Eric Kolaczyk
- Department of Mathematics and Statistics, Boston University, Boston, Massachusetts 02215
| | - Daniel B. Polley
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, Massachusetts 02114, and
- Department of Otolaryngology, Harvard Medical School, Boston, Massachusetts 02115
| | - Kamal Sen
- Hearing Research Center and Department of Biomedical Engineering, Boston University, Boston, Massachusetts 02215
49
Williamson RS, Ahrens MB, Linden JF, Sahani M. Input-Specific Gain Modulation by Local Sensory Context Shapes Cortical and Thalamic Responses to Complex Sounds. Neuron 2016; 91:467-481. [PMID: 27346532 PMCID: PMC4961224 DOI: 10.1016/j.neuron.2016.05.041] [Citation(s) in RCA: 37] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2015] [Revised: 10/25/2015] [Accepted: 05/12/2016] [Indexed: 01/19/2023]
Abstract
Sensory neurons are customarily characterized by one or more linearly weighted receptive fields describing sensitivity in sensory space and time. We show that in auditory cortical and thalamic neurons, the weight of each receptive field element depends on the pattern of sound falling within a local neighborhood surrounding it in time and frequency. Accounting for this change in effective receptive field with spectrotemporal context improves predictions of both cortical and thalamic responses to stationary complex sounds. Although context dependence varies among neurons and across brain areas, there are strong shared qualitative characteristics. In a spectrotemporally rich soundscape, sound elements modulate neuronal responsiveness more effectively when they coincide with sounds at other frequencies, and less effectively when they are preceded by sounds at similar frequencies. This local-context-driven lability in the representation of complex sounds, a modulation of "input-specific gain" rather than "output gain", may be a widespread motif in sensory processing.
Highlights:
- Gain of neuronal responses to sound components varies with immediate acoustic context.
- "Contextual gain fields" can be estimated from neuronal responses to complex sounds.
- Coincident sound at different frequencies boosts gain in cortex and thalamus.
- Preceding sound at similar frequency reduces gain for longer in cortex than thalamus.
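The idea of input-specific gain can be caricatured as a linear receptive field in which each element's weight is scaled by a gain that depends on the sound energy in a local time-frequency neighborhood around that element. A toy forward-model sketch of that computation (the neighborhood definition and gain function are illustrative assumptions, not the fitted contextual gain fields of the paper):

```python
import numpy as np

def context_gain_prediction(spectrogram, strf, context_weight, radius=2):
    """Predict a response trace from a spectrogram (freq x time).

    Each STRF element's contribution is scaled by a gain derived from
    the summed sound energy in a (2*radius+1) x (2*radius+1) local
    neighborhood around it. A toy forward model in the spirit of
    input-specific gain, not the paper's fitted contextual gain fields.
    """
    n_freq, n_lags = strf.shape
    n_time = spectrogram.shape[1]
    resp = np.zeros(n_time)
    for t in range(n_lags, n_time):
        patch = spectrogram[:, t - n_lags:t]          # recent stimulus history
        pad = np.pad(patch, radius, mode="edge")
        context = np.zeros_like(patch)                # local neighborhood sum
        for df in range(-radius, radius + 1):
            for dt in range(-radius, radius + 1):
                context += pad[radius + df:radius + df + n_freq,
                               radius + dt:radius + dt + n_lags]
        gain = 1.0 + np.tanh(context_weight * context)  # input-specific gain
        resp[t] = np.sum(strf * gain * patch)
    return resp

rng = np.random.default_rng(2)
spec = rng.random((18, 400))                     # toy spectrogram
strf = rng.normal(scale=0.1, size=(18, 10))      # toy receptive field
prediction = context_gain_prediction(spec, strf, context_weight=0.05)
```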
Affiliation(s)
- Ross S Williamson
- Gatsby Computational Neuroscience Unit, University College London, London W1T 4JG, UK; Centre for Mathematics and Physics in the Life Sciences and Experimental Biology, University College London, London WC1E 6BT, UK
| | - Misha B Ahrens
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138, USA; Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, UK
| | - Jennifer F Linden
- Ear Institute, University College London, London WC1X 8EE, UK; Department of Neuroscience, Physiology and Pharmacology, University College London, London WC1E 6BT, UK.
| | - Maneesh Sahani
- Gatsby Computational Neuroscience Unit, University College London, London W1T 4JG, UK.
50
Kotchoubey B, Pavlov YG, Kleber B. Music in Research and Rehabilitation of Disorders of Consciousness: Psychological and Neurophysiological Foundations. Front Psychol 2015; 6:1763. [PMID: 26640445 PMCID: PMC4661237 DOI: 10.3389/fpsyg.2015.01763] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2015] [Accepted: 11/03/2015] [Indexed: 01/18/2023] Open
Abstract
According to a prevailing view, the visual system works by dissecting stimuli into primitives, whereas the auditory system processes simple and complex stimuli, with their corresponding features, in parallel. This makes musical stimulation particularly suitable for patients with disorders of consciousness (DoC), because the processing pathways related to complex stimulus features can be preserved even when those related to simple features are no longer available. An additional factor favoring musical stimulation in DoC is the low efficiency of visual stimulation, owing to the prevalence of impaired vision or gaze fixation in DoC patients. Hearing disorders, in contrast, are much less frequent in DoC, which allows auditory stimulation to be used at various levels of complexity. The current paper reviews empirical data concerning the four main domains of brain functioning in DoC patients that musical stimulation can address: perception (e.g., pitch, timbre, and harmony), cognition (e.g., musical syntax and meaning), emotions, and motor functions. Music can approach basic levels of patients' self-consciousness, which may persist even when all higher-level cognition is lost, whereas music-induced emotions and rhythmic stimulation can affect the dopaminergic reward system and activity in the motor system, respectively, thus serving as a starting point for rehabilitation.
Affiliation(s)
- Boris Kotchoubey
- Institute for Medical Psychology and Behavioural Neurobiology, University of Tübingen, Tübingen, Germany
| | - Yuri G. Pavlov
- Institute for Medical Psychology and Behavioural Neurobiology, University of Tübingen, Tübingen, Germany
- Department of Psychology, Ural Federal University, Yekaterinburg, Russia
| | - Boris Kleber
- Institute for Medical Psychology and Behavioural Neurobiology, University of Tübingen, Tübingen, Germany