1
Li YH, Joris PX. Case reopened: A temporal basis for harmonic pitch templates in the early auditory system? J Acoust Soc Am 2023; 154:3986-4003. [PMID: 38149819] [DOI: 10.1121/10.0023969] [Received: 04/14/2023] [Accepted: 12/04/2023] [Indexed: 12/28/2023]
Abstract
A fundamental assumption of rate-place models of pitch is the existence of harmonic templates in the central nervous system (CNS). Shamma and Klein [(2000). J. Acoust. Soc. Am. 107, 2631-2644] hypothesized that these templates have a temporal basis. Coincidences in the temporal fine-structure of neural spike trains, even in response to nonharmonic, stochastic stimuli, would be sufficient for the development of harmonic templates. The physiological plausibility of this hypothesis is tested. Responses to pure tones, low-pass noise, and broadband noise from auditory nerve fibers and brainstem "high-sync" neurons are studied. Responses to tones simulate the output of fibers with infinitely sharp filters: for these responses, harmonic structure in a coincidence matrix comparing pairs of spike trains is indeed found. However, harmonic template structure is not observed in coincidences across responses to broadband noise, which are obtained from nerve fibers or neurons with enhanced synchronization. Using a computer model based on that of Shamma and Klein, it is shown that harmonic templates only emerge when consecutive processing steps (cochlear filtering, lateral inhibition, and temporal enhancement) are implemented in extreme, physiologically implausible form. It is concluded that current physiological knowledge does not support the hypothesis of Shamma and Klein (2000).
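The pairwise coincidence analysis this abstract describes can be illustrated with a minimal sketch (Python/NumPy; the spike times, window width, and lack of normalization below are illustrative assumptions, not the authors' actual pipeline): each spike train is discretized into fine time bins, and the coincidence matrix is the pairwise inner product of the binned trains.

```python
import numpy as np

def coincidence_matrix(spike_trains, bin_width, duration):
    """Pairwise coincidence counts between spike trains.

    spike_trains : list of 1-D arrays of spike times (seconds)
    bin_width    : coincidence window (seconds); a free parameter here
    duration     : total record length (seconds)
    Returns an (n, n) symmetric matrix; entry (i, j) counts bins in
    which trains i and j both fired (weighted by spikes per bin).
    """
    n_bins = int(round(duration / bin_width))
    binned = np.zeros((len(spike_trains), n_bins), dtype=int)
    for i, train in enumerate(spike_trains):
        idx = np.floor(np.asarray(train) / bin_width).astype(int)
        idx = idx[(idx >= 0) & (idx < n_bins)]
        np.add.at(binned[i], idx, 1)  # allows multiple spikes per bin
    return binned @ binned.T          # inner products = coincidence counts

# Toy example (hypothetical spike times, seconds): trains 0 and 1 share
# two spike times; train 2 never coincides with the others.
trains = [np.array([0.0105, 0.0205, 0.0305]),
          np.array([0.0105, 0.0205]),
          np.array([0.5005])]
C = coincidence_matrix(trains, bin_width=0.001, duration=1.0)
```

With fibers ordered by characteristic frequency, harmonic template structure would appear as ridges in such a matrix; the abstract's point is that this structure emerges for tone responses but not for broadband-noise responses.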
Affiliation(s)
- Yi-Hsuan Li
- Laboratory of Auditory Neurophysiology, Medical School, Campus Gasthuisberg, University of Leuven, B-3000 Leuven, Belgium
- Philip X Joris
- Laboratory of Auditory Neurophysiology, Medical School, Campus Gasthuisberg, University of Leuven, B-3000 Leuven, Belgium
2
Grijseels DM, Prendergast BJ, Gorman JC, Miller CT. The neurobiology of vocal communication in marmosets. Ann N Y Acad Sci 2023; 1528:13-28. [PMID: 37615212] [PMCID: PMC10592205] [DOI: 10.1111/nyas.15057] [Indexed: 08/25/2023]
Abstract
The common marmoset (Callithrix jacchus) is an increasingly popular animal model for studying the neural basis of social behavior, cognition, and communication. Interest in this New World primate across neuroscience is driven by its prosociality across the behavioral repertoire, high volubility, and rapid development, as well as its amenability to naturalistic testing paradigms and to freely moving neural recording and imaging technologies. Together, these characteristics position marmosets as a powerful model of the primate social brain in the years to come. Here, we focus on vocal communication because it is the area that has both made the most progress and best illustrates the prodigious potential of this species. We review the current state of the field, focusing on the various brain areas and networks involved in vocal perception and production, and compare the findings from marmosets to those from other animals, including humans.
Affiliation(s)
- Dori M Grijseels
- Cortical Systems and Behavior Laboratory, University of California, San Diego, La Jolla, California, USA
- Brendan J Prendergast
- Cortical Systems and Behavior Laboratory, University of California, San Diego, La Jolla, California, USA
- Julia C Gorman
- Cortical Systems and Behavior Laboratory, University of California, San Diego, La Jolla, California, USA
- Neurosciences Graduate Program, University of California, San Diego, La Jolla, California, USA
- Cory T Miller
- Cortical Systems and Behavior Laboratory, University of California, San Diego, La Jolla, California, USA
- Neurosciences Graduate Program, University of California, San Diego, La Jolla, California, USA
3
Eliades SJ, Tsunada J. Effects of Cortical Stimulation on Feedback-Dependent Vocal Control in Non-Human Primates. Laryngoscope 2023; 133 Suppl 2:S1-S10. [PMID: 35538859] [PMCID: PMC9649833] [DOI: 10.1002/lary.30175] [Received: 03/04/2022] [Revised: 04/16/2022] [Accepted: 04/24/2022] [Indexed: 11/07/2022]
Abstract
OBJECTIVES Hearing plays an important role in our ability to control the voice, and perturbations in auditory feedback result in compensatory changes in vocal production. The auditory cortex (AC) has been proposed as an important mediator of this behavior, but causal evidence is lacking. We tested this in an animal model, hypothesizing that AC is necessary for vocal self-monitoring and feedback-dependent control, and that altering AC activity during vocalization will interfere with vocal control. METHODS We implanted two marmoset monkeys (Callithrix jacchus) with bilateral AC electrode arrays. Acoustic signals were recorded from vocalizing marmosets while vocal feedback was altered or AC was electrically stimulated during random subsets of vocalizations. Feedback was altered by real-time frequency shifts presented through headphones, and electrical stimulation was delivered to individual electrodes. We analyzed the recordings to measure changes in vocal acoustics during shifted feedback and stimulation, and to determine their interaction. Results were correlated with the location and frequency tuning of the stimulation sites. RESULTS Consistent with previous results, we found that electrical stimulation alone evoked changes in vocal production. Effects were stronger in the right hemisphere but decreased with lower currents or repeated stimulation. Simultaneous stimulation and shifted feedback significantly altered vocal control for a subset of sites, decreasing feedback compensation at some and increasing it at others. Inhibited compensation was more likely at sites tuned closer to vocal frequencies. CONCLUSIONS These results provide causal evidence that the AC is involved in feedback-dependent vocal control, and that it is sufficient, and may also be necessary, to drive changes in vocal production. LEVEL OF EVIDENCE N/A Laryngoscope, 133:1-10, 2023.
Affiliation(s)
- Steven J Eliades
- Auditory and Communication Systems Laboratory, Department of Otorhinolaryngology: Head and Neck Surgery, University of Pennsylvania Perelman School of Medicine, Philadelphia, Pennsylvania, USA
- Department of Head and Neck Surgery & Communication Sciences, Duke University School of Medicine, Durham, North Carolina, USA
- Joji Tsunada
- Auditory and Communication Systems Laboratory, Department of Otorhinolaryngology: Head and Neck Surgery, University of Pennsylvania Perelman School of Medicine, Philadelphia, Pennsylvania, USA
- Chinese Institute for Brain Research, Beijing, China
4
Basiński K, Quiroga-Martinez DR, Vuust P. Temporal hierarchies in the predictive processing of melody - From pure tones to songs. Neurosci Biobehav Rev 2023; 145:105007. [PMID: 36535375] [DOI: 10.1016/j.neubiorev.2022.105007] [Received: 08/04/2022] [Revised: 11/30/2022] [Accepted: 12/14/2022] [Indexed: 12/23/2022]
Abstract
Listening to musical melodies is a complex task that engages perceptual and memory-related processes. The processes underlying melody cognition happen simultaneously on different timescales, ranging from milliseconds to minutes. Although attempts have been made, research on melody perception has yet to produce a unified framework for how melody processing is achieved in the brain. This may in part be due to the difficulty of integrating concepts such as perception, attention, and memory, which pertain to different temporal scales. Recent theories of brain processing, which hold prediction as a fundamental principle, offer potential solutions to this problem and may provide a unifying framework for explaining the neural processes that enable melody perception on multiple temporal levels. In this article, we review empirical evidence for predictive coding at the levels of pitch formation, basic pitch-related auditory patterns, more complex regularity processing extracted from basic patterns, and long-term expectations related to musical syntax. We also identify areas that would benefit from further inquiry and suggest future directions for research on musical melody perception.
Affiliation(s)
- Krzysztof Basiński
- Division of Quality of Life Research, Medical University of Gdańsk, Poland
- David Ricardo Quiroga-Martinez
- Helen Wills Neuroscience Institute & Department of Psychology, University of California Berkeley, USA; Center for Music in the Brain, Aarhus University & The Royal Academy of Music, Denmark
- Peter Vuust
- Center for Music in the Brain, Aarhus University & The Royal Academy of Music, Denmark
5
Di Stefano N, Vuust P, Brattico E. Consonance and dissonance perception. A critical review of the historical sources, multidisciplinary findings, and main hypotheses. Phys Life Rev 2022; 43:273-304. [PMID: 36372030] [DOI: 10.1016/j.plrev.2022.10.004] [Received: 09/21/2022] [Accepted: 10/17/2022] [Indexed: 11/05/2022]
Abstract
Revealed more than two millennia ago by Pythagoras, consonance and dissonance (C/D) are foundational concepts in music theory, perception, and aesthetics. The search for the biological, acoustical, and cultural factors that affect C/D perception has resulted in descriptive accounts inspired by arithmetic, musicological, psychoacoustical, or neurobiological frameworks, without reaching a consensus. Here, we review the key historical sources and modern multidisciplinary findings on C/D and integrate them into three main hypotheses: the vocal similarity hypothesis (VSH), the psychocultural hypothesis (PH), and the sensorimotor hypothesis (SH). By illustrating the findings related to each hypothesis, we highlight their major conceptual, methodological, and terminological shortcomings. To provide a unitary framework for understanding C/D, we bring together multidisciplinary research on human and animal vocalizations, which converges to suggest that auditory roughness is associated with distress/danger and therefore elicits defensive behavioral reactions and neural responses that indicate aversion. We therefore stress the primacy of vocality and roughness as key factors in explaining the C/D phenomenon, and we explore the (neuro)biological underpinnings of the attraction-aversion mechanisms triggered by C/D stimuli. Based on the reviewed evidence, while the aversive nature of dissonance appears solidly rooted in the multidisciplinary findings, the attractive nature of consonance remains a somewhat speculative claim that needs further investigation. Finally, we outline future directions for empirical research on C/D, especially regarding cross-modal and cross-cultural approaches.
Affiliation(s)
- Nicola Di Stefano
- Institute for Cognitive Sciences and Technologies (ISTC), National Research Council of Italy (CNR), Via San Martino della Battaglia 44, 00185 Rome, Italy.
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University Royal Academy of Music Aarhus/Aalborg (RAMA), 8000 Aarhus, Denmark.
- Elvira Brattico
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University Royal Academy of Music Aarhus/Aalborg (RAMA), 8000 Aarhus, Denmark; Department of Education, Psychology, Communication, University of Bari Aldo Moro, 70122 Bari, Italy.
6
Lage-Castellanos A, De Martino F, Ghose GM, Gulban OF, Moerel M. Selective attention sharpens population receptive fields in human auditory cortex. Cereb Cortex 2022; 33:5395-5408. [PMID: 36336333] [PMCID: PMC10152083] [DOI: 10.1093/cercor/bhac427] [Received: 07/07/2022] [Revised: 10/03/2022] [Accepted: 10/04/2022] [Indexed: 11/09/2022]
Abstract
Selective attention enables the preferential processing of relevant stimulus aspects. Invasive animal studies have shown that attending to a sound feature rapidly modifies neuronal tuning throughout the auditory cortex. Human neuroimaging studies have reported enhanced auditory cortical responses with selective attention. To date, it remains unclear how the results obtained with functional magnetic resonance imaging (fMRI) in humans relate to the electrophysiological findings in animal models. Here we aim to narrow the gap between animal and human research by combining a selective attention task similar in design to those used in animal electrophysiology with high spatial resolution ultra-high field fMRI at 7 Tesla. Specifically, human participants performed a detection task in which the probability of target occurrence varied with sound frequency. Contrary to previous fMRI studies, we show that selective attention resulted in population receptive field sharpening, and consequently reduced responses, at the attended sound frequencies. The difference between our results and those of previous fMRI studies supports the notion that the influence of selective attention on auditory cortex is diverse and may depend on context, stimulus, and task.
Affiliation(s)
- Agustin Lage-Castellanos
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), 6200 MD Maastricht, The Netherlands
- Department of NeuroInformatics, Cuban Neuroscience Center, Havana City 11600, Cuba
- Federico De Martino
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), 6200 MD Maastricht, The Netherlands
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, MN 55455, United States
- Geoffrey M Ghose
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, MN 55455, United States
- Michelle Moerel
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), 6200 MD Maastricht, The Netherlands
- Maastricht Centre for Systems Biology, Maastricht University, 6200 MD Maastricht, The Netherlands
7
Auerbach BD, Gritton HJ. Hearing in Complex Environments: Auditory Gain Control, Attention, and Hearing Loss. Front Neurosci 2022; 16:799787. [PMID: 35221899] [PMCID: PMC8866963] [DOI: 10.3389/fnins.2022.799787] [Received: 10/22/2021] [Accepted: 01/18/2022] [Indexed: 12/12/2022]
Abstract
Listening in noisy or complex sound environments is difficult for individuals with normal hearing and can be a debilitating impairment for those with hearing loss. Extracting meaningful information from a complex acoustic environment requires the ability to accurately encode specific sound features under highly variable listening conditions and segregate distinct sound streams from multiple overlapping sources. The auditory system employs a variety of mechanisms to achieve this auditory scene analysis. First, neurons across levels of the auditory system exhibit compensatory adaptations to their gain and dynamic range in response to prevailing sound stimulus statistics in the environment. These adaptations allow for robust representations of sound features that are to a large degree invariant to the level of background noise. Second, listeners can selectively attend to a desired sound target in an environment with multiple sound sources. This selective auditory attention is another form of sensory gain control, enhancing the representation of an attended sound source while suppressing responses to unattended sounds. This review will examine both “bottom-up” gain alterations in response to changes in environmental sound statistics as well as “top-down” mechanisms that allow for selective extraction of specific sound features in a complex auditory scene. Finally, we will discuss how hearing loss interacts with these gain control mechanisms, and the adaptive and/or maladaptive perceptual consequences of this plasticity.
Affiliation(s)
- Benjamin D. Auerbach
- Department of Molecular and Integrative Physiology, Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Neuroscience Program, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Howard J. Gritton
- Neuroscience Program, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Department of Comparative Biosciences, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, IL, United States
8
Spence C, Di Stefano N. Crossmodal Harmony: Looking for the Meaning of Harmony Beyond Hearing. Iperception 2022; 13:20416695211073817. [PMID: 35186248] [PMCID: PMC8850342] [DOI: 10.1177/20416695211073817] [Received: 09/29/2021] [Revised: 11/20/2021] [Accepted: 12/23/2021] [Indexed: 12/02/2022]
Abstract
The notion of harmony was first developed in the context of metaphysics before being applied to the domain of music. However, in recent centuries, the term has often been used to describe especially pleasing combinations of colors by those working in the visual arts too. Similarly, the harmonization of flavors is nowadays often invoked as one of the guiding principles underpinning the deliberate pairing of food and drink. However, beyond the various uses of the term to describe and construct pleasurable unisensory perceptual experiences, it has also been suggested that music and painting may be combined harmoniously (e.g., see the literature on "color music"). Furthermore, those working in the area of "sonic seasoning" sometimes describe certain sonic compositions as harmonizing crossmodally with specific flavor sensations. In this review, we take a critical look at the putative meaning(s) of the term "harmony" when used in a crossmodal, or multisensory, context. Furthermore, we address the question of whether the term's use outside of a strictly unimodal auditory context should be considered literally or merely metaphorically (i.e., as a shorthand to describe those combinations of sensory stimuli that, for whatever reason, appear to go well together, and hence which can be processed especially fluently).
Affiliation(s)
- Charles Spence
- Crossmodal Research Laboratory, University of Oxford, Oxford, UK
- Nicola Di Stefano
- Institute for Cognitive Sciences and Technologies, National Research Council of Italy (CNR), Rome, Italy
9
Kline AM, Aponte DA, Tsukano H, Giovannucci A, Kato HK. Inhibitory gating of coincidence-dependent sensory binding in secondary auditory cortex. Nat Commun 2021; 12:4610. [PMID: 34326331] [PMCID: PMC8322099] [DOI: 10.1038/s41467-021-24758-6] [Received: 03/03/2021] [Accepted: 07/05/2021] [Indexed: 11/09/2022]
Abstract
Integration of multi-frequency sounds into a unified perceptual object is critical for recognizing syllables in speech. This "feature binding" relies on the precise synchrony of each component's onset timing, but little is known regarding its neural correlates. We find that multi-frequency sounds prevalent in vocalizations, specifically harmonics, preferentially activate the mouse secondary auditory cortex (A2), whose response deteriorates with shifts in component onset timings. The temporal window for harmonics integration in A2 was broadened by inactivation of somatostatin-expressing interneurons (SOM cells), but not parvalbumin-expressing interneurons (PV cells). Importantly, A2 has functionally connected subnetworks of neurons preferentially encoding harmonic over inharmonic sounds. These subnetworks are stable across days and exist prior to experimental harmonics exposure, suggesting their formation during development. Furthermore, A2 inactivation impairs performance in a discrimination task for coincident harmonics. Together, we propose A2 as a locus for multi-frequency integration, which may form the circuit basis for vocal processing.
Affiliation(s)
- Amber M Kline
- Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Neuroscience Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Destinee A Aponte
- Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Neuroscience Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Hiroaki Tsukano
- Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Neuroscience Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Andrea Giovannucci
- Neuroscience Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Joint Department of Biomedical Engineering, University of North Carolina at Chapel Hill and North Carolina State University, Chapel Hill, NC, USA
- Hiroyuki K Kato
- Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Neuroscience Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Carolina Institute for Developmental Disabilities, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
10
Recio-Spinoso A, Rhode WS. Information Processing by Onset Neurons in the Cat Auditory Brainstem. J Assoc Res Otolaryngol 2020; 21:201-224. [PMID: 32458083] [PMCID: PMC7392981] [DOI: 10.1007/s10162-020-00757-0] [Received: 04/03/2019] [Accepted: 04/28/2020] [Indexed: 12/18/2022]
Abstract
Octopus cells in the ventral cochlear nucleus (VCN) have been difficult to study because of the very features that distinguish them from other VCN neurons. We performed in vivo recordings in cats from well-isolated units, some of which were intracellularly labeled and histologically reconstructed. We found that responses to low-frequency tones (< 1 kHz) show higher levels of neural synchrony and entrainment to the stimulus than the auditory nerve. In responses to higher-frequency tones, the neural discharges occur mostly near the stimulus onset. These neurons also respond in a unique way to 100% amplitude-modulated (AM) tones, with discharges exhibiting bandpass tuning. Responses to frequency-modulated (FM) sounds are unusual: octopus cells react more vigorously during the ascending than the descending parts of the FM stimulus. We also examined responses of neurons in the ventral nucleus of the lateral lemniscus (VNLL) whose discharges to tones and AM sounds are similar to those of octopus cells. Repeated stimulation of VCN and VNLL onset neurons with short tone pips evokes trains of action potentials whose first-spike latency gradually shifts toward later times. This behavior parallels the short-term postsynaptic depression observed by other authors in in vitro VCN recordings of octopus cells. VCN and VNLL onset units in cats respond to frozen noise stimuli containing gaps as narrow as 1 ms with a robust discharge near the stimulus onset following the gap. This finding suggests that VCN and VNLL onset cells play a role in gap detection, which is of great importance to speech perception.
Affiliation(s)
- Alberto Recio-Spinoso
- Instituto de Investigación en Discapacidades Neurológicas (IDINE), Universidad de Castilla-La Mancha, 02006 Albacete, Spain
- William S. Rhode
- Department of Neuroscience, University of Wisconsin, Madison, WI 53705 USA
11
Gaucher Q, Panniello M, Ivanov AZ, Dahmen JC, King AJ, Walker KM. Complexity of frequency receptive fields predicts tonotopic variability across species. eLife 2020; 9:53462. [PMID: 32420865] [PMCID: PMC7269667] [DOI: 10.7554/elife.53462] [Received: 11/08/2019] [Accepted: 05/18/2020] [Indexed: 12/17/2022]
Abstract
Primary cortical areas contain maps of sensory features, including sound frequency in primary auditory cortex (A1). Two-photon calcium imaging in mice has confirmed the presence of these global tonotopic maps, while uncovering an unexpected local variability in the stimulus preferences of individual neurons in A1 and other primary regions. Here we show that local heterogeneity of frequency preferences is not unique to rodents. Using two-photon calcium imaging in layers 2/3, we found that local variance in frequency preferences is equivalent in ferrets and mice. Neurons with multipeaked frequency tuning are less spatially organized than those tuned to a single frequency in both species. Furthermore, we show that microelectrode recordings may describe a smoother tonotopic arrangement due to a sampling bias towards neurons with simple frequency tuning. These results help explain previous inconsistencies in cortical topography across species and recording techniques.
Affiliation(s)
- Quentin Gaucher
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, United Kingdom
- Mariangela Panniello
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, United Kingdom
- Aleksandar Z Ivanov
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, United Kingdom
- Johannes C Dahmen
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, United Kingdom
- Andrew J King
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, United Kingdom
- Kerry MM Walker
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, United Kingdom
12
Qureshi F, Yan J. Three dimensional rendering of auditory neuronal responses: A novel illustration of receptive field across frequency, intensity & time domains. J Neurosci Methods 2020; 338:108682. [PMID: 32165230] [DOI: 10.1016/j.jneumeth.2020.108682] [Received: 09/02/2019] [Revised: 03/07/2020] [Accepted: 03/08/2020] [Indexed: 10/24/2022]
Abstract
BACKGROUND Neural coding of sound information is often studied through the frequency tuning curve (FTC), spectro-temporal receptive field (STRF), post-stimulus time histogram (PSTH), and other methods such as rate functions. These methods, despite providing a robust characterization of auditory responses in their specific domains, lack a complete description in terms of the three sound fundamentals: frequency, amplitude, and time. NEW METHOD Using techniques from electrophysiology, neural signal processing, and medical image processing, a standalone method is created to illustrate the neural processing of all three sound fundamentals in one representation. RESULTS The new method comprehensively showed frequency tuning, intensity tuning, and time tuning, as well as a novel representation of frequency- and time-dependent intensity coding. It provides most of the parameters used to quantify neural response properties, such as minimum threshold (MT), frequency tuning, latency, best frequency (BF), characteristic frequency (CF), and bandwidth (BW). COMPARISON WITH EXISTING METHODS Our method shows neural responses as a function of all three sound fundamentals in a single representation, which was not possible with previous methods. It covers many functions of conventional methods and allows extracting novel information, such as intensity coding as a function of the spectrotemporal response area of auditory neurons. CONCLUSION This method can be used as a standalone package to study auditory neural responses and to evaluate the performance of hearing-related devices such as cochlear implants and hearing aids in animal models, as well as to study and compare auditory processing in aged and hearing-impaired animal models.
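The kind of representation described here can be sketched as a frequency × intensity × time array of spike counts, from which conventional measures such as BF and MT fall out as marginals (a minimal illustration in Python/NumPy; the tone set, bin size, response criterion, and toy spike data are assumptions, not the authors' parameters):

```python
import numpy as np

freqs = np.array([4.0, 8.0, 16.0, 32.0])   # tone frequencies (kHz), assumed
levels = np.array([10, 30, 50, 70])        # sound levels (dB SPL), assumed
n_time_bins = 50                           # 1-ms bins over a 50-ms window

def response_volume(trials, bin_ms=1.0):
    """Accumulate spikes into a frequency x intensity x time array.

    trials : iterable of (freq_kHz, level_dB, spike_times_ms) tuples.
    """
    vol = np.zeros((len(freqs), len(levels), n_time_bins))
    for f, lvl, spikes_ms in trials:
        fi = int(np.argmin(np.abs(freqs - f)))    # nearest stimulus frequency
        li = int(np.argmin(np.abs(levels - lvl))) # nearest stimulus level
        idx = (np.asarray(spikes_ms, dtype=float) / bin_ms).astype(int)
        idx = idx[(idx >= 0) & (idx < n_time_bins)]
        np.add.at(vol[fi, li], idx, 1)
    return vol

# Hypothetical unit responding at 8 kHz from 30 dB SPL upward.
trials = [(8.0, 30, [12.0, 13.5]),
          (8.0, 50, [11.0, 12.0, 13.0]),
          (4.0, 10, [])]
vol = response_volume(trials)

# Conventional views are marginals of the volume:
frf = vol.sum(axis=2)                           # frequency-response area (freq x level)
bf_idx = int(np.argmax(frf.sum(axis=1)))
bf = freqs[bf_idx]                              # best frequency
mt = levels[int(np.argmax(frf[bf_idx] > 0))]    # lowest level with any response at BF
```

Summing over time recovers an FTC-like view and summing over frequency and intensity recovers a PSTH-like view, which is the sense in which a single 3-D representation subsumes the conventional methods.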
Affiliation(s)
- Farhad Qureshi
- Department of Physiology and Pharmacology, University of Calgary, Cumming School of Medicine 3330 Hospital Drive NW, Calgary Alberta, T2N 4N1, Canada.
- Jun Yan
- Department of Physiology and Pharmacology, University of Calgary, Cumming School of Medicine 3330 Hospital Drive NW, Calgary Alberta, T2N 4N1, Canada
13
Kim KX, Atencio CA, Schreiner CE. Stimulus-dependent transformations between synaptic and spiking receptive fields in auditory cortex. Nat Commun 2020; 11:1102. [PMID: 32107370] [PMCID: PMC7046699] [DOI: 10.1038/s41467-020-14835-7] [Received: 07/03/2019] [Accepted: 02/06/2020] [Indexed: 11/09/2022]
Abstract
Auditory cortex neurons nonlinearly integrate synaptic inputs from the thalamus and cortex, and generate spiking outputs for simple and complex sounds. Directly comparing synaptic and spiking activity can determine whether this input-output transformation is stimulus dependent. We employed in vivo whole-cell recordings in the mouse primary auditory cortex, using pure tones and broadband dynamic moving ripple stimuli, to examine properties of functional integration in tonal receptive fields (TRFs) and spectrotemporal receptive fields (STRFs). Spectral tuning in STRFs derived from synaptic, subthreshold, and spiking responses proves to be substantially more selective than in TRFs. We describe diverse spectral and temporal modulation preferences and distinct nonlinearities, and their modifications between the input and output stages of neural processing. These results characterize specific processing differences at the level of synaptic convergence, integration, and spike generation, resulting in stimulus-dependent transformation patterns in the primary auditory cortex. The authors compare receptive fields and nonlinearities of synaptic inputs, membrane potentials, and spiking activity in the auditory cortex for broadband stimuli, revealing distinct differences that lead to an increase in feature selectivity from neuron input to output. Frequency selectivity is distinctly higher for STRFs than for TRFs.
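The STRF concept used in this abstract can be illustrated with its simplest estimator, a spike-triggered average of a spectrogram-like stimulus (a sketch only: the Gaussian-noise "ripple" surrogate, lag window, and toy threshold neuron below are assumptions, not the paper's recording or analysis pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

def strf_sta(stimulus, spike_counts, n_lags=20):
    """Spike-triggered average STRF estimate.

    stimulus     : (n_freq, n_time) spectrogram-like stimulus
    spike_counts : (n_time,) spike counts per time bin
    Returns (n_freq, n_lags): the mean stimulus in the n_lags bins
    preceding each spike (column n_lags-1 is the bin just before it).
    """
    n_freq, n_time = stimulus.shape
    sta = np.zeros((n_freq, n_lags))
    total = 0
    for t in range(n_lags, n_time):
        if spike_counts[t] > 0:
            sta += spike_counts[t] * stimulus[:, t - n_lags:t]
            total += spike_counts[t]
    return sta / max(total, 1)

# Surrogate broadband stimulus: 8 frequency channels of Gaussian noise.
stim = rng.standard_normal((8, 2000))
# Toy threshold neuron: spikes when channel 3 was loud 5 bins earlier.
spikes = np.zeros(2000, dtype=int)
spikes[5:] = (stim[3, :-5] > 1.5).astype(int)

strf = strf_sta(stim, spikes)
# The STA recovers the neuron's preference: channel 3, 5 bins pre-spike,
# i.e., column n_lags - 5 = 15 of the lag axis.
peak = np.unravel_index(np.argmax(strf), strf.shape)
```

For a linear neuron driven by Gaussian noise, the STA converges to the neuron's linear filter; the paper's point is that comparing such filters estimated from synaptic versus spiking responses exposes the nonlinearity of the input-output transformation.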
Affiliation(s)
- Kyunghee X Kim
- Coleman Memorial Laboratory, Department of Otolaryngology - Head and Neck Surgery, University of California San Francisco, San Francisco, USA
- Craig A Atencio
- Coleman Memorial Laboratory, Department of Otolaryngology - Head and Neck Surgery, University of California San Francisco, San Francisco, USA
- Christoph E Schreiner
- Coleman Memorial Laboratory, Department of Otolaryngology - Head and Neck Surgery, University of California San Francisco, San Francisco, USA; Center for Integrative Neuroscience, University of California San Francisco, San Francisco, USA
14
Remington ED, Wang X. Neural Representations of the Full Spatial Field in Auditory Cortex of Awake Marmoset (Callithrix jacchus). Cereb Cortex 2019; 29:1199-1216. [PMID: 29420692] [PMCID: PMC6373678] [DOI: 10.1093/cercor/bhy025]
Abstract
Unlike visual signals, sound can reach the ears from any direction, and the ability to localize sounds from all directions is essential for survival in a natural environment. Previous studies have largely focused on the space in front of a subject that is also covered by vision and were often limited to measuring spatial tuning along the horizontal (azimuth) plane. As a result, we know relatively little about how the auditory cortex responds to sounds coming from spatial locations outside the frontal space where visual information is unavailable. By mapping single-neuron responses to the full spatial field in awake marmoset (Callithrix jacchus), an arboreal animal for which spatial processing is vital in its natural habitat, we show that spatial receptive fields in several auditory areas cover all spatial locations. Several complementary measures of spatial tuning showed that neurons were tuned to both frontal space and rear space (outside the coverage of vision), as well as the space above and below the horizontal plane. Together, these findings provide valuable new insights into the representation of all spatial locations by primate auditory cortex.
Affiliation(s)
- Evan D Remington
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Xiaoqin Wang
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
15
Tabas A, Andermann M, Schuberth V, Riedel H, Balaguer-Ballester E, Rupp A. Modeling and MEG evidence of early consonance processing in auditory cortex. PLoS Comput Biol 2019; 15:e1006820. [PMID: 30818358] [PMCID: PMC6413961] [DOI: 10.1371/journal.pcbi.1006820]
Abstract
Pitch is a fundamental attribute of auditory perception. The interaction of concurrent pitches gives rise to a sensation that can be characterized by its degree of consonance or dissonance. In this work, we propose that human auditory cortex (AC) processes pitch and consonance through a common neural network mechanism operating at early cortical levels. First, we developed a new model of neural ensembles incorporating realistic neuronal and synaptic parameters to assess pitch processing mechanisms at early stages of AC. Next, we designed a magnetoencephalography (MEG) experiment to measure the neuromagnetic activity evoked by dyads with varying degrees of consonance or dissonance. MEG results show that dissonant dyads evoke a pitch onset response (POR) with a latency up to 36 ms longer than consonant dyads. Additionally, we used the model to predict the processing time of concurrent pitches; here, consonant pitch combinations were decoded faster than dissonant combinations, in line with the experimental observations. Specifically, we found a striking match between the predicted and the observed latency of the POR as elicited by the dyads. These novel results suggest that consonance processing starts early in human auditory cortex and may share the network mechanisms that are responsible for (single) pitch processing.
Affiliation(s)
- Alejandro Tabas
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Faculty of Science and Technology, Bournemouth University, Poole, United Kingdom
- Martin Andermann
- Section of Biomagnetism, Department of Neurology, Heidelberg University Hospital, Heidelberg, Germany
- Valeria Schuberth
- Section of Biomagnetism, Department of Neurology, Heidelberg University Hospital, Heidelberg, Germany
- Helmut Riedel
- Section of Biomagnetism, Department of Neurology, Heidelberg University Hospital, Heidelberg, Germany
- Emili Balaguer-Ballester
- Faculty of Science and Technology, Bournemouth University, Poole, United Kingdom
- Bernstein Center for Computational Neuroscience, Heidelberg/Mannheim, Mannheim, Germany
- André Rupp
- Section of Biomagnetism, Department of Neurology, Heidelberg University Hospital, Heidelberg, Germany
16
Williamson RS, Polley DB. Parallel pathways for sound processing and functional connectivity among layer 5 and 6 auditory corticofugal neurons. eLife 2019; 8:e42974. [PMID: 30735128] [PMCID: PMC6384027] [DOI: 10.7554/elife.42974]
Abstract
Cortical layers (L) 5 and 6 are populated by intermingled cell-types with distinct inputs and downstream targets. Here, we made optogenetically guided recordings from L5 corticofugal (CF) and L6 corticothalamic (CT) neurons in the auditory cortex of awake mice to discern differences in sensory processing and underlying patterns of functional connectivity. Whereas L5 CF neurons showed broad stimulus selectivity with sluggish response latencies and extended temporal non-linearities, L6 CTs exhibited sparse selectivity and rapid temporal processing. L5 CF spikes lagged behind neighboring units and imposed weak feedforward excitation within the local column. By contrast, L6 CT spikes drove robust and sustained activity, particularly in local fast-spiking interneurons. Our findings underscore a duality among sub-cortical projection neurons, where L5 CF units are canonical broadcast neurons that integrate sensory inputs for transmission to distributed downstream targets, while L6 CT neurons are positioned to regulate thalamocortical response gain and selectivity.
Affiliation(s)
- Ross S Williamson
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, United States
- Department of Otolaryngology, Harvard Medical School, Boston, United States
- Daniel B Polley
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, United States
- Department of Otolaryngology, Harvard Medical School, Boston, United States
17
Insanally MN, Carcea I, Field RE, Rodgers CC, DePasquale B, Rajan K, DeWeese MR, Albanna BF, Froemke RC. Spike-timing-dependent ensemble encoding by non-classically responsive cortical neurons. eLife 2019; 8:e42409. [PMID: 30688649] [PMCID: PMC6391134] [DOI: 10.7554/elife.42409]
Abstract
Neurons recorded in behaving animals often do not discernibly respond to sensory input and are not overtly task-modulated. These non-classically responsive neurons are difficult to interpret and are typically excluded from analysis, confounding attempts to connect neural activity to perception and behavior. Here, we describe a trial-by-trial, spike-timing-based algorithm to reveal the coding capacities of these neurons in auditory and frontal cortex of behaving rats. Both classically responsive and non-classically responsive cells contained significant information about sensory stimuli and behavioral decisions. Stimulus category was more accurately represented in frontal cortex than in auditory cortex, via ensembles of non-classically responsive cells coordinating the behavioral meaning of spike timings on correct but not error trials. This unbiased approach allows the contribution of all recorded neurons, particularly those without obvious task-related, trial-averaged firing rate modulation, to be assessed for behavioral relevance on single trials. Neurons encode information in the form of electrical signals called spikes. Certain neurons increase the rate at which they produce spikes under specific circumstances, e.g., whenever an animal hears a particular sound. These neurons are said to be 'classically responsive'. But not all neurons behave in this way. Others produce spikes at a variable rate that does not obviously relate to the animal's behavior. These neurons are said to be 'non-classically responsive'. They are often omitted from analyses, despite typically outnumbering their classically responsive counterparts. So, what are these neurons doing? To find out, Insanally et al. trained rats to respond to sounds. The animals learned to poke their nose into a window whenever they heard a specific tone, and to avoid responding whenever they heard any other tone. As the rats performed the task, Insanally et al. recorded from neurons in two areas of the brain, the frontal cortex and the auditory cortex. A computer then analyzed the activity of individual neurons during each trial. As expected, the firing rate of non-classically responsive cells did not relate to the animals' behavior. But the timing of this firing did. The interval between spikes contained information about which tone the animals had heard and/or how they had responded. The cells worked together in groups to encode this information. Over the course of each trial, every neuron in the group varied the interval between its spikes. Eventually, the group reached a consensus, with all neurons using the same interval to represent information relevant to the task. Groups of neurons in the frontal cortex encoded more information about the category of the tone than those in the auditory cortex. By including all neurons, both classically and non-classically responsive, this approach offers a more comprehensive view of how neural activity relates to behavior. This may in turn help us understand the variable and complex neural activity seen in people with sensory and cognitive disorders.
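The published trial-by-trial, spike-timing-based algorithm is Bayesian and considerably richer than anything shown here; purely as a toy sketch of the underlying idea (decoding stimulus identity from interspike intervals; all names and data are invented), one might write:

```python
import numpy as np

def isi_histograms(trains, labels, bins):
    """Pool interspike intervals (ISIs) per stimulus label into
    normalized histograms that serve as simple likelihood models."""
    hists = {}
    for label in set(labels):
        isis = np.concatenate([np.diff(t) for t, s in zip(trains, labels) if s == label])
        h, _ = np.histogram(isis, bins=bins, density=True)
        hists[label] = h + 1e-9          # avoid log(0) for empty bins
    return hists

def decode(train, hists, bins):
    """Pick the label whose ISI distribution best explains one spike train
    (maximum log-likelihood over the train's ISIs)."""
    isis = np.diff(train)
    idx = np.clip(np.digitize(isis, bins) - 1, 0, len(bins) - 2)
    scores = {label: np.log(h[idx]).sum() for label, h in hists.items()}
    return max(scores, key=scores.get)

# toy demo: 'fast' trains have ~5 ms ISIs, 'slow' trains ~15 ms ISIs
rng = np.random.default_rng(1)
bins = np.linspace(0.0, 60.0, 31)
trains = [np.cumsum(rng.exponential(5.0, 50)) for _ in range(20)] + \
         [np.cumsum(rng.exponential(15.0, 50)) for _ in range(20)]
labels = ['fast'] * 20 + ['slow'] * 20
hists = isi_histograms(trains, labels, bins)
guess = decode(np.cumsum(rng.exponential(5.0, 50)), hists, bins)
```

Here the decoder recovers the 'fast' label for a held-out fast train because its ISIs are far more likely under the pooled 'fast' histogram; the published method operates on single trials and full spike-timing patterns rather than pooled histograms.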
Affiliation(s)
- Michele N Insanally
- Skirball Institute for Biomolecular Medicine, New York University School of Medicine, New York, United States; Neuroscience Institute, New York University School of Medicine, New York, United States; Department of Otolaryngology, New York University School of Medicine, New York, United States; Department of Neuroscience and Physiology, New York University School of Medicine, New York, United States; Center for Neural Science, New York University, New York, United States
- Ioana Carcea
- Skirball Institute for Biomolecular Medicine, New York University School of Medicine, New York, United States; Neuroscience Institute, New York University School of Medicine, New York, United States; Department of Otolaryngology, New York University School of Medicine, New York, United States; Department of Neuroscience and Physiology, New York University School of Medicine, New York, United States; Center for Neural Science, New York University, New York, United States
- Rachel E Field
- Skirball Institute for Biomolecular Medicine, New York University School of Medicine, New York, United States; Neuroscience Institute, New York University School of Medicine, New York, United States; Department of Otolaryngology, New York University School of Medicine, New York, United States; Department of Neuroscience and Physiology, New York University School of Medicine, New York, United States; Center for Neural Science, New York University, New York, United States
- Chris C Rodgers
- Department of Neuroscience, Columbia University, New York, United States; Kavli Institute of Brain Science, Columbia University, New York, United States
- Brian DePasquale
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Kanaka Rajan
- Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, United States; Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, United States
- Michael R DeWeese
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, United States; Department of Physics, University of California, Berkeley, Berkeley, United States
- Badr F Albanna
- Department of Natural Sciences, Fordham University, New York, United States
- Robert C Froemke
- Skirball Institute for Biomolecular Medicine, New York University School of Medicine, New York, United States; Neuroscience Institute, New York University School of Medicine, New York, United States; Department of Neuroscience and Physiology, New York University School of Medicine, New York, United States; Center for Neural Science, New York University, New York, United States; Howard Hughes Medical Institute, New York University School of Medicine, New York, United States
18
Bottjer SW, Ronald AA, Kaye T. Response properties of single neurons in higher level auditory cortex of adult songbirds. J Neurophysiol 2019; 121:218-237. [PMID: 30461366] [PMCID: PMC6383665] [DOI: 10.1152/jn.00751.2018]
Abstract
The caudomedial nidopallium (NCM) is a higher level region of auditory cortex in songbirds that has been implicated in encoding learned vocalizations and mediating perception of complex sounds. We made cell-attached recordings in awake adult male zebra finches ( Taeniopygia guttata) to characterize responses of single NCM neurons to playback of tones and songs. Neurons fell into two broad classes: narrow fast-spiking cells and broad sparsely firing cells. Virtually all narrow-spiking cells responded to playback of pure tones, compared with approximately half of broad-spiking cells. In addition, narrow-spiking cells tended to have lower thresholds and faster, less variable spike onset latencies than did broad-spiking cells, as well as higher firing rates. Tonal responses of narrow-spiking cells also showed broader ranges for both frequency and amplitude compared with broad-spiking neurons and were more apt to have V-shaped tuning curves compared with broad-spiking neurons, which tended to have complex (discontinuous), columnar, or O-shaped frequency response areas. In response to playback of conspecific songs, narrow-spiking neurons showed high firing rates and low levels of selectivity whereas broad-spiking neurons responded sparsely and selectively. Broad-spiking neurons in which tones failed to evoke a response showed greater song selectivity compared with those with a clear tuning curve. These results are consistent with the idea that narrow-spiking neurons represent putative fast-spiking interneurons, which may provide a source of intrinsic inhibition that contributes to the more selective tuning in broad-spiking cells. NEW & NOTEWORTHY The response properties of neurons in higher level regions of auditory cortex in songbirds are of fundamental interest because processing in such regions is essential for vocal learning and plasticity and for auditory perception of complex sounds. 
Within a region of secondary auditory cortex, neurons with narrow spikes exhibited high firing rates to playback of both tones and multiple conspecific songs, whereas broad-spiking neurons responded sparsely and selectively to both tones and songs.
Affiliation(s)
- Sarah W Bottjer
- Section of Neurobiology, University of Southern California, Los Angeles, California
- Andrew A Ronald
- Section of Neurobiology, University of Southern California, Los Angeles, California
- Tiara Kaye
- Section of Neurobiology, University of Southern California, Los Angeles, California
19
Gervain J, Geffen MN. Efficient Neural Coding in Auditory and Speech Perception. Trends Neurosci 2019; 42:56-65. [PMID: 30297085] [PMCID: PMC6542557] [DOI: 10.1016/j.tins.2018.09.004]
Abstract
Speech has long been recognized as 'special'. Here, we suggest that one of the reasons for speech being special is that our auditory system has evolved to encode it in an efficient, optimal way. The theory of efficient neural coding argues that our perceptual systems have evolved to encode environmental stimuli in the most efficient way. Mathematically, this can be achieved if the optimally efficient codes match the statistics of the signals they represent. Experimental evidence suggests that the auditory code is optimal in this mathematical sense: statistical properties of speech closely match response properties of the cochlea, the auditory nerve, and the auditory cortex. Even more interestingly, these results may be linked to phenomena in auditory and speech perception.
Affiliation(s)
- Judit Gervain
- Laboratoire Psychologie de la Perception, Université Paris Descartes, Paris, France; Laboratoire Psychologie de la Perception, CNRS, Paris, France
- Maria N Geffen
- Departments of Otorhinolaryngology, Neuroscience and Neurology, University of Pennsylvania, Philadelphia, PA, USA.
20
Zhu S, Allitt B, Samuel A, Lui L, Rosa MGP, Rajan R. Distributed representation of vocalization pitch in marmoset primary auditory cortex. Eur J Neurosci 2018; 49:179-198. [PMID: 30307660] [DOI: 10.1111/ejn.14204]
Abstract
The pitch of vocalizations is a key communication feature aiding recognition of individuals and separation of sound sources in complex acoustic environments. The neural representation of the pitch of periodic sounds is well defined. However, many natural sounds, like complex vocalizations, contain rich frequency content that is aperiodic or not strictly periodic and/or includes high-frequency components, yet still evoke a strong sense of pitch. Indeed, such sounds are the rule, not the exception, but the cortical mechanisms for encoding their pitch are unknown. We investigated how neurons in the high-frequency representation of primary auditory cortex (A1) of marmosets encoded changes in the pitch of four natural vocalizations, two centred around a dominant frequency similar to the neuron's best sensitivity and two around a much lower dominant frequency. Pitch was varied over a fine range that marmosets can use to differentiate individuals. The responses of most high-frequency A1 neurons were sensitive to pitch changes in all four vocalizations, with a smaller proportion of neurons showing pitch-insensitive responses. Classically defined excitatory drive, from the neuron's monaural frequency response area, predicted responses to changes in vocalization pitch in <30% of neurons, suggesting that most of the observed pitch tuning is not a simple frequency-level response. Moreover, 39% of A1 neurons showed call-invariant pitch tuning. These results suggest that distributed activity across A1 can represent the pitch of natural sounds over a fine, functionally relevant range, and exhibits pitch tuning for vocalizations within and outside the classical neural tuning area.
Affiliation(s)
- Shuyu Zhu
- Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, Victoria, Australia; Centre of Excellence in Integrative Brain Function, Australian Research Council, Clayton, Victoria, Australia
- Ben Allitt
- Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, Victoria, Australia
- Anil Samuel
- Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, Victoria, Australia
- Leo Lui
- Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, Victoria, Australia; Centre of Excellence in Integrative Brain Function, Australian Research Council, Clayton, Victoria, Australia
- Marcello G P Rosa
- Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, Victoria, Australia; Centre of Excellence in Integrative Brain Function, Australian Research Council, Clayton, Victoria, Australia
- Ramesh Rajan
- Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, Victoria, Australia
21
Abstract
How the cerebral cortex encodes auditory features of biologically important sounds, including speech and music, is one of the most important questions in auditory neuroscience. The pursuit to understand related neural coding mechanisms in the mammalian auditory cortex can be traced back several decades to the early exploration of the cerebral cortex. Significant progress in this field has been made in the past two decades with new technical and conceptual advances. This article reviews the progress and challenges in this area of research.
Affiliation(s)
- Xiaoqin Wang
- Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21205, USA
22
Angeloni C, Geffen MN. Contextual modulation of sound processing in the auditory cortex. Curr Opin Neurobiol 2018; 49:8-15. [PMID: 29125987] [PMCID: PMC6037899] [DOI: 10.1016/j.conb.2017.10.012]
Abstract
In everyday acoustic environments, we navigate through a maze of sounds that possess a complex spectrotemporal structure, spanning many frequencies and exhibiting temporal modulations that differ across frequency bands. Our auditory system needs to encode the same sounds efficiently in a variety of different contexts, while preserving the ability to separate complex sounds within an acoustic scene. Recent work in auditory neuroscience has made substantial progress in studying how sounds are represented in the auditory system under different contexts, demonstrating that auditory processing of seemingly simple acoustic features, such as frequency and time, is highly dependent on co-occurring acoustic and behavioral stimuli. Through a combination of electrophysiological recordings, computational analysis, and behavioral techniques, recent research has identified interactions between the external spectral and temporal context of stimuli and the internal behavioral state.
Affiliation(s)
- C Angeloni
- Department of Otorhinolaryngology: HNS, Department of Neuroscience, Psychology Graduate Group, Computational Neuroscience Initiative, University of Pennsylvania, Philadelphia, PA, United States
- M N Geffen
- Department of Otorhinolaryngology: HNS, Department of Neuroscience, Psychology Graduate Group, Computational Neuroscience Initiative, University of Pennsylvania, Philadelphia, PA, United States.
23
Bianchi F, Hjortkjær J, Santurette S, Zatorre RJ, Siebner HR, Dau T. Subcortical and cortical correlates of pitch discrimination: Evidence for two levels of neuroplasticity in musicians. Neuroimage 2017; 163:398-412. [DOI: 10.1016/j.neuroimage.2017.07.057]
24
Subplate neurons are the first cortical neurons to respond to sensory stimuli. Proc Natl Acad Sci U S A 2017; 114:12602-12607. [PMID: 29114043] [DOI: 10.1073/pnas.1710793114]
Abstract
In utero experience, such as maternal speech in humans, can shape later perception, although the underlying cortical substrate is unknown. In adult mammals, ascending thalamocortical projections target layer 4, and the onset of sensory responses in the cortex is thought to be dependent on the onset of thalamocortical transmission to layer 4 as well as the ear and eye opening. In developing animals, thalamic fibers do not target layer 4 but instead target subplate neurons deep in the developing white matter. We investigated if subplate neurons respond to sensory stimuli. Using electrophysiological recordings in young ferrets, we show that auditory cortex neurons respond to sound at very young ages, even before the opening of the ears. Single unit recordings showed that auditory responses emerged first in cortical subplate neurons. Subsequently, responses appeared in the future thalamocortical input layer 4, and sound-evoked spike latencies were longer in layer 4 than in subplate, consistent with the known relay of thalamic information to layer 4 by subplate neurons. Electrode array recordings show that early auditory responses demonstrate a nascent topographic organization, suggesting that topographic maps emerge before the onset of spiking responses in layer 4. Together our results show that sound-evoked activity and topographic organization of the cortex emerge earlier and in a different layer than previously thought. Thus, early sound experience can activate and potentially sculpt subplate circuits before permanent thalamocortical circuits to layer 4 are present, and disruption of this early sensory activity could be utilized for early diagnosis of developmental disorders.
25
A Crucial Test of the Population Separation Model of Auditory Stream Segregation in Macaque Primary Auditory Cortex. J Neurosci 2017; 37:10645-10655. [PMID: 28954867] [DOI: 10.1523/jneurosci.0792-17.2017]
Abstract
An important aspect of auditory scene analysis is auditory stream segregation: the organization of sound sequences into perceptual streams reflecting different sound sources in the environment. Several models have been proposed to account for stream segregation. According to the "population separation" (PS) model, alternating ABAB tone sequences are perceived as a single stream or as two separate streams when "A" and "B" tones activate the same or distinct frequency-tuned neuronal populations in primary auditory cortex (A1), respectively. A crucial test of the PS model is whether it can account for the observation that A and B tones are generally perceived as a single stream when presented synchronously, rather than in an alternating pattern, even if they are widely separated in frequency. Here, we tested the PS model by recording neural responses to alternating (ALT) and synchronous (SYNC) tone sequences in A1 of male macaques. Consistent with predictions of the PS model, a greater effective tonotopic separation of A and B tone responses was observed under ALT than under SYNC conditions, thus paralleling the perceptual organization of the sequences. While other models of stream segregation, such as temporal coherence, are not excluded by the present findings, we conclude that PS is sufficient to account for the perceptual organization of ALT and SYNC sequences and thus remains a viable model of auditory stream segregation. SIGNIFICANCE STATEMENT: According to the population separation (PS) model of auditory stream segregation, sounds that activate the same or separate neural populations in primary auditory cortex (A1) are perceived as one or two streams, respectively. It is unclear, however, whether the PS model can account for the perception of sounds as a single stream when they are presented synchronously. Here, we tested the PS model by recording neural responses to alternating (ALT) and synchronous (SYNC) tone sequences in macaque A1. A greater effective separation of tonotopic activity patterns was observed under ALT than under SYNC conditions, thus paralleling the perceptual organization of the sequences. Based on these findings, we conclude that PS remains a plausible neurophysiological model of auditory stream segregation.
26
Nonlinear processing of a multicomponent communication signal by combination-sensitive neurons in the anuran inferior colliculus. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2017; 203:749-772. [DOI: 10.1007/s00359-017-1195-3]
27
Zhou C, Tao C, Zhang G, Yan S, Wang L, Zhou Y, Xiong Y. Unbalanced synaptic inputs underlying multi-peaked frequency selectivity in rat auditory cortex. Eur J Neurosci 2017; 45:1078-1084. [PMID: 28231378] [DOI: 10.1111/ejn.13548]
Abstract
By measuring the frequency selectivity at different intensities in the primary auditory cortex of adult rats, we found that a small group of cortical neurons can exhibit relatively weak but robust selectivity at multiple frequencies that are different from the most preferred frequency. Both in vivo multi-unit recordings (26/93 recordings) and single-unit recordings (16/137 neurons) confirmed that the preferred frequencies are periodic and have an averaged bandwidth (BW) of 0.3-0.4 octaves, which leads to multi-peaked frequency selectivity. Interestingly, the averaged bandwidth of the ripple in the frequency response tuning curve was invariant with the sound intensity. An investigation of the synaptic currents in vivo also revealed similar multi-peaked frequency selectivity for both excitation and inhibition. While the excitatory and inhibitory inputs were relatively balanced for most frequencies, the ratio between excitation and inhibition at the peak and valley of each ripple was highly unbalanced. Since this multi-peaked frequency selectivity can be observed at the synaptic, single-cell, and population levels, our results reveal a potential mechanism underlying the multi-peaked pattern of frequency selectivity in the primary auditory cortex.
Affiliation(s)
- Chang Zhou, Can Tao, Guangwei Zhang, Sumei Yan, Lijuan Wang, Yi Zhou, Ying Xiong
- Department of Neurobiology, Chongqing Key Laboratory of Neurobiology, Third Military Medical University, 30 GaoTanyan Street, Chongqing, 400038, China
28
Eliades SJ, Wang X. Contributions of sensory tuning to auditory-vocal interactions in marmoset auditory cortex. Hear Res 2017; 348:98-111. [PMID: 28284736 DOI: 10.1016/j.heares.2017.03.001] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/09/2016] [Revised: 02/27/2017] [Accepted: 03/02/2017] [Indexed: 01/30/2023]
Abstract
During speech, humans continuously listen to their own vocal output to ensure accurate communication. Such self-monitoring is thought to require the integration of information about the feedback of vocal acoustics with internal motor control signals. The neural mechanism of this auditory-vocal interaction remains largely unknown at the cellular level. Previous studies in naturally vocalizing marmosets have demonstrated diverse neural activities in auditory cortex during vocalization, dominated by a vocalization-induced suppression of neural firing. How underlying auditory tuning properties of these neurons might contribute to this sensory-motor processing is unknown. In the present study, we quantitatively compared marmoset auditory cortex neural activities during vocal production with those during passive listening. We found that neurons excited during vocalization were readily driven by passive playback of vocalizations and other acoustic stimuli. In contrast, neurons suppressed during vocalization exhibited more diverse playback responses, including responses that were not predictable by auditory tuning properties. These results suggest that vocalization-related excitation in auditory cortex is largely a sensory-driven response. In contrast, vocalization-induced suppression is not well predicted by a neuron's auditory responses, supporting the prevailing theory that internal motor-related signals contribute to the auditory-vocal interaction observed in auditory cortex.
Affiliation(s)
- Steven J Eliades
- Department of Otorhinolaryngology-Head and Neck Surgery, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
- Xiaoqin Wang
- Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
29
Harmonic template neurons in primate auditory cortex underlying complex sound processing. Proc Natl Acad Sci U S A 2017; 114:E840-E848. [PMID: 28096341 DOI: 10.1073/pnas.1607519114] [Citation(s) in RCA: 46] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Harmonicity is a fundamental element of music, speech, and animal vocalizations. How the auditory system extracts harmonic structures embedded in complex sounds and uses them to form a coherent unitary entity is not fully understood. Despite the prevalence of sounds rich in harmonic structures in our everyday hearing environment, it has remained largely unknown what neural mechanisms are used by the primate auditory cortex to extract these biologically important acoustic structures. In this study, we discovered a unique class of harmonic template neurons in the core region of auditory cortex of a highly vocal New World primate, the common marmoset (Callithrix jacchus), across the entire hearing frequency range. Marmosets have a rich vocal repertoire and a similar hearing range to that of humans. Responses of these neurons show nonlinear facilitation to harmonic complex sounds over inharmonic sounds, selectivity for particular harmonic structures beyond two-tone combinations, and sensitivity to harmonic number and spectral regularity. Our findings suggest that the harmonic template neurons in auditory cortex may play an important role in processing sounds with harmonic structures, such as animal vocalizations, human speech, and music.
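The nonlinear facilitation to harmonic over inharmonic sounds reported here is commonly summarized with a normalized contrast index between the two response rates. A small sketch under assumed conventions; the index definition, the jittering scheme, and all numbers are illustrative, not the authors' analysis:

```python
def facilitation_index(r_harmonic, r_inharmonic):
    """Normalized contrast: +1 = fully harmonic-preferring, 0 = no preference."""
    total = r_harmonic + r_inharmonic
    return (r_harmonic - r_inharmonic) / total if total else 0.0

def harmonic_complex(f0, n):
    """Frequencies (Hz) of the first n harmonics of fundamental f0."""
    return [f0 * k for k in range(1, n + 1)]

def jittered_complex(f0, n, jitter):
    """Inharmonic control: shift each component above the first by a fixed
    fraction of f0, destroying spectral regularity."""
    return [f0] + [f0 * k + jitter * f0 for k in range(2, n + 1)]

print(harmonic_complex(440.0, 4))      # [440.0, 880.0, 1320.0, 1760.0]
print(facilitation_index(30.0, 10.0))  # 0.5
```

A cell responding at 30 spikes/s to the harmonic complex but only 10 spikes/s to its jittered counterpart would score 0.5 on this index.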
30
Akimov AG, Egorova MA, Ehret G. Spectral summation and facilitation in on- and off-responses for optimized representation of communication calls in mouse inferior colliculus. Eur J Neurosci 2017; 45:440-459. [PMID: 27891665 DOI: 10.1111/ejn.13488] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2016] [Revised: 11/17/2016] [Accepted: 11/21/2016] [Indexed: 12/01/2022]
Abstract
Selectivity for processing of species-specific vocalizations and communication sounds has often been associated with the auditory cortex. The midbrain inferior colliculus, however, is the first center in the auditory pathways of mammals to integrate acoustic information processed in separate nuclei and channels in the brainstem and, therefore, could significantly contribute to enhancing the perception of species' communication sounds. Here, we used natural wriggling calls of mouse pups, which communicate need for maternal care to adult females, and 15 further synthesized sounds to test the hypothesis that neurons in the central nucleus of the inferior colliculus of adult females optimize their response rates for reproduction of the three main harmonics (formants) of wriggling calls. The results confirmed the hypothesis, showing that average response rates, as recorded extracellularly from single units, were highest and spectral facilitation most effective for both onset and offset responses to the call and to call models with three resolved frequencies according to critical bands in perception. In addition, the general on- and/or off-response enhancement in almost half of the 122 investigated neurons favors not only the perception of single calls but also of vocalization rhythm. In summary, our study provides strong evidence that critical-band-resolved frequency components within a communication sound increase the probability of its perception by boosting the signal-to-noise ratio of neural response rates within the inferior colliculus by at least 20% (our criterion for facilitation). These mechanisms, including enhancement of rhythm coding, are generally favorable to the processing of other animal and human vocalizations, including the formants of speech sounds.
Affiliation(s)
- Alexander G Akimov
- Sechenov Institute of Evolutionary Physiology and Biochemistry, Russian Academy of Sciences, St. Petersburg, Russia
- Marina A Egorova
- Sechenov Institute of Evolutionary Physiology and Biochemistry, Russian Academy of Sciences, St. Petersburg, Russia
- Günter Ehret
- Institute of Neurobiology, University of Ulm, D-89069 Ulm, Germany
31
Happel MFK, Ohl FW. Compensating Level-Dependent Frequency Representation in Auditory Cortex by Synaptic Integration of Corticocortical Input. PLoS One 2017; 12:e0169461. [PMID: 28046062 PMCID: PMC5207691 DOI: 10.1371/journal.pone.0169461] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2016] [Accepted: 12/16/2016] [Indexed: 11/20/2022] Open
Abstract
Robust perception of auditory objects over a large range of sound intensities is a fundamental feature of the auditory system. However, firing characteristics of single neurons across the entire auditory system, like the frequency tuning, can change significantly with stimulus intensity. Physiological correlates of level-constancy of auditory representations hence should be manifested on the level of larger neuronal assemblies or population patterns. In this study we have investigated how information of frequency and sound level is integrated on the circuit-level in the primary auditory cortex (AI) of the Mongolian gerbil. We used a combination of pharmacological silencing of corticocortically relayed activity and laminar current source density (CSD) analysis. Our data demonstrate that with increasing stimulus intensities progressively lower frequencies lead to the maximal impulse response within cortical input layers at a given cortical site inherited from thalamocortical synaptic inputs. We further identified a temporally precise intercolumnar synaptic convergence of early thalamocortical and horizontal corticocortical inputs. Later tone-evoked activity in upper layers showed a preservation of broad tonotopic tuning across sound levels without shifts towards lower frequencies. Synaptic integration within corticocortical circuits may hence contribute to a level-robust representation of auditory information on a neuronal population level in the auditory cortex.
Affiliation(s)
- Max F. K. Happel
- Leibniz Institute for Neurobiology, D-39118 Magdeburg, Germany
- Institute of Biology, Otto-von-Guericke-University, D-39120 Magdeburg, Germany
- Frank W. Ohl
- Leibniz Institute for Neurobiology, D-39118 Magdeburg, Germany
- Institute of Biology, Otto-von-Guericke-University, D-39120 Magdeburg, Germany
- Center for Behavioral Brain Sciences (CBBS), Magdeburg, Germany
32
Johnson LA, Della Santina CC, Wang X. Selective Neuronal Activation by Cochlear Implant Stimulation in Auditory Cortex of Awake Primate. J Neurosci 2016; 36:12468-12484. [PMID: 27927962 PMCID: PMC5148231 DOI: 10.1523/jneurosci.1699-16.2016] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2016] [Revised: 10/05/2016] [Accepted: 10/10/2016] [Indexed: 11/21/2022] Open
Abstract
Despite the success of cochlear implants (CIs) in human populations, most users perform poorly in noisy environments and music and tonal language perception. How CI devices engage the brain at the single neuron level has remained largely unknown, in particular in the primate brain. By comparing neuronal responses with acoustic and CI stimulation in marmoset monkeys unilaterally implanted with a CI electrode array, we discovered that CI stimulation was surprisingly ineffective at activating many neurons in auditory cortex, particularly in the hemisphere ipsilateral to the CI. Further analyses revealed that the CI-nonresponsive neurons were narrowly tuned to frequency and sound level when probed with acoustic stimuli; such neurons likely play a role in perceptual behaviors requiring fine frequency and level discrimination, tasks that CI users find especially challenging. These findings suggest potential deficits in central auditory processing of CI stimulation and provide important insights into factors responsible for poor CI user performance in a wide range of perceptual tasks. SIGNIFICANCE STATEMENT The cochlear implant (CI) is the most successful neural prosthetic device to date and has restored hearing in hundreds of thousands of deaf individuals worldwide. However, despite its huge successes, CI users still face many perceptual limitations, and the brain mechanisms involved in hearing through CI devices remain poorly understood. By directly comparing single-neuron responses to acoustic and CI stimulation in auditory cortex of awake marmoset monkeys, we discovered that neurons unresponsive to CI stimulation were sharply tuned to frequency and sound level. Our results point out a major deficit in central auditory processing of CI stimulation and provide important insights into mechanisms underlying the poor CI user performance in a wide range of perceptual tasks.
Affiliation(s)
- Charles C Della Santina
- Departments of Biomedical Engineering and Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland 21205
33
Eliades SJ, Miller CT. Marmoset vocal communication: Behavior and neurobiology. Dev Neurobiol 2016; 77:286-299. [DOI: 10.1002/dneu.22464] [Citation(s) in RCA: 52] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2016] [Revised: 09/27/2016] [Accepted: 10/08/2016] [Indexed: 11/10/2022]
Affiliation(s)
- Steven J. Eliades
- Department of Otorhinolaryngology-Head and Neck Surgery, University of Pennsylvania Perelman School of Medicine, Philadelphia, Pennsylvania
- Cory T. Miller
- Cortical Systems and Behavior Laboratory, University of California San Diego, San Diego, California
34
Proverbio AM, Orlandi A, Pisanu F. Brain processing of consonance/dissonance in musicians and controls: a hemispheric asymmetry revisited. Eur J Neurosci 2016; 44:2340-56. [DOI: 10.1111/ejn.13330] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2016] [Revised: 06/28/2016] [Accepted: 07/01/2016] [Indexed: 11/28/2022]
Affiliation(s)
- Alice Mado Proverbio
- Milan-Mi Center for Neuroscience, Department of Psychology, University of Milano-Bicocca, piazza dell'Ateneo Nuovo 1, U6 Building, Milan, Italy
- Andrea Orlandi
- Milan-Mi Center for Neuroscience, Department of Psychology, University of Milano-Bicocca, piazza dell'Ateneo Nuovo 1, U6 Building, Milan, Italy
- Francesca Pisanu
- Milan-Mi Center for Neuroscience, Department of Psychology, University of Milano-Bicocca, piazza dell'Ateneo Nuovo 1, U6 Building, Milan, Italy
35
Sloas DC, Zhuo R, Xue H, Chambers AR, Kolaczyk E, Polley DB, Sen K. Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex. eNeuro 2016; 3:ENEURO.0124-16.2016. [PMID: 27622211 PMCID: PMC5008244 DOI: 10.1523/eneuro.0124-16.2016] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2016] [Revised: 07/28/2016] [Accepted: 08/07/2016] [Indexed: 11/21/2022] Open
Abstract
Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore single or few dimensions due to the difficulty of determining optimal stimuli in a high dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates using multidimensional stimuli in auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices.
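The core logic of the GAM analysis described above is that an additive model captures each stimulus dimension separately, so any response the additive prediction cannot explain is evidence of interaction across dimensions. A toy illustration of that residual test; this is not the authors' fitting code (a real GAM estimates smooth terms by penalized regression), and all response values and effect sizes are invented:

```python
def additive_prediction(f_effect, l_effect, baseline):
    """Best additive account: response(f, l) ~ baseline + g(f) + h(l)."""
    return {(f, l): baseline + gf + hl
            for f, gf in f_effect.items()
            for l, hl in l_effect.items()}

# Toy responses over 2 frequencies x 2 sound levels (spikes/s, illustrative)
response = {("low", "soft"): 5, ("low", "loud"): 10,
            ("high", "soft"): 10, ("high", "loud"): 40}

f_effect = {"low": 0.0, "high": 5.0}   # marginal frequency effects
l_effect = {"soft": 0.0, "loud": 5.0}  # marginal level effects
pred = additive_prediction(f_effect, l_effect, baseline=5.0)

# Residual interaction: nonzero only where dimensions combine nonlinearly
interaction = {k: response[k] - pred[k] for k in response}
print(interaction)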
Affiliation(s)
- David C. Sloas
- Hearing Research Center and Department of Biomedical Engineering, Boston University, Boston, Massachusetts 02215
- Ran Zhuo
- Department of Mathematics and Statistics, Boston University, Boston, Massachusetts 02215
- Hongbo Xue
- Department of Mathematics and Statistics, Boston University, Boston, Massachusetts 02215
- Anna R. Chambers
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, Massachusetts 02114
- Eric Kolaczyk
- Department of Mathematics and Statistics, Boston University, Boston, Massachusetts 02215
- Daniel B. Polley
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, Massachusetts 02114
- Department of Otolaryngology, Harvard Medical School, Boston, Massachusetts 02115
- Kamal Sen
- Hearing Research Center and Department of Biomedical Engineering, Boston University, Boston, Massachusetts 02215
36
Williamson RS, Ahrens MB, Linden JF, Sahani M. Input-Specific Gain Modulation by Local Sensory Context Shapes Cortical and Thalamic Responses to Complex Sounds. Neuron 2016; 91:467-81. [PMID: 27346532 PMCID: PMC4961224 DOI: 10.1016/j.neuron.2016.05.041] [Citation(s) in RCA: 37] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2015] [Revised: 10/25/2015] [Accepted: 05/12/2016] [Indexed: 01/19/2023]
Abstract
Sensory neurons are customarily characterized by one or more linearly weighted receptive fields describing sensitivity in sensory space and time. We show that in auditory cortical and thalamic neurons, the weight of each receptive field element depends on the pattern of sound falling within a local neighborhood surrounding it in time and frequency. Accounting for this change in effective receptive field with spectrotemporal context improves predictions of both cortical and thalamic responses to stationary complex sounds. Although context dependence varies among neurons and across brain areas, there are strong shared qualitative characteristics. In a spectrotemporally rich soundscape, sound elements modulate neuronal responsiveness more effectively when they coincide with sounds at other frequencies, and less effectively when they are preceded by sounds at similar frequencies. This local-context-driven lability in the representation of complex sounds, a modulation of "input-specific gain" rather than "output gain", may be a widespread motif in sensory processing.
Highlights:
- Gain of neuronal responses to sound components varies with immediate acoustic context
- "Contextual gain fields" can be estimated from neuronal responses to complex sounds
- Coincident sound at different frequencies boosts gain in cortex and thalamus
- Preceding sound at similar frequency reduces gain for longer in cortex than thalamus
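The input-specific gain idea can be caricatured in a few lines: each receptive-field element's contribution is scaled by a gain that rises with coincident energy at other frequencies and falls with preceding energy at similar frequencies. A deliberately minimal sketch; the functional form, coefficients, and function names are all invented for illustration, not the paper's fitted contextual gain fields:

```python
def contextual_gain(coincident_energy, preceding_energy,
                    boost=0.5, suppress=0.8):
    """Gain > 1 when flanked by coincident sound at other frequencies,
    gain < 1 after recent sound at similar frequencies.
    Coefficients are illustrative, not fitted values."""
    g = 1.0 + boost * coincident_energy - suppress * preceding_energy
    return max(g, 0.0)  # gain cannot go negative

def weighted_input(stimulus_energy, coincident_energy, preceding_energy, w):
    """Effective drive from one receptive-field element with weight w."""
    return w * contextual_gain(coincident_energy, preceding_energy) * stimulus_energy

print(weighted_input(1.0, 1.0, 0.0, w=2.0))  # coincident context boosts drive
print(weighted_input(1.0, 0.0, 1.0, w=2.0))  # preceding similar-frequency sound suppresses it
```

The same stimulus element thus drives the neuron differently depending on its local spectrotemporal surround, which is the qualitative effect the study demonstrates.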
Affiliation(s)
- Ross S Williamson
- Gatsby Computational Neuroscience Unit, University College London, London W1T 4JG, UK; Centre for Mathematics and Physics in the Life Sciences and Experimental Biology, University College London, London WC1E 6BT, UK
- Misha B Ahrens
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138, USA; Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, UK
- Jennifer F Linden
- Ear Institute, University College London, London WC1X 8EE, UK; Department of Neuroscience, Physiology and Pharmacology, University College London, London WC1E 6BT, UK
- Maneesh Sahani
- Gatsby Computational Neuroscience Unit, University College London, London W1T 4JG, UK
37
Neural Mechanisms Underlying Musical Pitch Perception and Clinical Applications Including Developmental Dyslexia. Curr Neurol Neurosci Rep 2016; 15:51. [PMID: 26092314 DOI: 10.1007/s11910-015-0574-9] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
Music production and perception invoke a complex set of cognitive functions that rely on the integration of sensorimotor, cognitive, and emotional pathways. Pitch is a fundamental perceptual attribute of sound and a building block for both music and speech. Although the cerebral processing of pitch is not completely understood, recent advances in imaging and electrophysiology have provided insight into the functional and anatomical pathways of pitch processing. This review examines the current understanding of pitch processing and behavioral and neural variations that give rise to difficulties in pitch processing, and potential applications of music education for language processing disorders such as dyslexia.
38
Intskirveli I, Joshi A, Vizcarra-Chacón BJ, Metherate R. Spectral breadth and laminar distribution of thalamocortical inputs to A1. J Neurophysiol 2016; 115:2083-94. [PMID: 26888102 DOI: 10.1152/jn.00887.2015] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2015] [Accepted: 02/15/2016] [Indexed: 11/22/2022] Open
Abstract
The GABAergic agonist muscimol is used to inactivate brain regions in order to reveal afferent inputs in isolation. However, muscimol's use in primary auditory cortex (A1) has been questioned on the grounds that it may unintentionally suppress thalamocortical inputs. We tested whether muscimol can preferentially suppress cortical, but not thalamocortical, circuits in urethane-anesthetized mice. We recorded tone-evoked current source density profiles to determine frequency receptive fields (RFs) for three current sinks: the "layer 4" sink (fastest onset, middle-layer sink) and current sinks 100 μm above ("layer 2/3") and 300 μm below ("layer 5/6") the main input. We first determined effects of muscimol dose (0.01-1 mM) on the characteristic frequency (CF) tone-evoked layer 4 sink. An "ideal" dose (100 μM) had no effect on CF-evoked sink onset latency or initial response but reduced peak amplitude by >80%, implying inhibition of intracortical, but not thalamocortical, activity. We extended the analysis to current sinks in layers 2/3 and 5/6 and for all three sinks determined RF breadth (quarter-octave steps, 20 dB above CF threshold). Muscimol reduced RF breadth 42% in layer 2/3 (from 2.4 ± 0.14 to 1.4 ± 0.11 octaves), 14% in layer 4 (2.2 ± 0.12 to 1.9 ± 0.10 octaves), and not at all in layer 5/6 (1.8 ± 0.10 to 1.7 ± 0.12 octaves). The results provide an estimate of the laminar and spectral extent of thalamocortical projections and support the hypothesis that intracortical pathways contribute to spectral integration in A1.
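The RF-breadth measurement described above probes frequencies in quarter-octave steps at a fixed level above CF threshold, so breadth in octaves is simply the span of responsive steps. A small sketch of that bookkeeping; the response values and the threshold criterion are illustrative, not the recorded CSD data:

```python
STEP_OCTAVES = 0.25  # quarter-octave probe spacing, as in the recordings

def rf_breadth_octaves(responses, criterion):
    """Breadth = octave span between the lowest and highest responsive probe.
    `responses` holds one value per quarter-octave step around CF."""
    responsive = [i for i, r in enumerate(responses) if r >= criterion]
    if not responsive:
        return 0.0
    return (responsive[-1] - responsive[0]) * STEP_OCTAVES

# Toy sink amplitudes at 9 probe frequencies (arbitrary units, illustrative)
pre_muscimol  = [1, 4, 6, 9, 10, 8, 7, 3, 1]
post_muscimol = [0, 1, 5, 8, 10, 7, 4, 1, 0]
print(rf_breadth_octaves(pre_muscimol, criterion=3))
print(rf_breadth_octaves(post_muscimol, criterion=3))
```

With these toy values the breadth narrows from 1.5 to 1.0 octaves after silencing, mimicking the layer-dependent RF narrowing the study reports.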
Affiliation(s)
- Irakli Intskirveli
- Department of Neurobiology and Behavior and Center for Hearing Research, University of California, Irvine, California
- Anar Joshi
- Department of Neurobiology and Behavior and Center for Hearing Research, University of California, Irvine, California
- Raju Metherate
- Department of Neurobiology and Behavior and Center for Hearing Research, University of California, Irvine, California
39
George SS, Shivdasani MN, Wise AK, Shepherd RK, Fallon JB. Electrophysiological channel interactions using focused multipolar stimulation for cochlear implants. J Neural Eng 2015; 12:066005. [DOI: 10.1088/1741-2560/12/6/066005] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
40
Abstract
The basis of musical consonance has been debated for centuries without resolution. Three interpretations have been considered: (i) that consonance derives from the mathematical simplicity of small integer ratios; (ii) that consonance derives from the physical absence of interference between harmonic spectra; and (iii) that consonance derives from the advantages of recognizing biological vocalization and human vocalization in particular. Whereas the mathematical and physical explanations are at odds with the evidence that has now accumulated, biology provides a plausible explanation for this central issue in music and audition.
Affiliation(s)
- Daniel L Bowling
- Department of Cognitive Biology, University of Vienna, 1090 Vienna, Austria
- Dale Purves
- Duke Institute for Brain Sciences, Duke University, Durham, NC 27708
41
High-field functional magnetic resonance imaging of vocalization processing in marmosets. Sci Rep 2015; 5:10950. [PMID: 26091254 PMCID: PMC4473644 DOI: 10.1038/srep10950] [Citation(s) in RCA: 40] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2014] [Accepted: 04/29/2015] [Indexed: 11/17/2022] Open
Abstract
Vocalizations are behaviorally critical sounds, and this behavioral importance is reflected in the ascending auditory system, where conspecific vocalizations are increasingly over-represented at higher processing stages. Recent evidence suggests that, in macaques, this increasing selectivity for vocalizations might culminate in a cortical region that is densely populated by vocalization-preferring neurons. Such a region might be a critical node in the representation of vocal communication sounds, underlying the recognition of vocalization type, caller and social context. These results raise the questions of whether cortical specializations for vocalization processing exist in other species, their cortical location, and their relationship to the auditory processing hierarchy. To explore cortical specializations for vocalizations in another species, we performed high-field fMRI of the auditory cortex of a vocal New World primate, the common marmoset (Callithrix jacchus). Using a sparse imaging paradigm, we discovered a caudal-rostral gradient for the processing of conspecific vocalizations in marmoset auditory cortex, with regions of the anterior temporal lobe close to the temporal pole exhibiting the highest preference for vocalizations. These results demonstrate similar cortical specializations for vocalization processing in macaques and marmosets, suggesting that cortical specializations for vocal processing might have evolved before the lineages of these species diverged.
42
Kratz MB, Manis PB. Spatial organization of excitatory synaptic inputs to layer 4 neurons in mouse primary auditory cortex. Front Neural Circuits 2015; 9:17. [PMID: 25972787 PMCID: PMC4413692 DOI: 10.3389/fncir.2015.00017] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2015] [Accepted: 04/07/2015] [Indexed: 12/28/2022] Open
Abstract
Layer 4 (L4) of primary auditory cortex (A1) receives a tonotopically organized projection from the medial geniculate nucleus of the thalamus. However, individual neurons in A1 respond to a wider range of sound frequencies than would be predicted by their thalamic input, which suggests the existence of cross-frequency intracortical networks. We used laser scanning photostimulation and uncaging of glutamate in brain slices of mouse A1 to characterize the spatial organization of intracortical inputs to L4 neurons. Slices were prepared to include the entire tonotopic extent of A1. We find that L4 neurons receive local vertically organized (columnar) excitation from layers 2 through 6 (L6) and horizontally organized excitation primarily from L4 and L6 neurons in regions centered ~300–500 μm caudal and/or rostral to the cell. Excitatory horizontal synaptic connections from layers 2 and 3 were sparse. The origins of horizontal projections from L4 and L6 correspond to regions in the tonotopic map that are approximately an octave away from the target cell location. Such spatially organized lateral connections may contribute to the detection and processing of auditory objects with specific spectral structures.
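The correspondence above between map distance and frequency separation can be made concrete: given a tonotopic gradient, a horizontal offset in micrometers converts directly to an octave offset and hence to the CF at the projection's origin. A minimal sketch; the gradient value (micrometers per octave) and the sign convention are assumptions for illustration, not measurements from the study:

```python
def octaves_at_offset(offset_um, um_per_octave):
    """Tonotopic distance in octaves for a horizontal offset in micrometers."""
    return offset_um / um_per_octave

def source_frequency(target_khz, offset_um, um_per_octave=400.0):
    """CF at the origin of a horizontal projection offset one way (+) or the
    other (-) along the tonotopic axis; the gradient value is an assumption."""
    return target_khz * 2 ** octaves_at_offset(offset_um, um_per_octave)

print(source_frequency(8.0, 400.0))   # projection origin one octave up
print(source_frequency(8.0, -400.0))  # projection origin one octave down
```

Under this assumed 400 um/octave gradient, the ~300-500 um offsets reported above land roughly an octave away from the target cell, consistent with the paper's interpretation.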
Affiliation(s)
- Megan B Kratz
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; The Curriculum in Neurobiology, University of North Carolina, Chapel Hill, NC, USA
- Paul B Manis
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; The Curriculum in Neurobiology, University of North Carolina, Chapel Hill, NC, USA; Department of Cell Biology and Physiology, University of North Carolina, Chapel Hill, NC, USA
43
Montejo N, Noreña AJ. Dynamic representation of spectral edges in guinea pig primary auditory cortex. J Neurophysiol 2015; 113:2998-3012. [PMID: 25744885 PMCID: PMC4416612 DOI: 10.1152/jn.00785.2014] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2014] [Accepted: 03/02/2015] [Indexed: 11/22/2022] Open
Abstract
The central representation of a given acoustic motif is thought to be strongly context dependent, i.e., to rely on the spectrotemporal past and present of the acoustic mixture in which it is embedded. The present study investigated the cortical representation of spectral edges (i.e., where stimulus energy changes abruptly over frequency) and its dependence on stimulus duration and depth of the spectral contrast in guinea pig. We devised a stimulus ensemble composed of random tone pips with or without an attenuated frequency band (AFB) of variable depth. Additionally, the multitone ensemble with AFB was interleaved with periods of silence or with multitone ensembles without AFB. We have shown that the representation of the frequencies near but outside the AFB is greatly enhanced, whereas the representation of frequencies near and inside the AFB is strongly suppressed. These cortical changes depend on the depth of the AFB: although they are maximal for the largest depth of the AFB, they are also statistically significant for depths as small as 10 dB. Finally, the cortical changes are quick, occurring within a few seconds of stimulus ensemble presentation with AFB, and are very labile, disappearing within a few seconds after the presentation without AFB. Overall, this study demonstrates that the representation of spectral edges is dynamically enhanced in the auditory centers. These central changes may have important functional implications, particularly in noisy environments where they could contribute to preserving the central representation of spectral edges.
Affiliation(s)
- Noelia Montejo
- Laboratoire de Neurosciences Intégratives et Adaptatives, Aix Marseille Université, CNRS UMR 7260, Marseille, France
- Arnaud J Noreña
- Laboratoire de Neurosciences Intégratives et Adaptatives, Aix Marseille Université, CNRS UMR 7260, Marseille, France
|
44
|
Abstract
The auditory sense of humans transforms intrinsically senseless pressure waveforms into spectacularly rich perceptual phenomena: the music of Bach or the Beatles, the poetry of Li Bai or Omar Khayyam, or, more prosaically, the sense of a world filled with sound-emitting objects that is so important for those of us lucky enough to have hearing. Whereas the early representations of sounds in the auditory system are based on their physical structure, higher auditory centers are thought to represent sounds in terms of their perceptual attributes. In this symposium, we will illustrate current research into this process using four case studies. We will show how the spectral and temporal properties of sounds are used to bind together, segregate, categorize, and interpret sound patterns on their way to acquiring meaning, with important lessons for other sensory systems as well.
|
45
|
Moerel M, De Martino F, Santoro R, Yacoub E, Formisano E. Representation of pitch chroma by multi-peak spectral tuning in human auditory cortex. Neuroimage 2015; 106:161-9. [PMID: 25479020 PMCID: PMC4388253 DOI: 10.1016/j.neuroimage.2014.11.044] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2014] [Revised: 10/31/2014] [Accepted: 11/20/2014] [Indexed: 01/04/2023] Open
Abstract
Musical notes played at octave intervals (i.e., having the same pitch chroma) are perceived as similar. This well-known perceptual phenomenon lies at the foundation of melody recognition and music perception, yet its neural underpinnings remain largely unknown to date. Using fMRI with high sensitivity and spatial resolution, we examined the contribution of multi-peak spectral tuning to the neural representation of pitch chroma in human auditory cortex in two experiments. In experiment 1, our estimation of population spectral tuning curves from the responses to natural sounds confirmed, with new data, our recent results on the existence of cortical ensemble responses finely tuned to multiple frequencies at one octave distance (Moerel et al., 2013). In experiment 2, we fitted a mathematical model consisting of a pitch chroma and a pitch height component to explain the measured fMRI responses to piano notes. This analysis revealed that the octave-tuned populations, but not other cortical populations, harbored a neural representation of musical notes according to their pitch chroma. These results indicate that responses of auditory cortical populations selectively tuned to multiple frequencies at one octave distance predict well the perceptual similarity of musical notes with the same chroma, beyond the physical (frequency) distance of notes.
Affiliation(s)
- Michelle Moerel
- Department of Radiology, Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN 55455, USA.
- Federico De Martino
- Faculty of Psychology and Neuroscience, Department of Cognitive Neuroscience, Maastricht University, Maastricht, 6200 MD, the Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht University, Maastricht, 6229 EV, the Netherlands
- Roberta Santoro
- Faculty of Psychology and Neuroscience, Department of Cognitive Neuroscience, Maastricht University, Maastricht, 6200 MD, the Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht University, Maastricht, 6229 EV, the Netherlands
- Essa Yacoub
- Department of Radiology, Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN 55455, USA
- Elia Formisano
- Faculty of Psychology and Neuroscience, Department of Cognitive Neuroscience, Maastricht University, Maastricht, 6200 MD, the Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht University, Maastricht, 6229 EV, the Netherlands
|
46
|
|
47
|
Kikuchi Y, Horwitz B, Mishkin M, Rauschecker JP. Processing of harmonics in the lateral belt of macaque auditory cortex. Front Neurosci 2014; 8:204. [PMID: 25100935 PMCID: PMC4104551 DOI: 10.3389/fnins.2014.00204] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2014] [Accepted: 06/30/2014] [Indexed: 11/23/2022] Open
Abstract
Many speech sounds and animal vocalizations contain components, referred to as complex tones, that consist of a fundamental frequency (F0) and higher harmonics. In this study we examined single-unit activity recorded in the core (A1) and lateral belt (LB) areas of auditory cortex in two rhesus monkeys as they listened to pure tones and pitch-shifted conspecific vocalizations (“coos”). The latter consisted of complex-tone segments in which F0 was matched to a corresponding pure-tone stimulus. In both animals, neuronal latencies to pure-tone stimuli at the best frequency (BF) were ~10 to 15 ms longer in LB than in A1. This might be expected, since LB is considered to be at a hierarchically higher level than A1. On the other hand, the latency of LB responses to coos was ~10 to 20 ms shorter than to the corresponding pure-tone BF, suggesting facilitation in LB by the harmonics. This latency reduction by coos was not observed in A1, resulting in similar coo latencies in A1 and LB. Multi-peaked neurons were present in both A1 and LB; however, harmonically-related peaks were observed in LB for both early and late response components, whereas in A1 they were observed only for late components. Our results suggest that harmonic features, such as relationships between specific frequency intervals of communication calls, are processed at relatively early stages of the auditory cortical pathway, but preferentially in LB.
Affiliation(s)
- Yukiko Kikuchi
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA; Brain Imaging and Modeling Section, Voice, Speech and Language Branch, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, MD, USA; Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
- Barry Horwitz
- Brain Imaging and Modeling Section, Voice, Speech and Language Branch, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, MD, USA
- Mortimer Mishkin
- Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
- Josef P Rauschecker
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA
|
48
|
Joachimsthaler B, Uhlmann M, Miller F, Ehret G, Kurt S. Quantitative analysis of neuronal response properties in primary and higher-order auditory cortical fields of awake house mice (Mus musculus). Eur J Neurosci 2014; 39:904-918. [PMID: 24506843 PMCID: PMC4264920 DOI: 10.1111/ejn.12478] [Citation(s) in RCA: 47] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2013] [Revised: 12/10/2013] [Accepted: 12/11/2013] [Indexed: 12/01/2022]
Abstract
Because of its great genetic potential, the mouse (Mus musculus) has become a popular model species for studies on hearing and sound processing along the auditory pathways. Here, we present the first comparative study on the representation of neuronal response parameters to tones in primary and higher-order auditory cortical fields of awake mice. We quantified 12 neuronal properties of tone processing in order to estimate similarities and differences of function between the fields, and to discuss how far auditory cortex (AC) function in the mouse is comparable to that in awake monkeys and cats. Extracellular recordings were made from 1400 small clusters of neurons from cortical layers III/IV in the primary fields AI (primary auditory field) and AAF (anterior auditory field), and the higher-order fields AII (second auditory field) and DP (dorsoposterior field). Field specificity was shown with regard to spontaneous activity, correlation between spontaneous and evoked activity, tone response latency, sharpness of frequency tuning, temporal response patterns (occurrence of phasic responses, phasic-tonic responses, tonic responses, and off-responses), and degree of variation between the characteristic frequency (CF) and the best frequency (BF) (CF-BF relationship). Field similarities were noted as significant correlations between CFs and BFs, V-shaped frequency tuning curves, similar minimum response thresholds and non-monotonic rate-level functions in approximately two-thirds of the neurons. Comparative and quantitative analyses showed that the measured response characteristics were, to various degrees, susceptible to influences of anesthetics. Therefore, studies of neuronal responses in the awake AC are important in order to establish adequate relationships between neuronal data and auditory perception and acoustic response behavior.
Affiliation(s)
- Bettina Joachimsthaler
- Institute of Neurobiology, University of Ulm, 89081 Ulm, Germany
- Systems Neurophysiology, Department of Cognitive Neurology, Werner Reichardt Centre for Integrative Neuroscience, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Michaela Uhlmann
- Institute of Neurobiology, University of Ulm, 89081 Ulm, Germany
- Frank Miller
- Institute of Neurobiology, University of Ulm, 89081 Ulm, Germany
- Günter Ehret
- Institute of Neurobiology, University of Ulm, 89081 Ulm, Germany
- Simone Kurt
- Institute of Neurobiology, University of Ulm, 89081 Ulm, Germany
- Cluster of Excellence “Hearing4all”, Institute of Audioneurotechnology and Hannover Medical School, Department of Experimental Otology, ENT Clinics, 30625 Hannover, Germany
|
49
|
Abstract
A fundamental structural property of sounds encountered in the natural environment is harmonicity. Harmonicity is an essential component of music found in all cultures. It is also a characteristic feature of vocal communication sounds such as human speech and animal vocalizations. Harmonics in sounds are produced by a variety of acoustic generators and resonators in the natural environment, including the vocal apparatuses of humans and animal species as well as musical instruments of many types. We live in an acoustic world full of harmonicity. Given the widespread presence of harmonicity in many aspects of the hearing environment, it is natural to expect it to be reflected in the evolution and development of the auditory systems of both humans and animals, in particular the auditory cortex. Recent neuroimaging and neurophysiology experiments have identified regions of non-primary auditory cortex in humans and non-human primates that respond selectively to harmonic pitches. Accumulating evidence has also shown that neurons in many regions of the auditory cortex exhibit characteristic responses to harmonically related frequencies beyond the range of pitch. Together, these findings suggest that a fundamental organizational principle of auditory cortex is based on harmonicity. Such an organization likely plays an important role in music processing by the brain. It may also form the basis of the preference for particular classes of music and voice sounds.
Affiliation(s)
- Xiaoqin Wang
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Tsinghua-Johns Hopkins Joint Center for Biomedical Engineering Research and Department of Biomedical Engineering, Tsinghua University, Beijing, China
|
50
|
Town SM, Bizley JK. Neural and behavioral investigations into timbre perception. Front Syst Neurosci 2013; 7:88. [PMID: 24312021 PMCID: PMC3826062 DOI: 10.3389/fnsys.2013.00088] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2013] [Accepted: 10/27/2013] [Indexed: 11/23/2022] Open
Abstract
Timbre is the attribute that distinguishes sounds of equal pitch, loudness and duration. It contributes to our perception and discrimination of different vowels and consonants in speech, instruments in music and environmental sounds. Here we begin by reviewing human timbre perception and the spectral and temporal acoustic features that give rise to timbre in speech, musical and environmental sounds. We also consider the perception of timbre by animals, both in the case of human vowels and non-human vocalizations. We then explore the neural representation of timbre, first within the peripheral auditory system and later at the level of the auditory cortex. We examine the neural networks that are implicated in timbre perception and the computations that may be performed in auditory cortex to enable listeners to extract information about timbre. We consider whether single neurons in auditory cortex are capable of representing spectral timbre independently of changes in other perceptual attributes and the mechanisms that may shape neural sensitivity to timbre. Finally, we conclude by outlining some of the questions that remain about the role of neural mechanisms in behavior and consider some potentially fruitful avenues for future research.
|