1
Igamberdiev AU. Reflexive neural circuits and the origin of language and music codes. Biosystems 2024; 246:105346. PMID: 39349135; DOI: 10.1016/j.biosystems.2024.105346.
Abstract
Conscious activity is grounded in reflexive self-awareness in sense perception, through which the codes signifying sensory perceptual events operate and constrain human behavior. These codes grow via the creative generation of hypertextual statements. We apply the model of Vladimir Lefebvre (Lefebvre, V.A., 1987, J. Soc. Biol. Struct. 10, 129-175) to reveal the underlying structures on which the perception and creative development of language and music codes are based. According to this model, the reflexive structure of the conscious subject is grounded in three thermodynamic cycles: the second cycle controls the basic functional cycle, resulting in an internal action that in turn is perceived by the third cycle, which evaluates this action. In this arrangement, the generative language structures are formed and the frequencies of sounds that form musical phrases and patterns are selected. We discuss the participation of certain neural brain structures and the establishment of reflexive neural circuits in the ad hoc transformation of perceptive signals, and show the similarities between the processes of perception and those of biological self-maintenance and morphogenesis. We trace the peculiarities of the temporal encoding of emotions in music and musical creativity, as well as the principles of sharing musical information between performing and perceiving individuals.
Affiliation(s)
- Abir U Igamberdiev
- Department of Biology, Memorial University of Newfoundland, St. John's, NL A1C 5S7, Canada.
2
Mackey CA, Hauser S, Schoenhaut AM, Temghare N, Ramachandran R. Hierarchical differences in the encoding of amplitude modulation in the subcortical auditory system of awake nonhuman primates. J Neurophysiol 2024; 132:1098-1114. PMID: 39140590; PMCID: PMC11427057; DOI: 10.1152/jn.00329.2024.
Abstract
Sinusoidal amplitude modulation (SAM) is a key feature of complex sounds. Psychophysical studies have characterized SAM perception, and neurophysiological studies in anesthetized animals report a transformation from a temporal code in the cochlear nucleus (CN; brainstem) to a rate code in the inferior colliculus (IC; midbrain), but none have used awake animals or nonhuman primates to compare CN and IC coding strategies with modulation-frequency perception. To address this, we recorded single-unit responses and compared derived neurometric measures in the CN and IC to psychometric measures of modulation frequency (MF) discrimination in macaques. IC and CN neurons often exhibited tuned responses to SAM in rate and spike-timing measures of modulation coding. Neurometric thresholds spanned a large range (2-200 Hz ΔMF). The lowest 40% of IC thresholds were less than or equal to psychometric thresholds, regardless of which code was used, whereas CN thresholds were greater than psychometric thresholds. Discrimination at 10-20 Hz could be explained by indiscriminately pooling 30 units in either structure, whereas discrimination at higher MFs was best explained by more selective pooling. This suggests that pooled CN activity was sufficient for AM discrimination. Psychometric and neurometric thresholds decreased as stimulus duration increased, but IC and CN thresholds were higher and more variable than behavioral thresholds at short durations. This slower subcortical temporal integration compared with behavior was consistent with a drift diffusion model that reproduced individual differences in performance and can constrain future neurophysiological studies of temporal integration. These measures provide an account of AM perception at the neurophysiological, computational, and behavioral levels.

NEW & NOTEWORTHY In everyday environments, the brain is tasked with extracting information from sound envelopes, which involves both sensory encoding and perceptual decision-making. Different neural codes for envelope representation have been characterized in midbrain and cortex, but studies of brainstem nuclei such as the cochlear nucleus (CN) have usually been conducted under anesthesia in nonprimate species. Here, we found that subcortical activity in awake monkeys and a biologically plausible perceptual decision-making model accounted for sound envelope discrimination behavior.
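The drift diffusion model invoked in this abstract can be illustrated with a minimal sketch in Python. All parameters below are hypothetical, not the values fitted in the study: evidence accumulates noisily toward a decision bound, and accuracy grows with accumulation time, which is one way slower temporal integration can limit performance at short stimulus durations.

```python
import random

def ddm_trial(drift, noise_sd=1.0, bound=1.0, dt=0.001, max_t=2.0):
    """Accumulate noisy evidence until a bound is hit.
    Returns (choice, reaction_time): choice is +1/-1 for the upper/lower
    bound, or 0 if no bound is reached before max_t seconds."""
    x, t = 0.0, 0.0
    while t < max_t:
        # Euler-Maruyama step: deterministic drift plus Gaussian diffusion.
        x += drift * dt + random.gauss(0.0, noise_sd) * dt ** 0.5
        t += dt
        if x >= bound:
            return +1, t
        if x <= -bound:
            return -1, t
    return 0, t

random.seed(0)
trials = [ddm_trial(drift=2.0) for _ in range(200)]
accuracy = sum(1 for choice, _ in trials if choice == +1) / len(trials)
print(round(accuracy, 2))
```

With a positive drift, most trials terminate at the upper (correct) bound; lowering `max_t` (shorter stimuli) produces more unterminated or error trials, mirroring the duration effect described above.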
Affiliation(s)
- Chase A Mackey
- Neuroscience Graduate Program, Vanderbilt University, Nashville, Tennessee, United States
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, United States
- Samantha Hauser
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, United States
- Adriana M Schoenhaut
- Neuroscience Graduate Program, Vanderbilt University, Nashville, Tennessee, United States
- Namrata Temghare
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, United States
- Ramnarayan Ramachandran
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, United States
3
Funamizu A, Marbach F, Zador AM. Stable sound decoding despite modulated sound representation in the auditory cortex. Curr Biol 2023; 33:4470-4483.e7. PMID: 37802051; PMCID: PMC10665086; DOI: 10.1016/j.cub.2023.09.031.
Abstract
The activity of neurons in the auditory cortex is driven by both sounds and non-sensory context. To investigate the neuronal correlates of non-sensory context, we trained head-fixed mice to perform a two-alternative-choice auditory task in which either reward or stimulus expectation (prior) was manipulated in blocks. Using two-photon calcium imaging to record populations of single neurons in the auditory cortex, we found that both stimulus and reward expectation modulated the activity of these neurons. A linear decoder trained on this population activity could decode stimuli as well as or better than predicted by the animal's performance. Interestingly, the optimal decoder was stable even in the face of variable sensory representations. Neither the context nor the mouse's choice could be reliably decoded from the recorded neural activity. Our findings suggest that, in spite of modulation of auditory cortical activity by task priors, the auditory cortex does not represent sufficient information about these priors to exploit them optimally. Thus, the combination of rapidly changing sensory information with more slowly varying task information required for decisions in this task might be represented in brain regions other than the auditory cortex.
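A linear population decoder of the kind described in this abstract can be sketched as follows. The simulated "neurons" and all numbers here are hypothetical, not the recorded data: each trial's population response is projected onto the difference of the class means and thresholded at the midpoint, a minimal stand-in for the trained linear decoder.

```python
import random

random.seed(1)
N_NEURONS, N_TRIALS = 50, 200
NOISE_SD = 2.0

# Hypothetical tuning: each neuron has a different mean rate for stimulus 0 vs 1.
tuning = [(random.uniform(5, 15), random.uniform(5, 15)) for _ in range(N_NEURONS)]

def population_response(stim):
    """One trial's noisy population response vector to stimulus 0 or 1."""
    return [random.gauss(pref[stim], NOISE_SD) for pref in tuning]

# Fit a minimal linear decoder: weights = difference of the class means.
train = [(population_response(s), s) for s in (0, 1) for _ in range(N_TRIALS)]
means = []
for c in (0, 1):
    rows = [r for r, s in train if s == c]
    means.append([sum(col) / len(rows) for col in zip(*rows)])
w = [m1 - m0 for m0, m1 in zip(means[0], means[1])]
bias = -sum(wi * (m0 + m1) / 2.0 for wi, m0, m1 in zip(w, means[0], means[1]))

def decode(response):
    """Classify a single-trial population vector with the fixed linear readout."""
    return 1 if sum(wi * ri for wi, ri in zip(w, response)) + bias > 0 else 0

held_out = [(population_response(s), s) for s in (0, 1) for _ in range(N_TRIALS)]
accuracy = sum(decode(r) == s for r, s in held_out) / len(held_out)
print(round(accuracy, 2))
```

The key property tested in the study, a decoder that stays accurate while single-neuron responses are modulated, corresponds here to the fixed `w` generalizing to held-out trials.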
Affiliation(s)
- Akihiro Funamizu
- Cold Spring Harbor Laboratory, 1 Bungtown Rd, Cold Spring Harbor, NY 11724, USA.
- Fred Marbach
- Cold Spring Harbor Laboratory, 1 Bungtown Rd, Cold Spring Harbor, NY 11724, USA
- Anthony M Zador
- Cold Spring Harbor Laboratory, 1 Bungtown Rd, Cold Spring Harbor, NY 11724, USA
4
Sadaf MUK, Sakib NU, Pannone A, Ravichandran H, Das S. A bio-inspired visuotactile neuron for multisensory integration. Nat Commun 2023; 14:5729. PMID: 37714853; PMCID: PMC10504285; DOI: 10.1038/s41467-023-40686-z.
Abstract
Multisensory integration is a salient feature of the brain which enables better and faster responses compared with unisensory processing, especially when the unisensory cues are weak. Specialized neurons that receive convergent input from two or more sensory modalities are responsible for such multisensory integration. Solid-state devices that can emulate the response of these multisensory neurons can advance neuromorphic computing and bridge the gap between artificial and natural intelligence. Here, we introduce an artificial visuotactile neuron, based on the integration of a photosensitive monolayer MoS2 memtransistor and a triboelectric tactile sensor, which captures the three essential features of multisensory integration: super-additive response, the inverse effectiveness effect, and temporal congruency. We have also realized a circuit that can encode visuotactile information into digital spiking events, with the probability of spiking determined by the strength of the visual and tactile cues. We believe that our comprehensive demonstration of a bio-inspired multisensory visuotactile neuron and spike-encoding circuitry will advance the field of neuromorphic computing, which has thus far primarily focused on unisensory intelligence and information processing.
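The super-additive response and inverse effectiveness effect named in this abstract are commonly quantified with a multisensory enhancement index (combined response relative to the best unisensory response). A minimal sketch, with hypothetical response values rather than the device's measurements:

```python
def enhancement_index(r_visual, r_tactile, r_combined):
    """Multisensory enhancement (%) relative to the best unisensory response.
    Values > 100% indicate a super-additive combined response."""
    best_unisensory = max(r_visual, r_tactile)
    return 100.0 * (r_combined - best_unisensory) / best_unisensory

# Hypothetical spike counts: weak cues combine super-additively,
# strong cues combine with little extra gain (inverse effectiveness).
weak = enhancement_index(r_visual=2.0, r_tactile=3.0, r_combined=9.0)
strong = enhancement_index(r_visual=20.0, r_tactile=30.0, r_combined=33.0)
print(weak, strong)   # 200.0 10.0
```

The larger index for the weak-cue condition is exactly the inverse effectiveness effect: enhancement is greatest when the unisensory cues are least effective.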
Affiliation(s)
- Najam U Sakib
- Engineering Science and Mechanics, Penn State University, University Park, PA, 16802, USA
- Andrew Pannone
- Engineering Science and Mechanics, Penn State University, University Park, PA, 16802, USA
- Saptarshi Das
- Engineering Science and Mechanics, Penn State University, University Park, PA, 16802, USA.
- Electrical Engineering, Penn State University, University Park, PA, 16802, USA.
- Materials Science and Engineering, Penn State University, University Park, PA, 16802, USA.
- Materials Research Institute, Penn State University, University Park, PA, 16802, USA.
5
Funamizu A, Marbach F, Zador AM. Stable sound decoding despite modulated sound representation in the auditory cortex. bioRxiv 2023:2023.01.31.526457 [Preprint]. PMID: 37745428; PMCID: PMC10515783; DOI: 10.1101/2023.01.31.526457.
Abstract
The activity of neurons in the auditory cortex is driven by both sounds and non-sensory context. To investigate the neuronal correlates of non-sensory context, we trained head-fixed mice to perform a two-alternative-choice auditory task in which either reward or stimulus expectation (prior) was manipulated in blocks. Using two-photon calcium imaging to record populations of single neurons in auditory cortex, we found that both stimulus and reward expectation modulated the activity of these neurons. A linear decoder trained on this population activity could decode stimuli as well as or better than predicted by the animal's performance. Interestingly, the optimal decoder was stable even in the face of variable sensory representations. Neither the context nor the mouse's choice could be reliably decoded from the recorded neural activity. Our findings suggest that, in spite of modulation of auditory cortical activity by task priors, auditory cortex does not represent sufficient information about these priors to exploit them optimally. Decisions in this task therefore require that rapidly changing sensory information be combined with more slowly varying task information extracted and represented in brain regions other than auditory cortex.
Affiliation(s)
- Akihiro Funamizu
- Cold Spring Harbor Laboratory, 1 Bungtown Rd, Cold Spring Harbor, NY 11724, USA
- Present address: Institute for Quantitative Biosciences, the University of Tokyo, 1-1-1 Yayoi, Bunkyo-ku, Tokyo, 1130032, Japan
- Present address: Department of Life Sciences, Graduate School of Arts and Sciences, the University of Tokyo, 3-8-1 Komaba, Meguro-ku, Tokyo, 1538902, Japan
- Fred Marbach
- Cold Spring Harbor Laboratory, 1 Bungtown Rd, Cold Spring Harbor, NY 11724, USA
- Present address: The Francis Crick Institute, 1 Midland Rd, NW1 4AT London, UK
- Anthony M Zador
- Cold Spring Harbor Laboratory, 1 Bungtown Rd, Cold Spring Harbor, NY 11724, USA
6
Lestang JH, Cai H, Averbeck BB, Cohen YE. Functional network properties of the auditory cortex. Hear Res 2023; 433:108768. PMID: 37075536; PMCID: PMC10205700; DOI: 10.1016/j.heares.2023.108768.
Abstract
The auditory system transforms auditory stimuli from the external environment into perceptual auditory objects. Recent studies have focused on the contribution of the auditory cortex to this transformation. Other studies have yielded important insights into the contributions of neural activity in the auditory cortex to cognition and decision-making. However, despite this important work, the relationship between auditory-cortex activity and behavior/perception has not been fully elucidated. Two of the more important gaps in our understanding are (1) the specific and differential contributions of different fields of the auditory cortex to auditory perception and behavior and (2) the way networks of auditory neurons impact and facilitate auditory information processing. Here, we focus on recent work in nonhuman-primate models of hearing, review work related to these gaps, and put forth challenges to further our understanding of how single-unit activity and network activity in different cortical fields contribute to behavior and perception.
Affiliation(s)
- Jean-Hugues Lestang
- Departments of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Huaizhen Cai
- Departments of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Bruno B Averbeck
- Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Bethesda, MD 20892, USA.
- Yale E Cohen
- Departments of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Neuroscience, University of Pennsylvania, Philadelphia, PA 19104, USA
- Bioengineering, University of Pennsylvania, Philadelphia, PA 19104, USA
7
Mackey CA, Dylla M, Bohlen P, Grigsby J, Hrnicek A, Mayfield J, Ramachandran R. Hierarchical differences in the encoding of sound and choice in the subcortical auditory system. J Neurophysiol 2023; 129:591-608. PMID: 36651913; PMCID: PMC9988536; DOI: 10.1152/jn.00439.2022.
Abstract
Detection of sounds is a fundamental function of the auditory system. Although studies of auditory cortex have gained substantial insight into detection performance using behaving animals, previous subcortical studies have mostly taken place under anesthesia, in passively listening animals, or have not measured performance at threshold. These limitations preclude direct comparisons between neuronal responses and behavior. To address this, we simultaneously measured auditory detection performance and single-unit activity in the inferior colliculus (IC) and cochlear nucleus (CN) in macaques. The spontaneous activity and response variability of CN neurons were higher than those observed for IC neurons. Signal detection theoretic methods revealed that the magnitude of responses of IC neurons provided more reliable estimates of psychometric threshold and slope compared with the responses of single CN neurons. However, pooling small populations of CN neurons provided reliable estimates of psychometric threshold and slope, suggesting sufficient information in CN population activity. Trial-by-trial correlations between spike count and behavioral response emerged 50-75 ms after sound onset for most IC neurons, but for few neurons in the CN. These results highlight hierarchical differences between neurometric-psychometric correlations in CN and IC and have important implications for how subcortical information could be decoded.

NEW & NOTEWORTHY The cerebral cortex is widely recognized to play a role in sensory processing and decision-making. Accounts of the neural basis of auditory perception and its dysfunction are based on this idea. However, significantly less attention has been paid to midbrain and brainstem structures in this regard. Here, we find that subcortical auditory neurons represent stimulus information sufficient for detection and predict behavioral choice on a trial-by-trial basis.
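The signal detection theoretic methods referred to in this abstract typically rest on the area under the ROC curve computed from spike-count distributions on signal versus no-signal trials. A minimal sketch with hypothetical spike counts (not the recorded data):

```python
def roc_area(noise_counts, signal_counts):
    """Probability that a randomly drawn signal-trial count exceeds a
    noise-trial count (ties count half) -- equivalently, the area under
    the ROC curve from signal detection theory."""
    wins = 0.0
    for s in signal_counts:
        for n in noise_counts:
            if s > n:
                wins += 1.0
            elif s == n:
                wins += 0.5
    return wins / (len(signal_counts) * len(noise_counts))

# Hypothetical single-neuron spike counts on catch (no-sound) vs. tone trials.
noise = [2, 3, 1, 4, 2, 3]
tone = [5, 6, 4, 7, 5, 6]
auc = roc_area(noise, tone)
print(round(auc, 3))
```

An `auc` near 0.5 means the counts are indistinguishable (chance detection); values near 1.0 correspond to reliable neurometric detection, and sweeping sound level traces out a neurometric function comparable to the psychometric one.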
Affiliation(s)
- Chase A Mackey
- Neuroscience Graduate Program, Vanderbilt University, Nashville, Tennessee, United States
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, United States
- Margit Dylla
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, United States
- Peter Bohlen
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, United States
- Jason Grigsby
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, United States
- Andrew Hrnicek
- Department of Neurobiology and Anatomy, Wake Forest University Health Sciences, Winston-Salem, North Carolina, United States
- Jackson Mayfield
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, United States
- Ramnarayan Ramachandran
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, United States
8
Melchor J, Vergara J, Figueroa T, Morán I, Lemus L. Formant-Based Recognition of Words and Other Naturalistic Sounds in Rhesus Monkeys. Front Neurosci 2021; 15:728686. PMID: 34776842; PMCID: PMC8586527; DOI: 10.3389/fnins.2021.728686.
Abstract
In social animals, identifying sounds is critical for communication. In humans, the acoustic parameters involved in speech recognition, such as the formant frequencies derived from the resonance of the supralaryngeal vocal tract, have been well documented. However, how formants contribute to recognizing learned sounds in non-human primates remains unclear. To determine this, we trained two rhesus monkeys to discriminate target and non-target sounds presented in sequences of 1–3 sounds. After training, we performed three experiments: (1) we tested the monkeys' accuracy and reaction times during the discrimination of various acoustic categories; (2) their ability to discriminate morphing sounds; and (3) their ability to identify sounds consisting of formant 1 (F1), formant 2 (F2), or F1 and F2 (F1F2) pass filters. Our results indicate that macaques can learn diverse sounds and can discriminate morphed sounds and the F1 and F2 formants, suggesting that information from only a few acoustic parameters suffices for recognizing complex sounds. We anticipate that future neurophysiological experiments in this paradigm may help elucidate how formants contribute to the recognition of sounds.
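The F1/F2 pass-filter manipulation described in this abstract amounts to isolating the energy a sound carries in its formant bands. A rough sketch in Python, using a hypothetical two-formant signal rather than the study's stimuli:

```python
import math

def band_energy(signal, sr, f_lo, f_hi):
    """Total spectral energy of `signal` in the band [f_lo, f_hi] Hz, computed
    by direct correlation with sinusoids at each DFT bin (fine for short,
    illustrative signals; a real implementation would use an FFT)."""
    n = len(signal)
    energy = 0.0
    for k in range(n // 2 + 1):
        freq = k * sr / n
        if f_lo <= freq <= f_hi:
            re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            energy += re * re + im * im
    return energy

# Hypothetical vowel-like sound: "formant" peaks at 500 Hz (F1) and 1500 Hz (F2),
# with F1 carrying more energy.
sr, n = 4000, 200
vowel = [math.sin(2 * math.pi * 500 * i / sr) + 0.7 * math.sin(2 * math.pi * 1500 * i / sr)
         for i in range(n)]

f1_energy = band_energy(vowel, sr, 300, 900)     # F1 pass band
f2_energy = band_energy(vowel, sr, 1100, 1900)   # F2 pass band
print(f1_energy > f2_energy)
```

An F1- or F2-filtered stimulus, in these terms, is the signal reconstructed from only the bins inside one of the two bands.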
Affiliation(s)
- Jonathan Melchor
- Department of Cognitive Neuroscience, Institute of Cell Physiology, Universidad Nacional Autónoma de México, Mexico City, Mexico
- José Vergara
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, United States
- Tonatiuh Figueroa
- Department of Cognitive Neuroscience, Institute of Cell Physiology, Universidad Nacional Autónoma de México, Mexico City, Mexico
- Isaac Morán
- Department of Cognitive Neuroscience, Institute of Cell Physiology, Universidad Nacional Autónoma de México, Mexico City, Mexico
- Luis Lemus
- Department of Cognitive Neuroscience, Institute of Cell Physiology, Universidad Nacional Autónoma de México, Mexico City, Mexico
9
Abstract
Working memory (WM) is the ability to maintain and manipulate information in the conscious mind over a timescale of seconds. This ability is thought to be maintained through the persistent discharges of neurons in a network of brain areas centered on the prefrontal cortex, as evidenced by neurophysiological recordings in nonhuman primates, though both the localization and the neural basis of WM has been a matter of debate in recent years. Neural correlates of WM are evident in species other than primates, including rodents and corvids. A specialized network of excitatory and inhibitory neurons, aided by neuromodulatory influences of dopamine, is critical for the maintenance of neuronal activity. Limitations in WM capacity and duration, as well as its enhancement during development, can be attributed to properties of neural activity and circuits. Changes in these factors can be observed through training-induced improvements and in pathological impairments. WM thus provides a prototypical cognitive function whose properties can be tied to the spiking activity of brain neurons. © 2021 American Physiological Society. Compr Physiol 11:1-41, 2021.
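The persistent discharges described in this abstract are often modeled as attractor dynamics in a recurrent excitatory circuit: a transient cue pushes activity to a self-sustaining high state. A minimal one-unit rate-model sketch (all parameters hypothetical, not drawn from the review):

```python
import math

def simulate(recurrent_w, cue_amp, steps=500, dt=0.01, tau=0.1):
    """Firing-rate model: tau * dr/dt = -r + sigmoid(w*r + input - theta).
    A transient cue drives the unit for the first 50 steps, then turns off;
    theta = 3.0 is a hypothetical activation threshold."""
    r = 0.0
    for step in range(steps):
        cue = cue_amp if step < 50 else 0.0
        drive = recurrent_w * r + cue - 3.0
        r += (dt / tau) * (-r + 1.0 / (1.0 + math.exp(-drive)))
    return r

with_recurrence = simulate(recurrent_w=6.0, cue_amp=5.0)  # activity persists after the cue
no_recurrence = simulate(recurrent_w=0.0, cue_amp=5.0)    # activity decays back to baseline
print(round(with_recurrence, 2), round(no_recurrence, 2))
```

With strong enough recurrent excitation the unit is bistable, so the memory of the cue survives the delay; weakening `recurrent_w` (e.g., as a stand-in for reduced dopaminergic or excitatory support) abolishes the persistent state.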
Affiliation(s)
- Russell J Jaffe
- Department of Neurobiology & Anatomy, Wake Forest School of Medicine, Winston-Salem, North Carolina, USA
- Christos Constantinidis
- Department of Neurobiology & Anatomy, Wake Forest School of Medicine, Winston-Salem, North Carolina, USA
- Department of Biomedical Engineering, Vanderbilt University, Nashville, Tennessee, USA
- Neuroscience Program, Vanderbilt University, Nashville, Tennessee, USA
- Department of Ophthalmology and Visual Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, USA
10
Yao JD, Sanes DH. Temporal Encoding is Required for Categorization, But Not Discrimination. Cereb Cortex 2021; 31:2886-2897. PMID: 33429423; DOI: 10.1093/cercor/bhaa396.
Abstract
Core auditory cortex (AC) neurons encode slow fluctuations of acoustic stimuli with temporally patterned activity. However, whether temporal encoding is necessary to explain auditory perceptual skills remains uncertain. Here, we recorded from gerbil AC neurons while the animals discriminated between a 4-Hz amplitude modulation (AM) broadband noise and AM rates >4 Hz. We found that a proportion of neurons possessed neural thresholds, based on spike pattern or spike count, that were better than the recorded session's behavioral threshold, suggesting that spike count could provide sufficient information for this perceptual task. A population decoder that relied on temporal information outperformed a decoder that relied on spike count alone, but the spike-count decoder still remained sufficient to explain average behavioral performance. This leaves open the possibility that more demanding perceptual judgments require temporal information. Thus, we asked whether accurate classification of different AM rates between 4 and 12 Hz required the information contained in AC temporal discharge patterns. Indeed, accurate classification of these AM stimuli depended on the inclusion of temporal information rather than spike count alone. Overall, our results compare two different representations of time-varying acoustic features that can be accessed by downstream circuits required for perceptual judgments.
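The contrast between spike-count and temporal (spike-pattern) decoders tested in this study can be illustrated schematically. The simulated phase-locked neuron and all parameters below are hypothetical: when two AM rates evoke the same mean firing rate, only a decoder with access to the temporal pattern can separate them.

```python
import math
import random

random.seed(2)
DUR, BIN = 1.0, 0.05           # 1-s trials, 50-ms analysis bins
N_BINS = int(DUR / BIN)

def trial(am_rate):
    """Binned spike counts for a hypothetical neuron phase-locked to AM at
    `am_rate` Hz. The mean rate (20 spikes/s) is identical for every AM rate;
    only the distribution of spikes across bins differs."""
    counts = []
    for b in range(N_BINS):
        t = (b + 0.5) * BIN
        rate = 20.0 * (1.0 + math.sin(2 * math.pi * am_rate * t))  # spikes/s
        p = rate * BIN / 5.0                                       # per-draw probability
        counts.append(sum(random.random() < p for _ in range(5)))
    return counts

def classify(counts, templates, use_timing):
    """Nearest-template classifier using either the full temporal pattern
    (use_timing=True) or the total spike count alone."""
    def dist(a, b):
        if use_timing:
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return (sum(a) - sum(b)) ** 2
    return min(templates, key=lambda r: dist(counts, templates[r]))

rates = (4, 8)
templates = {r: [sum(col) / 50.0 for col in zip(*(trial(r) for _ in range(50)))]
             for r in rates}
tests = [(trial(r), r) for r in rates for _ in range(100)]
acc_timing = sum(classify(c, templates, True) == r for c, r in tests) / len(tests)
acc_count = sum(classify(c, templates, False) == r for c, r in tests) / len(tests)
print(acc_timing, acc_count)
```

Because the mean rate is matched across AM rates here, the count decoder sits near chance while the pattern decoder performs well; in the study's detection task the rates were not matched, which is why spike count alone sufficed there.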
Affiliation(s)
- Justin D Yao
- Center for Neural Science, New York University, New York, NY 10003, USA
- Dan H Sanes
- Center for Neural Science, New York University, New York, NY 10003, USA
- Department of Psychology, New York University, New York, NY 10003, USA
- Department of Biology, New York University, New York, NY 10003, USA
- Neuroscience Institute, NYU Langone Medical Center, New York University, New York, NY 10016, USA
11
Morán I, Perez-Orive J, Melchor J, Figueroa T, Lemus L. Auditory decisions in the supplementary motor area. Prog Neurobiol 2021; 202:102053. PMID: 33957182; DOI: 10.1016/j.pneurobio.2021.102053.
Abstract
In human speech and communication across various species, recognizing and categorizing sounds is fundamental for the selection of appropriate behaviors. However, how does the brain decide which action to perform based on sounds? We explored whether the supplementary motor area (SMA), responsible for linking sensory information to motor programs, also accounts for auditory-driven decision making. To this end, we trained two rhesus monkeys to discriminate between numerous naturalistic sounds and words learned as target (T) or non-target (nT) categories. We found that, at both the single-neuron and population levels, the SMA performs decision-related computations that transition from auditory to movement representations in this task. Moreover, we demonstrated that the neural population is organized orthogonally during the auditory and movement periods, implying that the SMA performs different computations in each. In conclusion, our results suggest that the SMA integrates acoustic information in order to form categorical signals that drive behavior.
Affiliation(s)
- Isaac Morán
- Department of Cognitive Neuroscience, Institute of Cell Physiology, Universidad Nacional Autónoma de México (UNAM), 04510, Mexico City, Mexico
- Javier Perez-Orive
- Instituto Nacional de Rehabilitacion "Luis Guillermo Ibarra Ibarra", Mexico City, Mexico
- Jonathan Melchor
- Department of Cognitive Neuroscience, Institute of Cell Physiology, Universidad Nacional Autónoma de México (UNAM), 04510, Mexico City, Mexico
- Tonatiuh Figueroa
- Department of Cognitive Neuroscience, Institute of Cell Physiology, Universidad Nacional Autónoma de México (UNAM), 04510, Mexico City, Mexico
- Luis Lemus
- Department of Cognitive Neuroscience, Institute of Cell Physiology, Universidad Nacional Autónoma de México (UNAM), 04510, Mexico City, Mexico.
12
Frühholz S, Dietziker J, Staib M, Trost W. Neurocognitive processing efficiency for discriminating human non-alarm rather than alarm scream calls. PLoS Biol 2021; 19:e3000751. PMID: 33848299; PMCID: PMC8043411; DOI: 10.1371/journal.pbio.3000751.
Abstract
Across many species, scream calls signal the affective significance of events to other agents. Scream calls were often thought to be of a generic alarming and fearful nature, signaling potential threats, and to be recognized instantaneously, involuntarily, and accurately by perceivers. However, scream calls are more diverse in their affective signaling than fearful alarm alone, and thus the broader sociobiological relevance of various scream types is unclear. Here we used 4 different psychoacoustic, perceptual decision-making, and neuroimaging experiments in humans to demonstrate the existence of at least 6 psychoacoustically distinctive types of scream calls of both alarming and non-alarming nature, rather than only screams caused by fear or aggression. Second, based on perceptual and processing sensitivity measures for decision-making during scream recognition, we found that alarm screams (with some exceptions) were discriminated the worst overall, were responded to the slowest, and were associated with lower perceptual sensitivity for their recognition than non-alarm screams. Third, the neural processing of alarm compared with non-alarm screams during an implicit processing task elicited only minimal neural signal and connectivity in perceivers, contrary to the frequent assumption of a threat-processing bias in the primate neural system. These findings show that scream calls are more diverse in their signaling and communicative nature in humans than previously assumed and that, in contrast to the commonly observed threat-processing bias in perceptual discrimination and neural processing, non-alarm screams, and positive screams in particular, are processed more efficiently in speeded discriminations and implicit neural processing.
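The perceptual sensitivity measures referred to in this abstract are conventionally computed as d' from hit and false-alarm rates. A minimal sketch with hypothetical confusion counts (not the study's data):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity d' = z(hit rate) - z(false-alarm rate),
    with a standard 1/(2N) correction for rates of exactly 0 or 1."""
    z = NormalDist().inv_cdf
    def rate(k, n):
        return min(max(k / n, 0.5 / n), 1.0 - 0.5 / n)
    return (z(rate(hits, hits + misses))
            - z(rate(false_alarms, false_alarms + correct_rejections)))

# Hypothetical recognition counts for two scream categories.
alarm = d_prime(hits=30, misses=20, false_alarms=15, correct_rejections=35)
non_alarm = d_prime(hits=45, misses=5, false_alarms=5, correct_rejections=45)
print(round(alarm, 2), round(non_alarm, 2))
```

The higher d' for the non-alarm category in this toy example mirrors the direction of the study's finding: greater perceptual sensitivity for non-alarm than alarm screams.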
Affiliation(s)
- Sascha Frühholz
- Cognitive and Affective Neuroscience Unit, University of Zurich, Zurich, Switzerland
- Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland
- Department of Psychology, University of Oslo, Oslo, Norway
- Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Zurich, Switzerland
- Joris Dietziker
- Cognitive and Affective Neuroscience Unit, University of Zurich, Zurich, Switzerland
- Matthias Staib
- Cognitive and Affective Neuroscience Unit, University of Zurich, Zurich, Switzerland
- Wiebke Trost
- Cognitive and Affective Neuroscience Unit, University of Zurich, Zurich, Switzerland
13
Mohn JL, Downer JD, O'Connor KN, Johnson JS, Sutter ML. Choice-related activity and neural encoding in primary auditory cortex and lateral belt during feature-selective attention. J Neurophysiol 2021; 125:1920-1937. PMID: 33788616; DOI: 10.1152/jn.00406.2020.
Abstract
Selective attention is necessary to sift through, form a coherent percept of, and make behavioral decisions on the vast amount of information present in most sensory environments. How and where selective attention is employed in cortex, and how this perceptual information then informs the relevant behavioral decisions, are still not well understood. Studies probing selective attention and decision-making in visual cortex have been enlightening as to how sensory attention might work in that modality; whether similar mechanisms are employed in auditory attention is not yet clear. Therefore, we trained rhesus macaques on a feature-selective attention task in which they switched between reporting changes in temporal (amplitude modulation, AM) and spectral (carrier bandwidth) features of a broadband noise stimulus. We investigated how the encoding of these features by single neurons in primary (A1) and secondary (middle lateral belt, ML) auditory cortex was affected by the different attention conditions. Neurons in A1 and ML showed mixed selectivity to the sound and task features, and we found no difference in AM encoding between the attention conditions. However, choice-related activity in both A1 and ML neurons shifted between attentional conditions. This finding suggests that choice-related activity in auditory cortex does not simply reflect motor preparation or action and supports the relationship between reported choice-related activity and the decision and perceptual process.

NEW & NOTEWORTHY We recorded from primary and secondary auditory cortex while monkeys performed a nonspatial feature-attention task. Both areas exhibited rate-based choice-related activity. The manifestation of choice-related activity was attention dependent, suggesting that choice-related activity in auditory cortex does not simply reflect arousal or motor influences but relates to the specific perceptual choice.
Collapse
Affiliation(s)
- Jennifer L Mohn
- Center for Neuroscience, University of California, Davis, California; Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
| | - Joshua D Downer
- Center for Neuroscience, University of California, Davis, California; Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California
| | - Kevin N O'Connor
- Center for Neuroscience, University of California, Davis, California; Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
| | - Jeffrey S Johnson
- Center for Neuroscience, University of California, Davis, California; Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
| | - Mitchell L Sutter
- Center for Neuroscience, University of California, Davis, California; Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
| |
Collapse
|
14
|
Beitel RE, Schreiner CE, Vollmer M. Spectral plasticity in monkey primary auditory cortex limits performance generalization in a temporal discrimination task. J Neurophysiol 2020; 124:1798-1814. [PMID: 32997564 DOI: 10.1152/jn.00278.2020] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Auditory experience and behavioral training can modify perceptual performance. However, the consequences of temporal perceptual learning for temporal and spectral neural processing remain unclear. Specifically, the attributes of neural plasticity that underlie task generalization in behavioral performance remain uncertain. To assess the relationship between behavioral and neural plasticity, we evaluated neuronal temporal processing and spectral tuning in primary auditory cortex (AI) of anesthetized owl monkeys trained to discriminate increases in the envelope frequency (e.g., 4-Hz standard vs. >5-Hz targets) of sinusoidally amplitude-modulated (SAM) 1-kHz or 2-kHz carriers. Behavioral and neuronal performance generalization was evaluated for carriers ranging from 0.5 kHz to 8 kHz. Psychophysical thresholds revealed high SAM discrimination acuity for carriers from one octave below to ∼0.6 octave above the trained carrier frequency. However, generalization of SAM discrimination learning progressively declined for carrier frequencies >0.6 octave above the trained carrier frequency. Neural responses in AI showed that SAM discrimination training resulted in 1) increases in temporal modulation preference, especially at carriers close to the trained frequency, 2) narrowing of spectral tuning for neurons with characteristic frequencies near the trained carrier frequency, potentially limiting spectral generalization of temporal training effects, and 3) enhancement of firing-rate contrast for rewarded versus nonrewarded SAM frequencies, providing a potential cue for behavioral temporal discrimination near the trained carrier frequency. 
Our findings suggest that temporal training at a specific spectral location sharpens local frequency tuning, thus confining the training effects to a narrow frequency range and limiting generalization of temporal discrimination learning across a wider frequency range. NEW & NOTEWORTHY Monkeys' ability to generalize amplitude modulation discrimination to nontrained carriers was limited to one octave below and 0.6 octave above the trained carrier frequency. Asymmetric generalization was paralleled by sharpening in cortical spectral tuning and enhanced firing-rate contrast between rewarded and nonrewarded SAM stimuli at carriers near the trained frequency. The spectral content of the training stimulus specified spectral and temporal plasticity that may provide a neural substrate for limitations in generalization of temporal discrimination learning.
Collapse
Affiliation(s)
- Ralph E Beitel
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California
| | - Christoph E Schreiner
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California
| | - Maike Vollmer
- Department of Otolaryngology-Head and Neck Surgery, University Hospital Magdeburg, Otto-von-Guericke University, Magdeburg, Germany; Center for Learning and Memory Research, Leibniz Institute for Neurobiology, Magdeburg, Germany
| |
Collapse
|
15
|
Inda M, Hotta K, Oka K. High responsiveness of auditory neurons to specific combination of acoustic features in female songbirds. Eur J Neurosci 2020; 53:1412-1427. [PMID: 33205482 DOI: 10.1111/ejn.15047] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2020] [Revised: 11/11/2020] [Accepted: 11/11/2020] [Indexed: 11/26/2022]
Abstract
The zebra finch (Taeniopygia guttata) is a songbird species in which males sing their unique songs to attract females, who then select their preferred male. The acoustic features in an individual male's song are important for female auditory perception. While the male of this species is a classic model of vocal production, little is known about auditory processing in females. Among the higher auditory brain regions, the caudomedial mesopallium (CMM) and caudomedial nidopallium (NCM) contribute to the female's sound recognition; we therefore extracted the acoustic features that induce neural activity with high detection power in both regions in female finches. A multiple linear regression analysis revealed that neurons were sensitive to mean frequency and Wiener entropy. In addition, we performed an experiment with modified artificial songs and harmonic songs to directly probe neural responsiveness and obtain further evidence for the contribution of these two acoustic features. Finally, we identified a specific ratio combining these two acoustic features that yielded the highest neural responsiveness, and we found that the properties of this sensitivity differ between CMM and NCM. Our results indicate that a mixture of the two acoustic features at a specific ratio is important in the higher auditory regions of female songbirds, and that these two regions differ in how they encode sensitivity to these acoustic features.
Collapse
Affiliation(s)
- Masahiro Inda
- Department of Biosciences and Informatics, Faculty of Science and Technology, Keio University, Yokohama, Japan
| | - Kohji Hotta
- Department of Biosciences and Informatics, Faculty of Science and Technology, Keio University, Yokohama, Japan
| | - Kotaro Oka
- Department of Biosciences and Informatics, Faculty of Science and Technology, Keio University, Yokohama, Japan; Graduate Institute of Medicine, College of Medicine, Kaohsiung Medical University, Kaohsiung, Taiwan; Waseda Research Institute for Science and Engineering, Waseda University, Shinjuku, Tokyo, Japan
| |
Collapse
|
16
|
Lee JH, Wang X, Bendor D. The role of adaptation in generating monotonic rate codes in auditory cortex. PLoS Comput Biol 2020; 16:e1007627. [PMID: 32069272 PMCID: PMC7048304 DOI: 10.1371/journal.pcbi.1007627] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2019] [Revised: 02/28/2020] [Accepted: 01/02/2020] [Indexed: 11/19/2022] Open
Abstract
In primary auditory cortex, slowly repeated acoustic events are represented temporally by the stimulus-locked activity of single neurons. Single-unit studies in awake marmosets (Callithrix jacchus) have shown that a sub-population of these neurons also monotonically increase or decrease their average discharge rate during stimulus presentation for higher repetition rates. Building on a computational single-neuron model that generates stimulus-locked responses with stimulus-evoked excitation followed by strong inhibition, we find that stimulus-evoked short-term depression is sufficient to produce synchronized monotonic positive and negative responses to slowly repeated stimuli. By exploring model robustness and comparing it to other models for adaptation to such stimuli, we conclude that short-term depression best explains our observations in single-unit recordings in awake marmosets. Together, our results show how a simple biophysical mechanism in single neurons can generate complementary neural codes for acoustic stimuli.
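The short-term-depression mechanism this abstract describes can be caricatured in a few lines. The sketch below is illustrative only, not the authors' published model: the depression fraction `u`, recovery constant `tau_rec`, and the exponential-recovery rule are hypothetical stand-ins for the paper's fitted parameters.

```python
# Illustrative sketch (hypothetical parameters, not the published model):
# a synaptic resource x is depleted by each stimulus event and recovers
# exponentially between events, a Tsodyks-Markram-style depression rule.
import math

def mean_drive(rep_rate_hz, duration_s=2.0, u=0.5, tau_rec=0.3):
    """Time-averaged synaptic drive (per second) for a periodic event train."""
    isi = 1.0 / rep_rate_hz
    x = 1.0                                  # available resource, 0..1
    total = 0.0
    for _ in range(int(duration_s * rep_rate_hz)):
        total += u * x                       # drive delivered by this event
        x *= (1.0 - u)                       # depression after the event
        x = 1.0 - (1.0 - x) * math.exp(-isi / tau_rec)  # recovery to next event
    return total / duration_s

rates = (2, 4, 8, 16, 32)
per_second = [mean_drive(r) for r in rates]             # rises with rate
per_event = [d / r for d, r in zip(per_second, rates)]  # falls with rate
```

In this toy, a readout weighting the summed drive per second shows a positive monotonic rate code, while one dominated by the depressed per-event drive shows a negative one, which is one way a single depression mechanism could yield the complementary codes the study reports.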
Collapse
Affiliation(s)
- Jong Hoon Lee
- Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, Maryland, United States of America
- Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London, United Kingdom
| | - Xiaoqin Wang
- Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, Maryland, United States of America
| | - Daniel Bendor
- Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London, United Kingdom
| |
Collapse
|
17
|
Burton JA, Valero MD, Hackett TA, Ramachandran R. The use of nonhuman primates in studies of noise injury and treatment. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2019; 146:3770. [PMID: 31795680 PMCID: PMC6881191 DOI: 10.1121/1.5132709] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/15/2019] [Revised: 07/25/2019] [Accepted: 07/30/2019] [Indexed: 05/10/2023]
Abstract
Exposure to prolonged or high intensity noise increases the risk for permanent hearing impairment. Over several decades, researchers characterized the nature of harmful noise exposures and worked to establish guidelines for effective protection. Recent laboratory studies, primarily conducted in rodent models, indicate that the auditory system may be more vulnerable to noise-induced hearing loss (NIHL) than previously thought, driving renewed inquiries into the harmful effects of noise in humans. To bridge the translational gaps between rodents and humans, nonhuman primates (NHPs) may serve as key animal models. The phylogenetic proximity of NHPs to humans underlies tremendous similarity in many features of the auditory system (genomic, anatomical, physiological, behavioral), all of which are important considerations in the assessment and treatment of NIHL. This review summarizes the literature pertaining to NHPs as models of hearing and noise-induced hearing loss, discusses factors relevant to the translation of diagnostics and therapeutics from animals to humans, and concludes with some of the practical considerations involved in conducting NHP research.
Collapse
Affiliation(s)
- Jane A Burton
- Neuroscience Graduate Program, Vanderbilt University, Nashville, Tennessee 37212, USA
| | - Michelle D Valero
- Eaton Peabody Laboratories at Massachusetts Eye and Ear, Boston, Massachusetts 02114, USA
| | - Troy A Hackett
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee 37232, USA
| | - Ramnarayan Ramachandran
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee 37232, USA
| |
Collapse
|
18
|
Evoked Response Strength in Primary Auditory Cortex Predicts Performance in a Spectro-Spatial Discrimination Task in Rats. J Neurosci 2019; 39:6108-6121. [PMID: 31175214 DOI: 10.1523/jneurosci.0041-18.2019] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2018] [Revised: 04/19/2019] [Accepted: 05/12/2019] [Indexed: 11/21/2022] Open
Abstract
The extent to which the primary auditory cortex (A1) participates in instructing animal behavior remains debated. Although multiple studies have shown A1 activity to correlate with animals' perceptual judgments (Jaramillo and Zador, 2011; Bizley et al., 2013; Rodgers and DeWeese, 2014), others have found no relationship between A1 responses and reported auditory percepts (Lemus et al., 2009; Dong et al., 2011). To address this ambiguity, we performed chronic recordings of evoked local field potentials (eLFPs) in A1 of head-fixed female rats performing a two-alternative forced-choice auditory discrimination task. Rats were presented with two interleaved sequences of pure tones from opposite sides and had to indicate the side from which the higher-frequency target stimulus was played. Animal performance closely correlated (r_rm = 0.68) with the difference between the target and distractor eLFP responses: the more the target response exceeded the distractor response, the better the animals were at identifying the side of the target frequency. Reducing the evoked response of either frequency through stimulus-specific adaptation affected performance in the expected way: target localization accuracy was degraded when the target frequency was adapted and improved when the distractor frequency was adapted. Target frequency eLFPs were stronger on hit trials than on error trials. Our results suggest that the degree to which one stimulus stands out over others within A1 activity may determine its perceptual saliency for the animals and accordingly bias their behavioral choices. SIGNIFICANCE STATEMENT The brain must continuously calibrate the saliency of sensory percepts against their relevance to the current behavioral goal. The inability to ignore irrelevant distractors characterizes a spectrum of human attentional disorders. Meanwhile, the connection between the neural underpinnings of stimulus saliency and sensory decisions remains elusive. Here, we record local field potentials in the primary auditory cortex of rats engaged in auditory discrimination to investigate how the cortical representation of target and distractor stimuli impacts behavior. We find that the amplitude difference between target- and distractor-evoked activity predicts discrimination performance (r_rm = 0.68). Specific adaptation of the target or distractor shifts performance either below or above chance, respectively. It appears that recent auditory history profoundly influences stimulus saliency, biasing animals toward diametrically opposed decisions.
Collapse
|
19
|
Teichert T, Gurnsey K. Formation and decay of auditory short-term memory in the macaque monkey. J Neurophysiol 2019; 121:2401-2415. [PMID: 31017849 DOI: 10.1152/jn.00821.2018] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Echoic memory (EM) is a short-lived, precategorical, and passive form of auditory short-term memory (STM). A key hallmark of EM is its rapid exponential decay with a time constant between 1 and 2 s. It is not clear whether auditory STM in the rhesus, an important model system, shares this rapid exponential decay. To resolve this shortcoming, two rhesus macaques were trained to perform a delayed frequency discrimination task. Discriminability of delayed tones was measured as a function of retention duration and the number of times the standard had been repeated before the target. As in humans, our results show a rapid decline of discriminability with retention duration. In addition, the results suggest a gradual strengthening of discriminability with repetition number. Model-based analyses suggest the presence of two components of auditory STM: a short-lived component with a time constant on the order of 550 ms that most likely corresponds to EM and a more stable memory trace with time constants on the order of 10 s that strengthens with repetition and most likely corresponds to auditory recognition memory. NEW & NOTEWORTHY This is the first detailed quantification of the rapid temporal dynamics of auditory short-term memory in the rhesus. Much of the auditory information in short-term memory is lost within the first couple of seconds. Repeated presentations of a tone strengthen its encoding into short-term memory. Model-based analyses suggest two distinct components: an echoic memory homolog that mediates the rapid decay and a more stable but less detail-rich component that mediates strengthening of the trace with repetition.
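The two-component account above can be written as a sum of two exponentials. In the sketch below, only the time constants (~0.55 s and ~10 s) come from the abstract; the component amplitudes are hypothetical and stand in for the paper's fitted values.

```python
# Sketch of a two-component memory trace (illustrative amplitudes):
# a fast echoic component plus a slower, repetition-strengthened one.
import math

def discriminability(t_s, a_em=1.0, tau_em=0.55, a_rm=0.4, tau_rm=10.0):
    """Predicted discriminability after a retention interval of t_s seconds."""
    return a_em * math.exp(-t_s / tau_em) + a_rm * math.exp(-t_s / tau_rm)

# The echoic component is largely gone after ~2 s; what remains is
# dominated by the slower recognition-memory trace.
trace = {t: discriminability(t) for t in (0.0, 1.0, 2.0, 5.0, 10.0)}
```

In this parameterization, repetition would be modeled as increasing `a_rm`, raising the late plateau without changing the fast initial decay.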
Collapse
Affiliation(s)
- Tobias Teichert
- Department of Psychiatry, University of Pittsburgh, Pittsburgh, Pennsylvania; Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania
| | - Kate Gurnsey
- Department of Psychiatry, University of Pittsburgh, Pittsburgh, Pennsylvania
| |
Collapse
|
20
|
Zhao Z, Ma L, Wang Y, Qin L. A comparison of neural responses in the primary auditory cortex, amygdala, and medial prefrontal cortex of cats during auditory discrimination tasks. J Neurophysiol 2019; 121:785-798. [PMID: 30649979 DOI: 10.1152/jn.00425.2018] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Discriminating biologically relevant sounds is crucial for survival. The neurophysiological mechanisms that mediate this process must register both the reward significance and the physical parameters of acoustic stimuli. Previous experiments have revealed that the primary function of the auditory cortex (AC) is to provide a neural representation of the acoustic parameters of sound stimuli. However, how the brain associates acoustic signals with reward remains unresolved. The amygdala (AMY) and medial prefrontal cortex (mPFC) play key roles in emotion and learning, but it is unknown whether AMY and mPFC neurons are involved in sound discrimination or how the roles of AMY and mPFC neurons differ from those of AC neurons. To examine this, we recorded neural activity in the primary auditory cortex (A1), AMY, and mPFC of cats while they performed a Go/No-go task to discriminate sounds with different temporal patterns. We found that the activity of A1 neurons faithfully coded the temporal patterns of sound stimuli; this activity was not affected by the cats' behavioral choices. The neural representation of stimulus patterns decreased in the AMY, but the neural activity increased when the cats were preparing to discriminate the sound stimuli and waiting for reward. Neural activity in the mPFC did not represent sound patterns, but it showed a clear association with reward and was modulated by the cats' behavioral choices. Our results indicate that the initial auditory representation in A1 is gradually transformed into a stimulus-reward association in the AMY and mPFC to ultimately generate a behavioral choice. NEW & NOTEWORTHY We compared the characteristics of neural activities of primary auditory cortex (A1), amygdala (AMY), and medial prefrontal cortex (mPFC) while cats were performing the same auditory discrimination task.
Our results show that there is a gradual transformation of the neural code from a faithful temporal representation of the stimulus in A1, which is insensitive to behavioral choices, to an association with the predictive reward in AMY and mPFC, which, to some extent, is correlated with the animal's behavioral choice.
Collapse
Affiliation(s)
- Zhenling Zhao
- Jinan Biomedicine R&D Center, School of Life Science and Technology, Jinan University, Guangzhou, People's Republic of China
| | - Lanlan Ma
- Department of Physiology, School of Life Science, China Medical University, Shenyang, Liaoning Province, People's Republic of China
| | - Yifei Wang
- Jinan Biomedicine R&D Center, School of Life Science and Technology, Jinan University, Guangzhou, People's Republic of China
| | - Ling Qin
- Department of Physiology, School of Life Science, China Medical University, Shenyang, Liaoning Province, People's Republic of China
| |
Collapse
|
21
|
Convento S, Wegner-Clemens KA, Yau JM. Reciprocal Interactions Between Audition and Touch in Flutter Frequency Perception. Multisens Res 2019; 32:67-85. [PMID: 31059492 DOI: 10.1163/22134808-20181334] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2018] [Accepted: 11/09/2018] [Indexed: 11/19/2022]
Abstract
In both audition and touch, sensory cues comprising repeating events are perceived either as a continuous signal or as a stream of temporally discrete events (flutter), depending on the events' repetition rate. At high repetition rates (>100 Hz), auditory and tactile cues interact reciprocally in pitch processing. The frequency of a cue experienced in one modality systematically biases the perceived frequency of a cue experienced in the other modality. Here, we tested whether audition and touch also interact in the processing of low-frequency stimulation. We also tested whether multisensory interactions occurred if the stimulation in one modality comprised click trains and the stimulation in the other modality comprised amplitude-modulated signals. We found that auditory cues bias touch and tactile cues bias audition on a flutter discrimination task. Even though participants were instructed to attend to a single sensory modality and ignore the other cue, the flutter rate in the attended modality is perceived to be similar to that of the distractor modality. Moreover, we observed similar interaction patterns regardless of stimulus type and whether the same stimulus types were experienced by both senses. Combined with earlier studies, our results suggest that the nervous system extracts and combines temporal rate information from multisensory environmental signals, regardless of stimulus type, in both the low- and high temporal frequency domains. This function likely reflects the importance of temporal frequency as a fundamental feature of our multisensory experience.
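One common way to caricature the reciprocal bias described above is a weighted average of the attended and distractor flutter rates, so the reported rate is pulled toward the (supposedly ignored) modality. This is a sketch only: the weighting scheme and the weight value below are hypothetical, not the authors' fitted model.

```python
# Weighted-average sketch of the cross-modal flutter bias
# (hypothetical weight; not the fitted model from the study).
def perceived_rate_hz(attended_hz, distractor_hz, w_attended=0.7):
    """Perceived flutter rate when a distractor in the other modality
    pulls the percept toward its own repetition rate."""
    return w_attended * attended_hz + (1.0 - w_attended) * distractor_hz

# Attending to a 20-Hz tactile flutter while a 26-Hz auditory flutter
# plays as a distractor yields a percept between the two rates.
p = perceived_rate_hz(20.0, 26.0)
```

A symmetric call with the roles swapped captures the reciprocal direction of the bias (audition pulled by touch), consistent with the abstract's finding that the interaction runs both ways.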
Collapse
Affiliation(s)
- Silvia Convento
- Department of Neuroscience, Baylor College of Medicine, One Baylor Plaza, Houston, TX 77030, USA
| | - Kira A Wegner-Clemens
- Department of Neurosurgery, Baylor College of Medicine, One Baylor Plaza, Houston, TX 77030, USA
| | - Jeffrey M Yau
- Department of Neuroscience, Baylor College of Medicine, One Baylor Plaza, Houston, TX 77030, USA
| |
Collapse
|
22
|
Christison-Lagay KL, Cohen YE. The Contribution of Primary Auditory Cortex to Auditory Categorization in Behaving Monkeys. Front Neurosci 2018; 12:601. [PMID: 30210282 PMCID: PMC6123543 DOI: 10.3389/fnins.2018.00601] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2018] [Accepted: 08/09/2018] [Indexed: 11/13/2022] Open
Abstract
The specific contribution of core auditory cortex to auditory perception, such as categorization, remains controversial. To identify a contribution of the primary auditory cortex (A1) to perception, we recorded A1 activity while monkeys reported whether a temporal sequence of tone bursts was heard as having a “small” or “large” frequency difference. We found that A1 had frequency-tuned responses that habituated, independent of frequency content, as this auditory sequence unfolded over time. We also found that A1 firing rate was modulated by the monkeys’ reports of “small” and “large” frequency differences; this modulation correlated with their behavioral performance. These findings are consistent with the hypothesis that A1 contributes to the processes underlying auditory categorization.
Collapse
Affiliation(s)
- Kate L Christison-Lagay
- Neuroscience Graduate Group, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
| | - Yale E Cohen
- Departments of Otorhinolaryngology, Neuroscience, and Bioengineering, University of Pennsylvania, Philadelphia, PA, United States
| |
Collapse
|
23
|
Uluç I, Schmidt TT, Wu YH, Blankenburg F. Content-specific codes of parametric auditory working memory in humans. Neuroimage 2018; 183:254-262. [PMID: 30107259 DOI: 10.1016/j.neuroimage.2018.08.024] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2018] [Revised: 08/09/2018] [Accepted: 08/11/2018] [Indexed: 10/28/2022] Open
Abstract
Brain activity in frontal regions has been found to represent frequency information with a parametric code during working memory delay phases. The mental representation of frequencies has furthermore been shown to be modality independent in non-human primate electrophysiology and human EEG studies, suggesting that frontal regions encode quantitative information in a supramodal manner. A recent fMRI study using multivariate pattern analysis (MVPA) supports an overlapping multimodal network for the maintenance of visual and tactile frequency information over frontal and parietal brain regions. The present study extends the investigation of working memory representation of frequency information to the auditory domain. To this aim, we used MVPA on fMRI data recorded during an auditory frequency maintenance task. A support vector regression analysis revealed working memory information in auditory association areas and, consistent with earlier findings of parametric working memory, in a frontoparietal network. A direct comparison to an analogous dataset of vibrotactile parametric working memory revealed an overlap of information coding in prefrontal regions, particularly in the right inferior frontal gyrus. Therefore, our findings indicate that the prefrontal cortex represents frequency-specific working memory content irrespective of the modality, as has now also been shown for the auditory modality.
Collapse
Affiliation(s)
- Işıl Uluç
- Neurocomputation and Neuroimaging Unit (NNU), Department of Education and Psychology, Freie Universität Berlin, 14195 Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, 10099 Berlin, Germany.
| | - Timo Torsten Schmidt
- Neurocomputation and Neuroimaging Unit (NNU), Department of Education and Psychology, Freie Universität Berlin, 14195 Berlin, Germany; Institute of Cognitive Science, University of Osnabrück, 49090 Osnabrück, Germany
| | - Yuan-Hao Wu
- Neurocomputation and Neuroimaging Unit (NNU), Department of Education and Psychology, Freie Universität Berlin, 14195 Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, 10099 Berlin, Germany
| | - Felix Blankenburg
- Neurocomputation and Neuroimaging Unit (NNU), Department of Education and Psychology, Freie Universität Berlin, 14195 Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, 10099 Berlin, Germany
| |
Collapse
|
24
|
Go/No-Go task engagement enhances population representation of target stimuli in primary auditory cortex. Nat Commun 2018; 9:2529. [PMID: 29955046 PMCID: PMC6023878 DOI: 10.1038/s41467-018-04839-9] [Citation(s) in RCA: 41] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2018] [Accepted: 05/22/2018] [Indexed: 11/09/2022] Open
Abstract
Primary sensory cortices are classically considered to extract and represent stimulus features, while association and higher-order areas are thought to carry information about stimulus meaning. Here we show that this information can in fact be found in the neuronal population code of the primary auditory cortex (A1). A1 activity was recorded in awake ferrets while they either passively listened or actively discriminated stimuli in a range of Go/No-Go paradigms, with different sounds and reinforcements. Population-level dimensionality reduction techniques reveal that task engagement induces a shift in stimulus encoding from a sensory to a behaviorally driven representation that specifically enhances the target stimulus in all paradigms. This shift partly relies on task-engagement-induced changes in spontaneous activity. Altogether, we show that A1 population activity bears strong similarities to frontal cortex responses. These findings indicate that primary sensory cortices implement a crucial change in the structure of population activity to extract task-relevant information during behavior. Sensory areas are thought to process stimulus information while higher-order processing occurs in association cortices. Here the authors report that during task engagement population activity in ferret primary auditory cortex shifts away from encoding stimulus features toward detection of the behaviourally relevant targets.
Collapse
|
25
|
Kurata K. Hierarchical Organization Within the Ventral Premotor Cortex of the Macaque Monkey. Neuroscience 2018; 382:127-143. [PMID: 29715510 DOI: 10.1016/j.neuroscience.2018.04.033] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2017] [Revised: 04/20/2018] [Accepted: 04/20/2018] [Indexed: 11/25/2022]
Abstract
Recent studies have revealed that the ventral premotor cortex (PMv) of nonhuman primates plays a pivotal role in various behaviors that require the transformation of sensory cues to appropriate actions. Examples include decision-making based on various sensory cues, preparation for upcoming motor behavior, adaptive sensorimotor transformation, and the generation of motor commands using rapid sensory feedback. Although the PMv has frequently been regarded as a single entity, it can be divided into at least five functionally distinct regions: F4, a dorsal convexity region immediately rostral to the primary motor cortex (M1); F5p, a cortical region immediately rostral to F4, lying within the arcuate sulcus; F5c, a ventral convexity region rostral to F4; and F5a, located in the caudal bank of the arcuate sulcus inferior limb lateral to F5p. Among these, F4 can be further divided into dorsal and ventral subregions (F4d and F4v), which are involved in forelimb and orofacial movements, respectively. F5p contains "mirror neurons" to understand others' actions based on visual and other types of information, and F4d and F5p work together as a functional complex involved in controlling forelimb and eye movements, most efficiently in the execution and completion of coordinated eye-hand movements for reaching and grasping under visual guidance. In contrast, F5c and F5a are hierarchically higher than the F4d, F5p, and F5v complexes, and play a role in decision-making based on various sensory discriminations. Hence, the PMv subregions form a hierarchically organized integral system from decision-making to eye-hand coordination under various behavioral circumstances.
Collapse
Affiliation(s)
- Kiyoshi Kurata
- Department of Physiology, Hirosaki University School of Medicine, Hirosaki 036-8562, Japan.
| |
Collapse
|
26
|
Selezneva E, Oshurkova E, Scheich H, Brosch M. Category-specific neuronal activity in left and right auditory cortex and in medial geniculate body of monkeys. PLoS One 2017; 12:e0186556. [PMID: 29073162 PMCID: PMC5657994 DOI: 10.1371/journal.pone.0186556] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2017] [Accepted: 09/27/2017] [Indexed: 11/19/2022] Open
Abstract
We address the question of whether the auditory cortex of the left and right hemisphere and the auditory thalamus are differently involved in the performance of cognitive tasks. To understand these differences at the level of single neurons, we compared neuronal firing in the primary and posterior auditory cortex of the two hemispheres and in the medial geniculate body of monkeys while the subjects categorized pitch relationships in tone sequences. In contrast to earlier findings from imaging studies performed on humans, we found little difference between the three brain regions in terms of the category-specificity of their neuronal responses, of tonic firing related to task components, and of decision-related firing. The differences between the results in humans and monkeys may result from the type of neuronal activity considered and how it was analyzed, from the auditory cortical fields studied, or from fundamental differences between these species.
Affiliation(s)
- Elena Selezneva
- Special Lab Primate Neurobiology, Leibniz-Institute for Neurobiology, Magdeburg, Germany
- Elena Oshurkova
- Department Auditory Learning and Speech, Leibniz-Institute for Neurobiology, Magdeburg, Germany
- Henning Scheich
- Department Auditory Learning and Speech, Leibniz-Institute for Neurobiology, Magdeburg, Germany
- Michael Brosch
- Special Lab Primate Neurobiology, Leibniz-Institute for Neurobiology, Magdeburg, Germany
27
Christison-Lagay KL, Bennur S, Cohen YE. Contribution of spiking activity in the primary auditory cortex to detection in noise. J Neurophysiol 2017; 118:3118-3131. [PMID: 28855294] [DOI: 10.1152/jn.00521.2017] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Received: 07/10/2017] [Revised: 08/25/2017] [Accepted: 08/27/2017] [Indexed: 01/08/2023]
Abstract
A fundamental problem in hearing is detecting a "target" stimulus (e.g., a friend's voice) that is presented with a noisy background (e.g., the din of a crowded restaurant). Despite its importance to hearing, a relationship between spiking activity and behavioral performance during such a "detection-in-noise" task has yet to be fully elucidated. In this study, we recorded spiking activity in primary auditory cortex (A1) while rhesus monkeys detected a target stimulus that was presented with a noise background. Although some neurons were modulated, the response of the typical A1 neuron was not modulated by the stimulus- and task-related parameters of our task. In contrast, we found more robust representations of these parameters in population-level activity: small populations of neurons matched the monkeys' behavioral sensitivity. Overall, these findings are consistent with the hypothesis that the sensory evidence, which is needed to solve such detection-in-noise tasks, is represented in population-level A1 activity and may be available to be read out by downstream neurons that are involved in mediating this task. NEW & NOTEWORTHY This study examines the contribution of A1 to detecting a sound that is presented with a noisy background. We found that population-level A1 activity, but not single neurons, could provide the evidence needed to make this perceptual decision.
Affiliation(s)
- Sharath Bennur
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, Pennsylvania
- Yale E Cohen
- Department of Otorhinolaryngology, Department of Neuroscience, and Department of Bioengineering, University of Pennsylvania, Philadelphia, Pennsylvania
28
Duarte F, Lemus L. The Time Is Up: Compression of Visual Time Interval Estimations of Bimodal Aperiodic Patterns. Front Integr Neurosci 2017; 11:17. [PMID: 28848406] [PMCID: PMC5550683] [DOI: 10.3389/fnint.2017.00017] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Received: 05/24/2017] [Accepted: 07/28/2017] [Indexed: 11/13/2022]
Abstract
The ability to estimate time intervals subserves many of our behaviors and perceptual experiences. However, it is not clear how aperiodic (AP) stimuli affect our perception of time intervals across sensory modalities. To address this question, we evaluated the human capacity to discriminate between two acoustic (A), visual (V) or audiovisual (AV) time intervals of trains of scattered pulses. We first measured the periodicity of those stimuli and then sought correlations with the accuracy and reaction times (RTs) of the subjects. We found that, for all time intervals tested in our experiment, the visual system consistently perceived AP stimuli as being shorter than the periodic (P) ones. In contrast, such a compression phenomenon was not apparent during auditory trials. Our conclusions are: first, subjects exposed to P stimuli are more likely to measure their durations accurately; second, perceptual time compression occurs for AP visual stimuli; lastly, AV discriminations are determined by A dominance rather than by AV enhancement.
Affiliation(s)
- Fabiola Duarte
- Primate Neurobiology Laboratory, Instituto de Fisiología Celular, Neurociencia Cognitiva, Universidad Nacional Autónoma de México, Ciudad de México, Mexico
- Luis Lemus
- Primate Neurobiology Laboratory, Instituto de Fisiología Celular, Neurociencia Cognitiva, Universidad Nacional Autónoma de México, Ciudad de México, Mexico
29
Leavitt ML, Mendoza-Halliday D, Martinez-Trujillo JC. Sustained Activity Encoding Working Memories: Not Fully Distributed. Trends Neurosci 2017; 40:328-346. [PMID: 28515011] [DOI: 10.1016/j.tins.2017.04.004] [Citation(s) in RCA: 116] [Impact Index Per Article: 16.6] [Received: 02/06/2017] [Revised: 04/14/2017] [Accepted: 04/18/2017] [Indexed: 10/19/2022]
Abstract
Working memory (WM) is the ability to remember and manipulate information for short time intervals. Recent studies have proposed that sustained firing encoding the contents of WM is ubiquitous across cortical neurons. We review here the collective evidence supporting this claim. A variety of studies report that neurons in prefrontal, parietal, and inferotemporal association cortices show robust sustained activity encoding the location and features of memoranda during WM tasks. However, reports of WM-related sustained activity in early sensory areas are rare, and typically lack stimulus specificity. We propose that robust sustained activity that can support WM coding arises as a property of association cortices downstream from the early stages of sensory processing.
Affiliation(s)
- Matthew L Leavitt
- Department of Physiology, McGill University, Montreal, QC H3G 1Y6, Canada.
- Diego Mendoza-Halliday
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Julio C Martinez-Trujillo
- Robarts Research Institute, Brain and Mind Institute, Department of Psychiatry, and Department of Physiology and Pharmacology, University of Western Ontario, London, ON N6A 5B7, Canada.
30
Downer JD, Niwa M, Sutter ML. Hierarchical differences in population coding within auditory cortex. J Neurophysiol 2017; 118:717-731. [PMID: 28446588] [PMCID: PMC5539454] [DOI: 10.1152/jn.00899.2016] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Received: 11/24/2016] [Revised: 04/21/2017] [Accepted: 04/21/2017] [Indexed: 01/04/2023]
Abstract
Most models of auditory cortical (AC) population coding have focused on primary auditory cortex (A1). Thus our understanding of how neural coding for sounds progresses along the cortical hierarchy remains obscure. To illuminate this, we recorded from two AC fields: A1 and middle lateral belt (ML) of rhesus macaques. We presented amplitude-modulated (AM) noise during both passive listening and while the animals performed an AM detection task ("active" condition). In both fields, neurons exhibit monotonic AM-depth tuning, with A1 neurons mostly exhibiting increasing rate-depth functions and ML neurons approximately evenly distributed between increasing and decreasing functions. We measured noise correlation (rnoise) between simultaneously recorded neurons and found that whereas engagement decreased average rnoise in A1, engagement increased average rnoise in ML. This finding surprised us, because attentive states are commonly reported to decrease average rnoise. We analyzed the effect of rnoise on AM coding in both A1 and ML and found that whereas engagement-related shifts in rnoise in A1 enhance AM coding, rnoise shifts in ML have little effect. These results imply that the effect of rnoise differs between sensory areas, based on the distribution of tuning properties among the neurons within each population. A possible explanation is that higher areas need to encode nonsensory variables (e.g., attention, choice, and motor preparation), which impart common noise, thus increasing rnoise. Therefore, the hierarchical emergence of rnoise-robust population coding (e.g., as we observed in ML) enhances the ability of sensory cortex to integrate cognitive and sensory information without a loss of sensory fidelity. NEW & NOTEWORTHY Prevailing models of population coding of sensory information are based on a limited subset of neural structures. An important and under-explored question in neuroscience is how distinct areas of sensory cortex differ in their population coding strategies. In this study, we compared population coding between primary and secondary auditory cortex. Our findings demonstrate striking differences between the two areas and highlight the importance of considering the diversity of neural structures as we develop models of population coding.
Affiliation(s)
- Joshua D Downer
- Center for Neuroscience and Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
- Mamiko Niwa
- Center for Neuroscience and Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
- Mitchell L Sutter
- Center for Neuroscience and Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
31
Huang Y, Matysiak A, Heil P, König R, Brosch M. Persistent neural activity in auditory cortex is related to auditory working memory in humans and nonhuman primates. eLife 2016; 5. [PMID: 27438411] [PMCID: PMC4974052] [DOI: 10.7554/elife.15441] [Citation(s) in RCA: 35] [Impact Index Per Article: 4.4] [Received: 02/22/2016] [Accepted: 07/19/2016] [Indexed: 12/28/2022]
Abstract
Working memory is the cognitive capacity of short-term storage of information for goal-directed behaviors. Where and how this capacity is implemented in the brain are unresolved questions. We show that auditory cortex stores information by persistent changes of neural activity. We separated activity related to working memory from activity related to other mental processes by having humans and monkeys perform different tasks with varying working memory demands on the same sound sequences. Working memory was reflected in the spiking activity of individual neurons in auditory cortex and in the activity of neuronal populations, that is, in local field potentials and magnetic fields. Our results provide direct support for the idea that temporary storage of information recruits the same brain areas that also process the information. Because similar activity was observed in the two species, the cellular bases of some auditory working memory processes in humans can be studied in monkeys.
Affiliation(s)
- Ying Huang
- Special Lab Primate Neurobiology, Leibniz Institute for Neurobiology, Magdeburg, Germany
- Artur Matysiak
- Special Lab Non-Invasive Brain Imaging, Leibniz Institute for Neurobiology, Magdeburg, Germany
- Peter Heil
- Department Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg, Germany; Center for Behavioral Brain Sciences, Otto-von-Guericke-University, Magdeburg, Germany
- Reinhard König
- Special Lab Non-Invasive Brain Imaging, Leibniz Institute for Neurobiology, Magdeburg, Germany
- Michael Brosch
- Special Lab Primate Neurobiology, Leibniz Institute for Neurobiology, Magdeburg, Germany; Center for Behavioral Brain Sciences, Otto-von-Guericke-University, Magdeburg, Germany
32
Fritz JB, Malloy M, Mishkin M, Saunders RC. Monkey's short-term auditory memory nearly abolished by combined removal of the rostral superior temporal gyrus and rhinal cortices. Brain Res 2016; 1640:289-98. [PMID: 26707975] [PMCID: PMC5890928] [DOI: 10.1016/j.brainres.2015.12.012] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Received: 08/18/2015] [Revised: 12/06/2015] [Accepted: 12/07/2015] [Indexed: 01/19/2023]
Abstract
While monkeys easily acquire the rules for performing visual and tactile delayed matching-to-sample, a method for testing recognition memory, they have extraordinary difficulty acquiring a similar rule in audition. Another striking difference between the modalities is that whereas bilateral ablation of the rhinal cortex (RhC) leads to profound impairment in visual and tactile recognition, the same lesion has no detectable effect on auditory recognition memory (Fritz et al., 2005). In our previous study, a mild impairment in auditory memory was obtained following bilateral ablation of the entire medial temporal lobe (MTL), including the RhC, and an equally mild effect was observed after bilateral ablation of the auditory cortical areas in the rostral superior temporal gyrus (rSTG). In order to test the hypothesis that each of these mild impairments was due to partial disconnection of acoustic input to a common target (e.g., the ventromedial prefrontal cortex), in the current study we examined the effects of a more complete auditory disconnection of this common target by combining the removals of both the rSTG and the MTL. We found that the combined lesion led to forgetting thresholds (performance at 75% accuracy) that fell precipitously from the normal retention duration of ~30 to 40s to a duration of ~1 to 2s, thus nearly abolishing auditory recognition memory, and leaving behind only a residual echoic memory. This article is part of a Special Issue entitled SI: Auditory working memory.
Affiliation(s)
- Jonathan B Fritz
- Neural Systems Laboratory, Center for Acoustic and Auditory Research, Institute for Systems Research, University of Maryland, College Park, MD 20742, United States.
- Megan Malloy
- Laboratory of Neuropsychology, National Institute of Mental Health, NIH, Bethesda, MD 20892, United States.
- Mortimer Mishkin
- Laboratory of Neuropsychology, National Institute of Mental Health, NIH, Bethesda, MD 20892, United States.
- Richard C Saunders
- Laboratory of Neuropsychology, National Institute of Mental Health, NIH, Bethesda, MD 20892, United States.
33
Bourgeon S, Dépeault A, Meftah EM, Chapman CE. Tactile texture signals in primate primary somatosensory cortex and their relation to subjective roughness intensity. J Neurophysiol 2016; 115:1767-85. [PMID: 26763776] [DOI: 10.1152/jn.00303.2015] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Received: 03/24/2015] [Accepted: 01/06/2016] [Indexed: 11/22/2022]
Abstract
This study investigated the hypothesis that a simple intensive code, based on mean firing rate, could explain the cortical representation of subjective roughness intensity and its invariance with scanning speed. We examined the sensitivity of neurons in the cutaneous, finger representation of primary somatosensory cortex (S1) to a wide range of textures [1 mm high, raised-dot surfaces; spatial periods (SPs), 1.5-8.5 mm], scanned under the digit tips at different speeds (40-115 mm/s). Since subjective roughness estimates show a monotonic increase over this range and are independent of speed, we predicted that the mean firing rate of a subgroup of S1 neurons would share these properties. Single-unit recordings were made in four alert macaques (areas 3b, 1 and 2). Cells whose discharge rate showed a monotonic increase with SP, independent of speed, were particularly concentrated in area 3b. Area 2 was characterized by a high proportion of cells sensitive to speed, with or without texture sensitivity. Area 1 had intermediate properties. We suggest that area 3b and most likely area 1 play a key role in signaling roughness intensity, and that a mean rate code, signaled by both slowly and rapidly adapting neurons, is present at the level of area 3b. Finally, the substantial proportion of neurons that showed a monotonic change in discharge limited to a small range of SPs (often independent of response saturation) could play a role in discriminating smaller changes in SP.
Affiliation(s)
- Stéphanie Bourgeon
- Groupe de Recherche sur le Système Nerveux Central, Department of Neurosciences, University of Montréal, Montréal, Québec, Canada
- Alexandra Dépeault
- Groupe de Recherche sur le Système Nerveux Central, Department of Neurosciences, University of Montréal, Montréal, Québec, Canada
- El-Mehdi Meftah
- Groupe de Recherche sur le Système Nerveux Central, Department of Neurosciences, University of Montréal, Montréal, Québec, Canada
- C Elaine Chapman
- Groupe de Recherche sur le Système Nerveux Central, Department of Neurosciences, University of Montréal, Montréal, Québec, Canada
34
Cohen YE, Bennur S, Christison-Lagay K, Gifford AM, Tsunada J. Functional Organization of the Ventral Auditory Pathway. Adv Exp Med Biol 2016; 894:381-388. [PMID: 27080679] [PMCID: PMC5444378] [DOI: 10.1007/978-3-319-25474-6_40] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Indexed: 01/19/2023]
Abstract
The fundamental problem in audition is determining the mechanisms required by the brain to transform an unlabelled mixture of auditory stimuli into coherent perceptual representations. This process is called auditory-scene analysis. The perceptual representations that result from auditory-scene analysis are formed through a complex interaction of perceptual grouping, attention, categorization and decision-making. Despite a great deal of scientific energy devoted to understanding these aspects of hearing, we still do not understand (1) how sound perception arises from neural activity and (2) the causal relationship between neural activity and sound perception. Here, we review the role of the "ventral" auditory pathway in sound perception. We hypothesize that, in the early parts of the auditory cortex, neural activity reflects the auditory properties of a stimulus. However, in later parts of the auditory cortex, neurons encode the sensory evidence that forms an auditory decision and are causally involved in the decision process. Finally, in the prefrontal cortex, which receives input from the auditory cortex, neural activity reflects the actual perceptual decision. Together, these studies indicate that the ventral pathway contains hierarchical circuits that are specialized for auditory perception and scene analysis.
Affiliation(s)
- Yale E Cohen
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, USA.
- Department of Neuroscience, University of Pennsylvania, Philadelphia, USA.
- Department of Bioengineering, University of Pennsylvania, Philadelphia, USA.
- Sharath Bennur
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, USA
- Adam M Gifford
- Neuroscience Graduate Group, University of Pennsylvania, Philadelphia, USA
- Joji Tsunada
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, USA
35
Tsunada J, Liu ASK, Gold JI, Cohen YE. Causal contribution of primate auditory cortex to auditory perceptual decision-making. Nat Neurosci 2015; 19:135-42. [PMID: 26656644] [PMCID: PMC4696881] [DOI: 10.1038/nn.4195] [Citation(s) in RCA: 85] [Impact Index Per Article: 9.4] [Received: 09/18/2015] [Accepted: 11/11/2015] [Indexed: 11/09/2022]
Abstract
Auditory perceptual decisions are thought to be mediated by the ventral auditory pathway. However, the specific and causal contributions of different brain regions in this pathway, including the middle-lateral (ML) and anterolateral (AL) belt regions of the auditory cortex, to auditory decisions have not been fully identified. To identify these contributions, we recorded from and microstimulated ML and AL sites while monkeys decided whether an auditory stimulus contained more low-frequency or high-frequency tone bursts. Both ML and AL neural activity was modulated by the frequency content of the stimulus. However, only the responses of the most stimulus-sensitive AL neurons were systematically modulated by the monkeys' choices. Consistent with this observation, microstimulation of AL, but not ML, systematically biased the monkeys' behavior toward the choice associated with the preferred frequency of the stimulated site. Together, these findings suggest that AL directly and causally contributes sensory evidence to form this auditory decision.
Affiliation(s)
- Joji Tsunada
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Andrew S K Liu
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Joshua I Gold
- Department of Neuroscience, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Yale E Cohen
- Department of Otorhinolaryngology, Department of Neuroscience, and Department of Bioengineering, University of Pennsylvania, Philadelphia, Pennsylvania, USA
36
Abstract
Studies of interference in working and short-term memory suggest that irrelevant information may overwrite the contents of memory or intrude into memory. While some previous studies have reported greater interference when irrelevant information is similar to the contents of memory than when it is dissimilar, other studies have reported greater interference for dissimilar distractors than for similar distractors. In the present study, we find the latter effect in a paradigm that uses auditory tones as stimuli. We suggest that the effects of distractor similarity to memory contents are mediated by the type of information held in memory, particularly the complexity or simplicity of information.
37
Zhao Z, Sato Y, Qin L. Response properties of neurons in the cat's putamen during auditory discrimination. Behav Brain Res 2015; 292:448-62. [PMID: 26162752] [DOI: 10.1016/j.bbr.2015.07.002] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Received: 05/11/2015] [Revised: 06/27/2015] [Accepted: 07/02/2015] [Indexed: 11/30/2022]
Abstract
The striatum integrates diverse convergent input and plays a critical role in goal-directed behaviors. To date, the auditory functions of the striatum have been less studied. Recently, it was demonstrated that auditory cortico-striatal projections influence behavioral performance during a frequency discrimination task. To reveal the functions of striatal neurons in auditory discrimination, we recorded single-unit spike activities in the putamen (dorsal striatum) of free-moving cats while they performed a Go/No-go task to discriminate sounds with different modulation rates (12.5 Hz vs. 50 Hz) or envelopes (damped vs. ramped). We found that the putamen neurons can be broadly divided into four groups according to their contributions to sound discrimination. First, 40% of neurons showed vigorous responses synchronized to the sound envelope and could precisely discriminate different sounds. Second, 18% of neurons showed a high preference for ramped over damped sounds, but no preference for modulation rate; they could discriminate only changes of the sound envelope. Third, 27% of neurons rapidly adapted to the sound stimuli and had no ability to discriminate sounds. Fourth, 15% of neurons discriminated the sounds depending on reward prediction. Compared to the passive listening condition, the activities of putamen neurons were significantly enhanced by engagement in the auditory tasks, but were not modulated by the cat's behavioral choice. The coexistence of multiple types of neurons suggests that the putamen is involved in the transformation from auditory representation to stimulus-reward association.
Affiliation(s)
- Zhenling Zhao
- Department of Physiology, Interdisciplinary Graduate School of Medicine and Engineering, University of Yamanashi, Chuo, Yamanashi 409-3898, Japan; Jinan Biomedicine R&D Center, School of Life Science and Technology, Jinan University, Guangzhou 510632, People's Republic of China
- Yu Sato
- Department of Physiology, Interdisciplinary Graduate School of Medicine and Engineering, University of Yamanashi, Chuo, Yamanashi 409-3898, Japan
- Ling Qin
- Department of Physiology, China Medical University, Shenyang 110001, People's Republic of China.
38
Abstract
Amplitude modulations are fundamental features of natural signals, including human speech and nonhuman primate vocalizations. Because natural signals frequently occur in the context of other competing signals, we used a forward-masking paradigm to investigate how the modulation context of a prior signal affects cortical responses to subsequent modulated sounds. Psychophysical "modulation masking," in which the presentation of a modulated "masker" signal elevates the threshold for detecting the modulation of a subsequent stimulus, has been interpreted as evidence of a central modulation filterbank and modeled accordingly. Whether cortical modulation tuning is compatible with such models remains unknown. By recording responses to pairs of sinusoidally amplitude modulated (SAM) tones in the auditory cortex of awake squirrel monkeys, we show that the prior presentation of the SAM masker elicited persistent and tuned suppression of the firing rate to subsequent SAM signals. Population averages of these effects are compatible with adaptation in broadly tuned modulation channels. In contrast, modulation context had little effect on the synchrony of the cortical representation of the second SAM stimuli, and the tuning of such effects did not match that observed for firing rate. Our results suggest that, although the temporal representation of modulated signals is more robust to changes in stimulus context than representations based on average firing rate, this representation is not fully exploited: psychophysical modulation masking more closely mirrors physiological rate suppression, and rate tuning for a given stimulus feature in a given neuron's signal pathway appears sufficient to engender context-sensitive cortical adaptation.
39
Osmanski MS, Wang X. Behavioral dependence of auditory cortical responses. Brain Topogr 2015; 28:365-78. [PMID: 25690831] [PMCID: PMC4409507] [DOI: 10.1007/s10548-015-0428-4] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Received: 08/26/2014] [Accepted: 02/12/2015] [Indexed: 10/24/2022]
Abstract
Neural responses in the auditory cortex have historically been measured from either anesthetized or awake but non-behaving animals. A growing body of work has begun to focus instead on recording from auditory cortex of animals actively engaged in behavior tasks. These studies have shown that auditory cortical responses are dependent upon the behavioral state of the animal. The longer ascending subcortical pathway of the auditory system and unique characteristics of auditory processing suggest that such dependencies may have a more profound influence on cortical processing in the auditory system compared to other sensory systems. It is important to understand the nature of these dependencies and their functional implications. In this article, we review the literature on this topic pertaining to cortical processing of sounds.
Affiliation(s)
- Michael S Osmanski
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, 720 Rutland Ave., Traylor 410, Baltimore, MD, 21025, USA
40
Bendor D. The role of inhibition in a computational model of an auditory cortical neuron during the encoding of temporal information. PLoS Comput Biol 2015; 11:e1004197. [PMID: 25879843] [PMCID: PMC4400160] [DOI: 10.1371/journal.pcbi.1004197] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Received: 08/26/2014] [Accepted: 02/12/2015] [Indexed: 11/19/2022]
Abstract
In auditory cortex, temporal information within a sound is represented by two complementary neural codes: a temporal representation based on stimulus-locked firing and a rate representation, where discharge rate co-varies with the timing between acoustic events but lacks a stimulus-synchronized response. Using a computational neuronal model, we find that stimulus-locked responses are generated when sound-evoked excitation is combined with strong, delayed inhibition. In contrast to this, a non-synchronized rate representation is generated when the net excitation evoked by the sound is weak, which occurs when excitation is coincident and balanced with inhibition. Using single-unit recordings from awake marmosets (Callithrix jacchus), we validate several model predictions, including differences in the temporal fidelity, discharge rates and temporal dynamics of stimulus-evoked responses between neurons with rate and temporal representations. Together these data suggest that feedforward inhibition provides a parsimonious explanation of the neural coding dichotomy observed in auditory cortex.
Affiliation(s)
- Daniel Bendor
- Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London, United Kingdom
41
Pannunzi M, Pérez-Bellido A, Pereda-Baños A, López-Moliner J, Deco G, Soto-Faraco S. Deconstructing multisensory enhancement in detection. J Neurophysiol 2014; 113:1800-18. [PMID: 25520431] [DOI: 10.1152/jn.00341.2014] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.1] [Indexed: 11/22/2022]
Abstract
The mechanisms responsible for the integration of sensory information from different modalities have become a topic of intense interest in psychophysics and neuroscience. Many authors now claim that early, sensory-based cross-modal convergence improves performance in detection tasks. An important strand of supporting evidence for this claim is based on statistical models such as the Pythagorean model or the probabilistic summation model. These models establish statistical benchmarks representing the best predicted performance under the assumption that there are no interactions between the two sensory paths. Following this logic, when observed detection performances surpass the predictions of these models, it is often inferred that such improvement indicates cross-modal convergence. We present a theoretical analysis scrutinizing some of these models and the statistical criteria most frequently used to infer early cross-modal interactions during detection tasks. Our current analysis shows how some common misinterpretations of these models lead to their inadequate use and, in turn, to contradictory results and misleading conclusions. To further illustrate the latter point, we introduce a model that accounts for detection performances in multimodal detection tasks but for which surpassing of the Pythagorean or probabilistic summation benchmark can be explained without resorting to early cross-modal interactions. Finally, we report three experiments that put our theoretical interpretation to the test and further propose how to adequately measure multimodal interactions in audiotactile detection tasks.
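The two no-interaction benchmarks the abstract names can be sketched as follows. This is a minimal illustration with hypothetical hit rates and d′ values; the abstract's point is precisely that real applications of these benchmarks require more care (guessing, criterion effects) than such a bare computation suggests.

```python
import math

def probability_summation(p_a: float, p_v: float) -> float:
    """Probabilistic summation benchmark: two independent detectors,
    target detected if either unisensory channel detects it."""
    return 1.0 - (1.0 - p_a) * (1.0 - p_v)

def pythagorean_dprime(d_a: float, d_v: float) -> float:
    """'Pythagorean' benchmark: sensitivities of two independent
    Gaussian channels combine in quadrature."""
    return math.sqrt(d_a ** 2 + d_v ** 2)

# Hypothetical unisensory detection rates of 0.6 in each modality:
p_bimodal = probability_summation(0.6, 0.6)   # 0.84
# Hypothetical unisensory sensitivities of d' = 1.0 each:
d_bimodal = pythagorean_dprime(1.0, 1.0)      # sqrt(2)
```

Observed bimodal performance above these values is what is often (per the abstract, sometimes incorrectly) taken as evidence for early cross-modal convergence.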
Collapse
Affiliation(s)
| | | | | | - Joan López-Moliner
- Universitat de Barcelona, Barcelona, Spain; Institute for Brain, Cognition and Behaviour (IR3C), Barcelona, Spain; and
| | - Gustavo Deco
- Universitat Pompeu Fabra, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
| | - Salvador Soto-Faraco
- Universitat Pompeu Fabra, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
| |
Collapse
|
42
|
Jaramillo S, Borges K, Zador AM. Auditory thalamus and auditory cortex are equally modulated by context during flexible categorization of sounds. J Neurosci 2014; 34:5291-301. [PMID: 24719107 PMCID: PMC3983805 DOI: 10.1523/jneurosci.4888-13.2014] [Citation(s) in RCA: 35] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2013] [Revised: 03/04/2014] [Accepted: 03/08/2014] [Indexed: 11/21/2022] Open
Abstract
In a dynamic world, animals must adapt rapidly to changes in the meaning of environmental cues. Such changes can influence the neural representation of sensory stimuli. Previous studies have shown that associating a stimulus with a reward or punishment can modulate neural activity in the auditory cortex (AC) and its thalamic input, the medial geniculate body (MGB). However, it is not known whether changes in stimulus-action associations alone can also modulate neural responses in these areas. We designed a categorization task for rats in which the boundary that separated low- from high-frequency sounds varied several times within a behavioral session, thus allowing us to manipulate the action associated with some sounds without changing the associated reward. We developed a computational model that accounted for the rats' performance and compared predictions from this model with sound-evoked responses from single neurons in AC and MGB in animals performing this task. We found that the responses of 15% of AC neurons and 16% of MGB neurons were modulated by changes in stimulus-action association and that the magnitude of the modulation was comparable between the two brain areas. Our results suggest that the AC and thalamus play only a limited role in mediating changes in associations between acoustic stimuli and behavioral responses.
Collapse
Affiliation(s)
| | - Katharine Borges
- Cold Spring Harbor Laboratory, Cold Spring Harbor, New York 11724
| | - Anthony M. Zador
- Cold Spring Harbor Laboratory, Cold Spring Harbor, New York 11724
| |
Collapse
|
43
|
Malone BJ, Scott BH, Semple MN. Encoding frequency contrast in primate auditory cortex. J Neurophysiol 2014; 111:2244-63. [PMID: 24598525 DOI: 10.1152/jn.00878.2013] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Changes in amplitude and frequency jointly determine much of the communicative significance of complex acoustic signals, including human speech. We have previously described responses of neurons in the core auditory cortex of awake rhesus macaques to sinusoidal amplitude modulation (SAM) signals. Here we report a complementary study of sinusoidal frequency modulation (SFM) in the same neurons. Responses to SFM were analogous to SAM responses in that changes in multiple parameters defining SFM stimuli (e.g., modulation frequency, modulation depth, carrier frequency) were robustly encoded in the temporal dynamics of the spike trains. For example, changes in the carrier frequency produced highly reproducible changes in shapes of the modulation period histogram, consistent with the notion that the instantaneous probability of discharge mirrors the moment-by-moment spectrum at low modulation rates. The upper limit for phase locking was similar across SAM and SFM within neurons, suggesting shared biophysical constraints on temporal processing. Using spike train classification methods, we found that neural thresholds for modulation depth discrimination are typically far lower than would be predicted from frequency tuning to static tones. This "dynamic hyperacuity" suggests a substantial central enhancement of the neural representation of frequency changes relative to the auditory periphery. Spike timing information was superior to average rate information when discriminating among SFM signals, and even when discriminating among static tones varying in frequency. This finding held even when differences in total spike count across stimuli were normalized, indicating both the primacy and generality of temporal response dynamics in cortical auditory processing.
Collapse
Affiliation(s)
- Brian J Malone
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California;
| | - Brian H Scott
- Laboratory of Neuropsychology, National Institute of Mental Health/National Institutes of Health, Bethesda, Maryland; and
| | - Malcolm N Semple
- Center for Neural Science, New York University, New York, New York
| |
Collapse
|
44
|
Abstract
The fundamental perceptual unit in hearing is the 'auditory object'. Similar to visual objects, auditory objects are the computational result of the auditory system's capacity to detect, extract, segregate and group spectrotemporal regularities in the acoustic environment; the multitude of acoustic stimuli around us together form the auditory scene. However, unlike the visual scene, resolving the component objects within the auditory scene crucially depends on their temporal structure. Neural correlates of auditory objects are found throughout the auditory system. However, neural responses do not become correlated with a listener's perceptual reports until the level of the cortex. The roles of different neural structures and the contribution of different cognitive states to the perception of auditory objects are not yet fully understood.
Collapse
|
45
|
Ng CW, Plakke B, Poremba A. Neural correlates of auditory recognition memory in the primate dorsal temporal pole. J Neurophysiol 2013; 111:455-69. [PMID: 24198324 DOI: 10.1152/jn.00401.2012] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Temporal pole (TP) cortex is associated with higher-order sensory perception and/or recognition memory, as human patients with damage in this region show impaired performance during some tasks requiring recognition memory (Olson et al. 2007). The underlying mechanisms of TP processing are largely based on examination of the visual nervous system in humans and monkeys, while little is known about neuronal activity patterns in the auditory portion of this region, dorsal TP (dTP; Poremba et al. 2003). The present study examines single-unit activity of dTP in rhesus monkeys performing a delayed matching-to-sample task utilizing auditory stimuli, wherein two sounds are determined to be the same or different. Neurons of dTP encode several task-relevant events during the delayed matching-to-sample task, and encoding of auditory cues in this region is associated with accurate recognition performance. Population activity in dTP shows a match suppression mechanism to identical, repeated sound stimuli similar to that observed in the visual object identification pathway located ventral to dTP (Desimone 1996; Nakamura and Kubota 1996). However, in contrast to sustained visual delay-related activity in nearby analogous regions, auditory delay-related activity in dTP is transient and limited. Neurons in dTP respond selectively to different sound stimuli and often change their sound response preferences between experimental contexts. Current findings suggest a significant role for dTP in auditory recognition memory similar in many respects to the visual nervous system, while delay memory firing patterns are not prominent, which may relate to monkeys' shorter forgetting thresholds for auditory vs. visual objects.
Collapse
Affiliation(s)
- Chi-Wing Ng
- Center for Neuroscience, University of California, Davis, California
| | | | | |
Collapse
|
46
|
Patel AD. Can nonlinguistic musical training change the way the brain processes speech? The expanded OPERA hypothesis. Hear Res 2013; 308:98-108. [PMID: 24055761 DOI: 10.1016/j.heares.2013.08.011] [Citation(s) in RCA: 157] [Impact Index Per Article: 14.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/19/2013] [Revised: 08/18/2013] [Accepted: 08/26/2013] [Indexed: 10/26/2022]
Abstract
A growing body of research suggests that musical training has a beneficial impact on speech processing (e.g., hearing of speech in noise and prosody perception). As this research moves forward two key questions need to be addressed: 1) Can purely instrumental musical training have such effects? 2) If so, how and why would such effects occur? The current paper offers a conceptual framework for understanding such effects based on mechanisms of neural plasticity. The expanded OPERA hypothesis proposes that when music and speech share sensory or cognitive processing mechanisms in the brain, and music places higher demands on these mechanisms than speech does, this sets the stage for musical training to enhance speech processing. When these higher demands are combined with the emotional rewards of music, the frequent repetition that musical training engenders, and the focused attention that it requires, neural plasticity is activated and makes lasting changes in brain structure and function which impact speech processing. Initial data from a new study motivated by the OPERA hypothesis are presented, focusing on the impact of musical training on speech perception in cochlear-implant users. Suggestions for the development of animal models to test OPERA are also presented, to help motivate neurophysiological studies of how auditory training using non-biological sounds can impact the brain's perceptual processing of species-specific vocalizations. This article is part of a Special Issue entitled "Music: A window into the hearing brain".
Collapse
Affiliation(s)
- Aniruddh D Patel
- Dept. of Psychology, Tufts University, 490 Boston Ave., Medford, MA 02155, USA.
| |
Collapse
|
47
|
Abolafia JM, Martinez-Garcia M, Deco G, Sanchez-Vives MV. Variability and information content in auditory cortex spike trains during an interval-discrimination task. J Neurophysiol 2013; 110:2163-74. [PMID: 23945780 DOI: 10.1152/jn.00381.2013] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Processing of temporal information is key in auditory processing. In this study, we recorded single-unit activity from the auditory cortex of rats while they performed an interval-discrimination task. The animals had to decide whether two auditory stimuli were separated by either 150 or 300 ms and nose-poke to the left or to the right accordingly. The spike firing of single neurons in the auditory cortex was then compared in engaged vs. idle brain states. We found that spike firing variability measured with the Fano factor was markedly reduced, not only during stimulation, but also in between stimuli in engaged trials. We next explored whether this decrease in variability was associated with increased information encoding. Our information theory analysis revealed increased information content in auditory responses during engagement compared with idle states, in particular in the responses to task-relevant stimuli. Altogether, we demonstrate that task engagement significantly modulates coding properties of auditory cortical neurons during an interval-discrimination task.
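The Fano factor used in this abstract is the variance-to-mean ratio of spike counts across repeated trials (1.0 for a Poisson process; engagement-related reductions in variability appear as a drop below the idle-state value). A minimal sketch with hypothetical spike counts; the paper's actual counting windows and trial structure are not given in the abstract.

```python
import numpy as np

def fano_factor(spike_counts) -> float:
    """Fano factor: variance of spike counts across trials divided by
    their mean. FF = 1 for a Poisson process."""
    counts = np.asarray(spike_counts, dtype=float)
    return float(counts.var(ddof=0) / counts.mean())

# Hypothetical spike counts over repeated trials in the two brain states:
idle    = [2, 7, 1, 9, 3, 8, 0, 6]   # high trial-to-trial variability
engaged = [4, 5, 4, 6, 5, 4, 5, 5]   # markedly reduced variability
print(fano_factor(idle) > fano_factor(engaged))  # True
```

Here the population-variance convention (`ddof=0`) is an assumption; some analyses use the unbiased sample variance instead, which changes the values slightly but not the engaged-vs.-idle comparison.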
Collapse
Affiliation(s)
- Juan M Abolafia
- Institut d'Investigacions Biomèdiques August Pi i Sunyer, Barcelona, Spain
| | | | | | | |
Collapse
|
48
|
Behavioral modulation of neural encoding of click-trains in the primary and nonprimary auditory cortex of cats. J Neurosci 2013. [PMID: 23926266 DOI: 10.1523/jneurosci.1724-13] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/17/2023] Open
Abstract
Neural representation of acoustic stimuli in the mammalian auditory cortex (AC) has been extensively studied using anesthetized or awake nonbehaving animals. Recently, several studies have shown that active engagement in an auditory behavioral task can substantially change neuronal response properties compared with passive listening to the same sounds; however, these studies mainly investigated the effect of behavioral state on the primary auditory cortex, and the reported effects were inconsistent. Here, we examined the single-unit spike activities in both the primary and nonprimary areas along the dorsal-to-ventral direction of the cat's AC, when the cat was actively discriminating click-trains at different repetition rates and when it was passively listening to the same stimuli. We found that the changes due to task engagement were heterogeneous in the primary AC; some neurons showed significant increases in driven firing rate, others showed decreases. But in the nonprimary AC, task engagement predominantly enhanced the neural responses, resulting in a substantial improvement of the neural discriminability of click-trains. Additionally, our results revealed that neural responses synchronizing to click-trains gradually decreased along the dorsal-to-ventral direction of cat AC, while nonsynchronizing responses remained less changed. The present study provides new insights into the hierarchical organization of AC along the dorsal-to-ventral direction and highlights the importance of using behavioral animals to investigate the later stages of cortical processing.
Collapse
|
49
|
Adamchic I, Toth T, Hauptmann C, Tass PA. Reversing pathologically increased EEG power by acoustic coordinated reset neuromodulation. Hum Brain Mapp 2013; 35:2099-118. [PMID: 23907785 PMCID: PMC4216412 DOI: 10.1002/hbm.22314] [Citation(s) in RCA: 58] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2012] [Revised: 02/24/2013] [Accepted: 04/08/2013] [Indexed: 01/19/2023] Open
Abstract
Acoustic Coordinated Reset (CR) neuromodulation is a patterned stimulation with tones adjusted to the patient's dominant tinnitus frequency, which aims at desynchronizing pathological neuronal synchronization. In a recent proof-of-concept study, CR therapy, delivered 4-6 h/day for more than 12 weeks, induced a significant clinical improvement along with a significant long-lasting decrease of pathological oscillatory power in the low-frequency as well as the γ band and an increase of the α power in a network of tinnitus-related brain areas. As yet, it remains unclear whether CR shifts the brain activity toward physiological levels or whether it induces clinically beneficial, but nonetheless abnormal electroencephalographic (EEG) patterns, for example, excessively decreased δ and/or γ power. Here, we compared the patients' spontaneous EEG data at baseline as well as after 12 weeks of CR therapy with the spontaneous EEG of healthy controls by means of Brain Electrical Source Analysis source montage and standardized low-resolution brain electromagnetic tomography techniques. The relationship between changes in EEG power and clinical scores was investigated using a partial least squares approach. In this way, we show that acoustic CR neuromodulation leads to a normalization of the oscillatory power in the tinnitus-related network of brain areas, most prominently in temporal regions. A positive association was found between the changes in tinnitus severity and the normalization of δ and γ power in the temporal, parietal, and cingulate cortical regions. Our findings demonstrate a widespread CR-induced normalization of EEG power, significantly associated with a reduction of tinnitus severity.
Collapse
Affiliation(s)
- Ilya Adamchic
- Institute of Neuroscience and Medicine-Neuromodulation (INM-7), Jülich Research Center, Jülich, Germany
| | | | | | | |
Collapse
|
50
|
Auditory cortex represents both pitch judgments and the corresponding acoustic cues. Curr Biol 2013; 23:620-5. [PMID: 23523247 PMCID: PMC3696731 DOI: 10.1016/j.cub.2013.03.003] [Citation(s) in RCA: 76] [Impact Index Per Article: 6.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2012] [Revised: 12/23/2012] [Accepted: 03/01/2013] [Indexed: 11/22/2022]
Abstract
The neural processing of sensory stimuli involves a transformation of physical stimulus parameters into perceptual features, and elucidating where and how this transformation occurs is one of the ultimate aims of sensory neurophysiology. Recent studies have shown that the firing of neurons in early sensory cortex can be modulated by multisensory interactions [1-5], motor behavior [1, 3, 6, 7], and reward feedback [1, 8, 9], but it remains unclear whether neural activity is more closely tied to perception, as indicated by behavioral choice, or to the physical properties of the stimulus. We investigated which of these properties are predominantly represented in auditory cortex by recording local field potentials (LFPs) and multiunit spiking activity in ferrets while they discriminated the pitch of artificial vowels. We found that auditory cortical activity is informative both about the fundamental frequency (F0) of a target sound and also about the pitch that the animals appear to perceive given their behavioral responses. Surprisingly, although the stimulus F0 was well represented at the onset of the target sound, neural activity throughout auditory cortex frequently predicted the reported pitch better than the target F0.
Collapse
|