1
Tolentino-Castro JW, Schroeger A, Cañal-Bruland R, Raab M. Increasing auditory intensity enhances temporal but deteriorates spatial accuracy in a virtual interception task. Exp Brain Res 2024. PMID: 38334793; DOI: 10.1007/s00221-024-06787-x.
Abstract
Humans are quite accurate and precise in interception performance, yet the role auditory information plays in the spatiotemporal accuracy and consistency of interception remains unclear. In the current study, interception performance was measured as the spatiotemporal accuracy and consistency of when and where a virtual ball was intercepted on a visible line displayed on a screen, based on auditory information alone. We predicted that participants would indicate more accurately when the ball would cross the target line than where it would cross it, because human hearing is particularly sensitive to temporal parameters. In a within-subject design, we manipulated auditory intensity (52, 61, 70, 79, and 88 dB) using a sound stimulus programmed to be perceived as moving over the screen along an inverted C-shaped trajectory. Results showed that the louder the sound, the better the temporal accuracy but the worse the spatial accuracy. We argue that louder sounds increased attention toward auditory information during interception judgments. We discuss, from a theoretical perspective of modality-specific interception behavior, how balls are intercepted and, practically, how sound intensity may contribute to temporal accuracy and consistency.
Affiliation(s)
- J Walter Tolentino-Castro: Department of Performance Psychology, Institute of Psychology, German Sport University Cologne, Am Sportpark Müngersdorf 6, 50933 Cologne, Germany
- Anna Schroeger: Department for General Psychology, Justus Liebig University Giessen, Giessen, Germany
- Rouwen Cañal-Bruland: Department for the Psychology of Human Movement and Sport, Institute of Sport Science, Friedrich Schiller University Jena, Jena, Germany
- Markus Raab: Department of Performance Psychology, Institute of Psychology, German Sport University Cologne, Am Sportpark Müngersdorf 6, 50933 Cologne, Germany; School of Applied Sciences, London South Bank University, London, England
2
Monti M, Molholm S, Cuppini C. Atypical development of causal inference in autism inferred through a neurocomputational model. Front Comput Neurosci 2023; 17:1258590. PMID: 37927544; PMCID: PMC10620690; DOI: 10.3389/fncom.2023.1258590.
Abstract
In everyday life, the brain processes a multitude of stimuli from the surrounding environment, requiring the integration of information from different sensory modalities to form a coherent perception. This process, known as multisensory integration, enhances the brain's response to redundant congruent sensory cues. However, it is equally important for the brain to segregate sensory inputs from distinct events in order to interact with and correctly perceive the multisensory environment. This problem the brain must solve, known as the causal inference problem, is closely related to multisensory integration. It is widely recognized that the ability to integrate information from different senses emerges during development, as a function of our experience with multisensory stimuli. Consequently, multisensory integrative abilities are altered in individuals who have atypical experiences with cross-modal cues, such as those on the autistic spectrum. However, no research to date has examined the developmental trajectory of causal inference and its relationship with experience. Here, we used a neurocomputational model to simulate and investigate the development of causal inference in both typically developing children and those on the autistic spectrum. Our results indicate that higher exposure to cross-modal cues accelerates the acquisition of causal inference abilities, and that a minimum level of experience with multisensory stimuli is required to develop fully mature behavior. We then simulated the altered developmental trajectory of causal inference in individuals with autism by assuming reduced multisensory experience during training. The results suggest that causal inference reaches complete maturity much later in these individuals than in neurotypical individuals. Furthermore, we discuss the underlying neural mechanisms and network architecture involved in these processes, highlighting that the development of causal inference follows the evolution of the mechanisms subserving multisensory integration. Overall, this study provides a computational framework, unifying causal inference and multisensory integration, which allows us to suggest neural mechanisms and provide testable predictions about the development of such abilities in typically developing and autistic children.
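The causal-inference computation such models build on can be made concrete. Below is a minimal sketch of Bayesian causal inference over two spatial cues, in the spirit of the standard Körding-style formulation rather than the authors' network implementation; the function name and all parameter values are illustrative assumptions:

```python
import math

def posterior_common_cause(x_a, x_v, sigma_a=2.0, sigma_v=1.0,
                           sigma_p=10.0, p_common=0.5):
    """Posterior probability that auditory cue x_a and visual cue x_v
    arose from one common source (Bayesian causal inference)."""
    # Likelihood of the cue pair under one shared source, integrating
    # over source location (zero-mean Gaussian prior, s.d. sigma_p).
    v1 = (sigma_a**2 * sigma_v**2 + sigma_a**2 * sigma_p**2
          + sigma_v**2 * sigma_p**2)
    l1 = math.exp(-((x_a - x_v)**2 * sigma_p**2 + x_a**2 * sigma_v**2
                    + x_v**2 * sigma_a**2) / (2 * v1))
    l1 /= 2 * math.pi * math.sqrt(v1)
    # Likelihood under two independent sources.
    va, vv = sigma_a**2 + sigma_p**2, sigma_v**2 + sigma_p**2
    l2 = (math.exp(-x_a**2 / (2 * va)) / math.sqrt(2 * math.pi * va)
          * math.exp(-x_v**2 / (2 * vv)) / math.sqrt(2 * math.pi * vv))
    return l1 * p_common / (l1 * p_common + l2 * (1 - p_common))

# Nearby cues are attributed to a common cause far more readily
# than widely separated ones.
print(round(posterior_common_cause(0.0, 0.5), 2),
      round(posterior_common_cause(0.0, 12.0), 2))
```

Reduced multisensory experience during development could then be modeled as, for example, a miscalibrated `p_common` or broader sensory noise terms.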
Affiliation(s)
- Melissa Monti: Department of Electrical, Electronic, and Information Engineering Guglielmo Marconi, University of Bologna, Bologna, Italy
- Sophie Molholm: Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine, Bronx, NY, United States
- Cristiano Cuppini: Department of Electrical, Electronic, and Information Engineering Guglielmo Marconi, University of Bologna, Bologna, Italy
3
Benner J, Reinhardt J, Christiner M, Wengenroth M, Stippich C, Schneider P, Blatow M. Temporal hierarchy of cortical responses reflects core-belt-parabelt organization of auditory cortex in musicians. Cereb Cortex 2023. PMID: 36786655; DOI: 10.1093/cercor/bhad020.
Abstract
Human auditory cortex (AC) organization resembles the core-belt-parabelt organization of nonhuman primates. Previous studies have assessed mostly spatial characteristics, whereas temporal aspects have received little attention. We employed co-registration of functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) in musicians with and without absolute pitch (AP) to achieve spatial and temporal segregation of human auditory responses. First, individual fMRI activations induced by complex harmonic tones were consistently identified in four distinct regions of interest within AC, namely medial Heschl's gyrus (HG), lateral HG, anterior superior temporal gyrus (STG), and planum temporale (PT). Second, we analyzed the temporal dynamics of individual MEG responses at the locations of the corresponding fMRI activations. In the AP group, the auditory evoked P2 onset occurred ~25 ms earlier in the right than in the left PT and ~15 ms earlier in the right than in the left anterior STG. This effect was consistent at the individual level and correlated with AP proficiency. Based on the combined application of MEG and fMRI measurements, we were able for the first time to demonstrate a characteristic temporal hierarchy ("chronotopy") of human auditory regions in relation to specific auditory abilities, consistent with the prediction of serial processing from nonhuman studies.
Affiliation(s)
- Jan Benner: Department of Neuroradiology and Section of Biomagnetism, University of Heidelberg Hospital, Heidelberg, Germany
- Julia Reinhardt: Department of Cardiology and Cardiovascular Research Institute Basel (CRIB), University Hospital Basel, University of Basel, Basel, Switzerland; Department of Orthopedic Surgery and Traumatology, University Hospital Basel, University of Basel, Basel, Switzerland
- Markus Christiner: Centre for Systematic Musicology, University of Graz, Graz, Austria; Department of Musicology, Vitols Jazeps Latvian Academy of Music, Riga, Latvia
- Martina Wengenroth: Department of Neuroradiology, University Medical Center Schleswig-Holstein, Campus Lübeck, Lübeck, Germany
- Christoph Stippich: Department of Neuroradiology and Radiology, Kliniken Schmieder, Allensbach, Germany
- Peter Schneider: Department of Neuroradiology and Section of Biomagnetism, University of Heidelberg Hospital, Heidelberg, Germany; Centre for Systematic Musicology, University of Graz, Graz, Austria; Department of Musicology, Vitols Jazeps Latvian Academy of Music, Riga, Latvia
- Maria Blatow: Section of Neuroradiology, Department of Radiology and Nuclear Medicine, Neurocenter, Cantonal Hospital Lucerne, University of Lucerne, Lucerne, Switzerland
4
Jun NY, Ruff DA, Kramer LE, Bowes B, Tokdar ST, Cohen MR, Groh JM. Coordinated multiplexing of information about separate objects in visual cortex. eLife 2022; 11:e76452. PMID: 36444983; PMCID: PMC9708082; DOI: 10.7554/elife.76452.
Abstract
Sensory receptive fields are large enough that they can contain more than one perceptible stimulus. How, then, can the brain encode information about each of the stimuli that may be present at a given moment? We recently showed that when more than one stimulus is present, single neurons can fluctuate between coding one vs. the other(s) across some time period, suggesting a form of neural multiplexing of different stimuli (Caruso et al., 2018). Here, we investigate (a) whether such coding fluctuations occur in early visual cortical areas; (b) how coding fluctuations are coordinated across the neural population; and (c) how coordinated coding fluctuations depend on the parsing of stimuli into separate vs. fused objects. We found coding fluctuations do occur in macaque V1 but only when the two stimuli form separate objects. Such separate objects evoked a novel pattern of V1 spike count ('noise') correlations involving distinct distributions of positive and negative values. This bimodal correlation pattern was most pronounced among pairs of neurons showing the strongest evidence for coding fluctuations or multiplexing. Whether a given pair of neurons exhibited positive or negative correlations depended on whether the two neurons both responded better to the same object or had different object preferences. Distinct distributions of spike count correlations based on stimulus preferences were also seen in V4 for separate objects but not when two stimuli fused to form one object. These findings suggest multiple objects evoke different response dynamics than those evoked by single stimuli, lending support to the multiplexing hypothesis and suggesting a means by which information about multiple objects can be preserved despite the apparent coarseness of sensory coding.
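The spike count ("noise") correlations analyzed in this study are simple to compute from trial-by-trial counts. A toy sketch follows; the trial numbers and rate values are illustrative, not the study's data:

```python
import numpy as np

def spike_count_correlation(counts_a, counts_b):
    """Pearson ('noise') correlation between two neurons' spike counts
    across repeated trials of the same stimulus condition."""
    a = np.asarray(counts_a, float) - np.mean(counts_a)
    b = np.asarray(counts_b, float) - np.mean(counts_b)
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

rng = np.random.default_rng(0)
# Toy trials: a fluctuation shared by both neurons (e.g. both coding
# the same object on a given trial) produces positive correlation;
# anti-phase fluctuations would produce negative correlation.
shared = rng.normal(0.0, 5.0, 200)
n1 = 20.0 + shared + rng.normal(0.0, 1.0, 200)
n2 = 15.0 + shared + rng.normal(0.0, 1.0, 200)
print(round(spike_count_correlation(n1, n2), 2))
```

In the multiplexing picture, pairs preferring the same object would resemble the shared-fluctuation case above, while pairs with opposite object preferences would fluctuate in anti-phase and show negative correlations.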
Affiliation(s)
- Na Young Jun: Department of Neurobiology, Duke University, Durham, United States; Center for Cognitive Neuroscience, Duke University, Durham, United States; Duke Institute for Brain Sciences, Durham, United States
- Douglas A Ruff: Department of Neuroscience, University of Pittsburgh, Pittsburgh, United States; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, United States
- Lily E Kramer: Department of Neuroscience, University of Pittsburgh, Pittsburgh, United States; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, United States
- Brittany Bowes: Department of Neuroscience, University of Pittsburgh, Pittsburgh, United States; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, United States
- Surya T Tokdar: Department of Statistical Science, Duke University, Durham, United States
- Marlene R Cohen: Department of Neuroscience, University of Pittsburgh, Pittsburgh, United States; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, United States
- Jennifer M Groh: Department of Neurobiology, Duke University, Durham, United States; Center for Cognitive Neuroscience, Duke University, Durham, United States; Duke Institute for Brain Sciences, Durham, United States; Department of Psychology and Neuroscience, Duke University, Durham, United States; Department of Biomedical Engineering, Duke University, Durham, United States; Department of Computer Science, Duke University, Durham, United States
5
Comparison of non-invasive, scalp-recorded auditory steady-state responses in humans, rhesus monkeys, and common marmosets. Sci Rep 2022; 12:9210. PMID: 35654875; PMCID: PMC9163194; DOI: 10.1038/s41598-022-13228-8.
Abstract
Auditory steady-state responses (ASSRs) are basic neural responses used to probe the ability of auditory circuits to produce synchronous activity in response to repetitive external stimulation. Reduced ASSRs have been observed in patients with schizophrenia, especially at 40 Hz. Although the ASSR is a translatable biomarker with potential in both animal models and patients with schizophrenia, little is known about its features in monkeys. Here, we recorded ASSRs from humans, rhesus monkeys, and marmosets using the same method, to directly compare their characteristics across species. Because monkeys typically vocalize in frequency ranges different from those used by humans, we presented auditory trains spanning a wide range of frequencies to identify the frequencies best suited to inducing ASSRs. We found that rhesus monkeys and marmosets also show auditory event-related potentials and phase-locking activity to gamma-frequency trains, although the optimal frequency yielding the best synchronization differed among species. These results suggest that the ASSR could be a useful translational, cross-species biomarker to examine the generation of gamma-band synchronization in nonhuman primate models of schizophrenia.
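Phase locking to a stimulation train is commonly quantified with measures such as inter-trial phase coherence (ITPC). A minimal sketch on simulated 40 Hz data follows; the exact analysis pipeline of the study may differ, and all signal parameters here are illustrative:

```python
import numpy as np

def itpc(trials, fs, freq):
    """Inter-trial phase coherence at one frequency: length of the
    mean unit phasor across trials (1 = perfect phase locking)."""
    trials = np.asarray(trials, float)
    t = np.arange(trials.shape[1]) / fs
    # Complex amplitude at the target frequency for each trial.
    phasor = trials @ np.exp(-2j * np.pi * freq * t)
    return float(np.abs(np.mean(phasor / np.abs(phasor))))

fs = 1000.0
t = np.arange(500) / fs
rng = np.random.default_rng(1)
# Simulated 40 Hz ASSR: same phase on every trial, plus noise.
locked = [np.sin(2 * np.pi * 40 * t) + rng.normal(0, 1, t.size)
          for _ in range(100)]
# Control: random phase on each trial (no steady-state locking).
jittered = [np.sin(2 * np.pi * 40 * t + rng.uniform(0, 2 * np.pi))
            + rng.normal(0, 1, t.size) for _ in range(100)]
print(itpc(locked, fs, 40.0) > itpc(jittered, fs, 40.0))
```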
6
Predicting neuronal response properties from hemodynamic responses in the auditory cortex. Neuroimage 2021; 244:118575. PMID: 34517127; DOI: 10.1016/j.neuroimage.2021.118575.
Abstract
Recent functional MRI (fMRI) studies have highlighted differences in responses to natural sounds along the rostral-caudal axis of the human superior temporal gyrus. However, due to the indirect nature of the fMRI signal, it has been challenging to relate these fMRI observations to actual neuronal response properties. To bridge this gap, we present a forward model of the fMRI responses to natural sounds combining a neuronal model of the auditory cortex with physiological modeling of the hemodynamic BOLD response. Neuronal responses are modeled with a dynamic recurrent firing rate model, reflecting the tonotopic, hierarchical processing in the auditory cortex along with the spectro-temporal tradeoff in the rostral-caudal axis of its belt areas. To link modeled neuronal response properties with human fMRI data in the auditory belt regions, we generated a space of neuronal models, which differed parametrically in spectral and temporal specificity of neuronal responses. Then, we obtained predictions of fMRI responses through a biophysical model of the hemodynamic BOLD response (P-DCM). Using Bayesian model comparison, our results showed that the hemodynamic BOLD responses of the caudal belt regions in the human auditory cortex were best explained by modeling faster temporal dynamics and broader spectral tuning of neuronal populations, while rostral belt regions were best explained through fine spectral tuning combined with slower temporal dynamics. These results support the hypotheses of complementary neural information processing along the rostral-caudal axis of the human superior temporal gyrus.
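The core idea of a hemodynamic forward model, mapping a neuronal time course onto a predicted BOLD response, can be sketched as a linear convolution with a canonical double-gamma response. Note this toy version is only illustrative: the P-DCM model used in the study is a richer, nonlinear biophysical model, and all parameter values below are assumptions:

```python
import numpy as np
from math import gamma

def hrf(t, peak=6.0, under=16.0, ratio=6.0):
    """Canonical double-gamma hemodynamic response (arbitrary units)."""
    pos = t ** (peak - 1) * np.exp(-t) / gamma(peak)
    neg = t ** (under - 1) * np.exp(-t) / gamma(under)
    return pos - neg / ratio

dt = 0.1
t = np.arange(0.0, 30.0, dt)
# Toy neuronal time course: a 2-s burst of firing starting at 1 s.
rate = np.zeros_like(t)
rate[(t >= 1.0) & (t < 3.0)] = 1.0
# Predicted BOLD signal: linear convolution of the rate with the HRF.
bold = np.convolve(rate, hrf(t), mode="full")[: t.size] * dt
# The predicted BOLD response peaks several seconds after the burst.
print(round(float(t[np.argmax(bold)]), 1))
```

Fitting neuronal parameters (here, the burst timing and amplitude) so that the predicted BOLD matches measured fMRI responses is the inverse problem the paper addresses with Bayesian model comparison.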
7
Willett SM, Groh JM. Multiple sounds degrade the frequency representation in monkey inferior colliculus. Eur J Neurosci 2021; 55:528-548. PMID: 34844286; PMCID: PMC9267755; DOI: 10.1111/ejn.15545.
Abstract
How we distinguish multiple simultaneous stimuli is uncertain, particularly given that such stimuli sometimes recruit largely overlapping populations of neurons. One commonly proposed hypothesis is that the sharpness of tuning curves might change to limit the number of stimuli driving any given neuron when multiple stimuli are present. To test this hypothesis, we recorded the activity of neurons in the inferior colliculus while monkeys made saccades to either one or two simultaneous sounds differing in frequency and spatial location. Although monkeys easily distinguished simultaneous sounds (~90% correct performance), the frequency selectivity of inferior colliculus neurons did not improve in any obvious way on dual-sound trials. If anything, frequency selectivity was degraded on dual-sound trials compared to single-sound trials: neural response functions broadened, and frequency accounted for less of the variance in firing rate. These changes in neural firing led a maximum-likelihood decoder to perform worse on dual-sound trials than on single-sound trials. These results fail to support the hypothesis that changes in frequency response functions serve to reduce overlap in the representation of simultaneous sounds. Instead, they suggest that alternative possibilities, such as recent evidence of alternation in firing rate between the rates corresponding to each of the two stimuli, offer a more promising approach.
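A maximum-likelihood decoder of the kind referred to here can be sketched for independent Poisson-spiking neurons; the tuning values and trial counts below are illustrative assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Tuning templates: mean spike count of 4 neurons for each of 3 sounds.
tuning = np.array([[8.0, 2.0, 1.0],
                   [2.0, 9.0, 2.0],
                   [1.0, 3.0, 7.0],
                   [4.0, 4.0, 4.0]])

def ml_decode(counts, tuning):
    """Maximum-likelihood stimulus decoder for independent Poisson
    neurons: pick the stimulus whose mean rates best explain counts."""
    counts = np.asarray(counts, float)
    # Poisson log-likelihood per stimulus, dropping the log(k!) term,
    # which does not depend on the stimulus.
    ll = counts @ np.log(tuning) - tuning.sum(axis=0)
    return int(np.argmax(ll))

# Decoding accuracy over simulated single-sound trials.
correct = 0
for _ in range(300):
    s = int(rng.integers(3))
    counts = rng.poisson(tuning[:, s])
    correct += ml_decode(counts, tuning) == s
print(correct / 300)
```

Broadened response functions on dual-sound trials would correspond to flatter rows of `tuning`, which directly lowers such a decoder's accuracy.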
Affiliation(s)
- Shawn M Willett: Department of Ophthalmology, University of Pittsburgh, Pittsburgh, Pennsylvania, USA; Department of Neurobiology, Center for Cognitive Neuroscience, Duke University, Durham, North Carolina, USA
- Jennifer M Groh: Department of Neurobiology, Center for Cognitive Neuroscience, Duke University, Durham, North Carolina, USA
8
Neuronal figure-ground responses in primate primary auditory cortex. Cell Rep 2021; 35:109242. PMID: 34133935; PMCID: PMC8220257; DOI: 10.1016/j.celrep.2021.109242.
Abstract
Figure-ground segregation, the brain's ability to group related features into stable perceptual entities, is crucial for auditory perception in noisy environments. The neuronal mechanisms for this process are poorly understood in the auditory system. Here, we report figure-ground modulation of multi-unit activity (MUA) in the primary and non-primary auditory cortex of rhesus macaques. Across both regions, MUA increases upon presentation of auditory figures, which consist of coherent chord sequences. We show increased activity even in the absence of any perceptual decision, suggesting that neural mechanisms for perceptual grouping are, to some extent, independent of behavioral demands. Furthermore, we demonstrate differences in figure encoding between more anterior and more posterior regions; perceptual saliency is represented in anterior cortical fields only. Our results suggest an encoding of auditory figures from the earliest cortical stages by a rate code.
Highlights:
- Neuronal figure-ground modulation in primary auditory cortex
- A rate code is used to signal the presence of auditory figures
- Anteriorly located recording sites encode perceptual saliency
- Figure-ground modulation is present without perceptual detection
9
An H, Ho Kei S, Auksztulewicz R, Schnupp JWH. Do Auditory Mismatch Responses Differ Between Acoustic Features? Front Hum Neurosci 2021; 15:613903. PMID: 33597853; PMCID: PMC7882487; DOI: 10.3389/fnhum.2021.613903.
Abstract
Mismatch negativity (MMN) is the electroencephalographic (EEG) waveform obtained by subtracting event-related potential (ERP) responses evoked by expected standard stimuli from those evoked by unexpected deviant stimuli. While the MMN is thought to reflect an unexpected change in an ongoing, predictable stimulus, it is unknown whether MMN responses evoked by changes in different stimulus features differ in magnitude, latency, and topography. The present study aimed to investigate whether MMN responses differ depending on whether a sudden stimulus change occurs in pitch, duration, location, or vowel identity. To calculate ERPs to standard and deviant stimuli, EEG signals were recorded in normal-hearing participants (N = 20; 13 males, 7 females) who listened to roving oddball sequences of artificial syllables. In the roving paradigm, any given stimulus is repeated several times to form a standard and is then suddenly replaced with a deviant stimulus that differs from the standard. Here, deviants differed from the preceding standards along one of four features (pitch, duration, vowel, or interaural level difference), with feature levels individually chosen to match behavioral discrimination performance. We identified neural activity evoked by unexpected violations along all four acoustic dimensions: evoked responses to deviant stimuli increased in amplitude relative to responses to standard stimuli. A univariate (channel-by-channel) analysis yielded no significant differences between MMN responses following violations of different features. However, in a multivariate analysis (pooling information from multiple EEG channels), the acoustic features could be decoded from the topography of mismatch responses, although at later latencies than those typical for the MMN. These results support the notion that deviant feature detection may be subserved by a different process than general mismatch detection.
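The difference-wave computation behind the MMN (average deviant ERP minus average standard ERP) is simple to sketch on simulated data; the amplitudes, latencies, and noise levels below are illustrative assumptions:

```python
import numpy as np

def mismatch_response(standard_trials, deviant_trials):
    """Difference wave: average deviant ERP minus average standard ERP."""
    return np.mean(deviant_trials, axis=0) - np.mean(standard_trials, axis=0)

fs = 500.0
t = np.arange(0, 0.4, 1 / fs)  # 0-400 ms post-stimulus
rng = np.random.default_rng(3)

def trial(mmn_amp):
    # Toy single-trial ERP: deviants carry an extra negative
    # deflection (Gaussian bump) around 150 ms, plus sensor noise.
    erp = -mmn_amp * np.exp(-((t - 0.15) ** 2) / (2 * 0.02 ** 2))
    return erp + rng.normal(0, 0.5, t.size)

standards = [trial(0.0) for _ in range(200)]
deviants = [trial(2.0) for _ in range(200)]
diff = mismatch_response(standards, deviants)
peak_ms = 1000 * t[np.argmin(diff)]  # latency of the negative peak
print(round(peak_ms))
```

The multivariate analysis in the study goes further, decoding which feature was violated from the across-channel topography of such difference waves.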
Affiliation(s)
- HyunJung An: Department of Neuroscience, City University of Hong Kong, Kowloon, Hong Kong
- Shing Ho Kei: Department of Neuroscience, City University of Hong Kong, Kowloon, Hong Kong
- Ryszard Auksztulewicz: Department of Neuroscience, City University of Hong Kong, Kowloon, Hong Kong; Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Jan W H Schnupp: Department of Neuroscience, City University of Hong Kong, Kowloon, Hong Kong
10
Song PR, Zhai YY, Gong YM, Du XY, He J, Zhang QC, Yu X. Adaptation in the Dorsal Belt and Core Regions of the Auditory Cortex in the Awake Rat. Neuroscience 2020; 455:79-88. PMID: 33285236; DOI: 10.1016/j.neuroscience.2020.11.042.
Abstract
The rat auditory cortex is divided anatomically into several areas, but little is known about the functional differences in information processing among them. Three tonotopically organized core fields, the primary (A1), anterior (AAF), and ventral (VAF) auditory fields, as well as one non-tonotopically organized belt field, the dorsal belt (DB), were identified based on their response properties. Compared to neurons in A1, AAF, and VAF, units in the DB exhibited little or no response to pure tones but strong responses to white noise. The few DB neurons that did respond to pure tones had thresholds greater than 60 dB SPL, significantly higher than the thresholds of neurons in the core regions. In response to white noise, units in the DB showed significantly longer latencies, lower peak responses, and longer response durations than those in the core regions. Responses to repeated white noise were also examined. In contrast to neurons in A1, AAF, and VAF, DB neurons could not follow repeated stimulation at a 300 ms inter-stimulus interval (ISI) and showed a significantly steeper ISI tuning-curve slope as the ISI was increased from 300 ms to 4.8 s. These results indicate that the DB processes auditory information on broader spectral and longer temporal scales than the core regions, reflecting a distinct role in the hierarchical cortical pathway.
Affiliation(s)
- Pei-Run Song: Department of Neurology of the Second Affiliated Hospital of Zhejiang University School of Medicine, Interdisciplinary Institute of Neuroscience and Technology, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, Zhejiang Province, China; Key Laboratory for Biomedical Engineering of Ministry of Education, Zhejiang University, China
- Yu-Ying Zhai: Department of Neurology of the Second Affiliated Hospital of Zhejiang University School of Medicine, Interdisciplinary Institute of Neuroscience and Technology, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, Zhejiang Province, China; Key Laboratory for Biomedical Engineering of Ministry of Education, Zhejiang University, China
- Yu-Mei Gong: Department of Neurology of the Second Affiliated Hospital of Zhejiang University School of Medicine, Interdisciplinary Institute of Neuroscience and Technology, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, Zhejiang Province, China
- Xin-Yu Du: Department of Neurology of the Second Affiliated Hospital of Zhejiang University School of Medicine, Interdisciplinary Institute of Neuroscience and Technology, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, Zhejiang Province, China
- Jie He: Department of Neurology of the Second Affiliated Hospital of Zhejiang University School of Medicine, Interdisciplinary Institute of Neuroscience and Technology, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, Zhejiang Province, China
- Qi-Chen Zhang: Department of Neurology of the Second Affiliated Hospital of Zhejiang University School of Medicine, Interdisciplinary Institute of Neuroscience and Technology, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, Zhejiang Province, China
- Xiongjie Yu: Department of Neurology of the Second Affiliated Hospital of Zhejiang University School of Medicine, Interdisciplinary Institute of Neuroscience and Technology, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, Zhejiang Province, China; Key Laboratory for Biomedical Engineering of Ministry of Education, Zhejiang University, China
11
Johnson JS, Niwa M, O'Connor KN, Sutter ML. Amplitude modulation encoding in the auditory cortex: comparisons between the primary and middle lateral belt regions. J Neurophysiol 2020; 124:1706-1726. PMID: 33026929; DOI: 10.1152/jn.00171.2020.
Abstract
In macaques, the middle lateral auditory cortex (ML) is a belt region adjacent to the primary auditory cortex (A1) and believed to sit at a hierarchically higher level. Although ML single-unit responses have been studied for several auditory stimuli, the ability of ML cells to encode amplitude modulation (AM), an ability that has been widely studied in A1, has not yet been characterized. Here, we compared the responses of A1 and ML neurons to AM noise in awake macaques. Although several basic properties of A1 and ML responses to AM noise were similar, we found several key differences. ML neurons were less likely to phase lock, did not phase lock as strongly, and were more likely to respond in a nonsynchronized fashion than A1 cells, consistent with a temporal-to-rate transformation as information ascends the auditory hierarchy. ML neurons tended to have lower temporally based (phase-locking) best modulation frequencies than A1 neurons. Neurons that decreased their firing rate in response to AM noise, relative to their firing rate in response to unmodulated noise, were more common in ML than in A1. In both A1 and ML, we found a prevalent class of neurons with enhanced rate responses (relative to unmodulated noise) at lower modulation frequencies and suppressed rate responses at middle modulation frequencies.
NEW & NOTEWORTHY: ML neurons synchronized less than A1 neurons, consistent with a hierarchical temporal-to-rate transformation. Both A1 and ML contained a class of modulation transfer functions previously unreported in the cortex, with a low-modulation-frequency (MF) peak, a middle-MF trough, and responses similar to unmodulated-noise responses at high MFs. The results support a hierarchical shift toward a two-pool opponent code, in which subtraction of neural activity between two populations of oppositely tuned neurons encodes AM.
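Phase locking to AM is commonly quantified with vector strength, which distinguishes synchronized from nonsynchronized responses of the kind contrasted here. A minimal sketch on simulated spike trains (the study's exact metrics may differ; the modulation frequency and jitter values are illustrative):

```python
import numpy as np

def vector_strength(spike_times, mod_freq):
    """Vector strength: 1 = all spikes at one modulation phase,
    near 0 = spikes spread uniformly over the AM cycle."""
    phases = 2 * np.pi * mod_freq * np.asarray(spike_times)
    return float(np.abs(np.mean(np.exp(1j * phases))))

rng = np.random.default_rng(4)
fm = 10.0  # modulation frequency, Hz
# Synchronized unit: spikes jittered around one phase of each AM cycle.
sync = np.repeat(np.arange(100) / fm, 3) + rng.normal(0, 0.004, 300)
# Nonsynchronized unit: same mean rate, spikes at random times.
nonsync = rng.uniform(0, 100 / fm, 300)
print(vector_strength(sync, fm) > vector_strength(nonsync, fm))
```

A rate code, by contrast, would show up as a firing-rate difference between AM and unmodulated noise even when vector strength is near zero.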
Affiliation(s)
- Jeffrey S Johnson: Center for Neuroscience, University of California, Davis, California
- Mamiko Niwa: Center for Neuroscience, University of California, Davis, California
- Kevin N O'Connor: Center for Neuroscience, University of California, Davis, California; Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
- Mitchell L Sutter: Center for Neuroscience, University of California, Davis, California; Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
12
Kuiper JJ, Lin YH, Young IM, Bai MY, Briggs RG, Tanglay O, Fonseka RD, Hormovas J, Dhanaraj V, Conner AK, O'Neal CM, Sughrue ME. A parcellation-based model of the auditory network. Hear Res 2020; 396:108078. PMID: 32961519; DOI: 10.1016/j.heares.2020.108078.
Abstract
INTRODUCTION: The auditory network plays an important role in interaction with the environment. Multiple cortical areas, such as the inferior frontal gyrus, superior temporal gyrus, and adjacent insula, have been implicated in this processing. However, understanding of this network's connectivity has lacked tractographic specificity.
METHODS: Using attention task-based functional magnetic resonance imaging (fMRI) studies, an activation likelihood estimation (ALE) of the auditory network was generated. Regions of interest corresponding to the cortical parcellation scheme previously published under the Human Connectome Project were co-registered onto the ALE in Montreal Neurological Institute coordinate space and visually assessed for inclusion in the network. Diffusion spectrum MRI-based fiber tractography was performed to determine the structural connections between the cortical parcellations comprising the network.
RESULTS: Fifteen cortical regions were found to be part of the auditory network: areas 44 and 8C; auditory areas 1, 4, and 5; frontal operculum area 4; the lateral belt, medial belt, and parabelt; parietal area F centromedian; the perisylvian language area; the retroinsular cortex; the supplementary and cingulate eye field; and temporoparietal junction area 1. These regions showed consistent interconnections between adjacent parcellations. The frontal aslant tract was found to connect areas within the frontal lobe, the arcuate fasciculus to connect the frontal and temporal lobes, and subcortical U-fibers to connect parcellations within the temporal area. Further studies may refine this model, with the ultimate goal of clinical application.
Affiliation(s)
- Joseph J Kuiper: Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, United States
- Yueh-Hsin Lin: Centre for Minimally Invasive Neurosurgery, Suite 19, Level 7, Prince of Wales Private Hospital, Randwick, Sydney, NSW 2031, Australia
- Michael Y Bai: Centre for Minimally Invasive Neurosurgery, Suite 19, Level 7, Prince of Wales Private Hospital, Randwick, Sydney, NSW 2031, Australia
- Robert G Briggs: Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, United States
- Onur Tanglay: Centre for Minimally Invasive Neurosurgery, Suite 19, Level 7, Prince of Wales Private Hospital, Randwick, Sydney, NSW 2031, Australia
- R Dineth Fonseka: Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, United States
- Jorge Hormovas: Centre for Minimally Invasive Neurosurgery, Suite 19, Level 7, Prince of Wales Private Hospital, Randwick, Sydney, NSW 2031, Australia
- Vukshitha Dhanaraj: Centre for Minimally Invasive Neurosurgery, Suite 19, Level 7, Prince of Wales Private Hospital, Randwick, Sydney, NSW 2031, Australia
- Andrew K Conner: Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, United States
- Christen M O'Neal: Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, United States
- Michael E Sughrue: Centre for Minimally Invasive Neurosurgery, Suite 19, Level 7, Prince of Wales Private Hospital, Randwick, Sydney, NSW 2031, Australia

13
Stereotactic electroencephalography in humans reveals multisensory signal in early visual and auditory cortices. Cortex 2020; 126:253-264. [PMID: 32092494] [DOI: 10.1016/j.cortex.2019.12.032] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0]
Abstract
Unequivocally demonstrating the presence of multisensory signals at the earliest stages of cortical processing remains challenging in humans. In our study, we relied on the unique spatio-temporal resolution provided by intracranial stereotactic electroencephalographic (SEEG) recordings in patients with drug-resistant epilepsy to characterize the signal extracted from early visual (calcarine and pericalcarine) and auditory (Heschl's gyrus and planum temporale) regions during a simple audio-visual oddball task. We provide evidence that both cross-modal responses (visual responses in auditory cortex, or the reverse) and multisensory processing (alteration of the unimodal responses during bimodal stimulation) can be observed in intracranial event-related potentials (iERPs) and in power modulations of oscillatory activity at different temporal scales within the first 150 msec after stimulus onset. The temporal profiles of the iERPs are compatible with the hypothesis that multisensory integration (MSI) occurs by means of direct pathways linking early visual and auditory regions. Our data indicate, moreover, that MSI mainly relies on modulations of the low-frequency bands (foremost the theta band in the auditory cortex and the alpha band in the visual cortex), suggesting the involvement of feedback pathways between the two sensory regions. Remarkably, we also observed high-gamma power modulations by sounds in the early visual cortex, suggesting the presence of neuronal populations involved in auditory processing in the calcarine and pericalcarine region in humans.
14
Ramamurthy DL, Recanzone GH. Age-related changes in sound onset and offset intensity coding in auditory cortical fields A1 and CL of rhesus macaques. J Neurophysiol 2020; 123:1015-1025. [PMID: 31995426] [DOI: 10.1152/jn.00373.2019] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8]
Abstract
Inhibition plays a key role in shaping sensory processing in the central auditory system and has been implicated in sculpting receptive field properties such as sound intensity coding, as well as in shaping temporal patterns of neuronal firing such as onset- or offset-evoked responses. There is substantial evidence supporting a decrease in inhibition throughout the ascending auditory pathway in geriatric animals. We therefore examined intensity coding of onset (ON) and offset (OFF) responses in auditory cortex of aged and young monkeys. A large proportion of cells in the primary auditory cortex (A1) and the caudolateral field (CL) displayed nonmonotonic rate-level functions for OFF responses in addition to nonmonotonic coding of ON responses. Aging differentially affected ON and OFF responses; the magnitude of effects was generally greater for ON responses. In addition to higher firing rates, neurons in old monkeys exhibited a significant increase in the proportion of monotonic rate-level functions and had higher best intensities than those in young monkeys. OFF responses in young monkeys displayed a range of intensity coding relationships with ON responses of the same cells, ranging from highly similar to highly dissimilar. Dissimilarity in ON/OFF coding was greater in CL and was reduced with aging, which was largely explained by a preferential decrease in the percentage of cells with nonmonotonic coding of ON and OFF responses. The changes we observed are consistent with previously demonstrated alterations in inhibition in the ascending auditory pathway of primates and could be involved in age-related deficits in the temporal processing of sounds.

NEW & NOTEWORTHY: Aging has a major impact on intensity coding of neurons in auditory cortex of rhesus macaques. Neural responses to sound onset and offset were affected to different extents, and their rate-level functions became more mutually similar, which could be accounted for by the loss of nonmonotonic intensity coding in geriatric monkeys. These findings are consistent with weakened inhibition in the central auditory system and could contribute to auditory processing deficits in elderly subjects.
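The monotonic/nonmonotonic distinction in rate-level functions described above is often summarized by a monotonicity index (MI): the firing rate at the highest tested sound level divided by the peak firing rate. A minimal illustrative sketch on synthetic data; the 0.75 cutoff is an assumed, commonly used criterion, not necessarily the study's exact definition:

```python
import numpy as np

def monotonicity_index(rates):
    """MI = firing rate at the highest sound level / peak firing rate.
    Values near 1 indicate a monotonic rate-level function; low values
    indicate nonmonotonic (peaked) intensity tuning."""
    rates = np.asarray(rates, dtype=float)
    return rates[-1] / rates.max()

# Synthetic rate-level functions over sound levels 10..80 dB SPL
levels = np.arange(10, 90, 10)
monotonic_rates = 0.5 * levels                            # grows with level
nonmono_rates = np.exp(-((levels - 50.0) ** 2) / 200.0)   # peaks at 50 dB

mi_mono = monotonicity_index(monotonic_rates)   # 1.0 (fully monotonic)
mi_peak = monotonicity_index(nonmono_rates)     # ~0.01 (strongly nonmonotonic)

# Assumed classification cutoff for illustration: nonmonotonic if MI < 0.75
is_nonmonotonic = mi_peak < 0.75
```

Under this measure, the reported age-related shift toward monotonic rate-level functions corresponds to a population-level increase in MI.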
Affiliation(s)
- Gregg H Recanzone: Center for Neuroscience, University of California, Davis, California; Department of Neurobiology, Physiology, and Behavior, University of California, Davis, California

15
Zulfiqar I, Moerel M, Formisano E. Spectro-Temporal Processing in a Two-Stream Computational Model of Auditory Cortex. Front Comput Neurosci 2020; 13:95. [PMID: 32038212] [PMCID: PMC6987265] [DOI: 10.3389/fncom.2019.00095] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3]
Abstract
Neural processing of sounds in the dorsal and ventral streams of the (human) auditory cortex is optimized for analyzing fine-grained temporal and spectral information, respectively. Here we use a Wilson and Cowan firing-rate modeling framework to simulate spectro-temporal processing of sounds in these auditory streams and to investigate the link between neural population activity and behavioral results of psychoacoustic experiments. The proposed model consisted of two core areas (A1 and R, representing primary areas) and two belt areas (Slow and Fast, representing rostral and caudal processing, respectively), differing in terms of their spectral and temporal response properties. First, we simulated the responses to amplitude-modulated (AM) noise and tones. In agreement with electrophysiological results, we observed an area-dependent transition from a temporal (synchronization) code to a rate code when moving from low to high modulation rates. Simulated neural responses in an amplitude-modulation detection task suggested that thresholds derived from population responses in core areas closely resembled those of psychoacoustic experiments in human listeners. For tones, simulated modulation threshold functions were found to depend on the carrier frequency. Second, we simulated the responses to missing-fundamental complex tones and found that synchronization of responses in the Fast area accurately encoded pitch, with the strength of synchronization depending on the number and order of harmonic components. Finally, using speech stimuli, we showed that the spectral and temporal structure of the speech was reflected in parallel by the modeled areas. The analyses highlighted that the Slow stream coded with high spectral precision the aspects of the speech signal characterized by slow temporal changes (e.g., prosody), while the Fast stream encoded primarily the faster changes (e.g., phonemes, consonants, temporal pitch). Interestingly, the pitch of a speaker was encoded both spatially (i.e., tonotopically) in the Slow area and temporally in the Fast area. Overall, the simulations showed that the model is valuable for generating hypotheses on how the different cortical areas/streams may contribute to behaviorally relevant aspects of auditory processing. The model can be used in combination with physiological models of neurovascular coupling to generate predictions for human functional MRI experiments.
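The Wilson-Cowan firing-rate framework named in this abstract can be illustrated with a single excitatory-inhibitory unit. This is a generic textbook sketch with illustrative parameter values, not the paper's fitted four-area (A1, R, Slow, Fast) model:

```python
import numpy as np

def wilson_cowan(t_max=0.5, dt=1e-4, tau_e=0.010, tau_i=0.020,
                 w_ee=16.0, w_ei=12.0, w_ie=15.0, w_ii=3.0, drive=1.25):
    """Euler-integrate one excitatory (E) / inhibitory (I) Wilson-Cowan unit.

    dE/dt = (-E + f(w_ee*E - w_ei*I + drive)) / tau_e
    dI/dt = (-I + f(w_ie*E - w_ii*I)) / tau_i
    """
    f = lambda x: 1.0 / (1.0 + np.exp(-x))   # sigmoid rate nonlinearity
    n = int(round(t_max / dt))
    E = np.zeros(n)
    I = np.zeros(n)
    for k in range(n - 1):
        E[k + 1] = E[k] + dt / tau_e * (-E[k] + f(w_ee * E[k] - w_ei * I[k] + drive))
        I[k + 1] = I[k] + dt / tau_i * (-I[k] + f(w_ie * E[k] - w_ii * I[k]))
    return E, I

E, I = wilson_cowan()
# Because the sigmoid is bounded in (0, 1), both population rates stay in [0, 1].
```

In the paper's two-stream model, areas of this kind are coupled and given different spectral and temporal integration properties; here only the core dynamics are shown.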
Affiliation(s)
- Isma Zulfiqar: Maastricht Centre for Systems Biology, Maastricht University, Maastricht, Netherlands
- Michelle Moerel: Maastricht Centre for Systems Biology, Maastricht University; Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University; Maastricht Brain Imaging Center, Maastricht, Netherlands
- Elia Formisano: Maastricht Centre for Systems Biology, Maastricht University; Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University; Maastricht Brain Imaging Center, Maastricht, Netherlands

16
Locating the engram: Should we look for plastic synapses or information-storing molecules? Neurobiol Learn Mem 2020; 169:107164. [PMID: 31945459] [DOI: 10.1016/j.nlm.2020.107164] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8]
Abstract
Karl Lashley began the search for the engram nearly seventy years ago. In the time since, much has been learned, but divisions remain. In the contemporary neurobiology of learning and memory, two profoundly different conceptions contend: the associative/connectionist (A/C) conception and the computational/representational (C/R) conception. Both theories ground themselves in the belief that the mind is emergent from the properties and processes of a material brain. Where these theories differ is in their description of what the neurobiological substrate of memory is and where it resides in the brain. The A/C theory of memory emphasizes the need to distinguish memory cognition from the memory engram and postulates that memory cognition is an emergent property of patterned neural activity routed through engram circuits. In this model, learning re-organizes synapse association strengths to guide future neural activity. Importantly, the version of the A/C theory advocated for here contends that synaptic change is not symbolic and, despite normally being necessary, is not sufficient for memory cognition. Instead, synaptic change provides the capacity and a blueprint for reinstating symbolic patterns of neural activity. Unlike the A/C theory, which posits that memory emerges at the circuit level, the C/R conception suggests that memory manifests at the level of intracellular molecular structures. In C/R theory, these intracellular structures are information-conveying and have properties compatible with the view that brain computation utilizes a read/write memory, functionally similar to that in a computer. New research has energized both sides and highlighted the need for new discussion. Both theories are presented here, along with the key questions each has yet to resolve and several potential paths forward.
17
Ng CW, Recanzone GH. Age-Related Changes in Temporal Processing of Rapidly-Presented Sound Sequences in the Macaque Auditory Cortex. Cereb Cortex 2019; 28:3775-3796. [PMID: 29040403] [DOI: 10.1093/cercor/bhx240] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4]
Abstract
The mammalian auditory cortex is necessary to resolve temporal features in rapidly-changing sound streams. This capability is crucial for speech comprehension in humans and declines with normal aging. Nonhuman primate studies have revealed detrimental effects of normal aging on the auditory nervous system, and yet the underlying influence on temporal processing remains less well-defined. Therefore, we recorded from the core and lateral belt areas of auditory cortex while awake young and old monkeys listened to tone-pip and noise-burst sound sequences. Elevated spontaneous and stimulus-driven activity were the hallmark characteristics in old monkeys. These old neurons showed isomorphic-like discharge patterns to stimulus envelopes, though their phase-locking was less precise. A functional preference in temporal coding between the core and belt existed in the young monkeys but was mostly absent in the old monkeys, in which belt neurons showed core-like response profiles. Finally, the analysis of population activity patterns indicated that the aged auditory cortex demonstrated a homogeneous, distributed coding strategy, compared to the selective, sparse coding strategy observed in the young monkeys. Degraded temporal fidelity and highly responsive, broadly tuned cortical responses could underlie aged listeners' difficulty in resolving and tracking dynamic sounds, leading to speech-processing deficits.
Affiliation(s)
- Chi-Wing Ng: Center for the Neurobiology of Learning and Memory, University of California, Irvine, CA, USA
- Gregg H Recanzone: Center for Neuroscience, University of California, Davis, CA, USA; Department of Neurobiology, Physiology and Behavior, University of California, Davis, CA, USA

18
Camalier CR, Scarim K, Mishkin M, Averbeck BB. A Comparison of Auditory Oddball Responses in Dorsolateral Prefrontal Cortex, Basolateral Amygdala, and Auditory Cortex of Macaque. J Cogn Neurosci 2019; 31:1054-1064. [PMID: 30883292] [DOI: 10.1162/jocn_a_01387] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8]
Abstract
The mismatch negativity (MMN) is an ERP component seen in response to unexpected "novel" stimuli, such as in an auditory oddball task. The MMN is of wide interest and application, but the neural responses that generate it are poorly understood. This is in part due to differences in design and focus between animal and human oddball paradigms. For example, one of the main explanatory models, the "predictive error hypothesis", posits differences in timing and selectivity between signals carried in auditory and prefrontal cortex (PFC). However, these predictions have not been fully tested because (1) noninvasive techniques used in humans lack the combined spatial and temporal precision necessary for these comparisons and (2) single-neuron studies in animal models, which combine the necessary spatial and temporal precision, have not focused on higher order contributions to novelty signals. In addition, accounts of the MMN traditionally do not address contributions from subcortical areas known to be involved in novelty detection, such as the amygdala. To better constrain hypotheses and to address methodological gaps between human and animal studies, we recorded single neuron activity from the auditory cortex, dorsolateral PFC, and basolateral amygdala of two macaque monkeys during an auditory oddball paradigm modeled after that used in humans. Consistent with predictions of the predictive error hypothesis, novelty signals in PFC were generally later than in auditory cortex and were abstracted from stimulus-specific effects seen in auditory cortex. However, we found signals in amygdala that were comparable in magnitude and timing to those in PFC, and both prefrontal and amygdala signals were generally much weaker than those in auditory cortex. These observations place useful quantitative constraints on putative generators of the auditory oddball-based MMN and additionally indicate that there are subcortical areas, such as the amygdala, that may be involved in novelty detection in an auditory oddball paradigm.
19
Remington ED, Wang X. Neural Representations of the Full Spatial Field in Auditory Cortex of Awake Marmoset (Callithrix jacchus). Cereb Cortex 2019; 29:1199-1216. [PMID: 29420692] [PMCID: PMC6373678] [DOI: 10.1093/cercor/bhy025] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4]
Abstract
Unlike visual signals, sound can reach the ears from any direction, and the ability to localize sounds from all directions is essential for survival in a natural environment. Previous studies have largely focused on the space in front of a subject that is also covered by vision and were often limited to measuring spatial tuning along the horizontal (azimuth) plane. As a result, we know relatively little about how the auditory cortex responds to sounds coming from spatial locations outside the frontal space where visual information is unavailable. By mapping single-neuron responses to the full spatial field in awake marmoset (Callithrix jacchus), an arboreal animal for which spatial processing is vital in its natural habitat, we show that spatial receptive fields in several auditory areas cover all spatial locations. Several complementary measures of spatial tuning showed that neurons were tuned to both frontal space and rear space (outside the coverage of vision), as well as the space above and below the horizontal plane. Together, these findings provide valuable new insights into the representation of all spatial locations by primate auditory cortex.
Affiliation(s)
- Evan D Remington: Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Xiaoqin Wang: Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA

20
Venezia JH, Thurman SM, Richards VM, Hickok G. Hierarchy of speech-driven spectrotemporal receptive fields in human auditory cortex. Neuroimage 2018; 186:647-666. [PMID: 30500424] [DOI: 10.1016/j.neuroimage.2018.11.049] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0]
Abstract
Existing data indicate that cortical speech processing is hierarchically organized. Numerous studies have shown that early auditory areas encode fine acoustic details while later areas encode abstracted speech patterns. However, it remains unclear precisely what speech information is encoded across these hierarchical levels. Estimation of speech-driven spectrotemporal receptive fields (STRFs) provides a means to explore cortical speech processing in terms of acoustic or linguistic information associated with characteristic spectrotemporal patterns. Here, we estimate STRFs from cortical responses to continuous speech in fMRI. Using a novel approach based on filtering randomly-selected spectrotemporal modulations (STMs) from aurally-presented sentences, STRFs were estimated for a group of listeners and categorized using a data-driven clustering algorithm. 'Behavioral STRFs' highlighting STMs crucial for speech recognition were derived from intelligibility judgments. Clustering revealed that STRFs in the supratemporal plane represented a broad range of STMs, while STRFs in the lateral temporal lobe represented circumscribed STM patterns important to intelligibility. Detailed analysis recovered a bilateral organization with posterior-lateral regions preferentially processing STMs associated with phonological information and anterior-lateral regions preferentially processing STMs associated with word- and phrase-level information. Regions in lateral Heschl's gyrus preferentially processed STMs associated with vocalic information (pitch).
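STRF estimation of the kind described above, mapping a time-lagged spectrogram onto a response through a linear filter, is commonly solved with ridge regression. A minimal synthetic sketch under that standard formulation; it is not the authors' filtered-STM method, and all variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_freq, n_lag, n_t = 8, 5, 3000

# Synthetic "spectrogram" stimulus and a random ground-truth STRF
spec = rng.standard_normal((n_freq, n_t))
strf_true = rng.standard_normal((n_freq, n_lag))

# Design matrix: each row holds the n_lag most recent spectrogram frames
X = np.zeros((n_t - n_lag + 1, n_freq * n_lag))
for t in range(n_lag - 1, n_t):
    X[t - n_lag + 1] = spec[:, t - n_lag + 1:t + 1].ravel()

# Simulated neural response: the STRF applied to the stimulus, plus noise
y = X @ strf_true.ravel() + 0.5 * rng.standard_normal(X.shape[0])

# Ridge (Tikhonov-regularized least squares) estimate of the STRF
lam = 1.0
strf_est = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]),
                           X.T @ y).reshape(n_freq, n_lag)

# With ample data and mild noise, the estimate correlates highly with the truth
r = np.corrcoef(strf_est.ravel(), strf_true.ravel())[0, 1]
```

The study then clusters such per-voxel filters to reveal the cortical hierarchy; only the filter-estimation step is sketched here.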
Affiliation(s)
- Jonathan H Venezia: VA Loma Linda Healthcare System, Loma Linda, CA, USA; Dept. of Otolaryngology, School of Medicine, Loma Linda University, Loma Linda, CA, USA
- Virginia M Richards: Depts. of Cognitive Sciences and Language Science, University of California, Irvine, Irvine, CA, USA
- Gregory Hickok: Depts. of Cognitive Sciences and Language Science, University of California, Irvine, Irvine, CA, USA

21
Zhu S, Allitt B, Samuel A, Lui L, Rosa MGP, Rajan R. Distributed representation of vocalization pitch in marmoset primary auditory cortex. Eur J Neurosci 2018; 49:179-198. [PMID: 30307660] [DOI: 10.1111/ejn.14204] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7]
Abstract
The pitch of vocalizations is a key communication feature aiding recognition of individuals and separating sound sources in complex acoustic environments. The neural representation of the pitch of periodic sounds is well defined. However, many natural sounds, like complex vocalizations, contain rich, aperiodic or not strictly periodic frequency content and/or include high-frequency components, but still evoke a strong sense of pitch. Indeed, such sounds are the rule, not the exception, but the cortical mechanisms for encoding their pitch are unknown. We investigated how neurons in the high-frequency representation of primary auditory cortex (A1) of marmosets encoded changes in pitch of four natural vocalizations, two centred around a dominant frequency similar to the neuron's best sensitivity and two around a much lower dominant frequency. Pitch was varied over a fine range that can be used by marmosets to differentiate individuals. The responses of most high-frequency A1 neurons were sensitive to pitch changes in all four vocalizations, with a smaller proportion of the neurons showing pitch-insensitive responses. Classically defined excitatory drive, from the neuron's monaural frequency response area, predicted responses to changes in vocalization pitch in <30% of neurons, suggesting that most of the observed pitch tuning is not a simple frequency-level response. Moreover, 39% of A1 neurons showed call-invariant pitch tuning. These results suggest that distributed activity across A1 can represent the pitch of natural sounds over a fine, functionally relevant range, and exhibits pitch tuning for vocalizations within and outside the classical neural tuning area.
Affiliation(s)
- Shuyu Zhu: Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, Victoria, Australia; Centre of Excellence in Integrative Brain Function, Australian Research Council, Clayton, Victoria, Australia
- Ben Allitt: Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, Victoria, Australia
- Anil Samuel: Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, Victoria, Australia
- Leo Lui: Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, Victoria, Australia; Centre of Excellence in Integrative Brain Function, Australian Research Council, Clayton, Victoria, Australia
- Marcello G P Rosa: Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, Victoria, Australia; Centre of Excellence in Integrative Brain Function, Australian Research Council, Clayton, Victoria, Australia
- Ramesh Rajan: Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, Victoria, Australia

22
Christison-Lagay KL, Cohen YE. The Contribution of Primary Auditory Cortex to Auditory Categorization in Behaving Monkeys. Front Neurosci 2018; 12:601. [PMID: 30210282] [PMCID: PMC6123543] [DOI: 10.3389/fnins.2018.00601] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8]
Abstract
The specific contribution of core auditory cortex to auditory perception, such as categorization, remains controversial. To identify a contribution of the primary auditory cortex (A1) to perception, we recorded A1 activity while monkeys reported whether a temporal sequence of tone bursts was heard as having a "small" or "large" frequency difference. We found that A1 had frequency-tuned responses that habituated, independent of frequency content, as this auditory sequence unfolded over time. We also found that A1 firing rate was modulated by the monkeys' reports of "small" and "large" frequency differences; this modulation correlated with their behavioral performance. These findings are consistent with the hypothesis that A1 contributes to the processes underlying auditory categorization.
Affiliation(s)
- Kate L Christison-Lagay: Neuroscience Graduate Group, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Yale E Cohen: Departments of Otorhinolaryngology, Neuroscience, and Bioengineering, University of Pennsylvania, Philadelphia, PA, United States

23
Gao F, Chen L, Zhang J. Nonuniform impacts of forward suppression on neural responses to preferred stimuli and nonpreferred stimuli in the rat auditory cortex. Eur J Neurosci 2018; 47:1320-1338. [PMID: 29761576] [DOI: 10.1111/ejn.13943] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3]
Abstract
In natural conditions, humans and animals need to extract target sound information from noisy acoustic environments for communication and survival. However, how contextual environmental sounds impact the tuning of central auditory neurons to target sound source azimuth over a wide range of sound levels is not fully understood. Here, we determined the azimuth-level response areas (ALRAs) of rat auditory cortex neurons by recording their responses to probe tones varying in level and sound source azimuth under both quiet (probe alone) and forward masking conditions (preceding noise + probe). In quiet, cortical neurons responded more strongly to their preferred stimuli than to their nonpreferred stimuli. In forward masking conditions, an effective preceding noise reduced the extents of the ALRAs and suppressed the neural responses across the ALRAs by decreasing the response strength and lengthening the first-spike latency. The forward suppressive effect on neural response strength increased with increasing masker level and decreased as the time interval between masker and probe was prolonged. For a portion of the cortical neurons studied, the effects of forward suppression on the response strength to preferred stimuli were weaker than those to nonpreferred stimuli, and the recovery from forward suppression of the response strength to preferred stimuli was earlier than that to nonpreferred stimuli. We suggest that this nonuniform forward suppression of neural responses to preferred and nonpreferred stimuli is important for cortical neurons to maintain relatively stable preferences for target sound source azimuth and level in noisy acoustic environments.
Affiliation(s)
- Fei Gao: Key Laboratory of Brain Functional Genomics, Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics, NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, School of Life Sciences, East China Normal University, Shanghai, China
- Liang Chen: Key Laboratory of Brain Functional Genomics, Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics, NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, School of Life Sciences, East China Normal University, Shanghai, China
- Jiping Zhang: Key Laboratory of Brain Functional Genomics, Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics, NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, School of Life Sciences, East China Normal University, Shanghai, China

24
Scott BH, Leccese PA, Saleem KS, Kikuchi Y, Mullarkey MP, Fukushima M, Mishkin M, Saunders RC. Intrinsic Connections of the Core Auditory Cortical Regions and Rostral Supratemporal Plane in the Macaque Monkey. Cereb Cortex 2018; 27:809-840. [PMID: 26620266] [DOI: 10.1093/cercor/bhv277] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5]
Abstract
In the ventral stream of the primate auditory cortex, cortico-cortical projections emanate from the primary auditory cortex (AI) along 2 principal axes: one mediolateral, the other caudorostral. Connections in the mediolateral direction from core, to belt, to parabelt, have been well described, but less is known about the flow of information along the supratemporal plane (STP) in the caudorostral dimension. Neuroanatomical tracers were injected throughout the caudorostral extent of the auditory core and rostral STP by direct visualization of the cortical surface. Auditory cortical areas were distinguished by SMI-32 immunostaining for neurofilament, in addition to established cytoarchitectonic criteria. The results describe a pathway comprising step-wise projections from AI through the rostral and rostrotemporal fields of the core (R and RT), continuing to the recently identified rostrotemporal polar field (RTp) and the dorsal temporal pole. Each area was strongly and reciprocally connected with the areas immediately caudal and rostral to it, though deviations from strictly serial connectivity were observed. In RTp, inputs converged from core, belt, parabelt, and the auditory thalamus, as well as higher order cortical regions. The results support a rostrally directed flow of auditory information with complex and recurrent connections, similar to the ventral stream of macaque visual cortex.
Affiliation(s)
- Brian H Scott: Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health (NIMH/NIH), Bethesda, MD 20892, USA
- Paul A Leccese: Laboratory of Neuropsychology, NIMH/NIH, Bethesda, MD 20892, USA
- Kadharbatcha S Saleem: Laboratory of Neuropsychology, NIMH/NIH, Bethesda, MD 20892, USA
- Yukiko Kikuchi: Laboratory of Neuropsychology, NIMH/NIH, Bethesda, MD 20892, USA; present address: Institute of Neuroscience, Newcastle University Medical School, Newcastle Upon Tyne NE2 4HH, UK
- Matthew P Mullarkey: Laboratory of Neuropsychology, NIMH/NIH, Bethesda, MD 20892, USA
- Makoto Fukushima: Laboratory of Neuropsychology, NIMH/NIH, Bethesda, MD 20892, USA
- Mortimer Mishkin: Laboratory of Neuropsychology, NIMH/NIH, Bethesda, MD 20892, USA
- Richard C Saunders: Laboratory of Neuropsychology, NIMH/NIH, Bethesda, MD 20892, USA

25
Abstract
Most behaviors in mammals are directly or indirectly guided by prior experience and therefore depend on the ability of our brains to form memories. The ability to form an association between an initially possibly neutral sensory stimulus and its behavioral relevance is essential for our ability to navigate in a changing environment. The formation of a memory is a complex process involving many areas of the brain. In this chapter we review classic and recent work that has shed light on the specific contribution of sensory cortical areas to the formation of associative memories. We discuss synaptic and circuit mechanisms that mediate plastic adaptations of functional properties in individual neurons as well as larger neuronal populations forming topographically organized representations. Furthermore, we describe commonly used behavioral paradigms that are used to study the mechanisms of memory formation. We focus on the auditory modality that is receiving increasing attention for the study of associative memory in rodent model systems. We argue that sensory cortical areas may play an important role for the memory-dependent categorical recognition of previously encountered sensory stimuli.
Affiliation(s)
Aschauer D, Rumpel S: Institute of Physiology, Focus Program Translational Neurosciences (FTN), University Medical Center, Johannes Gutenberg University, Mainz, Germany.

26
Organization of auditory areas in the superior temporal gyrus of marmoset monkeys revealed by real-time optical imaging. Brain Struct Funct 2017;223:1599-1614. [DOI: 10.1007/s00429-017-1574-0]
27
Tonotopic organisation of the auditory cortex in sloping sensorineural hearing loss. Hear Res 2017;355:81-96. [DOI: 10.1016/j.heares.2017.09.012]
28
Sun W, Marongelli EN, Watkins PV, Barbour DL. Decoding sound level in the marmoset primary auditory cortex. J Neurophysiol 2017;118:2024-2033. [PMID: 28701545] [PMCID: PMC5626894] [DOI: 10.1152/jn.00670.2016]
Abstract
Neurons that respond favorably to a particular sound level have been observed throughout the central auditory system, becoming steadily more common at higher processing areas. One theory about the role of these level-tuned or nonmonotonic neurons is the level-invariant encoding of sounds. To investigate this theory, we simulated various subpopulations of neurons by drawing from real primary auditory cortex (A1) neuron responses and surveyed their performance in forming different sound level representations. Pure nonmonotonic subpopulations did not provide the best level-invariant decoding; instead, mixtures of monotonic and nonmonotonic neurons provided the most accurate decoding. For level-fidelity decoding, the inclusion of nonmonotonic neurons slightly improved or did not change decoding accuracy until they constituted a high proportion. These results indicate that nonmonotonic neurons fill an encoding role complementary to, rather than alternate to, monotonic neurons.

NEW & NOTEWORTHY Neurons with nonmonotonic rate-level functions are unique to the central auditory system. These level-tuned neurons have been proposed to account for invariant sound perception across sound levels. Through systematic simulations based on real neuron responses, this study shows that neuron populations perform sound encoding optimally when containing both monotonic and nonmonotonic neurons. The results indicate that instead of working independently, nonmonotonic neurons complement the function of monotonic neurons in different sound-encoding contexts.
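The population-decoding simulation this abstract describes can be illustrated with a minimal sketch. Everything below is invented for illustration (Poisson spiking, sigmoidal and Gaussian rate-level functions, a template-matching decoder); it is not the authors' code or data.

```python
import numpy as np

rng = np.random.default_rng(0)
levels = np.arange(0, 81, 10)  # candidate sound levels in dB (hypothetical)

def monotonic(levels, thresh, slope, rmax=50.0):
    # monotonic rate-level function: rate grows sigmoidally with level
    return rmax / (1 + np.exp(-(levels - thresh) / slope))

def nonmonotonic(levels, best, width, rmax=50.0):
    # nonmonotonic (level-tuned) function: rate peaks at a preferred level
    return rmax * np.exp(-0.5 * ((levels - best) / width) ** 2)

# Mixed subpopulation: 10 monotonic + 10 nonmonotonic model neurons
tuning = np.vstack(
    [monotonic(levels, t, 8.0) for t in rng.uniform(10, 60, 10)]
    + [nonmonotonic(levels, b, 12.0) for b in rng.uniform(10, 70, 10)]
)  # shape (n_neurons, n_levels): mean spike count per trial

def decode(counts, tuning):
    # template matching: choose the level whose mean population response
    # is closest (Euclidean distance) to the observed spike counts
    return np.argmin(((tuning.T - counts) ** 2).sum(axis=1))

# Poisson trials at each level; measure how often the level is recovered
n_trials = 200
correct = 0
for i in range(len(levels)):
    for _ in range(n_trials):
        counts = rng.poisson(tuning[:, i])
        correct += decode(counts, tuning) == i
accuracy = correct / (len(levels) * n_trials)
print(f"decoding accuracy: {accuracy:.2f}")
```

Swapping the mixture for a purely monotonic or purely nonmonotonic `tuning` matrix is the kind of subpopulation comparison the study performs at scale.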
Affiliation(s)
Sun W, Marongelli EN, Watkins PV, Barbour DL: Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri.

29
A Crucial Test of the Population Separation Model of Auditory Stream Segregation in Macaque Primary Auditory Cortex. J Neurosci 2017;37:10645-10655. [PMID: 28954867] [DOI: 10.1523/jneurosci.0792-17.2017]
Abstract
An important aspect of auditory scene analysis is auditory stream segregation: the organization of sound sequences into perceptual streams reflecting different sound sources in the environment. Several models have been proposed to account for stream segregation. According to the "population separation" (PS) model, alternating ABAB tone sequences are perceived as a single stream or as two separate streams when "A" and "B" tones activate the same or distinct frequency-tuned neuronal populations in primary auditory cortex (A1), respectively. A crucial test of the PS model is whether it can account for the observation that A and B tones are generally perceived as a single stream when presented synchronously, rather than in an alternating pattern, even if they are widely separated in frequency. Here, we tested the PS model by recording neural responses to alternating (ALT) and synchronous (SYNC) tone sequences in A1 of male macaques. Consistent with predictions of the PS model, a greater effective tonotopic separation of A and B tone responses was observed under ALT than under SYNC conditions, thus paralleling the perceptual organization of the sequences. While other models of stream segregation, such as temporal coherence, are not excluded by the present findings, we conclude that PS is sufficient to account for the perceptual organization of ALT and SYNC sequences and thus remains a viable model of auditory stream segregation.

SIGNIFICANCE STATEMENT According to the population separation (PS) model of auditory stream segregation, sounds that activate the same or separate neural populations in primary auditory cortex (A1) are perceived as one or two streams, respectively. It is unclear, however, whether the PS model can account for the perception of sounds as a single stream when they are presented synchronously. Here, we tested the PS model by recording neural responses to alternating (ALT) and synchronous (SYNC) tone sequences in macaque A1. A greater effective separation of tonotopic activity patterns was observed under ALT than under SYNC conditions, thus paralleling the perceptual organization of the sequences. Based on these findings, we conclude that PS remains a plausible neurophysiological model of auditory stream segregation.
30
Primary Generators of Visually Evoked Field Potentials Recorded in the Macaque Auditory Cortex. J Neurosci 2017;37:10139-10153. [PMID: 28924008] [DOI: 10.1523/jneurosci.3800-16.2017]
Abstract
Prior studies have reported "local" field potential (LFP) responses to faces in the macaque auditory cortex and have suggested that such face-LFPs may be substrates of audiovisual integration. However, although field potentials (FPs) may reflect the synaptic currents of neurons near the recording electrode, due to the use of a distant reference electrode, they often reflect those of synaptic activity occurring in distant sites as well. Thus, FP recordings within a given brain region (e.g., auditory cortex) may be "contaminated" by activity generated elsewhere in the brain. To determine whether face responses are indeed generated within macaque auditory cortex, we recorded FPs and concomitant multiunit activity with linear array multielectrodes across auditory cortex in three macaques (one female), and applied current source density (CSD) analysis to the laminar FP profile. CSD analysis revealed no appreciable local generator contribution to the visual FP in auditory cortex, although we did note an increase in the amplitude of visual FP with cortical depth, suggesting that their generators are located below auditory cortex. In the underlying inferotemporal cortex, we found polarity inversions of the main visual FP components accompanied by robust CSD responses and large-amplitude multiunit activity. These results indicate that face-evoked FP responses in auditory cortex are not generated locally but are volume-conducted from other face-responsive regions. In broader terms, our results underscore the caution that, unless far-field contamination is removed, LFPs in general may reflect such "far-field" activity, in addition to, or in absence of, local synaptic responses.

SIGNIFICANCE STATEMENT Field potentials (FPs) can index neuronal population activity that is not evident in action potentials. However, due to volume conduction, FPs may reflect activity in distant neurons superimposed upon that of neurons close to the recording electrode. This is problematic as the default assumption is that FPs originate from local activity, and thus are termed "local" (LFP). We examine this general problem in the context of previously reported face-evoked FPs in macaque auditory cortex. Our findings suggest that face-FPs are indeed generated in the underlying inferotemporal cortex and volume-conducted to the auditory cortex. The note of caution raised by these findings is of particular importance for studies that seek to assign FP/LFP recordings to specific cortical layers.
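The standard one-dimensional CSD estimate referred to in this abstract is a second spatial derivative of the laminar LFP profile. The numpy sketch below uses a synthetic 16-contact probe; contact spacing, conductivity, and the sink/source geometry are placeholder values, not the study's recordings.

```python
import numpy as np

# Synthetic laminar LFP: 16 contacts at 150-um spacing (hypothetical),
# with one dipolar sink/source pair at mid-depth oscillating at 10 Hz
z = np.arange(16) * 150e-6              # contact depths in meters
t = np.linspace(0.0, 0.1, 200)          # 100 ms of samples
sink = np.exp(-((z - 1.2e-3) / 3e-4) ** 2)
source = np.exp(-((z - 1.8e-3) / 3e-4) ** 2)
lfp = np.outer(sink - source, np.sin(2 * np.pi * 10 * t))  # (chan, time)

def csd(lfp, spacing, sigma=0.3):
    # second-spatial-derivative CSD estimate:
    #   CSD(z) ~ -sigma * (phi(z+h) - 2*phi(z) + phi(z-h)) / h**2
    # (sigma = tissue conductivity, S/m; one channel lost at each edge)
    d2 = lfp[2:] - 2 * lfp[1:-1] + lfp[:-2]
    return -sigma * d2 / spacing ** 2

csd_map = csd(lfp, spacing=150e-6)
print(csd_map.shape)  # (14, 200)

# A volume-conducted far-field potential is ~uniform across contacts,
# so its second spatial derivative (the CSD) vanishes: no local generator.
farfield = np.ones((16, 1)) * np.sin(2 * np.pi * 10 * t)
print(np.abs(csd(farfield, 150e-6)).max())
```

The `farfield` check is exactly the logic behind the paper's conclusion: a potential with no depth structure produces no CSD, so a robust CSD response is evidence for a local generator.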
31
Christison-Lagay KL, Bennur S, Cohen YE. Contribution of spiking activity in the primary auditory cortex to detection in noise. J Neurophysiol 2017;118:3118-3131. [PMID: 28855294] [DOI: 10.1152/jn.00521.2017]
Abstract
A fundamental problem in hearing is detecting a "target" stimulus (e.g., a friend's voice) that is presented with a noisy background (e.g., the din of a crowded restaurant). Despite its importance to hearing, a relationship between spiking activity and behavioral performance during such a "detection-in-noise" task has yet to be fully elucidated. In this study, we recorded spiking activity in primary auditory cortex (A1) while rhesus monkeys detected a target stimulus that was presented with a noise background. Although some neurons were modulated, the response of the typical A1 neuron was not modulated by the stimulus- and task-related parameters of our task. In contrast, we found more robust representations of these parameters in population-level activity: small populations of neurons matched the monkeys' behavioral sensitivity. Overall, these findings are consistent with the hypothesis that the sensory evidence, which is needed to solve such detection-in-noise tasks, is represented in population-level A1 activity and may be available to be read out by downstream neurons that are involved in mediating this task.

NEW & NOTEWORTHY This study examines the contribution of A1 to detecting a sound that is presented with a noisy background. We found that population-level A1 activity, but not single neurons, could provide the evidence needed to make this perceptual decision.
Affiliation(s)
Bennur S: Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, Pennsylvania. Cohen YE: Departments of Otorhinolaryngology, Neuroscience, and Bioengineering, University of Pennsylvania, Philadelphia, Pennsylvania.

32
Cortical Representations of Speech in a Multitalker Auditory Scene. J Neurosci 2017;37:9189-9196. [PMID: 28821680] [DOI: 10.1523/jneurosci.0938-17.2017]
Abstract
The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex.

SIGNIFICANCE STATEMENT Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory scene, with both attended and unattended speech streams represented with almost equal fidelity. We also show that higher-order auditory cortical areas, by contrast, represent an attended speech stream separately from, and with significantly higher fidelity than, unattended speech streams. Furthermore, the unattended background streams are represented as a single undivided background object rather than as distinct background objects.
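Stimulus reconstruction of the kind this abstract mentions can be sketched as a regularized linear decoder mapping multichannel neural data back to a stimulus envelope. The sketch below uses synthetic data and a closed-form ridge solution; the dimensions, mixing model, and penalty are arbitrary assumptions, not the authors' systems-theoretic pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: a stimulus envelope linearly mixed into 32 noisy "channels"
n_t, n_ch = 2000, 32
envelope = rng.random(n_t)                       # fake stimulus envelope
mixing = rng.normal(size=n_ch)                   # per-channel gains
neural = np.outer(envelope, mixing) + 0.5 * rng.normal(size=(n_t, n_ch))

# Ridge regression decoder, closed form: w = (X'X + lam*I)^-1 X'y
lam = 1.0
w = np.linalg.solve(neural.T @ neural + lam * np.eye(n_ch),
                    neural.T @ envelope)
recon = neural @ w

# Reconstruction fidelity as a correlation between decoded and true envelope
r = np.corrcoef(recon, envelope)[0, 1]
print(f"reconstruction r = {r:.2f}")
```

Comparing such fidelity scores for attended versus ignored streams, and for early versus higher-order responses, is the comparison the study carries out with MEG data.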
33
Scott BH, Saleem KS, Kikuchi Y, Fukushima M, Mishkin M, Saunders RC. Thalamic connections of the core auditory cortex and rostral supratemporal plane in the macaque monkey. J Comp Neurol 2017;525:3488-3513. [PMID: 28685822] [DOI: 10.1002/cne.24283]
Abstract
In the primate auditory cortex, information flows serially in the mediolateral dimension from core, to belt, to parabelt. In the caudorostral dimension, stepwise serial projections convey information through the primary, rostral, and rostrotemporal (AI, R, and RT) core areas on the supratemporal plane, continuing to the rostrotemporal polar area (RTp) and adjacent auditory-related areas of the rostral superior temporal gyrus (STGr) and temporal pole. In addition to this cascade of corticocortical connections, the auditory cortex receives parallel thalamocortical projections from the medial geniculate nucleus (MGN). Previous studies have examined the projections from MGN to auditory cortex, but most have focused on the caudal core areas AI and R. In this study, we investigated the full extent of connections between MGN and AI, R, RT, RTp, and STGr using retrograde and anterograde anatomical tracers. Both AI and R received nearly 90% of their thalamic inputs from the ventral subdivision of the MGN (MGv; the primary/lemniscal auditory pathway). By contrast, RT received only ∼45% from MGv, and an equal share from the dorsal subdivision (MGd). Area RTp received ∼25% of its inputs from MGv, but received additional inputs from multisensory areas outside the MGN (30% in RTp vs. 1-5% in core areas). The MGN input to RTp distinguished this rostral extension of auditory cortex from the adjacent auditory-related cortex of the STGr, which received 80% of its thalamic input from multisensory nuclei (primarily medial pulvinar). Anterograde tracers identified complementary descending connections by which highly processed auditory information may modulate thalamocortical inputs.
Affiliation(s)
All authors: Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health (NIMH/NIH), Bethesda, Maryland.

34
Carpenter-Hyland EP, Griffeth J, Bunting K, Terry A, Vazdarjanova A, Blake DT. Tone identification behavior in Rattus norvegicus: muscarinic receptor blockage lowers responsiveness in nontarget selective neurons, while nicotinic receptor blockage selectively lowers target responses. Eur J Neurosci 2017;46:1779-1789. [PMID: 28544049] [DOI: 10.1111/ejn.13611]
Abstract
Learning to associate a stimulus with reinforcement causes plasticity in primary sensory cortex. Neural activity caused by the associated stimulus is paired with reinforcement, but population analyses have not found a selective increase in response to that stimulus. Responses to other stimuli increase as much as, or more than, responses to the associated stimulus. Here, we applied population analysis at a new time point and additionally evaluated whether cholinergic receptor blockers interacted with the plastic changes in cortex. Three days of tone identification behavior caused responsiveness to increase broadly across primary auditory cortex, and target responses strengthened less than overall responsiveness. In pharmacology studies, behaviorally impairing doses of selective acetylcholine receptor blockers were administered during behavior. Neural responses were evaluated on the following day, while the blockers were absent. The muscarinic group, blocked by scopolamine, showed lower responsiveness and an increased response to the tone identification target that exceeded both the 3-day control group and task-naïve controls. Also, a selective increase in the late phase of the response to the tone identification stimulus emerged. Nicotinic receptor antagonism, with mecamylamine, more modestly lowered responses the following day and lowered target responses more than overall responses. Control acute studies demonstrated the muscarinic block did not acutely alter response rates, but the nicotinic block did. These results lead to the hypothesis that the decrease in the proportion of the population spiking response that is selective for the target may be explained by the balance between effects modulated by muscarinic and nicotinic receptors.
Affiliation(s)
Griffeth J, Blake DT: Department of Neurology, Brain and Behavior Discovery Institute, Augusta University, 1120 15th St CL-3031, Augusta, GA, 30912, USA. Bunting K, Terry A, Vazdarjanova A: Department of Pharmacology and Toxicology, Augusta University, Augusta, GA, USA. (Vazdarjanova also: VA Research Service, Charlie Norwood VA Medical Center, Augusta, GA, USA.)

35
Downer JD, Niwa M, Sutter ML. Hierarchical differences in population coding within auditory cortex. J Neurophysiol 2017;118:717-731. [PMID: 28446588] [PMCID: PMC5539454] [DOI: 10.1152/jn.00899.2016]
Abstract
Most models of auditory cortical (AC) population coding have focused on primary auditory cortex (A1). Thus our understanding of how neural coding for sounds progresses along the cortical hierarchy remains obscure. To illuminate this, we recorded from two AC fields: A1 and middle lateral belt (ML) of rhesus macaques. We presented amplitude-modulated (AM) noise during both passive listening and while the animals performed an AM detection task ("active" condition). In both fields, neurons exhibit monotonic AM-depth tuning, with A1 neurons mostly exhibiting increasing rate-depth functions and ML neurons approximately evenly distributed between increasing and decreasing functions. We measured noise correlation (rnoise) between simultaneously recorded neurons and found that whereas engagement decreased average rnoise in A1, engagement increased average rnoise in ML. This finding surprised us, because attentive states are commonly reported to decrease average rnoise. We analyzed the effect of rnoise on AM coding in both A1 and ML and found that whereas engagement-related shifts in rnoise in A1 enhance AM coding, rnoise shifts in ML have little effect. These results imply that the effect of rnoise differs between sensory areas, based on the distribution of tuning properties among the neurons within each population. A possible explanation of this is that higher areas need to encode nonsensory variables (e.g., attention, choice, and motor preparation), which impart common noise, thus increasing rnoise. Therefore, the hierarchical emergence of rnoise-robust population coding (e.g., as we observed in ML) enhances the ability of sensory cortex to integrate cognitive and sensory information without a loss of sensory fidelity.

NEW & NOTEWORTHY Prevailing models of population coding of sensory information are based on a limited subset of neural structures. An important and under-explored question in neuroscience is how distinct areas of sensory cortex differ in their population coding strategies. In this study, we compared population coding between primary and secondary auditory cortex. Our findings demonstrate striking differences between the two areas and highlight the importance of considering the diversity of neural structures as we develop models of population coding.
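Noise correlation (rnoise) as used in this abstract is the trial-to-trial Pearson correlation of two neurons' responses to a fixed stimulus, after removing each neuron's mean. A toy sketch with simulated spike counts (all numbers invented); the shared `common` term is the kind of common noise the abstract attributes to nonsensory variables:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated spike counts for two neurons over repeated identical trials:
# a stimulus-driven mean plus a shared fluctuation plus private noise
n_trials = 500
stim_mean = np.array([20.0, 15.0])            # mean counts, neurons A and B
common = rng.normal(0, 3, n_trials)           # shared trial-to-trial noise
counts_a = stim_mean[0] + common + rng.normal(0, 2, n_trials)
counts_b = stim_mean[1] + common + rng.normal(0, 2, n_trials)

def noise_correlation(a, b):
    # rnoise: correlation of fluctuations around each neuron's mean
    # response to the same stimulus (means subtracted per condition)
    return np.corrcoef(a - a.mean(), b - b.mean())[0, 1]

r = noise_correlation(counts_a, counts_b)
print(f"rnoise = {r:.2f}")
```

With shared noise of variance 9 and private noise of variance 4 per neuron, the expected rnoise is 9/13, approximately 0.69; shrinking `common` is the analogue of the engagement-related rnoise decrease the study reports in A1.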
Affiliation(s)
Downer JD, Niwa M, Sutter ML: Center for Neuroscience and Department of Neurobiology, Physiology and Behavior, University of California, Davis, California.

36
Crommett LE, Pérez-Bellido A, Yau JM. Auditory adaptation improves tactile frequency perception. J Neurophysiol 2017;117:1352-1362. [PMID: 28077668] [PMCID: PMC5350269] [DOI: 10.1152/jn.00783.2016]
Abstract
Our ability to process temporal frequency information by touch underlies our capacity to perceive and discriminate surface textures. Auditory signals, which also provide extensive temporal frequency information, can systematically alter the perception of vibrations on the hand. How auditory signals shape tactile processing is unclear; perceptual interactions between contemporaneous sounds and vibrations are consistent with multiple neural mechanisms. Here we used a crossmodal adaptation paradigm, which separated auditory and tactile stimulation in time, to test the hypothesis that tactile frequency perception depends on neural circuits that also process auditory frequency. We reasoned that auditory adaptation effects would transfer to touch only if signals from both senses converge on common representations. We found that auditory adaptation can improve tactile frequency discrimination thresholds. This occurred only when adaptor and test frequencies overlapped. In contrast, auditory adaptation did not influence tactile intensity judgments. Thus auditory adaptation enhances touch in a frequency- and feature-specific manner. A simple network model in which tactile frequency information is decoded from sensory neurons that are susceptible to auditory adaptation recapitulates these behavioral results. Our results imply that the neural circuits supporting tactile frequency perception also process auditory signals. This finding is consistent with the notion of supramodal operators performing canonical operations, like temporal frequency processing, regardless of input modality.

NEW & NOTEWORTHY Auditory signals can influence the tactile perception of temporal frequency. Multiple neural mechanisms could account for the perceptual interactions between contemporaneous auditory and tactile signals. Using a crossmodal adaptation paradigm, we found that auditory adaptation causes frequency- and feature-specific improvements in tactile perception. This crossmodal transfer of aftereffects between audition and touch implies that tactile frequency perception relies on neural circuits that also process auditory frequency.
Affiliation(s)
Crommett LE, Yau JM: Department of Neuroscience, Baylor College of Medicine, Houston, Texas.

37
Ursino M, Cuppini C, Magosso E. Multisensory Bayesian Inference Depends on Synapse Maturation during Training: Theoretical Analysis and Neural Modeling Implementation. Neural Comput 2017;29:735-782. [DOI: 10.1162/neco_a_00935]
Abstract
Recent theoretical and experimental studies suggest that in multisensory conditions, the brain performs a near-optimal Bayesian estimate of external events, giving more weight to the more reliable stimuli. However, the neural mechanisms responsible for this behavior, and its progressive maturation in a multisensory environment, are still insufficiently understood. The aim of this letter is to analyze this problem with a neural network model of audiovisual integration, based on probabilistic population coding—the idea that a population of neurons can encode probability functions to perform Bayesian inference. The model consists of two chains of unisensory neurons (auditory and visual) topologically organized. They receive the corresponding input through a plastic receptive field and reciprocally exchange plastic cross-modal synapses, which encode the spatial co-occurrence of visual-auditory inputs. A third chain of multisensory neurons performs a simple sum of auditory and visual excitations. The work includes a theoretical part and a computer simulation study. We show how a simple rule for synapse learning (consisting of Hebbian reinforcement and a decay term) can be used during training to shrink the receptive fields and encode the unisensory likelihood functions. Hence, after training, each unisensory area realizes a maximum likelihood estimate of stimulus position (auditory or visual). In cross-modal conditions, the same learning rule can encode information on prior probability into the cross-modal synapses. Computer simulations confirm the theoretical results and show that the proposed network can realize a maximum likelihood estimate of auditory (or visual) positions in unimodal conditions and a Bayesian estimate, with moderate deviations from optimality, in cross-modal conditions. Furthermore, the model explains the ventriloquism illusion and, looking at the activity in the multimodal neurons, explains the automatic reweighting of auditory and visual inputs on a trial-by-trial basis, according to the reliability of the individual cues.
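The reliability-weighted estimate this abstract appeals to has a simple closed form for two Gaussian cues: each cue is weighted by its inverse variance. The sketch below shows that textbook computation with illustrative numbers; it is not the authors' neural network model, which learns this behavior through synaptic plasticity.

```python
import numpy as np

def fuse(mu_a, sigma_a, mu_v, sigma_v):
    """Maximum-likelihood fusion of an auditory and a visual position cue:
    each cue is weighted by its reliability (inverse variance)."""
    w_a = 1.0 / sigma_a ** 2
    w_v = 1.0 / sigma_v ** 2
    mu = (w_a * mu_a + w_v * mu_v) / (w_a + w_v)
    sigma = np.sqrt(1.0 / (w_a + w_v))   # fused estimate is more reliable
    return mu, sigma

# Ventriloquism-like case: vision (sigma 1 deg) is more reliable than
# audition (sigma 4 deg), so the fused location is pulled toward vision
mu, sigma = fuse(mu_a=10.0, sigma_a=4.0, mu_v=0.0, sigma_v=1.0)
print(f"fused position {mu:.2f} deg, fused sigma {sigma:.2f} deg")
```

Because the visual weight is 16 times the auditory weight here, the fused position lands near 0.59 degrees, close to the visual cue: the ventriloquism effect in one line of arithmetic.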
Affiliation(s)
- Mauro Ursino, Department of Electrical, Electronic and Information Engineering, University of Bologna, I-40136 Bologna, Italy
- Cristiano Cuppini, Department of Electrical, Electronic and Information Engineering, University of Bologna, I-40136 Bologna, Italy
- Elisa Magosso, Department of Electrical, Electronic and Information Engineering, University of Bologna, I-40136 Bologna, Italy
38
Primate Audition: Reception, Perception, and Ecology. Springer Handbook of Auditory Research 2017. [DOI: 10.1007/978-3-319-59478-1_3] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
39
Intracortical depth analyses of frequency-sensitive regions of human auditory cortex using 7T fMRI. Neuroimage 2016; 143:116-127. [PMID: 27608603 DOI: 10.1016/j.neuroimage.2016.09.010] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2016] [Revised: 08/15/2016] [Accepted: 09/04/2016] [Indexed: 11/23/2022] Open
Abstract
Despite recent advances in auditory neuroscience, the exact functional organization of human auditory cortex (AC) has been difficult to investigate. Here, using reversals of tonotopic gradients as the test case, we examined whether human ACs can be more precisely mapped by avoiding signals caused by large draining vessels near the pial surface, which bias blood-oxygen level dependent (BOLD) signals away from the actual sites of neuronal activity. Using ultra-high field (7T) fMRI and cortical depth analysis techniques previously applied in visual cortices, we sampled 1 mm isotropic voxels from different depths of AC during narrow-band sound stimulation with biologically relevant temporal patterns. At the group level, analyses that considered voxels from all cortical depths, but excluded those intersecting the pial surface, showed (a) the greatest statistical sensitivity in contrasts between activations to high vs. low frequency sounds and (b) the highest inter-subject consistency of phase-encoded continuous tonotopy mapping. Analyses based solely on voxels intersecting the pial surface produced the least consistent group results, even when compared to analyses based solely on voxels intersecting the white-matter surface where both signal strength and within-subject statistical power are weakest. However, no evidence was found for reduced within-subject reliability in analyses considering the pial voxels only. Our group results could, thus, reflect improved inter-subject correspondence of high and low frequency gradients after the signals from voxels near the pial surface are excluded. Using tonotopy analyses as the test case, our results demonstrate that when the major physiological and anatomical biases imparted by the vasculature are controlled, functional mapping of human ACs becomes more consistent from subject to subject than previously thought.
40
Teichert T, Gurnsey K, Salisbury D, Sweet RA. Contextual processing in unpredictable auditory environments: the limited resource model of auditory refractoriness in the rhesus. J Neurophysiol 2016; 116:2125-2139. [PMID: 27512021 DOI: 10.1152/jn.00419.2016] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2016] [Accepted: 08/09/2016] [Indexed: 01/15/2023] Open
Abstract
Auditory refractoriness refers to the finding of smaller electroencephalographic (EEG) responses to tones preceded by shorter periods of silence. To date, its physiological mechanisms remain unclear, limiting the insights gained from findings of abnormal refractoriness in individuals with schizophrenia. To resolve this roadblock, we studied auditory refractoriness in the rhesus, one of the most important animal models of auditory function, using grids of up to 32 chronically implanted cranial EEG electrodes. Four macaques passively listened to sounds whose identity and timing were random, thus preventing animals from forming valid predictions about upcoming sounds. Stimulus onset asynchrony ranged between 0.2 and 12.8 s, thus encompassing the clinically relevant timescale of refractoriness. Our results show refractoriness in all 8 previously identified middle- and long-latency components that peaked between 14 and 170 ms after tone onset. Refractoriness may reflect the formation and gradual decay of a basic sensory memory trace that may be mirrored by the expenditure and gradual recovery of a limited physiological resource that determines generator excitability. For all 8 components, results were consistent with the assumption that processing of each tone expends ∼65% of the available resource. Differences between components are caused by how quickly the resource recovers. Recovery time constants of different components ranged between 0.5 and 2 s. This work provides a solid conceptual, methodological, and computational foundation to dissect the physiological mechanisms of auditory refractoriness in the rhesus. Such knowledge may, in turn, help develop novel pharmacological, mechanism-targeted interventions.
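The limited-resource account lends itself to a compact numerical sketch. The version below assumes multiplicative expenditure and exponential recovery; the expenditure fraction `u` and time constant `tau` are illustrative values taken from the ranges quoted above, not the paper's fitted equations:

```python
import math

def simulate_responses(onset_times, u=0.65, tau=1.0):
    """Toy limited-resource model of auditory refractoriness: each tone's
    response amplitude is proportional to the currently available resource;
    the tone then expends a fraction u of it, and the resource recovers
    exponentially toward 1 with time constant tau (seconds)."""
    responses = []
    r = 1.0          # start fully recovered
    t_prev = None
    for t in onset_times:
        if t_prev is not None:
            dt = t - t_prev
            r = 1.0 - (1.0 - r) * math.exp(-dt / tau)  # exponential recovery
        responses.append(r)   # response amplitude ~ available resource
        r *= (1.0 - u)        # tone expends a fraction u of the resource
        t_prev = t
    return responses

# Shorter silent periods leave less recovered resource -> smaller responses.
amps = simulate_responses([0.0, 0.2, 13.0])
```

Shorter stimulus onset asynchronies leave less time for recovery, reproducing the smaller responses to tones preceded by shorter silences.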
Affiliation(s)
- Tobias Teichert, Department of Psychiatry and Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania
- Kate Gurnsey, Department of Psychiatry, University of Pittsburgh, Pittsburgh, Pennsylvania
- Dean Salisbury, Department of Psychiatry, University of Pittsburgh, Pittsburgh, Pennsylvania
- Robert A Sweet, Department of Psychiatry and Department of Neurology, University of Pittsburgh, Pittsburgh, Pennsylvania; Mental Illness Research, Education, and Clinical Center, Veterans Affairs Pittsburgh Healthcare System, Pittsburgh, Pennsylvania
41
Neural Representation of Concurrent Vowels in Macaque Primary Auditory Cortex. eNeuro 2016; 3:eN-NWR-0071-16. [PMID: 27294198 PMCID: PMC4901243 DOI: 10.1523/eneuro.0071-16.2016] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2016] [Accepted: 04/15/2016] [Indexed: 11/30/2022] Open
Abstract
Successful speech perception in real-world environments requires that the auditory system segregate competing voices that overlap in frequency and time into separate streams. Vowels are major constituents of speech and are composed of frequencies (harmonics) that are integer multiples of a common fundamental frequency (F0). The pitch and identity of a vowel are determined by its F0 and spectral envelope (formant structure), respectively. When two spectrally overlapping vowels differing in F0 are presented concurrently, they can be readily perceived as two separate “auditory objects” with pitches at their respective F0s. A difference in pitch between two simultaneous vowels provides a powerful cue for their segregation, which, in turn, facilitates their individual identification. The neural mechanisms underlying the segregation of concurrent vowels based on pitch differences are poorly understood. Here, we examine neural population responses in macaque primary auditory cortex (A1) to single and double concurrent vowels (/a/ and /i/) that differ in F0 such that they are heard as two separate auditory objects with distinct pitches. We find that neural population responses in A1 can resolve, via a rate-place code, lower harmonics of both single and double concurrent vowels. Furthermore, we show that the formant structures, and hence the identities, of single vowels can be reliably recovered from the neural representation of double concurrent vowels. We conclude that A1 contains sufficient spectral information to enable concurrent vowel segregation and identification by downstream cortical areas.
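The harmonic structure described above can be made concrete with a short synthesis sketch. The formant frequencies and Gaussian spectral envelope below are illustrative choices, not the study's actual stimulus parameters:

```python
import math

def harmonic_vowel(f0, formants, dur=0.2, sr=16000, max_freq=4000.0):
    """Sketch of a vowel as a sum of harmonics of f0, with each harmonic's
    amplitude shaped by its proximity to the vowel's formant frequencies
    (Gaussian spectral envelope; widths and formant values illustrative)."""
    n = int(dur * sr)
    samples = [0.0] * n
    k = 1
    while k * f0 < max_freq:
        f = k * f0
        # Spectral envelope peaks at each formant.
        amp = sum(math.exp(-((f - fm) ** 2) / (2 * 150.0 ** 2)) for fm in formants)
        for i in range(n):
            samples[i] += amp * math.sin(2 * math.pi * f * i / sr)
        k += 1
    return samples

# Two vowels with different F0s are simply added sample-by-sample to form
# the double-vowel stimulus; the F0 difference is what makes them heard
# as two auditory objects with distinct pitches.
va = harmonic_vowel(100.0, formants=[730.0, 1090.0])   # /a/-like
vi = harmonic_vowel(126.0, formants=[270.0, 2290.0])   # /i/-like
double = [a + b for a, b in zip(va, vi)]
```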
42
Abstract
Functional and anatomical studies have clearly demonstrated that auditory cortex is populated by multiple subfields. However, functional characterization of those fields has been largely the domain of animal electrophysiology, limiting the extent to which human and animal research can inform each other. In this study, we used high-resolution functional magnetic resonance imaging to characterize human auditory cortical subfields using a variety of low-level acoustic features in the spectral and temporal domains. Specifically, we show that topographic gradients of frequency preference, or tonotopy, extend along two axes in human auditory cortex, thus reconciling historical accounts of a tonotopic axis oriented medial to lateral along Heschl's gyrus and more recent findings emphasizing tonotopic organization along the anterior-posterior axis. Contradictory findings regarding topographic organization according to temporal modulation rate in acoustic stimuli, or "periodotopy," are also addressed. Although isolated subregions show a preference for high rates of amplitude-modulated white noise (AMWN) in our data, large-scale "periodotopic" organization was not found. Organization by AM rate was correlated with dominant pitch percepts in AMWN in many regions. In short, our data expose early auditory cortex chiefly as a frequency analyzer, and spectral frequency, as imposed by the sensory receptor surface in the cochlea, seems to be the dominant feature governing large-scale topographic organization across human auditory cortex.
SIGNIFICANCE STATEMENT In this study, we examine the nature of topographic organization in human auditory cortex with fMRI. Topographic organization by spectral frequency (tonotopy) extended in two directions: medial to lateral, consistent with early neuroimaging studies, and anterior to posterior, consistent with more recent reports. Large-scale organization by rates of temporal modulation (periodotopy) was correlated with confounding spectral content of amplitude-modulated white-noise stimuli. Together, our results suggest that the organization of human auditory cortex is driven primarily by its response to spectral acoustic features, and large-scale periodotopy spanning across multiple regions is not supported. This fundamental information regarding the functional organization of early auditory cortex will inform our growing understanding of speech perception and the processing of other complex sounds.
43
Eliminating dual-task costs by minimizing crosstalk between tasks: The role of modality and feature pairings. Cognition 2016; 150:92-108. [DOI: 10.1016/j.cognition.2016.02.003] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2014] [Revised: 02/03/2016] [Accepted: 02/04/2016] [Indexed: 11/23/2022]
44
Abstract
One of the fundamental properties of the mammalian brain is that sensory regions of cortex are formed of multiple, functionally specialized cortical field maps (CFMs). Each CFM comprises two orthogonal topographical representations, reflecting two essential aspects of sensory space. In auditory cortex, auditory field maps (AFMs) are defined by the combination of tonotopic gradients, representing the spectral aspects of sound (i.e., tones), with orthogonal periodotopic gradients, representing the temporal aspects of sound (i.e., period or temporal envelope). Converging evidence from cytoarchitectural and neuroimaging measurements underlies the definition of 11 AFMs across core and belt regions of human auditory cortex, with likely homology to those of macaque. On a macrostructural level, AFMs are grouped into cloverleaf clusters, an organizational structure also seen in visual cortex. Future research can now use these AFMs to investigate specific stages of auditory processing, key for understanding behaviors such as speech perception and multimodal sensory integration.
Affiliation(s)
- Alyssa A Brewer, Department of Cognitive Sciences and Center for Hearing Research, University of California, Irvine, California 92697
- Brian Barton, Department of Cognitive Sciences and Center for Hearing Research, University of California, Irvine, California 92697
45
Perceptual learning shapes multisensory causal inference via two distinct mechanisms. Sci Rep 2016; 6:24673. [PMID: 27091411 PMCID: PMC4835789 DOI: 10.1038/srep24673] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2015] [Accepted: 04/04/2016] [Indexed: 11/29/2022] Open
Abstract
To accurately represent the environment, our brains must integrate sensory signals from a common source while segregating those from independent sources. A reasonable strategy for performing this task is to restrict integration to cues that coincide in space and time. However, because multisensory signals are subject to differential transmission and processing delays, the brain must retain a degree of tolerance for temporal discrepancies. Recent research suggests that the width of this ‘temporal binding window’ can be reduced through perceptual learning; however, little is known about the mechanisms underlying these experience-dependent effects. Here, in separate experiments, we measure the temporal and spatial binding windows of human participants before and after training on an audiovisual temporal discrimination task. We show that training leads to two distinct effects on multisensory integration in the form of (i) a specific narrowing of the temporal binding window that does not transfer to spatial binding and (ii) a general reduction in the magnitude of crossmodal interactions across all spatiotemporal disparities. These effects arise naturally from a Bayesian model of causal inference in which learning improves the precision of audiovisual timing estimation, whilst concomitantly decreasing the prior expectation that stimuli emanate from a common source.
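The two training effects map naturally onto parameters of a minimal causal-inference sketch. The model form and all parameter values below are illustrative, not those fitted in the study:

```python
import math

def p_common(disparity, sigma, p_prior, sigma_indep=0.5):
    """Posterior probability that audio and visual signals share a common
    cause, given their measured onset disparity (seconds). Under a common
    cause, disparities are Gaussian with width sigma; under independent
    causes they are broadly distributed (width sigma_indep)."""
    def gauss(x, s):
        return math.exp(-x * x / (2 * s * s)) / (s * math.sqrt(2 * math.pi))
    like_c = gauss(disparity, sigma)
    like_i = gauss(disparity, sigma_indep)
    return like_c * p_prior / (like_c * p_prior + like_i * (1 - p_prior))

# The two effects in the abstract correspond to two parameters:
# (i) better timing precision -> smaller sigma -> narrower binding window;
# (ii) weaker common-cause prior -> lower p_prior -> weaker crossmodal
#      interactions at all disparities.
before = p_common(0.15, sigma=0.12, p_prior=0.7)
after = p_common(0.15, sigma=0.06, p_prior=0.5)
```

At a 150 ms disparity the post-training posterior for a common cause is markedly lower, combining both the narrowing and the general-reduction effects.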
46
Overton JA, Recanzone GH. Effects of aging on the response of single neurons to amplitude-modulated noise in primary auditory cortex of rhesus macaque. J Neurophysiol 2016; 115:2911-23. [PMID: 26936987 DOI: 10.1152/jn.01098.2015] [Citation(s) in RCA: 37] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2015] [Accepted: 03/02/2016] [Indexed: 12/13/2022] Open
Abstract
Temporal envelope processing is critical for speech comprehension, which is known to be affected by normal aging. Whereas the macaque is an excellent animal model for human cerebral cortical function, few studies have investigated neural processing in the auditory cortex of aged, nonhuman primates. Therefore, we investigated age-related changes in the spiking activity of neurons in primary auditory cortex (A1) of two aged macaque monkeys using amplitude-modulated (AM) noise and compared these responses with data from a similar study in young monkeys (Yin P, Johnson JS, O'Connor KN, Sutter ML. J Neurophysiol 105: 582-600, 2011). For each neuron, we calculated firing rate (rate code) and phase-locking using phase-projected vector strength (temporal code). We identified several key respects in which neurons in old monkeys differed from those in young monkeys. Old monkeys had higher spontaneous and driven firing rates, fewer neurons that synchronized with the AM stimulus, and fewer neurons that had differential responses to AM stimuli with both a rate and temporal code. Finally, whereas rate and temporal tuning functions were positively correlated in young monkeys, this relationship was lost in older monkeys at both the population and single neuron levels. These results are consistent with considerable evidence from rodents and primates of an age-related decrease in inhibition throughout the auditory pathway. Furthermore, this dual coding in A1 is thought to underlie the capacity to encode multiple features of an acoustic stimulus. The apparent loss of ability to encode AM with both rate and temporal codes may have consequences for stream segregation and effective speech comprehension in complex listening environments.
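The classic vector-strength metric underlying the temporal code can be computed in a few lines. Note that the phase-projected variant used in the study adds a per-trial projection onto the mean phase; only the standard metric is sketched here:

```python
import math

def vector_strength(spike_times, mod_freq):
    """Classic vector strength: each spike becomes a unit vector at its
    phase within the modulation cycle; VS is the length of the mean vector
    (1 = perfect phase locking, ~0 = no locking). The phase-projected
    variant additionally penalizes trials that lock to the wrong phase."""
    if not spike_times:
        return 0.0
    phases = [2 * math.pi * mod_freq * t for t in spike_times]
    x = sum(math.cos(p) for p in phases) / len(phases)
    y = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(x, y)

# Spikes locked to the same phase of a 10-Hz modulation give VS near 1;
# spikes spread evenly across the cycle give VS near 0.
locked = vector_strength([0.01, 0.11, 0.21, 0.31], mod_freq=10.0)
smeared = vector_strength([0.010, 0.035, 0.060, 0.085], mod_freq=10.0)
```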
Affiliation(s)
- Gregg H Recanzone, Center for Neuroscience and Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
47
O'Connell MN, Barczak A, Ross D, McGinnis T, Schroeder CE, Lakatos P. Multi-Scale Entrainment of Coupled Neuronal Oscillations in Primary Auditory Cortex. Front Hum Neurosci 2015; 9:655. [PMID: 26696866 PMCID: PMC4673342 DOI: 10.3389/fnhum.2015.00655] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2015] [Accepted: 11/17/2015] [Indexed: 12/02/2022] Open
Abstract
Earlier studies demonstrate that when the frequency of rhythmic tone sequences or streams is task relevant, ongoing excitability fluctuations (oscillations) of neuronal ensembles in primary auditory cortex (A1) entrain to stimulation in a frequency dependent way that sharpens frequency tuning. The phase distribution across A1 neuronal ensembles at time points when attended stimuli are predicted to occur reflects the focus of attention along the spectral attribute of auditory stimuli. This study examined how neuronal activity is modulated if only the temporal features of rhythmic stimulus streams are relevant. We presented macaques with auditory clicks arranged in 33 Hz (gamma timescale) quintets, repeated at a 1.6 Hz (delta timescale) rate. Such multi-scale, hierarchically organized temporal structure is characteristic of vocalizations and other natural stimuli. Monkeys were required to detect and respond to deviations in the temporal pattern of gamma quintets. As expected, engagement in the auditory task resulted in the multi-scale entrainment of delta- and gamma-band neuronal oscillations across all of A1. Surprisingly, however, the phase-alignment, and thus, the physiological impact of entrainment differed across the tonotopic map in A1. In the region of 11–16 kHz representation, entrainment most often aligned high excitability oscillatory phases with task-relevant events in the input stream and thus resulted in response enhancement. In the remainder of the A1 sites, entrainment generally resulted in response suppression. Our data indicate that the suppressive effects were due to low excitability phase delta oscillatory entrainment and the phase amplitude coupling of delta and gamma oscillations. Regardless of the phase or frequency, entrainment appeared stronger in left A1, indicative of the hemispheric lateralization of auditory function.
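The nested temporal structure of the stimulus is easy to make concrete. This sketch generates click onset times only and is not the authors' stimulus code:

```python
def quintet_onsets(n_quintets, gamma_hz=33.0, delta_hz=1.6):
    """Onset times (s) for clicks grouped in quintets at the gamma
    timescale (33 Hz within a quintet), with quintets repeating at the
    delta timescale (1.6 Hz), as described in the abstract."""
    onsets = []
    for q in range(n_quintets):
        t0 = q / delta_hz                     # quintet start: delta rhythm
        for c in range(5):
            onsets.append(t0 + c / gamma_hz)  # clicks within: gamma rhythm
    return onsets

# Two quintets = 10 clicks; inter-click interval ~30 ms, quintet starts
# 625 ms apart, giving the hierarchical delta/gamma structure.
times = quintet_onsets(2)
```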
Affiliation(s)
- M N O'Connell, Cognitive Neuroscience and Schizophrenia Program, Nathan Kline Institute, Orangeburg, NY, USA
- A Barczak, Cognitive Neuroscience and Schizophrenia Program, Nathan Kline Institute, Orangeburg, NY, USA
- D Ross, Cognitive Neuroscience and Schizophrenia Program, Nathan Kline Institute, Orangeburg, NY, USA
- T McGinnis, Cognitive Neuroscience and Schizophrenia Program, Nathan Kline Institute, Orangeburg, NY, USA
- C E Schroeder, Cognitive Neuroscience and Schizophrenia Program, Nathan Kline Institute, Orangeburg, NY, USA; Department of Psychiatry, Columbia College of Physicians and Surgeons, New York, NY, USA
- P Lakatos, Cognitive Neuroscience and Schizophrenia Program, Nathan Kline Institute, Orangeburg, NY, USA; Department of Psychiatry, NYU School of Medicine, New York, NY, USA
48
Ervast L, Hämäläinen JA, Zachau S, Lohvansuu K, Heinänen K, Veijola M, Heikkinen E, Suominen K, Luotonen M, Lehtihalmes M, Leppänen PHT. Event-related brain potentials to change in the frequency and temporal structure of sounds in typically developing 5-6-year-old children. Int J Psychophysiol 2015; 98:413-25. [PMID: 26342552 DOI: 10.1016/j.ijpsycho.2015.08.007] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2015] [Revised: 08/14/2015] [Accepted: 08/20/2015] [Indexed: 11/24/2022]
Abstract
The brain's ability to recognize different acoustic cues (e.g., frequency changes in rapid temporal succession) is important for speech perception and thus for successful language development. Here we report on distinct event-related potentials (ERPs) in 5-6-year-old children recorded in a passive oddball paradigm to repeated tone pair stimuli with a frequency change in the second tone in the pair, replicating earlier findings. An occasional insertion of a third tone within the tone pair generated a more merged pattern, which has not been reported previously in 5-6-year-old children. Both types of deviations elicited pre-attentive discriminative mismatch negativity (MMN) and late discriminative negativity (LDN) responses. Temporal principal component analysis (tPCA) showed a similar topographical pattern with fronto-central negativity for MMN and LDN. We also found a previously unreported discriminative response complex (P340-N440) at the temporal electrode sites at about 140 ms and 240 ms after the frequency deviance, which we suggest reflects a discriminative processing of frequency change. The P340 response was positive with a clear radial distribution preceding the fronto-central frequency MMN by about 30 ms. The results indicate that 5-6-year-old children can detect frequency change and the occasional insertion of an additional tone in sound pairs as reflected by MMN and LDN, even with quite short within-stimulus intervals (150 ms and 50 ms). Furthermore, MMN for these changes is preceded by another response to deviancy, temporal P340, which seems to reflect a parallel but earlier discriminatory process.
Affiliation(s)
- Leena Ervast, Logopedics and Child Language Research Center, Faculty of Humanities, University of Oulu, Finland; Department of Clinical Neurophysiology, Neurocognitive Unit, Oulu University Hospital, Finland
- Jarmo A Hämäläinen, Department of Psychology, University of Jyväskylä, Finland
- Swantje Zachau, Logopedics and Child Language Research Center, Faculty of Humanities, University of Oulu, Finland; Department of Clinical Neurophysiology, Neurocognitive Unit, Oulu University Hospital, Finland
- Kaisa Lohvansuu, Department of Psychology, University of Jyväskylä, Finland
- Kaisu Heinänen, Logopedics and Child Language Research Center, Faculty of Humanities, University of Oulu, Finland; Department of Clinical Neurophysiology, Neurocognitive Unit, Oulu University Hospital, Finland
- Mari Veijola, Department of Clinical Neurophysiology, Neurocognitive Unit, Oulu University Hospital, Finland; Department of Otorhinolaryngology, Oulu University Hospital, Finland
- Elisa Heikkinen, Logopedics and Child Language Research Center, Faculty of Humanities, University of Oulu, Finland; Department of Clinical Neurophysiology, Neurocognitive Unit, Oulu University Hospital, Finland
- Kalervo Suominen, Department of Clinical Neurophysiology, Neurocognitive Unit, Oulu University Hospital, Finland
- Mirja Luotonen, Department of Otorhinolaryngology, Oulu University Hospital, Finland
- Matti Lehtihalmes, Logopedics and Child Language Research Center, Faculty of Humanities, University of Oulu, Finland
- Paavo H T Leppänen, Department of Psychology, University of Jyväskylä, Finland
49
High-field functional magnetic resonance imaging of vocalization processing in marmosets. Sci Rep 2015; 5:10950. [PMID: 26091254 PMCID: PMC4473644 DOI: 10.1038/srep10950] [Citation(s) in RCA: 41] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2014] [Accepted: 04/29/2015] [Indexed: 11/17/2022] Open
Abstract
Vocalizations are behaviorally critical sounds, and this behavioral importance is reflected in the ascending auditory system, where conspecific vocalizations are increasingly over-represented at higher processing stages. Recent evidence suggests that, in macaques, this increasing selectivity for vocalizations might culminate in a cortical region that is densely populated by vocalization-preferring neurons. Such a region might be a critical node in the representation of vocal communication sounds, underlying the recognition of vocalization type, caller and social context. These results raise the questions of whether cortical specializations for vocalization processing exist in other species, their cortical location, and their relationship to the auditory processing hierarchy. To explore cortical specializations for vocalizations in another species, we performed high-field fMRI of the auditory cortex of a vocal New World primate, the common marmoset (Callithrix jacchus). Using a sparse imaging paradigm, we discovered a caudal-rostral gradient for the processing of conspecific vocalizations in marmoset auditory cortex, with regions of the anterior temporal lobe close to the temporal pole exhibiting the highest preference for vocalizations. These results demonstrate similar cortical specializations for vocalization processing in macaques and marmosets, suggesting that cortical specializations for vocal processing might have evolved before the lineages of these species diverged.
50
Auditory properties in the parabelt regions of the superior temporal gyrus in the awake macaque monkey: an initial survey. J Neurosci 2015; 35:4140-50. [PMID: 25762661 DOI: 10.1523/jneurosci.3556-14.2015] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/07/2023] Open
Abstract
The superior temporal gyrus (STG) is on the inferior-lateral brain surface near the external ear. In macaques, 2/3 of the STG is occupied by an auditory cortical region, the "parabelt," which is part of a network of inferior temporal areas subserving communication and social cognition as well as object recognition and other functions. However, due to its location beneath the squamous temporal bone and temporalis muscle, the STG, like other inferior temporal regions, has been a challenging target for physiological studies in awake-behaving macaques. We designed a new procedure for implanting recording chambers to provide direct access to the STG, allowing us to evaluate neuronal properties and their topography across the full extent of the STG in awake-behaving macaques. Initial surveys of the STG have yielded several new findings. Unexpectedly, STG sites in monkeys that were listening passively responded to tones with magnitudes comparable to those of responses to 1/3 octave band-pass noise. Mapping results showed longer response latencies in more rostral sites and possible tonotopic patterns parallel to core and belt areas, suggesting the reversal of gradients between caudal and rostral parabelt areas. These results will help further exploration of parabelt areas.