1
Garcia MM, Kline AM, Onodera K, Tsukano H, Dandu PR, Acosta HC, Kasten M, Manis PB, Kato HK. Noncanonical Short-Latency Auditory Pathway Directly Activates Deep Cortical Layers. bioRxiv [Preprint] 2025:2025.01.06.631598. [PMID: 39829930 PMCID: PMC11741258 DOI: 10.1101/2025.01.06.631598]
Abstract
Auditory processing in the cerebral cortex is considered to begin with thalamocortical inputs to layer 4 (L4) of the primary auditory cortex (A1). In this canonical model, A1 L4 inputs initiate a hierarchical cascade, with higher-order cortices receiving pre-processed information for the slower integration of complex sounds. Here, we identify alternative ascending pathways in mice that bypass A1 and directly reach multiple layers of the secondary auditory cortex (A2), indicating parallel activation of these areas alongside sequential information processing. We found that L6 of both A1 and A2 receive short-latency (<10 ms) sound inputs, comparable in speed to the canonical A1 L4 input but transmitted through higher-order thalamic nuclei. Additionally, A2 L4 is innervated by a caudal subdivision within the traditionally defined primary thalamus, which we now identify as belonging to the non-primary system. Notably, both thalamic regions receive projections from distinct subdivisions of the higher-order inferior colliculus, which in turn are directly innervated by cochlear nucleus neurons. These findings reveal alternative ascending pathways reaching A2 at L4 and L6 via secondary subcortical structures. Thus, higher-order auditory cortex processes both slow, pre-processed information and rapid, direct sensory inputs, enabling parallel and distributed processing of fast sensory information across cortical areas.
Affiliation(s)
- Michellee M. Garcia
- Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Neuroscience Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Amber M Kline
- Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Neuroscience Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Koun Onodera
- Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Neuroscience Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Hiroaki Tsukano
- Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Neuroscience Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Pranathi R. Dandu
- Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Neuroscience Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Hailey C. Acosta
- Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Neuroscience Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Michael Kasten
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Paul B. Manis
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Department of Cell Biology and Physiology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Hiroyuki K. Kato
- Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Neuroscience Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA 02114, USA
- Department of Otolaryngology - Head and Neck Surgery, Harvard Medical School, Boston, MA 02114, USA
2
Gori M, Amadeo MB, Pavani F, Valzolgher C, Campus C. Temporal visual representation elicits early auditory-like responses in hearing but not in deaf individuals. Sci Rep 2022; 12:19036. [PMID: 36351944 PMCID: PMC9646881 DOI: 10.1038/s41598-022-22224-x]
Abstract
It is evident that the brain is capable of large-scale reorganization following sensory deprivation, but the extent of such reorganization is, to date, not clear. The auditory modality is the most accurate for representing temporal information, and deafness is an ideal clinical condition for studying the reorganization of temporal representation when the audio signal is not available. Here we show that hearing individuals, but not deaf individuals, show a strong ERP response to visual stimuli in temporal areas during a time-bisection task. This ERP response appears 50-90 ms after the flash and recalls some aspects of the N1 ERP component usually elicited by auditory stimuli. The same ERP is not evident for a visual space-bisection task, suggesting that the early recruitment of temporal cortex is specific to building a highly resolved temporal representation within the visual modality. These findings provide evidence that the lack of auditory input can interfere with the typical development of complex visual temporal representations.
Affiliation(s)
- Monica Gori
- Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Via Enrico Melen 83, 16152 Genoa, Italy
- Maria Bianca Amadeo
- Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Via Enrico Melen 83, 16152 Genoa, Italy
- Francesco Pavani
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy; Centro Interateneo di Ricerca Cognizione, Linguaggio e Sordità (CIRCLeS), University of Trento, Trento, Italy; Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Centre de Recherche en Neuroscience de Lyon (CRNL), Bron, France
- Chiara Valzolgher
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy; Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Centre de Recherche en Neuroscience de Lyon (CRNL), Bron, France
- Claudio Campus
- Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Via Enrico Melen 83, 16152 Genoa, Italy
3
Socially meaningful visual context either enhances or inhibits vocalisation processing in the macaque brain. Nat Commun 2022; 13:4886. [PMID: 35985995 PMCID: PMC9391382 DOI: 10.1038/s41467-022-32512-9]
Abstract
Social interactions rely on the interpretation of semantic and emotional information, often from multiple sensory modalities. Nonhuman primates send and receive auditory and visual communicative signals. However, the neural mechanisms underlying the association of visual and auditory information based on their common social meaning are unknown. Using heart rate estimates and functional neuroimaging, we show that in the lateral and superior temporal sulcus of the macaque monkey, neural responses are enhanced in response to species-specific vocalisations paired with a matching visual context, or when vocalisations follow, in time, visual information, but inhibited when vocalisations are incongruent with the visual context. For example, responses to affiliative vocalisations are enhanced when paired with affiliative contexts but inhibited when paired with aggressive or escape contexts. Overall, we propose that the identified neural network represents social meaning irrespective of sensory modality. Social interaction involves processing semantic and emotional information. Here the authors show that in the macaque monkey lateral and superior temporal sulcus, cortical activity is enhanced in response to species-specific vocalisations predicted by matching face or social visual stimuli but inhibited when vocalisations are incongruent with the predictive visual context.
4
Predicting neuronal response properties from hemodynamic responses in the auditory cortex. Neuroimage 2021; 244:118575. [PMID: 34517127 DOI: 10.1016/j.neuroimage.2021.118575]
Abstract
Recent functional MRI (fMRI) studies have highlighted differences in responses to natural sounds along the rostral-caudal axis of the human superior temporal gyrus. However, due to the indirect nature of the fMRI signal, it has been challenging to relate these fMRI observations to actual neuronal response properties. To bridge this gap, we present a forward model of the fMRI responses to natural sounds combining a neuronal model of the auditory cortex with physiological modeling of the hemodynamic BOLD response. Neuronal responses are modeled with a dynamic recurrent firing rate model, reflecting the tonotopic, hierarchical processing in the auditory cortex along with the spectro-temporal tradeoff in the rostral-caudal axis of its belt areas. To link modeled neuronal response properties with human fMRI data in the auditory belt regions, we generated a space of neuronal models, which differed parametrically in spectral and temporal specificity of neuronal responses. Then, we obtained predictions of fMRI responses through a biophysical model of the hemodynamic BOLD response (P-DCM). Using Bayesian model comparison, our results showed that the hemodynamic BOLD responses of the caudal belt regions in the human auditory cortex were best explained by modeling faster temporal dynamics and broader spectral tuning of neuronal populations, while rostral belt regions were best explained through fine spectral tuning combined with slower temporal dynamics. These results support the hypotheses of complementary neural information processing along the rostral-caudal axis of the human superior temporal gyrus.
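For readers unfamiliar with the general idea of such forward models, the sketch below illustrates the simplest possible neuronal-to-BOLD mapping: convolving a simulated firing-rate time course with a canonical double-gamma hemodynamic response function. This linear convolution is only an illustrative stand-in; the study itself uses the more elaborate biophysical P-DCM model, and every parameter value in the sketch is an assumption.

```python
import numpy as np
from scipy.stats import gamma

# Minimal sketch (not the paper's P-DCM): map a simulated neuronal
# firing-rate time course to a BOLD prediction by convolving it with a
# canonical double-gamma hemodynamic response function (HRF).

def double_gamma_hrf(dt=0.1, duration=30.0):
    """Canonical double-gamma HRF sampled every `dt` seconds."""
    t = np.arange(0, duration, dt)
    peak = gamma.pdf(t, a=6)          # positive response peaking ~5 s
    undershoot = gamma.pdf(t, a=16)   # late undershoot
    hrf = peak - 0.35 * undershoot
    return hrf / hrf.sum()

dt = 0.1
t = np.arange(0, 60, dt)
# Toy neuronal response: transient firing-rate bursts to two sounds
rate = np.zeros_like(t)
rate[(t > 5) & (t < 7)] = 1.0
rate[(t > 30) & (t < 34)] = 0.6

bold = np.convolve(rate, double_gamma_hrf(dt), mode="full")[: t.size]
print(f"Predicted BOLD peaks {t[np.argmax(bold)]:.1f} s into the run")
```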
5
Morán I, Perez-Orive J, Melchor J, Figueroa T, Lemus L. Auditory decisions in the supplementary motor area. Prog Neurobiol 2021; 202:102053. [PMID: 33957182 DOI: 10.1016/j.pneurobio.2021.102053]
Abstract
In human speech, as in communication across various species, recognizing and categorizing sounds is fundamental for the selection of appropriate behaviors. However, how does the brain decide which action to perform based on sounds? We explored whether the supplementary motor area (SMA), responsible for linking sensory information to motor programs, also accounts for auditory-driven decision making. To this end, we trained two rhesus monkeys to discriminate between numerous naturalistic sounds and words learned as target (T) or non-target (nT) categories. We found that the SMA, at both the single-neuron and population levels, performs decision-related computations that transition from auditory to movement representations in this task. Moreover, we demonstrated that the neural population is organized orthogonally during the auditory and movement periods, implying that the SMA performs different computations in each. In conclusion, our results suggest that the SMA integrates acoustic information in order to form categorical signals that drive behavior.
Affiliation(s)
- Isaac Morán
- Department of Cognitive Neuroscience, Institute of Cell Physiology, Universidad Nacional Autónoma de México (UNAM), 04510, Mexico City, Mexico
- Javier Perez-Orive
- Instituto Nacional de Rehabilitacion "Luis Guillermo Ibarra Ibarra", Mexico City, Mexico
- Jonathan Melchor
- Department of Cognitive Neuroscience, Institute of Cell Physiology, Universidad Nacional Autónoma de México (UNAM), 04510, Mexico City, Mexico
- Tonatiuh Figueroa
- Department of Cognitive Neuroscience, Institute of Cell Physiology, Universidad Nacional Autónoma de México (UNAM), 04510, Mexico City, Mexico
- Luis Lemus
- Department of Cognitive Neuroscience, Institute of Cell Physiology, Universidad Nacional Autónoma de México (UNAM), 04510, Mexico City, Mexico.
6
Henschke JU, Price AT, Pakan JMP. Enhanced modulation of cell-type specific neuronal responses in mouse dorsal auditory field during locomotion. Cell Calcium 2021; 96:102390. [PMID: 33744780 DOI: 10.1016/j.ceca.2021.102390]
Abstract
As we move through the environment, we experience constantly changing sensory input that must be merged with our ongoing motor behaviors, creating dynamic interactions between our sensory and motor systems. Active behaviors such as locomotion generally increase sensory-evoked neuronal activity in visual and somatosensory cortices, but evidence suggests that locomotion largely suppresses neuronal responses in the auditory cortex. However, whether this effect is ubiquitous across different anatomical regions of the auditory cortex is largely unknown. In mice, auditory association fields such as the dorsal auditory cortex (AuD) have been shown to have different physiological response properties, protein expression patterns, and cortical as well as subcortical connections in comparison to primary auditory regions (A1), suggesting there may be important functional differences. Here we examined locomotion-related modulation of neuronal activity in cortical layers 2/3 of AuD and A1 using two-photon Ca2+ imaging in head-fixed behaving mice that were able to run freely on a spherical treadmill. We determined the proportion of neurons in these two auditory regions that show enhanced or suppressed sensory-evoked responses during locomotion and quantified the depth of modulation. We found that A1 shows more suppressed and AuD more enhanced responses during locomotion periods. We further revealed differences in the circuitry between these auditory regions and motor cortex, finding that AuD is more highly connected to motor cortical regions. Finally, we compared the cell-type-specific locomotion-evoked modulation of responses in AuD and found that, while subpopulations of PV-expressing interneurons showed heterogeneous responses, the population as a whole was largely suppressed during locomotion, whereas excitatory population responses in AuD were generally enhanced. Therefore, neurons in primary and dorsal auditory fields have distinct response properties, with dorsal regions exhibiting enhanced activity in response to movement. This functional distinction may be important for auditory processing during navigation and acoustically guided behavior.
Affiliation(s)
- Julia U Henschke
- Institute of Cognitive Neurology and Dementia Research, Otto-von-Guericke-University Magdeburg, Leipziger Str. 44, 39120, Magdeburg, Germany; German Centre for Neurodegenerative Diseases, Leipziger Str. 44, 39120, Magdeburg, Germany
- Alan T Price
- Institute of Cognitive Neurology and Dementia Research, Otto-von-Guericke-University Magdeburg, Leipziger Str. 44, 39120, Magdeburg, Germany; German Centre for Neurodegenerative Diseases, Leipziger Str. 44, 39120, Magdeburg, Germany; Cognitive Neurophysiology group, Leibniz Institute for Neurobiology (LIN), 39118, Magdeburg, Germany
- Janelle M P Pakan
- Institute of Cognitive Neurology and Dementia Research, Otto-von-Guericke-University Magdeburg, Leipziger Str. 44, 39120, Magdeburg, Germany; German Centre for Neurodegenerative Diseases, Leipziger Str. 44, 39120, Magdeburg, Germany; Center for Behavioral Brain Sciences, Universitätsplatz 2, 39120, Magdeburg, Germany.
7
An H, Ho Kei S, Auksztulewicz R, Schnupp JWH. Do Auditory Mismatch Responses Differ Between Acoustic Features? Front Hum Neurosci 2021; 15:613903. [PMID: 33597853 PMCID: PMC7882487 DOI: 10.3389/fnhum.2021.613903]
Abstract
Mismatch negativity (MMN) is the electroencephalographic (EEG) waveform obtained by subtracting event-related potential (ERP) responses evoked by expected standard stimuli from responses evoked by unexpected deviant stimuli. While the MMN is thought to reflect an unexpected change in an ongoing, predictable stimulus, it is unknown whether MMN responses evoked by changes in different stimulus features have different magnitudes, latencies, and topographies. The present study aimed to investigate whether MMN responses differ depending on whether the sudden stimulus change occurs in pitch, duration, location, or vowel identity. To calculate ERPs to standard and deviant stimuli, EEG signals were recorded in normal-hearing participants (N = 20; 13 males, 7 females) who listened to roving oddball sequences of artificial syllables. In the roving paradigm, any given stimulus is repeated several times to form a standard and is then suddenly replaced with a deviant stimulus which differs from the standard. Here, deviants differed from the preceding standards along one of four features (pitch, duration, vowel, or interaural level difference). The feature levels were individually chosen to match behavioral discrimination performance. We identified neural activity evoked by unexpected violations along all four acoustic dimensions. Evoked responses to deviant stimuli increased in amplitude relative to the responses to standard stimuli. A univariate (channel-by-channel) analysis yielded no significant differences between MMN responses following violations of different features. However, in a multivariate analysis (pooling information from multiple EEG channels), acoustic features could be decoded from the topography of mismatch responses, although at later latencies than those typical for MMN. These results support the notion that deviant feature detection may be subserved by a different process than general mismatch detection.
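As a concrete illustration of the subtraction that defines the MMN, the sketch below computes a difference wave from simulated epoched EEG data. The array names, shapes, deviant proportion, and analysis window are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

# Minimal sketch of an MMN difference wave: average deviant and standard
# epochs separately, then subtract standard from deviant. Data here are
# random placeholders with hypothetical dimensions.

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 32, 300   # e.g. 300 samples spanning -100..500 ms
epochs = rng.normal(size=(n_trials, n_channels, n_times))
is_deviant = rng.random(n_trials) < 0.15       # roving oddball: deviants are rare

erp_standard = epochs[~is_deviant].mean(axis=0)  # ERP to expected standards
erp_deviant = epochs[is_deviant].mean(axis=0)    # ERP to unexpected deviants
mmn = erp_deviant - erp_standard                 # mismatch difference wave

# MMN amplitude is then typically quantified in a post-stimulus window
# (e.g. ~100-250 ms) at fronto-central channels.
window = slice(150, 225)                         # hypothetical sample indices
print(mmn[:, window].mean(axis=-1).shape)        # per-channel MMN amplitude
```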
Affiliation(s)
- HyunJung An
- Department of Neuroscience, City University of Hong Kong, Kowloon, Hong Kong
- Shing Ho Kei
- Department of Neuroscience, City University of Hong Kong, Kowloon, Hong Kong
- Ryszard Auksztulewicz
- Department of Neuroscience, City University of Hong Kong, Kowloon, Hong Kong; Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Jan W H Schnupp
- Department of Neuroscience, City University of Hong Kong, Kowloon, Hong Kong
8
Amadeo MB, Campus C, Gori M. Visual representations of time elicit early responses in human temporal cortex. Neuroimage 2020; 217:116912. [PMID: 32389726 DOI: 10.1016/j.neuroimage.2020.116912]
Abstract
Time perception is inherently part of human life. All human sensory modalities are involved in the complex task of creating a temporal representation of the external world; however, when representing time, people primarily rely on auditory information. Since the auditory system prevails in many audio-visual temporal tasks, one may expect that the early recruitment of the auditory network is necessary for building a highly resolved and flexible temporal representation in the visual modality. To test this hypothesis, we asked 17 healthy participants to temporally bisect three consecutive flashes while we recorded EEG. We demonstrated that visual stimuli during temporal bisection elicit an early (50-90 ms) response in an extended area of the temporal cortex, likely including the auditory cortex. The same activation did not appear during an easier spatial bisection task. These findings suggest that the brain may use auditory representations to deal with complex temporal representations in the visual system.
Affiliation(s)
- Maria Bianca Amadeo
- U-VIP Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Via E. Melen, 83, 16152, Genova, Italy; Department of Informatics, Bioengineering, Robotics and Systems Engineering, Università degli Studi di Genova, via all'Opera Pia, 13, 16145, Genova, Italy.
- Claudio Campus
- U-VIP Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Via E. Melen, 83, 16152, Genova, Italy.
- Monica Gori
- U-VIP Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Via E. Melen, 83, 16152, Genova, Italy.
9
Stankova EP, Kruchinina OV, Shepovalnikov AN, Galperina EI. Evolution of the Central Mechanisms of Oral Speech. J Evol Biochem Physiol 2020. [DOI: 10.1134/s0022093020030011]
10
Erb J, Schmitt LM, Obleser J. Temporal selectivity declines in the aging human auditory cortex. eLife 2020; 9:e55300. [PMID: 32618270 PMCID: PMC7410487 DOI: 10.7554/elife.55300]
Abstract
Current models successfully describe the auditory cortical response to natural sounds with a set of spectro-temporal features. However, these models have hardly been linked to the ill-understood neurobiological changes that occur in the aging auditory cortex. Modelling the hemodynamic response to a rich natural sound mixture in N = 64 listeners of varying age, we here show that in older listeners' auditory cortex, the key feature of temporal rate is represented with a markedly broader tuning. This loss of temporal selectivity is most prominent in primary auditory cortex and planum temporale, with no such changes in adjacent auditory or other brain areas. Amongst older listeners, we observe a direct relationship between chronological age and temporal-rate tuning, unconfounded by auditory acuity or model goodness of fit. In line with senescent neural dedifferentiation more generally, our results highlight decreased selectivity to temporal information as a hallmark of the aging auditory cortex. It can often be difficult for an older person to understand what someone is saying, particularly in noisy environments. Exactly how and why this age-related change occurs is not clear, but it is thought that older individuals may become less able to tune in to certain features of sound. Newer tools are making it easier to study age-related changes in hearing in the brain. For example, functional magnetic resonance imaging (fMRI) can allow scientists to 'see' and measure how certain parts of the brain react to different features of sound. Using fMRI data, researchers can compare how younger and older people process speech. They can also track how speech processing in the brain changes with age. Now, Erb et al. show that older individuals have a harder time tuning into the rhythm of speech. In the experiments, 64 people between the ages of 18 and 78 were asked to listen to speech in a noisy setting while they underwent fMRI. The researchers then tested a computer model using the data. In the older individuals, the brain's tuning to the timing or rhythm of speech was broader, while the younger participants were more able to finely tune into this feature of sound. The older a person was, the less able their brain was to distinguish rhythms in speech, likely making it harder to understand what had been said. This hearing change likely occurs because brain cells become less specialised over time, which can contribute to many kinds of age-related cognitive decline. This new information about why understanding speech becomes more difficult with age may help scientists develop better hearing aids that are individualised to a person's specific needs.
Affiliation(s)
- Julia Erb
- Department of Psychology, University of Lübeck, Lübeck, Germany
- Jonas Obleser
- Department of Psychology, University of Lübeck, Lübeck, Germany
11
Archakov D, DeWitt I, Kuśmierek P, Ortiz-Rios M, Cameron D, Cui D, Morin EL, VanMeter JW, Sams M, Jääskeläinen IP, Rauschecker JP. Auditory representation of learned sound sequences in motor regions of the macaque brain. Proc Natl Acad Sci U S A 2020; 117:15242-15252. [PMID: 32541016 PMCID: PMC7334521 DOI: 10.1073/pnas.1915610117]
Abstract
Human speech production requires the ability to couple motor actions with their auditory consequences. Nonhuman primates might not have speech because they lack this ability. To address this question, we trained macaques to perform an auditory-motor task producing sound sequences via hand presses on a newly designed device ("monkey piano"). Catch trials were interspersed to ascertain the monkeys were listening to the sounds they produced. Functional MRI was then used to map brain activity while the animals listened attentively to the sound sequences they had learned to produce and to two control sequences, which were either completely unfamiliar or familiar through passive exposure only. All sounds activated auditory midbrain and cortex, but listening to the sequences that were learned by self-production additionally activated the putamen and the hand and arm regions of motor cortex. These results indicate that, in principle, monkeys are capable of forming internal models linking sound perception and production in motor regions of the brain, so this ability is not special to speech in humans. However, the coupling of sounds and actions in nonhuman primates (and the availability of an internal model supporting it) seems not to extend to the upper vocal tract, that is, the supralaryngeal articulators, which are key for the production of speech sounds in humans. The origin of speech may have required the evolution of a "command apparatus" similar to the control of the hand, which was crucial for the evolution of tool use.
Affiliation(s)
- Denis Archakov
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, FI-02150 Espoo, Finland
- Iain DeWitt
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057
- Paweł Kuśmierek
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057
- Michael Ortiz-Rios
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057
- Daniel Cameron
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057
- Ding Cui
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057
- Elyse L Morin
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057
- John W VanMeter
- Center for Functional and Molecular Imaging, Georgetown University Medical Center, Washington, DC 20057
- Mikko Sams
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, FI-02150 Espoo, Finland
- Iiro P Jääskeläinen
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, FI-02150 Espoo, Finland
- Josef P Rauschecker
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057
12
Abstract
There are functional and anatomical distinctions between the neural systems involved in the recognition of sounds in the environment and those involved in the sensorimotor guidance of sound production and the spatial processing of sound. Evidence for the separation of these processes has historically come from disparate literatures on the perception and production of speech, music and other sounds. More recent evidence indicates that there are computational distinctions between the rostral and caudal primate auditory cortex that may underlie functional differences in auditory processing. These functional differences may originate from differences in the response times and temporal profiles of neurons in the rostral and caudal auditory cortex, suggesting that computational accounts of primate auditory pathways should focus on the implications of these temporal response differences.
13
Zulfiqar I, Moerel M, Formisano E. Spectro-Temporal Processing in a Two-Stream Computational Model of Auditory Cortex. Front Comput Neurosci 2020; 13:95. [PMID: 32038212 PMCID: PMC6987265 DOI: 10.3389/fncom.2019.00095]
Abstract
Neural processing of sounds in the dorsal and ventral streams of the (human) auditory cortex is optimized for analyzing fine-grained temporal and spectral information, respectively. Here we use a Wilson and Cowan firing-rate modeling framework to simulate spectro-temporal processing of sounds in these auditory streams and to investigate the link between neural population activity and behavioral results of psychoacoustic experiments. The proposed model consisted of two core (A1 and R, representing primary areas) and two belt (Slow and Fast, representing rostral and caudal processing, respectively) areas, differing in terms of their spectral and temporal response properties. First, we simulated the responses to amplitude modulated (AM) noise and tones. In agreement with electrophysiological results, we observed an area-dependent transition from a temporal (synchronization) to a rate code when moving from low to high modulation rates. Simulated neural responses in a task of amplitude modulation detection suggested that thresholds derived from population responses in core areas closely resembled those of psychoacoustic experiments in human listeners. For tones, simulated modulation threshold functions were found to be dependent on the carrier frequency. Second, we simulated the responses to complex tones with a missing fundamental and found that synchronization of responses in the Fast area accurately encoded pitch, with the strength of synchronization depending on the number and order of harmonic components. Finally, using speech stimuli, we showed that the spectral and temporal structure of the speech was reflected in parallel by the modeled areas. The analyses highlighted that the Slow stream coded, with high spectral precision, the aspects of the speech signal characterized by slow temporal changes (e.g., prosody), while the Fast stream primarily encoded the faster changes (e.g., phonemes, consonants, temporal pitch). Interestingly, the pitch of a speaker was encoded both spatially (i.e., tonotopically) in the Slow area and temporally in the Fast area. Overall, the simulations showed that the model is valuable for generating hypotheses on how the different cortical areas/streams may contribute to behaviorally relevant aspects of auditory processing. The model can be used in combination with physiological models of neurovascular coupling to generate predictions for human functional MRI experiments.
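The Wilson and Cowan framework mentioned above reduces each cortical area to coupled excitatory and inhibitory firing-rate populations. A minimal single-unit sketch is given below; the parameters, sigmoid nonlinearity, and input drive are assumed values for illustration, not the paper's fitted model.

```python
import numpy as np

# Minimal Wilson-Cowan excitatory/inhibitory firing-rate unit, the class
# of model the paper builds its cortical areas from.

def sigmoid(x, a=1.2, theta=2.8):
    return 1.0 / (1.0 + np.exp(-a * (x - theta)))

def simulate(P, dt=1e-3, T=1.0, tau_e=0.01, tau_i=0.02,
             w_ee=16.0, w_ei=12.0, w_ie=15.0, w_ii=3.0):
    """Euler integration of one E/I population pair driven by input P(t)."""
    n = int(T / dt)
    E = np.zeros(n)
    I = np.zeros(n)
    for k in range(n - 1):
        E[k + 1] = E[k] + dt / tau_e * (-E[k] + sigmoid(w_ee * E[k] - w_ei * I[k] + P[k]))
        I[k + 1] = I[k] + dt / tau_i * (-I[k] + sigmoid(w_ie * E[k] - w_ii * I[k]))
    return E, I

# Drive the unit with 8 Hz amplitude-modulated input and inspect whether
# the excitatory rate synchronizes to the modulation (a temporal code).
t = np.arange(0, 1.0, 1e-3)
P = 1.5 * (1 + np.sin(2 * np.pi * 8 * t))
E, I = simulate(P)
print(f"E-rate range: {E.min():.3f}-{E.max():.3f}")
```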
Affiliation(s)
- Isma Zulfiqar
- Maastricht Centre for Systems Biology, Maastricht University, Maastricht, Netherlands
- Michelle Moerel
- Maastricht Centre for Systems Biology, Maastricht University, Maastricht, Netherlands; Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Maastricht Brain Imaging Center, Maastricht, Netherlands
- Elia Formisano
- Maastricht Centre for Systems Biology, Maastricht University, Maastricht, Netherlands; Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Maastricht Brain Imaging Center, Maastricht, Netherlands
14
Pérez-Bellido A, Anne Barnes K, Crommett LE, Yau JM. Auditory Frequency Representations in Human Somatosensory Cortex. Cereb Cortex 2019; 28:3908-3921. [PMID: 29045579 DOI: 10.1093/cercor/bhx255]
Abstract
Recent studies have challenged the traditional notion of modality-dedicated cortical systems by showing that audition and touch evoke responses in the same sensory brain regions. While much of this work has focused on somatosensory responses in auditory regions, fewer studies have investigated sound responses and representations in somatosensory regions. In this functional magnetic resonance imaging (fMRI) study, we measured BOLD signal changes in participants performing an auditory frequency discrimination task and characterized activation patterns related to stimulus frequency using both univariate and multivariate analysis approaches. Outside of bilateral temporal lobe regions, we observed robust and frequency-specific responses to auditory stimulation in classically defined somatosensory areas. Moreover, using representational similarity analysis to define the relationships between multi-voxel activation patterns for all sound pairs, we found clear similarity patterns for auditory responses in the parietal lobe that correlated significantly with perceptual similarity judgments. Our results demonstrate that auditory frequency representations can be distributed over brain regions traditionally considered to be dedicated to somatosensation. The broad distribution of auditory and tactile responses over parietal and temporal regions reveals a number of candidate brain areas that could support general temporal frequency processing and mediate the extensive and robust perceptual interactions between audition and touch.
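The representational similarity analysis (RSA) step described above can be illustrated compactly: build a neural representational dissimilarity matrix (RDM) from multi-voxel patterns and rank-correlate it with a perceptual RDM. The sketch below uses random placeholder data; all names and sizes are assumptions, not the study's actual stimuli or voxel counts.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Minimal RSA sketch: compare a neural RDM built from multi-voxel
# activation patterns with a perceptual RDM from similarity judgments.

rng = np.random.default_rng(1)
n_stimuli, n_voxels = 8, 200                        # e.g. 8 auditory frequencies
patterns = rng.normal(size=(n_stimuli, n_voxels))   # one pattern per stimulus

# Condensed RDMs: pairwise correlation distance between activation patterns
neural_rdm = pdist(patterns, metric="correlation")
perceptual_rdm = rng.random(neural_rdm.shape)       # placeholder judgments

# Rank-correlate the two RDMs; Spearman is standard because RDM units differ
rho, p = spearmanr(neural_rdm, perceptual_rdm)
print(f"neural-perceptual RDM correlation: rho={rho:.2f}, p={p:.3f}")
```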
Affiliation(s)
- Alexis Pérez-Bellido
- Department of Neuroscience, Baylor College of Medicine, One Baylor Plaza, Houston, TX, USA
- Kelly Anne Barnes
- Department of Neuroscience, Baylor College of Medicine, One Baylor Plaza, Houston, TX, USA
- Lexi E Crommett
- Department of Neuroscience, Baylor College of Medicine, One Baylor Plaza, Houston, TX, USA
- Jeffrey M Yau
- Department of Neuroscience, Baylor College of Medicine, One Baylor Plaza, Houston, TX, USA
15
Abstract
Humans and other animals use spatial hearing to rapidly localize events in the environment. However, neural encoding of sound location is a complex process involving the computation and integration of multiple spatial cues that are not represented directly in the sensory organ (the cochlea). Our understanding of these mechanisms has increased enormously in the past few years. Current research is focused on the contribution of animal models for understanding human spatial audition, the effects of behavioural demands on neural sound location encoding, the emergence of a cue-independent location representation in the auditory cortex, and the relationship between single-source and concurrent location encoding in complex auditory scenes. Furthermore, computational modelling seeks to unravel how neural representations of sound source locations are derived from the complex binaural waveforms of real-life sounds. In this article, we review and integrate the latest insights from neurophysiological, neuroimaging and computational modelling studies of mammalian spatial hearing. We propose that the cortical representation of sound location emerges from recurrent processing taking place in a dynamic, adaptive network of early (primary) and higher-order (posterior-dorsal and dorsolateral prefrontal) auditory regions. This cortical network accommodates changing behavioural requirements and is especially relevant for processing the location of real-life, complex sounds and complex auditory scenes.
16
Nourski KV, Steinschneider M, Rhone AE, Kovach CK, Kawasaki H, Howard MA. Differential responses to spectrally degraded speech within human auditory cortex: An intracranial electrophysiology study. Hear Res 2018; 371:53-65. [PMID: 30500619 DOI: 10.1016/j.heares.2018.11.009]
Abstract
Understanding cortical processing of spectrally degraded speech in normal-hearing subjects may provide insights into how sound information is processed by cochlear implant (CI) users. This study investigated electrocorticographic (ECoG) responses to noise-vocoded speech and related these responses to behavioral performance in a phonemic identification task. Subjects were neurosurgical patients undergoing chronic invasive monitoring for medically refractory epilepsy. Stimuli were utterances /aba/ and /ada/, spectrally degraded using a noise vocoder (1-4 bands). ECoG responses were obtained from Heschl's gyrus (HG) and superior temporal gyrus (STG), and were examined within the high gamma frequency range (70-150 Hz). All subjects performed at chance accuracy with speech degraded to 1 and 2 spectral bands, and at or near ceiling for clear speech. Inter-subject variability was observed in the 3- and 4-band conditions. High gamma responses in posteromedial HG (auditory core cortex) were similar for all vocoded conditions and clear speech. A progressive preference for clear speech emerged in anterolateral segments of HG, regardless of behavioral performance. On the lateral STG, responses to all vocoded stimuli were larger in subjects with better task performance. In contrast, both behavioral and neural responses to clear speech were comparable across subjects regardless of their ability to identify degraded stimuli. Findings highlight differences in representation of spectrally degraded speech across cortical areas and their relationship to perception. The results are in agreement with prior non-invasive results. The data provide insight into the neural mechanisms associated with variability in perception of degraded speech and potentially into sources of such variability in CI users.
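For readers unfamiliar with noise vocoding, the sketch below shows the standard manipulation in miniature: split a signal into frequency bands, extract each band's envelope, and use it to remodulate band-limited noise. Band edges, filter order, and the test signal are illustrative assumptions, not the study's exact stimulus parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

# Minimal noise-vocoder sketch of the stimulus manipulation described above.

def noise_vocode(signal, fs, n_bands=4, f_lo=100.0, f_hi=6000.0):
    """Replace the fine structure in each band with envelope-modulated noise."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)    # log-spaced band edges
    noise = np.random.default_rng(0).normal(size=signal.size)
    out = np.zeros_like(signal)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)              # analysis band
        envelope = np.abs(hilbert(band))             # slow amplitude envelope
        carrier = sosfiltfilt(sos, noise)            # band-limited noise carrier
        out += envelope * carrier
    return out / (np.max(np.abs(out)) + 1e-12)       # normalize to +/-1

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
test_tone = np.sin(2 * np.pi * 220 * t) * (1 + np.sin(2 * np.pi * 4 * t))
vocoded = noise_vocode(test_tone, fs, n_bands=4)     # 4-band condition
```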
Affiliation(s)
- Kirill V Nourski
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, USA.
- Mitchell Steinschneider
- Departments of Neurology and Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- Ariane E Rhone
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
- Hiroto Kawasaki
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
- Matthew A Howard
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, USA; Pappajohn Biomedical Institute, The University of Iowa, Iowa City, IA, USA
17
Erb J, Armendariz M, De Martino F, Goebel R, Vanduffel W, Formisano E. Homology and Specificity of Natural Sound-Encoding in Human and Monkey Auditory Cortex. Cereb Cortex 2018; 29:3636-3650. [DOI: 10.1093/cercor/bhy243]
Abstract
Understanding homologies and differences in auditory cortical processing in human and nonhuman primates is an essential step in elucidating the neurobiology of speech and language. Using fMRI responses to natural sounds, we investigated the representation of multiple acoustic features in auditory cortex of awake macaques and humans. Comparative analyses revealed homologous large-scale topographies not only for frequency but also for temporal and spectral modulations. In both species, posterior regions preferably encoded relatively fast temporal and coarse spectral information, whereas anterior regions encoded slow temporal and fine spectral modulations. Conversely, we observed a striking interspecies difference in cortical sensitivity to temporal modulations: While decoding from macaque auditory cortex was most accurate at fast rates (> 30 Hz), humans had highest sensitivity to ~3 Hz, a relevant rate for speech analysis. These findings suggest that characteristic tuning of human auditory cortex to slow temporal modulations is unique and may have emerged as a critical step in the evolution of speech and language.
Affiliation(s)
- Julia Erb
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), Maastricht, The Netherlands
- Department of Psychology, University of Lübeck, Lübeck, Germany
- Federico De Martino
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), Maastricht, The Netherlands
- Rainer Goebel
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), Maastricht, The Netherlands
- Wim Vanduffel
- Laboratorium voor Neuro-en Psychofysiologie, KU Leuven, Leuven, Belgium
- MGH Martinos Center, Charlestown, MA, USA
- Harvard Medical School, Boston, MA, USA
- Leuven Brain Institute, Leuven, Belgium
- Elia Formisano
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), Maastricht, The Netherlands
- Maastricht Center for Systems Biology (MaCSBio), Maastricht, The Netherlands
18
Chaplin TA, Rosa MGP, Lui LL. Auditory and Visual Motion Processing and Integration in the Primate Cerebral Cortex. Front Neural Circuits 2018; 12:93. [PMID: 30416431 PMCID: PMC6212655 DOI: 10.3389/fncir.2018.00093]
Abstract
The ability of animals to detect motion is critical for survival, and errors or even delays in motion perception may prove costly. In the natural world, moving objects in the visual field often produce concurrent sounds. Thus, it can be highly advantageous to detect motion from sensory signals of either modality, and to integrate them to produce more reliable motion perception. A great deal of progress has been made in understanding how visual motion perception is governed by the activity of single neurons in the primate cerebral cortex, but far less progress has been made in understanding both auditory motion and audiovisual motion integration. Here we review the key cortical regions for motion processing, focussing on translational motion. We compare the representations of space and motion in the visual and auditory systems, and examine how single neurons in these two sensory systems encode the direction of motion. We also discuss the way in which humans integrate audio and visual motion cues, and the regions of the cortex that may mediate this process.
Affiliation(s)
- Tristan A Chaplin
- Neuroscience Program, Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, VIC, Australia; Australian Research Council (ARC) Centre of Excellence for Integrative Brain Function, Monash University Node, Clayton, VIC, Australia
- Marcello G P Rosa
- Neuroscience Program, Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, VIC, Australia; Australian Research Council (ARC) Centre of Excellence for Integrative Brain Function, Monash University Node, Clayton, VIC, Australia
- Leo L Lui
- Neuroscience Program, Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, VIC, Australia; Australian Research Council (ARC) Centre of Excellence for Integrative Brain Function, Monash University Node, Clayton, VIC, Australia
19
Not All Predictions Are Equal: "What" and "When" Predictions Modulate Activity in Auditory Cortex through Different Mechanisms. J Neurosci 2018; 38:8680-8693. [PMID: 30143578 DOI: 10.1523/jneurosci.0369-18.2018]
Abstract
Using predictions based on environmental regularities is fundamental for adaptive behavior. While it is widely accepted that predictions across different stimulus attributes (e.g., time and content) facilitate sensory processing, it is unknown whether predictions across these attributes rely on the same neural mechanism. Here, to elucidate the neural mechanisms of predictions, we combine invasive electrophysiological recordings (human electrocorticography in 4 females and 2 males) with computational modeling while manipulating predictions about content ("what") and time ("when"). We found that "when" predictions increased evoked activity over motor and prefrontal regions both at early (∼180 ms) and late (430-450 ms) latencies. "What" predictability, however, increased evoked activity only over prefrontal areas late in time (420-460 ms). Beyond these dissociable influences, we found that "what" and "when" predictability interactively modulated the amplitude of early (165 ms) evoked responses in the superior temporal gyrus. We modeled the observed neural responses using biophysically realistic neural mass models, to better understand whether "what" and "when" predictions tap into similar or different neurophysiological mechanisms. Our modeling results suggest that "what" and "when" predictability rely on complementary neural processes: "what" predictions increased short-term plasticity in auditory areas, whereas "when" predictability increased synaptic gain in motor areas. Thus, content and temporal predictions engage complementary neural mechanisms in different regions, suggesting domain-specific prediction signaling along the cortical hierarchy. Encoding predictions through different mechanisms may endow the brain with the flexibility to efficiently signal different sources of predictions, weight them by their reliability, and allow for their encoding without mutual interference.
SIGNIFICANCE STATEMENT: Predictions of different stimulus features facilitate sensory processing. However, it is unclear whether predictions of different attributes rely on similar or different neural mechanisms. By combining invasive electrophysiological recordings of cortical activity with experimental manipulations of participants' predictions about content and time of acoustic events, we found that the two types of predictions had dissociable influences on cortical activity, both in terms of the regions involved and the timing of the observed effects. Further, our biophysical modeling analysis suggests that predictability of content and time rely on complementary neural processes: short-term plasticity in auditory areas and synaptic gain in motor areas, respectively. This suggests that predictions of different features are encoded with complementary neural mechanisms in different brain regions.
20
Hjortkjær J, Kassuba T, Madsen KH, Skov M, Siebner HR. Task-Modulated Cortical Representations of Natural Sound Source Categories. Cereb Cortex 2018; 28:295-306. [PMID: 29069292 DOI: 10.1093/cercor/bhx263]
Abstract
In everyday sound environments, we recognize sound sources and events by attending to relevant aspects of an acoustic input. Evidence about the cortical mechanisms involved in extracting relevant category information from natural sounds is, however, limited to speech. Here, we used functional MRI to measure cortical response patterns while human listeners categorized real-world sounds created by objects of different solid materials (glass, metal, wood) manipulated by different sound-producing actions (striking, rattling, dropping). In different sessions, subjects had to identify either material or action categories in the same sound stimuli. The sound-producing action and the material of the sound source could be decoded from multivoxel activity patterns in auditory cortex, including Heschl's gyrus and planum temporale. Importantly, decoding success depended on task relevance and category discriminability. Action categories were more accurately decoded in auditory cortex when subjects identified action information. Conversely, the material of the same sound sources was decoded with higher accuracy in the inferior frontal cortex during material identification. Representational similarity analyses indicated that both early and higher-order auditory cortex selectively enhanced spectrotemporal features relevant to the target category. Together, the results indicate a cortical selection mechanism that favors task-relevant information in the processing of nonvocal sound categories.
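The multivoxel decoding logic referred to above can be sketched in a few lines: train a linear classifier on voxel activation patterns and estimate accuracy with cross-validation. The data below are random placeholders (so accuracy should sit near chance), and the classifier choice is an assumption rather than the study's exact analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Minimal multi-voxel pattern decoding sketch: predict the sound category
# (e.g. material: glass/metal/wood) from voxel patterns, cross-validated.

rng = np.random.default_rng(2)
n_trials, n_voxels = 90, 500
X = rng.normal(size=(n_trials, n_voxels))     # one activation pattern per trial
y = np.repeat([0, 1, 2], n_trials // 3)       # three material categories

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)     # 5-fold cross-validation
print(f"decoding accuracy: {scores.mean():.2f} (chance ~ 0.33)")
```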
Affiliation(s)
- Jens Hjortkjær
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Hvidovre, 2650 Hvidovre, Denmark; Hearing Systems Group, Department of Electrical Engineering, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark
- Tanja Kassuba
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
- Kristoffer H Madsen
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Hvidovre, 2650 Hvidovre, Denmark; Cognitive Systems, Department of Applied Mathematics and Computer Science, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark
- Martin Skov
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Hvidovre, 2650 Hvidovre, Denmark; Decision Neuroscience Research Group, Copenhagen Business School, 2000 Frederiksberg, Denmark
- Hartwig R Siebner
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Hvidovre, 2650 Hvidovre, Denmark; Department of Neurology, Copenhagen University Hospital Bispebjerg, Copenhagen, 2400 København NV, Denmark
21
Chronometry on Spike-LFP Responses Reveals the Functional Neural Circuitry of Early Auditory Cortex Underlying Sound Processing and Discrimination. eNeuro 2018; 5:ENEURO.0420-17.2018. [PMID: 29971252 PMCID: PMC6028825 DOI: 10.1523/eneuro.0420-17.2018]
Abstract
Animals and humans rapidly detect specific features of sounds, but the time courses of the underlying neural responses to different stimulus categories are largely unknown. Furthermore, the intricate functional organization of auditory information processing pathways is poorly understood. Here, we computed neuronal response latencies from simultaneously recorded spike trains and local field potentials (LFPs) along the first two stages of cortical sound processing, primary auditory cortex (A1) and lateral belt (LB), of awake, behaving macaques. Two types of response latencies were measured for spike trains as well as LFPs: (1) onset latency, time-locked to onset of external auditory stimuli; and (2) selection latency, the time taken from stimulus onset to a selective response to a specific stimulus category. Trial-by-trial LFP onset latencies, predominantly reflecting synaptic input arrival, typically preceded spike onset latencies, assumed to be representative of neuronal output, indicating that both areas may receive incoming environmental signals and relay the information to the next stage. In A1, simple sounds, such as pure tones (PTs), yielded shorter spike onset latencies compared to complex sounds, such as monkey vocalizations ("Coos"). This trend was reversed in LB, indicating a hierarchical functional organization of auditory cortex in the macaque. LFP selection latencies in A1 were always shorter than those in LB for both PT and Coo stimuli, reflecting the serial arrival of stimulus-specific information in these areas. Thus, chronometry on spike-LFP signals revealed some of the effective neural circuitry underlying complex sound discrimination.
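A minimal sketch of one of the paper's core measurements, onset latency from a peri-stimulus time histogram (PSTH), is given below. The threshold rule (baseline mean + 3 SD) and all synthetic-data parameters are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

# Minimal onset-latency sketch: find the first post-stimulus bin where
# the trial-averaged firing rate exceeds baseline mean + 3 SD.

rng = np.random.default_rng(3)
dt = 0.001                                  # 1 ms bins
t = np.arange(-0.1, 0.3, dt)                # 100 ms baseline, 300 ms response
n_trials = 50
base_rate = 5.0                             # background rate, spikes/s

# Synthetic PSTH: Poisson background plus an evoked response from 20 ms
rate = np.full(t.size, base_rate)
rate[t >= 0.02] += 40.0 * np.exp(-(t[t >= 0.02] - 0.02) / 0.05)
spikes = rng.poisson(rate * dt, size=(n_trials, t.size))
psth = spikes.mean(axis=0) / dt             # trial-averaged rate (spikes/s)

baseline = psth[t < 0]
threshold = baseline.mean() + 3 * baseline.std()
post = t >= 0
crossing = np.argmax(psth[post] > threshold)   # index of first suprathreshold bin
print(f"estimated onset latency: {t[post][crossing] * 1e3:.0f} ms")
```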
22
Rauschecker JP. Where, When, and How: Are they all sensorimotor? Towards a unified view of the dorsal pathway in vision and audition. Cortex 2018; 98:262-268. [PMID: 29183630 PMCID: PMC5771843 DOI: 10.1016/j.cortex.2017.10.020]
Abstract
Dual processing streams in sensory systems have been postulated for a long time. Much experimental evidence has been accumulated from behavioral, neuropsychological, electrophysiological, neuroanatomical and neuroimaging work supporting the existence of largely segregated cortical pathways in both vision and audition. More recently, debate has returned to the question of overlap between these pathways and whether there aren't really more than two processing streams. The present piece defends the dual-system view. Focusing on the functions of the dorsal stream in the auditory and language system, I try to reconcile the various models of Where, How and When into one coherent concept of sensorimotor integration. This framework incorporates principles of internal models in feedback control systems and is applicable to the visual system as well.
Affiliation(s)
- Josef P Rauschecker: Laboratory of Integrative Neuroscience and Cognition, Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA; Institute for Advanced Study, Technische Universität München, Garching bei München, Germany
23
Poirier C, Baumann S, Dheerendra P, Joly O, Hunter D, Balezeau F, Sun L, Rees A, Petkov CI, Thiele A, Griffiths TD. Auditory motion-specific mechanisms in the primate brain. PLoS Biol 2017; 15:e2001379. [PMID: 28472038 PMCID: PMC5417421 DOI: 10.1371/journal.pbio.2001379]
Abstract
This work examined the mechanisms underlying auditory motion processing in the auditory cortex of awake monkeys using functional magnetic resonance imaging (fMRI). We tested to what extent auditory motion analysis can be explained by the linear combination of static spatial mechanisms, spectrotemporal processes, and their interaction. We found that the posterior auditory cortex, including A1 and the surrounding caudal belt and parabelt, is involved in auditory motion analysis. Static spatial and spectrotemporal processes were able to fully explain motion-induced activation in most parts of the auditory cortex, including A1, but not in circumscribed regions of the posterior belt and parabelt cortex. We show that in these regions motion-specific processes contribute to the activation, providing the first demonstration that auditory motion is not simply deduced from changes in static spatial location. These results demonstrate that parallel mechanisms for motion and static spatial analysis coexist within the auditory dorsal stream.
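The linear-combination test sketched in this abstract amounts to regressing the motion response on the static spatial and spectrotemporal responses plus their interaction, and asking how much variance is left over. A schematic version on synthetic data (effect sizes and variable names are illustrative, not the study's actual model):

    import numpy as np

    rng = np.random.default_rng(2)
    n = 120
    static = rng.standard_normal(n)       # responses to static spatial stimuli
    spectemp = rng.standard_normal(n)     # responses to spectrotemporal stimuli
    motion = (0.6 * static + 0.3 * spectemp
              + 0.2 * static * spectemp
              + 0.1 * rng.standard_normal(n))   # toy motion-induced response

    # Design: intercept + static + spectrotemporal + interaction.
    X = np.column_stack([np.ones(n), static, spectemp, static * spectemp])
    beta, ss_res, *_ = np.linalg.lstsq(X, motion, rcond=None)
    r2 = 1.0 - ss_res[0] / np.sum((motion - motion.mean()) ** 2)
    print(f"R^2 = {r2:.2f}")   # unexplained variance would point to motion-specific processes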
Affiliation(s)
- All authors: Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
- Corresponding authors: C. Poirier and T. D. Griffiths
24
Ortiz-Rios M, Azevedo FAC, Kuśmierek P, Balla DZ, Munk MH, Keliris GA, Logothetis NK, Rauschecker JP. Widespread and Opponent fMRI Signals Represent Sound Location in Macaque Auditory Cortex. Neuron 2017; 93:971-983.e4. [PMID: 28190642 DOI: 10.1016/j.neuron.2017.01.013]
Abstract
In primates, posterior auditory cortical areas are thought to be part of a dorsal auditory pathway that processes spatial information. However, how posterior (and other) auditory areas represent acoustic space remains a matter of debate. Here we provide new evidence based on functional magnetic resonance imaging (fMRI) of the macaque indicating that space is predominantly represented by a distributed hemifield code rather than by a local spatial topography. Hemifield tuning in cortical and subcortical regions emerges from an opponent hemispheric pattern of activation and deactivation that depends on the availability of interaural delay cues. Importantly, these opponent signals allow responses in posterior regions to segregate space similarly to a hemifield code representation. Taken together, our results reconcile seemingly contradictory views by showing that the representation of space closely follows a hemifield code and suggest that enhanced posterior-dorsal spatial specificity in primates might emerge from this form of coding.
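A hemifield (opponent-channel) code of the kind reported here carries location in the relative drive of two broadly tuned channels, one per hemifield, rather than in a point-to-point map. A toy readout showing that two such channels suffice to recover azimuth (the sigmoidal tuning curves and their 20-degree slope are illustrative assumptions, not fits to the data):

    import numpy as np

    def channels(az_deg):
        # Two opponent channels, each broadly preferring one hemifield.
        right = 1.0 / (1.0 + np.exp(-np.asarray(az_deg) / 20.0))
        return 1.0 - right, right        # left channel mirrors the right

    # The opponent signal (right - left) is monotonic in azimuth,
    # so a lookup table inverts it.
    grid = np.linspace(-90.0, 90.0, 181)
    left_g, right_g = channels(grid)
    opponent_g = right_g - left_g

    def decode(az_true):
        left, right = channels(az_true)
        return grid[np.argmin(np.abs(opponent_g - (right - left)))]

    print(decode(-37.0))   # -37.0: azimuth recovered from two channels alone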
Affiliation(s)
- Michael Ortiz-Rios: Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Spemannstraße 36, 72072 Tübingen, Germany; Graduate School of Neural & Behavioural Sciences, International Max Planck Research School (IMPRS), University of Tübingen, Österbergstraße 3, 72074 Tübingen, Germany; Department of Neuroscience, Georgetown University Medical Center, 3970 Reservoir Road NW, Washington, DC 20057, USA; Institute of Neuroscience, Henry Wellcome Building, Medical School, Framlington Place, Newcastle upon Tyne, NE2 4HH, UK
- Frederico A C Azevedo: Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Spemannstraße 36, 72072 Tübingen, Germany; Graduate School of Neural & Behavioural Sciences, International Max Planck Research School (IMPRS), University of Tübingen, Österbergstraße 3, 72074 Tübingen, Germany
- Paweł Kuśmierek: Department of Neuroscience, Georgetown University Medical Center, 3970 Reservoir Road NW, Washington, DC 20057, USA
- Dávid Z Balla: Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Spemannstraße 36, 72072 Tübingen, Germany
- Matthias H Munk: Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Spemannstraße 36, 72072 Tübingen, Germany; Department of Systems Neurophysiology, Fachbereich Biologie, Technische Universität Darmstadt, Schnittspahnstraße 10, 64287 Darmstadt, Germany
- Georgios A Keliris: Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Spemannstraße 36, 72072 Tübingen, Germany; Bio-Imaging Lab, Department of Biomedical Sciences, University of Antwerp, Wilrijk 2610, Belgium
- Nikos K Logothetis: Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Spemannstraße 36, 72072 Tübingen, Germany; Division of Imaging Science and Biomedical Engineering, University of Manchester, Manchester M13 9PL, UK
- Josef P Rauschecker: Department of Neuroscience, Georgetown University Medical Center, 3970 Reservoir Road NW, Washington, DC 20057, USA; Institute for Advanced Study, Technische Universität München, Lichtenbergstraße 2a, 85748 Garching, Germany
25
Rhythmic entrainment as a musical affect induction mechanism. Neuropsychologia 2017; 96:96-110. [DOI: 10.1016/j.neuropsychologia.2017.01.004]
26
Abstract
The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults in a functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior-lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain.
27
Abstract
Functional and anatomical studies have clearly demonstrated that auditory cortex is populated by multiple subfields. However, functional characterization of those fields has been largely the domain of animal electrophysiology, limiting the extent to which human and animal research can inform each other. In this study, we used high-resolution functional magnetic resonance imaging to characterize human auditory cortical subfields using a variety of low-level acoustic features in the spectral and temporal domains. Specifically, we show that topographic gradients of frequency preference, or tonotopy, extend along two axes in human auditory cortex, thus reconciling historical accounts of a tonotopic axis oriented medial to lateral along Heschl's gyrus and more recent findings emphasizing tonotopic organization along the anterior-posterior axis. Contradictory findings regarding topographic organization according to temporal modulation rate in acoustic stimuli, or "periodotopy," are also addressed. Although isolated subregions show a preference for high rates of amplitude-modulated white noise (AMWN) in our data, large-scale "periodotopic" organization was not found. Organization by AM rate was correlated with dominant pitch percepts in AMWN in many regions. In short, our data expose early auditory cortex chiefly as a frequency analyzer, and spectral frequency, as imposed by the sensory receptor surface in the cochlea, seems to be the dominant feature governing large-scale topographic organization across human auditory cortex.

SIGNIFICANCE STATEMENT: In this study, we examine the nature of topographic organization in human auditory cortex with fMRI. Topographic organization by spectral frequency (tonotopy) extended in two directions: medial to lateral, consistent with early neuroimaging studies, and anterior to posterior, consistent with more recent reports. Large-scale organization by rates of temporal modulation (periodotopy) was correlated with confounding spectral content of amplitude-modulated white-noise stimuli. Together, our results suggest that the organization of human auditory cortex is driven primarily by its response to spectral acoustic features, and large-scale periodotopy spanning multiple regions is not supported. This fundamental information regarding the functional organization of early auditory cortex will inform our growing understanding of speech perception and the processing of other complex sounds.
28
Leaver AM, Turesky TK, Seydell-Greenwald A, Morgan S, Kim HJ, Rauschecker JP. Intrinsic network activity in tinnitus investigated using functional MRI. Hum Brain Mapp 2016; 37:2717-35. [PMID: 27091485 PMCID: PMC4945432 DOI: 10.1002/hbm.23204]
Abstract
Tinnitus is an increasingly common disorder in which patients experience phantom auditory sensations, usually ringing or buzzing in the ear. Tinnitus pathophysiology has been repeatedly shown to involve both auditory and non-auditory brain structures, making network-level studies of tinnitus critical. In this magnetic resonance imaging (MRI) study, two resting-state functional connectivity (RSFC) approaches were used to better understand functional network disturbances in tinnitus. First, we demonstrated tinnitus-related reductions in RSFC between specific brain regions and resting-state networks (RSNs), defined by independent components analysis (ICA) and chosen for their overlap with structures known to be affected in tinnitus. Then, we restricted ICA to data from tinnitus patients, and identified one RSN not apparent in control data. This tinnitus RSN included auditory-sensory regions like inferior colliculus and medial Heschl's gyrus, as well as classically non-auditory regions like the mediodorsal nucleus of the thalamus, striatum, lateral prefrontal, and orbitofrontal cortex. Notably, patients' reported tinnitus loudness was positively correlated with RSFC between the mediodorsal nucleus and the tinnitus RSN, indicating that this network may underlie the auditory-sensory experience of tinnitus. These data support the idea that tinnitus involves network dysfunction, and further stress the importance of communication between auditory-sensory and fronto-striatal circuits in tinnitus pathophysiology.
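At its core, the region-to-network RSFC measure used here is a correlation between a region's resting time course and a network time course, Fisher-transformed for group statistics. A minimal sketch with synthetic signals standing in for preprocessed fMRI data (all names and coefficients are illustrative):

    import numpy as np

    rng = np.random.default_rng(3)
    n_vol = 240                                   # resting-state volumes
    network_tc = rng.standard_normal(n_vol)       # ICA network time course
    region_tc = 0.4 * network_tc + rng.standard_normal(n_vol)  # seed region

    r = np.corrcoef(region_tc, network_tc)[0, 1]  # Pearson RSFC
    z = np.arctanh(r)                             # Fisher z for group tests
    print(f"RSFC r = {r:.2f}, Fisher z = {z:.2f}")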
Affiliation(s)
- Amber M Leaver: Department of Neuroscience, Georgetown University Medical Center, Washington, District of Columbia; Department of Neurology, University of California Los Angeles, Los Angeles, California
- Ted K Turesky: Department of Neuroscience, Georgetown University Medical Center, Washington, District of Columbia
- Anna Seydell-Greenwald: Department of Neuroscience, Georgetown University Medical Center, Washington, District of Columbia
- Susan Morgan: Division of Audiology, Medstar Georgetown University Hospital, Washington, District of Columbia
- Hung J Kim: Department of Otolaryngology, Medstar Georgetown University Hospital, Washington, District of Columbia
- Josef P Rauschecker: Department of Neuroscience, Georgetown University Medical Center, Washington, District of Columbia; Institute for Advanced Study, TU Munich, Germany
29
Abstract
Complex audio-vocal integration systems depend on a strong interconnection between the auditory system and the vocal motor system. Gaining cognitive control over audio-vocal interaction during vocal motor control requires involvement of the prefrontal cortex (PFC). Neurons in the ventrolateral PFC (VLPFC) have been shown to separately encode the sensory perception and motor production of vocalizations. It is unknown, however, whether single neurons in the PFC reflect audio-vocal interactions. We therefore recorded single-unit activity in the VLPFC of rhesus monkeys (Macaca mulatta) while they produced vocalizations on command or passively listened to monkey calls. We found that 12% of randomly selected neurons in VLPFC modulated their discharge rate in response to acoustic stimulation with species-specific calls. Almost three-fourths of these auditory neurons showed an additional modulation of their discharge rates before and/or during the monkeys' motor production of vocalization. Based on these audio-vocal interactions, the VLPFC might be well positioned to combine higher-order auditory processing with cognitive control of the vocal motor output. Such audio-vocal integration processes in the VLPFC might constitute a precursor for the evolution of complex learned audio-vocal integration systems, ultimately giving rise to human speech.
30
Lui LL, Mokri Y, Reser DH, Rosa MGP, Rajan R. Responses of neurons in the marmoset primary auditory cortex to interaural level differences: comparison of pure tones and vocalizations. Front Neurosci 2015; 9:132. [PMID: 25941469 PMCID: PMC4403308 DOI: 10.3389/fnins.2015.00132]
Abstract
Interaural level differences (ILDs) are the dominant cue for localizing the sources of high-frequency sounds that differ in azimuth. Neurons in the primary auditory cortex (A1) respond differentially to ILDs of simple stimuli such as tones and noise bands, but the extent to which this applies to complex natural sounds, such as vocalizations, is not known. In sufentanil/N2O-anesthetized marmosets, we compared the responses of 76 A1 neurons to three vocalizations (Ock, Tsik, and Twitter) and to pure tones at each cell's characteristic frequency. Each stimulus was presented with ILDs ranging from 20 dB favoring the contralateral ear to 20 dB favoring the ipsilateral ear, covering most of the frontal azimuthal space. The response to each stimulus was tested at three average binaural levels (ABLs). Most neurons were sensitive to the ILDs of both vocalizations and pure tones. For all stimuli, the majority of cells had monotonic ILD sensitivity functions favoring the contralateral ear, but we also observed ILD sensitivity functions that peaked near the midline and functions favoring the ipsilateral ear. Representation of ILD in A1 was better for pure tones and the Ock vocalization than for the Tsik and Twitter calls; this was reflected in higher discrimination indices and greater modulation ranges. ILD sensitivity was heavily dependent on ABL: changes in ABL by ±20 dB SPL from the optimal level for ILD sensitivity led to significant decreases in ILD sensitivity for all stimuli, although ILD sensitivity to pure tones and Ock calls was most robust to such ABL changes. Our results demonstrate differences in ILD coding for pure tones and vocalizations, showing that ILD sensitivity in A1 to complex sounds cannot simply be extrapolated from that to pure tones. They also show that A1 neurons do not display a level-invariant representation of ILD, suggesting that such a representation of auditory space is likely to require population coding and further processing at subsequent hierarchical stages.
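The modulation range and discrimination index referred to above summarize how strongly firing is modulated across the tested ILDs. A toy computation on made-up spike rates (one common form of the index; the paper's exact formula may differ):

    import numpy as np

    ilds = np.array([-20, -10, 0, 10, 20])           # dB; positive favors ipsilateral ear
    rates = np.array([34.0, 28.0, 19.0, 11.0, 7.0])  # toy contra-preferring neuron

    mod_range = rates.max() - rates.min()            # span of the ILD function (spikes/s)
    di = (rates.max() - rates.min()) / (rates.max() + rates.min())  # 0 = flat, 1 = fully modulated
    print(f"modulation range = {mod_range:.1f} spikes/s, DI = {di:.2f}")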
Affiliation(s)
- Leo L Lui: Department of Physiology, Monash University, Clayton, VIC, Australia; Australian Research Council Centre of Excellence for Integrative Brain Function, Monash University, Clayton, VIC, Australia
- Yasamin Mokri: Department of Physiology, Monash University, Clayton, VIC, Australia
- David H Reser: Department of Physiology, Monash University, Clayton, VIC, Australia
- Marcello G P Rosa: Department of Physiology, Monash University, Clayton, VIC, Australia; Australian Research Council Centre of Excellence for Integrative Brain Function, Monash University, Clayton, VIC, Australia
- Ramesh Rajan: Department of Physiology, Monash University, Clayton, VIC, Australia; Australian Research Council Centre of Excellence for Integrative Brain Function, Monash University, Clayton, VIC, Australia; Ear Sciences Institute of Australia, Subiaco, WA, Australia
31
Ortiz-Rios M, Kuśmierek P, DeWitt I, Archakov D, Azevedo FAC, Sams M, Jääskeläinen IP, Keliris GA, Rauschecker JP. Functional MRI of the vocalization-processing network in the macaque brain. Front Neurosci 2015; 9:113. [PMID: 25883546 PMCID: PMC4381638 DOI: 10.3389/fnins.2015.00113]
Abstract
Using functional magnetic resonance imaging in awake behaving monkeys, we investigated how species-specific vocalizations are represented in auditory and auditory-related regions of the macaque brain. We found clusters of active voxels that responded to various types of complex sounds along the ascending auditory pathway: the inferior colliculus (IC), medial geniculate nucleus (MGN), auditory core, belt, and parabelt cortex, and other parts of the superior temporal gyrus (STG) and sulcus (STS). Regions sensitive to monkey calls were most prevalent in the anterior STG, but some clusters were also found in frontal and parietal cortex on the basis of comparisons between responses to calls and environmental sounds. Surprisingly, we found that spectrotemporal control sounds derived from the monkey calls ("scrambled calls") also activated the parietal and frontal regions. Taken together, our results demonstrate that species-specific vocalizations in rhesus monkeys preferentially activate the auditory ventral stream, and in particular areas of the antero-lateral belt and parabelt.
Affiliation(s)
- Michael Ortiz-Rios: Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA; Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; IMPRS for Cognitive and Systems Neuroscience, Tübingen, Germany
- Paweł Kuśmierek: Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA
- Iain DeWitt: Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA
- Denis Archakov: Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA; Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Aalto, Finland
- Frederico A C Azevedo: Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; IMPRS for Cognitive and Systems Neuroscience, Tübingen, Germany
- Mikko Sams: Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Aalto, Finland
- Iiro P Jääskeläinen: Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Aalto, Finland
- Georgios A Keliris: Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Bernstein Centre for Computational Neuroscience, Tübingen, Germany; Department of Biomedical Sciences, University of Antwerp, Wilrijk, Belgium
- Josef P Rauschecker: Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA; Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Aalto, Finland; Institute for Advanced Study and Department of Neurology, Klinikum rechts der Isar, Technische Universität München, München, Germany
32
Bezgin G, Rybacki K, van Opstal AJ, Bakker R, Shen K, Vakorin VA, McIntosh AR, Kötter R. Auditory-prefrontal axonal connectivity in the macaque cortex: quantitative assessment of processing streams. Brain Lang 2014; 135:73-84. [PMID: 24980416 DOI: 10.1016/j.bandl.2014.05.006]
Abstract
Primate sensory systems subserve complex neurocomputational functions. Consequently, these systems are organised anatomically in a distributed fashion, commonly linking areas to form specialised processing streams. Each stream is related to a specific function, as evidenced by studies of the visual cortex, which features rather prominent segregation into spatial and non-spatial domains. It has been hypothesised that other sensory systems, including the auditory system, are organised in a similar way at the cortical level. Recent studies offer rich qualitative evidence for the dual-stream hypothesis. Here we provide a new paradigm to quantitatively uncover these patterns in the auditory system, based on an analysis of multiple anatomical studies using multivariate techniques. As a test case, we also apply our assessment techniques to the more ubiquitously explored visual system. Importantly, the introduced framework opens the possibility for these techniques to be applied to other neural systems featuring a dichotomised organisation, such as language or music perception.
Affiliation(s)
- Gleb Bezgin: Rotman Research Institute of Baycrest Centre, University of Toronto, Toronto, Ontario M6A 2E1, Canada; Department of Neuroinformatics, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, 6525 AJ Nijmegen, The Netherlands; C. & O. Vogt Brain Research Institute, Heinrich Heine University, D-40225 Düsseldorf, Germany; Institute of Computer Science, Heinrich Heine University, D-40225 Düsseldorf, Germany
- Konrad Rybacki: C. & O. Vogt Brain Research Institute, Heinrich Heine University, D-40225 Düsseldorf, Germany; Department of Diagnostic and Interventional Neuroradiology, HELIOS Medical Center Wuppertal, University Hospital Witten/Herdecke, Wuppertal, Germany
- A John van Opstal: Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, 6525 AJ Nijmegen, The Netherlands
- Rembrandt Bakker: Department of Neuroinformatics, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, 6525 AJ Nijmegen, The Netherlands; Institute of Neuroscience and Medicine (INM-6), Research Center Jülich, Germany; Department of Biology II, Ludwig-Maximilians-Universität München, Germany
- Kelly Shen: Rotman Research Institute of Baycrest Centre, University of Toronto, Toronto, Ontario M6A 2E1, Canada
- Vasily A Vakorin: Rotman Research Institute of Baycrest Centre, University of Toronto, Toronto, Ontario M6A 2E1, Canada; The Hospital for Sick Children, University of Toronto, Toronto, Ontario, Canada
- Anthony R McIntosh: Rotman Research Institute of Baycrest Centre, University of Toronto, Toronto, Ontario M6A 2E1, Canada; Department of Psychology, University of Toronto, Toronto, Ontario M5S 3G3, Canada
- Rolf Kötter: Department of Neuroinformatics, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, 6525 AJ Nijmegen, The Netherlands; C. & O. Vogt Brain Research Institute, Heinrich Heine University, D-40225 Düsseldorf, Germany
33
Shrem T, Deouell LY. Frequency-dependent auditory space representation in the human planum temporale. Front Hum Neurosci 2014; 8:524. [PMID: 25100973 PMCID: PMC4106454 DOI: 10.3389/fnhum.2014.00524]
Abstract
Functional magnetic resonance imaging (fMRI) findings suggest that a part of the planum temporale (PT) is involved in representing spatial properties of acoustic information. Here, we tested whether this representation of space is frequency-dependent or generalizes across spectral content, as required of high-order sensory representations. Using sounds with two different spectral contents and two spatial locations in an individually tailored virtual acoustic environment, we compared three conditions in a sparse fMRI experiment: Single Location, in which both sounds were presented from one location; Fixed Mapping, in which there was a one-to-one mapping between the two sounds and the two locations; and Mixed Mapping, in which the two sounds were equally likely to appear at either of the two locations. We surmised that only neurons tuned to both location and frequency should be differentially adapted by the Mixed and Fixed mappings. Replicating our previous findings, we found adaptation to spatial location in the PT. Importantly, activation was higher for Mixed Mapping than for Fixed Mapping blocks, even though the two sounds and the two locations appeared equally often in both conditions. These results show that spatially tuned neurons in the human PT are not invariant to the spectral content of sounds.
Affiliation(s)
- Talia Shrem: Human Cognitive Neuroscience Lab, Department of Psychology, Social Sciences Faculty, The Hebrew University of Jerusalem, Jerusalem, Israel
- Leon Y Deouell: Human Cognitive Neuroscience Lab, Department of Psychology, Social Sciences Faculty, The Hebrew University of Jerusalem, Jerusalem, Israel; Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
34
Joly O, Baumann S, Balezeau F, Thiele A, Griffiths TD. Merging functional and structural properties of the monkey auditory cortex. Front Neurosci 2014; 8:198. [PMID: 25100930 PMCID: PMC4104553 DOI: 10.3389/fnins.2014.00198]
Abstract
Recent neuroimaging studies in primates aim to define the functional properties of auditory cortical areas, especially areas beyond A1, in order to further our understanding of auditory cortical organization. Precise mapping of functional magnetic resonance imaging (fMRI) results, and the interpretation of their localization among the many small auditory subfields, remains challenging. To facilitate this mapping, we combined information from cortical folding, micro-anatomy, a surface-based atlas, and tonotopic mapping. We used, for the first time, a phase-encoded fMRI design to map the monkey tonotopic organization. From posterior to anterior, we found a high-low-high progression of frequency preference on the superior temporal plane. We show a faithful representation of the fMRI results on a locally flattened surface of the superior temporal plane. In a tentative scheme to delineate core versus belt regions, which share similar tonotopic organizations, we used the ratio of T1-weighted and T2-weighted MR images as a measure of cortical myelination. Our results, presented on a co-registered surface-based atlas, can be interpreted in terms of a current model of the monkey auditory cortex.
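In a phase-encoded design the stimulus sweeps cyclically through frequency, so a voxel's preferred frequency can be read out from the phase of its response at the sweep repetition frequency. A single-voxel sketch of that readout on synthetic data (the sweep range, cycle count, and noise level are illustrative assumptions):

    import numpy as np

    n_vol, n_cycles = 200, 10            # volumes per run, sweep cycles per run
    t = np.arange(n_vol)
    true_phase = 1.3                     # voxel's preferred point in the sweep (rad)
    ts = np.cos(2 * np.pi * n_cycles * t / n_vol - true_phase)
    ts += 0.5 * np.random.default_rng(1).standard_normal(n_vol)

    spec = np.fft.rfft(ts)
    phase = -np.angle(spec[n_cycles])    # response phase at the stimulus frequency bin
    frac = (phase % (2 * np.pi)) / (2 * np.pi)
    pref_khz = 0.25 * (16.0 / 0.25) ** frac   # map phase onto a 0.25-16 kHz log sweep
    print(f"recovered phase = {phase:.2f} rad -> preferred ~ {pref_khz:.2f} kHz")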
Affiliation(s)
- All authors: Auditory Group, Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK
35
Patel AD, Iversen JR. The evolutionary neuroscience of musical beat perception: the Action Simulation for Auditory Prediction (ASAP) hypothesis. Front Syst Neurosci 2014; 8:57. [PMID: 24860439 PMCID: PMC4026735 DOI: 10.3389/fnsys.2014.00057]
Abstract
Every human culture has some form of music with a beat: a perceived periodic pulse that structures the perception of musical rhythm and serves as a framework for synchronized movement to music. What are the neural mechanisms of musical beat perception, and how did they evolve? One view, which dates back to Darwin and implicitly informs some current models of beat perception, is that the relevant neural mechanisms are relatively general and are widespread among animal species. On the basis of recent neural and cross-species data on musical beat processing, this paper argues for a different view. Here we argue that beat perception is a complex brain function involving temporally precise communication between auditory regions and motor planning regions of the cortex (even in the absence of overt movement). More specifically, we propose that simulation of periodic movement in motor planning regions provides a neural signal that helps the auditory system predict the timing of upcoming beats. This "action simulation for auditory prediction" (ASAP) hypothesis leads to testable predictions. We further suggest that ASAP relies on dorsal auditory pathway connections between auditory regions and motor planning regions via the parietal cortex, and that these connections may be stronger in humans than in non-human primates due to the evolution of vocal learning in our lineage. This suggestion motivates cross-species research to determine which species are capable of human-like beat perception, i.e., beat perception that involves accurate temporal prediction of beat times across a fairly broad range of tempi.
Affiliation(s)
- John R. Iversen: Swartz Center for Computational Neuroscience, Institute for Neural Computation, University of California San Diego, La Jolla, CA, USA