1. Bianco R, Zuk NJ, Bigand F, Quarta E, Grasso S, Arnese F, Ravignani A, Battaglia-Mayer A, Novembre G. Neural encoding of musical expectations in a non-human primate. Curr Biol 2024; 34:444-450.e5. [PMID: 38176416] [DOI: 10.1016/j.cub.2023.12.019]
Abstract
The appreciation of music is a universal trait of humankind.1,2,3 Evidence supporting this notion includes the ubiquity of music across cultures4,5,6,7 and the natural predisposition toward music that humans display early in development.8,9,10 Are we musical animals because of species-specific predispositions? This question cannot be answered by relying on cross-cultural or developmental studies alone, as these cannot rule out enculturation.11 Instead, it calls for cross-species experiments testing whether homologous neural mechanisms underlying music perception are present in non-human primates. We present music to two rhesus monkeys, reared without musical exposure, while recording electroencephalography (EEG) and pupillometry. Monkeys exhibit higher engagement and neural encoding of expectations based on the previously seeded musical context when passively listening to real music as opposed to shuffled controls. We then compare human and monkey neural responses to the same stimuli and find a species-dependent contribution of two fundamental musical features, pitch and timing,12 in generating expectations: while timing- and pitch-based expectations13 are similarly weighted in humans, monkeys rely on timing rather than pitch. Together, these results shed light on the phylogeny of music perception. They highlight monkeys' capacity for processing temporal structures beyond plain acoustic processing, and they identify a species-dependent contribution of time- and pitch-related features to the neural encoding of musical expectations.
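The comparison between real and shuffled music rests on encoding models that predict neural activity from stimulus features. As a rough illustration of that general logic only (the paper's actual pipeline, features, and regularization are not specified in the abstract), a lagged ridge-regression sketch in Python might look like this; the parameter values and the "surprisal" regressor are hypothetical:

```python
# Illustrative sketch (not the paper's pipeline): a lagged ridge-regression
# encoding model, the generic approach for asking how well stimulus features
# (e.g., note onsets, expectation/surprisal values) predict EEG.
import numpy as np
from numpy.linalg import solve

def lagged_design(features, max_lag):
    """Stack time-lagged copies of each feature column (lags 0..max_lag-1)."""
    n, k = features.shape
    X = np.zeros((n, k * max_lag))
    for lag in range(max_lag):
        X[lag:, lag * k:(lag + 1) * k] = features[:n - lag]
    return X

def fit_ridge(X, y, alpha=1.0):
    """Closed-form ridge regression weights."""
    return solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

def encoding_score(features, eeg, max_lag=32, alpha=1.0):
    """Correlation between held-out EEG and its model prediction."""
    X = lagged_design(features, max_lag)
    half = len(X) // 2                      # simple split; real work would cross-validate
    w = fit_ridge(X[:half], eeg[:half], alpha)
    pred = X[half:] @ w
    return np.corrcoef(pred, eeg[half:])[0, 1]

# Toy usage: compare encoding scores for a hypothetical "surprisal" regressor
# under two conditions (e.g., real vs. shuffled music). Random data here, so
# both scores will be near zero.
rng = np.random.default_rng(0)
feat_real, feat_shuf = rng.standard_normal((1000, 2)), rng.standard_normal((1000, 2))
eeg = rng.standard_normal(1000)
print(encoding_score(feat_real, eeg), encoding_score(feat_shuf, eeg))
```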
Affiliation(s)
- Roberta Bianco: Neuroscience of Perception & Action Lab, Italian Institute of Technology, Viale Regina Elena 291, 00161 Rome, Italy
- Nathaniel J Zuk: Department of Psychology, Nottingham Trent University, 50 Shakespeare Street, Nottingham NG1 4FQ, UK
- Félix Bigand: Neuroscience of Perception & Action Lab, Italian Institute of Technology, Viale Regina Elena 291, 00161 Rome, Italy
- Eros Quarta: Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
- Stefano Grasso: Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
- Flavia Arnese: Neuroscience of Perception & Action Lab, Italian Institute of Technology, Viale Regina Elena 291, 00161 Rome, Italy
- Andrea Ravignani: Comparative Bioacoustics Group, Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, the Netherlands; Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Universitetsbyen 3, 8000 Aarhus, Denmark; Department of Human Neurosciences, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
- Alexandra Battaglia-Mayer: Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
- Giacomo Novembre: Neuroscience of Perception & Action Lab, Italian Institute of Technology, Viale Regina Elena 291, 00161 Rome, Italy
2. Costalunga G, Carpena CS, Seltmann S, Benichov JI, Vallentin D. Wild nightingales flexibly match whistle pitch in real time. Curr Biol 2023; 33:3169-3178.e3. [PMID: 37453423] [PMCID: PMC10414052] [DOI: 10.1016/j.cub.2023.06.044]
Abstract
Interactive vocal communication, similar to a human conversation, requires flexible and real-time changes to vocal output in relation to preceding auditory stimuli. These vocal adjustments are essential to ensuring both the suitable timing and content of the interaction. Precise timing of dyadic vocal exchanges has been investigated in a variety of species, including humans. In contrast, the ability of non-human animals to accurately adjust specific spectral features of vocalization extemporaneously in response to incoming auditory information is less well studied. One spectral feature of acoustic signals is the fundamental frequency, which we perceive as pitch. Many animal species can discriminate between sound frequencies, but real-time detection and reproduction of an arbitrary pitch have only been observed in humans. Here, we show that nightingales in the wild can match the pitch of whistle songs while singing in response to conspecifics or pitch-controlled whistle playbacks. Nightingales matched whistles across their entire pitch production range, indicating that they can flexibly tune their vocal output along a wide continuum. Prompt whistle pitch matches were more precise than delayed ones, suggesting the direct mapping of auditory information onto a motor command to achieve online vocal replication of a heard pitch. Although nightingales' songs follow annual cycles of crystallization and deterioration depending on breeding status, the observed pitch-matching behavior is present year-round, suggesting a stable neural circuit independent of seasonal changes in physiology. Our findings represent the first case of non-human instantaneous vocal imitation of pitch, highlighting a promising model for understanding sensorimotor transformation within an interactive context.
Affiliation(s)
- Giacomo Costalunga, Carolina Sánchez Carpena, Susanne Seltmann, Jonathan I Benichov, Daniela Vallentin: Neural Circuits for Vocal Communication Research Group, Max Planck Institute for Biological Intelligence, Eberhard-Gwinner-Str., Seewiesen 82319, Germany
3. Cabrera-Moreno J, Jeanson L, Jeschke M, Calapai A. Group-based, autonomous, individualized training and testing of long-tailed macaques (Macaca fascicularis) in their home enclosure to a visuo-acoustic discrimination task. Front Psychol 2022; 13:1047242. [PMID: 36524199] [PMCID: PMC9745322] [DOI: 10.3389/fpsyg.2022.1047242]
Abstract
In recent years, the utility and efficiency of automated procedures for cognitive assessment in psychology and neuroscience have been demonstrated in non-human primates (NHP). This approach mimics conventional shaping principles of breaking down a final desired behavior into smaller components that can be trained in a staircase manner. When combined with home-cage-based approaches, this could lead to a reduction in human workload, enhancement in data quality, and improvement in animal welfare. However, to our knowledge, there are no reported attempts to develop automated training and testing protocols for long-tailed macaques (Macaca fascicularis), a ubiquitous NHP model in neuroscience and pharmaceutical research. In the current work, we present the results from 6 long-tailed macaques that were trained using an automated unsupervised training (AUT) protocol for introducing the animals to the basics of a two-alternative choice (2AC) task in which they had to discriminate a conspecific vocalization from a pure tone, relying on images presented on a touchscreen to report their response. We found that animals (1) consistently engaged with the device across several months; (2) interacted in bouts of high engagement; (3) alternated peacefully to interact with the device; and (4) smoothly ascended from step to step in the visually guided section of the procedure, in line with previous results from other NHPs. However, we also found (5) that animals' performance remained at chance level as soon as the acoustically guided steps were reached; and (6) that the engagement level decreased significantly with decreasing performance during the transition from visually to acoustically guided sections. We conclude that with an autonomous approach, it is possible to train long-tailed macaques in their social group using computer vision techniques and without dietary restriction to solve a visually guided discrimination task but not an acoustically guided task. We provide suggestions on what future attempts could take into consideration to instruct acoustically guided discrimination tasks successfully.
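For readers unfamiliar with automated unsupervised training, the core idea is a trial loop that promotes the animal to the next difficulty step once a rolling accuracy criterion is met and demotes it when performance drops. The sketch below illustrates only this generic logic; the window size, thresholds, and number of steps are invented for illustration and are not the published protocol:

```python
# Minimal sketch of the general AUT idea (not the published protocol):
# advance to the next training step when recent accuracy clears a criterion,
# fall back when it drops. Window size and thresholds are made up here.
from collections import deque
import random

def run_aut(n_trials=500, window=30, up=0.8, down=0.5, n_steps=8):
    step, recent = 0, deque(maxlen=window)
    for _ in range(n_trials):
        # Placeholder for a real trial: harder steps -> lower simulated accuracy.
        correct = random.random() < max(0.55, 0.95 - 0.05 * step)
        recent.append(correct)
        if len(recent) == window:
            acc = sum(recent) / window
            if acc >= up and step < n_steps - 1:
                step += 1
                recent.clear()        # restart the criterion window at each transition
            elif acc <= down and step > 0:
                step -= 1
                recent.clear()
    return step

print("final step reached:", run_aut())
```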
Affiliation(s)
- Jorge Cabrera-Moreno: Cognitive Hearing in Primates (CHiP) Group, Auditory Neuroscience and Optogenetics Laboratory, German Primate Center, Leibniz-Institute for Primate Research, Göttingen, Germany; Göttingen Graduate School for Neurosciences, Biophysics and Molecular Biosciences, University of Göttingen, Göttingen, Germany; Auditory Neuroscience and Optogenetics Laboratory, German Primate Center, Leibniz-Institute for Primate Research, Göttingen, Germany; Institute for Auditory Neuroscience and InnerEarLab, University Medical Center Göttingen, Göttingen, Germany
- Lena Jeanson: Cognitive Hearing in Primates (CHiP) Group, Auditory Neuroscience and Optogenetics Laboratory, German Primate Center, Leibniz-Institute for Primate Research, Göttingen, Germany; Cognitive Neuroscience Laboratory, German Primate Center, Leibniz-Institute for Primate Research, Göttingen, Germany
- Marcus Jeschke: Cognitive Hearing in Primates (CHiP) Group, Auditory Neuroscience and Optogenetics Laboratory, German Primate Center, Leibniz-Institute for Primate Research, Göttingen, Germany; Auditory Neuroscience and Optogenetics Laboratory, German Primate Center, Leibniz-Institute for Primate Research, Göttingen, Germany; Institute for Auditory Neuroscience and InnerEarLab, University Medical Center Göttingen, Göttingen, Germany; Leibniz-ScienceCampus Primate Cognition, Göttingen, Germany
- Antonino Calapai: Cognitive Hearing in Primates (CHiP) Group, Auditory Neuroscience and Optogenetics Laboratory, German Primate Center, Leibniz-Institute for Primate Research, Göttingen, Germany; Auditory Neuroscience and Optogenetics Laboratory, German Primate Center, Leibniz-Institute for Primate Research, Göttingen, Germany; Cognitive Neuroscience Laboratory, German Primate Center, Leibniz-Institute for Primate Research, Göttingen, Germany; Leibniz-ScienceCampus Primate Cognition, Göttingen, Germany
4. Calapai A, Cabrera-Moreno J, Moser T, Jeschke M. Flexible auditory training, psychophysics, and enrichment of common marmosets with an automated, touchscreen-based system. Nat Commun 2022; 13:1648. [PMID: 35347139] [PMCID: PMC8960775] [DOI: 10.1038/s41467-022-29185-9]
Abstract
Devising new and more efficient protocols to analyze the phenotypes of non-human primates, as well as their complex nervous systems, is rapidly becoming of paramount importance. This is because genome-editing techniques, recently applied to non-human primates, have established new animal models for fundamental and translational research. One aspect in particular, namely cognitive hearing, has been difficult to assess compared to visual cognition. To address this, we devised autonomous, standardized, and unsupervised training and testing of auditory capabilities of common marmosets with a cage-based standalone, wireless system. All marmosets tested voluntarily operated the device on a daily basis and went from naïve to experienced at their own pace and with ease. Through a series of experiments, we show here that animals autonomously learn to associate sounds with images, to flexibly discriminate sounds, and to detect sounds of varying loudness. The developed platform and training principles combine in-cage training of common marmosets for cognitive and psychoacoustic assessment with an enriched environment that does not rely on dietary restriction or social separation, in compliance with the 3Rs principle.
Affiliation(s)
- A Calapai: Cognitive Neuroscience Laboratory, German Primate Center - Leibniz-Institute for Primate Research, Göttingen, Germany; Cognitive Hearing in Primates (CHiP) Group, Auditory Neuroscience and Optogenetics Laboratory, German Primate Center - Leibniz-Institute for Primate Research, Göttingen, Germany; Auditory Neuroscience and Optogenetics Laboratory, German Primate Center - Leibniz-Institute for Primate Research, Göttingen, Germany; Leibniz ScienceCampus "Primate Cognition", Göttingen, Germany
- J Cabrera-Moreno: Cognitive Hearing in Primates (CHiP) Group, Auditory Neuroscience and Optogenetics Laboratory, German Primate Center - Leibniz-Institute for Primate Research, Göttingen, Germany; Auditory Neuroscience and Optogenetics Laboratory, German Primate Center - Leibniz-Institute for Primate Research, Göttingen, Germany; Institute for Auditory Neuroscience and InnerEarLab, University Medical Center Göttingen, 37075 Göttingen, Germany; Göttingen Graduate School for Neurosciences, Biophysics and Molecular Biosciences, University of Göttingen, 37075 Göttingen, Germany
- T Moser: Auditory Neuroscience and Optogenetics Laboratory, German Primate Center - Leibniz-Institute for Primate Research, Göttingen, Germany; Institute for Auditory Neuroscience and InnerEarLab, University Medical Center Göttingen, 37075 Göttingen, Germany; Göttingen Graduate School for Neurosciences, Biophysics and Molecular Biosciences, University of Göttingen, 37075 Göttingen, Germany; Auditory Neuroscience Group and Synaptic Nanophysiology Group, Max Planck Institute for Multidisciplinary Sciences, 37077 Göttingen, Germany; Cluster of Excellence "Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells" (MBExC), University of Göttingen, 37075 Göttingen, Germany
- M Jeschke: Cognitive Hearing in Primates (CHiP) Group, Auditory Neuroscience and Optogenetics Laboratory, German Primate Center - Leibniz-Institute for Primate Research, Göttingen, Germany; Auditory Neuroscience and Optogenetics Laboratory, German Primate Center - Leibniz-Institute for Primate Research, Göttingen, Germany; Leibniz ScienceCampus "Primate Cognition", Göttingen, Germany; Institute for Auditory Neuroscience and InnerEarLab, University Medical Center Göttingen, 37075 Göttingen, Germany
5. Auditory sequence perception in common marmosets (Callithrix jacchus). Behav Processes 2019; 162:55-63. [PMID: 30716383] [DOI: 10.1016/j.beproc.2019.01.014]
Abstract
One of the essential linguistic and musical faculties of humans is the ability to recognize the structure of sound configurations and to extract words and melodies from continuous sound sequences. However, monkeys' ability to process the temporal structure of sounds is controversial. Here, to investigate whether monkeys can analyze the temporal structure of auditory patterns, two common marmosets were trained to discriminate auditory patterns in three experiments. In Experiment 1, the marmosets were able to discriminate trains of either 0.5- or 2-kHz tones repeated in either 50- or 200-ms intervals. However, the marmosets were not able to discriminate ABAB from AABB patterns consisting of A (0.5-kHz/50-ms pulse) and B (2-kHz/200-ms pulse) elements in Experiment 2, and A (0.5-kHz/50-ms pulse) and B (0.5-kHz/200-ms pulse) [or A (0.5-kHz/200-ms pulse) and B (2-kHz/200-ms pulse)] in Experiment 3. Consequently, the results indicated that the marmosets could not perceive tonal structures in terms of the temporal configuration of discrete sounds, whereas they could recognize the acoustic features of the stimuli. The present findings were supported by cognitive and brain studies that indicated a limited ability to process sound sequences. However, more studies are needed to confirm the ability of auditory sequence perception in common marmosets.
6. Selezneva E, Gorkin A, Budinger E, Brosch M. Neuronal correlates of auditory streaming in the auditory cortex of behaving monkeys. Eur J Neurosci 2018; 48:3234-3245. [PMID: 30070745] [DOI: 10.1111/ejn.14098]
Abstract
This study tested the hypothesis that spiking activity in the primary auditory cortex of monkeys is related to auditory stream formation. Evidence for this hypothesis was previously obtained in animals that were passively exposed to stimuli and in which differences in the streaming percept were confounded with differences between the stimuli. In this study, monkeys performed an operant task on sequences that were composed of light flashes and tones. The tones alternated between a high and a low frequency and could be perceived either as one auditory stream or two auditory streams. The flashes promoted either a one-stream percept or a two-stream percept. Comparison of different types of sequences revealed that the neuronal responses to the alternating tones were more similar when the flashes promoted auditory stream integration, and were more dissimilar when the flashes promoted auditory stream segregation. Thus our findings show that the spiking activity in the monkey primary auditory cortex is related to auditory stream formation.
Affiliation(s)
- Eike Budinger: Leibniz-Institut für Neurobiologie, Magdeburg, Germany
7. Selezneva E, Oshurkova E, Scheich H, Brosch M. Category-specific neuronal activity in left and right auditory cortex and in medial geniculate body of monkeys. PLoS One 2017; 12:e0186556. [PMID: 29073162] [PMCID: PMC5657994] [DOI: 10.1371/journal.pone.0186556]
Abstract
We address the question of whether the auditory cortex of the left and right hemisphere and the auditory thalamus are differently involved in the performance of cognitive tasks. To understand these differences on the level of single neurons we compared neuronal firing in the primary and posterior auditory cortex of the two hemispheres and in the medial geniculate body in monkeys while subjects categorized pitch relationships in tone sequences. In contrast to earlier findings in imaging studies performed on humans, we found little difference between the three brain regions in terms of the category-specificity of their neuronal responses, of tonic firing related to task components, and of decision-related firing. The differences between the results in humans and monkeys may result from the type of neuronal activity considered and how it was analyzed, from the auditory cortical fields studied, or from fundamental differences between these species.
Affiliation(s)
- Elena Selezneva: Special Lab Primate Neurobiology, Leibniz-Institute for Neurobiology, Magdeburg, Germany
- Elena Oshurkova: Department Auditory Learning and Speech, Leibniz-Institute for Neurobiology, Magdeburg, Germany
- Henning Scheich: Department Auditory Learning and Speech, Leibniz-Institute for Neurobiology, Magdeburg, Germany
- Michael Brosch: Special Lab Primate Neurobiology, Leibniz-Institute for Neurobiology, Magdeburg, Germany
8. Bonn CD, Cantlon JF. Spontaneous, modality-general abstraction of a ratio scale. Cognition 2017; 169:36-45. [PMID: 28806722] [DOI: 10.1016/j.cognition.2017.07.012]
Abstract
The existence of a generalized magnitude system in the human mind and brain has been studied extensively but remains elusive because it has not been clearly defined. Here we show that one possibility is the representation of relative magnitudes via ratio calculations: ratios are a naturally dimensionless or abstract quantity that could qualify as a common currency for magnitudes measured on vastly different psychophysical scales and in different sensory modalities like size, number, duration, and loudness. In a series of demonstrations based on comparisons of item sequences, we find that subjects spontaneously use knowledge of inter-item ratios within and across sensory modalities and across magnitude domains to rate sequences as more or less similar on a sliding scale. Moreover, they rate ratio-preserved sequences as more similar to each other than sequences in which only ordinal relations are preserved, indicating that subjects are aware of differences in levels of relative-magnitude information preservation. The ubiquity of this ability across many different magnitude pairs, even those sharing no sensory information, suggests a highly general code that could qualify as a candidate for a generalized magnitude representation.
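The ratio-preservation manipulation can be made concrete with a small worked example: scaling a sequence by a constant factor leaves every inter-item ratio unchanged, whereas preserving only the rank order does not. The numbers below are illustrative and are not the study's stimuli:

```python
# Worked example of the ratio-preservation idea (illustrative numbers only).
import numpy as np

def interitem_ratios(seq):
    seq = np.asarray(seq, dtype=float)
    return seq[1:] / seq[:-1]

base            = [2.0, 4.0, 8.0, 16.0]   # successive ratios: 2, 2, 2
ratio_preserved = [3.0, 6.0, 12.0, 24.0]  # scaled by 1.5; ratios unchanged
order_preserved = [3.0, 5.0, 11.0, 30.0]  # same rank order, different ratios

for label, seq in [("ratio-preserved", ratio_preserved),
                   ("order-preserved", order_preserved)]:
    diff = np.abs(interitem_ratios(base) - interitem_ratios(seq)).mean()
    print(f"{label}: mean ratio mismatch vs. base = {diff:.2f}")
# The ratio-preserved sequence has zero mismatch, mirroring the finding that
# such sequences are rated as more similar than merely order-preserved ones.
```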
Affiliation(s)
- Cory D Bonn: Department of Brain and Cognitive Sciences, 358 Meliora Hall, PO Box 270268, University of Rochester, Rochester, NY 14627-0258, United States
- Jessica F Cantlon: Department of Brain and Cognitive Sciences, 358 Meliora Hall, PO Box 270268, University of Rochester, Rochester, NY 14627-0258, United States
9. Scott BH, Mishkin M. Auditory short-term memory in the primate auditory cortex. Brain Res 2016; 1640:264-77. [PMID: 26541581] [PMCID: PMC4853305] [DOI: 10.1016/j.brainres.2015.10.048]
Abstract
Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory.
Affiliation(s)
- Brian H Scott: Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Bethesda, MD 20892, USA
- Mortimer Mishkin: Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Bethesda, MD 20892, USA
10. Rickard NS, Toukhsati SR, Field SE. The effect of music on cognitive performance: insight from neurobiological and animal studies. Behav Cogn Neurosci Rev 2005; 4:235-61. [PMID: 16585799] [DOI: 10.1177/1534582305285869]
Abstract
The past 50 years have seen numerous claims that music exposure enhances human cognitive performance. Critical evaluation of studies across a variety of contexts, however, reveals important methodological weaknesses. The current article argues that an interdisciplinary approach is required to advance this research. A case is made for the use of appropriate animal models to avoid many confounds associated with human music research. Although such research has validity limitations for humans, reductionist methodology enables a more controlled exploration of music's elementary effects. This article also explores candidate mechanisms for this putative effect. A review of neurobiological evidence from human and comparative animal studies confirms that musical stimuli modify autonomic and neurochemical arousal indices, and may also modify synaptic plasticity. It is proposed that understanding how music affects animals provides a valuable conjunct to human research and may be vital in uncovering how music might be used to enhance cognitive performance.
Affiliation(s)
- Nikki S Rickard: School of Psychology, Psychiatry and Psychological Medicine, Monash University, Australia
11.
Abstract
Goal-directed behavior can be characterized as a dynamic link between a sensory stimulus and a motor act. Neural correlates of many of the intermediate events of goal-directed behavior are found in the posterior parietal cortex. Although the parietal cortex’s role in guiding visual behaviors has received considerable attention, relatively little is known about its role in mediating auditory behaviors. Here, the authors review recent studies that have focused on how neurons in the lateral intraparietal area (area LIP) differentially process auditory and visual stimuli. These studies suggest that area LIP contains a modality-dependent representation that is highly dependent on behavioral context.
Affiliation(s)
- Yale E Cohen: Department of Psychological and Brain Sciences, Center for Cognitive Neuroscience, Dartmouth College, Hanover, NH
12. Fukushima M, Doyle AM, Mullarkey MP, Mishkin M, Averbeck BB. Distributed acoustic cues for caller identity in macaque vocalization. R Soc Open Sci 2015; 2:150432. [PMID: 27019727] [PMCID: PMC4806230] [DOI: 10.1098/rsos.150432]
Abstract
Individual primates can be identified by the sound of their voice. Macaques have demonstrated an ability to discern conspecific identity from a harmonically structured 'coo' call. Voice recognition presumably requires the integrated perception of multiple acoustic features. However, it is unclear how this is achieved, given considerable variability across utterances. Specifically, the extent to which information about caller identity is distributed across multiple features remains elusive. We examined these issues by recording and analysing a large sample of calls from eight macaques. Single acoustic features, including fundamental frequency, duration and Wiener entropy, were informative but unreliable for the statistical classification of caller identity. A combination of multiple features, however, allowed for highly accurate caller identification. A regularized classifier that learned to identify callers from the modulation power spectrum of calls found that specific regions of spectral-temporal modulation were informative for caller identification. These ranges are related to acoustic features such as the call's fundamental frequency and FM sweep direction. We further found that the low-frequency spectrotemporal modulation component contained an indexical cue to the caller's body size. Thus, cues for caller identity are distributed across identifiable spectrotemporal components corresponding to laryngeal and supralaryngeal components of vocalizations, and the integration of those cues can enable highly reliable caller identification. Our results demonstrate a clear acoustic basis by which individual macaque vocalizations can be recognized.
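The classification step described here follows a standard recipe: map each call's feature vector (e.g., a flattened modulation power spectrum) to caller identity with a regularized classifier and score it by cross-validation. The sketch below uses synthetic data and a generic L2-penalized logistic regression as a stand-in; the paper's exact features, preprocessing, and regularizer may differ:

```python
# Sketch of the classification logic only (features and regularizer here are
# generic stand-ins; the paper's exact pipeline may differ).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_callers, calls_per_caller, n_features = 8, 40, 200  # e.g., flattened modulation power spectrum

# Synthetic stand-in data: each caller gets a noisy feature "signature".
signatures = rng.standard_normal((n_callers, n_features))
X = np.vstack([sig + 0.8 * rng.standard_normal((calls_per_caller, n_features))
               for sig in signatures])
y = np.repeat(np.arange(n_callers), calls_per_caller)

clf = LogisticRegression(penalty="l2", C=0.1, max_iter=2000)  # regularized classifier
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated caller-ID accuracy: {scores.mean():.2f} (chance = {1/n_callers:.2f})")
```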
13. Brosch M, Selezneva E, Scheich H. Neuronal activity in primate auditory cortex during the performance of audiovisual tasks. Eur J Neurosci 2015; 41:603-14. [PMID: 25728179] [DOI: 10.1111/ejn.12841]
Abstract
This study aimed at a deeper understanding of which cognitive and motivational aspects of tasks affect auditory cortical activity. To this end we trained two macaque monkeys to perform two different tasks on the same audiovisual stimulus and to do this with two different sizes of water rewards. The monkeys had to touch a bar after a tone had been turned on together with an LED, and to hold the bar until either the tone (auditory task) or the LED (visual task) was turned off. In 399 multiunits recorded from core fields of auditory cortex we confirmed that during task engagement neurons responded to auditory and non-auditory stimuli that were task-relevant, such as light and water. We also confirmed that firing rates slowly increased or decreased for several seconds during various phases of the tasks. Responses to non-auditory stimuli and slow firing changes were observed during both the auditory and the visual task, with some differences between them. There was also a weak task-dependent modulation of the responses to auditory stimuli. In contrast to these cognitive aspects, motivational aspects of the tasks were not reflected in the firing, except during delivery of the water reward. In conclusion, the present study supports our previous proposal that there are two response types in the auditory cortex that represent the timing and the type of auditory and non-auditory elements of auditory tasks, as well as the association between elements.
Affiliation(s)
- Michael Brosch: Leibniz-Institut für Neurobiologie, Brenneckestraße 6, 39118 Magdeburg, Germany
14. Lovell JM, Mylius J, Scheich H, Brosch M. Stimulation of the dopaminergic midbrain as a behavioral reward in instrumentally conditioned monkeys. Brain Stimul 2015; 8:868-74. [PMID: 26070295] [DOI: 10.1016/j.brs.2015.04.007]
Abstract
Background: The mesocortical dopaminergic system of rodents differs in several respects from that found in primate species, including humans, so there is a need to study more exhaustively the causal relationships between activation/stimulation of the ventral tegmental area (VTA) and substantia nigra (SN) and behavior in monkeys.
Objective: To establish causal relationships between VTA/SN stimulation and behavior, we investigated whether monkeys perform audiovisual (AV) tasks using brain stimulation reward (BSR) as the reinforcer, and how reward intensity affects performance during self-stimulation.
Methods: Monkeys were required to touch a bar, either freely when self-stimulating or when instructed by an AV stimulus, to receive BSR.
Results: We were able to train monkeys to successfully perform the AV task for BSR within three days. Self-stimulation revealed an increase in the bar-touch rate at higher electrical currents, with no ceiling effects observed. During a training session the touch rate decreased, often before the monkeys had received 1000 deliveries of BSR, suggesting satiation.
Conclusions: When BSR is applied directly to the VTA/SN, it can motivate monkeys to perform detection tasks and operant actions, and it may be used as a substitute for fluid or food rewards. In contrast to work on rodents, monkeys ceased self-stimulation during a training session of their own volition. This may be an important safety consideration for the development of electrical stimulation procedures in patients with dysfunctions of the dopaminergic system, since satiation may prevent additional compulsions from being added to already existing compulsive behaviors.
Affiliation(s)
- Jonathan Murray Lovell: Leibniz Institute for Neurobiology, Brenneckestraße 6, Magdeburg, Germany; Deutsches Zentrum für Neurodegenerative Erkrankungen (DZNE), Bonn, Germany
- Judith Mylius: Leibniz Institute for Neurobiology, Brenneckestraße 6, Magdeburg, Germany
- Henning Scheich: Leibniz Institute for Neurobiology, Brenneckestraße 6, Magdeburg, Germany
- Michael Brosch: Leibniz Institute for Neurobiology, Brenneckestraße 6, Magdeburg, Germany
15. Smayda KE, Chandrasekaran B, Maddox WT. Enhanced cognitive and perceptual processing: a computational basis for the musician advantage in speech learning. Front Psychol 2015; 6:682. [PMID: 26052304] [PMCID: PMC4439769] [DOI: 10.3389/fpsyg.2015.00682]
Abstract
Long-term music training can positively impact speech processing. A recent framework developed to explain such cross-domain plasticity posits that music training-related advantages in speech processing are due to shared cognitive and perceptual processes between music and speech. Although perceptual and cognitive processing advantages due to music training have been independently demonstrated, to date no study has examined perceptual and cognitive processing within the context of a single task. The present study examines the impact of long-term music training on speech learning from a rigorous, computational perspective derived from signal detection theory. Our computational models provide independent estimates of cognitive and perceptual processing in native English-speaking musicians (n = 15, mean age = 25 years) and non-musicians (n = 15, mean age = 23 years) learning to categorize non-native lexical pitch patterns (Mandarin tones). Musicians outperformed non-musicians in this task. Model-based analyses suggested that musicians shifted from simple unidimensional decision strategies to more optimal multidimensional (MD) decision strategies sooner than non-musicians. In addition, musicians used optimal decisional strategies more often than non-musicians. However, musicians and non-musicians who used MD strategies showed no difference in performance. We estimated parameters that quantify the magnitude of perceptual variability along two dimensions that are critical for tone categorization: pitch height and pitch direction. Both musicians and non-musicians showed a decrease in perceptual variability along the pitch height dimension, but only musicians showed a significant reduction in perceptual variability along the pitch direction dimension. Notably, these advantages persisted during a generalization phase, when no feedback was provided. These results provide an insight into the mechanisms underlying the musician advantage observed in non-native speech learning.
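The contrast between unidimensional and multidimensional decision strategies can be illustrated with a toy model comparison: fit one rule that uses only pitch height and another that uses both pitch height and pitch direction to a listener's choices, then compare the fits by AIC. This is a simplified stand-in for decision-bound modeling, not the authors' models, and the simulated data below are invented:

```python
# Toy stand-in for decision-strategy model comparison (not the authors' models):
# does a rule on one dimension (pitch height) or on two dimensions
# (height + direction) better explain simulated categorization responses?
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 400
height    = rng.standard_normal(n)            # perceptual dimension 1
direction = rng.standard_normal(n)            # perceptual dimension 2
# Simulated listener whose choices depend on both dimensions:
p_choice = 1 / (1 + np.exp(-(1.5 * height + 1.5 * direction)))
choice = rng.random(n) < p_choice

def fit_aic(X, y):
    m = LogisticRegression(max_iter=1000).fit(X, y)
    p = m.predict_proba(X)[np.arange(len(y)), y.astype(int)]
    nll = -np.sum(np.log(np.clip(p, 1e-12, None)))   # negative log-likelihood
    k = X.shape[1] + 1                                # weights + intercept
    return 2 * k + 2 * nll                            # AIC (lower is better)

aic_1d = fit_aic(height[:, None], choice)
aic_2d = fit_aic(np.column_stack([height, direction]), choice)
print(f"AIC unidimensional: {aic_1d:.1f}, multidimensional: {aic_2d:.1f}")
```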
Affiliation(s)
- Kirsten E Smayda: Department of Psychology, The University of Texas at Austin, Austin, TX, USA
- Bharath Chandrasekaran: Department of Psychology, The University of Texas at Austin, Austin, TX, USA; Department of Communication Sciences and Disorders, The University of Texas at Austin, Austin, TX, USA
- W Todd Maddox: Department of Psychology, The University of Texas at Austin, Austin, TX, USA
16. Joly O, Baumann S, Poirier C, Patterson RD, Thiele A, Griffiths TD. A perceptual pitch boundary in a non-human primate. Front Psychol 2014; 5:998. [PMID: 25309477] [PMCID: PMC4163976] [DOI: 10.3389/fpsyg.2014.00998]
Abstract
Pitch is an auditory percept critical to the perception of music and speech, and for these harmonic sounds, pitch is closely related to the repetition rate of the acoustic wave. This paper reports a test of the assumption that non-human primates and especially rhesus monkeys perceive the pitch of these harmonic sounds much as humans do. A new procedure was developed to train macaques to discriminate the pitch of harmonic sounds and thereby demonstrate that the lower limit for pitch perception in macaques is close to 30 Hz, as it is in humans. Moreover, when the phases of successive harmonics are alternated to cause a pseudo-doubling of the repetition rate, the lower pitch boundary in macaques decreases substantially, as it does in humans. The results suggest that both species use neural firing times to discriminate pitch, at least for sounds with relatively low repetition rates.
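The phase manipulation mentioned in the abstract is easy to make concrete: shifting every other harmonic by 90 degrees (alternating phase) leaves the long-term spectrum unchanged but roughly doubles the repetition rate of the waveform's envelope. The sketch below generates the two stimulus types and applies a crude envelope-periodicity check; all parameters are illustrative and are not the study's stimuli:

```python
# Sketch of the stimulus manipulation: a harmonic complex in sine phase vs.
# alternating phase (even harmonics shifted by 90 degrees). Alternating phase
# pseudo-doubles the envelope repetition rate. Parameters are illustrative.
import numpy as np
from scipy.signal import hilbert

def harmonic_complex(f0, n_harm, dur, fs, alt_phase=False):
    t = np.arange(int(dur * fs)) / fs
    sig = sum(np.sin(2 * np.pi * h * f0 * t
                     + (np.pi / 2 if alt_phase and h % 2 == 0 else 0.0))
              for h in range(1, n_harm + 1))
    return sig / n_harm

def dominant_envelope_rate(sig, fs, lo=20.0, hi=200.0):
    """Crude diagnostic: largest peak of the envelope spectrum within [lo, hi] Hz."""
    env = np.abs(hilbert(sig))
    spec = np.abs(np.fft.rfft(env - env.mean()))
    freqs = np.fft.rfftfreq(len(env), 1 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    return freqs[band][np.argmax(spec[band])]

fs, f0 = 48000, 40.0
sine = harmonic_complex(f0, 20, 0.5, fs, alt_phase=False)
alt  = harmonic_complex(f0, 20, 0.5, fs, alt_phase=True)
# Expected to report roughly f0 for sine phase and roughly 2*f0 for alternating phase.
print(dominant_envelope_rate(sine, fs), dominant_envelope_rate(alt, fs))
```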
Affiliation(s)
- Olivier Joly: Auditory Group, Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK; Department of Experimental Psychology, MRC Cognition and Brain Sciences Unit, University of Oxford, Oxford, UK
- Simon Baumann: Auditory Group, Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK
- Colline Poirier: Auditory Group, Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK
- Roy D Patterson: Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge, UK
- Alexander Thiele: Auditory Group, Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK
- Timothy D Griffiths: Auditory Group, Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK; University College London, London, UK
17.
Abstract
Complex natural and environmental sounds, such as speech and music, convey information along both spectral and temporal dimensions. The cortical representation of such stimuli rapidly adapts when animals become actively engaged in discriminating them. In this study, we examine the nature of these changes using simplified spectrotemporal versions (upward vs downward shifting tone sequences) with domestic ferrets (Mustela putorius). Cortical processing rapidly adapted to enhance the contrast between the two discriminated stimulus categories, by changing spectrotemporal receptive field properties to encode both the spectral and temporal structure of the tone sequences. Furthermore, the valence of the changes was closely linked to the task reward structure: stimuli associated with negative reward became enhanced relative to those associated with positive reward. These task- and stimulus-related spectrotemporal receptive field changes occurred only in trained animals during, and immediately following, behavior. This plasticity was independently confirmed by parallel changes in a directionality function measured from the responses to the transition of tone sequences during task performance. The results demonstrate that induced patterns of rapid plasticity reflect closely the spectrotemporal structure of the task stimuli, thus extending the functional relevance of rapid task-related plasticity to the perception and learning of natural sounds such as speech and animal vocalizations.
18. The role of harmonic resolvability in pitch perception in a vocal nonhuman primate, the common marmoset (Callithrix jacchus). J Neurosci 2013; 33:9161-8. [PMID: 23699526] [DOI: 10.1523/jneurosci.0066-13.2013]
Abstract
Pitch is one of the most fundamental percepts in the auditory system and can be extracted using either spectral or temporal information in an acoustic signal. Although pitch perception has been extensively studied in human subjects, it is far less clear how nonhuman primates perceive pitch. We have addressed this question in a series of behavioral studies in which marmosets, a vocal nonhuman primate species, were trained to discriminate complex harmonic tones differing in either spectral (fundamental frequency [f0]) or temporal envelope (repetition rate) cues. We found that marmosets used temporal envelope information to discriminate pitch for acoustic stimuli with higher-order harmonics and lower f0 values and spectral information for acoustic stimuli with lower-order harmonics and higher f0 values. We further measured frequency resolution in marmosets using a psychophysical task in which pure tone thresholds were measured as a function of notched noise masker bandwidth. Results show that only the first four harmonics are resolved at low f0 values and up to 16 harmonics are resolved at higher f0 values. Resolvability in marmosets is different from that in humans, where the first five to nine harmonics are consistently resolved across most f0 values, and is likely the result of a smaller marmoset cochlea. In sum, these results show that marmosets use two mechanisms to extract pitch (harmonic templates [spectral] for resolved harmonics, and envelope extraction [temporal] for unresolved harmonics) and that species differences in stimulus resolvability need to be taken into account when investigating and comparing mechanisms of pitch perception across animals.
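A simple way to see what "resolved" means here is to compare the harmonic spacing (equal to f0) with the auditory filter bandwidth at each harmonic's frequency. The sketch below uses the human ERB formula of Glasberg and Moore as the bandwidth estimate and a simplified one-ERB criterion; marmoset filters are broader, so the numbers only illustrate the computation, not the species comparison reported in the abstract:

```python
# Sketch of the resolvability computation using the human ERB formula
# (Glasberg & Moore, 1990): a harmonic is treated as "resolved" when the
# spacing to its neighbours (= f0) exceeds the auditory filter bandwidth at
# that harmonic's frequency. A simplified criterion for illustration only.
def erb_hz(f_hz):
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def n_resolved_harmonics(f0, n_harm=20):
    return sum(1 for h in range(1, n_harm + 1) if f0 > erb_hz(h * f0))

for f0 in (100, 200, 400, 800):
    print(f"f0 = {f0:4d} Hz: ~{n_resolved_harmonics(f0)} resolved harmonics")
# With the human formula the count stays at roughly 6-9 across f0, in line with
# the abstract's statement that humans resolve the first five to nine harmonics
# across most f0 values; the marmoset estimates in the paper differ.
```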
19. Reaction times reflect subjective auditory perception of tone sequences in macaque monkeys. Hear Res 2012; 294:133-42. [PMID: 22990003] [DOI: 10.1016/j.heares.2012.08.014]
Abstract
Perceptually ambiguous stimuli are useful for testing psychological and neuronal models of perceptual organization, e.g. for studying brain processes that underlie sequential segregation and integration. This is because the same stimuli may give rise to different subjective experiences. For humans, a tone sequence that alternates between a low-frequency and a high-frequency tone is perceptually bistable, and can be perceived as one or two streams. In the current study we present a new method based on response times (RTs), which allows identification of ambiguous and unambiguous stimuli for subjects who cannot verbally report their subjective experience. We required two macaque monkeys (Macaca fascicularis) to detect the termination of a sequence of light flashes, which were either presented alone or synchronized in different ways with a sequence of alternating low and high tones. We found that the monkeys responded faster to the termination of the flash sequence when the tone sequence terminated shortly before the flash sequence and thus predicted the termination of the flash sequence. This RT gain depended on the frequency separation of the tones. RT gains were largest when the frequency separation was small and the tones were presumably heard mainly as one stream. RT gains were smallest when the frequency separation was large and the tones were presumably mainly heard as two streams. RT gain was of intermediate size for intermediate frequency separations. Similar results were obtained from human subjects. We conclude that the observed RT gains reflect the perceptual organization of the tone sequence, and that tone sequences with an intermediate frequency separation, as for humans, are perceptually ambiguous for monkeys.
20.
Abstract
A stimulus trace may be temporarily retained either actively [i.e., in working memory (WM)] or by the weaker mnemonic process we will call passive short-term memory, in which a given stimulus trace is highly susceptible to "overwriting" by a subsequent stimulus. It has been suggested that WM is the more robust process because it exploits long-term memory (i.e., a current stimulus activates a stored representation of that stimulus, which can then be actively maintained). Recent studies have suggested that monkeys may be unable to store acoustic signals in long-term memory, raising the possibility that they may therefore also lack auditory WM. To explore this possibility, we tested rhesus monkeys on a serial delayed match-to-sample (DMS) task using a small set of sounds presented with ~1-s interstimulus delays. Performance was accurate whenever a match or a nonmatch stimulus followed the sample directly, but it fell precipitously if a single nonmatch stimulus intervened between sample and match. The steep drop in accuracy was found to be due not to passive decay of the sample's trace, but to retroactive interference from the intervening nonmatch stimulus. This "overwriting" effect was far greater than that observed previously in serial DMS with visual stimuli. The results, which accord with the notion that WM relies on long-term memory, indicate that monkeys perform serial DMS in audition remarkably poorly and that whatever success they had on this task depended largely, if not entirely, on the retention of stimulus traces in the passive form of short-term memory.
21. Dailey DD, Braun CB. Perception of frequency, amplitude, and azimuth of a vibratory dipole source by the octavolateralis system of goldfish (Carassius auratus). J Comp Psychol 2011; 125:286-95. [PMID: 21574689] [PMCID: PMC3156875] [DOI: 10.1037/a0023499]
Abstract
Goldfish (Carassius auratus) were conditioned to suppress respiration to a 40-Hz vibratory source and subsequently tested for stimulus generalization to frequency, stimulus amplitude, and position (azimuth). Animals completely failed to generalize to frequencies separated by octave intervals both lesser and greater than the CS. However, they did appear to generalize weakly to an aerial loudspeaker stimulus of the same frequency (40 Hz) after conditioning with an underwater vibratory source. Animals had a gradually decreasing amount of generalization to amplitude changes, suggesting a perceptual dimension of loudness. Animals generalized largely or completely to the same underwater source presented at a range of source azimuths. When these azimuths were presented at a transect of 3 cm, some animals did show decrements in generalization, while others did not. This suggests that although azimuth may be perceived more saliently at distances closer to a dipole source, perception of position is not immediately salient in conditioned vibratory source detection. Differential responding to test stimuli located toward the head or tail suggests the presence of perceptual differences between sources that are rostral or caudal with respect to the position of the animal or perhaps the head.
22. Brosch M, Selezneva E, Scheich H. Representation of reward feedback in primate auditory cortex. Front Syst Neurosci 2011; 5:5. [PMID: 21369350] [PMCID: PMC3037499] [DOI: 10.3389/fnsys.2011.00005]
Abstract
It is well established that auditory cortex is plastic on different time scales and that this plasticity is driven by the reinforcement that is used to motivate subjects to learn or to perform an auditory task. Motivated by these findings, we study in detail properties of neuronal firing in auditory cortex related to reward feedback. We recorded from the auditory cortex of two monkeys while they were performing an auditory categorization task. Monkeys listened to a sequence of tones and had to signal when the frequency of adjacent tones stepped in a downward direction, irrespective of the tone frequency and step size. Correct identifications were rewarded with either a large or a small amount of water. The size of reward depended on the monkeys' performance in the previous trial: it was large after a correct trial and small after an incorrect trial. The rewards served to maintain task performance. During task performance we found three successive periods of neuronal firing in auditory cortex that reflected (1) the reward expectancy for each trial, (2) the reward size received, and (3) the mismatch between the expected and delivered reward. These results, together with control experiments, suggest that auditory cortex receives reward feedback that could be used to adapt auditory cortex to task requirements. Additionally, the results presented here extend previous observations of non-auditory roles of auditory cortex and show that auditory cortex is even more cognitively influenced than previously recognized.
23. Cortical encoding of pitch: recent results and open questions. Hear Res 2010; 271:74-87. [PMID: 20457240] [PMCID: PMC3098378] [DOI: 10.1016/j.heares.2010.04.015]
Abstract
It is widely appreciated that the key predictor of the pitch of a sound is its periodicity. Neural structures that support pitch perception must therefore be able to reflect the repetition rate of a sound, but this alone is not sufficient. Since pitch is a psychoacoustic property, a putative cortical code for pitch must also be able to account for the relationship between the degree to which a sound is periodic (i.e. its temporal regularity) and the perceived pitch salience, as well as limits in our ability to detect pitch changes or to discriminate rising from falling pitch. Pitch codes must also be robust in the presence of nuisance variables such as loudness or timbre. Here, we review a large body of work on the cortical basis of pitch perception, which illustrates that the distribution of cortical processes that give rise to pitch perception is likely to depend on both the acoustical features and functional relevance of a sound. While previous studies have greatly advanced our understanding, we highlight several open questions regarding the neural basis of pitch perception. These questions can begin to be addressed through a cooperation of investigative efforts across species and experimental techniques, and, critically, by examining the responses of single neurons in behaving animals.
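The link between periodicity and pitch that the review builds on can be illustrated with a standard analysis: the lag of the dominant autocorrelation peak estimates the repetition rate, and the height of that peak falls as temporal regularity (and hence pitch salience) is degraded. This toy analysis is for illustration only and is not taken from the review:

```python
# Small illustration of the periodicity idea: the lag of the largest
# autocorrelation peak estimates the repetition rate (pitch), and the peak's
# height drops as temporal regularity is degraded by noise.
import numpy as np

def periodicity(sig, fs, fmin=50.0, fmax=500.0):
    """Return (estimated repetition rate in Hz, normalized autocorrelation peak)."""
    sig = sig - sig.mean()
    ac = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    ac /= ac[0]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag, ac[lag]

fs, f0, t = 16000, 160.0, np.arange(8000) / 16000
clean = np.sin(2 * np.pi * f0 * t) + 0.5 * np.sin(2 * np.pi * 2 * f0 * t)
noisy = clean + 1.5 * np.random.default_rng(3).standard_normal(len(t))

print(periodicity(clean, fs))   # ~160 Hz with a high peak
print(periodicity(noisy, fs))   # ~160 Hz with a lower peak -> weaker pitch salience
```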
24. Yin P, Fritz JB, Shamma SA. Do ferrets perceive relative pitch? J Acoust Soc Am 2010; 127:1673-80. [PMID: 20329865] [PMCID: PMC2856516] [DOI: 10.1121/1.3290988]
Abstract
The existence of relative pitch perception in animals is difficult to demonstrate, since unlike humans, animals often attend to absolute rather than relative properties of sound elements. However, the results of the present study show that ferrets can be trained using relative pitch to discriminate two-tone sequences (rising vs. falling). Three ferrets were trained using a positive-reinforcement paradigm in which sequences of reference (one to five repeats) and target stimuli were presented, and animals were rewarded only when responding correctly to the target. The training procedure consisted of three training phases that successively shaped the ferrets to attend to relative pitch. In Phase-1 training, animals learned the basic task with sequences of invariant tone-pairs and could use absolute pitch information. During Phase-2 training, in order to emphasize relative cues, absolute pitch was varied each trial within a two-octave frequency range. In Phase-3 training, absolute pitch cues were removed, and only relative cue information was available to solve the task. Two ferrets successfully completed training on all three phases and achieved significant discriminative performance over the trained four-octave frequency range. These results suggest that ferrets can be trained to discern the relative pitch relationship of a sequence of tone-pairs independent of frequency.
Affiliation(s)
- Pingbo Yin: Neural Systems Laboratory, Institute for Systems Research, University of Maryland, College Park, Maryland 20742, USA
25. Walker KMM, Schnupp JWH, Hart-Schnupp SMB, King AJ, Bizley JK. Pitch discrimination by ferrets for simple and complex sounds. J Acoust Soc Am 2009; 126:1321-35. [PMID: 19739746] [PMCID: PMC2784999] [DOI: 10.1121/1.3179676]
Abstract
Although many studies have examined the performance of animals in detecting a frequency change in a sequence of tones, few have measured animals' discrimination of the fundamental frequency (F0) of complex, naturalistic stimuli. Additionally, it is not yet clear if animals perceive the pitch of complex sounds along a continuous, low-to-high scale. Here, four ferrets (Mustela putorius) were trained on a two-alternative forced choice task to discriminate sounds that were higher or lower in F0 than a reference sound using pure tones and artificial vowels as stimuli. Average Weber fractions for ferrets on this task varied from approximately 20% to 80% across references (200-1200 Hz), and these fractions were similar for pure tones and vowels. These thresholds are approximately ten times higher than those typically reported for other mammals on frequency change detection tasks that use go/no-go designs. Naive human listeners outperformed ferrets on the present task, but they showed similar effects of stimulus type and reference F0. These results suggest that while non-human animals can be trained to label complex sounds as high or low in pitch, this task may be much more difficult for animals than simply detecting a frequency change.
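For orientation, the Weber fractions reported above translate into discriminable F0 changes as follows, assuming the usual definition W = ΔF0/F0 (the reference values below are chosen from the tested range purely for illustration):

```python
# Worked example of what the reported Weber fractions mean, assuming the
# usual definition W = (just-discriminable change in F0) / (reference F0).
for reference_f0 in (200, 600, 1200):            # Hz, within the tested range
    for weber_fraction in (0.2, 0.8):            # ~20% to ~80%, as reported for ferrets
        delta = weber_fraction * reference_f0
        print(f"ref {reference_f0:4d} Hz, W = {weber_fraction:.1f}: "
              f"threshold change of about {delta:.0f} Hz")
# By comparison, Weber fractions roughly ten times smaller (as in typical
# go/no-go frequency-change tasks) would imply thresholds of a few percent of F0.
```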
Affiliation(s)
- Kerry M M Walker
- Department of Physiology, Anatomy and Genetics, Sherrington Building, Parks Road, University of Oxford, Oxfordshire, United Kingdom.
26
Foxton JM, Weisz N, Bauchet-Lecaignard F, Delpuech C, Bertrand O. The neural bases underlying pitch processing difficulties. Neuroimage 2009; 45:1305-13. [PMID: 19349242 DOI: 10.1016/j.neuroimage.2008.10.068] [Citation(s) in RCA: 13] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2008] [Revised: 09/30/2008] [Accepted: 10/10/2008] [Indexed: 11/26/2022] Open
Abstract
Normal listeners are often surprisingly poor at processing pitch changes. The neural bases of this difficulty were explored using magnetoencephalography (MEG) by comparing participants who obtained poor thresholds on a pitch-direction task with those who obtained good thresholds. Source-space projected data revealed that during an active listening task, the poor threshold group displayed greater activity in the left auditory cortical region when determining the direction of small pitch glides, whereas there was no difference in the good threshold group. In a passive listening task, a mismatch response (MMNm) was identified for pitch-glide direction deviants, with a tendency to be smaller in the poor listeners. The results imply that the difficulties in pitch processing are already apparent during automatic sound processing, and furthermore suggest that left hemisphere auditory regions are used by these listeners to consciously determine the direction of a pitch change. This is in line with evidence that the left hemisphere has a poor frequency resolution, and implies that normal listeners may use the sub-optimal hemisphere to process pitch changes.
Affiliation(s)
- Jessica M Foxton
- INSERM U821, Lyon 1 University, Brain Dynamics and Cognition laboratory, Lyon, F-69500, France
27
Yin P, Mishkin M, Sutter M, Fritz JB. Early stages of melody processing: stimulus-sequence and task-dependent neuronal activity in monkey auditory cortical fields A1 and R. J Neurophysiol 2008; 100:3009-29. [PMID: 18842950 DOI: 10.1152/jn.00828.2007] [Citation(s) in RCA: 43] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
To explore the effects of acoustic and behavioral context on neuronal responses in the core of auditory cortex (fields A1 and R), two monkeys were trained on a go/no-go discrimination task in which they learned to respond selectively to a four-note target (S+) melody and withhold response to a variety of other nontarget (S-) sounds. We analyzed evoked activity from 683 units in A1/R of the trained monkeys during task performance and from 125 units in A1/R of two naive monkeys. We characterized two broad classes of neural activity that were modulated by task performance. Class I consisted of tone-sequence-sensitive enhancement and suppression responses. Enhanced or suppressed responses to specific tonal components of the S+ melody were frequently observed in trained monkeys, but enhanced responses were rarely seen in naive monkeys. Both facilitatory and suppressive responses in the trained monkeys showed a temporal pattern different from that observed in naive monkeys. Class II consisted of nonacoustic activity, characterized by a task-related component that correlated with bar release, the behavioral response leading to reward. We observed a significantly higher percentage of both Class I and Class II neurons in field R than in A1. Class I responses may help encode a long-term representation of the behaviorally salient target melody. Class II activity may reflect a variety of nonacoustic influences, such as attention, reward expectancy, somatosensory inputs, and/or motor set and may help link auditory perception and behavioral response. Both types of neuronal activity are likely to contribute to the performance of the auditory task.
Affiliation(s)
- Pingbo Yin
- Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Bethesda, Maryland, USA
28
Rahne T, Deike S, Selezneva E, Brosch M, König R, Scheich H, Böckmann M, Brechmann A. A multilevel and cross-modal approach towards neuronal mechanisms of auditory streaming. Brain Res 2008; 1220:118-31. [PMID: 17765207 DOI: 10.1016/j.brainres.2007.08.011] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2007] [Revised: 07/31/2007] [Accepted: 08/04/2007] [Indexed: 11/19/2022]
Abstract
We report the first results of a multilevel, cross-modal study of the neuronal mechanisms underlying auditory sequential streaming, focusing on how visual sequences affect perceptually ambiguous tone sequences that can be perceived either as two separate streams or as one alternating stream. We combined two psychophysical experiments, performed on humans and monkeys, with two human brain-imaging experiments that provide complementary information on brain activation with high spatial (fMRI) and high temporal (MEG) resolution. The same acoustic paradigm, based on pairing tone sequences with visual stimuli, was used in all human studies and, in an adapted version, in the psychophysical study on monkeys. Our multilevel approach provides experimental evidence that pairing auditory and visual stimuli can reliably bias the perception of ambiguous sequences towards either an integrated or a segregated organization. Thus, comparable to an explicit instruction, this approach can be used to control a subject's perceptual organization of an ambiguous sound sequence without requiring the subject to report it directly. This finding is of particular importance for animal studies because it allows electrophysiological responses of auditory cortex neurons to be compared for the same acoustic stimulus sequence when it elicits either a segregated or an integrated percept.
Affiliation(s)
- Torsten Rahne
- Department of Experimental Audiology and Medical Physics, Otto-von-Guericke-University Magdeburg, Germany.
29
Brosch M, Scheich H. Tone-sequence analysis in the auditory cortex of awake macaque monkeys. Exp Brain Res 2007; 184:349-61. [PMID: 17851656 DOI: 10.1007/s00221-007-1109-7] [Citation(s) in RCA: 44] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2006] [Accepted: 08/13/2007] [Indexed: 11/24/2022]
Abstract
The present study analyzed neuronal responses to two-tone sequences in the auditory cortex of three awake macaque monkeys. The monkeys were passively exposed to 430 different two-tone sequences, in which the frequency of the first tone and the interval between the first and the second tone in the sequence were systematically varied. The frequency of the second tone remained constant and was matched to the single-tone frequency sensitivity of the neurons. Multiunit activity was recorded from 109 sites in the primary auditory cortex and posterior auditory belt. We found that the first tone in the sequence could inhibit or facilitate the response to the second tone. Type and magnitude of poststimulatory effects depended on the sequence parameters and were related to the single-tone frequency sensitivity of neurons, similar to previous observations in the auditory cortex of anesthetized animals. This suggests that some anesthetics produce, at the most, moderate changes of poststimulatory inhibition and facilitation in the auditory cortex. Hence many properties of the sequence-sensitivity of neurons in the auditory cortex measured in anesthetized preparations can be applied to neurons in the auditory cortex of awake subjects.
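As a minimal sketch of how such a factorial stimulus set can be laid out (the specific frequency values, interval values, and the 43 × 10 factorization below are assumptions chosen only so the combinations total 430; the abstract does not give the actual factor levels):

```python
import numpy as np

# Second tone fixed at the recorded unit's best frequency (assumed value);
# the first tone's frequency and the silent inter-tone interval are varied.
best_freq_hz = 2000.0
first_tone_hz = best_freq_hz * 2.0 ** np.linspace(-2.0, 2.0, 43)   # e.g. +/- 2 octaves in 43 steps
inter_tone_ms = [0, 25, 50, 100, 200, 300, 400, 600, 800, 1000]    # 10 illustrative intervals

stimuli = [(f1, best_freq_hz, isi) for f1 in first_tone_hz for isi in inter_tone_ms]
print(len(stimuli))   # 43 * 10 = 430 two-tone sequences, matching the count in the abstract
```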
Affiliation(s)
- Michael Brosch
- Leibniz-Institut für Neurobiologie, Brenneckestrasse 6, 39118 Magdeburg, Germany.
30
Fritz JB, Elhilali M, David SV, Shamma SA. Auditory attention—focusing the searchlight on sound. Curr Opin Neurobiol 2007; 17:437-55. [PMID: 17714933 DOI: 10.1016/j.conb.2007.07.011] [Citation(s) in RCA: 290] [Impact Index Per Article: 17.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2007] [Accepted: 07/12/2007] [Indexed: 10/22/2022]
Abstract
Some fifty years after the first physiological studies of auditory attention, the field is now ripening, with exciting recent insights into the psychophysics, psychology, and neural basis of auditory attention. Current research seeks to unravel the complex interactions of pre-attentive and attentive processing of the acoustic scene, the role of auditory attention in mediating receptive-field plasticity in both auditory spatial and auditory feature processing, the contrasts and parallels between auditory and visual attention pathways and mechanisms, the interplay of bottom-up and top-down attentional mechanisms, the influential role of attention, goals, and expectations in shaping auditory processing, and the orchestration of diverse attentional effects at multiple levels from the cochlea to the cortex.
Affiliation(s)
- Jonathan B Fritz
- Centre for Auditory and Acoustic Research, Institute for Systems Research, University of Maryland, College Park, MD 20742, USA.
31
Selezneva E, Scheich H, Brosch M. Dual time scales for categorical decision making in auditory cortex. Curr Biol 2006; 16:2428-33. [PMID: 17174917 DOI: 10.1016/j.cub.2006.10.027] [Citation(s) in RCA: 70] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2006] [Revised: 10/06/2006] [Accepted: 10/09/2006] [Indexed: 11/29/2022]
Abstract
Category formation allows us to group perceptual objects into meaningful classes and is fundamental to cognition. Categories can be derived from similarity relationships of object features by using prototypes or multiple exemplars, or from abstract relationships of features and rules. A variety of brain areas have been implicated in categorization processes, but mechanistic insights on the single-cell and local-network level are still rare and limited to the matching of individual objects to categories. For directional categorization of tone steps, as in melody recognition, abstract relationships between sequential events (higher or lower in frequency) have to be formed. To explore the neuronal mechanisms of this categorical identification of step direction, we trained monkeys for more than two years on a contour-discrimination task with multiple tone sequences. In the auditory cortex of these highly trained monkeys, we identified two interrelated types of neuronal firing: increased phasic responses to tones categorically represented the reward-predicting downward frequency steps and not upward steps; subsequently, slow modulations of tonic firing predicted the behavioral decisions of the monkeys, including errors. Our results on neuronal mechanisms of categorical stimulus identification and of decision making attribute a cognitive role to auditory cortex, in addition to its role in signal processing.
Affiliation(s)
- Elena Selezneva
- Leibniz-Institut für Neurobiologie, Brenneckestrasse 6, 39118 Magdeburg, Germany
32
Scheich H, Brechmann A, Brosch M, Budinger E, Ohl FW. The cognitive auditory cortex: task-specificity of stimulus representations. Hear Res 2007; 229:213-24. [PMID: 17368987 DOI: 10.1016/j.heares.2007.01.025] [Citation(s) in RCA: 80] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/28/2006] [Revised: 01/17/2007] [Accepted: 01/31/2007] [Indexed: 11/20/2022]
Abstract
Auditory cortex (AC), like the subcortical auditory nuclei, represents properties of auditory stimuli by spatiotemporal activation patterns across neurons. A tacit assumption of AC research has been that the multiplicity of functional maps in primary and secondary areas serves as a refined continuation of subcortical stimulus processing, i.e. a parallel, orderly analysis of distinct properties of a complex sound. This view, derived mainly from exposure to parametric sound variation, may not fully capture the essence of cortical processing. Neocortex, despite its parcellation into diverse sensory, motor, associative, and cognitive areas, exhibits a rather stereotyped local architecture. The columnar arrangement of the neocortex and the quantitatively dominant connectivity with numerous other cortical areas are two of its key features. This suggests that cortex has a rather common function that lies beyond those usually leading to the distinction of functional areas. We propose that task-relatedness of the way in which information is represented in cortex is one general consequence of this architecture and corticocortical connectivity. Specifically, this hypothesis predicts different spatiotemporal representations of auditory stimuli when the concepts and strategies used to analyse them change. We describe, in an exemplary fashion, cortical patterns of local field potentials in gerbils, of unit spiking activity in monkeys, and of fMRI signals in human AC during the execution of different tasks, mainly in the realm of category formation of sounds. We demonstrate that these representations reflect context- and memory-related, conceptual, and executional aspects of a task and that they can predict the behavioural outcome.
Affiliation(s)
- Henning Scheich
- Leibniz Institute for Neurobiology, Department of Auditory Learning and Speech, Magdeburg, Germany.
33
Abstract
Empirical data have recently begun to inform debates on the evolutionary origins of music. In this paper we discuss some of our recent findings and related theoretical issues. We claim that theories of the origins of music will be usefully constrained if we can determine which aspects of music perception are innate, and, of those, which are uniquely human and specific to music. Comparative research in nonhuman animals, particularly nonhuman primates, is thus critical to the debate. In this paper we focus on the preferences that characterize most humans' experience of music, testing whether similar preferences exist in nonhuman primates. Our research suggests that many rudimentary acoustic preferences, such as those for consonant over dissonant intervals, may be unique to humans. If these preferences prove to be innate in humans, they may be candidates for music-specific adaptations. To establish whether such preferences are innate in humans, one important avenue for future research will be the collection of data from different cultures. This may be facilitated by studies conducted over the internet.
Affiliation(s)
- Josh McDermott
- Perceptual Science Group, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, NE20-444, 3 Cambridge Center, Cambridge, MA 02139.
34
Brosch M, Oshurkova E, Bucks C, Scheich H. Influence of tone duration and intertone interval on the discrimination of frequency contours in a macaque monkey. Neurosci Lett 2006; 406:97-101. [PMID: 16901633 DOI: 10.1016/j.neulet.2006.07.021] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2006] [Revised: 06/28/2006] [Accepted: 07/09/2006] [Indexed: 10/24/2022]
Abstract
Behavioral studies have shown that non-human primates can categorically discriminate descending from ascending frequency steps in sequences of pure tones. Here we show that the performance of a long-tailed macaque remains stable in such a task when the silent interval between the tones of a frequency step is varied between 0 and 1100 ms. Our finding suggests that (1) some monkeys can keep frequency-specific information in short-term memory for periods >1 s and use it to make categorical decisions on the direction of frequency steps, and that (2) their ability to categorize the direction of frequency steps may be more similar to that of humans than previously assumed.
Affiliation(s)
- Michael Brosch
- Leibniz-Institut für Neurobiologie, Brenneckestrasse 6, 39118 Magdeburg, Germany.
35
Bendor D, Wang X. Cortical representations of pitch in monkeys and humans. Curr Opin Neurobiol 2006; 16:391-9. [PMID: 16842992 PMCID: PMC4325365 DOI: 10.1016/j.conb.2006.07.001] [Citation(s) in RCA: 79] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2006] [Accepted: 07/03/2006] [Indexed: 10/24/2022]
Abstract
Pitch perception is crucial for vocal communication, music perception, and auditory object processing in a complex acoustic environment. How pitch is represented in the cerebral cortex has for a long time remained an unanswered question in auditory neuroscience. Several lines of evidence now point to a distinct non-primary region of auditory cortex in primates that contains a cortical representation of pitch.
Affiliation(s)
- Daniel Bendor
- Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
36
Brosch M, Selezneva E, Scheich H. Nonauditory events of a behavioral procedure activate auditory cortex of highly trained monkeys. J Neurosci 2005; 25:6797-806. [PMID: 16033889 PMCID: PMC6725347 DOI: 10.1523/jneurosci.1571-05.2005] [Citation(s) in RCA: 216] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
A central tenet of brain research is that early sensory cortex is modality specific and is influenced by other modalities only in exceptional cases, such as deaf or blind subjects or professional musicians. Here we describe extensive cross-modal activation in the auditory cortex of two monkeys while they performed a demanding auditory categorization task: after a cue light was turned on, monkeys could initiate a tone sequence by touching a bar and then earn a reward by releasing the bar on occurrence of a falling frequency contour in the sequence. In their primary auditory cortex and posterior belt areas, we found many acoustically responsive neurons whose firing was synchronized to the cue light or to the touch or release of the bar. Of 315 multiunits, 45 exhibited cue light-related firing, 194 exhibited firing related to bar touch, and 268 exhibited firing related to bar release. Among 60 single units, we found one neuron with cue light-related firing, 21 with bar touch-related firing, and 36 with release-related firing. This firing disappeared at individual sites when the monkeys performed a visual detection task. Our findings corroborate and extend recent reports of cross-modal activation in the auditory cortex and suggest that the auditory cortex can be activated by visual and somatosensory stimulation and by movements. We speculate that this multimodal corepresentation in the auditory cortex has arisen from the subjects' intensive practice with the behavioral procedure and that it facilitates the performance of audiomotor tasks in proficient subjects.
Affiliation(s)
- Michael Brosch
- Leibniz-Institut für Neurobiologie, 39118 Magdeburg, Germany.
37
Abstract
We review the literature on infants' perception of pitch and temporal patterns, relating it to comparable research with human adult and non-human listeners. Although there are parallels in relative pitch processing across age and species, there are notable differences. Infants accomplish such tasks with ease, but non-human listeners require extensive training to achieve very modest levels of performance. In general, human listeners process auditory sequences in a holistic manner, and non-human listeners focus on absolute aspects of individual tones. Temporal grouping processes and categorization on the basis of rhythm are evident in non-human listeners and in human infants and adults. Although synchronization to sound patterns is thought to be uniquely human, tapping to music, synchronous firefly flashing, and other cyclic behaviors can be described by similar mathematical principles. We conclude that infants' music perception skills are a product of general perceptual mechanisms that are neither music- nor species-specific. Along with general-purpose mechanisms for the perceptual foundations of music, we suggest unique motivational mechanisms that can account for the perpetuation of musical behavior in all human societies.
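One commonly used formalization of such cyclic behaviors (not named in the abstract, and offered here only as an illustrative sketch) treats each behavior, whether a tap or a flash cycle, as a phase oscillator that adjusts its rate toward an external rhythm:

$$\frac{d\varphi}{dt} = \omega + K \sin\bigl(\Phi(t) - \varphi\bigr),$$

where $\varphi$ is the oscillator's phase, $\omega$ its preferred rate, $\Phi(t)$ the phase of the driving rhythm, and $K$ a coupling strength; stable synchronization emerges when the coupling term is large enough to cancel the mismatch between $\omega$ and the driving rate.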
Affiliation(s)
- Sandra E Trehub
- Department of Psychology, University of Toronto at Mississauga, Ont., Canada.