1. Bellmann OT, Asano R. Neural correlates of musical timbre: an ALE meta-analysis of neuroimaging data. Front Neurosci 2024; 18:1373232. PMID: 38952924. PMCID: PMC11215185. DOI: 10.3389/fnins.2024.1373232.
Abstract
Timbre is a central aspect of music: it allows listeners to identify musical sounds, conveys musical emotion, enables the recognition of actions, and is an important structuring property of music. The former functions are known to be implemented in a ventral auditory stream. While the latter functions are commonly attributed to areas in a dorsal auditory processing stream in other musical domains, the dorsal stream's involvement in musical timbre processing has so far been unknown. To investigate whether musical timbre processing involves both dorsal and ventral auditory pathways, we carried out an activation likelihood estimation (ALE) meta-analysis of 18 experiments from 17 published neuroimaging studies on musical timbre perception. We identified consistent activations in Brodmann areas (BA) 41, 42, and 22 in the bilateral transverse temporal gyri, the posterior superior temporal gyri, and the planum temporale; in BA 40 of the bilateral inferior parietal lobe; in BA 13 in the bilateral posterior insula; and in BA 13 and 22 in the right anterior insula and superior temporal gyrus. The vast majority of the identified regions are associated with the dorsal and ventral auditory processing streams. We therefore propose to frame the processing of musical timbre in a dual-stream model. Moreover, the regions activated in processing timbre show similarities to the brain regions involved in processing several other fundamental aspects of music, indicating possible shared neural bases of musical timbre and other musical domains.
Affiliation(s)
- Rie Asano
- Systematic Musicology, Institute for Musicology, University of Cologne, Cologne, Germany
2. The rediscovered motor-related area 55b emerges as a core hub of music perception. Commun Biol 2022; 5:1104. PMID: 36257973. PMCID: PMC9579133. DOI: 10.1038/s42003-022-04009-0.
Abstract
Passive listening to music, without sound production or evident movement, is long known to activate motor control regions. Nevertheless, the exact neuroanatomical correlates of the auditory-motor association and its underlying neural mechanisms have not been fully determined. Here, based on a NeuroSynth meta-analysis and three original fMRI paradigms of music perception, we show that the long-ignored pre-motor region, area 55b, an anatomically unique and functionally intriguing region, is a core hub of music perception. Moreover, results of a brain-behavior correlation analysis implicate neural entrainment as the underlying mechanism of area 55b’s contribution to music perception. In view of the current results and prior literature, area 55b is proposed as a keystone of sensorimotor integration, a fundamental brain machinery underlying simple to hierarchically complex behaviors. Refining the neuroanatomical and physiological understanding of sensorimotor integration is expected to have a major impact on various fields, from brain disorders to artificial general intelligence. Functional magnetic resonance imaging data acquired during passive listening to music suggest that pre-motor area 55b acts as a core hub of music processing in humans.
3. Knipper M, Mazurek B, van Dijk P, Schulze H. Too Blind to See the Elephant? Why Neuroscientists Ought to Be Interested in Tinnitus. J Assoc Res Otolaryngol 2021; 22:609-621. PMID: 34686939. PMCID: PMC8599745. DOI: 10.1007/s10162-021-00815-1.
Abstract
A curative therapy for tinnitus currently does not exist. One may actually exist but cannot currently be causally linked to tinnitus due to the lack of consistency among concepts of the neural correlate of tinnitus. Depending on predictions, these concepts would require either a suppression or enhancement of brain activity, or an increase in inhibition or disinhibition. Although procedures with a potential to silence tinnitus may exist, the lack of a rationale for their curative success hampers the optimization of therapeutic protocols. We discuss here six candidate contributors to tinnitus that have been suggested by a variety of scientific experts in the field and that were addressed in a virtual panel discussion at the ARO round table in February 2021. In this discussion, several potential tinnitus contributors were considered: (i) inhibitory circuits, (ii) attention, (iii) stress, (iv) unidentified sub-entities, (v) maladaptive information transmission, and (vi) minor cochlear deafferentation. Finally, (vii) some potential therapeutic approaches were discussed. The results of this discussion are reflected here in view of potential blind spots that may still remain and that have been ignored in most tinnitus literature. We strongly suggest connecting these controversial findings to unravel the full complexity of the tinnitus phenomenon, an essential prerequisite for establishing suitable therapeutic approaches.
Affiliation(s)
- Marlies Knipper
- Molecular Physiology of Hearing, Tübingen Hearing Research Centre (THRC), Department of Otolaryngology, Head & Neck Surgery, University of Tübingen, Elfriede-Aulhorn-Straße 5, 72076 Tübingen, Germany
- Birgit Mazurek
- Tinnitus Center Charité, Universitätsmedizin Berlin, Berlin, Germany
- Pim van Dijk
- Department of Otorhinolaryngology/Head and Neck Surgery, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
- Graduate School of Medical Sciences (Research School of Behavioural and Cognitive Neurosciences), University of Groningen, Groningen, The Netherlands
- Holger Schulze
- Experimental Otolaryngology, Friedrich-Alexander Universität Erlangen-Nürnberg, Waldstrasse 1, 91054 Erlangen, Germany
4. Fox NP, Leonard M, Sjerps MJ, Chang EF. Transformation of a temporal speech cue to a spatial neural code in human auditory cortex. eLife 2020; 9:e53051. PMID: 32840483. PMCID: PMC7556862. DOI: 10.7554/eLife.53051.
Abstract
In speech, listeners extract continuously varying spectrotemporal cues from the acoustic signal to perceive discrete phonetic categories. Spectral cues are spatially encoded in the amplitude of responses in phonetically tuned neural populations in auditory cortex. It remains unknown whether similar neurophysiological mechanisms encode temporal cues like voice-onset time (VOT), which distinguishes sounds like /b/ and /p/. We used direct brain recordings in humans to investigate the neural encoding of temporal speech cues with a VOT continuum from /ba/ to /pa/. We found that distinct neural populations respond preferentially to VOTs from one phonetic category, and are also sensitive to sub-phonetic VOT differences within a population's preferred category. In a simple neural network model, simulated populations tuned to detect either temporal gaps or coincidences between spectral cues captured encoding patterns observed in real neural data. These results demonstrate that a spatial/amplitude neural code underlies the cortical representation of both spectral and temporal speech cues.
Affiliation(s)
- Neal P Fox
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, United States
- Matthew Leonard
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, United States
- Matthias J Sjerps
- Donders Institute for Brain, Cognition and Behaviour, Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Edward F Chang
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, United States
- Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, United States
5. Archakov D, DeWitt I, Kuśmierek P, Ortiz-Rios M, Cameron D, Cui D, Morin EL, VanMeter JW, Sams M, Jääskeläinen IP, Rauschecker JP. Auditory representation of learned sound sequences in motor regions of the macaque brain. Proc Natl Acad Sci U S A 2020; 117:15242-15252. PMID: 32541016. PMCID: PMC7334521. DOI: 10.1073/pnas.1915610117.
Abstract
Human speech production requires the ability to couple motor actions with their auditory consequences. Nonhuman primates might not have speech because they lack this ability. To address this question, we trained macaques to perform an auditory-motor task, producing sound sequences via hand presses on a newly designed device ("monkey piano"). Catch trials were interspersed to ascertain that the monkeys were listening to the sounds they produced. Functional MRI was then used to map brain activity while the animals listened attentively to the sound sequences they had learned to produce and to two control sequences, which were either completely unfamiliar or familiar through passive exposure only. All sounds activated auditory midbrain and cortex, but listening to the sequences learned by self-production additionally activated the putamen and the hand and arm regions of motor cortex. These results indicate that, in principle, monkeys are capable of forming internal models linking sound perception and production in motor regions of the brain, so this ability is not special to speech in humans. However, the coupling of sounds and actions in nonhuman primates (and the availability of an internal model supporting it) seems not to extend to the upper vocal tract, that is, the supralaryngeal articulators, which are key for the production of speech sounds in humans. The origin of speech may have required the evolution of a "command apparatus" similar to the control of the hand, which was crucial for the evolution of tool use.
Affiliation(s)
- Denis Archakov
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, FI-02150 Espoo, Finland
- Iain DeWitt
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057
- Paweł Kuśmierek
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057
- Michael Ortiz-Rios
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057
- Daniel Cameron
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057
- Ding Cui
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057
- Elyse L Morin
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057
- John W VanMeter
- Center for Functional and Molecular Imaging, Georgetown University Medical Center, Washington, DC 20057
- Mikko Sams
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, FI-02150 Espoo, Finland
- Iiro P Jääskeläinen
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, FI-02150 Espoo, Finland
- Josef P Rauschecker
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057
6. Sihvonen AJ, Särkämö T, Rodríguez-Fornells A, Ripollés P, Münte TF, Soinila S. Neural architectures of music - Insights from acquired amusia. Neurosci Biobehav Rev 2019; 107:104-114. PMID: 31479663. DOI: 10.1016/j.neubiorev.2019.08.023.
Abstract
The ability to perceive and produce music is a quintessential element of human life, present in all known cultures. Modern functional neuroimaging has revealed that music listening activates a large-scale bilateral network of cortical and subcortical regions in the healthy brain. Yet even the most accurate structural studies do not reveal which brain areas are critical and causally linked to music processing. Such questions may be answered by analysing the effects of focal brain lesions on patients' ability to perceive music. In this sense, acquired amusia after stroke provides a unique opportunity to investigate the neural architectures crucial for normal music processing. Based on the first large-scale longitudinal studies on stroke-induced amusia using modern multi-modal magnetic resonance imaging (MRI) techniques, such as advanced lesion-symptom mapping, grey and white matter morphometry, tractography, and functional connectivity, we discuss neural structures critical for music processing, consider music processing in light of the dual-stream model in the right hemisphere, and propose a neural model for acquired amusia.
Affiliation(s)
- Aleksi J Sihvonen
- Department of Neurosciences, University of Helsinki, Finland; Cognitive Brain Research Unit, Department of Psychology and Logopedics, University of Helsinki, Finland
- Teppo Särkämö
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, University of Helsinki, Finland
- Antoni Rodríguez-Fornells
- Department of Cognition, University of Barcelona, Cognition & Brain Plasticity Unit, Bellvitge Biomedical Research Institute (IDIBELL), Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
- Pablo Ripollés
- Department of Psychology, New York University and Music and Audio Research Laboratory, New York University, USA
- Thomas F Münte
- Department of Neurology and Institute of Psychology II, University of Lübeck, Germany
- Seppo Soinila
- Division of Clinical Neurosciences, Turku University Hospital, Department of Neurology, University of Turku, Finland
8. Green B, Jääskeläinen IP, Sams M, Rauschecker JP. Distinct brain areas process novel and repeating tone sequences. Brain Lang 2018; 187:104-114. PMID: 30278992. DOI: 10.1016/j.bandl.2018.09.006.
Abstract
The auditory dorsal stream has been implicated in sensorimotor integration and concatenation of sequential sound events, both being important for processing of speech and music. The auditory ventral stream, by contrast, is characterized as subserving sound identification and recognition. We studied the respective roles of the dorsal and ventral streams, including recruitment of basal ganglia and medial temporal lobe structures, in the processing of tone sequence elements. A sequence was presented incrementally across several runs during functional magnetic resonance imaging in humans, and we compared activation by sequence elements when heard for the first time ("novel") versus when the elements were repeating ("familiar"). Our results show a shift in tone-sequence-dependent activation from posterior-dorsal cortical areas and the basal ganglia during the processing of less familiar sequence elements towards anterior and ventral cortical areas and the medial temporal lobe after the encoding of highly familiar sequence elements into identifiable auditory objects.
Affiliation(s)
- Brannon Green
- Laboratory of Integrative Neuroscience and Cognition, Interdisciplinary Program in Neuroscience, Georgetown University Medical Center, 3970 Reservoir Road NW, New Research Building-WP19, Washington, DC 20007, USA
- Iiro P Jääskeläinen
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, School of Science, Aalto University, 00076 AALTO Espoo, Finland; AMI Centre, Aalto NeuroImaging, Aalto University, Finland
- Mikko Sams
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, School of Science, Aalto University, 00076 AALTO Espoo, Finland
- Josef P Rauschecker
- Laboratory of Integrative Neuroscience and Cognition, Interdisciplinary Program in Neuroscience, Georgetown University Medical Center, 3970 Reservoir Road NW, New Research Building-WP19, Washington, DC 20007, USA; Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, School of Science, Aalto University, Espoo, Finland; Institute for Advanced Study, TUM, Munich-Garching, 80333 Munich, Germany
9. Rauschecker JP. Where did language come from? Precursor mechanisms in nonhuman primates. Curr Opin Behav Sci 2018; 21:195-204. PMID: 30778394. PMCID: PMC6377164. DOI: 10.1016/j.cobeha.2018.06.003.
Abstract
At first glance, the monkey brain looks like a smaller version of the human brain. Indeed, the anatomical and functional architecture of the cortical auditory system in monkeys is very similar to that of humans, with dual pathways segregated into a ventral and a dorsal processing stream. Yet, monkeys do not speak. Repeated attempts to pin this inability on one particular cause have failed. A closer look at the necessary components of language, according to Darwin, reveals that all of them got a significant boost during evolution from nonhuman to human primates. The vocal-articulatory system, in particular, has developed into the most sophisticated of all human sensorimotor systems with about a dozen effectors that, in combination with each other, result in an auditory communication system like no other. This sensorimotor network possesses all the ingredients of an internal model system that permits the emergence of sequence processing, as required for phonology and syntax in modern languages.
Affiliation(s)
- Josef P Rauschecker
- Department of Neuroscience, Georgetown University, Washington, DC 20057, USA
10. Rauschecker JP. Where, When, and How: Are they all sensorimotor? Towards a unified view of the dorsal pathway in vision and audition. Cortex 2018; 98:262-268. PMID: 29183630. PMCID: PMC5771843. DOI: 10.1016/j.cortex.2017.10.020.
Abstract
Dual processing streams in sensory systems have been postulated for a long time. Much experimental evidence has accumulated from behavioral, neuropsychological, electrophysiological, neuroanatomical, and neuroimaging work supporting the existence of largely segregated cortical pathways in both vision and audition. More recently, debate has returned to the question of overlap between these pathways and whether there are really more than two processing streams. The present piece defends the dual-system view. Focusing on the functions of the dorsal stream in the auditory and language system, I try to reconcile the various models of Where, How, and When into one coherent concept of sensorimotor integration. This framework incorporates principles of internal models in feedback control systems and is applicable to the visual system as well.
Affiliation(s)
- Josef P Rauschecker
- Laboratory of Integrative Neuroscience and Cognition, Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA; Institute for Advanced Study, Technische Universität München, Garching bei München, Germany
11. Tracting the neural basis of music: Deficient structural connectivity underlying acquired amusia. Cortex 2017; 97:255-273. DOI: 10.1016/j.cortex.2017.09.028.
12. Sihvonen AJ, Särkämö T, Ripollés P, Leo V, Saunavaara J, Parkkola R, Rodríguez-Fornells A, Soinila S. Functional neural changes associated with acquired amusia across different stages of recovery after stroke. Sci Rep 2017; 7:11390. PMID: 28900231. PMCID: PMC5595783. DOI: 10.1038/s41598-017-11841-6.
Abstract
Brain damage causing acquired amusia disrupts the functional music processing system, creating a unique opportunity to investigate the critical neural architectures of musical processing in the brain. In this longitudinal fMRI study of stroke patients (N = 41) with a 6-month follow-up, we used natural vocal music (sung with lyrics) and instrumental music stimuli to uncover brain activation and functional network connectivity changes associated with acquired amusia and its recovery. In the acute stage, amusic patients exhibited decreased activation in right superior temporal areas compared to non-amusic patients during instrumental music listening. During the follow-up, the activation deficits expanded to comprise a widespread bilateral frontal, temporal, and parietal network. The amusics showed fewer activation deficits to vocal music, suggesting preserved processing of singing in the amusic brain. Compared to non-recovered amusics, recovered amusics showed increased activation to instrumental music in bilateral frontoparietal areas at 3 months and in right middle and inferior frontal areas at 6 months. Amusia recovery was also associated with increased functional connectivity in right and left frontoparietal attention networks to instrumental music. Overall, our findings reveal the dynamic nature of deficient activation and connectivity patterns in acquired amusia and highlight the role of dorsal networks in amusia recovery.
Affiliation(s)
- Aleksi J Sihvonen
- Faculty of Medicine, University of Turku, 20520 Turku, Finland; Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, 00014 Helsinki, Finland
- Teppo Särkämö
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, 00014 Helsinki, Finland
- Pablo Ripollés
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, 08907 Barcelona, Spain; Department of Cognition, Development and Education Psychology, University of Barcelona, 08035 Barcelona, Spain; Poeppel Lab, Department of Psychology, New York University, 10003, NY, USA
- Vera Leo
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, 00014 Helsinki, Finland
- Jani Saunavaara
- Department of Medical Physics, Turku University Hospital, 20521 Turku, Finland
- Riitta Parkkola
- Department of Radiology, Turku University and Turku University Hospital, 20521 Turku, Finland
- Antoni Rodríguez-Fornells
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, 08907 Barcelona, Spain; Department of Cognition, Development and Education Psychology, University of Barcelona, 08035 Barcelona, Spain; Catalan Institution for Research and Advanced Studies (ICREA), Barcelona, Spain
- Seppo Soinila
- Division of Clinical Neurosciences, Turku University Hospital and Department of Neurology, University of Turku, 20521 Turku, Finland
13. Verbal and musical short-term memory: Variety of auditory disorders after stroke. Brain Cogn 2017; 113:10-22. PMID: 28088063. DOI: 10.1016/j.bandc.2017.01.003.
Abstract
Auditory cognitive deficits after stroke may concern language and/or music processing, resulting in aphasia and/or amusia. The aim of the present study was to assess potential deficits of auditory short-term memory for verbal and musical material after stroke, and their underlying cerebral correlates, with a voxel-based lesion-symptom mapping (VLSM) approach. Patients with an ischemic stroke in the right (N = 10) or left (N = 10) middle cerebral artery territory and matched control participants (N = 14) were tested with a detailed neuropsychological assessment including global cognitive functions, music perception, and language tasks. All participants then performed verbal and musical auditory short-term memory (STM) tasks that were implemented in the same way for both materials: participants had to indicate whether series of four words or four tones, presented in pairs, were the same or different. To detect domain-general STM deficits, they also performed a visual STM task. Behavioral results showed that patients had lower performance on the STM tasks than control participants, regardless of the material (words, tones, visual) and the lesion side. The individual patient data showed a double dissociation, with some patients exhibiting verbal deficits without musical deficits or the reverse. Exploratory VLSM analyses suggested that dorsal pathways are involved in verbal (phonetic), musical (melodic), and visual STM, while the ventral auditory pathway is involved in musical STM.
14. Effenberg AO, Fehse U, Schmitz G, Krueger B, Mechling H. Movement Sonification: Effects on Motor Learning beyond Rhythmic Adjustments. Front Neurosci 2016; 10:219. PMID: 27303255. PMCID: PMC4883456. DOI: 10.3389/fnins.2016.00219.
Abstract
Motor learning is based on motor perception and emergent perceptual-motor representations. Much behavioral research has addressed single perceptual modalities, but over the last two decades the contribution of multimodal perception to motor behavior has been increasingly recognized. A growing number of studies indicates an enhanced impact of multimodal stimuli on motor perception, motor control, and motor learning, in terms of better precision and higher reliability of the related actions. Behavioral research is supported by neurophysiological data revealing that multisensory integration supports motor control and learning. However, the overwhelming part of both research lines is dedicated to basic research. Beyond the domains of music, dance, and motor rehabilitation, there is almost no evidence for enhanced effectiveness of multisensory information on the learning of gross motor skills. To reduce this gap, movement sonification is used here in applied research on motor learning in sports. Based on current knowledge of the multimodal organization of the perceptual system, we generate additional real-time movement information suitable for integration with perceptual feedback streams of the visual and proprioceptive modalities. With ongoing training, synchronously processed auditory information should be integrated into the emerging internal models, enhancing the efficacy of motor learning. This is achieved by directly mapping kinematic and dynamic motion parameters to electronic sounds, resulting in continuous auditory and convergent audiovisual or audio-proprioceptive stimulus arrays. In sharp contrast to approaches using acoustic information as error feedback in motor learning settings, we try to generate additional movement information suitable for accelerating and enhancing adequate sensorimotor representations, and processable below the level of consciousness.
In the experimental setting, participants were asked to learn a closed motor skill (technique acquisition of indoor rowing). One group was treated with visual information and two groups with audiovisual information (sonification vs. natural sounds). For all three groups learning became evident and remained stable. Participants treated with additional movement sonification showed better performance than both other groups. Results indicate that movement sonification enhances motor learning of a complex gross motor skill, even exceeding the usually expected acoustic rhythmic effects on motor learning.
Affiliation(s)
- Alfred O Effenberg
- Faculty of Humanities, Institute of Sports Science, Leibniz Universität Hannover, Hanover, Germany
- Ursula Fehse
- Faculty of Humanities, Institute of Sports Science, Leibniz Universität Hannover, Hanover, Germany
- Gerd Schmitz
- Faculty of Humanities, Institute of Sports Science, Leibniz Universität Hannover, Hanover, Germany
- Bjoern Krueger
- Institute of Computer Science II, Faculty of Mathematics and Natural Sciences, University of Bonn, Bonn, Germany
- Heinz Mechling
- Institute of Sport Gerontology, German Sport University Cologne, Cologne, Germany
15. Belyk M, Pfordresher PQ, Liotti M, Brown S. The Neural Basis of Vocal Pitch Imitation in Humans. J Cogn Neurosci 2015; 28:621-635. PMID: 26696298. DOI: 10.1162/jocn_a_00914.
Abstract
Vocal imitation is a phenotype that is unique to humans among all primate species, and so an understanding of its neural basis is critical in explaining the emergence of both speech and song in human evolution. Two principal neural models of vocal imitation have emerged from a consideration of nonhuman animals. One hypothesis suggests that putative mirror neurons in the inferior frontal gyrus pars opercularis of Broca's area may be important for imitation. An alternative hypothesis derived from the study of songbirds suggests that the corticostriate motor pathway performs sensorimotor processes that are specific to vocal imitation. Using fMRI with a sparse event-related sampling design, we investigated the neural basis of vocal imitation in humans by comparing imitative vocal production of pitch sequences with both nonimitative vocal production and pitch discrimination. The strongest difference between these tasks was found in the putamen bilaterally, providing a striking parallel to the role of the analogous region in songbirds. Other areas preferentially activated during imitation included the orofacial motor cortex, Rolandic operculum, and SMA, which together outline the corticostriate motor loop. No differences were seen in the inferior frontal gyrus. The corticostriate system thus appears to be the central pathway for vocal imitation in humans, as predicted from an analogy with songbirds.
16
Rauschecker JP. Auditory and visual cortex of primates: a comparison of two sensory systems. Eur J Neurosci 2015; 41:579-85. [PMID: 25728177] [DOI: 10.1111/ejn.12844]
Abstract
A comparative view of the brain, comparing related functions across species and sensory systems, offers a number of advantages. In particular, it allows separation of the formal purpose of a model structure from its implementation in specific brains. Models of auditory cortical processing can be conceived by analogy to the visual cortex, incorporating neural mechanisms that are found in both the visual and auditory systems. Examples of such canonical features at the columnar level are direction selectivity, size/bandwidth selectivity, and receptive fields with segregated vs. overlapping ON and OFF subregions. On a larger scale, parallel processing pathways have been envisioned that represent the two main facets of sensory perception: (i) identification of objects; and (ii) processing of space. Expanding this model in terms of sensorimotor integration and control offers an overarching view of cortical function independently of sensory modality.
Affiliation(s)
- Josef P Rauschecker
- Department of Neuroscience, Georgetown University Medical Center, NRB WP19, 3970 Reservoir Rd NW, Washington, DC, 20057-1460, USA; Institute for Advanced Study, Technische Universität München, Garching, Germany
17
Harris R, de Jong BM. Differential parietal and temporal contributions to music perception in improvising and score-dependent musicians, an fMRI study. Brain Res 2015. [DOI: 10.1016/j.brainres.2015.06.050]
18
Lee M, Blake R, Kim S, Kim CY. Melodic sound enhances visual awareness of congruent musical notes, but only if you can read music. Proc Natl Acad Sci U S A 2015; 112:8493-8. [PMID: 26077907] [PMCID: PMC4500286] [DOI: 10.1073/pnas.1509529112]
Abstract
Predictive influences of auditory information on resolution of visual competition were investigated using music, whose visual symbolic notation is familiar only to those with musical training. Results from two experiments using different experimental paradigms revealed that melodic congruence between what is seen and what is heard impacts perceptual dynamics during binocular rivalry. This bisensory interaction was observed only when the musical score was perceptually dominant, not when it was suppressed from awareness, and it was observed only in people who could read music. Results from two ancillary experiments showed that this effect of congruence cannot be explained by differential patterns of eye movements or by differential response sluggishness associated with congruent score/melody combinations. Taken together, these results demonstrate robust audiovisual interaction based on high-level, symbolic representations and its predictive influence on perceptual dynamics during binocular rivalry.
Affiliation(s)
- Minyoung Lee
- Department of Psychology, Korea University, Seoul 136701, Korea
- Randolph Blake
- Department of Psychological Sciences, Vanderbilt Vision Research Center, Vanderbilt University, Nashville, TN 37240; Department of Brain and Cognitive Sciences, Seoul National University, Seoul 151742, Korea
- Sujin Kim
- Department of Psychology, Korea University, Seoul 136701, Korea
- Chai-Youn Kim
- Department of Psychology, Korea University, Seoul 136701, Korea