1. MacLean J, Stirn J, Bidelman GM. Auditory-motor entrainment and listening experience shape the perceptual learning of concurrent speech. bioRxiv 2024:2024.07.18.604167. PMID: 39071391; PMCID: PMC11275804; DOI: 10.1101/2024.07.18.604167.
Abstract
Background: Plasticity from auditory experience shapes the brain's encoding and perception of sound. Though prior research demonstrates that neural entrainment (i.e., brain-to-acoustic synchronization) aids speech perception, how long- and short-term plasticity influence entrainment to concurrent speech has not been investigated. Here, we explored neural entrainment mechanisms and the interplay between short- and long-term neuroplasticity for rapid auditory perceptual learning of concurrent speech sounds in young, normal-hearing musicians and nonmusicians.
Method: Participants learned to identify double-vowel mixtures during ∼45 min training sessions with concurrent high-density EEG recordings. We examined the degree to which brain responses entrained to the speech-stimulus train (∼9 Hz) to investigate whether entrainment to speech prior to behavioral decision predicted task performance. Source and directed functional connectivity analyses of the EEG probed whether behavior was driven by group differences in auditory-motor coupling.
Results: Both musicians and nonmusicians showed rapid perceptual learning in accuracy with training. Interestingly, listeners' neural entrainment strength prior to target speech mixtures predicted behavioral identification performance; stronger neural synchronization was observed preceding incorrect compared to correct trial responses. We also found stark hemispheric biases in auditory-motor coupling during speech entrainment, with greater auditory-motor connectivity in the right compared to left hemisphere for musicians (R>L) but not in nonmusicians (R=L).
Conclusions: Our findings confirm stronger neuroacoustic synchronization and auditory-motor coupling during speech processing in musicians. Stronger neural entrainment to rapid stimulus trains preceding incorrect behavioral responses supports the notion that alpha-band (∼10 Hz) arousal/suppression in brain activity is an important modulator of trial-by-trial success in perceptual processing.
2. Kachlicka M, Tierney A. Voice actors show enhanced neural tracking of pitch, prosody perception, and music perception. Cortex 2024; 178:213-222. PMID: 39024939; DOI: 10.1016/j.cortex.2024.06.016.
Abstract
Experiences with sound that make strong demands on the precision of perception, such as musical training and experience speaking a tone language, can enhance auditory neural encoding. Are high demands on the precision of perception necessary for training to drive auditory neural plasticity? Voice actors are an ideal subject population for answering this question. Voice acting requires exaggerating prosodic cues to convey emotion, character, and linguistic structure, drawing upon attention to sound, memory for sound features, and accurate sound production, but not fine perceptual precision. Here we assessed neural encoding of pitch using the frequency-following response (FFR), as well as prosody, music, and sound perception, in voice actors and a matched group of non-actors. We find that the consistency of neural sound encoding, prosody perception, and musical phrase perception are all enhanced in voice actors, suggesting that a range of neural and behavioural auditory processing enhancements can result from training which lacks fine perceptual precision. However, fine discrimination was not enhanced in voice actors but was linked to degree of musical experience, suggesting that low-level auditory processing can only be enhanced by demanding perceptual training. These findings suggest that training which taxes attention, memory, and production but is not perceptually taxing may be a way to boost neural encoding of sound and auditory pattern detection in individuals with poor auditory skills.
Affiliation(s)
- Magdalena Kachlicka: School of Psychological Sciences, Birkbeck, University of London, London, UK
- Adam Tierney: School of Psychological Sciences, Birkbeck, University of London, London, UK
3. Martins I, Lima CF, Pinheiro AP. Enhanced salience of musical sounds in singers and instrumentalists. Cogn Affect Behav Neurosci 2022; 22:1044-1062. PMID: 35501427; DOI: 10.3758/s13415-022-01007-x.
Abstract
Music training has been linked to facilitated processing of emotional sounds. However, most studies have focused on speech, and less is known about musicians' brain responses to other emotional sounds and in relation to instrument-specific experience. The current study combined behavioral and EEG methods to address two novel questions related to the perception of auditory emotional cues: whether and how long-term music training relates to a distinct emotional processing of nonverbal vocalizations and music; and whether distinct training profiles (vocal vs. instrumental) modulate brain responses to emotional sounds from early to late processing stages. Fifty-eight participants completed an EEG implicit emotional processing task, in which musical and vocal sounds differing in valence were presented as nontarget stimuli. After this task, participants explicitly evaluated the same sounds regarding the emotion being expressed, their valence, and arousal. Compared with nonmusicians, musicians displayed enhanced salience detection (P2), attention orienting (P3), and elaborative processing (Late Positive Potential) of musical (vs. vocal) sounds in event-related potential (ERP) data. The explicit evaluation of musical sounds was also distinct in musicians: accuracy in the emotional recognition of musical sounds was similar across valence types in musicians, who also judged musical sounds to be more pleasant and more arousing than nonmusicians did. Specific profiles of music training (singers vs. instrumentalists) did not relate to differences in the processing of vocal vs. musical sounds. Together, these findings reveal that music has a privileged status in the auditory system of long-term musically trained listeners, irrespective of their instrument-specific experience.
Affiliation(s)
- Inês Martins: CICPSI, Faculdade de Psicologia, Universidade de Lisboa, 1649-013 Lisbon, Portugal
- César F Lima: Instituto Universitário de Lisboa (ISCTE-IUL), Lisbon, Portugal
- Ana P Pinheiro: CICPSI, Faculdade de Psicologia, Universidade de Lisboa, 1649-013 Lisbon, Portugal
4. Whitehead JC, Armony JL. Intra-individual reliability of voice- and music-elicited responses and their modulation by expertise. Neuroscience 2022; 487:184-197. PMID: 35182696; DOI: 10.1016/j.neuroscience.2022.02.011.
Abstract
A growing number of functional neuroimaging studies have identified regions within the temporal lobe, particularly along the planum polare and planum temporale, that respond more strongly to music than to other types of acoustic stimuli, including voice. These "music-preferred" regions have been reported across a variety of stimulus sets, paradigms, and analysis approaches, and their consistency across studies has been confirmed through meta-analyses. However, the critical question of intra-subject reliability of these responses has received less attention. Here, we directly assessed this important issue by contrasting brain responses to musical vs. vocal stimuli in the same subjects across three consecutive fMRI runs, using different types of stimuli. Moreover, we investigated whether these music- and voice-preferred responses were reliably modulated by expertise. Results demonstrated that music-preferred activity previously reported in temporal regions, and its modulation by expertise, exhibits high intra-subject reliability. However, we also found that activity in some extra-temporal regions, such as the precentral and middle frontal gyri, did depend on the particular stimuli employed, which may explain why these are less consistently reported in the literature. Taken together, our findings confirm and extend the notion that specific regions in the brain consistently respond more strongly to certain socially-relevant stimulus categories, such as faces, voices and music, but that some of these responses appear to depend, at least to some extent, on the specific features of the paradigm employed.
Affiliation(s)
- Jocelyne C Whitehead: Douglas Mental Health University Institute, Verdun, Canada; BRAMS Laboratory, Centre for Research on Brain, Language and Music, Montreal, Canada; Integrated Program in Neuroscience, McGill University, Montreal, Canada
- Jorge L Armony: Douglas Mental Health University Institute, Verdun, Canada; BRAMS Laboratory, Centre for Research on Brain, Language and Music, Montreal, Canada; Department of Psychiatry, McGill University, Montreal, Canada
5. Ekert JO, Lorca-Puls DL, Gajardo-Vidal A, Crinion JT, Hope TMH, Green DW, Price CJ. A functional dissociation of the left frontal regions that contribute to single word production tasks. Neuroimage 2021; 245:118734. PMID: 34793955; PMCID: PMC8752962; DOI: 10.1016/j.neuroimage.2021.118734.
Abstract
Controversy surrounds the interpretation of higher activation for pseudoword compared to word reading in the left precentral gyrus and pars opercularis. Specifically, does activation in these regions reflect: (1) the demands on sublexical assembly of articulatory codes, or (2) retrieval effort because the combinations of articulatory codes are unfamiliar? Using fMRI, in 84 neurologically intact participants, we addressed this issue by comparing reading and repetition of words (W) and pseudowords (P) to naming objects (O) from pictures or sounds. As objects do not provide sublexical articulatory cues, we hypothesized that retrieval effort would be greater for object naming than for word repetition/reading (which benefits from both lexical and sublexical cues), while the demands on sublexical assembly would be higher for pseudoword production than for object naming. We found that activation was: (i) highest for pseudoword reading [P>O&W in the visual modality] in the anterior part of the ventral precentral gyrus bordering the precentral sulcus (vPCg/vPCs), consistent with the sublexical assembly of articulatory codes; but (ii) as high for object naming as pseudoword production [P&O>W] in the dorsal precentral gyrus (dPCg) and the left inferior frontal junction (IFJ), consistent with retrieval demands and cognitive control. In addition, we dissociate the response properties of vPCg/vPCs, dPCg and IFJ from other left frontal lobe regions that are activated during single word speech production. Specifically, in both auditory and visual modalities: a central part of vPCg (head and face area) was more activated for verbal than nonverbal stimuli [P&W>O]; and the pars orbitalis and inferior frontal sulcus were most activated during object naming [O>W&P]. Our findings help to resolve a previous discrepancy in the literature, dissociate three functionally distinct parts of the precentral gyrus, and refine our knowledge of the functional anatomy of speech production in the left frontal lobe.
Affiliation(s)
- Justyna O Ekert: Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, 12 Queen Square, London WC1N 3AR, United Kingdom
- Diego L Lorca-Puls: Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, 12 Queen Square, London WC1N 3AR, United Kingdom; Department of Speech, Language and Hearing Sciences, Faculty of Medicine, Universidad de Concepcion, Concepcion, Chile
- Andrea Gajardo-Vidal: Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, 12 Queen Square, London WC1N 3AR, United Kingdom; Faculty of Health Sciences, Universidad del Desarrollo, Concepcion, Chile
- Jennifer T Crinion: Institute of Cognitive Neuroscience, University College London, London, United Kingdom
- Thomas M H Hope: Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, 12 Queen Square, London WC1N 3AR, United Kingdom
- David W Green: Department of Experimental Psychology, University College London, London, United Kingdom
- Cathy J Price: Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, 12 Queen Square, London WC1N 3AR, United Kingdom
6. Ekert JO, Gajardo-Vidal A, Lorca-Puls DL, Hope TMH, Dick F, Crinion JT, Green DW, Price CJ. Dissociating the functions of three left posterior superior temporal regions that contribute to speech perception and production. Neuroimage 2021; 245:118764. PMID: 34848301; PMCID: PMC9125162; DOI: 10.1016/j.neuroimage.2021.118764.
Abstract
Prior studies have shown that the left posterior superior temporal sulcus (pSTS) and left temporo-parietal junction (TPJ) both contribute to phonological short-term memory, speech perception and speech production. Here, by conducting a within-subjects multi-factorial fMRI study, we dissociate the response profiles of these regions and a third region, the anterior ascending terminal branch of the left superior temporal sulcus (atSTS), which lies dorsal to pSTS and ventral to TPJ. First, we show that each region was more activated by (i) 1-back matching on visually presented verbal stimuli (words or pseudowords) compared to 1-back matching on visually presented non-verbal stimuli (pictures of objects or non-objects), and (ii) overt speech production than 1-back matching, across 8 types of stimuli (visually presented words, pseudowords, objects and non-objects and aurally presented words, pseudowords, object sounds and meaningless hums). The response properties of the three regions dissociated within the auditory modality. In left TPJ, activation was higher for auditory stimuli that were non-verbal (sounds of objects or meaningless hums) compared to verbal (words and pseudowords), irrespective of task (speech production or 1-back matching). In left pSTS, activation was higher for non-semantic stimuli (pseudowords and hums) than semantic stimuli (words and object sounds) on the dorsal pSTS surface (dpSTS), irrespective of task. In left atSTS, activation was not sensitive to either semantic or verbal content. The contrasting response properties of left TPJ, dpSTS and atSTS were cross-validated in an independent sample of 59 participants, using region-by-condition interactions. We also show that each region participates in non-overlapping networks of frontal, parietal and cerebellar regions. Our results challenge previous claims about functional specialisation in the left posterior superior temporal lobe and motivate future studies to determine the timing and directionality of information flow in the brain networks involved in speech perception and production.
Affiliation(s)
- Justyna O Ekert: Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, 12 Queen Square, London WC1N 3AR, United Kingdom
- Andrea Gajardo-Vidal: Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, 12 Queen Square, London WC1N 3AR, United Kingdom; Faculty of Health Sciences, Universidad del Desarrollo, Concepcion, Chile
- Diego L Lorca-Puls: Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, 12 Queen Square, London WC1N 3AR, United Kingdom
- Thomas M H Hope: Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, 12 Queen Square, London WC1N 3AR, United Kingdom
- Fred Dick: Department of Experimental Psychology, University College London, London, United Kingdom; Department of Psychological Sciences, Birkbeck, University of London, London, United Kingdom
- Jennifer T Crinion: Institute of Cognitive Neuroscience, University College London, London, United Kingdom
- David W Green: Department of Experimental Psychology, University College London, London, United Kingdom
- Cathy J Price: Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, 12 Queen Square, London WC1N 3AR, United Kingdom
7. Expertise modulates neural stimulus-tracking. eNeuro 2021; 8:ENEURO.0065-21.2021. PMID: 34341067; PMCID: PMC8371925; DOI: 10.1523/eneuro.0065-21.2021.
Abstract
How does the brain anticipate information in language? When people perceive speech, low-frequency (<10 Hz) activity in the brain synchronizes with bursts of sound and visual motion. This phenomenon, called cortical stimulus-tracking, is thought to be one way that the brain predicts the timing of upcoming words, phrases, and syllables. In this study, we test whether stimulus-tracking depends on domain-general expertise or on language-specific prediction mechanisms. We go on to examine how the effects of expertise differ between frontal and sensory cortex. We recorded electroencephalography (EEG) from human participants who were experts in either sign language or ballet, and we compared stimulus-tracking between groups while participants watched videos of sign language or ballet. We measured stimulus-tracking by computing coherence between EEG recordings and visual motion in the videos. Results showed that stimulus-tracking depends on domain-general expertise, and not on language-specific prediction mechanisms. At frontal channels, fluent signers showed stronger coherence to sign language than to dance, whereas expert dancers showed stronger coherence to dance than to sign language. At occipital channels, however, the two groups of participants did not show different patterns of coherence. These results are difficult to explain by entrainment of endogenous oscillations, because neither sign language nor dance show any periodicity at the frequencies of significant expertise-dependent stimulus-tracking. These results suggest that the brain may rely on domain-general predictive mechanisms to optimize perception of temporally-predictable stimuli such as speech, sign language, and dance.
8. Boebinger D, Norman-Haignere SV, McDermott JH, Kanwisher N. Music-selective neural populations arise without musical training. J Neurophysiol 2021; 125:2237-2263. PMID: 33596723; PMCID: PMC8285655; DOI: 10.1152/jn.00588.2020.
Abstract
Recent work has shown that human auditory cortex contains neural populations anterior and posterior to primary auditory cortex that respond selectively to music. However, it is unknown how this selectivity for music arises. To test whether musical training is necessary, we measured fMRI responses to 192 natural sounds in 10 people with almost no musical training. When voxel responses were decomposed into underlying components, this group exhibited a music-selective component that was very similar in response profile and anatomical distribution to that previously seen in individuals with moderate musical training. We also found that musical genres that were less familiar to our participants (e.g., Balinese gamelan) produced strong responses within the music component, as did drum clips with rhythm but little melody, suggesting that these neural populations are broadly responsive to music as a whole. Our findings demonstrate that the signature properties of neural music selectivity do not require musical training to develop, showing that the music-selective neural populations are a fundamental and widespread property of the human brain.

NEW & NOTEWORTHY: We show that music-selective neural populations are clearly present in people without musical training, demonstrating that they are a fundamental and widespread property of the human brain. Additionally, we show that music-selective neural populations respond strongly to music from unfamiliar genres as well as music with rhythm but little pitch information, suggesting that they are broadly responsive to music as a whole.
Affiliation(s)
- Dana Boebinger: Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, Massachusetts; Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Sam V Norman-Haignere: Laboratoire des Systèmes Perceptifs, Département d'Études Cognitives, École Normale Supérieure, PSL Research University, CNRS, Paris, France; Zuckerman Institute for Brain Research, Columbia University, New York, New York
- Josh H McDermott: Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, Massachusetts; Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts; Center for Brains, Minds, and Machines, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Nancy Kanwisher: Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts; Center for Brains, Minds, and Machines, Massachusetts Institute of Technology, Cambridge, Massachusetts
9. Dumas D, Doherty M, Organisciak P. The psychology of professional and student actors: Creativity, personality, and motivation. PLoS One 2020; 15:e0240728. PMID: 33091923; PMCID: PMC7580901; DOI: 10.1371/journal.pone.0240728.
Abstract
As a profession, acting is marked by a high level of economic and social risk alongside the possibility of artistic satisfaction and/or public admiration. Current understanding of the psychological attributes that distinguish professional actors is incomplete. Here, we compare samples of professional actors (n = 104), undergraduate student actors (n = 100), and non-acting adults (n = 92) on 26 psychological dimensions and use machine-learning methods to classify participants based on these attributes. Nearly all of the attributes measured here displayed significant univariate mean differences across the three groups, with the strongest effect sizes for Creative Activities, Openness, and Extraversion. A cross-validated Least Absolute Shrinkage and Selection Operator (LASSO) classification model was capable of distinguishing actors (either professional or student) from non-actors with 92% accuracy, and was able to sort professional from student actors with 96% accuracy when age was included in the model and 68% accuracy with only psychological attributes included. In these LASSO models, actors in general were distinguished by high levels of Openness, Assertiveness, and Elaboration, but professional actors were specifically marked by high levels of Originality, Volatility, and Literary Activities.
Affiliation(s)
- Denis Dumas: Department of Research Methods and Information Science, University of Denver, Denver, Colorado, United States of America
- Michael Doherty: Actor's Equity Association, New York, NY, United States of America
- Peter Organisciak: Department of Research Methods and Information Science, University of Denver, Denver, Colorado, United States of America
10. Krishnan S, Lima CF, Evans S, Chen S, Guldner S, Yeff H, Manly T, Scott SK. Beatboxers and guitarists engage sensorimotor regions selectively when listening to the instruments they can play. Cereb Cortex 2019; 28:4063-4079. PMID: 30169831; PMCID: PMC6188551; DOI: 10.1093/cercor/bhy208.
Abstract
Studies of classical musicians have demonstrated that expertise modulates neural responses during auditory perception. However, it remains unclear whether such expertise-dependent plasticity is modulated by the instrument that a musician plays. To examine whether the recruitment of sensorimotor regions during music perception is modulated by instrument-specific experience, we studied nonclassical musicians: beatboxers, who predominantly use their vocal apparatus to produce sound, and guitarists, who use their hands. We contrast fMRI activity in 20 beatboxers, 20 guitarists, and 20 nonmusicians as they listen to novel beatboxing and guitar pieces. All musicians show enhanced activity in sensorimotor regions (IFG, IPC, and SMA), but only when listening to the musical instrument they can play. Using independent component analysis, we find expertise-selective enhancement in sensorimotor networks, which are distinct from changes in attentional networks. These findings suggest that long-term sensorimotor experience facilitates access to the posterodorsal "how" pathway during auditory processing.
Affiliation(s)
- Saloni Krishnan: Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London, UK; Department of Experimental Psychology, University of Oxford, Anna Watts Building, Radcliffe Observatory Quarter, Oxford, UK
- César F Lima: Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London, UK; Instituto Universitário de Lisboa (ISCTE-IUL), Avenida das Forças Armadas, Lisboa, Portugal
- Samuel Evans: Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London, UK; Department of Psychology, University of Westminster, 115 New Cavendish Street, London, UK
- Sinead Chen: Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London, UK
- Stella Guldner: Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London, UK; Graduate School of Economic and Social Sciences (GESS), University of Mannheim, Mannheim, Germany
- Harry Yeff: Get Involved Ltd, 3 Loughborough Street, London, UK
- Tom Manly: MRC Cognition and Brain Sciences Unit, 15 Chaucer Road, Cambridge, UK
- Sophie K Scott: Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London, UK
11. Bidelman GM, Walker B. Plasticity in auditory categorization is supported by differential engagement of the auditory-linguistic network. Neuroimage 2019; 201:116022. PMID: 31310863; DOI: 10.1016/j.neuroimage.2019.116022.
Abstract
To construct our perceptual world, the brain categorizes variable sensory cues into behaviorally-relevant groupings. Categorical representations are apparent within a distributed fronto-temporo-parietal brain network but how this neural circuitry is shaped by experience remains undefined. Here, we asked whether speech and music categories might be formed within different auditory-linguistic brain regions depending on listeners' auditory expertise. We recorded EEG in highly skilled (musicians) vs. less experienced (nonmusicians) perceivers as they rapidly categorized speech and musical sounds. Musicians showed perceptual enhancements across domains, yet source EEG data revealed a double dissociation in the neurobiological mechanisms supporting categorization between groups. Whereas musicians coded categories in primary auditory cortex (PAC), nonmusicians recruited non-auditory regions (e.g., inferior frontal gyrus, IFG) to generate category-level information. Functional connectivity confirmed nonmusicians' increased left IFG involvement reflects stronger routing of signal from PAC directed to IFG, presumably because sensory coding is insufficient to construct categories in less experienced listeners. Our findings establish auditory experience modulates specific engagement and inter-regional communication in the auditory-linguistic network supporting categorical perception. Whereas early canonical PAC representations are sufficient to generate categories in highly trained ears, less experienced perceivers broadcast information downstream to higher-order linguistic brain areas (IFG) to construct abstract sound labels.
Affiliation(s)
- Gavin M Bidelman: Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA; Department of Anatomy and Neurobiology, University of Tennessee Health Sciences Center, Memphis, TN, USA
- Breya Walker: Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; Department of Psychology, University of Memphis, Memphis, TN, USA; Department of Mathematical Sciences, University of Memphis, Memphis, TN, USA
12. Disentangling phonological and articulatory processing: A neuroanatomical study in aphasia. Neuropsychologia 2018; 121:175-185. DOI: 10.1016/j.neuropsychologia.2018.10.015.
13. Sood MR, Sereno MI. Areas activated during naturalistic reading comprehension overlap topological visual, auditory, and somatomotor maps. Hum Brain Mapp 2016; 37:2784-2810. PMID: 27061771; PMCID: PMC4949687; DOI: 10.1002/hbm.23208.
Abstract
Cortical mapping techniques using fMRI have been instrumental in identifying the boundaries of topological (neighbor-preserving) maps in early sensory areas. The presence of topological maps beyond early sensory areas raises the possibility that they might play a significant role in other cognitive systems, and that topological mapping might help to delineate areas involved in higher cognitive processes. In this study, we combine surface-based visual, auditory, and somatomotor mapping methods with a naturalistic reading comprehension task in the same group of subjects to provide a qualitative and quantitative assessment of the cortical overlap between sensory-motor maps in all major sensory modalities, and reading processing regions. Our results suggest that cortical activation during naturalistic reading comprehension overlaps more extensively with topological sensory-motor maps than has been heretofore appreciated. Reading activation in regions adjacent to occipital lobe and inferior parietal lobe almost completely overlaps visual maps, whereas a significant portion of frontal activation for reading in dorsolateral and ventral prefrontal cortex overlaps both visual and auditory maps. Even classical language regions in superior temporal cortex are partially overlapped by topological visual and auditory maps. By contrast, the main overlap with somatomotor maps is restricted to a small region on the anterior bank of the central sulcus near the border between the face and hand representations of M-I. Hum Brain Mapp 37:2784-2810, 2016. © 2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
Affiliation(s)
- Mariam R. Sood
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, United Kingdom
- Martin I. Sereno
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, United Kingdom
- Experimental Psychology, Division of Psychology and Language Sciences, 26 Bedford Way, London WC1H 0AP, United Kingdom
14
Rosslau K, Herholz SC, Knief A, Ortmann M, Deuster D, Schmidt CM, Zehnhoff-Dinnesen A, Pantev C, Dobel C. Song Perception by Professional Singers and Actors: An MEG Study. PLoS One 2016; 11:e0147986. [PMID: 26863437 PMCID: PMC4749173 DOI: 10.1371/journal.pone.0147986] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2015] [Accepted: 01/11/2016] [Indexed: 01/20/2023] Open
Abstract
The cortical correlates of speech and music perception are essentially overlapping, and the specific effects of different types of training on these networks remain unknown. We compared two groups of vocally trained professionals for music and speech, singers and actors, using recited and sung rhyme sequences from German art songs with semantic and/or prosodic/melodic violations (i.e. violations of pitch) of the last word, in order to measure the evoked activation in a magnetoencephalographic (MEG) experiment. MEG data confirmed the existence of intertwined networks for the sung and spoken modality in an early time window after word violation. For this early response, higher activity was measured after melodic/prosodic than semantic violations in predominantly right temporal areas. For singers as well as for actors, modality-specific effects were evident in predominantly left-lateralized temporal activity after semantic expectancy violations in the spoken modality, and right-dominant temporal activity in response to melodic violations in the sung modality. As an indication of a special group-dependent audiation process, higher neuronal activity for singers appeared in a late time window in right temporal and left parietal areas, both after the recited and the sung sequences.
Affiliation(s)
- Ken Rosslau
- Department of Phoniatrics and Pedaudiology, University Hospital Muenster, Muenster, Germany
- Sibylle C. Herholz
- Institute for Biomagnetism and Biosignalanalysis, University of Muenster, Muenster, Germany
- German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
- Arne Knief
- Department of Phoniatrics and Pedaudiology, University Hospital Muenster, Muenster, Germany
- Magdalene Ortmann
- Institute for Biomagnetism and Biosignalanalysis, University of Muenster, Muenster, Germany
- Jean-Uhrmacher-Institute for Clinical ENT-Research, University Hospital Cologne, Cologne, Germany
- Dirk Deuster
- Department of Phoniatrics and Pedaudiology, University Hospital Muenster, Muenster, Germany
- Claus-Michael Schmidt
- Department of Phoniatrics and Pedaudiology, University Hospital Muenster, Muenster, Germany
- Christo Pantev
- Institute for Biomagnetism and Biosignalanalysis, University of Muenster, Muenster, Germany
- Christian Dobel
- Institute for Biomagnetism and Biosignalanalysis, University of Muenster, Muenster, Germany
- Department of Otorhinolaryngology, Friedrich-Schiller University Jena, Jena, Germany
15
Neumann N, Lotze M, Eickhoff SB. Cognitive Expertise: An ALE Meta-Analysis. Hum Brain Mapp 2015; 37:262-72. [PMID: 26467981 DOI: 10.1002/hbm.23028] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2015] [Revised: 10/02/2015] [Accepted: 10/05/2015] [Indexed: 12/17/2022] Open
Abstract
Expert performance constitutes the endpoint of skill acquisition and is accompanied by widespread neuroplastic changes. To reveal common mechanisms of reorganization associated with long-term expertise in a cognitive domain (mental calculation, chess, language, memory, music without motor involvement), we used activation likelihood estimation meta-analysis and compared brain activation of experts to nonexperts. Twenty-six studies matched inclusion criteria, most of which reported an increase and not a decrease of activation foci in experts. Increased activation occurred in the left rolandic operculum (OP 4) and left primary auditory cortex and in bilateral premotor cortex in studies that used auditory stimulation. In studies with visual stimulation, experts showed enhanced activation in the right inferior parietal cortex (area PGp) and the right lingual gyrus. Experts' brain activation patterns seem to be characterized by enhanced or additional activity in domain-specific primary, association, and motor structures, confirming that learning is localized and very specialized.
Affiliation(s)
- Nicola Neumann
- Institute of Diagnostic Radiology and Neuroradiology, Functional Imaging Unit, Ernst-Moritz-Arndt-University of Greifswald, Greifswald, Germany
- Martin Lotze
- Institute of Diagnostic Radiology and Neuroradiology, Functional Imaging Unit, Ernst-Moritz-Arndt-University of Greifswald, Greifswald, Germany
- Simon B Eickhoff
- Cognitive Neuroscience Group, Institute of Clinical Neuroscience and Medical Psychology, Heinrich-Heine University, Düsseldorf, Germany; Brain Network Modeling Group, Institute of Neuroscience and Medicine (INM-1), Research Center Jülich, Jülich, Germany
16
Abstract
In the age of the Internet and with the dramatic proliferation of mobile listening technologies, music has unprecedented global distribution and embeddedness in people's lives. It is a source of intense experiences of both the most intimate and solitary, and public and collective, kinds - from an individual with their smartphone and headphones, to large-scale live events and global simulcasts; and it increasingly brings together a huge range of cultures and histories, through developments in world music, sampling, the re-issue of historical recordings, and the explosion of informal and home music-making that circulates via YouTube. For many people, involvement with music can be among the most powerful and potentially transforming experiences in their lives. At the same time, there has been increasing interest in music's communicative and affective capacities, and its potential to act as an agent of social bonding and affiliation. This review critically discusses a considerable body of research and scholarship, across disciplines ranging from the neuroscience and psychology of music to cultural musicology and the sociology and anthropology of music, that provides evidence for music's capacity to promote empathy and social/cultural understanding through powerful affective, cognitive and social factors; and explores ways in which to connect and make sense of this disparate evidence (and counter-evidence). It reports the outcome of an empirical study that tests one aspect of those claims, demonstrating that 'passive' listening to the music of an unfamiliar culture can significantly change the cultural attitudes of listeners with high dispositional empathy; presents a model that brings together the primary components of the music and empathy research into a single framework; and considers both some of the applications, and some of the shortcomings and problems, of understanding music from the perspective of empathy.
17
Ito T, Matsuda T, Shimojo S. Functional connectivity of the striatum in experts of stenography. Brain Behav 2015; 5:e00333. [PMID: 25874166 PMCID: PMC4396401 DOI: 10.1002/brb3.333] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/18/2014] [Revised: 01/17/2015] [Accepted: 01/25/2015] [Indexed: 11/08/2022] Open
Abstract
INTRODUCTION Stenography, or shorthand, is a unique set of skills that involves intensive, nearly life-long training and the orchestration of various functional brain modules, including auditory, linguistic, cognitive, mnemonic, and motor. Stenography provides cognitive neuroscientists with a unique opportunity to investigate the neural mechanisms underlying the neural plasticity that enables such a high degree of expertise. However, shorthand is quickly being replaced with voice recognition technology. We took this nearly final opportunity to scan the brains of the last living shorthand experts of the Japanese language. METHODS Thirteen right-handed stenographers and fourteen right-handed controls participated in the functional magnetic resonance imaging (fMRI) study. RESULTS The fMRI data revealed plastic reorganization of the neural circuits around the putamen. The acquisition of expert skills was accompanied by structural and functional changes in the area. The posterior putamen is known as the execution center of acquired sensorimotor skills. Compared to nonexperts, the posterior putamen in stenographers had high covariation with the cerebellum and midbrain. The stenographers' brains developed different neural circuits from those of the nonexpert brain. CONCLUSIONS The current data illustrate the vigorous plasticity in the putamen and in its connectivity to other relevant areas in the expert brain. This is a case of vigorous neural plastic reorganization in response to massive overtraining, which is rare, especially considering that it occurred in adulthood.
Affiliation(s)
- Takehito Ito
- Brain Science Institute, Tamagawa University, 6-1-1 Tamagawa Gakuen, Machida, Tokyo, 194-8610, Japan; Molecular Neuroimaging Program, Molecular Imaging Center, National Institute of Radiological Sciences, 4-9-1 Anagawa, Inage-ku, Chiba-shi, Chiba, 263-8555, Japan
- Tetsuya Matsuda
- Brain Science Institute, Tamagawa University, 6-1-1 Tamagawa Gakuen, Machida, Tokyo, 194-8610, Japan
- Shinsuke Shimojo
- Division of Biology and Biological Engineering/Computation and Neural Systems, California Institute of Technology 139-74, Pasadena, California, 91125
18
Carey D, Rosen S, Krishnan S, Pearce MT, Shepherd A, Aydelott J, Dick F. Generality and specificity in the effects of musical expertise on perception and cognition. Cognition 2015; 137:81-105. [DOI: 10.1016/j.cognition.2014.12.005] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2013] [Revised: 11/03/2014] [Accepted: 12/18/2014] [Indexed: 10/24/2022]
19
Li Y, Rui X, Li S, Pu F. Investigation of global and local network properties of music perception with culturally different styles of music. Comput Biol Med 2014; 54:37-43. [PMID: 25212116 DOI: 10.1016/j.compbiomed.2014.08.017] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2013] [Revised: 07/23/2014] [Accepted: 08/16/2014] [Indexed: 11/16/2022]
Abstract
BACKGROUND Graph theoretical analysis has recently become a popular research tool in neuroscience, however, there have been very few studies on brain responses to music perception, especially when culturally different styles of music are involved. METHODS Electroencephalograms were recorded from ten subjects listening to Chinese traditional music, light music and western classical music. For event-related potentials, phase coherence was calculated in the alpha band and then constructed into correlation matrices. Clustering coefficients and characteristic path lengths were evaluated for global properties, while clustering coefficients and efficiency were assessed for local network properties. RESULTS Perception of light music and western classical music manifested small-world network properties, especially with a relatively low proportion of weights of correlation matrices. For local analysis, efficiency was more discernible than clustering coefficient. Nevertheless, there was no significant discrimination between Chinese traditional and western classical music perception. CONCLUSIONS Perception of different styles of music introduces different network properties, both globally and locally. Research into both global and local network properties has been carried out in other areas; however, this is a preliminary investigation aimed at suggesting a possible new approach to brain network properties in music perception.
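The graph pipeline this abstract describes (threshold a phase-coherence matrix into a binary network, then evaluate the global clustering coefficient and characteristic path length) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the 32-node count, the random stand-in "coherence" matrix, and the 20% weight-retention threshold are assumptions for demonstration only.

```python
import numpy as np
from collections import deque

def clustering_coefficient(A):
    """Mean local clustering coefficient of a binary, symmetric adjacency matrix."""
    deg = A.sum(axis=1)
    tri = np.diag(A @ A @ A) / 2.0               # triangles through each node
    with np.errstate(divide="ignore", invalid="ignore"):
        c = 2.0 * tri / (deg * (deg - 1))        # nodes with deg < 2 give nan -> 0
    return float(np.nan_to_num(c).mean())

def characteristic_path_length(A):
    """Mean shortest-path length over connected node pairs, via BFS."""
    n = len(A)
    total, pairs = 0, 0
    for s in range(n):
        dist = [-1] * n
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in np.flatnonzero(A[u]):
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for d in dist:
            if d > 0:
                total += d
                pairs += 1
    return total / pairs

rng = np.random.default_rng(0)
n = 32                                           # hypothetical channel count
c = rng.random((n, n)); c = (c + c.T) / 2        # stand-in for a coherence matrix
np.fill_diagonal(c, 0.0)
thr = np.quantile(c[np.triu_indices(n, 1)], 0.8) # keep the strongest 20% of weights
A = (c >= thr).astype(int)

print(clustering_coefficient(A), characteristic_path_length(A))
```

With real data, the same two metrics computed on degree-matched random graphs would give the small-world comparison the abstract alludes to.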
Affiliation(s)
- Yan Li
- Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, School of Biological Science and Medical Engineering, Beihang University, Beijing 100191, China; Research Institute of Beihang University in Shenzhen, Shenzhen 518057, China
- Xue Rui
- Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, School of Biological Science and Medical Engineering, Beihang University, Beijing 100191, China
- Shuyu Li
- Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, School of Biological Science and Medical Engineering, Beihang University, Beijing 100191, China
- Fang Pu
- Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, School of Biological Science and Medical Engineering, Beihang University, Beijing 100191, China; State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, China.
20
Angulo-Perkins A, Aubé W, Peretz I, Barrios FA, Armony JL, Concha L. Music listening engages specific cortical regions within the temporal lobes: differences between musicians and non-musicians. Cortex 2014; 59:126-37. [PMID: 25173956 DOI: 10.1016/j.cortex.2014.07.013] [Citation(s) in RCA: 68] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2013] [Revised: 02/22/2014] [Accepted: 07/18/2014] [Indexed: 11/26/2022]
Abstract
Music and speech are two of the most relevant and common sounds in the human environment. Perceiving and processing these two complex acoustical signals rely on a hierarchical functional network distributed throughout several brain regions within and beyond the auditory cortices. Given their similarities, the neural bases for processing these two complex sounds overlap to a certain degree, but particular brain regions may show selectivity for one or the other acoustic category, which we aimed to identify. We examined 53 subjects (28 of them professional musicians) by functional magnetic resonance imaging (fMRI), using a paradigm designed to identify regions showing increased activity in response to different types of musical stimuli, compared to different types of complex sounds, such as speech and non-linguistic vocalizations. We found a region in the anterior portion of the superior temporal gyrus (aSTG) (planum polare) that showed preferential activity in response to musical stimuli and was present in all our subjects, regardless of musical training, and invariant across different musical instruments (violin, piano or synthetic piano). Our data show that this cortical region is preferentially involved in processing musical, as compared to other complex sounds, suggesting a functional role as a second-order relay, possibly integrating acoustic characteristics intrinsic to music (e.g., melody extraction). Moreover, we assessed whether musical experience modulates the response of cortical regions involved in music processing and found evidence of functional differences between musicians and non-musicians during music listening. In particular, bilateral activation of the planum polare was more prevalent, but not exclusive, in musicians than non-musicians, and activation of the right posterior portion of the superior temporal gyrus (planum temporale) differed between groups. 
Our results provide evidence of functional specialization for music processing in specific regions of the auditory cortex and show domain-specific functional differences possibly correlated with musicianship.
Affiliation(s)
- Arafat Angulo-Perkins
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Querétaro, Querétaro, México
- William Aubé
- International Laboratory for Brain, Music and Sound (BRAMS), Montreal, Québec, Canada; Department of Psychology, Université de Montréal, Montreal, Québec, Canada
- Isabelle Peretz
- International Laboratory for Brain, Music and Sound (BRAMS), Montreal, Québec, Canada; Department of Psychology, Université de Montréal, Montreal, Québec, Canada
- Fernando A Barrios
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Querétaro, Querétaro, México
- Jorge L Armony
- International Laboratory for Brain, Music and Sound (BRAMS), Montreal, Québec, Canada; Department of Psychology, Université de Montréal, Montreal, Québec, Canada; Douglas Institute and Department of Psychiatry, McGill University, Montreal, Québec, Canada
- Luis Concha
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Querétaro, Querétaro, México; International Laboratory for Brain, Music and Sound (BRAMS), Montreal, Québec, Canada.
21
The emergence of mirror-like response properties from domain-general principles in vision and audition. Behav Brain Sci 2014; 37:219. [PMID: 24775176 DOI: 10.1017/s0140525x13002483] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Like Cook et al., we suggest that mirror neurons are a fascinating product of cross-modal learning. As predicted by an associative account, responses in motor regions are observed for novel and/or abstract visual stimuli such as point-light and android movements. Domain-specific mirror responses also emerge as a function of audiomotor expertise that is slowly acquired over years of intensive training.
22
Abstract
Commentators have tended to focus on the conceptual framework of our article, the contrast between genetic and associative accounts of mirror neurons, and to challenge it with additional possibilities rather than empirical data. This makes the empirically focused comments especially valuable. The mirror neuron debate is replete with ideas; what it needs now are system-level theories and careful experiments – tests and testability.
23
Woods EA, Hernandez AE, Wagner VE, Beilock SL. Expert athletes activate somatosensory and motor planning regions of the brain when passively listening to familiar sports sounds. Brain Cogn 2014; 87:122-33. [PMID: 24732956 DOI: 10.1016/j.bandc.2014.03.007] [Citation(s) in RCA: 40] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2013] [Revised: 03/11/2014] [Accepted: 03/16/2014] [Indexed: 10/25/2022]
Abstract
The present functional magnetic resonance imaging study examined the neural response to familiar and unfamiliar, sport and non-sport environmental sounds in expert and novice athletes. Results revealed differential neural responses dependent on sports expertise. Experts had greater neural activation than novices in focal sensorimotor areas such as the supplementary motor area, and pre- and postcentral gyri. Novices showed greater activation than experts in widespread areas involved in perception (i.e. supramarginal, middle occipital, and calcarine gyri; precuneus; inferior and superior parietal lobules), and motor planning and processing (i.e. inferior frontal, middle frontal, and middle temporal gyri). These between-group neural differences also appeared as an expertise effect within specific conditions. Experts showed greater activation than novices during the sport familiar condition in regions responsible for auditory and motor planning, including the inferior frontal gyrus and the parietal operculum. Novices only showed greater activation than experts in the supramarginal gyrus and pons during the non-sport unfamiliar condition, and in the middle frontal gyrus during the sport unfamiliar condition. These results are consistent with the view that expert athletes are attuned to only the most familiar, highly relevant sounds and tune out unfamiliar, irrelevant sounds. Furthermore, these findings that athletes show activation in areas known to be involved in action planning when passively listening to sounds suggests that auditory perception of action can lead to the re-instantiation of neural areas involved in producing these actions, especially if someone has expertise performing the actions.
Affiliation(s)
- Elizabeth A Woods
- The University of Houston, Department of Psychology, 126 Heyne Building, Houston, TX 77204, USA.
- Arturo E Hernandez
- The University of Houston, Department of Psychology, 126 Heyne Building, Houston, TX 77204, USA
- Victoria E Wagner
- The University of Houston, Department of Psychology, 126 Heyne Building, Houston, TX 77204, USA
- Sian L Beilock
- The University of Chicago, Department of Psychology, 5848 South University Avenue, Chicago, IL 60637, USA
24
Cerebral activations related to audition-driven performance imagery in professional musicians. PLoS One 2014; 9:e93681. [PMID: 24714661 PMCID: PMC3979724 DOI: 10.1371/journal.pone.0093681] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2013] [Accepted: 03/10/2014] [Indexed: 11/18/2022] Open
Abstract
Functional Magnetic Resonance Imaging (fMRI) was used to study the activation of cerebral motor networks during auditory perception of music in professional keyboard musicians (n = 12). The activation paradigm implied that subjects listened to two-part polyphonic music, while either critically appraising the performance or imagining they were performing themselves. Two-part polyphonic audition and bimanual motor imagery circumvented a hemisphere bias associated with the convention of playing the melody with the right hand. Both tasks activated ventral premotor and auditory cortices, bilaterally, and the right anterior parietal cortex, when contrasted to 12 musically unskilled controls. Although left ventral premotor activation was increased during imagery (compared to judgment), bilateral dorsal premotor and right posterior-superior parietal activations were quite unique to motor imagery. The latter suggests that musicians not only recruited their manual motor repertoire but also performed a spatial transformation from the vertically perceived pitch axis (high and low sound) to the horizontal axis of the keyboard. Imagery-specific activations in controls were seen in left dorsal parietal-premotor and supplementary motor cortices. Although these activations were less strong compared to musicians, this overlapping distribution indicated the recruitment of a general 'mirror-neuron' circuitry. These two levels of sensori-motor transformations point towards common principles by which the brain organizes audition-driven music performance and visually guided task performance.
25
Fauvel B, Groussard M, Chételat G, Fouquet M, Landeau B, Eustache F, Desgranges B, Platel H. Morphological brain plasticity induced by musical expertise is accompanied by modulation of functional connectivity at rest. Neuroimage 2014; 90:179-88. [DOI: 10.1016/j.neuroimage.2013.12.065] [Citation(s) in RCA: 75] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2013] [Revised: 12/26/2013] [Accepted: 12/30/2013] [Indexed: 12/25/2022] Open
26
Abstract
The fundamental perceptual unit in hearing is the 'auditory object'. Similar to visual objects, auditory objects are the computational result of the auditory system's capacity to detect, extract, segregate and group spectrotemporal regularities in the acoustic environment; the multitude of acoustic stimuli around us together form the auditory scene. However, unlike the visual scene, resolving the component objects within the auditory scene crucially depends on their temporal structure. Neural correlates of auditory objects are found throughout the auditory system. However, neural responses do not become correlated with a listener's perceptual reports until the level of the cortex. The roles of different neural structures and the contribution of different cognitive states to the perception of auditory objects are not yet fully understood.
27
Suppression of the µ rhythm during speech and non-speech discrimination revealed by independent component analysis: implications for sensorimotor integration in speech processing. PLoS One 2013; 8:e72024. [PMID: 23991030 PMCID: PMC3750026 DOI: 10.1371/journal.pone.0072024] [Citation(s) in RCA: 32] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2012] [Accepted: 07/11/2013] [Indexed: 01/17/2023] Open
Abstract
Background Constructivist theories propose that articulatory hypotheses about incoming phonetic targets may function to enhance perception by limiting the possibilities for sensory analysis. To provide evidence for this proposal, it is necessary to map ongoing, high-temporal resolution changes in sensorimotor activity (i.e., the sensorimotor μ rhythm) to accurate speech and non-speech discrimination performance (i.e., correct trials). Methods Sixteen participants (15 female and 1 male) were asked to passively listen to or actively identify speech and tone-sweeps in a two-alternative forced-choice discrimination task while the electroencephalogram (EEG) was recorded from 32 channels. The stimuli were presented at signal-to-noise ratios (SNRs) in which discrimination accuracy was high (i.e., 80–100%) and low SNRs producing discrimination performance at chance. EEG data were decomposed using independent component analysis and clustered across participants using principal component methods in EEGLAB. Results ICA revealed left and right sensorimotor µ components for 14/16 and 13/16 participants, respectively, that were identified on the basis of scalp topography, spectral peaks, and localization to the precentral and postcentral gyri. Time-frequency analysis of left and right lateralized µ component clusters revealed significant (pFDR<.05) suppression in the traditional beta frequency range (13–30 Hz) prior to, during, and following syllable discrimination trials. No significant differences from baseline were found for passive tasks. Tone conditions produced right µ beta suppression following stimulus onset only. For the left µ, significant differences in the magnitude of beta suppression were found for correct speech discrimination trials relative to chance trials following stimulus offset. 
Conclusions Findings are consistent with constructivist, internal model theories proposing that early forward motor models generate predictions about likely phonemic units that are then synthesized with incoming sensory cues during active as opposed to passive processing. Future directions and possible translational value for clinical populations in which sensorimotor integration may play a functional role are discussed.
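The beta-band (13–30 Hz) suppression measure at the heart of these results can be illustrated with a minimal sketch. The study itself used ICA and time-frequency analysis in EEGLAB; the Python pipeline below (bandpass filter, Hilbert envelope, baseline-normalized power) and its synthetic signal, sampling rate, and epoch timing are assumptions for illustration only, not the authors' method.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250                                  # assumed sampling rate, Hz
t = np.arange(0, 3.0, 1 / fs)             # 1 s baseline + 2 s post-stimulus
rng = np.random.default_rng(1)

# Synthetic "sensorimotor" signal: a 20 Hz rhythm whose amplitude halves
# after a stimulus at t = 1 s, plus broadband noise.
amp = np.where(t < 1.0, 1.0, 0.5)
x = amp * np.sin(2 * np.pi * 20 * t) + 0.3 * rng.standard_normal(t.size)

# Beta-band amplitude envelope: zero-phase bandpass, then Hilbert transform.
b, a = butter(4, [13, 30], btype="band", fs=fs)
envelope = np.abs(hilbert(filtfilt(b, a, x)))

baseline = envelope[t < 1.0].mean()
trial = envelope[t >= 1.0].mean()
suppression_db = 10 * np.log10(trial**2 / baseline**2)  # negative = suppression
print(f"beta suppression: {suppression_db:.1f} dB")
```

A negative value indicates post-stimulus beta power below baseline, which is the direction of effect the abstract reports for active discrimination trials.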
28
Poremba A, Bigelow J, Rossi B. Processing of communication sounds: contributions of learning, memory, and experience. Hear Res 2013; 305:31-44. [PMID: 23792078 DOI: 10.1016/j.heares.2013.06.005] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/19/2012] [Revised: 05/09/2013] [Accepted: 06/10/2013] [Indexed: 11/17/2022]
Abstract
Abundant evidence from both field and lab studies has established that conspecific vocalizations (CVs) are of critical ecological significance for a wide variety of species, including humans, non-human primates, rodents, and other mammals and birds. Correspondingly, a number of experiments have demonstrated behavioral processing advantages for CVs, such as in discrimination and memory tasks. Further, a wide range of experiments have described brain regions in many species that appear to be specialized for processing CVs. For example, several neural regions have been described in both mammals and birds wherein greater neural responses are elicited by CVs than by comparison stimuli such as heterospecific vocalizations, nonvocal complex sounds, and artificial stimuli. These observations raise the question of whether these regions reflect domain-specific neural mechanisms dedicated to processing CVs, or alternatively, if these regions reflect domain-general neural mechanisms for representing complex sounds of learned significance. Inasmuch as CVs can be viewed as complex combinations of basic spectrotemporal features, the plausibility of the latter position is supported by a large body of literature describing modulated cortical and subcortical representation of a variety of acoustic features that have been experimentally associated with stimuli of natural behavioral significance (such as food rewards). Herein, we review a relatively small body of existing literature describing the roles of experience, learning, and memory in the emergence of species-typical neural representations of CVs and auditory system plasticity. In both songbirds and mammals, manipulations of auditory experience as well as specific learning paradigms are shown to modulate neural responses evoked by CVs, either in terms of overall firing rate or temporal firing patterns. In some cases, CV-sensitive neural regions gradually acquire representation of non-CV stimuli with which subjects have training and experience. These results parallel literature in humans describing modulation of responses in face-sensitive neural regions through learning and experience. Thus, although many questions remain, the available evidence is consistent with the notion that CVs may acquire distinct neural representation through domain-general mechanisms for representing complex auditory objects that are of learned importance to the animal. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives".
Affiliation(s)
- Amy Poremba
- University of Iowa, Dept. of Psychology, Div. Behavioral & Cognitive Neuroscience, E11 SSH, Iowa City, IA 52242, USA; University of Iowa, Neuroscience Program, Iowa City, IA 52242, USA.
29
Price CJ. A review and synthesis of the first 20 years of PET and fMRI studies of heard speech, spoken language and reading. Neuroimage 2012;62:816-47. [PMID: 22584224] [PMCID: PMC3398395] [DOI: 10.1016/j.neuroimage.2012.04.062]
Abstract
The anatomy of language has been investigated with PET or fMRI for more than 20 years. Here I attempt to provide an overview of the brain areas associated with heard speech, speech production and reading. The conclusions of many hundreds of studies were considered, grouped according to the type of processing, and reported in the order that they were published. Many findings have been replicated time and time again, leading to some consistent and indisputable conclusions. These are summarised in an anatomical model that indicates the location of the language areas and the most consistent functions that have been assigned to them. The implications for cognitive models of language processing are also considered. In particular, a distinction can be made between processes that are localized to specific structures (e.g. sensory and motor processing) and processes where specialisation arises in the distributed pattern of activation over many different areas that each participate in multiple functions. For example, phonological processing of heard speech is supported by the functional integration of auditory processing and articulation; and orthographic processing is supported by the functional integration of visual processing, articulation and semantics. Future studies will undoubtedly be able to improve the spatial precision with which functional regions can be dissociated, but the greatest challenge will be to understand how different brain regions interact with one another in their attempts to comprehend and produce language.
Affiliation(s)
- Cathy J Price
- Wellcome Trust Centre for Neuroimaging, UCL, London WC1N 3BG, UK.
30
Schulze K, Mueller K, Koelsch S. Auditory Stroop and absolute pitch: an fMRI study. Hum Brain Mapp 2012;34:1579-90. [PMID: 22359341] [DOI: 10.1002/hbm.22010]
Abstract
To date, the underlying cognitive and neural mechanisms of absolute pitch (AP) have remained elusive. In the present fMRI study, we investigated verbal and tonal perception and working memory in musicians with and without absolute pitch. Stimuli were sine wave tones and syllables (names of the scale tones) presented simultaneously. Participants listened to sequences of five stimuli, and then rehearsed internally either the syllables or the tones. Finally, participants indicated whether a test stimulus had been presented during the sequence. For an auditory Stroop task, half of the tonal sequences were congruent (frequencies of tones corresponded to syllables which were the names of the scale tones) and half were incongruent (frequencies of tones did not correspond to syllables). Results indicate that first, verbal and tonal perception overlap strongly in the left superior temporal gyrus/sulcus (STG/STS) in AP musicians only. Second, AP is associated with the categorical perception of tones. Third, the left STG/STS is activated in AP musicians only for the detection of verbal-tonal incongruencies in the auditory Stroop task. Finally, verbal labelling of tones in AP musicians seems to be automatic. Overall, a unique feature of AP appears to be the similarity between verbal and tonal perception.
Affiliation(s)
- Katrin Schulze
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
31
Tierney A, Dick F, Deutsch D, Sereno M. Speech versus song: multiple pitch-sensitive areas revealed by a naturally occurring musical illusion. Cereb Cortex 2012;23:249-54. [PMID: 22314043] [DOI: 10.1093/cercor/bhs003]
Abstract
It is normally obvious to listeners whether a human vocalization is intended to be heard as speech or song. However, the 2 signals are remarkably similar acoustically. A naturally occurring boundary case between speech and song has been discovered where a spoken phrase sounds as if it were sung when isolated and repeated. In the present study, an extensive search of audiobooks uncovered additional similar examples, which were contrasted with samples from the same corpus that do not sound like song, despite containing clear prosodic pitch contours. Using functional magnetic resonance imaging, we show that hearing these 2 closely matched stimuli is not associated with differences in response of early auditory areas. Rather, we find that a network of 8 regions, including the anterior superior temporal gyrus (STG) just anterior to Heschl's gyrus and the right midposterior STG, respond more strongly to speech perceived as song than to mere speech. This network overlaps a number of areas previously associated with pitch extraction and song production, confirming that phrases originally intended to be heard as speech can, under certain circumstances, be heard as song. Our results suggest that song processing compared with speech processing makes increased demands on pitch processing and auditory-motor integration.
Affiliation(s)
- Adam Tierney
- Department of Communication Sciences and Disorders, Auditory Neuroscience Laboratory, Northwestern University, Evanston, IL 60208, USA.
32
Desai R, Liebenthal E, Waldron E, Binder JR. Left posterior temporal regions are sensitive to auditory categorization. J Cogn Neurosci 2008;20:1174-88. [PMID: 18284339] [DOI: 10.1162/jocn.2008.20081]
Abstract
Recent studies suggest that the left superior temporal gyrus and sulcus (LSTG/S) play a role in speech perception, although the precise function of these areas remains unclear. Here, we test the hypothesis that regions in the LSTG/S play a role in the categorization of speech phonemes, irrespective of the acoustic properties of the sounds and prior experience of the listener with them. We examined changes in functional magnetic resonance imaging brain activation related to a perceptual shift from nonphonetic to phonetic analysis of sine-wave speech analogs. Subjects performed an identification task before scanning and a discrimination task during scanning with phonetic (P) and nonphonetic (N) sine-wave sounds, both before (Pre) and after (Post) being exposed to the phonetic properties of the P sounds. Behaviorally, experience with the P sounds induced categorical identification of these sounds. In the PostP > PreP and PostP > PostN contrasts, an area in the posterior LSTG/S was activated. For both P and N sounds, the activation in this region was correlated with the degree of categorical identification in individual subjects. The results suggest that these areas in the posterior LSTG/S are sensitive neither to the acoustic properties of speech nor merely to the presence of phonetic information, but rather to the listener's awareness of category representations for auditory inputs.
Affiliation(s)
- Rutvik Desai
- Department of Neurology, Medical College of Wisconsin, Milwaukee, WI 53226, USA.