101
Parkinson AL, Behroozmand R, Ibrahim N, Korzyukov O, Larson CR, Robin DA. Effective connectivity associated with auditory error detection in musicians with absolute pitch. Front Neurosci 2014; 8:46. [PMID: 24634644] [PMCID: PMC3942878] [DOI: 10.3389/fnins.2014.00046]
Abstract
It is advantageous to study a wide range of vocal abilities in order to fully understand how vocal control measures vary across the full spectrum. Individuals with absolute pitch (AP) are able to assign a verbal label to musical notes and have enhanced abilities in pitch identification without reliance on an external referent. In this study we used dynamic causal modeling (DCM) to model effective connectivity of ERP responses to pitch perturbation in voice auditory feedback in musicians with relative pitch (RP), AP, and non-musician controls. We identified a network comprising left and right hemisphere superior temporal gyrus (STG), primary motor cortex (M1), and premotor cortex (PM). We specified nine models and compared two main factors examining various combinations of STG involvement in the feedback pitch error detection/correction process. Our results suggest that modulation of left-to-right STG connections is important in the identification of self-voice error and sensorimotor integration in AP musicians. We also identified reduced connectivity of left hemisphere PM-to-STG connections in AP and RP groups during the error detection and correction process relative to non-musicians. We suggest that this suppression may allow for enhanced connectivity relating to pitch identification in the right hemisphere in those with more precise pitch matching abilities. Musicians with enhanced pitch identification abilities likely have an improved auditory error detection and correction system involving connectivity of STG regions. Our findings also suggest that individuals with AP are more adept at using feedback related to pitch from the right hemisphere.
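For readers unfamiliar with how competing DCMs are compared at the family level, the sketch below illustrates fixed-effects family inference from model free energies (approximate log-evidences). The free-energy values, the two-family grouping, and the uniform-within-family priors are illustrative assumptions, not the models or numbers used in this study.

```python
import numpy as np

# Hypothetical free energies (approximate log-evidences) for nine DCMs,
# grouped into two families, e.g., by whether inter-hemispheric STG coupling
# is modulated. Values and grouping are invented for illustration.
free_energy = np.array([-1012.3, -1009.8, -1010.5, -1008.1, -1011.7,
                        -1007.9, -1009.2, -1010.9, -1008.6])
family = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1])

# Give each family equal prior mass, spread uniformly over its members.
prior = np.zeros_like(free_energy)
for f in (0, 1):
    members = family == f
    prior[members] = 0.5 / members.sum()

# Fixed-effects posterior over models: softmax of log-evidence plus log-prior.
log_post = free_energy + np.log(prior)
log_post -= log_post.max()                      # numerical stability
posterior = np.exp(log_post) / np.exp(log_post).sum()

# Family posterior = summed posterior probability of its member models.
for f in (0, 1):
    print(f"family {f}: posterior probability {posterior[family == f].sum():.3f}")
```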
Affiliation(s)
- Amy L Parkinson
- Research Imaging Institute, Department of Neurology, University of Texas Health Science Center San Antonio, San Antonio, TX, USA
- Roozbeh Behroozmand
- Human Brain Research Lab, Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
- Nadine Ibrahim
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Oleg Korzyukov
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Charles R Larson
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Donald A Robin
- Research Imaging Institute, Department of Neurology, University of Texas Health Science Center San Antonio, San Antonio, TX, USA
102
Lu Y, Paraskevopoulos E, Herholz SC, Kuchenbuch A, Pantev C. Temporal processing of audiovisual stimuli is enhanced in musicians: evidence from magnetoencephalography (MEG). PLoS One 2014; 9:e90686. [PMID: 24595014] [PMCID: PMC3940930] [DOI: 10.1371/journal.pone.0090686]
Abstract
Numerous studies have demonstrated that structural and functional differences between professional musicians and non-musicians are found not only within a single modality but also in multisensory integration. In this study we combined psychophysical and neurophysiological measurements to investigate the processing of non-musical audiovisual events that were either synchronous or asynchronous at various levels. We hypothesized that long-term multisensory experience alters temporal audiovisual processing already at a non-musical stage. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations, including the ACC and the SFG, and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula, and the postcentral gyrus. Musicians also showed significantly greater activation in the left cerebellum when confronted with an audiovisual asynchrony. Taken together, our MEG results strongly indicate that long-term musical training alters basic audiovisual temporal processing at an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also improve the accuracy of estimates of the timing of audiovisual events.
Affiliation(s)
- Yao Lu
- Institute for Biomagnetism and Biosignalanalysis, University of Münster, Münster, Germany
- Anja Kuchenbuch
- Institute for Biomagnetism and Biosignalanalysis, University of Münster, Münster, Germany
- Christo Pantev
- Institute for Biomagnetism and Biosignalanalysis, University of Münster, Münster, Germany
103
Kyong JS, Scott SK, Rosen S, Howe TB, Agnew ZK, McGettigan C. Exploring the roles of spectral detail and intonation contour in speech intelligibility: an fMRI study. J Cogn Neurosci 2014; 26:1748-63. [PMID: 24568205] [DOI: 10.1162/jocn_a_00583]
Abstract
The melodic contour of speech forms an important perceptual aspect of tonal and nontonal languages and an important limiting factor on the intelligibility of speech heard through a cochlear implant. Previous work exploring the neural correlates of speech comprehension identified a left-dominant pathway in the temporal lobes supporting the extraction of an intelligible linguistic message, whereas the right anterior temporal lobe showed an overall preference for signals clearly conveying dynamic pitch information [Johnsrude, I. S., Penhune, V. B., & Zatorre, R. J. Functional specificity in the right human auditory cortex for perceiving pitch direction. Brain, 123, 155-163, 2000; Scott, S. K., Blank, C. C., Rosen, S., & Wise, R. J. Identification of a pathway for intelligible speech in the left temporal lobe. Brain, 123, 2400-2406, 2000]. The current study combined modulations of overall intelligibility (through vocoding and spectral inversion) with a manipulation of pitch contour (normal vs. falling) to investigate the processing of spoken sentences in functional MRI. Our overall findings replicate and extend those of Scott et al. [Scott, S. K., Blank, C. C., Rosen, S., & Wise, R. J. Identification of a pathway for intelligible speech in the left temporal lobe. Brain, 123, 2400-2406, 2000], where greater sentence intelligibility was predominantly associated with increased activity in the left STS, and the greatest response to normal sentence melody was found in the right superior temporal gyrus. These data suggest a spatial distinction between brain areas associated with intelligibility and those involved in the processing of dynamic pitch information in speech. By including a set of complexity-matched unintelligible conditions created by spectral inversion, this is additionally the first study reporting a fully factorial exploration of spectrotemporal complexity and spectral inversion as they relate to the neural processing of speech intelligibility. Perhaps surprisingly, there was little evidence for an interaction between the two factors; we discuss the implications for the processing of sound and speech in the dorsolateral temporal lobes.
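For readers unfamiliar with the intelligibility manipulation, the sketch below shows a generic noise vocoder of the kind used to reduce spectral detail (fewer channels means less spectral detail and lower intelligibility). It is a simplified illustration with assumed parameters, not the authors' stimulus-processing pipeline, and it omits the spectral-inversion step.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_channels=6, fmin=100.0, fmax=5000.0, seed=0):
    """Crude noise vocoder: keep band envelopes, replace fine structure with noise."""
    rng = np.random.default_rng(seed)
    # Logarithmically spaced band edges between fmin and fmax.
    edges = np.geomspace(fmin, fmax, n_channels + 1)
    out = np.zeros_like(speech, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)
        envelope = np.abs(hilbert(band))                         # band envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(speech)))
        out += envelope * carrier                                # envelope-modulated noise band
    return out / np.max(np.abs(out))

# Example: vocode 1 s of a synthetic, speech-like amplitude-modulated tone at 16 kHz.
fs = 16000
t = np.arange(fs) / fs
fake_speech = np.sin(2 * np.pi * 150 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
vocoded = noise_vocode(fake_speech, fs, n_channels=6)
```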
104
Chang HC, Lee HJ, Tzeng OJL, Kuo WJ. Implicit target substitution and sequencing for lexical tone production in Chinese: an fMRI study. PLoS One 2014; 9:e83126. [PMID: 24427269] [PMCID: PMC3888393] [DOI: 10.1371/journal.pone.0083126]
Abstract
In this study, we examine the neural substrates underlying Tone 3 sandhi and tone sequencing in Mandarin Chinese using fMRI. Tone 3 sandhi is traditionally described as the substitution of Tone 3 with Tone 2 when followed by another Tone 3 (i.e., 33→23). According to current speech production models, target substitution is expected to engage the posterior inferior frontal gyrus. Because Tone 3 sandhi is, to some extent, independent of segments, which makes it more similar to singing, right-lateralized activation in this region was predicted. As for tone sequencing, based on studies of sequencing, we expected involvement of the supplementary motor area. In the experiments, participants were asked to produce twelve four-syllable sequences with the same tone assignment (the repeated sequences) or a different tone assignment (the mixed sequences). We found right-lateralized posterior inferior frontal gyrus activation for the sequence 3333 (Tone 3 sandhi) and left-lateralized activation in the supplementary motor area for the mixed sequences (tone sequencing). We propose that tones and segments could be processed in parallel in the left and right hemispheres, but that their integration, or the product of their integration, is hosted in the left hemisphere.
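To make the substitution rule concrete, here is a minimal, purely illustrative sketch of the pairwise 33→23 rule described in the abstract. It is not the authors' stimulus code, and the surface forms of longer sequences also depend on prosodic grouping.

```python
def apply_tone3_sandhi(tones):
    """Apply the textbook Mandarin Tone 3 sandhi rule (3 3 -> 2 3), scanning left to right.

    Real productions of longer sequences (e.g., 3333) also depend on prosodic
    grouping, so this is only the basic pairwise rule, not a full model.
    """
    tones = list(tones)
    for i in range(len(tones) - 1):
        if tones[i] == 3 and tones[i + 1] == 3:
            tones[i] = 2
    return tones

print(apply_tone3_sandhi([3, 3]))        # [2, 3]
print(apply_tone3_sandhi([3, 3, 3, 3]))  # [2, 2, 2, 3] under this simple scan
```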
Affiliation(s)
- Hui-Chuan Chang
- Institute of Neuroscience, National Yang-Ming University, Taipei, Taiwan
- Hsin-Ju Lee
- Institute of Neuroscience, National Yang-Ming University, Taipei, Taiwan
- Ovid J. L. Tzeng
- Institute of Neuroscience, National Yang-Ming University, Taipei, Taiwan
- Institute of Linguistics, Academia Sinica, Taipei, Taiwan
- Wen-Jui Kuo
- Institute of Neuroscience, National Yang-Ming University, Taipei, Taiwan
- Brain Research Center, National Yang-Ming University, Taipei, Taiwan
105
106
Marie D, Jobard G, Crivello F, Perchey G, Petit L, Mellet E, Joliot M, Zago L, Mazoyer B, Tzourio-Mazoyer N. Descriptive anatomy of Heschl's gyri in 430 healthy volunteers, including 198 left-handers. Brain Struct Funct 2013; 220:729-43. [PMID: 24310352] [PMCID: PMC4341020] [DOI: 10.1007/s00429-013-0680-x]
Abstract
This study describes the gyrification patterns and surface areas of Heschl's gyrus (HG) in 430 healthy volunteers mapped with magnetic resonance imaging. Among the 232 right-handers, we found a high occurrence of duplication (64 %), especially on the right (49 vs. 37 % on the left). On the left, partial duplication was twice as frequent as complete duplication; in the right hemisphere, by contrast, complete duplication was 10 % more frequent than partial duplication. The most frequent inter-hemispheric gyrification patterns were bilateral single HG (36 %) and left single-right duplication (27 %). The least common patterns were left duplication-right single (22 %) and bilateral duplication (15 %). Duplication was associated with decreased anterior HG surface area on the corresponding side, independently of the type of duplication, and increased total HG surface area (including the second gyrus). Inter-hemispheric gyrification patterns strongly influenced both anterior and total HG surface area asymmetries: leftward asymmetry of the anterior HG surface was observed in all patterns except double left HG, and total HG surface asymmetry favored the side of duplication. Compared to right-handers, the 198 left-handers exhibited a lower occurrence of duplication, and larger right anterior HG and total HG surface areas. Left-handers' HG surface asymmetries were thus significantly different from those of right-handers, with a loss of leftward asymmetry of their anterior HG surface and a significant rightward asymmetry of their total HG surface. In summary, gyrification patterns have a strong impact on HG surface area and asymmetry. The reduced lateralization of HG duplications and anterior HG asymmetry observed in left-handers highlights HG inter-hemispheric gyrification patterns as a potential candidate marker of speech lateralization.
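For a concrete sense of how a surface-area asymmetry could be quantified, the snippet below computes a standard asymmetry index on invented values; the index definition and the numbers are assumptions for illustration, not the measurements or formula reported in the paper.

```python
import numpy as np

# Hypothetical anterior Heschl's gyrus surface areas (mm^2) for three participants.
left_area = np.array([520.0, 610.0, 480.0])
right_area = np.array([450.0, 640.0, 400.0])

# A common asymmetry index: positive values indicate leftward asymmetry,
# negative values rightward asymmetry, bounded between -2 and +2.
asymmetry_index = 2.0 * (left_area - right_area) / (left_area + right_area)
print(asymmetry_index)
```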
Affiliation(s)
- D Marie
- GIN, UMR 5296, University Bordeaux, 33000, Bordeaux, France
107
Abstract
Neural oscillatory dynamics are a candidate mechanism to steer perception of time and temporal rate change. While oscillator models of time perception are strongly supported by behavioral evidence, a direct link to neural oscillations and oscillatory entrainment has not yet been provided. In addition, it has thus far remained unaddressed how context-induced illusory percepts of time are coded for in oscillator models of time perception. To investigate these questions, we used magnetoencephalography and examined the neural oscillatory dynamics that underpin pitch-induced illusory percepts of temporal rate change. Human participants listened to frequency-modulated sounds that varied over time in both modulation rate and pitch, and judged the direction of rate change (decrease vs increase). Our results demonstrate distinct neural mechanisms of rate perception: Modulation rate changes directly affected listeners' rate percept as well as the exact frequency of the neural oscillation. However, pitch-induced illusory rate changes were unrelated to the exact frequency of the neural responses. The rate change illusion was instead linked to changes in neural phase patterns, which allowed for single-trial decoding of percepts. That is, illusory underestimations or overestimations of perceived rate change were tightly coupled to increased intertrial phase coherence and changes in cerebro-acoustic phase lag. The results provide insight on how illusory percepts of time are coded for by neural oscillatory dynamics.
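Since inter-trial phase coherence (ITPC) carries the main result here, the sketch below shows the standard ITPC computation on placeholder data; the filtering choices, sampling rate, and trial counts are assumptions, not the study's MEG pipeline.

```python
import numpy as np
from scipy.signal import hilbert

# Hypothetical single-trial data already band-pass filtered around the stimulus
# modulation rate: shape (n_trials, n_samples). Values below are placeholders.
fs = 1000
rng = np.random.default_rng(0)
trials = rng.standard_normal((60, 2 * fs))

# Instantaneous phase per trial via the analytic signal, then inter-trial phase
# coherence: the length of the mean unit phase vector across trials.
phase = np.angle(hilbert(trials, axis=1))
itpc = np.abs(np.mean(np.exp(1j * phase), axis=0))   # one value per time point

# ITPC near 0 = inconsistent phase across trials; near 1 = strongly locked phase.
print(itpc.mean())
```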
108
Scott SK, McGettigan C. Do temporal processes underlie left hemisphere dominance in speech perception? Brain Lang 2013; 127:36-45. [PMID: 24125574] [PMCID: PMC4083253] [DOI: 10.1016/j.bandl.2013.07.006]
Abstract
It is not unusual to find it stated as a fact that the left hemisphere is specialized for the processing of rapid, or temporal, aspects of sound, and that the dominance of the left hemisphere in the perception of speech is a consequence of this specialization. In this review we explore the history of this claim and assess the weight of evidence behind it. We will demonstrate that, rather than the left temporal lobe being specially sensitive to the acoustic properties of speech, it is the right temporal lobe that shows a marked preference for certain properties of sounds, for example longer durations or variations in pitch. We finish by outlining some alternative factors that contribute to the left lateralization of speech perception.
Affiliation(s)
- Sophie K Scott
- Institute for Cognitive Neuroscience, 17 Queen Square, London WC1N 3AR, UK.
109
Psychoacoustic abilities as predictors of vocal emotion recognition. Atten Percept Psychophys 2013; 75:1799-810. [DOI: 10.3758/s13414-013-0518-x]
110
The encoding of vowels and temporal speech cues in the auditory cortex of professional musicians: An EEG study. Neuropsychologia 2013; 51:1608-18. [DOI: 10.1016/j.neuropsychologia.2013.04.007]
111
Abstract
Singing provides a unique opportunity to examine music performance—the musical instrument is contained wholly within the body, thus eliminating the need for creating artificial instruments or tasks in neuroimaging experiments. Here, more than two decades of voice and singing research will be reviewed to give an overview of the sensory-motor control of the singing voice, starting from the vocal tract and leading up to the brain regions involved in singing. Additionally, to demonstrate how sensory feedback is integrated with vocal motor control, recent functional magnetic resonance imaging (fMRI) research on somatosensory and auditory feedback processing during singing will be presented. The relationship between the brain and singing behavior will be explored also by examining: (1) neuroplasticity as a function of various lengths and types of training, (2) vocal amusia due to a compromised singing network, and (3) singing performance in individuals with congenital amusia. Finally, the auditory-motor control network for singing will be considered alongside dual-stream models of auditory processing in music and speech to refine both these theoretical models and the singing network itself.
112
Scott SK, McGettigan C. The neural processing of masked speech. Hear Res 2013; 303:58-66. [PMID: 23685149] [DOI: 10.1016/j.heares.2013.05.001]
Abstract
Spoken language is rarely heard in silence, and a great deal of interest in psychoacoustics has focused on the ways that the perception of speech is affected by properties of masking noise. In this review we first briefly outline the neuroanatomy of speech perception. We then summarise the neurobiological aspects of the perception of masked speech, and investigate this as a function of masker type, masker level and task. This article is part of a Special Issue entitled "Annual Reviews 2013".
Affiliation(s)
- Sophie K Scott
- Institute of Cognitive Neuroscience, UCL, 17 Queen Square, London WC1N 3AR, UK.
113
Bidelman GM. The role of the auditory brainstem in processing musically relevant pitch. Front Psychol 2013; 4:264. [PMID: 23717294] [PMCID: PMC3651994] [DOI: 10.3389/fpsyg.2013.00264]
Abstract
Neuroimaging work has shed light on the cerebral architecture involved in processing the melodic and harmonic aspects of music. Here, recent evidence is reviewed illustrating that subcortical auditory structures contribute to the early formation and processing of musically relevant pitch. Electrophysiological recordings from the human brainstem and population responses from the auditory nerve reveal that nascent features of tonal music (e.g., consonance/dissonance, pitch salience, harmonic sonority) are evident at early, subcortical levels of the auditory pathway. The salience and harmonicity of brainstem activity is strongly correlated with listeners' perceptual preferences and perceived consonance for the tonal relationships of music. Moreover, the hierarchical ordering of pitch intervals/chords described by the Western music practice and their perceptual consonance is well-predicted by the salience with which pitch combinations are encoded in subcortical auditory structures. While the neural correlates of consonance can be tuned and exaggerated with musical training, they persist even in the absence of musicianship or long-term enculturation. As such, it is posited that the structural foundations of musical pitch might result from innate processing performed by the central auditory system. A neurobiological predisposition for consonant, pleasant sounding pitch relationships may be one reason why these pitch combinations have been favored by composers and listeners for centuries. It is suggested that important perceptual dimensions of music emerge well before the auditory signal reaches cerebral cortex and prior to attentional engagement. While cortical mechanisms are no doubt critical to the perception, production, and enjoyment of music, the contribution of subcortical structures implicates a more integrated, hierarchically organized network underlying music processing within the brain.
Affiliation(s)
- Gavin M Bidelman
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA
114
Parkinson AL, Korzyukov O, Larson CR, Litvak V, Robin DA. Modulation of effective connectivity during vocalization with perturbed auditory feedback. Neuropsychologia 2013; 51:1471-80. [PMID: 23665378] [DOI: 10.1016/j.neuropsychologia.2013.05.002]
Abstract
The integration of auditory feedback with vocal motor output is important for the control of voice fundamental frequency (F0). We used a pitch-shift paradigm in which subjects respond to an alteration, or shift, of voice pitch auditory feedback with a reflexive change in F0. We presented varying magnitudes of pitch-shifted auditory feedback to subjects during vocalization and passive listening and measured event-related potentials (ERPs) to the feedback shifts. Shifts were delivered at +100 and +400 cents (200 ms duration). The ERP data were modeled with dynamic causal modeling (DCM) techniques in which the effective connectivity between the superior temporal gyrus (STG), inferior frontal gyrus, and premotor areas was tested. We compared three main factors: the effect of intrinsic STG connectivity, STG modulation across hemispheres, and the specific effect of hemisphere. A Bayesian model selection procedure was used to make inferences about model families. Results suggest that both intrinsic STG and left-to-right STG connections are important in the identification of self-voice error and sensorimotor integration. We identified differences in left-to-right STG connections between the 100 cent and 400 cent shift conditions, suggesting that self- and non-self-voice errors are processed differently in the left and right hemispheres. These results also highlight the potential of DCM modeling of ERP responses to characterize specific network properties of forward models of voice control.
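For reference, the cent is a logarithmic pitch unit (100 cents = 1 semitone), so a feedback shift in cents maps onto frequency as shown below; the 200 Hz fundamental is an assumed example value, not taken from the paper.

```python
def shift_frequency(f0_hz, cents):
    """Frequency obtained by shifting f0_hz by a given number of cents (100 cents = 1 semitone)."""
    return f0_hz * 2.0 ** (cents / 1200.0)

# Example with an assumed 200 Hz voice fundamental.
for cents in (100, 400):
    print(cents, "cents ->", round(shift_frequency(200.0, cents), 1), "Hz")
# 100 cents -> 211.9 Hz; 400 cents -> 252.0 Hz
```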
Affiliation(s)
- Amy L Parkinson
- Research Imaging Institute, University of Texas Health Science Center San Antonio, San Antonio, TX 78229, USA.
115
Albouy P, Mattout J, Bouet R, Maby E, Sanchez G, Aguera PE, Daligault S, Delpuech C, Bertrand O, Caclin A, Tillmann B. Impaired pitch perception and memory in congenital amusia: the deficit starts in the auditory cortex. Brain 2013; 136:1639-61. [DOI: 10.1093/brain/awt082]
116
Krishnan A, Bidelman GM, Smalt CJ, Ananthakrishnan S, Gandour JT. Relationship between brainstem, cortical and behavioral measures relevant to pitch salience in humans. Neuropsychologia 2012; 50:2849-2859. [PMID: 22940428] [PMCID: PMC3483071] [DOI: 10.1016/j.neuropsychologia.2012.08.013]
Abstract
Neural representation of pitch-relevant information at both the brainstem and cortical levels of processing is influenced by language or music experience. However, the functional roles of brainstem and cortical neural mechanisms in the hierarchical network for language processing, and how they drive and maintain experience-dependent reorganization, are not known. In an effort to evaluate the possible interplay between these two levels of pitch processing, we introduce a novel electrophysiological approach to evaluate pitch-relevant neural activity at the brainstem and auditory cortex concurrently. Brainstem frequency-following responses and cortical pitch responses were recorded from participants in response to iterated rippled noise stimuli that varied in stimulus periodicity (pitch salience). A control condition using iterated rippled noise devoid of pitch was employed to ensure pitch specificity of the cortical pitch response. Neural data were compared with behavioral pitch discrimination thresholds. Results showed that magnitudes of neural responses increase systematically and that behavioral pitch discrimination improves with increasing stimulus periodicity, indicating more robust encoding of more salient pitch. Absence of the cortical pitch response in the control condition confirms that this response is specific to pitch. Behavioral pitch discrimination was better predicted by brainstem and cortical responses together than by either alone. The close correspondence between neural and behavioral data suggests that neural correlates of pitch salience that emerge in early, preattentive stages of processing in the brainstem may drive and maintain with high fidelity the early cortical representations of pitch. Together, these neural representations contain adequate information for the development of perceptual pitch salience.
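Iterated rippled noise is built by repeatedly delaying a noise waveform and adding it back to itself, so that more iterations yield a more salient pitch at 1/delay. The sketch below is a generic "add-same" implementation with assumed parameters (16 kHz rate, 8 ms delay), not the stimulus code used in the study.

```python
import numpy as np

def iterated_rippled_noise(duration_s, delay_ms, n_iter, gain=1.0, fs=16000, seed=0):
    """Generate iterated rippled noise via an "add-same" delay-and-add network.

    Each iteration adds a delayed, scaled copy of the running waveform to itself,
    building up temporal regularity (and hence pitch salience) at 1/delay.
    """
    rng = np.random.default_rng(seed)
    delay = int(round(fs * delay_ms / 1000.0))
    x = rng.standard_normal(int(duration_s * fs) + n_iter * delay)
    for _ in range(n_iter):
        x[delay:] = x[delay:] + gain * x[:-delay]
    x = x[n_iter * delay:]                 # trim the build-up portion
    return x / np.max(np.abs(x))           # normalize

# Example: 500 ms IRN with an 8 ms delay (pitch near 125 Hz); more iterations
# give a more salient pitch, fewer iterations approach plain noise.
irn_salient = iterated_rippled_noise(0.5, 8.0, n_iter=32)
irn_weak = iterated_rippled_noise(0.5, 8.0, n_iter=2)
```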
Affiliation(s)
- Ananthanarayan Krishnan
- Department of Speech Language Hearing Sciences, Purdue University, West Lafayette, IN 47907-2038, USA.
- Gavin M Bidelman
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, ON, Canada.
- Christopher J Smalt
- School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47907-2038, USA.
- Saradha Ananthakrishnan
- Department of Speech Language Hearing Sciences, Purdue University, West Lafayette, IN 47907-2038, USA.
- Jackson T Gandour
- Department of Speech Language Hearing Sciences, Purdue University, West Lafayette, IN 47907-2038, USA.
117
Andoh J, Zatorre RJ. Mapping the after-effects of theta burst stimulation on the human auditory cortex with functional imaging. J Vis Exp 2012:e3985. [PMID: 23007549] [DOI: 10.3791/3985]
Abstract
Auditory cortex pertains to the processing of sound, which is at the basis of speech or music-related processing. However, despite considerable recent progress, the functional properties and lateralization of the human auditory cortex are far from being fully understood. Transcranial Magnetic Stimulation (TMS) is a non-invasive technique that can transiently or lastingly modulate cortical excitability via the application of localized magnetic field pulses, and represents a unique method of exploring plasticity and connectivity. It has only recently begun to be applied to understand auditory cortical function. An important issue in using TMS is that the physiological consequences of the stimulation are difficult to establish. Although many TMS studies make the implicit assumption that the area targeted by the coil is the area affected, this need not be the case, particularly for complex cognitive functions which depend on interactions across many brain regions. One solution to this problem is to combine TMS with functional Magnetic resonance imaging (fMRI). The idea here is that fMRI will provide an index of changes in brain activity associated with TMS. Thus, fMRI would give an independent means of assessing which areas are affected by TMS and how they are modulated. In addition, fMRI allows the assessment of functional connectivity, which represents a measure of the temporal coupling between distant regions. It can thus be useful not only to measure the net activity modulation induced by TMS in given locations, but also the degree to which the network properties are affected by TMS, via any observed changes in functional connectivity. Different approaches exist to combine TMS and functional imaging according to the temporal order of the methods. Functional MRI can be applied before, during, after, or both before and after TMS. Recently, some studies interleaved TMS and fMRI in order to provide online mapping of the functional changes induced by TMS. However, this online combination has many technical problems, including the static artifacts resulting from the presence of the TMS coil in the scanner room, or the effects of TMS pulses on the process of MR image formation. But more importantly, the loud acoustic noise induced by TMS (increased compared with standard use because of the resonance of the scanner bore) and the increased TMS coil vibrations (caused by the strong mechanical forces due to the static magnetic field of the MR scanner) constitute a crucial problem when studying auditory processing. This is one reason why fMRI was carried out before and after TMS in the present study. Similar approaches have been used to target the motor cortex, premotor cortex, primary somatosensory cortex and language-related areas, but so far no combined TMS-fMRI study has investigated the auditory cortex. The purpose of this article is to provide details concerning the protocol and considerations necessary to successfully combine these two neuroscientific tools to investigate auditory processing. Previously we showed that repetitive TMS (rTMS) at high and low frequencies (resp. 10 Hz and 1 Hz) applied over the auditory cortex modulated response time (RT) in a melody discrimination task. We also showed that RT modulation was correlated with functional connectivity in the auditory network assessed using fMRI: the higher the functional connectivity between left and right auditory cortices during task performance, the higher the facilitatory effect (i.e. decreased RT) observed with rTMS. 
However those findings were mainly correlational, as fMRI was performed before rTMS. Here, fMRI was carried out before and immediately after TMS to provide direct measures of the functional organization of the auditory cortex, and more specifically of the plastic reorganization of the auditory neural network occurring after the neural intervention provided by TMS. Combined fMRI and TMS applied over the auditory cortex should enable a better understanding of brain mechanisms of auditory processing, providing physiological information about functional effects of TMS. This knowledge could be useful for many cognitive neuroscience applications, as well as for optimizing therapeutic applications of TMS, particularly in auditory-related disorders.
Affiliation(s)
- Jamila Andoh
- Montreal Neurological Institute and International Laboratory for Brain, Music, and Sound, McGill University.
118
Dykstra AR, Koh CK, Braida LD, Tramo MJ. Dissociation of detection and discrimination of pure tones following bilateral lesions of auditory cortex. PLoS One 2012; 7:e44602. [PMID: 22957087] [PMCID: PMC3434164] [DOI: 10.1371/journal.pone.0044602]
Abstract
It is well known that damage to the peripheral auditory system causes deficits in tone detection as well as in pitch and loudness perception across a wide range of frequencies. However, the extent to which the auditory cortex plays a critical role in these basic aspects of spectral processing, especially with regard to speech, music, and environmental sound perception, remains unclear. Recent experiments indicate that primary auditory cortex is necessary for the normally high perceptual acuity exhibited by humans in pure-tone frequency discrimination. The present study assessed whether the auditory cortex plays a similar role in the intensity domain and contrasted its contribution to sensory versus discriminative aspects of intensity processing. We measured intensity thresholds for pure-tone detection and pure-tone loudness discrimination in a population of healthy adults and in a middle-aged man with complete or near-complete lesions of the auditory cortex bilaterally. Detection thresholds in his left and right ears were 16 and 7 dB HL, respectively, within clinically defined normal limits. In contrast, the intensity threshold for monaural loudness discrimination at 1 kHz was 6.5±2.1 dB in the left ear and 6.5±1.9 dB in the right ear at 40 dB sensation level, well above the means of the control population (left ear: 1.6±0.22 dB; right ear: 1.7±0.19 dB). The results indicate that auditory cortex lowers just-noticeable differences for loudness discrimination by approximately 5 dB but is not necessary for tone detection in quiet. Previous human and Old World monkey experiments employing lesion-effect, neurophysiology, and neuroimaging methods to investigate the role of auditory cortex in intensity processing are reviewed.
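To make the reported thresholds concrete, the snippet below simply converts a level just-noticeable difference in dB into the corresponding intensity ratio using the definition of the decibel; the labels reuse values quoted in the abstract.

```python
# A level difference in dB maps onto an intensity ratio via
# delta_L = 10 * log10(I2 / I1), so I2 / I1 = 10 ** (delta_L / 10).
for label, delta_db in [("patient (either ear)", 6.5), ("controls (left ear)", 1.6)]:
    ratio = 10 ** (delta_db / 10)
    print(f"{label}: a {delta_db} dB JND corresponds to a {ratio:.2f}x intensity ratio")
# patient (either ear): ~4.47x; controls (left ear): ~1.45x
```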
Affiliation(s)
- Andrew R Dykstra
- Program in Speech and Hearing Biosciences and Technology, Harvard-MIT Division of Health Sciences and Technology, Cambridge, Massachusetts, United States of America.
119
Witteman J, Van Heuven VJP, Schiller NO. Hearing feelings: a quantitative meta-analysis on the neuroimaging literature of emotional prosody perception. Neuropsychologia 2012; 50:2752-2763. [PMID: 22841991] [DOI: 10.1016/j.neuropsychologia.2012.07.026]
Abstract
With the advent of neuroimaging considerable progress has been made in uncovering the neural network involved in the perception of emotional prosody. However, the exact neuroanatomical underpinnings of the emotional prosody perception process remain unclear. Furthermore, it is unclear what the intrahemispheric basis might be of the relative right-hemispheric specialization for emotional prosody perception that has been found previously in the lesion literature. In an attempt to shed light on these issues, quantitative meta-analyses of the neuroimaging literature were performed to investigate which brain areas are robustly associated with stimulus-driven and task-dependent perception of emotional prosody. Also, lateralization analyses were performed to investigate whether statistically reliable hemispheric specialization across studies can be found in these networks. A bilateral temporofrontal network was found to be implicated in emotional prosody perception, generally supporting previously proposed models of emotional prosody perception. Right-lateralized convergence across studies was found in (early) auditory processing areas, suggesting that the right hemispheric specialization for emotional prosody perception reported previously in the lesion literature might be driven by hemispheric specialization for non-prosody-specific fundamental acoustic dimensions of the speech signal.
Affiliation(s)
- Jurriaan Witteman
- Leiden Institute for Brain and Cognition, Leiden University, The Netherlands; Leiden University Centre for Linguistics, Leiden University, The Netherlands.
- Vincent J P Van Heuven
- Leiden Institute for Brain and Cognition, Leiden University, The Netherlands; Leiden University Centre for Linguistics, Leiden University, The Netherlands
- Niels O Schiller
- Leiden Institute for Brain and Cognition, Leiden University, The Netherlands; Leiden University Centre for Linguistics, Leiden University, The Netherlands
120
Giordano BL, McAdams S, Zatorre RJ, Kriegeskorte N, Belin P. Abstract encoding of auditory objects in cortical activity patterns. Cereb Cortex 2012; 23:2025-37. [PMID: 22802575] [DOI: 10.1093/cercor/bhs162]
Abstract
The human brain is thought to process auditory objects along a hierarchical temporal "what" stream that progressively abstracts object information from the low-level structure (e.g., loudness) as processing proceeds along the middle-to-anterior direction. Empirical demonstrations of abstract object encoding, independent of low-level structure, have relied on speech stimuli, and non-speech studies of object-category encoding (e.g., human vocalizations) often lack a systematic assessment of low-level information (e.g., vocalizations are highly harmonic). It is currently unknown whether abstract encoding constitutes a general functional principle that operates for auditory objects other than speech. We combined multivariate analyses of functional imaging data with an accurate analysis of the low-level acoustical information to examine the abstract encoding of non-speech categories. We observed abstract encoding of the living and human-action sound categories in the fine-grained spatial distribution of activity in the middle-to-posterior temporal cortex (e.g., planum temporale). Abstract encoding of auditory objects appears to extend to non-speech biological sounds and to operate in regions other than the anterior temporal lobe. Neural processes for the abstract encoding of auditory objects might have facilitated the emergence of speech categories in our ancestors.
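As a generic illustration of the multivariate "decoding" logic behind such pattern analyses, here is a minimal cross-validated classification sketch on placeholder data; the classifier, features, and labels are assumptions and do not reproduce the authors' representational analyses or their controls for low-level acoustics.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical data: one activity pattern per sound (sounds x voxels) from a
# temporal-cortex region, with a category label per sound
# (e.g., 0 = living, 1 = human action, 2 = non-living). All values are placeholders.
rng = np.random.default_rng(0)
X = rng.standard_normal((120, 500))        # 120 sounds x 500 voxels
y = rng.integers(0, 3, size=120)           # category labels

# Cross-validated linear decoding of sound category from the spatial pattern.
clf = make_pipeline(StandardScaler(), LinearSVC(dual=False, max_iter=5000))
scores = cross_val_score(clf, X, y, cv=5)
print("mean decoding accuracy:", scores.mean())  # chance is ~0.33 for 3 classes
```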
Affiliation(s)
- Bruno L Giordano
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK.
121
Särkämö T, Soto D. Music listening after stroke: beneficial effects and potential neural mechanisms. Ann N Y Acad Sci 2012; 1252:266-81. [PMID: 22524369] [DOI: 10.1111/j.1749-6632.2011.06405.x]
Abstract
Music is an enjoyable leisure activity that also engages many emotional, cognitive, and motor processes in the brain. Here, we will first review previous literature on the emotional and cognitive effects of music listening in healthy persons and various clinical groups. Then we will present findings about the short- and long-term effects of music listening on the recovery of cognitive function in stroke patients and the underlying neural mechanisms of these music effects. First, our results indicate that listening to pleasant music can have a short-term facilitating effect on visual awareness in patients with visual neglect, which is associated with functional coupling between emotional and attentional brain regions. Second, daily music listening can improve auditory and verbal memory, focused attention, and mood as well as induce structural gray matter changes in the early poststroke stage. The psychological and neural mechanisms potentially underlying the rehabilitating effect of music after stroke are discussed.
Affiliation(s)
- Teppo Särkämö
- Cognitive Brain Research Unit, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland.
122
Butler BE, Trainor LJ. Sequencing the cortical processing of pitch-evoking stimuli using EEG analysis and source estimation. Front Psychol 2012; 3:180. [PMID: 22740836] [PMCID: PMC3382913] [DOI: 10.3389/fpsyg.2012.00180]
Abstract
Cues to pitch include spectral cues that arise from tonotopic organization and temporal cues that arise from firing patterns of auditory neurons. fMRI studies suggest a common pitch center is located just beyond primary auditory cortex along the lateral aspect of Heschl’s gyrus, but little work has examined the stages of processing for the integration of pitch cues. Using electroencephalography, we recorded cortical responses to high-pass filtered iterated rippled noise (IRN) and high-pass filtered complex harmonic stimuli, which differ in temporal and spectral content. The two stimulus types were matched for pitch saliency, and a mismatch negativity (MMN) response was elicited by infrequent pitch changes. The P1 and N1 components of event-related potentials (ERPs) are thought to arise from primary and secondary auditory areas, respectively, and to result from simple feature extraction. MMN is generated in secondary auditory cortex and is thought to act on feature-integrated auditory objects. We found that peak latencies of both P1 and N1 occur later in response to IRN stimuli than to complex harmonic stimuli, but found no latency differences between stimulus types for MMN. The location of each ERP component was estimated based on iterative fitting of regional sources in the auditory cortices. The sources of both the P1 and N1 components elicited by IRN stimuli were located dorsal to those elicited by complex harmonic stimuli, whereas no differences were observed for MMN sources across stimuli. Furthermore, the MMN component was located between the P1 and N1 components, consistent with fMRI studies indicating a common pitch region in lateral Heschl’s gyrus. These results suggest that while the spectral and temporal processing of different pitch-evoking stimuli involves different cortical areas during early processing, by the time the object-related MMN response is formed, these cues have been integrated into a common representation of pitch.
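For readers less familiar with the MMN, the sketch below shows the conventional deviant-minus-standard difference wave on placeholder single-electrode data; the sampling rate, epoch window, and trial counts are assumptions, and the study's actual analysis additionally involved source estimation.

```python
import numpy as np

# Hypothetical epoched EEG: (n_trials, n_samples) at one electrode (e.g., Fz),
# already baseline-corrected, sampled at 500 Hz with epochs from -100 to 500 ms.
fs = 500
times = np.arange(-0.1, 0.5, 1 / fs)
standard_epochs = np.random.randn(400, times.size)   # placeholder data
deviant_epochs = np.random.randn(80, times.size)     # placeholder data

# The MMN is conventionally the deviant-minus-standard difference wave,
# typically peaking roughly 100-250 ms after change onset.
erp_standard = standard_epochs.mean(axis=0)
erp_deviant = deviant_epochs.mean(axis=0)
mmn = erp_deviant - erp_standard

window = (times >= 0.10) & (times <= 0.25)
print("MMN mean amplitude, 100-250 ms:", mmn[window].mean())
```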
Affiliation(s)
- Blake E Butler
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, ON, Canada
123
Laufer O, Paz R. Monetary loss alters perceptual thresholds and compromises future decisions via amygdala and prefrontal networks. J Neurosci 2012; 32:6304-11. [PMID: 22553036] [PMCID: PMC6622137] [DOI: 10.1523/jneurosci.6281-11.2012]
Abstract
The influence of monetary loss on decision making and choice behavior is extensively studied. However, the effect of loss on sensory perception is less explored. Here, we use conditioning in human subjects to explore how monetary loss associated with a pure tone can affect changes in perceptual thresholds for the previously neutral stimulus. We found that loss conditioning, when compared with neutral exposure, decreases sensitivity and increases perceptual thresholds (i.e., a relative increase in the just-noticeable-difference). This was so even when compared with gain conditioning of comparable intensity, suggesting that the finding is related to valence. We further show that these perceptual changes are related to future decisions about stimuli that are farther away from the conditioned one (wider generalization), resulting in overall increased and irrational monetary loss for the subjects. We use functional imaging to identify the neural network whose activity correlates with the deterioration in sensitivity on an individual basis. In addition, we show that activity in the amygdala was tightly correlated with the wider behavioral generalization, namely, when wrong decisions were made. We suggest that, in principle, less discrimination can be beneficial in loss scenarios, because it assures an accurate and fast response to stimuli that resemble the original stimulus and hence have a high likelihood of entailing the same outcome. But whereas this can be useful for primary reinforcers that can impact survival, it can also underlie wrong and costly behaviors in scenarios of contemporary life that involve secondary reinforcers.
Affiliation(s)
- Offir Laufer
- Department of Neurobiology, Weizmann Institute of Science, Rehovot 76100, Israel
- Rony Paz
- Department of Neurobiology, Weizmann Institute of Science, Rehovot 76100, Israel
124
Liu F, Xu Y, Patel AD, Francart T, Jiang C. Differential recognition of pitch patterns in discrete and gliding stimuli in congenital amusia: evidence from Mandarin speakers. Brain Cogn 2012; 79:209-15. [PMID: 22546729] [DOI: 10.1016/j.bandc.2012.03.008]
Abstract
This study examined whether "melodic contour deafness" (insensitivity to the direction of pitch movement) in congenital amusia is associated with specific types of pitch patterns (discrete versus gliding pitches) or stimulus types (speech syllables versus complex tones). Thresholds for identification of pitch direction were obtained using discrete or gliding pitches in the syllable /ma/ or its complex tone analog, from nineteen amusics and nineteen controls, all healthy university students with Mandarin Chinese as their native language. Amusics, unlike controls, had more difficulty recognizing pitch direction in discrete than in gliding pitches, for both speech and non-speech stimuli. Also, amusic thresholds were not significantly affected by stimulus types (speech versus non-speech), whereas controls showed lower thresholds for tones than for speech. These findings help explain why amusics have greater difficulty with discrete musical pitch perception than with speech perception, in which continuously changing pitch movements are prevalent.
Affiliation(s)
- Fang Liu
- Center for the Study of Language and Information, Stanford University, Stanford, CA 94305-4101, USA
125
McGettigan C, Scott SK. Cortical asymmetries in speech perception: what's wrong, what's right and what's left? Trends Cogn Sci 2012; 16:269-76. [PMID: 22521208] [DOI: 10.1016/j.tics.2012.04.006]
Abstract
Over the past 30 years hemispheric asymmetries in speech perception have been construed within a domain-general framework, according to which preferential processing of speech is due to left-lateralized, non-linguistic acoustic sensitivities. A prominent version of this argument holds that the left temporal lobe selectively processes rapid/temporal information in sound. Acoustically, this is a poor characterization of speech and there has been little empirical support for a left-hemisphere selectivity for these cues. In sharp contrast, the right temporal lobe is demonstrably sensitive to specific acoustic properties. We suggest that acoustic accounts of speech sensitivities need to be informed by the nature of the speech signal and that a simple domain-general vs. domain-specific dichotomy may be incorrect.
Affiliation(s)
- Carolyn McGettigan
- Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, UK
126
Interaction between bottom-up and top-down effects during the processing of pitch intervals in sequences of spoken and sung syllables. Neuroimage 2012; 61:715-22. [PMID: 22503936] [DOI: 10.1016/j.neuroimage.2012.03.086]
Abstract
The processing of pitch intervals may be differentially influenced depending on whether musical or speech stimuli carry the pitch information. Most insights into the neural basis of pitch interval processing come from studies on music perception. However, music, in contrast to speech, contains a stable set of pitch intervals. To bring the investigation of pitch interval processing in music and speech together, we used sequences of the same spoken or sung syllables. The pitch of these syllables varied either by semitone steps, as in music, or by smaller intervals. Participants had to differentiate the sequences according to their different sizes of pitch intervals or according to the direction of the last frequency step in the sequence. The results depended strongly on the specific task demands. Whereas the interval-size task itself recruited more regions of a right-lateralized fronto-parietal brain network, stronger activity for semitone than for non-semitone sequences was found in the left hemisphere (mainly in frontal cortex) during this task. These effects were also influenced by the speech mode (spoken or sung syllables). Our findings suggest that the processing of pitch intervals in sequences of syllables depends on an interaction between bottom-up (speech mode, pitch interval) and top-down (task) effects.
127
Deroche MLD, Zion DJ, Schurman JR, Chatterjee M. Sensitivity of school-aged children to pitch-related cues. J Acoust Soc Am 2012; 131:2938-2947. [PMID: 22501071] [PMCID: PMC3339501] [DOI: 10.1121/1.3692230]
Abstract
Two experiments investigated the ability of 17 school-aged children to process purely temporal and spectro-temporal cues that signal changes in pitch. Percentage correct was measured for the discrimination of sinusoidal amplitude modulation rate (AMR) of broadband noise in experiment 1 and for the discrimination of fundamental frequency (F0) of broadband sine-phase harmonic complexes in experiment 2. The reference AMR was 100 Hz, as was the reference F0. A child-friendly interface helped listeners remain attentive to the task. Data were fitted using a maximum-likelihood technique that extracted threshold, slope, and lapse rate. All thresholds were subsequently standardized to a common d' value equal to 0.77. There were relatively large individual differences across listeners: eight had relatively adult-like thresholds in both tasks and nine had higher thresholds. However, these individual differences did not vary systematically with age over the span of 6-16 yr. Thresholds were correlated across the two tasks and were about nine times finer for F0 discrimination than for AMR discrimination, as has previously been observed in adults.
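The fitting step can be made concrete with a small sketch: a maximum-likelihood fit of a psychometric function with a lapse parameter, followed by conversion of the threshold to a common d' of 0.77. The logistic form, the two-alternative guess rate, and the data are illustrative assumptions; the authors' exact procedure and task structure may differ.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical discrimination data: stimulus differences (e.g., % change from the
# reference), number of trials, and number correct at each level.
levels = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
n_trials = np.array([30, 30, 30, 30, 30])
n_correct = np.array([16, 19, 24, 28, 30])

GUESS = 0.5    # assuming a two-alternative task; the study's task may differ

def psychometric(x, alpha, beta, lapse):
    """Logistic psychometric function in log10 stimulus units, with guess and lapse rates."""
    f = 1.0 / (1.0 + np.exp(-(np.log10(x) - alpha) / beta))
    return GUESS + (1.0 - GUESS - lapse) * f

def neg_log_likelihood(params):
    alpha, beta, lapse = params
    p = np.clip(psychometric(levels, alpha, beta, lapse), 1e-6, 1 - 1e-6)
    return -np.sum(n_correct * np.log(p) + (n_trials - n_correct) * np.log(1 - p))

fit = minimize(neg_log_likelihood, x0=[0.3, 0.2, 0.02],
               bounds=[(-2, 2), (0.01, 2), (0.0, 0.1)], method="L-BFGS-B")
alpha, beta, lapse = fit.x

# Standardize the threshold to d' = 0.77: for 2AFC, p(correct) = Phi(d'/sqrt(2)).
pc_target = norm.cdf(0.77 / np.sqrt(2))            # ~0.707
f_target = (pc_target - GUESS) / (1.0 - GUESS - lapse)
threshold = 10 ** (alpha + beta * np.log(f_target / (1.0 - f_target)))
print(f"threshold at d'=0.77: {threshold:.2f}")
```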
Affiliation(s)
- Mickael L D Deroche
- Cochlear Implants and Psychophysics Lab, Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA.
128
Burton H, Firszt JB, Holden T, Agato A, Uchanski RM. Activation lateralization in human core, belt, and parabelt auditory fields with unilateral deafness compared to normal hearing. Brain Res 2012; 1454:33-47. [PMID: 22502976] [DOI: 10.1016/j.brainres.2012.02.066]
Abstract
We studied activation magnitudes in core, belt, and parabelt auditory cortex in adults with normal hearing (NH) and unilateral hearing loss (UHL) using an interrupted, single-event design and monaural stimulation with random spectrographic sounds. NH participants had one ear blocked and received stimulation on the side matching the intact ear in UHL. The objective was to determine whether the side of deafness affected the lateralization and magnitude of evoked blood oxygen level-dependent responses across different auditory cortical fields (ACFs). Regardless of ear of stimulation, NH showed larger contralateral responses in several ACFs. With right ear stimulation in UHL, ipsilateral responses were larger compared to NH in core and belt ACFs, indicating neuroplasticity in the right hemisphere. With left ear stimulation in UHL, only posterior core ACFs showed larger ipsilateral responses, suggesting that most ACFs in the left hemisphere had greater resilience against reduced crossed inputs from a deafferented right ear. Parabelt regions located posterolateral to core and belt auditory cortex showed reduced activation in UHL compared to NH, irrespective of the ear of stimulation and the lateralization of inputs. Thus, the effect in UHL compared to NH differed by ACF and ear of deafness.
Affiliation(s)
- Harold Burton
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, MO 63110, USA.
129
Brain 'talks over' boring quotes: top-down activation of voice-selective areas while listening to monotonous direct speech quotations. Neuroimage 2012; 60:1832-42. [PMID: 22306805] [DOI: 10.1016/j.neuroimage.2012.01.111]
Abstract
In human communication, direct speech (e.g., Mary said, "I'm hungry") is perceived as more vivid than indirect speech (e.g., Mary said that she was hungry). This vividness distinction has previously been found to underlie silent reading of quotations: Using functional magnetic resonance imaging (fMRI), we found that direct speech elicited higher brain activity in the temporal voice areas (TVA) of the auditory cortex than indirect speech, consistent with an "inner voice" experience in reading direct speech. Here we show that listening to monotonously spoken direct versus indirect speech quotations also engenders differential TVA activity. This suggests that individuals engage in top-down simulations or imagery of enriched supra-segmental acoustic representations while listening to monotonous direct speech. The findings shed new light on the acoustic nature of the "inner voice" in understanding direct speech.
130
Reed CL, Cahn SJ, Cory C, Szaflarski JP. Impaired perception of harmonic complexity in congenital amusia: a case study. Cogn Neuropsychol 2012; 28:305-21. [PMID: 22248246] [DOI: 10.1080/02643294.2011.646972]
Abstract
This study investigates whether congenital amusia (an inability to perceive music from birth) also impairs the perception of musical qualities that do not rely on fine-grained pitch discrimination. We first established that G.G. (a 64-year-old male with age-typical hearing) met the criteria for congenital amusia and that his deficits were music-specific, using assessments of language processing, intonation, prosody, fine-grained pitch processing, pitch discrimination, identification of discrepant tones and of pitch direction for tones in a series, pitch discrimination within scale segments, predictability of tone sequences, recognition versus knowing memory for melodies, and short-term memory for melodies. Next, we conducted tests of tonal fusion, harmonic complexity, and affect perception: recognizing timbre, assessing consonance and dissonance, and recognizing musical affect from harmony. G.G. displayed relatively unimpaired perception and production of environmental sounds, prosody, and emotion conveyed by speech, compared with impaired fine-grained pitch perception, tonal sequence discrimination, and melody recognition. Importantly, G.G. could not perform the tests of tonal fusion that do not rely on fine-grained pitch discrimination: he could not distinguish timbre, consonance from dissonance, simultaneously sounded notes, or musical affect conveyed by harmony. The results indicate at least three distinct problems: one with pitch discrimination, one with harmonic simultaneity, and one with musical affect, each with distinct consequences for music perception.
Affiliation(s)
- Catherine L Reed
- Department of Psychology, Claremont McKenna College and Claremont Graduate University, Claremont, CA 91711, USA.
131
Abdul-Kareem IA, Stancak A, Parkes LM, Al-Ameen M, Alghamdi J, Aldhafeeri FM, Embleton K, Morris D, Sluming V. Plasticity of the superior and middle cerebellar peduncles in musicians revealed by quantitative analysis of volume and number of streamlines based on diffusion tensor tractography. The Cerebellum 2012; 10:611-23. [PMID: 21503593 DOI: 10.1007/s12311-011-0274-1] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
This work was conducted to study the plasticity of the superior (SCP) and middle (MCP) cerebellar peduncles in musicians. The cerebellum is well known to support several musically relevant motor, sensory and cognitive functions. Previous studies reported increased cerebellar volume and grey matter (GM) density in musicians. Here, we report on plasticity of the white matter (WM) of the cerebellum. Our cohort comprised 10 musicians and 10 gender- and handedness-matched controls. Using diffusion tensor imaging, fibre tractography of the SCP and MCP was performed. The fractional anisotropy (FA), number of streamlines and volume of streamlines of the SCP/MCP were compared between groups. Automated measurements of GM and WM volumes of the right and left cerebellar hemispheres were also compared. Musicians showed significantly increased right SCP volume (p = 0.02) and number of streamlines (p = 0.001), right MCP volume (p = 0.004) and total WM volume of the right cerebellum (p = 0.003). There were no significant differences in right MCP number of streamlines, left SCP/MCP volume and number of streamlines, SCP/MCP FA values, GM volume of the right cerebellum, or GM/WM volumes of the left cerebellum. We propose that the increased volume and number of streamlines of the right cerebellar peduncles represent use-dependent structural adaptation to the increased sensorimotor and cognitive demands placed on the musician's cerebellum.
Affiliation(s)
- Ihssan A Abdul-Kareem
- Department of Molecular and Cellular Biology, Institute of Translational Medicine, University of Liverpool, Liverpool, UK.
132
Contralateral white noise attenuates 40-Hz auditory steady-state fields but not N100m in auditory evoked fields. Neuroimage 2012; 59:1037-42. [DOI: 10.1016/j.neuroimage.2011.08.108] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2011] [Revised: 08/28/2011] [Accepted: 08/29/2011] [Indexed: 11/23/2022] Open
133
Affiliation(s)
- Peter Wolf
- Danish Epilepsy Center Filadelfia, Dianalund, Denmark.
134
Petacchi A, Kaernbach C, Ratnam R, Bower JM. Increased activation of the human cerebellum during pitch discrimination: A positron emission tomography (PET) study. Hear Res 2011; 282:35-48. [DOI: 10.1016/j.heares.2011.09.008] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/25/2011] [Revised: 09/21/2011] [Accepted: 09/29/2011] [Indexed: 11/28/2022]
135
Williamson VJ, Liu F, Peryer G, Grierson M, Stewart L. Perception and action de-coupling in congenital amusia: sensitivity to task demands. Neuropsychologia 2011; 50:172-80. [PMID: 22138419 DOI: 10.1016/j.neuropsychologia.2011.11.015] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2011] [Revised: 10/20/2011] [Accepted: 11/16/2011] [Indexed: 11/27/2022]
Abstract
Theories positing a distinct auditory action stream have received support from the finding that individuals with congenital amusia, a disorder of pitch perception, are able to reproduce the direction of a pitch change that they are unable to identify (Loui, Guenther, Mathys, & Schlaug, 2008). Although this finding has proved influential in theorizing about the existence of an auditory action stream, aspects of the original study warrant further investigation. The present report attempts to replicate the original study's findings in a sizeable cohort of individuals with amusia (n=14), obtaining action (production) and perception thresholds for pitch direction. In contrast to the original study, we find evidence of a double dissociation: while a minority of amusics had lower (better) thresholds for production than for perception of pitch, more than half showed the reverse pattern. To explore the impact of task demands, perception thresholds were also measured using a two-alternative, criterion-free, forced-choice task that avoided labeling demands. Controls' thresholds were task-invariant, whereas amusics' thresholds were significantly task-dependent. We argue that the direction and extent of a perception/production dissociation in this population reflects individual differences in the mapping of pitch representations to labels ("up"; "down") and to the vocal apparatus, as opposed to anything intrinsically yoked to perception or action per se.
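The perception and production thresholds referred to above are typically estimated with an adaptive psychophysical procedure. The sketch below simulates a generic 2-down/1-up staircase (which converges near 70.7% correct) for a pitch-direction task; the staircase rule, step sizes, start level, and simulated listener are assumptions for illustration and are not taken from Williamson et al.

```python
# Hedged sketch: a generic 2-down/1-up adaptive staircase of the kind commonly
# used to estimate pitch-direction thresholds. All parameters are illustrative.

import random

def staircase_threshold(p_correct_at, start=4.0, step=0.5, n_reversals=8):
    level, correct_run, direction, reversals = start, 0, -1, []
    while len(reversals) < n_reversals:
        correct = random.random() < p_correct_at(level)
        if correct:
            correct_run += 1
            if correct_run == 2:                     # two correct -> make it harder
                correct_run = 0
                if direction == +1:
                    reversals.append(level)
                direction = -1
                level = max(0.1, level - step)
        else:                                        # one error -> make it easier
            correct_run = 0
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step
    return sum(reversals[-6:]) / 6                   # mean of the late reversals

# Simulated listener whose accuracy grows with pitch-change size (in semitones)
print(staircase_threshold(lambda semitones: min(0.99, 0.5 + 0.12 * semitones)))
```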
Affiliation(s)
- Victoria J Williamson
- Psychology Department, Goldsmiths, University of London, New Cross, London SE14 6NW, United Kingdom.
136
Mathys C, Loui P, Zheng X, Schlaug G. Non-invasive brain stimulation applied to Heschl's gyrus modulates pitch discrimination. Front Psychol 2011; 1:193. [PMID: 21286253 PMCID: PMC3028589 DOI: 10.3389/fpsyg.2010.00193] [Citation(s) in RCA: 52] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
The neural basis of the human brain's ability to discriminate pitch has been investigated by functional neuroimaging and the study of lesioned brains, indicating the critical importance of right and left Heschl's gyrus (HG) in pitch perception. Nonetheless, there remains some uncertainty with regard to the localization and lateralization of pitch discrimination, partly because neuroimaging results do not allow us to draw inferences about causality. To address the problem of causality in pitch discrimination, we used transcranial direct current stimulation to downregulate (via cathodal stimulation) and upregulate (via anodal stimulation) excitability in either left or right auditory cortex and measured the effect on performance in a pitch discrimination task in comparison with sham stimulation. Cathodal stimulation of HG in both the left and the right hemisphere adversely affected pitch discrimination in comparison to sham stimulation, with the effect on the right being significantly stronger than on the left. Anodal stimulation on either side had no effect on performance in comparison to sham. Our results indicate that both left and right HG are causally involved in pitch discrimination, although the right auditory cortex might be a stronger contributor.
Affiliation(s)
- Christoph Mathys
- Music and Neuroimaging Laboratory, Department of Neurology, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, MA, USA
137
Andoh J, Zatorre RJ. Interhemispheric Connectivity Influences the Degree of Modulation of TMS-Induced Effects during Auditory Processing. Front Psychol 2011; 2:161. [PMID: 21811478 PMCID: PMC3139954 DOI: 10.3389/fpsyg.2011.00161] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2011] [Accepted: 06/27/2011] [Indexed: 11/13/2022] Open
Abstract
Repetitive transcranial magnetic stimulation (rTMS) has been shown to interfere with many components of language processing, including semantic, syntactic, and phonological processing. However, little is known about its effects on nonlinguistic auditory processing, especially its action on Heschl's gyrus (HG). We aimed to investigate the behavioral and neural effects of rTMS during a melody processing task while targeting the left HG, the right HG, and the Vertex as a control site. Response times (RT) were normalized relative to baseline rTMS (Vertex) and expressed as percentage change from baseline (%RT change). We also examined sex differences in the rTMS-induced response as well as in functional connectivity during melody processing, using rTMS and functional magnetic resonance imaging (fMRI). The fMRI results showed greater activity in the right HG than in the left HG during the melody task, as well as sex differences in functional connectivity, indicating greater interhemispheric connectivity between left and right HG in females than in males. The TMS results showed that 10 Hz rTMS targeting the right HG induced differential effects according to sex, with a facilitation of performance in females and an impairment of performance in males. We also found a differential correlation between the %RT change after 10 Hz rTMS targeting the right HG and the interhemispheric functional connectivity between right and left HG, indicating that greater interhemispheric functional connectivity was associated with a facilitation of performance. This is the first study to report differential rTMS-induced interference with melody processing depending on sex. In addition, we showed a relationship between the interference induced by rTMS on behavioral performance and the neural activity in the network connecting left and right HG, suggesting that interhemispheric functional connectivity could determine the degree of modulation of behavioral performance.
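The %RT change measure described above is a simple normalisation against the Vertex baseline. A minimal sketch follows, with hypothetical millisecond values and an assumed sign convention (negative = faster than baseline, i.e., facilitation):

```python
# Hedged sketch of the %RT-change normalisation described in the abstract:
# response times for a target site expressed as percentage change from the
# Vertex (baseline-rTMS) condition. Values below are hypothetical.

def percent_rt_change(rt_target_ms: float, rt_vertex_ms: float) -> float:
    """Negative = faster than baseline (facilitation); positive = slower."""
    return 100.0 * (rt_target_ms - rt_vertex_ms) / rt_vertex_ms

print(percent_rt_change(rt_target_ms=940.0, rt_vertex_ms=1000.0))  # -6.0
```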
Affiliation(s)
- Jamila Andoh
- Montreal Neurological Institute, McGill University Montreal, QC, Canada
138
Boh B, Herholz SC, Lappe C, Pantev C. Processing of complex auditory patterns in musicians and nonmusicians. PLoS One 2011; 6:e21458. [PMID: 21750713 PMCID: PMC3131276 DOI: 10.1371/journal.pone.0021458] [Citation(s) in RCA: 48] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2011] [Accepted: 06/01/2011] [Indexed: 11/18/2022] Open
Abstract
In the present study we investigated the capacity of the memory store underlying the mismatch negativity (MMN) response to complex tone patterns in musicians and nonmusicians. While previous studies have focused either on the kind of information that can be encoded or on the decay of the memory trace over time, we studied capacity in terms of the length of tone sequences, i.e., the number of individual tones that can be fully encoded and maintained. By means of magnetoencephalography (MEG) we recorded MMN responses to deviant tones that could occur at any position of standard tone patterns composed of four, six or eight tones during passive, distracted listening. Whereas there was a reliable MMN response to deviant tones in the four-tone pattern in both musicians and nonmusicians, only some individuals showed MMN responses to the longer patterns. This finding of a reliable capacity of the short-term auditory store underlying the MMN response is in line with behavioural estimates of a three- to five-item capacity of the short-term memory trace, although pitch and contour complexity covaried with sequence length, which might have led to an underestimate of the reported capacity. Whereas there was only a tendency toward an enhanced pattern MMN in musicians compared to nonmusicians, a strong advantage for musicians emerged in an accompanying behavioural task of detecting the deviants while attending to the stimuli, for all pattern lengths, indicating that long-term musical training differentially affects auditory short-term memory capacity for complex tone patterns with and without attention. A left-hemispheric lateralization of MMN responses in the six-tone pattern further suggests that additional networks that help structure the patterns in the temporal domain might be recruited for demanding auditory processing in the pitch domain.
Affiliation(s)
- Bastiaan Boh
- Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Sibylle C. Herholz
- Montreal Neurological Institute, McGill University, and International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Canada
- Claudia Lappe
- Institute for Biomagnetism and Biosignalanalysis, Muenster, Germany
- Christo Pantev
- Institute for Biomagnetism and Biosignalanalysis, Muenster, Germany
- Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, Westfalian Wilhelms-University, Muenster, Germany
139
Horváth RA, Schwarcz A, Aradi M, Auer T, Fehér N, Kovács N, Tényi T, Szalay C, Perlaki G, Orsi G, Komoly S, Dóczi T, Woermann FG, Gyimesi C, Janszky J. Lateralisation of non-metric rhythm. Laterality 2011; 16:620-35. [PMID: 21424982 DOI: 10.1080/1357650x.2010.515990] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Abstract
There are contradictory results on the lateralisation and localisation of rhythm processing. Our aim was to test whether there is a hemispheric dissociation between metric and non-metric rhythm processing. We created a non-metric rhythm stimulus without a sense of metre and measured brain activity during passive rhythm perception. A total of 11 healthy, right-handed, native female Hungarian speakers aged 21.3 ± 1.1 years were investigated by functional magnetic resonance imaging (fMRI) using a 3T MR scanner. The experimental acoustic stimulus consisted of comprehensive sentences transformed to Morse code, which represents a non-metric rhythm with an irregular perceptual accent structure. Activations were found in the right hemisphere, in the posterior parts of the right-sided superior and middle temporal gyri and temporal pole, as well as in the orbital part of the right inferior frontal gyrus. Additional activation appeared in the left-sided superior temporal region. Our study suggests that processing of non-metric rhythm with an irregular perceptual accent structure is largely confined to the right hemisphere. Furthermore, a right-lateralised fronto-temporal network extracts the continuously changing temporal structure of the non-metric rhythm.
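The stimulus described above is built by transforming sentences into Morse code, whose dot/dash timing yields a non-metric rhythm. The sketch below shows one way such a timing sequence could be generated, assuming standard Morse timing units (dot = 1, dash = 3, intra-letter gap = 1, letter gap = 3, word gap = 7); the unit duration, the reduced symbol table, and the example sentence are illustrative and not taken from Horváth et al.

```python
# Hedged sketch: converting a sentence into a Morse-timed rhythm, in the spirit
# of the stimulus described above. Standard Morse timing is assumed; the unit
# length, the reduced symbol table, and the example text are illustrative only.

MORSE = {"a": ".-", "e": ".", "h": "....", "i": "..", "r": ".-.",
         "s": "...", "t": "-"}  # subset sufficient for the demo sentence

def morse_rhythm(text, unit_ms=100):
    """Return (sound_ms, silence_ms) pairs describing the non-metric rhythm."""
    events = []
    words = text.lower().split()
    for wi, word in enumerate(words):
        for li, letter in enumerate(word):
            code = MORSE[letter]
            for si, symbol in enumerate(code):
                on = unit_ms if symbol == "." else 3 * unit_ms
                off = unit_ms if si < len(code) - 1 else 0
                events.append((on, off))
            if li < len(word) - 1:                   # gap between letters
                events[-1] = (events[-1][0], 3 * unit_ms)
        if wi < len(words) - 1:                      # gap between words
            events[-1] = (events[-1][0], 7 * unit_ms)
    return events

print(morse_rhythm("hear this"))
```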
140
Tsukano H, Hishida R, Shibuki K. Detection of virtual pitch up to 5kHz by mice. Neurosci Res 2011; 71:140-4. [PMID: 21704087 DOI: 10.1016/j.neures.2011.06.005] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2011] [Revised: 05/25/2011] [Accepted: 06/09/2011] [Indexed: 10/18/2022]
Abstract
Natural sounds consist of a component at the fundamental frequency (f0) and its overtones. Pitch is perceived at f0 even when spectral energy at f0 is missing. This missing f0, or 'virtual pitch', is thought to be detected in the auditory cortex and related cortical areas, but the precise neural mechanisms are unknown. One possibility is that virtual pitch is retrieved from the periodicity of sound waveforms. However, this mechanism requires temporal accuracy in periodicity detection, and so far the detection of virtual pitch has only been demonstrated at frequencies below 1 kHz. We investigated the ability of mice to detect virtual pitch up to 5 kHz using a two-step sound discrimination test. In the first step, mice were trained to discriminate between tone bursts at 2.5 and 5 kHz. In the second step, we tested their ability to discriminate between virtual pitches at 2.5 and 5 kHz. Performance in discriminating between virtual pitches at 2.5 and 5 kHz was significantly affected by the previous discrimination learning between tone bursts, indicating that mice can detect virtual pitch up to 5 kHz.
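The "missing fundamental" stimuli referred to above contain harmonics of f0 but no energy at f0 itself. A minimal synthesis sketch follows, assuming harmonics 2-4, a 200 ms duration, and a 44.1 kHz sample rate; these parameters are illustrative and are not those used by Tsukano et al.

```python
# Hedged sketch: synthesising a "missing fundamental" complex, i.e. harmonics of
# f0 with no spectral energy at f0 itself. Harmonic numbers, duration, and sample
# rate are illustrative assumptions, not the parameters used in the study.

import numpy as np

def missing_f0_complex(f0_hz, harmonics=(2, 3, 4), dur_s=0.2, fs=44100):
    t = np.arange(int(dur_s * fs)) / fs
    tone = sum(np.sin(2 * np.pi * n * f0_hz * t) for n in harmonics)
    return tone / len(harmonics)  # keep the summed amplitude in range

burst_2k5 = missing_f0_complex(2500.0)   # virtual pitch heard near 2.5 kHz
burst_5k = missing_f0_complex(5000.0)    # virtual pitch heard near 5 kHz
print(burst_2k5.shape, round(float(np.abs(burst_5k).max()), 3))
```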
Affiliation(s)
- Hiroaki Tsukano
- Department of Neurophysiology, Brain Research Institute, Niigata University, 1-757 Asahi-machi, Chuo-ku, Niigata 951-8585, Japan
141
Goll JC, Kim LG, Hailstone JC, Lehmann M, Buckley A, Crutch SJ, Warren JD. Auditory object cognition in dementia. Neuropsychologia 2011; 49:2755-65. [PMID: 21689671 PMCID: PMC3202629 DOI: 10.1016/j.neuropsychologia.2011.06.004] [Citation(s) in RCA: 52] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2011] [Revised: 06/01/2011] [Accepted: 06/02/2011] [Indexed: 11/18/2022]
Abstract
The cognition of nonverbal sounds in dementia has been relatively little explored. Here we undertook a systematic study of nonverbal sound processing in patient groups with canonical dementia syndromes comprising clinically diagnosed typical amnestic Alzheimer's disease (AD; n = 21), progressive nonfluent aphasia (PNFA; n = 5), logopenic progressive aphasia (LPA; n = 7) and aphasia in association with a progranulin gene mutation (GAA; n = 1), and in healthy age-matched controls (n = 20). Based on a cognitive framework treating complex sounds as ‘auditory objects’, we designed a novel neuropsychological battery to probe auditory object cognition at early perceptual (sub-object), object representational (apperceptive) and semantic levels. All patients had assessments of peripheral hearing and general neuropsychological functions in addition to the experimental auditory battery. While a number of aspects of auditory object analysis were impaired across patient groups and were influenced by general executive (working memory) capacity, certain auditory deficits had some specificity for particular dementia syndromes. Patients with AD had a disproportionate deficit of auditory apperception but preserved timbre processing. Patients with PNFA had salient deficits of timbre and auditory semantic processing, but intact auditory size and apperceptive processing. Patients with LPA had a generalised auditory deficit that was influenced by working memory function. In contrast, the patient with GAA showed substantial preservation of auditory function, but a mild deficit of pitch direction processing and a more severe deficit of auditory apperception. The findings provide evidence for separable stages of auditory object analysis and separable profiles of impaired auditory object cognition in different dementia syndromes.
Affiliation(s)
- Johanna C. Goll
- Dementia Research Centre, Institute of Neurology, University College London, 8-11 Queen Square, London WC1N 3BG, United Kingdom
- Lois G. Kim
- Department of Medical Statistics, Faculty of Epidemiology and Population Health, London School of Hygiene and Tropical Medicine, Keppel St, London WC1E 7HT, United Kingdom
- Julia C. Hailstone
- Dementia Research Centre, Institute of Neurology, University College London, 8-11 Queen Square, London WC1N 3BG, United Kingdom
- Manja Lehmann
- Dementia Research Centre, Institute of Neurology, University College London, 8-11 Queen Square, London WC1N 3BG, United Kingdom
- Aisling Buckley
- Dementia Research Centre, Institute of Neurology, University College London, 8-11 Queen Square, London WC1N 3BG, United Kingdom
- Sebastian J. Crutch
- Dementia Research Centre, Institute of Neurology, University College London, 8-11 Queen Square, London WC1N 3BG, United Kingdom
- Jason D. Warren
- Dementia Research Centre, Institute of Neurology, University College London, 8-11 Queen Square, London WC1N 3BG, United Kingdom
- Corresponding author. Tel.: +44 0207 829 8773; fax: +44 0207 676 2066.
142
Koelsch S. Toward a neural basis of music perception - a review and updated model. Front Psychol 2011; 2:110. [PMID: 21713060 PMCID: PMC3114071 DOI: 10.3389/fpsyg.2011.00110] [Citation(s) in RCA: 170] [Impact Index Per Article: 13.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2011] [Accepted: 05/13/2011] [Indexed: 12/11/2022] Open
Abstract
Music perception involves acoustic analysis, auditory memory, auditory scene analysis, processing of interval relations, of musical syntax and semantics, and activation of (pre)motor representations of actions. Moreover, music perception potentially elicits emotions, thus giving rise to the modulation of emotional effector systems such as the subjective feeling system, the autonomic nervous system, the hormonal system, and the immune system. Building on a previous article (Koelsch and Siebel, 2005), this review presents an updated model of music perception and its neural correlates. The article describes the processes involved in music perception and reports EEG and fMRI studies that inform us about the time course of these processes, as well as about where in the brain they might be located.
Affiliation(s)
- Stefan Koelsch
- Cluster of Excellence "Languages of Emotion", Freie Universität Berlin Berlin, Germany
143
Elmer S, Meyer M, Marrama L, Jäncke L. Intensive language training and attention modulate the involvement of fronto-parietal regions during a non-verbal auditory discrimination task. Eur J Neurosci 2011; 34:165-75. [PMID: 21649758 DOI: 10.1111/j.1460-9568.2011.07728.x] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
Abstract
This event-related functional magnetic resonance imaging (fMRI) study was designed to contribute to the ongoing debate on the behavioural and functional transfer effects associated with intensive language training. To address this novel issue, we measured professional simultaneous interpreters and control subjects while they performed a non-verbal auditory discrimination task that relies primarily on attention and categorization functions. The fMRI results revealed that discrimination of the target stimuli was associated with differential blood oxygen level-dependent responses in fronto-parietal regions between the two groups, even though the in-scanner behavioural results showed no significant group differences. These findings are in line with previous observations showing the contribution of fronto-parietal regions to auditory attention and categorization functions. Our results imply that language training modulates brain activity in regions involved in the top-down regulation of auditory functions.
Affiliation(s)
- Stefan Elmer
- Department of Neuropsychology, Institute of Psychology, University of Zürich, Zürich, Switzerland.
144
Woods DL, Herron TJ, Cate AD, Kang X, Yund EW. Phonological processing in human auditory cortical fields. Front Hum Neurosci 2011; 5:42. [PMID: 21541252 PMCID: PMC3082852 DOI: 10.3389/fnhum.2011.00042] [Citation(s) in RCA: 33] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2010] [Accepted: 04/01/2011] [Indexed: 11/30/2022] Open
Abstract
We used population-based cortical-surface analysis of functional magnetic resonance imaging data to characterize the processing of consonant–vowel–consonant syllables (CVCs) and spectrally matched amplitude-modulated noise bursts (AMNBs) in human auditory cortex as subjects attended to auditory or visual stimuli in an intermodal selective attention paradigm. Average auditory cortical field (ACF) locations were defined using tonotopic mapping in a previous study. Activations in auditory cortex were defined by two stimulus-preference gradients: (1) medial belt ACFs preferred AMNBs whereas lateral belt and parabelt fields preferred CVCs; this preference extended into core ACFs, with medial regions of primary auditory cortex (A1) and the rostral field preferring AMNBs and lateral regions preferring CVCs. (2) Anterior ACFs showed smaller activations but more clearly defined stimulus preferences than did posterior ACFs. Stimulus preference gradients were unaffected by auditory attention, suggesting that ACF preferences reflect the automatic processing of different spectrotemporal sound features.
Affiliation(s)
- David L Woods
- Human Cognitive Neurophysiology Laboratory, Department of Veterans Affairs Northern California Health Care System Martinez, CA, USA
145
Sugiura L, Ojima S, Matsuba-Kurita H, Dan I, Tsuzuki D, Katura T, Hagiwara H. Sound to language: different cortical processing for first and second languages in elementary school children as revealed by a large-scale study using fNIRS. Cereb Cortex 2011; 21:2374-93. [PMID: 21350046 PMCID: PMC3169662 DOI: 10.1093/cercor/bhr023] [Citation(s) in RCA: 46] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
A large-scale study of 484 elementary school children (6-10 years) performing word repetition tasks in their native language (L1, Japanese) and a second language (L2, English) was conducted using functional near-infrared spectroscopy. Three factors presumably associated with cortical activation were investigated: language (L1/L2), word frequency (high/low), and hemisphere (left/right). L1 words elicited significantly greater brain activation than L2 words, regardless of semantic knowledge, particularly in the superior/middle temporal and inferior parietal regions (angular/supramarginal gyri). The greater L1-elicited activation in these regions suggests that they are phonological loci, reflecting processes tuned to the phonology of the native language, while phonologically unfamiliar L2 words were processed like nonword auditory stimuli. The activation was bilateral in the auditory and superior/middle temporal regions. Hemispheric asymmetry was observed in the inferior frontal region (right dominant) and in the inferior parietal region, with interactions: low-frequency words elicited more right-hemispheric activation (particularly in the supramarginal gyrus), while high-frequency words elicited more left-hemispheric activation (particularly in the angular gyrus). The present results reveal the strong involvement of a bilateral language network in children's brains, which relies more on right-hemispheric processing when unfamiliar/low-frequency words are being acquired. A right-to-left shift in laterality should therefore occur in the inferior parietal region as lexical knowledge increases, irrespective of language.
Affiliation(s)
- Lisa Sugiura
- Department of Language Sciences, Graduate School of Humanities, Tokyo Metropolitan University, Minami-Osawa, Hachioji, Tokyo 192-0397, Japan.
146
Lee YS, Janata P, Frost C, Hanke M, Granger R. Investigation of melodic contour processing in the brain using multivariate pattern-based fMRI. Neuroimage 2011; 57:293-300. [PMID: 21315158 DOI: 10.1016/j.neuroimage.2011.02.006] [Citation(s) in RCA: 63] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2010] [Revised: 01/28/2011] [Accepted: 02/02/2011] [Indexed: 10/18/2022] Open
Abstract
Music perception generally involves processing the frequency relationships between successive pitches and extraction of the melodic contour. Previous evidence has suggested that the 'ups' and 'downs' of melodic contour are categorically and automatically processed, but knowledge of the brain regions that discriminate different types of contour is limited. Here, we examined melodic contour discrimination using multivariate pattern analysis (MVPA) of fMRI data. Twelve non-musicians were presented with various ascending and descending melodic sequences while being scanned. Whole-brain MVPA was used to identify regions in which the local pattern of activity accurately discriminated between contour categories. We identified three distinct cortical loci: the right superior temporal sulcus (rSTS), the left inferior parietal lobule (lIPL), and the anterior cingulate cortex (ACC). These results complement previous findings of melodic processing within the rSTS, and extend our understanding of the way in which abstract auditory sequences are categorized by the human brain.
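At its core, the MVPA approach described above asks whether a classifier can predict contour category (ascending vs. descending) from multi-voxel activity patterns, typically within a searchlight moved across the brain. The sketch below shows that classification step on synthetic data with scikit-learn; the trial counts, voxel counts, classifier, and cross-validation scheme are assumptions and do not reproduce the authors' pipeline.

```python
# Hedged sketch of the core MVPA step: cross-validated decoding of contour
# category from multi-voxel patterns. Synthetic data stand in for trial-wise
# beta estimates; classifier choice, fold count, and dimensions are assumptions.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 48, 120                 # e.g. voxels in one searchlight sphere
X = rng.normal(size=(n_trials, n_voxels))    # activity pattern per trial
y = np.repeat([0, 1], n_trials // 2)         # 0 = ascending, 1 = descending contour
X[y == 1, :10] += 0.8                        # inject a weak multivariate signal

clf = SVC(kernel="linear", C=1.0)
accuracy = cross_val_score(clf, X, y, cv=6)  # 6-fold cross-validated accuracy
print("mean decoding accuracy:", accuracy.mean())
```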
Affiliation(s)
- Yune-Sang Lee
- Dept. of Psychological & Brain Sciences, Dartmouth College, Hanover, NH, USA; Center for Cognitive Neuroscience, Dartmouth College, Hanover, NH, USA; Neurology Department, University of Pennsylvania, Philadelphia, PA, USA.
- Petr Janata
- Center for Mind and Brain, U.C. Davis, CA, USA
- Carlton Frost
- Dept. of Psychological & Brain Sciences, Dartmouth College, Hanover, NH, USA
- Michael Hanke
- Dept. of Psychological & Brain Sciences, Dartmouth College, Hanover, NH, USA; Center for Cognitive Neuroscience, Dartmouth College, Hanover, NH, USA; Dept. of Experimental Psychology, Otto-von-Guericke University, Magdeburg, Germany
- Richard Granger
- Dept. of Psychological & Brain Sciences, Dartmouth College, Hanover, NH, USA; Center for Cognitive Neuroscience, Dartmouth College, Hanover, NH, USA
147
Abstract
The ability to make sense of the music in our environment involves sophisticated cognitive mechanisms that, for most people, are acquired effortlessly and in early life. A special population of individuals, with a disorder termed congenital amusia, report lifelong difficulties in this regard. Exploring the nature of this developmental disorder provides a window onto the cognitive architecture of typical musical processing, as well as allowing a study of the relationship between processing of music and other domains, such as language. The present article considers findings concerning pitch discrimination, pitch memory, contour processing, experiential aspects of music listening in amusia, and emerging evidence concerning the neurobiology of the disorder. A simplified model of melodic processing is outlined, and possible loci of the cognitive deficit are discussed.
Affiliation(s)
- Lauren Stewart
- Department of Psychology, Goldsmiths, University of London, London, UK.
148
Woods DL, Herron TJ, Cate AD, Yund EW, Stecker GC, Rinne T, Kang X. Functional properties of human auditory cortical fields. Front Syst Neurosci 2010; 4:155. [PMID: 21160558 PMCID: PMC3001989 DOI: 10.3389/fnsys.2010.00155] [Citation(s) in RCA: 83] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2010] [Accepted: 11/05/2010] [Indexed: 11/23/2022] Open
Abstract
While auditory cortex in non-human primates has been subdivided into multiple functionally specialized auditory cortical fields (ACFs), the boundaries and functional specialization of human ACFs have not been defined. In the current study, we evaluated whether a widely accepted primate model of auditory cortex could explain regional tuning properties of fMRI activations on the cortical surface to attended and non-attended tones of different frequency, location, and intensity. The limits of auditory cortex were defined by voxels that showed significant activations to non-attended sounds. Three centrally located fields with mirror-symmetric tonotopic organization were identified and assigned to the three core fields of the primate model while surrounding activations were assigned to belt fields following procedures similar to those used in macaque fMRI studies. The functional properties of core, medial belt, and lateral belt field groups were then analyzed. Field groups were distinguished by tonotopic organization, frequency selectivity, intensity sensitivity, contralaterality, binaural enhancement, attentional modulation, and hemispheric asymmetry. In general, core fields showed greater sensitivity to sound properties than did belt fields, while belt fields showed greater attentional modulation than core fields. Significant distinctions in intensity sensitivity and contralaterality were seen between adjacent core fields A1 and R, while multiple differences in tuning properties were evident at boundaries between adjacent core and belt fields. The reliable differences in functional properties between fields and field groups suggest that the basic primate pattern of auditory cortex organization is preserved in humans. A comparison of the sizes of functionally defined ACFs in humans and macaques reveals a significant relative expansion in human lateral belt fields implicated in the processing of speech.
Affiliation(s)
- David L Woods
- Human Cognitive Neurophysiology Laboratory, VANCHCS Martinez, CA, USA
149
Särkämö T, Tervaniemi M, Soinila S, Autti T, Silvennoinen HM, Laine M, Hietanen M, Pihko E. Auditory and cognitive deficits associated with acquired amusia after stroke: a magnetoencephalography and neuropsychological follow-up study. PLoS One 2010; 5:e15157. [PMID: 21152040 PMCID: PMC2996293 DOI: 10.1371/journal.pone.0015157] [Citation(s) in RCA: 36] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2010] [Accepted: 10/22/2010] [Indexed: 11/18/2022] Open
Abstract
Acquired amusia is a common disorder after damage to the middle cerebral artery (MCA) territory. However, its neurocognitive mechanisms, especially the relative contribution of perceptual and cognitive factors, are still unclear. We studied cognitive and auditory processing in the amusic brain by performing neuropsychological testing as well as magnetoencephalography (MEG) measurements of frequency and duration discrimination using magnetic mismatch negativity (MMNm) recordings. Fifty-three patients with a left (n = 24) or right (n = 29) hemisphere MCA stroke (MRI verified) were investigated 1 week, 3 months, and 6 months after the stroke. Amusia was evaluated using the Montreal Battery of Evaluation of Amusia (MBEA). We found that amusia caused by right hemisphere damage (RHD), especially to temporal and frontal areas, was more severe than amusia caused by left hemisphere damage (LHD). Furthermore, the severity of amusia was found to correlate with weaker frequency MMNm responses only in amusic RHD patients. Additionally, within the RHD subgroup, the amusic patients who had damage to the auditory cortex (AC) showed worse recovery on the MBEA as well as weaker MMNm responses throughout the 6-month follow-up than the non-amusic patients or the amusic patients without AC damage. Furthermore, the amusic patients both with and without AC damage performed worse than the non-amusic patients on tests of working memory, attention, and cognitive flexibility. These findings suggest domain-general cognitive deficits to be the primary mechanism underlying amusia without AC damage whereas amusia with AC damage is associated with both auditory and cognitive deficits.
Affiliation(s)
- Teppo Särkämö
- Cognitive Brain Research Unit, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland.
150
Bidirectional connectivity between hemispheres occurs at multiple levels in language processing but depends on sex. J Neurosci 2010; 30:11576-85. [PMID: 20810879 DOI: 10.1523/jneurosci.1245-10.2010] [Citation(s) in RCA: 49] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Our aim was to determine the direction of interhemispheric communication in a phonological task in regions involved in different levels of processing. Effective connectivity analysis was conducted on functional magnetic resonance imaging data from 39 children (ages 9-15 years) performing rhyming judgments on spoken words. The results show interaction between the hemispheres at multiple levels. First, there is unidirectional transfer of information from right to left at the sensory level of primary auditory cortex. Second, bidirectional connections between the superior temporal gyri (STGs) suggest reciprocal cooperation between the hemispheres at the level of phonological and prosodic processing. Third, a direct connection from the right STG to the left inferior frontal gyrus suggests that information processed in the right STG is integrated into the final stages of phonological segmentation required for the rhyming decision. Intrahemispheric connectivity from primary auditory cortex to STG was stronger in the left than in the right hemisphere. These results support a model of cooperation between hemispheres, with asymmetric interhemispheric and intrahemispheric connectivity consistent with the left hemisphere specialization for phonological processing. Finally, we found greater interhemispheric connectivity in girls than in boys, consistent with the hypothesis of a more bilateral representation of language in females than in males. However, interhemispheric communication was associated with slower performance and lower verbal intelligence quotient among girls. We suggest that females may have the potential for greater interhemispheric cooperation, which may be an advantage in certain tasks; in other tasks, however, too much communication between the hemispheres may interfere with performance.