1
Okazaki M, Yumoto M, Kaneko Y, Maruo K. Correlation of motor-auditory cross-modal and auditory unimodal N1 and mismatch responses of schizophrenic patients and normal subjects: an MEG study. Front Psychiatry 2023; 14:1217307. PMID: 37886112; PMCID: PMC10598755; DOI: 10.3389/fpsyt.2023.1217307.
Abstract
Introduction: It has been suggested that the positive symptoms of schizophrenia (hallucinations, delusions, and passivity experiences) arise from dysfunctional prediction of internal and external sensory events, often discussed in terms of a dysfunctional forward model that executes self-monitoring. Several reports have suggested that forward-model dysfunction in schizophrenia causes misattribution of self-generated thoughts and actions to external sources. There is evidence that forward-model function can be probed with electroencephalography (EEG) and magnetoencephalography (MEG) components such as the N1(m) and mismatch negativity (MMN(m)). The objective of this MEG study was to investigate differences in the N1m and MMNm-like activity generated in motor-auditory cross-modal tasks between normal control (NC) subjects and schizophrenic (SC) patients, and to compare that activity with the N1m and MMNm in an auditory unimodal task. Methods: N1m and MMNm/MMNm-like activity were recorded in 15 SC patients and 12 matched NC subjects. The N1m-attenuation effects and the peak amplitudes of MMNm/MMNm-like activity were compared between the NC and SC groups. Additionally, correlations between the MEG measures (N1m suppression rate, MMNm, and MMNm-like activity) and clinical variables (Positive and Negative Syndrome Scale (PANSS) scores and antipsychotic drug (APD) dosages) were investigated in the SC patients. Results: (i) There was no significant difference in N1m attenuation between the NC and SC groups, and (ii) the MMNm in the unimodal task was significantly smaller in the SC group than in the NC group. Further, in the NC group the MMNm-like activity in the cross-modal task was smaller than the MMNm in the unimodal task, whereas no such difference was found in the SC group. The PANSS positive-symptom and general-psychopathology scores were moderately negatively correlated with the amplitude of the MMNm-like activity, and the APD dosage was moderately negatively correlated with the N1m suppression rate; however, none of these correlations reached statistical significance. Discussion: The findings suggest that, at latencies reflecting the MMNm, schizophrenic patients perform predictive processes differently from healthy subjects depending on whether a forward model is being generated. This may support the hypothesis that schizophrenic patients tend to misattribute their inner experience to external agents, leading to the characteristic symptoms of schizophrenia.
Affiliation(s)
- Mitsutoshi Okazaki
  - Department of Psychiatry, National Center Hospital of Neurology and Psychiatry, Kodaira, Japan
  - Department of Psychiatry, Ome Municipal General Hospital, Ome, Japan
- Masato Yumoto
  - Department of Clinical Engineering, Faculty of Medical Science and Technology, Gunma Paz University, Takasaki, Japan
- Yuu Kaneko
  - Department of Neurosurgery, National Center Hospital of Neurology and Psychiatry, Kodaira, Japan
- Kazushi Maruo
  - Department of Biostatistics, Faculty of Medicine, University of Tsukuba, Tsukuba, Japan
2
Dhakal K, Norgaard M, Adhikari BM, Yun KS, Dhamala M. Higher Node Activity with Less Functional Connectivity During Musical Improvisation. Brain Connect 2019; 9:296-309. DOI: 10.1089/brain.2017.0566.
Affiliation(s)
- Kiran Dhakal
  - Department of Physics and Astronomy, Georgia State University, Atlanta, Georgia
- Bhim M. Adhikari
  - Department of Physics and Astronomy, Georgia State University, Atlanta, Georgia
  - Department of Psychiatry, Maryland Psychiatry Research Center, University of Maryland School of Medicine, Baltimore, Maryland
- Kristy S. Yun
  - Department of Physics and Astronomy, Georgia State University, Atlanta, Georgia
- Mukesh Dhamala
  - Department of Physics and Astronomy, Georgia State University, Atlanta, Georgia
  - Neuroscience Institute, Georgia State University, Atlanta, Georgia
  - Center for Behavioral Neuroscience, Georgia State University, Atlanta, Georgia
  - Center for Nano-Optics, Georgia State University, Atlanta, Georgia
  - Center for Diagnostics and Therapeutics, Georgia State University, Atlanta, Georgia
3
Shin H, Fujioka T. Effects of Visual Predictive Information and Sequential Context on Neural Processing of Musical Syntax. Front Psychol 2019; 9:2528. PMID: 30618951; PMCID: PMC6300505; DOI: 10.3389/fpsyg.2018.02528.
Abstract
The early right anterior negativity (ERAN) in event-related potentials (ERPs) is typically elicited by syntactically unexpected events in Western tonal music. We examined how visual predictive information influences syntactic processing, how musical and non-musical cues differ in their effects, and how these factors interact with sequential effects between trials, which could vary with the strength of the established sense of tonality. EEG was recorded from musicians who listened to chord sequences paired with one of four types of visual stimuli: two provided predictive information about the syntactic validity of the last chord, through either musical notation of the whole sequence or the word "regular" or "irregular," while the other two, empty musical staves or a blank screen, provided no information. Half of the sequences ended with the syntactically invalid Neapolitan sixth chord, while the other half ended with the tonic chord. A clear ERAN was observed at frontocentral electrodes in all conditions. A principal component analysis (PCA) was performed on the grand-average response in the audio-only condition to separate the spatio-temporal dynamics of different scalp areas into principal components (PCs), which were then used to extract auditory-related neural activity in the other visual-cue conditions. The first principal component (PC1) showed a symmetrical frontocentral topography, while the second (PC2) showed a right-lateralized frontal concentration. A source analysis confirmed the relative contribution of temporal sources to the former and of a right frontal source to the latter. Cue predictability affected only the ERAN projected onto PC1, especially when the previous trial ended with the tonic chord. The ERAN in PC2 was reduced in trials following Neapolitan endings in general; however, the extent of this reduction differed between cue styles, being nearly absent when musical notation was used, regardless of whether the staves were filled with notes or empty. The results suggest that the right frontal areas play the primary role in musical syntactic analysis and integration of the ongoing context, producing schematic expectations that, together with the veridical expectations incorporated by the temporal areas, inform musical syntactic processing in musicians.
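The PCA step described above (decomposing a channels-by-time grand average into spatial components and projecting other conditions onto them) can be sketched as follows; the function names and array shapes are illustrative assumptions, not the authors' code:

```python
import numpy as np

def spatial_pca(grand_avg):
    """PCA over a (channels x time) grand-average ERP: the columns of the
    returned matrix are spatial topographies (principal components),
    ordered by explained variance."""
    centered = grand_avg - grand_avg.mean(axis=1, keepdims=True)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    return u  # channels x components

def project(data, components, k):
    """Project another condition's (channels x time) data onto the first
    k spatial components, yielding k component time courses."""
    return components[:, :k].T @ data
```

Projecting each visual-cue condition onto the audio-only components, as in the study, would then isolate the activity carried by PC1 and PC2.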
Affiliation(s)
- Hana Shin
  - Department of Music, Center for Computer Research in Music and Acoustics, Stanford University, Stanford, CA, United States
- Takako Fujioka
  - Department of Music, Center for Computer Research in Music and Acoustics, Stanford University, Stanford, CA, United States
  - Stanford Neurosciences Institute, Stanford University, Stanford, CA, United States
4
Drai-Zerbib V, Baccino T. Cross-modal music integration in expert memory: Evidence from eye movements. J Eye Mov Res 2018; 11(2). PMID: 33828687; PMCID: PMC7733353; DOI: 10.16910/jemr.11.2.4.
Abstract
The study investigated the cross-modal integration hypothesis for expert musicians using eye tracking. Twenty randomized excerpts of classical music were presented in two modes (auditory and visual), either at the same time (simultaneously) or successively (sequentially). Musicians (N = 53; 26 experts and 27 non-experts) were asked to detect a note modified between the auditory and visual versions, either within the same major/minor key or violating the key. Experts carried out the task faster and with greater accuracy than non-experts. Sequential presentation was more difficult than simultaneous (longer fixations and higher error rates), and the modified notes were more easily detected when violating the key (fewer errors), but with longer fixations (a speed/accuracy trade-off strategy). Experts detected the modified note faster, especially in the simultaneous condition, in which cross-modal integration may be applied. These results support the hypothesis that the main difference between experts and non-experts derives from differences in knowledge structures in memory built over time with practice. They also suggest that these high-level knowledge structures contain harmony and tonal rules, arguing in favour of cross-modal integration capacities for experts, which are related to, and can be explained by, the long-term working memory (LTWM) model of expert memory (e.g., 18, 22).
5
Adhikari BM, Norgaard M, Quinn KM, Ampudia J, Squirek J, Dhamala M. The Brain Network Underpinning Novel Melody Creation. Brain Connect 2016; 6:772-785. DOI: 10.1089/brain.2016.0453.
Affiliation(s)
- Bhim M. Adhikari
  - Physics and Astronomy, Georgia State University, Atlanta, Georgia
  - Maryland Psychiatry Research Center, Department of Psychiatry, University of Maryland School of Medicine, Baltimore, Maryland
- Kristen M. Quinn
  - Physics and Astronomy, Georgia State University, Atlanta, Georgia
- Jenine Ampudia
  - Physics and Astronomy, Georgia State University, Atlanta, Georgia
- Justin Squirek
  - Physics and Astronomy, Georgia State University, Atlanta, Georgia
- Mukesh Dhamala
  - Physics and Astronomy, Georgia State University, Atlanta, Georgia
  - Neuroscience Institute, Georgia State University, Atlanta, Georgia
  - Center for Behavioral Neuroscience, Center for Nano-Optics, Center for Diagnostics and Therapeutics, Georgia State University, Atlanta, Georgia
6
Gelding RW, Thompson WF, Johnson BW. The Pitch Imagery Arrow Task: effects of musical training, vividness, and mental control. PLoS One 2015; 10:e0121809. PMID: 25807078; PMCID: PMC4373867; DOI: 10.1371/journal.pone.0121809.
Abstract
Musical imagery is a relatively unexplored area, partly because of deficiencies in existing experimental paradigms, which are often difficult, unreliable, or do not provide objective measures of performance. Here we describe a novel protocol, the Pitch Imagery Arrow Task (PIAT), which induces and trains pitch imagery in both musicians and non-musicians. Given a tonal context and an initial pitch sequence, arrows are displayed to elicit a scale-step sequence of imagined pitches, and participants indicate whether the final imagined tone matches an audible probe. The task uses a staircase design that accommodates individual differences in musical experience and imagery ability. This new protocol was used to investigate the roles that musical expertise, self-reported auditory vividness, and mental control play in imagery performance. Performance on the task was significantly better for participants who employed a musical imagery strategy than for those who used an alternative cognitive strategy, and it correlated positively with scores on the Control subscale of the Bucknell Auditory Imagery Scale (BAIS). Multiple regression analysis revealed that imagery performance accuracy was best predicted by a combination of strategy use and scores on the Vividness subscale of the BAIS. These results confirm that competent performance on the PIAT requires active musical imagery and is very difficult to achieve using alternative cognitive strategies. Auditory vividness and mental control were more important than musical experience for the ability to manipulate pitch imagery.
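The staircase idea mentioned above can be illustrated with a minimal up/down sketch; the step rule, starting level, and bounds here are hypothetical, since the abstract does not give the PIAT's exact parameters:

```python
def staircase(responses, start=3, lo=1, hi=12):
    """Simple up/down staircase: lengthen the imagined pitch sequence
    after a correct response, shorten it after an error, keeping the
    level within [lo, hi]. Returns the level used on each trial."""
    level = start
    levels = []
    for correct in responses:
        levels.append(level)
        level = min(hi, level + 1) if correct else max(lo, level - 1)
    return levels
```

For example, `staircase([True, True, False, True])` yields `[3, 4, 5, 4]`: the level rises with each correct response and falls after the error, so the task tracks each participant's ability.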
Affiliation(s)
- Rebecca W. Gelding
  - Department of Cognitive Science, Macquarie University, Sydney, Australia
  - ARC Centre of Excellence in Cognition and its Disorders, Macquarie University, Sydney, Australia
- William Forde Thompson
  - ARC Centre of Excellence in Cognition and its Disorders, Macquarie University, Sydney, Australia
  - Department of Psychology, Macquarie University, Sydney, Australia
- Blake W. Johnson
  - Department of Cognitive Science, Macquarie University, Sydney, Australia
  - ARC Centre of Excellence in Cognition and its Disorders, Macquarie University, Sydney, Australia
7
Brown RM, Zatorre RJ, Penhune VB. Expert music performance: cognitive, neural, and developmental bases. Prog Brain Res 2015; 217:57-86. DOI: 10.1016/bs.pbr.2014.11.021.
8
Daikoku T, Yatomi Y, Yumoto M. Statistical learning of music- and language-like sequences and tolerance for spectral shifts. Neurobiol Learn Mem 2014; 118:8-19. PMID: 25451311; DOI: 10.1016/j.nlm.2014.11.001.
Abstract
In our previous study (Daikoku, Yatomi, & Yumoto, 2014), we demonstrated that the N1m response can serve as a marker of the statistical learning of a pitch sequence in which the tones were ordered by a Markov stochastic model. The aim of the present study was to investigate how the statistical learning of music- and language-like auditory sequences is reflected in N1m responses, based on the assumption that language and music share domain-general mechanisms. Using vowel sounds generated by a formant synthesizer, we devised music- and language-like auditory sequences in which higher-order transitional rules were embedded according to a Markov stochastic model by controlling the fundamental (F0) and/or formant frequencies (F1-F2). In each sequence, F0 and/or F1-F2 were spectrally shifted in the last one-third of the tone sequence. Neuromagnetic responses to the tone sequences were recorded from 14 right-handed normal volunteers. In the music- and language-like sequences with pitch change, the N1m responses to tones that appeared with higher transitional probability were significantly decreased compared with responses to tones that appeared with lower transitional probability within the first two-thirds of each sequence. Moreover, this amplitude difference was retained within the last one-third of the sequence, after the spectral shifts. In the language-like sequence without pitch change, however, no significant difference could be detected. Pitch change may facilitate statistical learning in language and music, and statistically acquired knowledge may be applied to the processing of altered auditory sequences with spectral shifts. Such relative processing of spectral sequences may be a domain-general auditory mechanism that is innate to humans.
Affiliation(s)
- Tatsuya Daikoku
  - Department of Clinical Laboratory, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Yutaka Yatomi
  - Department of Clinical Laboratory, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Masato Yumoto
  - Department of Clinical Laboratory, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
9
Implicit and explicit statistical learning of tone sequences across spectral shifts. Neuropsychologia 2014; 63:194-204. PMID: 25192632; DOI: 10.1016/j.neuropsychologia.2014.08.028.
Abstract
We investigated how the statistical learning of auditory sequences is reflected in neuromagnetic responses under implicit and explicit learning conditions. Complex tones with fundamental frequencies (F0s) in a five-tone equal temperament were generated by a formant synthesizer. The tones were then ordered with the constraint that the probability of the forthcoming tone was statistically defined (80% for one tone; 5% for each of the other four) by the latest two successive tones (second-order Markov chains). The tone sequence consisted of 500 tones followed by 250 tones with a relative shift of F0s based on the same Markov transition matrix. Neuromagnetic responses to the tone sequence were recorded from fourteen right-handed participants under explicit and implicit learning conditions, and the temporal profiles of the N1m responses to tones with higher and lower transitional probabilities were compared. In the explicit learning condition, the N1m responses to tones with higher transitional probability were significantly decreased, compared with responses to tones with lower transitional probability, in the latter half of the 500-tone sequence, and this difference was retained even after the F0s were relatively shifted. In the implicit learning condition, N1m responses to tones with higher transitional probability were significantly decreased only for the 250 tones following the relative shift of F0s. The delayed emergence of learning effects across the spectral shift in the implicit condition may imply that learning progresses earlier under explicit than under implicit conditions. The finding that the learning effects were retained across spectral shifts regardless of learning modality indicates that relative pitch processing may be an essential human ability.
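The second-order Markov construction described above (each pair of preceding tones determines one successor with probability 0.80 and each of the other four with 0.05) can be sketched as follows; the function names and seeds are illustrative assumptions, not the stimulus code used in the study:

```python
import random

def build_transition_table(n_tones=5, seed=0):
    """For every ordered pair of preceding tones, choose the one successor
    that will appear with probability 0.80 (second-order Markov chain)."""
    rng = random.Random(seed)
    return {(a, b): rng.randrange(n_tones)
            for a in range(n_tones) for b in range(n_tones)}

def generate_sequence(length, table, n_tones=5, seed=1):
    """Generate a tone-index sequence: the likely successor is emitted with
    p = 0.80; otherwise one of the other four tones (p = 0.05 each)."""
    rng = random.Random(seed)
    seq = [rng.randrange(n_tones), rng.randrange(n_tones)]
    while len(seq) < length:
        likely = table[(seq[-2], seq[-1])]
        if rng.random() < 0.80:
            seq.append(likely)
        else:
            seq.append(rng.choice([t for t in range(n_tones) if t != likely]))
    return seq
```

A spectral shift, as in the study's final 250 tones, would change only the F0 assigned to each tone index, leaving this transition structure intact.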
10
Amemiya K, Karino S, Ishizu T, Yumoto M, Yamasoba T. Distinct neural mechanisms of tonal processing between musicians and non-musicians. Clin Neurophysiol 2014; 125:738-747. DOI: 10.1016/j.clinph.2013.09.027.
11
Loehr JD, Kourtis D, Vesper C, Sebanz N, Knoblich G. Monitoring Individual and Joint Action Outcomes in Duet Music Performance. J Cogn Neurosci 2013; 25:1049-61. PMID: 23489144; DOI: 10.1162/jocn_a_00388.
Abstract
We investigated whether people monitor the outcomes of their own and their partners' individual actions as well as the outcome of their combined actions when performing joint actions together. Pairs of pianists memorized both parts of a piano duet. Each pianist then performed one part while their partner performed the other; EEG was recorded from both. Auditory outcomes (pitches) associated with keystrokes produced by the pianists were occasionally altered in a way that either did or did not affect the joint auditory outcome (i.e., the harmony of a chord produced by the two pianists' combined pitches). Altered auditory outcomes elicited a feedback-related negativity whether they occurred in the pianist's own part or the partner's part, and whether they affected individual or joint action outcomes. Altered auditory outcomes also elicited a P300 whose amplitude was larger when the alteration affected the joint outcome compared with individual outcomes and when the alteration affected the pianist's own part compared with the partner's part. Thus, musicians engaged in joint actions monitor their own and their partner's actions as well as their combined action outcomes, while at the same time maintaining a distinction between their own and others' actions and between individual and joint outcomes.
Affiliation(s)
- Natalie Sebanz
  - Radboud University Nijmegen, The Netherlands
  - Central European University, Budapest, Hungary
12
Simoens VL, Tervaniemi M. Auditory short-term memory activation during score reading. PLoS One 2013; 8:e53691. PMID: 23326487; PMCID: PMC3543329; DOI: 10.1371/journal.pone.0053691.
Abstract
Performing music from a score requires reading ahead of what is being played in order to anticipate the actions needed to produce the notes. Score reading thus involves not only decoding the visual score and comparing it with the auditory feedback, but also short-term storage of the musical information, owing to the delay in auditory feedback while reading ahead. This study investigates how musical information is encoded in short-term memory during this complex procedure. The study had three parts. First, professional musicians participated in an electroencephalographic (EEG) experiment examining slow wave potentials during an interval of short-term memory storage in a situation requiring cross-modal translation and short-term storage of visual material for comparison with delayed auditory material, as is the case in music score reading. This delayed visual-to-auditory matching task was compared with delayed visual-visual and auditory-auditory matching tasks in terms of EEG topography and voltage amplitudes. Second, an additional behavioural experiment determined which type of distractor interfered most with the score-reading-like task. Third, the participants' self-reported strategies were analyzed. All three parts point towards the same conclusion: during music score reading, the musician most likely first translates the visual score into an auditory cue, probably starting around 700 or 1300 ms, ready for storage and delayed comparison with the auditory feedback.
Affiliation(s)
- Veerle L Simoens
  - Cognitive Brain Research Unit, Cognitive Science, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
13
Lindström R, Paavilainen P, Kujala T, Tervaniemi M. Processing of audiovisual associations in the human brain: dependency on expectations and rule complexity. Front Psychol 2012; 3:159. PMID: 22654778; PMCID: PMC3361018; DOI: 10.3389/fpsyg.2012.00159.
Abstract
In order to respond appropriately to environmental changes, the human brain must not only detect those changes but also form expectations of forthcoming events. Events in the external environment often have multisensory features, such as pitch and form, so integrated percepts of objects and events require crossmodal processing and crossmodally induced expectations of forthcoming events. The aim of the present study was to determine whether expectations created by visual stimuli can modulate deviance detection in the auditory modality, as reflected by auditory event-related potentials (ERPs), and whether the complexity of the rules linking the auditory and visual stimuli affects this process. The N2 deflection of the ERP was observed in response to violations of the subjects' expectation of a forthcoming tone. Both temporal aspects and cognitive demands during the audiovisual deviance-detection task modulated the brain processes involved.
Affiliation(s)
- Riikka Lindström
  - Cognitive Brain Research Unit, Cognitive Science, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
14
Paraskevopoulos E, Kuchenbuch A, Herholz SC, Pantev C. Evidence for training-induced plasticity in multisensory brain structures: an MEG study. PLoS One 2012; 7:e36534. PMID: 22570723; PMCID: PMC3343004; DOI: 10.1371/journal.pone.0036534.
Abstract
Multisensory learning and the resulting neural brain plasticity have recently become a topic of renewed interest in human cognitive neuroscience. Music notation reading is an ideal stimulus for studying multisensory learning, as it allows the integration of visual, auditory, and sensorimotor information processing to be studied. The present study aimed to determine whether multisensory learning alters uni-sensory structures, the interconnections of uni-sensory structures, or specific multisensory areas. In a short-term piano training procedure, musically naive subjects were trained to play tone sequences from visually presented patterns in a music notation-like system (Auditory-Visual-Somatosensory (AVS) group), while another group received audio-visual training only, viewing the patterns and attentively listening to recordings of the AVS training sessions (Auditory-Visual (AV) group). Training-related changes in cortical networks were assessed by pre- and post-training magnetoencephalographic (MEG) recordings of an auditory, a visual, and an integrated audio-visual mismatch negativity (MMN). The two groups were differently affected by the training. The results suggest that multisensory training alters the function of multisensory structures, rather than uni-sensory structures and their interconnections, and thus answer an important question posed by cognitive models of multisensory training.
15
Navarro Cebrian A, Janata P. Electrophysiological correlates of accurate mental image formation in auditory perception and imagery tasks. Brain Res 2010; 1342:39-54. PMID: 20406623; DOI: 10.1016/j.brainres.2010.04.026.
Abstract
Event-related potentials (ERPs) were recorded while listeners made intonation judgments about target notes that terminated a sequence of heard notes (bottom-up task) or a sequence of imagined notes (top-down task). We hypothesized that the neural processes underlying the accurate formation and evaluation of mental images would behave similarly in both tasks. In the imagery condition, the amplitude of the N100 component of the auditory evoked potential in response to the target tone was smaller for those listeners who formed more accurate mental images. It was comparable in amplitude to the N100 evoked when all of the notes leading to the target were heard, consistent with a process of habituation of the N100 in the auditory cortex due to the formation of a sequence of mental images. The P3a response - a marker of deviance detection - to mistuned targets was also found in the imagery condition and it was larger for listeners who formed more accurate images. Additionally, the influence of long-term implicit memory for tonal structure of Western music on the acuity of mental images was examined by comparing responses to leading tone (contextually unstable) and tonic (contextually stable) targets. Images were more accurate for targets that were related more closely to the established tonal context. The results suggest that successful top-down activation of pitch representations activates the same neural processes that underlie the N100 response to perceived notes, and that the engagement of these processes underlies successful detection of mistuning as indexed by the P3a component.
16
Schaefer RS, Desain P, Suppes P. Structural decomposition of EEG signatures of melodic processing. Biol Psychol 2009; 82:253-9. PMID: 19698758; DOI: 10.1016/j.biopsycho.2009.08.004.
Abstract
In the current study we investigated the EEG response to listening to and imagining melodies, and explored the possibility of decomposing this response according to musical features such as rhythm and pitch patterns. A structural model was created based on musical aspects, and multiple regression was used to calculate profiles of each aspect's contribution, in contrast to traditional ERP components. By decomposing the response, we aimed to uncover pronounced ERP contributions from aspects of the encoding of musical structure, assuming a simple additive combination of these aspects. Using a model built from metric levels and contour direction, 81% of the variance is explained for perceived melodies and 57% for imagined melodies. The maximum correlation between the parameters found for the same melodic aspect in perception versus imagery was 0.88, indicating similar processing between tasks. The decomposition method is shown to be a novel way of analyzing complex ERP patterns, allowing subcomponents to be investigated within a continuous context.
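The regression-based decomposition sketched above can be illustrated with a least-squares fit; the feature regressors and the variance measure here are simplified assumptions, not the authors' structural model:

```python
import numpy as np

def decompose_erp(eeg, regressors):
    """Fit per-feature regressors (e.g. metric level, contour direction)
    to a single-channel EEG trace by least squares; return the weight
    profile and the proportion of variance explained."""
    X = np.column_stack(regressors)
    beta, *_ = np.linalg.lstsq(X, eeg, rcond=None)
    residual = eeg - X @ beta
    r2 = 1.0 - residual.var() / eeg.var()
    return beta, r2
```

Under the additive assumption, the fitted weights play the role of the per-aspect contribution profiles, and `r2` corresponds to the reported percentages of explained variance.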
Affiliation(s)
- Rebecca S Schaefer
  - Donders Institute for Brain, Cognition and Behavior: Centre for Cognition, Radboud University Nijmegen, Montessorilaan 3, 6525 HE Nijmegen, The Netherlands
18
Ruiz MH, Jabusch HC, Altenmüller E. Detecting Wrong Notes in Advance: Neuronal Correlates of Error Monitoring in Pianists. Cereb Cortex 2009; 19:2625-39. PMID: 19276327; DOI: 10.1093/cercor/bhp021.
Affiliation(s)
- María Herrojo Ruiz
  - Institute of Music Physiology and Musicians' Medicine, Hanover University of Music and Drama, Hanover 30161, Germany
19
Herholz SC, Lappe C, Knief A, Pantev C. Neural basis of music imagery and the effect of musical expertise. Eur J Neurosci 2009; 28:2352-60. PMID: 19046375; DOI: 10.1111/j.1460-9568.2008.06515.x.
Abstract
Although the influence of long-term musical training on the processing of heard music has been the subject of many studies, the neural basis of music imagery and the effect of musical expertise remain insufficiently understood. By means of magnetoencephalography (MEG) we compared musicians and nonmusicians in a musical imagery task with familiar melodies. Subjects listened to the beginnings of the melodies, continued them in their imagination and then heard a tone which was either a correct or an incorrect further continuation of the melody. Only in musicians was the imagery of these melodies strong enough to elicit an early preattentive brain response to unexpected incorrect continuations of the imagined melodies; this response, the imagery mismatch negativity (iMMN), peaked approximately 175 ms after tone onset and was right-lateralized. In contrast to previous studies the iMMN was not based on a heard but on a purely imagined memory trace. Our results suggest that in trained musicians imagery and perception rely on similar neuronal correlates, and that the musicians' intense musical training has modified this network to achieve a superior ability for imagery and preattentive processing of music.
Affiliation(s)
- Sibylle C Herholz
  - Institute for Biomagnetism and Biosignalanalysis, University of Münster, Malmedyweg 15, D-48149 Münster, Germany
20
Katahira K, Abla D, Masuda S, Okanoya K. Feedback-based error monitoring processes during musical performance: An ERP study. Neurosci Res 2008; 61:120-8. DOI: 10.1016/j.neures.2008.02.001.
21
Hirose H, Kubota M, Kimura I, Yumoto M, Sakakihara Y. Increased right auditory cortex activity in absolute pitch possessors. Neuroreport 2005; 16:1775-9. PMID: 16237325; DOI: 10.1097/01.wnr.0000183906.00526.51.
Abstract
We recorded auditory-evoked magnetic fields from children and adults with absolute pitch during the following tasks: (1) hearing 1000 Hz pure tones inattentively, (2) hearing eight random tones inattentively, and (3) listening to eight random tones and identifying each tone. In children with absolute pitch, there was no significant positive correlation between the appearance rate of the N100m and the type of task. In adults with absolute pitch, only the right N100m dipole moments increased significantly in tasks (1) and (2). These results suggest that the labeling circuit in the right auditory cortex may lose function from childhood to adulthood, revealing neuroplasticity in the development of absolute pitch ability.
Affiliation(s)
- Hiroyuki Hirose
  - Department of Pediatrics, Faculty of Medicine, University of Tokyo, Bunkyo-ku, Tokyo, Japan