1
Jo HS, Hsieh TH, Chien WC, Shaw FZ, Liang SF, Kung CC. Probing the neural dynamics of musicians' and non-musicians' consonant/dissonant perception: Joint analyses of electrical encephalogram (EEG) and functional magnetic resonance imaging (fMRI). Neuroimage 2024; 298:120784. [PMID: 39147290] [DOI: 10.1016/j.neuroimage.2024.120784]
Abstract
The perception of two (or more) simultaneous musical notes, depending on their pitch interval(s), can be broadly categorized as consonant or dissonant. Previous literature has suggested that musicians and non-musicians adopt different strategies when discerning musical intervals: musicians rely on the frequency ratios between the two fundamental frequencies, treating the "perfect fifth" (3:2) as consonant and the "tritone" (45:32) as dissonant, whereas non-musicians may rely on the presence of 'roughness' or 'beats', generated by the difference between the fundamental frequencies, as the key element of 'dissonance'. Separate Event-Related Potential (ERP) differences in N1 and P2 along the midline electrodes provided evidence congruent with these separate reliances. To replicate and extend these findings, in this study we reran the previous experiment and separately collected fMRI data with the same protocol (with sparse-sampling modifications). The behavioral and EEG results largely corresponded to our previous findings. The fMRI results, jointly analyzed with univariate, psycho-physiological interaction, and representational similarity analysis (RSA) approaches, further reinforce the involvement of midline-related brain regions, such as the ventromedial prefrontal and dorsal anterior cingulate cortex, in consonance/dissonance judgments. The final spatiotemporal searchlight RSA provided convincing evidence that the medial prefrontal cortex, along with the bilateral superior temporal cortex, is the locus of the midline N1 effect, and the dorsal anterior cingulate cortex the locus of the P2 effect (for musicians). Together, these analyses reaffirm that musicians rely more on experience-driven knowledge for consonance/dissonance perception, and also demonstrate the advantages of multiple analyses in mutually constraining the findings from both EEG and fMRI.
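As context for the frequency-ratio account in the abstract above, the interval arithmetic can be sketched in a few lines of Python. Only the 3:2 and 45:32 ratios come from the abstract; the 440 Hz lower note is an illustrative assumption, not a stimulus value from the study:

```python
# Interval ratios quoted in the abstract: perfect fifth (3:2), tritone (45:32).
# The 440 Hz lower note is an illustrative assumption, not a stimulus from the study.
lower_f0 = 440.0
for name, num, den in [("perfect fifth", 3, 2), ("tritone", 45, 32)]:
    upper_f0 = lower_f0 * num / den
    # The difference of the two fundamentals sets the beat/roughness rate
    # that, per the abstract, non-musicians are suggested to rely on.
    beat_rate = upper_f0 - lower_f0
    print(f"{name}: upper = {upper_f0:.2f} Hz, beat rate = {beat_rate:.2f} Hz")
```

The simple ratio (3:2) yields an upper note at exactly 1.5 times the lower fundamental, while the tritone's complex ratio (45:32 ≈ 1.406) does not reduce to small integers, which is the musicians' proposed cue.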
Affiliation(s)
- Han Shin Jo
- Institute of Medical Informatics, National Cheng Kung University (NCKU), Tainan, 70101, Taiwan
- Tsung-Hao Hsieh
- Department of Computer Science and Information Engineering, NCKU, Tainan, 70101, Taiwan; Department of Computer Science, Tunghai University, Taichung, 407224, Taiwan
- Wei-Che Chien
- Department of Computer Science and Information Engineering, NCKU, Tainan, 70101, Taiwan
- Fu-Zen Shaw
- Department of Psychology, NCKU, Tainan, 70101, Taiwan; Mind Research and Imaging Center, NCKU, Tainan, 70101, Taiwan
- Sheng-Fu Liang
- Institute of Medical Informatics, National Cheng Kung University (NCKU), Tainan, 70101, Taiwan; Department of Computer Science and Information Engineering, NCKU, Tainan, 70101, Taiwan
- Chun-Chia Kung
- Department of Psychology, NCKU, Tainan, 70101, Taiwan; Mind Research and Imaging Center, NCKU, Tainan, 70101, Taiwan
2
Honbolygó F, Zulauf B, Zavogianni MI, Csépe V. Investigating the neurocognitive background of speech perception with a fast multi-feature MMN paradigm. Biol Futur 2024; 75:145-158. [PMID: 38805154] [DOI: 10.1007/s42977-024-00219-1]
Abstract
The speech multi-feature MMN (Mismatch Negativity) paradigm offers a means to explore the neurocognitive background of the processing of multiple speech features in a short time, by capturing the time-locked electrophysiological activity of the brain known as event-related brain potentials (ERPs). Originating from the pioneering work of Näätänen et al. (Clin Neurophysiol 115:140-144, 2004), this paradigm introduces several infrequent deviant stimuli alongside standard ones, each differing in various speech features. In this study, we aimed to refine the multi-feature MMN paradigm used previously to encompass both segmental and suprasegmental (prosodic) features of speech. In the experiment, a two-syllable pseudoword was presented as the standard, and the deviant stimuli included alterations in consonants (deviation by place, or by place and manner of articulation), vowels (deviation by place or manner of articulation), and the stress pattern of the first syllable of the pseudoword. Results indicated the emergence of MMN components across all segmental and prosodic contrasts, with the expected fronto-central amplitude distribution. Subsequent analyses revealed subtle differences in MMN responses to the deviants, suggesting varying sensitivity to phonetic contrasts. Furthermore, individual differences in MMN amplitudes were noted, partially attributable to participants' musical and language backgrounds. These findings underscore the utility of the multi-feature MMN paradigm for rapid and efficient investigation of the neurocognitive mechanisms underlying speech processing, and demonstrate its potential for studying speech processing abilities in various populations.
Affiliation(s)
- Ferenc Honbolygó
- HUN-REN Research Centre for Natural Sciences, Budapest, Hungary
- Institute of Psychology, Eötvös Loránd University, Budapest, Hungary
- Borbála Zulauf
- Institute of Psychology, Eötvös Loránd University, Budapest, Hungary
- Maria Ioanna Zavogianni
- HUN-REN Research Centre for Natural Sciences, Budapest, Hungary
- Faculty of Modern Philology and Social Sciences, Multilingualism Doctoral School, University of Pannonia, Veszprém, Hungary
- Valéria Csépe
- HUN-REN Research Centre for Natural Sciences, Budapest, Hungary
- University of Pannonia, Veszprém, Hungary
3
Elmer S, Kurthen I, Meyer M, Giroud N. A multidimensional characterization of the neurocognitive architecture underlying age-related temporal speech processing. Neuroimage 2023; 278:120285. [PMID: 37481009] [DOI: 10.1016/j.neuroimage.2023.120285]
Abstract
Healthy aging is often associated with speech comprehension difficulties in everyday life situations despite a pure-tone hearing threshold in the normative range. Drawing on this background, we used a multidimensional approach to assess the functional and structural neural correlates underlying age-related temporal speech processing while controlling for pure-tone hearing acuity. Accordingly, we combined structural magnetic resonance imaging and electroencephalography, and collected behavioral data while younger and older adults completed a phonetic categorization and discrimination task with consonant-vowel syllables varying along a voice-onset time continuum. The behavioral results confirmed age-related temporal speech processing singularities, reflected in a shift of the boundary of the psychometric categorization function, with older adults perceiving more syllables characterized by a short voice-onset time as /ta/ compared to younger adults. Furthermore, despite the absence of any between-group differences in phonetic discrimination abilities, older adults demonstrated longer N100/P200 latencies as well as increased P200 amplitudes while processing the consonant-vowel syllables varying in voice-onset time. Finally, older adults also exhibited a divergent anatomical gray matter infrastructure in bilateral auditory-related and frontal brain regions, as manifested in reduced cortical thickness and surface area. Notably, in the younger adults but not in the older adult cohort, cortical surface area in these two gross anatomical clusters correlated with the categorization of consonant-vowel syllables characterized by a short voice-onset time, suggesting the existence of a critical gray matter threshold that is crucial for consistent mapping of phonetic categories varying along the temporal dimension. Taken together, our results highlight the multifaceted dimensions of age-related temporal speech processing characteristics and pave the way toward a better understanding of the relationships between hearing, speech, and the brain in older age.
Affiliation(s)
- Stefan Elmer
- Department of Computational Linguistics, Computational Neuroscience of Speech & Hearing, University of Zurich, Zurich, Switzerland; Competence center Language & Medicine, University of Zurich, Switzerland
- Ira Kurthen
- Department of Computational Linguistics, Computational Neuroscience of Speech & Hearing, University of Zurich, Zurich, Switzerland
- Martin Meyer
- Department of Comparative Language Science, University of Zurich, Zurich, Switzerland; Center for Neuroscience Zurich, University and ETH of Zurich, Zurich, Switzerland; Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland; Cognitive Psychology Unit, Alpen-Adria University, Klagenfurt, Austria
- Nathalie Giroud
- Department of Computational Linguistics, Computational Neuroscience of Speech & Hearing, University of Zurich, Zurich, Switzerland; Center for Neuroscience Zurich, University and ETH of Zurich, Zurich, Switzerland; Competence center Language & Medicine, University of Zurich, Switzerland
4
Eierud C, Michael A, Banks D, Andrews E. Resting-state functional connectivity in lifelong musicians. Psychoradiology 2023; 3:kkad003. [PMID: 38666119] [PMCID: PMC10917383] [DOI: 10.1093/psyrad/kkad003]
Abstract
Background: It has been postulated that musicianship can lead to enhanced brain and cognitive reserve, but the neural mechanisms of this effect are poorly understood. Lifelong professional musicianship, in conjunction with novel brain imaging techniques, offers a unique opportunity to examine brain network differences between musicians and matched controls.
Objective: In this study we aim to investigate how resting-state functional networks (FNs) manifest in lifelong active musicians, evaluating the FNs of lifelong musicians and matched healthy controls using resting-state functional magnetic resonance imaging.
Methods: We derive FNs using the data-driven independent component analysis approach and analyze the functional network connectivity (FNC) between the default mode (DMN), sensory-motor (SMN), visual (VSN), and auditory (AUN) networks. We examine whether the linear regressions between FNC and age differ between the musicians and the control group.
Results: The age trajectory of average FNC across all six pairs of FNs shows significant differences between musicians and controls: musicians show an increase in average FNC with age, while controls show a decrease (P = 0.013). When we evaluated each pair of FNs, FNC values in musicians increased with age in DMN-AUN, DMN-VSN, and SMN-VSN, whereas in controls FNC values decreased with age in DMN-AUN, DMN-SMN, AUN-SMN, and SMN-VSN.
Conclusion: These results provide early evidence that lifelong musicianship may contribute to enhanced brain and cognitive reserve. The results are preliminary and need to be replicated with a larger number of participants.
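The "six pairs of FNs" in the Results follow directly from choosing 2 of the 4 named networks; a minimal sketch:

```python
from itertools import combinations

# The four networks named in the abstract.
networks = ["DMN", "SMN", "VSN", "AUN"]
# All unordered network pairs for which FNC is computed: C(4, 2) = 6.
pairs = list(combinations(networks, 2))
print(len(pairs))
print(pairs)
```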
Affiliation(s)
- Cyrus Eierud
- Linguistics Program, Duke University, Durham, NC 27708, USA
- Andrew Michael
- Duke Institute for Brain Sciences, Duke University, Durham, NC 27708, USA
- David Banks
- Department of Statistical Science, Duke University, Durham, NC 27708, USA
- Edna Andrews
- Linguistics Program, Duke University, Durham, NC 27708, USA
- Duke Institute for Brain Sciences, Duke University, Durham, NC 27708, USA
- Center for Cognitive Neuroscience, Duke University, Durham, NC 27708, USA
5
Zhang Z, Zhang H, Sommer W, Yang X, Wei Z, Li W. Musical training alters neural processing of tones and vowels in classic Chinese poems. Brain Cogn 2023; 166:105952. [PMID: 36641937] [DOI: 10.1016/j.bandc.2023.105952]
Abstract
Long-term rigorous musical training promotes various aspects of spoken language processing. However, it is unclear whether musical training provides an advantage in recognizing segmental and suprasegmental information of spoken language. We used vowel and tone violations in spoken unfamiliar seven-character quatrains and a rhyming judgment task to investigate the effects of musical training on tone and vowel processing by recording ERPs. Compared with non-musicians, musicians were more accurate and responded faster to incorrect than correct tones. Musicians showed larger P2 components in their ERPs than non-musicians during both tone and vowel processing, revealing increased focused attention on sounds. Both groups showed enhanced N400 and LPC for incorrect vowels (vs. correct vowels) but non-musicians showed an additional P2 effect for vowel violations. Moreover, both groups showed enhanced LPC for incorrect tones (vs. correct tones) but only non-musicians showed an additional N400 effect for tone violations. These results indicate that vowel/tone processing is less effortful for musicians (vs. non-musicians). Our study suggests that long-term musical training facilitates speech tone and vowel processing in a tonal language environment by increasing the attentional focus on speech and reducing demands for detecting incorrect vowels and integration costs for tone changes.
Affiliation(s)
- Zhenghua Zhang
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China; Department of Psychology, Renmin University of China, Beijing 100872, China
- Hang Zhang
- Department of Psychology, Renmin University of China, Beijing 100872, China
- Werner Sommer
- Institut für Psychologie, Humboldt-Universität zu Berlin, Berlin 10117, Germany; Department of Psychology, Zhejiang Normal University, Jinhua 321004, China
- Xiaohong Yang
- Department of Psychology, Renmin University of China, Beijing 100872, China
- Zhen Wei
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- Weijun Li
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
6
Cohn M, Barreda S, Zellou G. Differences in a Musician's Advantage for Speech-in-Speech Perception Based on Age and Task. J Speech Lang Hear Res 2023; 66:545-564. [PMID: 36729698] [DOI: 10.1044/2022_jslhr-22-00259]
Abstract
PURPOSE: This study investigates the debated claim that musicians have an advantage in speech-in-noise perception owing to years of targeted auditory training. We also consider the effect of age on any such advantage, comparing musicians and nonmusicians (age range: 18-66 years), all of whom had normal hearing. We manipulate the degree of fundamental frequency (fo) separation between the competing talkers, as well as the task, to probe attentional differences that might shape a musician's advantage across ages. METHOD: Participants included 29 musicians and 26 nonmusicians. They completed two tasks varying in attentional demands: (a) a selective attention task in which listeners identified a target sentence presented with a one-talker interferer (Experiment 1), and (b) a divided attention task in which listeners heard two vowels played simultaneously and identified both competing vowels (Experiment 2). In both paradigms, fo separation between the two voices was manipulated (Δfo = 0, 0.156, 0.306, 1, 2, 3 semitones). RESULTS: Increasing differences in fo separation led to higher accuracy on both tasks. Additionally, we find evidence for a musician's advantage across the two studies. In the sentence identification task, younger adult musicians showed higher accuracy overall, as well as a stronger reliance on fo separation; this advantage declined with musicians' age. In the double-vowel task, musicians of all ages showed an across-the-board advantage in detecting two vowels, and used fo separation more to aid stream separation, but showed no consistent difference in double-vowel identification. CONCLUSIONS: Overall, we find support for a hybrid auditory encoding-attention account of music-to-speech transfer. The musician's advantage includes fo, but the benefit also depends on the attentional demands of the task and the listener's age. Taken together, this study suggests a complex relationship between age, musical experience, and speech-in-speech paradigm in a musician's advantage. SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.21956777
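For readers converting the Δfo values above into frequency terms, the standard semitone-to-ratio relation (ratio = 2^(Δst/12)) can be sketched as follows; the 100 Hz reference fo is an illustrative assumption, not a stimulus value from the study:

```python
def semitone_ratio(delta_st: float) -> float:
    """Frequency ratio corresponding to a separation of delta_st semitones."""
    return 2.0 ** (delta_st / 12.0)

reference_f0 = 100.0  # illustrative reference fo in Hz (assumption)
for delta in [0, 0.156, 0.306, 1, 2, 3]:  # separations used in the study
    competing_f0 = reference_f0 * semitone_ratio(delta)
    print(f"{delta} st -> competing fo = {competing_f0:.2f} Hz")
```

This makes clear how small the sub-semitone separations are: 0.156 semitones shifts a 100 Hz voice by less than 1 Hz.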
Affiliation(s)
- Michelle Cohn
- Phonetics Lab, Department of Linguistics, University of California, Davis
- Santiago Barreda
- Phonetics Lab, Department of Linguistics, University of California, Davis
- Georgia Zellou
- Phonetics Lab, Department of Linguistics, University of California, Davis
7
Maillard E, Joyal M, Murray MM, Tremblay P. Are musical activities associated with enhanced speech perception in noise in adults? A systematic review and meta-analysis. Curr Res Neurobiol 2023. [DOI: 10.1016/j.crneur.2023.100083]
8
Smit EA, Milne AJ, Escudero P. Music Perception Abilities and Ambiguous Word Learning: Is There Cross-Domain Transfer in Nonmusicians? Front Psychol 2022; 13:801263. [PMID: 35401340] [PMCID: PMC8984940] [DOI: 10.3389/fpsyg.2022.801263]
Abstract
Perception of music and speech is based on similar auditory skills, and it is often suggested that those with enhanced music perception skills may perceive and learn novel words more easily. The current study tested whether music perception abilities are associated with novel word learning in an ambiguous learning scenario. Using a cross-situational word learning (CSWL) task, nonmusician adults were exposed to word-object pairings between eight novel words and visual referents. Novel words were either non-minimal pairs differing in all sounds or minimal pairs differing in their initial consonant or vowel. In order to be successful in this task, learners need to be able to correctly encode the phonological details of the novel words and have sufficient auditory working memory to remember the correct word-object pairings. Using the Mistuning Perception Test (MPT) and the Melodic Discrimination Test (MDT), we measured learners’ pitch perception and auditory working memory. We predicted that those with higher MPT and MDT values would perform better in the CSWL task and in particular for novel words with high phonological overlap (i.e., minimal pairs). We found that higher musical perception skills led to higher accuracy for non-minimal pairs and minimal pairs differing in their initial consonant. Interestingly, this was not the case for vowel minimal pairs. We discuss the results in relation to theories of second language word learning such as the Second Language Perception model (L2LP).
Affiliation(s)
- Eline A. Smit
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW, Australia
- ARC Centre of Excellence for the Dynamics of Language, Canberra, ACT, Australia
- Andrew J. Milne
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW, Australia
- Paola Escudero
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW, Australia
- ARC Centre of Excellence for the Dynamics of Language, Canberra, ACT, Australia
9
Abstract
While many studies have examined the auditory abilities of musicians, this study uniquely asks whether dance training, a similar yet understudied type of early-life training, also benefits auditory abilities. We focused this investigation on temporal resolution, given the importance of subtle temporal cues in synchronizing movement. We found that, compared to untrained controls, novice adult dancers who have trained continuously since childhood had enhanced temporal resolution, measured with a gap detection task. In an analysis involving current and former dancers, total years of training was a significant predictor of temporal resolution thresholds. The association between dance experience and improved auditory skills has implications for current theories of experience-dependent auditory plasticity and the design of sound-based educational and rehabilitation activities.
Affiliation(s)
- Erika Skoe
- Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs, Connecticut, United States
- Erica V Scarpati
- Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs, Connecticut, United States
10
Li X, Zatorre RJ, Du Y. The Microstructural Plasticity of the Arcuate Fasciculus Undergirds Improved Speech in Noise Perception in Musicians. Cereb Cortex 2021; 31:3975-3985. [PMID: 34037726] [PMCID: PMC8328222] [DOI: 10.1093/cercor/bhab063]
Abstract
Musical training is thought to be related to improved language skills, for example, understanding speech in background noise. Although studies have found that musicians and nonmusicians differed in morphology of bilateral arcuate fasciculus (AF), none has associated such white matter features with speech-in-noise (SIN) perception. Here, we tested both SIN and the diffusivity of bilateral AF segments in musicians and nonmusicians using diffusion tensor imaging. Compared with nonmusicians, musicians had higher fractional anisotropy (FA) in the right direct AF and lower radial diffusivity in the left anterior AF, which correlated with SIN performance. The FA-based laterality index showed stronger right lateralization of the direct AF and stronger left lateralization of the posterior AF in musicians than nonmusicians, with the posterior AF laterality predicting SIN accuracy. Furthermore, hemodynamic activity in right superior temporal gyrus obtained during a SIN task played a full mediation role in explaining the contribution of the right direct AF diffusivity on SIN performance, which therefore links training-related white matter plasticity, brain hemodynamics, and speech perception ability. Our findings provide direct evidence that differential microstructural plasticity of bilateral AF segments may serve as a neural foundation of the cross-domain transfer effect of musical experience to speech perception amid competing noise.
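The "FA-based laterality index" referenced above is commonly computed as a normalized right-left difference; a minimal sketch under that assumption (the sign convention and the sample FA values are illustrative, not taken from the paper):

```python
def laterality_index(fa_right: float, fa_left: float) -> float:
    """Normalized hemispheric asymmetry of fractional anisotropy (FA).

    Under this (assumed) sign convention, positive values indicate
    rightward lateralization and negative values leftward lateralization.
    """
    return (fa_right - fa_left) / (fa_right + fa_left)

# Illustrative FA values (not from the paper): a right-lateralized tract.
print(laterality_index(0.55, 0.45))
```

Because the difference is normalized by the sum, the index is bounded in [-1, 1] and comparable across tracts with different absolute FA.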
Affiliation(s)
- Xiaonan Li
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- Robert J Zatorre
- Montréal Neurological Institute, McGill University, Montréal, QC H3A 2B4, Canada; International Laboratory for Brain, Music, and Sound Research (BRAMS), Montréal, QC H3A 2B4, Canada; Centre for Research on Brain, Language and Music (CRBLM), Montreal, QC H3A 2B4, Canada
- Yi Du
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai 200031, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China; Chinese Institute for Brain Research, Beijing 102206, China
11
Bourke JD, Todd J. Acoustics versus linguistics? Context is Part and Parcel to lateralized processing of the parts and parcels of speech. Laterality 2021; 26:725-765. [PMID: 33726624] [DOI: 10.1080/1357650x.2021.1898415]
Abstract
The purpose of this review is to provide an accessible exploration of key considerations of lateralization in speech and non-speech perception using clear and defined language. From these considerations, the primary arguments for each side of the linguistics versus acoustics debate are outlined and explored in context of emerging integrative theories. This theoretical approach entails a perspective that linguistic and acoustic features differentially contribute to leftward bias, depending on the given context. Such contextual factors include stimulus parameters and variables of stimulus presentation (e.g., noise/silence and monaural/binaural) and variances in individuals (sex, handedness, age, and behavioural ability). Discussion of these factors and their interaction is also aimed towards providing an outline of variables that require consideration when developing and reviewing methodology of acoustic and linguistic processing laterality studies. Thus, there are three primary aims in the present paper: (1) to provide the reader with key theoretical perspectives from the acoustics/linguistics debate and a synthesis of the two viewpoints, (2) to highlight key caveats for generalizing findings regarding predominant models of speech laterality, and (3) to provide a practical guide for methodological control using predominant behavioural measures (i.e., gap detection and dichotic listening tasks) and/or neurophysiological measures (i.e., mismatch negativity) of speech laterality.
Affiliation(s)
- Jesse D Bourke
- School of Psychology, University Drive, Callaghan, NSW 2308, Australia
- Juanita Todd
- School of Psychology, University Drive, Callaghan, NSW 2308, Australia
12
Abstract
The present study examined the relationship between multisensory integration and the temporal binding window (TBW) for multisensory processing in adults with Autism spectrum disorder (ASD). The ASD group was less likely than the typically developing group to perceive an illusory flash induced by multisensory integration during a sound-induced flash illusion (SIFI) task. Although both groups showed comparable TBWs during the multisensory temporal order judgment task, correlation analyses and Bayes factors provided moderate evidence that the reduced SIFI susceptibility was associated with the narrow TBW in the ASD group. These results suggest that the individuals with ASD exhibited atypical multisensory integration and that individual differences in the efficacy of this process might be affected by the temporal processing of multisensory information.
13
Sorati M, Behne DM. Considerations in Audio-Visual Interaction Models: An ERP Study of Music Perception by Musicians and Non-musicians. Front Psychol 2021; 11:594434. [PMID: 33551911] [PMCID: PMC7854916] [DOI: 10.3389/fpsyg.2020.594434]
Abstract
Previous research with speech and non-speech stimuli has suggested that in audiovisual perception, visual information starting prior to the onset of the corresponding sound can provide predictive cues about the upcoming auditory event. This prediction leads to audiovisual (AV) interaction: auditory and visual perception interact, inducing suppression and speeding-up of early auditory event-related potentials (ERPs) such as N1 and P2. To investigate AV interaction, previous research examined N1 and P2 amplitudes and latencies in response to audio-only (AO), video-only (VO), audiovisual, and control (CO) stimuli, comparing AV with auditory perception based on four AV interaction models (AV vs. AO+VO, AV-VO vs. AO, AV-VO vs. AO-CO, AV vs. AO). The current study addresses how different models of AV interaction express N1 and P2 suppression in music perception. Furthermore, it examines whether previous musical experience, which can potentially lead to higher N1 and P2 amplitudes in auditory perception, influences AV interaction in the different models. Musicians and non-musicians were presented with recordings (AO, AV, VO) of a keyboard /C4/ key being played, as well as CO stimuli. Results showed that the AV interaction models differ in how they express N1 and P2 amplitude and latency suppression: the calculation of the models (AV-VO vs. AO) and (AV-VO vs. AO-CO) has consequences for the resulting N1 and P2 difference waves. Furthermore, while musicians, compared to non-musicians, showed higher N1 amplitudes in auditory perception, suppression of N1 and P2 amplitudes and latencies was similar for the two groups across the AV models. Collectively, these results suggest that when visual cues from finger and hand movements predict the upcoming sound in AV music perception, suppression of early ERPs is similar for musicians and non-musicians. Notably, the calculation differences across models do not lead to the same pattern of results for N1 and P2, demonstrating that the four models are not interchangeable and not directly comparable.
Affiliation(s)
- Marzieh Sorati
- Department of Psychology, Norwegian University of Science and Technology, Trondheim, Norway
- Dawn M Behne
- Department of Psychology, Norwegian University of Science and Technology, Trondheim, Norway
14
Effects of Lifelong Musicianship on White Matter Integrity and Cognitive Brain Reserve. Brain Sci 2021; 11:brainsci11010067. [PMID: 33419228] [PMCID: PMC7825624] [DOI: 10.3390/brainsci11010067]
Abstract
There is a significant body of research that has identified specific, high-end cognitive demand activities and lifestyles that may play a role in building cognitive brain reserve, including volume changes in gray matter and white matter, increased structural connectivity, and enhanced categorical perception. While normal aging produces trends of decreasing white matter (WM) integrity, research on cognitive brain reserve suggests that complex sensory–motor activities across the life span may slow down or reverse these trends. Previous research has focused on structural and functional changes to the human brain caused by training and experience in both linguistic (especially bilingualism) and musical domains. The current research uses diffusion tensor imaging to examine the integrity of subcortical white matter fiber tracts in lifelong musicians. Our analysis, using TORTOISE and the ICBM-81 atlas, reveals higher fractional anisotropy, an indicator of greater WM integrity, in aging musicians in the bilateral superior longitudinal fasciculi and bilateral uncinate fasciculi. Statistical methods used include Fisher's method and linear regression analysis. Another unique aspect of this study is the accompanying behavioral performance data for each participant. This is one of the first studies to look specifically at musicianship across the life span and its impact on bilateral WM integrity in aging.
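Fractional anisotropy, the WM-integrity index used in this work, is a standard function of the three diffusion-tensor eigenvalues. A minimal sketch (the eigenvalues below are illustrative, not values from the study):

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """Standard FA formula from the three diffusion-tensor eigenvalues.

    FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||, ranging from
    0 (fully isotropic diffusion) to 1 (diffusion along a single axis).
    """
    mean = (l1 + l2 + l3) / 3.0
    num = math.sqrt((l1 - mean) ** 2 + (l2 - mean) ** 2 + (l3 - mean) ** 2)
    den = math.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    return math.sqrt(1.5) * num / den if den > 0 else 0.0

# Illustrative eigenvalues (in units of 1e-3 mm^2/s), not study values:
print(fractional_anisotropy(1.0, 1.0, 1.0))  # isotropic -> 0.0
print(fractional_anisotropy(1.7, 0.3, 0.2))  # elongated tensor -> high FA
```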
|
15
|
Musical training improves rhythm integrative processing of classical Chinese poem. Acta Psychologica Sinica 2020. [DOI: 10.3724/sp.j.1041.2020.00847] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
16
|
Sorati M, Behne DM. Audiovisual Modulation in Music Perception for Musicians and Non-musicians. Front Psychol 2020; 11:1094. [PMID: 32547458 PMCID: PMC7273518 DOI: 10.3389/fpsyg.2020.01094] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2020] [Accepted: 04/29/2020] [Indexed: 11/13/2022] Open
Abstract
In audiovisual music perception, visual information from a musical instrument being played is available prior to the onset of the corresponding musical sound and consequently allows a perceiver to form a prediction about the upcoming audio music. This prediction in audiovisual music perception, compared to auditory music perception, leads to lower N1 and P2 amplitudes and latencies. Although previous research suggests that audiovisual experience, such as previous musical experience, may enhance this prediction, a remaining question is to what extent musical experience modifies N1 and P2 amplitudes and latencies. Furthermore, corresponding event-related phase modulations, quantified as inter-trial phase coherence (ITPC), have not previously been reported for audiovisual music perception. In the current study, audio-video recordings of a keyboard key being played were presented to musicians and non-musicians in audio only (AO), video only (VO), and audiovisual (AV) conditions. With predictive movements from playing the keyboard isolated from AV music perception (AV-VO), the current findings demonstrated that, compared to the AO condition, both groups had a similar decrease in N1 amplitude and latency, and P2 amplitude, along with correspondingly lower ITPC values in the delta, theta, and alpha frequency bands. However, while musicians showed lower ITPC values in the beta-band in AV-VO compared to AO, non-musicians did not show this pattern. Findings indicate that AV perception may be broadly correlated with auditory perception, and differences between musicians and non-musicians further indicate musical experience to be a specific factor influencing AV perception. Predicting an upcoming sound in AV music perception may involve visual predictive processes, as well as beta-band oscillations, which may be influenced by years of musical training. This study highlights possible interconnectivity in AV perception as well as potential modulation with experience.
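Inter-trial phase coherence, as used in this and the companion studies, is the length of the mean unit phasor of the single-trial phase angles at a given time-frequency point: ITPC = |(1/N) Σ exp(iφₙ)|. A minimal numpy sketch on simulated phase distributions (not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)

def itpc(phases):
    """Inter-trial phase coherence: length of the mean unit phasor across
    trials (axis 0). 1 = identical phase on every trial; values near 0 =
    uniformly random phase across trials."""
    return np.abs(np.exp(1j * phases).mean(axis=0))

n_trials = 200
# Phase-locked trials: small jitter around a common phase angle.
locked = 0.5 + rng.normal(0.0, 0.2, size=n_trials)
# Non-locked trials: phase uniform on the circle.
random_phase = rng.uniform(-np.pi, np.pi, size=n_trials)

print(itpc(locked))        # close to 1
print(itpc(random_phase))  # close to 0
```

In practice the phase angles come from a time-frequency decomposition (e.g. wavelets) of each trial, and `itpc` is applied per time-frequency bin.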
Affiliation(s)
- Marzieh Sorati
- Department of Psychology, Norwegian University of Science and Technology, Trondheim, Norway
- Dawn Marie Behne
- Department of Psychology, Norwegian University of Science and Technology, Trondheim, Norway
|
17
|
Musicians use speech-specific areas when processing tones: The key to their superior linguistic competence? Behav Brain Res 2020; 390:112662. [PMID: 32442547 DOI: 10.1016/j.bbr.2020.112662] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2019] [Revised: 04/21/2020] [Accepted: 04/22/2020] [Indexed: 11/23/2022]
Abstract
It is known that musicians have some superior speech and language competence compared to non-musicians, yet the mechanisms by which musical training leads to this advantage are not well specified. This event-related fMRI study confirmed that musicians outperformed non-musicians in processing not only musical tones but also syllables, and identified a network differentiating musicians from non-musicians during processing of linguistic sounds. Within this network, the activation of the bilateral superior temporal gyrus was shared by all subjects during processing of the acoustically well-matched musical and linguistic sounds, and overlapped with the activation distinguishing tones with a complex harmonic spectrum (bowed tones) from simpler ones (plucked tones). These results confirm that better speech processing in musicians relies on improved cross-domain spectral analysis. Activation of the left posterior superior temporal sulcus (pSTS), premotor cortex, inferior frontal gyrus, and fusiform gyrus (FG), which also distinguished musicians from non-musicians during syllable processing, overlapped with the activation segregating linguistic from musical sounds in all subjects. Since these brain regions were not involved during tone processing in non-musicians, they could code for functions which are specialized for speech. Musicians recruited the pSTS and FG during tone processing; thus, these speech-specialized brain areas processed musical sounds in the presence of musical training. This study shows that the linguistic advantage of musicians is linked not only to improved cross-domain spectral analysis, but also to the functional adaptation of brain resources that are specialized for speech, yet accessible to the domain of music in the presence of musical training.
|
18
|
Sadakata M, Weidema JL, Honing H. Parallel pitch processing in speech and melody: A study of the interference of musical melody on lexical pitch perception in speakers of Mandarin. PLoS One 2020; 15:e0229109. [PMID: 32130244 PMCID: PMC7055904 DOI: 10.1371/journal.pone.0229109] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2019] [Accepted: 01/29/2020] [Indexed: 11/30/2022] Open
Abstract
Music and language have long been considered two distinct cognitive faculties governed by domain-specific cognitive and neural mechanisms. Recent work on the domain-specificity of pitch processing suggests, however, that pitch processing in both domains is governed by shared neural mechanisms. The current study aimed to explore the domain-specificity of pitch processing by simultaneously presenting pitch contours in speech and music to speakers of a tonal language, and measuring behavioral responses and event-related potentials (ERPs). Native speakers of Mandarin were exposed to concurrent pitch contours in melody and speech. Contours in the melody emulated those in speech and were either congruent or incongruent with the pitch contour of the lexical tone (i.e., rising or falling). Component magnitudes of the N2b and N400 were used as indices of lexical processing. We found that the N2b was modulated by melodic pitch; incongruent items evoked significantly stronger amplitudes. There was a trend for the N400 to be modulated in the same way. Interestingly, these effects were present only on rising tones. The amplitudes and time courses of the N2b and N400 suggest an interference of melodic pitch contours with both early and late stages of phonological and semantic processing.
Affiliation(s)
- Makiko Sadakata
- Institute for Logic, Language and Computation, Amsterdam Brain & Cognition, University of Amsterdam, Amsterdam, The Netherlands
- Musicology Department, University of Amsterdam, Amsterdam, The Netherlands
- Joey L. Weidema
- Institute for Logic, Language and Computation, Amsterdam Brain & Cognition, University of Amsterdam, Amsterdam, The Netherlands
- Henkjan Honing
- Institute for Logic, Language and Computation, Amsterdam Brain & Cognition, University of Amsterdam, Amsterdam, The Netherlands
- Musicology Department, University of Amsterdam, Amsterdam, The Netherlands
|
19
|
Carioti D, Danelli L, Guasti MT, Gallucci M, Perugini M, Steca P, Stucchi NA, Maffezzoli A, Majno M, Berlingeri M, Paulesu E. Music Education at School: Too Little and Too Late? Evidence From a Longitudinal Study on Music Training in Preadolescents. Front Psychol 2019; 10:2704. [PMID: 31920782 PMCID: PMC6930811 DOI: 10.3389/fpsyg.2019.02704] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2019] [Accepted: 11/15/2019] [Indexed: 12/02/2022] Open
Abstract
It is widely believed that intensive music training can boost cognitive and visuo-motor skills. However, this evidence is primarily based on retrospective studies, which makes it difficult to determine whether a cognitive advantage is caused by the intensive music training or whether it instead reflects factors influencing the choice to start a music curriculum. To address these issues in a highly ecological setting, we longitudinally tested 128 students of a Middle School in Milan, at the beginning of the first class and, 1 year later, at the beginning of the second class. Seventy-two students followed a Music curriculum (30 with previous music experience and 42 without) and 56 followed a Standard curriculum (44 with prior music experience and 12 without). Using a Principal Component Analysis, all the cognitive measures were grouped into four high-order factors, reflecting (a) General Cognitive Abilities, (b) Speed of Linguistic Elaboration, (c) Accuracy in Reading and Memory tests, and (d) Visuospatial and numerical skills. The longitudinal comparison of the four groups of students revealed that students from the Music curriculum had better performance in tests of General Cognitive Abilities, Visuospatial skills, and Accuracy in Reading and Memory. However, there were no significant curriculum-by-time interactions. Finally, the decision to have a musical experience before entering middle school was more likely when the family's cultural background was high. We conclude that a combination of family-related variables, early music experience, and pre-existing cognitive make-up is a likely explanation for the decision to enter a music curriculum at middle school.
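Grouping many cognitive measures into a few high-order factors, as done here, rests on computing principal components of the standardized score matrix. A sketch using a plain SVD on synthetic data; the sample size matches the study (128 students), but the measure names and two-factor structure are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic scores: 128 students x 6 measures (names illustrative only).
n = 128
g = rng.normal(size=n)    # latent "general ability" factor
spd = rng.normal(size=n)  # latent "speed" factor

def noisy(latent):
    """A measure loading on one latent factor, plus measurement noise."""
    return latent + 0.3 * rng.normal(size=n)

X = np.column_stack([
    noisy(g),             # reasoning test
    noisy(g),             # memory-accuracy test
    noisy(g),             # vocabulary test
    noisy(spd),           # naming speed
    noisy(spd),           # reading speed
    rng.normal(size=n),   # unrelated measure
])

# PCA = SVD of the z-scored (centered, unit-variance) data matrix.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / (s**2).sum()  # variance share per component
loadings = Vt                    # rows: components; columns: measures

print(np.round(explained, 2))
```

With this factor structure, the first two components soak up most of the variance, mirroring how correlated test batteries collapse onto a few interpretable factors.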
Affiliation(s)
- Desiré Carioti
- Psychology Department, University of Milano-Bicocca, Milan, Italy
- Department of Humanistic Studies, University of Urbino Carlo Bo, Urbino, Italy
- Laura Danelli
- Psychology Department, University of Milano-Bicocca, Milan, Italy
- Maria T. Guasti
- Psychology Department, University of Milano-Bicocca, Milan, Italy
- Marco Perugini
- Psychology Department, University of Milano-Bicocca, Milan, Italy
- Patrizia Steca
- Psychology Department, University of Milano-Bicocca, Milan, Italy
- Maria Majno
- SONG onlus – Sistema in Lombardia, Milan, Italy
- Manuela Berlingeri
- Department of Humanistic Studies, University of Urbino Carlo Bo, Urbino, Italy
- Center of Developmental Neuropsychology, ASUR Marche, Pesaro, Italy
- NeuroMi, Milan Center for Neuroscience, Milan, Italy
- Eraldo Paulesu
- Psychology Department, University of Milano-Bicocca, Milan, Italy
- I.R.C.C.S. Galeazzi, Orthopedic Institute Milano, Milan, Italy
|
20
|
Sorati M, Behne DM. Musical Expertise Affects Audiovisual Speech Perception: Findings From Event-Related Potentials and Inter-trial Phase Coherence. Front Psychol 2019; 10:2562. [PMID: 31803107 PMCID: PMC6874039 DOI: 10.3389/fpsyg.2019.02562] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2019] [Accepted: 10/29/2019] [Indexed: 12/03/2022] Open
Abstract
In audiovisual speech perception, visual information from a talker's face during mouth articulation is available before the onset of the corresponding audio speech, and thereby allows the perceiver to use visual information to predict the upcoming audio. This prediction from phonetically congruent visual information modulates audiovisual speech perception and leads to a decrease in N1 and P2 amplitudes and latencies compared to the perception of audio speech alone. Whether audiovisual experience, such as that from musical training, influences this prediction is unclear, but if so, it may explain some of the variation observed in previous research. The current study addresses whether audiovisual speech perception is affected by musical training, first assessing N1 and P2 event-related potentials (ERPs) and, in addition, inter-trial phase coherence (ITPC). Musicians and non-musicians were presented with the syllable /ba/ in audio only (AO), video only (VO), and audiovisual (AV) conditions. With the predictive effect of mouth movements isolated from AV speech (AV-VO), results showed that, compared to audio speech, both groups had lower N1 latency and lower P2 amplitude and latency. Moreover, both groups also showed lower ITPC in the delta, theta, and beta bands in audiovisual speech perception. However, musicians showed significant suppression of N1 amplitude and desynchronization in the alpha band in audiovisual speech, not present for non-musicians. Collectively, the current findings indicate that early sensory processing can be modified by musical experience, which in turn can explain some of the variation in previous AV speech perception research.
Affiliation(s)
- Marzieh Sorati
- Department of Psychology, Norwegian University of Science and Technology, Trondheim, Norway
|
21
|
Dittinger E, Scherer J, Jäncke L, Besson M, Elmer S. Testing the influence of musical expertise on novel word learning across the lifespan using a cross-sectional approach in children, young adults and older adults. Brain and Language 2019; 198:104678. [PMID: 31450024 DOI: 10.1016/j.bandl.2019.104678] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/12/2019] [Revised: 07/01/2019] [Accepted: 08/07/2019] [Indexed: 05/25/2023]
Abstract
Word learning is a multifaceted perceptual and cognitive task that is omnipresent in everyday life. Currently, it is unclear whether this ability is influenced by age, musical expertise, or both variables. Accordingly, we used EEG and compared behavioral and electrophysiological indices of word learning between older adults with and without musical expertise (older adults' perspective) as well as between musically trained and untrained children, young adults, and older adults (lifespan perspective). Results from the older adults' perspective showed that the ability to learn new words is preserved in the elderly, albeit without a beneficial influence of musical expertise. From the lifespan perspective, results revealed lower error rates and faster reaction times in young adults compared to children and older adults. Furthermore, musically trained children and young adults outperformed participants without musical expertise, and this advantage was accompanied by EEG manifestations reflecting faster learning and neural facilitation in accessing lexical-semantic representations.
Affiliation(s)
- Eva Dittinger
- CNRS & Aix-Marseille University, Laboratoire de Neurosciences Cognitives (LNC, UMR 7291), Marseille, France; CNRS & Aix-Marseille University, Laboratoire Parole et Langage (LPL, UMR 7309), Aix-en-Provence, France; Brain and Language Research Institute (BLRI), Aix-en-Provence, France
- Johanna Scherer
- Division Neuropsychology (Auditory Research Group Zurich, ARGZ), Institute of Psychology, University of Zurich, Switzerland
- Lutz Jäncke
- Division Neuropsychology (Auditory Research Group Zurich, ARGZ), Institute of Psychology, University of Zurich, Switzerland; University Research Priority Program (URRP) "Dynamic of Healthy Aging", Zurich, Switzerland
- Mireille Besson
- CNRS & Aix-Marseille University, Laboratoire de Neurosciences Cognitives (LNC, UMR 7291), Marseille, France
- Stefan Elmer
- Division Neuropsychology (Auditory Research Group Zurich, ARGZ), Institute of Psychology, University of Zurich, Switzerland
|
22
|
Lumaca M, Kleber B, Brattico E, Vuust P, Baggio G. Functional connectivity in human auditory networks and the origins of variation in the transmission of musical systems. eLife 2019; 8:48710. [PMID: 31658945 PMCID: PMC6819097 DOI: 10.7554/elife.48710] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2019] [Accepted: 10/09/2019] [Indexed: 02/02/2023] Open
Abstract
Music producers, whether original composers or performers, vary in their ability to acquire and faithfully transmit music. This form of variation may serve as a mechanism for the emergence of new traits in musical systems. In this study, we aim to investigate whether individual differences in the social learning and transmission of music relate to intrinsic neural dynamics of auditory processing systems. We combined auditory and resting-state functional magnetic resonance imaging (fMRI) with an interactive laboratory model of cultural transmission, the signaling game, in an experiment with a large cohort of participants (N=51). We found that the degree of interhemispheric resting-state functional connectivity (rs-FC) within fronto-temporal auditory networks predicts, weeks after scanning, the learning, transmission, and structural modification of an artificial tone system. Our study introduces neuroimaging into cultural transmission research and points to specific neural auditory processing mechanisms that constrain and drive variation in the cultural transmission and regularization of musical systems.
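At its core, the brain-behavior prediction reported here is a regression of a later behavioral score on a connectivity value measured at scan time. A minimal ordinary-least-squares sketch on simulated data; the effect size, variable scales, and noise level are assumptions, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated cohort (N=51, as in the study); all values are illustrative.
n = 51
rs_fc = rng.normal(0.4, 0.1, size=n)                    # interhemispheric rs-FC
learning = 2.0 * rs_fc + rng.normal(0.0, 0.1, size=n)   # behavioral score weeks later

# Ordinary least squares: learning ~ intercept + slope * rs_fc.
A = np.column_stack([np.ones(n), rs_fc])
(intercept, slope), *_ = np.linalg.lstsq(A, learning, rcond=None)

pred = A @ np.array([intercept, slope])
r = np.corrcoef(pred, learning)[0, 1]                   # fit quality
print(f"slope={slope:.2f}, r={r:.2f}")
```

A real analysis would add cross-validation (predicting held-out participants) so that the "prediction" claim is not just an in-sample fit.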
Affiliation(s)
- Massimo Lumaca
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music, Aarhus, Denmark
- Boris Kleber
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music, Aarhus, Denmark
- Elvira Brattico
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music, Aarhus, Denmark
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music, Aarhus, Denmark
- Giosue Baggio
- Language Acquisition and Language Processing Lab, Department of Language and Literature, Norwegian University of Science and Technology, Trondheim, Norway
|
23
|
Du Y, Fang W, Qiu H. Development and validation of a method to enhance auditory attention during continuous speech-shaped noise environment. J Mech Med Biol 2019. [DOI: 10.1142/s0219519419500489] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Auditory training (AT) may strengthen auditory skills that help humans not only in on-task auditory perception but also in continuous speech-shaped noise (SSN) environments. AT based on musical material has provided some evidence for an "auditory advantage" in understanding speech-in-noise (SIN), but typically requires long training periods and complex procedures. Experimental research is therefore needed to develop and validate a simplified method, refined from musical material, named auditory target tracking training (ATT). We developed two refined AT methods, basic auditory target tracking (BAT) training and enhanced auditory target tracking (EAT) training, administered them to separate groups of adult participants ([Formula: see text]) for 20 training units, and assessed speech perception performance in noise after training. The EAT group showed better speech perception performance than the other groups, with no significant difference between the BAT group and the control group. The training effect of EAT was most pronounced with uni-gender SSN at [Formula: see text] dB. These outcomes suggest that EAT can improve speech perception performance and selective attention in SSN environments. The findings provide a link between music-based training and auditory selective attention in real-world settings, and may extend to specialized vocational training.
Affiliation(s)
- Yihang Du
- School of Mechanical, Electronic and Control Engineering, Beijing Jiaotong University, Beijing 100044, P. R. China
- Weining Fang
- State Key Lab of Rail Traffic Control & Safety, Beijing Jiaotong University, Beijing 100044, P. R. China
- Hanzhao Qiu
- School of Mechanical, Electronic and Control Engineering, Beijing Jiaotong University, Beijing 100044, P. R. China
|
24
|
Frey A, François C, Chobert J, Velay JL, Habib M, Besson M. Music Training Positively Influences the Preattentive Perception of Voice Onset Time in Children with Dyslexia: A Longitudinal Study. Brain Sci 2019; 9:brainsci9040091. [PMID: 31010099 PMCID: PMC6523730 DOI: 10.3390/brainsci9040091] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2019] [Revised: 04/12/2019] [Accepted: 04/13/2019] [Indexed: 12/04/2022] Open
Abstract
Previous results showed a positive influence of music training on linguistic abilities at both attentive and preattentive levels. Here, we investigate whether six months of active music training is more efficient than painting training to improve the preattentive processing of phonological parameters based on durations that are often impaired in children with developmental dyslexia (DD). Results were also compared to a control group of Typically Developing (TD) children matched on reading age. We used a Test–Training–Retest procedure and analysed the Mismatch Negativity (MMN) and the N1 and N250 components of the Event-Related Potentials to syllables that differed in Voice Onset Time (VOT), vowel duration, and vowel frequency. Results were clear-cut in showing a normalization of the preattentive processing of VOT in children with DD after music training but not after painting training. They also revealed increased N250 amplitude to duration deviant stimuli in children with DD after music but not painting training, and no training effect on the preattentive processing of frequency. These findings are discussed in view of recent theories of dyslexia pointing to deficits in processing the temporal structure of speech. They clearly encourage the use of active music training for the rehabilitation of children with language impairments.
Affiliation(s)
- Aline Frey
- ESPE de l'académie de Créteil, Université Paris-Est Créteil, Laboratoire CHArt, 94380 Bonneuil-sur-Marne, France
- Clément François
- Laboratoire Parole et Langage, CNRS et Aix Marseille Université, 13640 Aix-en-Provence, France
- Cognition and Brain Plasticity Group, IDIBELL, University of Barcelona, 08193 Barcelona, Spain
- Julie Chobert
- Laboratoire de Neurosciences Cognitives, CNRS et Aix-Marseille Université, 13331 Marseille, France
- Jean-Luc Velay
- Laboratoire de Neurosciences Cognitives, CNRS et Aix-Marseille Université, 13331 Marseille, France
- Michel Habib
- Département de Neurologie Pédiatrique, CHU Timone, 13005 Marseille, France
- Mireille Besson
- Laboratoire de Neurosciences Cognitives, CNRS et Aix-Marseille Université, 13331 Marseille, France
- Cuban Neuroscience Center, La Havane 4850, Cuba
|
25
|
Decrypting the electrophysiological individuality of the human brain: Identification of individuals based on resting-state EEG activity. Neuroimage 2019; 197:470-481. [PMID: 30978497 DOI: 10.1016/j.neuroimage.2019.04.005] [Citation(s) in RCA: 29] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2019] [Accepted: 04/01/2019] [Indexed: 01/09/2023] Open
Abstract
Biometric identification (BI) of individuals is a fast-growing field of research that is producing increasingly sophisticated applications in several spheres of everyday life. Previous magnetic resonance imaging (MRI) studies have demonstrated that, based on the high inter-individual variability of brain structure and function, it is possible to identify individuals with high accuracy. By contrast, it is commonly believed that electroencephalographic (EEG) data recorded at the surface of the scalp are too noisy for identification purposes with a comparably high hit rate. In the present work, we compared BI quality (F1-scores, accuracy, sensitivity, and specificity) between different types of functional (instantaneous, lagged, and total coherence, phase synchronization, correlation, and mutual information) and effective (Granger causality, phase synchronization, and coherence) connectivity measures. Results revealed that across functional connectivity metrics, identification accuracy was in the range of 0.98-1, whereas sensitivity and F1-scores were between 0.00 and 1 and specificity was between 0.99 and 1. BI was higher for the connectivity metrics that are contaminated by volume conduction (instantaneous connectivity) compared to those that are unaffected by this variable (lagged connectivity). Support vector machine and neural network algorithms yielded the highest BI, followed by random forest and weighted k-nearest neighbors, whereas linear discriminant analysis was less accurate. These results provide cross-validated counterevidence to the belief that EEG data are too noisy for identification purposes and demonstrate that functional and effective connectivity metrics are particularly suited for BI applications, with accuracy comparable to MRI. Our results have important implications for fast, low-cost, and mobile BI applications.
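The fingerprinting logic behind connectivity-based identification can be sketched in a few lines: compute a connectivity vector per subject per recording session, then match each second-session recording to the most similar first-session fingerprint. The signal model (fixed random mixing matrices standing in for stable individual connectivity) and the Pearson-similarity matcher below are illustrative simplifications, not the paper's pipeline:

```python
import numpy as np

rng = np.random.default_rng(3)

n_subjects, n_channels, n_samples = 10, 8, 500

def fingerprint(signal):
    """Functional-connectivity fingerprint: the upper triangle of the
    channel-by-channel correlation matrix, flattened to a vector."""
    c = np.corrcoef(signal)               # signal: (channels, samples)
    iu = np.triu_indices_from(c, k=1)
    return c[iu]

# Each subject gets a fixed mixing matrix (their stable "connectivity"),
# applied to fresh noise in two separate simulated sessions.
mixers = [rng.normal(size=(n_channels, n_channels)) for _ in range(n_subjects)]
session1 = [fingerprint(m @ rng.normal(size=(n_channels, n_samples))) for m in mixers]
session2 = [fingerprint(m @ rng.normal(size=(n_channels, n_samples))) for m in mixers]

# Identify: match each session-2 recording to the session-1 fingerprint
# with the highest Pearson correlation.
correct = 0
for i, probe in enumerate(session2):
    sims = [np.corrcoef(probe, ref)[0, 1] for ref in session1]
    correct += int(np.argmax(sims) == i)

print(f"identification accuracy: {correct}/{n_subjects}")
```

The paper instead trains supervised classifiers (SVMs, neural networks, etc.) on labeled connectivity features, but the underlying idea is the same: stable, subject-specific connectivity patterns are separable across sessions.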
|
26
|
Ghiselli S, Ciciriello E, Maniago G, Muzzi E, Pellizzoni S, Orzan E. Musical Training in Congenital Hearing Impairment. Effects on Cognitive and Motor Skill in Three Children Using Hearing Aids: Pilot Test Data. Front Psychol 2018; 9:1283. [PMID: 30087644 PMCID: PMC6067014 DOI: 10.3389/fpsyg.2018.01283] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2018] [Accepted: 07/04/2018] [Indexed: 11/13/2022] Open
Affiliation(s)
- Sara Ghiselli
- Department of ENT and Audiology, Institute for Maternal and Child Health IRCCS Burlo Garofolo, Trieste, Italy
- Elena Ciciriello
- Department of ENT and Audiology, Institute for Maternal and Child Health IRCCS Burlo Garofolo, Trieste, Italy
- Enrico Muzzi
- Department of ENT and Audiology, Institute for Maternal and Child Health IRCCS Burlo Garofolo, Trieste, Italy
- Eva Orzan
- Department of ENT and Audiology, Institute for Maternal and Child Health IRCCS Burlo Garofolo, Trieste, Italy
|
27
|
Mondelli MFCG, José IDS, José MR, Lopes NBF. Elaboration of an instrument to evaluate the recognition of Brazilian melodies in children. Braz J Otorhinolaryngol 2018; 85:690-697. [PMID: 30017874 PMCID: PMC9443065 DOI: 10.1016/j.bjorl.2018.05.011] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2018] [Revised: 05/09/2018] [Accepted: 05/28/2018] [Indexed: 10/28/2022] Open
Abstract
INTRODUCTION There is evidence pointing to the importance of evaluating musical perception through objective and subjective instruments. In Brazil, there is a shortage of instruments that evaluate musical perception. OBJECTIVE To develop an instrument to evaluate the recognition of traditional Brazilian melodies and to investigate the performance of children with typical hearing. METHODS The study was carried out after approval by the research ethics committee (1.198.607). The instrument was developed as software with website access, using PHP 5.5.12, Javascript, Cascading Style Sheets, and HTML5, with a MySQL 5.6.17 database on an Apache 2.4.9 server. Fifteen melodies of Brazilian folk songs were recorded in a synthesized piano timbre, with 12 seconds per melody and four-second intervals between melodies. A total of 155 school-age children, aged eight to 11 years, of both sexes, with typical hearing participated in the study. The test was performed in a silent room, with sound stimuli amplified by a loudspeaker at 65 dB HL positioned at 0° azimuth, one meter from the participant; on a notebook computer, children tapped on screen the title and illustration of the melody they recognized. The responses were recorded in the instrument's database. RESULTS The instrument, titled "Evaluation of recognition of traditional melodies in children", can be run on various devices (computers, notebooks, tablets, mobile phones) and operating systems (Windows, Macintosh, Android, Linux), accessed at http://192.185.216.17/ivan/home/login.php with a login and password. The most easily recognized melody was "Cai, cai balão" (89%) and the least recognized was "Capelinha de melão" (25.2%). The average time to perform the test was 3'15″. CONCLUSION The development and application of the software proved effective for the studied population. This instrument may contribute to improved protocols for evaluating musical perception in children using hearing aids and/or cochlear implants.
Affiliation(s)
- Ivan Dos Santos José
- Universidade de São Paulo (USP), Faculdade de Odontologia de Bauru, Programa de Pós-Graduação em Fonoaudiologia, Bauru, SP, Brazil
- Maria Renata José
- Universidade de São Paulo (USP), Faculdade de Odontologia de Bauru, Programa de Pós-Graduação em Fonoaudiologia, Bauru, SP, Brazil
|
28
|
Dittinger E, D'Imperio M, Besson M. Enhanced neural and behavioural processing of a nonnative phonemic contrast in professional musicians. Eur J Neurosci 2018; 47:1504-1516. [DOI: 10.1111/ejn.13939] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2017] [Revised: 01/24/2018] [Accepted: 04/16/2018] [Indexed: 11/28/2022]
Affiliation(s)
- Eva Dittinger
- CNRS & Aix-Marseille Université; Laboratoire de Neurosciences Cognitives (LNC, UMR 7291); Marseille, France
- CNRS & Aix-Marseille Université; Laboratoire Parole et Langage (LPL, UMR 7309); Aix-en-Provence, France
- Brain and Language Research Institute (BLRI); Aix-en-Provence, France
- Mariapaola D'Imperio
- CNRS & Aix-Marseille Université; Laboratoire Parole et Langage (LPL, UMR 7309); Aix-en-Provence, France
- Institut Universitaire de France (IUF); Paris, France
- Mireille Besson
- CNRS & Aix-Marseille Université; Laboratoire de Neurosciences Cognitives (LNC, UMR 7291); Marseille, France
29
Theta Coherence Asymmetry in the Dorsal Stream of Musicians Facilitates Word Learning. Sci Rep 2018; 8:4565. [PMID: 29545619 PMCID: PMC5854697 DOI: 10.1038/s41598-018-22942-1] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2017] [Accepted: 03/01/2018] [Indexed: 01/19/2023] Open
Abstract
Word learning constitutes a human faculty which is dependent upon two anatomically distinct processing streams projecting from posterior superior temporal (pST) and inferior parietal (IP) brain regions toward the prefrontal cortex (dorsal stream) and the temporal pole (ventral stream). The ventral stream is involved in mapping sensory and phonological information onto lexical-semantic representations, whereas the dorsal stream contributes to sound-to-motor mapping, articulation, complex sequencing in the verbal domain, and to how verbal information is encoded, stored, and rehearsed from memory. In the present source-based EEG study, we evaluated functional connectivity between the IP lobe and Broca's area while musicians and non-musicians learned pseudowords presented in the form of concatenated auditory streams. Behavioral results demonstrated that musicians outperformed non-musicians, as reflected by a higher sensitivity index (d'). This behavioral superiority was paralleled by increased left-hemispheric theta coherence in the dorsal stream, whereas non-musicians showed stronger functional connectivity in the right hemisphere. Since no between-group differences were observed in either a passive listening control condition or during rest, results point to a task-specific intertwining between musical expertise, functional connectivity, and word learning.
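The sensitivity index (d′) reported in this abstract is the standard signal-detection measure z(hit rate) − z(false-alarm rate). The following is an illustrative sketch only, not the study's own analysis code; the 1/(2N) extreme-rate correction is an assumption of the example.

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate, n_signal, n_noise):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate)."""
    # Clamp rates away from 0 and 1 (the common 1/(2N) correction)
    # so that the inverse normal CDF stays finite.
    hit_rate = min(max(hit_rate, 1 / (2 * n_signal)), 1 - 1 / (2 * n_signal))
    fa_rate = min(max(fa_rate, 1 / (2 * n_noise)), 1 - 1 / (2 * n_noise))
    z = NormalDist().inv_cdf  # standard normal quantile function
    return z(hit_rate) - z(fa_rate)
```

For example, 90% hits and 20% false alarms over 50 trials each yield a d′ of roughly 2.1; chance performance (equal hit and false-alarm rates) gives d′ = 0.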
30
Elmer S, Jäncke L. Relationships between music training, speech processing, and word learning: a network perspective. Ann N Y Acad Sci 2018; 1423:10-18. [PMID: 29542125 DOI: 10.1111/nyas.13581] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2017] [Revised: 11/16/2017] [Accepted: 11/27/2017] [Indexed: 01/19/2023]
Abstract
Numerous studies have documented the behavioral advantages conferred on professional musicians and children undergoing music training in processing speech sounds varying in the spectral and temporal dimensions. These beneficial effects have previously often been associated with local functional and structural changes in the auditory cortex (AC). However, this perspective is oversimplified, in that it does not take into account the intrinsic organization of the human brain, namely, neural networks and oscillatory dynamics. Therefore, we propose a new framework for extending these previous findings to a network perspective by integrating multimodal imaging, electrophysiology, and neural oscillations. In particular, we provide concrete examples of how functional and structural connectivity can be used to model simple neural circuits exerting a modulatory influence on AC activity. In addition, we describe how such a network approach can be used for better comprehending the beneficial effects of music training on more complex speech functions, such as word learning.
Affiliation(s)
- Stefan Elmer
- Division of Neuropsychology (Auditory Research Group Zurich, ARGZ), Institute of Psychology, University of Zurich, Zurich, Switzerland
- Lutz Jäncke
- Division of Neuropsychology (Auditory Research Group Zurich, ARGZ), Institute of Psychology, University of Zurich, Zurich, Switzerland
- Center for Integrative Human Physiology (ZIHP), University of Zurich, Zurich, Switzerland
- International Normal Aging and Plasticity Imaging Center (INAPIC), University of Zurich, Zurich, Switzerland
- University Research Priority Program (URPP) "Dynamic of Healthy Aging", University of Zurich, Zurich, Switzerland
- Department of Special Education, King Abdulaziz University, Jeddah, Saudi Arabia
31
Elmer S, Kühnis J, Rauch P, Abolfazl Valizadeh S, Jäncke L. Functional connectivity in the dorsal stream and between bilateral auditory-related cortical areas differentially contribute to speech decoding depending on spectro-temporal signal integrity and performance. Neuropsychologia 2017; 106:398-406. [PMID: 29106999 DOI: 10.1016/j.neuropsychologia.2017.10.030] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2017] [Revised: 09/21/2017] [Accepted: 10/25/2017] [Indexed: 10/18/2022]
Abstract
Speech processing relies on the interdependence between auditory perception, sensorimotor integration, and verbal memory functions. Functional and structural connectivity between bilateral auditory-related cortical areas (ARCAs) facilitates spectro-temporal analyses, whereas the dynamic interplay between ARCAs and Broca's area (i.e., dorsal pathway) contributes to verbal memory functions, articulation, and sound-to-motor mapping. However, it remains unclear whether these two neural circuits are preferentially driven by spectral or temporal acoustic information, and whether their recruitment is predictive of speech perception performance and learning. Therefore, we evaluated EEG-based intracranial (eLORETA) functional connectivity (lagged coherence) in both pathways (i.e., between bilateral ARCAs and in the dorsal stream) while good- (GPs, N = 12) and poor performers (PPs, N = 13) learned to decode natural pseudowords (CLEAN) or comparable items (speech-noise chimeras) manipulated in the envelope (ENV) or in the fine-structure (FS). Learning to decode degraded speech was generally associated with increased functional connectivity in the theta, alpha, and beta frequency range in both circuits. Furthermore, GPs exhibited increased connectivity in the left dorsal stream compared to PPs, but only during the FS condition and in the theta frequency band. These results suggest that both pathways contribute to the decoding of spectro-temporally degraded speech by increasing the communication between brain regions involved in perceptual analyses and verbal memory functions. Moreover, the left-hemispheric recruitment of the dorsal stream in GPs during the FS condition points to a contribution of this pathway to articulatory-based memory processes that are dependent on the temporal integrity of the speech signal. These results enable a better understanding of the neural circuits underlying word learning as a function of temporal and spectral signal integrity and performance.
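The connectivity measure used here, eLORETA lagged coherence, specifically discounts zero-lag (volume-conducted) contributions. For orientation only, ordinary magnitude-squared coherence between two channels, averaged over epochs, can be sketched as below; this is an illustrative simplification, not the paper's method, and `msc` is a hypothetical helper name.

```python
import numpy as np

def msc(x_epochs, y_epochs):
    """Magnitude-squared coherence per FFT bin, averaged over epochs.

    x_epochs, y_epochs: arrays of shape (n_epochs, n_samples).
    Returns values in [0, 1]; 1 means a perfectly stable phase/amplitude
    relationship between the two channels at that frequency.
    """
    X = np.fft.rfft(x_epochs, axis=1)
    Y = np.fft.rfft(y_epochs, axis=1)
    sxy = (X * np.conj(Y)).mean(axis=0)    # cross-spectrum
    sxx = (np.abs(X) ** 2).mean(axis=0)    # auto-spectrum of x
    syy = (np.abs(Y) ** 2).mean(axis=0)    # auto-spectrum of y
    return np.abs(sxy) ** 2 / (sxx * syy)
```

A channel compared with itself yields coherence 1 at every bin, while independent noise channels yield values well below 1 once enough epochs are averaged.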
Affiliation(s)
- Stefan Elmer
- Division Neuropsychology (Auditory Research Group Zurich, ARGZ), Institute of Psychology, University of Zurich, Switzerland.
- Jürg Kühnis
- Division Neuropsychology (Auditory Research Group Zurich, ARGZ), Institute of Psychology, University of Zurich, Switzerland.
- Piyush Rauch
- Division Neuropsychology (Auditory Research Group Zurich, ARGZ), Institute of Psychology, University of Zurich, Switzerland.
- Seyed Abolfazl Valizadeh
- Division Neuropsychology (Auditory Research Group Zurich, ARGZ), Institute of Psychology, University of Zurich, Switzerland.
- Lutz Jäncke
- Division Neuropsychology (Auditory Research Group Zurich, ARGZ), Institute of Psychology, University of Zurich, Switzerland; Center for Integrative Human Physiology (ZIHP), University of Zurich, Switzerland; International Normal Aging and Plasticity Imaging Center (INAPIC), University of Zurich, Switzerland; University Research Priority Program (URPP) "Dynamic of Healthy Aging", University of Zurich, Switzerland; Department of Special Education, King Abdulaziz University, Jeddah, Saudi Arabia.
32
Dittinger E, Valizadeh SA, Jäncke L, Besson M, Elmer S. Increased functional connectivity in the ventral and dorsal streams during retrieval of novel words in professional musicians. Hum Brain Mapp 2017; 39:722-734. [PMID: 29105247 DOI: 10.1002/hbm.23877] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2017] [Revised: 10/13/2017] [Accepted: 10/23/2017] [Indexed: 01/01/2023] Open
Abstract
Current models of speech and language processing postulate the involvement of two parallel processing streams (the dual stream model): a ventral stream involved in mapping sensory and phonological representations onto lexical and conceptual representations and a dorsal stream contributing to sound-to-motor mapping, articulation, and to how verbal information is encoded and manipulated in memory. Based on previous evidence showing that music training has an influence on language processing, cognitive functions, and word learning, we examined EEG-based intracranial functional connectivity in the ventral and dorsal streams while musicians and nonmusicians learned the meaning of novel words through picture-word associations. In accordance with the dual stream model, word learning was generally associated with increased beta functional connectivity in the ventral stream compared to the dorsal stream. In addition, in the linguistically most demanding "semantic task," musicians outperformed nonmusicians, and this behavioral advantage was accompanied by increased left-hemispheric theta connectivity in both streams. Moreover, theta coherence in the left dorsal pathway was positively correlated with the number of years of music training. These results provide evidence for a complex interplay within a network of brain regions involved in semantic processing and verbal memory functions, and suggest that intensive music training can modify its functional architecture leading to advantages in novel word learning.
Affiliation(s)
- Eva Dittinger
- CNRS & Aix-Marseille Univ, Laboratoire de Neurosciences Cognitives (LNC, UMR 7291), Marseille, France
- CNRS & Aix-Marseille Univ, Laboratoire Parole et Langage (LPL, UMR 7309), Aix-en-Provence, France
- Brain and Language Research Institute (BLRI), Aix-en-Provence, France
- Seyed Abolfazl Valizadeh
- Auditory Research Group Zurich (ARGZ), Division Neuropsychology, Institute of Psychology, University of Zurich, Zurich, Switzerland
- Sensory-Motor System Lab, Institute of Robotics and Intelligence Systems, Swiss Federal Institute of Technology, Zurich, Switzerland
- Lutz Jäncke
- Auditory Research Group Zurich (ARGZ), Division Neuropsychology, Institute of Psychology, University of Zurich, Zurich, Switzerland
- University Research Priority Program (URPP) "Dynamic of Healthy Aging", Zurich, Switzerland
- Mireille Besson
- CNRS & Aix-Marseille Univ, Laboratoire de Neurosciences Cognitives (LNC, UMR 7291), Marseille, France
- Stefan Elmer
- Auditory Research Group Zurich (ARGZ), Division Neuropsychology, Institute of Psychology, University of Zurich, Zurich, Switzerland
33
Faster native vowel discrimination learning in musicians is mediated by an optimization of mnemonic functions. Neuropsychologia 2017; 104:64-75. [DOI: 10.1016/j.neuropsychologia.2017.08.001] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2017] [Revised: 07/11/2017] [Accepted: 08/02/2017] [Indexed: 11/22/2022]
34
Rostami S, Moossavi A. Musical Training Enhances Neural Processing of Comodulation Masking Release in the Auditory Brainstem. Audiol Res 2017; 7:185. [PMID: 28890775 PMCID: PMC5582414 DOI: 10.4081/audiores.2017.185] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2017] [Accepted: 07/19/2017] [Indexed: 12/03/2022] Open
Abstract
Musical training strengthens segregation of the target signal from background noise. Musicians show enhanced stream segregation, which can be considered a process similar to comodulation masking release. In the current study, we measured psychoacoustical comodulation masking release in musicians and non-musicians. We then recorded brainstem responses to complex stimuli in comodulated and unmodulated maskers to investigate, for the first time, the effect of musical training on the neural representation of comodulation masking release. The musicians showed significantly greater amplitudes and earlier brainstem response timing for the stimulus in the presence of comodulated maskers than non-musicians. In agreement with the results of the psychoacoustical experiment, musicians showed greater comodulation masking release than non-musicians. These results reveal a physiological explanation for the behavioral enhancement of comodulation masking release and stream segregation in musicians.
Affiliation(s)
- Soheila Rostami
- Department of Audiology, Faculty of Rehabilitation Sciences, Iran University of Medical Sciences, Tehran, Iran
- Abdollah Moossavi
- Department of Otolaryngology, Head and Neck Surgery, Faculty of Medicine, Iran University of Medical Sciences, Tehran, Iran
35
Elmer S, Hausheer M, Albrecht J, Kühnis J. Human Brainstem Exhibits higher Sensitivity and Specificity than Auditory-Related Cortex to Short-Term Phonetic Discrimination Learning. Sci Rep 2017; 7:7455. [PMID: 28785043 PMCID: PMC5547112 DOI: 10.1038/s41598-017-07426-y] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2017] [Accepted: 06/28/2017] [Indexed: 01/09/2023] Open
Abstract
Phonetic discrimination learning is an active perceptual process that operates under the influence of cognitive control mechanisms by increasing the sensitivity of the auditory system to the trained stimulus attributes. It is assumed that the auditory cortex and the brainstem interact in order to refine how sounds are transcribed into neural codes. Here, we evaluated whether these two computational entities are prone to short-term functional changes, whether there is a chronological difference in malleability, and whether short-term training suffices to alter reciprocal interactions. We performed repeated cortical (i.e., mismatch negativity responses, MMN) and subcortical (i.e., frequency-following response, FFR) EEG measurements in two groups of participants who underwent one hour of phonetic discrimination training or were passively exposed to the same stimulus material. The training group showed a distinctive brainstem energy reduction in the trained frequency range (i.e., first formant), whereas the passive group did not show any response modulation. Notably, brainstem signal change correlated with the behavioral improvement during training, indicating a close relationship between behavior and underlying brainstem physiology. Since we did not reveal group differences in MMN responses, results point to specific short-term brainstem changes that precede functional alterations in the auditory cortex.
Affiliation(s)
- Stefan Elmer
- Auditory Research Group Zurich (ARGZ), Division Neuropsychology, Institute of Psychology, University of Zurich, Zurich, Switzerland.
- Marcela Hausheer
- Auditory Research Group Zurich (ARGZ), Division Neuropsychology, Institute of Psychology, University of Zurich, Zurich, Switzerland
- Joëlle Albrecht
- Auditory Research Group Zurich (ARGZ), Division Neuropsychology, Institute of Psychology, University of Zurich, Zurich, Switzerland
- Jürg Kühnis
- Auditory Research Group Zurich (ARGZ), Division Neuropsychology, Institute of Psychology, University of Zurich, Zurich, Switzerland
36
Dittinger E, Chobert J, Ziegler JC, Besson M. Fast Brain Plasticity during Word Learning in Musically-Trained Children. Front Hum Neurosci 2017; 11:233. [PMID: 28553213 PMCID: PMC5427084 DOI: 10.3389/fnhum.2017.00233] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2016] [Accepted: 04/21/2017] [Indexed: 01/08/2023] Open
Abstract
Children learn new words every day and this ability requires auditory perception, phoneme discrimination, attention, associative learning and semantic memory. Based on previous results showing that some of these functions are enhanced by music training, we investigated learning of novel words through picture-word associations in musically-trained and control children (8-12 year-old) to determine whether music training would positively influence word learning. Results showed that musically-trained children outperformed controls in a learning paradigm that included picture-sound matching and semantic associations. Moreover, the differences between unexpected and expected learned words, as reflected by the N200 and N400 effects, were larger in children with music training compared to controls after only 3 min of learning the meaning of novel words. In line with previous results in adults, these findings clearly demonstrate a correlation between music training and better word learning. It is argued that these benefits reflect both bottom-up and top-down influences. The present learning paradigm might provide a useful dynamic diagnostic tool to determine which perceptive and cognitive functions are impaired in children with learning difficulties.
Affiliation(s)
- Eva Dittinger
- Laboratoire de Neurosciences Cognitives (LNC, UMR 7291), CNRS, Aix-Marseille University, Marseille, France
- Laboratoire Parole et Langage (LPL, UMR 7309), CNRS, Aix-Marseille University, Aix-en-Provence, France
- Julie Chobert
- Laboratoire de Neurosciences Cognitives (LNC, UMR 7291), CNRS, Aix-Marseille University, Marseille, France
- Johannes C. Ziegler
- Laboratoire de Psychologie Cognitive (LPC, UMR 7290), CNRS, Aix-Marseille University, Marseille, France
- Mireille Besson
- Laboratoire de Neurosciences Cognitives (LNC, UMR 7291), CNRS, Aix-Marseille University, Marseille, France
37
Longitudinal auditory learning facilitates auditory cognition as revealed by microstate analysis. Biol Psychol 2016; 123:25-36. [PMID: 27866990 DOI: 10.1016/j.biopsycho.2016.11.007] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2016] [Revised: 09/19/2016] [Accepted: 11/15/2016] [Indexed: 11/20/2022]
Abstract
The current study investigates cognitive processes as reflected in late auditory-evoked potentials as a function of longitudinal auditory learning. A normal-hearing adult sample (n=15) performed an active oddball task at three consecutive time points (TPs) arranged at two-week intervals, during which EEG was recorded. The stimuli comprised syllables consisting of a natural fricative (/sh/, /s/, /f/) embedded between two /a/ sounds, as well as morphed transitions of the two syllables that served as deviants. Perceptual and cognitive modulations, as reflected in the onset and the mean global field power (GFP) of N2b- and P3b-related microstates, were investigated across four weeks. We found that the onset of P3b-like microstates, but not N2b-like microstates, decreased across TPs, more strongly for difficult deviants, leading to similar onsets for difficult and easy stimuli after repeated exposure. The mean GFP of all N2b-like and P3b-like microstates increased more for spectrally strong deviants than for weak deviants, leading to a distinctive activation for each stimulus after learning. Our results indicate that longitudinal training of auditory-related cognitive mechanisms, such as stimulus categorization, attention and memory updating, is an indispensable part of successful auditory learning. This suggests that future studies should focus on the potential benefits of cognitive processes in auditory training.
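Global field power (GFP), used above to quantify microstate strength, has a standard definition: the spatial standard deviation of the average-referenced potentials across electrodes at each time point. A minimal sketch, assuming a (channels × samples) array:

```python
import numpy as np

def global_field_power(eeg):
    """GFP per time point for an (n_electrodes, n_samples) EEG array."""
    # Re-reference to the common average, then take the spatial
    # root-mean-square (= standard deviation) across electrodes.
    v = eeg - eeg.mean(axis=0, keepdims=True)
    return np.sqrt((v ** 2).mean(axis=0))
```

A spatially uniform map gives GFP of zero, while strong opposite-polarity potentials across the montage give a large GFP.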
38
Cohrdes C, Grolig L, Schroeder S. Relating Language and Music Skills in Young Children: A First Approach to Systemize and Compare Distinct Competencies on Different Levels. Front Psychol 2016; 7:1616. [PMID: 27826266 PMCID: PMC5078758 DOI: 10.3389/fpsyg.2016.01616] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2016] [Accepted: 10/03/2016] [Indexed: 11/13/2022] Open
Abstract
Children in transition from kindergarten to school develop fundamental skills important for the acquisition of reading and writing. Previous research pointed toward substantial correlations between specific language- and music-related competencies as well as positive transfer effects from music on pre-literacy skills. However, the relationship between diverse music and language competencies remains unclear. In the present study, we used a comprehensive approach to clarify the relationships between a broad variety of language and music skills on different levels, not only between but also within domains. To do so, we selected representative language- and music-related competencies and systematically compared the performance of N = 44 5- to 7-year-old children with a control group of N = 20 young adults aged 20 to 30. Competencies were organized in distinct levels according to varying units of vowels/sounds, words or syllables/short melodic or rhythmic phrases, syntax/harmony and the context of a whole story/song to test for their interrelatedness within each domain. Following this, we conducted systematic correlation analyses between the competencies of both domains. Overall, the selected competencies appeared appropriate for the measurement of language and music skills in young children with reference to comprehension, difficulty and a developmental perspective. In line with a hierarchical model of skill acquisition, performance on lower levels was predictive of performance on higher levels within domains. Moreover, correlations between domains were stronger for competencies reflecting a similar level of cognitive processing, as expected. In conclusion, a systematic comparison of various competencies on distinct levels according to varying units proved appropriate regarding comparability and interrelatedness.
Results are discussed with regard to similarities and differences in the development of language and music skills as well as in terms of implications for further research on transfer effects from music on language.
Affiliation(s)
- Caroline Cohrdes
- Max Planck Research Group 'Reading Education and Development', Max Planck Institute for Human Development, Berlin, Germany
- Lorenz Grolig
- Max Planck Research Group 'Reading Education and Development', Max Planck Institute for Human Development, Berlin, Germany
- Sascha Schroeder
- Max Planck Research Group 'Reading Education and Development', Max Planck Institute for Human Development, Berlin, Germany
39
Sanju HK, Kumar P. Pre-attentive auditory discrimination skill in Indian classical vocal musicians and non-musicians. J Otol 2016; 11:102-110. [PMID: 29937818 PMCID: PMC6002603 DOI: 10.1016/j.joto.2016.06.002] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/29/2016] [Revised: 06/06/2016] [Accepted: 06/06/2016] [Indexed: 11/25/2022] Open
Abstract
Objective To test pre-attentive auditory discrimination skills in Indian classical vocal musicians and non-musicians. Design Mismatch negativity (MMN) was recorded to assess pre-attentive auditory discrimination using a 1000 Hz/1100 Hz tone pair, with 1000 Hz as the frequent stimulus and 1100 Hz as the infrequent (deviant) stimulus. Onset, offset and peak latencies were the latency parameters considered, whereas peak amplitude and area under the curve were considered for amplitude analysis. Study sample Fifty participants were included: the experimental group comprised 25 adult Indian classical vocal musicians, and 25 age-matched non-musicians served as the control group. Experimental group participants had a minimum of 10 years of professional experience in Indian classical vocal music, whereas control group participants had no formal training in music. Results Descriptive statistics showed better waveform morphology in the experimental group compared to the control group. MANOVA showed significantly better onset latency, peak amplitude and area under the curve in the experimental group, but no significant difference in offset and peak latencies between the two groups. Conclusion The present study probably points toward an enhancement of pre-attentive auditory discrimination skills in Indian classical vocal musicians compared to non-musicians. It indicates that Indian classical music training enhances pre-attentive auditory discrimination skills in musicians, leading to higher peak amplitude and a greater area under the curve compared to non-musicians.
Affiliation(s)
- Prawin Kumar
- Department of Audiology, All India Institute of Speech and Hearing, Mysore, Karnataka, India
40
Ong JH, Burnham D, Stevens CJ, Escudero P. Naïve Learners Show Cross-Domain Transfer after Distributional Learning: The Case of Lexical and Musical Pitch. Front Psychol 2016; 7:1189. [PMID: 27551272 PMCID: PMC4976504 DOI: 10.3389/fpsyg.2016.01189] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2016] [Accepted: 07/27/2016] [Indexed: 11/13/2022] Open
Abstract
Experienced listeners of a particular acoustic cue in either speech or music appear to have an advantage when perceiving a similar cue in the other domain (i.e., they exhibit cross-domain transfer). One explanation for cross-domain transfer relates to the acquisition of the foundations of speech and music: if acquiring pitch-based elements in speech or music results in heightened attention to pitch in general, then cross-domain transfer of pitch may be observed, which may explain the cross-domain phenomenon seen among listeners of a tone language and listeners with musical training. Here, we investigate this possibility in naïve adult learners, who were trained to acquire pitch-based elements using a distributional learning paradigm, to provide a proof-of-concept for the explanation. Learners were exposed to a stimulus distribution spanning either a Thai lexical tone minimal pair or a novel musical chord minimal pair. Within each domain, the distribution highlights pitch to facilitate learning of two different sounds (Bimodal distribution) or the distribution minimizes pitch so that the input is inferred to be from a single sound (Unimodal distribution). Learning was assessed before and after exposure to the distribution using discrimination tasks with both Thai tone and musical chord minimal pairs. We hypothesize: (i) distributional learning for learners in both the tone and the chord distributions, that is, pre-to-post improvement in discrimination after exposure to the Bimodal but not the Unimodal distribution; and (ii) for both the tone and chord conditions, learners in the Bimodal conditions but not those in the Unimodal conditions will show cross-domain transfer, as indexed by improvement in discrimination of test items in the domain other than what they were trained on. 
The results support both hypotheses, suggesting that distributional learning is not only used to acquire the foundations of speech and music, but may also play a role in cross-domain transfer: as a result of learning primitives based on a particular cue, learners show heightened attention to that cue in any auditory signal.
Affiliation(s)
- Jia Hoong Ong
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW, Australia
- Denis Burnham
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW, Australia
- Catherine J Stevens
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW, Australia
- Paola Escudero
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW, Australia
41
Sanju HK, Kumar P. Enhanced auditory evoked potentials in musicians: A review of recent findings. J Otol 2016; 11:63-72. [PMID: 29937812 PMCID: PMC6002589 DOI: 10.1016/j.joto.2016.04.002] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2016] [Revised: 04/25/2016] [Accepted: 04/25/2016] [Indexed: 11/26/2022] Open
Abstract
Auditory evoked potentials serve as an objective means of assessing the functioning of the auditory system and neuroplasticity. The literature has reported enhanced electrophysiological responses in musicians, reflecting neuroplasticity. Various databases, including PubMed, Google, Google Scholar and Medline, were searched for references related to auditory evoked potentials in musicians from 1994 onward. The different auditory evoked potentials in musicians are summarized in the present article. The findings of various studies may serve as evidence of music-induced neuroplasticity, which can inform the treatment of various clinical disorders. The search results showed enhanced auditory evoked potentials in musicians compared to non-musicians, from the brainstem to cortical levels. The present review also showed enhanced attentive and pre-attentive skills in musicians compared to non-musicians.
Affiliation(s)
- Prawin Kumar
- Department of Audiology, All India Institute of Speech and Hearing, Mysore, Karnataka, India
42
Abstract
The influence of music on the human brain has attracted increasing attention from neuroscientists and musicologists. Tonal music is widely present in people's daily lives; however, atonal music has gradually become an important part of modern music. In this study, we conducted two experiments: the first tested for differences in the perceived distractibility of tonal and atonal music. The second tested how tonal and atonal music affect visual working memory by comparing musicians and nonmusicians placed in contexts with tonal background music, atonal background music, or silence, who were instructed to complete a delayed matching memory task. The results show that musicians and nonmusicians evaluate the distractibility of tonal and atonal music differently, possibly indicating that long-term training leads to a higher auditory perception threshold among musicians. For the working memory task, musicians reacted faster than nonmusicians in all background music conditions, and musicians took more time to respond with tonal background music than in the other conditions. Our results therefore suggest that, for a visual memory task, background tonal music may occupy more cognitive resources than atonal music or silence for musicians, leaving fewer resources for the memory task. Moreover, the musicians outperformed the nonmusicians because of their higher sensitivity to background music, a finding that needs to be confirmed in a further longitudinal study.
43
El Boghdady N, Kegel A, Lai WK, Dillier N. A neural-based vocoder implementation for evaluating cochlear implant coding strategies. Hear Res 2016; 333:136-149. [PMID: 26775182 DOI: 10.1016/j.heares.2016.01.005] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/23/2015] [Revised: 12/18/2015] [Accepted: 01/07/2016] [Indexed: 10/22/2022]
Abstract
Most simulations of cochlear implant (CI) coding strategies rely on standard vocoders that are based on purely signal processing techniques. However, these models neither account for various biophysical phenomena, such as neural stochasticity and refractoriness, nor for effects of electrical stimulation, such as spectral smearing as a function of stimulus intensity. In this paper, a neural model that accounts for stochastic firing, parasitic spread of excitation across neuron populations, and neuronal refractoriness, was developed and augmented as a preprocessing stage for a standard 22-channel noise-band vocoder. This model was used to subjectively and objectively assess consonant discrimination in commercial and experimental coding strategies. Stimuli consisting of consonant-vowel (CV) and vowel-consonant-vowel (VCV) tokens were processed by either the Advanced Combination Encoder (ACE) or the Excitability Controlled Coding (ECC) strategies, and later resynthesized to audio using the aforementioned vocoder model. Baseline performance was measured using unprocessed versions of the speech tokens. Behavioural responses were collected from seven normal hearing (NH) volunteers, while EEG data were recorded from five NH participants. Psychophysical results indicate that while there may be a difference in consonant perception between the two tested coding strategies, mismatch negativity (MMN) waveforms do not show any marked trends in CV or VCV contrast discrimination.
Affiliation(s)
- Nawal El Boghdady
- Institute for Neuroinformatics (INI), Universität Zürich (UZH)/ ETH Zürich (ETHZ), Zürich, Switzerland.
- Andrea Kegel
- Laboratory of Experimental Audiology, ENT Department, Universitätsspital Zürich (USZ), Zürich, Switzerland
- Wai Kong Lai
- Laboratory of Experimental Audiology, ENT Department, Universitätsspital Zürich (USZ), Zürich, Switzerland
- Norbert Dillier
- Laboratory of Experimental Audiology, ENT Department, Universitätsspital Zürich (USZ), Zürich, Switzerland
44
Tervaniemi M, Janhunen L, Kruck S, Putkinen V, Huotilainen M. Auditory Profiles of Classical, Jazz, and Rock Musicians: Genre-Specific Sensitivity to Musical Sound Features. Front Psychol 2016; 6:1900. [PMID: 26779055 PMCID: PMC4703758 DOI: 10.3389/fpsyg.2015.01900] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2015] [Accepted: 11/24/2015] [Indexed: 11/13/2022] Open
Abstract
When compared with individuals without explicit training in music, adult musicians have facilitated neural functions in several modalities. They also display structural changes in various brain areas, these changes corresponding to the intensity and duration of their musical training. Previous studies have focused on investigating musicians with training in Western classical music. However, musicians involved in different musical genres may display highly differentiated auditory profiles according to the demands set by their genre, i.e., varying importance of different musical sound features. This hypothesis was tested in a novel melody paradigm including deviants in tuning, timbre, rhythm, melody transpositions, and melody contour. Using this paradigm while the participants were watching a silent video and instructed to ignore the sounds, we compared classical, jazz, and rock musicians' and non-musicians' accuracy of neural encoding of the melody. In all groups of participants, all deviants elicited an MMN response, which is a cortical index of deviance discrimination. The strength of the MMN and the subsequent attentional P3a responses reflected the importance of various sound features in each music genre: these automatic brain responses were selectively enhanced to deviants in tuning (classical musicians), timing (classical and jazz musicians), transposition (jazz musicians), and melody contour (jazz and rock musicians). Taken together, these results indicate that musicians with different training history have highly specialized cortical reactivity to sounds which violate the neural template for melody content.
Affiliation(s)
- Mari Tervaniemi
- Cognitive Brain Research Unit, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland; CICERO Learning, University of Helsinki, Helsinki, Finland
- Lauri Janhunen
- Cognitive Brain Research Unit, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Stefanie Kruck
- Cognitive Brain Research Unit, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Vesa Putkinen
- Department of Music, University of Jyväskylä, Jyväskylä, Finland
- Minna Huotilainen
- Cognitive Brain Research Unit, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland; CICERO Learning, University of Helsinki, Helsinki, Finland; Finnish Institute of Occupational Health, Helsinki, Finland
45
Poikonen H, Alluri V, Brattico E, Lartillot O, Tervaniemi M, Huotilainen M. Event-related brain responses while listening to entire pieces of music. Neuroscience 2015; 312:58-73. [PMID: 26550950 DOI: 10.1016/j.neuroscience.2015.10.061] [Citation(s) in RCA: 37] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2015] [Revised: 10/28/2015] [Accepted: 10/30/2015] [Indexed: 12/19/2022]
Abstract
Brain responses to discrete short sounds have been studied intensively using the event-related potential (ERP) method, in which the electroencephalogram (EEG) signal is divided into epochs time-locked to stimuli of interest. Here we introduce and apply a novel technique that enables one to isolate ERPs elicited in humans by continuous music. The ERPs were recorded during listening to a Tango Nuevo piece, a deep techno track, and an acoustic lullaby. Acoustic features related to timbre, harmony, and dynamics of the audio signal were computationally extracted from the musical pieces. The negative deflection occurring around 100 milliseconds after stimulus onset (N100) and the positive deflection occurring around 200 milliseconds after stimulus onset (P200) in response to peak changes in the acoustic features were distinguishable and were often largest for the Tango Nuevo piece. In addition to large changes in these musical features, long phases of low feature values followed by a rapid increase, which we call Preceding Low-Feature Phases, enhanced the amplitudes of the N100 and P200 responses. These ERP responses resembled those to simpler sounds, making it possible to extend the tradition of ERP research to naturalistic paradigms.
Affiliation(s)
- H Poikonen
- Cognitive Brain Research Unit, Cognitive Science, Institute of Behavioural Sciences, University of Helsinki, P.O. Box 9 (Siltavuorenpenger 1 B), FI-00014 University of Helsinki, Finland.
- V Alluri
- Department of Music, University of Jyväskylä, P.O. Box 35, 40014 University of Jyväskylä, Finland.
- E Brattico
- Cognitive Brain Research Unit, Cognitive Science, Institute of Behavioural Sciences, University of Helsinki, P.O. Box 9 (Siltavuorenpenger 1 B), FI-00014 University of Helsinki, Finland; Center for Music in the Brain (MIB), Department of Clinical Medicine, Aarhus University, Nørrebrograde 44, DK-8000 Aarhus C, Denmark.
- O Lartillot
- Department of Architecture, Design and Media Technology, University of Aalborg, Rendsburggade 14, DK-9000 Aalborg, Denmark.
- M Tervaniemi
- Cognitive Brain Research Unit, Cognitive Science, Institute of Behavioural Sciences, University of Helsinki, P.O. Box 9 (Siltavuorenpenger 1 B), FI-00014 University of Helsinki, Finland; Cicero Learning, P.O. Box 9 (Siltavuorenpenger 5 A), FI-00014 University of Helsinki, Finland.
- M Huotilainen
- Cognitive Brain Research Unit, Cognitive Science, Institute of Behavioural Sciences, University of Helsinki, P.O. Box 9 (Siltavuorenpenger 1 B), FI-00014 University of Helsinki, Finland; Cicero Learning, P.O. Box 9 (Siltavuorenpenger 5 A), FI-00014 University of Helsinki, Finland; Finnish Institute of Occupational Health, Haartmaninkatu 1 A, 00250 Helsinki, Finland.
46
Slater J, Skoe E, Strait DL, O’Connell S, Thompson E, Kraus N. Music training improves speech-in-noise perception: Longitudinal evidence from a community-based music program. Behav Brain Res 2015; 291:244-252. [DOI: 10.1016/j.bbr.2015.05.026] [Citation(s) in RCA: 98] [Impact Index Per Article: 10.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2015] [Revised: 05/09/2015] [Accepted: 05/13/2015] [Indexed: 02/01/2023]
47
Putkinen V, Tervaniemi M, Saarikivi K, Huotilainen M. Promises of formal and informal musical activities in advancing neurocognitive development throughout childhood. Ann N Y Acad Sci 2015; 1337:153-62. [PMID: 25773630 DOI: 10.1111/nyas.12656] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
Adult musicians show superior neural sound discrimination when compared to nonmusicians. However, it is unclear whether these group differences reflect the effects of experience or preexisting neural enhancement in individuals who seek out musical training. Tracking how brain function matures over time in musically trained and nontrained children can shed light on this issue. Here, we review our recent longitudinal event-related potential (ERP) studies that examine how formal musical training and less formal musical activities influence the maturation of brain responses related to sound discrimination and auditory attention. These studies found that musically trained school-aged children and preschool-aged children attending a musical playschool show more rapid maturation of neural sound discrimination than their control peers. Importantly, we found no evidence for pretraining group differences. In a related cross-sectional study, we found ERP and behavioral evidence for improved executive functions and control over auditory novelty processing in musically trained school-aged children and adolescents. Taken together, these studies provide evidence for the causal role of formal musical training and less formal musical activities in shaping the development of important neural auditory skills and suggest transfer effects with domain-general implications.
Affiliation(s)
- Vesa Putkinen
- Cognitive Brain Research Unit, Cognitive Science, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland; Finnish Centre of Interdisciplinary Music Research, University of Jyväskylä, Jyväskylä, Finland
48
Schellenberg EG. Music training and speech perception: a gene-environment interaction. Ann N Y Acad Sci 2015; 1337:170-7. [PMID: 25773632 DOI: 10.1111/nyas.12627] [Citation(s) in RCA: 36] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/01/2023]
Abstract
Claims of beneficial side effects of music training are made for many different abilities, including verbal and visuospatial abilities, executive functions, working memory, IQ, and speech perception in particular. Such claims assume that music training causes the associations even though children who take music lessons are likely to differ from other children in music aptitude, which is associated with many aspects of speech perception. Music training in childhood is also associated with cognitive, personality, and demographic variables, and it is well established that IQ and personality are determined largely by genetics. Recent evidence also indicates that the role of genetics in music aptitude and music achievement is much larger than previously thought. In short, music training is an ideal model for the study of gene-environment interactions but far less appropriate as a model for the study of plasticity. Children seek out environments, including those with music lessons, that are consistent with their predispositions; such environments exaggerate preexisting individual differences.
Affiliation(s)
- E Glenn Schellenberg
- Department of Psychology, University of Toronto Mississauga, Mississauga, Ontario, Canada
49
Wu H, Ma X, Zhang L, Liu Y, Zhang Y, Shu H. Musical experience modulates categorical perception of lexical tones in native Chinese speakers. Front Psychol 2015; 6:436. [PMID: 25918511 PMCID: PMC4394639 DOI: 10.3389/fpsyg.2015.00436] [Citation(s) in RCA: 37] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2015] [Accepted: 03/27/2015] [Indexed: 11/13/2022] Open
Abstract
Although musical training has been shown to facilitate both native and non-native phonetic perception, it remains unclear whether and how musical experience affects native speakers’ categorical perception (CP) of speech at the suprasegmental level. Using both identification and discrimination tasks, this study compared Chinese-speaking musicians and non-musicians in their CP of a lexical tone continuum (from the high level tone, Tone1, to the high falling tone, Tone4). While the identification functions showed similar steepness and boundary location between the two subject groups, the discrimination results revealed superior performance in the musicians for discriminating within-category stimulus pairs but not for between-category stimuli. These findings suggest that musical training can enhance sensitivity to subtle pitch differences between within-category sounds in the presence of robust mental representations in service of CP of lexical tonal contrasts.
Affiliation(s)
- Han Wu
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Xiaohui Ma
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Linjun Zhang
- Faculty of Linguistic Sciences, Beijing Language and Culture University, Beijing, China
- Youyi Liu
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Yang Zhang
- Department of Speech-Language-Hearing Sciences and Center for Neurobehavioral Development, University of Minnesota, Minneapolis, MN, USA
- Hua Shu
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
50
Rigoulot S, Pell MD, Armony JL. Time course of the influence of musical expertise on the processing of vocal and musical sounds. Neuroscience 2015; 290:175-84. [PMID: 25637804 DOI: 10.1016/j.neuroscience.2015.01.033] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2014] [Revised: 01/09/2015] [Accepted: 01/12/2015] [Indexed: 11/18/2022]
Abstract
Previous functional magnetic resonance imaging (fMRI) studies have suggested that different cerebral regions preferentially process human voice and music. Yet, little is known about the temporal course of the brain processes that decode the category of sounds and how expertise in one sound category can impact these processes. To address this question, we recorded the electroencephalogram (EEG) of 15 musicians and 18 non-musicians while they were listening to short musical excerpts (piano and violin) and vocal stimuli (speech and non-linguistic vocalizations). The task of the participants was to detect noise targets embedded within the stream of sounds. Event-related potentials revealed an early differentiation of sound category, within the first 100 ms after the onset of the sound, with mostly increased responses to musical sounds. Importantly, this effect was modulated by the musical background of participants, as musicians were more responsive to music sounds than non-musicians, consistent with the notion that musical training increases sensitivity to music. In late temporal windows, brain responses were enhanced in response to vocal stimuli, but musicians were still more responsive to music. These results shed new light on the temporal course of neural dynamics of auditory processing and reveal how it is impacted by the stimulus category and the expertise of participants.
Affiliation(s)
- S Rigoulot
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, Canada; Department of Psychiatry, McGill University and Douglas Mental Health University Institute, Montreal, Canada.
- M D Pell
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, Canada; School of Communication Sciences and Disorders, McGill University, Canada
- J L Armony
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, Canada; Department of Psychiatry, McGill University and Douglas Mental Health University Institute, Montreal, Canada
| |