51. Swaminathan J, Mason CR, Streeter TM, Best V, Kidd G, Patel AD. Musical training, individual differences and the cocktail party problem. Sci Rep 2015;5:11628. PMID: 26112910; PMCID: PMC4481518; DOI: 10.1038/srep11628

Abstract
Are musicians better able to understand speech in noise than non-musicians? Recent findings have produced contradictory results. Here we addressed this question by asking musicians and non-musicians to understand target sentences masked by other sentences presented from different spatial locations, the classical 'cocktail party problem' in speech science. We found that musicians obtained a substantial benefit in this situation, with thresholds ~6 dB better than non-musicians. Large individual differences in performance were noted, particularly for the non-musically trained group. Furthermore, in different conditions we manipulated the spatial location and intelligibility of the masking sentences, thus changing the amount of 'informational masking' (IM) while keeping the amount of 'energetic masking' (EM) relatively constant. When the maskers were unintelligible and spatially separated from the target (low in IM), musicians and non-musicians performed comparably. These results suggest that the characteristics of speech maskers and the amount of IM can influence the magnitude of the differences found between musicians and non-musicians in multiple-talker "cocktail party" environments. Furthermore, considering the task in terms of the EM-IM distinction provides a conceptual framework for future behavioral and neuroscientific studies which explore the underlying sensory and cognitive mechanisms contributing to enhanced "speech-in-noise" perception by musicians.

Affiliations
- Christine R Mason, Timothy M Streeter, Virginia Best, Gerald Kidd: Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA
52. Bidelman GM, Dexter L. Bilinguals at the "cocktail party": dissociable neural activity in auditory-linguistic brain regions reveals neurobiological basis for nonnative listeners' speech-in-noise recognition deficits. Brain Lang 2015;143:32-41. PMID: 25747886; DOI: 10.1016/j.bandl.2015.02.002

Abstract
We examined a consistent deficit observed in bilinguals: poorer speech-in-noise (SIN) comprehension for their nonnative language. We recorded neuroelectric mismatch potentials in mono- and bi-lingual listeners in response to contrastive speech sounds in noise. Behaviorally, late bilinguals required ~10 dB more favorable signal-to-noise ratios to match monolinguals' SIN abilities. Source analysis of cortical activity demonstrated monotonic increase in response latency with noise in superior temporal gyrus (STG) for both groups, suggesting parallel degradation of speech representations in auditory cortex. Contrastively, we found differential speech encoding between groups within inferior frontal gyrus (IFG)-adjacent to Broca's area-where noise delays observed in nonnative listeners were offset in monolinguals. Notably, brain-behavior correspondences double dissociated between language groups: STG activation predicted bilinguals' SIN, whereas IFG activation predicted monolinguals' performance. We infer higher-order brain areas act compensatorily to enhance impoverished sensory representations but only when degraded speech recruits linguistic brain mechanisms downstream from initial auditory-sensory inputs.

Affiliations
- Gavin M Bidelman: Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA
- Lauren Dexter: School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA
53. Carey D, Rosen S, Krishnan S, Pearce MT, Shepherd A, Aydelott J, Dick F. Generality and specificity in the effects of musical expertise on perception and cognition. Cognition 2015;137:81-105. DOI: 10.1016/j.cognition.2014.12.005
54. Musical training orchestrates coordinated neuroplasticity in auditory brainstem and cortex to counteract age-related declines in categorical vowel perception. J Neurosci 2015;35:1240-9. PMID: 25609638; DOI: 10.1523/jneurosci.3292-14.2015

Abstract
Musicianship in early life is associated with pervasive changes in brain function and enhanced speech-language skills. Whether these neuroplastic benefits extend to older individuals more susceptible to cognitive decline, and for whom plasticity is weaker, has yet to be established. Here, we show that musical training offsets declines in auditory brain processing that accompany normal aging in humans, preserving robust speech recognition late into life. We recorded both brainstem and cortical neuroelectric responses in older adults with and without modest musical training as they classified speech sounds along an acoustic-phonetic continuum. Results reveal higher temporal precision in speech-evoked responses at multiple levels of the auditory system in older musicians who were also better at differentiating phonetic categories. Older musicians also showed a closer correspondence between neural activity and perceptual performance. This suggests that musicianship strengthens brain-behavior coupling in the aging auditory system. Last, "neurometric" functions derived from unsupervised classification of neural activity established that early cortical responses could accurately predict listeners' psychometric speech identification and, more critically, that neurometric profiles were organized more categorically in older musicians. We propose that musicianship offsets age-related declines in speech listening by refining the hierarchical interplay between subcortical/cortical auditory brain representations, allowing more behaviorally relevant information to be carried within the neural code, and supplying more faithful templates to the brain mechanisms subserving phonetic computations. Our findings imply that robust neuroplasticity conferred by musical training is not restricted by age and may serve as an effective means to bolster speech listening skills that decline across the lifespan.
55. Bidelman GM, Alain C. Hierarchical neurocomputations underlying concurrent sound segregation: connecting periphery to percept. Neuropsychologia 2015;68:38-50. DOI: 10.1016/j.neuropsychologia.2014.12.020
56. Boebinger D, Evans S, Rosen S, Lima CF, Manly T, Scott SK. Musicians and non-musicians are equally adept at perceiving masked speech. J Acoust Soc Am 2015;137:378-87. PMID: 25618067; PMCID: PMC4434218; DOI: 10.1121/1.4904537

Abstract
There is much interest in the idea that musicians perform better than non-musicians in understanding speech in background noise. Research in this area has often used energetic maskers, which have their effects primarily at the auditory periphery. However, masking interference can also occur at more central auditory levels, known as informational masking. This experiment extends existing research by using multiple maskers that vary in their informational content and similarity to speech, in order to examine differences in perception of masked speech between trained musicians (n = 25) and non-musicians (n = 25). Although musicians outperformed non-musicians on a measure of frequency discrimination, they showed no advantage in perceiving masked speech. Further analysis revealed that non-verbal IQ, rather than musicianship, significantly predicted speech reception thresholds in noise. The results strongly suggest that the contribution of general cognitive abilities needs to be taken into account in any investigations of individual variability for perceiving speech in noise.

Affiliations
- Dana Boebinger, Samuel Evans: Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London WC1N 3AR, United Kingdom
- Stuart Rosen: Speech, Hearing & Phonetic Sciences, University College London, 2 Wakefield Street, London WC1N 2PF, United Kingdom
- César F Lima: Centre for Psychology at University of Porto, Rua Alfredo Allen, 4200-135 Porto, Portugal
- Tom Manly: Medical Research Council Cognition and Brain Sciences Unit, 15 Chaucer Road, Cambridge CB2 7EF, United Kingdom
- Sophie K Scott: Division of Psychology and Language Sciences, University College London, Gower Street, London WC1E 6BT, United Kingdom
57. Slater J, Strait DL, Skoe E, O'Connell S, Thompson E, Kraus N. Longitudinal effects of group music instruction on literacy skills in low-income children. PLoS One 2014;9:e113383. PMID: 25409300; PMCID: PMC4237413; DOI: 10.1371/journal.pone.0113383

Abstract
Children from low-socioeconomic backgrounds tend to fall progressively further behind their higher-income peers over the course of their academic careers. Music training has been associated with enhanced language and learning skills, suggesting that music programs could play a role in helping low-income children to stay on track academically. Using a controlled, longitudinal design, the impact of group music instruction on English reading ability was assessed in 42 low-income Spanish-English bilingual children aged 6-9 years in Los Angeles. After one year, children who received music training retained their age-normed level of reading performance while a matched control group's performance deteriorated, consistent with expected declines in this population. While the extent of change is modest, outcomes nonetheless provide evidence that music programs may have value in helping to counteract the negative effects of low-socioeconomic status on child literacy development.

Affiliations
- Jessica Slater, Erika Skoe, Elaine Thompson: Auditory Neuroscience Laboratory and Department of Communication Sciences, Northwestern University, Evanston, Illinois, United States of America
- Dana L. Strait: Auditory Neuroscience Laboratory and Institute for Neuroscience, Northwestern University, Evanston, Illinois, United States of America
- Samantha O'Connell: Auditory Neuroscience Laboratory, Northwestern University, Evanston, Illinois, United States of America
- Nina Kraus: Auditory Neuroscience Laboratory; Department of Communication Sciences; Institute for Neuroscience; Department of Neurobiology and Physiology; and Department of Otolaryngology, Northwestern University, Evanston, Illinois, United States of America
58. Zendel BR, Tremblay CD, Belleville S, Peretz I. The impact of musicianship on the cortical mechanisms related to separating speech from background noise. J Cogn Neurosci 2014;27:1044-59. PMID: 25390195; DOI: 10.1162/jocn_a_00758

Abstract
Musicians have enhanced auditory processing abilities. In some studies, these abilities are paralleled by an improved understanding of speech in noisy environments, partially due to more robust encoding of speech signals in noise at the level of the brainstem. Little is known about the impact of musicianship on attention-dependent cortical activity related to lexical access during a speech-in-noise task. To address this issue, we presented musicians and nonmusicians with single words mixed with three levels of background noise, across two conditions, while monitoring electrical brain activity. In the active condition, listeners repeated the words aloud, and in the passive condition, they ignored the words and watched a silent film. When background noise was most intense, musicians repeated more words correctly compared with nonmusicians. Auditory evoked responses were attenuated and delayed with the addition of background noise. In musicians, P1 amplitude was marginally enhanced during active listening and was related to task performance in the most difficult listening condition. By comparing ERPs from the active and passive conditions, we isolated an N400 related to lexical access. The amplitude of the N400 was not influenced by the level of background noise in musicians, whereas N400 amplitude increased with the level of background noise in nonmusicians. In nonmusicians, the increase in N400 amplitude was related to a reduction in task performance. In musicians only, there was a rightward shift of the sources contributing to the N400 as the level of background noise increased. This pattern of results supports the hypothesis that encoding of speech in noise is more robust in musicians and suggests that this facilitates lexical access. Moreover, the shift in sources suggests that musicians, to a greater extent than nonmusicians, may increasingly rely on acoustic cues to understand speech in noise.

Affiliations
- Benjamin Rich Zendel: International Laboratory for Brain, Music and Sound Research (BRAMS), Montréal, Québec, Canada
59. Bidelman GM, Weiss MW, Moreno S, Alain C. Coordinated plasticity in brainstem and auditory cortex contributes to enhanced categorical speech perception in musicians. Eur J Neurosci 2014;40:2662-73. PMID: 24890664; DOI: 10.1111/ejn.12627

Abstract
Musicianship is associated with neuroplastic changes in brainstem and cortical structures, as well as improved acuity for behaviorally relevant sounds including speech. However, further advance in the field depends on characterizing how neuroplastic changes in brainstem and cortical speech processing relate to one another and to speech-listening behaviors. Here, we show that subcortical and cortical neural plasticity interact to yield the linguistic advantages observed with musicianship. We compared brainstem and cortical neuroelectric responses elicited by a series of vowels that differed along a categorical speech continuum in amateur musicians and non-musicians. Musicians obtained steeper identification functions and classified speech sounds more rapidly than non-musicians. Behavioral advantages coincided with more robust and temporally coherent brainstem phase-locking to salient speech cues (voice pitch and formant information) coupled with increased amplitude in cortical-evoked responses, implying an overall enhancement in the nervous system's responsiveness to speech. Musicians' subcortical and cortical neural enhancements (but not behavioral measures) were correlated with their years of formal music training. Associations between multi-level neural responses were also stronger in musically trained listeners, and were better predictors of speech perception than in non-musicians. Results suggest that musicianship modulates speech representations at multiple tiers of the auditory pathway, and strengthens the correspondence of processing between subcortical and cortical areas to allow neural activity to carry more behaviorally relevant information. We infer that musicians have a refined hierarchy of internalized representations for auditory objects at both pre-attentive and attentive levels that supplies more faithful phonemic templates to decision mechanisms governing linguistic operations.

Affiliations
- Gavin M Bidelman: Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences & Disorders, University of Memphis, 807 Jefferson Ave, Memphis, TN 38105, USA
60. Koreimann S, Gula B, Vitouch O. Inattentional deafness in music. Psychol Res 2014;78:304-12. DOI: 10.1007/s00426-014-0552-x
61. Moreno S, Bidelman GM. Examining neural plasticity and cognitive benefit through the unique lens of musical training. Hear Res 2014;308:84-97. DOI: 10.1016/j.heares.2013.09.012
62. Strait DL, Kraus N. Biological impact of auditory expertise across the life span: musicians as a model of auditory learning. Hear Res 2014;308:109-21. PMID: 23988583; PMCID: PMC3947192; DOI: 10.1016/j.heares.2013.08.004

Abstract
Experience-dependent characteristics of auditory function, especially with regard to speech-evoked auditory neurophysiology, have garnered increasing attention in recent years. This interest stems from both pragmatic and theoretical concerns as it bears implications for the prevention and remediation of language-based learning impairment in addition to providing insight into mechanisms engendering experience-dependent changes in human sensory function. Musicians provide an attractive model for studying the experience-dependency of auditory processing in humans due to their distinctive neural enhancements compared to nonmusicians. We have only recently begun to address whether these enhancements are observable early in life, during the initial years of music training when the auditory system is under rapid development, as well as later in life, after the onset of the aging process. Here we review neural enhancements in musically trained individuals across the life span in the context of cellular mechanisms that underlie learning, identified in animal models. Musicians' subcortical physiologic enhancements are interpreted according to a cognitive framework for auditory learning, providing a model in which to study mechanisms of experience-dependent changes in human auditory function.

Affiliations
- Dana L Strait: Auditory Neuroscience Laboratory, Northwestern University, 2240 Campus Drive, Evanston, IL 60208, USA; Institute for Neuroscience, Northwestern University, Chicago, IL 60611, USA
- Nina Kraus: Auditory Neuroscience Laboratory, Northwestern University, 2240 Campus Drive, Evanston, IL 60208, USA; Institute for Neuroscience, Northwestern University, Chicago, IL 60611, USA; Department of Communication Sciences, Northwestern University, Evanston, IL 60208, USA; Department of Neurobiology & Physiology, Northwestern University, Evanston, IL 60208, USA; Department of Otolaryngology, Northwestern University, Evanston, IL 60208, USA
63. Alain C, Zendel BR, Hutka S, Bidelman GM. Turning down the noise: the benefit of musical training on the aging auditory brain. Hear Res 2014;308:162-73. DOI: 10.1016/j.heares.2013.06.008
64. Ragert M, Fairhurst MT, Keller PE. Segregation and integration of auditory streams when listening to multi-part music. PLoS One 2014;9:e84085. PMID: 24475030; PMCID: PMC3901649; DOI: 10.1371/journal.pone.0084085

Abstract
In our daily lives, auditory stream segregation allows us to differentiate concurrent sound sources and to make sense of the scene we are experiencing. However, a combination of segregation and the concurrent integration of auditory streams is necessary in order to analyze the relationship between streams and thus perceive a coherent auditory scene. The present functional magnetic resonance imaging study investigates the relative role and neural underpinnings of these listening strategies in multi-part musical stimuli. We compare a real human performance of a piano duet and a synthetic stimulus of the same duet in a prioritized integrative attention paradigm that required the simultaneous segregation and integration of auditory streams. In so doing, we manipulate the degree to which the attended part of the duet led either structurally (attend melody vs. attend accompaniment) or temporally (asynchronies vs. no asynchronies between parts), and thus the relative contributions of integration and segregation used to make an assessment of the leader-follower relationship. We show that perceptually the relationship between parts is biased towards the conventional structural hierarchy in western music in which the melody generally dominates (leads) the accompaniment. Moreover, the assessment varies as a function of both cognitive load, as shown through difficulty ratings, and the interaction of the temporal and the structural relationship factors. Neurally, we see that the temporal relationship between parts, as one important cue for stream segregation, revealed distinct neural activity in the planum temporale. By contrast, integration used when listening to both the temporally separated performance stimulus and the temporally fused synthetic stimulus resulted in activation of the intraparietal sulcus (IPS). These results support the hypothesis that the planum temporale and IPS are key structures underlying the mechanisms of segregation and integration of auditory streams, respectively.

Affiliations
- Marie Ragert: Max Planck Institute for Human Cognitive and Brain Sciences, Research Group: Music Cognition and Action, Leipzig, Germany
- Merle T. Fairhurst: Max Planck Institute for Human Cognitive and Brain Sciences, Research Groups: Music Cognition and Action and Early Social Development, Leipzig, Germany
- Peter E. Keller: Max Planck Institute for Human Cognitive and Brain Sciences, Research Group: Music Cognition and Action, Leipzig, Germany; The MARCS Institute, Music Cognition and Action Group, University of Western Sydney, Sydney, Australia
65. Enhanced attention-dependent activity in the auditory cortex of older musicians. Neurobiol Aging 2014;35:55-63. DOI: 10.1016/j.neurobiolaging.2013.06.022
66. Zuk J, Ozernov-Palchik O, Kim H, Lakshminarayanan K, Gabrieli JDE, Tallal P, Gaab N. Enhanced syllable discrimination thresholds in musicians. PLoS One 2013;8:e80546. PMID: 24339875; PMCID: PMC3855080; DOI: 10.1371/journal.pone.0080546

Abstract
Speech processing inherently relies on the perception of specific, rapidly changing spectral and temporal acoustic features. Advanced acoustic perception is also integral to musical expertise, and accordingly several studies have demonstrated a significant relationship between musical training and superior processing of various aspects of speech. Speech and music appear to overlap in spectral and temporal features; however, it remains unclear which of these acoustic features, crucial for speech processing, are most closely associated with musical training. The present study examined the perceptual acuity of musicians to the acoustic components of speech necessary for intra-phonemic discrimination of synthetic syllables. We compared musicians and non-musicians on discrimination thresholds of three synthetic speech syllable continua that varied in their spectral and temporal discrimination demands, specifically voice onset time (VOT) and amplitude envelope cues in the temporal domain. Musicians demonstrated superior discrimination only for syllables that required resolution of temporal cues. Furthermore, performance on the temporal syllable continua positively correlated with the length and intensity of musical training. These findings support one potential mechanism by which musical training may selectively enhance speech perception, namely by reinforcing temporal acuity and/or perception of amplitude rise time, with implications for the translation of musical training to long-term linguistic abilities.

Affiliations
- Jennifer Zuk: Laboratories of Cognitive Neuroscience, Developmental Medicine Center, Boston Children's Hospital, Boston, Massachusetts, United States of America; Harvard Medical School, Boston, Massachusetts, United States of America
- Ola Ozernov-Palchik: Laboratories of Cognitive Neuroscience, Developmental Medicine Center, Boston Children's Hospital, Boston, Massachusetts, United States of America
- Heesoo Kim: Helen Wills Neuroscience Institute, University of California, Berkeley, California, United States of America
- Kala Lakshminarayanan, Paula Tallal: Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, New Jersey, United States of America
- John D. E. Gabrieli: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Nadine Gaab: Laboratories of Cognitive Neuroscience, Developmental Medicine Center, Boston Children's Hospital, Boston, Massachusetts, United States of America; Harvard Medical School, Boston, Massachusetts, United States of America; Harvard Graduate School of Education, Cambridge, Massachusetts, United States of America
67. White EJ, Hutka SA, Williams LJ, Moreno S. Learning, neural plasticity and sensitive periods: implications for language acquisition, music training and transfer across the lifespan. Front Syst Neurosci 2013;7:90. PMID: 24312022; PMCID: PMC3834520; DOI: 10.3389/fnsys.2013.00090

Abstract
Sensitive periods in human development have often been proposed to explain age-related differences in the attainment of a number of skills, such as a second language (L2) and musical expertise. It is difficult to reconcile the negative consequence this traditional view entails for learning after a sensitive period with our current understanding of the brain's ability for experience-dependent plasticity across the lifespan. What is needed is a better understanding of the mechanisms underlying auditory learning and plasticity at different points in development. Drawing on research in language development and music training, this review examines not only what we learn and when we learn it, but also how learning occurs at different ages. First, we discuss differences in the mechanism of learning and plasticity during and after a sensitive period by examining how language exposure versus training forms language-specific phonetic representations in infants and adult L2 learners, respectively. Second, we examine the impact of musical training that begins at different ages on behavioral and neural indices of auditory and motor processing as well as sensorimotor integration. Third, we examine the extent to which childhood training in one auditory domain can enhance processing in another domain via the transfer of learning between shared neuro-cognitive systems. Specifically, we review evidence for a potential bi-directional transfer of skills between music and language by examining how speaking a tonal language may enhance music processing and, conversely, how early music training can enhance language processing. We conclude with a discussion of the role of attention in auditory learning during and after sensitive periods and outline avenues of future research.

Affiliations
- Erin J. White: Rotman Research Institute, Baycrest, Toronto, ON, Canada
68
|
Alain C, Zendel BR, Hutka S, Bidelman GM. Turning down the noise: the benefit of musical training on the aging auditory brain. Hear Res 2013; 308:162-73. [PMID: 23831039 DOI: 10.1016/j.heares.2013.06.008]
Abstract
Age-related decline in hearing abilities is a ubiquitous part of aging, and commonly impacts speech understanding, especially when there are competing sound sources. While such age effects are partially due to changes within the cochlea, difficulties typically exist beyond measurable hearing loss, suggesting that central brain processes, as opposed to simple peripheral mechanisms (e.g., hearing sensitivity), play a critical role in governing hearing abilities late into life. Current training regimens aimed at improving central auditory processing abilities have had limited success in promoting listening benefits. Interestingly, recent studies suggest that in young adults, musical training positively modifies neural mechanisms, providing robust, long-lasting improvements to hearing abilities as well as to non-auditory tasks that engage cognitive control. These results offer the encouraging possibility that musical training might be used to counteract age-related changes in auditory cognition commonly observed in older adults. Here, we review studies that have examined the effects of age and musical experience on auditory cognition with an emphasis on auditory scene analysis. We infer that musical training may offer benefits to complex listening and might be utilized as a means to delay or even attenuate declines in auditory perception and cognition that often emerge later in life.
Affiliation(s)
- Claude Alain
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Canada; Department of Psychology, University of Toronto, Canada
- Benjamin Rich Zendel
- International Laboratory for Brain, Music and Sound Research (BRAMS), Département de Psychologie, Université de Montréal, Québec, Canada; Centre de Recherche, Institut Universitaire de Gériatrie de Montréal, Québec, Canada
- Stefanie Hutka
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Canada; Department of Psychology, University of Toronto, Canada
- Gavin M Bidelman
- Institute for Intelligent Systems & School of Communication Sciences and Disorders, University of Memphis, USA

69
Ono K, Matsuhashi M, Mima T, Fukuyama H, Altmann CF. Effects of regularity on the processing of sound omission in a tone sequence in musicians and non-musicians. Eur J Neurosci 2013; 38:2786-92. [DOI: 10.1111/ejn.12254]
Affiliation(s)
- Masao Matsuhashi
- Human Brain Research Center, Graduate School of Medicine, Kyoto University, Kyoto, Japan
- Tatsuya Mima
- Human Brain Research Center, Graduate School of Medicine, Kyoto University, Kyoto, Japan
- Hidenao Fukuyama
- Human Brain Research Center, Graduate School of Medicine, Kyoto University, Kyoto, Japan

70
Uhlig M, Fairhurst MT, Keller PE. The importance of integration and top-down salience when listening to complex multi-part musical stimuli. Neuroimage 2013; 77:52-61. [PMID: 23558103 DOI: 10.1016/j.neuroimage.2013.03.051]
Abstract
In listening to multi-part music, auditory streams can be attended to either selectively or globally. More specifically, musicians rely on prioritized integrative attention which incorporates both stream segregation and integration to assess the relationship between concurrent parts. In this fMRI study, we used a piano duet to investigate which factors of a leader-follower relationship between parts grab the listener's attention and influence the perception of multi-part music. The factors considered included the structural relationship between melody and accompaniment as well as the temporal relationship (asynchronies) between parts. The structural relationship was manipulated by cueing subjects to the part of the duet that had to be prioritized. The temporal relationship was investigated by synthetically shifting the onset times of melody and accompaniment to either a consistent melody or accompaniment lead. The relative importance of these relationship factors for segregation and integration as attentional mechanisms was of interest. Participants were required to listen to the cued part and then globally assess if the prioritized stream was leading or following compared to the second stream. Results show that the melody is judged as more leading when it is globally temporally ahead whereas the accompaniment is not judged as leading when it is ahead. This bias may be a result of the interaction of salience of both leader-follower relationship factors. Interestingly, the corresponding interaction effect in the fMRI-data yields an inverse bias for melody in a fronto-parietal attention network. Corresponding parameter estimates within the dlPFC and right IPS show higher neural activity for attending to melody when listening to a performance without a temporal leader, pointing to an interaction of salience of both factors in listening to music. Both frontal and parietal activation implicate segregation and integration mechanisms and a top-down influence of salience on attention and the perception of leader-follower relations in music.
Affiliation(s)
- Marie Uhlig
- Max Planck Institute for Human Cognitive and Brain Sciences, Research Group, Music Cognition and Action, Stephanstrasse 1a, Leipzig, Germany

71
Law LNC, Zentner M. Assessing musical abilities objectively: construction and validation of the Profile of Music Perception Skills. PLoS One 2012; 7:e52508. [PMID: 23285071 PMCID: PMC3532219 DOI: 10.1371/journal.pone.0052508]
Abstract
A common approach for determining musical competence is to rely on information about individuals' extent of musical training, but relying on musicianship status fails to identify musically untrained individuals with musical skill, as well as those who, despite extensive musical training, may not be as skilled. To counteract this limitation, we developed a new test battery (Profile of Music Perception Skills; PROMS) that measures perceptual musical skills across multiple domains: tonal (melody, pitch), qualitative (timbre, tuning), temporal (rhythm, rhythm-to-melody, accent, tempo), and dynamic (loudness). The PROMS has satisfactory psychometric properties for the composite score (internal consistency and test-retest r > .85) and fair to good coefficients for the individual subtests (.56 to .85). Convergent validity was established with the relevant dimensions of Gordon's Advanced Measures of Music Audiation and Musical Aptitude Profile (melody, rhythm, tempo), the Musical Ear Test (rhythm), and sample instrumental sounds (timbre). Criterion validity was evidenced by consistently sizeable and significant relationships between test performance and external musical proficiency indicators in all three studies (.38 to .62, p<.05 to p<.01). An absence of correlations between test scores and a nonmusical auditory discrimination task supports the battery's discriminant validity (-.05, ns). The interrelationships among the various subtests could be accounted for by two higher order factors, sequential and sensory music processing. A brief version of the full PROMS is introduced as a time-efficient approximation of the full version of the battery.
Affiliation(s)
- Lily N. C. Law
- Department of Psychology, University of York, York, United Kingdom
- Marcel Zentner
- Department of Psychology, University of York, York, United Kingdom

72
Parbery-Clark A, Anderson S, Hittner E, Kraus N. Musical experience strengthens the neural representation of sounds important for communication in middle-aged adults. Front Aging Neurosci 2012. [PMID: 23189051 PMCID: PMC3504955 DOI: 10.3389/fnagi.2012.00030]
Abstract
Older adults frequently complain that while they can hear a person talking, they cannot understand what is being said; this difficulty is exacerbated by background noise. Peripheral hearing loss cannot fully account for this age-related decline in speech-in-noise ability, as declines in central processing also contribute to this problem. Given that musicians have enhanced speech-in-noise perception, we aimed to define the effects of musical experience on subcortical responses to speech and speech-in-noise perception in middle-aged adults. Results reveal that musicians have enhanced neural encoding of speech in quiet and noisy settings. Enhancements include faster neural response timing, higher neural response consistency, more robust encoding of speech harmonics, and greater neural precision. Taken together, we suggest that musical experience provides perceptual benefits in an aging population by strengthening the underlying neural pathways necessary for the accurate representation of important temporal and spectral features of sound.
Affiliation(s)
- Alexandra Parbery-Clark
- Auditory Neuroscience Laboratory, Northwestern University, Evanston, IL, USA; Communication Sciences, Northwestern University, Evanston, IL, USA

73
Zendel BR, Alain C. The influence of lifelong musicianship on neurophysiological measures of concurrent sound segregation. J Cogn Neurosci 2012; 25:503-16. [PMID: 23163409 DOI: 10.1162/jocn_a_00329]
Abstract
The ability to separate concurrent sounds based on periodicity cues is critical for parsing complex auditory scenes. This ability is enhanced in young adult musicians and reduced in older adults. Here, we investigated the impact of lifelong musicianship on concurrent sound segregation and perception using scalp-recorded ERPs. Older and younger musicians and nonmusicians were presented with periodic harmonic complexes where the second harmonic could be tuned or mistuned by 1-16% of its original value. The likelihood of perceiving two simultaneous sounds increased with mistuning, and musicians, both older and younger, were more likely to detect and report hearing two sounds when the second harmonic was mistuned at or above 2%. The perception of a mistuned harmonic as a separate sound was paralleled by an object-related negativity that was larger and earlier in younger musicians compared with the other three groups. When listeners made a judgment about the harmonic stimuli, the perception of the mistuned harmonic as a separate sound was paralleled by a positive wave at about 400 msec poststimulus (P400), which was enhanced in both older and younger musicians. These findings suggest that attention-dependent processing of a mistuned harmonic is enhanced in older musicians and provide further evidence that age-related decline in hearing abilities is mitigated by musical training.
74
Moussard A, Rochette F, Bigand E. La musique comme outil de stimulation cognitive [Music as a tool for cognitive stimulation]. L'Année Psychologique 2012. [DOI: 10.3917/anpsy.123.0499]
75
Parbery-Clark A, Tierney A, Strait DL, Kraus N. Musicians have fine-tuned neural distinction of speech syllables. Neuroscience 2012; 219:111-9. [PMID: 22634507 DOI: 10.1016/j.neuroscience.2012.05.042]
Abstract
One of the benefits musicians derive from their training is an increased ability to detect small differences between sounds. Here, we asked whether musicians' experience discriminating sounds on the basis of small acoustic differences confers advantages in the subcortical differentiation of closely related speech sounds (e.g., /ba/ and /ga/), distinguishable only by their harmonic spectra (i.e., their second formant trajectories). Although the second formant is particularly important for distinguishing stop consonants, auditory brainstem neurons do not phase-lock to its frequency range (above 1000 Hz). Instead, brainstem neurons convert this high-frequency content into neural response timing differences. As such, speech tokens with higher formant frequencies elicit earlier brainstem responses than those with lower formant frequencies. By measuring the degree to which subcortical response timing differs to the speech syllables /ba/, /da/, and /ga/ in adult musicians and nonmusicians, we reveal that musicians demonstrate enhanced subcortical discrimination of closely related speech sounds. Furthermore, the extent of subcortical consonant discrimination correlates with speech-in-noise perception. Taken together, these findings show a musician enhancement for the neural processing of speech and reveal a biological mechanism contributing to musicians' enhanced speech perception in noise.
Affiliation(s)
- A Parbery-Clark
- Auditory Neuroscience Laboratory, Northwestern University, Evanston, IL 60208, USA

76
Mikutta C, Altorfer A, Strik W, Koenig T. Emotions, arousal, and frontal alpha rhythm asymmetry during Beethoven's 5th Symphony. Brain Topogr 2012; 25:423-30. [PMID: 22534936 DOI: 10.1007/s10548-012-0227-0]
77
Itoh K, Suwazono S, Nakada T. Central auditory processing of noncontextual consonance in music: an evoked potential study. J Acoust Soc Am 2010; 128:3781-7. [PMID: 21218909 DOI: 10.1121/1.3500685]
Abstract
The consonance of individual chords presented out of musical context, or the noncontextual consonance of chords, is usually defined as the absence of roughness, which is a sensation perceived when slightly mistuned frequencies are not clearly resolved in the cochlea. The present work uses evoked potentials to demonstrate that the absence of roughness is not sufficient to explain the entirety of noncontextual consonance perception. Presented with a random sequence of various pure-tone intervals (0-13 semitones), listeners' cerebral cortical activities distinguished these stimuli according to their noncontextual consonance in a manner consistent with standard musical practice, even when the intervals exceeded the critical bandwidth (approximately three semitones). The roughness-based model of noncontextual consonance could not account for this result because these wide intervals had indistinguishably low levels of roughness. Further, this effect was evident only in musicians, indicating plasticity in the underlying neural mechanisms. The results are consistent with the hypothesis that, although the absence of roughness may represent an important aspect of noncontextual consonance, properties of intervals other than those related to roughness also contribute to this perception, underpinned by neural activity in the central auditory system that can be plastically modified by experience.
Affiliation(s)
- Kosuke Itoh
- Center for Integrated Human Brain Science, Brain Research Institute, University of Niigata, Asahimachi 1-757, Niigata 951-8585, Japan

78
Bidelman GM, Krishnan A. Effects of reverberation on brainstem representation of speech in musicians and non-musicians. Brain Res 2010; 1355:112-25. [PMID: 20691672 PMCID: PMC2939203 DOI: 10.1016/j.brainres.2010.07.100]
Abstract
Perceptual and neurophysiological enhancements in linguistic processing in musicians suggest that domain-specific experience may enhance neural resources recruited for language-specific behaviors. In everyday situations, listeners are faced with extracting speech signals in degraded listening conditions. Here, we examine whether musical training provides resilience to the degradative effects of reverberation on subcortical representations of pitch and formant-related harmonic information of speech. Brainstem frequency-following responses (FFRs) were recorded from musicians and non-musician controls in response to the vowel /i/ in four different levels of reverberation and analyzed based on their spectro-temporal composition. For both groups, reverberation had little effect on the neural encoding of pitch but significantly degraded neural encoding of formant-related harmonics (i.e., vowel quality), suggesting a differential impact on the source-filter components of speech. However, in quiet and across nearly all reverberation conditions, musicians showed more robust responses than non-musicians. Neurophysiologic results were confirmed behaviorally by comparing brainstem spectral magnitudes with perceptual measures of fundamental (F0) and first formant (F1) frequency difference limens (DLs). For both types of discrimination, musicians obtained DLs which were 2-4 times better than non-musicians. Results suggest that musicians' enhanced neural encoding of acoustic features, an experience-dependent effect, is more resistant to reverberation degradation, which may explain their enhanced perceptual ability on behaviorally relevant speech and/or music tasks in adverse listening conditions.
Affiliation(s)
- Gavin M. Bidelman
- Department of Speech Language Hearing Sciences, Purdue University, West Lafayette, Indiana, USA
- Ananthanarayan Krishnan
- Department of Speech Language Hearing Sciences, Purdue University, West Lafayette, Indiana, USA

79
The effect of visual cues on auditory stream segregation in musicians and non-musicians. PLoS One 2010; 5:e11297. [PMID: 20585606 PMCID: PMC2890685 DOI: 10.1371/journal.pone.0011297]
Abstract
Background: The ability to separate two interleaved melodies is an important factor in music appreciation. This ability is greatly reduced in people with hearing impairment, contributing to difficulties in music appreciation. The aim of this study was to assess whether visual cues, musical training or musical context could have an effect on this ability, and potentially improve music appreciation for the hearing impaired.
Methods: Musicians (N = 18) and non-musicians (N = 19) were asked to rate the difficulty of segregating a four-note repeating melody from interleaved random distracter notes. Visual cues were provided on half the blocks, and two musical contexts were tested, with the overlap between melody and distracter notes either gradually increasing or decreasing.
Conclusions: Visual cues, musical training, and musical context all affected the difficulty of extracting the melody from a background of interleaved random distracter notes. Visual cues were effective in reducing the difficulty of segregating the melody from distracter notes, even in individuals with no musical training. These results are consistent with theories that indicate an important role for central (top-down) processes in auditory streaming mechanisms, and suggest that visual cues may help the hearing-impaired enjoy music.
80
Musical experience limits the degradative effects of background noise on the neural processing of sound. J Neurosci 2009; 29:14100-7. [PMID: 19906958 DOI: 10.1523/jneurosci.3256-09.2009]
Abstract
Musicians have lifelong experience parsing melodies from background harmonies, which can be considered a process analogous to speech perception in noise. To investigate the effect of musical experience on the neural representation of speech-in-noise, we compared subcortical neurophysiological responses to speech in quiet and noise in a group of highly trained musicians and nonmusician controls. Musicians were found to have a more robust subcortical representation of the acoustic stimulus in the presence of noise. Specifically, musicians demonstrated faster neural timing, enhanced representation of speech harmonics, and less degraded response morphology in noise. Neural measures were associated with better behavioral performance on the Hearing in Noise Test (HINT) for which musicians outperformed the nonmusician controls. These findings suggest that musical experience limits the negative effects of competing background noise, thereby providing the first biological evidence for musicians' perceptual advantage for speech-in-noise.
81