1. Teng X, Larrouy-Maestri P, Poeppel D. Segmenting and Predicting Musical Phrase Structure Exploits Neural Gain Modulation and Phase Precession. J Neurosci 2024; 44:e1331232024. PMID: 38926087; PMCID: PMC11270514; DOI: 10.1523/jneurosci.1331-23.2024.
Abstract
Music, like spoken language, is often characterized by hierarchically organized structure. Previous experiments have shown neural tracking of notes and beats, but little work touches on the more abstract question: how does the brain establish high-level musical structures in real time? We presented Bach chorales to participants (20 females and 9 males) undergoing electroencephalogram (EEG) recording to investigate how the brain tracks musical phrases. We removed the main temporal cues to phrasal structures, so that listeners could only rely on harmonic information to parse a continuous musical stream. Phrasal structures were disrupted by locally or globally reversing the harmonic progression, so that our observations on the original music could be controlled and compared. We first replicated the findings on neural tracking of musical notes and beats, substantiating the positive correlation between musical training and neural tracking. Critically, we discovered a neural signature in the frequency range ∼0.1 Hz (modulations of EEG power) that reliably tracks musical phrasal structure. Next, we developed an approach to quantify the phrasal phase precession of the EEG power, revealing that phrase tracking is indeed an operation of active segmentation involving predictive processes. We demonstrate that the brain establishes complex musical structures online over long timescales (>5 s) and actively segments continuous music streams in a manner comparable to language processing. These two neural signatures, phrase tracking and phrasal phase precession, provide new conceptual and technical tools to study the processes underpinning high-level structure building using noninvasive recording techniques.
Affiliation(s)
- Xiangbin Teng: Department of Psychology, The Chinese University of Hong Kong, Shatin, Hong Kong SAR, China
- Pauline Larrouy-Maestri: Music Department, Max-Planck-Institute for Empirical Aesthetics, Frankfurt 60322, Germany; Center for Language, Music, and Emotion (CLaME), New York, New York 10003
- David Poeppel: Center for Language, Music, and Emotion (CLaME), New York, New York 10003; Department of Psychology, New York University, New York, New York 10003; Ernst Struengmann Institute for Neuroscience, Frankfurt 60528, Germany; Music and Audio Research Laboratory (MARL), New York, New York 11201
2. Vigl J, Talamini F, Strauss H, Zentner M. Prosodic discrimination skills mediate the association between musical aptitude and vocal emotion recognition ability. Sci Rep 2024; 14:16462. PMID: 39014043; PMCID: PMC11252295; DOI: 10.1038/s41598-024-66889-y.
Abstract
The current study tested the hypothesis that the association between musical ability and vocal emotion recognition skills is mediated by accuracy in prosody perception. Furthermore, it was investigated whether this association is primarily related to musical expertise, operationalized by long-term engagement in musical activities, or musical aptitude, operationalized by a test of musical perceptual ability. To this end, we conducted three studies: In Study 1 (N = 85) and Study 2 (N = 93), we developed and validated a new instrument for the assessment of prosodic discrimination ability. In Study 3 (N = 136), we examined whether the association between musical ability and vocal emotion recognition was mediated by prosodic discrimination ability. We found evidence for a full mediation, though only in relation to musical aptitude and not in relation to musical expertise. Taken together, these findings suggest that individuals with high musical aptitude have superior prosody perception skills, which in turn contribute to their vocal emotion recognition skills. Importantly, our results suggest that these benefits are not unique to musicians, but extend to non-musicians with high musical aptitude.
Affiliation(s)
- Julia Vigl: Department of Psychology, University of Innsbruck, Universitätsstraße 15, 6020 Innsbruck, Austria
- Francesca Talamini: Department of Psychology, University of Innsbruck, Universitätsstraße 15, 6020 Innsbruck, Austria
- Hannah Strauss: Department of Psychology, University of Innsbruck, Universitätsstraße 15, 6020 Innsbruck, Austria
- Marcel Zentner: Department of Psychology, University of Innsbruck, Universitätsstraße 15, 6020 Innsbruck, Austria
3. Bidelman GM, Sisson A, Rizzi R, MacLean J, Baer K. Myogenic artifacts masquerade as neuroplasticity in the auditory frequency-following response. Front Neurosci 2024; 18:1422903. PMID: 39040631; PMCID: PMC11260751; DOI: 10.3389/fnins.2024.1422903.
Abstract
The frequency-following response (FFR) is an evoked potential that provides a neural index of complex sound encoding in the brain. FFRs have been widely used to characterize speech and music processing, experience-dependent neuroplasticity (e.g., learning and musicianship), and biomarkers for hearing and language-based disorders that distort receptive communication abilities. It is widely assumed that FFRs stem from a mixture of phase-locked neurogenic activity from the brainstem and cortical structures along the hearing neuraxis. In this study, we challenge this prevailing view by demonstrating that upwards of ~50% of the FFR can originate from an unexpected myogenic source: contamination from the postauricular muscle (PAM) vestigial startle reflex. We measured PAM, transient auditory brainstem responses (ABRs), and sustained frequency-following response (FFR) potentials reflecting myogenic (PAM) and neurogenic (ABR/FFR) responses in young, normal-hearing listeners with varying degrees of musical training. We first establish that PAM artifact is present in all ears, varies with electrode proximity to the muscle, and can be experimentally manipulated by directing listeners' eye gaze toward the ear of sound stimulation. We then show this muscular noise easily confounds auditory FFRs, spuriously amplifying responses 3-4-fold with tandem PAM contraction and even explaining putative FFR enhancements observed in highly skilled musicians. Our findings expose a new and unrecognized myogenic source to the FFR that drives its large inter-subject variability and cast doubt on whether changes in the response typically attributed to neuroplasticity/pathology are solely of brain origin.
Affiliation(s)
- Gavin M. Bidelman: Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, United States; Program in Neuroscience, Indiana University, Bloomington, IN, United States; Cognitive Science Program, Indiana University, Bloomington, IN, United States
- Alexandria Sisson: Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, United States
- Rose Rizzi: Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, United States; Program in Neuroscience, Indiana University, Bloomington, IN, United States
- Jessica MacLean: Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, United States; Program in Neuroscience, Indiana University, Bloomington, IN, United States
- Kaitlin Baer: School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States; Veterans Affairs Medical Center, Memphis, TN, United States
4. Bidelman G, Sisson A, Rizzi R, MacLean J, Baer K. Myogenic artifacts masquerade as neuroplasticity in the auditory frequency-following response (FFR). bioRxiv [Preprint] 2024:2023.10.27.564446. PMID: 37961324; PMCID: PMC10634913; DOI: 10.1101/2023.10.27.564446.
Abstract
The frequency-following response (FFR) is an evoked potential that provides a "neural fingerprint" of complex sound encoding in the brain. FFRs have been widely used to characterize speech and music processing, experience-dependent neuroplasticity (e.g., learning, musicianship), and biomarkers for hearing and language-based disorders that distort receptive communication abilities. It is widely assumed that FFRs stem from a mixture of phase-locked neurogenic activity from brainstem and cortical structures along the hearing neuraxis. Here, we challenge this prevailing view by demonstrating that upwards of ~50% of the FFR can originate from a non-neural source: contamination from the postauricular muscle (PAM) vestigial startle reflex. We first establish that PAM artifact is present in all ears, varies with electrode proximity to the muscle, and can be experimentally manipulated by directing listeners' eye gaze toward the ear of sound stimulation. We then show that this muscular noise easily confounds auditory FFRs, spuriously amplifying responses 3- to 4-fold with tandem PAM contraction and even explaining putative FFR enhancements observed in highly skilled musicians. Our findings expose a new and unrecognized myogenic source to the FFR that drives its large inter-subject variability and cast doubt on whether changes in the response typically attributed to neuroplasticity/pathology are solely of brain origin.
5. Bidelman GM, Bernard F, Skubic K. Hearing in categories aids speech streaming at the "cocktail party". bioRxiv [Preprint] 2024:2024.04.03.587795. PMID: 38617284; PMCID: PMC11014555; DOI: 10.1101/2024.04.03.587795.
Abstract
Our perceptual system bins elements of the speech signal into categories to make speech perception manageable. Here, we aimed to test whether hearing speech in categories (as opposed to a continuous/gradient fashion) affords yet another benefit to speech recognition: parsing noisy speech at the "cocktail party." We measured speech recognition in a simulated 3D cocktail party environment. We manipulated task difficulty by varying the number of additional maskers presented at other spatial locations in the horizontal soundfield (1-4 talkers) and via forward vs. time-reversed maskers, promoting more and less informational masking (IM), respectively. In separate tasks, we measured isolated phoneme categorization using two-alternative forced choice (2AFC) and visual analog scaling (VAS) tasks designed to promote more/less categorical hearing and thus test putative links between categorization and real-world speech-in-noise skills. We first show that listeners can only monitor up to ~3 talkers despite up to 5 in the soundscape and streaming is not related to extended high-frequency hearing thresholds (though QuickSIN scores are). We then confirm speech streaming accuracy and speed decline with additional competing talkers and amidst forward compared to reverse maskers with added IM. Dividing listeners into "discrete" vs. "continuous" categorizers based on their VAS labeling (i.e., whether responses were binary or continuous judgments), we then show the degree of IM experienced at the cocktail party is predicted by their degree of categoricity in phoneme labeling; more discrete listeners are less susceptible to IM than their gradient responding peers. Our results establish a link between speech categorization skills and cocktail party processing, with a categorical (rather than gradient) listening strategy benefiting degraded speech perception. These findings imply figure-ground deficits common in many disorders might arise through a surprisingly simple mechanism: a failure to properly bin sounds into categories.
Affiliation(s)
- Gavin M. Bidelman: Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA; Program in Neuroscience, Indiana University, Bloomington, IN, USA; Cognitive Science Program, Indiana University, Bloomington, IN, USA
- Fallon Bernard: School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA
- Kimberly Skubic: School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA
6. Caprini F, Zhao S, Chait M, Agus T, Pomper U, Tierney A, Dick F. Generalization of auditory expertise in audio engineers and instrumental musicians. Cognition 2024; 244:105696. PMID: 38160651; DOI: 10.1016/j.cognition.2023.105696.
Abstract
From auditory perception to general cognition, the ability to play a musical instrument has been associated with skills both related and unrelated to music. However, it is unclear if these effects are bound to the specific characteristics of musical instrument training, as little attention has been paid to other populations such as audio engineers and designers whose auditory expertise may match or surpass that of musicians in specific auditory tasks or more naturalistic acoustic scenarios. We explored this possibility by comparing students of audio engineering (n = 20) to matched conservatory-trained instrumentalists (n = 24) and to naive controls (n = 20) on measures of auditory discrimination, auditory scene analysis, and speech in noise perception. We found that audio engineers and performing musicians had generally lower psychophysical thresholds than controls, with pitch perception showing the largest effect size. Compared to controls, audio engineers could better memorise and recall auditory scenes composed of non-musical sounds, whereas instrumental musicians performed best in a sustained selective attention task with two competing streams of tones. Finally, in a diotic speech-in-babble task, musicians showed lower signal-to-noise-ratio thresholds than both controls and engineers; however, a follow-up online study did not replicate this musician advantage. We also observed differences in personality that might account for group-based self-selection biases. Overall, we showed that investigating a wider range of forms of auditory expertise can help us corroborate (or challenge) the specificity of the advantages previously associated with musical instrument training.
Affiliation(s)
- Francesco Caprini: Department of Psychological Sciences, Birkbeck, University of London, UK
- Sijia Zhao: Department of Experimental Psychology, University of Oxford, UK
- Maria Chait: University College London (UCL) Ear Institute, UK
- Trevor Agus: School of Arts, English and Languages, Queen's University Belfast, UK
- Ulrich Pomper: Department of Cognition, Emotion, and Methods in Psychology, Universität Wien, Austria
- Adam Tierney: Department of Psychological Sciences, Birkbeck, University of London, UK
- Fred Dick: Department of Experimental Psychology, University College London (UCL), UK
7. Wang X, Ren X, Wang S, Yang D, Liu S, Li M, Yang M, Liu Y, Xu Q. Validation and applicability of the music ear test on a large Chinese sample. PLoS One 2024; 19:e0297073. PMID: 38324549; PMCID: PMC10849222; DOI: 10.1371/journal.pone.0297073.
Abstract
In the context of extensive disciplinary integration, researchers worldwide have increasingly focused on musical ability. However, despite the wide range of available music ability tests, there remains a dearth of validated tests applicable to China. The Music Ear Test (MET) is a validated scale that has been reported to be potentially suitable for cross-cultural distribution in a Chinese sample. However, no formal translation and cross-cultural reliability/validity tests have been conducted for the Chinese population in any of the studies using the Music Ear Test. This study aims to assess the factor structure, convergence, predictiveness, and validity of the Chinese version of the MET, based on a large sample of Chinese participants (n≥1235). Furthermore, we seek to determine whether variables such as music training level, response pattern, and demographic data such as gender and age have intervening effects on the results. In doing so, we aim to provide clear indications of musical aptitude and expertise by validating an existing instrument, the Music Ear Test, and provide a valid method for further understanding the musical abilities of the Chinese sample.
Affiliation(s)
- Xiaoyu Wang: Music College, Catholic University of Daegu, Gyeongsan-si, Gyeongsangbuk-do, Rep. of Korea
- Xiubo Ren: Music College, Catholic University of Daegu, Gyeongsan-si, Gyeongsangbuk-do, Rep. of Korea
- Shidan Wang: Music College, Catholic University of Daegu, Gyeongsan-si, Gyeongsangbuk-do, Rep. of Korea
- Dan Yang: Music College, Catholic University of Daegu, Gyeongsan-si, Gyeongsangbuk-do, Rep. of Korea
- Shilin Liu: Music College, Catholic University of Daegu, Gyeongsan-si, Gyeongsangbuk-do, Rep. of Korea
- Meihui Li: Music College, Catholic University of Daegu, Gyeongsan-si, Gyeongsangbuk-do, Rep. of Korea
- Mingyi Yang: Music College, Catholic University of Daegu, Gyeongsan-si, Gyeongsangbuk-do, Rep. of Korea
- Yintong Liu: Music College, Catholic University of Daegu, Gyeongsan-si, Gyeongsangbuk-do, Rep. of Korea
- Qiujian Xu: Music College, Catholic University of Daegu, Gyeongsan-si, Gyeongsangbuk-do, Rep. of Korea; School of Arts and Design, Yanshan University, Qinhuangdao, China
8. MacLean J, Stirn J, Sisson A, Bidelman GM. Short- and long-term neuroplasticity interact during the perceptual learning of concurrent speech. Cereb Cortex 2024; 34:bhad543. PMID: 38212291; PMCID: PMC10839853; DOI: 10.1093/cercor/bhad543.
Abstract
Plasticity from auditory experience shapes the brain's encoding and perception of sound. However, whether such long-term plasticity alters the trajectory of short-term plasticity during speech processing has yet to be investigated. Here, we explored the neural mechanisms and interplay between short- and long-term neuroplasticity for rapid auditory perceptual learning of concurrent speech sounds in young, normal-hearing musicians and nonmusicians. Participants learned to identify double-vowel mixtures during ~ 45 min training sessions recorded simultaneously with high-density electroencephalography (EEG). We analyzed frequency-following responses (FFRs) and event-related potentials (ERPs) to investigate neural correlates of learning at subcortical and cortical levels, respectively. Although both groups showed rapid perceptual learning, musicians showed faster behavioral decisions than nonmusicians overall. Learning-related changes were not apparent in brainstem FFRs. However, plasticity was highly evident in cortex, where ERPs revealed unique hemispheric asymmetries between groups suggestive of different neural strategies (musicians: right hemisphere bias; nonmusicians: left hemisphere). Source reconstruction and the early (150-200 ms) time course of these effects localized learning-induced cortical plasticity to auditory-sensory brain areas. Our findings reinforce the domain-general benefits of musicianship but reveal that successful speech sound learning is driven by a critical interplay between long- and short-term mechanisms of auditory plasticity, which first emerge at a cortical level.
Affiliation(s)
- Jessica MacLean: Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA; Program in Neuroscience, Indiana University, Bloomington, IN, USA
- Jack Stirn: Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA
- Alexandria Sisson: Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA
- Gavin M Bidelman: Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA; Program in Neuroscience, Indiana University, Bloomington, IN, USA; Cognitive Science Program, Indiana University, Bloomington, IN, USA
9. Kim G, Kim DK, Jeong H. Spontaneous emergence of rudimentary music detectors in deep neural networks. Nat Commun 2024; 15:148. PMID: 38168097; PMCID: PMC10761941; DOI: 10.1038/s41467-023-44516-0.
Abstract
Music exists in almost every society, has universal acoustic features, and is processed by distinct neural circuits in humans even with no experience of musical training. However, it remains unclear how these innate characteristics emerge and what functions they serve. Here, using an artificial deep neural network that models the auditory information processing of the brain, we show that units tuned to music can spontaneously emerge by learning natural sound detection, even without learning music. The music-selective units encoded the temporal structure of music in multiple timescales, following the population-level response characteristics observed in the brain. We found that the process of generalization is critical for the emergence of music-selectivity and that music-selectivity can work as a functional basis for the generalization of natural sound, thereby elucidating its origin. These findings suggest that evolutionary adaptation to process natural sounds can provide an initial blueprint for our sense of music.
Affiliation(s)
- Gwangsu Kim: Department of Physics, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Korea
- Dong-Kyum Kim: Department of Physics, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Korea
- Hawoong Jeong: Department of Physics, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Korea; Center for Complex Systems, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Korea
10. Wesseldijk LW, Gordon RL, Mosing MA, Ullén F. Music and verbal ability - a twin study of genetic and environmental associations. Psychology of Aesthetics, Creativity, and the Arts 2023; 17:675-681. PMID: 38269365; PMCID: PMC10805386; DOI: 10.1037/aca0000401.
Abstract
Musical aptitude and music training are associated with language-related cognitive outcomes, even when controlling for general intelligence. However, genetic and environmental influences on these associations have not been studied, and it remains unclear whether music training can causally increase verbal ability. In a sample of 1,336 male twins, we tested the associations between verbal ability measured at time of conscription at age 18 and two music-related variables: overall musical aptitude and total amount of music training before the age of 18. We estimated the amount of specific genetic and environmental influences on the association between verbal ability and musical aptitude, over and above the factors shared with general intelligence, using classical twin modelling. Further, we tested whether music training could causally influence verbal ability using a co-twin-control analysis. Musical aptitude and music training were significantly associated with verbal ability. Controlling for general intelligence only slightly attenuated the correlations. The partial association between musical aptitude and verbal ability, corrected for general intelligence, was mostly explained by shared genetic factors (50%) and non-shared environmental influences (35%). The co-twin-control analysis gave no support for causal effects of early music training on verbal ability at age 18. Overall, our findings in a sizeable population sample converge with known associations between the music and language domains, while results from twin modelling suggested that this reflected a shared underlying aetiology rather than causal transfer.
Affiliation(s)
- Laura W. Wesseldijk: Department of Neuroscience, Karolinska Institutet, Solnavägen 9, SE-171 77 Stockholm, Sweden; Department of Psychiatry, Amsterdam UMC, University of Amsterdam, Meibergdreef 5, 1105 AZ Amsterdam, The Netherlands
- Reyna L. Gordon: Department of Otolaryngology - Head & Neck Surgery, Vanderbilt University Medical Center; Department of Psychology, Vanderbilt University; Vanderbilt Genetics Institute, Vanderbilt University Medical Center
- Miriam A. Mosing: Department of Neuroscience, Karolinska Institutet, Solnavägen 9, SE-171 77 Stockholm, Sweden; Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Nobels v 12A, 171 77 Stockholm, Sweden
- Fredrik Ullén: Department of Neuroscience, Karolinska Institutet, Solnavägen 9, SE-171 77 Stockholm, Sweden
11. MacLean J, Stirn J, Sisson A, Bidelman GM. Short- and long-term experience-dependent neuroplasticity interact during the perceptual learning of concurrent speech. bioRxiv [Preprint] 2023:2023.09.26.559640. PMID: 37808665; PMCID: PMC10557636; DOI: 10.1101/2023.09.26.559640.
Abstract
Plasticity from auditory experiences shapes brain encoding and perception of sound. However, whether such long-term plasticity alters the trajectory of short-term plasticity during speech processing has yet to be investigated. Here, we explored the neural mechanisms and interplay between short- and long-term neuroplasticity for rapid auditory perceptual learning of concurrent speech sounds in young, normal-hearing musicians and nonmusicians. Participants learned to identify double-vowel mixtures during ∼45 minute training sessions recorded simultaneously with high-density EEG. We analyzed frequency-following responses (FFRs) and event-related potentials (ERPs) to investigate neural correlates of learning at subcortical and cortical levels, respectively. While both groups showed rapid perceptual learning, musicians showed faster behavioral decisions than nonmusicians overall. Learning-related changes were not apparent in brainstem FFRs. However, plasticity was highly evident in cortex, where ERPs revealed unique hemispheric asymmetries between groups suggestive of different neural strategies (musicians: right hemisphere bias; nonmusicians: left hemisphere). Source reconstruction and the early (150-200 ms) time course of these effects localized learning-induced cortical plasticity to auditory-sensory brain areas. Our findings confirm domain-general benefits for musicianship but reveal successful speech sound learning is driven by a critical interplay between long- and short-term mechanisms of auditory plasticity that first emerge at a cortical level.
12. Rizzi R, Bidelman GM. Duplex perception reveals brainstem auditory representations are modulated by listeners' ongoing percept for speech. Cereb Cortex 2023; 33:10076-10086. PMID: 37522248; PMCID: PMC10502779; DOI: 10.1093/cercor/bhad266.
Abstract
So-called duplex speech stimuli with perceptually ambiguous spectral cues to one ear and isolated low- versus high-frequency third formant "chirp" to the opposite ear yield a coherent percept supporting their phonetic categorization. Critically, such dichotic sounds are only perceived categorically upon binaural integration. Here, we used frequency-following responses (FFRs), scalp-recorded potentials reflecting phase-locked subcortical activity, to investigate brainstem responses to fused speech percepts and to determine whether FFRs reflect binaurally integrated category-level representations. We recorded FFRs to diotic and dichotic stop-consonants (/da/, /ga/) that either did or did not require binaural fusion to properly label along with perceptually ambiguous sounds without clear phonetic identity. Behaviorally, listeners showed clear categorization of dichotic speech tokens confirming they were heard with a fused, phonetic percept. Neurally, we found FFRs were stronger for categorically perceived speech relative to category-ambiguous tokens but also differentiated phonetic categories for both diotically and dichotically presented speech sounds. Correlations between neural and behavioral data further showed FFR latency predicted the degree to which listeners labeled tokens as "da" versus "ga." The presence of binaurally integrated, category-level information in FFRs suggests human brainstem processing reflects a surprisingly abstract level of the speech code typically circumscribed to much later cortical processing.
Affiliation(s)
- Rose Rizzi: Department of Speech, Language, and Hearing Sciences, Indiana University, Bloomington, IN, United States; Program in Neuroscience, Indiana University, Bloomington, IN, United States; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States
- Gavin M Bidelman: Department of Speech, Language, and Hearing Sciences, Indiana University, Bloomington, IN, United States; Program in Neuroscience, Indiana University, Bloomington, IN, United States; Cognitive Science Program, Indiana University, Bloomington, IN, United States
13. Arenillas-Alcón S, Ribas-Prats T, Puertollano M, Mondéjar-Segovia A, Gómez-Roig MD, Costa-Faidella J, Escera C. Prenatal daily musical exposure is associated with enhanced neural representation of speech fundamental frequency: Evidence from neonatal frequency-following responses. Dev Sci 2023; 26:e13362. PMID: 36550689; DOI: 10.1111/desc.13362.
Abstract
Fetal hearing experiences shape the linguistic and musical preferences of neonates. From the very first moments after birth, newborns prefer their native language, recognize their mother's voice, and show greater responsiveness to lullabies presented during pregnancy. Yet the neural underpinnings of this experience-induced plasticity have remained elusive. Here we recorded the frequency-following response (FFR), an auditory evoked potential elicited by periodic complex sounds, to show that prenatal music exposure is associated with enhanced neural encoding of speech stimulus periodicity, which relates to the perceptual experience of pitch. FFRs were recorded in a sample of 60 healthy neonates born at term and aged 12-72 hours. The sample was divided into two groups according to prenatal musical exposure (29 daily musically exposed; 31 not daily musically exposed). Prenatal exposure was assessed retrospectively by a questionnaire in which mothers reported how often they sang or listened to music through loudspeakers during the last trimester of pregnancy. The FFR was recorded to either a /da/ or an /oa/ speech-syllable stimulus. Analyses were centered on stimulus sections of identical duration (113 ms) and fundamental frequency (F0 = 113 Hz). Neural encoding of stimulus periodicity was quantified as the FFR spectral amplitude at the stimulus F0. Data revealed that newborns exposed to music daily exhibited larger spectral amplitudes at F0 than newborns not exposed daily, regardless of the eliciting stimulus. Our results suggest that prenatal music exposure facilitates tuning to the fundamental frequency of human speech, which may support early language processing and acquisition. RESEARCH HIGHLIGHTS:
- Frequency-following responses to speech were collected from a sample of neonates prenatally exposed to music daily and compared to neonates not exposed daily.
- Neonates who experienced daily prenatal music exposure exhibited enhanced frequency-following responses to the periodicity of speech sounds.
- Prenatal music exposure is associated with fine-tuned encoding of the fundamental frequency of human speech, which may facilitate early language processing and acquisition.
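The quantification described above (FFR spectral amplitude at the stimulus F0) amounts to reading the magnitude spectrum at the F0 bin. A minimal sketch, not the authors' exact pipeline; the function name, sampling rate, and synthetic signal are illustrative:

```python
import numpy as np

def spectral_amplitude_at(signal, fs, freq):
    """Single-sided FFT amplitude at the frequency bin nearest `freq` (Hz)."""
    n = len(signal)
    amps = 2.0 * np.abs(np.fft.rfft(signal)) / n   # scaled so a unit sine -> 1.0
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return amps[np.argmin(np.abs(freqs - freq))]

# Illustrative check: a synthetic 113 Hz tone (the stimulus F0) recovers its amplitude
fs = 10_000
t = np.arange(fs) / fs                               # 1 s of samples
ffr_like = 0.8 * np.sin(2 * np.pi * 113 * t)
amp_f0 = spectral_amplitude_at(ffr_like, fs, 113.0)  # ~0.8
```

In practice the FFR is first averaged over trials; larger amplitude at F0 then indexes stronger phase-locked encoding of stimulus periodicity.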
Affiliation(s)
- Sonia Arenillas-Alcón
- Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Catalonia, Spain
- Institute of Neurosciences, University of Barcelona, Catalonia, Spain
- Institut de Recerca Sant Joan de Déu, Catalonia, Spain
- Teresa Ribas-Prats
- Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Catalonia, Spain
- Institute of Neurosciences, University of Barcelona, Catalonia, Spain
- Institut de Recerca Sant Joan de Déu, Catalonia, Spain
- Marta Puertollano
- Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Catalonia, Spain
- Institute of Neurosciences, University of Barcelona, Catalonia, Spain
- Institut de Recerca Sant Joan de Déu, Catalonia, Spain
- Alejandro Mondéjar-Segovia
- Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Catalonia, Spain
- Institute of Neurosciences, University of Barcelona, Catalonia, Spain
- María Dolores Gómez-Roig
- Institut de Recerca Sant Joan de Déu, Catalonia, Spain
- BCNatal - Barcelona Center for Maternal Fetal and Neonatal Medicine (Hospital Sant Joan de Déu and Hospital Clínic), University of Barcelona, Catalonia, Spain
- Jordi Costa-Faidella
- Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Catalonia, Spain
- Institute of Neurosciences, University of Barcelona, Catalonia, Spain
- Institut de Recerca Sant Joan de Déu, Catalonia, Spain
- Carles Escera
- Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Catalonia, Spain
- Institute of Neurosciences, University of Barcelona, Catalonia, Spain
- Institut de Recerca Sant Joan de Déu, Catalonia, Spain
14
Whiteford KL, Goh PY, Stevens KL, Oxenham AJ. Dissociating sensitivity from bias in the Mini Profile of Music Perception Skills. JASA EXPRESS LETTERS 2023; 3:094401. [PMID: 37747320 PMCID: PMC10523237 DOI: 10.1121/10.0021096] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/04/2023] [Accepted: 09/06/2023] [Indexed: 09/26/2023]
Abstract
The Mini Profile of Music Perception Skills (Mini-PROMS) is a rapid performance-based measure of musical perceptual competence. The present study was designed to determine the optimal way to evaluate and score the Mini-PROMS results. Two traditional methods for scoring the Mini-PROMS, the weighted composite score and the parametric sensitivity index (d'), were compared with nonparametric alternatives, also derived from signal detection theory. Performance estimates using the traditional methods were found to depend on response bias (e.g., confidence), making them suboptimal. The simple nonparametric alternatives provided unbiased and reliable performance estimates from the Mini-PROMS and are therefore recommended instead.
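The comparison above contrasts the parametric sensitivity index d' with nonparametric alternatives from signal detection theory. A minimal sketch of d' alongside A' (Pollack and Norman's nonparametric index, one common such alternative; the paper's exact measures may differ):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Parametric sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def a_prime(h, f):
    """Nonparametric sensitivity (Pollack & Norman, 1964); 0.5 = chance."""
    if h >= f:
        return 0.5 + ((h - f) * (1.0 + h - f)) / (4.0 * h * (1.0 - f))
    return 0.5 - ((f - h) * (1.0 + f - h)) / (4.0 * f * (1.0 - h))
```

An unbiased observer at chance gives d' = 0 and A' = 0.5; unlike d', A' makes no normality assumption about the underlying evidence distributions, which is why such indices can be less contaminated by response bias.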
Affiliation(s)
- Kelly L Whiteford
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55455
- Pui Yii Goh
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55455
- Kara L Stevens
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55455
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55455
15
Hsieh IH, Guo YJ. No Musician Advantage in the Perception of Degraded-Fundamental Frequency Speech in Noisy Environments. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2023:1-13. [PMID: 37499233 DOI: 10.1044/2023_jslhr-22-00662] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/29/2023]
Abstract
PURPOSE Pitch variations of the fundamental frequency (fo) contour contribute to speech perception in noisy environments, but whether musicians have an advantage in perceiving speech in noise (SIN) with altered fo information remains unclear. This study investigated the effects of different levels of degraded fo contour (i.e., conveying lexical tone or intonation information) on the musician advantage in speech-in-noise perception. METHOD A cohort of native Mandarin Chinese speakers, comprising 30 trained musicians and 30 nonmusicians, was tested on the intelligibility of Mandarin Chinese sentences with natural, flattened-tone, flattened-intonation, and flattened-all fo contours embedded in background noise at three signal-to-noise ratios (0, -5, and -9 dB). Pitch difference thresholds and innate musical skills associated with speech-in-noise benefits were also assessed. RESULTS Speech intelligibility scores improved with increasing signal-to-noise level for both musicians and nonmusicians. However, no musician advantage was observed for identifying any type of flattened-fo contour SIN. Musicians exhibited smaller fo pitch discrimination limens than nonmusicians, which correlated with benefits for perceiving speech with intact tone-level fo information. Regardless of musician status, performance on the pitch and accent musical-skill subtests correlated with speech intelligibility scores. CONCLUSIONS Collectively, these results provide no evidence for a musician advantage in perceiving speech with distorted fo information in noisy environments. Results further show that perceptual musical skills in pitch and accent processing may benefit the perception of SIN, independent of formal musical training. Our findings suggest that the potential application of music training to speech perception in noisy backgrounds is not contingent on the ability to process fo pitch contours, at least for Mandarin Chinese speakers.
SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.23706354.
Affiliation(s)
- I-Hui Hsieh
- Institute of Cognitive Neuroscience, National Central University, Taoyuan City, Taiwan
- Cognitive Intelligence and Precision Healthcare Center, National Central University, Taoyuan City, Taiwan
- Yu-Jyun Guo
- Institute of Cognitive Neuroscience, National Central University, Taoyuan City, Taiwan
16
Correia AI, Vincenzi M, Vanzella P, Pinheiro AP, Schellenberg EG, Lima CF. Individual differences in musical ability among adults with no music training. Q J Exp Psychol (Hove) 2023; 76:1585-1598. [PMID: 36114609 PMCID: PMC10280665 DOI: 10.1177/17470218221128557] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2022] [Revised: 07/06/2022] [Accepted: 07/22/2022] [Indexed: 09/26/2023]
Abstract
Good musical abilities are typically considered to be a consequence of music training, such that they are studied in samples of formally trained individuals. Here, we asked what predicts musical abilities in the absence of music training. Participants with no formal music training (N = 190) completed the Goldsmiths Musical Sophistication Index, measures of personality and cognitive ability, and the Musical Ear Test (MET). The MET is an objective test of musical abilities that provides a Total score and separate scores for its two subtests (Melody and Rhythm), which require listeners to determine whether standard and comparison auditory sequences are identical. MET scores had no associations with personality traits. They correlated positively, however, with informal musical experience and cognitive abilities. Informal musical experience was a better predictor of Melody than of Rhythm scores. Some participants (12%) had Total scores higher than the mean from a sample of musically trained individuals (⩾6 years of formal training), tested previously by Correia et al. Untrained participants with particularly good musical abilities (top 25%, n = 51) scored higher than trained participants on the Rhythm subtest and similarly on the Melody subtest. High-ability untrained participants were also similar to trained ones in cognitive ability, but lower in the personality trait openness-to-experience. These results imply that formal music training is not required to achieve musician-like performance on tests of musical and cognitive abilities. They also suggest that informal music practice and music-related predispositions should be considered in studies of musical expertise.
Affiliation(s)
- Ana Isabel Correia
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal
- Margherita Vincenzi
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal
- Department of General Psychology, University of Padova, Padova, Italy
- Patrícia Vanzella
- Center for Mathematics, Computing and Cognition, Universidade Federal do ABC, Santo Andre, Brazil
- Ana P Pinheiro
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisbon, Portugal
- E Glenn Schellenberg
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal
- Department of Psychology, University of Toronto Mississauga, Mississauga, ON, Canada
- César F Lima
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal
- Institute of Cognitive Neuroscience, University College London, London, UK
17
Gustavson DE, Nayak S, Coleman PL, Iversen JR, Lense MD, Gordon RL, Maes HH. Heritability of Childhood Music Engagement and Associations with Language and Executive Function: Insights from the Adolescent Brain Cognitive Development (ABCD) Study. Behav Genet 2023; 53:189-207. [PMID: 36757558 PMCID: PMC10159991 DOI: 10.1007/s10519-023-10135-0] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2022] [Accepted: 01/27/2023] [Indexed: 02/10/2023]
Abstract
Music engagement is a powerful, influential experience that often begins early in life. Music engagement is moderately heritable in adults (~41-69%), but fewer studies have examined genetic influences on childhood music engagement, including associations with language and executive functions. Here we explored genetic and environmental influences on music listening and instrument playing (including singing) in the baseline assessment of the Adolescent Brain Cognitive Development study. Parents reported on their 9-10-year-old children's music experiences (N = 11,876 children; N = 1,543 from twin pairs). Both music measures were explained primarily by shared environmental influences. Instrument exposure (but not frequency of instrument engagement) was associated with language skills (r = .27) and executive functions (r = .15-.17), and these associations with instrument engagement were stronger than those for music listening, visual art, or soccer engagement. These findings highlight the role of shared environmental influences between early music experiences, language, and executive function during a formative time in development.
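The shared-environment conclusion above comes from twin modeling: monozygotic (MZ) and dizygotic (DZ) twin correlations are decomposed into additive-genetic (A), shared-environment (C), and unique-environment (E) components. A crude sketch using Falconer's formulas (the study used full structural-equation ACE models; the correlations below are made up for illustration):

```python
def falconer_ace(r_mz, r_dz):
    """Rough ACE variance decomposition from MZ and DZ twin correlations."""
    a2 = 2.0 * (r_mz - r_dz)   # additive-genetic share (heritability)
    c2 = 2.0 * r_dz - r_mz     # shared (common) environment share
    e2 = 1.0 - r_mz            # unique environment + measurement error
    return a2, c2, e2

# Illustrative: a DZ correlation close to the MZ correlation implies C dominates A
a2, c2, e2 = falconer_ace(r_mz=0.70, r_dz=0.60)  # approx. (0.2, 0.5, 0.3)
```

When C exceeds A, as in the pattern reported above for childhood music measures, family-wide environment explains more variance than genetic differences do.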
Affiliation(s)
- Daniel E Gustavson
- Institute for Behavioral Genetics, University of Colorado Boulder, 1480 30th St, Boulder, CO, 80303, USA.
- Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, TN, USA.
- Srishti Nayak
- Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Psychology, Middle Tennessee State University, Murfreesboro, TN, USA
- Department of Otolaryngology - Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- John R Iversen
- Swartz Center for Computational Neuroscience, Institute for Neural Computation, University of California, San Diego, La Jolla, CA, USA
- Miriam D Lense
- Department of Otolaryngology - Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- The Curb Center, Vanderbilt University, Nashville, TN, USA
- Reyna L Gordon
- Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Otolaryngology - Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- The Curb Center, Vanderbilt University, Nashville, TN, USA
- Hermine H Maes
- Department of Human and Molecular Genetics, Virginia Institute for Psychiatric and Behavioral Genetics, Virginia Commonwealth University, Richmond, VA, USA
- Department of Psychiatry, Virginia Institute for Psychiatric and Behavioral Genetics, Virginia Commonwealth University, Richmond, VA, USA
- Massey Cancer Center, Virginia Commonwealth University, Richmond, VA, USA
18
Zhang L, Wang X, Alain C, Du Y. Successful aging of musicians: Preservation of sensorimotor regions aids audiovisual speech-in-noise perception. SCIENCE ADVANCES 2023; 9:eadg7056. [PMID: 37126550 PMCID: PMC10132752 DOI: 10.1126/sciadv.adg7056] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/03/2023]
Abstract
Musicianship can mitigate age-related declines in audiovisual speech-in-noise perception. We tested whether this benefit originates from functional preservation or functional compensation by comparing fMRI responses of older musicians, older nonmusicians, and young nonmusicians identifying noise-masked audiovisual syllables. Older musicians outperformed older nonmusicians and showed comparable performance to young nonmusicians. Notably, older musicians retained similar neural specificity of speech representations in sensorimotor areas to young nonmusicians, while older nonmusicians showed degraded neural representations. In the same region, older musicians showed higher neural alignment to young nonmusicians than older nonmusicians, which was associated with their training intensity. In older nonmusicians, the degree of neural alignment predicted better performance. In addition, older musicians showed greater activation in frontal-parietal, speech motor, and visual motion regions and greater deactivation in the angular gyrus than older nonmusicians, which predicted higher neural alignment in sensorimotor areas. Together, these findings suggest that musicianship-related benefit in audiovisual speech-in-noise processing is rooted in preserving youth-like representations in sensorimotor regions.
Affiliation(s)
- Lei Zhang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- Xiuyi Wang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China
- Claude Alain
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, ON M6A 2E1, Canada
- Department of Psychology, University of Toronto, ON M8V 2S4, Canada
- Yi Du
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai 200031, China
- Chinese Institute for Brain Research, Beijing 102206, China
19
Carter JA, Bidelman GM. Perceptual warping exposes categorical representations for speech in human brainstem responses. Neuroimage 2023; 269:119899. [PMID: 36720437 PMCID: PMC9992300 DOI: 10.1016/j.neuroimage.2023.119899] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2022] [Revised: 01/17/2023] [Accepted: 01/22/2023] [Indexed: 01/30/2023] Open
Abstract
The brain transforms continuous acoustic events into discrete category representations to downsample the speech signal for our perceptual-cognitive systems. Such phonetic categories are highly malleable, and their percepts can change depending on surrounding stimulus context. Previous work suggests this acoustic-phonetic mapping and the perceptual warping of speech emerge in the brain no earlier than auditory cortex. Here, we examined whether these auditory-category phenomena inherent to speech perception occur even earlier in the human brain, at the level of the auditory brainstem. We recorded speech-evoked frequency-following responses (FFRs) during a task designed to induce more or less warping of listeners' perceptual categories depending on the stimulus presentation order of a speech continuum (random, forward, backward directions). We used a novel clustered stimulus paradigm to rapidly record the high trial counts needed for FFRs concurrent with active behavioral tasks. We found serial stimulus order caused perceptual shifts (hysteresis) near listeners' category boundary, confirming that identical speech tokens are perceived differently depending on stimulus context. Critically, we further show neural FFRs during active (but not passive) listening are enhanced for prototypical versus category-ambiguous tokens and are biased in the direction of listeners' phonetic label even for acoustically identical speech stimuli. These findings were not observed in the stimulus acoustics nor in model FFR responses generated via a computational model of cochlear and auditory nerve transduction, confirming a central origin of the effects. Our data reveal FFRs carry category-level information and suggest top-down processing actively shapes the neural encoding and categorization of speech at subcortical levels.
These findings suggest the acoustic-phonetic mapping and perceptual warping in speech perception occur surprisingly early along the auditory neuraxis, which might aid understanding by reducing ambiguity inherent to the speech signal.
Affiliation(s)
- Jared A Carter
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA; Division of Clinical Neuroscience, School of Medicine, Hearing Sciences - Scottish Section, University of Nottingham, Glasgow, Scotland, UK
- Gavin M Bidelman
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA; Program in Neuroscience, Indiana University, Bloomington, IN, USA.
20
Bidelman GM, Carter JA. Continuous dynamics in behavior reveal interactions between perceptual warping in categorization and speech-in-noise perception. Front Neurosci 2023; 17:1032369. [PMID: 36937676 PMCID: PMC10014819 DOI: 10.3389/fnins.2023.1032369] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2022] [Accepted: 02/14/2023] [Indexed: 03/05/2023] Open
Abstract
Introduction Spoken language comprehension requires listeners to map continuous features of the speech signal onto discrete category labels. Categories are, however, malleable to surrounding context and stimulus precedence; listeners' percepts can shift dynamically depending on the sequencing of adjacent stimuli, resulting in a warping of the heard phonetic category. Here, we investigated whether such perceptual warping, which amplifies categorical hearing, might alter speech processing in noise-degraded listening scenarios. Methods We measured continuous dynamics in perception and category judgments of an acoustic-phonetic vowel gradient via mouse tracking. Tokens were presented in serial vs. random orders to induce more or less perceptual warping while listeners categorized continua in clean and noise conditions. Results Listeners' responses were faster, and their mouse trajectories closer to the ultimate behavioral selection (marked visually on the screen), in serial vs. random order, suggesting increased perceptual attraction to category exemplars. Interestingly, order effects emerged earlier and persisted later in the trial time course when categorizing speech in noise. Discussion These data describe interactions between perceptual warping in categorization and speech-in-noise perception: warping strengthens the behavioral attraction to relevant speech categories, making listeners more decisive (though not necessarily more accurate) in their decisions about both clean and noise-degraded speech.
Affiliation(s)
- Gavin M. Bidelman
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, United States
- Program in Neuroscience, Indiana University, Bloomington, IN, United States
- Jared A. Carter
- School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States
- Hearing Sciences – Scottish Section, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Glasgow, United Kingdom
21
Toh XR, Tan SH, Wong G, Lau F, Wong FCK. Enduring musician advantage among former musicians in prosodic pitch perception. Sci Rep 2023; 13:2657. [PMID: 36788323 PMCID: PMC9929097 DOI: 10.1038/s41598-023-29733-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2022] [Accepted: 02/09/2023] [Indexed: 02/16/2023] Open
Abstract
Musical training has been associated with various cognitive benefits, one of which is enhanced speech perception. However, most findings have been based on musicians engaged in ongoing music lessons and practice. This study thus sought to determine whether the musician advantage in pitch perception in the language domain extends to individuals who have ceased musical training and practice. To this end, adult active musicians (n = 22), former musicians (n = 27), and non-musicians (n = 47) were presented with sentences spoken in a native language, English, and a foreign language, French. The final words of the sentences were either prosodically congruous (spoken at normal pitch height), weakly incongruous (pitch increased by 25%), or strongly incongruous (pitch increased by 110%). Results of the pitch discrimination task revealed that although active musicians outperformed former musicians, former musicians outperformed non-musicians in the weakly incongruous condition. The findings suggest that the musician advantage in pitch perception in speech is retained to some extent even after musical training and practice are discontinued.
Affiliation(s)
- Xin Ru Toh
- Linguistics and Multilingual Studies, School of Humanities, Nanyang Technological University, Singapore, Singapore
- Shen Hui Tan
- Linguistics and Multilingual Studies, School of Humanities, Nanyang Technological University, Singapore, Singapore
- Galston Wong
- School of Brain and Behavioral Sciences, The University of Texas at Dallas, Dallas, TX, USA
- Fun Lau
- Linguistics and Multilingual Studies, School of Humanities, Nanyang Technological University, Singapore, Singapore
- Francis C. K. Wong
- Linguistics and Multilingual Studies, School of Humanities, Nanyang Technological University, Singapore, Singapore
22
Beck J, Konieczny L. What a difference a syllable makes-Rhythmic reading of poetry. Front Psychol 2023; 14:1043651. [PMID: 36865353 PMCID: PMC9973453 DOI: 10.3389/fpsyg.2023.1043651] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2022] [Accepted: 01/06/2023] [Indexed: 02/15/2023] Open
Abstract
In reading conventional poems aloud, the rhythmic experience is coupled with the projection of meter, enabling the prediction of subsequent input. However, it is unclear how top-down and bottom-up processes interact. If the rhythmicity of reading aloud is governed by top-down prediction of metric patterns of weak and strong stress, these patterns should be projected even onto a randomly included, lexically meaningless syllable. If bottom-up information, such as the phonetic quality of consecutive syllables, plays a functional role in establishing a structured rhythm, the occurrence of the lexically meaningless syllable should affect reading, and the number of such syllables in a metrical line should modulate this effect. To investigate this, we manipulated poems by replacing regular syllables at random positions with the syllable "tack". Participants were instructed to read the poems aloud, and their voices were recorded during reading. At the syllable level, we calculated the syllable onset interval (SOI) as a measure of articulation duration, as well as the mean syllable intensity. Both measures were intended to operationalize how strongly a syllable was stressed. Results show that the average articulation duration of metrically strong regular syllables was longer than that of weak syllables. This effect disappeared for "tacks". Syllable intensities, on the other hand, captured the metrical stress of "tacks" as well, but only for musically active participants. Additionally, we calculated the normalized pairwise variability index (nPVI) for each line as an indicator of rhythmic contrast, i.e., the alternation between long and short, as well as louder and quieter, syllables, to estimate the influence of "tacks" on reading rhythm. For SOI, the nPVI revealed a clear negative effect: when "tacks" occurred, lines were read with less alternation, and this effect was proportional to the number of "tacks" per line. For intensity, however, the nPVI did not capture significant effects.
Results suggest that top-down prediction does not always suffice to maintain a rhythmic gestalt across a series of syllables that carry little bottom-up prosodic information. Instead, the constant integration of sufficiently varying bottom-up information appears necessary to maintain a stable metrical-pattern prediction.
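The nPVI used above has a standard closed form: for m successive durations d_1..d_m, nPVI = 100/(m-1) * Σ |d_k - d_{k+1}| / ((d_k + d_{k+1})/2). A minimal sketch (variable names illustrative; the authors applied this per line to SOIs and to intensities):

```python
def npvi(intervals):
    """Normalized pairwise variability index over successive intervals.

    0 = perfectly even sequence; larger values = stronger long/short alternation.
    """
    if len(intervals) < 2:
        raise ValueError("need at least two intervals")
    pairs = zip(intervals[:-1], intervals[1:])
    return 100.0 / (len(intervals) - 1) * sum(
        abs(a - b) / ((a + b) / 2.0) for a, b in pairs
    )

even = npvi([0.2, 0.2, 0.2, 0.2])         # 0.0: no rhythmic contrast
alternating = npvi([0.1, 0.3, 0.1, 0.3])  # approx. 100: strong alternation
```

Because each pairwise difference is normalized by the local mean, nPVI is insensitive to overall tempo, which makes it suitable for comparing lines read at different speeds.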
Affiliation(s)
- Judith Beck
- Center for Cognitive Science, Institute of Psychology, University of Freiburg, Freiburg, Germany
23
Maillard E, Joyal M, Murray MM, Tremblay P. Are musical activities associated with enhanced speech perception in noise in adults? A systematic review and meta-analysis. CURRENT RESEARCH IN NEUROBIOLOGY 2023. [DOI: 10.1016/j.crneur.2023.100083] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/30/2023] Open
24
Nayak S, Coleman PL, Ladányi E, Nitin R, Gustavson DE, Fisher SE, Magne CL, Gordon RL. The Musical Abilities, Pleiotropy, Language, and Environment (MAPLE) Framework for Understanding Musicality-Language Links Across the Lifespan. NEUROBIOLOGY OF LANGUAGE (CAMBRIDGE, MASS.) 2022; 3:615-664. [PMID: 36742012 PMCID: PMC9893227 DOI: 10.1162/nol_a_00079] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/29/2021] [Accepted: 08/08/2022] [Indexed: 04/18/2023]
Abstract
Using individual differences approaches, a growing body of literature finds positive associations between musicality and language-related abilities, complementing prior findings of links between musical training and language skills. Despite these associations, musicality has been often overlooked in mainstream models of individual differences in language acquisition and development. To better understand the biological basis of these individual differences, we propose the Musical Abilities, Pleiotropy, Language, and Environment (MAPLE) framework. This novel integrative framework posits that musical and language-related abilities likely share some common genetic architecture (i.e., genetic pleiotropy) in addition to some degree of overlapping neural endophenotypes, and genetic influences on musically and linguistically enriched environments. Drawing upon recent advances in genomic methodologies for unraveling pleiotropy, we outline testable predictions for future research on language development and how its underlying neurobiological substrates may be supported by genetic pleiotropy with musicality. In support of the MAPLE framework, we review and discuss findings from over seventy behavioral and neural studies, highlighting that musicality is robustly associated with individual differences in a range of speech-language skills required for communication and development. These include speech perception-in-noise, prosodic perception, morphosyntactic skills, phonological skills, reading skills, and aspects of second/foreign language learning. Overall, the current work provides a clear agenda and framework for studying musicality-language links using individual differences approaches, with an emphasis on leveraging advances in the genomics of complex musicality and language traits.
Affiliation(s)
- Srishti Nayak
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Psychology, Middle Tennessee State University, Murfreesboro, TN, USA
- Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt University School of Medicine, Vanderbilt University, TN, USA
- Peyton L. Coleman
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Enikő Ladányi
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Linguistics, Potsdam University, Potsdam, Germany
- Rachana Nitin
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Daniel E. Gustavson
- Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Medicine, Vanderbilt University Medical Center, Nashville, TN, USA
- Institute for Behavioral Genetics, University of Colorado Boulder, Boulder, CO, USA
- Simon E. Fisher
- Language and Genetics Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Cyrille L. Magne
- Department of Psychology, Middle Tennessee State University, Murfreesboro, TN, USA
- PhD Program in Literacy Studies, Middle Tennessee State University, Murfreesboro, TN, USA
- Reyna L. Gordon
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, TN, USA
- Curb Center for Art, Enterprise, and Public Policy, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, TN, USA
- Vanderbilt University School of Medicine, Vanderbilt University, TN, USA
25
O’Connell SR, Nave-Blodgett JE, Wilson GE, Hannon EE, Snyder JS. Elements of musical and dance sophistication predict musical groove perception. Front Psychol 2022; 13:998321. [PMID: 36467160] [PMCID: PMC9712211] [DOI: 10.3389/fpsyg.2022.998321]
Abstract
Listening to groovy music is an enjoyable experience and a common human behavior in some cultures. Specifically, many listeners agree that songs they find to be more familiar and pleasurable are more likely to induce the experience of musical groove. While the pleasurable and dance-inducing effects of musical groove are widespread, we know less about how subjective feelings toward music, individual musical or dance experiences, or more objective musical perception abilities are correlated with the way we experience groove. Therefore, the present study aimed to evaluate how musical and dance sophistication relates to musical groove perception. One hundred twenty-four participants completed an online study during which they rated 20 songs, considered high- or low-groove, and completed the Goldsmiths Musical Sophistication Index, the Goldsmiths Dance Sophistication Index, the Beat and Meter Sensitivity Task, and a modified short version of the Profile for Music Perception Skills. Our results reveal that measures of perceptual abilities, musical training, and social dancing predicted the difference in groove rating between high- and low-groove music. Overall, these findings support the notion that listeners' individual experiences and predispositions may shape their perception of musical groove, although other causal directions are also possible. This research helps elucidate the correlates and possible causes of musical groove perception in a wide range of listeners.
Affiliation(s)
- Samantha R. O’Connell
- Caruso Department of Otolaryngology, Head and Neck Surgery, Keck School of Medicine of USC, University of Southern California, Los Angeles, CA, United States
- Grace E. Wilson
- Department of Psychology, University of Nevada, Las Vegas, NV, United States
- Erin E. Hannon
- Department of Psychology, University of Nevada, Las Vegas, NV, United States
- Joel S. Snyder
- Department of Psychology, University of Nevada, Las Vegas, NV, United States
26
Benítez-Barrera CR, Skoe E, Huang J, Tharpe AM. Evidence for a Musician Speech-Perception-in-Noise Advantage in School-Age Children. J Speech Lang Hear Res 2022; 65:3996-4008. [PMID: 36194893] [DOI: 10.1044/2022_jslhr-22-00134]
Abstract
PURPOSE The objective of this study was to evaluate whether child musicians are better at listening to speech in noise (SPIN) than nonmusicians of the same age. In addition, we aimed to explore whether the musician SPIN advantage in children was related to general intelligence (IQ). METHOD Fifty-one children aged 8.2-11.8 years with different levels of music training participated in the study. A between-group design and correlational analyses were used to determine differences in SPIN skills as they relate to music training. IQ was used as a covariate to explore the relationship between intelligence and SPIN ability. RESULTS More years of music training were associated with better SPIN skills, and this association remained even when accounting for IQ. These results held at the group level and also when years of instrument training was treated as a continuous variable in correlational analyses. CONCLUSIONS We confirmed results from previous studies in which child musicians outperformed nonmusicians in SPIN skills. We also showed that this effect was not related to differences in IQ between the musicians and nonmusicians in this cohort of children. However, confirmation of this finding in a cohort of children with more diverse socioeconomic and cognitive profiles is warranted.
Affiliation(s)
- Anne Marie Tharpe
- Vanderbilt University, Nashville, TN
- Vanderbilt University Medical Center, Nashville, TN
27
Lippolis M, Müllensiefen D, Frieler K, Matarrelli B, Vuust P, Cassibba R, Brattico E. Learning to play a musical instrument in the middle school is associated with superior audiovisual working memory and fluid intelligence: A cross-sectional behavioral study. Front Psychol 2022; 13:982704. [PMID: 36312139] [PMCID: PMC9610841] [DOI: 10.3389/fpsyg.2022.982704]
Abstract
Music training, in all its forms, is known to have an impact on behavior in childhood and even in aging. In the delicate life period of transition from childhood to adulthood, music training might have a special role in behavioral and cognitive maturation. Among the several kinds of music training programs implemented in educational communities, we focused on the instrumental training incorporated in the public middle school curriculum in Italy, which includes individual, group, and collective (orchestral) lessons several times a week. At three middle schools, we tested 285 preadolescent children (aged 10–14 years) with a test and questionnaire battery including adaptive tests of visuo-spatial working memory skills (the Jack and Jill test), fluid intelligence (a matrix reasoning test), and music-related perceptual and memory abilities (listening tests). Of these children, 163 belonged to a music curriculum within the school and 122 to a standard curriculum. Significant differences between students of the music and standard curricula were found in both perceptual and cognitive domains, even when controlling for pre-existing individual differences in musical sophistication. Children in the music curriculum attending the third and final grade of middle school performed best and showed the largest advantage over the control group on both audiovisual working memory and fluid intelligence. Furthermore, gender differences in favor of females were found on several tests and across groups. The present results indicate that learning to play a musical instrument as part of the middle school curriculum represents a resource for preadolescent education. Although the current evidence is not sufficient to establish the causality of these effects, it can still guide future evaluations with longitudinal data.
Affiliation(s)
- Mariangela Lippolis
- Department of Teaching of Musical, Visual and Corporal Expression, University of Valencia, Valencia, Spain
- Daniel Müllensiefen
- Department of Psychology, Goldsmiths, University of London, London, United Kingdom
- Klaus Frieler
- Department of Methodology, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Benedetta Matarrelli
- Department of Clinical Medicine, Center for Music in the Brain (MIB), The Royal Academy of Music Aarhus and Aalborg, Aarhus University, Aarhus, Denmark
- Department of Education, Psychology, and Communication, University of Bari Aldo Moro, Bari, Italy
- Peter Vuust
- Department of Clinical Medicine, Center for Music in the Brain (MIB), The Royal Academy of Music Aarhus and Aalborg, Aarhus University, Aarhus, Denmark
- Rosalinda Cassibba
- Department of Education, Psychology, and Communication, University of Bari Aldo Moro, Bari, Italy
- Elvira Brattico
- Department of Clinical Medicine, Center for Music in the Brain (MIB), The Royal Academy of Music Aarhus and Aalborg, Aarhus University, Aarhus, Denmark
- Department of Education, Psychology, and Communication, University of Bari Aldo Moro, Bari, Italy
28
Domain-specific hearing-in-noise performance is associated with absolute pitch proficiency. Sci Rep 2022; 12:16344. [PMID: 36175508] [PMCID: PMC9521875] [DOI: 10.1038/s41598-022-20869-2]
Abstract
Recent evidence suggests that musicians may have an advantage over non-musicians in perceiving speech against noisy backgrounds. Previously, musicians have been compared as a homogeneous group, despite demonstrated heterogeneity, which may contribute to discrepancies between studies. Here, we investigated whether “quasi”-absolute pitch (AP) proficiency, viewed as a general trait that varies across a spectrum, accounts for the musician advantage in hearing-in-noise (HIN) performance, irrespective of whether the streams are speech or musical sounds. A cohort of 12 non-musicians and 42 trained musicians stratified into high, medium, or low AP proficiency identified speech or melody targets masked in noise (speech-shaped, multi-talker, and multi-music) under four signal-to-noise ratios (0, − 3, − 6, and − 9 dB). Cognitive abilities associated with HIN benefits, including auditory working memory and use of visuo-spatial cues, were assessed. AP proficiency was verified against pitch adjustment and relative pitch tasks. We found a domain-specific effect on HIN perception: quasi-AP abilities were related to improved perception of melody but not speech targets in noise. The quasi-AP advantage extended to tonal working memory and the use of spatial cues, but only during melodic stream segregation. Overall, the results do not support the putative musician advantage in speech-in-noise perception, but suggest a quasi-AP advantage in perceiving music in noisy environments.
29
Brown JA, Bidelman GM. Familiarity of Background Music Modulates the Cortical Tracking of Target Speech at the "Cocktail Party". Brain Sci 2022; 12:1320. [PMID: 36291252] [PMCID: PMC9599198] [DOI: 10.3390/brainsci12101320]
Abstract
The "cocktail party" problem-how a listener perceives speech in noisy environments-is typically studied using speech (multi-talker babble) or noise maskers. However, realistic cocktail party scenarios often include background music (e.g., coffee shops, concerts). Studies investigating music's effects on concurrent speech perception have predominantly used highly controlled synthetic music or shaped noise, which do not reflect naturalistic listening environments. Behaviorally, familiar background music and songs with vocals/lyrics inhibit concurrent speech recognition. Here, we investigated the neural bases of these effects. While recording multichannel EEG, participants listened to an audiobook while popular songs (or silence) played in the background at a 0 dB signal-to-noise ratio. Songs were either familiar or unfamiliar to listeners and featured either vocals or isolated instrumentals from the original audio recordings. Comprehension questions probed task engagement. We used temporal response functions (TRFs) to isolate cortical tracking to the target speech envelope and analyzed neural responses around 100 ms (i.e., auditory N1 wave). We found that speech comprehension was, expectedly, impaired during background music compared to silence. Target speech tracking was further hindered by the presence of vocals. When masked by familiar music, response latencies to speech were less susceptible to informational masking, suggesting concurrent neural tracking of speech was easier during music known to the listener. These differential effects of music familiarity were further exacerbated in listeners with less musical ability. Our neuroimaging results and their dependence on listening skills are consistent with early attentional-gain mechanisms where familiar music is easier to tune out (listeners already know the song's expectancies) and thus can allocate fewer attentional resources to the background music to better monitor concurrent speech material.
Affiliation(s)
- Jane A. Brown
- School of Communication Sciences and Disorders, University of Memphis, Memphis, TN 38152, USA
- Institute for Intelligent Systems, University of Memphis, Memphis, TN 38152, USA
- Gavin M. Bidelman
- School of Communication Sciences and Disorders, University of Memphis, Memphis, TN 38152, USA
- Institute for Intelligent Systems, University of Memphis, Memphis, TN 38152, USA
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN 47408, USA
- Program in Neuroscience, Indiana University, Bloomington, IN 47405, USA
30
Mednicoff SD, Barashy S, Gonzales D, Benning SD, Snyder JS, Hannon EE. Auditory affective processing, musicality, and the development of misophonic reactions. Front Neurosci 2022; 16:924806. [PMID: 36213735] [PMCID: PMC9537735] [DOI: 10.3389/fnins.2022.924806]
Abstract
Misophonia can be characterized both as a condition and as a negative affective experience. Misophonia is described as feeling irritation or disgust in response to hearing certain sounds, such as eating, drinking, gulping, and breathing. Although the earliest misophonic experiences are often described as occurring during childhood, relatively little is known about the developmental pathways that lead to individual variation in these experiences. This literature review discusses evidence of misophonic reactions during childhood and explores the possibility that early heightened sensitivities to both positive and negative sounds, such as to music, might indicate a vulnerability for misophonia and misophonic reactions. We will review when misophonia may develop, how it is distinguished from other auditory conditions (e.g., hyperacusis, phonophobia, or tinnitus), and how it relates to developmental disorders (e.g., autism spectrum disorder or Williams syndrome). Finally, we explore the possibility that children with heightened musicality could be more likely to experience misophonic reactions and develop misophonia.
31
Lai J, Price CN, Bidelman GM. Brainstem speech encoding is dynamically shaped online by fluctuations in cortical α state. Neuroimage 2022; 263:119627. [PMID: 36122686] [PMCID: PMC10017375] [DOI: 10.1016/j.neuroimage.2022.119627]
Abstract
Experimental evidence in animals demonstrates that cortical neurons innervate the subcortex bilaterally to tune brainstem auditory coding. Yet the role of the descending (corticofugal) auditory system in modulating earlier sound processing in humans during speech perception remains unclear. Here, we measured EEG activity as listeners performed speech identification tasks in different noise backgrounds designed to tax perceptual and attentional processing. We hypothesized that brainstem speech coding might be tied to attention and arousal states (indexed by cortical α power) that actively modulate the interplay of brainstem-cortical signal processing. When speech-evoked brainstem frequency-following responses (FFRs) were categorized according to cortical α states, we found that low-α FFRs in noise were weaker, correlated positively with behavioral response times, and were more "decodable" via neural classifiers. Our data provide new evidence for online corticofugal interplay in humans and establish that brainstem sensory representations are continuously yoked to (i.e., modulated by) the ebb and flow of cortical states to dynamically update perceptual processing.
Affiliation(s)
- Jesyin Lai
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA; Diagnostic Imaging Department, St. Jude Children's Research Hospital, Memphis, TN, USA.
- Caitlin N Price
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA; Department of Audiology and Speech Pathology, University of Arkansas for Medical Sciences, Little Rock, AR, USA
- Gavin M Bidelman
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA; Department of Speech, Language and Hearing Sciences, Indiana University, 2631 East Discovery Parkway, Bloomington, IN 47408, USA; Program in Neuroscience, Indiana University, 1101 E 10th St, Bloomington, IN 47405, USA.
32
Neves L, Correia AI, Castro SL, Martins D, Lima CF. Does music training enhance auditory and linguistic processing? A systematic review and meta-analysis of behavioral and brain evidence. Neurosci Biobehav Rev 2022; 140:104777. [PMID: 35843347] [DOI: 10.1016/j.neubiorev.2022.104777]
Abstract
It is often claimed that music training improves auditory and linguistic skills. Results of individual studies are mixed, however, and most evidence is correlational, precluding inferences of causation. Here, we evaluated data from 62 longitudinal studies that examined whether music training programs affect behavioral and brain measures of auditory and linguistic processing (N = 3928). For the behavioral data, a multivariate meta-analysis revealed a small positive effect of music training on both auditory and linguistic measures, regardless of the type of assignment (random vs. non-random), training (instrumental vs. non-instrumental), and control group (active vs. passive). The trim-and-fill method provided suggestive evidence of publication bias, but meta-regression methods (PET-PEESE) did not. For the brain data, a narrative synthesis also documented benefits of music training, namely for measures of auditory processing and for measures of speech and prosody processing. Thus, the available literature provides evidence that music training produces small neurobehavioral enhancements in auditory and linguistic processing, although future studies are needed to confirm that such enhancements are not due to publication bias.
Affiliation(s)
- Leonor Neves
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal
- Ana Isabel Correia
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal
- São Luís Castro
- Centro de Psicologia da Universidade do Porto (CPUP), Faculdade de Psicologia e de Ciências da Educação da Universidade do Porto (FPCEUP), Porto, Portugal
- Daniel Martins
- Department of Neuroimaging, Institute of Psychiatry, Psychology and Neuroscience, King's College London, UK; NIHR Maudsley Biomedical Research Centre (BRC), South London and Maudsley NHS Foundation Trust, London, UK
- César F Lima
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal.
33
Samiotis IP, Qiu S, Lofi C, Yang J, Gadiraju U, Bozzon A. An Analysis of Music Perception Skills on Crowdsourcing Platforms. Front Artif Intell 2022; 5:828733. [PMID: 35774636] [PMCID: PMC9237482] [DOI: 10.3389/frai.2022.828733]
Abstract
Music content annotation campaigns are common on paid crowdsourcing platforms. Crowd workers are expected to annotate complex music artifacts, a task that often demands specialized skills and expertise, so selecting the right participants is crucial for campaign success. However, there is a general lack of deeper understanding of the distribution of musical skills, and especially auditory perception skills, in the worker population. To address this knowledge gap, we conducted a user study (N = 200) on Prolific and Amazon Mechanical Turk. We asked crowd workers to indicate their musical sophistication through a questionnaire and assessed their music perception skills through an audio-based skill test. The goal of this work is to better understand the extent to which crowd workers possess high perception skills, beyond their own musical education level and self-reported abilities. Our study shows that untrained crowd workers can possess high perception skills for the musical elements of melody, tuning, accent, and tempo; skills that can be useful in a wide range of annotation tasks in the music domain.
Affiliation(s)
- Ioannis Petros Samiotis
- Department of Software Technology, Delft University of Technology, Delft, Netherlands
- Sihang Qiu
- Department of Software Technology, Delft University of Technology, Delft, Netherlands
- Hunan Institute of Advanced Technology, Changsha, China
- Christoph Lofi
- Department of Software Technology, Delft University of Technology, Delft, Netherlands
- Jie Yang
- Department of Software Technology, Delft University of Technology, Delft, Netherlands
- Ujwal Gadiraju
- Department of Software Technology, Delft University of Technology, Delft, Netherlands
- Alessandro Bozzon
- Department of Software Technology, Delft University of Technology, Delft, Netherlands
34
Zendel BR. The importance of the motor system in the development of music-based forms of auditory rehabilitation. Ann N Y Acad Sci 2022; 1515:10-19. [PMID: 35648040] [DOI: 10.1111/nyas.14810]
Abstract
Hearing abilities decline with age, and one of the most commonly reported hearing issues in older adults is a difficulty understanding speech when there is loud background noise. Understanding speech in noise relies on numerous cognitive processes, including working memory, and is supported by numerous brain regions, including the motor and motor planning systems. Indeed, many working memory processes are supported by motor and premotor cortical regions. Interestingly, lifelong musicians and nonmusicians given music training over the course of weeks or months show an improved ability to understand speech when there is loud background noise. These benefits are associated with enhanced working memory abilities, and enhanced activity in motor and premotor cortical regions. Accordingly, it is likely that music training improves the coupling between the auditory and motor systems and promotes plasticity in these regions and regions that feed into auditory/motor areas. This leads to an enhanced ability to dynamically process incoming acoustic information, and is likely the reason that musicians and those who receive laboratory-based music training are better able to understand speech when there is background noise. Critically, these findings suggest that music-based forms of auditory rehabilitation are possible and should focus on tasks that promote auditory-motor interactions.
Affiliation(s)
- Benjamin Rich Zendel
- Faculty of Medicine, Memorial University of Newfoundland, St. John's, Newfoundland and Labrador, Canada
- Aging Research Centre - Newfoundland and Labrador, Grenfell Campus, Memorial University, Corner Brook, Newfoundland and Labrador, Canada
35
Carter JA, Buder EH, Bidelman GM. Nonlinear dynamics in auditory cortical activity reveal the neural basis of perceptual warping in speech categorization. JASA Express Lett 2022; 2:045201. [PMID: 35434716] [PMCID: PMC8984957] [DOI: 10.1121/10.0009896]
Abstract
Surrounding context influences speech listening, resulting in dynamic shifts to category percepts. To examine its neural basis, event-related potentials (ERPs) were recorded during vowel identification with continua presented in random, forward, and backward orders to induce perceptual warping. Behaviorally, sequential order shifted individual listeners' categorical boundary, versus random delivery, revealing perceptual warping (biasing) of the heard phonetic category dependent on recent stimulus history. ERPs revealed later (∼300 ms) activity localized to superior temporal and middle/inferior frontal gyri that predicted listeners' hysteresis/enhanced contrast magnitudes. Findings demonstrate that interactions between frontotemporal brain regions govern top-down, stimulus history effects on speech categorization.
Affiliation(s)
- Jared A Carter
- Institute for Intelligent Systems, University of Memphis, Memphis, Tennessee 38152, USA
- Eugene H Buder
- School of Communication Sciences and Disorders, University of Memphis, Memphis, Tennessee 38152, USA
- Gavin M Bidelman
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, Indiana 47408, USA
36
Hennessy S, Mack WJ, Habibi A. Speech-in-noise perception in musicians and non-musicians: A multi-level meta-analysis. Hear Res 2022; 416:108442. [PMID: 35078132] [DOI: 10.1016/j.heares.2022.108442]
Abstract
Speech-in-noise perception, the ability to hear a relevant voice within a noisy background, is important for successful communication. Musicians have been reported to perform better than non-musicians on speech-in-noise tasks. This meta-analysis uses a multi-level design to assess the claim that musicians have superior speech-in-noise abilities compared to non-musicians. Across 31 studies and 62 effect sizes, the overall effect of musician status on speech-in-noise ability is significant, with a moderate effect size (g = 0.58), 95% CI [0.42, 0.74]. The overall effect of musician status was not moderated by within-study IQ equivalence, target stimulus, target contextual information, type of background noise, or age. We conclude that musicians show superior speech-in-noise abilities compared to non-musicians, not modified by age, IQ, or speech task parameters. These effects may reflect changes due to music training or predisposed auditory advantages that encourage musicianship.
Affiliation(s)
- Sarah Hennessy
- Brain and Creativity Institute, University of Southern California, Los Angeles, CA, United States
- Wendy J Mack
- Department of Population and Public Health Sciences, University of Southern California, Los Angeles, CA, United States
- Assal Habibi
- Brain and Creativity Institute, University of Southern California, Los Angeles, CA, United States.
37
Lad M, Billig AJ, Kumar S, Griffiths TD. A specific relationship between musical sophistication and auditory working memory. Sci Rep 2022; 12:3517. [PMID: 35241747] [PMCID: PMC8894429] [DOI: 10.1038/s41598-022-07568-8]
Abstract
Previous studies have reported conflicting results on the relationship between individual measures related to music and fundamental aspects of auditory perception and cognition. The results have been difficult to compare because different musical measures were used and the auditory perceptual and cognitive measures lacked uniformity. In this study we used a general construct of musicianship, musical sophistication, that can be applied to populations with widely different backgrounds. We investigated the relationship between musical sophistication and measures of perception and working memory for sound by using a task suitable for measuring both. We related scores from the Goldsmiths Musical Sophistication Index to performance on tests of perception and working memory for two acoustic features: frequency and amplitude modulation. The data show that musical sophistication scores are best related to working memory for frequency in an analysis that accounts for age and non-verbal intelligence. Musical sophistication was not significantly associated with working memory for amplitude modulation rate or with the perception of either acoustic feature. The work supports a specific association between musical sophistication and working memory for sound frequency.
Affiliation(s)
- Meher Lad
- Translational and Clinical Research Institute, Newcastle University, Newcastle upon Tyne, UK
- Timothy D Griffiths
- Human Brain Research Laboratory, University of Iowa, Iowa, USA
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, UK
- Wellcome Centre for Human Neuroimaging, University College London, London, UK
38
Amateur singing benefits speech perception in aging under certain conditions of practice: behavioural and neurobiological mechanisms. Brain Struct Funct 2022; 227:943-962. [PMID: 35013775; DOI: 10.1007/s00429-021-02433-2]
Abstract
Limited evidence has shown that practising musical activities in aging, such as choral singing, could lessen age-related speech perception in noise (SPiN) difficulties. However, the robustness and underlying mechanism of action of this phenomenon remain unclear. In this study, we used surface-based morphometry combined with a moderated mediation analytic approach to examine whether singing-related plasticity in auditory and dorsal speech stream regions is associated with better SPiN capabilities. 36 choral singers and 36 non-singers aged 20-87 years underwent cognitive, auditory, and SPiN assessments. Our results provide important new insights into experience-dependent plasticity by revealing that, under certain conditions of practice, amateur choral singing is associated with age-dependent structural plasticity within auditory and dorsal speech regions, which is associated with better SPiN performance in aging. Specifically, the conditions of practice that were associated with benefits on SPiN included frequent weekly practice at home, several hours of weekly group singing practice, singing in multiple languages, and having received formal singing training. These results suggest that amateur choral singing is associated with improved SPiN through a dual mechanism involving auditory processing and auditory-motor integration and may be dose dependent, with more intense singing associated with greater benefit. Our results, thus, reveal that the relationship between singing practice and SPiN is complex, and underscore the importance of considering singing practice behaviours in understanding the effects of musical activities on the brain-behaviour relationship.
39
Rimmele JM, Kern P, Lubinus C, Frieler K, Poeppel D, Assaneo MF. Musical Sophistication and Speech Auditory-Motor Coupling: Easy Tests for Quick Answers. Front Neurosci 2022; 15:764342. [PMID: 35058741; PMCID: PMC8763673; DOI: 10.3389/fnins.2021.764342]
Abstract
Musical training enhances auditory-motor cortex coupling, which in turn facilitates music and speech perception. How tightly the temporal processing of music and speech is intertwined is a topic of current research. We investigated the relationship between musical sophistication (Goldsmiths Musical Sophistication Index, Gold-MSI) and spontaneous speech-to-speech synchronization behavior as an indirect measure of speech auditory-motor cortex coupling strength. In a group of participants (n = 196), we tested whether the outcome of the spontaneous speech-to-speech synchronization test (SSS-test) can be inferred from self-reported musical sophistication. Participants were classified as high (HIGHs) or low (LOWs) synchronizers according to the SSS-test. HIGHs scored higher than LOWs on all Gold-MSI subscales (General Score, Active Engagement, Musical Perception, Musical Training, Singing Skills) except the Emotional Attachment scale. More specifically, compared to a previously reported German-speaking sample, HIGHs overall scored higher and LOWs lower. Compared to an estimated distribution of the English-speaking general population, our sample overall scored lower, with the scores of LOWs differing significantly from the normal distribution, falling in the ∼30th percentile. While HIGHs more often reported musical training than LOWs, the distribution of training instruments did not vary across groups. Importantly, even after the highly correlated subscores of the Gold-MSI were decorrelated, the Musical Perception and Musical Training subscales in particular allowed the speech-to-speech synchronization behavior to be inferred. Differential effects of musical perception and training were observed, with training predicting audio-motor synchronization in both groups, but perception only in the HIGHs. Our findings suggest that speech auditory-motor cortex coupling strength can be inferred from training and perceptual aspects of musical sophistication, pointing to shared mechanisms involved in speech and music perception.
Affiliation(s)
- Johanna M. Rimmele
- Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Frankfurt, Germany
- Max Planck NYU Center for Language, Music and Emotion, New York, NY, United States
- Pius Kern
- Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Frankfurt, Germany
- Christina Lubinus
- Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Frankfurt, Germany
- Klaus Frieler
- Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Frankfurt, Germany
- David Poeppel
- Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Frankfurt, Germany
- Max Planck NYU Center for Language, Music and Emotion, New York, NY, United States
- Department of Psychology, New York University, New York, NY, United States
- Ernst Strüngmann Institute for Neuroscience, Frankfurt, Germany
- M. Florencia Assaneo
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Querétaro, México
40
Gustavson DE, Friedman NP, Stallings MC, Reynolds CA, Coon H, Corley RP, Hewitt JK, Gordon RL. Musical instrument engagement in adolescence predicts verbal ability 4 years later: A twin and adoption study. Dev Psychol 2021; 57:1943-1957. [PMID: 34914455; PMCID: PMC8842509; DOI: 10.1037/dev0001245]
Abstract
Individual differences in music traits are heritable and correlated with the development of cognitive and communication skills, but little is known about whether diverse modes of music engagement (e.g., playing instruments vs. singing) reflect similar underlying genetic/environmental influences. Moreover, the biological etiology underlying the relationship between musicality and childhood language development is poorly understood. Here we explored genetic and environmental associations between music engagement and verbal ability in the Colorado Adoption/Twin Study of Lifespan behavioral development & cognitive aging (CATSLife). Adolescents (N = 1,684) completed measures of music engagement and intelligence at approximately age 12 and/or multiple tests of verbal ability at age 16. Structural equation models revealed that instrument engagement was highly heritable (a² = .78), with moderate heritability of singing (a² = .43) and dance engagement (a² = .66). Adolescent self-reported instrument engagement (but not singing or dance engagement) was genetically correlated with age 12 verbal intelligence and remained associated with age 16 verbal ability even when controlling for age 12 full-scale intelligence, providing evidence for a longitudinal relationship between music engagement and language beyond shared general cognitive processes. Together, these novel findings suggest that shared genetic influences in part account for phenotypic associations between music engagement and language, but there may also be some (weak) direct benefits of music engagement on later language abilities.
Affiliation(s)
- Daniel E. Gustavson
- Department of Medicine, Vanderbilt University Medical Center, Nashville, TN
- Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, TN
- Naomi P. Friedman
- Institute for Behavioral Genetics, University of Colorado Boulder, Boulder, CO
- Department of Psychology and Neuroscience, University of Colorado Boulder, Boulder, CO
- Michael C. Stallings
- Institute for Behavioral Genetics, University of Colorado Boulder, Boulder, CO
- Department of Psychology and Neuroscience, University of Colorado Boulder, Boulder, CO
- Hilary Coon
- Department of Psychiatry, University of Utah, Salt Lake City, UT
- Robin P. Corley
- Department of Psychology and Neuroscience, University of Colorado Boulder, Boulder, CO
- John K. Hewitt
- Institute for Behavioral Genetics, University of Colorado Boulder, Boulder, CO
- Reyna L. Gordon
- Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, TN
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN
- Department of Psychology, Vanderbilt University, Nashville, TN
41
Fiveash A, Bedoin N, Gordon RL, Tillmann B. Processing rhythm in speech and music: Shared mechanisms and implications for developmental speech and language disorders. Neuropsychology 2021; 35:771-791. [PMID: 34435803; PMCID: PMC8595576; DOI: 10.1037/neu0000766]
Abstract
OBJECTIVE: Music and speech are complex signals containing regularities in how they unfold in time. Similarities between music and speech/language in terms of their auditory features, rhythmic structure, and hierarchical structure have led to a large body of literature suggesting connections between the two domains. However, the precise underlying mechanisms behind this connection remain to be elucidated.
METHOD: In this theoretical review article, we synthesize previous research and present a framework of potentially shared neural mechanisms for music and speech rhythm processing. We outline structural similarities of rhythmic signals in music and speech, synthesize prominent music and speech rhythm theories, discuss impaired timing in developmental speech and language disorders, and discuss music rhythm training as an additional, potentially effective therapeutic tool to enhance speech/language processing in these disorders.
RESULTS: We propose the processing rhythm in speech and music (PRISM) framework, which outlines three underlying mechanisms that appear to be shared across music and speech/language processing: precise auditory processing, synchronization/entrainment of neural oscillations to external stimuli, and sensorimotor coupling. The goal of this framework is to inform directions for future research that integrate cognitive and biological evidence for relationships between rhythm processing in music and speech.
CONCLUSION: The current framework can be used as a basis to investigate potential links between observed timing deficits in developmental disorders, impairments in the proposed mechanisms, and pathology-specific deficits that can be targeted in treatment and training supporting speech therapy outcomes. On these grounds, we propose future research directions and discuss implications of our framework.
Affiliation(s)
- Anna Fiveash
- Lyon Neuroscience Research Center, CRNL, CNRS, UMR5292, INSERM, U1028, F-69000, Lyon, France
- University Lyon 1, Lyon, France
- Nathalie Bedoin
- Lyon Neuroscience Research Center, CRNL, CNRS, UMR5292, INSERM, U1028, F-69000, Lyon, France
- University Lyon 1, Lyon, France
- University of Lyon 2, CNRS, UMR5596, Lyon, F-69000, France
- Reyna L. Gordon
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee
- Vanderbilt Genetics Institute, Vanderbilt University, Nashville, Tennessee
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, Tennessee
- Barbara Tillmann
- Lyon Neuroscience Research Center, CRNL, CNRS, UMR5292, INSERM, U1028, F-69000, Lyon, France
- University Lyon 1, Lyon, France
42
Nisha KV, Neelamegarajan D, Nayagam NN, Winston JS, Anil SP. Musical Aptitude as a Variable in the Assessment of Working Memory and Selective Attention Tasks. J Audiol Otol 2021; 25:178-188. [PMID: 34649418; PMCID: PMC8524116; DOI: 10.7874/jao.2021.00171]
Abstract
Background and Objectives: The influence of musical aptitude on cognitive test performance in musicians is a long-debated research question. Evidence points to lower performance of nonmusicians on visual and auditory cognitive tasks (working memory and attention) compared with musicians. This cannot be generalized to all nonmusicians, as a subgroup of this population can have innate musical abilities even without formal musical training. The present study aimed to examine the effect of musical aptitude on working memory and selective attention.
Subjects and Methods: Three groups of 20 individuals each (60 participants in total), comprising trained musicians, nonmusicians with good musical aptitude, and nonmusicians with low musical aptitude, participated in the present study. Cognitive-based visual (Flanker's selective attention test) and auditory (working memory tests: backward digit span and operation span) tests were administered.
Results: MANOVA (followed by ANOVA) revealed a benefit of musicianship and musical aptitude on backward digit span and Flanker's reaction time (p<0.05). Discriminant function analyses showed that the groups could be effectively (accuracy, 80%) segregated based on the backward digit span and Flanker's selective attention test. Trained musicians and nonmusicians with good musical aptitude formed one cluster, while nonmusicians with low musical aptitude formed another, hinting at the role of musical aptitude in working memory and selective attention.
Conclusions: Nonmusicians with good musical aptitude can have enhanced working memory and selective attention skills like musicians. Hence, caution is required when such individuals are included as controls in cognitive-based visual and auditory experiments.
Affiliation(s)
- Kavassery Venkateswaran Nisha
- Department of Audiology, All India Institute of Speech and Hearing, Naimisham Campus, Manasagangothri, Mysore, India
- Devi Neelamegarajan
- Department of Audiology, All India Institute of Speech and Hearing, Naimisham Campus, Manasagangothri, Mysore, India
- Nishant N Nayagam
- Department of Audiology, All India Institute of Speech and Hearing, Naimisham Campus, Manasagangothri, Mysore, India
- Jim Saroj Winston
- Department of Audiology, All India Institute of Speech and Hearing, Naimisham Campus, Manasagangothri, Mysore, India
- Sam Publius Anil
- Department of Audiology, All India Institute of Speech and Hearing, Naimisham Campus, Manasagangothri, Mysore, India
43
Zhang X, Gong Q. Context-dependent Plasticity and Strength of Subcortical Encoding of Musical Sounds Independently Underlie Pitch Discrimination for Music Melodies. Neuroscience 2021; 472:68-89. [PMID: 34358631; DOI: 10.1016/j.neuroscience.2021.07.032]
Abstract
Subcortical auditory nuclei contribute to pitch perception, but how subcortical sound encoding is related to pitch processing for music perception remains unclear. Conventionally, enhanced subcortical sound encoding is considered underlying superior pitch discrimination. However, associations between superior auditory perception and the context-dependent plasticity of subcortical sound encoding are also documented. Here, we explored the subcortical neural correlates to music pitch perception by analyzing frequency-following responses (FFRs) to musical sounds presented in a predictable context and a random context. We found that the FFR inter-trial phase-locking (ITPL) was negatively correlated with behavioral performances of discrimination of pitches in music melodies. It was also negatively correlated with the plasticity indices measuring the variability of FFRs to physically identical sounds between the two contexts. The plasticity indices were consistently positively correlated with pitch discrimination performances, suggesting the subcortical context-dependent plasticity underlying music pitch perception. Moreover, the raw FFR spectral strength was not significantly correlated with pitch discrimination performances. However, it was positively correlated with behavioral performances when the FFR ITPL was controlled by partial correlations, suggesting that the strength of subcortical sound encoding underlies music pitch perception. When the spectral strength was controlled by partial correlations, the negative ITPL-behavioral correlations were maintained. Furthermore, the FFR ITPL, the plasticity indices, and the FFR spectral strength were more correlated with pitch than with rhythm discrimination performances. These findings suggest that the context-dependent plasticity and the strength of subcortical encoding of musical sounds are independently and perhaps specifically associated with pitch perception for music melodies.
Affiliation(s)
- Xiaochen Zhang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Qin Gong
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- School of Medicine, Shanghai University, Shanghai, China
44
Rajan A, Shah A, Ingalhalikar M, Singh NC. Structural connectivity predicts sequential processing differences in music perception ability. Eur J Neurosci 2021; 54:6093-6103. [PMID: 34340255; DOI: 10.1111/ejn.15407]
Abstract
To relate individual differences in music perception ability with whole brain white matter connectivity, we scanned a group of 27 individuals with varying degrees of musical training and assessed musical ability in sensory and sequential music perception domains using the Profile of Music Perception Skills-Short version (PROMS-S). Sequential processing ability was estimated by combining performance on tasks for Melody, Standard Rhythm, Embedded Rhythm, and Accent subscores while sensory processing ability was ascertained via tasks of Tempo, Pitch, Timbre, and Tuning. Controlling for musical training, gender, and years of training, network-based statistics revealed positive linear associations between total PROMS-S scores and increased interhemispheric fronto-temporal and parieto-frontal white matter connectivity, suggesting a distinct segregated structural network for music perception. Secondary analysis revealed two subnetworks for sequential processing ability, one comprising ventral fronto-temporal and subcortical regions and the other comprising dorsal fronto-temporo-parietal regions. A graph-theoretic analysis to characterize the structural network revealed a positive association of modularity of the whole brain structural connectome with the d' total score. In addition, the nodal degree of the right posterior cingulate cortex also showed a significant positive correlation with the total d' score. Our results suggest that a distinct structural network of connectivity across fronto-temporal, cerebellar, and cerebro-subcortical regions is associated with music processing abilities and the right posterior cingulate cortex mediates the connectivity of this network.
Affiliation(s)
- Archith Rajan
- Symbiosis Centre for Medical Image Analysis, Symbiosis International (Deemed University), Pune, India
- Apurva Shah
- Symbiosis Centre for Medical Image Analysis, Symbiosis International (Deemed University), Pune, India
- Madhura Ingalhalikar
- Symbiosis Centre for Medical Image Analysis, Symbiosis International (Deemed University), Pune, India
- Nandini Chatterjee Singh
- Language Literacy and Music Laboratory, National Brain Research Centre (Deemed University), Manesar, India
- Science of Learning, UNESCO Mahatma Gandhi Institute of Education for Peace and Sustainable Development, New Delhi, India
45
Gustavson DE, Coleman PL, Iversen JR, Maes HH, Gordon RL, Lense MD. Mental health and music engagement: review, framework, and guidelines for future studies. Transl Psychiatry 2021; 11:370. [PMID: 34226495; PMCID: PMC8257764; DOI: 10.1038/s41398-021-01483-8]
Abstract
Is engaging with music good for your mental health? This question has long been the topic of empirical clinical and nonclinical investigations, with studies indicating positive associations between music engagement and quality of life, reduced depression or anxiety symptoms, and less frequent substance use. However, many earlier investigations were limited by small populations and methodological limitations, and it has also been suggested that aspects of music engagement may even be associated with worse mental health outcomes. The purpose of this scoping review is first to summarize the existing state of music engagement and mental health studies, identifying their strengths and weaknesses. We focus on broad domains of mental health diagnoses including internalizing psychopathology (e.g., depression and anxiety symptoms and diagnoses), externalizing psychopathology (e.g., substance use), and thought disorders (e.g., schizophrenia). Second, we propose a theoretical model to inform future work that describes the importance of simultaneously considering music-mental health associations at the levels of (1) correlated genetic and/or environmental influences vs. (bi)directional associations, (2) interactions with genetic risk factors, (3) treatment efficacy, and (4) mediation through brain structure and function. Finally, we describe how recent advances in large-scale data collection, including genetic, neuroimaging, and electronic health record studies, allow for a more rigorous examination of these associations that can also elucidate their neurobiological substrates.
Affiliation(s)
- Daniel E. Gustavson
- Department of Medicine, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, TN, USA
- Peyton L. Coleman
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- John R. Iversen
- Swartz Center for Computational Neuroscience, Institute for Neural Computation, University of California, San Diego, La Jolla, CA, USA
- Hermine H. Maes
- Department of Human and Molecular Genetics, Virginia Institute for Psychiatric and Behavioral Genetics, Virginia Commonwealth University, Richmond, VA, USA
- Department of Psychiatry, Virginia Institute for Psychiatric and Behavioral Genetics, Virginia Commonwealth University, Richmond, VA, USA
- Massey Cancer Center, Virginia Commonwealth University, Richmond, VA, USA
- Reyna L. Gordon
- Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- The Curb Center, Vanderbilt University, Nashville, TN, USA
- Miriam D. Lense
- Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- The Curb Center, Vanderbilt University, Nashville, TN, USA
46
Perron M, Theaud G, Descoteaux M, Tremblay P. The frontotemporal organization of the arcuate fasciculus and its relationship with speech perception in young and older amateur singers and non-singers. Hum Brain Mapp 2021; 42:3058-3076. [PMID: 33835629; PMCID: PMC8193549; DOI: 10.1002/hbm.25416]
Abstract
The ability to perceive speech in noise (SPiN) declines with age. Although the etiology of SPiN decline is not well understood, accumulating evidence suggests a role for the dorsal speech stream. While age‐related decline within the dorsal speech stream would negatively affect SPiN performance, experience‐induced neuroplastic changes within the dorsal speech stream could positively affect SPiN performance. Here, we investigated the relationship between SPiN performance and the structure of the arcuate fasciculus (AF), which forms the white matter scaffolding of the dorsal speech stream, in aging singers and non‐singers. Forty‐three non‐singers and 41 singers aged 20 to 87 years old completed a hearing evaluation and a magnetic resonance imaging session that included High Angular Resolution Diffusion Imaging. The groups were matched for sex, age, education, handedness, cognitive level, and musical instrument experience. A subgroup of participants completed syllable discrimination in the noise task. The AF was divided into 10 segments to explore potential local specializations for SPiN. The results show that, in carefully matched groups of singers and non‐singers (a) myelin and/or axonal membrane deterioration within the bilateral frontotemporal AF segments are associated with SPiN difficulties in aging singers and non‐singers; (b) the structure of the AF is different in singers and non‐singers; (c) these differences are not associated with a benefit on SPiN performance for singers. This study clarifies the etiology of SPiN difficulties by supporting the hypothesis for the role of aging of the dorsal speech stream.
Affiliation(s)
- Maxime Perron
- CERVO Brain Research Center, Quebec City, Quebec, Canada
- Département de Réadaptation, Université Laval, Faculté de Médecine, Quebec City, Quebec, Canada
- Guillaume Theaud
- Sherbrooke Connectivity Imaging Lab (SCIL), Computer Science Department, Université de Sherbrooke, Sherbrooke, Quebec, Canada
- Maxime Descoteaux
- Sherbrooke Connectivity Imaging Lab (SCIL), Computer Science Department, Université de Sherbrooke, Sherbrooke, Quebec, Canada
- Pascale Tremblay
- CERVO Brain Research Center, Quebec City, Quebec, Canada
- Département de Réadaptation, Université Laval, Faculté de Médecine, Quebec City, Quebec, Canada
47
Hennessy S, Wood A, Wilcox R, Habibi A. Neurophysiological improvements in speech-in-noise task after short-term choir training in older adults. Aging (Albany NY) 2021; 13:9468-9495. [PMID: 33824226; PMCID: PMC8064162; DOI: 10.18632/aging.202931]
Abstract
Perceiving speech in noise (SIN) is important for health and well-being and decreases with age. Musicians show improved speech-in-noise abilities and reduced age-related auditory decline, yet it is unclear whether short-term music engagement has similar effects. In this randomized controlled trial we used a pre-post design to investigate whether a 12-week music intervention in adults aged 50-65 without prior music training and with subjective hearing loss improves well-being, speech-in-noise abilities, and auditory encoding and voluntary attention as indexed by auditory evoked potentials (AEPs) in a syllable-in-noise task, and later AEPs in an oddball task. Age- and gender-matched adults were randomized to a choir or control group. Choir participants sang in a 2-hr ensemble with 1-hr home vocal training weekly; controls listened to a 3-hr playlist weekly, attended concerts, and socialized online with fellow participants. From pre- to post-intervention, no differences between groups were observed on quantitative measures of well-being or behavioral speech-in-noise abilities. In the choir group, but not the control group, changes in the N1 component were observed for the syllable-in-noise task, with increased N1 amplitude in the passive condition and decreased N1 latency in the active condition. During the oddball task, larger N1 amplitudes to the frequent standard stimuli were also observed in the choir but not the control group from pre to post intervention. Findings have implications for the potential role of music training in improving sound encoding in individuals who are in the vulnerable age range and at risk of auditory decline.
Affiliation(s)
- Sarah Hennessy
- Brain and Creativity Institute, University of Southern California, Los Angeles, CA 90089, USA
- Alison Wood
- Brain and Creativity Institute, University of Southern California, Los Angeles, CA 90089, USA
- Rand Wilcox
- Department of Psychology, University of Southern California, Los Angeles, CA 90089, USA
- Assal Habibi
- Brain and Creativity Institute, University of Southern California, Los Angeles, CA 90089, USA
48
Price CN, Bidelman GM. Attention reinforces human corticofugal system to aid speech perception in noise. Neuroimage 2021; 235:118014. [PMID: 33794356; PMCID: PMC8274701; DOI: 10.1016/j.neuroimage.2021.118014]
Abstract
Perceiving speech-in-noise (SIN) demands precise neural coding between brainstem and cortical levels of the hearing system. Attentional processes can then select and prioritize task-relevant cues over competing background noise for successful speech perception. In animal models, brainstem-cortical interplay is achieved via descending corticofugal projections from cortex that shape midbrain responses to behaviorally-relevant sounds. Attentional engagement of corticofugal feedback may assist SIN understanding but has never been confirmed and remains highly controversial in humans. To resolve these issues, we recorded source-level, anatomically constrained brainstem frequency-following responses (FFRs) and cortical event-related potentials (ERPs) to speech via high-density EEG while listeners performed rapid SIN identification tasks. We varied attention with active vs. passive listening scenarios whereas task difficulty was manipulated with additive noise interference. Active listening (but not arousal-control tasks) exaggerated both ERPs and FFRs, confirming attentional gain extends to lower subcortical levels of speech processing. We used functional connectivity to measure the directed strength of coupling between levels and characterize "bottom-up" vs. "top-down" (corticofugal) signaling within the auditory brainstem-cortical pathway. While attention strengthened connectivity bidirectionally, corticofugal transmission disengaged under passive (but not active) SIN listening. Our findings (i) show attention enhances the brain's transcription of speech even prior to cortex and (ii) establish a direct role of the human corticofugal feedback system as an aid to cocktail party speech perception.
Affiliation(s)
- Caitlin N Price
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, 4055 North Park Loop, Memphis, TN 38152, USA.
- Gavin M Bidelman
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, 4055 North Park Loop, Memphis, TN 38152, USA; Department of Anatomy and Neurobiology, University of Tennessee Health Sciences Center, Memphis, TN, USA.
49
The Musical Ear Test: Norms and correlates from a large sample of Canadian undergraduates. Behav Res Methods 2021; 53:2007-2024. [PMID: 33704673 DOI: 10.3758/s13428-020-01528-8] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/16/2020] [Indexed: 02/06/2023]
Abstract
We sought to establish norms and correlates for the Musical Ear Test (MET), an objective test of musical ability. A large sample of undergraduates at a Canadian university (N > 500) took the 20-min test, which provided a Total score as well as separate scores for its Melody and Rhythm subtests. On each trial, listeners judged whether standard and comparison auditory sequences were the same or different. Norms were derived as percentiles, Z-scores, and T-scores. The distribution of scores was approximately normal without floor or ceiling effects. There were no gender differences on either subtest or the total score. As expected, scores on both subtests were correlated with performance on a test of immediate recall for nonmusical auditory stimuli (Digit Span Forward). Moreover, as duration of music training increased, so did performance on both subtests, but starting lessons at a younger age was not predictive of better musical abilities. Listeners who spoke a tone language exhibited enhanced performance on the Melody subtest but not on the Rhythm subtest. The MET appears to have adequate psychometric characteristics that make it suitable for researchers who seek to measure musical abilities objectively.
50
Auditory categorical processing for speech is modulated by inherent musical listening skills. Neuroreport 2021; 31:162-166. [PMID: 31834142 DOI: 10.1097/wnr.0000000000001369] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
During successful auditory perception, the human brain classifies diverse acoustic information into meaningful groupings, a process known as categorical perception (CP). Intense auditory experiences (e.g., musical training and language expertise) shape categorical representations necessary for speech identification and novel sound-to-meaning learning, but little is known concerning the role of innate auditory function in CP. Here, we tested whether listeners vary in their intrinsic abilities to categorize complex sounds and individual differences in the underlying auditory brain mechanisms. To this end, we recorded EEGs in individuals without formal music training but who differed in their inherent auditory perceptual abilities (i.e., musicality) as they rapidly categorized sounds along a speech vowel continuum. Behaviorally, individuals with naturally more adept listening skills ("musical sleepers") showed enhanced speech categorization in the form of faster identification. At the neural level, inverse modeling parsed EEG data into different sources to evaluate the contribution of region-specific activity [i.e., auditory cortex (AC)] to categorical neural coding. We found stronger categorical processing in musical sleepers around the timeframe of P2 (~180 ms) in the right AC compared to those with poorer musical listening abilities. Our data show that listeners with naturally more adept auditory skills map sound to meaning more efficiently than their peers, which may aid novel sound learning related to language and music acquisition.