1
Kausel L, Zamorano F, Billeke P, Sutherland ME, Alliende MI, Larrain‐Valenzuela J, Soto‐Icaza P, Aboitiz F. Theta and alpha oscillations may underlie improved attention and working memory in musically trained children. Brain Behav 2024; 14:e3517. PMID: 38702896; PMCID: PMC11069029; DOI: 10.1002/brb3.3517.
Abstract
INTRODUCTION Attention and working memory are key cognitive functions that allow us to select and maintain information in our mind for a short time; they are essential for daily life and, in particular, for learning and academic performance. It has been shown that musical training can improve working memory performance, but it is still unclear if and how the neural mechanisms of working memory, and particularly attention, are implicated in this process. In this work, we aimed to identify the oscillatory signature of bimodal attention and working memory that contributes to improved working memory in musically trained children. MATERIALS AND METHODS We recruited children with and without musical training and asked them to complete a bimodal (auditory/visual) attention and working memory task while their brain activity was measured using electroencephalography. Behavioral, time-frequency, and source reconstruction analyses were performed. RESULTS Overall, musically trained children performed better on the task than children without musical training. When comparing the two groups, we found modulations in the alpha band before and at the beginning of stimulus onset in frontal and parietal regions, which correlated with correct responses to the attended modality. Moreover, during the end phase of stimulus presentation, we found modulations in the theta and alpha bands in left frontal and right parietal regions that correlated with correct responses independent of attention condition. CONCLUSIONS These results suggest that musically trained children have improved neuronal mechanisms for both attention allocation and memory encoding. Our findings may inform the development of interventions for people with attention and working memory difficulties.
Affiliation(s)
- Leonie Kausel
- Centro de Estudios en Neurociencia Humana y Neuropsicología, Facultad de Psicología, Universidad Diego Portales, Santiago, Chile
- Laboratorio de Neurociencia Social y Neuromodulación, Centro de Investigación en Complejidad Social (CICS), Facultad de Gobierno, Universidad del Desarrollo, Santiago, Chile
- Centro Interdisciplinario de Neurociencias, Pontificia Universidad Católica de Chile, Santiago, Chile
- F. Zamorano
- Unidad de Imágenes Cuantitativas Avanzadas, Departamento de Imágenes, Clínica Alemana de Santiago, Santiago, Chile
- Facultad de Ciencias para el Cuidado de la Salud, Universidad San Sebastián, Santiago, Chile
- Laboratorio de Psiquiatría Traslacional, Departamento de Psiquiatría, Facultad de Medicina, Universidad de Chile, Santiago, Chile
- P. Billeke
- Laboratorio de Neurociencia Social y Neuromodulación, Centro de Investigación en Complejidad Social (CICS), Facultad de Gobierno, Universidad del Desarrollo, Santiago, Chile
- M. E. Sutherland
- Centro Interdisciplinario de Neurociencias, Pontificia Universidad Católica de Chile, Santiago, Chile
- M. I. Alliende
- Centro Interdisciplinario de Neurociencias, Pontificia Universidad Católica de Chile, Santiago, Chile
- J. Larrain‐Valenzuela
- Centro de Investigación en Complejidad Social (CICS), Facultad de Gobierno, Universidad del Desarrollo, Santiago, Chile
- P. Soto‐Icaza
- Laboratorio de Neurociencia Social y Neuromodulación, Centro de Investigación en Complejidad Social (CICS), Facultad de Gobierno, Universidad del Desarrollo, Santiago, Chile
- F. Aboitiz
- Centro Interdisciplinario de Neurociencias, Pontificia Universidad Católica de Chile, Santiago, Chile
2
Tseng HC, Hsieh IH. Effects of absolute pitch on brain activation and functional connectivity during hearing-in-noise perception. Cortex 2024; 174:1-18. PMID: 38484435; DOI: 10.1016/j.cortex.2024.02.011.
Abstract
Hearing-in-noise (HIN) ability is crucial in speech and music communication. Recent evidence suggests that absolute pitch (AP), the ability to identify isolated musical notes, is associated with HIN benefits. A theoretical account postulates a link between AP ability and neural network indices of segregation. However, how AP ability modulates the brain activation and functional connectivity underlying HIN perception remains unclear. Here we used functional magnetic resonance imaging to contrast brain responses among a sample (n = 45) comprising 15 AP musicians, 15 non-AP musicians, and 15 non-musicians in perceiving Mandarin speech and melody targets under varying signal-to-noise ratios (SNRs: No-Noise, 0, -9 dB). Results reveal that AP musicians exhibited increased activation in auditory and superior frontal regions across both HIN domains (music and speech), irrespective of noise levels. Notably, substantially higher sensorimotor activation was found in AP musicians when the target was music compared to speech. Furthermore, we examined AP effects on neural connectivity using psychophysiological interaction analysis with the auditory cortex as the seed region. AP musicians showed decreased functional connectivity with the sensorimotor and middle frontal gyrus compared to non-AP musicians. Crucially, AP differentially affected connectivity with parietal and frontal brain regions depending on the HIN domain being music or speech. These findings suggest that AP plays a critical role in HIN perception, manifested by increased activation and functional independence between auditory and sensorimotor regions for perceiving music and speech streams.
Affiliation(s)
- Hung-Chen Tseng
- Institute of Cognitive Neuroscience, National Central University, Taoyuan City, Taiwan
- I-Hui Hsieh
- Institute of Cognitive Neuroscience, National Central University, Taoyuan City, Taiwan
- Cognitive Intelligence and Precision Healthcare Center, National Central University, Taoyuan City, Taiwan
3
Li Z, Zhang D. How does the human brain process noisy speech in real life? Insights from the second-person neuroscience perspective. Cogn Neurodyn 2024; 18:371-382. PMID: 38699619; PMCID: PMC11061069; DOI: 10.1007/s11571-022-09924-w.
Abstract
Comprehending speech in the presence of background noise is of great importance for human life. In past decades, a large body of psychological, cognitive, and neuroscientific research has explored the neurocognitive mechanisms of speech-in-noise comprehension. However, because of the low ecological validity of the speech stimuli and experimental paradigms used, as well as inadequate attention to higher-order linguistic and extralinguistic processes, much remains unknown about how the brain processes noisy speech in real-life scenarios. A recently emerging approach, second-person neuroscience, provides a novel conceptual framework. It measures the neural activity of both speaker and listener and estimates speaker-listener neural coupling, taking the speaker's production-related neural activity as a standardized reference. The second-person approach not only promotes the use of naturalistic speech but also allows free communication between speaker and listener in a close-to-life context. In this review, we first briefly review previous findings about how the brain processes speech in noise; we then introduce the principles and advantages of the second-person neuroscience approach and discuss its implications for unraveling the linguistic and extralinguistic processes during speech-in-noise comprehension; finally, we conclude by raising critical issues and calling for more research interest in the second-person approach, which would further extend present knowledge about how people comprehend speech in noise.
Affiliation(s)
- Zhuoran Li
- Department of Psychology, School of Social Sciences, Tsinghua University, Room 334, Mingzhai Building, Beijing 100084, China
- Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing 100084, China
- Dan Zhang
- Department of Psychology, School of Social Sciences, Tsinghua University, Room 334, Mingzhai Building, Beijing 100084, China
- Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing 100084, China
4
Caprini F, Zhao S, Chait M, Agus T, Pomper U, Tierney A, Dick F. Generalization of auditory expertise in audio engineers and instrumental musicians. Cognition 2024; 244:105696. PMID: 38160651; DOI: 10.1016/j.cognition.2023.105696.
Abstract
From auditory perception to general cognition, the ability to play a musical instrument has been associated with skills both related and unrelated to music. However, it is unclear whether these effects are bound to the specific characteristics of musical instrument training, as little attention has been paid to other populations such as audio engineers and designers, whose auditory expertise may match or surpass that of musicians in specific auditory tasks or more naturalistic acoustic scenarios. We explored this possibility by comparing students of audio engineering (n = 20) to matched conservatory-trained instrumentalists (n = 24) and to naive controls (n = 20) on measures of auditory discrimination, auditory scene analysis, and speech-in-noise perception. We found that audio engineers and performing musicians had generally lower psychophysical thresholds than controls, with pitch perception showing the largest effect size. Compared to controls, audio engineers could better memorise and recall auditory scenes composed of non-musical sounds, whereas instrumental musicians performed best in a sustained selective attention task with two competing streams of tones. Finally, in a diotic speech-in-babble task, musicians showed lower signal-to-noise ratio thresholds than both controls and engineers; however, a follow-up online study did not replicate this musician advantage. We also observed differences in personality that might account for group-based self-selection biases. Overall, we showed that investigating a wider range of forms of auditory expertise can help us corroborate (or challenge) the specificity of the advantages previously associated with musical instrument training.
Affiliation(s)
- Francesco Caprini
- Department of Psychological Sciences, Birkbeck, University of London, UK
- Sijia Zhao
- Department of Experimental Psychology, University of Oxford, UK
- Maria Chait
- University College London (UCL) Ear Institute, UK
- Trevor Agus
- School of Arts, English and Languages, Queen's University Belfast, UK
- Ulrich Pomper
- Department of Cognition, Emotion, and Methods in Psychology, Universität Wien, Austria
- Adam Tierney
- Department of Psychological Sciences, Birkbeck, University of London, UK
- Fred Dick
- Department of Experimental Psychology, University College London (UCL), UK
5
Loutrari A, Alqadi A, Jiang C, Liu F. Exploring the role of singing, semantics, and amusia screening in speech-in-noise perception in musicians and non-musicians. Cogn Process 2024; 25:147-161. PMID: 37851154; PMCID: PMC10827916; DOI: 10.1007/s10339-023-01165-x.
Abstract
Sentence repetition has been the focus of extensive psycholinguistic research. The notion that music training can bolster speech perception in adverse auditory conditions has been met with mixed results. In this work, we sought to gauge the effect of babble noise on immediate repetition of spoken and sung phrases of varying semantic content (expository, narrative, and anomalous), initially in 100 English-speaking monolinguals with and without music training. The two cohorts also completed a set of non-musical cognitive tests and the Montreal Battery of Evaluation of Amusia (MBEA). When MBEA results were disregarded, musicians significantly outperformed non-musicians in overall repetition accuracy. Sung targets were recalled significantly better than spoken ones across groups in the presence of babble noise. Sung expository targets were recalled better than spoken expository ones, and semantically anomalous content was recalled more poorly in noise. Rerunning the analysis after excluding thirteen participants who were diagnosed with amusia showed no significant group differences. This suggests that the notion of enhanced speech perception (in noise or otherwise) in musicians needs to be evaluated with caution. Musicianship aside, this study showed for the first time that sung targets presented in babble noise seem to be recalled better than spoken ones. We discuss the present design and the methodological approach of screening for amusia as factors that may partially account for some of the mixed results in the field.
Affiliation(s)
- Ariadne Loutrari
- School of Psychology and Clinical Language Sciences, University of Reading, Earley Gate, Reading RG6 6AL, UK
- Division of Psychology and Language Sciences, University College London, London WC1N 1PF, UK
- Aseel Alqadi
- School of Psychology and Clinical Language Sciences, University of Reading, Earley Gate, Reading RG6 6AL, UK
- Cunmei Jiang
- Music College, Shanghai Normal University, Shanghai 200234, China
- Fang Liu
- School of Psychology and Clinical Language Sciences, University of Reading, Earley Gate, Reading RG6 6AL, UK
6
MacLean J, Stirn J, Sisson A, Bidelman GM. Short- and long-term neuroplasticity interact during the perceptual learning of concurrent speech. Cereb Cortex 2024; 34:bhad543. PMID: 38212291; PMCID: PMC10839853; DOI: 10.1093/cercor/bhad543.
Abstract
Plasticity from auditory experience shapes the brain's encoding and perception of sound. However, whether such long-term plasticity alters the trajectory of short-term plasticity during speech processing has yet to be investigated. Here, we explored the neural mechanisms and interplay between short- and long-term neuroplasticity for rapid auditory perceptual learning of concurrent speech sounds in young, normal-hearing musicians and nonmusicians. Participants learned to identify double-vowel mixtures during ~ 45 min training sessions recorded simultaneously with high-density electroencephalography (EEG). We analyzed frequency-following responses (FFRs) and event-related potentials (ERPs) to investigate neural correlates of learning at subcortical and cortical levels, respectively. Although both groups showed rapid perceptual learning, musicians showed faster behavioral decisions than nonmusicians overall. Learning-related changes were not apparent in brainstem FFRs. However, plasticity was highly evident in cortex, where ERPs revealed unique hemispheric asymmetries between groups suggestive of different neural strategies (musicians: right hemisphere bias; nonmusicians: left hemisphere). Source reconstruction and the early (150-200 ms) time course of these effects localized learning-induced cortical plasticity to auditory-sensory brain areas. Our findings reinforce the domain-general benefits of musicianship but reveal that successful speech sound learning is driven by a critical interplay between long- and short-term mechanisms of auditory plasticity, which first emerge at a cortical level.
Affiliation(s)
- Jessica MacLean
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA
- Program in Neuroscience, Indiana University, Bloomington, IN, USA
- Jack Stirn
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA
- Alexandria Sisson
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA
- Gavin M Bidelman
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA
- Program in Neuroscience, Indiana University, Bloomington, IN, USA
- Cognitive Science Program, Indiana University, Bloomington, IN, USA
7
Körner A, Strack F. Articulation posture influences pitch during singing imagery. Psychon Bull Rev 2023; 30:2187-2195. PMID: 37221280; PMCID: PMC10728233; DOI: 10.3758/s13423-023-02306-1.
Abstract
Facial muscle activity contributes to singing and to articulation: in articulation, mouth shape can alter vowel identity; and in singing, facial movement correlates with pitch changes. Here, we examine whether mouth posture causally influences pitch during singing imagery. Based on perception-action theories and embodied cognition theories, we predict that mouth posture influences pitch judgments even when no overt utterances are produced. In two experiments (total N = 160), mouth posture was manipulated to resemble the articulation of either /i/ (as in English meet; retracted lips) or /o/ (as in French rose; protruded lips). Holding this mouth posture, participants were instructed to mentally "sing" given songs (which were all positive in valence) while listening with their inner ear and, afterwards, to assess the pitch of their mental chant. As predicted, compared to the o-posture, the i-posture led to higher pitch in mental singing. Thus, bodily states can shape experiential qualities, such as pitch, during imagery. This extends embodied music cognition and demonstrates a new link between language and music.
Affiliation(s)
- Anita Körner
- Department of Psychology, University of Kassel, Holländische Straße 36-38, 34127 Kassel, Germany
- Fritz Strack
- Department of Psychology, University of Würzburg, Würzburg, Germany
8
Zheng Y, Gao P, Li X. The modulating effect of musical expertise on lexical-semantic prediction in speech-in-noise comprehension: Evidence from an EEG study. Psychophysiology 2023; 60:e14371. PMID: 37350401; DOI: 10.1111/psyp.14371.
Abstract
Musical expertise has been proposed to facilitate speech perception and comprehension in noisy environments. This study examined the open question of whether musical expertise modulates high-level lexical-semantic prediction to aid online speech comprehension in noisy backgrounds. Musicians and nonmusicians listened to semantically strongly/weakly constraining sentences during EEG recording. At verbs prior to target nouns, both groups showed a positivity ERP effect (Strong vs. Weak) associated with the predictability of incoming nouns; this correlation effect was stronger in musicians than in nonmusicians. After the target nouns appeared, both groups showed an N400 reduction effect (Strong vs. Weak) associated with noun predictability, but musicians exhibited an earlier onset latency and a stronger effect size of this correlation effect than nonmusicians. To determine whether musical expertise enhances anticipatory semantic processing in general, the same participants completed a control reading comprehension experiment. In that experiment, compared with nonmusicians, musicians demonstrated more delayed ERP correlation effects of noun predictability at words preceding the target nouns; musicians also exhibited more delayed and reduced N400 decrease effects correlated with noun predictability at the target nouns. Taken together, these results suggest that musical expertise enhances lexical-semantic predictive processing in speech-in-noise comprehension; this musical-expertise effect may be related in particular to strengthened hierarchical speech processing.
Affiliation(s)
- Yuanyi Zheng
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Panke Gao
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Xiaoqing Li
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Jiangsu Collaborative Innovation Center for Language Ability, Jiangsu Normal University, Xuzhou, China
9
MacLean J, Stirn J, Sisson A, Bidelman GM. Short- and long-term experience-dependent neuroplasticity interact during the perceptual learning of concurrent speech. bioRxiv [Preprint] 2023:2023.09.26.559640. PMID: 37808665; PMCID: PMC10557636; DOI: 10.1101/2023.09.26.559640.
Abstract
Plasticity from auditory experiences shapes brain encoding and perception of sound. However, whether such long-term plasticity alters the trajectory of short-term plasticity during speech processing has yet to be investigated. Here, we explored the neural mechanisms and interplay between short- and long-term neuroplasticity for rapid auditory perceptual learning of concurrent speech sounds in young, normal-hearing musicians and nonmusicians. Participants learned to identify double-vowel mixtures during ∼45 minute training sessions recorded simultaneously with high-density EEG. We analyzed frequency-following responses (FFRs) and event-related potentials (ERPs) to investigate neural correlates of learning at subcortical and cortical levels, respectively. While both groups showed rapid perceptual learning, musicians showed faster behavioral decisions than nonmusicians overall. Learning-related changes were not apparent in brainstem FFRs. However, plasticity was highly evident in cortex, where ERPs revealed unique hemispheric asymmetries between groups suggestive of different neural strategies (musicians: right hemisphere bias; nonmusicians: left hemisphere). Source reconstruction and the early (150-200 ms) time course of these effects localized learning-induced cortical plasticity to auditory-sensory brain areas. Our findings confirm domain-general benefits for musicianship but reveal successful speech sound learning is driven by a critical interplay between long- and short-term mechanisms of auditory plasticity that first emerge at a cortical level.
10
Hsieh IH, Guo YJ. No Musician Advantage in the Perception of Degraded-Fundamental Frequency Speech in Noisy Environments. J Speech Lang Hear Res 2023:1-13. PMID: 37499233; DOI: 10.1044/2023_jslhr-22-00662.
Abstract
PURPOSE Pitch variations of the fundamental frequency (fo) contour contribute to speech perception in noisy environments, but whether musicians confer an advantage in speech in noise (SIN) with altered fo information remains unclear. This study investigated the effects of different levels of degraded fo contour (i.e., conveying lexical tone or intonation information) on musician advantage in speech-in-noise perception. METHOD A cohort of native Mandarin Chinese speakers, comprising 30 trained musicians and 30 nonmusicians, were tested on the intelligibility of Mandarin Chinese sentences with natural, flattened-tone, flattened-intonation, and flattened-all fo contours embedded in background noise masked under three signal-to-noise ratios (0, -5, and -9 dB). Pitch difference thresholds and innate musical skills associated with speech-in-noise benefits were also assessed. RESULTS Speech intelligibility score improved with increasing signal-to-noise level for both musicians and nonmusicians. However, no musician advantage was observed for identifying any type of flattened-fo contour SIN. Musicians exhibited smaller fo pitch discrimination limens than nonmusicians, which correlated with benefits for perceiving speech with intact tone-level fo information. Regardless of musician status, performance on the pitch and accent musical-skill subtests correlated with speech intelligibility score. CONCLUSIONS Collectively, these results provide no evidence for a musician advantage for perceiving speech with distorted fo information in noisy environments. Results further show that perceptual musical skills on pitch and accent processing may benefit the perception of SIN, independent of formal musical training. Our findings suggest that the potential application of music training in speech perception in noisy backgrounds is not contingent on the ability to process fo pitch contours, at least for Mandarin Chinese speakers. 
SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.23706354.
Affiliation(s)
- I-Hui Hsieh
- Institute of Cognitive Neuroscience, National Central University, Taoyuan City, Taiwan
- Cognitive Intelligence and Precision Healthcare Center, National Central University, Taoyuan City, Taiwan
- Yu-Jyun Guo
- Institute of Cognitive Neuroscience, National Central University, Taoyuan City, Taiwan
11
Kyrtsoudi M, Sidiras C, Papadelis G, Iliadou VM. Auditory Processing in Musicians, a Cross-Sectional Study, as a Basis for Auditory Training Optimization. Healthcare (Basel) 2023; 11:2027. PMID: 37510468; PMCID: PMC10379437; DOI: 10.3390/healthcare11142027.
Abstract
Musicians are reported to have enhanced auditory processing. This study aimed to assess auditory perception in Greek musicians with respect to their musical specialization and to compare their auditory processing with that of non-musicians. The auditory processing elements evaluated were speech recognition in babble, rhythmic advantage in speech recognition, short-term working memory, temporal resolution, and frequency discrimination threshold detection. Each group comprised 12 participants: the three experimental groups were western classical musicians, Byzantine chanters, and percussionists, and the control group consisted of non-musicians. The results revealed: (i) a rhythmic advantage for word recognition in noise for classical musicians (M = 12.42) compared to Byzantine musicians (M = 9.83), as well as for musicians compared to non-musicians (U = 120.50, p = 0.019); (ii) a better frequency discrimination threshold for Byzantine musicians (M = 3.17, p = 0.002) compared to the other two musician groups in the 2000 Hz region; (iii) significantly better working memory for musicians (U = 123.00, p = 0.025) compared to non-musicians. Musical training enhances elements of auditory processing and may be used as an additional rehabilitation approach during auditory training, focusing on specific types of music for specific auditory processing deficits.
Affiliation(s)
- Maria Kyrtsoudi
- Clinical Psychoacoustics Laboratory, 3rd Psychiatric Department, Neurosciences Sector, Medical School, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
- Christos Sidiras
- Clinical Psychoacoustics Laboratory, 3rd Psychiatric Department, Neurosciences Sector, Medical School, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
- Georgios Papadelis
- School of Music Studies, Faculty of Fine Arts, Aristotle University of Thessaloniki, 57001 Thermi, Greece
- Vasiliki Maria Iliadou
- Clinical Psychoacoustics Laboratory, 3rd Psychiatric Department, Neurosciences Sector, Medical School, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
12
Zhang L, Wang X, Alain C, Du Y. Successful aging of musicians: Preservation of sensorimotor regions aids audiovisual speech-in-noise perception. Sci Adv 2023; 9:eadg7056. PMID: 37126550; PMCID: PMC10132752; DOI: 10.1126/sciadv.adg7056.
Abstract
Musicianship can mitigate age-related declines in audiovisual speech-in-noise perception. We tested whether this benefit originates from functional preservation or functional compensation by comparing fMRI responses of older musicians, older nonmusicians, and young nonmusicians identifying noise-masked audiovisual syllables. Older musicians outperformed older nonmusicians and showed comparable performance to young nonmusicians. Notably, older musicians retained similar neural specificity of speech representations in sensorimotor areas to young nonmusicians, while older nonmusicians showed degraded neural representations. In the same region, older musicians showed higher neural alignment to young nonmusicians than older nonmusicians, which was associated with their training intensity. In older nonmusicians, the degree of neural alignment predicted better performance. In addition, older musicians showed greater activation in frontal-parietal, speech motor, and visual motion regions and greater deactivation in the angular gyrus than older nonmusicians, which predicted higher neural alignment in sensorimotor areas. Together, these findings suggest that musicianship-related benefit in audiovisual speech-in-noise processing is rooted in preserving youth-like representations in sensorimotor regions.
Affiliation(s)
- Lei Zhang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- Xiuyi Wang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China
- Claude Alain
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, ON M6A 2E1, Canada
- Department of Psychology, University of Toronto, ON M8V 2S4, Canada
- Yi Du
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai 200031, China
- Chinese Institute for Brain Research, Beijing 102206, China
13
Auditory Electrophysiological and Perceptual Measures in Student Musicians with High Sound Exposure. Diagnostics (Basel) 2023; 13:diagnostics13050934. [PMID: 36900080] [PMCID: PMC10000734] [DOI: 10.3390/diagnostics13050934]
Abstract
This study aimed to determine (a) the influence of noise exposure background (NEB) on peripheral and central auditory system functioning and (b) the influence of NEB on speech recognition in noise abilities in student musicians. Twenty non-musician students with self-reported low NEB and 18 student musicians with self-reported high NEB completed a battery of tests consisting of physiological measures, including auditory brainstem responses (ABRs) at three different stimulus rates (11.3 Hz, 51.3 Hz, and 81.3 Hz) and P300, and behavioral measures, including conventional and extended high-frequency audiometry, the consonant-nucleus-consonant (CNC) word test, and the AzBio sentence test for assessing speech perception in noise abilities at -9, -6, -3, 0, and +3 dB signal-to-noise ratios (SNRs). NEB was negatively associated with performance on the CNC test at all five SNRs. A negative association was found between NEB and performance on the AzBio test at 0 dB SNR. No effect of NEB was found on the amplitude and latency of P300 or on the ABR wave I amplitude. More investigations of larger datasets with different NEB and longitudinal measurements are needed to investigate the influence of NEB on word recognition in noise and to understand the specific cognitive processes contributing to that influence.
14
Cohn M, Barreda S, Zellou G. Differences in a Musician's Advantage for Speech-in-Speech Perception Based on Age and Task. J Speech Lang Hear Res 2023; 66:545-564. [PMID: 36729698] [DOI: 10.1044/2022_jslhr-22-00259]
Abstract
PURPOSE This study investigates the debated claim that musicians have an advantage in speech-in-noise perception stemming from years of targeted auditory training. We also consider the effect of age on any such advantage, comparing musicians and nonmusicians (age range: 18-66 years), all of whom had normal hearing. We manipulate the degree of fundamental frequency (f0) separation between the competing talkers, as well as use different tasks, to probe attentional differences that might shape a musician's advantage across ages. METHOD Participants included 29 musicians and 26 nonmusicians. They completed two tasks varying in attentional demands: (a) a selective attention task in which listeners identify the target sentence presented with a one-talker interferer (Experiment 1), and (b) a divided attention task in which listeners hear two vowels played simultaneously and identify both competing vowels (Experiment 2). In both paradigms, f0 separation between the two voices was manipulated (Δf0 = 0, 0.156, 0.306, 1, 2, 3 semitones). RESULTS Results show that increasing differences in f0 separation lead to higher accuracy on both tasks. Additionally, we find evidence for a musician's advantage across the two studies. In the sentence identification task, younger adult musicians show higher accuracy overall, as well as a stronger reliance on f0 separation; yet this advantage declines with musicians' age. In the double vowel identification task, musicians of all ages show an across-the-board advantage in detecting two vowels, and use f0 separation more to aid in stream separation, but show no consistent difference in double vowel identification. CONCLUSIONS Overall, we find support for a hybrid auditory encoding-attention account of music-to-speech transfer. The musician's advantage includes f0, but the benefit also depends on the attentional demands of the task and listeners' age. Taken together, this study suggests a complex relationship among age, musical experience, and speech-in-speech paradigm in shaping a musician's advantage. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.21956777.
Affiliation(s)
- Michelle Cohn
- Phonetics Lab, Department of Linguistics, University of California, Davis
- Santiago Barreda
- Phonetics Lab, Department of Linguistics, University of California, Davis
- Georgia Zellou
- Phonetics Lab, Department of Linguistics, University of California, Davis
15
Plasticity Changes in Central Auditory Systems of School-Age Children Following a Brief Training With a Remote Microphone System. Ear Hear 2023:00003446-990000000-00109. [PMID: 36706057] [DOI: 10.1097/aud.0000000000001329]
Abstract
OBJECTIVES The objective of this study was to investigate whether a brief speech-in-noise training with a remote microphone (RM) system (favorable listening condition) would contribute to enhanced post-training plasticity changes in the auditory system of school-age children. DESIGN Before training, event-related potentials (ERPs) were recorded from 49 typically developing children, who actively identified two syllables in quiet and in noise (+5 dB signal-to-noise ratio [SNR]). During training, children completed the same syllable identification task as in the pre-training noise condition, but received feedback on their performance. Following random assignment, half of the sample used an RM system during training (experimental group), while the other half did not (control group). That is, during training, children in the experimental group listened to a more favorable speech signal (+15 dB SNR) than children from the control group (+5 dB SNR). ERPs were collected after training at +5 dB SNR to evaluate the effects of training with and without the RM system. Electrical neuroimaging analyses quantified the effects of training in each group on ERP global field power (GFP) and topography, indexing response strength and network changes, respectively. Behavioral speech-perception-in-noise skills of children were also evaluated and compared before and after training. We hypothesized that training with the RM system (experimental group) would lead to greater enhancement of GFP and greater topographical changes post-training than training without the RM system (control group). We also expected greater behavioral improvement on the speech-perception-in-noise task when training with than without the RM system. RESULTS GFP was enhanced after training only in the experimental group. These effects were observed on early time-windows corresponding to traditional P1-N1 (100 to 200 msec) and P2-N2 (200 to 400 msec) ERP components.
No training effects were observed on response topography. Finally, both groups increased their speech-perception-in-noise skills post-training. CONCLUSIONS Enhanced GFP after training with the RM system indicates plasticity changes in the neural representation of sound resulting from listening to an enriched auditory signal. Further investigation of longer training or auditory experiences with favorable listening conditions is needed to determine if that results in long-term speech-perception-in-noise benefits.
16
Tremblay P, Perron M. Auditory cognitive aging in amateur singers and non-singers. Cognition 2023; 230:105311. [PMID: 36332309] [DOI: 10.1016/j.cognition.2022.105311]
Abstract
The notion that lifestyle factors, such as music-making activities, can affect cognitive functioning and reduce cognitive decline in aging is often referred to as the mental exercise hypothesis. One ubiquitous musical activity is choir singing. Like other musical activities, singing is hypothesized to impact cognitive and especially executive functions. Despite the commonness of choir singing, little is known about the extent to which singing can affect cognition in adulthood. In this cross-sectional group study, we examined the relationship between age and four auditory executive functions to test hypotheses about the relationship between the level of mental activity and cognitive functioning. We also examined pitch discrimination capabilities. A non-probabilistic sample of 147 cognitively healthy adults was recruited, which included 75 non-singers (mean age 52.5 ± 20.3; 20-98 years) and 72 singers (mean age 55.5 ± 19.2; 21-87 years). Tests of selective attention, processing speed, inhibitory control, and working memory were administered to all participants. Our main hypothesis was that executive functions and age would be negatively correlated, and that this relationship would be stronger in non-singers than singers, consistent with the differential preservation hypothesis. The alternative hypothesis - preserved differentiation - predicts that the difference between singers and non-singers in executive functions is unaffected by age. Our results reveal a detrimental effect of age on processing speed, selective attention, inhibitory control and working memory. The effect of singing was comparatively more limited, being positively associated only with frequency discrimination, processing speed, and, to some extent, inhibitory control. Evidence of differential preservation was limited to processing speed. We also found a circumscribed positive impact of age of onset and a negative impact of singing experience on cognitive functioning in singers. 
Together, these findings were interpreted as reflecting an age-related decline in executive function in cognitively healthy adults, with specific and limited positive impacts of singing, consistent with the preserved differentiation hypothesis, but not with the differential preservation hypothesis.
Affiliation(s)
- Pascale Tremblay
- CERVO Brain Research Center, Quebec City G1J 2G3, Canada; Université Laval, Faculté de Médecine, Département de Réadaptation, Quebec City G1V 0A6, Canada
- Maxime Perron
- Rotman Research Institute, Baycrest, North York, Ontario M6A 2E1, Canada; University of Toronto, Faculty of Arts and Science, Department of Psychology, Toronto, Ontario M5S 3G3, Canada
17
Maillard E, Joyal M, Murray MM, Tremblay P. Are musical activities associated with enhanced speech perception in noise in adults? A systematic review and meta-analysis. Curr Res Neurobiol 2023. [DOI: 10.1016/j.crneur.2023.100083]
18
Johns MA, Calloway RC, Phillips I, Karuzis VP, Dutta K, Smith E, Shamma SA, Goupell MJ, Kuchinsky SE. Performance on stochastic figure-ground perception varies with individual differences in speech-in-noise recognition and working memory capacity. J Acoust Soc Am 2023; 153:286. [PMID: 36732241] [PMCID: PMC9851714] [DOI: 10.1121/10.0016756]
Abstract
Speech recognition in noisy environments can be challenging and requires listeners to accurately segregate a target speaker from irrelevant background noise. Stochastic figure-ground (SFG) tasks, in which temporally coherent inharmonic pure tones must be identified from a background, have been used to probe the non-linguistic auditory stream segregation processes important for speech-in-noise processing. However, little is known about the relationship between performance on SFG tasks and speech-in-noise tasks, or about the individual differences that may modulate such relationships. In this study, 37 younger normal-hearing adults performed an SFG task with target figure chords consisting of four, six, eight, or ten temporally coherent tones amongst a background of randomly varying tones. Stimuli were designed to be spectrally and temporally flat. An increased number of temporally coherent tones resulted in higher accuracy and faster reaction times (RTs). For ten target tones, faster RTs were associated with better scores on the Quick Speech-in-Noise task. Individual differences in working memory capacity and self-reported musicianship further modulated these relationships. Overall, results demonstrate that the SFG task could serve as an assessment of auditory stream segregation accuracy and RT that is sensitive to individual differences in cognitive and auditory abilities, even among younger normal-hearing adults.
Affiliation(s)
- Michael A Johns
- Institute for Systems Research, University of Maryland, College Park, Maryland 20742, USA
- Regina C Calloway
- Institute for Systems Research, University of Maryland, College Park, Maryland 20742, USA
- Ian Phillips
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889, USA
- Valerie P Karuzis
- Applied Research Laboratory of Intelligence and Security, University of Maryland, College Park, Maryland 20742, USA
- Kelsey Dutta
- Institute for Systems Research, University of Maryland, College Park, Maryland 20742, USA
- Ed Smith
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Shihab A Shamma
- Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland 20742, USA
- Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Stefanie E Kuchinsky
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889, USA
19
Christensen J, Slavik L, Nicol JJ, Loehr JD. Alpha oscillations related to self-other integration and distinction during live orchestral performance: A naturalistic case study. Psychol Music 2023; 51:295-315. [PMID: 36532616] [PMCID: PMC9751440] [DOI: 10.1177/03057356221091313]
Abstract
Ensemble music performance requires musicians to achieve precise interpersonal coordination while maintaining autonomous control over their own actions. To do so, musicians dynamically shift between integrating other performers' actions into their own action plans and maintaining a distinction between their own and others' actions. Research in laboratory settings has shown that this dynamic process of self-other integration and distinction is indexed by sensorimotor alpha oscillations. The purpose of the current descriptive case study was to examine oscillations related to self-other integration and distinction in a naturalistic performance context. We measured alpha activity from four violinists during a concert hall performance of a 60-musician orchestra. We selected a musical piece from the orchestra's repertoire and, before analyzing alpha activity, performed a score analysis to divide the piece into sections that were expected to strongly promote self-other integration and distinction. In line with previous laboratory findings, performers showed suppressed and enhanced alpha activity during musical sections that promoted self-other integration and distinction, respectively. The current study thus provides preliminary evidence that findings from carefully controlled laboratory experiments generalize to complex real-world performance. Its findings also suggest directions for future research and potential applications of interest to musicians, music educators, and music therapists.
Affiliation(s)
- Lauren Slavik
- Department of Psychology, University of Saskatchewan, Saskatoon, Canada
- Jennifer J Nicol
- Department of Educational Psychology and Special Education, University of Saskatchewan, Saskatoon, Canada
- Janeen D Loehr
- Department of Psychology, University of Saskatchewan, Saskatoon, Canada
20
Nayak S, Coleman PL, Ladányi E, Nitin R, Gustavson DE, Fisher SE, Magne CL, Gordon RL. The Musical Abilities, Pleiotropy, Language, and Environment (MAPLE) Framework for Understanding Musicality-Language Links Across the Lifespan. Neurobiol Lang (Camb) 2022; 3:615-664. [PMID: 36742012] [PMCID: PMC9893227] [DOI: 10.1162/nol_a_00079]
Abstract
Using individual differences approaches, a growing body of literature finds positive associations between musicality and language-related abilities, complementing prior findings of links between musical training and language skills. Despite these associations, musicality has been often overlooked in mainstream models of individual differences in language acquisition and development. To better understand the biological basis of these individual differences, we propose the Musical Abilities, Pleiotropy, Language, and Environment (MAPLE) framework. This novel integrative framework posits that musical and language-related abilities likely share some common genetic architecture (i.e., genetic pleiotropy) in addition to some degree of overlapping neural endophenotypes, and genetic influences on musically and linguistically enriched environments. Drawing upon recent advances in genomic methodologies for unraveling pleiotropy, we outline testable predictions for future research on language development and how its underlying neurobiological substrates may be supported by genetic pleiotropy with musicality. In support of the MAPLE framework, we review and discuss findings from over seventy behavioral and neural studies, highlighting that musicality is robustly associated with individual differences in a range of speech-language skills required for communication and development. These include speech perception-in-noise, prosodic perception, morphosyntactic skills, phonological skills, reading skills, and aspects of second/foreign language learning. Overall, the current work provides a clear agenda and framework for studying musicality-language links using individual differences approaches, with an emphasis on leveraging advances in the genomics of complex musicality and language traits.
Affiliation(s)
- Srishti Nayak
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Psychology, Middle Tennessee State University, Murfreesboro, TN, USA
- Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt University School of Medicine, Vanderbilt University, TN, USA
- Peyton L. Coleman
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Enikő Ladányi
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Linguistics, Potsdam University, Potsdam, Germany
- Rachana Nitin
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Daniel E. Gustavson
- Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Medicine, Vanderbilt University Medical Center, Nashville, TN, USA
- Institute for Behavioral Genetics, University of Colorado Boulder, Boulder, CO, USA
- Simon E. Fisher
- Language and Genetics Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Cyrille L. Magne
- Department of Psychology, Middle Tennessee State University, Murfreesboro, TN, USA
- PhD Program in Literacy Studies, Middle Tennessee State University, Murfreesboro, TN, USA
- Reyna L. Gordon
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, TN, USA
- Curb Center for Art, Enterprise, and Public Policy, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, TN, USA
- Vanderbilt University School of Medicine, Vanderbilt University, TN, USA
21
Benítez-Barrera CR, Skoe E, Huang J, Tharpe AM. Evidence for a Musician Speech-Perception-in-Noise Advantage in School-Age Children. J Speech Lang Hear Res 2022; 65:3996-4008. [PMID: 36194893] [DOI: 10.1044/2022_jslhr-22-00134]
Abstract
PURPOSE The objective of this study was to evaluate whether child musicians are better at listening to speech in noise (SPIN) than nonmusicians of the same age. In addition, we aimed to explore whether the musician SPIN advantage in children was related to general intelligence (IQ). METHOD Fifty-one children aged 8.2-11.8 years with different levels of music training participated in the study. A between-group design and correlational analyses were used to determine differences in SPIN skills as they relate to music training. IQ was used as a covariate to explore the relationship between intelligence and SPIN ability. RESULTS More years of music training were associated with better SPIN skills than fewer years of music training. Furthermore, this difference in SPIN skills remained even when accounting for IQ. These results were found at the group level and also when years of instrument training was treated as a continuous variable (i.e., correlational analyses). CONCLUSIONS We confirmed results from previous studies in which child musicians outperformed nonmusicians in SPIN skills. We also showed that this effect was not related to differences in IQ between the musicians and nonmusicians for this cohort of children. However, confirmation of this finding with a cohort of children from more diverse socioeconomic statuses and cognitive profiles is warranted.
Affiliation(s)
- Anne Marie Tharpe
- Vanderbilt University, Nashville, TN
- Vanderbilt University Medical Center, Nashville, TN
22
Domain-specific hearing-in-noise performance is associated with absolute pitch proficiency. Sci Rep 2022; 12:16344. [PMID: 36175508] [PMCID: PMC9521875] [DOI: 10.1038/s41598-022-20869-2]
Abstract
Recent evidence suggests that musicians may have an advantage over non-musicians in perceiving speech against noisy backgrounds. Previously, musicians have been compared as a homogenous group, despite demonstrated heterogeneity, which may contribute to discrepancies between studies. Here, we investigated whether “quasi”-absolute pitch (AP) proficiency, viewed as a general trait that varies across a spectrum, accounts for the musician advantage in hearing-in-noise (HIN) performance, irrespective of whether the streams are speech or musical sounds. A cohort of 12 non-musicians and 42 trained musicians stratified into high, medium, or low AP proficiency identified speech or melody targets masked in noise (speech-shaped, multi-talker, and multi-music) under four signal-to-noise ratios (0, −3, −6, and −9 dB). Cognitive abilities associated with HIN benefits, including auditory working memory and use of visuo-spatial cues, were assessed. AP proficiency was verified against pitch adjustment and relative pitch tasks. We found a domain-specific effect on HIN perception: quasi-AP abilities were related to improved perception of melody but not speech targets in noise. The quasi-AP advantage extended to tonal working memory and the use of spatial cues, but only during melodic stream segregation. Overall, the results do not support the putative musician advantage in speech-in-noise perception, but suggest a quasi-AP advantage in perceiving music under noisy environments.
23
Brown JA, Bidelman GM. Familiarity of Background Music Modulates the Cortical Tracking of Target Speech at the "Cocktail Party". Brain Sci 2022; 12:brainsci12101320. [PMID: 36291252] [PMCID: PMC9599198] [DOI: 10.3390/brainsci12101320]
Abstract
The "cocktail party" problem-how a listener perceives speech in noisy environments-is typically studied using speech (multi-talker babble) or noise maskers. However, realistic cocktail party scenarios often include background music (e.g., coffee shops, concerts). Studies investigating music's effects on concurrent speech perception have predominantly used highly controlled synthetic music or shaped noise, which do not reflect naturalistic listening environments. Behaviorally, familiar background music and songs with vocals/lyrics inhibit concurrent speech recognition. Here, we investigated the neural bases of these effects. While recording multichannel EEG, participants listened to an audiobook while popular songs (or silence) played in the background at a 0 dB signal-to-noise ratio. Songs were either familiar or unfamiliar to listeners and featured either vocals or isolated instrumentals from the original audio recordings. Comprehension questions probed task engagement. We used temporal response functions (TRFs) to isolate cortical tracking to the target speech envelope and analyzed neural responses around 100 ms (i.e., auditory N1 wave). We found that speech comprehension was, expectedly, impaired during background music compared to silence. Target speech tracking was further hindered by the presence of vocals. When masked by familiar music, response latencies to speech were less susceptible to informational masking, suggesting concurrent neural tracking of speech was easier during music known to the listener. These differential effects of music familiarity were further exacerbated in listeners with less musical ability. Our neuroimaging results and their dependence on listening skills are consistent with early attentional-gain mechanisms where familiar music is easier to tune out (listeners already know the song's expectancies) and thus can allocate fewer attentional resources to the background music to better monitor concurrent speech material.
Affiliation(s)
- Jane A. Brown
- School of Communication Sciences and Disorders, University of Memphis, Memphis, TN 38152, USA
- Institute for Intelligent Systems, University of Memphis, Memphis, TN 38152, USA
- Gavin M. Bidelman
- School of Communication Sciences and Disorders, University of Memphis, Memphis, TN 38152, USA
- Institute for Intelligent Systems, University of Memphis, Memphis, TN 38152, USA
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN 47408, USA
- Program in Neuroscience, Indiana University, Bloomington, IN 47405, USA
24
Mednicoff SD, Barashy S, Gonzales D, Benning SD, Snyder JS, Hannon EE. Auditory affective processing, musicality, and the development of misophonic reactions. Front Neurosci 2022; 16:924806. [PMID: 36213735] [PMCID: PMC9537735] [DOI: 10.3389/fnins.2022.924806]
Abstract
Misophonia can be characterized both as a condition and as a negative affective experience. Misophonia is described as feeling irritation or disgust in response to hearing certain sounds, such as eating, drinking, gulping, and breathing. Although the earliest misophonic experiences are often described as occurring during childhood, relatively little is known about the developmental pathways that lead to individual variation in these experiences. This literature review discusses evidence of misophonic reactions during childhood and explores the possibility that early heightened sensitivities to both positive and negative sounds, such as to music, might indicate a vulnerability for misophonia and misophonic reactions. We will review when misophonia may develop, how it is distinguished from other auditory conditions (e.g., hyperacusis, phonophobia, or tinnitus), and how it relates to developmental disorders (e.g., autism spectrum disorder or Williams syndrome). Finally, we explore the possibility that children with heightened musicality could be more likely to experience misophonic reactions and develop misophonia.
25
Listeners are sensitive to the speech breathing time series: Evidence from a gap detection task. Cognition 2022; 225:105171. [DOI: 10.1016/j.cognition.2022.105171]
26
Zendel BR. The importance of the motor system in the development of music-based forms of auditory rehabilitation. Ann N Y Acad Sci 2022; 1515:10-19. [PMID: 35648040] [DOI: 10.1111/nyas.14810]
Abstract
Hearing abilities decline with age, and one of the most commonly reported hearing issues in older adults is a difficulty understanding speech when there is loud background noise. Understanding speech in noise relies on numerous cognitive processes, including working memory, and is supported by numerous brain regions, including the motor and motor planning systems. Indeed, many working memory processes are supported by motor and premotor cortical regions. Interestingly, lifelong musicians and nonmusicians given music training over the course of weeks or months show an improved ability to understand speech when there is loud background noise. These benefits are associated with enhanced working memory abilities, and enhanced activity in motor and premotor cortical regions. Accordingly, it is likely that music training improves the coupling between the auditory and motor systems and promotes plasticity in these regions and regions that feed into auditory/motor areas. This leads to an enhanced ability to dynamically process incoming acoustic information, and is likely the reason that musicians and those who receive laboratory-based music training are better able to understand speech when there is background noise. Critically, these findings suggest that music-based forms of auditory rehabilitation are possible and should focus on tasks that promote auditory-motor interactions.
Affiliation(s)
- Benjamin Rich Zendel: Faculty of Medicine, Memorial University of Newfoundland, St. John's, Newfoundland and Labrador, Canada; Aging Research Centre - Newfoundland and Labrador, Grenfell Campus, Memorial University, Corner Brook, Newfoundland and Labrador, Canada
27
Savard MA, Sares AG, Coffey EBJ, Deroche MLD. Specificity of Affective Responses in Misophonia Depends on Trigger Identification. Front Neurosci 2022; 16:879583. [PMID: 35692416 PMCID: PMC9179422 DOI: 10.3389/fnins.2022.879583]
Abstract
Individuals with misophonia, a disorder involving extreme sound sensitivity, report significant anger, disgust, and anxiety in response to select but usually common sounds. While estimates of prevalence within certain populations such as college students have approached 20%, it is currently unknown what percentage of people experience misophonic responses to such “trigger” sounds. Furthermore, there is little understanding of the fundamental processes involved. In this study, we aimed to characterize the distribution of misophonic symptoms in a general population, as well as clarify whether the aversive emotional responses to trigger sounds are partly caused by acoustic salience of the sound itself, or by recognition of the sound. Using multi-talker babble as masking noise to decrease participants' ability to identify sounds, we assessed how identification of common trigger sounds related to subjective emotional responses in 300 adults who participated in an online study. Participants were asked to listen to and identify neutral, unpleasant and trigger sounds embedded in different levels of the masking noise (signal-to-noise ratios: −30, −20, −10, 0, +10 dB), and then to evaluate their subjective judgment of the sounds (pleasantness) and emotional reactions to them (anxiety, anger, and disgust). Using participants' scores on a scale quantifying misophonia sensitivity, we selected the top and bottom 20% scorers from the distribution to form a Most-Misophonic subgroup (N = 66) and Least-Misophonic subgroup (N = 68). Both groups were better at identifying triggers than unpleasant sounds, which themselves were identified better than neutral sounds. Both groups also recognized the aversiveness of the unpleasant and trigger sounds, yet for the Most-Misophonic group, there was a greater increase in subjective ratings of negative emotions once the sounds became identifiable, especially for trigger sounds. 
These results highlight the heightened salience of trigger sounds, but furthermore suggest that learning and higher-order evaluation of sounds play an important role in misophonia.
Affiliation(s)
- Marie-Anick Savard (correspondence): Department of Psychology, Concordia University, Montreal, QC, Canada; Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada; Centre for Research on Brain, Language, and Music (CRBLM), Montreal, QC, Canada
- Anastasia G. Sares: Department of Psychology, Concordia University, Montreal, QC, Canada; Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada; Centre for Research on Brain, Language, and Music (CRBLM), Montreal, QC, Canada
- Emily B. J. Coffey: Department of Psychology, Concordia University, Montreal, QC, Canada; Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada; Centre for Research on Brain, Language, and Music (CRBLM), Montreal, QC, Canada
- Mickael L. D. Deroche: Department of Psychology, Concordia University, Montreal, QC, Canada; Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada; Centre for Research on Brain, Language, and Music (CRBLM), Montreal, QC, Canada
28
Abstract
Hearing in noise is a core problem in audition, and a challenge for hearing-impaired listeners, yet the underlying mechanisms are poorly understood. We explored whether harmonic frequency relations, a signature property of many communication sounds, aid hearing in noise for normal hearing listeners. We measured detection thresholds in noise for tones and speech synthesized to have harmonic or inharmonic spectra. Harmonic signals were consistently easier to detect than otherwise identical inharmonic signals. Harmonicity also improved discrimination of sounds in noise. The largest benefits were observed for two-note up-down "pitch" discrimination and melodic contour discrimination, both of which could be performed equally well with harmonic and inharmonic tones in quiet, but which showed large harmonic advantages in noise. The results show that harmonicity facilitates hearing in noise, plausibly by providing a noise-robust pitch cue that aids detection and discrimination.
29
Amateur singing benefits speech perception in aging under certain conditions of practice: behavioural and neurobiological mechanisms. Brain Struct Funct 2022; 227:943-962. [PMID: 35013775 DOI: 10.1007/s00429-021-02433-2]
Abstract
Limited evidence has shown that practising musical activities in aging, such as choral singing, could lessen age-related speech perception in noise (SPiN) difficulties. However, the robustness and underlying mechanism of action of this phenomenon remain unclear. In this study, we used surface-based morphometry combined with a moderated mediation analytic approach to examine whether singing-related plasticity in auditory and dorsal speech stream regions is associated with better SPiN capabilities. 36 choral singers and 36 non-singers aged 20-87 years underwent cognitive, auditory, and SPiN assessments. Our results provide important new insights into experience-dependent plasticity by revealing that, under certain conditions of practice, amateur choral singing is associated with age-dependent structural plasticity within auditory and dorsal speech regions, which is associated with better SPiN performance in aging. Specifically, the conditions of practice that were associated with benefits on SPiN included frequent weekly practice at home, several hours of weekly group singing practice, singing in multiple languages, and having received formal singing training. These results suggest that amateur choral singing is associated with improved SPiN through a dual mechanism involving auditory processing and auditory-motor integration and may be dose dependent, with more intense singing associated with greater benefit. Our results, thus, reveal that the relationship between singing practice and SPiN is complex, and underscore the importance of considering singing practice behaviours in understanding the effects of musical activities on the brain-behaviour relationship.
30
Rimmele JM, Kern P, Lubinus C, Frieler K, Poeppel D, Assaneo MF. Musical Sophistication and Speech Auditory-Motor Coupling: Easy Tests for Quick Answers. Front Neurosci 2022; 15:764342. [PMID: 35058741 PMCID: PMC8763673 DOI: 10.3389/fnins.2021.764342]
Abstract
Musical training enhances auditory-motor cortex coupling, which in turn facilitates music and speech perception. How tightly the temporal processing of music and speech are intertwined is a topic of current research. We investigated the relationship between musical sophistication (Goldsmiths Musical Sophistication index, Gold-MSI) and spontaneous speech-to-speech synchronization behavior as an indirect measure of speech auditory-motor cortex coupling strength. In a group of participants (n = 196), we tested whether the outcome of the spontaneous speech-to-speech synchronization test (SSS-test) can be inferred from self-reported musical sophistication. Participants were classified as high (HIGHs) or low (LOWs) synchronizers according to the SSS-test. HIGHs scored higher than LOWs on all Gold-MSI subscales (General Score, Active Engagement, Musical Perception, Musical Training, Singing Skills) except for the Emotional Attachment scale. More specifically, compared to a previously reported German-speaking sample, HIGHs overall scored higher and LOWs lower. Compared to an estimated distribution of the English-speaking general population, our sample overall scored lower, with the scores of LOWs differing significantly from the normal distribution, falling at around the 30th percentile. While HIGHs more often reported musical training compared to LOWs, the distribution of training instruments did not vary across groups. Importantly, even after the highly correlated subscores of the Gold-MSI were decorrelated, the subscales Musical Perception and Musical Training in particular allowed the speech-to-speech synchronization behavior to be inferred. Differential effects of musical perception and training were observed, with training predicting audio-motor synchronization in both groups, but perception only in the HIGHs. Our findings suggest that speech auditory-motor cortex coupling strength can be inferred from training and perceptual aspects of musical sophistication, suggesting shared mechanisms involved in speech and music perception.
Affiliation(s)
- Johanna M. Rimmele: Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Frankfurt, Germany; Max Planck NYU Center for Language, Music and Emotion, New York, NY, United States
- Pius Kern: Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Frankfurt, Germany
- Christina Lubinus: Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Frankfurt, Germany
- Klaus Frieler: Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Frankfurt, Germany
- David Poeppel: Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Frankfurt, Germany; Max Planck NYU Center for Language, Music and Emotion, New York, NY, United States; Department of Psychology, New York University, New York, NY, United States; Ernst Strüngmann Institute for Neuroscience, Frankfurt, Germany
- M. Florencia Assaneo: Instituto de Neurobiología, Universidad Nacional Autónoma de México, Querétaro, México
31
Terasawa H, Matsubara M, Goudarzi V, Sadakata M. Music in Quarantine: Connections Between Changes in Lifestyle, Psychological States, and Musical Behaviors During COVID-19 Pandemic. Front Psychol 2021; 12:689505. [PMID: 34707530 PMCID: PMC8542664 DOI: 10.3389/fpsyg.2021.689505]
Abstract
Music is not only the art of organized sound but also a compound of social interaction among people, built upon social and environmental foundations. Since the beginning of the COVID-19 outbreak, containment measures such as shelter-in-place, lockdown, social distancing, and self-quarantine have severely impacted the foundation of human society, resulting in a drastic change in our everyday experience. In this paper, the relationships between musical behavior, lifestyle, and psychological states during the shelter-in-place period of the COVID-19 pandemic are investigated. An online survey on musical experience, lifestyle changes, stress level, musical behaviors, media usage, and environmental sound perception was conducted in early June 2020. Responses from 620 people in 24 countries were collected, with the largest proportions of responses coming from the U.S. (55.5%) and India (21.4%). Structural equation modeling (SEM) analysis revealed causal relationships between lifestyle, stress, and music behaviors. Elements such as stress-level change, work risk, and staying home contribute to changes in musical experiences, such as moderating emotion with music, feeling emotional with music, and being more attentive to music. Stress-level change was correlated with work risk and income change, and people who started living with others due to the outbreak, especially with their children, indicated less change in stress level. People with more stress-level change tended to use music more purposefully for their mental well-being, such as to moderate emotions, to influence mood, and to relax. In addition, people with more stress-level change tended to be more annoyed by neighbors' noise. Housing type was not directly associated with annoyance; however, attention to environmental sounds decreased when the housing type was smaller. Attention to environmental and musical sounds and the emotional responses to them are highly inter-correlated.
Multi-group SEM based on musicians showed that the causal relationship structure for professional musicians differs from that of less-experienced musicians. For professional musicians, staying at home was the only component that caused all musical behavior changes; stress did not cause musical behavior changes. Regarding Internet use, listening to music via YouTube and streaming was preferred over TV and radio, especially among less-experienced musicians, while participation in the online music community was preferred by more advanced musicians. This work suggests that social, environmental, and personal factors and limitations influence the changes in our musical behavior, perception of sonic experience, and emotional recognition, and that people actively accommodated the unusual pandemic situations using music and Internet technologies.
Affiliation(s)
- Hiroko Terasawa: Faculty of Library, Information and Media Science, University of Tsukuba, Tsukuba, Japan
- Masaki Matsubara: Faculty of Library, Information and Media Science, University of Tsukuba, Tsukuba, Japan
- Visda Goudarzi: Audio Arts and Acoustics Department, Columbia College Chicago, Chicago, IL, United States
- Makiko Sadakata: Institute for Logic, Language and Computation, University of Amsterdam, Amsterdam, Netherlands
32
Multiple Cases of Auditory Neuropathy Illuminate the Importance of Subcortical Neural Synchrony for Speech-in-noise Recognition and the Frequency-following Response. Ear Hear 2021; 43:605-619. [PMID: 34619687 DOI: 10.1097/aud.0000000000001122]
Abstract
OBJECTIVES The role of subcortical synchrony in speech-in-noise (SIN) recognition and the frequency-following response (FFR) was examined in multiple listeners with auditory neuropathy. Although an absent FFR has been documented in one listener with idiopathic neuropathy who has severe difficulty recognizing SIN, several etiologies cause the neuropathy phenotype. Consequently, it is necessary to replicate absent FFRs and concomitant SIN difficulties in patients with multiple sources and clinical presentations of neuropathy to elucidate fully the importance of subcortical neural synchrony for the FFR and SIN recognition. DESIGN Case series. Three children with auditory neuropathy (two males with neuropathy attributed to hyperbilirubinemia, one female with a rare missense mutation in the OPA1 gene) were compared to age-matched controls with normal hearing (52 for electrophysiology and 48 for speech recognition testing). Tests included standard audiological evaluations, FFRs, and sentence recognition in noise. The three children with neuropathy had a range of clinical presentations, including moderate sensorineural hearing loss, use of a cochlear implant, and a rapid progressive hearing loss. RESULTS Children with neuropathy generally had good speech recognition in quiet but substantial difficulties in noise. These SIN difficulties were somewhat mitigated by a clear speaking style and presenting words in a high semantic context. In the children with neuropathy, FFRs were absent from all tested stimuli. In contrast, age-matched controls had reliable FFRs. CONCLUSION Subcortical synchrony is subject to multiple forms of disruption but results in a consistent phenotype of an absent FFR and substantial difficulties recognizing SIN. These results support the hypothesis that subcortical synchrony is necessary for the FFR. Thus, in healthy listeners, the FFR may reflect subcortical neural processes important for SIN recognition.
33
Zheng Y, Zhao Z, Yang X, Li X. The impact of musical expertise on anticipatory semantic processing during online speech comprehension: An electroencephalography study. Brain Lang 2021; 221:105006. [PMID: 34392023 DOI: 10.1016/j.bandl.2021.105006]
Abstract
Musical experience has been found to aid speech perception. This electroencephalography study further examined whether and how musical expertise affects high-level predictive semantic processing in speech comprehension. Musicians and non-musicians listened to semantically strongly/weakly constraining sentences, with each sentence primed by a congruent/incongruent sentence-prosody. At the target nouns, an N400 reduction effect (strongly vs. weakly constraining) was observed in both groups, with the onset latency of this effect delayed for incongruent (vs. congruent) priming. At the transitive verbs preceding these target nouns, musicians' event-related-potential amplitude (in incongruent priming) and beta-band oscillatory power (in congruent and incongruent priming) showed a semantic-constraint effect and were correlated with the predictability of the incoming nouns; non-musicians demonstrated only an event-related-potential semantic-constraint effect, which was correlated with the predictability of the current verbs. These results indicate that musical expertise enhances the tendency toward semantic prediction in speech comprehension, and that this effect might not be just an aftereffect of facilitated acoustic/phonological processing.
Affiliation(s)
- Yuanyi Zheng: CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100149, China
- Zitong Zhao: CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100149, China
- Xiaohong Yang: CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100149, China
- Xiaoqing Li: CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100149, China
34
Hausfeld L, Disbergen NR, Valente G, Zatorre RJ, Formisano E. Modulating Cortical Instrument Representations During Auditory Stream Segregation and Integration With Polyphonic Music. Front Neurosci 2021; 15:635937. [PMID: 34630007 PMCID: PMC8498193 DOI: 10.3389/fnins.2021.635937]
Abstract
Numerous neuroimaging studies have demonstrated that the auditory cortex tracks ongoing speech and that, in multi-speaker environments, tracking of the attended speaker is enhanced compared to the other, irrelevant speakers. In contrast to speech, multi-instrument music can be appreciated by attending not only to its individual entities (i.e., segregation) but also to multiple instruments simultaneously (i.e., integration). We investigated the neural correlates of these two modes of music listening using electroencephalography (EEG) and sound envelope tracking. To this end, we presented uniquely composed music pieces played by two instruments, a bassoon and a cello, in combination with a previously validated music auditory scene analysis behavioral paradigm (Disbergen et al., 2018). Similar to results obtained through selective listening tasks for speech, relevant instruments could be reconstructed better than irrelevant ones during the segregation task. A delay-specific analysis showed higher reconstruction for the relevant instrument during a middle-latency window for both the bassoon and cello and during a late window for the bassoon. During the integration task, we did not observe significant attentional modulation when reconstructing the overall music envelope. Subsequent analyses indicated that this null result might be due to the heterogeneous strategies listeners employ during the integration task. Overall, our results suggest that, subsequent to a common processing stage, top-down modulations consistently enhance the relevant instrument's representation during an instrument segregation task, whereas such an enhancement is not observed during an instrument integration task. These findings extend previous results from speech tracking to the tracking of multi-instrument music and, furthermore, inform current theories on polyphonic music perception.
Affiliation(s)
- Lars Hausfeld: Department of Cognitive Neuroscience, Maastricht University, Maastricht, Netherlands; Maastricht Brain Imaging Centre (MBIC), Maastricht University, Maastricht, Netherlands
- Niels R Disbergen: Department of Cognitive Neuroscience, Maastricht University, Maastricht, Netherlands; Maastricht Brain Imaging Centre (MBIC), Maastricht University, Maastricht, Netherlands
- Giancarlo Valente: Department of Cognitive Neuroscience, Maastricht University, Maastricht, Netherlands; Maastricht Brain Imaging Centre (MBIC), Maastricht University, Maastricht, Netherlands
- Robert J Zatorre: Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montreal, QC, Canada; International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
- Elia Formisano: Department of Cognitive Neuroscience, Maastricht University, Maastricht, Netherlands; Maastricht Brain Imaging Centre (MBIC), Maastricht University, Maastricht, Netherlands; Maastricht Centre for Systems Biology (MaCSBio), Maastricht University, Maastricht, Netherlands; Brightlands Institute for Smart Society (BISS), Maastricht University, Maastricht, Netherlands
35
Homma NY, Bajo VM. Lemniscal Corticothalamic Feedback in Auditory Scene Analysis. Front Neurosci 2021; 15:723893. [PMID: 34489635 PMCID: PMC8417129 DOI: 10.3389/fnins.2021.723893]
Abstract
Sound information is transmitted from the ear to central auditory stations of the brain via several nuclei. In addition to these ascending pathways there exist descending projections that can influence the information processing at each of these nuclei. A major descending pathway in the auditory system is the feedback projection from layer VI of the primary auditory cortex (A1) to the ventral division of medial geniculate body (MGBv) in the thalamus. The corticothalamic axons have small glutamatergic terminals that can modulate thalamic processing and thalamocortical information transmission. Corticothalamic neurons also provide input to GABAergic neurons of the thalamic reticular nucleus (TRN) that receives collaterals from the ascending thalamic axons. The balance of corticothalamic and TRN inputs has been shown to refine frequency tuning, firing patterns, and gating of MGBv neurons. Therefore, the thalamus is not merely a relay stage in the chain of auditory nuclei but does participate in complex aspects of sound processing that include top-down modulations. In this review, we aim (i) to examine how lemniscal corticothalamic feedback modulates responses in MGBv neurons, and (ii) to explore how the feedback contributes to auditory scene analysis, particularly on frequency and harmonic perception. Finally, we will discuss potential implications of the role of corticothalamic feedback in music and speech perception, where precise spectral and temporal processing is essential.
Affiliation(s)
- Natsumi Y. Homma: Center for Integrative Neuroscience, University of California, San Francisco, San Francisco, CA, United States; Coleman Memorial Laboratory, Department of Otolaryngology – Head and Neck Surgery, University of California, San Francisco, San Francisco, CA, United States
- Victoria M. Bajo: Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
36
Merten N, Fischer ME, Dillard LK, Klein BEK, Tweed TS, Cruickshanks KJ. Benefit of Musical Training for Speech Perception and Cognition Later in Life. J Speech Lang Hear Res 2021; 64:2885-2896. [PMID: 34185592 PMCID: PMC8632477 DOI: 10.1044/2021_jslhr-20-00588]
Abstract
Purpose The aim of this study was to determine the long-term associations of musical training with speech perception in adverse conditions and cognition in a longitudinal cohort study of middle-age to older adults. Method This study is based on Epidemiology of Hearing Loss Study participants. We asked participants at baseline (1993-1995) about their musical training. Speech perception (word recognition in competing message; Northwestern University Auditory Test Number 6), cognitive function (cognitive test battery), and impairment (self-report or surrogate report of Alzheimer's disease or dementia, and/or a Mini-Mental State Examination score ≤ 24) were assessed up to 5 times over the 20-year follow-up. We included 2,938 Epidemiology of Hearing Loss Study participants who had musical training data and at least one follow-up of speech perception and/or cognitive assessment. We used linear mixed-effects models to determine associations between musicianship and decline in speech perception and cognitive function over time and Cox regression models to evaluate associations of musical training with 20-year cumulative incidence of speech perception and cognitive impairment. Models were adjusted for age, sex, and occupation and repeated with additional adjustment for health-related confounders and education. Results Musicians showed less speech perception decline over time with stronger effects in women (0.16% difference, 95% confidence interval [CI] [0.05, 0.26]). Among men, musicians had, on average, better speech perception than nonmusicians (3.41% difference, 95% CI [0.62, 6.20]) and were less likely to develop a cognitive impairment than nonmusicians (hazard ratio = 0.58, 95% CI [0.37, 0.91]). Conclusions Musicians showed an advantage in speech perception abilities and cognition later in life and less decline over time with different magnitudes of effect sizes in men and women. 
Associations remained with further adjustment, indicating that some degree of the advantage of musical training is independent of socioeconomic or health differences. If confirmed, these findings could have implications for developing speech perception intervention and prevention strategies. Supplemental Material https://doi.org/10.23641/asha.14825454.
Affiliation(s)
- Natascha Merten: Department of Population Health Sciences, School of Medicine and Public Health, University of Wisconsin–Madison
- Mary E. Fischer: Department of Ophthalmology and Visual Sciences, School of Medicine and Public Health, University of Wisconsin–Madison
- Lauren K. Dillard: Department of Communication Sciences and Disorders, College of Letters and Science, University of Wisconsin–Madison
- Barbara E. K. Klein: Department of Ophthalmology and Visual Sciences, School of Medicine and Public Health, University of Wisconsin–Madison
- Ted S. Tweed: Department of Ophthalmology and Visual Sciences, School of Medicine and Public Health, University of Wisconsin–Madison
- Karen J. Cruickshanks: Department of Population Health Sciences, School of Medicine and Public Health, University of Wisconsin–Madison; Department of Ophthalmology and Visual Sciences, School of Medicine and Public Health, University of Wisconsin–Madison; Department of Communication Sciences and Disorders, College of Letters and Science, University of Wisconsin–Madison
37
Worschech F, Marie D, Jünemann K, Sinke C, Krüger THC, Großbach M, Scholz DS, Abdili L, Kliegel M, James CE, Altenmüller E. Improved Speech in Noise Perception in the Elderly After 6 Months of Musical Instruction. Front Neurosci 2021; 15:696240. [PMID: 34305522 PMCID: PMC8299120 DOI: 10.3389/fnins.2021.696240]
Abstract
Understanding speech in background noise poses a challenge in daily communication, particularly among the elderly. Although musical expertise has often been suggested to contribute to speech intelligibility, the evidence is mostly correlational. In the present multisite study conducted in Germany and Switzerland, 156 healthy, normal-hearing older adults were randomly assigned to either piano playing or music listening/musical culture groups. The speech reception threshold was assessed using the International Matrix Test before and after a 6 month intervention. Bayesian multilevel modeling revealed an improvement of both groups over time under binaural conditions. Additionally, the speech reception threshold of the piano group decreased for stimuli presented to the left ear. A right-ear improvement occurred only in the German piano group. Furthermore, improvements were predominantly found in women. These findings are discussed in the light of current neuroscientific theories on hemispheric lateralization and biological sex differences. The study indicates a positive transfer from musical training to speech processing, probably supported by enhanced auditory processing and improved general cognitive functions.
Collapse
Affiliation(s)
- Florian Worschech: Institute for Music Physiology and Musicians’ Medicine, Hanover University of Music, Drama and Media, Hanover, Germany; Center for Systems Neuroscience, Hanover, Germany
- Damien Marie: Geneva Musical Minds Lab, Geneva School of Health Sciences, University of Applied Sciences and Arts Western Switzerland (HES-SO), Geneva, Switzerland; Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
- Kristin Jünemann: Center for Systems Neuroscience, Hanover, Germany; Division of Clinical Psychology and Sexual Medicine, Department of Psychiatry, Social Psychiatry and Psychotherapy, Hanover Medical School, Hanover, Germany
- Christopher Sinke: Division of Clinical Psychology and Sexual Medicine, Department of Psychiatry, Social Psychiatry and Psychotherapy, Hanover Medical School, Hanover, Germany
- Tillmann H. C. Krüger: Center for Systems Neuroscience, Hanover, Germany; Division of Clinical Psychology and Sexual Medicine, Department of Psychiatry, Social Psychiatry and Psychotherapy, Hanover Medical School, Hanover, Germany
- Michael Großbach: Institute for Music Physiology and Musicians’ Medicine, Hanover University of Music, Drama and Media, Hanover, Germany
- Daniel S. Scholz: Institute for Music Physiology and Musicians’ Medicine, Hanover University of Music, Drama and Media, Hanover, Germany; Center for Systems Neuroscience, Hanover, Germany
- Laura Abdili: Geneva Musical Minds Lab, Geneva School of Health Sciences, University of Applied Sciences and Arts Western Switzerland (HES-SO), Geneva, Switzerland; Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
- Matthias Kliegel: Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland; Center for the Interdisciplinary Study of Gerontology and Vulnerability, University of Geneva, Geneva, Switzerland
- Clara E. James: Geneva Musical Minds Lab, Geneva School of Health Sciences, University of Applied Sciences and Arts Western Switzerland (HES-SO), Geneva, Switzerland; Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
- Eckart Altenmüller: Institute for Music Physiology and Musicians’ Medicine, Hanover University of Music, Drama and Media, Hanover, Germany; Center for Systems Neuroscience, Hanover, Germany
38
McCrary JM, Redding E, Altenmüller E. Performing arts as a health resource? An umbrella review of the health impacts of music and dance participation. PLoS One 2021; 16:e0252956. [PMID: 34111212 PMCID: PMC8191944 DOI: 10.1371/journal.pone.0252956]
Abstract
An increasing body of evidence notes the health benefits of arts engagement and participation. However, specific health effects and optimal modes and 'doses' of arts participation remain unclear, limiting evidence-based recommendations and prescriptions. The performing arts are the most popular form of arts participation, presenting substantial scope for established interest to be leveraged into positive health outcomes. Results of a three-component umbrella review (PROSPERO ID #: CRD42020191991) of relevant systematic reviews (33), epidemiologic studies (9), and descriptive studies (87) demonstrate that performing arts participation is a broadly health-promoting activity. Beneficial effects of performing arts participation were reported in healthy (non-clinical) children, adolescents, adults, and older adults across 17 health domains (9 supported by moderate-to-high quality evidence per GRADE criteria). Positive health effects were associated with as little as 30 (acute effects) to 60 minutes (sustained weekly participation) of performing arts participation, with drumming and both expressive (ballroom, social) and exercise-based (aerobic dance, Zumba) modes of dance linked to the broadest health benefits. Links between specific health effects and performing arts modes/doses remain unclear, and specific conclusions are limited by a still young and disparate evidence base. Further research is necessary, with this umbrella review providing a critical knowledge foundation.
Affiliation(s)
- J. Matt McCrary: Institute for Music Physiology and Musicians’ Medicine, Hannover University for Music, Drama and Media, Hannover, Germany; Prince of Wales Clinical School, Faculty of Medicine, University of New South Wales, Sydney, Australia
- Emma Redding: Division of Dance Science, Faculty of Dance, Trinity Laban Conservatoire of Music and Dance, London, United Kingdom
- Eckart Altenmüller: Institute for Music Physiology and Musicians’ Medicine, Hannover University for Music, Drama and Media, Hannover, Germany
39
Li X, Zatorre RJ, Du Y. The Microstructural Plasticity of the Arcuate Fasciculus Undergirds Improved Speech in Noise Perception in Musicians. Cereb Cortex 2021; 31:3975-3985. [PMID: 34037726 PMCID: PMC8328222 DOI: 10.1093/cercor/bhab063]
Abstract
Musical training is thought to be related to improved language skills, for example, understanding speech in background noise. Although studies have found that musicians and nonmusicians differ in the morphology of the bilateral arcuate fasciculus (AF), none has associated such white matter features with speech-in-noise (SIN) perception. Here, we tested both SIN and the diffusivity of bilateral AF segments in musicians and nonmusicians using diffusion tensor imaging. Compared with nonmusicians, musicians had higher fractional anisotropy (FA) in the right direct AF and lower radial diffusivity in the left anterior AF, both of which correlated with SIN performance. The FA-based laterality index showed stronger right lateralization of the direct AF and stronger left lateralization of the posterior AF in musicians than in nonmusicians, with the posterior AF laterality predicting SIN accuracy. Furthermore, hemodynamic activity in the right superior temporal gyrus obtained during a SIN task fully mediated the contribution of right direct AF diffusivity to SIN performance, thereby linking training-related white matter plasticity, brain hemodynamics, and speech perception ability. Our findings provide direct evidence that differential microstructural plasticity of bilateral AF segments may serve as a neural foundation of the cross-domain transfer effect of musical experience to speech perception amid competing noise.
Affiliation(s)
- Xiaonan Li: CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- Robert J Zatorre: Montréal Neurological Institute, McGill University, Montréal, QC H3A 2B4, Canada; International Laboratory for Brain, Music, and Sound Research (BRAMS), Montréal, QC H3A 2B4, Canada; Centre for Research on Brain, Language and Music (CRBLM), Montreal, QC H3A 2B4, Canada
- Yi Du: CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai 200031, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China; Chinese Institute for Brain Research, Beijing 102206, China
40
Putkinen V, Saarikivi K, Chan TMV, Tervaniemi M. Faster maturation of selective attention in musically trained children and adolescents: Converging behavioral and event-related potential evidence. Eur J Neurosci 2021; 54:4246-4257. [PMID: 33932235 DOI: 10.1111/ejn.15262]
Abstract
Previous work suggests that musical training in childhood is associated with enhanced executive functions. However, it is unknown whether this advantage extends to selective attention-another central aspect of executive control. We recorded a well-established event-related potential (ERP) marker of distraction, the P3a, during an audio-visual task to investigate the maturation of selective attention in musically trained children and adolescents aged 10-17 years and a control group of untrained peers. The task required categorization of visual stimuli, while a sequence of standard sounds and distracting novel sounds were presented in the background. The music group outperformed the control group in the categorization task and the younger children in the music group showed a smaller P3a to the distracting novel sounds than their peers in the control group. Also, a negative response elicited by the novel sounds in the N1/MMN time range (~150-200 ms) was smaller in the music group. These results indicate that the music group was less easily distracted by the task-irrelevant sound stimulation and gated the neural processing of the novel sounds more efficiently than the control group. Furthermore, we replicated our previous finding that, relative to the control group, the musically trained children and adolescents performed faster in standardized tests for inhibition and set shifting. These results provide novel converging behavioral and electrophysiological evidence from a cross-modal paradigm for accelerated maturation of selective attention in musically trained children and adolescents and corroborate the association between musical training and enhanced inhibition and set shifting.
Affiliation(s)
- Vesa Putkinen: Turku PET Centre, University of Turku, Turku, Finland; Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Katri Saarikivi: Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Cicero Learning, Faculty of Educational Sciences, University of Helsinki, Helsinki, Finland
- Mari Tervaniemi: Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Cicero Learning, Faculty of Educational Sciences, University of Helsinki, Helsinki, Finland
41
McKay CM. No Evidence That Music Training Benefits Speech Perception in Hearing-Impaired Listeners: A Systematic Review. Trends Hear 2021; 25:2331216520985678. [PMID: 33634750 PMCID: PMC7934028 DOI: 10.1177/2331216520985678]
Abstract
As musicians have been shown to have a range of superior auditory skills to non-musicians (e.g., pitch discrimination ability), it has been hypothesized by many researchers that music training can have a beneficial effect on speech perception in populations with hearing impairment. This hypothesis relies on an assumption that the benefits seen in musicians are due to their training and not due to innate skills that may support successful musicianship. This systematic review examined the evidence from 13 longitudinal training studies that tested the hypothesis that music training has a causal effect on speech perception ability in hearing-impaired listeners. The papers were evaluated for quality of research design and appropriate analysis techniques. Only 4 of the 13 papers used a research design that allowed a causal relation between music training and outcome benefits to be validly tested, and none of those 4 papers with a better quality study design demonstrated a benefit of music training for speech perception. In spite of the lack of valid evidence in support of the hypothesis, 10 of the 13 papers made claims of benefits of music training, showing a propensity for confirmation bias in this area of research. It is recommended that future studies that aim to evaluate the association of speech perception ability and music training use a study design that differentiates the effects of training from those of innate perceptual and cognitive skills in the participants.
Affiliation(s)
- Colette M McKay: Bionics Institute, Melbourne, Australia; Department of Medical Bionics, The University of Melbourne, Melbourne, Australia; Department of Audiology and Speech Pathology, The University of Melbourne, Melbourne, Australia
42
Impact of Auditory-Motor Musical Training on Melodic Pattern Recognition in Cochlear Implant Users. Otol Neurotol 2021; 41:e422-e431. [PMID: 32176126 DOI: 10.1097/mao.0000000000002525]
Abstract
OBJECTIVE Cochlear implant (CI) users struggle with tasks of pitch-based prosody perception. Pitch pattern recognition is vital for both music comprehension and understanding the prosody of speech, which signals emotion and intent. Research in normal-hearing individuals shows that auditory-motor training, in which participants produce the auditory pattern they are learning, is more effective than passive auditory training. We investigated whether auditory-motor training of CI users improves complex sound perception, such as vocal emotion recognition and pitch pattern recognition, compared with purely auditory training. STUDY DESIGN Prospective cohort study. SETTING Tertiary academic center. PATIENTS Fifteen postlingually deafened adults with CIs. INTERVENTION(S) Participants were divided into 3 one-month training groups: auditory-motor (intervention), auditory-only (active control), and no training (control). Auditory-motor training was conducted with the "Contours" software program and auditory-only training was completed with the "AngelSound" software program. MAIN OUTCOME MEASURE Pre and posttest examinations included tests of speech perception (consonant-nucleus-consonant, hearing-in-noise test sentence recognition), speech prosody perception, pitch discrimination, and melodic contour identification. RESULTS Participants in the auditory-motor training group performed better than those in the auditory-only and no-training groups (p < 0.05) on the melodic contour identification task. No significant training effect was noted on tasks of speech perception, speech prosody perception, or pitch discrimination. CONCLUSIONS These data suggest that short-term auditory-motor music training of CI users impacts pitch pattern recognition. This study offers approaches for enriching the world of complex sound in the CI user.
43
Perron M, Theaud G, Descoteaux M, Tremblay P. The frontotemporal organization of the arcuate fasciculus and its relationship with speech perception in young and older amateur singers and non-singers. Hum Brain Mapp 2021; 42:3058-3076. [PMID: 33835629 PMCID: PMC8193549 DOI: 10.1002/hbm.25416]
Abstract
The ability to perceive speech in noise (SPiN) declines with age. Although the etiology of SPiN decline is not well understood, accumulating evidence suggests a role for the dorsal speech stream. While age-related decline within the dorsal speech stream would negatively affect SPiN performance, experience-induced neuroplastic changes within the dorsal speech stream could positively affect SPiN performance. Here, we investigated the relationship between SPiN performance and the structure of the arcuate fasciculus (AF), which forms the white matter scaffolding of the dorsal speech stream, in aging singers and non-singers. Forty-three non-singers and 41 singers aged 20 to 87 years old completed a hearing evaluation and a magnetic resonance imaging session that included High Angular Resolution Diffusion Imaging. The groups were matched for sex, age, education, handedness, cognitive level, and musical instrument experience. A subgroup of participants completed a syllable discrimination in noise task. The AF was divided into 10 segments to explore potential local specializations for SPiN. The results show that, in carefully matched groups of singers and non-singers, (a) myelin and/or axonal membrane deterioration within the bilateral frontotemporal AF segments is associated with SPiN difficulties in aging singers and non-singers; (b) the structure of the AF differs between singers and non-singers; (c) these differences are not associated with a SPiN benefit for singers. This study clarifies the etiology of SPiN difficulties by supporting the hypothesis of a role for aging of the dorsal speech stream.
Affiliation(s)
- Maxime Perron: CERVO Brain Research Center, Quebec City, Quebec, Canada; Département de Réadaptation, Université Laval, Faculté de Médecine, Quebec City, Quebec, Canada
- Guillaume Theaud: Sherbrooke Connectivity Imaging Lab (SCIL), Computer Science Department, Université de Sherbrooke, Sherbrooke, Quebec, Canada
- Maxime Descoteaux: Sherbrooke Connectivity Imaging Lab (SCIL), Computer Science Department, Université de Sherbrooke, Sherbrooke, Quebec, Canada
- Pascale Tremblay: CERVO Brain Research Center, Quebec City, Quebec, Canada; Département de Réadaptation, Université Laval, Faculté de Médecine, Quebec City, Quebec, Canada
44
MEG Intersubject Phase Locking of Stimulus-Driven Activity during Naturalistic Speech Listening Correlates with Musical Training. J Neurosci 2021; 41:2713-2722. [PMID: 33536196 DOI: 10.1523/jneurosci.0932-20.2020]
Abstract
Musical training is associated with increased structural and functional connectivity between auditory sensory areas and higher-order brain networks involved in speech and motor processing. Whether such changed connectivity patterns facilitate the cortical propagation of speech information in musicians remains poorly understood. We here used magnetoencephalography (MEG) source imaging and a novel seed-based intersubject phase-locking approach to investigate the effects of musical training on the interregional synchronization of stimulus-driven neural responses during listening to naturalistic continuous speech presented in silence. MEG data were obtained from 20 young human subjects (both sexes) with different degrees of musical training. Our data show robust bilateral patterns of stimulus-driven interregional phase synchronization between auditory cortex and frontotemporal brain regions previously associated with speech processing. Stimulus-driven phase locking was maximal in the delta band, but was also observed in the theta and alpha bands. The individual duration of musical training was positively associated with the magnitude of stimulus-driven alpha-band phase locking between auditory cortex and parts of the dorsal and ventral auditory processing streams. These findings provide evidence for a positive relationship between musical training and the propagation of speech-related information between auditory sensory areas and higher-order processing networks, even when speech is presented in silence. We suggest that the increased synchronization of higher-order cortical regions to auditory cortex may contribute to the previously described musician advantage in processing speech in background noise. SIGNIFICANCE STATEMENT Musical training has been associated with widespread structural and functional brain plasticity. It has been suggested that these changes benefit the production and perception of music but can also translate to other domains of auditory processing, such as speech. We developed a new magnetoencephalography intersubject analysis approach to study the cortical synchronization of stimulus-driven neural responses during the perception of continuous natural speech and its relationship to individual musical training. Our results provide evidence that musical training is associated with higher synchronization of stimulus-driven activity between brain regions involved in early auditory sensory and higher-order processing. We suggest that the increased synchronized propagation of speech information may contribute to the previously described musician advantage in processing speech in background noise.
45
Kaplan EC, Wagner AE, Toffanin P, Başkent D. Do Musicians and Non-musicians Differ in Speech-on-Speech Processing? Front Psychol 2021; 12:623787. [PMID: 33679539 PMCID: PMC7931613 DOI: 10.3389/fpsyg.2021.623787]
Abstract
Earlier studies have shown that musically trained individuals may have a benefit in adverse listening situations when compared to non-musicians, especially in speech-on-speech perception. However, the literature provides mostly conflicting results. In the current study, by employing different measures of spoken language processing, we aimed to test whether we could capture potential differences between musicians and non-musicians in speech-on-speech processing. We used an offline measure of speech perception (sentence recall task), which reveals a post-task response, and online measures of real time spoken language processing: gaze-tracking and pupillometry. We used stimuli of comparable complexity across both paradigms and tested the same groups of participants. In the sentence recall task, musicians recalled more words correctly than non-musicians. In the eye-tracking experiment, both groups showed reduced fixations to the target and competitor words' images as the level of speech maskers increased. The time course of gaze fixations to the competitor did not differ between groups in the speech-in-quiet condition, while the time course dynamics did differ between groups as the two-talker masker was added to the target signal. As the level of two-talker masker increased, musicians showed reduced lexical competition as indicated by the gaze fixations to the competitor. The pupil dilation data showed differences mainly in one target-to-masker ratio. This does not allow to draw conclusions regarding potential differences in the use of cognitive resources between groups. Overall, the eye-tracking measure enabled us to observe that musicians may be using a different strategy than non-musicians to attain spoken word recognition as the noise level increased. However, further investigation with more fine-grained alignment between the processes captured by online and offline measures is necessary to establish whether musicians differ due to better cognitive control or sound processing.
Affiliation(s)
- Elif Canseza Kaplan: Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands; Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, Netherlands
- Anita E Wagner: Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Paolo Toffanin: Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Deniz Başkent: Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands; Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, Netherlands
46
Ferreira MC, Zamberlan-Amorim NE, Wolf AE, Reis ACMB. Influence of different types of noise on sentence recognition in normally hearing adults. REVISTA CEFAC 2021. [DOI: 10.1590/1982-0216/20212352121]
Abstract
Objective: to analyze speech perception in normally hearing adults when listening in silence and with different types of noise. Methods: 40 individuals of both sexes, aged 18 to 45 years, participated in the study. Speech perception was assessed with the Lists of Sentences in Portuguese test, without a competing noise and with speech-spectrum, babble, and cocktail party noise. A mixed-effects linear regression model and the 95% confidence interval were used. Results: the subjects’ performance was worse in the three types of noise than in silence. When comparing the types of noise, differences were found in all combinations (speech-spectrum X babble, speech-spectrum X cocktail party, and babble X cocktail party), with the worst performance in babble noise, followed by cocktail party. Conclusion: all noises negatively influenced speech perception, with the worst performance in babble, followed by cocktail party and speech-spectrum.
47
Amemane R, Gundmi A, Madikeri Mohan K. Effect of Carnatic Music Listening Training on Speech in Noise Performance in Adults. J Audiol Otol 2020; 25:22-26. [PMID: 33181869 PMCID: PMC7835432 DOI: 10.7874/jao.2020.00255]
Abstract
Background and Objectives Music listening has a concomitant effect on the structural and functional organization of the brain. It helps in relaxation, mind training, and neural strengthening. Accordingly, the present study aimed to examine the effect of Carnatic music listening training (MLT) on speech-in-noise performance in adults. Subjects and Methods A total of 28 participants (40-70 years) were recruited in the study. Based on a randomized control trial, they were divided into intervention and control groups. The intervention group underwent a short-term MLT. The Quick Speech-in-Noise test in Kannada was used as an outcome measure. Results Results were analysed using mixed method analysis of variance (ANOVA) and repeated measures ANOVA. There was a significant difference between the intervention and control groups post MLT. A follow-up comparison revealed no statistically significant difference between post-training and follow-up scores in either group. Conclusions In conclusion, short-term MLT improved speech-in-noise performance. MLT can hence be used as a viable tool in formal auditory training for better prognosis.
Affiliation(s)
- Archana Gundmi: Department of Speech and Hearing, Manipal College for Health Professions, Manipal Academy of Higher Education, Manipal, Karnataka, India
- Kishan Madikeri Mohan: Department of Speech and Hearing, Manipal College for Health Professions, Manipal Academy of Higher Education, Manipal, Karnataka, India
48
Couth S, Mazlan N, Moore DR, Munro KJ, Dawes P. Hearing Difficulties and Tinnitus in Construction, Agricultural, Music, and Finance Industries: Contributions of Demographic, Health, and Lifestyle Factors. Trends Hear 2020; 23:2331216519885571. [PMID: 31747526 PMCID: PMC6868580 DOI: 10.1177/2331216519885571]
Abstract
High levels of occupational noise exposure increase the risk of hearing difficulties and tinnitus. However, differences in demographic, health, and lifestyle factors could also contribute to high levels of hearing difficulties and tinnitus in some industries. Data from a subsample (n = 22,936) of the U.K. Biobank were analyzed to determine to what extent differences in levels of hearing difficulties and tinnitus in high-risk industries (construction, agricultural, and music) compared with low-risk industries (finance) could be attributable to demographic, health, and lifestyle factors, rather than occupational noise exposure. Hearing difficulties were identified using a digits-in-noise speech recognition test. Tinnitus was identified based on self-report. Logistic regression analyses showed that occupational noise exposure partially accounted for higher levels of hearing difficulties in the agricultural industry compared with finance, and occupational noise exposure, older age, low socioeconomic status, and non-White ethnic background partially accounted for higher levels of hearing difficulties in the construction industry. However, the factors assessed in the model did not fully account for the increased likelihood of hearing difficulties in high-risk industries, suggesting that there are additional unknown factors which impact on hearing or that there was insufficient measurement of factors included in the model. The levels of tinnitus were greatest for music and construction industries compared with finance, and these differences were accounted for by occupational and music noise exposure, as well as older age. These findings emphasize the need to promote hearing conservation in occupational and music settings, with a particular focus on high-risk demographic subgroups.
Affiliation(s)
- Samuel Couth: Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, UK
- Naadia Mazlan: Faculty of Engineering, School of Civil Engineering, Universiti Teknologi Malaysia, Malaysia
- David R Moore: Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, UK; Communication Sciences Research Center, Cincinnati Children's Hospital Medical Centre, OH, USA
- Kevin J Munro: Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, UK; Manchester Academic Health Science Centre, Manchester University Hospitals NHS Foundation Trust, UK
- Piers Dawes: Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, UK
49
Laffere A, Dick F, Holt LL, Tierney A. Attentional modulation of neural entrainment to sound streams in children with and without ADHD. Neuroimage 2020; 224:117396. [PMID: 32979522 DOI: 10.1016/j.neuroimage.2020.117396]
Abstract
To extract meaningful information from complex auditory scenes like a noisy playground, rock concert, or classroom, children can direct attention to different sound streams. One means of accomplishing this might be to align neural activity with the temporal structure of a target stream, such as a specific talker or melody. However, this may be more difficult for children with ADHD, who can struggle with accurately perceiving and producing temporal intervals. In this EEG study, we found that school-aged children's attention to one of two temporally-interleaved isochronous tone 'melodies' was linked to an increase in phase-locking at the melody's rate, and a shift in neural phase that aligned the neural responses with the attended tone stream. Children's attention task performance and neural phase alignment with the attended melody were linked to performance on temporal production tasks, suggesting that children with more robust control over motor timing were better able to direct attention to the time points associated with the target melody. Finally, we found that although children with ADHD performed less accurately on the tonal attention task than typically developing children, they showed the same degree of attentional modulation of phase locking and neural phase shifts, suggesting that children with ADHD may have difficulty with attentional engagement rather than attentional selection.
Affiliation(s)
- Aeron Laffere: Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, United Kingdom
- Fred Dick: Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, United Kingdom; Division of Psychology & Language Sciences, UCL, Gower Street, London, WC1E 6BT, United Kingdom
- Lori L Holt: Department of Psychology, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213, United States
- Adam Tierney: Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, United Kingdom
50
Lee J, Han JH, Lee HJ. Long-Term Musical Training Alters Auditory Cortical Activity to the Frequency Change. Front Hum Neurosci 2020; 14:329. [PMID: 32973478 PMCID: PMC7471721 DOI: 10.3389/fnhum.2020.00329]
Abstract
Objective: The ability to detect frequency variation is a fundamental skill necessary for speech perception. It is known that musical expertise is associated with a range of auditory perceptual skills, including discriminating frequency change, which suggests the neural encoding of spectral features can be enhanced by musical training. In this study, we measured auditory cortical responses to frequency change in musicians to examine the relationships between N1/P2 responses and behavioral performance/musical training. Methods: Behavioral and electrophysiological data were obtained from professional musicians and age-matched non-musician participants. Behavioral data included frequency discrimination thresholds in quiet (no threshold-equalizing noise, TEN) and at +5, 0, and -5 dB signal-to-noise ratios. Auditory-evoked responses were measured using a 64-channel electroencephalogram (EEG) system in response to frequency changes in ongoing pure tones at 250 and 4,000 Hz; the magnitudes of frequency change were 10%, 25%, or 50% of the base frequencies. N1 and P2 amplitudes and latencies, as well as dipole source activation in the left and right hemispheres, were measured for each condition. Results: Compared to the non-musician group, behavioral thresholds in the musician group were lower for frequency discrimination in quiet conditions only. The scalp-recorded N1 amplitudes were modulated as a function of frequency change. P2 amplitudes in the musician group were larger than in the non-musician group. Dipole source analysis showed that P2 dipole activity to frequency changes was lateralized to the right hemisphere, with greater activity in the musician group regardless of hemisphere. Additionally, N1 amplitudes to frequency changes were positively related to behavioral thresholds for frequency discrimination, while enhanced P2 amplitudes were associated with a longer duration of musical training.
Conclusions: Our results demonstrate that auditory cortical potentials evoked by frequency change are related to behavioral thresholds for frequency discrimination in musicians. Larger P2 amplitudes in musicians compared to non-musicians reflect musical training-induced neural plasticity.
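The N1/P2 amplitude measures this study relies on can be sketched in a few lines. This is an assumed, simplified illustration, not the authors' procedure: from a baseline-corrected, trial-averaged waveform, N1 is taken as the most negative point in an early window and P2 as the most positive point in a later one. The window bounds below are typical textbook ranges chosen purely for illustration.

```python
import numpy as np

def n1_p2_amplitudes(erp, sfreq, n1_win=(0.08, 0.15), p2_win=(0.15, 0.30)):
    """Peak N1 (negative) and P2 (positive) amplitudes from an averaged ERP.

    erp: 1-D array, the trial-averaged, baseline-corrected waveform with
    time zero at stimulus (frequency-change) onset. Window bounds are in
    seconds and are illustrative assumptions, not the paper's parameters.
    """
    t = np.arange(erp.size) / sfreq
    n1_mask = (t >= n1_win[0]) & (t <= n1_win[1])
    p2_mask = (t >= p2_win[0]) & (t <= p2_win[1])
    n1 = erp[n1_mask].min()  # N1: most negative point in its window
    p2 = erp[p2_mask].max()  # P2: most positive point in its window
    return n1, p2

# Synthetic ERP: a -2 uV deflection near 100 ms and a +3 uV deflection
# near 200 ms, mimicking an N1-P2 complex.
sfreq = 1000
t = np.arange(400) / sfreq
erp = -2*np.exp(-((t - 0.10)/0.02)**2) + 3*np.exp(-((t - 0.20)/0.03)**2)
n1, p2 = n1_p2_amplitudes(erp, sfreq)
print(n1, p2)
```

On real data one would average many epochs per condition first; peak picking on single trials is dominated by noise, which is why ERP amplitudes are defined on the trial average.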
Affiliation(s)
- Jihyun Lee: Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, South Korea
- Ji-Hye Han: Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, South Korea
- Hyo-Jeong Lee: Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, South Korea; Department of Otorhinolaryngology, College of Medicine, Hallym University, Anyang, South Korea