1. Leibold LJ, Buss E, Miller MK, Cowan T, McCreery RW, Oleson J, Rodriguez B, Calandruccio L. Development of the Children's English and Spanish Speech Recognition Test: Psychometric Properties, Feasibility, Reliability, and Normative Data. Ear Hear 2024;45:860-877. PMID: 38334698; PMCID: PMC11178473; DOI: 10.1097/aud.0000000000001480.
Abstract
OBJECTIVES The Children's English and Spanish Speech Recognition (ChEgSS) test is a computer-based tool for assessing closed-set word recognition in English and in Spanish, with a masker that is either speech-shaped noise or competing speech. The present study was conducted to (1) characterize the psychometric properties of the ChEgSS test, (2) evaluate feasibility and reliability for a large cohort of Spanish/English bilingual children with normal hearing, and (3) establish normative data. DESIGN Three experiments were conducted to evaluate speech perception in children (4-17 years) and adults (19-40 years) with normal hearing using the ChEgSS test. In Experiment 1, data were collected from Spanish/English bilingual and English monolingual adults at multiple, fixed signal-to-noise ratios. Psychometric functions were fitted to the word-level data to characterize variability across target words in each language and in each masker condition. In Experiment 2, Spanish/English bilingual adults were tested using an adaptive tracking procedure to evaluate the influence of different target-word normalization approaches on the reliability of estimates of masked-speech recognition thresholds corresponding to 70.7% correct word recognition and to determine the optimal number of reversals needed to obtain reliable estimates. In Experiment 3, Spanish/English bilingual and English monolingual children completed speech perception testing using the ChEgSS test to (1) characterize feasibility across age and language group, (2) evaluate test-retest reliability, and (3) establish normative data. RESULTS Experiments 1 and 2 yielded data that are essential for stimulus normalization, optimizing threshold estimation procedures, and interpreting threshold data across test language and masker type. Findings obtained from Spanish/English bilingual and English monolingual children with normal hearing in Experiment 3 support feasibility and demonstrate reliability for use with children as young as 4 years of age. Equivalent results for testing in English and Spanish were observed for Spanish/English bilingual children, contingent on adequate proficiency in the target language. Regression-based threshold norms were established for Spanish/English bilingual and English monolingual children between 4 and 17 years of age. CONCLUSIONS The present findings indicate the ChEgSS test is appropriate for testing a wide age range of children with normal hearing in either Spanish, English, or both languages. The ChEgSS test is currently being evaluated in a large cohort of patients with hearing loss at pediatric audiology clinics across the United States. Results will be compared with normative data established in the present study and with established clinical measures used to evaluate English- and Spanish-speaking children. Questionnaire data from parents and clinician feedback will be used to further improve test procedures.
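For readers who want to see the shape of the threshold analysis described above, the following is a minimal, hypothetical Python sketch (not the ChEgSS analysis code): it fits a logistic psychometric function to invented fixed-SNR word-recognition data and inverts it at 70.7% correct, the level commonly targeted by 2-down/1-up adaptive tracks.

```python
# Hypothetical sketch: fit a logistic psychometric function to fixed-SNR
# word-recognition data and estimate the SNR at 70.7% correct.
# All values are invented; this is not the ChEgSS analysis code.
import numpy as np
from scipy.optimize import curve_fit

snr_db = np.array([-12.0, -8.0, -4.0, 0.0, 4.0])          # presentation SNRs
prop_correct = np.array([0.18, 0.35, 0.62, 0.85, 0.96])   # proportion correct

def logistic(snr, midpoint, slope):
    """Two-parameter logistic psychometric function."""
    return 1.0 / (1.0 + np.exp(-slope * (snr - midpoint)))

(midpoint, slope), _ = curve_fit(logistic, snr_db, prop_correct, p0=[-4.0, 0.5])

# Invert the fitted function to find the SNR yielding 70.7% correct,
# the level commonly targeted by a 2-down/1-up adaptive track.
target = 0.707
snr_at_target = midpoint - np.log(1.0 / target - 1.0) / slope
print(f"Estimated threshold at {target:.1%} correct: {snr_at_target:.1f} dB SNR")
```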
Affiliation(s)
- Lori J Leibold
- Boys Town National Research Hospital, Center for Hearing Research, Omaha, Nebraska, USA
- Emily Buss
- Department of Otolaryngology/HNS, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
- Margaret K Miller
- Boys Town National Research Hospital, Center for Hearing Research, Omaha, Nebraska, USA
- Tiana Cowan
- Boys Town National Research Hospital, Center for Hearing Research, Omaha, Nebraska, USA
- Ryan W McCreery
- Boys Town National Research Hospital, Center for Hearing Research, Omaha, Nebraska, USA
- Jacob Oleson
- Department of Biostatistics, University of Iowa, Iowa City, Iowa, USA
- Barbara Rodriguez
- Department of Speech and Hearing Sciences, University of New Mexico, Albuquerque, New Mexico, USA
- Lauren Calandruccio
- Department of Psychological Sciences, Case Western Reserve University, Cleveland, Ohio, USA
2. Nematova S, Zinszer B, Jasinska KK. Exploring audiovisual speech perception in monolingual and bilingual children in Uzbekistan. J Exp Child Psychol 2024;239:105808. PMID: 37972516; DOI: 10.1016/j.jecp.2023.105808.
Abstract
This study aimed to investigate the development of audiovisual speech perception in monolingual Uzbek-speaking and bilingual Uzbek-Russian-speaking children, focusing on the impact of language experience on audiovisual speech perception and the role of visual phonetic (i.e., mouth movements corresponding to phonetic/lexical information) and temporal (i.e., timing of speech signals) cues. A total of 321 children aged 4 to 10 years in Tashkent, Uzbekistan, discriminated /ba/ and /da/ syllables across three conditions: auditory-only, audiovisual phonetic (i.e., sound accompanied by mouth movements), and audiovisual temporal (i.e., sound onset/offset accompanied by mouth opening/closing). Effects of modality (audiovisual phonetic, audiovisual temporal, or audio-only cues), age, group (monolingual or bilingual), and their interactions were tested using a Bayesian regression model. Overall, older participants performed better than younger participants. Participants performed better in the audiovisual phonetic modality compared with the auditory modality. However, no significant difference between monolingual and bilingual children was observed across all modalities. This finding stands in contrast to earlier studies. We attribute the contrasting findings of our study and the existing literature to the cross-linguistic similarity of the language pairs involved. When the languages spoken by bilinguals exhibit substantial linguistic similarity, there may be an increased necessity to disambiguate speech signals, leading to a greater reliance on audiovisual cues. The limited phonological similarity between Uzbek and Russian might have minimized bilinguals' need to rely on visual speech cues, contributing to the lack of group differences in our study.
Affiliation(s)
- Shakhlo Nematova
- Department of Linguistics and Cognitive Science, University of Delaware, Newark, DE 19716, USA.
- Benjamin Zinszer
- Department of Psychology, Swarthmore College, Swarthmore, PA 19081, USA
- Kaja K Jasinska
- Department of Applied Psychology and Human Development, University of Toronto, Toronto, ON M5S 1A1, Canada
3. Bsharat-Maalouf D, Degani T, Karawani H. The Involvement of Listening Effort in Explaining Bilingual Listening Under Adverse Listening Conditions. Trends Hear 2023;27:23312165231205107. PMID: 37941413; PMCID: PMC10637154; DOI: 10.1177/23312165231205107.
Abstract
The current review examines listening effort to uncover how it is implicated in bilingual performance under adverse listening conditions. Various measures of listening effort, including physiological, behavioral, and subjective measures, have been employed to examine listening effort in bilingual children and adults. Adverse listening conditions, stemming from environmental factors, as well as factors related to the speaker or listener, have been examined. The existing literature, although relatively limited to date, points to increased listening effort among bilinguals in their nondominant second language (L2) compared to their dominant first language (L1) and relative to monolinguals. Interestingly, increased effort is often observed even when speech intelligibility remains unaffected. These findings emphasize the importance of considering listening effort alongside speech intelligibility. Building upon the insights gained from the current review, we propose that various factors may modulate the observed effects. These include the particular measure selected to examine listening effort, the characteristics of the adverse condition, as well as factors related to the particular linguistic background of the bilingual speaker. Critically, further research is needed to better understand the impact of these factors on listening effort. The review outlines avenues for future research that would promote a comprehensive understanding of listening effort in bilingual individuals.
Affiliation(s)
- Dana Bsharat-Maalouf
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
- Tamar Degani
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
- Hanin Karawani
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
4. Cowan T, Paroby C, Leibold LJ, Buss E, Rodriguez B, Calandruccio L. Masked-Speech Recognition for Linguistically Diverse Populations: A Focused Review and Suggestions for the Future. J Speech Lang Hear Res 2022;65:3195-3216. PMID: 35917458; PMCID: PMC9911100; DOI: 10.1044/2022_jslhr-22-00011.
Abstract
PURPOSE Twenty years ago, von Hapsburg and Peña (2002) wrote a tutorial that reviewed the literature on speech audiometry and bilingualism and outlined valuable recommendations to increase the rigor of the evidence base. This review article returns to that seminal tutorial to reflect on how that advice was applied over the last 20 years and to provide updated recommendations for future inquiry. METHOD We conducted a focused review of the literature on masked-speech recognition for bilingual children and adults. First, we evaluated how studies published since 2002 described bilingual participants. Second, we reviewed the literature on native language masked-speech recognition. Third, we discussed theoretically motivated experimental work. Fourth, we outlined how recent research in bilingual speech recognition can be used to improve clinical practice. RESULTS Research conducted since 2002 commonly describes bilingual samples in terms of their language status, competency, and history. Bilingualism was not consistently associated with poor masked-speech recognition. For example, bilinguals who were exposed to English prior to age 7 years and who were dominant in English performed comparably to monolinguals for masked-sentence recognition tasks. To the best of our knowledge, there are no data to document the masked-speech recognition ability of these bilinguals in their other language compared to a second monolingual group, which is an important next step. Nonetheless, individual factors that commonly vary within bilingual populations were associated with masked-speech recognition and included language dominance, competency, and age of acquisition. We identified methodological issues in sampling strategies that could, in part, be responsible for inconsistent findings between studies. For instance, disparities in socioeconomic status (SES) between recruited bilingual and monolingual groups could cause confounding bias within the research design. CONCLUSIONS Dimensions of the bilingual linguistic profile should be considered in clinical practice to inform counseling and (re)habilitation strategies since susceptibility to masking is elevated in at least one language for most bilinguals. Future research should continue to report language status, competency, and history but should also report language stability and demand for use data. In addition, potential confounds (e.g., SES, educational attainment) when making group comparisons between monolinguals and bilinguals must be considered.
Affiliation(s)
- Tiana Cowan
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Caroline Paroby
- Department of Psychological Sciences, Case Western Reserve University, Cleveland, OH
- Lori J. Leibold
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Emily Buss
- Department of Otolaryngology/Head and Neck Surgery, The University of North Carolina at Chapel Hill
- Barbara Rodriguez
- Department of Speech and Hearing Sciences, The University of New Mexico, Albuquerque
- Lauren Calandruccio
- Department of Psychological Sciences, Case Western Reserve University, Cleveland, OH
5. Strydom L, Pottas L, Soer M, Graham MA. Effects of language experience on selective auditory attention and speech-in-noise perception among English second language learners: Preliminary findings. Int J Pediatr Otorhinolaryngol 2022;154:111061. PMID: 35149369; DOI: 10.1016/j.ijporl.2022.111061.
Abstract
OBJECTIVE The purpose of the study was to examine the effects of language experience on selective auditory attention and speech-in-noise perception in English Second Language (ESL) learners aged seven to eight years. METHOD A quantitative, descriptive, comparative cross-sectional research design was used to determine the effect of age of exposure to English on the selective auditory attention abilities and speech-in-noise perception skills of 40 children with normal hearing in first or second grade (aged seven to eight years). The control group comprised 20 English first language (EFL) learners (mean age = 7.35 ± 0.49 years) and the research group comprised 20 English second language learners (mean age = 7.70 ± 0.47 years). The Mann-Whitney test was used to compare the control and research groups with respect to the age of exposure to English through various sources. Information on age of exposure was gathered with a case history questionnaire completed by the participants' parents/guardians. The Selective Auditory Attention Test (SAAT) and Digits-in-Noise (DIN) test were administered in one sitting. RESULTS No statistically significant differences between the EFL and ESL groups were found for the SAAT or the DIN. However, a statistically significant correlation was observed between SAAT lists 1 and 3 and the DIN diotic listening condition for the ESL group only (rs = -0.623; p = 0.003). The difference between the EFL and ESL groups in mean age of exposure to English was statistically significant (p = 0.019), with the mean age of exposure in the ESL group (2.82 ± 0.53 years) being higher than in the EFL group (1.81 ± 1.53 years). However, this difference did not significantly influence SAAT or DIN results. CONCLUSION The main finding was that selective auditory attention and speech-in-noise perception were not significantly affected in the ESL learners who participated in the study, learners who were recruited from private schools located in an urban area and thus from higher socio-economic status (SES) households. Additional research with a larger sample size is needed to determine the selective auditory attention abilities and speech-in-noise perception skills of ESL learners in government-funded schools located in rural areas and from various socio-economic backgrounds.
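A minimal, hypothetical Python sketch of the two nonparametric statistics reported above (a Mann-Whitney comparison of age of exposure between groups and a Spearman correlation between test scores within a group); all values are invented and this is not the study's analysis code.

```python
# Hypothetical sketch of the nonparametric statistics described above.
# Data are invented; this is not the study's analysis code.
import numpy as np
from scipy.stats import mannwhitneyu, spearmanr

rng = np.random.default_rng(0)
age_exposure_efl = rng.normal(1.8, 1.5, size=20)   # EFL group, years
age_exposure_esl = rng.normal(2.8, 0.5, size=20)   # ESL group, years

# Compare age of exposure to English between groups (Mann-Whitney U).
u_stat, p_group = mannwhitneyu(age_exposure_efl, age_exposure_esl)
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_group:.3f}")

# Within one group, correlate SAAT scores with DIN thresholds (Spearman rs).
saat_scores = rng.normal(80.0, 10.0, size=20)
din_thresholds = rng.normal(-10.0, 2.0, size=20)
rs, p_corr = spearmanr(saat_scores, din_thresholds)
print(f"Spearman rs = {rs:.3f}, p = {p_corr:.3f}")
```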
Affiliation(s)
- Lianca Strydom
- Department of Speech-Language Pathology and Audiology, Faculty of Humanities, University of Pretoria, Pretoria, Gauteng, South Africa.
- Lidia Pottas
- Department of Speech-Language Pathology and Audiology, Faculty of Humanities, University of Pretoria, Pretoria, Gauteng, South Africa
- Maggi Soer
- Department of Speech-Language Pathology and Audiology, Faculty of Humanities, University of Pretoria, Pretoria, Gauteng, South Africa
- Marien Alet Graham
- Department of Science, Mathematics and Technology Education, Faculty of Education, University of Pretoria, Pretoria, Gauteng, South Africa
6. Bilinguals Show Proportionally Greater Benefit From Visual Speech Cues and Sentence Context in Their Second Compared to Their First Language. Ear Hear 2021;43:1316-1326. PMID: 34966162; DOI: 10.1097/aud.0000000000001182.
Abstract
OBJECTIVES Speech perception in noise is challenging, but evidence suggests that it may be facilitated by visual speech cues (e.g., lip movements) and supportive sentence context in native speakers. Comparatively few studies have investigated speech perception in noise in bilinguals, and little is known about the impact of visual speech cues and supportive sentence context in a first language compared to a second language within the same individual. The current study addresses this gap by directly investigating the extent to which bilinguals benefit from visual speech cues and supportive sentence context under similarly noisy conditions in their first and second language. DESIGN Thirty young adult English-French/French-English bilinguals were recruited from the undergraduate psychology program at Concordia University and from the Montreal community. They completed a speech perception in noise task during which they were presented with video-recorded sentences and instructed to repeat the last word of each sentence out loud. Sentences were presented in three different modalities: visual-only, auditory-only, and audiovisual. Additionally, sentences had one of two levels of context: moderate (e.g., "In the woods, the hiker saw a bear.") and low (e.g., "I had not thought about that bear."). Each participant completed this task in both their first and second language; crucially, the level of background noise was calibrated individually for each participant and was the same throughout the first language and second language (L2) portions of the experimental task. RESULTS Overall, speech perception in noise was more accurate in bilinguals' first language compared to the second. However, participants benefited from visual speech cues and supportive sentence context to a proportionally greater extent in their second language compared to their first. At the individual level, performance during the speech perception in noise task was related to aspects of bilinguals' experience in their second language (i.e., age of acquisition, relative balance between the first and the second language). CONCLUSIONS Bilinguals benefit from visual speech cues and sentence context in their second language during speech in noise and do so to a greater extent than in their first language given the same level of background noise. Together, this indicates that L2 speech perception can be conceptualized within an inverse effectiveness hypothesis framework with a complex interplay of sensory factors (i.e., the quality of the auditory speech signal and visual speech cues) and linguistic factors (i.e., presence or absence of supportive context and L2 experience of the listener).
7. Morini G, Newman RS. A comparison of monolingual and bilingual toddlers' word recognition in noise. Int J Biling 2021;25:1446-1459. PMID: 36160086; PMCID: PMC9494292; DOI: 10.1177/13670069211028664.
Abstract
AIMS AND OBJECTIVES The purpose of this study was to examine whether differences in language exposure (i.e., being raised in a bilingual versus a monolingual environment) influence young children's ability to comprehend words when speech is heard in the presence of background noise. METHODOLOGY Forty-four children (22 monolinguals and 22 bilinguals) between the ages of 29 and 31 months completed a preferential looking task where they saw picture-pairs of familiar objects (e.g., balloon and apple) on a screen and simultaneously heard sentences instructing them to locate one of the objects (e.g., look at the apple!). Speech was heard in quiet and in the presence of competing white noise. DATA AND ANALYSES Children's eye-movements were coded off-line to identify the proportion of time they fixated on the correct object on the screen and performance across groups was compared using a 2 × 3 mixed analysis of variance. FINDINGS Bilingual toddlers performed worse than monolinguals during the task. This group difference in performance was particularly clear when the listening condition contained background noise. ORIGINALITY There are clear differences in how infants and adults process speech in noise. To date, developmental work on this topic has mainly been carried out with monolingual infants. This study is one of the first to examine how background noise might influence word identification in young bilingual children who are just starting to acquire their languages. SIGNIFICANCE High noise levels are often reported in daycares and classrooms where bilingual children are present. Therefore, this work has important implications for learning and education practices with young bilinguals.
8. Calandruccio L, Beninate I, Oleson J, Miller MK, Leibold LJ, Buss E, Rodriguez BL. A Simplified Approach to Quantifying a Child's Bilingual Language Experience. Am J Audiol 2021;30:769-776. DOI: 10.1044/2021_aja-20-00214.
Abstract
Purpose Bilingual children's linguistic experience can vary markedly from child to child. For appropriate audiological assessment and intervention, audiologists need accurate and efficient ways to describe and understand a bilingual child's dynamic linguistic experience. This report documents an approach for quantitatively capturing a child's language exposure and usage in a time-efficient manner. Method A well-known pediatric bilingual language survey was administered to 83 parents of bilingual children, obtaining information about the child's exposure to (input) and usage of (output) Spanish and English for seventeen 1-hr intervals during a typical weekday and weekend day. Results A factor analysis indicated that capturing linguistic exposure and usage over three grouped-time intervals during a typical weekday and weekend day accounted for ≥ 74% of the total variance of the linguistic information captured with the full-length survey. Conclusions Although further confirmation is required, these results suggest that collecting language exposure and usage data from parents of bilingual children for three grouped-time intervals provides similar information as a comprehensive hour-by-hour approach. A time-efficient method of capturing the dynamic bilingual linguistic experience of a child would benefit pediatric audiologists and speech-language pathologists alike.
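The authors' conclusion rests on a factor analysis of the full hour-by-hour survey; the hypothetical sketch below does not reproduce that analysis but illustrates the underlying "variance captured by grouped time blocks" idea with a simpler calculation on invented exposure data.

```python
# Hypothetical illustration: how much of the variance in hour-by-hour
# language-exposure data is captured when the 17 intervals are collapsed
# into three grouped blocks? (The published study used factor analysis;
# this simpler check and all data are for illustration only.)
import numpy as np

rng = np.random.default_rng(1)
n_children, n_hours = 83, 17

# Invented data: each child has a stable exposure tendency plus
# hour-to-hour noise, mimicking proportions of Spanish heard per interval.
trait = rng.uniform(0.2, 0.8, size=(n_children, 1))
hourly = np.clip(trait + rng.normal(0.0, 0.08, size=(n_children, n_hours)), 0.0, 1.0)

# Collapse the 17 intervals into three blocks (e.g., morning/afternoon/evening)
# and predict each hourly value by its block mean.
block_sizes = [6, 6, 5]
blocks = np.split(hourly, np.cumsum(block_sizes)[:-1], axis=1)
block_means = np.column_stack([b.mean(axis=1) for b in blocks])
predicted = np.repeat(block_means, block_sizes, axis=1)

# Share of total variance in the hour-by-hour data captured by the blocks.
explained = 1.0 - ((hourly - predicted) ** 2).sum() / ((hourly - hourly.mean()) ** 2).sum()
print(f"Variance accounted for by 3 grouped blocks: {explained:.0%}")
```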
Affiliation(s)
- Lauren Calandruccio
- Department of Psychological Sciences, Case Western Reserve University, Cleveland, OH
- Isabella Beninate
- Department of Psychological Sciences, Case Western Reserve University, Cleveland, OH
- Jacob Oleson
- Department of Biostatistics, The University of Iowa, Iowa City
- Emily Buss
- Department of Otolaryngology/Head & Neck Surgery, University of North Carolina, Chapel Hill
- Barbara L. Rodriguez
- Department of Speech & Hearing Sciences, The University of New Mexico, Albuquerque
9. Behavioral Pattern Analysis between Bilingual and Monolingual Listeners’ Natural Speech Perception on Foreign-Accented English Language Using Different Machine Learning Approaches. Technologies 2021. DOI: 10.3390/technologies9030051.
Abstract
Speech perception in an adverse background/noisy environment is a complex and challenging human process, which is made even more complicated in foreign-accented language for bilingual and monolingual individuals. Listeners who have difficulties in hearing are affected most by such a situation. Despite considerable efforts, the increase in speech intelligibility in noise remains elusive. Considering this opportunity, this study investigates Bengali–English bilinguals' and native American English monolinguals' behavioral patterns on foreign-accented English under bubble noise, Gaussian (white) noise, and quiet conditions. Twelve normal-hearing participants (six Bengali–English bilinguals and six native American English monolinguals) took part in this study. Statistical analysis shows that noise condition has a significant effect (p = 0.009) on listening performance for both bilingual and monolingual listeners across different sound levels (e.g., 55 dB, 65 dB, and 75 dB). Six different machine learning approaches (Logistic Regression (LR), Linear Discriminant Analysis (LDA), K-nearest neighbors (KNN), Naïve Bayes (NB), Classification and regression trees (CART), and Support vector machine (SVM)) are tested and evaluated to differentiate between bilingual and monolingual individuals from their behavioral patterns in both noisy and quiet environments. Results show that the best performance was observed using LDA, which successfully differentiated bilingual from monolingual listeners 60% of the time. A deep neural network-based model is proposed to improve this measure further and achieved nearly 100% accuracy in differentiating between bilingual and monolingual individuals.
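A minimal scikit-learn sketch of the six-classifier comparison described above; the features, labels, and sample sizes are invented, so this is an illustration of the approach rather than the published pipeline.

```python
# Hypothetical sketch of the six-classifier comparison described above.
# Features and labels are invented; this is not the study's pipeline.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(42)
# e.g., per-trial intelligibility features under quiet/white/bubble noise
X = rng.normal(size=(120, 9))
y = rng.integers(0, 2, size=120)        # 0 = monolingual, 1 = bilingual

models = {
    "LR": LogisticRegression(max_iter=1000),
    "LDA": LinearDiscriminantAnalysis(),
    "KNN": KNeighborsClassifier(),
    "NB": GaussianNB(),
    "CART": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {acc.mean():.2f} +/- {acc.std():.2f}")
```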
10. Guediche S, de Bruin A, Caballero-Gaudes C, Baart M, Samuel AG. Second-language word recognition in noise: Interdependent neuromodulatory effects of semantic context and crosslinguistic interactions driven by word form similarity. Neuroimage 2021;237:118168. PMID: 34000398; DOI: 10.1016/j.neuroimage.2021.118168.
Abstract
Spoken language comprehension is a fundamental component of our cognitive skills. We are quite proficient at deciphering words from the auditory input despite the fact that the speech we hear is often masked by noise such as background babble originating from talkers other than the one we are attending to. To perceive spoken language as intended, we rely on prior linguistic knowledge and context. Prior knowledge includes all sounds and words that are familiar to a listener and depends on linguistic experience. For bilinguals, the phonetic and lexical repertoire encompasses two languages, and the degree of overlap between word forms across languages affects the degree to which they influence one another during auditory word recognition. To support spoken word recognition, listeners often rely on semantic information (i.e., the words we hear are usually related in a meaningful way). Although the number of multilinguals across the globe is increasing, little is known about how crosslinguistic effects (i.e., word overlap) interact with semantic context and affect the flexible neural systems that support accurate word recognition. The current multi-echo functional magnetic resonance imaging (fMRI) study addresses this question by examining how prime-target word pair semantic relationships interact with the target word's form similarity (cognate status) to the translation equivalent in the dominant language (L1) during accurate word recognition of a non-dominant (L2) language. We tested 26 early-proficient Spanish-Basque (L1-L2) bilinguals. When L2 targets matching L1 translation-equivalent phonological word forms were preceded by unrelated semantic contexts that drive lexical competition, a flexible language control (fronto-parietal-subcortical) network was upregulated, whereas when they were preceded by related semantic contexts that reduce lexical competition, it was downregulated. We conclude that an interplay between semantic and crosslinguistic effects regulates flexible control mechanisms of speech processing to facilitate L2 word recognition, in noise.
Affiliation(s)
- Sara Guediche
- Basque Center on Cognition, Brain and Language, Donostia-San Sebastian 20009, Spain
- Martijn Baart
- Basque Center on Cognition, Brain and Language, Donostia-San Sebastian 20009, Spain; Department of Cognitive Neuropsychology, Tilburg University, P.O. Box 90153, 5000 LE Tilburg, the Netherlands
- Arthur G Samuel
- Basque Center on Cognition, Brain and Language, Donostia-San Sebastian 20009, Spain; Stony Brook University, NY 11794-2500, United States; Ikerbasque Foundation, Spain
11.
Abstract
OBJECTIVE Although numerous studies have shown that musicians have better speech perception in noise (SPIN) compared to nonmusicians, other studies have not replicated the "musician advantage for SPIN." One factor that has not been adequately addressed in previous studies is how musicians' SPIN is affected by routine exposure to high levels of sound. We hypothesized that such exposure diminishes the musician advantage for SPIN. DESIGN Environmental sound levels were measured continuously for 1 week via body-worn noise dosimeters in 56 college students with diverse musical backgrounds and clinically normal pure-tone audiometric averages. SPIN was measured using the Quick Speech in Noise Test (QuickSIN). Multiple linear regression modeling was used to examine how music practice (years of playing a musical instrument) and routine noise exposure predict QuickSIN scores. RESULTS Noise exposure and music practice were both significant predictors of QuickSIN, but they had opposing influences, with more years of music practice predicting better QuickSIN scores and greater routine noise exposure predicting worse QuickSIN scores. Moreover, mediation analysis suggests that noise exposure suppresses the relationship between music practice and QuickSIN scores. CONCLUSIONS Our findings suggest a beneficial relationship between music practice and SPIN that is suppressed by noise exposure.
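A minimal, hypothetical statsmodels sketch of the regression described above, predicting a QuickSIN-style outcome from years of music practice and routine noise exposure; variable names and data are invented (QuickSIN is scored as SNR loss, so lower values are better), and the mediation analysis reported in the abstract is not reproduced here.

```python
# Hypothetical sketch of the regression described above (invented data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 56
df = pd.DataFrame({
    "years_music": rng.uniform(0.0, 15.0, n),   # years playing an instrument
    "noise_dose": rng.normal(78.0, 6.0, n),     # routine exposure, dB LAeq
})
# Invented outcome: practice helps, exposure hurts (lower SNR loss is better).
df["quicksin_snr_loss"] = (2.0 - 0.10 * df["years_music"]
                           + 0.08 * (df["noise_dose"] - 78.0)
                           + rng.normal(0.0, 0.8, n))

model = smf.ols("quicksin_snr_loss ~ years_music + noise_dose", data=df).fit()
print(model.summary())
```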
12. Miller MK, Calandruccio L, Buss E, McCreery RW, Oleson J, Rodriguez B, Leibold LJ. Masked English Speech Recognition Performance in Younger and Older Spanish-English Bilingual and English Monolingual Children. J Speech Lang Hear Res 2019;62:4578-4591. PMID: 31830845; PMCID: PMC7839054; DOI: 10.1044/2019_jslhr-19-00059.
Abstract
Purpose The purpose of this study was to compare masked English speech recognition thresholds between Spanish-English bilingual and English monolingual children and to evaluate effects of age, maternal education, and English receptive language abilities on individual differences in masked speech recognition. Method Forty-three Spanish-English bilingual children and 42 English monolingual children completed an English sentence recognition task in 2 masker conditions: (a) speech-shaped noise and (b) 2-talker English speech. Two age groups of children, younger (5-6 years) and older (9-10 years), were tested. The predictors of masked speech recognition performance were evaluated using 2 mixed-effects regression models. In the 1st model, fixed effects were age group (younger children vs. older children), language group (bilingual vs. monolingual), and masker type (speech-shaped noise vs. 2-talker speech). In the 2nd model, the fixed effects of receptive English vocabulary scores and maternal education level were also included. Results Younger children performed more poorly than older children, but no significant difference in masked speech recognition was observed between bilingual and monolingual children for either age group when English proficiency and maternal education were also included in the model. English language abilities fell within age-appropriate norms for both groups, but individual children with larger receptive vocabularies in English tended to show better recognition; this effect was stronger for younger children than for older children. Speech reception thresholds for all children were lower in the speech-shaped noise masker than in the 2-talker speech masker. Conclusions Regardless of age, similar masked speech recognition was observed for Spanish-English bilingual and English monolingual children tested in this study when receptive English language abilities were accounted for. Receptive English vocabulary scores were associated with better masked speech recognition performance for both bilinguals and monolinguals, with a stronger relationship observed for younger children than older children. Further investigation involving a Spanish-dominant bilingual sample is warranted given the high English language proficiency of children included in this study.
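A minimal, hypothetical sketch of the kind of mixed-effects model described above (fixed effects of age group, language group, and masker type with a random intercept per child), using statsmodels; the data and effect sizes are invented and this is not the authors' model code.

```python
# Hypothetical sketch of a mixed-effects model like the one described above.
# Data and effect sizes are invented; this is not the published analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
rows = []
for i in range(85):
    age = rng.choice(["younger", "older"])
    lang = rng.choice(["bilingual", "monolingual"])
    child_offset = rng.normal(0.0, 1.0)                     # random intercept per child
    for masker in ["ssn", "two_talker"]:
        srt = (-8.0
               + (2.5 if age == "younger" else 0.0)          # younger -> higher SRT
               + (4.0 if masker == "two_talker" else 0.0)    # speech masker harder
               + child_offset + rng.normal(0.0, 1.0))
        rows.append({"child": f"c{i:02d}", "age_group": age,
                     "lang_group": lang, "masker": masker, "srt": srt})
df = pd.DataFrame(rows)

# Fixed effects of age group, language group, and masker; random intercept per child.
result = smf.mixedlm("srt ~ age_group * lang_group * masker",
                     data=df, groups=df["child"]).fit()
print(result.summary())
```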
Affiliation(s)
- Margaret K. Miller
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Lauren Calandruccio
- Department of Psychological Sciences, Case Western Reserve University, Cleveland, OH
- Emily Buss
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill
- Ryan W. McCreery
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Jacob Oleson
- Department of Biostatistics, University of Iowa, Iowa City
- Barbara Rodriguez
- Department of Speech and Hearing Sciences, The University of New Mexico, Albuquerque
- Lori J. Leibold
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
13. Regalado D, Kong J, Buss E, Calandruccio L. Effects of Language History on Sentence Recognition in Noise or Two-Talker Speech: Monolingual, Early Bilingual, and Late Bilingual Speakers of English. Am J Audiol 2019;28:935-946. PMID: 31697566; DOI: 10.1044/2019_aja-18-0194.
Abstract
Purpose Language history is an important factor in masked speech recognition. Listeners who acquire the target language later in life perform more poorly than native speakers. However, there are inconsistencies in the literature regarding performance of bilingual speakers who begin learning the target language early in life. The purpose of this experiment was to evaluate speech-in-noise and speech-in-speech recognition for highly proficient early bilingual listeners compared to monolingual and late bilingual listeners. Method Three groups of young adults participated: native monolingual English speakers, bilingual Mandarin-English speakers who learned English from birth (early bilinguals), and native Mandarin speakers who learned English later in life (late bilinguals). All participants had normal hearing and were full-time college students. Recognition was assessed for English sentences in speech-shaped noise and two-talker English speech. Participants provided linguistic and demographic information, and late bilinguals completed the Versant test of spoken English abilities. Results All listeners performed better in speech-shaped noise than two-talker speech. Performance was similar for monolingual and early bilinguals. Late bilinguals performed more poorly overall. There was evidence for a stronger association between masked speech recognition and English dominance for late bilinguals compared to early bilinguals. Conclusion These results support the conclusion that bilingualism itself does not necessarily result in a disadvantage when recognizing masked speech in noise and speech in speech. For populations similar to those studied here (highly proficient early bilinguals), it would be appropriate to evaluate masked speech recognition using the same simple stimuli and normative data used for monolingual speakers of English.
Affiliation(s)
- Diana Regalado
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill
- Jessica Kong
- Department of Psychological Sciences, Case Western Reserve University, Cleveland, OH
- Emily Buss
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill
- Lauren Calandruccio
- Department of Psychological Sciences, Case Western Reserve University, Cleveland, OH
14. Lagerberg TB, Lam J, Olsson R, Abelin Å, Strömbergsson S. Intelligibility of Children With Speech Sound Disorders Evaluated by Listeners With Swedish as a Second Language. J Speech Lang Hear Res 2019;62:3714-3727. PMID: 31619121; DOI: 10.1044/2019_jslhr-s-18-0492.
Abstract
Purpose This study aimed to investigate the intelligibility of children's atypical speech in relation to listeners' language background. Method Forty-eight participants listened to and transcribed isolated words repeated by children with speech sound disorders. Participants were divided into a multilingual group (n = 29), which was further divided into subgroups based on age of acquisition of Swedish (early, 0-3 years; intermediate, 4-12 years; late, > 12 years), and a monolingual comparison group (n = 19). Results The monolingual listeners obtained higher intelligibility scores than the multilingual listeners; this difference was statistically significant. Participants who acquired Swedish at an older age (> 4 years) had lower scores than other listeners, and the later the age of acquisition, the less of the atypical speech was decoded correctly. A further analysis of the transcriptions also revealed a higher proportion of nonwords among the incorrect transcriptions of the multilinguals than of the monolinguals, who used more real words, whereas both groups were equally prone to leaving blanks when they did not perceive a word. Conclusions This indicates a higher risk of communicative problems between late acquirers of Swedish and children with speech sound disorders. Clinical implications, such as involving communication partners in the intervention process, are discussed, as well as possible linguistic explanations for the findings. This study can be seen as a starting point for research on the relation between listeners' language background and their ability to perceive atypical speech.
Affiliation(s)
- Tove B Lagerberg
- Division of Speech and Language Pathology, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Sweden
- Jenny Lam
- Division of Speech and Language Pathology, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Sweden
- Rikard Olsson
- Speech and Language Pathology Clinic, Praktikertjänst Närsjukhus Dalsland, Dalslands Sjukhus, Bäckefors, Sweden
- Åsa Abelin
- Department of Philosophy, Linguistics and Theory of Science, University of Gothenburg, Sweden
- Sofia Strömbergsson
- Division of Speech and Language Pathology, Department of Clinical Science, Intervention and Technology (CLINTEC), Karolinska Institutet, Stockholm, Sweden
15. Kousaie S, Baum S, Phillips NA, Gracco V, Titone D, Chen JK, Chai XJ, Klein D. Language learning experience and mastering the challenges of perceiving speech in noise. Brain Lang 2019;196:104645. PMID: 31284145; DOI: 10.1016/j.bandl.2019.104645.
Abstract
Given the ubiquity of noisy environments and increasing globalization, the need to perceive speech in noise in a non-native language is common and necessary for successful communication. In the current investigation, bilingual individuals who learned their non-native language at different ages underwent magnetic resonance imaging while listening to sentences in both of their languages, in quiet and in noise. Sentence context was varied such that the final word could be of high or low predictability. Results show that early non-native language learning is associated with a superior ability to benefit from contextual information behaviourally, and with a pattern of neural recruitment in the left inferior frontal gyrus that suggests easier processing when perceiving non-native speech in noise. These findings have implications for our understanding of speech processing in non-optimal listening conditions and shed light on how individuals navigate everyday complex communicative environments in a native and a non-native language.
Affiliation(s)
- Shanna Kousaie
- Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montreal, QC H3A 2B4, Canada; Centre for Research on Brain, Language and Music, McGill University, Montreal, QC H3G 2A8, Canada.
- Shari Baum
- Centre for Research on Brain, Language and Music, McGill University, Montreal, QC H3G 2A8, Canada; School of Communication Sciences and Disorders, Faculty of Medicine, McGill University, Montreal, QC H3A 1G1, Canada
- Natalie A Phillips
- Centre for Research on Brain, Language and Music, McGill University, Montreal, QC H3G 2A8, Canada; Department of Psychology/Centre for Research in Human Development, Concordia University, Montreal, QC H4B 1R6, Canada; Bloomfield Centre for Research in Aging, Lady Davis Institute for Medical Research and Jewish General Hospital/McGill University Memory Clinic, Jewish General Hospital, Montreal, QC H3T 1E2, Canada
- Vincent Gracco
- Centre for Research on Brain, Language and Music, McGill University, Montreal, QC H3G 2A8, Canada; School of Communication Sciences and Disorders, Faculty of Medicine, McGill University, Montreal, QC H3A 1G1, Canada; Haskins Laboratories, New Haven, CT 06511, USA
- Debra Titone
- Centre for Research on Brain, Language and Music, McGill University, Montreal, QC H3G 2A8, Canada; Department of Psychology, McGill University, Montreal, QC H3A 1G1, Canada
- Jen-Kai Chen
- Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montreal, QC H3A 2B4, Canada; Centre for Research on Brain, Language and Music, McGill University, Montreal, QC H3G 2A8, Canada
- Xiaoqian J Chai
- Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Montreal, QC H3A 2B4, Canada
- Denise Klein
- Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montreal, QC H3A 2B4, Canada; Centre for Research on Brain, Language and Music, McGill University, Montreal, QC H3G 2A8, Canada
16. Xie Z, Zinszer BD, Riggs M, Beevers CG, Chandrasekaran B. Impact of depression on speech perception in noise. PLoS One 2019;14:e0220928. PMID: 31415624; PMCID: PMC6695097; DOI: 10.1371/journal.pone.0220928.
Abstract
Effective speech communication is critical to everyday quality of life and social well-being. In addition to the well-studied deficits in cognitive and motor function, depression also impacts communication. Here, we examined speech perception in individuals who were clinically diagnosed with major depressive disorder (MDD) relative to neurotypical controls. Forty-two normal-hearing (NH) individuals with MDD and 41 NH neurotypical controls performed sentence recognition tasks across three conditions with maskers varying in the extent of linguistic content (high, low, and none): 1-talker masker (1T), reversed 1-talker masker (1T_tr), and speech-shaped noise (SSN). Individuals with MDD, relative to neurotypical controls, demonstrated lower recognition accuracy in the 1T condition but not in the 1T_tr or SSN condition. To examine the nature of the listening condition-specific speech perception deficit, we analyzed speech recognition errors. Errors as a result of interference from masker sentences were higher for individuals with MDD (vs. neurotypical controls) in the 1T condition. This depression-related listening condition-specific pattern in recognition errors was not observed for other error types. We posit that this depression-related listening condition-specific deficit in speech perception may be related to heightened distractibility due to linguistic interference from background talkers.
Affiliation(s)
- Zilong Xie
- Department of Hearing and Speech Sciences, University of Maryland, Maryland, United States of America
- Benjamin D. Zinszer
- Department of Linguistics and Cognitive Science, University of Delaware, Newark, Delaware, United States of America
- Meredith Riggs
- Department of Communication Sciences and Disorders, The University of Texas at Austin, Austin, Texas, United States of America
- Christopher G. Beevers
- Department of Psychology, The University of Texas at Austin, Austin, Texas, United States of America
- Institute for Mental Health Research, Austin, Texas, United States of America
- Bharath Chandrasekaran
- Department of Communication Science and Disorders, School of Health and Rehabilitation Sciences, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
17. Psychobiological Responses Reveal Audiovisual Noise Differentially Challenges Speech Recognition. Ear Hear 2019;41:268-277. PMID: 31283529; DOI: 10.1097/aud.0000000000000755.
Abstract
OBJECTIVES In noisy environments, listeners benefit from both hearing and seeing a talker, demonstrating audiovisual (AV) cues enhance speech-in-noise (SIN) recognition. Here, we examined the relative contribution of auditory and visual cues to SIN perception and the strategies used by listeners to decipher speech in noise interference(s). DESIGN Normal-hearing listeners (n = 22) performed an open-set speech recognition task while viewing audiovisual TIMIT sentences presented under different combinations of signal degradation including visual (AVn), audio (AnV), or multimodal (AnVn) noise. Acoustic and visual noises were matched in physical signal-to-noise ratio. Eyetracking monitored participants' gaze to different parts of a talker's face during SIN perception. RESULTS As expected, behavioral performance for clean sentence recognition was better for A-only and AV compared to V-only speech. Similarly, with noise in the auditory channel (AnV and AnVn speech), performance was aided by the addition of visual cues of the talker regardless of whether the visual channel contained noise, confirming a multimodal benefit to SIN recognition. The addition of visual noise (AVn) obscuring the talker's face had little effect on speech recognition by itself. Listeners' eye gaze fixations were biased toward the eyes (decreased at the mouth) whenever the auditory channel was compromised. Fixating on the eyes was negatively associated with SIN recognition performance. Eye gazes on the mouth versus eyes of the face also depended on the gender of the talker. CONCLUSIONS Collectively, results suggest listeners (1) depend heavily on the auditory over visual channel when seeing and hearing speech and (2) alter their visual strategy from viewing the mouth to viewing the eyes of a talker with signal degradations, which negatively affects speech perception.
18. Bidelman GM, Sigley L, Lewis GA. Acoustic noise and vision differentially warp the auditory categorization of speech. J Acoust Soc Am 2019;146:60. PMID: 31370660; PMCID: PMC6786888; DOI: 10.1121/1.5114822.
Abstract
Speech perception requires grouping acoustic information into meaningful linguistic-phonetic units via categorical perception (CP). Beyond shrinking observers' perceptual space, CP might aid degraded speech perception if categories are more resistant to noise than surface acoustic features. Combining audiovisual (AV) cues also enhances speech recognition, particularly in noisy environments. This study investigated the degree to which visual cues from a talker (i.e., mouth movements) aid speech categorization amidst noise interference by measuring participants' identification of clear and noisy speech (0 dB signal-to-noise ratio) presented in auditory-only or combined AV modalities (i.e., A, A+noise, AV, AV+noise conditions). Auditory noise expectedly weakened (i.e., shallower identification slopes) and slowed speech categorization. Interestingly, additional viseme cues largely counteracted noise-related decrements in performance and stabilized classification speeds in both clear and noise conditions suggesting more precise acoustic-phonetic representations with multisensory information. Results are parsimoniously described under a signal detection theory framework and by a reduction (visual cues) and increase (noise) in the precision of perceptual object representation, which were not due to lapses of attention or guessing. Collectively, findings show that (i) mapping sounds to categories aids speech perception in "cocktail party" environments; (ii) visual cues help lattice formation of auditory-phonetic categories to enhance and refine speech identification.
Affiliation(s)
- Gavin M Bidelman
- School of Communication Sciences & Disorders, University of Memphis, 4055 North Park Loop, Memphis, Tennessee 38152, USA
- Lauren Sigley
- School of Communication Sciences & Disorders, University of Memphis, 4055 North Park Loop, Memphis, Tennessee 38152, USA
- Gwyneth A Lewis
- School of Communication Sciences & Disorders, University of Memphis, 4055 North Park Loop, Memphis, Tennessee 38152, USA
19. Coffey EBJ, Arseneau-Bruneau I, Zhang X, Zatorre RJ. The Music-In-Noise Task (MINT): A Tool for Dissecting Complex Auditory Perception. Front Neurosci 2019;13:199. PMID: 30930734; PMCID: PMC6427094; DOI: 10.3389/fnins.2019.00199.
Abstract
The ability to segregate target sounds in noisy backgrounds is relevant both to neuroscience and to clinical applications. Recent research suggests that hearing-in-noise (HIN) problems are solved using combinations of sub-skills that are applied according to task demand and information availability. While evidence is accumulating for a musician advantage in HIN, the exact nature of the reported training effect is not fully understood. Existing HIN tests focus on tasks requiring understanding of speech in the presence of competing sound. Because visual, spatial and predictive cues are not systematically considered in these tasks, few tools exist to investigate the most relevant components of cognitive processes involved in stream segregation. We present the Music-In-Noise Task (MINT) as a flexible tool to expand HIN measures beyond speech perception, and for addressing research questions pertaining to the relative contributions of HIN sub-skills, inter-individual differences in their use, and their neural correlates. The MINT uses a match-mismatch trial design: in four conditions (Baseline, Rhythm, Spatial, and Visual) subjects first hear a short instrumental musical excerpt embedded in an informational masker of "multi-music" noise, followed by either a matching or scrambled repetition of the target musical excerpt presented in silence; the four conditions differ according to the presence or absence of additional cues. In a fifth condition (Prediction), subjects hear the excerpt in silence as a target first, which helps to anticipate incoming information when the target is embedded in masking sound. Data from samples of young adults show that the MINT has good reliability and internal consistency, and demonstrate selective benefits of musicianship in the Prediction, Rhythm, and Visual subtasks. We also report a performance benefit of multilingualism that is separable from that of musicianship. Average MINT scores were correlated with scores on a sentence-in-noise perception task, but only accounted for a relatively small percentage of the variance, indicating that the MINT is sensitive to additional factors and can provide a complement and extension of speech-based tests for studying stream segregation. A customizable version of the MINT is made available for use and extension by the scientific community.
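A small, hypothetical sketch of one common way to quantify the internal consistency mentioned above (Cronbach's alpha over item-level scores); the trial data are invented and the MINT's actual reliability analysis may differ.

```python
# Hypothetical Cronbach's alpha computation for internal consistency.
# Trial scores are invented; this is not the MINT analysis code.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array with rows = participants and columns = items/trials."""
    n_items = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (n_items / (n_items - 1)) * (1.0 - sum_item_var / total_var)

rng = np.random.default_rng(5)
ability = rng.normal(0.0, 1.0, size=(60, 1))              # participant ability
scores = ability + rng.normal(0.0, 0.7, size=(60, 20))    # 20 MINT-style trials
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```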
Affiliation(s)
- Emily B. J. Coffey
- Department of Psychology, Concordia University, Montreal, QC, Canada
- Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada
- Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT), Montreal, QC, Canada
- Isabelle Arseneau-Bruneau
- Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada
- Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT), Montreal, QC, Canada
- Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- Xiaochen Zhang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Robert J. Zatorre
- Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada
- Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT), Montreal, QC, Canada
- Montreal Neurological Institute, McGill University, Montreal, QC, Canada
20. Bidelman GM, Heath ST. Neural Correlates of Enhanced Audiovisual Processing in the Bilingual Brain. Neuroscience 2019;401:11-20. PMID: 30639306; PMCID: PMC6379141; DOI: 10.1016/j.neuroscience.2019.01.003.
Abstract
Bilingualism is associated with enhancements in perceptual and cognitive processing necessary for juggling multiple languages. Recent psychophysical studies demonstrate bilinguals also show enhanced multisensory processing and more restricted temporal binding windows for integrating audiovisual information. Here, we probed the neural mechanisms of bilinguals' audiovisual benefits. We recorded neuroelectric responses in mono- and bi-lingual listeners to the double-flash paradigm in which auditory beeps concurrent with a single visual flash induces the perceptual illusion of multiple flashes. Relative to monolinguals, bilinguals showed less susceptibility to the illusion (fewer false perceptual reports) coupled with stronger and faster event-related potentials to audiovisual information. Source analyses of EEG data revealed monolinguals' increased propensity for erroneously perceiving audiovisual stimuli was attributed to increased activity in primary visual (V1) and auditory cortex (PAC), increases in multisensory association areas (BA 37), but reduced frontal activity (BA 10). Regional activations were associated with an opposite pattern of behaviors: whereas stronger V1 and PAC activity predicted slower behavioral responses, stronger frontal BA10 responses elicited faster judgments. Our results suggest bilinguals' higher precision in audiovisual perception reflects more veridical sensory coding of physical cues coupled with superior top-down gating of sensory information to suppress the generation of false percepts. Findings underscore that the plasticity afforded by speaking multiple languages shapes extra-linguistic brain regions and can enhance audiovisual brain processing in a domain-general manner.
Affiliation(s)
- Gavin M Bidelman
- School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA; Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; University of Tennessee Health Sciences Center, Department of Anatomy and Neurobiology, Memphis, TN, USA.
- Shelley T Heath
- School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA