1. Kaucke S, Schlechtweg M. English Speakers' Perception of Non-native Vowel Contrasts in Adverse Listening Conditions: A Discrimination Study on the German Front Rounded Vowels /y/ and /ø/. Language and Speech 2024. PMID: 38853599; DOI: 10.1177/00238309241254350.
Abstract
Previous research has shown that English speakers find it difficult to distinguish the front rounded vowels /y/ and /ø/ from the back rounded vowels /u/ and /o/. In this study, we examine the effect of noise on this perceptual difficulty. In an Oddity Discrimination Task, English speakers with no knowledge of German were asked to discriminate between German-sounding pseudowords varying in their vowel, both in quiet and in white noise at two signal-to-noise ratios (8 and 0 dB). In test trials, vowels of the same height were contrasted with each other, whereas a contrast with /a/ served as a control trial. Results revealed that the contrast with /a/ remained stable in every listening condition for both high and mid vowels. When vowels of the same height were contrasted, however, there was a perceptual shift along the F2 dimension as the noise level increased. Although the /ø/-/o/ and particularly the /y/-/u/ contrasts were the most difficult in quiet, accuracy on /i/-/y/ and /e/-/ø/ trials decreased sharply when the speech signal was masked. The German control group showed the same pattern, albeit less severely than the non-native group, suggesting that even in low-level tasks with pseudowords there is a native advantage in speech perception in noise.
Affiliation(s)
- Stephanie Kaucke: Institute for English and American Studies, Carl von Ossietzky Universität Oldenburg, Germany; Cluster of Excellence "Hearing4All," Germany
- Marcel Schlechtweg: Institute for English and American Studies, Carl von Ossietzky Universität Oldenburg, Germany; Cluster of Excellence "Hearing4All," Germany
2. Hu XJ, Lau CC. Influence of Speech Recognition Ability on Acceptable Noise Level for Mandarin (Chinese) Speakers with Normal Hearing. Audiol Neurootol 2023; 28:371-379. PMID: 37166311; DOI: 10.1159/000530025.
Abstract
Introduction: Noise can induce hearing loss and reduce speech understanding. The Acceptable Noise Level (ANL) test has been widely used in audiology; however, the strategies listeners use to determine their ANLs are unclear. The current study evaluated the role of speech recognition in selecting ANL and how well ANL predicts speech understanding in noise.
Methods: Forty-five Mandarin speakers with normal hearing were tested in both ears. ANL is defined as the Most Comfortable Level (MCL) minus the Background Noise Level (BNL). To obtain ANL monaurally with an earphone, the study measured each participant's MCL for hearing a Mandarin story in quiet and the maximum BNL tolerated while following the story. Then, based on the participant's ANL, speech recognition in noise was examined using a set of phonemically balanced Mandarin words. The signal-to-noise ratio (SNR) was adjusted to ANL, ANL - 10 dB ("degraded noise condition"), and ANL + 10 dB ("improved noise condition").
Results: The mean ANLs were 2.4 dB and 2.6 dB for the left and right ears, respectively. Mean speech recognition with SNR adjusted to ANL was relatively high for both ears (81-83% correct). Even for ear samples with very low ANL (<0 dB), speech performance at SNR = ANL remained high. Mean speech recognition at SNR = ANL was 5 percentage points lower than in the improved noise condition and 14 percentage points higher than in the degraded noise condition. Speech recognition at SNR = ANL and at ANL - 10 dB correlated significantly with ANL.
Conclusion: Speech recognition in noise appears to play an important role for listeners with normal hearing in deciding their ANLs. Additionally, ANL can predict speech performance (R² = 53-61%) in the degraded noise condition.
Affiliation(s)
- Xu Jun Hu: School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Hangzhou, China
3. Maillard E, Joyal M, Murray MM, Tremblay P. Are musical activities associated with enhanced speech perception in noise in adults? A systematic review and meta-analysis. Current Research in Neurobiology 2023. DOI: 10.1016/j.crneur.2023.100083.
4. Van Os M, Kray J, Demberg V. Rational speech comprehension: Interaction between predictability, acoustic signal, and noise. Front Psychol 2022; 13:914239. PMID: 36591096; PMCID: PMC9802670; DOI: 10.3389/fpsyg.2022.914239.
Abstract
Introduction: During speech comprehension, listeners have multiple sources of information available, which they combine to guide the recognition process. Models of speech comprehension posit that when the acoustic speech signal is obscured, listeners rely more on information from other sources. However, these models take into account only word frequency and local context (surrounding syllables), not sentence-level information. To date, empirical studies of predictability effects in noise have not carefully controlled the tested speech sounds, while the literature on how background noise affects the recognition of speech sounds has not manipulated sentence predictability. Additionally, studies of background noise report conflicting results about which noise type impairs speech comprehension most. We address these gaps in the present experiment.
Methods: We investigate how listeners combine information from different sources when listening to sentences embedded in background noise. We manipulate top-down predictability, type of noise, and characteristics of the acoustic signal, creating conditions that differ in how strongly a specific speech sound is masked, grounded in prior work on the confusability of speech sounds in noise. Participants completed an online word recognition experiment.
Results and discussion: Participants relied more on the provided sentence context when the acoustic signal was harder to process, even when interactions between background noise and speech sounds produced only small differences in intelligibility. Listeners probabilistically combine top-down predictions based on context with noisy bottom-up information from the acoustic signal, yielding a trade-off between the different types of information that depends on the specific combination of background noise and speech sound.
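The probabilistic trade-off the authors describe (weighting context-based predictions against a noisy acoustic signal) has the flavor of Bayesian cue combination. A minimal illustration, with invented words and probabilities that are not the study's materials:

```python
def combine(prior: dict, likelihood: dict) -> dict:
    """Posterior over candidate words: prior (sentence context)
    times likelihood (acoustic evidence), renormalized."""
    unnorm = {w: prior[w] * likelihood[w] for w in prior}
    z = sum(unnorm.values())
    return {w: p / z for w, p in unnorm.items()}

# Clear signal: the acoustics are informative and pick out "pier"
# despite the context favoring "beer".
clear = combine({"beer": 0.7, "pier": 0.3}, {"beer": 0.1, "pier": 0.9})

# Heavy masking: the acoustics are uninformative (flat likelihood),
# so the context-based prior carries the decision toward "beer".
noisy = combine({"beer": 0.7, "pier": 0.3}, {"beer": 0.5, "pier": 0.5})
```

Under a flat likelihood the posterior simply reproduces the prior, which is the limiting case of "rely on context when the signal tells you nothing."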
Affiliation(s)
- Marjolein Van Os (corresponding author): Department of Language Science and Technology, Saarland University, Saarbrücken, Germany
- Jutta Kray: Department of Psychology, Saarland University, Saarbrücken, Germany
- Vera Demberg: Department of Language Science and Technology and Department of Computer Science, Saarland University, Saarbrücken, Germany
5. Taitelbaum-Swead R, Fostick L. The Effect of Age, Type of Noise, and Cochlear Implants on Adaptive Sentence-in-Noise Task. J Clin Med 2022; 11:5872. PMID: 36233739; PMCID: PMC9571224; DOI: 10.3390/jcm11195872.
Abstract
Adaptive tests of sentences in noise mimic the challenge of daily listening situations. The aims of the present study were to validate an adaptive version of the HeBio sentence test on normal-hearing (NH) adults; to evaluate the effects of age and noise type on the speech reception threshold in noise (SRTn); and to test the procedure on prelingually deaf adults with cochlear implants (CIs). In Experiment 1, 45 NH young adults listened to two lists accompanied by four-talker babble noise (4TBN). Experiment 2 presented the sentences amidst 4TBN or speech-shaped noise (SSN) to 80 participants in four age groups. In Experiment 3, 18 adult CI users with prelingual bilateral profound hearing loss performed the test amidst SSN, along with HeBio sentences and monosyllabic words in quiet and a forward digit span task. The main findings were as follows: SRTn for NH participants was normally distributed and showed high test-retest reliability; SRTn was lower (better) among adolescents and young adults than among middle-aged and older adults, and lower for SSN than for 4TBN; SRTn for CI users was higher and more variable than for NH listeners and correlated with speech perception tests in quiet, digit span, and age at first CI. This suggests that the adaptive HeBio test can be implemented in clinical and research settings with various populations.
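Adaptive sentence-in-noise tests like the one validated here typically converge on the SRTn with a simple up-down rule: raise the SNR after an incorrect response, lower it after a correct one. A generic one-down/one-up sketch (step size, trial count, and scoring are illustrative; the paper's exact procedure may differ):

```python
def adaptive_srt(respond_correct, start_snr=0.0, step=2.0, trials=30):
    """Track SNR with a one-down/one-up rule and estimate the SRTn as
    the mean SNR over the later trials, after the approach phase.

    respond_correct: callable taking the current SNR (dB) and returning
    True if the listener repeats the sentence correctly at that SNR.
    """
    snr = start_snr
    history = []
    for _ in range(trials):
        history.append(snr)
        snr += -step if respond_correct(snr) else step
    tail = history[trials // 2:]  # discard the initial descent
    return sum(tail) / len(tail)

# Simulated deterministic listener who succeeds whenever SNR >= -4 dB:
# the track descends, then oscillates around that hypothetical threshold.
estimate = adaptive_srt(lambda snr: snr >= -4.0, start_snr=10.0)
```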
Affiliation(s)
- Riki Taitelbaum-Swead (corresponding author): Department of Communication Disorders, Ariel University, Ariel 4077625, Israel; Medical Division, Meuhedet Health Services, Tel Aviv 6203854, Israel
- Leah Fostick: Department of Communication Disorders, Ariel University, Ariel 4077625, Israel
6. Kepp NE, Arrieta I, Schiøth C, Percy-Smith L. Virtual Reality pitch ranking in children with cochlear implants, hearing aids or normal hearing. Int J Pediatr Otorhinolaryngol 2022; 161:111241. PMID: 35964492; DOI: 10.1016/j.ijporl.2022.111241.
Affiliation(s)
- Nille Elise Kepp: Research Unit at the Center of Hearing & Balance, Copenhagen University Hospital, Rigshospitalet, Denmark; Graduate School of Health and Medical Sciences, University of Copenhagen, Denmark
- Irene Arrieta: Basque Center on Cognition, Brain and Language (BCBL), Universidad del País Vasco (UPV), Spain; Technical University of Denmark (DTU), Denmark
- Lone Percy-Smith: Research Unit at the Center of Hearing & Balance, Copenhagen University Hospital, Rigshospitalet, Denmark
7. Differential weighting of temporal envelope cues from the low-frequency region for Mandarin sentence recognition in noise. BMC Neurosci 2022; 23:35. PMID: 35698039; PMCID: PMC9190152; DOI: 10.1186/s12868-022-00721-z.
Abstract
Background: Temporal envelope cues are conveyed by cochlear implants (CIs) to restore hearing in patients with hearing loss. Although CIs enable users to communicate in clear listening environments, noisy environments still pose a problem. To improve the speech-processing strategies used in Chinese CIs, we explored the relative contributions of the temporal envelope in various frequency regions to Mandarin sentence recognition in noise.
Methods: Original speech material from the Mandarin version of the Hearing in Noise Test (MHINT) was mixed with speech-shaped noise (SSN), sinusoidally amplitude-modulated speech-shaped noise (SAM SSN), and sinusoidally amplitude-modulated (SAM) white noise (4 Hz), each at a +5 dB signal-to-noise ratio. Envelope information of the noise-corrupted speech material was extracted from 30 contiguous bands allocated to five frequency regions. The intelligibility of the noise-corrupted material, with temporal cues from one or two regions removed, was measured to estimate the relative weights of the temporal envelope cues from the five frequency regions.
Results: In SSN, the mean weights of Regions 1-5 were 0.34, 0.19, 0.20, 0.16, and 0.11, respectively; in SAM SSN, 0.34, 0.17, 0.24, 0.14, and 0.11; and in SAM white noise, 0.46, 0.24, 0.22, 0.06, and 0.02.
Conclusions: The results suggest that for all three noise types, the temporal envelope in the low-frequency region transmits the greatest amount of information for Mandarin sentence recognition, a pattern that differs from the perception strategy employed in clear listening environments.
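Temporal-envelope extraction of the kind described (band-filter the signal, then smooth the magnitude of the band signal) can be sketched with SciPy's Hilbert transform. The band edges, filter orders, and smoothing cutoff below are illustrative assumptions, not the study's parameters; the study used 30 contiguous bands grouped into five regions:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_envelope(signal, fs, f_lo, f_hi, env_cutoff=64.0):
    """Temporal envelope of one analysis band:
    bandpass -> magnitude of the analytic signal -> low-pass smoothing."""
    sos_bp = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    band = sosfiltfilt(sos_bp, signal)
    env = np.abs(hilbert(band))  # instantaneous magnitude of the band
    sos_lp = butter(4, env_cutoff, btype="lowpass", fs=fs, output="sos")
    return sosfiltfilt(sos_lp, env)

# Toy input: a 500 Hz tone amplitude-modulated at 4 Hz, echoing the
# 4 Hz SAM conditions above (signal parameters hypothetical).
fs = 16000
t = np.arange(fs) / fs
x = (1 + 0.5 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 500 * t)
env = band_envelope(x, fs, 300.0, 700.0)  # recovers the slow 4 Hz modulation
```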
8. Taitelbaum-Swead R, Dahan T, Katzenel U, Dorman MF, Litvak LM, Fostick L. AzBio Sentence test in Hebrew (HeBio): development, preliminary validation, and the effect of noise. Cochlear Implants Int 2022; 23:270-279. DOI: 10.1080/14670100.2022.2083285.
Affiliation(s)
- Riki Taitelbaum-Swead: Department of Communication Disorders, Ariel University, Israel; Meuhedet Health Services, Tel Aviv, Israel
- Tzofit Dahan: The Audiology Service, Kaplan Medical Center, Rehovot, Israel
- Udi Katzenel: Department of Otolaryngology Head and Neck Surgery, Kaplan Medical Center, Rehovot, Israel; Hebrew University, Hadassah Medical School, Jerusalem, Israel
- Michael F. Dorman: Department of Speech and Hearing Science, Arizona State University, Tempe, USA
- Leah Fostick: Department of Communication Disorders, Ariel University, Israel
9. Fostick L, Babkoff H. The role of tone duration in dichotic temporal order judgment II: Extending the boundaries of duration and age. PLoS One 2022; 17:e0264831. PMID: 35353821; PMCID: PMC8967006; DOI: 10.1371/journal.pone.0264831.
Abstract
Temporal order judgment (TOJ) measures the ability to correctly perceive the order of consecutive stimuli presented rapidly. Our previous research suggested that the major predictor of the auditory dichotic TOJ threshold (a paradigm requiring identification of the order of two tones, each presented to a different ear) is the time separating the onset of the first tone from the onset of the second (stimulus onset asynchrony, SOA). The data supporting this finding, however, were based on a young adult population and a tone duration range of 10-40 msec. The current study aimed to evaluate the generalizability of that finding by manipulating the experimental model in two ways: (a) extending the range to shorter stimulus durations (3-8 msec; Experiment 1) and (b) repeating the identical testing procedure in a population with temporal processing deficits, namely older adults (Experiment 2). We hypothesized that SOA would predict the TOJ threshold regardless of tone duration and participant age. Experiment 1 included 226 young adults divided into eight groups (each receiving a different tone duration) with durations ranging from 3-40 msec. Experiment 2 included 98 participants aged 60-75 years, divided into five groups by tone duration (10-40 msec). The results of both experiments confirmed the hypothesis: the SOA required for dichotic TOJ was constant regardless of stimulus duration in both age groups, about 66.5 msec for the young adults and about 33 msec longer (100 msec) for the older adults. This finding suggests that the dichotic TOJ threshold is controlled by a general mechanism that changes quantitatively with age. Clinically, this is significant because quantitative changes can be remedied more easily than qualitative ones. Theoretically, our findings show that, in dichotic TOJ, tone duration affects threshold by providing more time between the onsets of the consecutive stimuli to the two ears. They also imply that a temporal processing deficit, at least among older adults, does not elicit the use of a different mechanism for judging temporal order.
Affiliation(s)
- Leah Fostick (corresponding author): Department of Communication Disorders, Ariel University, Ariel, Israel
- Harvey Babkoff: Department of Psychology, Bar-Ilan University, Ramat-Gan, Israel
10. Seol HY, Kang S, Lim J, Hong SH, Moon IJ. Feasibility of Virtual Reality Audiological Testing: Prospective Study. JMIR Serious Games 2021; 9:e26976. PMID: 34463624; PMCID: PMC8441603; DOI: 10.2196/26976.
Abstract
Background: The literature has noted a gap between clinical assessment and real-world performance. Real-world conversations entail both visual and auditory information, yet no audiological assessment tools include visual information. Virtual reality (VR) technology has been applied to various areas, including audiology, but its use in speech-in-noise perception has not yet been investigated.
Objective: The purpose of this study was to investigate the impact of virtual space (VS) on speech performance and its feasibility as a speech test instrument. We hypothesized that individuals' ability to recognize speech would improve when visual cues were provided.
Methods: A total of 30 individuals with normal hearing and 25 individuals with hearing loss completed pure-tone audiometry and the Korean version of the Hearing in Noise Test (K-HINT) under three conditions (conventional K-HINT [cK-HINT], VS on PC [VSPC], and VS head-mounted display [VSHMD]) at -10, -5, 0, and +5 dB signal-to-noise ratios (SNRs). Participants listened to target speech and repeated it back to the tester in all conditions. Hearing aid users in the hearing loss group were tested both unaided and aided. A questionnaire administered after testing gathered subjective opinions on the headset, the VSHMD condition, and test preference.
Results: Provision of visual information had a significant impact on speech performance, with the Mann-Whitney U test showing significant differences (P<.05) between the normal-hearing and hearing-impaired groups under all test conditions. Hearing aid use led to better integration of audio and visual cues: significant differences between hearing aid and non-hearing aid users were observed at -5 dB (P=.04) and 0 dB (P=.02) SNRs in the cK-HINT condition, and at -10 dB (P=.007) and 0 dB (P=.04) SNRs in the VSPC condition. Participants responded positively to almost all questionnaire items except the weight of the headset; they preferred a test method with visual imagery but found the headset heavy.
Conclusions: The findings align with previous literature showing that visual cues benefit communication. This is the first study to include hearing aid users with a more naturalistic stimulus and a relatively simple test environment, suggesting the feasibility of VR audiological testing in clinical practice.
Affiliation(s)
- Hye Yoon Seol: Medical Research Institute, Sungkyunkwan University School of Medicine, Suwon, Republic of Korea; Hearing Research Laboratory, Samsung Medical Center, Seoul, Republic of Korea
- Soojin Kang: Medical Research Institute, Sungkyunkwan University School of Medicine, Suwon, Republic of Korea; Hearing Research Laboratory, Samsung Medical Center, Seoul, Republic of Korea
- Jihyun Lim: Center for Clinical Epidemiology, Samsung Medical Center, Seoul, Republic of Korea
- Sung Hwa Hong: Hearing Research Laboratory, Samsung Medical Center, Seoul, Republic of Korea; Department of Otolaryngology-Head & Neck Surgery, Samsung Changwon Hospital, Changwon, Republic of Korea
- Il Joon Moon: Hearing Research Laboratory, Samsung Medical Center, Seoul, Republic of Korea; Department of Otolaryngology-Head & Neck Surgery, Samsung Medical Center, Seoul, Republic of Korea
11. Huang W, Wong LLN, Chen F, Liu H, Liang W. Effects of Fundamental Frequency Contours on Sentence Recognition in Mandarin-Speaking Children With Cochlear Implants. J Speech Lang Hear Res 2020; 63:3855-3864. PMID: 33022190; DOI: 10.1044/2020_jslhr-20-00033.
Abstract
Purpose: Fundamental frequency (F0) is the primary acoustic cue for lexical tone perception in tonal languages but is processed only in a limited way by cochlear implant (CI) systems. The aim of this study was to evaluate the importance of F0 contours for sentence recognition in Mandarin-speaking children with CIs and to determine whether it differs from that in age-matched normal-hearing (NH) peers.
Method: Age-appropriate sentences, with F0 contours manipulated to be either natural or flattened, were randomly presented to preschool children with CIs and their age-matched NH peers under three test conditions: in quiet, in white noise, and with competing sentences at 0 dB signal-to-noise ratio.
Results: Neutralizing the F0 contours significantly reduced sentence recognition. Among NH children this occurred only in the noise conditions, whereas among children with CIs it occurred in all test conditions. Moreover, the accuracy reduction ratios induced by F0-contour neutralization (i.e., the reduction in sentence recognition relative to the natural F0 condition) were significantly greater in children with CIs than in NH children in all test conditions.
Conclusions: F0 contours play a major role in sentence recognition in both quiet and noise among pediatric implantees, and their contribution is even more salient than in age-matched NH children. These results also suggest that children with CIs and NH children may differ in how they process F0 contours.
Affiliation(s)
- Wanting Huang: Unit of Human Communication, Development, and Information Sciences, Faculty of Education, The University of Hong Kong, China
- Lena L. N. Wong: Unit of Human Communication, Development, and Information Sciences, Faculty of Education, The University of Hong Kong, China
- Fei Chen: Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China
- Haihong Liu: Beijing Key Laboratory of Pediatric Diseases of Otolaryngology, Head and Neck Surgery, Beijing Children's Hospital, China
- Wei Liang: China Rehabilitation Research Center for Hearing and Speech Impairment, Beijing, China
12. Icht M, Mama Y, Taitelbaum-Swead R. Visual and Auditory Verbal Memory in Older Adults: Comparing Postlingually Deaf Cochlear Implant Users to Normal-Hearing Controls. J Speech Lang Hear Res 2020; 63:3865-3876. PMID: 33049151; DOI: 10.1044/2020_jslhr-20-00170.
Abstract
Purpose: The aim of this study was to test whether a group of older postlingually deafened cochlear implant users (OCIs) use verbal memory strategies similar to those used by older normal-hearing adults (ONHs). Verbal memory functioning was assessed in the visual and auditory modalities separately, enabling us to eliminate possible modality-based biases.
Method: Participants performed separate visual and auditory verbal memory tasks. In each task, the visually or aurally presented study words were learned by vocal production (saying aloud) or by no production (reading silently or listening), followed by a free recall test. Twenty-seven older adults (>60 years) participated (OCI = 13, ONH = 14), all of whom demonstrated intact cognitive abilities. All OCIs showed good open-set speech perception results in quiet.
Results: Both ONHs and OCIs showed production benefits (higher recall rates for vocalized than nonvocalized words) in the visual and auditory tasks. The ONHs showed similar production benefits across the two tasks, whereas the OCIs demonstrated a smaller production effect in the auditory task.
Conclusions: These results may indicate that the ONHs and OCIs used different modality-specific memory strategies. The group differences in memory performance suggest that, even when deafness occurs after language acquisition is complete, reduced and distorted external auditory stimulation degrades the phonological representation of sounds, possibly leading to a less efficient auditory long-term verbal memory.
Affiliation(s)
- Michal Icht: Department of Communication Disorders, Ariel University, Israel
- Yaniv Mama: Department of Behavioral Sciences and Psychology, Ariel University, Israel
- Riki Taitelbaum-Swead: Department of Communication Disorders, Ariel University, Israel; Meuhedet Health Services, Tel Aviv, Israel
13. Fostick L, Taitelbaum-Swead R, Kreitler S, Zokraut S, Billig M. Auditory Training to Improve Speech Perception and Self-Efficacy in Aging Adults. J Speech Lang Hear Res 2020; 63:1270-1281. PMID: 32182434; DOI: 10.1044/2019_jslhr-19-00355.
Abstract
Purpose: Difficulty understanding spoken speech is a common complaint among aging adults, even in the absence of hearing impairment. Correlational studies point to a relationship between age, auditory temporal processing (ATP), and speech perception but, unlike training studies, cannot demonstrate causality. In the current study, we test (a) the causal relationship between a spatial-temporal ATP task (temporal order judgment [TOJ]) and speech perception among aging adults using a training design, and (b) whether improvement in speech perception is accompanied by improved self-efficacy.
Method: Eighty-two participants aged 60-83 years were randomly assigned to a group receiving (a) ATP training (TOJ) over 14 days, (b) non-ATP training (intensity discrimination) over 14 days, or (c) no training.
Results: TOJ training elicited improvement on all speech perception tests, accompanied by increased self-efficacy. Neither improvement in speech perception nor in self-efficacy was evident following non-ATP training or no training.
Conclusions: Improvement from TOJ training did not generalize to intensity discrimination, nor did improvement from intensity discrimination training generalize to speech perception. These findings imply that the effect of TOJ training on speech perception is specific and not simply the product of generally improved auditory perception, supporting the idea that the temporal properties of speech are crucial for speech perception. Clinically, the findings suggest that aging adults can be trained to improve their speech perception, specifically through computer-based auditory training, which may also improve perceived self-efficacy.
Affiliation(s)
- Leah Fostick: Department of Communication Disorders, Ariel University, Israel
- Shelly Zokraut: Department of Health Systems Management, Ariel University, Israel
- Miriam Billig: Department of Sociology and Anthropology, Ariel University, Israel; Eastern R&D Center, Ariel, Israel
14. Yaralı M. Varying effect of noise on sound onset and acoustic change evoked auditory cortical N1 responses evoked by a vowel-vowel stimulus. Int J Psychophysiol 2020; 152:36-43. PMID: 32302643; DOI: 10.1016/j.ijpsycho.2020.04.010.
Abstract
Introduction: According to previous studies, noise prolongs latencies and decreases amplitudes of acoustic-change-evoked cortical responses. For a consonant-vowel stimulus in particular, speech-shaped noise affects the onset-evoked response more than the acoustic-change-evoked response. Reasoning that this may be related to the spectral characteristics of the stimuli and the noise, the current study presented a vowel-vowel stimulus (/ui/) in white noise during cortical response recordings. The hypothesis was that noise would affect the acoustic change N1 more than the onset N1, owing to masking of the formant transitions.
Methods: Onset and acoustic-change-evoked auditory cortical N1-P2 responses were obtained from 21 young adults with normal hearing while presenting 1000 ms /ui/ stimuli in quiet and in white noise at +10 dB and 0 dB signal-to-noise ratio (SNR).
Results: In the quiet and +10 dB SNR conditions, N1-P2 responses to both onset and change were present. At +10 dB SNR, acoustic change N1-P2 peak-to-peak amplitudes were reduced and N1 latencies prolonged relative to quiet, whereas onset N1 latencies and N1-P2 peak-to-peak amplitudes did not change significantly. At 0 dB SNR, change responses were not observed, while onset N1-P2 peak-to-peak amplitudes were significantly lower and onset N1 latencies significantly longer than in the quiet and +10 dB SNR conditions. Comparing onset and change responses within each condition, N1 latencies and N1-P2 peak-to-peak amplitudes did not differ in quiet, whereas at +10 dB SNR the acoustic change N1 latencies were longer and N1-P2 amplitudes lower than those for onset.
Discussion/Conclusions: The effect of noise was greater on the acoustic-change-evoked N1 response than on the onset N1. This may relate to the spectral characteristics of the noise and stimuli, to possible differences in the acoustic features of sound onsets versus acoustic changes, or to possible differences in the mechanisms for detecting them. To investigate the reasons for the more pronounced effect of noise on acoustic changes, future work with different vowel-vowel transitions in different noise types is suggested.
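Several of the studies above present stimuli at fixed SNRs (+10 and 0 dB here). Scaling a noise track so that the mix hits a target SNR follows directly from the RMS powers of the two signals; a NumPy sketch (an illustrative recipe, not the authors' code):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise RMS ratio of the returned
    mixture equals `snr_db`, then sum the two signals."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    # Gain that makes 20*log10(rms(speech) / rms(gain * noise)) == snr_db.
    gain = rms(speech) / (rms(noise) * 10 ** (snr_db / 20.0))
    return speech + gain * noise

# Toy check: mix a tone with white noise at +10 dB SNR (values arbitrary).
rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
noise = rng.standard_normal(16000)
mixed = mix_at_snr(speech, noise, 10.0)
```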
Collapse
Affiliation(s)
- Mehmet Yaralı
- Department of Audiology, Hacettepe University, Ankara, Turkey.
| |
Collapse
|
15
|
Sprachverstehen und kognitive Leistungen in akustisch schwierigen Situationen [Speech comprehension and cognitive performance in acoustically difficult situations]. HNO 2020; 68:171-176. [DOI: 10.1007/s00106-019-0727-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
|
16
|
Šabić E, Henning D, Myüz H, Morrow A, Hout MC, MacDonald JA. Examining the Role of Eye Movements During Conversational Listening in Noise. Front Psychol 2020; 11:200. [PMID: 32116975 PMCID: PMC7033431 DOI: 10.3389/fpsyg.2020.00200] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2019] [Accepted: 01/28/2020] [Indexed: 12/02/2022] Open
Abstract
Speech comprehension is often thought of as an entirely auditory process, but both normal-hearing and hearing-impaired individuals sometimes use visual attention to disambiguate speech, particularly when it is difficult to hear. Many studies have investigated how visual attention (or the lack thereof) impacts the perception of simple speech sounds such as isolated consonants, but there is a gap in the literature concerning visual attention during natural speech comprehension. This issue needs to be addressed, as individuals process sounds and words in everyday speech differently than when they are separated into individual elements with no competing sound sources or noise. Moreover, further research is needed to explore patterns of eye movements during speech comprehension, especially in the presence of noise, as such an investigation would allow us to better understand how people strategically use visual information while processing speech. To this end, we conducted an experiment to track eye-gaze behavior during a series of listening tasks as a function of the number of speakers, background noise intensity, and the presence or absence of simulated hearing impairment. Our specific aims were to discover how individuals might adapt their oculomotor behavior to compensate for the difficulty of the listening scenario, such as when listening in noisy environments or experiencing simulated hearing loss. Speech comprehension difficulty was manipulated by simulating hearing loss and varying background noise intensity. Results showed that eye movements were affected by the number of speakers, simulated hearing impairment, and the presence of noise. Further, findings showed that differing levels of signal-to-noise ratio (SNR) led to changes in eye-gaze behavior. Most notably, we found that the addition of visual information (i.e., videos vs. auditory information only) led to enhanced speech comprehension, highlighting the strategic use of visual information during this process.
Collapse
Affiliation(s)
- Edin Šabić
- Hearing Enhancement and Augmented Reality Lab, Department of Psychology, New Mexico State University, Las Cruces, NM, United States
| | - Daniel Henning
- Hearing Enhancement and Augmented Reality Lab, Department of Psychology, New Mexico State University, Las Cruces, NM, United States
| | - Hunter Myüz
- Hearing Enhancement and Augmented Reality Lab, Department of Psychology, New Mexico State University, Las Cruces, NM, United States
| | - Audrey Morrow
- Hearing Enhancement and Augmented Reality Lab, Department of Psychology, New Mexico State University, Las Cruces, NM, United States
| | - Michael C Hout
- Hearing Enhancement and Augmented Reality Lab, Department of Psychology, New Mexico State University, Las Cruces, NM, United States
| | - Justin A MacDonald
- Hearing Enhancement and Augmented Reality Lab, Department of Psychology, New Mexico State University, Las Cruces, NM, United States
| |
Collapse
|
17
|
Taitelbaum-Swead R, Kozol Z, Fostick L. Listening Effort Among Adults With and Without Attention-Deficit/Hyperactivity Disorder. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2019; 62:4554-4563. [PMID: 31747524 DOI: 10.1044/2019_jslhr-h-19-0134] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Purpose Few studies have assessed listening effort (LE)-the cognitive resources required to perceive speech-among populations with intact hearing but reduced availability of cognitive resources. Attention-deficit/hyperactivity disorder (ADHD) is theorized to restrict attention span, possibly making speech perception in adverse conditions more challenging. This study examined the effect of ADHD on LE among adults using a behavioral dual-task paradigm (DTP). Method Thirty-nine normal-hearing adults (aged 21-27 years) participated: 19 with ADHD (ADHD group) and 20 without ADHD (control group). Baseline group differences were measured in visual and auditory attention as well as speech perception. LE using DTP was assessed as the performance difference on a visual-motor task versus a simultaneous auditory and visual-motor task. Results Group differences in attention were confirmed by differences in visual attention (larger reaction-time differences between congruent and incongruent conditions) and auditory attention (lower accuracy in the presence of distractors) among the ADHD group, compared to the controls. LE was greater among the ADHD group than the control group. Nevertheless, no group differences were found in speech perception. Conclusions LE is increased among those with ADHD. As a DTP assumes limited cognitive capacity to allocate attentional resources, LE among those with ADHD may be increased because higher-level cognitive processes are more taxed in this population. Studies on LE using a DTP should take into consideration mechanisms of selective and divided attention. Among young adults who need to continuously process great volumes of auditory and visual information, much more effort may be expended by those with ADHD than those without it. As a result, those with ADHD may be more prone to fatigue and irritability, similar to those who are engaged in more outwardly demanding tasks.
Collapse
Affiliation(s)
- Riki Taitelbaum-Swead
- Department of Communication Disorders, Ariel University, Israel
- Meuhedet Health Services, Tel Aviv, Israel
| | - Zvi Kozol
- Department of Physiotherapy, Ariel University, Israel
| | - Leah Fostick
- Department of Communication Disorders, Ariel University, Israel
| |
Collapse
|
18
|
Castiglione A, Casa M, Gallo S, Sorrentino F, Dhima S, Cilia D, Lovo E, Gambin M, Previato M, Colombo S, Caserta E, Gheller F, Giacomelli C, Montino S, Limongi F, Brotto D, Gabelli C, Trevisi P, Bovo R, Martini A. Correspondence Between Cognitive and Audiological Evaluations Among the Elderly: A Preliminary Report of an Audiological Screening Model of Subjects at Risk of Cognitive Decline With Slight to Moderate Hearing Loss. Front Neurosci 2019; 13:1279. [PMID: 31920475 PMCID: PMC6915032 DOI: 10.3389/fnins.2019.01279] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2018] [Accepted: 11/11/2019] [Indexed: 11/25/2022] Open
Abstract
Epidemiological studies show increasing prevalence rates of cognitive decline and hearing loss with age, particularly after the age of 65 years. These conditions are reported to be associated, although conclusive evidence of causality and implications is lacking. Nevertheless, audiological and cognitive assessment among elderly people is a key target for comprehensive and multidisciplinary evaluation of the subject's frailty status. To evaluate the use of tools for identifying older adults at risk of hearing loss and cognitive decline, and to compare hearing and cognitive performances between older adults and young subjects, we performed a prospective cross-sectional study using supraliminal auditory tests. The relationship between cognitive assessment results and audiometric results was investigated, and reference ranges for different ages or stages of disease were determined. Patients older than 65 years with different degrees of hearing function were enrolled. Each subject underwent an extensive audiological assessment, including tonal and speech audiometry, the Italian Matrix Sentence Test, and speech audiometry with logatomes in quiet. Cognitive function was screened and then verified by experienced clinicians using the Montreal Cognitive Assessment, the Geriatric Depression Scale, and, in some cases, further investigations. One hundred twenty-three subjects were finally enrolled during 2016-2019: 103 were >65 years of age and 20 were younger participants (as controls). Cognitive functions correlated with the audiological results in post-lingual hearing-impaired patients, in particular in those affected by slight to moderate hearing loss and older than 70 years. Audiological testing can thus be useful in clinical assessment and identification of patients at risk of cognitive impairment. The study was limited by its sample size (CI 95%; CL 10%) and its strict dependence on language and hearing threshold. Further investigations should be conducted to confirm the reported results and to verify similar screening models.
Collapse
Affiliation(s)
- Alessandro Castiglione
- Department of Neurosciences, University of Padua, Padua, Italy.,Complex Operative Unit of Otolaryngology, Hospital of Padua, Padua, Italy
| | - Mariella Casa
- Regional Center for the Study and Treatment of the Aging Brain, Department of Internal Medicine, Padua, Italy
| | - Samanta Gallo
- Complex Operative Unit of Otolaryngology, Hospital of Padua, Padua, Italy
| | - Flavia Sorrentino
- Complex Operative Unit of Otolaryngology, Hospital of Padua, Padua, Italy
| | - Sonila Dhima
- Complex Operative Unit of Otolaryngology, Hospital of Padua, Padua, Italy
| | - Dalila Cilia
- Department of Neurosciences, University of Padua, Padua, Italy
| | - Elisa Lovo
- Department of Neurosciences, University of Padua, Padua, Italy
| | - Marta Gambin
- Department of Neurosciences, University of Padua, Padua, Italy
| | - Maela Previato
- Department of Neurosciences, University of Padua, Padua, Italy
| | - Simone Colombo
- Department of Neurosciences, University of Padua, Padua, Italy
| | - Ezio Caserta
- Complex Operative Unit of Otolaryngology, Hospital of Padua, Padua, Italy
| | - Flavia Gheller
- Department of Neurosciences, University of Padua, Padua, Italy
| | | | - Silvia Montino
- Department of Neurosciences, University of Padua, Padua, Italy
| | - Federica Limongi
- Institute of Neuroscience, National Research Council, Padua, Italy
| | - Davide Brotto
- Complex Operative Unit of Otolaryngology, Hospital of Padua, Padua, Italy
| | - Carlo Gabelli
- Regional Center for the Study and Treatment of the Aging Brain, Department of Internal Medicine, Padua, Italy
| | - Patrizia Trevisi
- Department of Neurosciences, University of Padua, Padua, Italy.,Complex Operative Unit of Otolaryngology, Hospital of Padua, Padua, Italy
| | - Roberto Bovo
- Department of Neurosciences, University of Padua, Padua, Italy.,Complex Operative Unit of Otolaryngology, Hospital of Padua, Padua, Italy
| | - Alessandro Martini
- Department of Neurosciences, University of Padua, Padua, Italy.,Complex Operative Unit of Otolaryngology, Hospital of Padua, Padua, Italy
| |
Collapse
|
19
|
Fostick L. Card playing enhances speech perception among aging adults: comparison with aging musicians. Eur J Ageing 2019; 16:481-489. [PMID: 31798372 DOI: 10.1007/s10433-019-00512-2] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023] Open
Abstract
Speech perception and auditory processing have been shown to be enhanced among aging musicians as compared to non-musicians. In the present study, the aim was to test whether these functions are also enhanced among those who are engaged in a non-musical, mentally challenging leisure activity (card playing). Three groups of 23 aging adults, aged 60-80 years, were recruited for the study: Musicians, Card players, and Controls. Participants were matched for age, gender, Wechsler Adult Intelligence Scale-III Matrix Reasoning, and Digit Span scores. Their performance was measured using auditory spectral and spatial temporal order judgment tests, and four speech perception tasks under the following conditions: no background noise, background noise of speech frequencies, white background noise, and 60% time-compressed speech. Musicians were better in auditory and speech perception than the other two groups. Card players were similar to Controls in the auditory perception tasks, but were better in the speech perception tasks. Non-musician aging adults may be able to improve their speech perception ability by engaging in leisure activity requiring cognitive effort.
Collapse
Affiliation(s)
- Leah Fostick
- Department of Communication Disorders, Ariel University, Ariel, Israel
| |
Collapse
|
20
|
Ronen M, Lifshitz-Ben-Basat A, Taitelbaum-Swead R, Fostick L. Auditory temporal processing, reading, and phonological awareness among aging adults. Acta Psychol (Amst) 2018; 190:1-10. [PMID: 29986206 DOI: 10.1016/j.actpsy.2018.06.010] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2018] [Revised: 06/25/2018] [Accepted: 06/25/2018] [Indexed: 11/18/2022] Open
Abstract
Auditory temporal processing (ATP) has been related in the literature both to speech perception and to reading and phonological awareness. In aging adults, it is known to be related to difficulties in speech perception. In the present study, we aimed to test whether an age-related deficit in ATP would also be accompanied by poor reading and phonological awareness. Thirty-eight aging adults were compared to 55 readers with dyslexia and 42 young normal readers on temporal order judgment (TOJ), speech perception, reading, and phonological awareness tests. Aging adults had longer TOJ thresholds than young normal readers, but shorter than readers with dyslexia; however, they had lower speech perception accuracy than both groups. Phonological awareness of the aging adults was better than that of readers with dyslexia, but poorer than that of young normal readers, although their reading accuracy was similar to that of the young controls. This is the first report of poor phonological awareness among aging adults. Surprisingly, it was not accompanied by difficulties in reading ability, and might instead be related to aging adults' difficulties in speech perception. This newly discovered relationship between ATP and phonological awareness among aging adults appears to extend the existing understanding of this relationship, and suggests it should be explored in other groups with ATP deficits.
Collapse
Affiliation(s)
- Michal Ronen
- Department of Psychology, Ariel University, Israel
| | | | | | - Leah Fostick
- Department of Communication Disorders, Ariel University, Israel.
| |
Collapse
|
21
|
Coping with adversity: Individual differences in the perception of noisy and accented speech. Atten Percept Psychophys 2018; 80:1559-1570. [DOI: 10.3758/s13414-018-1537-4] [Citation(s) in RCA: 32] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
22
|
Mama Y, Fostick L, Icht M. The impact of different background noises on the Production Effect. Acta Psychol (Amst) 2018; 185:235-242. [PMID: 29559082 DOI: 10.1016/j.actpsy.2018.03.002] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2017] [Accepted: 03/11/2018] [Indexed: 11/16/2022] Open
Abstract
The presence of background noise has been previously shown to disrupt cognitive performance, especially memory. The amount of interference derives from the acoustic characteristics of the noise: energetic vs. informational, steady-state vs. fluctuating. However, the literature is inconsistent concerning the effects of different types of noise on long-term memory free recall. In the present study, we tested the impact of different noises on recall of items that were learned under two conditions, silent or aloud reading, in a Production Effect (PE) paradigm. As the PE represents enhanced memory for words read aloud relative to words read silently during study, we focused on the effect of noise on this robust memory phenomenon. The results showed that (a) steady-state energetic noise did not affect memory, with a recall advantage for aloud words (PE) comparable to a no-noise condition, and (b) fluctuating-energetic noise and fluctuating-informational (eight-talker babble) noise eliminated the PE, with similar recall for aloud and silent items. These results are discussed in light of their theoretical implications, stressing the role of attention in the PE. Ecological implications regarding studying in noisy environments are suggested.
Collapse
Affiliation(s)
- Yaniv Mama
- Department of Behavioral Sciences and Psychology, Ariel University, Israel.
| | - Leah Fostick
- Department of Communication Disorders, Ariel University, Israel
| | - Michal Icht
- Department of Communication Disorders, Ariel University, Israel
| |
Collapse
|