1. Ikeda K, Campbell TA. Binaural interaction in human auditory brainstem and middle-latency responses affected by sound frequency band, lateralization predictability, and attended modality. Hear Res 2024;452:109089. PMID: 39137721. DOI: 10.1016/j.heares.2024.109089.
Abstract
The binaural interaction component (BIC) of the auditory evoked potential is the difference between the waveform of the binaural response and the sum of the left and right monaural responses. This investigation examined BICs of the auditory brainstem (ABR) and middle-latency (MLR) responses concerning three objectives: 1) the level of the auditory system at which low-frequency dominance in BIC amplitudes begins, given that binaural temporal fine structure is more influential for lower- than for higher-frequency content; 2) how BICs vary as a function of frequency and lateralization predictability, as could relate to the improved lateralization of high-frequency sounds; 3) how attention affects BICs. Sixteen right-handed participants were presented with either low-passed (< 1000 Hz) or high-passed (> 2000 Hz) clicks at 30 dB SL with a 38 dB (A) masking noise, at a stimulus onset asynchrony of 180 ms. Further, this repeated-measures design manipulated stimulus presentation (binaural, left monaural, right monaural), lateralization predictability (unpredictable, predictable), and attended modality (either auditory or visual). For the objectives, respectively, the results were: 1) whereas low-frequency dominance in BIC amplitudes began during, and continued after, the Na-BIC, binaural (center) as well as summed monaural (left and right) amplitudes revealed low-frequency dominance only after the Na wave; 2) with a predictable position that was fixed, no BIC exhibited equivalent amplitudes between low- and high-passed clicks; 3) whether clicks were low- or high-passed, selective attention affected the ABR-BIC but not MLR-BICs. These findings indicate that low-frequency dominance in lateralization begins at the Na latency, independently of the efferent cortico-collicular pathway's influence.
Affiliation(s)
- Kazunari Ikeda
- Laboratory of Cognitive Psychophysiology, Tokyo Gakugei University, Koganei, Tokyo 184-8501, Japan.
- Tom A Campbell
- Faculty of Information Technology and Communication Sciences, Tampere University, 33720 Tampere, Finland
2. Rönnberg J, Sharma A, Signoret C, Campbell TA, Sörqvist P. Editorial: Cognitive hearing science: Investigating the relationship between selective attention and brain activity. Front Neurosci 2022;16:1098340. PMID: 36583104. PMCID: PMC9793772. DOI: 10.3389/fnins.2022.1098340.
Affiliation(s)
- Jerker Rönnberg
- Department of Behavioral Sciences and Learning, Linnaeus Centre HEAD, Linköping University, Linköping, Sweden
- Anu Sharma
- Department of Speech, Language and Hearing Sciences, University of Colorado at Boulder, Boulder, CO, United States
- Carine Signoret
- Department of Behavioral Sciences and Learning, Linnaeus Centre HEAD, Linköping University, Linköping, Sweden
- Tom A. Campbell
- Faculty of Information Technology and Communication Sciences, Tampere University, Tampere, Finland
- Patrik Sörqvist
- Department of Building Engineering, Energy Systems and Sustainability Science, University of Gävle, Gävle, Sweden
3. Rönnberg J, Signoret C, Andin J, Holmer E. The cognitive hearing science perspective on perceiving, understanding, and remembering language: The ELU model. Front Psychol 2022;13:967260. PMID: 36118435. PMCID: PMC9477118. DOI: 10.3389/fpsyg.2022.967260.
Abstract
This review gives an introductory description of the successive development of data patterns, based on comparisons of speech-understanding skills between hearing-impaired and normal-hearing participants, that later prompted the formulation of the Ease of Language Understanding (ELU) model. The model builds on the interaction between an input buffer (RAMBPHO, Rapid Automatic Multimodal Binding of PHOnology) and three memory systems: working memory (WM), semantic long-term memory (SLTM), and episodic long-term memory (ELTM). RAMBPHO input may either match or mismatch multimodal SLTM representations. Given a match, lexical access is accomplished rapidly and implicitly within approximately 100-400 ms. Given a mismatch, the prediction is that WM is engaged explicitly to repair the meaning of the input - in interaction with SLTM and ELTM - taking seconds rather than milliseconds. The multimodal and multilevel nature of the representations held in WM and LTM is at the center of the review, these representations being integral parts of the prediction and postdiction components of language understanding. Finally, some hypotheses based on a selective use-disuse of memory systems mechanism are described in relation to mild cognitive impairment and dementia. Alternative speech perception and WM models are evaluated, and recent developments and generalisations, ELU model tests, and boundaries are discussed.
Affiliation(s)
- Jerker Rönnberg
- Linnaeus Centre HEAD, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
4. Zobel BH, Freyman RL, Sanders LD. Spatial release from informational masking enhances the early cortical representation of speech sounds. Auditory Perception & Cognition 2022;5:211-237. PMID: 36160272. PMCID: PMC9494573. DOI: 10.1080/25742442.2022.2088329.
Abstract
Introduction: Spatial separation between competing speech streams reduces their confusion (informational masking), improving speech processing under challenging listening conditions. The precise stages of auditory processing involved in this benefit are not fully understood. This study used event-related potentials to examine the processing of target speech under conditions of informational masking and its spatial release.
Methods: Participants detected noise-vocoded target speech presented with two-talker noise-vocoded masking speech. In separate conditions, the same set of targets was either spatially co-located with maskers to produce informational masking, or spatially separated from maskers using a perceptual manipulation to release the informational masking.
Results: An increase in N1 and P2 amplitude, consistent with cortical auditory evoked potentials, and a later sustained positivity (P300) were observed in response to target onsets only under conditions supporting release from informational masking. At target intensities above masking threshold in both spatial conditions, N1 and P2 latencies were shorter when targets and maskers were perceptually separated.
Discussion: These results indicate that spatial release from informational masking benefits speech representation beginning in the early stages of auditory perception. They also suggest that the auditory evoked potential itself may depend heavily on how information is perceptually organized rather than physically organized.
Affiliation(s)
- Benjamin H. Zobel
- Department of Psychological and Brain Sciences, University of Massachusetts Amherst, Amherst, Massachusetts 01003
- Richard L. Freyman
- Department of Communication Disorders, University of Massachusetts Amherst, Amherst, Massachusetts 01003
- Lisa D. Sanders
- Department of Psychological and Brain Sciences, University of Massachusetts Amherst, Amherst, Massachusetts 01003
5. Blomberg R, Johansson Capusan A, Signoret C, Danielsson H, Rönnberg J. The Effects of Working Memory Load on Auditory Distraction in Adults With Attention Deficit Hyperactivity Disorder. Front Hum Neurosci 2021;15:771711. PMID: 34916918. PMCID: PMC8670091. DOI: 10.3389/fnhum.2021.771711.
Abstract
Cognitive control provides us with the ability to, inter alia, regulate the locus of attention and ignore environmental distractions in accordance with our goals. Auditory distraction is a frequently cited symptom in adults with attention deficit hyperactivity disorder (aADHD), yet few task-based fMRI studies have explored whether the deficits in cognitive control associated with the disorder impede the ability to suppress or compensate for exogenously evoked cortical responses to noise in this population. In the current study, we explored the effects of auditory distraction as a function of working memory (WM) load. Participants completed two tasks: an auditory target detection (ATD) task in which the goal was to actively detect salient oddball tones amidst a stream of standard tones in noise, and a visual n-back task consisting of 0-, 1-, and 2-back WM conditions performed whilst ignoring the same tonal signal from the ATD task. Results indicated that our sample of young adults with aADHD (n = 17), compared to typically developed controls (n = 17), had difficulty attenuating auditory cortical responses to the task-irrelevant sound when WM demands were high (2-back). Heightened auditory activity to task-irrelevant sound was associated with both poorer WM performance and symptomatic inattentiveness. In the ATD task, we observed a significant increase in functional communication between auditory and salience networks in aADHD. Because performance outcomes were on par with controls for this task, we suggest that this increased functional connectivity in aADHD was likely an adaptive mechanism for suboptimal listening conditions. Taken together, our results indicate that adults with aADHD are more susceptible to noise interference when they are engaged in a primary task. The ability to cope with auditory distraction appears to be related to the WM demands of the task and thus the capacity to deploy cognitive control.
Affiliation(s)
- Rina Blomberg
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Andrea Johansson Capusan
- Department of Psychiatry, Linköping University, Linköping, Sweden; Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden; Center for Social and Affective Neuroscience, Linköping University, Linköping, Sweden
- Carine Signoret
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Henrik Danielsson
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Jerker Rönnberg
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Center for Medical Image Science and Visualization, Linköping University, Linköping, Sweden
6. German B, Honbolygó F, Csépe V, Kóbor A. Working memory contributes to word stress processing in a fixed-stress language. Journal of Cognitive Psychology 2021. DOI: 10.1080/20445911.2021.1898411.
Affiliation(s)
- Borbála German
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest, Hungary
- Department of Cognitive Science, Budapest University of Technology and Economics, Budapest, Hungary
- Ferenc Honbolygó
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest, Hungary
- Institute of Psychology, ELTE Eötvös Loránd University, Budapest, Hungary
- Valéria Csépe
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest, Hungary
- Faculty of Modern Philology and Social Sciences, University of Pannonia, Veszprém, Hungary
- Andrea Kóbor
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest, Hungary
7. Defining the Role of Attention in Hierarchical Auditory Processing. Audiol Res 2021;11:112-128. PMID: 33805600. PMCID: PMC8006147. DOI: 10.3390/audiolres11010012.
Abstract
Communication in noise is a complex process requiring efficient neural encoding throughout the entire auditory pathway as well as contributions from higher-order cognitive processes (i.e., attention) to extract speech cues for perception. Thus, identifying effective clinical interventions for individuals with speech-in-noise deficits relies on the disentanglement of bottom-up (sensory) and top-down (cognitive) factors to appropriately determine the area of deficit; yet, how attention may interact with early encoding of sensory inputs remains unclear. For decades, attentional theorists have attempted to address this question with cleverly designed behavioral studies, but the neural processes and interactions underlying attention's role in speech perception remain unresolved. While anatomical and electrophysiological studies have investigated the neurological structures contributing to attentional processes and revealed relevant brain-behavior relationships, recent electrophysiological techniques (i.e., simultaneous recording of brainstem and cortical responses) may provide novel insight regarding the relationship between early sensory processing and top-down attentional influences. In this article, we review relevant theories that guide our present understanding of attentional processes, discuss current electrophysiological evidence of attentional involvement in auditory processing across subcortical and cortical levels, and propose areas for future study that will inform the development of more targeted and effective clinical interventions for individuals with speech-in-noise deficits.
8. Asilador A, Llano DA. Top-Down Inference in the Auditory System: Potential Roles for Corticofugal Projections. Front Neural Circuits 2021;14:615259. PMID: 33551756. PMCID: PMC7862336. DOI: 10.3389/fncir.2020.615259.
Abstract
It has become widely accepted that humans use contextual information to infer the meaning of ambiguous acoustic signals. In speech, for example, high-level semantic, syntactic, or lexical information shapes our understanding of a phoneme buried in noise. Most current theories to explain this phenomenon rely on hierarchical predictive coding models involving a set of Bayesian priors emanating from high-level brain regions (e.g., prefrontal cortex) that are used to influence processing at lower levels of the cortical sensory hierarchy (e.g., auditory cortex). As such, virtually all proposed models of top-down facilitation focus on intracortical connections, and consequently, subcortical nuclei have scarcely been discussed in this context. However, subcortical auditory nuclei receive massive, heterogeneous, and cascading descending projections at every level of the sensory hierarchy, and activation of these systems has been shown to improve speech recognition. It is not yet clear whether or how top-down modulation to resolve ambiguous sounds calls upon these corticofugal projections. Here, we review the literature on top-down modulation in the auditory system, primarily focused on humans and cortical imaging/recording methods, and attempt to relate these findings to a growing animal literature, which has primarily focused on corticofugal projections. We argue that corticofugal pathways contain the requisite circuitry to implement predictive coding mechanisms that facilitate perception of complex sounds, and that top-down modulation at early (i.e., subcortical) stages of processing complements modulation at later (i.e., cortical) stages. Finally, we suggest experimental approaches for future studies on this topic.
Affiliation(s)
- Alexander Asilador
- Neuroscience Program, The University of Illinois at Urbana-Champaign, Champaign, IL, United States
- Beckman Institute for Advanced Science and Technology, Urbana, IL, United States
- Daniel A. Llano
- Neuroscience Program, The University of Illinois at Urbana-Champaign, Champaign, IL, United States
- Beckman Institute for Advanced Science and Technology, Urbana, IL, United States
- Molecular and Integrative Physiology, The University of Illinois at Urbana-Champaign, Champaign, IL, United States
9. Visual load effects on the auditory steady-state responses to 20-, 40-, and 80-Hz amplitude-modulated tones. Physiol Behav 2021;228:113240. PMID: 33188789. DOI: 10.1016/j.physbeh.2020.113240.
Abstract
Ignoring background sounds while focusing on a visual task is a necessary ability in everyday life. If attentional resources are shared between modalities, processing of task-irrelevant auditory information should become attenuated when attentional capacity is expended by visual demands. According to the early-filter model, top-down attenuation of auditory responses is possible at various stages of the auditory pathway through multiple recurrent loops. Furthermore, the adaptive filtering model of selective attention suggests that filtering occurs early when concurrent visual tasks are demanding (e.g., high load) and late when tasks are easy (e.g., low load). To test these models, this study examined the effects of three levels of visual load on auditory steady-state responses (ASSRs) at three modulation frequencies. Subjects performed a visual task with no, low, and high visual load while ignoring task-irrelevant sounds. The auditory stimuli were 500-Hz tones amplitude-modulated at 20, 40, or 80 Hz to target different processing stages of the auditory pathway. Results from Bayesian analyses suggest that ASSRs are unaffected by visual load. These findings imply that attentional resources are modality specific and that the attentional filter of auditory processing does not vary with visual task demands.
10. Szychowska M, Wiens S. Visual load does not decrease the auditory steady-state response to 40-Hz amplitude-modulated tones. Psychophysiology 2020;57:e13689. PMID: 32944959. PMCID: PMC7757234. DOI: 10.1111/psyp.13689.
Abstract
The auditory pathway consists of multiple recurrent loops of afferent and efferent connections that extend from the cochlea up to the prefrontal cortex. The early-filter theory proposes that these loops allow top-down filtering of early and middle latency auditory responses. Furthermore, the adaptive filtering model suggests that the filtering of irrelevant auditory stimuli should start lower in the pathway during more demanding tasks. If so, the 40-Hz auditory steady-state responses (ASSRs) to irrelevant sounds should be affected by top-down crossmodal attention to a visual task, and effects should vary with the load of the visual task. Because few studies have examined this possibility, we conducted two preregistered studies that manipulated visual load (Study 1: N = 43, Study 2: N = 45). Study 1 used two levels (low and high), and Study 2 used four levels (no, low, high, and very high). Subjects were asked to ignore a 500-Hz task-irrelevant tone that was amplitude-modulated to evoke 40-Hz ASSRs. Results from Bayesian analyses provided moderate to extreme support for no effect of load (or of a task) on ASSRs. Results also supported no interaction with time (i.e., over blocks, over minutes, or with changes in ASSRs that were synchronized with the onset of the visual stimuli). Further, results provided moderate support for no correlation between the effects of load and working memory capacity. Because the present findings support the robustness of ASSRs against manipulations of crossmodal attention, they are not consistent with the adaptive filtering model.
Affiliation(s)
- Malina Szychowska
- Gösta Ekman Laboratory, Department of Psychology, Stockholm University, Stockholm, Sweden
- Stefan Wiens
- Gösta Ekman Laboratory, Department of Psychology, Stockholm University, Stockholm, Sweden
11. Venâncio LGA, da Hora LCD, Muniz LF. Critical analysis of the article "Speech-ABR in contralateral noise: the potential tool to evaluate the rostral part of the auditory efferent system" by Lotfi, Moossavi, Javanbakhta and Zadeh: letter to the editor. Med Hypotheses 2020;141:109651. PMID: 32330773. DOI: 10.1016/j.mehy.2020.109651.
Affiliation(s)
- L G A Venâncio
- Programa de Pós-graduação em Saúde da Comunicação Humana, Universidade Federal de Pernambuco - UFPE - Recife, PE, Brazil
- L C D da Hora
- Programa de Pós-graduação em Saúde da Comunicação Humana, Universidade Federal de Pernambuco - UFPE - Recife, PE, Brazil
- L F Muniz
- Programa de Pós-graduação em Saúde da Comunicação Humana, Universidade Federal de Pernambuco - UFPE - Recife, PE, Brazil
12. Campbell TA, Marsh JE. On corticopetal-corticofugal loops of the new early filter: from cell assemblies to the rostral brainstem. Neuroreport 2019;30:202-206. PMID: 30702551. DOI: 10.1097/wnr.0000000000001184.
Affiliation(s)
- Tom A Campbell
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- John E Marsh
- Department of Building, Energy and Environmental Engineering, University of Gävle, Gävle, Sweden; School of Psychology, University of Central Lancashire, Preston, UK
13. Campbell TA, Marsh JE. Commentary: Donepezil enhances understanding of degraded speech in Alzheimer's disease. Front Aging Neurosci 2018;10:197. PMID: 30057546. PMCID: PMC6053516. DOI: 10.3389/fnagi.2018.00197.
Affiliation(s)
- Tom A Campbell
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- John E Marsh
- Department of Building, Energy and Environmental Engineering, University of Gävle, Gävle, Sweden; School of Psychology, University of Central Lancashire, Preston, United Kingdom
14. Discontinuity of early and late event-related brain potentials for selective attention in dichotic listening. Neuroreport 2018. PMID: 29538097. DOI: 10.1097/wnr.0000000000001004.
Abstract
If a representation of an auditory attention channel were present in the auditory cortices but not in the subcortical structures, it would be predicted that the early event-related brain potential (ERP) would disagree with the late ERP in selective attention effects. To examine this idea, the present study recorded the auditory brain stem response (ABR) as an early ERP, and the negative difference, processing negativity, and irrelevant positive difference waves as late ERPs, during dichotic listening. Each participant experienced two dichotic conditions: (i) 500-Hz standard tones to the left ear and 1000-Hz ones to the right ear (L500/R1000), and (ii) 1000-Hz standard tones to the left ear and 500-Hz ones to the right ear (L1000/R500). In a control task, participants performed visual detection and ignored auditory stimuli. Although the negative difference and processing negativity were found to be identical between the two dichotic conditions, the ABR demonstrated a significant difference between relevant and irrelevant tasks only for the L500/R1000 condition. A response preference for lower-frequency tones was found for behavioural measures and late ERPs but not for the ABR. These results suggest difficulty in representing attention channels in the auditory brain stem. In addition, a weak effect of dichotic sound combination in behaviour corresponded only with earlier ERPs.
15. Lee A, Ryu H, Kim JK, Jeong E. Multisensory Integration Strategy for Modality-Specific Loss of Inhibition Control in Older Adults. Int J Environ Res Public Health 2018;15:718. PMID: 29641462. PMCID: PMC5923760. DOI: 10.3390/ijerph15040718.
Abstract
Older adults are known to have lesser cognitive control capability and greater susceptibility to distraction than young adults. Previous studies have reported age-related problems in selective attention and inhibitory control, yielding mixed results depending on modality and context in which stimuli and tasks were presented. The purpose of the study was to empirically demonstrate a modality-specific loss of inhibitory control in processing audio-visual information with ageing. A group of 30 young adults (mean age = 25.23, Standard Deviation (SD) = 1.86) and 22 older adults (mean age = 55.91, SD = 4.92) performed the audio-visual contour identification task (AV-CIT). We compared performance of visual/auditory identification (Uni-V, Uni-A) with that of visual/auditory identification in the presence of distraction in counterpart modality (Multi-V, Multi-A). The findings showed a modality-specific effect on inhibitory control. Uni-V performance was significantly better than Multi-V, indicating that auditory distraction significantly hampered visual target identification. However, Multi-A performance was significantly enhanced compared to Uni-A, indicating that auditory target performance was significantly enhanced by visual distraction. Additional analysis showed an age-specific effect on enhancement between Uni-A and Multi-A depending on the level of visual inhibition. Together, our findings indicated that the loss of visual inhibitory control was beneficial for the auditory target identification presented in a multimodal context in older adults. A likely multisensory information processing strategy in the older adults was further discussed in relation to aged cognition.
Affiliation(s)
- Ahreum Lee
- Department of Industrial Engineering, Hanyang University, 222 Wangsimni-ro, Seongdong-gu, Seoul 04763, Korea
- Hokyoung Ryu
- Department of Arts and Technology, Hanyang University, 222 Wangsimni-ro, Seongdong-gu, Seoul 04763, Korea
- Jae-Kwan Kim
- Smart Factory Business Division, Samsung SDS, 35 Olympic Ro, Seoul 05510, Korea
- Eunju Jeong
- Department of Arts and Technology, Hanyang University, 222 Wangsimni-ro, Seongdong-gu, Seoul 04763, Korea
16. Mai G, Tuomainen J, Howell P. Relationship between speech-evoked neural responses and perception of speech in noise in older adults. J Acoust Soc Am 2018;143:1333. PMID: 29604686. DOI: 10.1121/1.5024340.
Abstract
Speech-in-noise (SPIN) perception involves neural encoding of temporal acoustic cues. Cues include temporal fine structure (TFS) and envelopes that modulate at syllable (Slow-rate ENV) and fundamental frequency (F0-rate ENV) rates. Here the relationship between speech-evoked neural responses to these cues and SPIN perception was investigated in older adults. Theta-band phase-locking values (PLVs) that reflect cortical sensitivity to Slow-rate ENV and peripheral/brainstem frequency-following responses phase-locked to F0-rate ENV (FFRENV_F0) and TFS (FFRTFS) were measured from scalp-electroencephalography responses to a repeated speech syllable in steady-state speech-shaped noise (SpN) and 16-speaker babble noise (BbN). The results showed that (1) SPIN performance and PLVs were significantly higher under SpN than BbN, implying differential cortical encoding may serve as the neural mechanism of SPIN performance that varies as a function of noise types; (2) PLVs and FFRTFS at resolved harmonics were significantly related to good SPIN performance, supporting the importance of phase-locked neural encoding of Slow-rate ENV and TFS of resolved harmonics during SPIN perception; (3) FFRENV_F0 was not associated with SPIN performance until audiometric threshold was controlled for, indicating that hearing loss should be carefully controlled when studying the role of neural encoding of F0-rate ENV. Implications are drawn with respect to fitting auditory prostheses.
Affiliation(s)
- Guangting Mai
- Department of Experimental Psychology, Division of Psychology and Language Sciences, University College London, London, WC1H 0AP, England
- Jyrki Tuomainen
- Department of Speech, Hearing and Phonetic Sciences, Division of Psychology and Language Sciences, University College London, London, WC1N 1PF, England
- Peter Howell
- Department of Experimental Psychology, Division of Psychology and Language Sciences, University College London, London, WC1H 0AP, England
17
Abstract
We investigated the capacity for two different forms of metacognitive cue to shield against auditory distraction in problem solving with Compound Remote Associates Tasks (CRATs). Experiment 1 demonstrated that an intrinsic metacognitive cue in the form of processing disfluency (manipulated using an easy-to-read vs. difficult-to-read font) could increase focal task engagement so as to mitigate the detrimental impact of distraction on solution rates for CRATs. Experiment 2 showed that an extrinsic metacognitive cue that took the form of an incentive for good task performance (i.e. 80% or better CRAT solutions) could likewise eliminate the negative impact of distraction on CRAT solution rates. Overall, these findings support the view that both intrinsic and extrinsic metacognitive cues have remarkably similar effects. This suggests that metacognitive cues operate via a common underlying mechanism whereby a participant applies increased focal attention to the primary task so as to ensure more steadfast task engagement that is not so easily diverted by task-irrelevant stimuli.
18
Hardy CJD, Hwang YT, Bond RL, Marshall CR, Ridha BH, Crutch SJ, Rossor MN, Warren JD. Donepezil enhances understanding of degraded speech in Alzheimer's disease. Ann Clin Transl Neurol 2017; 4:835-840. [PMID: 29159197 PMCID: PMC5682113 DOI: 10.1002/acn3.471] [Received: 08/05/2017] [Accepted: 08/24/2017] [Indexed: 12/11/2022] Open
Abstract
Auditory dysfunction under complex, dynamic listening conditions is a clinical hallmark of Alzheimer's disease (AD) but is challenging to measure and manage. Here, we assessed understanding of sinewave speech (a paradigm of degraded speech perception) and general cognitive abilities in 17 patients with AD, before and after a 10 mg dose of donepezil. Relative to healthy older individuals, patients had impaired sinewave speech comprehension that was selectively ameliorated by donepezil. Our findings demonstrate impaired perception of degraded speech in AD but a retained perceptual learning capacity that can be harnessed by acetylcholinesterase inhibition, with implications for designing communication interventions and acoustic environments in dementia.
Affiliation(s)
- Chris J D Hardy
- Dementia Research Centre, Department of Neurodegenerative Disease, Institute of Neurology, University College London, London, United Kingdom
- Yun T Hwang
- Dementia Research Centre, Department of Neurodegenerative Disease, Institute of Neurology, University College London, London, United Kingdom
- Rebecca L Bond
- Dementia Research Centre, Department of Neurodegenerative Disease, Institute of Neurology, University College London, London, United Kingdom
- Charles R Marshall
- Dementia Research Centre, Department of Neurodegenerative Disease, Institute of Neurology, University College London, London, United Kingdom
- Basil H Ridha
- Dementia Research Centre, Department of Neurodegenerative Disease, Institute of Neurology, University College London, London, United Kingdom
- Sebastian J Crutch
- Dementia Research Centre, Department of Neurodegenerative Disease, Institute of Neurology, University College London, London, United Kingdom
- Martin N Rossor
- Dementia Research Centre, Department of Neurodegenerative Disease, Institute of Neurology, University College London, London, United Kingdom
- Jason D Warren
- Dementia Research Centre, Department of Neurodegenerative Disease, Institute of Neurology, University College London, London, United Kingdom
19
Rönnberg J, Lunner T, Ng EHN, Lidestam B, Zekveld AA, Sörqvist P, Lyxell B, Träff U, Yumba W, Classon E, Hällgren M, Larsby B, Signoret C, Pichora-Fuller MK, Rudner M, Danielsson H, Stenfelt S. Hearing impairment, cognition and speech understanding: exploratory factor analyses of a comprehensive test battery for a group of hearing aid users, the n200 study. Int J Audiol 2016; 55:623-42. [PMID: 27589015 PMCID: PMC5044772 DOI: 10.1080/14992027.2016.1219775] [Received: 03/17/2016] [Revised: 07/29/2016] [Accepted: 07/29/2016] [Indexed: 02/08/2023]
Abstract
OBJECTIVE The aims of the current n200 study were to assess the structural relations between three classes of test variables (i.e. HEARING, COGNITION, and aided speech-in-noise OUTCOMES) and to describe the theoretical implications of these relations for the Ease of Language Understanding (ELU) model. STUDY SAMPLE Participants were 200 hard-of-hearing hearing-aid users with a mean age of 60.8 years; 43% were female, and the mean hearing threshold in the better ear was 37.4 dB HL. DESIGN LEVEL 1 factor analyses extracted one factor per test and/or cognitive function based on a priori conceptualizations. The more abstract LEVEL 2 factor analyses were performed separately for the three classes of test variables. RESULTS The HEARING test variables resulted in two LEVEL 2 factors, which we labelled SENSITIVITY and TEMPORAL FINE STRUCTURE; the COGNITIVE variables resulted in a single COGNITION factor; and the OUTCOMES variables resulted in two factors, NO CONTEXT and CONTEXT. COGNITION predicted the NO CONTEXT factor more strongly than the CONTEXT outcome factor. TEMPORAL FINE STRUCTURE and SENSITIVITY were associated with COGNITION, and all three contributed significantly and independently to the outcome scores, especially NO CONTEXT (R² = 0.40). CONCLUSIONS All LEVEL 2 factors are important theoretically as well as for clinical assessment.
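The two-level approach described in this abstract, reducing a battery of test scores to latent factors and then using those factors to predict an outcome, can be sketched as follows. The data, variable counts, and loadings below are synthetic illustrations and do not reproduce the n200 study's actual battery or results.

```python
# Minimal sketch of factor extraction followed by outcome prediction,
# in the spirit of the LEVEL 2 analyses; all data here are synthetic.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 200  # participants

# Two latent abilities drive six observed test scores, plus measurement noise.
latent = rng.standard_normal((n, 2))
loadings = np.array([[0.9, 0.0], [0.8, 0.1], [0.7, 0.0],
                     [0.0, 0.9], [0.1, 0.8], [0.0, 0.7]])
scores = latent @ loadings.T + 0.3 * rng.standard_normal((n, 6))

# Reduce the six-test battery to two latent factors.
fa = FactorAnalysis(n_components=2, random_state=0)
factors = fa.fit_transform(scores)

# A simulated outcome depends on the first latent ability;
# regress it on the extracted factors and report variance explained.
outcome = latent[:, 0] + 0.5 * rng.standard_normal(n)
r2 = LinearRegression().fit(factors, outcome).score(factors, outcome)
print(round(r2, 2))
```

Because the extracted factors recover the latent abilities well, the regression explains most of the predictable variance in the simulated outcome; with noisier batteries the R² drops accordingly.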
Affiliation(s)
- Jerker Rönnberg
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Thomas Lunner
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Eriksholm Research Centre, Oticon A/S, Rørtangvej 20, 3070 Snekkersten, Denmark
- Elaine Hoi Ning Ng
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Björn Lidestam
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Adriana Agatha Zekveld
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Section Ear & Hearing, Dept. of Otolaryngology-Head and Neck Surgery and EMGO Institute, VU University Medical Center, Amsterdam, The Netherlands
- Patrik Sörqvist
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Department of Building, Energy and Environmental Engineering, University of Gävle, Gävle, Sweden
- Björn Lyxell
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Ulf Träff
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Wycliffe Yumba
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Elisabet Classon
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Mathias Hällgren
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Birgitta Larsby
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Carine Signoret
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- M. Kathleen Pichora-Fuller
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- The Toronto Rehabilitation Institute, University Health Network, Toronto, Ontario, Canada
- The Rotman Research Institute, Baycrest Hospital, Toronto, Ontario, Canada
- Mary Rudner
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Henrik Danielsson
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Stefan Stenfelt
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden