1
Liu H, Bai Y, Zheng Q, Liu J, Zhu J, Ni G. Electrophysiological correlation of auditory selective spatial attention in the "cocktail party" situation. Hum Brain Mapp 2024; 45:e26793. PMID: 39037186; PMCID: PMC11261592; DOI: 10.1002/hbm.26793.
Abstract
The auditory system can selectively attend to a target source in complex environments, a phenomenon known as the "cocktail party" effect. However, the spatiotemporal dynamics of electrophysiological activity associated with auditory selective spatial attention (ASSA) remain largely unexplored. In this study, single-source and multiple-source paradigms were designed to simulate different auditory environments, and microstate analysis was introduced to reveal the electrophysiological correlates of ASSA. Furthermore, cortical source analysis was employed to reveal the neural activity regions of these microstates. The results showed that five microstates, MS1 to MS5, could explain the spatiotemporal dynamics of ASSA. Notably, MS2 and MS3 showed significantly lower partial properties in multiple-source situations than in single-source situations, whereas MS4 had shorter durations and MS5 longer durations in multiple-source situations than in single-source situations. MS1 showed no significant differences between the two situations. Cortical source analysis showed that the activation regions of these microstates initially transferred from the right temporal cortex to the temporal-parietal cortex, and subsequently to the dorsofrontal cortex. Moreover, neural activity in the single-source situations was greater than in the multiple-source situations for MS2 and MS3, correlating with the N1 and P2 components, with the greatest differences observed in the superior temporal gyrus and inferior parietal lobule. These findings suggest that these specific microstates and their associated activation regions may serve as promising substrates for decoding ASSA in complex environments.
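For readers unfamiliar with the method, microstate analysis clusters instantaneous EEG topographies (usually sampled at global field power peaks) into a handful of template maps such as the MS1 to MS5 reported above. A minimal, illustrative sketch using a polarity-invariant k-means on average-referenced data; the synthetic data and parameter choices here are ours, not the authors':

```python
import numpy as np

def gfp(eeg):
    """Global field power: spatial std across channels at each sample."""
    return eeg.std(axis=0)

def microstate_maps(eeg, n_states=5, n_iter=50, seed=0):
    """Polarity-invariant k-means over topographies at GFP peaks.
    eeg: (n_channels, n_samples) average-referenced data."""
    g = gfp(eeg)
    # local GFP maxima: samples where GFP exceeds both neighbours
    peaks = np.where((g[1:-1] > g[:-2]) & (g[1:-1] > g[2:]))[0] + 1
    X = eeg[:, peaks]                      # (n_ch, n_peaks) topographies
    X = X / np.linalg.norm(X, axis=0)      # unit-norm maps
    rng = np.random.default_rng(seed)
    maps = X[:, rng.choice(X.shape[1], n_states, replace=False)]
    for _ in range(n_iter):
        corr = maps.T @ X                  # (n_states, n_peaks)
        labels = np.abs(corr).argmax(0)    # polarity-invariant assignment
        for k in range(n_states):
            xs = X[:, labels == k]
            if xs.size == 0:
                continue
            # first principal component = polarity-invariant mean map
            u, _, _ = np.linalg.svd(xs, full_matrices=False)
            maps[:, k] = u[:, 0]
    return maps, labels, peaks
```

Per-state properties such as duration and coverage, of the kind compared between listening situations above, can then be derived from the label sequence.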
Affiliation(s)
- Hongxing Liu
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- State Key Laboratory of Advanced Medical Materials and Devices, Tianjin University, Tianjin, China
- Yanru Bai
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- State Key Laboratory of Advanced Medical Materials and Devices, Tianjin University, Tianjin, China
- Haihe Laboratory of Brain‐computer Interaction and Human‐machine Integration, Tianjin, China
- Qi Zheng
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- State Key Laboratory of Advanced Medical Materials and Devices, Tianjin University, Tianjin, China
- Jihan Liu
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- State Key Laboratory of Advanced Medical Materials and Devices, Tianjin University, Tianjin, China
- Jianing Zhu
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- State Key Laboratory of Advanced Medical Materials and Devices, Tianjin University, Tianjin, China
- Guangjian Ni
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- State Key Laboratory of Advanced Medical Materials and Devices, Tianjin University, Tianjin, China
- Haihe Laboratory of Brain‐computer Interaction and Human‐machine Integration, Tianjin, China
- Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin, China
2
Fu X, Smulders FTY, Riecke L. Touch Helps Hearing: Evidence From Continuous Audio-Tactile Stimulation. Ear Hear 2024:00003446-990000000-00318. PMID: 39046790; DOI: 10.1097/aud.0000000000001566.
Abstract
OBJECTIVES Identifying target sounds in challenging environments is crucial for everyday listening. Notably, this ability can be enhanced by nonauditory stimuli, for example, through lip-reading during an ongoing conversation. However, how tactile stimuli affect auditory processing is still relatively unclear. Recent studies have shown that brief tactile stimuli can reliably facilitate auditory perception, while studies using longer-lasting audio-tactile stimulation have yielded conflicting results. This study aimed to investigate the impact of ongoing pulsating tactile stimulation on basic auditory processing. DESIGN In experiment 1, the electroencephalogram (EEG) was recorded while 24 participants performed a loudness-discrimination task on a 4-Hz modulated tone-in-noise and received either in-phase, anti-phase, or no 4-Hz electrotactile stimulation above the median nerve. In experiment 2, another 24 participants were presented with the same tactile stimulation as before, but performed a tone-in-noise detection task while their selective auditory attention was manipulated. RESULTS We found that in-phase tactile stimulation enhanced EEG responses to the tone, whereas anti-phase tactile stimulation suppressed these responses. No corresponding tactile effects on loudness-discrimination performance were observed in experiment 1. Using a yes/no paradigm in experiment 2, we found that in-phase tactile stimulation, but not anti-phase tactile stimulation, improved detection thresholds. Selective attention also improved thresholds but did not modulate the observed benefit from in-phase tactile stimulation. CONCLUSIONS Our study highlights that ongoing in-phase tactile input can enhance basic auditory processing, as reflected in scalp EEG and detection thresholds. This might have implications for the development of hearing enhancement technologies and interventions.
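The phase manipulation at the heart of this design can be illustrated by constructing a 4-Hz amplitude-modulated tone together with a tactile pulse train locked either to the envelope peaks (in-phase) or troughs (anti-phase). The carrier frequency, pulse width, and sampling rate below are illustrative choices, not the study's actual parameters:

```python
import numpy as np

FS = 1000          # Hz, sampling rate (illustrative)
F_MOD = 4.0        # Hz, modulation rate used in the study
F_CARRIER = 200.0  # Hz, hypothetical tone carrier

def am_tone(dur, phase=0.0):
    """Tone with a 4-Hz envelope (1 + cos(2*pi*f_mod*t + phase)) / 2."""
    t = np.arange(int(dur * FS)) / FS
    env = 0.5 * (1.0 + np.cos(2 * np.pi * F_MOD * t + phase))
    return env * np.sin(2 * np.pi * F_CARRIER * t)

def tactile_train(dur, phase=0.0):
    """4-Hz pulse train: pulses sit at the envelope peaks for phase=0
    (in-phase) or at the troughs for phase=pi (anti-phase)."""
    t = np.arange(int(dur * FS)) / FS
    cyc = (F_MOD * t + phase / (2 * np.pi)) % 1.0
    return (cyc < 0.02).astype(float)   # 20-ms pulse each cycle
```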
Affiliation(s)
- Xueying Fu
- Faculty of Psychology and Neuroscience, Department of Cognitive Neuroscience, Maastricht University, Maastricht, the Netherlands
3
Drew J, Foti N, Nadkarni R, Larson E, Fox E, Kc Lee A. Using a linear dynamic system to measure functional connectivity from M/EEG. J Neural Eng 2024; 21:046020. PMID: 38936398; DOI: 10.1088/1741-2552/ad5cc1.
Abstract
Objective. Measures of functional connectivity (FC) can elucidate which cortical regions work together in order to complete a variety of behavioral tasks. This study's primary objective was to expand a previously published model of measuring FC to include multiple subjects and several regions of interest. While FC has been more extensively investigated in vision and other sensorimotor tasks, it is not as well understood in audition. The secondary objective of this study was to investigate how auditory regions are functionally connected to other cortical regions when attention is directed to distinct auditory stimuli. Approach. This study implements a linear dynamic system (LDS) to measure the structured time-lagged dependence across several cortical regions in order to estimate their FC during a dual-stream auditory attention task. Results. The model's output shows consistent functionally connected regions across different listening conditions, indicative of an auditory attention network that engages regardless of endogenous switching of attention or different auditory cues being attended. Significance. The LDS used in this study implements a multivariate autoregression to infer FC across cortical regions during an auditory attention task. This study shows how a first-order autoregressive function can reliably measure functional connectivity from M/EEG data. Additionally, the study shows how auditory regions engage with the supramodal attention network outlined in the visual attention literature.
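The core of a first-order multivariate autoregression of the kind the LDS builds on can be fit by ordinary least squares; the off-diagonal entries of the transition matrix then index time-lagged dependence between regions. A minimal sketch (the authors' model is richer, handling multiple subjects and state noise; this is only the central regression):

```python
import numpy as np

def fit_var1(x):
    """Least-squares fit of a first-order vector autoregression
    x[t] = A @ x[t-1] + noise, where x is (n_regions, n_samples).
    Off-diagonal entries of A capture time-lagged dependence between
    regions, a simple proxy for directed functional connectivity."""
    past, present = x[:, :-1], x[:, 1:]
    # Solve present = A @ past in the least-squares sense
    A = present @ past.T @ np.linalg.pinv(past @ past.T)
    return A
```

On simulated data generated from a known transition matrix, the estimate converges to the true matrix as the number of samples grows.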
Affiliation(s)
- Jordan Drew
- Electrical and Computer Engineering, University of Washington, Seattle, WA, United States of America
- Nicholas Foti
- Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA, United States of America
- Rahul Nadkarni
- Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA, United States of America
- Eric Larson
- Institute for Learning & Brain Sciences, University of Washington, Seattle, WA, United States of America
- Emily Fox
- Departments of Statistics and Computer Science, Stanford University, Stanford, CA, United States of America
- Chan Zuckerberg Biohub, San Francisco, CA, United States of America
- Adrian Kc Lee
- Institute for Learning & Brain Sciences, University of Washington, Seattle, WA, United States of America
- Speech & Hearing Sciences, University of Washington, Seattle, WA, United States of America
4
Dimmock S, O'Donnell C, Houghton C. Bayesian analysis of phase data in EEG and MEG. eLife 2023; 12:e84602. PMID: 37698464; PMCID: PMC10588985; DOI: 10.7554/elife.84602.
Abstract
Electroencephalography and magnetoencephalography recordings are non-invasive and temporally precise, making them invaluable tools in the investigation of neural responses in humans. However, these recordings are noisy, both because the neuronal electrodynamics involved produces a muffled signal and because the neuronal processes of interest compete with numerous other processes, from blinking to day-dreaming. One fruitful response to this noisiness has been to use stimuli with a specific frequency and to look for the signal of interest in the response at that frequency. Typically this signal involves measuring the coherence of response phase: here, a Bayesian approach to measuring phase coherence is described. This Bayesian approach is illustrated using two examples from neurolinguistics and its properties are explored using simulated data. We suggest that the Bayesian approach is more descriptive than traditional statistical approaches because it provides an explicit, interpretable generative model of how the data arises. It is also more data-efficient: it detects stimulus-related differences for smaller participant numbers than the standard approach.
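The quantity being modelled here is the distribution of response phases across trials. A minimal moment-based fit of a von Mises distribution, the standard circular analogue of a Gaussian, illustrates what a generative model of phase data parametrizes. This is a frequentist sketch for orientation only, not the paper's Bayesian hierarchical model:

```python
import numpy as np

def vonmises_fit(phases):
    """Moment-based fit of a von Mises distribution to circular phase
    data. Returns (mu, kappa): mean direction and concentration. Uses
    the standard piecewise approximation to invert R = I1(k)/I0(k)."""
    z = np.exp(1j * np.asarray(phases)).mean()
    R = np.abs(z)          # resultant length, the usual coherence index
    mu = np.angle(z)       # mean phase direction
    if R < 0.53:
        kappa = 2 * R + R**3 + 5 * R**5 / 6
    elif R < 0.85:
        kappa = -0.4 + 1.39 * R + 0.43 / (1 - R)
    else:
        kappa = 1 / (R**3 - 4 * R**2 + 3 * R)
    return mu, kappa
```

Higher concentration corresponds to tighter phase locking to the stimulus; a Bayesian treatment places priors over such parameters and yields full posteriors rather than point estimates.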
Affiliation(s)
- Sydney Dimmock
- Faculty of Engineering, University of Bristol, Bristol, United Kingdom
- Cian O'Donnell
- Faculty of Engineering, University of Bristol, Bristol, United Kingdom
- School of Computing, Engineering & Intelligent Systems, Ulster University, Derry/Londonderry, United Kingdom
- Conor Houghton
- Faculty of Engineering, University of Bristol, Bristol, United Kingdom
5
Ummear Raza M, Gautam D, Rorie D, Sivarao DV. Differential Effects of Clozapine and Haloperidol on the 40 Hz Auditory Steady State Response-mediated Phase Resetting in the Prefrontal Cortex of the Female Sprague Dawley Rat. Schizophr Bull 2023; 49:581-591. PMID: 36691888; PMCID: PMC10154723; DOI: 10.1093/schbul/sbac203.
Abstract
BACKGROUND Neural synchrony at gamma frequency (~40 Hz) is important for information processing and is disrupted in schizophrenia. From a drug development perspective, molecules that can improve local gamma synchrony are promising candidates for therapeutic development. HYPOTHESIS Given their differentiated clinical profiles, clozapine and haloperidol may have distinct effects on local gamma synchrony engendered by 40 Hz click trains, the so-called auditory steady-state response (ASSR). STUDY DESIGN Clozapine and haloperidol at doses known to mimic clinically relevant D2 receptor occupancy were evaluated using the ASSR in separate cohorts of female SD rats. RESULTS Clozapine (2.5-10 mg/kg, sc) robustly increased intertrial phase coherence (ITC) across all doses. Evoked response increased, but less consistently. Background gamma activity, unrelated to the stimulus, was reduced at all doses. Closer scrutiny of the data indicated that clozapine accelerated gamma phase resetting. Thus, clozapine augmented auditory information processing in the gamma frequency range by reducing background gamma, accelerating gamma phase resetting, and improving phase precision and signal power. Modest improvements in ITC were seen with haloperidol (0.08 and 0.24 mg/kg, sc) without accelerated phase resetting. Evoked power was unaffected, while background gamma was reduced only at high doses, which also caused catalepsy. CONCLUSIONS Using click-train-evoked gamma synchrony as an index of local neural network function, we provide a plausible neurophysiological basis for the superior and differentiated profile of clozapine. These observations may provide a neurophysiological template for identifying new drug candidates with therapeutic potential for treatment-resistant schizophrenia.
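The ITC measure reported above can be computed from epoched data as the resultant length of per-trial phases at the DFT bin matching the 40-Hz stimulation rate. A minimal single-channel sketch (epoch length, sampling rate, and trial count below are illustrative, not the study's recording parameters):

```python
import numpy as np

def itc(trials, freq, fs):
    """Intertrial phase coherence at one frequency.
    trials: (n_trials, n_samples) single-channel epochs.
    Returns a value in [0, 1]: 1 = perfectly phase-locked trials."""
    n = trials.shape[1]
    k = int(round(freq * n / fs))              # DFT bin nearest `freq`
    spec = np.fft.rfft(trials, axis=1)[:, k]   # complex response per trial
    return np.abs((spec / np.abs(spec)).mean())  # mean of unit phasors
```

Phase-locked epochs give ITC near 1, while epochs with random phase across trials give ITC near 1/sqrt(n_trials).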
Affiliation(s)
- Muhammad Ummear Raza
- Department of Pharmaceutical Sciences, Bill Gatton College of Pharmacy, East Tennessee State University, Johnson City, TN
- Deepshila Gautam
- Department of Pharmaceutical Sciences, Bill Gatton College of Pharmacy, East Tennessee State University, Johnson City, TN
- Dakota Rorie
- Department of Pharmaceutical Sciences, Bill Gatton College of Pharmacy, East Tennessee State University, Johnson City, TN
- Digavalli V Sivarao
- Department of Pharmaceutical Sciences, Bill Gatton College of Pharmacy, East Tennessee State University, Johnson City, TN
6
Kumar S, Nayak S, Pitchai Muthu AN. Effect of selective attention on auditory brainstem response. Hearing, Balance and Communication 2023. DOI: 10.1080/21695717.2023.2168413.
Affiliation(s)
- Sathish Kumar
- Department of Audiology and Speech-Language Pathology, Kasturba Medical College, Mangalore, Manipal Academy of Higher Education, Mangalore, India
- Srikanth Nayak
- Department of Audiology and Speech-Language Pathology, Yenepoya Medical College, Yenepoya University (Deemed to be University), Mangalore, India
7
Kurthen I, Christen A, Meyer M, Giroud N. Older adults' neural tracking of interrupted speech is a function of task difficulty. Neuroimage 2022; 262:119580. PMID: 35995377; DOI: 10.1016/j.neuroimage.2022.119580.
Abstract
Age-related hearing loss is a highly prevalent condition, which manifests at both the auditory periphery and the brain. It leads to degraded auditory input, which needs to be repaired in order to achieve understanding of spoken language. It is still unclear how older adults with this condition draw on their neural resources to optimally process speech. By presenting interrupted speech to 26 healthy older adults with normal-for-age audiograms, this study investigated neural tracking of degraded auditory input. The electroencephalograms of the participants were recorded while they first listened to and then verbally repeated sentences interrupted by silence at varying interruption rates. Speech tracking was measured by inter-trial phase coherence in response to the stimuli. At interruption rates corresponding to the theta frequency band, speech tracking was highly specific to the interruption rate and positively related to the understanding of interrupted speech. These results suggest that older adults' brain activity optimizes speech processing by tracking stimulus characteristics, and that this tracking aids in processing an incomplete auditory stimulus. Further investigation of speech tracking as a candidate training mechanism to alleviate age-related hearing loss is thus encouraged.
Affiliation(s)
- Ira Kurthen
- Department of Psychology, University of Zurich, Binzmuehlestrasse 14/21, Zurich 8050, Switzerland
- Allison Christen
- Department of Psychology, University of Zurich, Binzmuehlestrasse 14/21, Zurich 8050, Switzerland
- Martin Meyer
- Department of Comparative Language Science, University of Zurich, Switzerland; Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Switzerland; Cognitive Psychology Unit, University of Klagenfurt, Austria
- Nathalie Giroud
- Department of Computational Linguistics, Phonetics and Speech Sciences, University of Zurich, Switzerland; Competence Center for Language & Medicine, University of Zurich, Switzerland; Center for Neuroscience Zurich, University of Zurich, Switzerland
8
Auerbach BD, Gritton HJ. Hearing in Complex Environments: Auditory Gain Control, Attention, and Hearing Loss. Front Neurosci 2022; 16:799787. PMID: 35221899; PMCID: PMC8866963; DOI: 10.3389/fnins.2022.799787.
Abstract
Listening in noisy or complex sound environments is difficult for individuals with normal hearing and can be a debilitating impairment for those with hearing loss. Extracting meaningful information from a complex acoustic environment requires the ability to accurately encode specific sound features under highly variable listening conditions and segregate distinct sound streams from multiple overlapping sources. The auditory system employs a variety of mechanisms to achieve this auditory scene analysis. First, neurons across levels of the auditory system exhibit compensatory adaptations to their gain and dynamic range in response to prevailing sound stimulus statistics in the environment. These adaptations allow for robust representations of sound features that are to a large degree invariant to the level of background noise. Second, listeners can selectively attend to a desired sound target in an environment with multiple sound sources. This selective auditory attention is another form of sensory gain control, enhancing the representation of an attended sound source while suppressing responses to unattended sounds. This review will examine both “bottom-up” gain alterations in response to changes in environmental sound statistics as well as “top-down” mechanisms that allow for selective extraction of specific sound features in a complex auditory scene. Finally, we will discuss how hearing loss interacts with these gain control mechanisms, and the adaptive and/or maladaptive perceptual consequences of this plasticity.
Affiliation(s)
- Benjamin D. Auerbach
- Department of Molecular and Integrative Physiology, Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Neuroscience Program, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Correspondence: Benjamin D. Auerbach
- Howard J. Gritton
- Neuroscience Program, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Department of Comparative Biosciences, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, IL, United States
9
Vander Ghinst M, Bourguignon M, Wens V, Naeije G, Ducène C, Niesen M, Hassid S, Choufani G, Goldman S, De Tiège X. Inaccurate cortical tracking of speech in adults with impaired speech perception in noise. Brain Commun 2021; 3:fcab186. PMID: 34541530; PMCID: PMC8445395; DOI: 10.1093/braincomms/fcab186.
Abstract
Impaired speech perception in noise despite normal peripheral auditory function is a common problem in young adults. Despite a growing body of research, the pathophysiology of this impairment remains unknown. This magnetoencephalography study characterizes the cortical tracking of speech in a multi-talker background in a group of highly selected adult subjects with impaired speech perception in noise without peripheral auditory dysfunction. Magnetoencephalographic signals were recorded from 13 subjects with impaired speech perception in noise (six females, mean age: 30 years) and matched healthy subjects while they were listening to 5 different recordings of stories merged with a multi-talker background at different signal-to-noise ratios (No Noise, +10, +5, 0 and −5 dB). The cortical tracking of speech was quantified with coherence between magnetoencephalographic signals and the temporal envelope of (i) the global auditory scene (i.e. the attended speech stream and the multi-talker background noise), (ii) the attended speech stream only and (iii) the multi-talker background noise. Functional connectivity was then estimated between brain areas showing altered cortical tracking of speech in noise in subjects with impaired speech perception in noise and the rest of the brain. All participants demonstrated a selective cortical representation of the attended speech stream in noisy conditions, but subjects with impaired speech perception in noise displayed reduced cortical tracking of speech at the syllable rate (i.e. 4–8 Hz) in all noisy conditions. Increased functional connectivity was observed in subjects with impaired speech perception in noise in the noiseless and speech-in-noise conditions between supratemporal auditory cortices and left-dominant brain areas involved in semantic and attention processes.
The difficulty in understanding speech in a multi-talker background in subjects with impaired speech perception in noise appears to be related to an inaccurate auditory cortex tracking of speech at the syllable rate. The increased functional connectivity between supratemporal auditory cortices and language/attention-related neocortical areas likely serves to support speech perception and subsequent recognition in adverse auditory scenes. Overall, this study argues for a central origin of impaired speech perception in noise in the absence of any peripheral auditory dysfunction.
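Cortical tracking of this kind is typically quantified as magnitude-squared coherence between a neural time series and the speech temporal envelope. A self-contained Welch-style estimate in pure numpy (sampling rate, segment length, and the synthetic signals are illustrative; the authors' MEG pipeline is more involved):

```python
import numpy as np

def coherence(x, y, fs, nperseg=256):
    """Welch-style magnitude-squared coherence |Sxy|^2 / (Sxx * Syy),
    averaged over half-overlapping Hann-windowed segments.
    Returns (freqs, coh), each of length nperseg // 2 + 1."""
    win = np.hanning(nperseg)
    step = nperseg // 2
    sxx = syy = sxy = 0
    for start in range(0, len(x) - nperseg + 1, step):
        fx = np.fft.rfft(win * x[start:start + nperseg])
        fy = np.fft.rfft(win * y[start:start + nperseg])
        sxx = sxx + np.abs(fx) ** 2       # auto-spectrum of x
        syy = syy + np.abs(fy) ** 2       # auto-spectrum of y
        sxy = sxy + fx * np.conj(fy)      # cross-spectrum
    freqs = np.fft.rfftfreq(nperseg, 1 / fs)
    coh = np.abs(sxy) ** 2 / (sxx * syy)
    return freqs, coh
```

Coherence near 1 at, say, the 4–8 Hz syllable rate indicates strong tracking of the envelope at that rate, while unrelated frequencies sit near the bias floor of roughly 1 / n_segments.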
Affiliation(s)
- Marc Vander Ghinst
- Laboratoire de Cartographie fonctionnelle du Cerveau, UNI-ULB Neuroscience Institute, Université Libre de Bruxelles (ULB), Brussels 1070, Belgium; Service d'ORL et de chirurgie cervico-faciale, CUB Hôpital Erasme, Université Libre de Bruxelles (ULB), Brussels 1070, Belgium
- Mathieu Bourguignon
- Laboratoire de Cartographie fonctionnelle du Cerveau, UNI-ULB Neuroscience Institute, Université Libre de Bruxelles (ULB), Brussels 1070, Belgium; Laboratory of Neurophysiology and Movement Biomechanics, UNI-ULB Neuroscience Institute, Université Libre de Bruxelles (ULB), Brussels 1070, Belgium; Basque Center on Cognition, Brain and Language (BCBL), Donostia/San Sebastian 20009, Spain
- Vincent Wens
- Laboratoire de Cartographie fonctionnelle du Cerveau, UNI-ULB Neuroscience Institute, Université Libre de Bruxelles (ULB), Brussels 1070, Belgium; Clinics of Functional Neuroimaging, Service of Nuclear Medicine, CUB Hôpital Erasme, Université Libre de Bruxelles (ULB), Brussels 1070, Belgium
- Gilles Naeije
- Laboratoire de Cartographie fonctionnelle du Cerveau, UNI-ULB Neuroscience Institute, Université Libre de Bruxelles (ULB), Brussels 1070, Belgium; Service de Neurologie, ULB-Hôpital Erasme, Université libre de Bruxelles (ULB), Brussels 1070, Belgium
- Cecile Ducène
- Laboratoire de Cartographie fonctionnelle du Cerveau, UNI-ULB Neuroscience Institute, Université Libre de Bruxelles (ULB), Brussels 1070, Belgium; Service d'ORL et de chirurgie cervico-faciale, CUB Hôpital Erasme, Université Libre de Bruxelles (ULB), Brussels 1070, Belgium
- Maxime Niesen
- Laboratoire de Cartographie fonctionnelle du Cerveau, UNI-ULB Neuroscience Institute, Université Libre de Bruxelles (ULB), Brussels 1070, Belgium; Service d'ORL et de chirurgie cervico-faciale, CUB Hôpital Erasme, Université Libre de Bruxelles (ULB), Brussels 1070, Belgium
- Sergio Hassid
- Service d'ORL et de chirurgie cervico-faciale, CUB Hôpital Erasme, Université Libre de Bruxelles (ULB), Brussels 1070, Belgium
- Georges Choufani
- Service d'ORL et de chirurgie cervico-faciale, CUB Hôpital Erasme, Université Libre de Bruxelles (ULB), Brussels 1070, Belgium
- Serge Goldman
- Laboratoire de Cartographie fonctionnelle du Cerveau, UNI-ULB Neuroscience Institute, Université Libre de Bruxelles (ULB), Brussels 1070, Belgium; Clinics of Functional Neuroimaging, Service of Nuclear Medicine, CUB Hôpital Erasme, Université Libre de Bruxelles (ULB), Brussels 1070, Belgium
- Xavier De Tiège
- Laboratoire de Cartographie fonctionnelle du Cerveau, UNI-ULB Neuroscience Institute, Université Libre de Bruxelles (ULB), Brussels 1070, Belgium; Clinics of Functional Neuroimaging, Service of Nuclear Medicine, CUB Hôpital Erasme, Université Libre de Bruxelles (ULB), Brussels 1070, Belgium
10
Sokoliuk R, Degano G, Melloni L, Noppeney U, Cruse D. The Influence of Auditory Attention on Rhythmic Speech Tracking: Implications for Studies of Unresponsive Patients. Front Hum Neurosci 2021; 15:702768. PMID: 34456697; PMCID: PMC8385206; DOI: 10.3389/fnhum.2021.702768.
Abstract
Language comprehension relies on integrating words into progressively more complex structures, like phrases and sentences. This hierarchical structure-building is reflected in rhythmic neural activity across multiple timescales in E/MEG in healthy, awake participants. However, recent studies have shown evidence for this “cortical tracking” of higher-level linguistic structures also in a proportion of unresponsive patients. What does this tell us about these patients’ residual levels of cognition and consciousness? Must the listener direct their attention toward higher-level speech structures to exhibit cortical tracking, and would selective attention across levels of the hierarchy influence the expression of these rhythms? We investigated these questions in an EEG study of 72 healthy human volunteers listening to streams of monosyllabic isochronous English words that were either unrelated (scrambled condition) or composed of four-word sequences building meaningful sentences (sentential condition). Importantly, there were no physical cues between four-word sentences. Rather, boundaries were marked by syntactic structure and thematic role assignment. Participants were divided into three attention groups: from passive listening (passive group) to attending to individual words (word group) or sentences (sentence group). The passive and word groups were initially naïve to the sentential stimulus structure, while the sentence group was not. We found significant tracking at word and sentence rates across all three groups, with sentence tracking linked to left middle temporal gyrus and right superior temporal gyrus. Goal-directed attention to words did not enhance word-rate tracking, suggesting that word tracking here reflects largely automatic mechanisms, as was previously shown for tracking at the syllable rate. Importantly, goal-directed attention to sentences relative to words significantly increased sentence-rate tracking over left inferior frontal gyrus.
This attentional modulation of rhythmic EEG activity at the sentential rate highlights the role of attention in integrating individual words into complex linguistic structures. Nevertheless, given the presence of high-level cortical tracking under conditions of lower attentional effort, our findings underline the suitability of the paradigm in its clinical application in patients after brain injury. The neural dissociation between passive tracking of sentences and directed attention to sentences provides a potential means to further characterise the cognitive state of each unresponsive patient.
Affiliation(s)
- Rodika Sokoliuk
- School of Psychology, University of Birmingham, Birmingham, United Kingdom; Centre for Human Brain Health, University of Birmingham, Birmingham, United Kingdom
- Giulio Degano
- School of Psychology, University of Birmingham, Birmingham, United Kingdom; Centre for Human Brain Health, University of Birmingham, Birmingham, United Kingdom; Brain and Language Lab, Department of Psychology, Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
- Lucia Melloni
- Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Department of Neurology, New York University, New York City, NY, United States
- Uta Noppeney
- Donders Centre for Cognitive Neuroimaging, Nijmegen, Netherlands; Department of Biophysics, Radboud University, Nijmegen, Netherlands
- Damian Cruse
- School of Psychology, University of Birmingham, Birmingham, United Kingdom; Centre for Human Brain Health, University of Birmingham, Birmingham, United Kingdom
11
Abstract
The perception of sensory events can be enhanced or suppressed by the surrounding spatial and temporal context in ways that facilitate the detection of novel objects and contribute to the perceptual constancy of those objects under variable conditions. In the auditory system, the phenomenon known as auditory enhancement reflects a general principle of contrast enhancement, in which a target sound embedded within a background sound becomes perceptually more salient if the background is presented first by itself. This effect is highly robust, producing an effective enhancement of the target of up to 25 dB (more than two orders of magnitude in intensity), depending on the task. Despite the importance of the effect, neural correlates of auditory contrast enhancement have yet to be identified in humans. Here, we used the auditory steady-state response to probe the neural representation of a target sound under conditions of enhancement. The probe was simultaneously modulated in amplitude with two modulation frequencies to distinguish cortical from subcortical responses. We found robust correlates for neural enhancement in the auditory cortical, but not subcortical, responses. Our findings provide empirical support for a previously unverified theory of auditory enhancement based on neural adaptation of inhibition and point to approaches for improving sensory prostheses for hearing loss, such as hearing aids and cochlear implants.
12
Tanaka K, Ross B, Kuriki S, Harashima T, Obuchi C, Okamoto H. Neurophysiological Evaluation of Right-Ear Advantage During Dichotic Listening. Front Psychol 2021; 12:696263. PMID: 34305754; PMCID: PMC8295541; DOI: 10.3389/fpsyg.2021.696263.
Abstract
Right-ear advantage refers to the observation that when two different speech stimuli are simultaneously presented to both ears, listeners report stimuli more correctly from the right ear than the left. It is assumed to result from prominent projection along the auditory pathways to the contralateral hemisphere and the dominance of the left auditory cortex for the perception of speech elements. Our study aimed to investigate the role of attention in the right-ear advantage. We recorded magnetoencephalography data while participants listened to pairs of Japanese two-syllable words (namely, "/ta/ /ko/" or "/i/ /ka/"). The amplitudes of the stimuli were modulated at 35 Hz in one ear and 45 Hz in the other. Such frequency-tagging allowed the selective quantification of left and right auditory cortex responses to left and right ear stimuli. Behavioral tests confirmed the right-ear advantage, with higher accuracy for stimuli presented to the right ear than to the left. The amplitude of the auditory steady-state response was larger when attending to the stimuli than during passive listening. We detected a correlation between the attention-related increase in the amplitude of the auditory steady-state response and the laterality index of behavioral accuracy. The right-ear advantage in free-response dichotic listening was also reflected in neural activity in the left auditory cortex, suggesting that it was related to the allocation of attention to both ears.
Affiliation(s)
- Keita Tanaka
- Department of Science and Engineering, Tokyo Denki University, Saitama, Japan
- Bernhard Ross
- Baycrest Centre, Rotman Research Institute, Toronto, ON, Canada
- Shinya Kuriki
- Department of Science and Engineering, Tokyo Denki University, Saitama, Japan; Faculty of Health Sciences, Hokkaido University, Sapporo, Japan
- Tsuneo Harashima
- Faculty of Human Sciences, University of Tsukuba, Tsukuba, Japan
- Chie Obuchi
- Department of Speech Language and Hearing Sciences, International University of Health and Welfare, Narita, Japan
- Hidehiko Okamoto
- Department of Physiology, School of Medicine, International University of Health and Welfare, Narita, Japan
13
Hanenberg C, Schlüter MC, Getzmann S, Lewald J. Short-Term Audiovisual Spatial Training Enhances Electrophysiological Correlates of Auditory Selective Spatial Attention. Front Neurosci 2021; 15:645702. [PMID: 34276281 PMCID: PMC8280319 DOI: 10.3389/fnins.2021.645702] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2020] [Accepted: 06/09/2021] [Indexed: 11/13/2022] Open
Abstract
Audiovisual cross-modal training has been proposed as a tool to improve human spatial hearing. Here, we investigated training-induced modulations of event-related potential (ERP) components that have been associated with processes of auditory selective spatial attention when a speaker of interest has to be localized in a multiple speaker ("cocktail-party") scenario. Forty-five healthy participants were tested, including younger (19-29 years; n = 21) and older (66-76 years; n = 24) age groups. Three conditions of short-term training (duration 15 min) were compared, requiring localization of non-speech targets under "cocktail-party" conditions with either (1) synchronous presentation of co-localized auditory-target and visual stimuli (audiovisual-congruency training) or (2) immediate visual feedback on correct or incorrect localization responses (visual-feedback training), or (3) presentation of spatially incongruent auditory-target and visual stimuli presented at random positions with synchronous onset (control condition). Prior to and after training, participants were tested in an auditory spatial attention task (15 min), requiring localization of a predefined spoken word out of three distractor words, which were presented with synchronous stimulus onset from different positions. Peaks of ERP components were analyzed with a specific focus on the N2, which is known to be a correlate of auditory selective spatial attention. N2 amplitudes were significantly larger after audiovisual-congruency training compared with the remaining training conditions for younger, but not older, participants. Also, at the time of the N2, distributed source analysis revealed an enhancement of neural activity induced by audiovisual-congruency training in dorsolateral prefrontal cortex (Brodmann area 9) for the younger group. 
These findings suggest that cross-modal processes induced by audiovisual-congruency training under "cocktail-party" conditions at a short time scale resulted in an enhancement of correlates of auditory selective spatial attention.
Affiliation(s)
- Stephan Getzmann
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Jörg Lewald
- Faculty of Psychology, Ruhr University Bochum, Bochum, Germany
14
Li Z, Li J, Wang S, Wang X, Chen J, Qin L. Laminar Profile of Auditory Steady-State Response in the Auditory Cortex of Awake Mice. Front Syst Neurosci 2021; 15:636395. [PMID: 33815073 PMCID: PMC8017131 DOI: 10.3389/fnsys.2021.636395] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2020] [Accepted: 02/19/2021] [Indexed: 12/20/2022] Open
Abstract
Objective Auditory steady-state response (ASSR) is a gamma oscillation evoked by periodic auditory stimuli and is commonly used in clinical electroencephalographic examinations to evaluate neurological function. Although the auditory cortex has been suggested as the origin of the ASSR, how the laminar architecture of the neocortex contributes to the ASSR recorded from the brain surface remains unclear. Methods We used a 16-channel silicon probe to record the local field potential (LFP) and single-unit spike activity in the different layers of the auditory cortex of unanesthetized mice. Click trains with a repetition rate of 40 Hz were presented as sound stimuli to evoke the ASSR. Results We found that the LFPs of all cortical layers showed a stable ASSR synchronized to the 40-Hz click stimuli, and that the ASSR was strongest in the granular (thalamorecipient) layer. Furthermore, time-frequency analyses revealed the strongest coherence between the signals recorded from the granular layer and the pial surface. Conclusion Our results reveal that the 40-Hz ASSR primarily reflects the evoked gamma oscillation of thalamorecipient layers in the neocortex, and that the ASSR may serve as a biomarker for detecting cognitive deficits associated with impaired thalamo-cortical connections.
Affiliation(s)
- Zijie Li
- Department of Physiology, China Medical University, Shenyang, China
- Jinhong Li
- Department of Physiology, China Medical University, Shenyang, China
- Shuai Wang
- Department of Physiology, China Medical University, Shenyang, China
- Xuejiao Wang
- Department of Physiology, China Medical University, Shenyang, China
- Jingyu Chen
- Department of Physiology, China Medical University, Shenyang, China
- Ling Qin
- Department of Physiology, China Medical University, Shenyang, China
15
Visual load effects on the auditory steady-state responses to 20-, 40-, and 80-Hz amplitude-modulated tones. Physiol Behav 2021; 228:113240. [PMID: 33188789 DOI: 10.1016/j.physbeh.2020.113240] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2020] [Revised: 09/29/2020] [Accepted: 10/31/2020] [Indexed: 10/23/2022]
Abstract
Ignoring background sounds while focusing on a visual task is a necessary ability in everyday life. If attentional resources are shared between modalities, processing of task-irrelevant auditory information should become attenuated when attentional capacity is expended by visual demands. According to the early-filter model, top-down attenuation of auditory responses is possible at various stages of the auditory pathway through multiple recurrent loops. Furthermore, the adaptive filtering model of selective attention suggests that filtering occurs early when concurrent visual tasks are demanding (e.g., high load) and late when tasks are easy (e.g., low load). To test these models, this study examined the effects of three levels of visual load on auditory steady-state responses (ASSRs) at three modulation frequencies. Subjects performed a visual task with no, low, and high visual load while ignoring task-irrelevant sounds. The auditory stimuli were 500-Hz tones amplitude-modulated at 20, 40, or 80 Hz to target different processing stages of the auditory pathway. Results from Bayesian analyses suggest that ASSRs are unaffected by visual load. These findings imply that attentional resources are modality specific and that the attentional filter of auditory processing does not vary with visual task demands.
16
Manting CL, Andersen LM, Gulyas B, Ullén F, Lundqvist D. Attentional modulation of the auditory steady-state response across the cortex. Neuroimage 2020; 217:116930. [DOI: 10.1016/j.neuroimage.2020.116930] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2019] [Revised: 04/10/2020] [Accepted: 05/07/2020] [Indexed: 10/24/2022] Open
17
Puschmann S, Baillet S, Zatorre RJ. Musicians at the Cocktail Party: Neural Substrates of Musical Training During Selective Listening in Multispeaker Situations. Cereb Cortex 2020; 29:3253-3265. [PMID: 30137239 DOI: 10.1093/cercor/bhy193] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2018] [Revised: 06/26/2018] [Accepted: 07/19/2018] [Indexed: 11/12/2022] Open
Abstract
Musical training has been demonstrated to benefit speech-in-noise perception. It is however unknown whether this effect translates to selective listening in cocktail party situations, and if so what its neural basis might be. We investigated this question using magnetoencephalography-based speech envelope reconstruction and a sustained selective listening task, in which participants with varying amounts of musical training attended to 1 of 2 speech streams while detecting rare target words. Cortical frequency-following responses (FFR) and auditory working memory were additionally measured to dissociate musical training-related effects on low-level auditory processing versus higher cognitive function. Results show that the duration of musical training is associated with a reduced distracting effect of competing speech on target detection accuracy. Remarkably, more musical training was related to a robust neural tracking of both the to-be-attended and the to-be-ignored speech stream, up until late cortical processing stages. Musical training-related increases in FFR power were associated with a robust speech tracking in auditory sensory areas, whereas training-related differences in auditory working memory were linked to an increased representation of the to-be-ignored stream beyond auditory cortex. Our findings suggest that musically trained persons can use additional information about the distracting stream to limit interference by competing speech.
Affiliation(s)
- Sebastian Puschmann
- Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada; Centre for Research on Brain, Language and Music, Montreal, Quebec, Canada
- Sylvain Baillet
- Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada; Centre for Research on Brain, Language and Music, Montreal, Quebec, Canada
- Robert J Zatorre
- Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada; Centre for Research on Brain, Language and Music, Montreal, Quebec, Canada; International Laboratory for Brain, Music and Sound Research, Montreal, Quebec, Canada
18
Niesen M, Vander Ghinst M, Bourguignon M, Wens V, Bertels J, Goldman S, Choufani G, Hassid S, De Tiège X. Tracking the Effects of Top-Down Attention on Word Discrimination Using Frequency-tagged Neuromagnetic Responses. J Cogn Neurosci 2020; 32:877-888. [PMID: 31933439 DOI: 10.1162/jocn_a_01522] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Discrimination of words from nonspeech sounds is essential in communication. Still, how selective attention can influence this early step of speech processing remains elusive. To answer that question, brain activity was recorded with magnetoencephalography in 12 healthy adults while they listened to two sequences of auditory stimuli presented at 2.17 Hz, consisting of successions of one randomized word (tagging frequency = 0.54 Hz) and three acoustically matched nonverbal stimuli. Participants were instructed to focus their attention on the occurrence of a predefined word in the verbal attention condition and on a nonverbal stimulus in the nonverbal attention condition. Steady-state neuromagnetic responses were identified with spectral analysis at sensor and source levels. Significant sensor responses peaked at 0.54 and 2.17 Hz in both conditions. Sources at 0.54 Hz were reconstructed in supratemporal auditory cortex, left superior temporal gyrus (STG), left middle temporal gyrus, and left inferior frontal gyrus. Sources at 2.17 Hz were reconstructed in supratemporal auditory cortex and STG. Crucially, source strength in the left STG at 0.54 Hz was significantly higher in the verbal attention condition than in the nonverbal attention condition. This study demonstrates speech-sensitive responses at primary auditory and speech-related neocortical areas. Critically, it highlights that, during word discrimination, top-down attention modulates activity within the left STG. This area therefore appears to play a crucial role in selective verbal attentional processes for this early step of speech processing.
19
Backer KC, Kessler AS, Lawyer LA, Corina DP, Miller LM. A novel EEG paradigm to simultaneously and rapidly assess the functioning of auditory and visual pathways. J Neurophysiol 2019; 122:1312-1329. [PMID: 31268796 DOI: 10.1152/jn.00868.2018] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/11/2023] Open
Abstract
Objective assessment of the sensory pathways is crucial for understanding their development across the life span and how they may be affected by neurodevelopmental disorders (e.g., autism spectrum) and neurological pathologies (e.g., stroke, multiple sclerosis, etc.). Quick and passive measurements, for example, using electroencephalography (EEG), are especially important when working with infants and young children and with patient populations having communication deficits (e.g., aphasia). However, many EEG paradigms are limited to measuring activity from one sensory domain at a time, may be time consuming, and target only a subset of possible responses from that particular sensory domain (e.g., only auditory brainstem responses or only auditory P1-N1-P2 evoked potentials). Thus we developed a new multisensory paradigm that enables simultaneous, robust, and rapid (6-12 min) measurements of both auditory and visual EEG activity, including auditory brainstem responses, auditory and visual evoked potentials, as well as auditory and visual steady-state responses. This novel method allows us to examine neural activity at various stations along the auditory and visual hierarchies with an ecologically valid continuous speech stimulus, while an unrelated video is playing. Both the speech stimulus and the video can be customized for any population of interest. Furthermore, by using two simultaneous visual steady-state stimulation rates, we demonstrate the ability of this paradigm to track both parafoveal and peripheral visual processing concurrently. We report results from 25 healthy young adults, which validate this new paradigm. NEW & NOTEWORTHY A novel electroencephalography paradigm enables the rapid, reliable, and noninvasive assessment of neural activity along both auditory and visual pathways concurrently.
The paradigm uses an ecologically valid continuous speech stimulus for auditory evaluation and can simultaneously track visual activity to both parafoveal and peripheral visual space. This new methodology may be particularly appealing to researchers and clinicians working with infants and young children and with patient populations with limited communication abilities.
Affiliation(s)
- Kristina C Backer
- Center for Mind and Brain, University of California, Davis, California; Department of Cognitive and Information Sciences, University of California, Merced, California
- Andrew S Kessler
- Center for Mind and Brain, University of California, Davis, California
- Laurel A Lawyer
- Center for Mind and Brain, University of California, Davis, California
- David P Corina
- Center for Mind and Brain, University of California, Davis, California; Department of Linguistics, University of California, Davis, California
- Lee M Miller
- Center for Mind and Brain, University of California, Davis, California; Department of Neurobiology, Physiology, and Behavior, University of California, Davis, California
20
Park H, Ince RAA, Schyns PG, Thut G, Gross J. Representational interactions during audiovisual speech entrainment: Redundancy in left posterior superior temporal gyrus and synergy in left motor cortex. PLoS Biol 2018; 16:e2006558. [PMID: 30080855 PMCID: PMC6095613 DOI: 10.1371/journal.pbio.2006558] [Citation(s) in RCA: 30] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2018] [Revised: 08/16/2018] [Accepted: 07/24/2018] [Indexed: 11/24/2022] Open
Abstract
Integration of multimodal sensory information is fundamental to many aspects of human behavior, but the neural mechanisms underlying these processes remain mysterious. For example, during face-to-face communication, we know that the brain integrates dynamic auditory and visual inputs, but we do not yet understand where and how such integration mechanisms support speech comprehension. Here, we quantify representational interactions between dynamic audio and visual speech signals and show that different brain regions exhibit different types of representational interaction. With a novel information theoretic measure, we found that theta (3-7 Hz) oscillations in the posterior superior temporal gyrus/sulcus (pSTG/S) represent auditory and visual inputs redundantly (i.e., represent common features of the two), whereas the same oscillations in left motor and inferior temporal cortex represent the inputs synergistically (i.e., the instantaneous relationship between audio and visual inputs is also represented). Importantly, redundant coding in the left pSTG/S and synergistic coding in the left motor cortex predict behavior, i.e., speech comprehension performance. Our findings therefore demonstrate that processes classically described as integration can have different statistical properties and may reflect distinct mechanisms that occur in different brain regions to support audiovisual speech comprehension.
Affiliation(s)
- Hyojin Park
- School of Psychology, Centre for Human Brain Health (CHBH), University of Birmingham, Birmingham, United Kingdom
- Robin A. A. Ince
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Philippe G. Schyns
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Gregor Thut
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Joachim Gross
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom; Institute for Biomagnetism and Biosignalanalysis, University of Muenster, Muenster, Germany
21
Lewald J, Schlüter MC, Getzmann S. Cortical processing of location changes in a “cocktail-party” situation: Spatial oddball effects on electrophysiological correlates of auditory selective attention. Hear Res 2018; 365:49-61. [DOI: 10.1016/j.heares.2018.04.009] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/14/2018] [Revised: 04/12/2018] [Accepted: 04/25/2018] [Indexed: 11/24/2022]
22
Abstract
OBJECTIVES Auditory stimuli modulated at modulation frequencies within the 30 to 50 Hz region evoke auditory steady-state responses (ASSRs) with high signal-to-noise ratios in adults and can be used to determine the frequency-specific hearing thresholds of adults who are unable to give reliable behavioral feedback. To measure ASSRs as efficiently as possible, a multiple-stimulus paradigm can be used, stimulating both ears simultaneously. The response strength of 30 to 50 Hz ASSRs is, however, affected when both ears are stimulated simultaneously. The aim of the present study was to gain insight into the measurement efficiency of 30 to 50 Hz ASSRs evoked with a 2-ear stimulation paradigm by systematically investigating the binaural interaction effects of 30 to 50 Hz ASSRs in normal-hearing adults. DESIGN ASSRs were obtained with a 64-channel EEG system in 23 normal-hearing adults. All participants completed one diotic condition, multiple dichotic conditions, and multiple monaural conditions. Stimuli consisted of a modulated one-octave noise band, centered at 1 kHz and presented at 70 dB SPL. The diotic condition contained 40 Hz modulated stimuli presented to both ears. In the dichotic conditions, the modulation frequency of the left-ear stimulus was kept constant at 40 Hz, while the stimulus at the right ear was either the unmodulated or the modulated carrier. For the modulated carrier, the modulation frequency varied between 30 and 50 Hz in steps of 2 Hz across conditions. The monaural conditions consisted of all stimuli included in the diotic and dichotic conditions. RESULTS Modulation frequencies ≥36 Hz produced prominent ASSRs in all participants in the monaural conditions. A significant enhancement effect (average: ~3 dB) was observed in the diotic condition, whereas a significant reduction effect was observed in the dichotic conditions. There was no distinct effect of the temporal characteristics of the stimuli on the amount of reduction. The attenuation exceeded 3 dB in 33% of cases for ASSRs evoked with modulation frequencies ≥40 Hz and in 50% of cases for ASSRs evoked with modulation frequencies ≤36 Hz. CONCLUSIONS The binaural interaction effects observed in the diotic condition are similar to the binaural interaction effects of middle-latency responses reported in the literature, suggesting that these responses share the same underlying mechanism. Our data also indicated that 30 to 50 Hz ASSRs are attenuated when presented dichotically and that this attenuation is independent of the stimulus characteristics used in the present study. These findings are important because they give insight into how binaural interaction affects measurement efficiency. For the most optimal modulation frequencies (i.e., ≥40 Hz), the 2-ear stimulation paradigm of the present study was more efficient than a 1-ear sequential stimulation paradigm in 66% of cases.
23
Trachel RE, Brochier TG, Clerc M. Brain-computer interaction for online enhancement of visuospatial attention performance. J Neural Eng 2018; 15:046017. [PMID: 29667934 DOI: 10.1088/1741-2552/aabf16] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
OBJECTIVE This study on real-time decoding of visuospatial attention has two objectives: first, to reliably decode self-directed shifts of attention from electroencephalography (EEG) data, and second, to analyze whether this information can be used to enhance visuospatial performance. Visuospatial performance was measured in a target orientation discrimination task, in terms of reaction time and error rate. APPROACH Our experiment extends the Posner paradigm by introducing a new type of ambiguous cue to indicate the upcoming target location. The cues are designed so that their ambiguity is imperceptible to the user. This entails endogenous shifts of attention that are truly self-directed. Two protocols were implemented to exploit the decoding of attention shifts. The first, 'adaptive' protocol uses the decoded locus to display the target. In the second, 'warning' protocol, the target position is defined in advance, but a warning is flashed when the target mismatches the decoded locus. MAIN RESULTS Both protocols were tested in an online experiment involving ten subjects. Reaction time improved in both the adaptive and the warning protocols. Error rate improved in the adaptive protocol only. SIGNIFICANCE This proof-of-concept study provides evidence that visuospatial brain-computer interfaces (BCIs) can be used to improve human-machine interaction in situations where humans must react to off-center events in the visual field.
Affiliation(s)
- R E Trachel
- Institut de Neurosciences de la Timone (INT), CNRS-Aix-Marseille Université, Campus Santé Timone, 27, Boulevard Jean Moulin, 13385 Marseille Cedex 5, France; Inria Sophia Antipolis-Méditerranée, 2004, route des Lucioles-BP 93, 06902 Sophia Antipolis Cedex, France
24
Jaeger M, Bleichner MG, Bauer AKR, Mirkovic B, Debener S. Did You Listen to the Beat? Auditory Steady-State Responses in the Human Electroencephalogram at 4 and 7 Hz Modulation Rates Reflect Selective Attention. Brain Topogr 2018; 31:811-826. [DOI: 10.1007/s10548-018-0637-8] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2017] [Accepted: 02/23/2018] [Indexed: 01/23/2023]
25
Wiegand K, Heiland S, Uhlig CH, Dykstra AR, Gutschalk A. Cortical networks for auditory detection with and without informational masking: Task effects and implications for conscious perception. Neuroimage 2018; 167:178-190. [DOI: 10.1016/j.neuroimage.2017.11.036] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2017] [Revised: 10/06/2017] [Accepted: 11/18/2017] [Indexed: 01/08/2023] Open
26
Is Listening in Noise Worth It? The Neurobiology of Speech Recognition in Challenging Listening Conditions. Ear Hear 2018; 37 Suppl 1:101S-10S. [PMID: 27355759 DOI: 10.1097/aud.0000000000000300] [Citation(s) in RCA: 80] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
This review examines findings from functional neuroimaging studies of speech recognition in noise to provide a neural systems level explanation for the effort and fatigue that can be experienced during speech recognition in challenging listening conditions. Neuroimaging studies of speech recognition consistently demonstrate that challenging listening conditions engage neural systems that are used to monitor and optimize performance across a wide range of tasks. These systems appear to improve speech recognition in younger and older adults, but sustained engagement of these systems also appears to produce an experience of effort and fatigue that may affect the value of communication. When considered in the broader context of the neuroimaging and decision making literature, the speech recognition findings from functional imaging studies indicate that the expected value, or expected level of speech recognition given the difficulty of listening conditions, should be considered when measuring effort and fatigue. The authors propose that the behavioral economics or neuroeconomics of listening can provide a conceptual and experimental framework for understanding effort and fatigue that may have clinical significance.
27
Spüler M, Kurek S. Alpha-band lateralization during auditory selective attention for brain–computer interface control. BRAIN-COMPUTER INTERFACES 2018. [DOI: 10.1080/2326263x.2017.1415496] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Affiliation(s)
- Martin Spüler
- Department of Computer Engineering, Eberhard-Karls University Tübingen, Tübingen, Germany
- Simone Kurek
- Department of Computer Engineering, Eberhard-Karls University Tübingen, Tübingen, Germany
28
Shinn-Cunningham B. Cortical and Sensory Causes of Individual Differences in Selective Attention Ability Among Listeners With Normal Hearing Thresholds. J Speech Lang Hear Res 2017; 60:2976-2988. [PMID: 29049598 PMCID: PMC5945067 DOI: 10.1044/2017_jslhr-h-17-0080] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/27/2017] [Revised: 06/23/2017] [Accepted: 07/05/2017] [Indexed: 05/28/2023]
Abstract
PURPOSE This review provides clinicians with an overview of recent findings relevant to understanding why listeners with normal hearing thresholds (NHTs) sometimes suffer from communication difficulties in noisy settings. METHOD The results from neuroscience and psychoacoustics are reviewed. RESULTS In noisy settings, listeners focus their attention by engaging cortical brain networks to suppress unimportant sounds; they then can analyze and understand an important sound, such as speech, amidst competing sounds. Differences in the efficacy of top-down control of attention can affect communication abilities. In addition, subclinical deficits in sensory fidelity can disrupt the ability to perceptually segregate sound sources, interfering with selective attention, even in listeners with NHTs. Studies of variability in control of attention and in sensory coding fidelity may help to isolate and identify some of the causes of communication disorders in individuals presenting at the clinic with "normal hearing." CONCLUSIONS How well an individual with NHTs can understand speech amidst competing sounds depends not only on the sound being audible but also on the integrity of cortical control networks and the fidelity of the representation of suprathreshold sound. Understanding the root cause of difficulties experienced by listeners with NHTs ultimately can lead to new, targeted interventions that address specific deficits affecting communication in noise. PRESENTATION VIDEO http://cred.pubs.asha.org/article.aspx?articleid=2601617.
Affiliation(s)
- Barbara Shinn-Cunningham
- Center for Research in Sensory Communication and Emerging Neural Technology, Boston University, MA
29
Attentional Modulation of Envelope-Following Responses at Lower (93-109 Hz) but Not Higher (217-233 Hz) Modulation Rates. J Assoc Res Otolaryngol 2017; 19:83-97. [PMID: 28971333 PMCID: PMC5783923 DOI: 10.1007/s10162-017-0641-9] [Citation(s) in RCA: 37] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2017] [Accepted: 09/04/2017] [Indexed: 11/03/2022] Open
Abstract
Directing attention to sounds of different frequencies allows listeners to perceive a sound of interest, such as a talker, in a mixture. Whether cortically generated frequency-specific attention affects responses as low as the auditory brainstem is currently unclear. Participants attended to either a high- or low-frequency tone stream; the two streams were presented simultaneously and tagged with different amplitude modulation (AM) rates. In a replication design, we showed that envelope-following responses (EFRs) were modulated by attention only when the stimulus AM rate was slow enough for the auditory cortex to track, and not for stimuli with faster AM rates, which are thought to reflect 'purer' brainstem sources. Thus, we found no evidence of frequency-specific attentional modulation that can be confidently attributed to brainstem generators. The results demonstrate that different neural populations contribute to EFRs at higher and lower rates, compatible with cortical contributions at lower rates, and that stimulus AM rate can alter the conclusions of EFR studies.
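As a rough illustration of the frequency-tagging design described above, the NumPy sketch below generates two simultaneous tone streams, each tagged with its own AM rate. The carrier frequencies, tag rates, and sampling rate here are illustrative choices, not the study's exact parameters (the study used tag rates in the 93-109 Hz and 217-233 Hz ranges).

```python
import numpy as np

def am_tone(carrier_hz, am_rate_hz, dur_s=1.0, fs=24000, depth=1.0):
    """Sinusoidally amplitude-modulate a pure-tone carrier.

    The envelope (1 + depth*sin(2*pi*fm*t)) / (1 + depth) is normalized
    so that its peak is 1 regardless of modulation depth.
    """
    t = np.arange(int(dur_s * fs)) / fs
    env = (1.0 + depth * np.sin(2 * np.pi * am_rate_hz * t)) / (1.0 + depth)
    return env * np.sin(2 * np.pi * carrier_hz * t)

fs = 24000
# Two simultaneous streams, each "tagged" with its own AM rate:
# a slow tag near 100 Hz and a fast tag near 225 Hz.
low_stream = am_tone(500, 101, fs=fs)    # low-frequency carrier
high_stream = am_tone(4000, 223, fs=fs)  # high-frequency carrier
mixture = 0.5 * (low_stream + high_stream)
```

Because each stream carries a distinct AM rate, the EFR evoked by each stream can be separated in the EEG spectrum, which is what makes an attention comparison between streams possible.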
30
Mehta AH, Jacoby N, Yasin I, Oxenham AJ, Shamma SA. An auditory illusion reveals the role of streaming in the temporal misallocation of perceptual objects. Philos Trans R Soc Lond B Biol Sci 2017; 372:rstb.2016.0114. [PMID: 28044024 DOI: 10.1098/rstb.2016.0114] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 10/11/2016] [Indexed: 11/12/2022] Open
Abstract
This study investigates the neural correlates and processes underlying the ambiguous percept produced by a stimulus similar to Deutsch's 'octave illusion', in which each ear is presented with a sequence of alternating pure tones of low and high frequencies. The same sequence is presented to each ear, but in opposite phase, such that the left and right ears receive a high-low-high … and a low-high-low … pattern, respectively. Listeners generally report hearing the illusion of an alternating pattern of low and high tones, with all the low tones lateralized to one side and all the high tones lateralized to the other side. The current explanation of the illusion is that it reflects an illusory feature conjunction of pitch and perceived location. Using psychophysics and electroencephalogram measures, we test this and an alternative hypothesis involving synchronous and sequential stream segregation, and investigate potential neural correlates of the illusion. We find that the illusion of alternating tones arises from the synchronous tone pairs across ears rather than sequential tones in one ear, suggesting that the illusion involves a misattribution of time across perceptual streams, rather than a misattribution of location within a stream. The results provide new insights into the mechanisms of binaural streaming and synchronous sound segregation. This article is part of the themed issue 'Auditory and visual scene analysis'.
Affiliation(s)
- Anahita H Mehta
- UCL Ear Institute, University College London, London WC1X 8EE, UK; Department of Psychology, University of Minnesota, Minneapolis, MN 55455, USA
- Nori Jacoby
- The Center for Science and Society, Columbia University, New York, NY 10027, USA
- Ifat Yasin
- Department of Computer Science, University College London, London WC1E 6BT, UK
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, Minneapolis, MN 55455, USA
- Shihab A Shamma
- Electrical and Computer Engineering Department and Institute for Systems Research, University of Maryland, College Park, MD 20742, USA; École Normale Supérieure, 75005 Paris, France
31
Shinn-Cunningham B, Best V, Lee AKC. Auditory Object Formation and Selection. SPRINGER HANDBOOK OF AUDITORY RESEARCH 2017. [DOI: 10.1007/978-3-319-51662-2_2] [Citation(s) in RCA: 31] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/03/2022]
32
Braga RM, Hellyer PJ, Wise RJS, Leech R. Auditory and visual connectivity gradients in frontoparietal cortex. Hum Brain Mapp 2016; 38:255-270. [PMID: 27571304 PMCID: PMC5215394 DOI: 10.1002/hbm.23358] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2015] [Revised: 08/09/2016] [Accepted: 08/15/2016] [Indexed: 11/06/2022] Open
Abstract
A frontoparietal network of brain regions is often implicated in both auditory and visual information processing. Although it is possible that the same set of multimodal regions subserves both modalities, there is increasing evidence of a differentiation of sensory function within frontoparietal cortex. Magnetic resonance imaging (MRI) in humans was used to investigate whether different frontoparietal regions showed intrinsic biases in connectivity with visual or auditory modalities. Structural connectivity was assessed with diffusion tractography and functional connectivity was tested using functional MRI. A dorsal-ventral gradient of function was observed, where connectivity with visual cortex dominates dorsal frontal and parietal connections, while connectivity with auditory cortex dominates ventral frontal and parietal regions. A gradient was also observed along the posterior-anterior axis, although in opposite directions in prefrontal and parietal cortices. The results suggest that the location of neural activity within frontoparietal cortex may be influenced by these intrinsic biases toward visual and auditory processing. Thus, the location of activity in frontoparietal cortex may be influenced as much by stimulus modality as by the cognitive demands of a task. It was concluded that stimulus modality is spatially encoded throughout frontal and parietal cortices, and it was speculated that such an arrangement allows top-down modulation of modality-specific information to occur within higher-order cortex. This could provide a faster and more efficient pathway for top-down selection between sensory modalities, by constraining modulations to within frontal and parietal regions rather than relying on long-range connections to sensory cortices.
Affiliation(s)
- Rodrigo M Braga
- Center for Brain Science, Harvard University, Cambridge, Massachusetts; Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital & Harvard Medical School, Charlestown, Massachusetts; The Computational, Cognitive and Clinical Neuroimaging Laboratory, Division of Brain Sciences, Hammersmith Hospital Campus, Imperial College London, London, United Kingdom
- Peter J Hellyer
- The Computational, Cognitive and Clinical Neuroimaging Laboratory, Division of Brain Sciences, Hammersmith Hospital Campus, Imperial College London, London, United Kingdom; Centre for Neuroimaging Sciences, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, United Kingdom
- Richard J S Wise
- The Computational, Cognitive and Clinical Neuroimaging Laboratory, Division of Brain Sciences, Hammersmith Hospital Campus, Imperial College London, London, United Kingdom
- Robert Leech
- The Computational, Cognitive and Clinical Neuroimaging Laboratory, Division of Brain Sciences, Hammersmith Hospital Campus, Imperial College London, London, United Kingdom
33
Lewald J, Hanenberg C, Getzmann S. Brain correlates of the orientation of auditory spatial attention onto speaker location in a “cocktail-party” situation. Psychophysiology 2016; 53:1484-95. [DOI: 10.1111/psyp.12692] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2015] [Accepted: 05/24/2016] [Indexed: 11/29/2022]
Affiliation(s)
- Jörg Lewald
- Department of Cognitive Psychology, Faculty of Psychology, Ruhr University Bochum, Bochum, Germany
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Christina Hanenberg
- Department of Cognitive Psychology, Faculty of Psychology, Ruhr University Bochum, Bochum, Germany
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Stephan Getzmann
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
34
Goossens T, Vercammen C, Wouters J, van Wieringen A. Aging Affects Neural Synchronization to Speech-Related Acoustic Modulations. Front Aging Neurosci 2016; 8:133. [PMID: 27378906 PMCID: PMC4908923 DOI: 10.3389/fnagi.2016.00133] [Citation(s) in RCA: 60] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2015] [Accepted: 05/25/2016] [Indexed: 11/13/2022] Open
Abstract
As people age, speech perception problems become highly prevalent, especially in noisy situations. In addition to peripheral hearing and cognition, temporal processing plays a key role in speech perception. Temporal processing of speech features is mediated by synchronized activity of neural oscillations in the central auditory system. Previous studies indicate that both the degree and hemispheric lateralization of synchronized neural activity relate to speech perception performance. Based on these results, we hypothesize that impaired speech perception in older persons may, in part, originate from deviances in neural synchronization. In this study, auditory steady-state responses that reflect synchronized activity of theta, beta, low and high gamma oscillations (i.e., 4, 20, 40, and 80 Hz ASSR, respectively) were recorded in young, middle-aged, and older persons. As all participants had normal audiometric thresholds and were screened for (mild) cognitive impairment, differences in synchronized neural activity across the three age groups were likely to be attributed to age. Our data yield novel findings regarding theta and high gamma oscillations in the aging auditory system. At an older age, synchronized activity of theta oscillations is increased, whereas high gamma synchronization is decreased. In contrast to young persons who exhibit a right hemispheric dominance for processing of high gamma range modulations, older adults show a symmetrical processing pattern. These age-related changes in neural synchronization may very well underlie the speech perception problems in aging persons.
Affiliation(s)
- Tine Goossens
- Research Group Experimental Oto-rhino-laryngology (ExpORL), Department of Neurosciences, KU Leuven - University of Leuven, Leuven, Belgium
- Charlotte Vercammen
- Research Group Experimental Oto-rhino-laryngology (ExpORL), Department of Neurosciences, KU Leuven - University of Leuven, Leuven, Belgium
- Jan Wouters
- Research Group Experimental Oto-rhino-laryngology (ExpORL), Department of Neurosciences, KU Leuven - University of Leuven, Leuven, Belgium
- Astrid van Wieringen
- Research Group Experimental Oto-rhino-laryngology (ExpORL), Department of Neurosciences, KU Leuven - University of Leuven, Leuven, Belgium
35
Braga RM, Fu RZ, Seemungal BM, Wise RJS, Leech R. Eye Movements during Auditory Attention Predict Individual Differences in Dorsal Attention Network Activity. Front Hum Neurosci 2016; 10:164. [PMID: 27242465 PMCID: PMC4860869 DOI: 10.3389/fnhum.2016.00164] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2015] [Accepted: 04/01/2016] [Indexed: 11/13/2022] Open
Abstract
The neural mechanisms supporting auditory attention are not fully understood. A dorsal frontoparietal network of brain regions is thought to mediate the spatial orienting of attention across all sensory modalities. Key parts of this network, the frontal eye fields (FEF) and the superior parietal lobes (SPL), contain retinotopic maps and elicit saccades when stimulated. This suggests that their recruitment during auditory attention might reflect crossmodal oculomotor processes; however, this has not been confirmed experimentally. Here we investigate whether task-evoked eye movements during an auditory task can predict the magnitude of activity within the dorsal frontoparietal network. A spatial and a non-spatial listening task were used with on-line eye-tracking and functional magnetic resonance imaging (fMRI). No visual stimuli or cues were used. The auditory task elicited systematic eye movements, with saccade rate and gaze position predicting attentional engagement and the cued sound location, respectively. Activity associated with these separate aspects of evoked eye movements dissociated between the SPL and FEF. However, these observed eye movements could not account for all of the activation in the frontoparietal network. Our results suggest that the recruitment of the SPL and FEF during attentive listening reflects, at least partly, overt crossmodal oculomotor processes during non-visual attention. Further work is needed to establish whether the network's remaining contribution to auditory attention operates through covert crossmodal processes, or whether it is directly involved in the manipulation of auditory information.
Affiliation(s)
- Rodrigo M Braga
- Computational, Cognitive and Clinical Neuroimaging Laboratory, Division of Brain Sciences, Imperial College London, Hammersmith Hospital Campus, London, UK; Center for Brain Science, Harvard University, Cambridge, MA, USA; Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA
- Richard Z Fu
- Computational, Cognitive and Clinical Neuroimaging Laboratory, Division of Brain Sciences, Imperial College London, Hammersmith Hospital Campus, London, UK
- Barry M Seemungal
- Computational, Cognitive and Clinical Neuroimaging Laboratory, Division of Brain Sciences, Imperial College London, Hammersmith Hospital Campus, London, UK
- Richard J S Wise
- Computational, Cognitive and Clinical Neuroimaging Laboratory, Division of Brain Sciences, Imperial College London, Hammersmith Hospital Campus, London, UK
- Robert Leech
- Computational, Cognitive and Clinical Neuroimaging Laboratory, Division of Brain Sciences, Imperial College London, Hammersmith Hospital Campus, London, UK
36
Evidence against attentional state modulating scalp-recorded auditory brainstem steady-state responses. Brain Res 2015; 1626:146-64. [PMID: 26187756 DOI: 10.1016/j.brainres.2015.06.038] [Citation(s) in RCA: 61] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2015] [Revised: 06/18/2015] [Accepted: 06/24/2015] [Indexed: 11/20/2022]
Abstract
Auditory brainstem responses (ABRs) and their steady-state counterpart (subcortical steady-state responses, SSSRs) are generally thought to be insensitive to cognitive demands. However, a handful of studies report that SSSRs are modulated depending on the subject's focus of attention, either towards or away from an auditory stimulus. Here, we explored whether attentional focus affects the envelope-following response (EFR), which is a particular kind of SSSR, and if so, whether the effects are specific to which sound elements in a sound mixture a subject is attending (selective auditory attentional modulation), specific to attended sensory input (inter-modal attentional modulation), or insensitive to attentional focus. We compared the strength of EFR-stimulus phase locking in human listeners under various tasks: listening to a monaural stimulus, selectively attending to a particular ear during dichotic stimulus presentation, and attending to visual stimuli while ignoring dichotic auditory inputs. We observed no systematic changes in the EFR across experimental manipulations, even though cortical EEG revealed attention-related modulations of alpha activity during the task. We conclude that attentional effects, if any, on human subcortical representation of sounds cannot be observed robustly using EFRs. This article is part of a Special Issue entitled SI: Prediction and Attention.
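The strength of EFR-stimulus phase locking that studies like this one compare can be estimated in several ways. The sketch below shows one common, simplified approach (synthetic single-channel data and an FFT-bin SNR measure, which is an illustrative stand-in rather than the authors' actual pipeline): average across trials so that activity not phase-locked to the stimulus cancels, then compare the spectral amplitude at the tag frequency with that of neighboring bins.

```python
import numpy as np

def tag_snr(epochs, fs, f_tag):
    """Estimate phase-locked response strength at a tag frequency.

    epochs: (n_trials, n_samples) array of EEG epochs. Averaging across
    trials cancels activity that is not phase-locked to the stimulus;
    the spectral amplitude at f_tag is then compared with nearby bins.
    """
    evoked = epochs.mean(axis=0)
    spec = np.abs(np.fft.rfft(evoked)) / len(evoked)
    freqs = np.fft.rfftfreq(len(evoked), 1.0 / fs)
    k = int(np.argmin(np.abs(freqs - f_tag)))
    # Noise floor: bins around f_tag, skipping the bins adjacent to it.
    neighbors = np.concatenate([spec[k - 6:k - 2], spec[k + 3:k + 7]])
    return spec[k] / neighbors.mean()

# Synthetic check: a 40 Hz phase-locked component buried in trial noise.
rng = np.random.default_rng(0)
fs, n = 1000, 1000
t = np.arange(n) / fs
trials = 0.5 * np.sin(2 * np.pi * 40 * t) + rng.normal(0.0, 2.0, (200, n))
snr = tag_snr(trials, fs, 40.0)  # well above 1 for a phase-locked response
```

With a phase-locked component present, `snr` is far above 1; for a bin with no locked activity it hovers near 1, which is the logic behind comparing EFR strength across attention conditions.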
37
Han JH, Dimitrijevic A. Acoustic change responses to amplitude modulation: a method to quantify cortical temporal processing and hemispheric asymmetry. Front Neurosci 2015; 9:38. [PMID: 25717291 PMCID: PMC4324071 DOI: 10.3389/fnins.2015.00038] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2014] [Accepted: 01/26/2015] [Indexed: 11/18/2022] Open
Abstract
Objective: Sound modulation is a critical temporal cue for the perception of speech and environmental sounds. To examine auditory cortical responses to sound modulation, we developed an acoustic change stimulus involving amplitude modulation (AM) of ongoing noise. The AM transitions in this stimulus evoked an acoustic change complex (ACC) that was examined parametrically in terms of rate and depth of modulation and hemispheric symmetry. Methods: Auditory cortical potentials were recorded from 64 scalp electrodes during passive listening in two conditions: (1) an ACC from white noise to 4, 40, or 300 Hz AM, with AM depths of 100, 50, or 25%, lasting 1 s; and (2) 1 s AM noise bursts at the same modulation rates. Behavioral measures included AM detection in an attend-ACC condition and AM depth thresholds (i.e., a temporal modulation transfer function, TMTF). Results: The N1 response of the ACC was large at 4 and 40 Hz and small at 300 Hz AM. In contrast, the opposite pattern was observed for bursts of AM, which showed larger responses with increases in AM rate. Brain source modeling showed significant hemispheric asymmetry, such that the 4 and 40 Hz ACC responses were dominated by the right and left hemispheres, respectively. Conclusion: N1 responses to the ACC resembled a low-pass filter shape similar to a behavioral TMTF. In the ACC paradigm, the only stimulus parameter that changes is AM, so the N1 response provides an index of this AM change. In contrast, an AM burst stimulus contains both AM and level changes and is likely dominated by the rise time of the stimulus. The hemispheric differences are consistent with the asymmetric sampling in time hypothesis, which suggests that the hemispheres preferentially sample acoustic information over different time windows. Significance: The ACC provides a novel approach to studying temporal processing at the level of cortex and provides further evidence of hemispheric specialization for fast and slow stimuli.
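The acoustic change stimulus described above can be sketched in a few lines. The version below is a simplified illustration, not the authors' exact signal chain: unmodulated white noise followed by AM noise, with the AM onset serving as the "acoustic change"; the duration, sampling rate, and RMS equalization are assumptions for the sketch.

```python
import numpy as np

def acc_stimulus(am_rate_hz, depth, dur_s=1.0, fs=16000, seed=0):
    """Unmodulated white noise followed by AM noise of the same duration.

    The AM onset at the segment boundary is the 'acoustic change' that
    evokes the ACC. depth in [0, 1]: 1.0 -> 100% modulation, 0.25 -> 25%.
    """
    rng = np.random.default_rng(seed)
    n = int(dur_s * fs)
    t = np.arange(n) / fs
    steady = rng.normal(0.0, 1.0, n)                 # unmodulated segment
    env = 1.0 + depth * np.sin(2 * np.pi * am_rate_hz * t)
    modulated = env * rng.normal(0.0, 1.0, n)
    modulated /= np.sqrt(1.0 + depth ** 2 / 2.0)     # roughly equate RMS across segments
    return np.concatenate([steady, modulated])

stim = acc_stimulus(am_rate_hz=40, depth=1.0)  # e.g., 40 Hz AM at 100% depth
```

Because only the modulation changes at the boundary (overall RMS is roughly matched), a response time-locked to that boundary indexes sensitivity to AM itself rather than to a level change, which is the point of contrast with the AM burst condition.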
Affiliation(s)
- Ji Hye Han
- Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Andrew Dimitrijevic
- Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
38
Mahajan Y, Davis C, Kim J. Attentional modulation of auditory steady-state responses. PLoS One 2014; 9:e110902. [PMID: 25334021 PMCID: PMC4205007 DOI: 10.1371/journal.pone.0110902] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2014] [Accepted: 09/17/2014] [Indexed: 11/18/2022] Open
Abstract
Auditory selective attention enables task-relevant auditory events to be enhanced and irrelevant ones suppressed. In the present study we used a frequency tagging paradigm to investigate the effects of attention on auditory steady state responses (ASSR). The ASSR was elicited by simultaneously presenting two different streams of white noise, amplitude modulated at either 16 and 23.5 Hz or 32.5 and 40 Hz. The two different frequencies were presented to each ear and participants were instructed to selectively attend to one ear or the other (confirmed by behavioral evidence). The results revealed that modulation of ASSR by selective attention depended on the modulation frequencies used and whether the activation was contralateral or ipsilateral. Attention enhanced the ASSR for contralateral activation from either ear for 16 Hz and suppressed the ASSR for ipsilateral activation for 16 Hz and 23.5 Hz. For modulation frequencies of 32.5 or 40 Hz attention did not affect the ASSR. We propose that the pattern of enhancement and inhibition may be due to binaural suppressive effects on ipsilateral stimulation and the dominance of contralateral hemisphere during dichotic listening. In addition to the influence of cortical processing asymmetries, these results may also reflect a bias towards inhibitory ipsilateral and excitatory contralateral activation present at the level of inferior colliculus. That the effect of attention was clearest for the lower modulation frequencies suggests that such effects are likely mediated by cortical brain structures or by those in close proximity to cortex.
Affiliation(s)
- Yatin Mahajan
- The MARCS Institute, University of Western Sydney, Penrith, New South Wales, Australia
- Chris Davis
- The MARCS Institute, University of Western Sydney, Penrith, New South Wales, Australia
- Jeesun Kim
- The MARCS Institute, University of Western Sydney, Penrith, New South Wales, Australia