1
Ryan DB, Eckert MA, Sellers EW, Schairer KS, McBee MT, Ridley EA, Smith SL. Performance Monitoring and Cognitive Inhibition during a Speech-in-Noise Task in Older Listeners. Semin Hear 2023; 44:124-139. [PMID: 37122879] [PMCID: PMC10147504] [DOI: 10.1055/s-0043-1767695]
Abstract
The goal of this study was to examine the effect of hearing loss on theta and alpha electroencephalography (EEG) frequency power measures of performance monitoring and cognitive inhibition, respectively, during a speech-in-noise task. It was hypothesized that hearing loss would be associated with a shift in the peak power of theta and alpha frequencies toward easier conditions compared to normal-hearing adults. This shift would reflect how hearing loss modulates the recruitment of listening effort to easier listening conditions. Nine older adults with normal hearing (ONH) and 10 older adults with hearing loss (OHL) participated in this study. EEG data were collected from all participants while they completed the words-in-noise task. It was also hypothesized that hearing loss would have an effect on theta and alpha power. The ONH group showed an inverted U-shape effect of signal-to-noise ratio (SNR), but there were limited effects of SNR on theta or alpha power in the OHL group. The results of the ONH group support the growing body of literature showing effects of listening conditions on alpha and theta power. The null results of listening condition in the OHL group add to a smaller body of literature, suggesting that listening-effort research should use conditions with near-ceiling performance.
Affiliation(s)
- David B. Ryan
- Hearing and Balance Research Program, James H. Quillen VA Medical Center, Mountain Home, Tennessee
- Department of Psychology, East Tennessee State University, Johnson City, Tennessee
- Department of Head and Neck Surgery and Communication Sciences, Duke University School of Medicine, Durham, North Carolina
- Mark A. Eckert
- Department of Otolaryngology - Head and Neck Surgery, Hearing Research Program, Medical University of South Carolina, Charleston, South Carolina
- Eric W. Sellers
- Department of Psychology, East Tennessee State University, Johnson City, Tennessee
- Kim S. Schairer
- Hearing and Balance Research Program, James H. Quillen VA Medical Center, Mountain Home, Tennessee
- Department of Audiology and Speech Language Pathology, East Tennessee State University, Johnson City, Tennessee
- Matthew T. McBee
- Department of Psychology, East Tennessee State University, Johnson City, Tennessee
- Elizabeth A. Ridley
- Department of Psychology, East Tennessee State University, Johnson City, Tennessee
- Sherri L. Smith
- Department of Head and Neck Surgery and Communication Sciences, Duke University School of Medicine, Durham, North Carolina
- Center for the Study of Aging and Human Development, Duke University, Durham, North Carolina
- Department of Population Health Sciences, Duke University School of Medicine, Durham, North Carolina
- Audiology and Speech Pathology Service, Durham Veterans Affairs Healthcare System, Durham, North Carolina
2
Francis AL. Adding noise is a confounded nuisance. J Acoust Soc Am 2022; 152:1375. [PMID: 36182286] [DOI: 10.1121/10.0013874]
Abstract
A wide variety of research and clinical assessments involve presenting speech stimuli in the presence of some kind of noise. Here, I selectively review two theoretical perspectives and discuss ways in which these perspectives may help researchers understand the consequences for listeners of adding noise to a speech signal. I argue that adding noise changes more about the listening task than merely making the signal more difficult to perceive. To fully understand the effects of an added noise on speech perception, we must consider not just how much the noise affects task difficulty, but also how it affects all of the systems involved in understanding speech: increasing message uncertainty, modifying attentional demand, altering affective response, and changing motivation to perform the task.
Affiliation(s)
- Alexander L Francis
- Department of Speech, Language, and Hearing Sciences, Purdue University, 715 Clinic Drive, West Lafayette, Indiana 47907, USA
3
Effects of temporally regular versus irregular distractors on goal-directed cognition and behavior. Sci Rep 2022; 12:10020. [PMID: 35705589] [PMCID: PMC9200732] [DOI: 10.1038/s41598-022-13211-3]
Abstract
Human environments comprise plenty of task-irrelevant sensory inputs, which are potentially distracting. Auditory distractors often possess an inherent temporal structure. However, it is largely unknown whether and how the temporal regularity of distractors interferes with goal-directed cognitive processes, such as working memory. Here, we tested a total sample of N = 90 participants across four working memory tasks with sequences of temporally regular versus irregular distractors. Temporal irregularity was operationalized by a final tone onset time that violated an otherwise regular tone sequence (Experiment 1), by a sequence of tones with irregular onset-to-onset delays (Experiment 2), and by sequences of speech items with irregular onset-to-onset delays (Experiments 3 and 4). Across all experiments, temporal regularity of distractors did not modulate participants’ primary performance metric, that is, accuracy in recalling items from working memory. Instead, temporal regularity of distractors modulated secondary performance metrics: for regular versus irregular distractors, recall of the first item from memory was faster (Experiment 3) and the response bias was more conservative (Experiment 4). Taken together, the present results provide evidence that the temporal regularity of task-irrelevant input does not inevitably affect the precision of memory representations (reflected in the primary performance metric accuracy) but rather the response behavior (reflected in secondary performance metrics like response speed and bias). Our findings emphasize that a comprehensive understanding of auditory distraction requires that existing models of attention include often-neglected secondary performance metrics to understand how different features of auditory distraction reach awareness and impact cognition and behavior.
4
Distracting Linguistic Information Impairs Neural Tracking of Attended Speech. Curr Res Neurobiol 2022; 3:100043. [DOI: 10.1016/j.crneur.2022.100043]
5
McHaney JR, Tessmer R, Roark CL, Chandrasekaran B. Working memory relates to individual differences in speech category learning: Insights from computational modeling and pupillometry. Brain Lang 2021; 222:105010. [PMID: 34454285] [DOI: 10.1016/j.bandl.2021.105010]
Abstract
Across two experiments, we examine the relationship between individual differences in working memory (WM) and the acquisition of non-native speech categories in adulthood. While WM is associated with individual differences in a variety of learning tasks, successful acquisition of speech categories is argued to be contingent on WM-independent procedural-learning mechanisms. Thus, the role of WM in speech category learning is unclear. In Experiment 1, we show that individuals with higher WM acquire non-native speech categories faster and to a greater extent than those with lower WM. In Experiment 2, we replicate these results and show that individuals with higher WM use more optimal, procedural-based learning strategies and demonstrate more distinct speech-evoked pupillary responses for correct relative to incorrect trials. We propose that higher WM may allow for greater stimulus-related attention, resulting in more robust representations and optimal learning strategies. We discuss implications for neurobiological models of speech category learning.
Affiliation(s)
- Jacie R McHaney
- Department of Communication Science and Disorders, University of Pittsburgh, United States
- Rachel Tessmer
- Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, United States
- Casey L Roark
- Department of Communication Science and Disorders, University of Pittsburgh, United States; Center for the Neural Basis of Cognition, Pittsburgh, PA, United States
- Bharath Chandrasekaran
- Department of Communication Science and Disorders, University of Pittsburgh, United States
6
Tune S, Alavash M, Fiedler L, Obleser J. Neural attentional-filter mechanisms of listening success in middle-aged and older individuals. Nat Commun 2021; 12:4533. [PMID: 34312388] [PMCID: PMC8313676] [DOI: 10.1038/s41467-021-24771-9]
Abstract
Successful listening crucially depends on intact attentional filters that separate relevant from irrelevant information. Research into their neurobiological implementation has focused on two potential auditory filter strategies: the lateralization of alpha power and selective neural speech tracking. However, the functional interplay of the two neural filter strategies and their potency to index listening success in an ageing population remains unclear. Using electroencephalography and a dual-talker task in a representative sample of listeners (N = 155; age=39-80 years), we here demonstrate an often-missed link from single-trial behavioural outcomes back to trial-by-trial changes in neural attentional filtering. First, we observe preserved attentional-cue-driven modulation of both neural filters across chronological age and hearing levels. Second, neural filter states vary independently of one another, demonstrating complementary neurobiological solutions of spatial selective attention. Stronger neural speech tracking but not alpha lateralization boosts trial-to-trial behavioural performance. Our results highlight the translational potential of neural speech tracking as an individualized neural marker of adaptive listening behaviour.
Affiliation(s)
- Sarah Tune
- Department of Psychology, University of Lübeck, Lübeck, Germany
- Center for Brain, Behavior, and Metabolism, University of Lübeck, Lübeck, Germany
- Mohsen Alavash
- Department of Psychology, University of Lübeck, Lübeck, Germany
- Center for Brain, Behavior, and Metabolism, University of Lübeck, Lübeck, Germany
- Lorenz Fiedler
- Department of Psychology, University of Lübeck, Lübeck, Germany
- Center for Brain, Behavior, and Metabolism, University of Lübeck, Lübeck, Germany
- Eriksholm Research Centre, Snekkersten, Denmark
- Jonas Obleser
- Department of Psychology, University of Lübeck, Lübeck, Germany
- Center for Brain, Behavior, and Metabolism, University of Lübeck, Lübeck, Germany
7
Kraus F, Tune S, Ruhe A, Obleser J, Wöstmann M. Unilateral Acoustic Degradation Delays Attentional Separation of Competing Speech. Trends Hear 2021; 25:23312165211013242. [PMID: 34184964] [PMCID: PMC8246482] [DOI: 10.1177/23312165211013242]
Abstract
Hearing loss is often asymmetric such that hearing thresholds differ substantially between the two ears. The extreme case of such asymmetric hearing is single-sided deafness. A unilateral cochlear implant (CI) on the more severely impaired ear is an effective treatment to restore hearing. The interactive effects of unilateral acoustic degradation and spatial attention to one sound source in multitalker situations are at present unclear. Here, we simulated some features of listening with a unilateral CI in young, normal-hearing listeners (N = 22) who were presented with 8-band noise-vocoded speech to one ear and intact speech to the other ear. Neural responses were recorded in the electroencephalogram to obtain the spectrotemporal response function to speech. Listeners made more mistakes when answering questions about vocoded (vs. intact) attended speech. At the neural level, we asked how unilateral acoustic degradation would impact the attention-induced amplification of tracking target versus distracting speech. Interestingly, unilateral degradation did not per se reduce the attention-induced amplification but instead delayed it in time: Speech encoding accuracy, modelled on the basis of the spectrotemporal response function, was significantly enhanced for attended versus ignored intact speech at earlier neural response latencies (<∼250 ms). This attentional enhancement was not absent but delayed for vocoded speech. These findings suggest that attentional selection of unilateral, degraded speech is feasible but induces delayed neural separation of competing speech, which might explain listening challenges experienced by unilateral CI users.
Affiliation(s)
- Frauke Kraus
- Department of Psychology, University of Lübeck, Lübeck, Germany
- Sarah Tune
- Department of Psychology, University of Lübeck, Lübeck, Germany
- Anna Ruhe
- Department of Psychology, University of Lübeck, Lübeck, Germany
- Jonas Obleser
- Department of Psychology, University of Lübeck, Lübeck, Germany
- Malte Wöstmann
- Department of Psychology, University of Lübeck, Lübeck, Germany
8
Working-memory disruption by task-irrelevant talkers depends on degree of talker familiarity. Atten Percept Psychophys 2019; 81:1108-1118. [PMID: 30993655] [DOI: 10.3758/s13414-019-01727-2]
Abstract
When one is listening, familiarity with an attended talker's voice improves speech comprehension. Here, we instead investigated the effect of familiarity with a distracting talker. In an irrelevant-speech task, we assessed listeners' working memory for the serial order of spoken digits when a task-irrelevant, distracting sentence was produced by either a familiar or an unfamiliar talker (with rare omissions of the task-irrelevant sentence). We tested two groups of listeners using the same experimental procedure. The first group were undergraduate psychology students (N = 66) who had attended an introductory statistics course. Critically, each student had been taught by one of two course instructors, whose voices served as the familiar and unfamiliar task-irrelevant talkers. The second group of listeners were family members and friends (N = 20) who had known either one of the two talkers for more than 10 years. Students, but not family members and friends, made more errors when the task-irrelevant talker was familiar versus unfamiliar. Interestingly, the effect of talker familiarity was not modulated by the presence of task-irrelevant speech: Students experienced stronger working memory disruption by a familiar talker, irrespective of whether they heard a task-irrelevant sentence during memory retention or merely expected it. While previous work has shown that familiarity with an attended talker benefits speech comprehension, our findings indicate that familiarity with an ignored talker disrupts working memory for target speech. The absence of this effect in family members and friends suggests that the degree of familiarity modulates the memory disruption.
9
Schlittenlacher J, Staab K, Çelebi Ö, Samel A, Ellermeier W. Determinants of the irrelevant speech effect: Changes in spectrum and envelope. J Acoust Soc Am 2019; 145:3625. [PMID: 31255159] [DOI: 10.1121/1.5111749]
Abstract
The irrelevant sound effect (ISE) denotes the disruption of short-term memory by exposure to sound; the effect is largest for speech. The present study investigated the underlying acoustic properties that cause the ISE, using stimuli that contained changes in either the spectral content only, the envelope only, or both. For this purpose, two experiments were conducted and two vocoding strategies were developed to degrade the spectral content of speech and the envelope independently. The first strategy employed a noise vocoder that was based on perceptual dimensions, analyzing the original utterance into 1, 2, 4, 8, or 24 channels (critical bands) and independently manipulating loudness. The second strategy involved a temporal segmentation of the signal, freezing either spectrum or level for durations ranging from 50 ms to 14 s. In both experiments, changes in envelope alone did not have measurable effects on performance, but the ISE was significantly increased when both the spectral content and the envelope varied. Furthermore, when the envelope changes were uncorrelated with the spectral changes, the effect size was the same as with a constant-loudness envelope. This suggests that the ISE is primarily caused by spectral changes, but concurrent changes in level tend to amplify it.
Affiliation(s)
- Josef Schlittenlacher
- Institut für Psychologie, TU Darmstadt, Alexanderstraße 10, 64283 Darmstadt, Germany
- Katharina Staab
- Institut für Psychologie, TU Darmstadt, Alexanderstraße 10, 64283 Darmstadt, Germany
- Özlem Çelebi
- Institut für Psychologie, TU Darmstadt, Alexanderstraße 10, 64283 Darmstadt, Germany
- Alisa Samel
- Institut für Psychologie, TU Darmstadt, Alexanderstraße 10, 64283 Darmstadt, Germany
- Wolfgang Ellermeier
- Institut für Psychologie, TU Darmstadt, Alexanderstraße 10, 64283 Darmstadt, Germany
10
Dorsi J, Viswanathan N, Rosenblum LD, Dias JW. The role of speech fidelity in the irrelevant sound effect: Insights from noise-vocoded speech backgrounds. Q J Exp Psychol (Hove) 2018; 71:2152-2161. [PMID: 30226434] [DOI: 10.1177/1747021817739257]
Abstract
The Irrelevant Sound Effect (ISE) is the finding that background sound impairs accuracy for visually presented serial recall tasks. Among various auditory backgrounds, speech typically acts as the strongest distractor. Based on the changing-state hypothesis, speech is a disruptive background because it is more complex than other nonspeech backgrounds. In the current study, we evaluate an alternative explanation by examining whether the speech-likeness of the background (speech fidelity) contributes, beyond signal complexity, to the ISE. We did this by using noise-vocoded speech as a background. In Experiment 1, we varied the complexity of the background by manipulating the number of vocoding channels. Results indicate that the ISE increases with the number of channels, suggesting that more complex signals produce greater ISEs. In Experiment 2, we varied complexity and speech fidelity independently. At each channel level, we selectively reversed a subset of channels to design a low-fidelity signal that was equated in overall complexity. Experiment 2 results indicated that speech-like noise-vocoded speech produces a larger ISE than selectively reversed noise-vocoded speech. Finally, in Experiment 3, we evaluated the locus of the speech-fidelity effect by assessing the distraction produced by these stimuli in a missing-item task. In this task, even though noise-vocoded speech disrupted task performance relative to silence, neither its complexity nor speech fidelity contributed to this effect. Together, these findings indicate a clear role for speech fidelity of the background beyond its changing-state quality and its attention capture potential.
Affiliation(s)
- Josh Dorsi
- University of California, Riverside, Riverside, CA, USA
- James W Dias
- University of California, Riverside, Riverside, CA, USA
11
Equivalent auditory distraction in children and adults. J Exp Child Psychol 2018; 172:41-58. [PMID: 29574236] [DOI: 10.1016/j.jecp.2018.02.005]
Abstract
There is an ongoing debate about whether children have more problems ignoring auditory distractors than adults. This is an important empirical question with direct implications for theories making predictions about the development of selective attention. In two experiments, the disruptive effect of to-be-ignored speech on short-term memory performance of third graders, fourth graders, fifth graders, younger adults, and older adults was examined. Three auditory conditions were compared: (a) steady state sequences in which the same distractor was repeated, (b) changing state sequences in which different distractors were presented, and (c) auditory deviant sequences in which a deviant distractor was presented in a sequence of repeated distractors. According to the attentional resource view, children should exhibit larger disruption by changing and deviant sounds due to their poorer attentional control abilities compared with adults. The duplex-mechanism account proposes that the auditory deviant effect is under attentional control, whereas the changing state effect is not, and thus predicts that children should be more susceptible to auditory deviants than adults but equally disrupted by changing state sequences. According to the renewed view of age-related distraction, there should be no age differences in cross-modal auditory distraction because some of the irrelevant auditory information can be filtered out early in the processing stream. Children and adults were equally disrupted by changing and deviant speech sounds regardless of whether task difficulty was equated between age groups or not. These results are consistent with the renewed view of age-related distraction.
12
Tune S, Wöstmann M, Obleser J. Probing the limits of alpha power lateralisation as a neural marker of selective attention in middle-aged and older listeners. Eur J Neurosci 2018; 48:2537-2550. [PMID: 29430736] [DOI: 10.1111/ejn.13862]
Abstract
In recent years, hemispheric lateralisation of alpha power has emerged as a neural mechanism thought to underpin spatial attention across sensory modalities. Yet, how healthy ageing, beginning in middle adulthood, impacts the modulation of lateralised alpha power supporting auditory attention remains poorly understood. In the current electroencephalography study, middle-aged and older adults (N = 29; ~40-70 years) performed a dichotic listening task that simulates a challenging, multitalker scenario. We examined the extent to which the modulation of 8-12 Hz alpha power would serve as a neural marker of listening success across age. Given the increase in interindividual variability with age, we examined an extensive battery of behavioural, perceptual and neural measures. Similar to findings on younger adults, middle-aged and older listeners' auditory spatial attention induced robust lateralisation of alpha power, which synchronised with the speech rate. Notably, the observed relationship between this alpha lateralisation and task performance did not co-vary with age. Instead, task performance was strongly related to an individual's attentional and working memory capacity. Multivariate analyses revealed a separation of neural and behavioural variables independent of age. Our results suggest that in age-varying samples such as the present one, the lateralisation of alpha power is neither a sufficient nor a necessary neural strategy for an individual's auditory spatial attention, as higher age might come with increased use of alternative, compensatory mechanisms. Our findings emphasise that explaining interindividual variability will be key to understanding the role of alpha oscillations in auditory attention in the ageing listener.
Affiliation(s)
- Sarah Tune
- Department of Psychology, University of Lübeck, Maria-Goeppert-Str. 9a, 23562, Lübeck, Germany
- Malte Wöstmann
- Department of Psychology, University of Lübeck, Maria-Goeppert-Str. 9a, 23562, Lübeck, Germany
- Jonas Obleser
- Department of Psychology, University of Lübeck, Maria-Goeppert-Str. 9a, 23562, Lübeck, Germany
13
Metacognition in Auditory Distraction: How Expectations about Distractibility Influence the Irrelevant Sound Effect. J Cogn 2017; 1:2. [PMID: 31517180] [PMCID: PMC6645164] [DOI: 10.5334/joc.3]
Abstract
Task-irrelevant, to-be-ignored sound disrupts serial short-term memory for visually presented items compared to a quiet control condition. We tested whether disruption by changing state irrelevant sound is modulated by expectations about the degree to which distractors would disrupt serial recall performance. The participants’ expectations were manipulated by providing the (bogus) information that the irrelevant sound would be either easy or difficult to ignore. In Experiment 1, piano melodies were used as auditory distractors. Participants who expected the degree of disruption to be low made more errors in serial recall than participants who expected the degree of disruption to be high, independent of whether distractors were present or not. Although expectation had no effect on the magnitude of disruption, participants in the easy-to-ignore group reported after the experiment that they were less disrupted by the irrelevant sound than participants in the difficult-to-ignore group. In Experiment 2, spoken texts were used as auditory distractors. Expectations about the degree of disruption did not affect serial recall performance. Moreover, the subjective and objective distraction by irrelevant speech was similar in the easy-to-ignore group and in the difficult-to-ignore group. Thus, while metacognitive beliefs about whether the auditory distractors would be easy or difficult to ignore can have an effect on task engagement and subjective distractibility ratings, they do not seem to have an effect on the actual degree to which the auditory distractors disrupt serial recall performance.
14
Ahveninen J, Seidman LJ, Chang WT, Hämäläinen M, Huang S. Suppression of irrelevant sounds during auditory working memory. Neuroimage 2017; 161:1-8. [PMID: 28818692] [DOI: 10.1016/j.neuroimage.2017.08.040]
Abstract
Auditory working memory (WM) processing in everyday acoustic environments depends on our ability to maintain relevant information online in our minds, and to suppress interference caused by competing incoming stimuli. A challenge in communication settings is that the relevant content and irrelevant inputs may emanate from a common source, such as a talkative conversationalist. An open question is how the WM system deals with such interference. Will the distracters become inadvertently filtered before processing for meaning because the primary WM operations deplete all available processing resources? Or are they suppressed post-perceptually, through an active control process? We tested these alternative hypotheses by measuring magnetoencephalography (MEG), EEG, and functional MRI (fMRI) during a phonetic auditory continuous performance task. Contextual WM maintenance load was manipulated by adjusting the number of "filler" letter sounds in-between cue and target letter sounds. Trial-to-trial variability of pre- and post-stimulus activations in fMRI-informed cortical MEG/EEG estimates was analyzed within and across 14 subjects using generalized linear mixed-effects (GLME) models. High contextual WM maintenance load suppressed left auditory cortex (AC) activations around 250-300 ms after the onset of irrelevant phonetic sounds. This effect coincided with increased 10-14 Hz alpha-range oscillatory functional connectivity between the left dorsolateral prefrontal cortex (DLPFC) and left AC. Suppression of AC responses to irrelevant sounds during active maintenance of the task context also correlated with increased pre-stimulus 7-15 Hz alpha power. Our results suggest that under high auditory WM load, irrelevant sounds are suppressed through a "late" active suppression mechanism, which prevents short-term consolidation of irrelevant information without affecting the initial screening of potentially meaningful stimuli. The results also suggest that AC alpha oscillations play an inhibitory role during auditory WM processing.
Affiliation(s)
- Jyrki Ahveninen
- Harvard Medical School - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Larry J Seidman
- Massachusetts Mental Health Center Public Psychiatry Division, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA; Department of Psychiatry, Harvard Medical School, Massachusetts General Hospital, Boston, MA, USA
- Wei-Tang Chang
- Harvard Medical School - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Matti Hämäläinen
- Harvard Medical School - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, USA
- Samantha Huang
- Harvard Medical School - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
15
Abstract
The extent to which the sleeping brain processes sensory information remains unclear. This is particularly true for continuous and complex stimuli such as speech, in which information is organized into hierarchically embedded structures. Recently, novel metrics for assessing the neural representation of continuous speech have been developed using noninvasive brain recordings that have thus far only been tested during wakefulness. Here we investigated, for the first time, the sleeping brain's capacity to process continuous speech at different hierarchical levels using a newly developed Concurrent Hierarchical Tracking (CHT) approach that allows monitoring the neural representation and processing-depth of continuous speech online. Speech sequences were compiled with syllables, words, phrases, and sentences occurring at fixed time intervals such that different linguistic levels correspond to distinct frequencies. This enabled us to distinguish their neural signatures in brain activity. We compared the neural tracking of intelligible versus unintelligible (scrambled and foreign) speech across states of wakefulness and sleep using high-density EEG in humans. We found that neural tracking of stimulus acoustics was comparable across wakefulness and sleep and similar across all conditions regardless of speech intelligibility. In contrast, neural tracking of higher-order linguistic constructs (words, phrases, and sentences) was only observed for intelligible speech during wakefulness and could not be detected at all during nonrapid eye movement or rapid eye movement sleep. These results suggest that, whereas low-level auditory processing is relatively preserved during sleep, higher-level hierarchical linguistic parsing is severely disrupted, thereby revealing the capacity and limits of language processing during sleep.

SIGNIFICANCE STATEMENT: Despite the persistence of some sensory processing during sleep, it is unclear whether high-level cognitive processes such as speech parsing are also preserved. We used a novel approach for studying the depth of speech processing across wakefulness and sleep while tracking neuronal activity with EEG. We found that responses to the auditory sound stream remained intact; however, the sleeping brain did not show signs of hierarchical parsing of the continuous stream of syllables into words, phrases, and sentences. The results suggest that sleep imposes a functional barrier between basic sensory processing and high-level cognitive processing. This paradigm also holds promise for studying residual cognitive abilities in a wide array of unresponsive states.