1. Boncz Á, Szalárdy O, Velősy PK, Béres L, Baumgartner R, Winkler I, Tóth B. The effects of aging and hearing impairment on listening in noise. iScience 2024; 27:109295. [PMID: 38558934] [PMCID: PMC10981015] [DOI: 10.1016/j.isci.2024.109295]
Abstract
The study investigates age-related decline in listening abilities, particularly in noisy environments, where the challenge lies in extracting meaningful information from variable sensory input (figure-ground segregation). The research focuses on peripheral and central factors contributing to this decline using a tone-cloud-based figure detection task. Results based on behavioral measures and event-related brain potentials (ERPs) indicate that, despite delayed perceptual processes and some deterioration in attention and executive functions with aging, the ability to detect sound sources in noise remains relatively intact. However, even mild hearing impairment significantly hampers the segregation of individual sound sources within a complex auditory scene. The severity of the hearing deficit correlates with an increased susceptibility to masking noise. The study underscores the impact of hearing impairment on auditory scene analysis and highlights the need for personalized interventions based on individual abilities.
Affiliation(s)
- Ádám Boncz
- Institute of Cognitive Neuroscience and Psychology, HUN-REN Research Centre for Natural Sciences, Budapest, Hungary
- Orsolya Szalárdy
- Institute of Cognitive Neuroscience and Psychology, HUN-REN Research Centre for Natural Sciences, Budapest, Hungary
- Institute of Behavioural Sciences, Faculty of Medicine, Semmelweis University, Budapest, Hungary
- Péter Kristóf Velősy
- Department of Cognitive Science, Budapest University of Technology and Economics, Budapest, Hungary
- Luca Béres
- Institute of Cognitive Neuroscience and Psychology, HUN-REN Research Centre for Natural Sciences, Budapest, Hungary
- Department of Cognitive Science, Budapest University of Technology and Economics, Budapest, Hungary
- Robert Baumgartner
- Acoustics Research Institute, Austrian Academy of Sciences, Vienna, Austria
- István Winkler
- Institute of Cognitive Neuroscience and Psychology, HUN-REN Research Centre for Natural Sciences, Budapest, Hungary
- Brigitta Tóth
- Institute of Cognitive Neuroscience and Psychology, HUN-REN Research Centre for Natural Sciences, Budapest, Hungary
2. Haumann NT, Petersen B, Vuust P, Brattico E. Age differences in central auditory system responses to naturalistic music. Biol Psychol 2023; 179:108566. [PMID: 37086903] [DOI: 10.1016/j.biopsycho.2023.108566]
Abstract
Aging influences the central auditory system, leading to difficulties in decoding and understanding overlapping sound signals, such as speech in noise or polyphonic music. Studies of central auditory system evoked responses (ERs) have found that, compared with young listeners, older listeners show increased amplitudes (less inhibition) of the P1 and N1 and decreased amplitudes of the P2, mismatch negativity (MMN), and P3a responses. Whereas preceding research has focused on simplified auditory stimuli, we here tested whether the previously observed age-related differences could be replicated with sounds embedded in medium and highly naturalistic musical contexts. Older (age 55-77 years) and younger adults (age 21-31 years) listened to medium naturalistic (synthesized melody) and highly naturalistic (studio recording of a music piece) stimuli. For the medium naturalistic music, the age group differences in P1, N1, P2, MMN, and P3a amplitudes were all replicated. The age group differences, however, appeared reduced for the highly naturalistic compared with the medium naturalistic music. The finding of lower P2 amplitude in older than in younger adults was replicated for slow event rates (0.3-2.9 Hz) in the highly naturalistic music. Moreover, the ER latencies suggested a gradual slowing of the auditory processing time course for highly compared with medium naturalistic stimuli, irrespective of age. These results indicate that age-related differences in ERs can partly be observed with naturalistic stimuli. This opens new avenues for including naturalistic stimuli in the investigation of age-related central auditory system disorders.
Affiliation(s)
- Niels Trusbak Haumann
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music, Aarhus/Aalborg, Universitetsbyen 3, 8000 Aarhus C, Denmark
- Bjørn Petersen
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music, Aarhus/Aalborg, Universitetsbyen 3, 8000 Aarhus C, Denmark
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music, Aarhus/Aalborg, Universitetsbyen 3, 8000 Aarhus C, Denmark
- Elvira Brattico
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music, Aarhus/Aalborg, Universitetsbyen 3, 8000 Aarhus C, Denmark
3. Lai J, Alain C, Bidelman GM. Cortical-brainstem interplay during speech perception in older adults with and without hearing loss. Front Neurosci 2023; 17:1075368. [PMID: 36816123] [PMCID: PMC9932544] [DOI: 10.3389/fnins.2023.1075368]
Abstract
Introduction: Real-time modulation of brainstem frequency-following responses (FFRs) by online changes in cortical arousal state via the corticofugal (top-down) pathway has been demonstrated previously in young adults and is more prominent in the presence of background noise. FFRs during high cortical arousal states also have a stronger relationship with speech perception. Aging is associated with increased auditory brain responses, which might reflect degraded inhibitory processing within the peripheral and ascending pathways, or changes in attentional control regulation via descending auditory pathways. Here, we tested the hypothesis that online corticofugal interplay is affected by age-related hearing loss.
Methods: We measured EEG in older adults with normal hearing (NH) and with mild-to-moderate hearing loss (HL) while they performed speech identification tasks in different noise backgrounds. We measured α power to index online cortical arousal states during task engagement. We then split brainstem speech-FFRs, on a trial-by-trial basis, into low-α and high-α FFRs according to fluctuations in concomitant cortical α power, to index cortical-brainstem modulation.
Results: Cortical α power was smaller in the HL than in the NH group. In NH listeners, α-FFR modulation for clear speech (i.e., without noise) resembled that previously observed in younger adults for speech in noise. Cortical-brainstem modulation was further diminished in HL older adults in the clear condition and by noise in NH older adults. Machine learning classification showed that low-α FFR frequency spectra yielded higher accuracy for classifying listeners' perceptual performance in both NH and HL participants. Moreover, low-α FFRs decreased with increasing hearing thresholds at 0.5-2 kHz for clear speech, whereas noise generally reduced low-α FFRs in the HL group.
Discussion: Collectively, our study reveals that cortical arousal state actively shapes brainstem speech representations and suggests a potential new mechanism for older listeners' difficulties perceiving speech in cocktail party-like listening situations: a miscoordination between cortical and subcortical levels of auditory processing.
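The trial-by-trial splitting procedure described in this abstract amounts to dividing single-trial FFRs by the concurrent cortical α power. A minimal median-split sketch is shown below; the function name and toy data are hypothetical, and the study's actual pipeline (α-band estimation, FFR extraction, artifact handling) is not reproduced here.

```python
import numpy as np

def split_ffr_by_alpha(ffr_trials, alpha_power):
    """Split single-trial FFRs by concurrent cortical alpha power.

    ffr_trials  : (n_trials, n_samples) array of single-trial FFR waveforms
    alpha_power : (n_trials,) array of cortical alpha power per trial
    Returns the trial-averaged low-alpha and high-alpha FFR waveforms.
    """
    median = np.median(alpha_power)
    low_ffr = ffr_trials[alpha_power < median].mean(axis=0)    # low-arousal trials
    high_ffr = ffr_trials[alpha_power >= median].mean(axis=0)  # high-arousal trials
    return low_ffr, high_ffr

# Toy example: 100 trials of 500 samples each, random alpha values
rng = np.random.default_rng(0)
trials = rng.standard_normal((100, 500))
alpha = rng.random(100)
low, high = split_ffr_by_alpha(trials, alpha)
```

A median split is only one possible criterion; tertile splits or continuous regression on α power are common alternatives.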
Affiliation(s)
- Jesyin Lai
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States; Department of Diagnostic Imaging, St. Jude Children’s Research Hospital, Memphis, TN, United States
- Claude Alain
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, ON, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada
- Gavin M. Bidelman
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States; Department of Speech, Language, and Hearing Sciences, Indiana University, Bloomington, IN, United States; Program in Neuroscience, Indiana University, Bloomington, IN, United States
4. Gohari N, Hosseini Dastgerdi Z, Bernstein LJ, Alain C. Neural correlates of concurrent sound perception: A review and guidelines for future research. Brain Cogn 2022; 163:105914. [PMID: 36155348] [DOI: 10.1016/j.bandc.2022.105914]
Abstract
The perception of concurrent sound sources depends on processes (i.e., auditory scene analysis) that fuse and segregate acoustic features according to harmonic relations, temporal coherence, and binaural cues (encompassing dichotic pitch, location differences, and simulated echoes). The object-related negativity (ORN) and P400 are electrophysiological indices of concurrent sound perception. Here, we review the different paradigms used to study concurrent sound perception and the brain responses obtained from them. We make recommendations regarding the design and recording parameters of the ORN and P400, and discuss their clinical applications in assessing central auditory processing ability in different populations.
Affiliation(s)
- Nasrin Gohari
- Department of Audiology, School of Rehabilitation, Hamadan University of Medical Sciences, Hamadan, Iran
- Zahra Hosseini Dastgerdi
- Department of Audiology, School of Rehabilitation, Isfahan University of Medical Sciences, Isfahan, Iran
- Lori J Bernstein
- Department of Supportive Care, University Health Network, and Department of Psychiatry, University of Toronto, Toronto, Canada
- Claude Alain
- Rotman Research Institute, Baycrest Centre for Geriatric Care & Department of Psychology, University of Toronto, Canada
5. Cortical Processing of Binaural Cues as Shown by EEG Responses to Random-Chord Stereograms. J Assoc Res Otolaryngol 2021; 23:75-94. [PMID: 34904205] [PMCID: PMC8783002] [DOI: 10.1007/s10162-021-00820-4]
Abstract
Spatial hearing facilitates the perceptual organization of complex soundscapes into accurate mental representations of sound sources in the environment. Yet, the role of binaural cues in auditory scene analysis (ASA) has received relatively little attention in recent neuroscientific studies employing novel, spectro-temporally complex stimuli. This may be because a stimulation paradigm that provides binaurally derived grouping cues of sufficient spectro-temporal complexity has not yet been established for neuroscientific ASA experiments. Random-chord stereograms (RCS) are a class of auditory stimuli that exploit spectro-temporal variations in the interaural envelope correlation of noise-like sounds with interaurally coherent fine structure; they evoke salient auditory percepts that emerge only under binaural listening. Here, our aim was to assess the usability of the RCS paradigm for indexing binaural processing in the human brain. To this end, we recorded EEG responses to RCS stimuli from 12 normal-hearing subjects. The stimuli consisted of an initial 3-s noise segment with interaurally uncorrelated envelopes, followed by another 3-s segment, where envelope correlation was modulated periodically according to the RCS paradigm. Modulations were applied either across the entire stimulus bandwidth (wideband stimuli) or in temporally shifting frequency bands (ripple stimulus). Event-related potentials and inter-trial phase coherence analyses of the EEG responses showed that the introduction of the 3- or 5-Hz wideband modulations produced a prominent change-onset complex and ongoing synchronized responses to the RCS modulations. In contrast, the ripple stimulus elicited a change-onset response but no response to ongoing RCS modulation. Frequency-domain analyses revealed increased spectral power at the fundamental frequency and the first harmonic of wideband RCS modulations. 
RCS stimulation yields robust EEG measures of binaurally driven auditory reorganization and has potential to provide a flexible stimulation paradigm suitable for isolating binaural effects in ASA experiments.
6.
Abstract
OBJECTIVES: The motivation for this research is to determine whether a listening-while-balancing task would be sensitive to quantifying listening effort in middle age. The premise behind this exploratory work is that a decrease in postural control would be demonstrated in challenging acoustic conditions, more so in middle-aged than in younger adults.
DESIGN: A dual-task paradigm was employed with speech understanding as one task and postural control as the other. For the speech perception task, participants listened to and repeated back sentences in the presence of other sentences or steady-state noise. Targets and maskers were presented in both spatially-coincident and spatially-separated conditions. The postural control task required participants to stand on a force platform either in normal stance (with feet approximately shoulder-width apart) or in tandem stance (with one foot behind the other). Participants also rated their subjective listening effort at the end of each block of trials.
RESULTS: Postural control was poorer for both groups of participants when the listening task was completed at a more adverse (vs. less adverse) signal-to-noise ratio. When participants were standing normally, postural control in dual-task conditions was negatively associated with degree of high-frequency hearing loss, with individuals who had higher pure-tone thresholds exhibiting poorer balance. Correlation analyses also indicated that reduced speech recognition ability was associated with poorer postural control in both single- and dual-task conditions. Middle-aged participants exhibited larger dual-task costs when the masker was speech, as compared to when it was noise. Individuals who reported expending greater effort on the listening task exhibited larger dual-task costs when in normal stance.
CONCLUSIONS: Listening under challenging acoustic conditions can have a negative impact on postural control, more so in middle-aged than in younger adults. One explanation for this finding is that the increased effort required to successfully listen in adverse environments leaves fewer resources for maintaining balance, particularly as people age. These results provide preliminary support for using this type of ecologically-valid dual-task paradigm to quantify the costs associated with understanding speech in adverse acoustic environments.
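The dual-task costs discussed in this abstract are commonly quantified as the proportional drop in performance from the single-task to the dual-task condition. The sketch below shows that generic formula; the exact cost measure used by the study is not specified here, and the numbers in the example are illustrative only.

```python
def dual_task_cost(single_task_score, dual_task_score):
    """Proportional dual-task cost.

    Measures how much performance drops when a task is performed
    concurrently with another task. Positive values indicate a cost
    (worse performance under dual-task conditions).
    """
    return (single_task_score - dual_task_score) / single_task_score

# Illustrative example: a postural stability score of 0.80 when standing
# alone drops to 0.60 while repeating sentences in background noise.
cost = dual_task_cost(0.80, 0.60)  # 0.25, i.e., a 25% dual-task cost
```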
7. Qian M, Wang Q, Yang L, Wang Z, Hu D, Li B, Li Y, Wu H, Huang Z. The effects of aging on peripheral and central auditory function in adults with normal hearing. Am J Transl Res 2021; 13:549-564. [PMID: 33594309] [PMCID: PMC7868840]
Abstract
This study was designed to investigate the effects of the aging process on peripheral and central auditory function in adults with normal hearing. In this study, 149 participants with normal hearing were divided, for statistical purposes, into four age groups: 20-29, 30-39, 40-49, and 50-59 years. Electrocochleography (EcochG), transient evoked otoacoustic emissions (TEOAE), the Mandarin Hearing in Noise Test (MHINT), and the Gap Detection Test (GDT) were used. Our study found: (1) MHINT performance is significantly associated with aging (left ear R2=0.29, right ear R2=0.35). (2) TEOAE amplitude, TEOAE contralateral acoustic stimulation (CS) amplitude, EcochG action potential (AP) amplitude, EcochG AP latency, EcochG summating potential (SP), and GDT performance progressively declined with age. (3) The EcochG SP/AP ratio did not differ significantly among age groups. (4) The peripheral auditory function of the right ear declines more slowly than that of the left ear. (5) Hypofunction of the central auditory system accelerates after age 40. The results demonstrate: (1) The age-related decline in speech recognition in noisy environments may be the most sensitive indicator of auditory function. (2) The decline of central auditory function is independent of peripheral auditory function, according to the auditory characteristics of the right ear. (3) Auditory function needs to be assessed individually to allow early prevention before age 40.
Affiliation(s)
- Minfei Qian
- Department of Otolaryngology-Head and Neck Surgery, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Hearing and Speech Center of Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Ear Institute, Shanghai Jiao Tong University School of Medicine, Shanghai 200125, China
- Shanghai Key Laboratory of Translational Medicine on Ear and Nose Diseases, Shanghai 200125, China
- Qixuan Wang
- Department of Otolaryngology-Head and Neck Surgery, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Hearing and Speech Center of Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Ear Institute, Shanghai Jiao Tong University School of Medicine, Shanghai 200125, China
- Shanghai Key Laboratory of Translational Medicine on Ear and Nose Diseases, Shanghai 200125, China
- Lu Yang
- Department of Otolaryngology-Head and Neck Surgery, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Hearing and Speech Center of Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Ear Institute, Shanghai Jiao Tong University School of Medicine, Shanghai 200125, China
- Shanghai Key Laboratory of Translational Medicine on Ear and Nose Diseases, Shanghai 200125, China
- Zhongying Wang
- Department of Otolaryngology-Head and Neck Surgery, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Hearing and Speech Center of Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Ear Institute, Shanghai Jiao Tong University School of Medicine, Shanghai 200125, China
- Shanghai Key Laboratory of Translational Medicine on Ear and Nose Diseases, Shanghai 200125, China
- Difei Hu
- Department of Otolaryngology-Head and Neck Surgery, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Hearing and Speech Center of Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Ear Institute, Shanghai Jiao Tong University School of Medicine, Shanghai 200125, China
- Shanghai Key Laboratory of Translational Medicine on Ear and Nose Diseases, Shanghai 200125, China
- Bei Li
- Department of Otolaryngology-Head and Neck Surgery, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Hearing and Speech Center of Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Ear Institute, Shanghai Jiao Tong University School of Medicine, Shanghai 200125, China
- Shanghai Key Laboratory of Translational Medicine on Ear and Nose Diseases, Shanghai 200125, China
- Yun Li
- Department of Otolaryngology-Head and Neck Surgery, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Hearing and Speech Center of Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Ear Institute, Shanghai Jiao Tong University School of Medicine, Shanghai 200125, China
- Shanghai Key Laboratory of Translational Medicine on Ear and Nose Diseases, Shanghai 200125, China
- Hao Wu
- Department of Otolaryngology-Head and Neck Surgery, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Hearing and Speech Center of Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Ear Institute, Shanghai Jiao Tong University School of Medicine, Shanghai 200125, China
- Shanghai Key Laboratory of Translational Medicine on Ear and Nose Diseases, Shanghai 200125, China
- Zhiwu Huang
- Department of Otolaryngology-Head and Neck Surgery, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Hearing and Speech Center of Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Ear Institute, Shanghai Jiao Tong University School of Medicine, Shanghai 200125, China
- Shanghai Key Laboratory of Translational Medicine on Ear and Nose Diseases, Shanghai 200125, China
8. Klatt LI, Schneider D, Schubert AL, Hanenberg C, Lewald J, Wascher E, Getzmann S. Unraveling the Relation between EEG Correlates of Attentional Orienting and Sound Localization Performance: A Diffusion Model Approach. J Cogn Neurosci 2020; 32:945-962. [PMID: 31933435] [DOI: 10.1162/jocn_a_01525]
Abstract
Understanding the contribution of cognitive processes and their underlying neurophysiological signals to behavioral phenomena has been a key objective in recent neuroscience research. Using a diffusion model framework, we investigated to what extent well-established correlates of spatial attention in the electroencephalogram contribute to behavioral performance in an auditory free-field sound localization task. Younger and older participants were instructed to indicate the horizontal position of a predefined target among three simultaneously presented distractors. The central question of interest was whether posterior alpha lateralization and amplitudes of the anterior contralateral N2 subcomponent (N2ac) predict sound localization performance (accuracy, mean RT) and/or diffusion model parameters (drift rate, boundary separation, non-decision time). Two age groups were compared to explore whether, in older adults (who struggle with multispeaker environments), the brain-behavior relationship would differ from that in younger adults. Regression analyses revealed that N2ac amplitudes predicted drift rate and accuracy, whereas alpha lateralization was not related to behavioral or diffusion modeling parameters. This was true irrespective of age. The results indicate that more efficient attentional filtering and selection of information within an auditory scene, reflected by increased N2ac amplitudes, was associated with a higher speed of information uptake (drift rate) and better localization performance (accuracy), while boundary separation, mean RTs, and non-decisional processes remained unaffected. The lack of a behavioral correlate of poststimulus alpha power lateralization contrasts with the well-established notion that prestimulus alpha power reflects a functionally relevant attentional mechanism. This highlights the importance of distinguishing anticipatory from poststimulus alpha power modulations.
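The diffusion model parameters named in this abstract (drift rate, boundary separation, non-decision time) can be estimated in several ways. As a generic illustration, not the fitting procedure used by the study, the closed-form EZ-diffusion approximation (Wagenmakers et al., 2007) recovers all three from just accuracy and the mean and variance of correct response times:

```python
import math

def ez_diffusion(prop_correct, rt_variance, rt_mean, s=0.1):
    """Closed-form EZ-diffusion parameter estimates (Wagenmakers et al., 2007).

    prop_correct : proportion correct (edge corrections are needed at 0, 0.5, 1)
    rt_variance  : variance of correct response times (s^2)
    rt_mean      : mean of correct response times (s)
    s            : conventional scaling parameter
    Returns (drift_rate, boundary_separation, non_decision_time).
    """
    L = math.log(prop_correct / (1.0 - prop_correct))  # logit of accuracy
    x = L * (L * prop_correct**2 - L * prop_correct + prop_correct - 0.5) / rt_variance
    v = math.copysign(1.0, prop_correct - 0.5) * s * x**0.25   # drift rate
    a = s**2 * L / v                                           # boundary separation
    y = math.exp(-v * a / s**2)
    mean_decision_time = (a / (2.0 * v)) * (1.0 - y) / (1.0 + y)
    ter = rt_mean - mean_decision_time                         # non-decision time
    return v, a, ter

# Worked example from Wagenmakers et al. (2007):
# Pc = 0.802, VRT = 0.112, MRT = 0.723 -> v ~ 0.0999, a ~ 0.140, Ter ~ 0.300
v, a, ter = ez_diffusion(0.802, 0.112, 0.723)
```

EZ-diffusion assumes no starting-point bias or across-trial variability; full fits (e.g., fast-dm or HDDM-style estimation) relax these assumptions.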
Affiliation(s)
- Daniel Schneider
- Leibniz Research Centre for Working Environment and Human Factors
- Jörg Lewald
- Leibniz Research Centre for Working Environment and Human Factors; Ruhr-University Bochum
- Edmund Wascher
- Leibniz Research Centre for Working Environment and Human Factors
- Stephan Getzmann
- Leibniz Research Centre for Working Environment and Human Factors
9. Ross B, Dobri S, Schumann A. Speech-in-noise understanding in older age: The role of inhibitory cortical responses. Eur J Neurosci 2019; 51:891-908. [PMID: 31494988] [DOI: 10.1111/ejn.14573]
Abstract
Studies of central auditory processing underlying speech-in-noise (SIN) recognition in aging have mainly concerned the degrading neural representation of speech sounds in the auditory brainstem and cortex. Less attention has been paid to the aging-related decline of inhibitory function, which reduces the ability to suppress distraction from irrelevant sensory input. In a response suppression paradigm, young and older adults listened to sequences of three short sounds during MEG recording. The amplitudes of the cortical P30 response and the 40-Hz transient gamma response were related to age, hearing loss, and SIN performance. Sensory gating, indicated by the P30 amplitude ratio between the last and the first responses, was reduced in older compared with young listeners. Sensory gating was correlated with age in the older adults but not with hearing loss or SIN understanding. The transient gamma response showed less response suppression; however, its amplitude increased with age and SIN loss. Comparisons of linear multi-variable models showed a stronger brain-behavior relationship between gamma amplitude and SIN performance than between gamma and age or hearing loss. The findings support the hypothesis that aging-related changes in the balance between inhibitory and excitatory neural mechanisms modify the generation of gamma oscillations, which impacts perceptual binding and consequently SIN understanding. In conclusion, SIN recognition in older age is less affected by central auditory processing at the level of sensation, indicated by sensory gating, but is strongly affected at the level of perceptual organization, indicated by the correlation with the gamma responses.
Affiliation(s)
- Bernhard Ross
- Baycrest Centre for Geriatric Care, Rotman Research Institute, Toronto, ON, Canada; Department for Medical Biophysics, University of Toronto, Toronto, ON, Canada
- Simon Dobri
- Baycrest Centre for Geriatric Care, Rotman Research Institute, Toronto, ON, Canada; Department for Medical Biophysics, University of Toronto, Toronto, ON, Canada
- Annette Schumann
- Baycrest Centre for Geriatric Care, Rotman Research Institute, Toronto, ON, Canada
10. Itoh K, Nejime M, Konoike N, Nakamura K, Nakada T. Evolutionary Elongation of the Time Window of Integration in Auditory Cortex: Macaque vs. Human Comparison of the Effects of Sound Duration on Auditory Evoked Potentials. Front Neurosci 2019; 13:630. [PMID: 31293370] [PMCID: PMC6601703] [DOI: 10.3389/fnins.2019.00630]
Abstract
The auditory cortex integrates auditory information over time to obtain neural representations of sound events, the time scale of which critically affects perception. This work investigated the species differences in the time scale of integration by comparing humans and monkeys regarding how their scalp-recorded cortical auditory evoked potentials (CAEPs) decrease in amplitude as stimulus duration is shortened from 100 ms (or longer) to 2 ms. Cortical circuits tuned to processing sounds at short time scales would continue to produce large CAEPs to brief sounds whereas those tuned to longer time scales would produce diminished responses. Four peaks were identified in the CAEPs and labeled P1, N1, P2, and N2 in humans and mP1, mN1, mP2, and mN2 in monkeys. In humans, the N1 diminished in amplitude as sound duration was decreased, consistent with the previously described temporal integration window of N1 (>50 ms). In macaques, by contrast, the mN1 was unaffected by sound duration, and it was clearly elicited by even the briefest sounds. Brief sounds also elicited significant mN2 in the macaque, but not the human N2. Regarding earlier latencies, both P1 (humans) and mP1 (macaques) were elicited at their full amplitudes even by the briefest sounds. These findings suggest an elongation of the time scale of late stages of human auditory cortical processing, as reflected by N1/mN1 and later CAEP components. Longer time scales of integration would allow neural representations of complex auditory features that characterize speech and music.
Affiliation(s)
- Kosuke Itoh
- Center for Integrated Human Brain Science, Brain Research Institute, Niigata University, Niigata, Japan
- Masafumi Nejime
- Cognitive Neuroscience Section, Primate Research Institute, Kyoto University, Kyoto, Japan
- Naho Konoike
- Cognitive Neuroscience Section, Primate Research Institute, Kyoto University, Kyoto, Japan
- Katsuki Nakamura
- Cognitive Neuroscience Section, Primate Research Institute, Kyoto University, Kyoto, Japan
- Tsutomu Nakada
- Center for Integrated Human Brain Science, Brain Research Institute, Niigata University, Niigata, Japan
11. Fostick L. Card playing enhances speech perception among aging adults: comparison with aging musicians. Eur J Ageing 2019; 16:481-489. [PMID: 31798372] [DOI: 10.1007/s10433-019-00512-2]
Abstract
Speech perception and auditory processing have been shown to be enhanced among aging musicians compared to non-musicians. In the present study, the aim was to test whether these functions are also enhanced among those who engage in a non-musical, mentally challenging leisure activity (card playing). Three groups of 23 aging adults, aged 60-80 years, were recruited: Musicians, Card players, and Controls. Participants were matched for age, gender, and Wechsler Adult Intelligence Scale-III Matrix Reasoning and Digit Span scores. Performance was measured using auditory spectral and spatial temporal order judgment tests and four speech perception tasks: no background noise, background noise of speech frequencies, background noise of white noise, and 60% compressed speech. Musicians were better in auditory and speech perception than the other two groups. Card players were similar to Controls in the auditory perception tasks but better in the speech perception tasks. Non-musician aging adults may be able to improve their speech perception ability by engaging in leisure activity requiring cognitive effort.
Affiliation(s)
- Leah Fostick
- Department of Communication Disorders, Ariel University, Ariel, Israel
12
Stuckenberg MV, Nayak CV, Meyer BT, Völker C, Hohmann V, Bendixen A. Age Effects on Concurrent Speech Segregation by Onset Asynchrony. J Speech Lang Hear Res 2019; 62:177-189. [PMID: 30534994] [DOI: 10.1044/2018_jslhr-h-18-0064]
Abstract
Purpose: For elderly listeners, it is more challenging to listen to 1 voice surrounded by other voices than for young listeners. This could be caused by a reduced ability to use acoustic cues, such as slight differences in onset time, for the segregation of concurrent speech signals. Here, we study whether the ability to benefit from onset asynchrony differs between young (18-33 years) and elderly (55-74 years) listeners. Method: We investigated young (normal hearing, N = 20) and elderly (mildly hearing impaired, N = 26) listeners' ability to segregate 2 vowels with onset asynchronies ranging from 20 to 100 ms. Behavioral measures were complemented by a specific event-related brain potential component, the object-related negativity, indicating the perception of 2 distinct auditory objects. Results: Elderly listeners' behavioral performance (identification accuracy of the 2 vowels) was considerably poorer than young listeners'. However, both age groups showed the same amount of improvement with increasing onset asynchrony. Object-related negativity amplitude also increased similarly in both age groups. Conclusion: Both age groups benefit to a similar extent from onset asynchrony as a cue for concurrent speech segregation during active (behavioral measurement) and during passive (electroencephalographic measurement) listening.
Affiliation(s)
- Maria V Stuckenberg
- Cluster of Excellence "Hearing4all," Carl von Ossietzky University of Oldenburg, Germany
- Department of Psychology, University of Leipzig, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Chaitra V Nayak
- Cluster of Excellence "Hearing4all," Carl von Ossietzky University of Oldenburg, Germany
- Bernd T Meyer
- Cluster of Excellence "Hearing4all," Carl von Ossietzky University of Oldenburg, Germany
- Christoph Völker
- Cluster of Excellence "Hearing4all," Carl von Ossietzky University of Oldenburg, Germany
- Volker Hohmann
- Cluster of Excellence "Hearing4all," Carl von Ossietzky University of Oldenburg, Germany
- Alexandra Bendixen
- Cluster of Excellence "Hearing4all," Carl von Ossietzky University of Oldenburg, Germany
- Faculty of Natural Sciences, Chemnitz University of Technology, Germany
13
Auditory Figure-Ground Segregation Is Impaired by High Visual Load. J Neurosci 2018; 39:1699-1708. [PMID: 30541915] [PMCID: PMC6391559] [DOI: 10.1523/jneurosci.2518-18.2018]
Abstract
Figure-ground segregation is fundamental to listening in complex acoustic environments. An ongoing debate pertains to whether segregation requires attention or is "automatic" and preattentive. In this magnetoencephalography study, we tested a prediction derived from load theory of attention (e.g., Lavie, 1995) that segregation requires attention but can benefit from the automatic allocation of any "leftover" capacity under low load. Complex auditory scenes were modeled with stochastic figure-ground stimuli (Teki et al., 2013), which occasionally contained repeated frequency component "figures." Naive human participants (both sexes) passively listened to these signals while performing a visual attention task of either low or high load. While clear figure-related neural responses were observed under conditions of low load, high visual load substantially reduced the neural response to the figure in auditory cortex (planum temporale, Heschl's gyrus). We conclude that fundamental figure-ground segregation in hearing is not automatic but draws on resources that are shared across vision and audition. SIGNIFICANCE STATEMENT: This work resolves a long-standing question of whether figure-ground segregation, a fundamental process of auditory scene analysis, requires attention or is underpinned by automatic, encapsulated computations. Task-irrelevant sounds were presented during performance of a visual search task. We revealed a clear magnetoencephalography neural signature of figure-ground segregation in conditions of low visual load, which was substantially reduced in conditions of high visual load. This demonstrates that, although attention does not need to be actively allocated to sound for auditory segregation to occur, segregation depends on shared computational resources across vision and hearing. The findings further highlight that visual load can impair the computational capacity of the auditory system, even when it does not simply dampen auditory responses as a whole.
14
Smith NA, Folland NA, Martinez DM, Trainor LJ. Multisensory object perception in infancy: 4-month-olds perceive a mistuned harmonic as a separate auditory and visual object. Cognition 2017; 164:1-7. [PMID: 28346869] [PMCID: PMC5429982] [DOI: 10.1016/j.cognition.2017.01.016]
Abstract
Infants learn to use auditory and visual information to organize the sensory world into identifiable objects with particular locations. Here we use a behavioural method to examine infants' use of harmonicity cues to auditory object perception in a multisensory context. Sounds emitted by different objects sum in the air and the auditory system must figure out which parts of the complex waveform belong to different sources (auditory objects). One important cue to this source separation is that complex tones with pitch typically contain a fundamental frequency and harmonics at integer multiples of the fundamental. Consequently, adults hear a mistuned harmonic in a complex sound as a distinct auditory object (Alain, Theunissen, Chevalier, Batty, & Taylor, 2003). Previous work by our group demonstrated that 4-month-old infants are also sensitive to this cue. They behaviourally discriminate a complex tone with a mistuned harmonic from the same complex with in-tune harmonics, and show an object-related event-related potential (ERP) electrophysiological (EEG) response to the stimulus with mistuned harmonics. In the present study we use an audiovisual procedure to investigate whether infants perceive a complex tone with an 8% mistuned harmonic as emanating from two objects, rather than merely detecting the mistuned cue. We paired in-tune and mistuned complex tones with visual displays that contained either one or two bouncing balls. Four-month-old infants showed surprise at the incongruous pairings, looking longer at the display of two balls when paired with the in-tune complex and at the display of one ball when paired with the mistuned harmonic complex. We conclude that infants use harmonicity as a cue for source separation when integrating auditory and visual information in object perception.
Affiliation(s)
- Nicholas A Smith
- Perceptual Development Laboratory, Boys Town National Research Hospital, 555 N. 30th Street, Omaha, NE 68131, United States
- Nicole A Folland
- Department of Psychology, Neuroscience and Behaviour, McMaster University, 1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada
- Diana M Martinez
- Department of Psychology, Neuroscience and Behaviour, McMaster University, 1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada
- Laurel J Trainor
- Department of Psychology, Neuroscience and Behaviour, McMaster University, 1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada; McMaster Institute for Music and the Mind, McMaster University, 1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada; Rotman Research Institute, Baycrest, University of Toronto, 3560 Bathurst Street, Toronto, Ontario M6A 2E1, Canada.
15
Huberth M, Fujioka T. Neural representation of a melodic motif: Effects of polyphonic contexts. Brain Cogn 2017; 111:144-155. [DOI: 10.1016/j.bandc.2016.11.003]
16
Tóth B, Kocsis Z, Háden GP, Szerafin Á, Shinn-Cunningham BG, Winkler I. EEG signatures accompanying auditory figure-ground segregation. Neuroimage 2016; 141:108-119. [PMID: 27421185] [PMCID: PMC5656226] [DOI: 10.1016/j.neuroimage.2016.07.028]
Abstract
In everyday acoustic scenes, figure-ground segregation typically requires one to group together sound elements over both time and frequency. Electroencephalogram was recorded while listeners detected repeating tonal complexes composed of a random set of pure tones within stimuli consisting of randomly varying tonal elements. The repeating pattern was perceived as a figure over the randomly changing background. It was found that detection performance improved both as the number of pure tones making up each repeated complex (figure coherence) increased, and as the number of repeated complexes (duration) increased - i.e., detection was easier when either the spectral or temporal structure of the figure was enhanced. Figure detection was accompanied by the elicitation of the object related negativity (ORN) and the P400 event-related potentials (ERPs), which have been previously shown to be evoked by the presence of two concurrent sounds. Both ERP components had generators within and outside of auditory cortex. The amplitudes of the ORN and the P400 increased with both figure coherence and figure duration. However, only the P400 amplitude correlated with detection performance. These results suggest that 1) the ORN and P400 reflect processes involved in detecting the emergence of a new auditory object in the presence of other concurrent auditory objects; 2) the ORN corresponds to the likelihood of the presence of two or more concurrent sound objects, whereas the P400 reflects the perceptual recognition of the presence of multiple auditory objects and/or preparation for reporting the detection of a target object.
Affiliation(s)
- Brigitta Tóth
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, Hungary; Center for Computational Neuroscience and Neural Technology, Boston University, Boston, USA.
- Zsuzsanna Kocsis
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, Hungary; Department of Cognitive Science, Faculty of Natural Sciences, Budapest University of Technology and Economics, Budapest, Hungary
- Gábor P Háden
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, Hungary
- Ágnes Szerafin
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, Hungary; Department of Cognitive Science, Faculty of Natural Sciences, Budapest University of Technology and Economics, Budapest, Hungary
- István Winkler
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, Hungary; Department of Cognitive and Neuropsychology, Institute of Psychology, University of Szeged, Szeged, Hungary
17
Dimitrijevic A, Alsamri J, John MS, Purcell D, George S, Zeng FG. Human Envelope Following Responses to Amplitude Modulation: Effects of Aging and Modulation Depth. Ear Hear 2016; 37:e322-35. [PMID: 27556365] [PMCID: PMC5031488] [DOI: 10.1097/aud.0000000000000324]
Abstract
OBJECTIVE: To record envelope following responses (EFRs) to monaural amplitude-modulated broadband noise carriers in which amplitude modulation (AM) depth was slowly changed over time, and to compare these objective electrophysiological measures to subjective behavioral thresholds in young normal-hearing and older subjects. DESIGN: Participants: Three groups of subjects included a young normal-hearing group (YNH; 18 to 28 years; pure-tone average = 5 dB HL), a first older group ("O1"; 41 to 62 years; pure-tone average = 19 dB HL), and a second older group ("O2"; 67 to 82 years; pure-tone average = 35 dB HL). Electrophysiology: In condition 1, the AM depth (41 Hz) of a white noise carrier was continuously varied from 2% to 100% (5%/s), and EFRs were analyzed as a function of AM depth. In condition 2, auditory steady-state responses were recorded at fixed AM depths (100%, 75%, 50%, and 25%) at a rate of 41 Hz. Psychophysics: A 3-alternative forced-choice (3AFC) procedure was used to track the AM depth needed to detect AM at 41 Hz (AM detection). The minimum AM depth capable of eliciting a statistically detectable EFR was defined as the physiological AM detection threshold. RESULTS: Across all ages, the fixed-AM-depth auditory steady-state response and the swept-AM EFR yielded similar response amplitudes. Statistically significant correlations (r = 0.48) were observed between behavioral and physiological AM detection thresholds. Older subjects had slightly (not significantly) higher behavioral AM detection thresholds than younger subjects, and AM detection thresholds did not correlate with age. All groups showed a sigmoidal EFR amplitude versus AM depth function, but the shape of the function differed across groups: the O2 group reached EFR amplitude plateau levels at lower modulation depths than the normal-hearing group and had a narrower neural dynamic range. In the young normal-hearing group, EFR phase did not differ with AM depth, whereas in the older groups EFR phase showed a consistent decrease with increasing AM depth. The degree of phase change (phase slope) was significantly correlated with the pure-tone threshold at 4 kHz. CONCLUSIONS: EFRs can be recorded using either the swept-modulation-depth or the discrete-AM-depth technique. Sweep recordings may provide additional valuable information at suprathreshold intensities, including the plateau level, slope, and dynamic range. Older subjects had a reduced neural dynamic range compared with younger subjects, suggesting that aging affects the ability of the auditory system to encode subtle differences in AM depth. The phase-slope differences are likely related to differences in low- and high-frequency contributions to the EFR. The behavioral-physiological AM depth threshold relationship was significant but likely too weak to be clinically useful in the present individual subjects, who did not suffer from apparent temporal processing deficits.
Affiliation(s)
- Andrew Dimitrijevic
- Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA (currently at Department of Otolaryngology-Head and Neck Surgery, Sunnybrook Health Sciences Centre, University of Toronto, 2075 Bayview Avenue, Toronto, Ontario, M4N 3M5, Canada); Department of Otolaryngology, Biomedical Engineering and Cognitive Sciences, University of California, Irvine, California, USA; Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario, Canada; Rotman Research Institute, Baycrest Centre, Toronto, Ontario, Canada; National Centre for Audiology, Western University, London, Ontario, Canada; School of Communication Sciences and Disorders, Western University, London, Ontario, Canada
18
Wilson TW, Heinrichs-Graham E, Proskovec AL, McDermott TJ. Neuroimaging with magnetoencephalography: A dynamic view of brain pathophysiology. Transl Res 2016; 175:17-36. [PMID: 26874219] [PMCID: PMC4959997] [DOI: 10.1016/j.trsl.2016.01.007]
Abstract
Magnetoencephalography (MEG) is a noninvasive, silent, and totally passive neurophysiological imaging method with excellent temporal resolution (∼1 ms) and good spatial precision (∼3-5 mm). In a typical experiment, MEG data are acquired as healthy controls or patients with neurologic or psychiatric disorders perform a specific cognitive task, or receive sensory stimulation. The resulting data are generally analyzed using standard electrophysiological methods, coupled with advanced image reconstruction algorithms. To date, the total number of MEG instruments and associated users is significantly smaller than comparable human neuroimaging techniques, although this is likely to change in the near future with advances in the technology. Despite this small base, MEG research has made a significant impact on several areas of translational neuroscience, largely through its unique capacity to quantify the oscillatory dynamics of activated brain circuits in humans. This review focuses on the clinical areas where MEG imaging has arguably had the greatest impact in regard to the identification of aberrant neural dynamics at the regional and network level, monitoring of disease progression, determining how efficacious pharmacologic and behavioral interventions modulate neural systems, and the development of neural markers of disease. Specifically, this review covers recent advances in understanding the abnormal neural oscillatory dynamics that underlie Parkinson's disease, autism spectrum disorders, human immunodeficiency virus (HIV)-associated neurocognitive disorders, cerebral palsy, attention-deficit hyperactivity disorder, cognitive aging, and post-traumatic stress disorder. MEG imaging has had a major impact on how clinical neuroscientists understand the brain basis of these disorders, and its translational influence is rapidly expanding with new discoveries and applications emerging continuously.
Affiliation(s)
- Tony W Wilson
- Department of Pharmacology and Experimental Neuroscience, University of Nebraska Medical Center (UNMC), Omaha, Neb; Center for Magnetoencephalography, UNMC, Omaha, Neb; Department of Neurological Sciences, UNMC, Omaha, Neb.
- Elizabeth Heinrichs-Graham
- Department of Pharmacology and Experimental Neuroscience, University of Nebraska Medical Center (UNMC), Omaha, Neb; Center for Magnetoencephalography, UNMC, Omaha, Neb
- Amy L Proskovec
- Center for Magnetoencephalography, UNMC, Omaha, Neb; Department of Psychology, University of Nebraska - Omaha, Neb
- Timothy J McDermott
- Department of Pharmacology and Experimental Neuroscience, University of Nebraska Medical Center (UNMC), Omaha, Neb; Center for Magnetoencephalography, UNMC, Omaha, Neb
19
Theta oscillations accompanying concurrent auditory stream segregation. Int J Psychophysiol 2016; 106:141-51. [PMID: 27170058] [DOI: 10.1016/j.ijpsycho.2016.05.002]
Abstract
The ability to isolate a single sound source among concurrent sources is crucial for veridical auditory perception. The present study investigated the event-related oscillations evoked by complex tones that could be perceived as a single sound, and by tonal complexes with cues promoting the perception of two concurrent sounds: inharmonicity, onset asynchrony, and/or perceived source location difference of the component tones. In separate task conditions, participants performed a visual change detection task (visual control), watched a silent movie (passive listening), or reported for each tone whether they perceived one or two concurrent sounds (active listening). The amplitude of theta oscillation was modulated by the presence vs. absence of the cues in two time windows: 60-350 ms/6-8 Hz (early) and 350-450 ms/4-8 Hz (late). The early response appeared in both the passive and the active listening conditions, did not closely match task performance, and had a fronto-central scalp distribution. The late response was only elicited in the active listening condition, closely matched task performance, and had a centro-parietal scalp distribution. The neural processes reflected by these responses are probably involved in processing concurrent sound segregation cues, in sound categorization, and in response preparation and monitoring. The current results are compatible with the notion that theta oscillations mediate some of the processes involved in concurrent sound segregation.
20
Parthasarathy A, Lai J, Bartlett EL. Age-Related Changes in Processing Simultaneous Amplitude Modulated Sounds Assessed Using Envelope Following Responses. J Assoc Res Otolaryngol 2016; 17:119-32. [PMID: 26905273] [PMCID: PMC4791415] [DOI: 10.1007/s10162-016-0554-z]
Abstract
Listening conditions in the real world involve segregating the stimuli of interest from competing auditory stimuli that differ in their sound level and spectral content. It is in these conditions of complex spectro-temporal processing that listeners with age-related hearing loss experience the most difficulties. Envelope following responses (EFRs) provide objective neurophysiological measures of auditory processing. EFRs were obtained to two simultaneous sinusoidally amplitude modulated (sAM) tones from young and aged Fischer-344 rats. One was held at a fixed suprathreshold sound level (sAM1FL) while the second varied in sound level (sAM2VL) and carrier frequency. EFR amplitudes to sAM1FL in the young decreased with signal-to-noise ratio (SNR), and this reduction was more pronounced when the sAM2VL carrier frequency was spectrally separated from sAM1FL. Aged animals showed similar trends, while having decreased overall response amplitudes compared to the young. These results were replicated using an established computational model of the auditory nerve. The trends observed in the EFRs were shown to be due to the contributions of the low-frequency tails of high-frequency neurons, rather than neurons tuned to the sAM1FL carrier frequency. Modeling changes in threshold and neural loss reproduced some of the changes seen with age, but accuracy improved when combined with an additional decrease representing synaptic loss of auditory nerve neurons. Sound segregation in this case derives primarily from peripheral processing, regardless of age. Contributions by more central neural mechanisms are likely to occur only at low SNRs.
Affiliation(s)
- Aravindakshan Parthasarathy
- Department of Biological Sciences, Purdue University Interdisciplinary Life Sciences Program, and the Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- Eaton-Peabody Labs, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, USA
- Jesyin Lai
- Department of Biological Sciences, Purdue University Interdisciplinary Life Sciences Program, and the Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- Edward L Bartlett
- Department of Biological Sciences, Purdue University Interdisciplinary Life Sciences Program, and the Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA.
21
Helfer KS. Competing Speech Perception in Middle Age. Am J Audiol 2015; 24:80-3. [PMID: 25768264] [DOI: 10.1044/2015_aja-14-0056]
Abstract
PURPOSE: This research forum article summarizes research from our laboratory that assessed middle-aged adults' ability to understand speech in the presence of competing talkers. METHOD: The performance of middle-aged adults on laboratory-based speech understanding tasks was compared to that of younger and older adults. RESULTS: Decline in the ability to understand speech in complex listening environments can be demonstrated in midlife. The specific auditory and cognitive contributors to these problems have yet to be established. CONCLUSION: There is evidence that the ability to understand a target speech message in the presence of competing speech messages changes relatively early in the aging process. The nature and impact of these changes warrant further investigation.
22
Folland NA, Butler BE, Payne JE, Trainor LJ. Cortical Representations Sensitive to the Number of Perceived Auditory Objects Emerge between 2 and 4 Months of Age: Electrophysiological Evidence. J Cogn Neurosci 2015; 27:1060-7. [DOI: 10.1162/jocn_a_00764]
Abstract
Sound waves emitted by two or more simultaneous sources reach the ear as one complex waveform. Auditory scene analysis involves parsing a complex waveform into separate perceptual representations of the sound sources [Bregman, A. S. Auditory scene analysis: The perceptual organization of sounds. London: MIT Press, 1990]. Harmonicity provides an important cue for auditory scene analysis. Normally, harmonics at integer multiples of a fundamental frequency are perceived as one sound with a pitch corresponding to the fundamental frequency. However, when one harmonic in such a complex, pitch-evoking sound is sufficiently mistuned, that harmonic emerges from the complex tone and is perceived as a separate auditory object. Previous work has shown that the percept of two objects is indexed in both children and adults by the object-related negativity component of the ERP derived from EEG recordings [Alain, C., Arnott, S. T., & Picton, T. W. Bottom–up and top–down influences on auditory scene analysis: Evidence from event-related brain potentials. Journal of Experimental Psychology: Human Perception and Performance, 27, 1072–1089, 2001]. Here we examine the emergence of object-related responses to an 8% harmonic mistuning in infants between 2 and 12 months of age. Two-month-old infants showed no significant object-related response. However, in 4- to 12-month-old infants, a significant frontally positive component was present, and by 8–12 months, a significant frontocentral object-related negativity was present, similar to that seen in older children and adults. This is in accordance with previous research demonstrating that infants younger than 4 months of age do not integrate harmonic information to perceive pitch when the fundamental is missing [He, C., Hotson, L., & Trainor, L. J. Maturation of cortical mismatch responses to occasional pitch change in early infancy: Effects of presentation rate and magnitude of change. Neuropsychologia, 47, 218–229, 2009]. The results indicate that the ability to use harmonic information to segregate simultaneous sounds emerges at the cortical level between 2 and 4 months of age.
23
Bendixen A, Háden GP, Németh R, Farkas D, Török M, Winkler I. Newborn Infants Detect Cues of Concurrent Sound Segregation. Dev Neurosci 2015; 37:172-81. [DOI: 10.1159/000370237]
Abstract
Separating concurrent sounds is fundamental for a veridical perception of one's auditory surroundings. Sound components that are harmonically related and start at the same time are usually grouped into a common perceptual object, whereas components that are not in harmonic relation or have different onset times are more likely to be perceived in terms of separate objects. Here we tested whether neonates are able to pick up the cues supporting this sound organization principle. We presented newborn infants with a series of complex tones with their harmonics in tune (creating the percept of a unitary sound object) and with manipulated variants, which gave the impression of two concurrently active sound sources. The manipulated variant had either one mistuned partial (single-cue condition) or the onset of this mistuned partial was also delayed (double-cue condition). Tuned and manipulated sounds were presented in random order with equal probabilities. Recording the neonates' electroencephalographic responses allowed us to evaluate their processing of the sounds. Results show that, in both conditions, mistuned sounds elicited a negative displacement of the event-related potential (ERP) relative to tuned sounds from 360 to 400 ms after sound onset. The mistuning-related ERP component resembles the object-related negativity (ORN) component in adults, which is associated with concurrent sound segregation. Delayed onset additionally led to a negative displacement from 160 to 200 ms, which was probably more related to the physical parameters of the sounds than to their perceptual segregation. The elicitation of an ORN-like response in newborn infants suggests that neonates possess the basic capabilities of segregating concurrent sounds by detecting inharmonic relations between the co-occurring sounds.
24
Fishman YI, Steinschneider M, Micheyl C. Neural representation of concurrent harmonic sounds in monkey primary auditory cortex: implications for models of auditory scene analysis. J Neurosci 2014; 34:12425-43. [PMID: 25209282] [PMCID: PMC4160777] [DOI: 10.1523/jneurosci.0025-14.2014]
Abstract
The ability to attend to a particular sound in a noisy environment is an essential aspect of hearing. To accomplish this feat, the auditory system must segregate sounds that overlap in frequency and time. Many natural sounds, such as human voices, consist of harmonics of a common fundamental frequency (F0). Such harmonic complex tones (HCTs) evoke a pitch corresponding to their F0. A difference in pitch between simultaneous HCTs provides a powerful cue for their segregation. The neural mechanisms underlying concurrent sound segregation based on pitch differences are poorly understood. Here, we examined neural responses in monkey primary auditory cortex (A1) to two concurrent HCTs that differed in F0 such that they are heard as two separate "auditory objects" with distinct pitches. We found that A1 can resolve, via a rate-place code, the lower harmonics of both HCTs, a prerequisite for deriving their pitches and for their perceptual segregation. Onset asynchrony between the HCTs enhanced the neural representation of their harmonics, paralleling their improved perceptual segregation in humans. Pitches of the concurrent HCTs could also be temporally represented by neuronal phase-locking at their respective F0s. Furthermore, a model of A1 responses using harmonic templates could qualitatively reproduce psychophysical data on concurrent sound segregation in humans. Finally, we identified a possible intracortical homolog of the "object-related negativity" recorded noninvasively in humans, which correlates with the perceptual segregation of concurrent sounds. Findings indicate that A1 contains sufficient spectral and temporal information for segregating concurrent sounds based on differences in pitch.
Affiliation(s)
- Yonatan I Fishman: Departments of Neurology and Neuroscience, Albert Einstein College of Medicine, Bronx, New York 10461
- Mitchell Steinschneider: Departments of Neurology and Neuroscience, Albert Einstein College of Medicine, Bronx, New York 10461
- Christophe Micheyl: Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55455, and Starkey Hearing Research Center, Berkeley, California 94704
25
Kocsis Z, Winkler I, Szalárdy O, Bendixen A. Effects of multiple congruent cues on concurrent sound segregation during passive and active listening: an event-related potential (ERP) study. Biol Psychol 2014; 100:20-33. [PMID: 24816158 DOI: 10.1016/j.biopsycho.2014.04.005] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.1]
Abstract
In two experiments, we assessed the effects of combining different cues of concurrent sound segregation on the object-related negativity (ORN) and the P400 event-related potential components. Participants were presented with sequences of complex tones, half of which contained some manipulation: one or two harmonic partials were mistuned, delayed, or presented from a different location than the rest. In separate conditions, one, two, or three of these manipulations were combined. Participants watched a silent movie (passive listening) or reported after each tone whether they perceived one or two concurrent sounds (active listening). ORN was found in almost all conditions except for location difference alone during passive listening. Combining several cues or manipulating more than one partial consistently led to sub-additive effects on the ORN amplitude. These results support the view that ORN reflects a combined, feature-unspecific assessment of the auditory system regarding the contribution of two sources to the incoming sound.
Affiliation(s)
- Zsuzsanna Kocsis: Institute of Psychology and Cognitive Neuroscience, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, Hungary; Budapest University of Technology and Economics, Budapest, Hungary
- István Winkler: Institute of Psychology and Cognitive Neuroscience, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, Hungary; Institute of Psychology, University of Szeged, Szeged, Hungary
- Orsolya Szalárdy: Institute of Psychology and Cognitive Neuroscience, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, Hungary
- Alexandra Bendixen: Department of Psychology, Cluster of Excellence "Hearing4all", European Medical School, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany; Department of Psychology, University of Leipzig, Leipzig, Germany
26
Bendixen A. Predictability effects in auditory scene analysis: a review. Front Neurosci 2014; 8:60. [PMID: 24744695 PMCID: PMC3978260 DOI: 10.3389/fnins.2014.00060] [Citation(s) in RCA: 73] [Impact Index Per Article: 7.3]
Abstract
Many sound sources emit signals in a predictable manner. The idea that predictability can be exploited to support the segregation of one source's signal emissions from the overlapping signals of other sources has been expressed for a long time. Yet experimental evidence for a strong role of predictability within auditory scene analysis (ASA) has been scarce. Recently, there has been an upsurge in experimental and theoretical work on this topic resulting from fundamental changes in our perspective on how the brain extracts predictability from series of sensory events. Based on effortless predictive processing in the auditory system, it becomes more plausible that predictability would be available as a cue for sound source decomposition. In the present contribution, empirical evidence for such a role of predictability in ASA will be reviewed. It will be shown that predictability affects ASA both when it is present in the sound source of interest (perceptual foreground) and when it is present in other sound sources that the listener wishes to ignore (perceptual background). First evidence pointing toward age-related impairments in the latter capacity will be addressed. Moreover, it will be illustrated how effects of predictability can be shown by means of objective listening tests as well as by subjective report procedures, with the latter approach typically exploiting the multi-stable nature of auditory perception. Critical aspects of study design will be delineated to ensure that predictability effects can be unambiguously interpreted. Possible mechanisms for a functional role of predictability within ASA will be discussed, and an analogy with the old-plus-new heuristic for grouping simultaneous acoustic signals will be suggested.
Affiliation(s)
- Alexandra Bendixen: Auditory Psychophysiology Lab, Department of Psychology, Cluster of Excellence "Hearing4all", European Medical School, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany
27
Lodhia V, Brock J, Johnson BW, Hautus MJ. Reduced object related negativity response indicates impaired auditory scene analysis in adults with autistic spectrum disorder. PeerJ 2014; 2:e261. [PMID: 24688845 PMCID: PMC3940479 DOI: 10.7717/peerj.261] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6]
Abstract
Auditory Scene Analysis provides a useful framework for understanding atypical auditory perception in autism. Specifically, a failure to segregate the incoming acoustic energy into distinct auditory objects might explain the aversive reaction autistic individuals have to certain auditory stimuli or environments. Previous research with non-autistic participants has demonstrated the presence of an Object Related Negativity (ORN) in the auditory event related potential that indexes pre-attentive processes associated with auditory scene analysis. Also evident is a later P400 component that is attention dependent and thought to be related to decision-making about auditory objects. We sought to determine whether there are differences between individuals with and without autism in the levels of processing indexed by these components. Electroencephalography (EEG) was used to measure brain responses from a group of 16 autistic adults, and 16 age- and verbal-IQ-matched typically-developing adults. Auditory responses were elicited using lateralized dichotic pitch stimuli in which inter-aural timing differences create the illusory perception of a pitch that is spatially separated from a carrier noise stimulus. As in previous studies, control participants produced an ORN in response to the pitch stimuli. However, this component was significantly reduced in the participants with autism. In contrast, processing differences were not observed between the groups at the attention-dependent level (P400). These findings suggest that autistic individuals have difficulty segregating auditory stimuli into distinct auditory objects, and that this difficulty arises at an early pre-attentive level of processing.
Affiliation(s)
- Veema Lodhia: Research Centre for Cognitive Neuroscience, School of Psychology, The University of Auckland, New Zealand
- Jon Brock: ARC Centre of Excellence in Cognition and its Disorders, Australia; Department of Cognitive Science, Macquarie University, Sydney, Australia
- Blake W Johnson: ARC Centre of Excellence in Cognition and its Disorders, Australia; Department of Cognitive Science, Macquarie University, Sydney, Australia
- Michael J Hautus: Research Centre for Cognitive Neuroscience, School of Psychology, The University of Auckland, New Zealand
28
Alain C, Zendel BR, Hutka S, Bidelman GM. Turning down the noise: the benefit of musical training on the aging auditory brain. Hear Res 2014. [DOI: 10.1016/j.heares.2013.06.008] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1]
29
Alain C, Roye A, Salloum C. Effects of age-related hearing loss and background noise on neuromagnetic activity from auditory cortex. Front Syst Neurosci 2014; 8:8. [PMID: 24550790 PMCID: PMC3907769 DOI: 10.3389/fnsys.2014.00008] [Citation(s) in RCA: 62] [Impact Index Per Article: 6.2]
Abstract
Aging is often accompanied by hearing loss, which impacts how sounds are processed and represented along the ascending auditory pathways and within the auditory cortices. Here, we assess the impact of mild binaural hearing loss on the older adults’ ability to both process complex sounds embedded in noise and to segregate a mistuned harmonic in an otherwise periodic stimulus. We measured auditory evoked fields (AEFs) using magnetoencephalography while participants were presented with complex tones that had either all harmonics in tune or had the third harmonic mistuned by 4 or 16% of its original value. The tones (75 dB sound pressure level, SPL) were presented without, with low (45 dBA SPL), or with moderate (65 dBA SPL) Gaussian noise. For each participant, we modeled the AEFs with a pair of dipoles in the superior temporal plane. We then examined the effects of hearing loss and noise on the amplitude and latency of the resulting source waveforms. In the present study, results revealed that similar noise-induced increases in N1m were present in older adults with and without hearing loss. Our results also showed that the P1m amplitude was larger in the hearing impaired than in the normal-hearing adults. In addition, the object-related negativity (ORN) elicited by the mistuned harmonic was larger in hearing impaired listeners. The enhanced P1m and ORN amplitude in the hearing impaired older adults suggests that hearing loss increased neural excitability in auditory cortices, which could be related to deficits in inhibitory control.
Affiliation(s)
- Claude Alain: Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, ON, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada; Institute of Medical Sciences, University of Toronto, Toronto, ON, Canada
- Anja Roye: Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, ON, Canada
- Claire Salloum: Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, ON, Canada
30
Gutschalk A, Dykstra AR. Functional imaging of auditory scene analysis. Hear Res 2013; 307:98-110. [PMID: 23968821 DOI: 10.1016/j.heares.2013.08.003] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.4]
Abstract
Our auditory system is constantly faced with the task of decomposing the complex mixture of sound arriving at the ears into perceptually independent streams constituting accurate representations of individual sound sources. This decomposition, termed auditory scene analysis, is critical for both survival and communication, and is thought to underlie both speech and music perception. The neural underpinnings of auditory scene analysis have been studied utilizing invasive experiments with animal models as well as non-invasive (MEG, EEG, and fMRI) and invasive (intracranial EEG) studies conducted with human listeners. The present article reviews human neurophysiological research investigating the neural basis of auditory scene analysis, with emphasis on two classical paradigms termed streaming and informational masking. Other paradigms - such as the continuity illusion, mistuned harmonics, and multi-speaker environments - are briefly addressed thereafter. We conclude by discussing the emerging evidence for the role of auditory cortex in remapping incoming acoustic signals into a perceptual representation of auditory streams, which are then available for selective attention and further conscious processing. This article is part of a Special Issue entitled Human Auditory Neuroimaging.
Affiliation(s)
- Alexander Gutschalk: Department of Neurology, Ruprecht-Karls-University Heidelberg, Heidelberg, Germany
31
Alain C, Zendel BR, Hutka S, Bidelman GM. Turning down the noise: the benefit of musical training on the aging auditory brain. Hear Res 2013; 308:162-73. [PMID: 23831039 DOI: 10.1016/j.heares.2013.06.008] [Citation(s) in RCA: 84] [Impact Index Per Article: 7.6]
Abstract
Age-related decline in hearing abilities is a ubiquitous part of aging, and commonly impacts speech understanding, especially when there are competing sound sources. While such age effects are partially due to changes within the cochlea, difficulties typically exist beyond measurable hearing loss, suggesting that central brain processes, as opposed to simple peripheral mechanisms (e.g., hearing sensitivity), play a critical role in governing hearing abilities late into life. Current training regimens aimed to improve central auditory processing abilities have experienced limited success in promoting listening benefits. Interestingly, recent studies suggest that in young adults, musical training positively modifies neural mechanisms, providing robust, long-lasting improvements to hearing abilities as well as to non-auditory tasks that engage cognitive control. These results offer the encouraging possibility that musical training might be used to counteract age-related changes in auditory cognition commonly observed in older adults. Here, we reviewed studies that have examined the effects of age and musical experience on auditory cognition with an emphasis on auditory scene analysis. We infer that musical training may offer potential benefits to complex listening and might be utilized as a means to delay or even attenuate declines in auditory perception and cognition that often emerge later in life.
Affiliation(s)
- Claude Alain: Rotman Research Institute, Baycrest Centre for Geriatric Care, Canada; Department of Psychology, University of Toronto, Canada
- Benjamin Rich Zendel: International Laboratory for Brain, Music and Sound Research (BRAMS), Département de Psychologie, Université de Montréal, Québec, Canada; Centre de Recherche, Institut Universitaire de Gériatrie de Montréal, Québec, Canada
- Stefanie Hutka: Rotman Research Institute, Baycrest Centre for Geriatric Care, Canada; Department of Psychology, University of Toronto, Canada
- Gavin M Bidelman: Institute for Intelligent Systems & School of Communication Sciences and Disorders, University of Memphis, USA
32
Alain C, Roye A, Arnott SR. Middle- and long-latency auditory evoked potentials. Disorders of Peripheral and Central Auditory Processing 2013. [DOI: 10.1016/b978-0-7020-5310-8.00009-0] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9]
33
Zendel BR, Alain C. The influence of lifelong musicianship on neurophysiological measures of concurrent sound segregation. J Cogn Neurosci 2012; 25:503-16. [PMID: 23163409 DOI: 10.1162/jocn_a_00329] [Citation(s) in RCA: 38] [Impact Index Per Article: 3.2]
Abstract
The ability to separate concurrent sounds based on periodicity cues is critical for parsing complex auditory scenes. This ability is enhanced in young adult musicians and reduced in older adults. Here, we investigated the impact of lifelong musicianship on concurrent sound segregation and perception using scalp-recorded ERPs. Older and younger musicians and nonmusicians were presented with periodic harmonic complexes where the second harmonic could be tuned or mistuned by 1-16% of its original value. The likelihood of perceiving two simultaneous sounds increased with mistuning, and musicians, both older and younger, were more likely to detect and report hearing two sounds when the second harmonic was mistuned at or above 2%. The perception of a mistuned harmonic as a separate sound was paralleled by an object-related negativity that was larger and earlier in younger musicians compared with the other three groups. When listeners made a judgment about the harmonic stimuli, the perception of the mistuned harmonic as a separate sound was paralleled by a positive wave at about 400 msec poststimulus (P400), which was enhanced in both older and younger musicians. These findings suggest that attention-dependent processing of a mistuned harmonic is enhanced in older musicians and provide further evidence that age-related declines in hearing abilities are mitigated by musical training.
34
Marie C, Trainor LJ. Development of simultaneous pitch encoding: infants show a high voice superiority effect. Cereb Cortex 2012; 23:660-9. [PMID: 22419678 DOI: 10.1093/cercor/bhs050] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.2]
Abstract
Infants must learn to make sense of real-world auditory environments containing simultaneous and overlapping sounds. In adults, event-related potential studies have demonstrated the existence of separate preattentive memory traces for concurrent note sequences and revealed perceptual dominance for encoding of the voice with higher fundamental frequency of 2 simultaneous tones or melodies. Here, we presented 2 simultaneous streams of notes (15 semitones apart) to 7-month-old infants. On 50% of trials, either the higher or the lower note was modified by one semitone, up or down, leaving 50% standard trials. Infants showed mismatch negativity (MMN) to changes in both voices, indicating separate memory traces for each voice. Furthermore, MMN was earlier and larger for the higher voice as in adults. When in the context of a second voice, representation of the lower voice was decreased and that of the higher voice increased compared with when each voice was presented alone. Additionally, correlations between MMN amplitude and amount of weekly music listening suggest that experience affects the development of auditory memory. In sum, the ability to process simultaneous pitches and the dominance of the highest voice emerge early during infancy and are likely important for the perceptual organization of sound in realistic environments.
Affiliation(s)
- Céline Marie: Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario L8S 4K1, Canada
35
Folland NA, Butler BE, Smith NA, Trainor LJ. Processing simultaneous auditory objects: infants' ability to detect mistuning in harmonic complexes. J Acoust Soc Am 2012; 131:993-997. [PMID: 22280722 DOI: 10.1121/1.3651254] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.5]
Abstract
The ability to separate simultaneous auditory objects is crucial to infant auditory development. Music in particular relies on the ability to separate musical notes, chords, and melodic lines. Little research addresses how infants process simultaneous sounds. The present study used a conditioned head-turn procedure to examine whether 6-month-old infants are able to discriminate a complex tone (240 Hz, 500 ms, six harmonics in random phase with a 6 dB roll-off per octave) from a version with the third harmonic mistuned. Adults perceive such stimuli as containing two auditory objects, one with the pitch of the mistuned harmonic and the other with pitch corresponding to the fundamental of the complex tone. Adult thresholds were between 1% and 2% mistuning. Infants performed above chance levels for 8%, 6%, and 4% mistunings, with no significant difference between conditions. However, performance was not significantly different from chance for 2% mistuning and significantly worse for 2% compared to all larger mistunings. These results indicate that 6-month-old infants are sensitive to violations of harmonic structure and suggest that they are able to separate two simultaneously sounding objects.
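The stimulus described above (a 240 Hz, 500 ms complex of six harmonics in random phase with a 6 dB/octave roll-off, third harmonic optionally mistuned) is concrete enough to sketch. The following is an illustrative NumPy synthesis under those stated parameters only; the function name, sampling rate, and normalization are my own assumptions, not the authors' code.

```python
import numpy as np

def harmonic_complex(f0=240.0, dur=0.5, fs=44100, n_harmonics=6,
                     mistuned_harmonic=None, mistuning_pct=0.0, seed=0):
    """Sketch of the abstract's stimulus: harmonic complex tone with
    random-phase partials, 6 dB (amplitude halving) roll-off per octave,
    and optionally one harmonic shifted by a percentage of its frequency.
    Hypothetical implementation; only the parameters come from the text."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur * fs)) / fs
    signal = np.zeros_like(t)
    for k in range(1, n_harmonics + 1):
        f = k * f0
        if k == mistuned_harmonic:
            f *= 1.0 + mistuning_pct / 100.0  # e.g. 4% or 8% mistuning
        amp = 1.0 / k        # 6 dB/octave roll-off: amplitude halves as k doubles
        phase = rng.uniform(0, 2 * np.pi)  # random starting phase per partial
        signal += amp * np.sin(2 * np.pi * f * t + phase)
    return signal / np.max(np.abs(signal))  # normalize to +/-1
```

With `mistuned_harmonic=3, mistuning_pct=8.0` this yields the 8% condition infants discriminated above chance; adults reportedly hear such a tone as two objects.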
Affiliation(s)
- Nicole A Folland: Department of Psychology, Neuroscience and Behaviour, McMaster University, 1280 Main Street West, Hamilton, Ontario L8S 4L8, Canada
36
Weise A, Schröger E, Bendixen A. The processing of concurrent sounds based on inharmonicity and asynchronous onsets: an object-related negativity (ORN) study. Brain Res 2011; 1439:73-81. [PMID: 22265705 DOI: 10.1016/j.brainres.2011.12.044] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.8]
Abstract
This study addresses the processing of concurrent sounds based on inharmonicity and onset asynchrony cues. We used harmonic complex sounds with one component starting marginally (40 ms) or considerably (500 ms) earlier than the complex and being slightly (3%) or strongly (13%) inharmonic. To index sound segregation of concurrent events, we measured the object-related negativity (ORN) component of the event-related potential. We contrasted two hypotheses: According to the concurrent-segregation hypothesis, increased onset asynchrony is assumed to promote segregation of the leading partial from the harmonic complex, which should be reflected in increased ORN amplitudes. That is, even with large onset asynchronies concurrent events would be processed by a simultaneous sound segregation mechanism. According to the sequential-integration hypothesis, however, with increased onset asynchrony concurrent cues are assumed to be less considered by simultaneous grouping processes, which should be reflected in attenuated ORN amplitudes for long onset asynchronies. This assumption is based on the notion that due to sequential integration, a stable percept of the leading partial has been developed within ~350 ms after sound onset, thus less processing is required from scene analysis mechanisms based on concurrent cues. Indeed, with increased onset asynchrony ORN was found to decrease, which supports the sequential-integration hypothesis. In line with previous data, ORN was also found to increase with increased inharmonicity. The absence of an inharmonicity×onset asynchrony interaction further supports the assumption that both cues are used in different ways for simultaneous sound segregation.
37
Alain C, McDonald K, Van Roon P. Effects of age and background noise on processing a mistuned harmonic in an otherwise periodic complex sound. Hear Res 2011; 283:126-35. [PMID: 22101023 DOI: 10.1016/j.heares.2011.10.007] [Citation(s) in RCA: 45] [Impact Index Per Article: 3.5]
Abstract
Older adults presented with short (i.e., 40 ms) harmonic complex tones show a reduced likelihood of hearing the mistuned harmonic as a separate sound. Here, we examined whether this age difference for the mistuned harmonic would generalize to a longer signal duration (i.e., 200 ms). We measured auditory evoked fields (AEFs) using magnetoencephalography while young and older adults were presented with harmonic complex tones that either had all partials of the tones in tune (single sound object) or contained a 4 or 16% mistuned harmonic (dual sound objects). The auditory stimuli were presented in isolation or embedded in low or moderate levels of continuous white noise. For each participant, we modeled the AEFs with a pair of dipoles in the superior temporal plane and examined the effects of age and noise on the amplitude and latency of the resulting source waveforms. The present study reveals similar noise-induced increases in N1m and object-related negativity in young and older adults which may be mediated via efferent feedback connections and/or changes in the temporal window of integration. We observed less age-related differences in concurrent sound segregation for stimuli that matched the duration of the temporal integration window of auditory perception (i.e., ∼200 ms) than for short duration sounds (i.e., 40 ms). Possible explanations for this duration-dependent age-related decline in concurrent sound perception are a general slowing in auditory processing and/or lengthening of the temporal integration window.
Affiliation(s)
- Claude Alain: Rotman Research Institute, Baycrest Centre for Geriatric Care, 3560 Bathurst Street, Toronto, Ontario, Canada M6A 2E1
38
Understanding of spoken language under challenging listening conditions in younger and older listeners: a combined behavioral and electrophysiological study. Brain Res 2011; 1415:8-22. [DOI: 10.1016/j.brainres.2011.08.001] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.1]
39
40
Arnott SR, Bardouille T, Ross B, Alain C. Neural generators underlying concurrent sound segregation. Brain Res 2011; 1387:116-24. [PMID: 21362407 DOI: 10.1016/j.brainres.2011.02.062] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.6]
Abstract
Although an object-based account of auditory attention has become an increasingly popular model for understanding how temporally overlapping sounds are segregated, relatively little is known about the cortical circuit that supports such ability. In the present study, we applied a beamformer spatial filter to magnetoencephalography (MEG) data recorded during an auditory paradigm that used inharmonicity to promote the formation of multiple auditory objects. Using this unconstrained, data-driven approach, the evoked field component linked with the perception of multiple auditory objects (i.e., the object-related negativity; ORNm), was found to be associated with bilateral auditory cortex sources that were distinct from those coinciding with the P1m, N1m, and P2m responses elicited by sound onset. The right hemispheric ORNm source in particular was consistently positioned anterior to the other sources across two experiments. These findings are consistent with earlier proposals of multiple auditory object detection being associated with generators in the auditory cortex and further suggest that these neural populations are distinct from the long latency evoked responses reflecting the detection of sound onset.
Affiliation(s)
- Stephen R Arnott: Rotman Research Institute, Baycrest Centre, Toronto, Ontario, Canada M6A 2E1
41
Arehart KH, Souza PE, Muralimanohar RK, Miller CW. Effects of age on concurrent vowel perception in acoustic and simulated electroacoustic hearing. J Speech Lang Hear Res 2011; 54:190-210. [PMID: 20689036 PMCID: PMC3258509 DOI: 10.1044/1092-4388(2010/09-0145)] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.0]
Abstract
Purpose: In this study, the authors investigated the effects of age on the use of fundamental frequency differences (ΔF(0)) in the perception of competing synthesized vowels in simulations of electroacoustic and cochlear-implant hearing.
Method: Twelve younger listeners with normal hearing and 13 older listeners with (near) normal hearing were evaluated in their use of ΔF(0) in the perception of competing synthesized vowels for 3 conditions: unprocessed synthesized vowels (UNP), envelope-vocoded synthesized vowels that simulated a cochlear implant (VOC), and synthesized vowels processed to simulate electroacoustic stimulation (EAS) hearing. Tasks included (a) multiplicity, which required listeners to identify whether a stimulus contained 1 or 2 sounds, and (b) double-vowel identification, which required listeners to attach phonemic labels to the competing synthesized vowels.
Results: Multiplicity perception was facilitated by ΔF(0) in UNP and EAS but not in VOC, with no age-related deficits evident. Double-vowel identification was facilitated by ΔF(0), with ΔF(0) benefit largest in UNP, reduced in EAS, and absent in VOC. Age adversely affected overall identification and ΔF(0) benefit on the double-vowel task.
Conclusions: Some but not all older listeners derived ΔF(0) benefit in EAS hearing. This variability may partly be due to how listeners are able to draw on higher-level processing resources in extracting and integrating cues in EAS hearing.
Affiliation(s)
- Kathryn H Arehart: University of Colorado at Boulder, Dept. SLHS, Campus Box 409, Boulder, Colorado 80309, USA
42
The effects of age and interaural delay on detecting a change in interaural correlation: The role of temporal jitter. Hear Res 2010; 275:139-49. [DOI: 10.1016/j.heares.2010.12.013] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.1]
Abstract
Duration thresholds for detecting a change in interaural correlation (from 0 to 1, or from 1 to 0) in the initial portion of a 1-second, broadband noise (0-10 kHz) were determined for younger and older adults in a two-interval, two-alternative forced choice paradigm as a function of the interaural delay between the noise bursts presented to each ear. When the interaural delay was 0 ms, older adults found it harder to detect a change in correlation from 0 to 1 than from 1 to 0. For younger adults, however, this pattern was reversed. For interaural delays greater than 0 ms, both younger adults and older adults found it easier to detect a change in interaural correlation from 0 to 1 for short interaural delays (1 ms) with the reverse being true for longer interaural delays (5 ms). It is shown that this pattern of results is expected if temporal jitter (loss of neural synchrony in the auditory system) increases with age and with interaural delay. The implications of these results for age-related changes in stream segregation are discussed.
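The manipulated variable above, interaural correlation of a broadband noise, can be made concrete with a short sketch. This is an illustrative construction only, not the study's stimulus code; the sampling rate and the zero-lag correlation measure are my own assumptions.

```python
import numpy as np

# Illustrative sketch: noise pairs for the two ears with interaural
# correlation of 1 (identical signals) vs. ~0 (independent signals).
rng = np.random.default_rng(0)
fs = 20000                 # assumed rate whose Nyquist covers a 0-10 kHz band
n = fs                     # 1-second noise burst, as in the study
left = rng.standard_normal(n)
right_corr1 = left.copy()              # interaural correlation = 1
right_corr0 = rng.standard_normal(n)   # interaural correlation ~ 0

def interaural_correlation(l, r):
    """Normalized zero-lag correlation between the two ear signals."""
    return float(np.corrcoef(l, r)[0, 1])
```

A "change from 1 to 0" stimulus would splice an initial `right_corr0` segment onto `right_corr1`, with the listener's duration threshold set by how short that initial segment can be while remaining detectable.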
|
43
|
Neural correlates of auditory scene analysis based on inharmonicity in monkey primary auditory cortex. J Neurosci 2010; 30:12480-94. [PMID: 20844143 DOI: 10.1523/jneurosci.1780-10.2010] [Citation(s) in RCA: 35] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Segregation of concurrent sounds in complex acoustic environments is a fundamental feature of auditory scene analysis. A powerful cue used by the auditory system to segregate concurrent sounds, such as speakers' voices at a cocktail party, is inharmonicity. This can be demonstrated when a component of a harmonic complex tone is perceived as a separate tone "popping out" from the complex as a whole when it is sufficiently mistuned from its harmonic value. The neural bases of perceptual "pop out" of mistuned harmonics are unclear. We recorded multiunit activity from primary auditory cortex (A1) of behaving monkeys elicited by harmonic complex tones that were either "in tune" or that contained a mistuned third harmonic set at the best frequency of the neural populations. Responses to mistuned sounds were enhanced relative to responses to "in-tune" sounds, thus correlating with the enhanced perceptual salience of the mistuned component. Consistent with human psychophysics of "pop out," response enhancements increased with the degree of mistuning, were maximal for neural populations tuned to the frequency of the mistuned component, and were not observed under comparable stimulus conditions that do not elicit perceptual "pop out." Mistuning was also associated with changes in neuronal temporal response patterns phase locked to "beats" in the stimuli. Intracortical auditory evoked potentials paralleled noninvasive neurophysiological correlates of perceptual "pop out" in humans, further augmenting the translational relevance of the results. Findings suggest two complementary neural mechanisms for "pop out," based on the detection of local differences in activation level or coherence of temporal response patterns across A1.
|
44
|
Heinrich A, Schneider BA. Elucidating the effects of ageing on remembering perceptually distorted word pairs. Q J Exp Psychol (Hove) 2010; 64:186-205. [PMID: 20694922 DOI: 10.1080/17470218.2010.492621] [Citation(s) in RCA: 26] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
We investigated the effects of age, background babble, and acoustic distortion of the word itself on serial position memory in a series of experiments involving six different auditory environments (quiet; 12-talker background babble presented between, overlapping with, or concurrent with word presentation; and two kinds of acoustic distortion applied to the words themselves). To control for hearing, the level of babble or distortion was adjusted so that younger and older adults could hear the words equally well. Although the presence of continuous and word-flanking background babble adversely affected memory in the early serial positions in both age groups, only older adults' memory was adversely affected in the later serial positions. Moreover, younger adults' memory was not affected by acoustic word distortion, whereas one of the two types of temporal distortion adversely affected memory for later serial positions in older adults. The exact pattern of impairment and its interaction with age suggests that memory in older adults is more affected than that in younger adults in complex listening situations because older adults either need more time or have to employ more attentional resources to segregate different auditory streams, thereby depleting the pool of resources available for memory encoding.
Affiliation(s)
- Antje Heinrich
- University of Toronto at Mississauga, Mississauga, Ontario, Canada
|
45
|
Du Y, He Y, Ross B, Bardouille T, Wu X, Li L, Alain C. Human Auditory Cortex Activity Shows Additive Effects of Spectral and Spatial Cues during Speech Segregation. Cereb Cortex 2010; 21:698-707. [PMID: 20685854 DOI: 10.1093/cercor/bhq136] [Citation(s) in RCA: 39] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Affiliation(s)
- Yi Du
- Department of Psychology, Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing, China 100871
|
46
|
Johnson BW, Hautus MJ. Processing of binaural spatial information in human auditory cortex: Neuromagnetic responses to interaural timing and level differences. Neuropsychologia 2010; 48:2610-9. [DOI: 10.1016/j.neuropsychologia.2010.05.008] [Citation(s) in RCA: 42] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2009] [Revised: 04/30/2010] [Accepted: 05/01/2010] [Indexed: 11/29/2022]
|
47
|
Ross B, Schneider B, Snyder JS, Alain C. Biological markers of auditory gap detection in young, middle-aged, and older adults. PLoS One 2010; 5:e10101. [PMID: 20404929 PMCID: PMC2852420 DOI: 10.1371/journal.pone.0010101] [Citation(s) in RCA: 47] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2009] [Accepted: 03/11/2010] [Indexed: 11/18/2022] Open
Abstract
The capability of processing rapid fluctuations in the temporal envelope of sound declines with age, and this decline contributes to older adults' difficulties in understanding speech. Although changes in central auditory processing during aging have been proposed as a cause of communication deficits, it remains an open question which stage of processing is most affected by age-related changes. We investigated auditory temporal resolution in young, middle-aged, and older listeners with neuromagnetic evoked responses to gap stimuli with different leading-marker and gap durations. Signal components specific to processing the physical details of sound stimuli, as well as auditory objects as a whole, were derived from the evoked activity and served as biological markers for temporal processing at different cortical levels. Early oscillatory 40-Hz responses were elicited by the onsets of the leading and lagging markers and indicated central registration of the gap with similar amplitude in all three age groups. High-gamma responses were predominantly related to the duration of no-gap stimuli or to the duration of gaps when present, and decreased in amplitude and phase locking with increasing age. Correspondingly, low-frequency activity around 200 ms and later was reduced in middle-aged and older participants. High-gamma-band and long-latency low-frequency responses were interpreted as reflecting higher-order processes related to the grouping of sound items into auditory objects and the updating of memory for these objects. The observed effects indicate that age-related changes in auditory acuity have more to do with higher-order brain functions than previously thought.
Affiliation(s)
- Bernhard Ross
- Rotman Research Institute, Baycrest Centre, Toronto, Ontario, Canada.
|
48
|
Buranelli G, Barbosa MB, Garcia CFD, Duarte SG, Marangoni AC, Coelho LMDFR, Reis ACMB, Isaac MDL. Mismatch Negativity (MMN) response studies in elderly subjects. Braz J Otorhinolaryngol 2010; 75:831-8. [PMID: 20209283 PMCID: PMC9445995 DOI: 10.1016/s1808-8694(15)30545-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2008] [Accepted: 04/07/2009] [Indexed: 11/17/2022] Open
Abstract
Mismatch Negativity (MMN) is an endogenous potential that reflects the processing of differences in the acoustic stimulus. Aim: To characterize MMN responses in elderly subjects and compare them with those of adult subjects. Materials and Methods: Prospective study involving 30 subjects, 15 men and 15 women, aged between 60 years and 80 years and 11 months. Statistical test: Mann-Whitney. The subjects underwent medical evaluation, pure-tone threshold audiometry, immittance testing, otoacoustic emissions, and short- and long-latency auditory potentials (MMN). Results: Mean latency was 161.33 ms (CZA2) and 148.67 ms (CZA1) in women, and 171 ms (CZA2) and 159.07 ms (CZA1) in men. Mean amplitude was −2.753 μV (CZA2) and −2.177 μV (CZA1) in women, and −1.847 μV (CZA2) and −1.953 μV (CZA1) in men. For the right and left hemispheres, mean latency was 166 ms (CZA2) and 153.87 ms (CZA1), and mean amplitude was −2.316 μV (CZA2) and −2.065 μV (CZA1). Conclusion: There were no statistically significant differences in latency or amplitude between males and females, between the right and left sides in the elderly, or between adult and elderly subjects.
|
49
|
Bendixen A, Jones SJ, Klump G, Winkler I. Probability dependence and functional separation of the object-related and mismatch negativity event-related potential components. Neuroimage 2010; 50:285-90. [DOI: 10.1016/j.neuroimage.2009.12.037] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2009] [Revised: 12/04/2009] [Accepted: 12/08/2009] [Indexed: 10/20/2022] Open
|
50
|
Lipp R, Kitterick P, Summerfield Q, Bailey PJ, Paul-Jordanov I. Concurrent sound segregation based on inharmonicity and onset asynchrony. Neuropsychologia 2010; 48:1417-25. [PMID: 20079754 DOI: 10.1016/j.neuropsychologia.2010.01.009] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2009] [Revised: 01/08/2010] [Accepted: 01/08/2010] [Indexed: 11/19/2022]
Abstract
To explore the neural processes underlying concurrent sound segregation, auditory evoked fields (AEFs) were measured using magnetoencephalography (MEG). To induce the segregation of two auditory objects, we manipulated harmonicity and onset synchrony. Participants were presented with complex sounds with (i) all harmonics in tune, (ii) the third harmonic mistuned by 8% of its original value, or (iii) the onset of the third harmonic delayed by 160 ms relative to the other harmonics. During one recording session, participants listened to the sounds and performed an auditory localisation task, whereas in another session they ignored the sounds and performed a visual localisation task. Active and passive listening were chosen to evaluate the contribution of attention to sound segregation. Both cues - inharmonicity and onset asynchrony - elicited sound segregation, as participants were more likely to report correctly on which side they heard the third harmonic when it was mistuned or delayed than when it was in tune with all other harmonics. AEF activity associated with concurrent sound segregation was identified over both temporal lobes. We found an early deflection at approximately 75 ms (P75m) after sound onset, probably reflecting an automatic registration of the mistuned harmonic. Subsequent deflections, the object-related negativity (ORNm) and a later displacement (P230m), seem to be more general markers of concurrent sound segregation, as they were elicited both by mistuning and by delaying the third harmonic. The results indicate that the ORNm reflects relatively automatic, bottom-up sound segregation processes, whereas the P230m is more sensitive to attention, especially with inharmonicity as the cue for concurrent sound segregation.
Affiliation(s)
- Rosa Lipp
- Department of Psychology, Clinical Psychology and Neuropsychology, University of Konstanz, Germany.
|