1
Williams JR, Störmer VS. Cutting Through the Noise: Auditory Scenes and Their Effects on Visual Object Processing. Psychol Sci 2024:9567976241237737. [PMID: 38889285] [DOI: 10.1177/09567976241237737]
Abstract
Despite the intuitive feeling that our visual experience is coherent and comprehensive, the world is full of ambiguous and indeterminate information. Here we explore how the visual system might take advantage of ambient sounds to resolve this ambiguity. Young adults (Ns = 20-30) were tasked with identifying an object slowly fading in through visual noise while a task-irrelevant sound played. We found that participants demanded more visual information when the sound was incongruent with the visual object than when it was congruent. Auditory scenes, which are only probabilistically related to specific objects, produced similar facilitation even for unheard objects (e.g., a bench). Notably, these effects span both categorical and specific levels of auditory and visual processing, as participants performed across-category and within-category visual tasks, underscoring cross-modal integration at multiple levels of perceptual processing. In sum, our study reveals the importance of audiovisual interactions in supporting meaningful perceptual experiences in naturalistic settings.
Affiliation(s)
- Viola S Störmer
- Department of Psychology, University of California, San Diego
- Department of Psychological and Brain Sciences, Dartmouth College
2
Cheng CH, Hsieh YW, Chang CC, Hsiao FJ, Chen LF, Wang PN. Effects of 6-Month Combined Physical Exercise and Cognitive Training on Neuropsychological and Neurophysiological Function in Older Adults with Subjective Cognitive Decline: A Randomized Controlled Trial. J Alzheimers Dis 2024:JAD231257. [PMID: 38848174] [DOI: 10.3233/jad-231257]
Abstract
Background: Multidomain intervention may delay or ameliorate cognitive decline in older adults at risk of Alzheimer's disease, particularly in memory and inhibitory functions. However, no study has systematically investigated changes in brain function in cognitively normal older adults with subjective cognitive decline (SCD) receiving multidomain intervention. Objective: We aimed to examine whether a multidomain intervention could improve neuropsychological function and neurophysiological activities related to memory and inhibitory function in SCD subjects. Methods: Eight clusters comprising 50 community-dwelling older adults with SCD were randomized, single-blind, into an intervention group, which received physical and cognitive training, or a control group, which received treatment as usual. For neuropsychological function, a composite Z score from six cognitive tests was calculated and compared between the two groups. For neurophysiological activities, event-related potentials (ERPs) related to memory function, including mismatch negativity (MMN) and memory-P3, as well as ERPs related to inhibitory function, including sensory gating (SG) and inhibition-P3, were measured. Assessments were performed at baseline (T1), at the end of the intervention (T2), and 6 months after T2 (T3). Results: For neuropsychological function, no effect was observed after the intervention. For neurophysiological activities, improved MMN responses (ΔT2-T1) were observed in the intervention group relative to the control group. The multidomain intervention also produced a sustained effect on memory-P3 latencies (ΔT3-T1). However, there were no significant differences in changes of SG and inhibition-P3 between the intervention and control groups. Conclusions: While not impactful on neuropsychological function, multidomain intervention enhances specific neurophysiological activities associated with memory function.
Affiliation(s)
- Chia-Hsiung Cheng
- Department of Occupational Therapy and Graduate Institute of Behavioral Sciences, Chang Gung University, Taoyuan, Taiwan
- Laboratory of Brain Imaging and Neural Dynamics (BIND Lab), Chang Gung University, Taoyuan, Taiwan
- Healthy Aging Research Center, Chang Gung University, Taoyuan, Taiwan
- Department of Psychiatry, Chang Gung Memorial Hospital, Linkou, Taiwan
- Yu-Wei Hsieh
- Department of Occupational Therapy and Graduate Institute of Behavioral Sciences, Chang Gung University, Taoyuan, Taiwan
- Department of Physical Medicine and Rehabilitation, Chang Gung Memorial Hospital, Linkou, Taiwan
- Chiung-Chih Chang
- Department of Neurology, Cognition and Aging Center, Institute for Translational Research in Biomedicine, Kaohsiung Chang Gung Memorial Hospital, Chang Gung University College of Medicine, Kaohsiung, Taiwan
- Fu-Jung Hsiao
- Brain Research Center, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Li-Fen Chen
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Pei-Ning Wang
- Department of Neurological Institute, Division of General Neurology, Taipei Veterans General Hospital, Taipei, Taiwan
3
Laback B, Tabuchi H, Kohlrausch A. Evidence for proactive and retroactive temporal pattern analysis in simultaneous masking. J Acoust Soc Am 2024;155:3742-3759. [PMID: 38856312] [DOI: 10.1121/10.0026240]
Abstract
Amplitude modulation (AM) of a masker reduces its masking of a simultaneously presented unmodulated pure-tone target, which likely involves dip listening. This study tested the idea that dip-listening efficiency may depend on stimulus context, i.e., the match in AM peakedness (AMP) between the masker and a precursor or postcursor stimulus, assuming a form of temporal pattern analysis. Masked thresholds were measured in normal-hearing listeners using Schroeder-phase harmonic complexes as maskers and precursors or postcursors. Experiment 1 showed threshold elevation (i.e., interference) when a flat cursor preceded or followed a peaked masker, suggesting both proactive and retroactive temporal pattern analysis. Threshold decline (facilitation) was observed when the masker AMP was matched to the precursor, irrespective of stimulus AMP, suggesting only proactive processing. Subsequent experiments showed that both interference and facilitation (1) remained robust when a temporal gap was inserted between masker and cursor, (2) disappeared when an F0-difference was introduced between masker and precursor, and (3) decreased when the presentation level was reduced. These results suggest an important role of envelope regularity in dip listening, especially when masker and cursor are F0-matched and, therefore, form one perceptual stream. The reported effects seem to represent a time-domain variant of comodulation masking release.
Affiliation(s)
- Bernhard Laback
- Austrian Academy of Sciences, Acoustics Research Institute, Wohllebengasse 12-14, 1040 Vienna, Austria
- Hisaaki Tabuchi
- Department of Psychology, University of Innsbruck, Universitätsstraße 15, 6020 Innsbruck, Austria
- Armin Kohlrausch
- Industrial Engineering & Innovation Sciences, Technische Universiteit Eindhoven, P.O. Box 513, 5600 MB Eindhoven, Netherlands
4
Hauswald A, Benz KR, Hartmann T, Demarchi G, Weisz N. Carrier-frequency specific omission-related neural activity in ordered sound sequences is independent of omission-predictability. Eur J Neurosci 2024. [PMID: 38711271] [DOI: 10.1111/ejn.16381]
Abstract
Regularities in our surroundings lead to predictions about upcoming events. Previous research has shown that omitted sounds during otherwise regular tone sequences elicit frequency-specific neural activity related to the upcoming but omitted tone. We tested whether this neural response depends on the predictability of the omission. To this end, we recorded magnetoencephalography (MEG) data while participants listened to ordered or random tone sequences with omissions occurring either in an ordered fashion or randomly. Multivariate pattern analysis showed that the frequency-specific neural pattern during omissions within ordered tone sequences occurs independently of the regularity of the omissions. These results suggest that auditory predictions based on sensory experience are not immediately updated by violations of those expectations.
Affiliation(s)
- Anne Hauswald
- Center of Cognitive Neuroscience, University of Salzburg, Salzburg, Austria
- Department of Psychology, University of Salzburg, Salzburg, Austria
- Kaja Rosa Benz
- Center of Cognitive Neuroscience, University of Salzburg, Salzburg, Austria
- Department of Psychology, University of Salzburg, Salzburg, Austria
- Thomas Hartmann
- Center of Cognitive Neuroscience, University of Salzburg, Salzburg, Austria
- Department of Psychology, University of Salzburg, Salzburg, Austria
- Gianpaolo Demarchi
- Center of Cognitive Neuroscience, University of Salzburg, Salzburg, Austria
- Department of Psychology, University of Salzburg, Salzburg, Austria
- Nathan Weisz
- Center of Cognitive Neuroscience, University of Salzburg, Salzburg, Austria
- Department of Psychology, University of Salzburg, Salzburg, Austria
- Neuroscience Institute and Department of Neurology, Christian Doppler Clinic, Paracelsus Private Medical University, Salzburg, Austria
5
Pesnot Lerousseau J, Summerfield C. Space as a scaffold for rotational generalisation of abstract concepts. eLife 2024;13:RP93636. [PMID: 38568075] [PMCID: PMC10990485] [DOI: 10.7554/elife.93636]
Abstract
Learning invariances allows us to generalise. In the visual modality, invariant representations allow us to recognise objects despite translations or rotations in physical space. However, how we learn the invariances that allow us to generalise abstract patterns of sensory data ('concepts') is a longstanding puzzle. Here, we study how humans generalise relational patterns in stimulation sequences that are defined by either transitions on a nonspatial two-dimensional feature manifold, or by transitions in physical space. We measure rotational generalisation, i.e., the ability to recognise concepts even when their corresponding transition vectors are rotated. We find that humans naturally generalise to rotated exemplars when stimuli are defined in physical space, but not when they are defined as positions on a nonspatial feature manifold. However, if participants are first pre-trained to map auditory or visual features to spatial locations, then rotational generalisation becomes possible even in nonspatial domains. These results imply that space acts as a scaffold for learning more abstract conceptual invariances.
6
Ghodratitoostani I, Vaziri Z, Miranda Neto M, de Giacomo Carneiro Barros C, Delbem ACB, Hyppolito MA, Jalilvand H, Louzada F, Leite JP. Conceptual framework for tinnitus: a cognitive model in practice. Sci Rep 2024;14:7186. [PMID: 38531913] [DOI: 10.1038/s41598-023-48006-7]
Abstract
Tinnitus is a conscious, attended perception of sourceless sound. Widespread theoretical and evidence-based neurofunctional and psychological models have tried to explain tinnitus-related distress considering the influence of psychological and cognitive factors. However, tinnitus models seem to be less focused on causality, thereby easily misleading interpretations, and they may be incapable of individualization. This study proposes a Conceptual Cognitive Framework (CCF) providing insight into the cognitive mechanisms involved in the predisposition, precipitation, and perpetuation of tinnitus and the consequent cognitive-emotional disturbances. The current CCF for tinnitus relies on evaluative conditional learning and appraisal, generating negative valence (emotional value) and arousal (cognitive value), leading to annoyance, distress, and distorted perception. The suggested methodology is well-defined, reproducible, and accessible, which can help foster future high-quality clinical databases. Through the perpetual-learning process, perceived tinnitus can always lead to annoyance, but only in the clinical stage does it directly cause annoyance. In the clinical stage, tinnitus perception can lead indirectly to distress only through experienced annoyance, either with ("Ind-1C" = 1.87; 95% CI 1.18-2.72) ["1st indirect path in the Clinical stage model": Tinnitus Loudness → Attention Bias → Cognitive-Emotional Value → Annoyance → Clinical Distress] or without ("Ind-2C" = 2.03; 95% CI 1.02-3.32) ["2nd indirect path in the Clinical stage model": Tinnitus Loudness → Annoyance → Clinical Distress] the perpetual-learning process. Further real-life testing of the CCF is expected to yield a meticulous, decision-supporting platform for cognitive rehabilitation and clinical interventions. Furthermore, the suggested methodology offers a reliable platform for CCF development in other cognitive impairments and supports causal clinical data models. It may also enhance our knowledge of psychological disorders and complicated comorbidities by supporting the design of different rehabilitation interventions and comprehensive frameworks in line with the "preventive medicine" policy.
Affiliation(s)
- Iman Ghodratitoostani
- Neurocognitive Engineering Laboratory (NEL), Center for Engineering Applied to Health, Institute of Mathematics and Computer Science, University of São Paulo, São Carlos, Brazil
- Institute of Mathematics and Computer Science, University of São Paulo, São Carlos, Brazil
- Adjunct Scholar, Tehran University of Medical Sciences, Tehran, Iran
- Zahra Vaziri
- Neurocognitive Engineering Laboratory (NEL), Center for Engineering Applied to Health, Institute of Mathematics and Computer Science, University of São Paulo, São Carlos, Brazil
- Department of Neurosciences and Behavioral Sciences, Medical School of Ribeirão Preto, University of São Paulo, São Paulo, Brazil
- Milton Miranda Neto
- Neurocognitive Engineering Laboratory (NEL), Center for Engineering Applied to Health, Institute of Mathematics and Computer Science, University of São Paulo, São Carlos, Brazil
- Institute of Mathematics and Computer Science, University of São Paulo, São Carlos, Brazil
- Camila de Giacomo Carneiro Barros
- Neurocognitive Engineering Laboratory (NEL), Center for Engineering Applied to Health, Institute of Mathematics and Computer Science, University of São Paulo, São Carlos, Brazil
- Department of Otorhinolaryngology, Ribeirão Preto Medical School, Universidade de São Paulo, Ribeirão Preto, Brazil
- Alexandre Cláudio Botazzo Delbem
- Neurocognitive Engineering Laboratory (NEL), Center for Engineering Applied to Health, Institute of Mathematics and Computer Science, University of São Paulo, São Carlos, Brazil
- Institute of Mathematics and Computer Science, University of São Paulo, São Carlos, Brazil
- Miguel Angelo Hyppolito
- Department of Ophthalmology, Otorhinolaryngology, Head and Neck Surgery, Ribeirão Preto Medical School, University of São Paulo, São Paulo, Brazil
- Hamid Jalilvand
- Department of Audiology, School of Rehabilitation, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Francisco Louzada
- Institute of Mathematics and Computer Science, University of São Paulo, São Carlos, Brazil
- Joao Pereira Leite
- Department of Neurosciences and Behavioral Sciences, Medical School of Ribeirão Preto, University of São Paulo, São Paulo, Brazil
7
Honbolygó F, Zulauf B, Zavogianni MI, Csépe V. Investigating the neurocognitive background of speech perception with a fast multi-feature MMN paradigm. Biol Futur 2024;75:145-158. [PMID: 38805154] [DOI: 10.1007/s42977-024-00219-1]
Abstract
The speech multi-feature MMN (mismatch negativity) paradigm offers a means to explore the neurocognitive background of the processing of multiple speech features in a short time, by capturing the time-locked electrophysiological activity of the brain known as event-related brain potentials (ERPs). Originating from the pioneering work of Näätänen et al. (Clin Neurophysiol 115:140-144, 2004), this paradigm introduces several infrequent deviant stimuli alongside standard ones, each differing in various speech features. In this study, we aimed to refine the multi-feature MMN paradigm used previously to encompass both segmental and suprasegmental (prosodic) features of speech. In the experiment, a two-syllable pseudoword was presented as the standard, and the deviant stimuli included alterations in consonants (deviation by place, or place and mode, of articulation), vowels (deviation by place or mode of articulation), and the stress pattern of the first syllable of the pseudoword. Results indicated the emergence of MMN components across all segmental and prosodic contrasts, with the expected fronto-central amplitude distribution. Subsequent analyses revealed subtle differences in MMN responses to the deviants, suggesting varying sensitivity to phonetic contrasts. Furthermore, individual differences in MMN amplitudes were noted, partially attributable to participants' musical and language backgrounds. These findings underscore the utility of the multi-feature MMN paradigm for rapid and efficient investigation of the neurocognitive mechanisms underlying speech processing. Moreover, the paradigm shows potential for use in further research on speech processing abilities in various populations.
Affiliation(s)
- Ferenc Honbolygó
- HUN-REN Research Centre for Natural Sciences, Budapest, Hungary
- Institute of Psychology, Eötvös Loránd University, Budapest, Hungary
- Borbála Zulauf
- Institute of Psychology, Eötvös Loránd University, Budapest, Hungary
- Maria Ioanna Zavogianni
- HUN-REN Research Centre for Natural Sciences, Budapest, Hungary
- Faculty of Modern Philology and Social Sciences, Multilingualism Doctoral School, University of Pannonia, Veszprém, Hungary
- Valéria Csépe
- HUN-REN Research Centre for Natural Sciences, Budapest, Hungary
- University of Pannonia, Veszprém, Hungary
8
Bianco R, Zuk NJ, Bigand F, Quarta E, Grasso S, Arnese F, Ravignani A, Battaglia-Mayer A, Novembre G. Neural encoding of musical expectations in a non-human primate. Curr Biol 2024;34:444-450.e5. [PMID: 38176416] [DOI: 10.1016/j.cub.2023.12.019]
Abstract
The appreciation of music is a universal trait of humankind.1,2,3 Evidence supporting this notion includes the ubiquity of music across cultures4,5,6,7 and the natural predisposition toward music that humans display early in development.8,9,10 Are we musical animals because of species-specific predispositions? This question cannot be answered by relying on cross-cultural or developmental studies alone, as these cannot rule out enculturation.11 Instead, it calls for cross-species experiments testing whether homologous neural mechanisms underlying music perception are present in non-human primates. We present music to two rhesus monkeys, reared without musical exposure, while recording electroencephalography (EEG) and pupillometry. Monkeys exhibit higher engagement and neural encoding of expectations based on the previously seeded musical context when passively listening to real music as opposed to shuffled controls. We then compare human and monkey neural responses to the same stimuli and find a species-dependent contribution of two fundamental musical features (pitch and timing12) in generating expectations: while timing- and pitch-based expectations13 are similarly weighted in humans, monkeys rely on timing rather than pitch. Together, these results shed light on the phylogeny of music perception. They highlight monkeys' capacity for processing temporal structures beyond plain acoustic processing, and they identify a species-dependent contribution of time- and pitch-related features to the neural encoding of musical expectations.
Affiliation(s)
- Roberta Bianco
- Neuroscience of Perception & Action Lab, Italian Institute of Technology, Viale Regina Elena 291, 00161 Rome, Italy
- Nathaniel J Zuk
- Department of Psychology, Nottingham Trent University, 50 Shakespeare Street, Nottingham NG1 4FQ, UK
- Félix Bigand
- Neuroscience of Perception & Action Lab, Italian Institute of Technology, Viale Regina Elena 291, 00161 Rome, Italy
- Eros Quarta
- Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
- Stefano Grasso
- Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
- Flavia Arnese
- Neuroscience of Perception & Action Lab, Italian Institute of Technology, Viale Regina Elena 291, 00161 Rome, Italy
- Andrea Ravignani
- Comparative Bioacoustics Group, Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, the Netherlands; Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Universitetsbyen 3, 8000 Aarhus, Denmark; Department of Human Neurosciences, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
- Alexandra Battaglia-Mayer
- Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
- Giacomo Novembre
- Neuroscience of Perception & Action Lab, Italian Institute of Technology, Viale Regina Elena 291, 00161 Rome, Italy
9
Grundei M, Schmidt TT, Blankenburg F. A multimodal cortical network of sensory expectation violation revealed by fMRI. Hum Brain Mapp 2023;44:5871-5891. [PMID: 37721377] [PMCID: PMC10619418] [DOI: 10.1002/hbm.26482]
Abstract
The brain is subjected to multi-modal sensory information in an environment governed by statistical dependencies. Mismatch responses (MMRs), classically recorded with EEG, have provided valuable insights into the brain's processing of regularities and the generation of corresponding sensory predictions. Only a few studies allow for comparisons of MMRs across multiple modalities in a simultaneous sensory stream, and their corresponding cross-modal context sensitivity remains unknown. Here, we used a tri-modal version of the roving stimulus paradigm in fMRI to elicit MMRs in the auditory, somatosensory and visual modality. Participants (N = 29) were simultaneously presented with sequences of low and high intensity stimuli in each of the three senses while actively observing the tri-modal input stream and occasionally reporting the intensity of the previous stimulus in a prompted modality. The sequences were based on a probabilistic model, defining transition probabilities such that, for each modality, stimuli were more likely to repeat (p = .825) than change (p = .175) and stimulus intensities were equiprobable (p = .5). Moreover, each transition was conditional on the configuration of the other two modalities, comprising global (cross-modal) predictive properties of the sequences. We identified a shared mismatch network of modality-general inferior frontal and temporo-parietal areas as well as sensory areas, where the connectivity (psychophysiological interaction) between these regions was modulated during mismatch processing. Further, we found deviant responses within the network to be modulated by local stimulus repetition, which suggests highly comparable processing of expectation violation across modalities. Moreover, hierarchically higher regions of the mismatch network in the temporo-parietal area around the intraparietal sulcus were identified to signal cross-modal expectation violation. With the consistency of MMRs across audition, somatosensation and vision, our study provides insights into a shared cortical network of uni- and multi-modal expectation violation in response to sequence regularities.
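The stimulus sequences in this abstract are specified precisely enough to sketch in code. The following is a hypothetical illustration, not the authors' stimulus code: it keeps only each modality's marginal repeat probability (p = .825) and the equiprobable starting intensities (p = .5), and for simplicity omits the cross-modal conditioning of transitions; all names are invented for the example.

```python
import random

def tri_modal_sequence(n_trials, p_repeat=0.825, seed=0):
    """Simulate a simplified tri-modal roving sequence: in each modality,
    the stimulus intensity repeats with p = .825 or switches with p = .175
    on every trial, starting from an equiprobable low/high intensity."""
    rng = random.Random(seed)
    modalities = ("auditory", "somatosensory", "visual")
    # Initial intensities are equiprobable (p = .5 each).
    state = {m: rng.choice(("low", "high")) for m in modalities}
    sequence = [dict(state)]
    for _ in range(n_trials - 1):
        for m in modalities:
            if rng.random() >= p_repeat:  # switch with p = .175
                state[m] = "high" if state[m] == "low" else "low"
        sequence.append(dict(state))
    return sequence

seq = tri_modal_sequence(100)
```

In the study the transition probabilities were additionally conditional on the configuration of the other two modalities; modelling that would require a transition table keyed on the full tri-modal state rather than independent per-modality draws.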
Affiliation(s)
- Miro Grundei
- Neurocomputation and Neuroimaging Unit, Freie Universität Berlin, Berlin, Germany
- Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Felix Blankenburg
- Neurocomputation and Neuroimaging Unit, Freie Universität Berlin, Berlin, Germany
- Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
10
Ringer H, Schröger E, Grimm S. Neural signatures of automatic repetition detection in temporally regular and jittered acoustic sequences. PLoS One 2023;18:e0284836. [PMID: 37948467] [PMCID: PMC10637696] [DOI: 10.1371/journal.pone.0284836]
Abstract
Detection of repeating patterns within continuous sound streams is crucial for efficient auditory perception. Previous studies demonstrated a remarkable sensitivity of the human auditory system to periodic repetitions in unfamiliar, meaningless sounds. Automatic repetition detection was reflected in different EEG markers, including sustained activity, neural synchronisation, and event-related responses to pattern occurrences. The current study investigated how listeners' attention and the temporal regularity of a sound modulate repetition perception, and how this influence is reflected in different EEG markers that were previously suggested to subserve dissociable functions. We reanalysed data from a previous study in which listeners were presented with sequences of unfamiliar artificial sounds that either contained repetitions of a certain sound segment or not. Repeating patterns occurred either regularly or with a temporal jitter within the sequences, and participants' attention was directed either towards the pattern repetitions or away from the auditory stimulation. Across both regular and jittered sequences, under both attention and inattention, pattern repetitions led to increased sustained activity throughout the sequence, evoked a characteristic positivity-negativity complex in the event-related potential, and enhanced inter-trial phase coherence of low-frequency oscillatory activity time-locked to repeating pattern onsets. While regularity had only a minor (if any) influence, attention significantly strengthened pattern repetition perception, which was consistently reflected in all three EEG markers. These findings suggest that the detection of pattern repetitions within continuous sounds relies on a flexible mechanism that is robust against inattention and temporal irregularity, both of which typically occur in naturalistic listening situations. Yet, attention to the auditory input can enhance processing of repeating patterns and improve repetition detection.
Affiliation(s)
- Hanna Ringer
- International Max Planck Research School on Neuroscience of Communication (IMPRS NeuroCom), Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Cognitive and Biological Psychology, Wilhelm Wundt Institute for Psychology, Leipzig University, Leipzig, Germany
- Research Group Neurocognition of Music and Language, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Erich Schröger
- Cognitive and Biological Psychology, Wilhelm Wundt Institute for Psychology, Leipzig University, Leipzig, Germany
- Sabine Grimm
- Physics of Cognition Lab, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany
- Cognitive Systems Lab, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany
11
Bianco R, Hall ET, Pearce MT, Chait M. Implicit auditory memory in older listeners: From encoding to 6-month retention. Curr Res Neurobiol 2023;5:100115. [PMID: 38020808] [PMCID: PMC10663129] [DOI: 10.1016/j.crneur.2023.100115]
Abstract
Any listening task, from sound recognition to sound-based communication, rests on auditory memory, which is known to decline in healthy ageing. However, how this decline maps onto the multiple components and stages of auditory memory remains poorly characterised. In an online unsupervised longitudinal study, we tested ageing effects on implicit auditory memory for rapid tone patterns. The test required participants (younger adults aged 20-30 and older adults aged 60-70) to quickly respond to rapid regularly repeating patterns emerging from random sequences. Patterns were novel in most trials (REGn), but, unbeknownst to the participants, a few distinct patterns recurred identically throughout the sessions (REGr). After correcting for processing speed, the response times (RTs) to REGn should reflect the information held in echoic and short-term memory before the pattern is detected; long-term memory formation and retention should be reflected in the RT advantage (RTA) for REGr versus REGn, which is expected to grow with exposure. Older participants were slower than younger adults in detecting REGn and exhibited a smaller RTA to REGr. Computational simulations using a model of auditory sequence memory indicated that these effects reflect age-related limitations in both early and long-term memory stages. In contrast to the ageing-related accelerated forgetting of verbal material, older adults here maintained stable memory traces for REGr patterns up to 6 months after the first exposure. The results demonstrate that ageing is associated with reduced short-term memory and long-term memory formation for tone patterns, but not with forgetting, even over surprisingly long timescales.
Affiliation(s)
- Roberta Bianco
- Ear Institute, University College London, WC1X 8EE, London, United Kingdom
- Neuroscience of Perception and Action Laboratory, Italian Institute of Technology, 00161, Rome, Italy
- Edward T.R. Hall
- School of Electronic Engineering and Computer Science, Queen Mary University of London, E1 4NS, London, United Kingdom
- Marcus T. Pearce
- School of Electronic Engineering and Computer Science, Queen Mary University of London, E1 4NS, London, United Kingdom
- Department of Clinical Medicine, Aarhus University, 8000, Aarhus C, Denmark
- Maria Chait
- Ear Institute, University College London, WC1X 8EE, London, United Kingdom
12
|
Tóth B, Velősy PK, Kovács P, Háden GP, Polver S, Sziller I, Winkler I. Auditory learning of recurrent tone sequences is present in the newborn's brain. Neuroimage 2023; 281:120384. PMID: 37739198; DOI: 10.1016/j.neuroimage.2023.120384.
Abstract
The seemingly effortless ability of our auditory system to rapidly detect new events in a dynamic environment is crucial for survival. Whether the underlying brain processes are innate is unknown. To answer this question, electroencephalography was recorded while regularly patterned (REG) versus random (RAND) tone sequences were presented to sleeping neonates. Regular relative to random sequences elicited differential neural responses after only a single repetition of the pattern indicating the existence of an innate capacity of the auditory system to detect auditory sequential regularities. We show that the newborn auditory system accumulates evidence only somewhat longer than the minimum amount determined by the ideal Bayesian observer model (the prediction from a variable-order Markov chain model) before detecting a repeating pattern. Thus, newborns can quickly form representations for regular features of the sound input, preparing the way for learning the contingencies of the environment.
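The ideal-observer benchmark mentioned in the abstract is a variable-order Markov model. As a toy illustration of the underlying principle (surprise under learned transition statistics drops once a pattern starts repeating), here is a minimal first-order sketch with made-up parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
alphabet = 20                                    # number of distinct tones (arbitrary)
pattern = rng.integers(0, alphabet, size=10)     # the regular pattern
seq = np.concatenate([rng.integers(0, alphabet, size=60),  # random (RAND) portion
                      np.tile(pattern, 4)])                # regular (REG) portion

# First-order transition counts with add-one smoothing;
# surprise of each tone = -log2 P(tone | previous tone).
counts = np.ones((alphabet, alphabet))
surprise = []
for prev, nxt in zip(seq[:-1], seq[1:]):
    surprise.append(-np.log2(counts[prev, nxt] / counts[prev].sum()))
    counts[prev, nxt] += 1

# Mean surprise falls once the repeating pattern has been seen a few times,
# the kind of evidence an ideal observer accumulates before detection.
print(np.mean(surprise[:20]), np.mean(surprise[-20:]))
```

A variable-order model generalizes this by conditioning on contexts of several preceding tones rather than just one.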
Affiliation(s)
- Brigitta Tóth: Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary
- Péter Kristóf Velősy: Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary; Department of Cognitive Science, Budapest University of Technology and Economics, Budapest, Hungary
- Petra Kovács: Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary; Department of Cognitive Science, Budapest University of Technology and Economics, Budapest, Hungary
- Gábor Peter Háden: Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary; Department of Telecommunications and Media Informatics, Faculty of Electrical Engineering and Informatics, Budapest University of Technology and Economics, Budapest, Hungary
- Silvia Polver: Department of Developmental Psychology and Socialisation, University of Padova, Padova, Italy
- Istvan Sziller: Division of Obstetrics and Gynecology, DBC - Szent Imre University Teaching Hospital, Budapest, Hungary
- István Winkler: Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary
13
Merchie A, Gomot M. Habituation, Adaptation and Prediction Processes in Neurodevelopmental Disorders: A Comprehensive Review. Brain Sci 2023; 13:1110. PMID: 37509040; PMCID: PMC10377027; DOI: 10.3390/brainsci13071110.
Abstract
Habituation, the simplest form of learning preserved across species and evolution, is characterized by a response decrease as a stimulus is repeated. This adaptive function has been shown to be altered in some psychiatric and neurodevelopmental disorders such as autism spectrum disorder (ASD), attention-deficit/hyperactivity disorder (ADHD) or schizophrenia. At the brain level, habituation is characterized by a decrease in neural activity as a stimulation is repeated, referred to as neural adaptation. This phenomenon influences the ability to make predictions and to detect change, two processes altered in some neurodevelopmental and psychiatric disorders. The objectives of this comprehensive review are to characterize habituation, neural adaptation, and prediction throughout typical development and in neurodevelopmental disorders, and to evaluate their implication in symptomatology, specifically in sensitivity to change or the need for sameness. Finally, we summarize the different approaches used to investigate adaptation, reporting the contribution of animal studies as well as electrophysiological studies in humans to the understanding of the underlying neuronal mechanisms.
Affiliation(s)
- Marie Gomot: UMR 1253 iBrain, Université de Tours, INSERM, 37000 Tours, France
14
Awwad B, Jankowski MM, Polterovich A, Bashari S, Nelken I. Extensive representation of sensory deviance in the responses to auditory gaps in unanesthetized rats. Curr Biol 2023; S0960-9822(23)00764-9. PMID: 37385255; DOI: 10.1016/j.cub.2023.06.013.
Abstract
Unexpected changes in incoming sensory streams are associated with large errors in predicting the deviant stimulus relative to a memory trace of past stimuli. Mismatch negativity (MMN) in human studies and the release from stimulus-specific adaptation (SSA) in animal models correlate with prediction errors and deviance detection [1]. In human studies, violation of expectations elicited by an unexpected stimulus omission resulted in an omission MMN [2-5]. These responses are evoked after the expected occurrence time of the omitted stimulus, implying that they reflect the violation of a temporal expectancy [6]. Because they are often time-locked to the end of the omitted stimulus [4,6,7], they resemble off responses. Indeed, suppression of cortical activity after the termination of the gap disrupts gap detection, suggesting an essential role for offset responses [8]. Here, we demonstrate that brief gaps in short noise bursts in the auditory cortex of unanesthetized rats frequently evoke offset responses. Importantly, we show that omission responses are elicited when these gaps are expected but are omitted. These omission responses, together with the release from SSA of both onset and offset responses to rare gaps, form a rich and varied representation of prediction-related signals in the auditory cortex of unanesthetized rats, extending substantially and refining the representations described previously in anesthetized rats.
Affiliation(s)
- Bshara Awwad, Maciej M Jankowski, Ana Polterovich, Sapir Bashari, Israel Nelken: Edmond and Lily Safra Center for Brain Sciences, Hebrew University of Jerusalem, Safra Campus, Jerusalem 91904, Israel; Department of Neurobiology, the Silberman Institute of Life Sciences, Hebrew University of Jerusalem, Safra Campus, Jerusalem 91904, Israel
15
Radchenko G, Demareva V, Gromov K, Zayceva I, Rulev A, Zhukova M, Demarev A. Neural mechanisms of temporal and rhythmic structure processing in non-musicians. Front Neurosci 2023; 17:1124038. PMID: 37234263; PMCID: PMC10206032; DOI: 10.3389/fnins.2023.1124038.
Abstract
Music is increasingly being used as a therapeutic tool in rehabilitation medicine and psychophysiology. One of its key components is its temporal organization. We studied the neurocognitive processes underlying the perception of musical meter at different tempi using the event-related potential (ERP) technique. Twenty volunteers participated (6 men; median age 23 years). The participants were asked to listen to 4 experimental series that differed in tempo (fast vs. slow) and meter (duple vs. triple). Each series consisted of 625 audio stimuli, 85% of which were organized with a standard metric structure (standard stimuli) while 15% included unexpected accents (deviant stimuli). The results revealed that the type of metric structure influences the detection of change in the stimuli. The analysis showed that the N200 wave occurred significantly faster for stimuli with duple meter and fast tempo and slowest for stimuli with triple meter and fast tempo.
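The 625-stimulus, 85%/15% standard/deviant design is a classic oddball sequence. A sketch of how such a trial list might be generated (the no-adjacent-deviants constraint is a common convention in oddball designs and is an assumption here, not taken from the paper):

```python
import random

random.seed(42)

def oddball_sequence(n_trials=625, p_deviant=0.15):
    """Label each trial standard/deviant, disallowing two deviants in a row
    (an assumed spacing constraint typical of oddball paradigms)."""
    labels = []
    for _ in range(n_trials):
        if labels and labels[-1] == "deviant":
            labels.append("standard")  # enforce spacing between deviants
        else:
            labels.append("deviant" if random.random() < p_deviant else "standard")
    return labels

seq = oddball_sequence()
print(seq.count("deviant") / len(seq))  # slightly below 0.15 due to the constraint
```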
16
Gurariy G, Randall R, Greenberg AS. Neuroimaging evidence for the direct role of auditory scene analysis in object perception. Cereb Cortex 2023; 33:6257-6272. PMID: 36562994; PMCID: PMC10183742; DOI: 10.1093/cercor/bhac501.
Abstract
Auditory Scene Analysis (ASA) refers to the grouping of acoustic signals into auditory objects. Previously, we have shown that the perceived musicality of auditory sequences varies with high-level organizational features. Here, we explore the neural mechanisms mediating ASA and auditory object perception. Participants performed musicality judgments on randomly generated pure-tone sequences and on manipulated versions of each sequence containing low-level changes (amplitude, timbre). Low-level manipulations affected auditory object perception, as evidenced by changes in musicality ratings. fMRI was used to measure neural activation to the sequences rated most and least musical and to the altered versions of each sequence. Next, we generated two partially overlapping networks: (i) a music processing network (music localizer) and (ii) an ASA network (base sequences vs. ASA-manipulated sequences). Using Representational Similarity Analysis, we correlated the functional profiles of each ROI to a model generated from behavioral musicality ratings as well as to models corresponding to low-level feature processing and music perception. Within overlapping regions, areas near primary auditory cortex correlated with low-level ASA models, whereas right IPS correlated with musicality ratings. Shared neural mechanisms that correlate with behavior and underlie both ASA and music perception suggest that low-level features of auditory stimuli play a role in auditory object perception.
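The Representational Similarity Analysis step described above reduces to correlating a neural representational dissimilarity matrix (RDM) with a model RDM. A NumPy-only sketch with random stand-in data (all dimensions and data are invented; the study's actual ROIs, RDMs, and models differ):

```python
import numpy as np

rng = np.random.default_rng(0)
n_stim, n_voxels = 12, 50
patterns = rng.normal(size=(n_stim, n_voxels))  # stand-in ROI activation patterns
ratings = rng.normal(size=n_stim)               # stand-in musicality ratings

def condensed_upper(m):
    # vectorize the upper triangle of a square (dis)similarity matrix
    return m[np.triu_indices_from(m, k=1)]

# Neural RDM: 1 - Pearson correlation between stimulus patterns.
neural_rdm = condensed_upper(1.0 - np.corrcoef(patterns))
# Model RDM: absolute difference in behavioral ratings.
model_rdm = condensed_upper(np.abs(ratings[:, None] - ratings[None, :]))

def spearman(a, b):
    # rank-transform, then Pearson correlation of the ranks (no ties expected here)
    def rank(x):
        r = np.empty(len(x))
        r[np.argsort(x)] = np.arange(len(x))
        return r
    return np.corrcoef(rank(a), rank(b))[0, 1]

rho = spearman(neural_rdm, model_rdm)
print(f"model-neural Spearman rho = {rho:.3f}")
```

With random stand-in data the correlation hovers near zero; in the study, the same comparison is run per ROI against behavioral, low-level, and music-perception model RDMs.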
Affiliation(s)
- Gennadiy Gurariy: Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, 8701 W Watertown Plank Rd, Milwaukee, WI 53233, United States
- Richard Randall: School of Music and Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, United States
- Adam S Greenberg: Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, 8701 W Watertown Plank Rd, Milwaukee, WI 53233, United States
17
Bonmassar C, Scharf F, Widmann A, Wetzel N. On the relationship of arousal and attentional distraction by emotional novel sounds. Cognition 2023; 237:105470. PMID: 37150156; DOI: 10.1016/j.cognition.2023.105470.
Abstract
Unexpected and task-irrelevant sounds can impair performance in a task. It has been shown that highly arousing emotional distractor sounds impair performance less than moderately arousing neutral distractor sounds. The present study tests whether these differential emotion-related distraction effects are directly related to an enhancement of arousal evoked by the processing of emotional distractor sounds. We disentangled the costs of orienting of attention and the benefits of increased arousal levels during the presentation of highly arousing emotional and moderately arousing neutral novel sounds that were embedded in a sequence of repeated standard sounds. We used sound-related pupil dilation responses as a marker of arousal and RTs as a marker of distraction in a visual categorization task in 57 healthy young adults. Multilevel analyses revealed increased RTs and increased pupil dilation in response to novel vs. standard sounds. Emotional novel sounds reduced distraction effects on the behavioral level and increased pupil dilation responses compared to neutral novel sounds. Bayes factors revealed strong evidence against an inversely proportional relationship between behavioral distraction effects and sound-related pupil dilation responses for emotional sounds. Given that the activity of the locus coeruleus has been linked to both changes in pupil diameter and arousal, it may act as a common antecedent that indirectly links the two through the release of norepinephrine into brain networks involved in attention control and in control of the pupil. The present study provides new insights into the relation between changes in arousal and attentional distraction during the processing of emotional task-irrelevant novel sounds.
Affiliation(s)
- Andreas Widmann: Leibniz Institute for Neurobiology, Magdeburg, Germany; Leipzig University, Germany
- Nicole Wetzel: Leibniz Institute for Neurobiology, Magdeburg, Germany; Center for Behavioral Brain Sciences Magdeburg, Germany; University of Applied Sciences Magdeburg-Stendal, Germany
18
Haigh SM, Berryhill ME, Kilgore-Gomez A, Dodd M. Working memory and sensory memory in subclinical high schizotypy: An avenue for understanding schizophrenia? Eur J Neurosci 2023; 57:1577-1596. PMID: 36895099; PMCID: PMC10178355; DOI: 10.1111/ejn.15961.
Abstract
The search for robust, reliable biomarkers of schizophrenia remains a high priority in psychiatry. Biomarkers are valuable because they can reveal the underlying mechanisms of symptoms and monitor treatment progress and may predict future risk of developing schizophrenia. Despite the existence of various promising biomarkers that relate to symptoms across the schizophrenia spectrum, and despite published recommendations encouraging multivariate metrics, they are rarely investigated simultaneously within the same individuals. In those with schizophrenia, the magnitude of purported biomarkers is complicated by comorbid diagnoses, medications and other treatments. Here, we argue three points. First, we reiterate the importance of assessing multiple biomarkers simultaneously. Second, we argue that investigating biomarkers in those with schizophrenia-related traits (schizotypy) in the general population can accelerate progress in understanding the mechanisms of schizophrenia. We focus on biomarkers of sensory and working memory in schizophrenia and their smaller effects in individuals with nonclinical schizotypy. Third, we note irregularities across research domains leading to the current situation in which there is a preponderance of data on auditory sensory memory and visual working memory, but markedly less in visual (iconic) memory and auditory working memory, particularly when focusing on schizotypy where data are either scarce or inconsistent. Together, this review highlights opportunities for researchers without access to clinical populations to address gaps in knowledge. We conclude by highlighting the theory that early sensory memory deficits contribute negatively to working memory and vice versa. This presents a mechanistic perspective where biomarkers may interact with one another and impact schizophrenia-related symptoms.
Affiliation(s)
- Sarah M. Haigh, Marian E. Berryhill, Alexandrea Kilgore-Gomez: Department of Psychology, Center for Integrative Neuroscience, Programs in Cognitive and Brain Sciences, and Neuroscience, University of Nevada, Reno, Nevada, USA
- Michael Dodd: Department of Psychology, University of Nebraska, Lincoln, Nebraska, USA
19
Park JJ, Baek SC, Suh MW, Choi J, Kim SJ, Lim Y. The effect of topic familiarity and volatility of auditory scene on selective auditory attention. Hear Res 2023; 433:108770. PMID: 37104990; DOI: 10.1016/j.heares.2023.108770.
Abstract
Selective auditory attention has been shown to modulate the cortical representation of speech. This effect is well documented in acoustically challenging environments. However, the influence of top-down factors, in particular topic familiarity, on this process remains unclear, despite evidence that semantic information can promote speech-in-noise perception. Moreover, beyond the individual features that form a static listening condition, dynamic and irregular changes of auditory scenes (volatile listening environments) have been less studied. To address these gaps, we explored the influence of topic familiarity and volatile listening on the selective auditory attention process during dichotic listening using electroencephalography. When stories with unfamiliar topics were presented, participants' comprehension was severely degraded. However, their cortical activity selectively tracked the speech of the target story well. This implies that topic familiarity hardly influences the speech-tracking neural index, possibly when the bottom-up information is sufficient. However, when the listening environment was volatile and listeners had to re-engage with new speech whenever the auditory scene changed, the neural correlates of the attended speech were degraded. In particular, the cortical response to the attended speech and the spatial asymmetry of the response to left vs. right attention were significantly attenuated around 100-200 ms after speech onset. These findings suggest that volatile listening environments can adversely affect the modulation effect of selective attention, possibly by hampering proper attention due to increased perceptual load.
Affiliation(s)
- Jonghwa Jeonglok Park: Center for Intelligent & Interactive Robotics, Artificial Intelligence and Robot Institute, Korea Institute of Science and Technology, Seoul 02792, South Korea; Department of Electrical and Computer Engineering, College of Engineering, Seoul National University, Seoul 08826, South Korea
- Seung-Cheol Baek: Center for Intelligent & Interactive Robotics, Artificial Intelligence and Robot Institute, Korea Institute of Science and Technology, Seoul 02792, South Korea; Research Group Neurocognition of Music and Language, Max Planck Institute for Empirical Aesthetics, Grüneburgweg 14, Frankfurt am Main 60322, Germany
- Myung-Whan Suh: Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University Hospital, Seoul 03080, South Korea
- Jongsuk Choi: Center for Intelligent & Interactive Robotics, Artificial Intelligence and Robot Institute, Korea Institute of Science and Technology, Seoul 02792, South Korea; Department of AI Robotics, KIST School, Korea University of Science and Technology, Seoul 02792, South Korea
- Sung June Kim: Department of Electrical and Computer Engineering, College of Engineering, Seoul National University, Seoul 08826, South Korea
- Yoonseob Lim: Center for Intelligent & Interactive Robotics, Artificial Intelligence and Robot Institute, Korea Institute of Science and Technology, Seoul 02792, South Korea; Department of HY-KIST Bio-convergence, Hanyang University, Seoul 04763, South Korea
20
Sadagopan S, Kar M, Parida S. Quantitative models of auditory cortical processing. Hear Res 2023; 429:108697. PMID: 36696724; PMCID: PMC9928778; DOI: 10.1016/j.heares.2023.108697.
Abstract
To generate insight from experimental data, it is critical to understand the inter-relationships between individual data points and place them in context within a structured framework. Quantitative modeling can provide the scaffolding for such an endeavor. Our main objective in this review is to provide a primer on the range of quantitative tools available to experimental auditory neuroscientists. Quantitative modeling is advantageous because it can provide a compact summary of observed data, make underlying assumptions explicit, and generate predictions for future experiments. Quantitative models may be developed to characterize or fit observed data, to test theories of how a task may be solved by neural circuits, to determine how observed biophysical details might contribute to measured activity patterns, or to predict how an experimental manipulation would affect neural activity. In complexity, quantitative models can range from those that are highly biophysically realistic and that include detailed simulations at the level of individual synapses, to those that use abstract and simplified neuron models to simulate entire networks. Here, we survey the landscape of recently developed models of auditory cortical processing, highlighting a small selection of models to demonstrate how they help generate insight into the mechanisms of auditory processing. We discuss examples ranging from models that use details of synaptic properties to explain the temporal pattern of cortical responses to those that use modern deep neural networks to gain insight into human fMRI data. We conclude by discussing a biologically realistic and interpretable model that our laboratory has developed to explore aspects of vocalization categorization in the auditory pathway.
Affiliation(s)
- Srivatsun Sadagopan: Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA; Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA
- Manaswini Kar: Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA
- Satyabrata Parida: Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
21
Gumenyuk V, Korzyukov O, Tapaskar N, Wagner M, Larson CR, Hammer MJ. Deficiency in Re-Orienting of Attention in Adults with Attention-Deficit Hyperactivity Disorder. Clin EEG Neurosci 2023; 54:141-150. PMID: 35861774; DOI: 10.1177/15500594221115737.
Abstract
Objective: To characterize potential brain indexes of attention-deficit hyperactivity disorder (ADHD) in adults. Methods: In an effort to develop objective, laboratory-based tests that can help to establish an ADHD diagnosis, brain indexes of distractibility were investigated in a group of adults. We used event-related brain potentials (ERPs) and performance measures in a forced-choice visual task. Results: Behaviorally, distractibility was significantly higher in the ADHD group. Across the three ERP components of distraction (N1 enhancement, P300 (P3a), and Reorienting Negativity (RON)), a significant difference between the ADHD group and matched controls was found in the amplitude of the RON. We used non-parametric randomization tests, enabling us to statistically validate this between-group difference. Conclusions: The main results of this feasibility study suggest that, among the ERP components associated with auditory distraction, the RON response is a promising index for a potential biomarker of deficient re-orienting of attention in adults with ADHD.
Affiliation(s)
- Valentina Gumenyuk: Department of Neurological Sciences, MEG Laboratory, UNMC, Omaha, NE, USA
- Oleg Korzyukov: Wisconsin Airway Sensory Physiology Laboratory, University of Wisconsin - Whitewater, Whitewater, WI, USA; Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Natalie Tapaskar: Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA; Department of Medicine, University of Chicago Medical Center, Chicago, IL, USA
- Charles R Larson: Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Michael J Hammer: Wisconsin Airway Sensory Physiology Laboratory, University of Wisconsin - Whitewater, Whitewater, WI, USA
22
Ringer H, Schröger E, Grimm S. Within- and between-subject consistency of perceptual segmentation in periodic noise: A combined behavioral tapping and EEG study. Psychophysiology 2023; 60:e14174. PMID: 36106761; DOI: 10.1111/psyp.14174.
Abstract
It is remarkable that human listeners can perceive periodicity in noise, as the isochronous repetition of a particular noise segment is not accompanied by salient physical cues in the acoustic signal. Previous research suggested that listeners rely on short temporally local and idiosyncratic features to perceptually segment periodic noise sequences. The present study sought to test this assumption by disentangling consistency of perceptual segmentation within and between listeners. Presented periodic noise sequences either consisted of seamless repetitions of a 500-ms segment or of repetitions of a 200-ms segment that were interleaved with 300-ms portions of random noise. Both within- and between-subject consistency was stronger for interleaved (compared with seamless) periodic sequences. The increased consistency likely resulted from reduced temporal jitter of potential features used for perceptual segmentation when the recurring segment was shorter and occurred interleaved with random noise. These results support the notion that perceptual segmentation of periodic noise relies on subtle temporally local features. However, the finding that some specific noise sequences were segmented more consistently across listeners than others challenges the assumption that the features are necessarily idiosyncratic. Instead, in some specific noise samples, a preference for certain spectral features is shared between individuals.
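The two kinds of periodic sequences can be sketched directly from the description above; the sampling rate and the synthesis from Gaussian noise are assumptions, not the study's stimulus code:

```python
import numpy as np

FS = 44100  # assumed sampling rate
rng = np.random.default_rng(0)

def ms(n):
    """Milliseconds to samples at the assumed rate."""
    return int(FS * n / 1000)

def seamless_sequence(n_periods=4):
    """Seamless repetition of one 500-ms noise segment."""
    segment = rng.standard_normal(ms(500))
    return np.tile(segment, n_periods)

def interleaved_sequence(n_periods=4):
    """A recurring 200-ms segment interleaved with fresh 300-ms random noise,
    so the period is still 500 ms but the repeated portion is shorter."""
    segment = rng.standard_normal(ms(200))
    return np.concatenate(
        [np.concatenate([segment, rng.standard_normal(ms(300))])
         for _ in range(n_periods)]
    )

seq_seamless = seamless_sequence()
seq_interleaved = interleaved_sequence()
```

Both sequences share the same 500-ms period; the study found that listeners segmented the interleaved type more consistently.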
Affiliation(s)
- Hanna Ringer: International Max Planck Research School NeuroCom, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Wilhelm Wundt Institute for Psychology, Leipzig University, Leipzig, Germany
- Erich Schröger: Wilhelm Wundt Institute for Psychology, Leipzig University, Leipzig, Germany
- Sabine Grimm: Wilhelm Wundt Institute for Psychology, Leipzig University, Leipzig, Germany; Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany
23
Weise A, Grimm S, Maria Rimmele J, Schröger E. Auditory representations for long lasting sounds: Insights from event-related brain potentials and neural oscillations. Brain Lang 2023; 237:105221. PMID: 36623340; DOI: 10.1016/j.bandl.2022.105221.
Abstract
The basic features of short sounds, such as frequency and intensity, including their temporal dynamics, are integrated into a unitary representation. Knowledge of how our brain processes long lasting sounds is scarce. We review research utilizing the Mismatch Negativity event-related potential and neural oscillatory activity to study the representations of long lasting simple versus complex sounds, such as sinusoidal tones versus speech. There is evidence for a temporal constraint in the formation of auditory representations: auditory edges like sound onsets within long lasting sounds open a temporal window of about 350 ms in which the sound's dynamics are integrated into a representation, while information beyond that window contributes less to that representation. This integration window segments the auditory input into short chunks. We argue that the representations established in adjacent integration windows can be concatenated into an auditory representation of a long sound, thus overcoming the temporal constraint.
Affiliation(s)
- Annekathrin Weise: Department of Psychology, Ludwig-Maximilians-University Munich, Germany; Wilhelm Wundt Institute for Psychology, Leipzig University, Germany
- Sabine Grimm: Wilhelm Wundt Institute for Psychology, Leipzig University, Germany
- Johanna Maria Rimmele: Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Germany; Center for Language, Music and Emotion, New York University, Max Planck Institute, Department of Psychology, 6 Washington Place, New York, NY 10003, United States
- Erich Schröger: Wilhelm Wundt Institute for Psychology, Leipzig University, Germany
24
Herrmann B, Maess B, Johnsrude IS. Sustained responses and neural synchronization to amplitude and frequency modulation in sound change with age. Hear Res 2023; 428:108677. [PMID: 36580732 DOI: 10.1016/j.heares.2022.108677] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/28/2022] [Revised: 12/09/2022] [Accepted: 12/16/2022] [Indexed: 12/23/2022]
Abstract
Perception of speech requires sensitivity to features, such as amplitude and frequency modulations, that are often temporally regular. Previous work suggests age-related changes in neural responses to temporally regular features, but little work has focused on age differences for different types of modulations. We recorded magnetoencephalography in younger (21-33 years) and older adults (53-73 years) to investigate age differences in neural responses to slow (2-6 Hz sinusoidal and non-sinusoidal) modulations in amplitude, frequency, or combined amplitude and frequency. Audiometric pure-tone average thresholds were elevated in older compared to younger adults, indicating subclinical hearing impairment in the recruited older-adult sample. Neural responses to sound onset (independent of temporal modulations) were increased in magnitude in older compared to younger adults, suggesting hyperresponsivity and a loss of inhibition in the aged auditory system. Analyses of neural activity to modulations revealed greater neural synchronization with amplitude, frequency, and combined amplitude-frequency modulations for older compared to younger adults. This potentiated response generalized across different degrees of temporal regularity (sinusoidal and non-sinusoidal), although neural synchronization was generally lower for non-sinusoidal modulation. Despite greater synchronization, sustained neural activity was reduced in older compared to younger adults for sounds modulated both sinusoidally and non-sinusoidally in frequency. Our results suggest age differences in the sensitivity of the auditory system to features present in speech and other natural sounds.
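For intuition about the stimulus contrast here, sinusoidal versus non-sinusoidal amplitude modulation at the same rate can be generated in a few lines. The carrier frequency, modulation rate, and the cubed envelope used as the "non-sinusoidal" case are illustrative choices, not the study's actual stimuli.

```python
import numpy as np

fs = 8000                                   # toy sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
carrier = np.sin(2 * np.pi * 1000 * t)      # 1 kHz tone

# 4 Hz sinusoidal amplitude modulation, 100% depth
am_sin = (1 + np.sin(2 * np.pi * 4 * t)) / 2 * carrier

# A non-sinusoidal modulator: same 4 Hz rate, but a sharpened
# (less temporally regular in shape) envelope
env = ((1 + np.sin(2 * np.pi * 4 * t)) / 2) ** 3
am_nonsin = env * carrier
```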
Collapse
Affiliation(s)
- Björn Herrmann
- Rotman Research Institute, Baycrest, North York, ON M6A 2E1, Canada; Department of Psychology, University of Toronto, Toronto, ON M5S 1A1, Canada; Department of Psychology & Brain and Mind Institute, The University of Western Ontario, London, ON N6A 3K7, Canada.
| | - Burkhard Maess
- Max Planck Institute for Human Cognitive and Brain Sciences, Brain Networks Unit, Leipzig 04103, Germany
| | - Ingrid S Johnsrude
- Department of Psychology & Brain and Mind Institute, The University of Western Ontario, London, ON N6A 3K7, Canada; School of Communication Sciences & Disorders, The University of Western Ontario, London, ON N6A 5B7, Canada
| |
Collapse
|
25
|
Bianco R, Chait M. No Link Between Speech-in-Noise Perception and Auditory Sensory Memory - Evidence From a Large Cohort of Older and Younger Listeners. Trends Hear 2023; 27:23312165231190688. [PMID: 37828868 PMCID: PMC10576936 DOI: 10.1177/23312165231190688] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2022] [Revised: 07/06/2023] [Accepted: 07/11/2023] [Indexed: 10/14/2023] Open
Abstract
A growing literature demonstrates a link between working memory (WM) and speech-in-noise (SiN) perception. However, the nature of this correlation, and which components of WM might underlie it, are debated. We investigated how SiN reception links with auditory sensory memory (aSM), the low-level processes that support the short-term maintenance of temporally unfolding sounds. A large sample of older (N = 199, 60-79 yo) and younger (N = 149, 20-35 yo) participants was recruited online and performed a coordinate response measure-based speech-in-babble task that taps listeners' ability to track a speech target in background noise. We used two tasks to investigate implicit and explicit aSM. Both were based on tone patterns overlapping in processing time scales with speech (presentation rate of tones 20 Hz; of patterns 2 Hz). We hypothesised that a link between SiN and aSM may be particularly apparent in older listeners due to age-related reductions in both SiN reception and aSM. We confirmed impaired SiN reception in the older cohort and demonstrated reduced aSM performance in those listeners. However, SiN and aSM did not share variability. Across the two age groups, SiN performance was predicted by a binaural processing test and age. The results suggest that previously observed links between WM and SiN may relate to the executive components and other cognitive demands of the tasks used. This finding helps to constrain the search for the perceptual and cognitive factors that explain individual variability in SiN performance.
Collapse
Affiliation(s)
- Roberta Bianco
- Ear Institute, University College London, London, UK
- Neuroscience of Perception and Action Lab, Italian Institute of Technology (IIT), Rome, Italy
| | - Maria Chait
- Ear Institute, University College London, London, UK
| |
Collapse
|
26
|
Larson LM, Feuerriegel D, Hasan MI, Braat S, Jin J, Tipu SMU, Shiraji S, Tofail F, Biggs BA, Hamadani JD, Johnson KA, Bode S, Pasricha SR. Effects of iron supplementation on neural indices of habituation in Bangladeshi children. Am J Clin Nutr 2023; 117:73-82. [PMID: 36789946 DOI: 10.1016/j.ajcnut.2022.11.023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2022] [Revised: 11/23/2022] [Accepted: 11/29/2022] [Indexed: 12/24/2022] Open
Abstract
BACKGROUND Iron deficiency and anemia have been associated with poor cognition in children, yet the effects of iron supplementation on neurocognition remain unclear. OBJECTIVE We aimed to examine the effects of supplementation with iron on neural indices of habituation using auditory event-related brain potentials (ERPs). METHODS This substudy was nested within a 3-arm, double-blind, double-dummy, individually randomized trial in Bangladesh, in which 3300 8-mo-old children were randomly selected to receive 3 mo of daily iron syrup (12.5 mg iron), multiple micronutrient powders (MNPs) (including 12.5 mg iron), or placebo. Children were assessed after 3 mo of intervention (mo 3) and 9 mo thereafter (mo 12). The neurocognitive substudy comprised a randomly selected subset of children from the main trial. Brain activity elicited during an auditory roving oddball task was recorded using electroencephalography to provide an index of habituation. The differential response to a novel (deviant) compared with a repeated (standard) sound was examined. The primary outcome was the amplitude of the mismatch response (deviant minus standard tone waveforms) at mo 3. Secondary outcomes included the deviant and standard tone-evoked amplitudes, N2 amplitude differences, and differences in mean amplitudes evoked by deviant tones presented in the second compared with the first half of the oddball sequence at mo 3 and 12. RESULTS Data were analyzed from 329 children at mo 3 and 363 at mo 12. Analyses indicated no treatment effects of the iron interventions compared with placebo on the amplitude of the mismatch response (iron syrup compared with placebo: mean difference (MD) = 0.07 μV [95% CI: -1.22, 1.37]; MNPs compared with placebo: MD = 0.58 μV [95% CI: -0.74, 1.90]) nor on any secondary ERP outcomes at mo 3 or 12, despite improvements in hemoglobin and ferritin concentrations from iron syrup and MNPs in this nested substudy.
CONCLUSION In Bangladeshi children with >40% anemia prevalence, iron or MNP interventions alone are insufficient to improve neural indices of habituation. This trial was registered at the Australian New Zealand Clinical Trials Registry as ACTRN12617000660381.
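The primary outcome above, the amplitude of the deviant-minus-standard difference wave in an analysis window, amounts to a simple computation. A minimal sketch with invented waveforms follows; the time axis, the analysis window, and the -2 μV deflection are all illustrative, not values from the trial.

```python
import numpy as np

def mismatch_amplitude(deviant_erp, standard_erp, times, t0, t1):
    """Mean amplitude of the difference wave (deviant minus standard)
    within the analysis window [t0, t1]. Toy sketch, not the trial code."""
    diff = np.asarray(deviant_erp) - np.asarray(standard_erp)
    mask = (times >= t0) & (times <= t1)
    return diff[mask].mean()

times = np.linspace(-0.1, 0.5, 601)      # seconds relative to tone onset
standard = np.zeros_like(times)          # flat "standard" ERP
# A fake mismatch-like negativity between 100 and 200 ms, in μV
deviant = np.where((times >= 0.1) & (times <= 0.2), -2.0, 0.0)
mmr = mismatch_amplitude(deviant, standard, times, 0.1, 0.2)
```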
Collapse
Affiliation(s)
- Leila M Larson
- Department of Health Promotion, Education, and Behavior, Arnold School of Public Health, University of South Carolina, Columbia, SC, USA; Population Health and Immunity Division, Walter and Eliza Hall Institute of Medical Research, Melbourne, VIC, Australia; Department of Infectious Diseases at the Peter Doherty Institute, The University of Melbourne, Melbourne, VIC, Australia.
| | - Daniel Feuerriegel
- Melbourne School of Psychological Sciences, The University of Melbourne, Melbourne, VIC, Australia
| | - Mohammed Imrul Hasan
- Maternal and Child Health Division, International Centre for Diarrhoeal Disease Research, Dhaka, Bangladesh
| | - Sabine Braat
- Population Health and Immunity Division, Walter and Eliza Hall Institute of Medical Research, Melbourne, VIC, Australia; Department of Infectious Diseases at the Peter Doherty Institute, The University of Melbourne, Melbourne, VIC, Australia; Centre for Epidemiology and Biostatistics, Melbourne School of Population and Global Health, The University of Melbourne, Australia
| | - Jerry Jin
- Department of Infectious Diseases at the Peter Doherty Institute, The University of Melbourne, Melbourne, VIC, Australia; Melbourne School of Psychological Sciences, The University of Melbourne, Melbourne, VIC, Australia
| | - Sm Mulk Uddin Tipu
- Maternal and Child Health Division, International Centre for Diarrhoeal Disease Research, Dhaka, Bangladesh
| | - Shamima Shiraji
- Maternal and Child Health Division, International Centre for Diarrhoeal Disease Research, Dhaka, Bangladesh
| | - Fahmida Tofail
- Maternal and Child Health Division, International Centre for Diarrhoeal Disease Research, Dhaka, Bangladesh
| | - Beverley-Ann Biggs
- Department of Infectious Diseases at the Peter Doherty Institute, The University of Melbourne, Melbourne, VIC, Australia; The Victorian Infectious Diseases Service, The Royal Melbourne Hospital, Melbourne, VIC, Australia
| | - Jena D Hamadani
- Maternal and Child Health Division, International Centre for Diarrhoeal Disease Research, Dhaka, Bangladesh
| | - Katherine A Johnson
- Melbourne School of Psychological Sciences, The University of Melbourne, Melbourne, VIC, Australia
| | - Stefan Bode
- Melbourne School of Psychological Sciences, The University of Melbourne, Melbourne, VIC, Australia
| | - Sant-Rayn Pasricha
- Population Health and Immunity Division, Walter and Eliza Hall Institute of Medical Research, Melbourne, VIC, Australia; Diagnostic Hematology, The Royal Melbourne Hospital, Parkville VIC, Australia; Diagnostic Hematology and Clinical Hematology, The Peter MacCallum Cancer Centre and The Royal Melbourne Hospital, Parkville VIC, Australia; Department of Medical Biology, The University of Melbourne, Melbourne, VIC, Australia
| |
Collapse
|
27
|
Debnath R, Wetzel N. Processing of task-irrelevant sounds during typical everyday activities in children. Dev Psychobiol 2022; 64:e22331. [PMID: 36282761 DOI: 10.1002/dev.22331] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2022] [Revised: 07/29/2022] [Accepted: 08/30/2022] [Indexed: 01/27/2023]
Abstract
Our ability to focus on a task and ignore task-irrelevant stimuli is critical for efficient cognitive functioning. Attention control is especially required in the auditory modality, as sound has privileged access to perception and consciousness. Despite this important function, little is known about auditory attention during typical everyday activities in childhood. We investigated the impact of task-irrelevant sounds on attention during three everyday activities: playing a game, reading a book, and watching a movie. During these activities, environmental novel sounds were presented within a sequence of standard sounds to 7-8-year-old children and adults. We measured ERPs reflecting early sound processing and attentional orienting, as well as theta power, evoked by standard and novel sounds during these activities. Playing a game, versus reading or watching, reduced early encoding of sounds in children and affected ongoing information processing and attention allocation in both groups. In adults, theta power was reduced during playing at mid-central brain areas. Results show a pattern of immature neuronal mechanisms underlying the perception of, and attention to, task-irrelevant sounds in 7-8-year-old children. While the type of activity affected the processing of irrelevant sounds in both groups, early stimulus encoding processes were more sensitive to the type of activity in children.
Collapse
Affiliation(s)
- Ranjan Debnath
- Leibniz Institute for Neurobiology, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany
| | - Nicole Wetzel
- Leibniz Institute for Neurobiology, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany; University of Applied Sciences Magdeburg-Stendal, Magdeburg, Germany
| |
Collapse
|
28
|
Widmann A, Schröger E. Intention-based predictive information modulates auditory deviance processing. Front Neurosci 2022; 16:995119. [PMID: 36248631 PMCID: PMC9554204 DOI: 10.3389/fnins.2022.995119] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2022] [Accepted: 09/08/2022] [Indexed: 11/26/2022] Open
Abstract
The human brain is highly responsive to (deviant) sounds violating an auditory regularity. Respective brain responses are usually investigated in situations when the sounds were produced by the experimenter. Acknowledging that humans also actively produce sounds, the present event-related potential study tested for differences in the brain responses to deviants that were produced by the listeners by pressing one of two buttons. In one condition, deviants were unpredictable with respect to the button-sound association. In another condition, deviants were predictable with high validity yielding correctly predicted deviants and incorrectly predicted (mispredicted) deviants. Temporal principal component analysis revealed deviant-specific N1 enhancement, mismatch negativity (MMN) and P3a. N1 enhancements were highly similar for each deviant type, indicating that the underlying neural mechanism is not affected by intention-based expectation about the self-produced forthcoming sound. The MMN was abolished for predictable deviants, suggesting that the intention-based prediction for a deviant can overwrite the prediction derived from the auditory regularity (predicting a standard). The P3a was present for each deviant type but was largest for mispredicted deviants. It is argued that the processes underlying P3a not only evaluate the deviant with respect to the fact that it violates an auditory regularity but also with respect to the intended sensorial effect of an action. Overall, our results specify current theories of auditory predictive processing, as they reveal that intention-based predictions exert different effects on different deviance-specific brain responses.
Collapse
Affiliation(s)
- Andreas Widmann
- Wilhelm Wundt Institute for Psychology, Leipzig University, Leipzig, Germany
- Leibniz Institute for Neurobiology, Magdeburg, Germany
| | - Erich Schröger
- Wilhelm Wundt Institute for Psychology, Leipzig University, Leipzig, Germany
| |
Collapse
|
29
|
Francis AL. Adding noise is a confounded nuisance. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2022; 152:1375. [PMID: 36182286 DOI: 10.1121/10.0013874] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/05/2022] [Accepted: 08/15/2022] [Indexed: 06/16/2023]
Abstract
A wide variety of research and clinical assessments involve presenting speech stimuli in the presence of some kind of noise. Here, I selectively review two theoretical perspectives and discuss ways in which these perspectives may help researchers understand the consequences for listeners of adding noise to a speech signal. I argue that adding noise changes more about the listening task than merely making the signal more difficult to perceive. To fully understand the effects of an added noise on speech perception, we must consider not just how much the noise affects task difficulty, but also how it affects all of the systems involved in understanding speech: increasing message uncertainty, modifying attentional demand, altering affective response, and changing motivation to perform the task.
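The operation underlying all such studies, presenting a speech signal mixed with noise at a chosen signal-to-noise ratio, can be sketched in a few lines. The tone and Gaussian-noise stand-ins for real speech and babble, and the function and parameter names, are ours.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that the mixture has the requested
    speech-to-noise ratio in dB, then return speech + scaled noise."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + gain * noise

rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)  # tone stand-in
noise = rng.standard_normal(16000)                           # noise stand-in
mix = mix_at_snr(speech, noise, snr_db=0.0)                  # 0 dB SNR
```

As the review argues, the same 0 dB mixture can differ greatly in attentional and affective consequences depending on the noise type, which this energy-based scaling alone does not capture.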
Collapse
Affiliation(s)
- Alexander L Francis
- Department of Speech, Language, and Hearing Sciences, Purdue University, 715 Clinic Drive, West Lafayette, Indiana 47907, USA
| |
Collapse
|
30
|
Reduced functional connectivity supports statistical learning of temporally distributed regularities. Neuroimage 2022; 260:119459. [PMID: 35820582 DOI: 10.1016/j.neuroimage.2022.119459] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2022] [Revised: 06/29/2022] [Accepted: 07/07/2022] [Indexed: 10/17/2022] Open
Abstract
Statistical learning is a powerful ability that extracts regularities from our environment and makes predictions about future events. Using functional magnetic resonance imaging, we aimed to probe how a wide range of brain areas are intertwined to support statistical learning, characterising its architecture in the whole-brain functional connectivity (FC). Participants performed a statistical learning task of temporally distributed regularities. We used refined behavioural learning scores to associate individuals' learning performances with the FC changed by statistical learning. As a result, the learning performance was mediated by the activation strength in the lateral occipital cortex, angular gyrus, precuneus, anterior cingulate cortex, and superior frontal gyrus. Through a group independent component analysis, activations of the superior frontal network showed the largest correlation with the statistical learning performances. Seed-to-voxel whole-brain and seed-to-ROI FC analyses revealed that the FC between the superior frontal gyrus and the salience, language, and dorsal attention networks were reduced during statistical learning. We suggest that the weakened functional connections between the superior frontal gyrus and brain regions involved in top-down control processes serve a pivotal role in statistical learning, supporting better processing of novel information such as the extraction of new patterns from the environment.
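At its core, the seed-to-voxel FC analysis mentioned here correlates one seed region's time series with every voxel's time series. A bare-bones sketch with synthetic data follows; real pipelines add preprocessing, nuisance regression, and Fisher transformation, and all names and values below are illustrative.

```python
import numpy as np

def seed_fc(seed_ts, voxel_ts):
    """Pearson correlation between one seed time series (shape (T,))
    and every voxel time series (shape (T, V)): a minimal
    seed-to-voxel functional connectivity map."""
    s = (seed_ts - seed_ts.mean()) / seed_ts.std()
    v = (voxel_ts - voxel_ts.mean(axis=0)) / voxel_ts.std(axis=0)
    return (s[:, None] * v).mean(axis=0)

rng = np.random.default_rng(1)
seed = rng.standard_normal(200)                 # 200 synthetic time points
voxels = np.column_stack([
    seed + 0.1 * rng.standard_normal(200),      # voxel coupled to the seed
    rng.standard_normal(200),                   # uncoupled voxel
])
fc = seed_fc(seed, voxels)
```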
Collapse
|
31
|
Haiduk F, Fitch WT. Understanding Design Features of Music and Language: The Choric/Dialogic Distinction. Front Psychol 2022; 13:786899. [PMID: 35529579 PMCID: PMC9075586 DOI: 10.3389/fpsyg.2022.786899] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2021] [Accepted: 02/22/2022] [Indexed: 12/03/2022] Open
Abstract
Music and spoken language share certain characteristics: both consist of sequences of acoustic elements that are combinatorically combined, and these elements partition the same continuous acoustic dimensions (frequency, formant space and duration). However, the resulting categories differ sharply: scale tones and note durations of small integer ratios appear in music, while speech uses phonemes, lexical tone, and non-isochronous durations. Why did music and language diverge into the two systems we have today, differing in these specific features? We propose a framework based on information theory and a reverse-engineering perspective, suggesting that design features of music and language are a response to their differential deployment along three different continuous dimensions. These include the familiar propositional-aesthetic ('goal') and repetitive-novel ('novelty') dimensions, and a dialogic-choric ('interactivity') dimension that is our focus here. Specifically, we hypothesize that music exhibits specializations enhancing coherent production by several individuals concurrently-the 'choric' context. In contrast, language is specialized for exchange in tightly coordinated turn-taking-'dialogic' contexts. We examine the evidence for our framework, both from humans and non-human animals, and conclude that many proposed design features of music and language follow naturally from their use in distinct dialogic and choric communicative contexts. Furthermore, the hybrid nature of intermediate systems like poetry, chant, or solo lament follows from their deployment in the less typical interactive context.
Collapse
Affiliation(s)
- Felix Haiduk
- Department of Behavioral and Cognitive Biology, University of Vienna, Vienna, Austria
| | - W. Tecumseh Fitch
- Department of Behavioral and Cognitive Biology, University of Vienna, Vienna, Austria
- Vienna Cognitive Science Hub, University of Vienna, Vienna, Austria
| |
Collapse
|
32
|
Mehra M, Mukesh A, Bandyopadhyay S. Separate Functional Subnetworks of Excitatory Neurons Show Preference to Periodic and Random Sound Structures. J Neurosci 2022; 42:3165-3183. [PMID: 35241488 PMCID: PMC8994540 DOI: 10.1523/jneurosci.0333-21.2022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2021] [Revised: 11/18/2021] [Accepted: 01/03/2022] [Indexed: 11/21/2022] Open
Abstract
Auditory cortex (ACX) neurons are sensitive to spectro-temporal sound patterns and to violations of those patterns induced by rare stimuli embedded within streams of sounds. We investigate the auditory cortical representation of repeated presentations of sound sequences consisting of a common (standard) stimulus with an embedded rare (deviant) stimulus, in two conditions: Periodic (fixed deviant position) or Random (random deviant position). We used extracellular single-unit and two-photon Ca2+ imaging recordings in layer 2/3 neurons of the mouse (Mus musculus) ACX of either sex. Population single-unit average responses increased over repetitions in the Random condition and were suppressed or unchanged in the Periodic condition, showing a general irregularity preference. A subset of neurons showed the opposite behavior, indicating a regularity preference. Furthermore, pairwise noise correlations were higher in the Random condition than in the Periodic condition, suggesting a role of recurrent connections in the observed differential adaptation. Functional two-photon Ca2+ imaging showed that excitatory (EX) and inhibitory (IN) neurons [parvalbumin-positive (PV) and somatostatin-positive (SOM)] fell into the same categories of long-term adaptation observed with single units. However, examination of functional connectivity between pairs of neurons of different categories showed that EX-PV pairs behaved opposite to EX-EX and EX-SOM pairs, with more connections outside category in the Random condition than in the Periodic condition. Finally, considering regularity preference, irregularity preference, or no preference of connected pairs showed that EX-EX and EX-SOM pairs formed largely separate functional subnetworks with different preferences, whereas EX-PV pairs did not.
Thus, separate subnetworks underlie the coding of periodic and random sound sequences. SIGNIFICANCE STATEMENT: Studying how auditory cortex (ACX) neurons respond to streams of sound sequences helps us understand the importance of changes in the dynamic, noisy acoustic scenes around us. Humans and animals are sensitive to regularity and its violations in sound sequences. Psychophysical tasks in humans show that the auditory brain responds differentially to periodic and random structures, independent of the listener's attentional state. Here, we show that mouse ACX L2/3 neurons detect changes and respond differently to patterns over long time scales. The differential functional connectivity profiles obtained in response to the two sound contexts suggest a vital role of recurrent connections in the auditory cortical network. Furthermore, excitatory-inhibitory neuronal interactions can contribute to detecting changing sound patterns.
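The pairwise noise correlations compared across the two conditions are computed from trial-to-trial residuals after removing each stimulus's mean response. A minimal version with synthetic "spike counts" follows; the data, the shared-variability construction, and all parameters are illustrative, not the paper's pipeline.

```python
import numpy as np

def noise_correlation(resp_a, resp_b, stim_ids):
    """Noise correlation between two neurons: correlate trial-by-trial
    residuals after subtracting each stimulus's mean response."""
    ra = np.asarray(resp_a, dtype=float).copy()
    rb = np.asarray(resp_b, dtype=float).copy()
    for s in np.unique(stim_ids):
        m = stim_ids == s
        ra[m] -= ra[m].mean()
        rb[m] -= rb[m].mean()
    return np.corrcoef(ra, rb)[0, 1]

rng = np.random.default_rng(0)
shared = rng.standard_normal(100)        # shared trial-to-trial variability
stim_ids = np.repeat([0, 1], 50)         # two stimuli, 50 trials each
a = shared + 0.3 * rng.standard_normal(100)
b = shared + 0.3 * rng.standard_normal(100)
r_noise = noise_correlation(a, b, stim_ids)
```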
Collapse
Affiliation(s)
- Muneshwar Mehra
- Information Processing Laboratory, Department of Electronics and Electrical Communication Engineering, Indian Institute of Technology Kharagpur, 721302, India
- Advanced Technology Development Centre, Indian Institute of Technology Kharagpur, 721302, India
| | - Adarsh Mukesh
- Information Processing Laboratory, Department of Electronics and Electrical Communication Engineering, Indian Institute of Technology Kharagpur, 721302, India
- Advanced Technology Development Centre, Indian Institute of Technology Kharagpur, 721302, India
| | - Sharba Bandyopadhyay
- Information Processing Laboratory, Department of Electronics and Electrical Communication Engineering, Indian Institute of Technology Kharagpur, 721302, India
- Advanced Technology Development Centre, Indian Institute of Technology Kharagpur, 721302, India
| |
Collapse
|
33
|
Kadosh O, Bonneh YS. Involuntary oculomotor inhibition markers of saliency and deviance in response to auditory sequences. J Vis 2022; 22:8. [PMID: 35475911 PMCID: PMC9055552 DOI: 10.1167/jov.22.5.8] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022] Open
Abstract
Our eyes move constantly but are often inhibited momentarily in response to external stimuli (oculomotor inhibition [OMI]), depending on the stimulus saliency, anticipation, and attention. Previous studies have shown prolonged OMI for auditory oddballs; however, they required counting the oddballs, possibly reflecting voluntary attention. Here, we investigated whether the “passive” OMI response to auditory deviants can provide a quantitative measure of deviance strength (pitch difference) and studied its dependence on the inter-trial interval (ITI). Participants fixated centrally and passively listened to repeated short sequences of pure tones that contained a deviant tone either regularly or with 20% probability (oddballs). In an “active” control experiment, participants counted the deviant or the standard. As in previous studies, the results showed prolonged microsaccade inhibition and increased pupil dilation following the rare deviant tone. Earlier inhibition onset was found in proportion to the pitch deviance (the saliency effect), and a later release was found for oddballs, but only for ITI <2.5 seconds. The active control experiment showed similar results when counting the deviant but longer OMI for the standard when counting it. Taken together, these results suggest that OMI provides involuntary markers of saliency and deviance, which can be obtained without the participant's response.
Collapse
Affiliation(s)
- Oren Kadosh
- School of Optometry and Vision Science, Faculty of Life Sciences, Bar-Ilan University, Ramat-Gan, Israel
| | - Yoram S Bonneh
- School of Optometry and Vision Science, Faculty of Life Sciences, Bar-Ilan University, Ramat-Gan, Israel; https://yorambonneh.wixsite.com/bonneh-lab
| |
Collapse
|
34
|
Wang L, Wang Y, Liu Z, Wu EX, Chen F. A Speech-Level–Based Segmented Model to Decode the Dynamic Auditory Attention States in the Competing Speaker Scenes. Front Neurosci 2022; 15:760611. [PMID: 35221885 PMCID: PMC8866945 DOI: 10.3389/fnins.2021.760611] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2021] [Accepted: 12/30/2021] [Indexed: 11/21/2022] Open
Abstract
In competing-speaker environments, human listeners need to focus or switch their auditory attention according to dynamic intentions. Reliable cortical tracking of the speech envelope is an effective feature for decoding the target speech from neural signals. Moreover, previous studies revealed that root mean square (RMS)-level-based speech segmentation contributes greatly to target speech perception under sustained auditory attention. This study further investigated the effect of RMS-level-based speech segmentation on auditory attention decoding (AAD) performance with both sustained and switched attention in competing-speaker auditory scenes. Objective biomarkers derived from cortical activity were also developed to index dynamic auditory attention states. Subjects were asked to concentrate on, or switch their attention between, two competing speaker streams. The neural responses to the higher- and lower-RMS-level speech segments were analyzed via the linear temporal response function (TRF) before and after attention switched from one speaker stream to the other. Furthermore, AAD performance with a unified TRF decoding model was compared to that with a speech-RMS-level-based segmented decoding model as the auditory attention states changed. The results showed that the weight of the typical TRF component at an approximately 100-ms time lag was sensitive to the switching of auditory attention. Compared to the unified AAD model, the segmented AAD model improved attention decoding performance under both sustained and switched auditory attention across a wide range of signal-to-masker ratios (SMRs). In competing-speaker scenes, the TRF weight and AAD accuracy could be used as effective indicators of changes in auditory attention. In addition, across a wide range of SMRs (from 6 to -6 dB in this study), the segmented AAD model showed robust decoding performance even with short decision window lengths, suggesting that this speech-RMS-level-based model has the potential to decode dynamic attention states in realistic auditory scenarios.
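The first step of such a pipeline, labeling speech frames as higher- or lower-RMS-level, is straightforward to sketch. This toy version splits a signal into fixed frames and thresholds at the median frame RMS; the frame length, the median split, and the synthetic quiet/loud signal are our illustrative choices, not the paper's method.

```python
import numpy as np

def rms_level_segments(signal, fs, frame_s=0.05):
    """Split a signal into fixed-length frames and label each frame as
    higher-RMS-level (True) or lower-RMS-level (False) relative to the
    median frame RMS."""
    hop = int(frame_s * fs)
    n = len(signal) // hop
    frames = signal[:n * hop].reshape(n, hop)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    return rms >= np.median(rms)

fs = 1000
quiet = 0.1 * np.ones(fs)                 # 1 s quiet stretch
loud = np.ones(fs)                        # 1 s loud stretch
labels = rms_level_segments(np.concatenate([quiet, loud]), fs)
```

The higher- and lower-level frames would then be decoded by separate TRF models rather than one unified model.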
Collapse
Affiliation(s)
- Lei Wang
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
| | - Yihan Wang
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China
| | - Zhixing Liu
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China
| | - Ed X. Wu
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
| | - Fei Chen
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China
| |
Collapse
|
35
|
Auerbach BD, Gritton HJ. Hearing in Complex Environments: Auditory Gain Control, Attention, and Hearing Loss. Front Neurosci 2022; 16:799787. [PMID: 35221899 PMCID: PMC8866963 DOI: 10.3389/fnins.2022.799787] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2021] [Accepted: 01/18/2022] [Indexed: 12/12/2022] Open
Abstract
Listening in noisy or complex sound environments is difficult for individuals with normal hearing and can be a debilitating impairment for those with hearing loss. Extracting meaningful information from a complex acoustic environment requires the ability to accurately encode specific sound features under highly variable listening conditions and segregate distinct sound streams from multiple overlapping sources. The auditory system employs a variety of mechanisms to achieve this auditory scene analysis. First, neurons across levels of the auditory system exhibit compensatory adaptations to their gain and dynamic range in response to prevailing sound stimulus statistics in the environment. These adaptations allow for robust representations of sound features that are to a large degree invariant to the level of background noise. Second, listeners can selectively attend to a desired sound target in an environment with multiple sound sources. This selective auditory attention is another form of sensory gain control, enhancing the representation of an attended sound source while suppressing responses to unattended sounds. This review will examine both “bottom-up” gain alterations in response to changes in environmental sound statistics as well as “top-down” mechanisms that allow for selective extraction of specific sound features in a complex auditory scene. Finally, we will discuss how hearing loss interacts with these gain control mechanisms, and the adaptive and/or maladaptive perceptual consequences of this plasticity.
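The "bottom-up" gain adaptation this review describes can be made concrete with a toy rate-level function whose midpoint tracks the prevailing background level, so that a target a fixed number of dB above the background evokes the same firing rate in quiet and in noise. The functional form and all parameter values below are invented for illustration, not taken from the review.

```python
import math

def adapted_response(level_db, background_db, r_max=100.0, slope=0.1):
    # Sigmoidal rate-level function whose semi-saturation point shifts
    # with the background level: a toy model of the dynamic-range
    # adaptation that keeps sound representations noise-invariant.
    midpoint = background_db + 10.0  # assumed: midpoint 10 dB above background
    return r_max / (1.0 + math.exp(-slope * (level_db - midpoint)))

# A target 10 dB above its background drives the same rate in quiet and noise:
r_quiet = adapted_response(40.0, background_db=30.0)
r_noisy = adapted_response(70.0, background_db=60.0)
```

The point of the sketch is the invariance: absolute sound level differs by 30 dB between the two calls, yet the modeled response is identical because the gain has adapted to the background statistics.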
Collapse
Affiliation(s)
- Benjamin D. Auerbach
- Department of Molecular and Integrative Physiology, Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Neuroscience Program, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- *Correspondence: Benjamin D. Auerbach,
| | - Howard J. Gritton
- Neuroscience Program, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Department of Comparative Biosciences, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, IL, United States
| |
Collapse
|
36
|
The effect of background speech on attentive sound processing: A pupil dilation study. Int J Psychophysiol 2022; 174:47-56. [PMID: 35150772 DOI: 10.1016/j.ijpsycho.2022.02.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2021] [Revised: 10/08/2021] [Accepted: 02/08/2022] [Indexed: 11/23/2022]
Abstract
Listening to task-irrelevant speech while performing a cognitive task can involuntarily divert our attention and lead to decreases in performance. One explanation for the impairing effect of irrelevant speech is that semantic processing can consume attentional resources. In the present study, we tested this assumption by measuring performance in a non-linguistic attentional task while participants were exposed to meaningful (native) and non-meaningful (foreign) speech. Moreover, based on the tight relation between pupillometry and attentional processes, we also registered changes in pupil diameter to quantify the effect of meaningfulness upon attentional allocation. To this end, we recruited 41 native German speakers who had neither received formal instruction in French nor had extensive informal contact with this language. The focal task was an auditory oddball task: participants performed a duration discrimination task containing frequently repeated standard sounds and rarely presented deviant sounds while a story was read in German or (non-meaningful) French in the background. Our results revealed that, whereas effects of language meaningfulness on attention were not detectable at the behavioural level, participants' pupils dilated more in response to the sounds of the auditory task when the background speech was in non-meaningful French rather than German, independent of sound type. In line with the initial hypothesis, this suggests that semantic processing of the native language required attentional resources, leaving fewer resources for processing the sounds of the focal task. Our results highlight the potential of the pupil dilation response for investigating subtle cognitive processes that might not surface when only behaviour is measured.
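The pupillometric measure used in studies of this kind is typically an event-locked average corrected by a pre-stimulus baseline. The sketch below shows that computation on a synthetic trace; the sample counts and data are invented, not the study's recordings.

```python
import statistics

def event_locked_dilation(trace, onsets, baseline=3, window=10):
    # For each sound onset, subtract the mean pupil size just before the
    # onset (baseline) from the mean size in a post-onset window, then
    # average the baseline-corrected responses across events.
    responses = []
    for t in onsets:
        base = statistics.fmean(trace[t - baseline:t])
        responses.append(statistics.fmean(trace[t:t + window]) - base)
    return statistics.fmean(responses)

# Synthetic trace: pupil size steps up by 1.0 after a sound at sample 5.
trace = [0.0] * 5 + [1.0] * 10
dilation = event_locked_dilation(trace, onsets=[5])  # 1.0
```

Comparing such averages between conditions (here, native vs. foreign background speech) is what lets pupil dilation index attentional allocation even when behavioral measures show no difference.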
Collapse
|
37
|
Otsuka S, Nakagawa S, Furukawa S. Expectations of the timing and intensity of a stimulus propagate to the auditory periphery through the medial olivocochlear reflex. Cereb Cortex 2022; 32:5121-5131. [PMID: 35094068 PMCID: PMC9667176 DOI: 10.1093/cercor/bhac002] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2021] [Revised: 12/28/2021] [Accepted: 12/29/2021] [Indexed: 12/27/2022] Open
Abstract
Expectations concerning the timing of a stimulus enhance attention at the time at which the event occurs, which confers significant sensory and behavioral benefits. Herein, we show that temporal expectations modulate even the sensory transduction in the auditory periphery via the descending pathway. We measured the medial olivocochlear reflex (MOCR), a sound-activated efferent feedback that controls outer hair cell motility and optimizes the dynamic range of the sensory system. MOCR was noninvasively assessed using otoacoustic emissions. We found that the MOCR was enhanced by a visual cue presented at a fixed interval before a sound but was unaffected if the interval was changing between trials. The MOCR was also observed to be stronger when the learned timing expectation matched with the timing of the sound but remained unvaried when these two factors did not match. This implies that the MOCR can be voluntarily controlled in a stimulus- and goal-directed manner. Moreover, we found that the MOCR was enhanced by the expectation of a strong, but not a weak, sound intensity. This asymmetrical enhancement could facilitate antimasking and noise protective effects without disrupting the detection of faint signals. Therefore, the descending pathway conveys temporal and intensity expectations to modulate auditory processing.
Collapse
Affiliation(s)
- Sho Otsuka
- Address correspondence to Sho Otsuka, Center for Frontier Medical Engineering, Chiba University, 1-33 Yayoicho, Inageku, Chiba 263-8522, Japan.
| | - Seiji Nakagawa
- Center for Frontier Medical Engineering, Chiba University, Chiba, Japan
| | - Shigeto Furukawa
- NTT Communication Science Laboratories, NTT Corporation, Kanagawa, Japan
| |
Collapse
|
38
|
Soltanparast S, Toufan R, Talebian S, Pourbakht A. Regularity of background auditory scene and selective attention: a brain oscillatory study. Neurosci Lett 2022; 772:136465. [DOI: 10.1016/j.neulet.2022.136465] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2021] [Revised: 12/29/2021] [Accepted: 01/14/2022] [Indexed: 11/27/2022]
|
39
|
Haigh SM, Brosseau P, Eack SM, Leitman DI, Salisbury DF, Behrmann M. Hyper-Sensitivity to Pitch and Poorer Prosody Processing in Adults With Autism: An ERP Study. Front Psychiatry 2022; 13:844830. [PMID: 35693971 PMCID: PMC9174755 DOI: 10.3389/fpsyt.2022.844830] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/28/2021] [Accepted: 04/20/2022] [Indexed: 01/30/2023] Open
Abstract
Individuals with autism typically experience a range of symptoms, including abnormal sensory sensitivities. However, there are conflicting reports on the sensory profiles that characterize the sensory experience in autism that often depend on the type of stimulus. Here, we examine early auditory processing to simple changes in pitch and later auditory processing of more complex emotional utterances. We measured electroencephalography in 24 adults with autism and 28 controls. First, tones (1046.5 Hz/C6, 1108.7 Hz/C#6, or 1244.5 Hz/D#6) were repeated three times or nine times before the pitch changed. Second, utterances of delight or frustration were repeated three or six times before the emotion changed. In response to the simple pitched tones, the autism group exhibited larger mismatch negativity (MMN) after nine standards compared to controls and produced greater trial-to-trial variability (TTV). In response to the prosodic utterances, the autism group showed smaller P3 responses when delight changed to frustration compared to controls. There was no significant correlation between ERPs to pitch and ERPs to prosody. Together, this suggests that early auditory processing is hyper-sensitive in autism whereas later processing of prosodic information is hypo-sensitive. The impact the different sensory profiles have on perceptual experience in autism may be key to identifying behavioral treatments to reduce symptoms.
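The two EEG measures reported here, mismatch negativity (MMN) and trial-to-trial variability (TTV), are at heart simple statistics over trial-aligned ERP epochs: a deviant-minus-standard difference wave and a per-sample standard deviation across trials. A schematic version with toy two-sample "epochs" (not the study's data):

```python
import statistics

def mmn_wave(standard_trials, deviant_trials):
    # Difference wave: trial-averaged deviant ERP minus trial-averaged
    # standard ERP, computed sample by sample.
    std_erp = [statistics.fmean(s) for s in zip(*standard_trials)]
    dev_erp = [statistics.fmean(d) for d in zip(*deviant_trials)]
    return [d - s for d, s in zip(dev_erp, std_erp)]

def trial_to_trial_variability(trials):
    # TTV: standard deviation across trials at each sample, averaged
    # over time samples.
    return statistics.fmean(statistics.stdev(s) for s in zip(*trials))

standards = [[0.0, 0.1], [0.0, 0.1]]     # two "trials", two samples each
deviants = [[-1.0, -2.0], [-1.0, -2.0]]
```

A more negative difference wave corresponds to a larger MMN; in the study's terms, "larger MMN" and "greater TTV" in the autism group are statements about exactly these two quantities.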
Collapse
Affiliation(s)
- Sarah M Haigh
- Department of Psychology and Institute for Neuroscience, University of Nevada, Reno, NV, United States.,Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, United States
| | - Pat Brosseau
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, United States
| | - Shaun M Eack
- School of Social Work, University of Pittsburgh, Pittsburgh, PA, United States
| | - David I Leitman
- Division of Translational Research, National Institute of Mental Health, Bethesda, MD, United States
| | - Dean F Salisbury
- Department of Psychiatry, University of Pittsburgh School of Medicine, Pittsburgh, PA, United States
| | - Marlene Behrmann
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, United States.,Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, United States
| |
Collapse
|
40
|
Heurteloup C, Merchie A, Roux S, Bonnet-Brilhault F, Escera C, Gomot M. Neural repetition suppression to vocal and non-vocal sounds. Cortex 2021; 148:1-13. [DOI: 10.1016/j.cortex.2021.11.020] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2020] [Revised: 03/02/2021] [Accepted: 11/19/2021] [Indexed: 11/29/2022]
|
41
|
Neubert CR, Förstel AP, Debener S, Bendixen A. Predictability-Based Source Segregation and Sensory Deviance Detection in Auditory Aging. Front Hum Neurosci 2021; 15:734231. [PMID: 34776906 PMCID: PMC8586071 DOI: 10.3389/fnhum.2021.734231] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2021] [Accepted: 10/08/2021] [Indexed: 11/30/2022] Open
Abstract
When multiple sound sources are present at the same time, auditory perception is often challenged with disentangling the resulting mixture and focusing attention on the target source. It has been repeatedly demonstrated that background (distractor) sound sources are easier to ignore when their spectrotemporal signature is predictable. Prior evidence suggests that this ability to exploit predictability for foreground-background segregation degrades with age. On a theoretical level, this has been related to an impairment in elderly adults' capability to detect certain types of sensory deviance in unattended sound sequences. Yet the link between these two capacities, deviance detection and predictability-based sound source segregation, has not been empirically demonstrated. Here we report on a combined behavioral-EEG study investigating the ability of elderly listeners (60–75 years of age) to use predictability as a cue for sound source segregation, as well as their sensory deviance detection capacities. Listeners performed a detection task on a target stream that could only be solved when a concurrent distractor stream was successfully ignored. We contrasted two conditions whose distractor streams differed in their predictability. The ability to benefit from predictability was operationalized as the performance difference between the two conditions. Results showed that elderly listeners can use predictability for sound source segregation at the group level, yet with a high degree of inter-individual variation in this ability. In a further, passive-listening control condition, we measured correlates of deviance detection in the event-related brain potential (ERP) elicited by occasional deviations from the same spectrotemporal pattern as used for the predictable distractor sequence during the behavioral task. ERP results confirmed neural signatures of deviance detection in terms of mismatch negativity (MMN) at the group level.
Correlation analyses at single-subject level provide no evidence for the hypothesis that deviance detection ability (measured by MMN amplitude) is related to the ability to benefit from predictability for sound source segregation. These results are discussed in the frameworks of sensory deviance detection and predictive coding.
Collapse
Affiliation(s)
- Christiane R Neubert
- Cognitive Systems Lab, Faculty of Natural Sciences, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany
| | - Alexander P Förstel
- Neuropsychology Lab, Department of Psychology, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany
| | - Stefan Debener
- Neuropsychology Lab, Department of Psychology, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany
| | - Alexandra Bendixen
- Cognitive Systems Lab, Faculty of Natural Sciences, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany
| |
Collapse
|
42
|
Wetzel N, Kunke D, Widmann A. Tablet PC use directly affects children's perception and attention. Sci Rep 2021; 11:21215. [PMID: 34707134 PMCID: PMC8551317 DOI: 10.1038/s41598-021-00551-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2021] [Accepted: 10/07/2021] [Indexed: 11/22/2022] Open
Abstract
Children currently grow up with a marked increase in interactive digital mobile media. To what extent digital media directly modulate children's perception and attention is largely unknown. We investigated the processing of task-irrelevant auditory information while 37 children aged 6;8–9;1 years played the identical card game on a tablet PC or with the experimenter in reality. The sound sequence included repeated standard sounds and occasional novel sounds. Event-related potentials (ERPs) in the EEG, which reflect sound-related processes of perception and attention, were measured. Sounds evoked increased amplitudes of the ERP components P1, P2 and P3a during the interaction with the tablet PC compared to the human interaction. This indicates enhanced early processing of task-irrelevant information and increased allocation of attention to sounds throughout the interaction with a tablet PC compared to a human partner. Results suggest direct effects of typical situations in which children interact with a tablet PC on neuronal mechanisms that drive perception and attention in the developing brain. More research into this phenomenon is required to make specific suggestions for developing digital interactive learning programs.
Collapse
Affiliation(s)
- Nicole Wetzel
- Leibniz Institute for Neurobiology, Brenneckestr. 6, 39119, Magdeburg, Germany. .,Center for Behavioral Brain Sciences, Magdeburg, Germany. .,University of Applied Sciences Magdeburg-Stendal, Magdeburg, Germany.
| | - Dunja Kunke
- Leibniz Institute for Neurobiology, Brenneckestr. 6, 39119, Magdeburg, Germany
| | - Andreas Widmann
- Leibniz Institute for Neurobiology, Brenneckestr. 6, 39119, Magdeburg, Germany.,Institute of Psychology, Leipzig University, Leipzig, Germany
| |
Collapse
|
43
|
Henry MJ, Cook PF, de Reus K, Nityananda V, Rouse AA, Kotz SA. An ecological approach to measuring synchronization abilities across the animal kingdom. Philos Trans R Soc Lond B Biol Sci 2021; 376:20200336. [PMID: 34420382 PMCID: PMC8380968 DOI: 10.1098/rstb.2020.0336] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/03/2023] Open
Abstract
In this perspective paper, we focus on the study of synchronization abilities across the animal kingdom. We propose an ecological approach to studying nonhuman animal synchronization that begins from observations about when, how and why an animal might synchronize spontaneously with natural environmental rhythms. We discuss what we consider to be the most important, but thus far largely understudied, temporal, physical, perceptual and motivational constraints that must be taken into account when designing experiments to test synchronization in nonhuman animals. First and foremost, different species are likely to be sensitive to and therefore capable of synchronizing at different timescales. We also argue that it is fruitful to consider the latent flexibility of animal synchronization. Finally, we discuss the importance of an animal's motivational state for showcasing synchronization abilities. We demonstrate that the likelihood that an animal can successfully synchronize with an environmental rhythm is context-dependent and suggest that the list of species capable of synchronization is likely to grow when tested with ecologically honest, species-tuned experiments. This article is part of the theme issue ‘Synchrony and rhythm interaction: from the brain to behavioural ecology’.
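Spontaneous synchronization with an environmental rhythm, as discussed in this perspective, is commonly quantified with circular statistics: map each behavioral event to a phase within the rhythm's cycle and compute the resultant vector length. A minimal sketch (event times and timescales are arbitrary examples, not data from any species):

```python
import math

def phase_locking(event_times, period):
    # Resultant vector length of event phases relative to a rhythm with
    # the given period: 1.0 means perfect synchronization, values near 0
    # mean the events are unrelated to the rhythm.
    phases = [2 * math.pi * (t % period) / period for t in event_times]
    c = sum(math.cos(p) for p in phases) / len(phases)
    s = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(c, s)

synced = phase_locking([0.0, 1.0, 2.0, 3.0], period=1.0)     # -> 1.0
uniform = phase_locking([0.0, 0.25, 0.5, 0.75], period=1.0)  # -> ~0.0
```

Because the measure depends on the assumed period, testing an animal only at timescales convenient for human experimenters, rather than the timescales the species is sensitive to, is exactly the pitfall the authors' ecological approach warns against.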
Collapse
Affiliation(s)
- Molly J Henry
- Research Group 'Neural and Environmental Rhythms', Max Planck Institute for Empirical Aesthetics, Grüneburgweg 14, 60322 Frankfurt am Main, Germany
| | - Peter F Cook
- Department of Psychology, New College of Florida, 5800 Bayshore Rd, Sarasota, FL 34234, USA
| | - Koen de Reus
- Comparative Bioacoustics Group, Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, The Netherlands.,Artificial Intelligence Lab, Vrije Universiteit Brussel, Boulevard de la Plaine 9, 1050 Ixelles, Belgium
| | - Vivek Nityananda
- Biosciences Institute, Newcastle University, Newcastle Upon Tyne, NE2 4HH, UK
| | - Andrew A Rouse
- Department of Psychology, Tufts University, 419 Boston Ave, Medford, MA 02155, USA
| | - Sonja A Kotz
- Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Universiteitssingel 40, 6200 MD Maastricht, The Netherlands
| |
Collapse
|
44
|
Babaeeghazvini P, Rueda-Delgado LM, Gooijers J, Swinnen SP, Daffertshofer A. Brain Structural and Functional Connectivity: A Review of Combined Works of Diffusion Magnetic Resonance Imaging and Electro-Encephalography. Front Hum Neurosci 2021; 15:721206. [PMID: 34690718 PMCID: PMC8529047 DOI: 10.3389/fnhum.2021.721206] [Citation(s) in RCA: 29] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2021] [Accepted: 09/10/2021] [Indexed: 11/13/2022] Open
Abstract
Implications of structural connections within and between brain regions for their functional counterpart are timely points of discussion. White matter microstructural organization and functional activity can be assessed in unison. At first glance, however, the corresponding findings appear variable, both in the healthy brain and in numerous neuro-pathologies. To identify consistent associations between structural and functional connectivity and possible impacts for the clinic, we reviewed the literature of combined recordings of electro-encephalography (EEG) and diffusion-based magnetic resonance imaging (MRI). It appears that the strength of event-related EEG activity increases with increased integrity of structural connectivity, while latency drops. This agrees with a simple mechanistic perspective: the nature of microstructural white matter influences the transfer of activity. The EEG, however, is often assessed for its spectral content. Spectral power shows associations with structural connectivity that can be negative or positive often dependent on the frequencies under study. Functional connectivity shows even more variations, which are difficult to rank. This might be caused by the diversity of paradigms being investigated, from sleep and resting state to cognitive and motor tasks, from healthy participants to patients. More challenging, though, is the potential dependency of findings on the kind of analysis applied. While this does not diminish the principal capacity of EEG and diffusion-based MRI co-registration, it highlights the urgency to standardize especially EEG analysis.
Collapse
Affiliation(s)
- Parinaz Babaeeghazvini
- Department of Human Movements Sciences, Faculty of Behavioural and Movement Sciences, Amsterdam Movement Science Institute (AMS), Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Institute for Brain and Behaviour Amsterdam (iBBA), Faculty of Behavioural and Movement Sciences, Vrije Universiteit, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
| | - Laura M. Rueda-Delgado
- Movement Control & Neuroplasticity Research Group, Department of Movement Sciences, KU Leuven, Leuven, Belgium
- Trinity Centre for Biomedical Engineering, Trinity College Dublin, The University of Dublin, Dublin, Ireland
| | - Jolien Gooijers
- Movement Control & Neuroplasticity Research Group, Department of Movement Sciences, KU Leuven, Leuven, Belgium
- KU Leuven Brain Institute (LBI), Leuven, Belgium
| | - Stephan P. Swinnen
- Movement Control & Neuroplasticity Research Group, Department of Movement Sciences, KU Leuven, Leuven, Belgium
- KU Leuven Brain Institute (LBI), Leuven, Belgium
| | - Andreas Daffertshofer
- Department of Human Movements Sciences, Faculty of Behavioural and Movement Sciences, Amsterdam Movement Science Institute (AMS), Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Institute for Brain and Behaviour Amsterdam (iBBA), Faculty of Behavioural and Movement Sciences, Vrije Universiteit, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
| |
Collapse
|
45
|
Darriba Á, Hsu YF, Van Ommen S, Waszak F. Intention-based and sensory-based predictions. Sci Rep 2021; 11:19899. [PMID: 34615990 PMCID: PMC8494815 DOI: 10.1038/s41598-021-99445-z] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2021] [Accepted: 09/23/2021] [Indexed: 02/08/2023] Open
Abstract
We inhabit a continuously changing world, where the ability to anticipate future states of the environment is critical for adaptation. Anticipation can be achieved by learning about the causal or temporal relationship between sensory events, as well as by learning to act on the environment to produce an intended effect. Together, sensory-based and intention-based predictions provide the flexibility needed to successfully adapt. Yet it is currently unknown whether the two sources of information are processed independently to form separate predictions, or are combined into a common prediction. To investigate this, we ran an experiment in which the final tone of two possible four-tone sequences could be predicted from the preceding tones in the sequence and/or from the participants' intention to trigger that final tone. This tone could be congruent with both sensory-based and intention-based predictions, incongruent with both, or congruent with one while incongruent with the other. Trials where predictions were incongruent with each other yielded similar prediction error responses irrespective of which prediction was violated, indicating that both predictions were formulated and coexisted simultaneously. The violation of intention-based predictions yielded late additional error responses, suggesting that those violations underwent further differential processing which the violations of sensory-based predictions did not receive.
Collapse
Affiliation(s)
- Álvaro Darriba
- Université de Paris, INCC UMR 8002, CNRS, F-75006, Paris, France.
| | - Yi-Fang Hsu
- Department of Educational Psychology and Counselling, National Taiwan Normal University, 10610, Taipei, Taiwan
- Institute for Research Excellence in Learning Sciences, National Taiwan Normal University, 10610, Taipei, Taiwan
| | - Sandrien Van Ommen
- Department of Basic Neurosciences, University of Geneva, Biotech Campus, Geneva, Switzerland
| | - Florian Waszak
- Université de Paris, INCC UMR 8002, CNRS, F-75006, Paris, France
| |
Collapse
|
46
|
Distinct timescales for the neuronal encoding of vocal signals in a high-order auditory area. Sci Rep 2021; 11:19672. [PMID: 34608248 PMCID: PMC8490347 DOI: 10.1038/s41598-021-99135-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2021] [Accepted: 09/21/2021] [Indexed: 02/08/2023] Open
Abstract
The ability of the auditory system to selectively recognize natural sound categories while maintaining a certain degree of tolerance towards variations within these categories, which may have functional roles, is thought to be crucial for vocal communication. To date, it is still largely unknown how the balance between tolerance and sensitivity to variations in acoustic signals is coded at a neuronal level. Here, we investigate whether neurons in a high-order auditory area in zebra finches, a songbird species, are sensitive to natural variations in vocal signals by recording their responses to repeated exposures to identical and variant sound sequences. We used the songs of male birds which tend to be highly repetitive with only subtle variations between renditions. When playing these songs to both anesthetized and awake birds, we found that variations between songs did not affect the neuron firing rate but the temporal reliability of responses. This suggests that auditory processing operates on a range of distinct timescales, namely a short one to detect variations in vocal signals, and longer ones that allow the birds to tolerate variations in vocal signal structure and to encode the global context.
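The dissociation reported here, an unchanged firing rate alongside a change in the temporal reliability of responses, can be made concrete with binned spike counts: two renditions can match in overall rate while their spike timing does not. A toy example with invented data (not the zebra finch recordings):

```python
def firing_rate(binned):
    # Mean spikes per time bin.
    return sum(binned) / len(binned)

def temporal_reliability(a, b):
    # Pearson correlation of two binned responses: high when spikes fall
    # in the same bins across renditions, low (or negative) when only the
    # overall rate matches while the timing differs.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

original = [1, 0, 1, 0, 1, 0]
variant = [0, 1, 0, 1, 0, 1]  # same total spike count, shifted timing
```

Here `firing_rate` is identical for both responses while `temporal_reliability` is not, mirroring the finding that song variations affected response timing without affecting rate.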
Collapse
|
47
|
Lim SJ, Carter YD, Njoroge JM, Shinn-Cunningham BG, Perrachione TK. Talker discontinuity disrupts attention to speech: Evidence from EEG and pupillometry. BRAIN AND LANGUAGE 2021; 221:104996. [PMID: 34358924 PMCID: PMC8515637 DOI: 10.1016/j.bandl.2021.104996] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/29/2021] [Revised: 07/11/2021] [Accepted: 07/13/2021] [Indexed: 05/13/2023]
Abstract
Speech is processed less efficiently from discontinuous, mixed talkers than one consistent talker, but little is known about the neural mechanisms for processing talker variability. Here, we measured psychophysiological responses to talker variability using electroencephalography (EEG) and pupillometry while listeners performed a delayed recall of digit span task. Listeners heard and recalled seven-digit sequences with both talker (single- vs. mixed-talker digits) and temporal (0- vs. 500-ms inter-digit intervals) discontinuities. Talker discontinuity reduced serial recall accuracy. Both talker and temporal discontinuities elicited P3a-like neural evoked response, while rapid processing of mixed-talkers' speech led to increased phasic pupil dilation. Furthermore, mixed-talkers' speech produced less alpha oscillatory power during working memory maintenance, but not during speech encoding. Overall, these results are consistent with an auditory attention and streaming framework in which talker discontinuity leads to involuntary, stimulus-driven attentional reorientation to novel speech sources, resulting in the processing interference classically associated with talker variability.
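The alpha oscillatory power compared across conditions in this study is a band-limited spectral estimate. The naive pure-Python DFT below illustrates the idea of summing spectral power in the 8–12 Hz alpha band; it is O(n²) and purely illustrative, not the study's analysis pipeline (which would use windowed FFT-based methods).

```python
import math

def band_power(x, fs, f_lo=8.0, f_hi=12.0):
    # Summed DFT power of samples x (sampling rate fs, in Hz) over all
    # frequency bins that fall inside [f_lo, f_hi].
    n = len(x)
    power = 0.0
    for k in range(1, n // 2):
        if f_lo <= k * fs / n <= f_hi:
            re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / (n * n)
    return power

fs, n = 100, 100
alpha = [math.sin(2 * math.pi * 10 * t / fs) for t in range(n)]  # 10 Hz tone
beta = [math.sin(2 * math.pi * 20 * t / fs) for t in range(n)]   # 20 Hz tone
```

A 10 Hz test signal yields substantial power in the 8–12 Hz band while a 20 Hz signal yields essentially none, which is the sense in which "less alpha power during maintenance" is a claim about band-limited energy in the EEG.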
Collapse
Affiliation(s)
- Sung-Joo Lim
- Department of Speech, Language, and Hearing Sciences, Boston University, United States.
| | - Yaminah D Carter
- Department of Speech, Language, and Hearing Sciences, Boston University, United States
| | - J Michelle Njoroge
- Department of Speech, Language, and Hearing Sciences, Boston University, United States
| | | | - Tyler K Perrachione
- Department of Speech, Language, and Hearing Sciences, Boston University, United States.
| |
Collapse
|
48
|
Macuch Silva V, Franke M. Pragmatic Prediction in the Processing of Referring Expressions Containing Scalar Quantifiers. Front Psychol 2021; 12:662050. [PMID: 34531781 PMCID: PMC8438145 DOI: 10.3389/fpsyg.2021.662050] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2021] [Accepted: 08/05/2021] [Indexed: 11/17/2022] Open
Abstract
Previous research in cognitive science and psycholinguistics has shown that language users are able to predict upcoming linguistic input probabilistically, pre-activating material on the basis of cues emerging from different levels of linguistic abstraction, from phonology to semantics. Current evidence suggests that linguistic prediction also operates at the level of pragmatics, where processing is strongly constrained by context. To test a specific theory of contextually-constrained processing, termed pragmatic surprisal theory here, we used a self-paced reading task where participants were asked to view visual scenes and then read descriptions of those same scenes. Crucially, we manipulated whether the visual context biased readers into specific pragmatic expectations about how the description might unfold word by word. Contrary to the predictions of pragmatic surprisal theory, we found that participants took longer reading the main critical term in scenarios where they were biased by context and pragmatic constraints to expect a given word, as opposed to scenarios where there was no pragmatic expectation for any particular referent.
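The "pragmatic surprisal" quantity this study tests is, formally, just the negative log probability of the upcoming word under a contextually conditioned distribution, with reading time predicted to scale with it. A minimal sketch (the probabilities are invented for illustration):

```python
import math

def surprisal(p):
    # Surprisal in bits: -log2 of the probability the comprehender
    # assigns to the upcoming word given the (visual/pragmatic) context.
    return -math.log2(p)

# Surprisal theory predicts a contextually expected referent (high
# probability) is read faster than an unexpected one (low probability):
expected = surprisal(0.8)    # ~0.32 bits
unexpected = surprisal(0.1)  # ~3.32 bits
```

The study's finding runs against this prediction: critical words that the visual context made pragmatically expected (low surprisal under the theory) were read more slowly, not faster.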
Collapse
Affiliation(s)
- Vinicius Macuch Silva
- Cognitive Modeling Group, Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
| | - Michael Franke
- Cognitive Modeling Group, Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
| |
Collapse
|
49
|
Herrmann B, Maess B, Johnsrude IS. A neural signature of regularity in sound is reduced in older adults. Neurobiol Aging 2021; 109:1-10. [PMID: 34634748 DOI: 10.1016/j.neurobiolaging.2021.09.011] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2021] [Revised: 09/03/2021] [Accepted: 09/08/2021] [Indexed: 01/21/2023]
Abstract
Sensitivity to repetitions in sound amplitude and frequency is crucial for sound perception. As with other aspects of sound processing, sensitivity to such patterns may change with age, and may help explain some age-related changes in hearing such as segregating speech from background sound. We recorded magnetoencephalography to characterize differences in the processing of sound patterns between younger and older adults. We presented tone sequences that either contained a pattern (made of a repeated set of tones) or did not contain a pattern. We show that auditory cortex in older, compared to younger, adults is hyperresponsive to sound onsets, but that sustained neural activity in auditory cortex, indexing the processing of a sound pattern, is reduced. Hence, the sensitivity of neural populations in auditory cortex fundamentally differs between younger and older individuals, overresponding to sound onsets, while underresponding to patterns in sounds. This may help to explain some age-related changes in hearing such as increased sensitivity to distracting sounds and difficulties tracking speech in the presence of other sound.
Collapse
Affiliation(s)
- Björn Herrmann
- Department of Psychology & Brain and Mind Institute, The University of Western Ontario, London, ON, Canada; Rotman Research Institute, Baycrest, North York, ON, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada.
| | - Burkhard Maess
- Brain Networks Unit, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Ingrid S Johnsrude
- Department of Psychology & Brain and Mind Institute, The University of Western Ontario, London, ON, Canada; School of Communication Sciences & Disorders, The University of Western Ontario, London, ON, Canada
| |
Collapse
|
50
|
Jain S, Cherian R, Nataraja NP, Narne VK. The Relationship Between Tinnitus Pitch, Audiogram Edge Frequency, and Auditory Stream Segregation Abilities in Individuals With Tinnitus. Am J Audiol 2021; 30:524-534. [PMID: 34139145 DOI: 10.1044/2021_aja-20-00087] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/11/2023] Open
Abstract
Purpose Around 80%-93% of individuals with tinnitus have hearing loss. Researchers have found that tinnitus pitch is related to the frequencies of hearing loss, but the relationship between tinnitus pitch and audiogram edge frequency remains unclear. The comorbidity of tinnitus and speech-perception-in-noise problems has also been reported, but the relationship between tinnitus pitch and speech perception in noise has seldom been investigated. This study was designed to estimate the relationship between tinnitus pitch, audiogram edge frequency, and speech perception in noise, with speech perception in noise measured using an auditory stream segregation paradigm. Method Thirteen individuals with bilateral mild-to-severe tonal tinnitus and minimal-to-mild cochlear hearing loss were selected, along with thirteen individuals with hearing loss but no tinnitus. The audiogram of each participant with tinnitus was matched with that of a participant without tinnitus. Tinnitus pitch was measured and compared with audiogram edge frequency. Stream segregation thresholds were estimated at the fission and fusion boundaries using pure-tone stimuli in an ABA paradigm, both at each participant's admitted tinnitus pitch and at one octave below it. Results Tinnitus pitch correlated highly with audiogram edge frequency. Overall stream segregation thresholds were higher (indicating poorer stream segregation) for individuals with tinnitus. Within the tinnitus group, thresholds were significantly lower at the frequency corresponding to the admitted tinnitus pitch than at one octave below it. Conclusions This information may help in educating patients about the relationship between hearing loss and tinnitus. The findings may also account for the speech-perception-in-noise difficulties often reported by individuals with tinnitus.
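In the ABA paradigm referenced above, each triplet alternates an A tone, a B tone offset by some number of semitones, and the A tone again, followed by a silent gap; larger A-B separations make the tones more likely to split into two perceptual streams (fission), smaller ones to fuse into a single stream. A minimal sketch of the triplet frequencies (the 1000 Hz base and 3-semitone offset are illustrative values, not the study's stimuli):

```python
def aba_triplet_freqs(f_a: float, semitones: float) -> list:
    """Frequencies of one ABA_ triplet: A tone, B tone shifted up by
    `semitones`, then the A tone again (the trailing gap has no tone).
    The A-B separation in semitones is the variable typically swept to
    find fission and fusion boundaries."""
    f_b = f_a * 2 ** (semitones / 12)
    return [f_a, f_b, f_a]

# Example: A at 1000 Hz (e.g., placed near a participant's tinnitus
# pitch), B three semitones higher -- B comes out just under 1190 Hz.
triplet = aba_triplet_freqs(1000.0, 3)
```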
Collapse
Affiliation(s)
- Saransh Jain
- Department of Speech and Hearing, Jagadguru Sri Shivarathreeshwara Institute of Speech and Hearing, Mysuru, India
| | - Riya Cherian
- Department of ENT, Sree Gokulam Medical College & Research Foundation, Venjaranmood, India
| | - Nuggehalli P. Nataraja
- Department of Speech and Hearing, Jagadguru Sri Shivarathreeshwara Institute of Speech and Hearing, Mysuru, India
| | - Vijaya Kumar Narne
- Department of Mechanical Engineering, Indian Institute of Technology Kanpur, India
| |
Collapse
|