1
Orioli G, Dragovic D, Farroni T. Perception of visual and audiovisual trajectories toward and away from the body in the first postnatal year. J Exp Child Psychol 2024; 243:105921. PMID: 38615600. DOI: 10.1016/j.jecp.2024.105921.
Abstract
Perceiving motion in depth, especially motion in relation to the body, is important in everyday life. Visual and auditory cues each inform us about motion in space when presented in isolation, but the most comprehensive information comes from combining the two. We traced the development of infants' ability to discriminate between visual motion trajectories across peripersonal space and to match them with auditory cues specifying the same peripersonal motion. We measured 5-month-old (n = 20) and 9-month-old (n = 20) infants' visual preferences for motion toward or away from their body (presented simultaneously and side by side) across three conditions: (a) visual displays presented alone, (b) visual displays paired with a sound increasing in intensity, and (c) visual displays paired with a sound decreasing in intensity. Both groups preferred approaching motion in the visual-only condition. When the visual displays were paired with a sound increasing in intensity, neither group showed a visual preference. When a sound decreasing in intensity was played instead, the 5-month-olds preferred the receding (spatiotemporally congruent) visual stimulus, whereas the 9-month-olds preferred the approaching (spatiotemporally incongruent) visual stimulus. We speculate that in the approaching-sound condition the behavioral salience of the sound led infants to focus on the auditory information alone, in order to prepare a motor response, and to neglect the visual stimuli. In the receding-sound condition, the different response patterns of the two groups may instead have been driven by infants' emerging motor abilities and their developing predictive processing mechanisms, which support and influence each other.
Affiliation(s)
- Giulia Orioli: Centre for Developmental Science, School of Psychology, University of Birmingham, Birmingham B15 2SB, UK; Department of Developmental Psychology and Socialization, University of Padova, 35131 Padova, Italy
- Danica Dragovic: Paediatric Unit, Hospital of Monfalcone, 34074 Monfalcone, Italy
- Teresa Farroni: Department of Developmental Psychology and Socialization, University of Padova, 35131 Padova, Italy
2
Rostami Z, Salari M, Mahdavi S, Etemadifar M. Abnormal multisensory temporal discrimination in Parkinson's disease. Brain Res 2024; 1834:148901. PMID: 38561085. DOI: 10.1016/j.brainres.2024.148901.
Abstract
Cognitive deficits are prevalent in Parkinson's disease (PD), ranging from mild deficits in perception and executive function to severe dementia. Multisensory integration (MSI), the ability to pool information from different sensory modalities into a combined, coherent perception of the environment, is known to be impaired in PD. This study investigated the disruption of audiovisual MSI in PD by evaluating temporal discrimination between auditory and visual stimuli presented at different stimulus onset asynchronies (SOAs). Fifteen PD patients and fifteen age-matched healthy controls reported whether audiovisual stimulus pairs were temporally simultaneous. The temporal binding window (TBW), the interval within which sensory modalities are perceived as synchronous, was adopted as the comparison index between PD patients and healthy individuals. PD patients had a significantly wider TBW than healthy controls, indicating abnormal audiovisual temporal discrimination. Furthermore, PD patients had more difficulty than healthy controls in discriminating temporal asynchrony for visual-first stimuli, whereas no significant difference was observed for auditory-first stimuli. PD patients also had shorter reaction times than healthy controls regardless of stimulus order. Together, our findings point to abnormal audiovisual temporal discrimination, a major component of MSI irregularity, in PD patients. These results have important implications for future MSI experiments and for models that aim to uncover the underlying mechanisms of MSI in PD.
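The TBW analysis described above can be sketched as follows. This is a hypothetical illustration, not the authors' analysis code: a Gaussian is fit to the proportion of "simultaneous" judgments at each SOA (by a simple grid search, to keep the sketch dependency-free), and the window width is summarized as the full width at half maximum. The SOA grid and response proportions are invented.

```python
import math

# Invented data: proportion of "simultaneous" judgments per SOA (ms).
# Negative SOA = visual-first, positive = auditory-first.
soas = [-300, -200, -100, 0, 100, 200, 300]
p_sync = [0.15, 0.45, 0.80, 0.95, 0.75, 0.40, 0.10]

def gaussian(soa, amp, mu, sigma):
    """Gaussian curve for the simultaneity-judgment profile."""
    return amp * math.exp(-((soa - mu) ** 2) / (2 * sigma ** 2))

def fit_tbw(soas, p_sync):
    """Least-squares grid search; returns the best (amp, mu, sigma)."""
    best, best_err = None, float("inf")
    for amp in [a / 20 for a in range(10, 21)]:      # peak height 0.5 .. 1.0
        for mu in range(-100, 101, 10):              # window center (ms)
            for sigma in range(50, 401, 10):         # width parameter (ms)
                err = sum((gaussian(s, amp, mu, sigma) - p) ** 2
                          for s, p in zip(soas, p_sync))
                if err < best_err:
                    best, best_err = (amp, mu, sigma), err
    return best

amp, mu, sigma = fit_tbw(soas, p_sync)
# One common convention: report the TBW as the full width at half maximum.
fwhm = 2 * math.sqrt(2 * math.log(2)) * sigma
```

A wider fitted window (larger `sigma`, hence larger `fwhm`) in one group than another is the kind of group difference the study reports.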
Affiliation(s)
- Zahra Rostami: Clinical Research Development Unit, Shohada-e Tajrish Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran; School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mehri Salari: Clinical Research Development Unit, Shohada-e Tajrish Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran; School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Sara Mahdavi: School of Medicine, Alborz University of Medical Sciences, Karaj, Iran
- Masoud Etemadifar: Faculty of Medicine, Isfahan University of Medical Science, Isfahan, Iran
3
Xie X, Li T, Xu S, Yu Y, Ma Y, Liu Z, Ji M. The Effects of Auditory Working Memory Task on Situation Awareness in Complex Dynamic Environments: An Eye-movement Study. Hum Factors 2024; 66:1844-1859. PMID: 37529928. DOI: 10.1177/00187208231191389.
Abstract
OBJECTIVE This study investigated the effect of an auditory working memory task on situation awareness (SA) and eye-movement patterns in complex dynamic environments. BACKGROUND Many human errors in aviation are caused by a lack of SA, and distraction by auditory secondary tasks is a serious threat to SA. However, it remains unclear how auditory working memory tasks affect SA and eye-movement patterns. METHOD Participants (n = 28) were randomly allocated to two groups that received different periods of visual search training (short versus long). They then completed an SA measurement task under three auditory secondary-task conditions (no secondary task, auditory calculation task, and auditory 2-back task). Eye-movement data were collected during the SA measurement task. RESULTS The auditory 2-back task significantly reduced overall SA, Level 1 SA, dwell times, and the total percentage of fixation time on task-related areas of interest. Overall SA and Level 3 SA were not reduced by the auditory 2-back task for individuals in the longer visual search training condition. CONCLUSION Auditory working memory load impairs SA at the perception and projection stages; however, greater experience can overcome the impairment at the projection stage. APPLICATION This study suggests two approaches to preventing loss of SA: (1) improving crew members' communication skills to ensure accurate and clear transmission of information, reducing the difficulty of processing it, and (2) providing targeted cognitive training tailored to each pilot's level of experience.
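The dwell-time and percentage-of-fixation-time measures reported in the results can be illustrated with a minimal sketch. The fixation coordinates, durations, and the rectangular area of interest (AOI) below are invented, not taken from the study:

```python
# Invented fixation data for one trial: (x, y, duration_ms).
fixations = [(120, 80, 250), (400, 300, 180), (130, 90, 320), (600, 50, 200)]

# Hypothetical task-related AOI as a rectangle: (x_min, y_min, x_max, y_max).
task_aoi = (100, 60, 200, 120)

def in_aoi(x, y, aoi):
    """True if the fixation point falls inside the rectangular AOI."""
    x0, y0, x1, y1 = aoi
    return x0 <= x <= x1 and y0 <= y <= y1

# Dwell time: summed duration of fixations landing on the AOI.
dwell_ms = sum(d for x, y, d in fixations if in_aoi(x, y, task_aoi))

# Percentage of total fixation time spent on the AOI.
pct_on_aoi = 100 * dwell_ms / sum(d for _, _, d in fixations)
```

A secondary task that pulls attention away from task-related AOIs would show up as lower `dwell_ms` and `pct_on_aoi`, which is the pattern the 2-back condition produced.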
Affiliation(s)
- Xudong Xie: School of Psychology, Shaanxi Normal University, Xi'an, China; Key Laboratory for Behaviour and Cognitive Neuroscience of Shaanxi Province, Xi'an, China
- Tiantian Li: Northwest University of Political Science and Law, Xi'an, China
- Shuai Xu: School of Psychology, Shaanxi Normal University, Xi'an, China; Key Laboratory for Behaviour and Cognitive Neuroscience of Shaanxi Province, Xi'an, China
- Yingyue Yu: School of Psychology, Shaanxi Normal University, Xi'an, China; Key Laboratory for Behaviour and Cognitive Neuroscience of Shaanxi Province, Xi'an, China
- Yifeng Ma: School of Psychology, Shaanxi Normal University, Xi'an, China; Key Laboratory for Behaviour and Cognitive Neuroscience of Shaanxi Province, Xi'an, China
- Zhen Liu: School of Psychology, Shaanxi Normal University, Xi'an, China; Key Laboratory for Behaviour and Cognitive Neuroscience of Shaanxi Province, Xi'an, China
- Ming Ji: School of Psychology, Shaanxi Normal University, Xi'an, China; Key Laboratory for Behaviour and Cognitive Neuroscience of Shaanxi Province, Xi'an, China
4
Kong PR, Han KT. Psychological and physiological effects of soundscapes: A systematic review of 25 experiments in the English and Chinese literature. Sci Total Environ 2024; 929:172197. PMID: 38582113. DOI: 10.1016/j.scitotenv.2024.172197.
Abstract
The aim of this systematic review was to conduct a comprehensive and rigorous investigation of psychological and physiological responses to, and audio-visual interactions with, soundscapes, to present an overview of the current status of the field and to provide suggestions for future research. Our literature search focused on empirical, quantitative journal articles and gray literature in English and Chinese. The review excluded literature on pure music, religious sounds, humanistic sounds, historical sounds, medical research, and differences in materials used. The Joanna Briggs Institute's Checklist for Randomized Controlled Trials was used to assess the risk of bias in the included studies. Twenty-five studies involving 1950 participants were included. The major findings were that: (1) there were significant associations between psychological and physiological responses; (2) audio-visual interaction affected both psychological and physiological responses; and (3) because of the high risk of bias in the included studies, their findings should be interpreted with caution. Nevertheless, given that a systematic review carries a higher level of evidence than a single study and that the synthesized evidence identified here aligns with the results of other studies, the studies reviewed together provide consistent evidence. Replication is important in empirical research for building trustworthy results. Future research should focus on the psychological responses of pleasantness, preference, tranquility, the eight semantic dimensions (ISO 12913-2:2018), and the 11 pairs of adjectives describing the soundscape (Ba et al., 2023), as well as physiological responses such as heart rate variability and salivary measures, and should follow the CONSORT guidelines to improve research quality. An integration of sensory modalities, environmental factors, contextual indicators, temporal data, demographic variables, socio-cultural factors, and psychological and physiological responses may provide deeper insights into how people experience and understand the acoustic environment in context.
Affiliation(s)
- Pei-Rou Kong: Department of Landscape Architecture, National Chin-Yi University of Technology, No.57, Sec. 2, Zhongshan Rd., Taiping Dist., Taichung 41170, Taiwan
- Ke-Tsung Han: Department of Landscape Architecture, National Chin-Yi University of Technology, No.57, Sec. 2, Zhongshan Rd., Taiping Dist., Taichung 41170, Taiwan
5
Sato M. Audiovisual speech asynchrony asymmetrically modulates neural binding. Neuropsychologia 2024; 198:108866. PMID: 38518889. DOI: 10.1016/j.neuropsychologia.2024.108866.
Abstract
Previous psychophysical and neurophysiological studies in young healthy adults have provided evidence that audiovisual speech integration occurs with a large degree of temporal tolerance around true simultaneity. To further determine whether audiovisual speech asynchrony modulates auditory cortical processing and neural binding in young healthy adults, N1/P2 auditory evoked responses were compared using an additive model during a syllable categorization task, either without asynchrony or with an audiovisual asynchrony ranging from a 240-ms visual lead to a 240-ms auditory lead. Consistent with previous psychophysical findings, the results converge in favor of an asymmetric temporal integration window. Three main findings were observed: (1) predictive temporal and phonetic cues from pre-phonatory visual movements before the acoustic onset appeared essential for neural binding to occur; (2) audiovisual synchrony, with visual pre-phonatory movements predictive of the onset of the acoustic signal, was a prerequisite for N1 latency facilitation; and (3) P2 amplitude suppression and latency facilitation occurred even when visual pre-phonatory movements were predictive not of the acoustic onset but of the syllable to come. Taken together, these findings help clarify how audiovisual speech integration operates in part through two stages of visually based temporal and phonetic predictions.
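The additive-model logic used in such ERP studies, comparing the audiovisual response against the sum of the unimodal responses, can be sketched as follows. The waveforms below are synthetic stand-ins, not data from the study; the audiovisual trace is made sub-additive on purpose so the residual is nonzero:

```python
import math

n = 200  # samples in one epoch
# Synthetic unimodal ERPs (auditory-only and visual-only).
A = [math.sin(i / 10) for i in range(n)]
V = [0.3 * math.sin(i / 15) for i in range(n)]
# Synthetic audiovisual response, deliberately sub-additive.
AV = [0.8 * (a + v) for a, v in zip(A, V)]

# Additive model: residual = AV - (A + V). A nonzero residual
# (here, suppression) is taken as evidence of audiovisual interaction.
residual = [av - (a + v) for av, a, v in zip(AV, A, V)]
rms_residual = math.sqrt(sum(r * r for r in residual) / n)
```

In the actual analyses, amplitude and latency of the N1 and P2 components of `AV` versus `A + V` are compared across asynchrony conditions rather than a single residual summary.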
Affiliation(s)
- Marc Sato: Laboratoire Parole et Langage, Centre National de la Recherche Scientifique, Aix-Marseille Université, Aix-en-Provence, France
6
Leavitt D, Alanazi FI, Al-Ozzi TM, Cohn M, Hodaie M, Kalia SK, Lozano AM, Milosevic L, Hutchison WD. Auditory oddball responses in the human subthalamic nucleus and substantia nigra pars reticulata. Neurobiol Dis 2024; 195:106490. PMID: 38561111. DOI: 10.1016/j.nbd.2024.106490.
Abstract
The auditory oddball is a mainstay of research on attention, novelty, and sensory prediction. How this task engages subcortical structures such as the subthalamic nucleus and substantia nigra pars reticulata is unclear. We administered an auditory oddball task while recording single-unit activity (35 units) and local field potentials (57 recordings) from the subthalamic nucleus and substantia nigra pars reticulata of 30 patients with Parkinson's disease undergoing deep brain stimulation surgery. We found tone-modulated and oddball-modulated units in both regions. Population activity differentiated oddball from standard trials from 200 ms to 1000 ms after tone onset in both regions. In the substantia nigra, beta-band activity in the local field potential decreased following oddball tones. The oddball-related activity we observe may underlie attention, sensory prediction, or surprise-induced motor suppression.
Affiliation(s)
- Dallas Leavitt: Institute of Biomedical Engineering, University of Toronto, Canada; University of Toronto - Max Planck Centre for Neural Science and Technology, University of Toronto, Canada; Krembil Brain Institute, University Health Network, Toronto, Canada
- Frhan I Alanazi: Krembil Brain Institute, University Health Network, Toronto, Canada; Department of Physiology, University of Toronto, Canada
- Tameem M Al-Ozzi: Krembil Brain Institute, University Health Network, Toronto, Canada; Department of Physiology, University of Toronto, Canada
- Melanie Cohn: Krembil Brain Institute, University Health Network, Toronto, Canada
- Mojgan Hodaie: Krembil Brain Institute, University Health Network, Toronto, Canada; Department of Surgery, University of Toronto, Canada; Division of Neurosurgery, Toronto Western Hospital - University Health Network, Toronto, Canada
- Suneil K Kalia: Krembil Brain Institute, University Health Network, Toronto, Canada; Department of Surgery, University of Toronto, Canada; Division of Neurosurgery, Toronto Western Hospital - University Health Network, Toronto, Canada
- Andres M Lozano: Krembil Brain Institute, University Health Network, Toronto, Canada; Department of Surgery, University of Toronto, Canada; Division of Neurosurgery, Toronto Western Hospital - University Health Network, Toronto, Canada
- Luka Milosevic: Institute of Biomedical Engineering, University of Toronto, Canada; University of Toronto - Max Planck Centre for Neural Science and Technology, University of Toronto, Canada; Krembil Brain Institute, University Health Network, Toronto, Canada; Center for Advancing Neurotechnological Innovation to Application (CRANIA), Toronto, Canada; KITE Research Institute, University Health Network, Toronto, Canada
- William D Hutchison: Krembil Brain Institute, University Health Network, Toronto, Canada; Department of Physiology, University of Toronto, Canada; Department of Surgery, University of Toronto, Canada; Division of Neurosurgery, Toronto Western Hospital - University Health Network, Toronto, Canada
7
Wong CY, Cedillo AH, Morsella E. The priming of stimulus-elicited involuntary mental imagery. Acta Psychol (Amst) 2024; 246:104250. PMID: 38615596. DOI: 10.1016/j.actpsy.2024.104250.
Abstract
Percepts, urges, and even high-level cognitions often enter the conscious field involuntarily. The Reflexive Imagery Task (RIT) was designed to investigate experimentally the nature of such entry into consciousness. In the most basic version of the task, participants are instructed not to subvocalize the names of visual objects; involuntary subvocalizations nevertheless arise on the majority of trials. Can these effects be influenced by priming? In our experiment, participants were exposed to an auditory prime 300 ms before the RIT stimuli were presented. For example, participants heard the word "FOOD" before seeing two RIT stimuli (e.g., line drawings of BANANA and CAT, the former being the target of the prime). The short interval between prime and target allowed us to assess whether the RIT effect is strategic or automatic. Before each trial, participants were instructed to disregard what they heard and not to think of the name of any of the objects. On average, participants thought (involuntarily) of the name of the object associated with the prime on 83% of trials. This is the first study to use a priming technique within the context of the RIT. The theoretical implications of these involuntary effects are discussed.
Affiliation(s)
- Christina Y Wong: Department of Psychology, San Francisco State University, United States of America
- Ezequiel Morsella: Department of Psychology, San Francisco State University, United States of America; Department of Neurology, University of California, San Francisco, United States of America
8
Delussi M, Valt C, Silvestri A, Ricci K, Ladisa E, Ammendola E, Rampino A, Pergola G, de Tommaso M. Auditory mismatch negativity in pre-manifest and manifest Huntington's disease. Clin Neurophysiol 2024; 162:121-128. PMID: 38603947. DOI: 10.1016/j.clinph.2024.03.020.
Abstract
AIM The aim of this study was to investigate the electrophysiological brain response elicited in a passive acoustic oddball paradigm, i.e., mismatch negativity (MMN), in patients with Huntington's disease (HD) in the premanifest (pHD) and manifest (mHD) phases, and to correlate the event-related potential (ERP) results with disease characteristics. METHODS This was an observational cross-sectional MMN study. In addition to the MMN recording during the passive oddball task, all subjects with first-degree inheritance for HD underwent genetic testing for mutant HTT, the Huntington's Disease Rating Scale, the Total Functional Capacity Scale, the Problem Behaviors Assessment short form, and the Mini-Mental State Examination. RESULTS Global field power (GFP) in the MMN time window was reduced in mHD patients compared with pHD patients and normal controls (NC). In the pHD group, MMN amplitude was only slightly, and not significantly, increased compared with mHD, while pHD patients showed increased inter-trial theta coherence compared with mHD. In the entire sample of HD gene carriers, the main MMN traits did not correlate with motor performance, cognitive impairment, or functional disability. CONCLUSION These results suggest an initial, subtle deterioration of pre-attentive mechanisms in the presymptomatic phase of HD, with an increasing phase shift in the MMN time frame. This could indicate initial functional changes with a possible compensatory effect. SIGNIFICANCE An initial slight decrease in MMN associated with increased phase coherence in the corresponding EEG frequencies could indicate an early functional involvement of pre-attentive resources that precedes the clinical expression of HD.
Affiliation(s)
- Marianna Delussi: Department of Education, Psychology and Communication, University of Bari Aldo Moro, Bari, Italy
- Christian Valt: Department of Translational Biomedicine and Neuroscience, University of Bari Aldo Moro, Bari, Italy
- Adelchi Silvestri: Department of Translational Biomedicine and Neuroscience, University of Bari Aldo Moro, Bari, Italy
- Katia Ricci: Neurophysiopathology Unit, Policlinico General Hospital, Bari, Italy
- Emanuella Ladisa: Department of Translational Biomedicine and Neuroscience, University of Bari Aldo Moro, Bari, Italy
- Elena Ammendola: Neurophysiopathology Unit, Policlinico General Hospital, Bari, Italy
- Antonio Rampino: Department of Translational Biomedicine and Neuroscience, University of Bari Aldo Moro, Bari, Italy
- Giulio Pergola: Department of Translational Biomedicine and Neuroscience, University of Bari Aldo Moro, Bari, Italy; Lieber Institute for Brain Development, Johns Hopkins Medical Campus, Baltimore, MD, United States; Department of Psychiatry and Behavioral Sciences, Johns Hopkins School of Medicine, Baltimore, MD, United States
- Marina de Tommaso: Department of Translational Biomedicine and Neuroscience, University of Bari Aldo Moro, Bari, Italy
9
Deniz B, Deniz R, Ataş A. Loudness Balancing Optimization for Better Speech Intelligibility, Music Perception, and Spectral Temporal Resolution in Cochlear Implant Users. Otol Neurotol 2024; 45:e385-e392. PMID: 38518764. DOI: 10.1097/mao.0000000000004164.
Abstract
HYPOTHESIS Behaviorally based programming with loudness balancing (LB) would result in better speech understanding, spectral-temporal resolution, and music perception scores, and these scores would be related to one another. BACKGROUND Loudness imbalances at upper stimulation levels may cause sounds to be perceived as irregular, gravelly, or overly echoed and may negatively affect the listening performance of the cochlear implant (CI) user. LB should be performed after fitting to overcome these problems. METHODS The study included 26 unilateral Med-EL CI users. Two CI programs were recorded for each participant: one based on the objective electrically evoked stapedial reflex threshold (P1) and one programmed behaviorally with LB (P2). The Turkish Matrix Sentence Test (TMS) was used to evaluate speech perception; the Random Gap Detection Test (RGDT) and Spectral-Temporally Modulated Ripple Test (SMRT) to evaluate spectral-temporal resolution; and the Mini Profile of Music Perception Skills (mini-PROMS) and Melodic Contour Identification (MCI) tests to evaluate music perception. The results were compared between programs. RESULTS Significantly better scores were obtained with P2 on the TMS in both noise and quiet. SMRT scores correlated significantly with TMS scores in quiet and noise and with mini-PROMS sound perception results. Although better scores were obtained with P2 on the mini-PROMS total score and the MCI, a significant difference was found only for the MCI. CONCLUSION The data show that equalizing loudness across CI electrodes leads to better perceptual acuity, and they reveal the relationship between speech perception, spectral-temporal resolution, and music perception.
Affiliation(s)
- Burcu Deniz: Istanbul University-Cerrahpaşa, Faculty of Health Science, Department of Audiology, İstanbul, Türkiye
- Rişvan Deniz: Koç University Hospital, Department of Audiology, İstanbul, Türkiye
- Ahmet Ataş: Koç University, Faculty of Medicine, Department of Otorhinolaryngology, İstanbul, Türkiye
10
Kong G, Aberkane C, Desoche C, Farnè A, Vernet M. No evidence in favor of the existence of "intentional" binding. J Exp Psychol Hum Percept Perform 2024; 50:626-635. PMID: 38635224. DOI: 10.1037/xhp0001204.
Abstract
Intentional binding refers to the subjective temporal compression between a voluntary action and its subsequent sensory outcome. Despite some studies challenging the link between temporal compression and intentional action, intentional binding is still widely used as an implicit measure for the sense of agency. The debate remains unsettled primarily because the experimental conditions used in previous studies were confounded with various alternative causes for temporal compression, and action intention has not yet been tested comprehensively against all potential alternative causes in a single study. Here, we solve this puzzle by jointly comparing participants' estimates of the interval between three types of triggering events with comparable predictability-voluntary movement, passive movement, and external sensory event-and an external sensory outcome (auditory or visual across experiments). The results failed to show intentional binding, that is, no shorter interval estimation for the voluntary than the passive movement conditions. Instead, we observed temporal (but not intentional) binding when comparing both movement conditions with the external sensory condition. Thus, temporal binding appears to originate from sensory integration and temporal prediction, not from action intention. As such, these findings underscore the need to reconsider the use of "intentional binding" as a reliable proxy of the sense of agency. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Affiliation(s)
- Gaiqing Kong: Integrative Multisensory Perception Action & Cognition Team (ImpAct), Lyon Neuroscience Research Center (CRNL), INSERM U1028 CNRS UMR5292, University Claude Bernard Lyon 1
- Cheryne Aberkane: Integrative Multisensory Perception Action & Cognition Team (ImpAct), Lyon Neuroscience Research Center (CRNL), INSERM U1028 CNRS UMR5292, University Claude Bernard Lyon 1
- Clément Desoche: Lyon Neuroscience Research Center (CRNL), INSERM U1028, CNRS UMR5292, University Claude Bernard Lyon 1
- Alessandro Farnè: Integrative Multisensory Perception Action & Cognition Team (ImpAct), Lyon Neuroscience Research Center (CRNL), INSERM U1028 CNRS UMR5292, University Claude Bernard Lyon 1
- Marine Vernet: Integrative Multisensory Perception Action & Cognition Team (ImpAct), Lyon Neuroscience Research Center (CRNL), INSERM U1028 CNRS UMR5292, University Claude Bernard Lyon 1
11
Li X, Cai S, Chen Y, Tian X, Wang A. Enhancement of visual dominance effects at the response level in children with attention-deficit/hyperactivity disorder. J Exp Child Psychol 2024; 242:105897. PMID: 38461557. DOI: 10.1016/j.jecp.2024.105897.
Abstract
Previous studies have widely demonstrated that individuals with attention-deficit/hyperactivity disorder (ADHD) exhibit deficits in conflict control tasks. However, there is limited evidence regarding the performance of children with ADHD in cross-modal conflict processing. The current study investigated whether children with ADHD have poor conflict control and how this affects sensory dominance effects at different levels of information processing under the influence of visual similarity. A total of 82 children aged 7 to 14 years were recruited: 41 children with ADHD and 41 age- and sex-matched typically developing (TD) children. We used the 2:1 mapping paradigm to separate levels of conflict, dividing the congruency of the audiovisual stimuli into three conditions. In C trials, the target and distractor stimuli were identical, and the bimodal stimuli corresponded to the same response key. In PRIC trials, the distractor differed from the target and did not correspond to any response key. In RIC trials, the distractor differed from the target, and the bimodal stimuli corresponded to different response keys. We thus explicitly differentiated cross-modal conflict into a preresponse level (PRIC > C), corresponding to the encoding process, and a response level (RIC > PRIC), corresponding to the response selection process. Our results suggested that, at the preresponse level, auditory distractors caused more interference during visual processing than visual distractors caused during auditory processing (i.e., typical auditory dominance), regardless of group. At the response level, however, visual dominance effects were observed in the ADHD group but not in the TD group. A possible explanation is that the increased interference produced by visual similarity made it more difficult for children with ADHD to control conflict when simultaneously confronted with incongruent visual and auditory inputs. The current study highlights how children with ADHD process cross-modal conflicts at multiple levels of information processing, thereby shedding light on the mechanisms underlying ADHD.
Affiliation(s)
- Xin Li: Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou 215123, China
- Shizhong Cai: Department of Child and Adolescent Healthcare, Children's Hospital of Soochow University, Suzhou 215025, China
- Yan Chen: Department of Child and Adolescent Healthcare, Children's Hospital of Soochow University, Suzhou 215025, China
- Xiaoming Tian: Department of Psychology, Research Center for Psychology and Behavioral Sciences, Suzhou University of Science and Technology, Suzhou 215011, China
- Aijun Wang: Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou 215123, China
12
Johansson RCG, Kelber P, Ulrich R. Speeded classification of visual events is sensitive to crossmodal intensity correspondence. J Exp Psychol Hum Percept Perform 2024; 50:554-569. [PMID: 38546625] [DOI: 10.1037/xhp0001183]
Abstract
Crossmodal correspondences refer to systematic associations between stimulus attributes encountered in different sensory modalities. These correspondences can be probed in the speeded classification task, where they tend to produce congruency effects. This study aimed to replicate and extend previous work by Marks (1987, Experiment 3, Journal of Experimental Psychology: Human Perception and Performance, Vol. 13, No. 3, 384-394), which demonstrated a crossmodal correspondence between auditory and visual intensity attributes. Experiment 1 successfully replicates Marks' original finding that performance in a brightness classification task is affected by whether the loudness of a concurrently presented auditory distractor matches the brightness of the visual target. Furthermore, in line with the original study, we found that this effect was absent in a lightness classification task. In Experiment 2, we demonstrate that the loudness-brightness correspondence is robust even when the exact stimulus input changes. This finding suggests a context-dependent mapping between loudness and brightness levels, rather than an absolute mapping between any particular intensity levels. Finally, an exploratory analysis using the diffusion model for conflict tasks indicated that evidence from the task-irrelevant modality generates a burst of weak, short-lived automatic activation that can bias decision-making in difficult tasks, but not in easy tasks. Our results provide further evidence for a flexible crossmodal correspondence between brightness and loudness, which might be helpful in determining one's distance to a stimulus source during the early stages of multisensory integration.
Affiliation(s)
- Paul Kelber: Department of Psychology, University of Tübingen
- Rolf Ulrich: Department of Psychology, University of Tübingen
13
Silcox JW, Bennett K, Copeland A, Ferguson SH, Payne BR. The Costs (and Benefits?) of Effortful Listening for Older Adults: Insights from Simultaneous Electrophysiology, Pupillometry, and Memory. J Cogn Neurosci 2024; 36:997-1020. [PMID: 38579256] [DOI: 10.1162/jocn_a_02161]
Abstract
Although the impact of acoustic challenge on speech processing and memory increases as a person ages, older adults may engage in strategies that help them compensate for these demands. In the current preregistered study, older adults (n = 48) listened to sentences, presented in quiet or in noise, that were high constraint with either expected or unexpected endings or were low constraint with unexpected endings. Pupillometry and EEG were simultaneously recorded, and subsequent sentence recognition and word recall were measured. Like young adults in prior work, we found that noise led to increases in pupil size, delayed and reduced ERP responses, and decreased recall for unexpected words. However, in contrast to prior work in young adults, where a larger pupillary response predicted a recovery of the N400 at the cost of poorer memory performance in noise, older adults did not show an associated recovery of the N400 despite decreased memory performance. Instead, we found that in quiet, increases in pupil size were associated with delays in N400 onset latencies and increased recognition memory performance. In conclusion, we found that transient variation in pupil-linked arousal predicted trade-offs between real-time lexical processing and memory that emerged at lower levels of task demand in aging. Moreover, with increased acoustic challenge, older adults still exhibited costs associated with transient increases in arousal without the corresponding benefits.
14
Evers S. The Cerebellum in Musicology: a Narrative Review. Cerebellum 2024; 23:1165-1175. [PMID: 37594626] [DOI: 10.1007/s12311-023-01594-6]
Abstract
The cerebellum is involved in cognitive processing, including music perception and music production. This narrative review aims to summarize the current knowledge on the activation of the cerebellum by different musical stimuli, on the involvement of the cerebellum in cognitive loops underlying the analysis of music, and on the role of the cerebellum in the motor network underlying music production. A possible role of the cerebellum in therapeutic settings is also briefly discussed. In a second part, the cerebellum as an object of musicology (i.e., in classical music, in contemporary music, and in cerebellar disorders of musicians) is described.
Affiliation(s)
- Stefan Evers: Faculty of Medicine, University of Münster, Münster, Germany; Department of Neurology, Krankenhaus Lindenbrunn, Lindenbrunn 1, 31863 Coppenbrügge, Germany
15
Muñoz-Caracuel M, Muñoz V, Ruiz-Martínez FJ, Vázquez Morejón AJ, Gómez CM. Systemic neurophysiological signals of auditory predictive coding. Psychophysiology 2024; 61:e14544. [PMID: 38351668] [DOI: 10.1111/psyp.14544]
Abstract
The predictive coding framework posits that our brain continuously monitors changes in the environment and updates its predictive models, minimizing prediction errors to adapt efficiently to environmental demands. However, the underlying neurophysiological mechanisms of these predictive phenomena remain unclear. The present study aimed to explore the systemic neurophysiological correlates of predictive coding processes during passive and active auditory processing. Electroencephalography (EEG), functional near-infrared spectroscopy (fNIRS), and autonomic nervous system (ANS) measures were analyzed using an auditory pattern-based novelty oddball paradigm. A sample of 32 healthy subjects was recruited. The results showed slow evoked potentials shared between the passive and active conditions that could be interpreted as automatic predictive processes of anticipation and updating, independent of conscious attentional effort. A dissociated topography of cortical hemodynamic activity and distinctive evoked potentials upon auditory pattern violation were also found between the two conditions, whereas only conscious perception leading to imperative responses was accompanied by phasic ANS responses. These results suggest a systemic-level hierarchical reallocation of predictive coding neural resources as a function of contextual demands in the face of sensory stimulation. Principal component analysis further made it possible to associate the variability shared across some of the recorded signals.
Affiliation(s)
- Manuel Muñoz-Caracuel: Department of Experimental Psychology, University of Seville, Seville, Spain; Mental Health Unit, Hospital Universitario Virgen del Rocio, Seville, Spain
- Vanesa Muñoz: Department of Experimental Psychology, University of Seville, Seville, Spain
- Antonio J Vázquez Morejón: Mental Health Unit, Hospital Universitario Virgen del Rocio, Seville, Spain; Department of Personality, Evaluation and Psychological Treatments, University of Seville, Seville, Spain
- Carlos M Gómez: Department of Experimental Psychology, University of Seville, Seville, Spain
16
Hao Y, Hu L. Lower Childhood Socioeconomic Status Is Associated with Greater Neural Responses to Ambient Auditory Changes in Adulthood. J Cogn Neurosci 2024; 36:979-996. [PMID: 38579240] [DOI: 10.1162/jocn_a_02151]
Abstract
Humans' early life experience varies by socioeconomic status (SES), raising the question of how this difference is reflected in the adult brain. An important aspect of brain function is the ability to detect salient ambient changes while focusing on a task. Here, we ask whether subjective social status during childhood is reflected in the way young adults' brains detect changes in irrelevant information. In two studies (total n = 58), we examined electrical brain responses in the frontocentral region to a series of auditory tones, consisting of standard stimuli (80%) and deviant stimuli (20%) interspersed randomly, while participants were engaged in various visual tasks. Both studies showed stronger automatic change detection, indexed by the mismatch negativity (MMN), in lower-SES individuals, regardless of the unattended sound's features, the attended emotional content, or the study type. This larger MMN in lower-SES participants emerged even though they showed no differences in brain and behavioral responses to the attended task. Lower-SES participants also did not involuntarily orient more attention to sound changes (i.e., deviant stimuli), as indexed by the P3a. The study indicates that individuals with lower subjective social status may have an increased ability to automatically detect changes in their environment, which may reflect adaptation to their childhood environments.
Affiliation(s)
- Yu Hao: University of Pennsylvania
17
Lialiou M, Grice M, Röhr CT, Schumacher PB. Auditory Processing of Intonational Rises and Falls in German: Rises Are Special in Attention Orienting. J Cogn Neurosci 2024; 36:1099-1122. [PMID: 38358004] [DOI: 10.1162/jocn_a_02129]
Abstract
This article investigates the processing of intonational rises and falls when presented unexpectedly in a stream of repetitive auditory stimuli. It examines the neurophysiological correlates (ERPs) of attention to these unexpected stimuli through the use of an oddball paradigm, in which sequences of repetitive stimuli are occasionally interspersed with a deviant stimulus, allowing for the elicitation of a mismatch negativity (MMN). Whereas previous oddball studies on attention toward unexpected sounds involving pitch rises were conducted on nonlinguistic stimuli, the present study uses as stimuli lexical items in German with naturalistic intonation contours. Results indicate that rising intonation plays a special role in attention orienting at a pre-attentive processing stage, whereas contextual meaning (here, a list of items) is essential for activating attentional resources at a conscious processing stage. This is reflected in the activation of distinct brain responses: rising intonation evokes the largest MMN, whereas falling intonation elicits a less pronounced MMN followed by a P3 (reflecting a conscious processing stage). Subsequently, we also find a complex interplay between phonological status (i.e., accent/head marking vs. boundary/edge marking) and the direction of pitch change in their contribution to attention orienting: attention is not necessarily oriented toward a specific position in prosodic structure (head or edge). Rather, we find that the intonation contour itself and the appropriateness of the contour in the linguistic context are the primary cues to the two core mechanisms of attention orienting, pre-attentive and conscious orienting respectively, whereas the phonological status of the pitch event plays only a supplementary role.
18
Coy N, Bendixen A, Grimm S, Roeber U, Schröger E. Conditional deviant repetition in the oddball paradigm modulates processing at the level of P3a but not MMN. Psychophysiology 2024; 61:e14545. [PMID: 38366704] [DOI: 10.1111/psyp.14545]
Abstract
The auditory system has an amazing ability to rapidly encode auditory regularities. Evidence comes from the popular oddball paradigm, in which frequent (standard) sounds are occasionally exchanged for rare deviant sounds, which then elicit signs of prediction error based on their unexpectedness (e.g., MMN and P3a). Here, we examine the widely neglected possibility that deviants are bearers of predictive information themselves. Naive participants listened to sound sequences constructed according to a new, modified version of the oddball paradigm that included two types of deviants following diametrically opposed rules: one deviant sound occurred mostly in pairs (repetition rule); the other occurred mostly in isolation (non-repetition rule). Due to this manipulation, the sound following a first deviant (either the same deviant or a standard) was either predictable or unpredictable based on its conditional probability associated with the preceding deviant sound. Our behavioral results from an active deviant detection task replicate previous findings that deviant repetition rules (based on conditional probability) can be extracted when behaviorally relevant. Our electrophysiological findings obtained in a passive listening setting indicate that conditional probability also translates into differential processing at the P3a level. However, MMN was confined to global deviants and was not sensitive to conditional probability. This suggests that higher-level processing concerned with stimulus selection and/or evaluation (reflected in P3a), but not lower-level sensory processing (reflected in MMN), considers rarely encountered rules.
Affiliation(s)
- Nina Coy: Wilhelm-Wundt-Institute of Psychology, Leipzig University, Leipzig, Germany; Max Planck School of Cognition, Leipzig, Germany
- Alexandra Bendixen: Cognitive Systems Lab, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany
- Sabine Grimm: Cognitive Systems Lab, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany; Physics of Cognition Lab, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany
- Urte Roeber: Wilhelm-Wundt-Institute of Psychology, Leipzig University, Leipzig, Germany
- Erich Schröger: Wilhelm-Wundt-Institute of Psychology, Leipzig University, Leipzig, Germany; Max Planck School of Cognition, Leipzig, Germany
19
Kimura T, Kawashima T. The influence of peripheral information on a proactive process during multitasking. Q J Exp Psychol (Hove) 2024; 77:1352-1362. [PMID: 37542429] [DOI: 10.1177/17470218231195198]
Abstract
The aim of this study was to examine whether peripheral information facilitates proactive processes during multitasking. For this purpose, peripheral information was presented regularly during multitasking, and its effects on the performance of a tracking task (main task: reactive process) and a discrimination task (sub-task: proactive process) were examined. Experiment 1 presented peripheral information (white circles) in the same sensory modality (visual) as the information used for multitasking, and the number of circle presentations was manipulated. In Experiment 2, a pure tone (auditory) was presented as peripheral information. In both experiments, the difficulty of the tracking task influenced discrimination performance: as the difficulty of the tracking task (reactive process) increased, more cognitive resources were consumed by the tracking task, leaving fewer cognitive resources available for the discrimination task (proactive process). In addition, regular presentation of peripheral information facilitated discrimination task performance in both experiments. Interestingly, this peripheral information also facilitated tracking task performance (reactive process) even when the tracking task was difficult. Moreover, this promoting effect of the peripheral information occurred regardless of the sensory modality. This study revealed that the processing of peripheral information facilitates the proactive process even when more cognitive resources are consumed, and that this facilitating effect does not conflict with multitasking: it provides a margin of cognitive resources and also facilitates the reactive process. Our results provide evidence of how peripheral information and cognitive resources are used during multitasking.
Affiliation(s)
- Tsukasa Kimura: The Institute of Scientific and Industrial Research (ISIR), Osaka University, Osaka, Japan
- Tomoya Kawashima: Graduate School of Human Sciences, Osaka University, Osaka, Japan
20
Mizuguchi D, Sánchez-Valpuesta M, Kim Y, Dos Santos EB, Kang H, Mori C, Wada K, Kojima S. Daily singing of adult songbirds functions to maintain song performance independently of auditory feedback and age. Commun Biol 2024; 7:598. [PMID: 38762691] [DOI: 10.1038/s42003-024-06311-5]
Abstract
Many songbirds learn to produce songs through vocal practice in early life and continue to sing daily throughout their lifetime. While it is well known that adult songbirds sing as part of their mating rituals, the functions of singing behavior outside of reproductive contexts remain unclear. Here, we investigated this issue in adult male zebra finches by suppressing their daily singing for two weeks and examining the effects on song performance. We found that singing suppression decreased the pitch, amplitude, and duration of songs, and that those song features substantially recovered through subsequent free singing. These reversible song changes were not dependent on auditory feedback or the age of the birds, contrasting with the adult song plasticity that has been reported previously. These results demonstrate that adult song structure is not stable without daily singing, and suggest that adult songbirds maintain song performance by preventing song changes through the physical act of daily singing throughout their lives. Such daily singing likely functions as vocal training that maintains the song production system in optimal condition for song performance in reproductive contexts, similar to how human singers and athletes practice daily to maintain their performance.
Affiliation(s)
- Daisuke Mizuguchi: Sensory and Motor Systems Research Group, Korea Brain Research Institute, Daegu 41062, Republic of Korea
- Miguel Sánchez-Valpuesta: Sensory and Motor Systems Research Group, Korea Brain Research Institute, Daegu 41062, Republic of Korea
- Yunbok Kim: Sensory and Motor Systems Research Group, Korea Brain Research Institute, Daegu 41062, Republic of Korea
- Ednei B Dos Santos: Sensory and Motor Systems Research Group, Korea Brain Research Institute, Daegu 41062, Republic of Korea
- HiJee Kang: Sensory and Motor Systems Research Group, Korea Brain Research Institute, Daegu 41062, Republic of Korea; Department of Biomedical Engineering, School of Medicine, Johns Hopkins University, Baltimore, MD 21205, USA
- Chihiro Mori: Department of Life Sciences, Graduate School of Arts and Sciences, University of Tokyo, Tokyo 153-0041, Japan; Faculty of Pharmaceutical Sciences, Department of Life and Health Sciences, Teikyo University, Tokyo 173-8605, Japan
- Kazuhiro Wada: Department of Biological Sciences, Faculty of Science, Hokkaido University, Sapporo 060-0810, Japan
- Satoshi Kojima: Sensory and Motor Systems Research Group, Korea Brain Research Institute, Daegu 41062, Republic of Korea
21
Ishida K, Ishida T, Nittono H. Decoding predicted musical notes from omitted stimulus potentials. Sci Rep 2024; 14:11164. [PMID: 38750185] [PMCID: PMC11096333] [DOI: 10.1038/s41598-024-61989-1]
Abstract
Electrophysiological studies have investigated predictive processing in music by examining event-related potentials (ERPs) elicited by the violation of musical expectations. While several studies have reported that the predictability of stimuli can modulate the amplitude of ERPs, it is unclear how specific the representation of the expected note is. The present study addressed this issue by recording omitted stimulus potentials (OSPs) to avoid contamination of top-down predictive processing by bottom-up sensory processing. Decoding of the omitted content was attempted using a support vector machine, a type of machine-learning classifier. ERP responses to the omission of four target notes (E, F, A, and C) at the same position in familiar and unfamiliar melodies were recorded from 25 participants. The results showed that the omission N1 was larger in the familiar melody condition than in the unfamiliar melody condition. The decoding accuracy of the four omitted notes was also significantly higher in the familiar melody condition than in the unfamiliar melody condition. These results suggest that OSPs contain discriminable predictive information, and that the higher the predictability, the more specific the representation of the expected note that is generated.
Affiliation(s)
- Kai Ishida: Graduate School of Human Sciences, Osaka University, 1-2 Yamadaoka, Suita, Osaka 565-0871, Japan; Japan Society for the Promotion of Science, Tokyo, Japan
- Tomomi Ishida: Graduate School of Human Sciences, Osaka University, 1-2 Yamadaoka, Suita, Osaka 565-0871, Japan
- Hiroshi Nittono: Graduate School of Human Sciences, Osaka University, 1-2 Yamadaoka, Suita, Osaka 565-0871, Japan
22
Bechtold TA, Curry B, Witek M. The perceived catchiness of music affects the experience of groove. PLoS One 2024; 19:e0303309. [PMID: 38748741] [PMCID: PMC11095763] [DOI: 10.1371/journal.pone.0303309]
Abstract
Catchiness and groove are common phenomena when listening to popular music. Catchiness may be a potential factor in experiencing groove, but quantitative evidence for such a relationship is missing. To examine whether and how catchiness influences a key component of groove, the pleasurable urge to move to music (PLUMM), we conducted a listening experiment with 450 participants and 240 short popular music clips of drum patterns, bass lines, or keys/guitar parts. We found four main results: (1) Catchiness as measured in a recognition task was only weakly associated with participants' perceived catchiness of the music; we showed that perceived catchiness is multidimensional, subjective, and strongly associated with pleasure. (2) We found a sizeable positive relationship between PLUMM and perceived catchiness. (3) However, the relationship is complex, as further analysis showed that pleasure suppresses the effect of perceived catchiness on the urge to move. (4) We compared common factors that promote perceived catchiness and PLUMM and found that listener-related variables contributed similarly, while the effects of musical content diverged. Overall, our data suggest that music perceived as catchy is likely to foster groove experiences.
Affiliation(s)
- Toni Amadeus Bechtold: Department of Music, University of Birmingham, Birmingham, United Kingdom; Lucerne School of Music, Lucerne University of Applied Sciences and Arts, Lucerne, Switzerland
- Ben Curry: Department of Music, University of Birmingham, Birmingham, United Kingdom
- Maria Witek: Department of Music, University of Birmingham, Birmingham, United Kingdom
23
Yasoda-Mohan A, Faubert J, Ost J, Kropotov JD, Vanneste S. Investigating sensitivity to multi-domain prediction errors in chronic auditory phantom perception. Sci Rep 2024; 14:11036. [PMID: 38744906] [PMCID: PMC11094085] [DOI: 10.1038/s41598-024-61045-y]
Abstract
The perception of a continuous phantom in a sensory domain in the absence of an external stimulus is explained as a maladaptive compensation of aberrant predictive coding, a proposed unified theory of brain functioning. If this were true, these changes would occur not only in the domain of the phantom percept but in other sensory domains as well. We confirm this hypothesis by using tinnitus (a continuous phantom sound) as a model and probing the predictive coding mechanism with the established local-global oddball paradigm in both the auditory and visual domains. We observe that tinnitus patients are sensitive to changes in predictive coding not only in the auditory but also in the visual domain. We report changes in well-established components of the event-related EEG, such as the mismatch negativity. Furthermore, deviations in stimulus characteristics were correlated with subjective tinnitus distress. These results provide empirical confirmation that aberrant perceptions are a symptom of a higher-order systemic disorder transcending the domain of the percept.
Affiliation(s)
- Anusha Yasoda-Mohan: Lab for Clinical and Integrative Neuroscience, School of Psychology, Trinity College Institute for Neuroscience, Trinity College Dublin, College Green, Dublin 2, Ireland; Global Brain Health Institute, Trinity College Dublin, Dublin 2, Ireland
- Jocelyn Faubert: Faubert Lab, School of Optometry, University of Montreal, Montreal, Canada
- Jan Ost: Brain Research Center for Advanced International Innovative and Interdisciplinary Neuromodulation, Ghent, Belgium
- Juri D Kropotov: N.P. Bechtereva Institute of the Human Brain of Russian Academy of Sciences, St. Petersburg, Russia
- Sven Vanneste: Lab for Clinical and Integrative Neuroscience, School of Psychology, Trinity College Institute for Neuroscience, Trinity College Dublin, College Green, Dublin 2, Ireland; Global Brain Health Institute, Trinity College Dublin, Dublin 2, Ireland; Brain Research Center for Advanced International Innovative and Interdisciplinary Neuromodulation, Ghent, Belgium
24
Ludwig S, Bakas S, Adamos DA, Laskaris N, Panagakis Y, Zafeiriou S. EEGminer: discovering interpretable features of brain activity with learnable filters. J Neural Eng 2024; 21:036010. [PMID: 38684154] [DOI: 10.1088/1741-2552/ad44d7]
Abstract
Objective. The patterns of brain activity associated with different brain processes can be used to identify different brain states and make behavioural predictions. However, the relevant features are not readily apparent and accessible. Our aim is to design a system for learning informative latent representations from multichannel recordings of ongoing EEG activity. Approach. We propose a novel differentiable decoding pipeline consisting of learnable filters and a pre-determined feature extraction module. Specifically, we introduce filters parameterized by generalized Gaussian functions that offer a smooth derivative for stable end-to-end model training and allow for learning interpretable features. For the feature module, we use signal magnitude and functional connectivity estimates. Main results. We demonstrate the utility of our model on a new EEG dataset of unprecedented size (i.e. 721 subjects), where we identify consistent trends of music perception and related individual differences. Furthermore, we train and apply our model in two additional datasets, specifically for emotion recognition on SEED and workload classification on simultaneous task EEG workload. The discovered features align well with previous neuroscience studies and offer new insights, such as marked differences in the functional connectivity profile between left and right temporal areas during music listening. This agrees with the specialisation of the temporal lobes regarding music perception proposed in the literature. Significance. The proposed method offers strong interpretability of learned features while reaching similar levels of accuracy achieved by black-box deep learning models. This improved trustworthiness may promote the use of deep learning models in real-world applications. The model code is available at https://github.com/SMLudwig/EEGminer/.
Affiliation(s)
- Siegfried Ludwig
- Department of Computing, Imperial College London, London SW7 2RH, United Kingdom
- Cogitat Ltd, London, United Kingdom
| | - Stylianos Bakas
- Department of Computing, Imperial College London, London SW7 2RH, United Kingdom
- School of Informatics, Aristotle University of Thessaloniki, Thessaloniki 54124, Greece
- Cogitat Ltd, London, United Kingdom
| | - Dimitrios A Adamos
- Department of Computing, Imperial College London, London SW7 2RH, United Kingdom
- Cogitat Ltd, London, United Kingdom
| | - Nikolaos Laskaris
- School of Informatics, Aristotle University of Thessaloniki, Thessaloniki 54124, Greece
- Cogitat Ltd, London, United Kingdom
| | - Yannis Panagakis
- Department of Informatics and Telecommunications, National and Kapodistrian University of Athens, Athens 15784, Greece
- Cogitat Ltd, London, United Kingdom
| | - Stefanos Zafeiriou
- Department of Computing, Imperial College London, London SW7 2RH, United Kingdom
- Cogitat Ltd, London, United Kingdom
25
Gelens F, Äijälä J, Roberts L, Komatsu M, Uran C, Jensen MA, Miller KJ, Ince RAA, Garagnani M, Vinck M, Canales-Johnson A. Distributed representations of prediction error signals across the cortical hierarchy are synergistic. Nat Commun 2024; 15:3941. [PMID: 38729937 PMCID: PMC11087548 DOI: 10.1038/s41467-024-48329-7] [Received: 07/12/2023] [Accepted: 04/26/2024] [Indexed: 05/12/2024] Open
Abstract
A relevant question concerning inter-areal communication in the cortex is whether such interactions are synergistic. Synergy refers to the complementary effect of multiple brain signals conveying more information than the sum of each isolated signal. Redundancy, on the other hand, refers to the common information shared between brain signals. Here, we dissociated cortical interactions encoding complementary information (synergy) from those sharing common information (redundancy) during prediction error (PE) processing. We analyzed auditory and frontal electrocorticography (ECoG) signals in five awake common marmosets performing two distinct auditory oddball tasks and investigated to what extent event-related potentials (ERPs) and broadband (BB) dynamics encoded synergistic and redundant information about PE processing. The information conveyed by ERPs and BB signals was synergistic even at lower stages of the hierarchy in the auditory cortex and between auditory and frontal regions. Using a brain-constrained neural network, we simulated the synergy and redundancy observed in the experimental results and demonstrated that the emergence of synergy between auditory and frontal regions requires the presence of strong, long-distance feedback and feedforward connections. These results indicate that distributed representations of PE signals across the cortical hierarchy can be highly synergistic.
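The synergy/redundancy distinction the abstract draws can be illustrated with a toy information-theoretic calculation. The sketch below uses interaction information, I(X1,X2;Y) − I(X1;Y) − I(X2;Y), as a coarse net synergy-minus-redundancy index on a XOR system; note this is only a didactic proxy, since the study itself uses a partial information decomposition that separates synergy and redundancy properly:

```python
from collections import Counter
from math import log2

def mutual_info(pairs):
    """I(A;B) in bits from a list of equiprobable (a, b) samples."""
    n = len(pairs)
    pab = Counter(pairs)
    pa = Counter(a for a, _ in pairs)
    pb = Counter(b for _, b in pairs)
    return sum((c / n) * log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
               for (a, b), c in pab.items())

# XOR: neither signal alone predicts Y, but together they determine it
# fully, so the net index is positive (pure synergy, no redundancy).
samples = [(x1, x2, x1 ^ x2) for x1 in (0, 1) for x2 in (0, 1)]
joint = mutual_info([((x1, x2), y) for x1, x2, y in samples])  # 1 bit
solo1 = mutual_info([(x1, y) for x1, _, y in samples])         # 0 bits
solo2 = mutual_info([(x2, y) for _, x2, y in samples])         # 0 bits
net_synergy = joint - solo1 - solo2                            # +1 bit
```

If the two signals instead carried identical copies of Y, the same index would come out negative, indicating net redundancy.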
Affiliation(s)
- Frank Gelens
  - Department of Psychology, University of Amsterdam, Nieuwe Achtergracht 129-B, 1018 WT, Amsterdam, The Netherlands
  - Department of Psychology, University of Cambridge, CB2 3EB, Cambridge, UK
- Juho Äijälä
  - Department of Psychology, University of Cambridge, CB2 3EB, Cambridge, UK
- Louis Roberts
  - Department of Psychology, University of Cambridge, CB2 3EB, Cambridge, UK
  - Department of Computing, Goldsmiths, University of London, SE14 6NW, London, UK
- Misako Komatsu
  - Laboratory for Haptic Perception and Cognitive Physiology, RIKEN Brain Science Institute, Saitama, 351-0198, Japan
- Cem Uran
  - Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528, Frankfurt am Main, Germany
  - Donders Centre for Neuroscience, Department of Neuroinformatics, Radboud University Nijmegen, 6525, Nijmegen, The Netherlands
- Michael A Jensen
  - Department of Neurosurgery, Mayo Clinic, Rochester, MN, 55905, USA
- Kai J Miller
  - Department of Neurosurgery, Mayo Clinic, Rochester, MN, 55905, USA
- Robin A A Ince
  - School of Psychology and Neuroscience, University of Glasgow, Glasgow, G12 8QB, Scotland, UK
- Max Garagnani
  - Department of Computing, Goldsmiths, University of London, SE14 6NW, London, UK
  - Brain Language Lab, Freie Universität Berlin, 14195, Berlin, Germany
- Martin Vinck
  - Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528, Frankfurt am Main, Germany
  - Donders Centre for Neuroscience, Department of Neuroinformatics, Radboud University Nijmegen, 6525, Nijmegen, The Netherlands
- Andres Canales-Johnson
  - Department of Psychology, University of Cambridge, CB2 3EB, Cambridge, UK
  - Neuropsychology and Cognitive Neurosciences Research Center, Faculty of Health Sciences, Universidad Católica del Maule, 3460000, Talca, Chile
26
Croom K, Rumschlag JA, Erickson MA, Binder D, Razak KA. Sex differences during development in cortical temporal processing and event related potentials in wild-type and fragile X syndrome model mice. J Neurodev Disord 2024; 16:24. [PMID: 38720271 PMCID: PMC11077726 DOI: 10.1186/s11689-024-09539-8] [Received: 11/09/2023] [Accepted: 04/17/2024] [Indexed: 05/12/2024] Open
Abstract
BACKGROUND Autism spectrum disorder (ASD) is currently diagnosed in approximately 1 in 44 children in the United States, based on a wide array of symptoms, including sensory dysfunction and abnormal language development. Boys are diagnosed ~3.8 times more frequently than girls. Auditory temporal processing is crucial for speech recognition and language development. Abnormal development of temporal processing may account for ASD language impairments. Sex differences in the development of temporal processing may underlie the differences in language outcomes in male and female children with ASD. Understanding the mechanisms of potential sex differences in temporal processing requires a preclinical model. However, no studies have addressed sex differences in temporal processing across development in any animal model of ASD. METHODS To fill this major gap, we compared the development of auditory temporal processing in male and female wildtype (WT) and Fmr1 knock-out (KO) mice, a model of Fragile X Syndrome (FXS), a leading genetic cause of ASD-associated behaviors. Using epidural screw electrodes, we recorded auditory event-related potentials (ERPs) and auditory temporal processing with a gap-in-noise auditory steady state response (ASSR) paradigm at young (postnatal (p)21 and p30) and adult (p60) ages from both auditory and frontal cortices of awake, freely moving mice. RESULTS The results show that ERP amplitudes were enhanced in both sexes of Fmr1 KO mice across development compared to WT counterparts, with greater enhancement in adult female than adult male KO mice. Gap-ASSR deficits were seen in the frontal, but not auditory, cortex in early development (p21) in female KO mice. Unlike male KO mice, female KO mice show WT-like temporal processing at p30. There were no temporal processing deficits in adult mice of either sex.
CONCLUSIONS These results show a sex difference in the developmental trajectories of temporal processing and hypersensitive responses in Fmr1 KO mice. Male KO mice show slower maturation of temporal processing than females. Female KO mice show stronger hypersensitive responses than males later in development. The differences in maturation rates of temporal processing and hypersensitive responses during various critical periods of development may lead to sex differences in language function, arousal and anxiety in FXS.
Affiliation(s)
- Katilynne Croom
  - Graduate Neuroscience Program, University of California, Riverside, USA
- Jeffrey A Rumschlag
  - Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston, USA
- Michael A Erickson
  - Department of Psychology, University of California, 900 University Avenue, Riverside, USA
- Devin Binder
  - Graduate Neuroscience Program, University of California, Riverside, USA
  - Biomedical Sciences, School of Medicine, University of California, Riverside, USA
- Khaleel A Razak
  - Graduate Neuroscience Program, University of California, Riverside, USA
  - Department of Psychology, University of California, 900 University Avenue, Riverside, USA
27
La Chioma A, Schneider DM. Auditory neuroscience: Sounds make the face move. Curr Biol 2024; 34:R346-R348. [PMID: 38714161 DOI: 10.1016/j.cub.2024.03.041] [Indexed: 05/09/2024]
Abstract
Animals including humans often react to sounds by involuntarily moving their face and body. A new study shows that facial movements provide a simple and reliable readout of a mouse's hearing ability that is more sensitive than traditional measurements.
Affiliation(s)
- David M Schneider
  - Center for Neural Science, New York University, New York, NY 10003, USA
28
Duquette-Laplante F, Jutras B, Néron N, Fortin S, Koravand A. Exploring the Differences Between an Immature and a Mature Human Auditory System Through Auditory Late Responses in Quiet and in Noise. Neuroscience 2024; 545:171-184. [PMID: 38513763 DOI: 10.1016/j.neuroscience.2024.03.018] [Received: 09/07/2023] [Revised: 03/12/2024] [Accepted: 03/17/2024] [Indexed: 03/23/2024]
Abstract
Children are disadvantaged compared to adults when they perceive speech in a noisy environment. Noise reduces their ability to extract and understand auditory information. Auditory-Evoked Late Responses (ALRs) offer insight into how the auditory system can process information in noise. This study investigated how noise, signal-to-noise ratio (SNR), and stimulus type affect ALRs in children and adults. Fifteen participants from each group with normal hearing were studied under various conditions. The findings revealed that both groups experienced delayed latencies and reduced amplitudes in noise but that children had fewer identifiable waves than adults. Babble noise had a significant impact on both groups, limiting the analysis to one condition: the /da/ stimulus at +10 dB SNR for the P1 wave. P1 amplitude was greater in quiet for children compared to adults, with no stimulus effect. Children generally exhibited longer latencies. N1 latency was longer in noise, with larger amplitudes in white noise compared to quiet for both groups. P2 latency was shorter with the verbal stimulus in quiet, with larger amplitudes in children than adults. N2 latency was shorter in quiet, with no amplitude differences between the groups. Overall, noise prolonged latencies and reduced amplitudes. Different noise types had varying impacts, with the eight-talker babble noise causing more disruption. Children's auditory system responded similarly to adults but may be more susceptible to noise. This research emphasizes the need to understand noise's impact on children's auditory development, given their exposure to noisy environments, requiring further exploration of noise parameters in children.
Affiliation(s)
- Fauve Duquette-Laplante
  - Audiology and Speech Pathology Program, School of Rehabilitation Sciences, University of Ottawa, Roger Guindon Hall, 451 Smyth Road, Room 3071, Ottawa, Ontario K1H 8M5, Canada
  - School of Speech-Language Pathology and Audiology, Université de Montréal, c.p. 6128, succ. Centre-ville, Montréal H3C 3J7, Canada
  - Research Center, CHU Sainte-Justine, 3175, Côte Sainte-Catherine, Montréal, Québec H3T 1C5, Canada
- Benoît Jutras
  - School of Speech-Language Pathology and Audiology, Université de Montréal, c.p. 6128, succ. Centre-ville, Montréal H3C 3J7, Canada
  - Research Center, CHU Sainte-Justine, 3175, Côte Sainte-Catherine, Montréal, Québec H3T 1C5, Canada
- Noémie Néron
  - School of Speech-Language Pathology and Audiology, Université de Montréal, c.p. 6128, succ. Centre-ville, Montréal H3C 3J7, Canada
  - Research Center, CHU Sainte-Justine, 3175, Côte Sainte-Catherine, Montréal, Québec H3T 1C5, Canada
- Sandra Fortin
  - School of Speech-Language Pathology and Audiology, Université de Montréal, c.p. 6128, succ. Centre-ville, Montréal H3C 3J7, Canada
- Amineh Koravand
  - Audiology and Speech Pathology Program, School of Rehabilitation Sciences, University of Ottawa, Roger Guindon Hall, 451 Smyth Road, Room 3071, Ottawa, Ontario K1H 8M5, Canada
29
Kong Y, Zhao C, Li D, Li B, Hu Y, Liu H, Woolgar A, Guo J, Song Y. Auditory change detection and visual selective attention: association between MMN and N2pc. Cereb Cortex 2024; 34:bhae175. [PMID: 38700440 DOI: 10.1093/cercor/bhae175] [Received: 09/01/2023] [Revised: 04/02/2024] [Accepted: 04/16/2024] [Indexed: 05/05/2024] Open
Abstract
While the auditory and visual systems each provide distinct information to our brain, they also work together to process and prioritize input to address ever-changing conditions. Previous studies highlighted the trade-off between auditory change detection and visual selective attention; however, the relationship between them is still unclear. Here, we recorded electroencephalography signals from 106 healthy adults in three experiments. Our findings revealed a positive correlation at the population level between the amplitudes of event-related potential indices associated with auditory change detection (mismatch negativity) and visual selective attention (posterior contralateral N2) when elicited in separate tasks. This correlation persisted even when participants performed a visual task while disregarding simultaneous auditory stimuli. Interestingly, as visual attention demand increased, participants whose posterior contralateral N2 amplitude increased the most exhibited the largest reduction in mismatch negativity, suggesting a within-subject trade-off between the two processes. Taken together, our results suggest an intimate relationship and potential shared mechanism between auditory change detection and visual selective attention. We liken this to a total capacity limit that varies between individuals, which could drive correlated individual differences in auditory change detection and visual selective attention, and also within-subject competition between the two, with task-based modulation of visual attention causing within-participant decrease in auditory change detection sensitivity.
Affiliation(s)
- Yuanjun Kong
  - State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, 19 Xinjiekouwai Street, Beijing 100875, China
  - MRC Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge CB2 7EF, UK
- Chenguang Zhao
  - State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, 19 Xinjiekouwai Street, Beijing 100875, China
- Dongwei Li
  - State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, 19 Xinjiekouwai Street, Beijing 100875, China
  - Department of Psychology, Faculty of Arts and Sciences, Beijing Normal University at Zhuhai, 18 Jinfeng Road, Zhuhai 519087, China
  - Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education, Faculty of Psychology, Beijing Normal University, 19 Xinjiekouwai Street, Beijing 100875, China
- Bingkun Li
  - State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, 19 Xinjiekouwai Street, Beijing 100875, China
- Yiqing Hu
  - State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, 19 Xinjiekouwai Street, Beijing 100875, China
- Hongyu Liu
  - State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, 19 Xinjiekouwai Street, Beijing 100875, China
- Alexandra Woolgar
  - MRC Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge CB2 7EF, UK
- Jialiang Guo
  - State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, 19 Xinjiekouwai Street, Beijing 100875, China
- Yan Song
  - State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, 19 Xinjiekouwai Street, Beijing 100875, China
30
Haruki Y, Ogawa K. Disrupted interoceptive awareness by auditory distractor: Difficulty inferring the internal bodily states? Neurosci Res 2024; 202:30-38. [PMID: 37935335 DOI: 10.1016/j.neures.2023.11.002] [Received: 01/31/2023] [Revised: 10/29/2023] [Accepted: 11/03/2023] [Indexed: 11/09/2023]
Abstract
Recent studies have associated interoceptive awareness, the perception of internal bodily sensations, with a predictive mechanism of perception across all sensory modalities. According to this framework, volitional attention plays a pivotal role in interoceptive awareness by prioritizing interoceptive sensations over exteroceptive ones. Consequently, it is hypothesized that the presence of irrelevant stimuli would disrupt this attentional modulation and interoceptive awareness, which remains untested. In this study, we investigated whether interoceptive awareness is diminished by unrelated auditory distractors, to validate the proposed perceptual framework. A total of 30 healthy human volunteers performed the heartbeat counting task both with and without auditory distractors. Additionally, we measured participants' psychophysiological traits related to interoception, including the high-frequency component of heart rate variability (HF-HRV) and trait interoceptive sensibility. The results showed that interoceptive accuracy, confidence, and heartbeat intensity decreased in the presence of the distractor sound. Moreover, individuals with higher HF-HRV and a greater tendency to worry about bodily states experienced a more pronounced distractor effect on interoceptive awareness. These results provide support for the perceptual mechanism of interoceptive awareness in terms of the predictive process, highlighting the impact of relative precision across interoceptive and exteroceptive signals on perceptual experiences.
Affiliation(s)
- Yusuke Haruki
  - Department of Psychology, Graduate School of Humanities and Human Sciences, Hokkaido University, Sapporo 060-0810, Japan
  - Japan Society for the Promotion of Science (JSPS), Tokyo 102-8472, Japan
- Kenji Ogawa
  - Department of Psychology, Graduate School of Humanities and Human Sciences, Hokkaido University, Sapporo 060-0810, Japan
31
Hashim S, Küssner MB, Weinreich A, Omigie D. The neuro-oscillatory profiles of static and dynamic music-induced visual imagery. Int J Psychophysiol 2024; 199:112309. [PMID: 38242363 DOI: 10.1016/j.ijpsycho.2024.112309] [Received: 07/14/2023] [Revised: 12/22/2023] [Accepted: 01/12/2024] [Indexed: 01/21/2024]
Abstract
Visual imagery, i.e., seeing in the absence of the corresponding retinal input, has been linked to visual and motor processing areas of the brain. Music listening provides an ideal vehicle for exploring the neural correlates of visual imagery because it has been shown to reliably induce a broad variety of content, ranging from abstract shapes to dynamic scenes. Forty-two participants listened with closed eyes to twenty-four excerpts of music while a 15-channel EEG was recorded and, after each excerpt, rated the extent to which they had experienced static and dynamic visual imagery. Our results show both static and dynamic imagery to be associated with posterior alpha suppression (especially in lower alpha) early in the onset of music listening, whereas static imagery was associated with an additional alpha enhancement later in the listening experience. With regard to the beta band, our results show beta enhancement in response to static imagery, but beta suppression followed by enhancement in response to dynamic imagery. We also observed a positive association, early in the listening experience, between gamma power and dynamic imagery ratings that was not present for static imagery ratings. Finally, we offer evidence that musical training may selectively drive the effects found with respect to static and dynamic imagery in the alpha, beta, and gamma band oscillations. Taken together, our results show the promise of using music listening as an effective stimulus for examining the neural correlates of visual imagery and its contents. Our study also highlights the relevance of future work seeking to study the temporal dynamics of music-induced visual imagery.
Affiliation(s)
- Sarah Hashim
  - Department of Psychology, Goldsmiths, University of London, United Kingdom
- Mats B Küssner
  - Department of Psychology, Goldsmiths, University of London, United Kingdom
  - Department of Musicology and Media Studies, Humboldt-Universität zu Berlin, Germany
- André Weinreich
  - Department of Psychology, BSP Business & Law School Berlin, Germany
- Diana Omigie
  - Department of Psychology, Goldsmiths, University of London, United Kingdom
32
Regener P, Heffer N, Love SA, Petrini K, Pollick F. Differences in audiovisual temporal processing in autistic adults are specific to simultaneity judgments. Autism Res 2024; 17:1041-1052. [PMID: 38661256 DOI: 10.1002/aur.3134] [Received: 08/24/2023] [Accepted: 04/02/2024] [Indexed: 04/26/2024]
Abstract
Research has shown that children on the autism spectrum and adults with high levels of autistic traits are less sensitive to audiovisual asynchrony compared to their neurotypical peers. However, this evidence has been limited to simultaneity judgments (SJ) which require participants to consider the timing of two cues together. Given evidence of partly divergent perceptual and neural mechanisms involved in making temporal order judgments (TOJ) and SJ, and given that SJ require a more global type of processing which may be impaired in autistic individuals, here we ask whether the observed differences in audiovisual temporal processing are task and stimulus specific. We examined the ability to detect audiovisual asynchrony in a group of 26 autistic adult males and a group of age and IQ-matched neurotypical males. Participants were presented with beep-flash, point-light drumming, and face-voice displays with varying degrees of asynchrony and asked to make SJ and TOJ. The results indicated that autistic participants were less able to detect audiovisual asynchrony compared to the control group, but this effect was specific to SJ and more complex social stimuli (e.g., face-voice) with stronger semantic correspondence between the cues, requiring a more global type of processing. This indicates that audiovisual temporal processing is not generally different in autistic individuals and that a similar level of performance could be achieved by using a more local type of processing, thus informing multisensory integration theory as well as multisensory training aimed to aid perceptual abilities in this population.
Affiliation(s)
- Paula Regener
  - Norwich Medical School, University of East Anglia, Norwich, UK
  - School of Psychology and Neuroscience, University of Glasgow, Glasgow, UK
- Naomi Heffer
  - School of Sciences, Bath Spa University, Bath, UK
  - Department of Psychology, University of Bath, Bath, UK
- Scott A Love
  - INRAE, CNRS, Université de Tours, PRC, Nouzilly, France
- Karin Petrini
  - Department of Psychology, University of Bath, Bath, UK
  - The Centre for the Analysis of Motion, Entertainment Research and Applications (CAMERA), Bath, UK
- Frank Pollick
  - School of Psychology and Neuroscience, University of Glasgow, Glasgow, UK
33
Weng Y, Rong Y, Peng G. The development of audiovisual speech perception in Mandarin-speaking children: Evidence from the McGurk paradigm. Child Dev 2024; 95:750-765. [PMID: 37843038 DOI: 10.1111/cdev.14022] [Received: 03/19/2023] [Revised: 08/30/2023] [Accepted: 09/21/2023] [Indexed: 10/17/2023]
Abstract
The developmental trajectory of audiovisual speech perception in Mandarin-speaking children remains understudied. This cross-sectional study in Mandarin-speaking 3- to 4-year-old, 5- to 6-year-old, 7- to 8-year-old children, and adults from Xiamen, China (n = 87, 44 males) investigated this issue using the McGurk paradigm with three levels of auditory noise. For the identification of congruent stimuli, 3- to 4-year-olds underperformed older groups whose performances were comparable. For the perception of the incongruent stimuli, a developmental shift was observed as 3- to 4-year-olds made significantly more audio-dominant but fewer audiovisual-integrated responses to incongruent stimuli than older groups. With increasing auditory noise, the difference between children and adults widened in identifying congruent stimuli but narrowed in perceiving incongruent ones. The findings regarding noise effects agree with the statistically optimal hypothesis.
Affiliation(s)
- Yi Weng
  - Department of Chinese and Bilingual Studies, Research Centre for Language, Cognition, and Neuroscience, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Yicheng Rong
  - Department of Chinese and Bilingual Studies, Research Centre for Language, Cognition, and Neuroscience, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Gang Peng
  - Department of Chinese and Bilingual Studies, Research Centre for Language, Cognition, and Neuroscience, The Hong Kong Polytechnic University, Hong Kong SAR, China
34
Li L, Ishida K, Mizuhara K, Barry RJ, Nittono H. Effects of the cardiac cycle on auditory processing: A preregistered study on mismatch negativity. Psychophysiology 2024; 61:e14506. [PMID: 38149745 DOI: 10.1111/psyp.14506] [Received: 12/08/2022] [Revised: 11/23/2023] [Accepted: 12/01/2023] [Indexed: 12/28/2023]
Abstract
The systolic and diastolic phases of the cardiac cycle are known to affect perception and cognition differently. Higher order processing tends to be facilitated at systole, whereas sensory processing of external stimuli tends to be impaired at systole compared to diastole. The current study aims to examine whether the cardiac cycle affects auditory deviance detection, as reflected in the mismatch negativity (MMN) of the event-related brain potential (ERP). We recorded the intensity deviance response to deviant tones (70 dB) presented among standard tones (60 or 80 dB, depending on blocks) and calculated the MMN by subtracting standard ERP waveforms from deviant ERP waveforms. We also assessed intensity-dependent N1 and P2 amplitude changes by subtracting ERPs elicited by soft standard tones (60 dB) from ERPs elicited by loud standard tones (80 dB). These subtraction methods were used to eliminate phase-locked cardiac-related electric artifacts that overlap auditory ERPs. The endogenous MMN was expected to be larger at systole, reflecting the facilitation of memory-based auditory deviance detection, whereas the exogenous N1 and P2 would be smaller at systole, reflecting impaired exteroceptive sensory processing. However, after the elimination of cardiac-related artifacts, there were no significant differences between systole and diastole in any ERP components. The intensity-dependent N1 and P2 amplitude changes were not obvious in either cardiac phase, probably because of the short interstimulus intervals. The lack of a cardiac phase effect on MMN amplitude suggests that preattentive auditory processing may not be affected by bodily signals from the heart.
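The subtraction logic described in the abstract (deviant minus standard averages to isolate the MMN, with artifacts that are phase-locked to both stimulus types cancelling in the difference) can be sketched on synthetic data; all epoch arrays, amplitudes, and sampling parameters below are invented for illustration and are not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n_samples = 500, 300          # 600 ms epochs at 500 Hz (illustrative)
t = np.arange(n_samples) / fs

def simulate_epochs(n_trials, erp, artifact):
    """Trials = ERP + phase-locked artifact + independent noise."""
    noise = rng.normal(0.0, 1.0, (n_trials, n_samples))
    return erp + artifact + noise

# A cardiac-related potential identical for deviants and standards...
cardiac = 0.8 * np.sin(2 * np.pi * 1.2 * t)
# ...plus an extra negativity around 200 ms present only for deviants (a mock MMN).
mmn = -2.0 * np.exp(-((t - 0.2) ** 2) / (2 * 0.03 ** 2))

standard = simulate_epochs(400, erp=np.zeros(n_samples), artifact=cardiac)
deviant = simulate_epochs(80, erp=mmn, artifact=cardiac)

# Subtracting the two averages removes everything common to both stimulus
# types, including the cardiac artifact, leaving the mismatch response.
difference_wave = deviant.mean(axis=0) - standard.mean(axis=0)
```

The same cancellation argument motivates the loud-minus-soft standard subtraction used for the intensity-dependent N1/P2 changes.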
Affiliation(s)
- Lingjun Li
  - Graduate School of Human Sciences, Osaka University, Osaka, Japan
- Kai Ishida
  - Graduate School of Human Sciences, Osaka University, Osaka, Japan
  - Japan Society for the Promotion of Science, Tokyo, Japan
- Keita Mizuhara
  - Graduate School of Human Sciences, Osaka University, Osaka, Japan
  - Japan Society for the Promotion of Science, Tokyo, Japan
- Robert J Barry
  - School of Psychology, Brain & Behaviour Research Institute, University of Wollongong, Wollongong, New South Wales, Australia
- Hiroshi Nittono
  - Graduate School of Human Sciences, Osaka University, Osaka, Japan
35
Böing S, Van der Stigchel S, Van der Stoep N. The impact of acute asymmetric hearing loss on multisensory integration. Eur J Neurosci 2024; 59:2373-2390. [PMID: 38303554 DOI: 10.1111/ejn.16263] [Received: 08/01/2023] [Revised: 12/15/2023] [Accepted: 01/09/2024] [Indexed: 02/03/2024]
Abstract
Humans have the remarkable ability to integrate information from different senses, which greatly facilitates the detection, localization and identification of events in the environment. About 466 million people worldwide suffer from hearing loss. Yet, the impact of hearing loss on how the senses work together is rarely investigated. Here, we investigate how a common sensory impairment, asymmetric conductive hearing loss (AHL), alters the way our senses interact by examining human orienting behaviour with normal hearing (NH) and acute AHL. This type of hearing loss disrupts auditory localization. We hypothesized that this creates a conflict between auditory and visual spatial estimates and alters how auditory and visual inputs are integrated to facilitate multisensory spatial perception. We analysed the spatial and temporal properties of saccades to auditory, visual and audiovisual stimuli before and after plugging the right ear of participants. Both spatial and temporal aspects of multisensory integration were affected by AHL. Compared with NH, AHL caused participants to make slow, inaccurate, and imprecise saccades towards auditory targets. Surprisingly, increased weight on visual input resulted in accurate audiovisual localization with AHL. This came at a cost: saccade latencies for audiovisual targets increased significantly. The larger the auditory localization errors, the less participants were able to benefit from audiovisual integration in terms of saccade latency. Our results indicate that observers immediately change sensory weights to effectively deal with acute AHL and preserve audiovisual accuracy in a way that cannot be fully explained by statistical models of optimal cue integration.
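The statistical model of optimal cue integration mentioned at the end of the abstract predicts reliability-weighted averaging of the unisensory location estimates. A minimal sketch of that standard maximum-likelihood prediction follows; the numeric locations and variances are invented for illustration:

```python
def integrate(x_a, var_a, x_v, var_v):
    """Maximum-likelihood (reliability-weighted) audiovisual estimate.

    Each cue is weighted by its inverse variance; the combined variance
    is never larger than that of either cue alone.
    """
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    w_v = 1.0 - w_a
    x_av = w_a * x_a + w_v * x_v
    var_av = (var_a * var_v) / (var_a + var_v)
    return x_av, var_av

# Plugging one ear degrades auditory localization (larger variance), so the
# model predicts the combined estimate is pulled toward the visual cue.
x_av, var_av = integrate(x_a=12.0, var_a=16.0, x_v=2.0, var_v=4.0)
```

With these numbers the auditory cue gets weight 0.2 and the visual cue 0.8, so the combined estimate lands at 4.0 with variance 3.2; the study's point is that the observed latency costs go beyond what this simple model predicts.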
Affiliation(s)
- Sanne Böing
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Stefan Van der Stigchel
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Nathan Van der Stoep
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands

36
Ghosh P, Talwar S, Banerjee A. Unsupervised Characterization of Prediction Error Markers in Unisensory and Multisensory Streams Reveal the Spatiotemporal Hierarchy of Cortical Information Processing. eNeuro 2024; 11:ENEURO.0251-23.2024. [PMID: 38702194 PMCID: PMC11069433 DOI: 10.1523/eneuro.0251-23.2024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2023] [Revised: 03/19/2024] [Accepted: 03/20/2024] [Indexed: 05/06/2024] Open
Abstract
Elicited upon violation of regularity in stimulus presentation, mismatch negativity (MMN) reflects the brain's ability to perform automatic comparisons between consecutive stimuli and provides an electrophysiological index of sensory error detection, whereas P300 is associated with cognitive processes such as updating of the working memory. To date, there has been extensive research on the roles of MMN and P300 individually, because of their potential to be used as clinical markers of consciousness and attention, respectively. Here, we explore, with an unsupervised and rigorous source estimation approach, the underlying cortical generators of MMN and P300, in the context of prediction error propagation along the hierarchies of brain information processing in healthy human participants. The existing methods of characterizing the two ERPs involve only approximate estimations of their amplitudes and latencies based on specific sensors of interest. Our objective is twofold: first, we introduce a novel data-driven unsupervised approach to compute latencies and amplitudes of ERP components accurately on an individual-subject basis and reconfirm earlier findings. Second, we demonstrate that in multisensory environments, MMN generators seem to reflect a significant overlap of "modality-specific" and "modality-independent" information processing, while P300 generators mark a shift toward completely "modality-independent" processing. Advancing the earlier understanding that multisensory contexts speed up early sensory processing, our EEG experiments reveal that temporal facilitation extends even to the later components of prediction error processing. Such knowledge can be of value to clinical research for characterizing the key developmental stages of lifespan aging, schizophrenia, and depression.
Affiliation(s)
- Priyanka Ghosh
- Cognitive Brain Dynamics Lab, National Brain Research Centre, Gurgaon 122052, India
- Siddharth Talwar
- Cognitive Brain Dynamics Lab, National Brain Research Centre, Gurgaon 122052, India
- Arpan Banerjee
- Cognitive Brain Dynamics Lab, National Brain Research Centre, Gurgaon 122052, India

37
Neklyudova A, Kuramagomedova R, Voinova V, Sysoeva O. Atypical brain responses to 40-Hz click trains in girls with Rett syndrome: Auditory steady-state response and sustained wave. Psychiatry Clin Neurosci 2024; 78:282-290. [PMID: 38321640 DOI: 10.1111/pcn.13638] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/27/2023] [Revised: 12/01/2023] [Accepted: 12/27/2023] [Indexed: 02/08/2024]
Abstract
AIM The current study aimed to infer the neurophysiological mechanisms of auditory processing in children with Rett syndrome (RTT), a rare neurodevelopmental disorder caused by MECP2 mutations. We examined two brain responses elicited by 40-Hz click trains: the auditory steady-state response (ASSR), which reflects fine temporal analysis of auditory input, and the sustained wave (SW), which is associated with integral processing of the auditory signal. METHODS We recorded electroencephalograms in 43 patients with RTT (aged 2.92-17.1 years) and 43 typically developing children of the same age during 40-Hz click train auditory stimulation, which lasted for 500 ms and was presented with interstimulus intervals of 500 to 800 ms. A mixed-model ANCOVA with age as a covariate was used to compare the amplitudes of the ASSR and SW between groups, taking into account the temporal dynamics and topography of the responses. RESULTS The amplitude of the SW was atypically small in children with RTT starting from early childhood, with the difference from typically developing children decreasing with age. The ASSR showed a different pattern of developmental changes: the between-group difference was negligible in early childhood but increased with age, as the ASSR increased in the typically developing group but not in those with RTT. Moreover, the ASSR was associated with expressive speech development in patients, such that children who could use words had a more pronounced ASSR. CONCLUSION The ASSR and SW show promise as noninvasive electrophysiological biomarkers of auditory processing that have clinical relevance and can shed light on the link between the genetic impairment and the RTT phenotype.
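The ASSR amplitude at the 40-Hz stimulation rate can be read out from the frequency spectrum of an averaged epoch. The following is a minimal illustration, not the authors' analysis pipeline: the sampling rate, epoch length, and synthetic signal are assumptions chosen so that the stimulation frequency falls exactly on an FFT bin.

```python
import numpy as np

def assr_amplitude(epoch, fs, target_hz=40.0):
    """Estimate the steady-state response amplitude at a target frequency.

    epoch : 1-D array holding an averaged EEG epoch
    fs    : sampling rate in Hz
    Returns the single-sided spectral amplitude at the FFT bin nearest target_hz.
    """
    n = len(epoch)
    spectrum = np.fft.rfft(epoch)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = int(np.argmin(np.abs(freqs - target_hz)))  # nearest FFT bin
    return 2.0 * np.abs(spectrum[k]) / n           # single-sided amplitude

# Synthetic check: a 500-ms epoch at 1 kHz containing a unit-amplitude
# 40-Hz component (an exact whole number of cycles, so no spectral leakage).
fs = 1000.0
t = np.arange(0, 0.5, 1.0 / fs)
epoch = np.sin(2 * np.pi * 40.0 * t)
amp = assr_amplitude(epoch, fs)  # recovers the 40-Hz component amplitude
```

With a whole number of cycles in the window, the estimate equals the component's amplitude; in practice, windowing and averaging across trials would be needed before this readout.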
Affiliation(s)
- Anastasia Neklyudova
- Laboratory of Human Higher Nervous Activity, Institute of Higher Nervous Activity and Neurophysiology, Russian Academy of Sciences, Moscow, Russia
- Rabiat Kuramagomedova
- Veltischev Research and Clinical Institute for Pediatrics of the Pirogov Russian National Research Medical University, Ministry of Health of the Russian Federation, Moscow, Russia
- Victoria Voinova
- Veltischev Research and Clinical Institute for Pediatrics of the Pirogov Russian National Research Medical University, Ministry of Health of the Russian Federation, Moscow, Russia
- Olga Sysoeva
- Laboratory of Human Higher Nervous Activity, Institute of Higher Nervous Activity and Neurophysiology, Russian Academy of Sciences, Moscow, Russia
- Faculty of Biology and Biotechnology, HSE University, Moscow, Russia

38
Ueda S, Yakushijin R, Ishiguchi A. Variance aftereffect within and between sensory modalities for visual and auditory domains. Atten Percept Psychophys 2024; 86:1375-1385. [PMID: 37100981 PMCID: PMC11093869 DOI: 10.3758/s13414-023-02705-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/26/2023] [Indexed: 04/28/2023]
Abstract
We can efficiently grasp various features of the outside world using summary statistics. Among these statistics, variance is an index of information homogeneity or reliability. Previous research has shown that visual variance information in the context of spatial integration is encoded directly as a unique feature, and that currently perceived variance can be distorted by that of the preceding stimuli. In this study, we focused on variance perception in temporal integration. We investigated whether any variance aftereffects occurred in visual size and auditory pitch. Furthermore, to examine the mechanism of cross-modal variance perception, we also investigated whether variance aftereffects occur between different modalities. We conducted four experimental conditions (combinations of adaptor and test modalities: visual-to-visual, visual-to-auditory, auditory-to-auditory, and auditory-to-visual). Participants observed a sequence of visual or auditory stimuli perturbed in size or pitch with a certain variance and performed a variance classification task before and after the variance adaptation phase. We found that, for visual size, within-modality adaptation to small or large variance resulted in a variance aftereffect, indicating that variance judgments are biased in the direction away from that of the adapting stimulus. For auditory pitch, within-modality adaptation to small variance caused a variance aftereffect. For cross-modal combinations, adaptation to small variance in visual size resulted in a variance aftereffect. However, the effect was weak, and no variance aftereffect occurred in the other conditions. These findings indicate that the variance information of sequentially presented stimuli is encoded independently in the visual and auditory domains.
Affiliation(s)
- Sachiyo Ueda
- Department of Computer Science and Engineering, Toyohashi University of Technology, 1-1 Hibarigaoka, Tempaku-cho, Toyohashi, Aichi, 441-8580, Japan.
- Akira Ishiguchi
- Faculty of Core Research, Ochanomizu University, Tokyo, Japan

39
Bao W, Alain C, Thaut M, Molnar M. Is there a bilingual advantage in auditory attention among children? A systematic review and meta-analysis of standardized auditory attention tests. PLoS One 2024; 19:e0299393. [PMID: 38691540 PMCID: PMC11062550 DOI: 10.1371/journal.pone.0299393] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2023] [Accepted: 02/09/2024] [Indexed: 05/03/2024] Open
Abstract
A wealth of research has investigated the associations between bilingualism and cognition, especially with regard to executive function. Some developmental studies reveal different cognitive profiles between monolinguals and bilinguals in visual or audio-visual attention tasks, which might stem from differences in attention allocation. Yet, whether such a distinction exists in the auditory domain alone is unknown. In this study, we compared differences in auditory attention, measured by standardized tests, between monolingual and bilingual children. A comprehensive literature search was conducted in three electronic databases: OVID Medline, OVID PsycInfo, and EBSCO CINAHL. Twenty studies using standardized tests to assess auditory attention in monolingual and bilingual participants aged less than 18 years were identified. We assessed the quality of these studies using a scoring tool for evaluating primary research. For the statistical analysis, we pooled effect sizes in a random-effects meta-analytic model, in which between-study heterogeneity was quantified using the I2 statistic. No substantial publication bias was observed based on the funnel plot. Further, meta-regression modelling suggests that the test measure (accuracy vs. response times) significantly affected the studies' effect sizes, whereas other factors (e.g., participant age, stimulus type) did not. Specifically, studies reporting accuracy observed marginally greater accuracy in bilinguals (g = 0.10), whereas those reporting response times indicated faster latency in monolinguals (g = -0.34). Overall, there was little difference between monolingual and bilingual children's performance on standardized auditory attention tests. We also found that studies tend to include a wide variety of bilingual children but report limited language-background information about the participants. This, unfortunately, limits the potential theoretical contributions of the reviewed studies. Recommendations to improve the quality of future research are discussed.
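The pooling described in this abstract, a random-effects model with between-study heterogeneity expressed as I2, follows a standard recipe. Below is a minimal sketch of DerSimonian-Laird pooling, assuming per-study effect sizes and sampling variances have already been computed; the numbers used for checking are illustrative, not data from the review.

```python
import numpy as np

def random_effects_pool(effects, variances):
    """Pool study effect sizes with a DerSimonian-Laird random-effects model.

    effects   : per-study standardized effect sizes (e.g., Hedges' g)
    variances : per-study sampling variances
    Returns (pooled_effect, I2), where I2 is between-study heterogeneity in %.
    """
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                       # fixed-effect (inverse-variance) weights
    fixed = np.sum(w * effects) / np.sum(w)   # fixed-effect pooled estimate
    k = len(effects)
    Q = np.sum(w * (effects - fixed) ** 2)    # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)        # between-study variance estimate
    w_star = 1.0 / (variances + tau2)         # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    I2 = max(0.0, (Q - (k - 1)) / Q) * 100.0 if Q > 0 else 0.0
    return pooled, I2
```

When studies agree (Q near zero), tau2 and I2 collapse to zero and the estimate reduces to the fixed-effect pooled mean; heterogeneous studies inflate tau2 and flatten the weights.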
Affiliation(s)
- Wenfu Bao
- Department of Speech-Language Pathology, University of Toronto, Toronto, Ontario, Canada
- Rehabilitation Sciences Institute, University of Toronto, Toronto, Ontario, Canada
- Claude Alain
- Rotman Research Institute, Baycrest Health Centre, Toronto, Ontario, Canada
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Institute of Medical Sciences, University of Toronto, Toronto, Ontario, Canada
- Music and Health Science Research Collaboratory, University of Toronto, Toronto, Ontario, Canada
- Michael Thaut
- Rehabilitation Sciences Institute, University of Toronto, Toronto, Ontario, Canada
- Institute of Medical Sciences, University of Toronto, Toronto, Ontario, Canada
- Music and Health Science Research Collaboratory, University of Toronto, Toronto, Ontario, Canada
- Faculty of Music, University of Toronto, Toronto, Ontario, Canada
- Monika Molnar
- Department of Speech-Language Pathology, University of Toronto, Toronto, Ontario, Canada
- Rehabilitation Sciences Institute, University of Toronto, Toronto, Ontario, Canada

40
Ince MS, Guzel I, Akgor MC, Bahcelioglu M, Arikan KB, Okasha A, Sengezer S, Bolay H. Virtual dynamic interaction games reveal impaired multisensory integration in women with migraine. Headache 2024; 64:482-493. [PMID: 38693749 DOI: 10.1111/head.14720] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2023] [Revised: 03/27/2024] [Accepted: 03/27/2024] [Indexed: 05/03/2024]
Abstract
OBJECTIVE In this cross-sectional observational study, we aimed to investigate sensory profiles and multisensory integration processes in women with migraine using virtual dynamic interaction systems. BACKGROUND Compared to studies on unimodal sensory processing, fewer studies show that multisensory integration differs in patients with migraine. Multisensory integration of visual, auditory, verbal, and haptic modalities has not been evaluated in migraine. METHODS A 12-min virtual dynamic interaction game consisting of four parts was played by the participants. During the game, the participants were exposed to either visual stimuli only or multisensory stimuli in which auditory, verbal, and haptic stimuli were added to the visual stimuli. A total of 78 women participants (28 with migraine without aura and 50 healthy controls) were enrolled in this prospective exploratory study. Patients with migraine and healthy participants who met the inclusion criteria were randomized separately into visual and multisensory groups: migraine multisensory (14 adults), migraine visual (14 adults), healthy multisensory (25 adults), and healthy visual (25 adults). The Sensory Profile Questionnaire was utilized to assess the participants' sensory profiles. The game scores and survey results were analyzed. RESULTS With visual stimuli only, the gaming performance scores of patients with migraine without aura were similar to those of the healthy controls, at a median (interquartile range [IQR]) of 81.8 (79.5-85.8) and 80.9 (77.1-84.2) (p = 0.149). Error rates with visual stimuli in patients with migraine without aura were also comparable to those of healthy controls, at a median (IQR) of 0.11 (0.08-0.13) and 0.12 (0.10-0.14), respectively (p = 0.166). With multisensory stimulation, the average gaming score was lower in patients with migraine without aura than in healthy individuals (healthy: median [IQR] 82.2 [78.8-86.3] vs. migraine: 78.6 [74.0-82.4], p = 0.028). In women with migraine, exposure to a new sensory modality added to the visual stimuli in the fourth, seventh, and tenth rounds (median [IQR] 78.1 [74.1-82.0], 79.7 [77.2-82.5], 76.5 [70.2-82.1]) yielded lower game scores than visual stimuli only (median [IQR] 82.3 [77.9-87.8], 84.2 [79.7-85.6], 80.8 [79.0-85.7]; p = 0.044, p = 0.049, p = 0.016). According to the Sensory Profile Questionnaire results, sensory sensitivity and sensory avoidance scores of patients with migraine (median [IQR] 45.5 [41.0-54.7] and 47.0 [41.5-51.7]) were significantly higher than those of healthy participants (median [IQR] 39.0 [34.0-44.2] and 40.0 [34.0-48.0]; p < 0.001, p = 0.001). CONCLUSION The virtual dynamic game approach showed for the first time that the gaming performance of patients with migraine without aura was negatively affected by the addition of auditory, verbal, and haptic stimuli onto visual stimuli. Multisensory integration of sensory modalities, including haptic stimuli, is disturbed even in the interictal period in women with migraine. Virtual games can be employed to assess the impact of sensory problems in the course of the disease, and sensory training could be a potential therapy target to improve multisensory processing in migraine.
Affiliation(s)
- Merve S Ince
- Neuroscience and Neurotechnology Center of Excellence (NÖROM), Institute of Health Sciences, Gazi University, Ankara, Turkey
- Faculty of Health Sciences, Yuksek Ihtisas University, Ankara, Turkey
- Ilkem Guzel
- Faculty of Health Sciences, Yuksek Ihtisas University, Ankara, Turkey
- Merve C Akgor
- Department of Neurology and Algology, Neuroscience and Neurotechnology Center of Excellence (NÖROM), Neuropsychiatry Center, Gazi University, Ankara, Turkey
- Meltem Bahcelioglu
- Department of Anatomy, Neuroscience and Neurotechnology Center of Excellence (NÖROM), Neuropsychiatry Center, Ankara, Turkey
- Kutluk B Arikan
- Department of Mechanical Engineering, TED University, Neuroscience and Neurotechnology Center of Excellence (NÖROM), Ankara, Turkey
- Amr Okasha
- Department of Mechanical Engineering, Middle East Technical University, Ankara, Turkey
- Sabahat Sengezer
- Applied Data Science Master Program, TED University, Ankara, Turkey
- Hayrunnisa Bolay
- Department of Neurology and Algology, Neuroscience and Neurotechnology Center of Excellence (NÖROM), Neuropsychiatry Center, Gazi University, Ankara, Turkey

41
Cantarella G, Mioni G, Bisiacchi PS. Young adults and multisensory time perception: Visual and auditory pathways in comparison. Atten Percept Psychophys 2024; 86:1386-1399. [PMID: 37674041 PMCID: PMC11093818 DOI: 10.3758/s13414-023-02773-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/01/2023] [Indexed: 09/08/2023]
Abstract
The brain continuously encodes information about time, but how sensory channels interact to achieve a stable representation of such ubiquitous information still needs to be determined. According to recent research, children show a potential interference in multisensory conditions, leading to a trade-off between two senses (sight and audition) in time-perception tasks. This study aimed to examine how healthy young adults behave when performing a time-perception task. In Experiment 1, we tested the effects of temporary sensory deprivation on both visual and auditory senses in a group of young adults. In Experiment 2, we compared the temporal performances of young adults in the auditory modality with those of two samples of children (sighted and sighted but blindfolded) selected from a previous study. Statistically significant results emerged when comparing the two pathways: young adults overestimated and showed a higher sensitivity to time in the auditory modality compared to the visual modality. Restricting visual and auditory input did not affect their time sensitivity. Moreover, children were more accurate at estimating time than young adults after a transient visual deprivation. This implies that as we mature, sensory deprivation does not constitute a benefit to time perception, and supports the hypothesis of a calibration process between senses with age. However, more research is needed to determine how this calibration process affects the developmental trajectories of time perception.
Affiliation(s)
- Giovanni Cantarella
- Department of Psychology, University of Bologna, Viale Berti Pichat, 5, 40127, Bologna, Italy
- Giovanna Mioni
- Department of General Psychology, University of Padova, Via Venezia, 8, 35131, Padova, Italy
- Patrizia Silvia Bisiacchi
- Department of General Psychology, University of Padova, Via Venezia, 8, 35131, Padova, Italy
- Padova Neuroscience Center, Padova, Italy

42
Nguyen T, Lagacé-Cusiac R, Everling JC, Henry MJ, Grahn JA. Audiovisual integration of rhythm in musicians and dancers. Atten Percept Psychophys 2024; 86:1400-1416. [PMID: 38557941 DOI: 10.3758/s13414-024-02874-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/23/2024] [Indexed: 04/04/2024]
Abstract
Music training is associated with better beat processing in the auditory modality. However, it is unknown how rhythmic training that emphasizes visual rhythms, such as dance training, might affect beat processing, or whether training effects in general are modality specific. Here we examined how music and dance training interacted with modality during audiovisual integration and synchronization to auditory and visual isochronous sequences. In two experiments, musicians, dancers, and controls completed an audiovisual integration task and an audiovisual target-distractor synchronization task using dynamic visual stimuli (a bouncing figure). The groups performed similarly on the audiovisual integration tasks (Experiments 1 and 2). However, in the finger-tapping synchronization task (Experiment 1), musicians were more influenced by auditory distractors when synchronizing to visual sequences, while dancers were more influenced by visual distractors when synchronizing to auditory sequences. When participants synchronized with whole-body movements instead of finger-tapping (Experiment 2), all groups were more influenced by the visual distractor than the auditory distractor. Taken together, this study highlights how training is associated with audiovisual processing, and how different types of visual rhythmic stimuli and different movements alter beat perception and production outcome measures. Implications for the modality appropriateness hypothesis are discussed.
Affiliation(s)
- Tram Nguyen
- Brain and Mind Institute and Department of Psychology, University of Western Ontario, London, Ontario, Canada
- Rebekka Lagacé-Cusiac
- Brain and Mind Institute and Department of Psychology, University of Western Ontario, London, Ontario, Canada
- J Celina Everling
- Brain and Mind Institute and Department of Psychology, University of Western Ontario, London, Ontario, Canada
- Molly J Henry
- Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Department of Psychology, Toronto Metropolitan University, Toronto, Ontario, Canada
- Jessica A Grahn
- Brain and Mind Institute and Department of Psychology, University of Western Ontario, London, Ontario, Canada

43
Bernal-Berdun E, Vallejo M, Sun Q, Serrano A, Gutierrez D. Modeling the Impact of Head-Body Rotations on Audio-Visual Spatial Perception for Virtual Reality Applications. IEEE Trans Vis Comput Graph 2024; 30:2624-2632. [PMID: 38446650 DOI: 10.1109/tvcg.2024.3372112] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/08/2024]
Abstract
Humans perceive the world by integrating multimodal sensory feedback, including visual and auditory stimuli, which holds true in virtual reality (VR) environments. Proper synchronization of these stimuli is crucial for perceiving a coherent and immersive VR experience. In this work, we focus on the interplay between audio and vision during localization tasks involving natural head-body rotations. We explore the impact of audio-visual offsets and rotation velocities on users' directional localization acuity for various viewing modes. Using psychometric functions, we model perceptual disparities between visual and auditory cues and determine offset detection thresholds. Our findings reveal that target localization accuracy is affected by perceptual audio-visual disparities during head-body rotations, but remains consistent in the absence of stimuli-head relative motion. We then showcase the effectiveness of our approach in predicting and enhancing users' localization accuracy within realistic VR gaming applications. To provide additional support for our findings, we implement a natural VR game wherein we apply a compensatory audio-visual offset derived from our measured psychometric functions. As a result, we demonstrate a substantial improvement of up to 40% in participants' target localization accuracy. We additionally provide guidelines for content creation to ensure coherent and seamless VR experiences.
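Offset detection thresholds of the kind this study derives can be obtained by fitting a psychometric function to detection proportions and inverting it at a criterion level. The sketch below is generic, not the authors' method: it assumes a cumulative Gaussian form, a 75% criterion, and illustrative offset values and grid ranges.

```python
import numpy as np
from math import erf, sqrt

def cum_gauss(x, mu, sigma):
    """Cumulative Gaussian psychometric function."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def fit_threshold(offsets_ms, p_detect, criterion_z=0.6744897501960817):
    """Grid-search least-squares fit of a cumulative Gaussian to detection
    proportions. Returns (mu, sigma, threshold), where threshold is the
    audio-visual offset at which detection probability reaches ~75%
    (criterion_z is the standard normal quantile for that criterion)."""
    offsets = np.asarray(offsets_ms, dtype=float)
    p = np.asarray(p_detect, dtype=float)
    best = (None, None, np.inf)
    for mu in np.arange(0.0, 201.0, 1.0):          # candidate midpoints (ms)
        for sigma in np.arange(5.0, 101.0, 1.0):   # candidate slopes (ms)
            pred = np.array([cum_gauss(x, mu, sigma) for x in offsets])
            sse = float(np.sum((p - pred) ** 2))
            if sse < best[2]:
                best = (mu, sigma, sse)
    mu, sigma, _ = best
    threshold = mu + sigma * criterion_z  # invert the curve at the criterion
    return mu, sigma, threshold
```

A maximum-likelihood fit (or a dedicated toolbox) would be preferable with real trial counts; the grid search just keeps the sketch dependency-free and deterministic.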
44
Huang YT, Wu CT, Koike S, Chao ZC. Dissecting Mismatch Negativity: Early and Late Subcomponents for Detecting Deviants in Local and Global Sequence Regularities. eNeuro 2024; 11:ENEURO.0050-24.2024. [PMID: 38702187 DOI: 10.1523/eneuro.0050-24.2024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2024] [Revised: 04/11/2024] [Accepted: 04/26/2024] [Indexed: 05/06/2024] Open
Abstract
Mismatch negativity (MMN) is commonly recognized as a neural signal of prediction error evoked by deviants from the expected patterns of sensory input. Studies show that MMN diminishes when sequence patterns become more predictable over a longer timescale. This implies that MMN is composed of multiple subcomponents, each responding to different levels of temporal regularities. To probe the hypothesized subcomponents in MMN, we record human electroencephalography during an auditory local-global oddball paradigm where the tone-to-tone transition probability (local regularity) and the overall sequence probability (global regularity) are manipulated to control temporal predictabilities at two hierarchical levels. We find that the size of MMN is correlated with both probabilities and the spatiotemporal structure of MMN can be decomposed into two distinct subcomponents. Both subcomponents appear as negative waveforms, with one peaking early in the central-frontal area and the other late in a more frontal area. With a quantitative predictive coding model, we map the early and late subcomponents to the prediction errors that are tied to local and global regularities, respectively. Our study highlights the hierarchical complexity of MMN and offers an experimental and analytical platform for developing a multitiered neural marker applicable in clinical settings.
Affiliation(s)
- Yiyuan Teresa Huang
- International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Tokyo 113-0033, Japan
- School of Occupational Therapy, College of Medicine, National Taiwan University, Taipei 100, Taiwan
- Department of Multidisciplinary Sciences, Graduate School of Arts and Sciences, The University of Tokyo, Tokyo 153-8902, Japan
- Chien-Te Wu
- International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Tokyo 113-0033, Japan
- School of Occupational Therapy, College of Medicine, National Taiwan University, Taipei 100, Taiwan
- Shinsuke Koike
- International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Tokyo 113-0033, Japan
- Department of Multidisciplinary Sciences, Graduate School of Arts and Sciences, The University of Tokyo, Tokyo 153-8902, Japan
- University of Tokyo Institute for Diversity & Adaptation of Human Mind (UTIDAHM), Tokyo 113-0033, Japan
- Zenas C Chao
- International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Tokyo 113-0033, Japan

45
Chee ZJ, Chang CYM, Cheong JY, Malek FHBA, Hussain S, de Vries M, Bellato A. The effects of music and auditory stimulation on autonomic arousal, cognition and attention: A systematic review. Int J Psychophysiol 2024; 199:112328. [PMID: 38458383 DOI: 10.1016/j.ijpsycho.2024.112328] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2023] [Revised: 03/01/2024] [Accepted: 03/04/2024] [Indexed: 03/10/2024]
Abstract
According to the arousal-mood hypothesis, changes in arousal and mood when exposed to auditory stimulation underlie the detrimental effects or improvements in cognitive performance. Findings supporting or against this hypothesis are, however, often based on subjective ratings of arousal rather than autonomic/physiological indices of arousal. To assess the arousal-mood hypothesis, we carried out a systematic review of the literature on 31 studies investigating cardiac, electrodermal, and pupillometry measures when exposed to different types of auditory stimulation (music, ambient noise, white noise, and binaural beats) in relation to cognitive performance. Our review suggests that the effects of music, noise, or binaural beats on cardiac, electrodermal, and pupillometry measures in relation to cognitive performance are either mixed or insufficient to draw conclusions. Importantly, the evidence for or against the arousal-mood hypothesis is at best indirect because autonomic arousal and cognitive performance are often considered separately. Future research is needed to directly evaluate the effects of auditory stimulation on autonomic arousal and cognitive performance holistically.
Affiliation(s)
- Zhong Jian Chee
- School of Psychology, University of Nottingham Malaysia, Semenyih 43500, Malaysia; School of Psychology, University of Aberdeen, Aberdeen, United Kingdom
- Chern Yi Marybeth Chang
- School of Psychology, University of Nottingham Malaysia, Semenyih 43500, Malaysia; Mind and Neurodevelopment (MiND) Interdisciplinary Cluster, University of Nottingham Malaysia, Semenyih 43500, Malaysia
- Jean Yi Cheong
- School of Psychology, University of Nottingham Malaysia, Semenyih 43500, Malaysia
- Shahad Hussain
- School of Psychology, University of Nottingham Malaysia, Semenyih 43500, Malaysia
- Marieke de Vries
- School of Psychology, University of Nottingham Malaysia, Semenyih 43500, Malaysia; Mind and Neurodevelopment (MiND) Interdisciplinary Cluster, University of Nottingham Malaysia, Semenyih 43500, Malaysia; Development and Education of Youth in Diverse Societies (DEEDS), Faculty of Social Sciences, Utrecht University, the Netherlands
- Alessio Bellato
- School of Psychology, University of Nottingham Malaysia, Semenyih 43500, Malaysia; Mind and Neurodevelopment (MiND) Interdisciplinary Cluster, University of Nottingham Malaysia, Semenyih 43500, Malaysia; School of Psychology, University of Southampton, Southampton SO17 1BJ, United Kingdom; Centre for Innovation in Mental Health, University of Southampton, Southampton SO17 1BJ, United Kingdom; Institute for Life Sciences, University of Southampton, United Kingdom

46
Kausel L, Zamorano F, Billeke P, Sutherland ME, Alliende MI, Larrain‐Valenzuela J, Soto‐Icaza P, Aboitiz F. Theta and alpha oscillations may underlie improved attention and working memory in musically trained children. Brain Behav 2024; 14:e3517. [PMID: 38702896 PMCID: PMC11069029 DOI: 10.1002/brb3.3517] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/30/2023] [Revised: 04/10/2024] [Accepted: 04/13/2024] [Indexed: 05/06/2024] Open
Abstract
INTRODUCTION Attention and working memory are key cognitive functions that allow us to select and maintain information in mind for a short time; they are essential for daily life and, in particular, for learning and academic performance. Musical training has been shown to improve working memory performance, but it remains unclear whether and how the neural mechanisms of working memory, and particularly attention, are implicated in this process. In this work, we aimed to identify the oscillatory signature of bimodal attention and working memory that contributes to improved working memory in musically trained children. MATERIALS AND METHODS We recruited children with and without musical training and asked them to complete a bimodal (auditory/visual) attention and working memory task while their brain activity was measured using electroencephalography. Behavioral, time-frequency, and source reconstruction analyses were performed. RESULTS Overall, musically trained children performed better on the task than children without musical training. Comparing the two groups, we found modulations in the alpha band before and at the beginning of stimulus onset in frontal and parietal regions; these correlated with correct responses to the attended modality. Moreover, during the final phase of stimulus presentation, we found modulations in the theta and alpha bands, in left frontal and right parietal regions, that correlated with correct responses independent of the attention condition. CONCLUSIONS These results suggest that musically trained children have improved neuronal mechanisms for both attention allocation and memory encoding. Our results may inform interventions for people with attention and working memory difficulties.
Affiliation(s)
- Leonie Kausel
- Centro de Estudios en Neurociencia Humana y Neuropsicología, Facultad de Psicología, Universidad Diego Portales, Santiago, Chile
- Laboratorio de Neurociencia Social y Neuromodulación, Centro de Investigación en Complejidad Social (CICS), Facultad de Gobierno, Universidad del Desarrollo, Santiago, Chile
- Centro Interdisciplinario de Neurociencias, Pontificia Universidad Católica de Chile, Santiago, Chile
- F. Zamorano
- Unidad de Imágenes Cuantitativas Avanzadas, Departamento de Imágenes, Clínica Alemana de Santiago, Santiago, Chile
- Facultad de Ciencias para el Cuidado de la Salud, Universidad San Sebastián, Santiago, Chile
- Laboratorio de Psiquiatría Traslacional, Departamento de Psiquiatría, Facultad de Medicina, Universidad de Chile, Santiago, Chile
- P. Billeke
- Laboratorio de Neurociencia Social y Neuromodulación, Centro de Investigación en Complejidad Social (CICS), Facultad de Gobierno, Universidad del Desarrollo, Santiago, Chile
- M. E. Sutherland
- Centro Interdisciplinario de Neurociencias, Pontificia Universidad Católica de Chile, Santiago, Chile
- M. I. Alliende
- Centro Interdisciplinario de Neurociencias, Pontificia Universidad Católica de Chile, Santiago, Chile
- J. Larrain‐Valenzuela
- Centro de Investigación en Complejidad Social (CICS), Facultad de Gobierno, Universidad del Desarrollo, Santiago, Chile
- P. Soto‐Icaza
- Laboratorio de Neurociencia Social y Neuromodulación, Centro de Investigación en Complejidad Social (CICS), Facultad de Gobierno, Universidad del Desarrollo, Santiago, Chile
- F. Aboitiz
- Centro Interdisciplinario de Neurociencias, Pontificia Universidad Católica de Chile, Santiago, Chile
47
Wadle SL, Ritter TC, Wadle TTX, Hirtz JJ. Topography and Ensemble Activity in the Auditory Cortex of a Mouse Model of Fragile X Syndrome. eNeuro 2024; 11:ENEURO.0396-23.2024. PMID: 38627066; PMCID: PMC11097631; DOI: 10.1523/eneuro.0396-23.2024.
Abstract
Autism spectrum disorder (ASD) is often associated with social communication impairments and specific sound processing deficits, for example, problems in following speech in noisy environments. To investigate underlying neuronal processing defects located in the auditory cortex (AC), we performed two-photon Ca2+ imaging in FMR1 (fragile X messenger ribonucleoprotein 1) knock-out (KO) mice, a model for fragile X syndrome (FXS), the most common cause of hereditary ASD in humans. In primary AC (A1) and the anterior auditory field (AAF), topographic frequency representation was less ordered than in control animals. We additionally analyzed ensemble AC activity in response to various sounds and found subfield-specific differences. In A1, ensemble correlations were lower in general, while in secondary AC (A2), correlations were higher in response to complex sounds, but not to pure tones. Furthermore, sound specificity of ensemble activity was decreased in AAF. Repeating these experiments 1 week later revealed no major differences regarding representational drift. Nevertheless, we found subfield- and genotype-specific changes in ensemble correlation values between the two time points, hinting at alterations in network stability in FMR1 KO mice. These detailed insights into AC network activity and topography in FMR1 KO mice add to the understanding of auditory processing defects in FXS.
Affiliation(s)
- Simon L Wadle
- Physiology of Neuronal Networks, Department of Biology, RPTU University of Kaiserslautern-Landau, Kaiserslautern D-67663, Germany
- Tamara C Ritter
- Physiology of Neuronal Networks, Department of Biology, RPTU University of Kaiserslautern-Landau, Kaiserslautern D-67663, Germany
- Tatjana T X Wadle
- Physiology of Neuronal Networks, Department of Biology, RPTU University of Kaiserslautern-Landau, Kaiserslautern D-67663, Germany
- Jan J Hirtz
- Physiology of Neuronal Networks, Department of Biology, RPTU University of Kaiserslautern-Landau, Kaiserslautern D-67663, Germany
48
Tseng HC, Hsieh IH. Effects of absolute pitch on brain activation and functional connectivity during hearing-in-noise perception. Cortex 2024; 174:1-18. PMID: 38484435; DOI: 10.1016/j.cortex.2024.02.011.
Abstract
Hearing-in-noise (HIN) ability is crucial in speech and music communication. Recent evidence suggests that absolute pitch (AP), the ability to identify isolated musical notes, is associated with HIN benefits. A theoretical account postulates a link between AP ability and neural network indices of segregation. However, how AP ability modulates the brain activation and functional connectivity underlying HIN perception remains unclear. Here we used functional magnetic resonance imaging to contrast brain responses in a sample (n = 45) comprising 15 AP musicians, 15 non-AP musicians, and 15 non-musicians perceiving Mandarin speech and melody targets under varying signal-to-noise ratios (SNRs: No-Noise, 0, -9 dB). Results revealed that AP musicians exhibited increased activation in auditory and superior frontal regions across both HIN domains (music and speech), irrespective of noise level. Notably, substantially higher sensorimotor activation was found in AP musicians when the target was music rather than speech. Furthermore, we examined AP effects on neural connectivity using psychophysiological interaction analysis with the auditory cortex as the seed region. AP musicians showed decreased functional connectivity with the sensorimotor cortex and middle frontal gyrus compared with non-AP musicians. Crucially, AP differentially affected connectivity with parietal and frontal brain regions depending on whether the HIN domain was music or speech. These findings suggest that AP plays a critical role in HIN perception, manifested as increased activation and functional independence between auditory and sensorimotor regions for perceiving music and speech streams.
Affiliation(s)
- Hung-Chen Tseng
- Institute of Cognitive Neuroscience, National Central University, Taoyuan City, Taiwan
- I-Hui Hsieh
- Institute of Cognitive Neuroscience, National Central University, Taoyuan City, Taiwan; Cognitive Intelligence and Precision Healthcare Center, National Central University, Taoyuan City, Taiwan.
49
Mondul JA, Burke K, Morley B, Lauer AM. Alpha9alpha10 knockout mice show altered physiological and behavioral responses to signals in masking noise. J Acoust Soc Am 2024; 155:3183-3194. PMID: 38738939; DOI: 10.1121/10.0025985.
Abstract
Medial olivocochlear (MOC) efferents modulate outer hair cell motility through specialized nicotinic acetylcholine receptors to support encoding of signals in noise. Transgenic mice lacking the α9 subunit of these receptors (α9KOs) have normal hearing in quiet and in noise, but lack classic cochlear suppression effects and show abnormal temporal, spectral, and spatial processing. Mice deficient for both the α9 and α10 receptor subunits (α9α10KOs) may exhibit more severe MOC-related phenotypes. Like α9KOs, α9α10KOs have normal auditory brainstem response (ABR) thresholds and weak MOC reflexes. Here, we further characterized auditory function in α9α10KO mice. Wild-type (WT) and α9α10KO mice had similar ABR thresholds and acoustic startle response amplitudes in quiet and noise, and similar frequency and intensity difference sensitivity. α9α10KO mice had larger ABR Wave I amplitudes than WTs in quiet and noise. Other ABR metrics of hearing-in-noise function yielded conflicting findings regarding α9α10KO susceptibility to masking effects. α9α10KO mice also had larger startle amplitudes in tone backgrounds than WTs. Overall, α9α10KO mice had grossly normal auditory function in quiet and noise, although their larger ABR amplitudes and hyperreactive startles suggest some auditory processing abnormalities. These findings contribute to the growing literature showing mixed effects of MOC dysfunction on hearing.
Affiliation(s)
- Jane A Mondul
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland 21205, USA
- Kali Burke
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland 21205, USA
- Barbara Morley
- Boys Town National Research Hospital, Omaha, Nebraska 68131, USA
- Amanda M Lauer
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland 21205, USA
- Department of Neuroscience, Johns Hopkins University School of Medicine, Baltimore, Maryland 21205, USA
50
Shi K, Quass GL, Rogalla MM, Ford AN, Czarny JE, Apostolides PF. Population coding of time-varying sounds in the nonlemniscal inferior colliculus. J Neurophysiol 2024; 131:842-864. PMID: 38505907; DOI: 10.1152/jn.00013.2024.
Abstract
The inferior colliculus (IC) of the midbrain is important for complex sound processing, such as discriminating conspecific vocalizations and human speech. The IC's nonlemniscal, dorsal "shell" region is likely important for this process, as neurons in these layers project to higher-order thalamic nuclei that subsequently funnel acoustic signals to the amygdala and nonprimary auditory cortices, forebrain circuits important for vocalization coding in a variety of mammals, including humans. However, the extent to which shell IC neurons transmit acoustic features necessary to discern vocalizations is less clear, owing to the technical difficulty of recording from neurons in the IC's superficial layers via traditional approaches. Here, we use two-photon Ca2+ imaging in mice of either sex to test how shell IC neuron populations encode the rate and depth of amplitude modulation, important sound cues for speech perception. Most shell IC neurons were broadly tuned, with low neurometric discrimination of amplitude modulation rate; only a subset was highly selective to specific modulation rates. Nevertheless, a neural network classifier trained on fluorescence data from shell IC neuron populations accurately classified amplitude modulation rate, and decoding accuracy was only marginally reduced when highly tuned neurons were omitted from the training data. Rather, classifier accuracy increased monotonically with the modulation depth of the training data, such that classifiers trained on full-depth modulated sounds had median decoding errors of ∼0.2 octaves. Thus, shell IC neurons may transmit time-varying signals via a population code, with perhaps limited reliance on the discriminative capacity of any individual neuron.
NEW & NOTEWORTHY The IC's shell layers originate a "nonlemniscal" pathway important for perceiving vocalization sounds. However, prior studies suggest that individual shell IC neurons are broadly tuned and have high response thresholds, implying limited reliability of efferent signals. Using Ca2+ imaging, we show that amplitude modulation is accurately represented in the population activity of shell IC neurons. Thus, downstream targets can read out sounds' temporal envelopes from distributed rate codes transmitted by populations of broadly tuned neurons.
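The key idea in the abstract above — that a population of broadly tuned neurons can support accurate stimulus decoding even when no single neuron is selective — can be illustrated with a toy simulation. This is a minimal sketch, not the authors' pipeline: the tuning curves, noise level, neuron count, and the nearest-centroid decoder (substituted here for their neural network classifier) are all assumptions chosen for illustration.

```python
# Toy population decoder for amplitude-modulation (AM) rate.
# Assumed, illustrative parameters throughout; not the study's analysis.
import numpy as np

rng = np.random.default_rng(0)

n_neurons = 80
am_rates = np.array([2.0, 4.0, 8.0, 16.0, 32.0, 64.0])  # Hz, octave-spaced

# Broad Gaussian tuning in log2(rate): bandwidth of 2 octaves means no
# single neuron discriminates neighboring rates well on its own.
preferred = rng.uniform(1.0, 6.0, size=n_neurons)  # preferred log2(Hz)
bandwidth = 2.0                                    # octaves (deliberately broad)

def population_response(rate_hz, noise_sd=0.2):
    """Noisy single-trial population response (e.g., dF/F) to one AM rate."""
    x = np.log2(rate_hz)
    mean = np.exp(-0.5 * ((x - preferred) / bandwidth) ** 2)
    return mean + rng.normal(0.0, noise_sd, size=n_neurons)

# "Train" a nearest-centroid decoder: average responses per rate.
n_trials = 50
centroids = {r: np.mean([population_response(r) for _ in range(n_trials)], axis=0)
             for r in am_rates}

def decode(resp):
    """Return the AM rate whose centroid is closest to this response."""
    dists = {r: np.linalg.norm(resp - c) for r, c in centroids.items()}
    return min(dists, key=dists.get)

# Decode held-out single trials; report mean error in octaves.
errors = []
for r in am_rates:
    for _ in range(20):
        r_hat = decode(population_response(r))
        errors.append(abs(np.log2(r_hat) - np.log2(r)))
mean_err = float(np.mean(errors))
print(f"mean decoding error: {mean_err:.2f} octaves")
```

Even though each simulated neuron's tuning spans two octaves, pooling 80 noisy responses yields sub-octave decoding error, which is the qualitative point of the population-code interpretation above.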
Affiliation(s)
- Kaiwen Shi
- Department of Otolaryngology-Head & Neck Surgery, Kresge Hearing Research Institute, University of Michigan Medical School, Ann Arbor, Michigan, United States
- Gunnar L Quass
- Department of Otolaryngology-Head & Neck Surgery, Kresge Hearing Research Institute, University of Michigan Medical School, Ann Arbor, Michigan, United States
- Meike M Rogalla
- Department of Otolaryngology-Head & Neck Surgery, Kresge Hearing Research Institute, University of Michigan Medical School, Ann Arbor, Michigan, United States
- Alexander N Ford
- Department of Otolaryngology-Head & Neck Surgery, Kresge Hearing Research Institute, University of Michigan Medical School, Ann Arbor, Michigan, United States
- Jordyn E Czarny
- Department of Otolaryngology-Head & Neck Surgery, Kresge Hearing Research Institute, University of Michigan Medical School, Ann Arbor, Michigan, United States
- Pierre F Apostolides
- Department of Otolaryngology-Head & Neck Surgery, Kresge Hearing Research Institute, University of Michigan Medical School, Ann Arbor, Michigan, United States
- Department of Molecular & Integrative Physiology, University of Michigan Medical School, Ann Arbor, Michigan, United States