76. Duquette-Laplante F, Jutras B, Néron N, Fortin S, Koravand A. Exploring the Differences Between an Immature and a Mature Human Auditory System Through Auditory Late Responses in Quiet and in Noise. Neuroscience 2024; 545:171-184. [PMID: 38513763] [DOI: 10.1016/j.neuroscience.2024.03.018]
Abstract
Children are disadvantaged compared to adults when they perceive speech in a noisy environment. Noise reduces their ability to extract and understand auditory information. Auditory-Evoked Late Responses (ALRs) offer insight into how the auditory system can process information in noise. This study investigated how noise, signal-to-noise ratio (SNR), and stimulus type affect ALRs in children and adults. Fifteen participants from each group with normal hearing were studied under various conditions. The findings revealed that both groups experienced delayed latencies and reduced amplitudes in noise but that children had fewer identifiable waves than adults. Babble noise had a significant impact on both groups, limiting the analysis to one condition: the /da/ stimulus at +10 dB SNR for the P1 wave. P1 amplitude was greater in quiet for children compared to adults, with no stimulus effect. Children generally exhibited longer latencies. N1 latency was longer in noise, with larger amplitudes in white noise compared to quiet for both groups. P2 latency was shorter with the verbal stimulus in quiet, with larger amplitudes in children than adults. N2 latency was shorter in quiet, with no amplitude differences between the groups. Overall, noise prolonged latencies and reduced amplitudes. Different noise types had varying impacts, with the eight-talker babble noise causing more disruption. Children's auditory systems responded similarly to adults' but may be more susceptible to noise. This research emphasizes the need to understand the impact of noise on children's auditory development, given their frequent exposure to noisy environments, and calls for further exploration of noise parameters in children.
77. Kong Y, Zhao C, Li D, Li B, Hu Y, Liu H, Woolgar A, Guo J, Song Y. Auditory change detection and visual selective attention: association between MMN and N2pc. Cereb Cortex 2024; 34:bhae175. [PMID: 38700440] [DOI: 10.1093/cercor/bhae175]
Abstract
While the auditory and visual systems each provide distinct information to our brain, they also work together to process and prioritize input to address ever-changing conditions. Previous studies highlighted the trade-off between auditory change detection and visual selective attention; however, the relationship between them is still unclear. Here, we recorded electroencephalography signals from 106 healthy adults in three experiments. Our findings revealed a positive correlation at the population level between the amplitudes of event-related potential indices associated with auditory change detection (mismatch negativity) and visual selective attention (posterior contralateral N2) when elicited in separate tasks. This correlation persisted even when participants performed a visual task while disregarding simultaneous auditory stimuli. Interestingly, as visual attention demand increased, participants whose posterior contralateral N2 amplitude increased the most exhibited the largest reduction in mismatch negativity, suggesting a within-subject trade-off between the two processes. Taken together, our results suggest an intimate relationship and potential shared mechanism between auditory change detection and visual selective attention. We liken this to a total capacity limit that varies between individuals, which could drive correlated individual differences in auditory change detection and visual selective attention, and also within-subject competition between the two, with task-based modulation of visual attention causing within-participant decrease in auditory change detection sensitivity.
78. Haruki Y, Ogawa K. Disrupted interoceptive awareness by auditory distractor: Difficulty inferring the internal bodily states? Neurosci Res 2024; 202:30-38. [PMID: 37935335] [DOI: 10.1016/j.neures.2023.11.002]
Abstract
Recent studies have associated interoceptive awareness, the perception of internal bodily sensations, with a predictive mechanism of perception across all sensory modalities. According to this framework, volitional attention plays a pivotal role in interoceptive awareness by prioritizing interoceptive sensations over exteroceptive ones. Consequently, it is hypothesized that the presence of irrelevant stimuli would disrupt this attentional modulation and interoceptive awareness, which remains untested. In this study, we investigated whether interoceptive awareness is diminished by unrelated auditory distractors to validate the proposed perceptual framework. A total of 30 healthy human volunteers performed the heartbeat counting task both with and without auditory distractors. Additionally, we measured participants' psychophysiological traits related to interoception, including the high-frequency component of heart rate variability (HF-HRV) and trait interoceptive sensibility. The results showed that interoceptive accuracy, confidence, and heartbeat intensity decreased in the presence of the distractor sound. Moreover, individuals with higher HF-HRV and a greater tendency to worry about bodily states experienced a more pronounced distractor effect on interoceptive awareness. These results provide support for the perceptual mechanism of interoceptive awareness in terms of the predictive process, highlighting the impact of relative precision across interoceptive and exteroceptive signals on perceptual experiences.
79. Hashim S, Küssner MB, Weinreich A, Omigie D. The neuro-oscillatory profiles of static and dynamic music-induced visual imagery. Int J Psychophysiol 2024; 199:112309. [PMID: 38242363] [DOI: 10.1016/j.ijpsycho.2024.112309]
Abstract
Visual imagery, i.e., seeing in the absence of the corresponding retinal input, has been linked to visual and motor processing areas of the brain. Music listening provides an ideal vehicle for exploring the neural correlates of visual imagery because it has been shown to reliably induce a broad variety of content, ranging from abstract shapes to dynamic scenes. Forty-two participants listened with closed eyes to twenty-four excerpts of music while a 15-channel EEG was recorded and, after each excerpt, rated the extent to which they experienced static and dynamic visual imagery. Our results show both static and dynamic imagery to be associated with posterior alpha suppression (especially in lower alpha) early in the onset of music listening, while static imagery was associated with an additional alpha enhancement later in the listening experience. With regard to the beta band, our results demonstrate beta enhancement in response to static imagery, but beta suppression followed by enhancement in response to dynamic imagery. We also observed a positive association, early in the listening experience, between gamma power and dynamic imagery ratings that was not present for static imagery ratings. Finally, we offer evidence that musical training may selectively drive the effects found with respect to static and dynamic imagery and alpha, beta, and gamma band oscillations. Taken together, our results show the promise of using music listening as an effective stimulus for examining the neural correlates of visual imagery and its contents. Our study also highlights the relevance of future work seeking to study the temporal dynamics of music-induced visual imagery.
80. Regener P, Heffer N, Love SA, Petrini K, Pollick F. Differences in audiovisual temporal processing in autistic adults are specific to simultaneity judgments. Autism Res 2024; 17:1041-1052. [PMID: 38661256] [DOI: 10.1002/aur.3134]
Abstract
Research has shown that children on the autism spectrum and adults with high levels of autistic traits are less sensitive to audiovisual asynchrony compared to their neurotypical peers. However, this evidence has been limited to simultaneity judgments (SJ) which require participants to consider the timing of two cues together. Given evidence of partly divergent perceptual and neural mechanisms involved in making temporal order judgments (TOJ) and SJ, and given that SJ require a more global type of processing which may be impaired in autistic individuals, here we ask whether the observed differences in audiovisual temporal processing are task and stimulus specific. We examined the ability to detect audiovisual asynchrony in a group of 26 autistic adult males and a group of age and IQ-matched neurotypical males. Participants were presented with beep-flash, point-light drumming, and face-voice displays with varying degrees of asynchrony and asked to make SJ and TOJ. The results indicated that autistic participants were less able to detect audiovisual asynchrony compared to the control group, but this effect was specific to SJ and more complex social stimuli (e.g., face-voice) with stronger semantic correspondence between the cues, requiring a more global type of processing. This indicates that audiovisual temporal processing is not generally different in autistic individuals and that a similar level of performance could be achieved by using a more local type of processing, thus informing multisensory integration theory as well as multisensory training aimed to aid perceptual abilities in this population.
81. Weng Y, Rong Y, Peng G. The development of audiovisual speech perception in Mandarin-speaking children: Evidence from the McGurk paradigm. Child Dev 2024; 95:750-765. [PMID: 37843038] [DOI: 10.1111/cdev.14022]
Abstract
The developmental trajectory of audiovisual speech perception in Mandarin-speaking children remains understudied. This cross-sectional study in Mandarin-speaking 3- to 4-year-old, 5- to 6-year-old, 7- to 8-year-old children, and adults from Xiamen, China (n = 87, 44 males) investigated this issue using the McGurk paradigm with three levels of auditory noise. For the identification of congruent stimuli, 3- to 4-year-olds underperformed older groups whose performances were comparable. For the perception of the incongruent stimuli, a developmental shift was observed as 3- to 4-year-olds made significantly more audio-dominant but fewer audiovisual-integrated responses to incongruent stimuli than older groups. With increasing auditory noise, the difference between children and adults widened in identifying congruent stimuli but narrowed in perceiving incongruent ones. The findings regarding noise effects agree with the statistically optimal hypothesis.
82. Becker J, Viertler M, Korn CW, Blank H. The pupil dilation response as an indicator of visual cue uncertainty and auditory outcome surprise. Eur J Neurosci 2024; 59:2686-2701. [PMID: 38469976] [DOI: 10.1111/ejn.16306]
Abstract
In everyday perception, we combine incoming sensory information with prior expectations. Expectations can be induced by cues that indicate the probability of following sensory events. The information provided by cues may differ and hence lead to different levels of uncertainty about which event will follow. In this experiment, we employed pupillometry to investigate whether the pupil dilation response to visual cues varies depending on the level of cue-associated uncertainty about a following auditory outcome. Also, we tested whether the pupil dilation response reflects the amount of surprise about the subsequently presented auditory stimulus. In each trial, participants were presented with a visual cue (face image) which was followed by an auditory outcome (spoken vowel). After the face cue, participants had to indicate by keypress which of three auditory vowels they expected to hear next. We manipulated the cue-associated uncertainty by varying the probabilistic cue-outcome contingencies: One face was most likely followed by one specific vowel (low cue uncertainty), another face was equally likely followed by either of two vowels (intermediate cue uncertainty) and the third face was followed by all three vowels (high cue uncertainty). Our results suggest that pupil dilation in response to task-relevant cues depends on the associated uncertainty, but only for large differences in the cue-associated uncertainty. Additionally, in response to the auditory outcomes, the pupil dilation scaled negatively with the cue-dependent probabilities, likely signalling the amount of surprise.
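The uncertainty and surprise manipulations described above have a natural information-theoretic reading: cue uncertainty as the Shannon entropy of the cue-outcome contingency, and outcome surprise as the negative log-probability of the vowel that occurred. A minimal sketch of that reading (the contingency values below are illustrative assumptions, not the probabilities used in the study):

```python
import numpy as np

# Assumed cue -> vowel contingencies mirroring the three conditions
# (values are illustrative, not taken from the paper).
contingencies = {
    "low_uncertainty":          np.array([0.85, 0.10, 0.05]),
    "intermediate_uncertainty": np.array([0.45, 0.45, 0.10]),
    "high_uncertainty":         np.array([1/3, 1/3, 1/3]),
}

def entropy_bits(p):
    """Shannon entropy of an outcome distribution, in bits."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def surprisal_bits(p, outcome_idx):
    """Surprise (negative log-probability) of the outcome that occurred."""
    return float(-np.log2(p[outcome_idx]))

for name, p in contingencies.items():
    print(f"{name}: H = {entropy_bits(p):.2f} bits, "
          f"surprisal of rarest vowel = {surprisal_bits(p, np.argmin(p)):.2f} bits")
```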
83. Li L, Ishida K, Mizuhara K, Barry RJ, Nittono H. Effects of the cardiac cycle on auditory processing: A preregistered study on mismatch negativity. Psychophysiology 2024; 61:e14506. [PMID: 38149745] [DOI: 10.1111/psyp.14506]
Abstract
The systolic and diastolic phases of the cardiac cycle are known to affect perception and cognition differently. Higher order processing tends to be facilitated at systole, whereas sensory processing of external stimuli tends to be impaired at systole compared to diastole. The current study aims to examine whether the cardiac cycle affects auditory deviance detection, as reflected in the mismatch negativity (MMN) of the event-related brain potential (ERP). We recorded the intensity deviance response to deviant tones (70 dB) presented among standard tones (60 or 80 dB, depending on blocks) and calculated the MMN by subtracting standard ERP waveforms from deviant ERP waveforms. We also assessed intensity-dependent N1 and P2 amplitude changes by subtracting ERPs elicited by soft standard tones (60 dB) from ERPs elicited by loud standard tones (80 dB). These subtraction methods were used to eliminate phase-locked cardiac-related electric artifacts that overlap auditory ERPs. The endogenous MMN was expected to be larger at systole, reflecting the facilitation of memory-based auditory deviance detection, whereas the exogenous N1 and P2 would be smaller at systole, reflecting impaired exteroceptive sensory processing. However, after the elimination of cardiac-related artifacts, there were no significant differences between systole and diastole in any ERP components. The intensity-dependent N1 and P2 amplitude changes were not obvious in either cardiac phase, probably because of the short interstimulus intervals. The lack of a cardiac phase effect on MMN amplitude suggests that preattentive auditory processing may not be affected by bodily signals from the heart.
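The deviant-minus-standard subtraction that isolates the MMN (and cancels overlapping phase-locked artifacts) can be sketched in a few lines of NumPy. This is a hedged illustration with simulated epochs; the sampling rate, trial counts, and 100-200 ms measurement window are assumptions, not the study's parameters:

```python
import numpy as np

fs = 500                            # sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.5, 1 / fs)    # epoch time axis: -100 to 500 ms

# Simulated single-trial epochs (n_trials, n_samples) at one fronto-central channel.
rng = np.random.default_rng(0)
standard_epochs = rng.normal(0, 2, (400, t.size))
deviant_epochs = rng.normal(0, 2, (100, t.size))

# Average across trials, then subtract standard from deviant to isolate the MMN,
# cancelling activity (including cardiac-related artifacts) common to both.
standard_erp = standard_epochs.mean(axis=0)
deviant_erp = deviant_epochs.mean(axis=0)
mmn_wave = deviant_erp - standard_erp

# Quantify the MMN as mean amplitude in an assumed 100-200 ms window.
window = (t >= 0.10) & (t <= 0.20)
mmn_amplitude = mmn_wave[window].mean()
print(f"MMN mean amplitude (100-200 ms): {mmn_amplitude:.2f} µV")
```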
84. Böing S, Van der Stigchel S, Van der Stoep N. The impact of acute asymmetric hearing loss on multisensory integration. Eur J Neurosci 2024; 59:2373-2390. [PMID: 38303554] [DOI: 10.1111/ejn.16263]
Abstract
Humans have the remarkable ability to integrate information from different senses, which greatly facilitates the detection, localization and identification of events in the environment. About 466 million people worldwide suffer from hearing loss. Yet, the impact of hearing loss on how the senses work together is rarely investigated. Here, we investigate how a common sensory impairment, asymmetric conductive hearing loss (AHL), alters the way our senses interact by examining human orienting behaviour with normal hearing (NH) and acute AHL. This type of hearing loss disrupts auditory localization. We hypothesized that this creates a conflict between auditory and visual spatial estimates and alters how auditory and visual inputs are integrated to facilitate multisensory spatial perception. We analysed the spatial and temporal properties of saccades to auditory, visual and audiovisual stimuli before and after plugging the right ear of participants. Both spatial and temporal aspects of multisensory integration were affected by AHL. Compared with NH, AHL caused participants to make slow, inaccurate and imprecise saccades towards auditory targets. Surprisingly, increased weight on visual input resulted in accurate audiovisual localization with AHL. This came at a cost: saccade latencies for audiovisual targets increased significantly. The larger the auditory localization errors, the less participants were able to benefit from audiovisual integration in terms of saccade latency. Our results indicate that observers immediately change sensory weights to effectively deal with acute AHL and preserve audiovisual accuracy in a way that cannot be fully explained by statistical models of optimal cue integration.
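The "statistical models of optimal cue integration" mentioned here typically predict an inverse-variance-weighted (maximum-likelihood) combination of the unisensory estimates, against which observed audiovisual behaviour can be compared. A small sketch of that prediction; all numbers are illustrative stand-ins, not the study's data:

```python
import numpy as np

def mle_combination(mu_a, var_a, mu_v, var_v):
    """Maximum-likelihood (inverse-variance weighted) audiovisual estimate."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    w_v = 1 - w_a
    mu_av = w_a * mu_a + w_v * mu_v
    var_av = 1 / (1 / var_a + 1 / var_v)   # predicted variance is reduced
    return mu_av, var_av, w_a, w_v

# Illustrative values: plugging one ear biases and blurs auditory localization,
# so the model pushes weight toward vision.
mu_av, var_av, w_a, w_v = mle_combination(mu_a=12.0, var_a=64.0,  # deg, deg^2
                                           mu_v=0.0,  var_v=4.0)
print(f"predicted AV estimate: {mu_av:.1f} deg, variance {var_av:.1f}, "
      f"auditory weight {w_a:.2f}, visual weight {w_v:.2f}")
```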
85. Ghosh P, Talwar S, Banerjee A. Unsupervised Characterization of Prediction Error Markers in Unisensory and Multisensory Streams Reveal the Spatiotemporal Hierarchy of Cortical Information Processing. eNeuro 2024; 11:ENEURO.0251-23.2024. [PMID: 38702194] [PMCID: PMC11069433] [DOI: 10.1523/eneuro.0251-23.2024]
Abstract
Elicited upon violation of regularity in stimulus presentation, mismatch negativity (MMN) reflects the brain's ability to perform automatic comparisons between consecutive stimuli and provides an electrophysiological index of sensory error detection, whereas P300 is associated with cognitive processes such as the updating of working memory. To date, there has been extensive research on the roles of MMN and P300 individually, because of their potential to be used as clinical markers of consciousness and attention, respectively. Here, we intend to explore, with an unsupervised and rigorous source estimation approach, the underlying cortical generators of MMN and P300, in the context of prediction error propagation along the hierarchies of brain information processing in healthy human participants. The existing methods of characterizing the two ERPs involve only approximate estimations of their amplitudes and latencies based on specific sensors of interest. Our objective is twofold: first, we introduce a novel data-driven unsupervised approach to compute latencies and amplitudes of ERP components accurately on an individual-subject basis and reconfirm earlier findings. Second, we demonstrate that in multisensory environments, MMN generators seem to reflect a significant overlap of "modality-specific" and "modality-independent" information processing, while P300 generators mark a shift toward completely "modality-independent" processing. Advancing the earlier understanding that multisensory contexts speed up early sensory processing, our EEG experiments reveal that this temporal facilitation extends to even the later components of prediction error processing. Such knowledge can be of value to clinical research for characterizing the key developmental stages of lifespan aging, schizophrenia, and depression.
86. Ueda S, Yakushijin R, Ishiguchi A. Variance aftereffect within and between sensory modalities for visual and auditory domains. Atten Percept Psychophys 2024; 86:1375-1385. [PMID: 37100981] [PMCID: PMC11093869] [DOI: 10.3758/s13414-023-02705-5]
Abstract
We can grasp various features of the outside world using summary statistics efficiently. Among these statistics, variance is an index of information homogeneity or reliability. Previous research has shown that visual variance information in the context of spatial integration is encoded directly as a unique feature, and currently perceived variance can be distorted by that of the preceding stimuli. In this study, we focused on variance perception in temporal integration. We investigated whether any variance aftereffects occurred in visual size and auditory pitch. Furthermore, to examine the mechanism of cross-modal variance perception, we also investigated whether variance aftereffects occur between different modalities. Four experimental conditions (a combination of sensory modalities of adaptor and test: visual-to-visual, visual-to-auditory, auditory-to-auditory, and auditory-to-visual) were conducted. Participants observed a sequence of visual or auditory stimuli perturbed in size or pitch with certain variance and performed a variance classification task before and after the variance adaptation phase. We found that, for visual size, within-modality adaptation to small or large variance resulted in a variance aftereffect, indicating that variance judgments are biased in the direction away from that of the adapting stimulus. For auditory pitch, within-modality adaptation to small variance caused a variance aftereffect. For cross-modal combinations, adaptation to small variance in visual size resulted in a variance aftereffect. However, the effect was weak, and no variance aftereffect occurred in the other conditions. These findings indicate that the variance information of sequentially presented stimuli is encoded independently in the visual and auditory domains.
87. Grenzebach J, Wegner TGG, Einhäuser W, Bendixen A. Bimodal moment-by-moment coupling in perceptual multistability. J Vis 2024; 24:16. [PMID: 38819806] [PMCID: PMC11146044] [DOI: 10.1167/jov.24.5.16]
Abstract
Multistable perception occurs in all sensory modalities, and there is ongoing theoretical debate about whether there are overarching mechanisms driving multistability across modalities. Here we study whether multistable percepts are coupled across vision and audition on a moment-by-moment basis. To assess perception simultaneously for both modalities without provoking a dual-task situation, we query auditory perception by direct report, while measuring visual perception indirectly via eye movements. A support-vector-machine (SVM)-based classifier allows us to decode visual perception from the eye-tracking data on a moment-by-moment basis. For each timepoint, we compare visual percept (SVM output) and auditory percept (report) and quantify the co-occurrence of integrated (one-object) or segregated (two-objects) interpretations in the two modalities. Our results show an above-chance coupling of auditory and visual perceptual interpretations. By titrating stimulus parameters toward an approximately symmetric distribution of integrated and segregated percepts for each modality and individual, we minimize the amount of coupling expected by chance. Because of the nature of our task, we can rule out that the coupling stems from postperceptual levels (i.e., decision or response interference). Our results thus indicate moment-by-moment perceptual coupling in the resolution of visual and auditory multistability, lending support to theories that postulate joint mechanisms for multistable perception across the senses.
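The decoding step, predicting the momentary visual percept from eye-tracking features with an SVM, can be sketched with scikit-learn. The feature set, labels, and cross-validation scheme below are assumptions for illustration, not the classifier pipeline used in the study:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Simulated per-timepoint eye-movement features (e.g., pursuit gain, horizontal
# velocity) and the to-be-decoded percept label
# (0 = integrated/one object, 1 = segregated/two objects). Illustrative only.
rng = np.random.default_rng(1)
X = rng.normal(size=(600, 4))                 # timepoints x features
y = (X[:, 0] + 0.5 * rng.normal(size=600) > 0).astype(int)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))

# Above-chance cross-validated accuracy indicates the percept can be decoded
# moment by moment from the oculomotor signal.
scores = cross_val_score(clf, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} ± {scores.std():.2f}")
```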
88. Neklyudova A, Kuramagomedova R, Voinova V, Sysoeva O. Atypical brain responses to 40-Hz click trains in girls with Rett syndrome: Auditory steady-state response and sustained wave. Psychiatry Clin Neurosci 2024; 78:282-290. [PMID: 38321640] [DOI: 10.1111/pcn.13638]
Abstract
AIM The current study aimed to infer neurophysiological mechanisms of auditory processing in children with Rett syndrome (RTT), a rare neurodevelopmental disorder caused by MECP2 mutations. We examined two brain responses elicited by 40-Hz click trains: the auditory steady-state response (ASSR), which reflects fine temporal analysis of auditory input, and the sustained wave (SW), which is associated with integral processing of the auditory signal. METHODS We recorded electroencephalograms in 43 patients with RTT (aged 2.92-17.1 years) and 43 typically developing children of the same age during 40-Hz click train auditory stimulation, which lasted for 500 ms and was presented with interstimulus intervals of 500 to 800 ms. A mixed-model ANCOVA with age as a covariate was used to compare the amplitudes of the ASSR and SW between groups, taking into account the temporal dynamics and topography of the responses. RESULTS The amplitude of the SW was atypically small in children with RTT starting from early childhood, with the difference from typically developing children decreasing with age. The ASSR showed a different pattern of developmental changes: the between-group difference was negligible in early childhood but increased with age as the ASSR increased in the typically developing group, but not in those with RTT. Moreover, the ASSR was associated with expressive speech development in patients, such that children who could use words had a more pronounced ASSR. CONCLUSION The ASSR and SW show promise as noninvasive electrophysiological biomarkers of auditory processing that have clinical relevance and can shed light on the link between genetic impairment and the RTT phenotype.
89. Jacoby N, Polak R, Grahn JA, Cameron DJ, Lee KM, Godoy R, Undurraga EA, Huanca T, Thalwitzer T, Doumbia N, Goldberg D, Margulis EH, Wong PCM, Jure L, Rocamora M, Fujii S, Savage PE, Ajimi J, Konno R, Oishi S, Jakubowski K, Holzapfel A, Mungan E, Kaya E, Rao P, Rohit MA, Alladi S, Tarr B, Anglada-Tort M, Harrison PMC, McPherson MJ, Dolan S, Durango A, McDermott JH. Commonality and variation in mental representations of music revealed by a cross-cultural comparison of rhythm priors in 15 countries. Nat Hum Behav 2024; 8:846-877. [PMID: 38438653] [PMCID: PMC11132990] [DOI: 10.1038/s41562-023-01800-9]
Abstract
Music is present in every known society but varies from place to place. What, if anything, is universal to music cognition? We measured a signature of mental representations of rhythm in 39 participant groups in 15 countries, spanning urban societies and Indigenous populations. Listeners reproduced random 'seed' rhythms; their reproductions were fed back as the stimulus (as in the game of 'telephone'), such that their biases (the prior) could be estimated from the distribution of reproductions. Every tested group showed a sparse prior with peaks at integer-ratio rhythms. However, the importance of different integer ratios varied across groups, often reflecting local musical practices. Our results suggest a common feature of music cognition: discrete rhythm 'categories' at small-integer ratios. These discrete representations plausibly stabilize musical systems in the face of cultural transmission but interact with culture-specific traditions to yield the diversity that is evident when mental representations are probed across many cultures.
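The iterated-reproduction ("telephone") logic, in which each reproduction becomes the next stimulus so that chains drift toward the listener's prior, can be illustrated with a toy one-dimensional simulation. The Gaussian prior, noise levels, and number of iterations are arbitrary assumptions; real rhythm space and the authors' estimation procedure are far richer:

```python
import numpy as np

rng = np.random.default_rng(7)

prior_mean, prior_sd = 0.5, 0.05     # assumed internal prior over an interval ratio
sensory_sd = 0.10                    # noise on hearing/reproducing the stimulus

def reproduce(stimulus):
    """One 'telephone' step: noisy percept combined with the prior (posterior mean),
    then reproduced with a little motor noise."""
    percept = stimulus + rng.normal(0, sensory_sd)
    w = prior_sd**2 / (prior_sd**2 + sensory_sd**2)     # weight on the percept
    posterior_mean = w * percept + (1 - w) * prior_mean
    return posterior_mean + rng.normal(0, 0.01)

# Start chains from random 'seed' ratios and iterate.
seeds = rng.uniform(0.2, 0.8, size=200)
chains = seeds.copy()
for _ in range(5):                   # five reproduction iterations per chain
    chains = np.array([reproduce(s) for s in chains])

# The distribution of final reproductions concentrates near the prior,
# which is how the prior is read out from the data.
print(f"seed mean/sd:  {seeds.mean():.2f}/{seeds.std():.2f}")
print(f"final mean/sd: {chains.mean():.2f}/{chains.std():.2f}")
```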
90. Ince MS, Guzel I, Akgor MC, Bahcelioglu M, Arikan KB, Okasha A, Sengezer S, Bolay H. Virtual dynamic interaction games reveal impaired multisensory integration in women with migraine. Headache 2024; 64:482-493. [PMID: 38693749] [DOI: 10.1111/head.14720]
Abstract
OBJECTIVE In this cross-sectional observational study, we aimed to investigate sensory profiles and multisensory integration processes in women with migraine using virtual dynamic interaction systems. BACKGROUND Compared to studies on unimodal sensory processing, fewer studies show that multisensory integration differs in patients with migraine. Multisensory integration of visual, auditory, verbal, and haptic modalities has not been evaluated in migraine. METHODS A 12-min virtual dynamic interaction game consisting of four parts was played by the participants. During the game, the participants were exposed to either visual stimuli only or multisensory stimuli in which auditory, verbal, and haptic stimuli were added to the visual stimuli. A total of 78 women participants (28 with migraine without aura and 50 healthy controls) were enrolled in this prospective exploratory study. Patients with migraine and healthy participants who met the inclusion criteria were randomized separately into visual and multisensory groups: migraine multisensory (14 adults), migraine visual (14 adults), healthy multisensory (25 adults), and healthy visual (25 adults). The Sensory Profile Questionnaire was utilized to assess the participants' sensory profiles. The game scores and survey results were analyzed. RESULTS With visual stimuli only, the gaming performance scores of patients with migraine without aura were similar to those of the healthy controls, at a median (interquartile range [IQR]) of 81.8 (79.5-85.8) and 80.9 (77.1-84.2) (p = 0.149). Error rates with visual stimuli in patients with migraine without aura were comparable to those of healthy controls, at a median (IQR) of 0.11 (0.08-0.13) and 0.12 (0.10-0.14), respectively (p = 0.166). With multisensory stimulation, the average gaming score was lower in patients with migraine without aura compared to healthy individuals (median [IQR] 82.2 [78.8-86.3] vs. 78.6 [74.0-82.4], p = 0.028). In women with migraine, exposure to a new sensory modality added to the visual stimuli in the fourth, seventh, and tenth rounds (median [IQR] 78.1 [74.1-82.0], 79.7 [77.2-82.5], 76.5 [70.2-82.1]) yielded lower game scores compared to visual stimuli only (median [IQR] 82.3 [77.9-87.8], 84.2 [79.7-85.6], 80.8 [79.0-85.7], p = 0.044, p = 0.049, p = 0.016). According to the Sensory Profile Questionnaire results, sensory sensitivity and sensory avoidance scores of patients with migraine (median [IQR] score 45.5 [41.0-54.7] and 47.0 [41.5-51.7]) were significantly higher than those of healthy participants (median [IQR] score 39.0 [34.0-44.2] and 40.0 [34.0-48.0], p < 0.001, p = 0.001). CONCLUSION The virtual dynamic game approach showed for the first time that the gaming performance of patients with migraine without aura was negatively affected by the addition of auditory, verbal, and haptic stimuli onto visual stimuli. Multisensory integration of sensory modalities, including haptic stimuli, is disturbed even in the interictal period in women with migraine. Virtual games can be employed to assess the impact of sensory problems in the course of the disease. Also, sensory training could be a potential therapy target to improve multisensory processing in migraine.
91. Bao W, Alain C, Thaut M, Molnar M. Is there a bilingual advantage in auditory attention among children? A systematic review and meta-analysis of standardized auditory attention tests. PLoS One 2024; 19:e0299393. [PMID: 38691540] [PMCID: PMC11062550] [DOI: 10.1371/journal.pone.0299393]
Abstract
A wealth of research has investigated the associations between bilingualism and cognition, especially with regard to executive function. Some developmental studies reveal different cognitive profiles between monolinguals and bilinguals in visual or audio-visual attention tasks, which might stem from differences in attention allocation. Yet, whether such a distinction exists in the auditory domain alone is unknown. In this study, we compared differences in auditory attention, measured by standardized tests, between monolingual and bilingual children. A comprehensive literature search was conducted in three electronic databases: OVID Medline, OVID PsycInfo, and EBSCO CINAHL. Twenty studies using standardized tests to assess auditory attention in monolingual and bilingual participants aged less than 18 years were identified. We assessed the quality of these studies using a scoring tool for evaluating primary research. For statistical analysis, we pooled the effect sizes in a random-effects meta-analytic model, where between-study heterogeneity was quantified using the I2 statistic. No substantial publication bias was observed based on the funnel plot. Further, meta-regression modelling suggests that the test measure (accuracy vs. response times) significantly affected the studies' effect sizes, whereas other factors (e.g., participant age, stimulus type) did not. Specifically, studies reporting accuracy observed marginally greater accuracy in bilinguals (g = 0.10), whereas those reporting response times indicated faster latencies in monolinguals (g = -0.34). There was little difference between monolingual and bilingual children's performance on standardized auditory attention tests. We also found that studies tend to include a wide variety of bilingual children but report limited language-background information about the participants. This, unfortunately, limits the potential theoretical contributions of the reviewed studies. Recommendations to improve the quality of future research are discussed.
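Random-effects pooling with an I² heterogeneity estimate, as described above, follows a standard recipe (here DerSimonian-Laird); a compact sketch with placeholder effect sizes rather than the review's data:

```python
import numpy as np

# Per-study effect sizes (Hedges' g) and their sampling variances (illustrative).
g = np.array([0.10, -0.34, 0.05, 0.22, -0.12])
v = np.array([0.04, 0.06, 0.03, 0.05, 0.07])

# Fixed-effect weights and Cochran's Q.
w = 1 / v
g_fixed = np.sum(w * g) / np.sum(w)
Q = np.sum(w * (g - g_fixed) ** 2)
df = len(g) - 1

# DerSimonian-Laird estimate of between-study variance, and I^2.
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - df) / c)
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

# Random-effects pooled estimate.
w_re = 1 / (v + tau2)
g_pooled = np.sum(w_re * g) / np.sum(w_re)
se_pooled = np.sqrt(1 / np.sum(w_re))
print(f"pooled g = {g_pooled:.2f} (SE {se_pooled:.2f}), "
      f"tau^2 = {tau2:.3f}, I^2 = {I2:.0f}%")
```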
92. Cantarella G, Mioni G, Bisiacchi PS. Young adults and multisensory time perception: Visual and auditory pathways in comparison. Atten Percept Psychophys 2024; 86:1386-1399. [PMID: 37674041] [PMCID: PMC11093818] [DOI: 10.3758/s13414-023-02773-7]
Abstract
The brain continuously encodes information about time, but how sensorial channels interact to achieve a stable representation of such ubiquitous information still needs to be determined. According to recent research, children show a potential interference in multisensory conditions, leading to a trade-off between two senses (sight and audition) when considering time-perception tasks. This study aimed to examine how healthy young adults behave when performing a time-perception task. In Experiment 1, we tested the effects of temporary sensory deprivation on both visual and auditory senses in a group of young adults. In Experiment 2, we compared the temporal performances of young adults in the auditory modality with those of two samples of children (sighted and sighted but blindfolded) selected from a previous study. Statistically significant results emerged when comparing the two pathways: young adults overestimated and showed a higher sensitivity to time in the auditory modality compared to the visual modality. Restricting visual and auditory input did not affect their time sensitivity. Moreover, children were more accurate at estimating time than young adults after a transient visual deprivation. This implies that as we mature, sensory deprivation does not constitute a benefit to time perception, and supports the hypothesis of a calibration process between senses with age. However, more research is needed to determine how this calibration process affects the developmental trajectories of time perception.
93. Ahlfors SP, Graham S, Bharadwaj H, Mamashli F, Khan S, Joseph RM, Losh A, Pawlyszyn S, McGuiggan NM, Vangel M, Hämäläinen MS, Kenet T. No Differences in Auditory Steady-State Responses in Children with Autism Spectrum Disorder and Typically Developing Children. J Autism Dev Disord 2024; 54:1947-1960. [PMID: 36932270] [DOI: 10.1007/s10803-023-05907-w]
Abstract
Auditory steady-state response (ASSR) has been studied as a potential biomarker for abnormal auditory sensory processing in autism spectrum disorder (ASD), with mixed results. Motivated by prior somatosensory findings of group differences in inter-trial coherence (ITC) between ASD and typically developing (TD) individuals at twice the steady-state stimulation frequency, we examined ASSR at 25 and 50 as well as 43 and 86 Hz in response to 25-Hz and 43-Hz auditory stimuli, respectively, using magnetoencephalography. Data were recorded from 22 ASD and 31 TD children, ages 6-17 years. ITC measures showed prominent ASSRs at the stimulation and double frequencies, without significant group differences. These results do not support ASSR as a robust ASD biomarker of abnormal auditory processing in ASD. Furthermore, the previously observed atypical double-frequency somatosensory response in ASD did not generalize to the auditory modality. Thus, the hypothesis about modality-independent abnormal local connectivity in ASD was not supported.
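Inter-trial coherence at a target frequency is the length of the mean unit phase vector across trials. A minimal NumPy sketch with simulated single-channel trials (the sampling rate, trial count, and noise level are assumptions, not the MEG pipeline used in the study):

```python
import numpy as np

fs = 1000                                 # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)             # 1-s analysis window
n_trials = 100
rng = np.random.default_rng(3)

# Simulated single-channel trials: a phase-locked 43-Hz ASSR plus noise.
trials = (0.5 * np.sin(2 * np.pi * 43 * t)
          + rng.normal(0, 1.0, (n_trials, t.size)))

def itc(trials, fs, freq):
    """Inter-trial coherence: magnitude of the mean unit phase vector at freq."""
    spectrum = np.fft.rfft(trials, axis=1)
    freqs = np.fft.rfftfreq(trials.shape[1], 1 / fs)
    idx = np.argmin(np.abs(freqs - freq))
    phase_vectors = spectrum[:, idx] / np.abs(spectrum[:, idx])
    return np.abs(phase_vectors.mean())

for f in (43, 86):                        # stimulation frequency and its double
    print(f"ITC at {f} Hz: {itc(trials, fs, f):.2f}")
```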
94. Nguyen T, Lagacé-Cusiac R, Everling JC, Henry MJ, Grahn JA. Audiovisual integration of rhythm in musicians and dancers. Atten Percept Psychophys 2024; 86:1400-1416. [PMID: 38557941] [DOI: 10.3758/s13414-024-02874-x]
Abstract
Music training is associated with better beat processing in the auditory modality. However, it is unknown how rhythmic training that emphasizes visual rhythms, such as dance training, might affect beat processing, nor whether training effects in general are modality specific. Here we examined how music and dance training interacted with modality during audiovisual integration and synchronization to auditory and visual isochronous sequences. In two experiments, musicians, dancers, and controls completed an audiovisual integration task and an audiovisual target-distractor synchronization task using dynamic visual stimuli (a bouncing figure). The groups performed similarly on the audiovisual integration tasks (Experiments 1 and 2). However, in the finger-tapping synchronization task (Experiment 1), musicians were more influenced by auditory distractors when synchronizing to visual sequences, while dancers were more influenced by visual distractors when synchronizing to auditory sequences. When participants synchronized with whole-body movements instead of finger-tapping (Experiment 2), all groups were more influenced by the visual distractor than the auditory distractor. Taken together, this study highlights how training is associated with audiovisual processing, and how different types of visual rhythmic stimuli and different movements alter beat perception and production outcome measures. Implications for the modality appropriateness hypothesis are discussed.
95. Bernal-Berdun E, Vallejo M, Sun Q, Serrano A, Gutierrez D. Modeling the Impact of Head-Body Rotations on Audio-Visual Spatial Perception for Virtual Reality Applications. IEEE Transactions on Visualization and Computer Graphics 2024; 30:2624-2632. [DOI: 10.1109/tvcg.2024.3372112]
Abstract
Humans perceive the world by integrating multimodal sensory feedback, including visual and auditory stimuli, which holds true in virtual reality (VR) environments. Proper synchronization of these stimuli is crucial for perceiving a coherent and immersive VR experience. In this work, we focus on the interplay between audio and vision during localization tasks involving natural head-body rotations. We explore the impact of audio-visual offsets and rotation velocities on users' directional localization acuity for various viewing modes. Using psychometric functions, we model perceptual disparities between visual and auditory cues and determine offset detection thresholds. Our findings reveal that target localization accuracy is affected by perceptual audio-visual disparities during head-body rotations, but remains consistent in the absence of stimuli-head relative motion. We then showcase the effectiveness of our approach in predicting and enhancing users' localization accuracy within realistic VR gaming applications. To provide additional support for our findings, we implement a natural VR game wherein we apply a compensatory audio-visual offset derived from our measured psychometric functions. As a result, we demonstrate a substantial improvement of up to 40% in participants' target localization accuracy. We additionally provide guidelines for content creation to ensure coherent and seamless VR experiences.
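Offset-detection thresholds of this kind are usually read off a fitted psychometric function. A brief scipy sketch fitting a cumulative Gaussian to detection proportions; the offsets, response rates, lapse rate, and 75% threshold criterion are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Audio-visual offsets (ms) and proportion of 'offset detected' responses (illustrative).
offsets = np.array([0, 40, 80, 120, 160, 200, 240])
p_detect = np.array([0.05, 0.10, 0.30, 0.55, 0.80, 0.92, 0.97])

def psychometric(x, mu, sigma, lapse=0.02):
    """Cumulative-Gaussian psychometric function with a small fixed lapse rate."""
    return lapse + (1 - 2 * lapse) * norm.cdf(x, loc=mu, scale=sigma)

# With p0 of length 2, only mu and sigma are fitted; lapse keeps its default.
(mu, sigma), _ = curve_fit(psychometric, offsets, p_detect, p0=[120, 50])

# Define the detection threshold as the offset detected on 75% of trials.
threshold = norm.ppf((0.75 - 0.02) / (1 - 2 * 0.02), loc=mu, scale=sigma)
print(f"fitted mu = {mu:.0f} ms, sigma = {sigma:.0f} ms, 75% threshold ≈ {threshold:.0f} ms")
```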
96. Huang YT, Wu CT, Koike S, Chao ZC. Dissecting Mismatch Negativity: Early and Late Subcomponents for Detecting Deviants in Local and Global Sequence Regularities. eNeuro 2024; 11:ENEURO.0050-24.2024. [PMID: 38702187] [PMCID: PMC11103647] [DOI: 10.1523/eneuro.0050-24.2024]
Abstract
Mismatch negativity (MMN) is commonly recognized as a neural signal of prediction error evoked by deviants from the expected patterns of sensory input. Studies show that MMN diminishes when sequence patterns become more predictable over a longer timescale. This implies that MMN is composed of multiple subcomponents, each responding to different levels of temporal regularities. To probe the hypothesized subcomponents in MMN, we record human electroencephalography during an auditory local-global oddball paradigm where the tone-to-tone transition probability (local regularity) and the overall sequence probability (global regularity) are manipulated to control temporal predictabilities at two hierarchical levels. We find that the size of MMN is correlated with both probabilities and the spatiotemporal structure of MMN can be decomposed into two distinct subcomponents. Both subcomponents appear as negative waveforms, with one peaking early in the central-frontal area and the other late in a more frontal area. With a quantitative predictive coding model, we map the early and late subcomponents to the prediction errors that are tied to local and global regularities, respectively. Our study highlights the hierarchical complexity of MMN and offers an experimental and analytical platform for developing a multitiered neural marker applicable in clinical settings.
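The two levels of regularity can be summarized as nested probabilities: a tone-to-tone (local) regularity within each short sequence and a sequence-type (global) regularity across the block. A toy block generator in the spirit of the classic local-global design; the sequence codes and probabilities are assumptions, not the study's stimulus parameters:

```python
import numpy as np

rng = np.random.default_rng(11)

def make_block(frequent="xxxxY", rare="xxxxx", n_sequences=100, p_frequent=0.8):
    """Generate one local-global block.

    In 'xxxxY' the last tone violates the local (tone-to-tone) regularity;
    whichever sequence type is rare within the block violates the global
    (sequence-level) regularity, independently of its local status."""
    return [frequent if rng.random() < p_frequent else rare
            for _ in range(n_sequences)]

# Block where the locally deviant sequence is globally common:
block = make_block()
print(f"global deviants ('xxxxx'): {block.count('xxxxx')} / {len(block)}")

# Swapping the roles gives the complementary block of the design.
block_rev = make_block(frequent="xxxxx", rare="xxxxY")
print(f"global deviants ('xxxxY'): {block_rev.count('xxxxY')} / {len(block_rev)}")
```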
97. Abrams EB, Namballa R, He R, Poeppel D, Ripollés P. Elevator music as a tool for the quantitative characterization of reward. Ann N Y Acad Sci 2024; 1535:121-136. [PMID: 38566486] [DOI: 10.1111/nyas.15131]
Abstract
While certain musical genres and songs are widely popular, there is still large variability in the music that individuals find rewarding or emotional, even among those with a similar musical enculturation. Interestingly, there is one Western genre that is intended to attract minimal attention and evoke a mild emotional response: elevator music. In a series of behavioral experiments, we show that elevator music consistently elicits low pleasure and surprise. Participants reported elevator music as being less pleasurable than music from popular genres, even when participants did not regularly listen to the comparison genre. Participants reported elevator music to be familiar even when they had not explicitly heard the presented song before. Computational and behavioral measures of surprisal showed that elevator music was less surprising, and thus more predictable, than other well-known genres. Elevator music covers of popular songs were rated as less pleasurable, surprising, and arousing than their original counterparts. Finally, we used elevator music as a control for self-selected rewarding songs in a proof-of-concept physiological (electrodermal activity and piloerection) experiment. Our results suggest that elevator music elicits low emotional responses consistently across Western music listeners, making it a unique control stimulus for studying musical novelty, pleasure, and surprise.
98. Charles A, Henaut Y, Saint-Jalme M, Mulot B, Lecu A, Delfour F. Visual and acoustic exploratory behaviors toward novel stimuli in Antillean manatees (Trichechus manatus manatus) under human care. J Comp Psychol 2024; 138:118-129. [PMID: 38095927] [DOI: 10.1037/com0000360]
Abstract
Exploratory behaviors describe the actions performed by an animal to obtain information on an object, environment, or individual by using its different senses. Exploration is described in some marine mammals, but not yet in manatees. Our study investigated behavioral and acoustic responses of two groups of Antillean manatees (N = 12 and N = 4) housed in zoological parks toward various stimuli involving three sensory modalities: visual, tactile, and auditory. Simultaneous audio and video recordings were collected during three periods of time (i.e., before, during, and after the presentation of all stimuli). Behaviors related to interest, social behaviors, the number and type of calls produced, and their frequency and duration were recorded and analyzed. Manatees reacted more to submerged stimuli than to out-of-water and sound stimuli, with an increase in approach, social contacts, and number of vocalizations. The proportion of squeak and squeal call types also varied according to stimuli, and call entropy and F0 range varied according to periods. Our results suggest that manatees display sensory preferences when exploring stimuli, with more interest in manipulable stimuli, supporting the importance of their somatic perception. We highlight the need for particular enrichment programs (i.e., involving submerged objects) in zoological facilities. By displaying social contacts and by producing vocalizations, manatees communicate information such as their motivational state. The increase in call rate, harsh calls, and entropy values could be valid indicators of heightened arousal. We encourage further studies to associate acoustic recordings with ethological data collection to increase the understanding of manatees' behaviors and perception.
99. Kausel L, Zamorano F, Billeke P, Sutherland ME, Alliende MI, Larrain-Valenzuela J, Soto-Icaza P, Aboitiz F. Theta and alpha oscillations may underlie improved attention and working memory in musically trained children. Brain Behav 2024; 14:e3517. [PMID: 38702896] [PMCID: PMC11069029] [DOI: 10.1002/brb3.3517]
Abstract
INTRODUCTION Attention and working memory are key cognitive functions that allow us to select and maintain information in our mind for a short time, being essential for our daily life and, in particular, for learning and academic performance. It has been shown that musical training can improve working memory performance, but it is still unclear if and how the neural mechanisms of working memory, and particularly attention, are implicated in this process. In this work, we aimed to identify the oscillatory signature of bimodal attention and working memory that contributes to improved working memory in musically trained children. MATERIALS AND METHODS We recruited children with and without musical training and asked them to complete a bimodal (auditory/visual) attention and working memory task while their brain activity was measured using electroencephalography. Behavioral, time-frequency, and source reconstruction analyses were performed. RESULTS Results showed that, overall, musically trained children performed better on the task than children without musical training. When comparing musically trained children with children without musical training, we found modulations in the alpha band before and at the beginning of stimulus onset in frontal and parietal regions. These correlated with correct responses to the attended modality. Moreover, during the final phase of stimulus presentation, we found modulations correlating with correct responses independent of attention condition in the theta and alpha bands, in the left frontal and right parietal regions. CONCLUSIONS These results suggest that musically trained children have improved neuronal mechanisms for both attention allocation and memory encoding. Our results can be important for developing interventions for people with attention and working memory difficulties.
100. Wadle SL, Ritter TC, Wadle TTX, Hirtz JJ. Topography and Ensemble Activity in the Auditory Cortex of a Mouse Model of Fragile X Syndrome. eNeuro 2024; 11:ENEURO.0396-23.2024. [PMID: 38627066] [PMCID: PMC11097631] [DOI: 10.1523/eneuro.0396-23.2024]
Abstract
Autism spectrum disorder (ASD) is often associated with social communication impairments and specific sound processing deficits, for example, problems in following speech in noisy environments. To investigate underlying neuronal processing defects located in the auditory cortex (AC), we performed two-photon Ca2+ imaging in FMR1 (fragile X messenger ribonucleoprotein 1) knock-out (KO) mice, a model for fragile X syndrome (FXS), the most common cause of hereditary ASD in humans. For primary AC (A1) and the anterior auditory field (AAF), topographic frequency representation was less ordered compared with control animals. We additionally analyzed ensemble AC activity in response to various sounds and found subfield-specific differences. In A1, ensemble correlations were lower in general, while in secondary AC (A2), correlations were higher in response to complex sounds, but not to pure tones. Furthermore, sound specificity of ensemble activity was decreased in AAF. Repeating these experiments 1 week later revealed no major differences regarding representational drift. Nevertheless, we found subfield- and genotype-specific changes in ensemble correlation values between the two times points, hinting at alterations in network stability in FMR1 KO mice. These detailed insights into AC network activity and topography in FMR1 KO mice add to the understanding of auditory processing defects in FXS.
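Ensemble correlations of the kind analyzed here are typically pairwise correlations between population response vectors evoked by different sounds. A brief sketch with simulated trial-averaged responses (dimensions and values are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated trial-averaged responses: (n_sounds, n_neurons) deltaF/F amplitudes.
n_sounds, n_neurons = 12, 200
responses = rng.normal(0, 1, (n_sounds, n_neurons))

# Pairwise correlations between the population vectors evoked by each sound.
ensemble_corr = np.corrcoef(responses)          # (n_sounds, n_sounds)

# Mean off-diagonal correlation: higher values mean less sound-specific ensembles.
off_diag = ensemble_corr[~np.eye(n_sounds, dtype=bool)]
print(f"mean ensemble correlation: {off_diag.mean():.2f}")
```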