26
Arutiunian V, Arcara G, Buyanova I, Fedorov M, Davydova E, Pereverzeva D, Sorokin A, Tyushkevich S, Mamokhina U, Danilina K, Dragoy O. Abnormalities in both stimulus-induced and baseline MEG alpha oscillations in the auditory cortex of children with Autism Spectrum Disorder. Brain Struct Funct 2024; 229:1225-1242. [PMID: 38683212] [DOI: 10.1007/s00429-024-02802-7]
Abstract
The neurobiology of Autism Spectrum Disorder (ASD) is hypothetically related to the imbalance between neural excitation (E) and inhibition (I). Different studies have revealed that alpha-band (8-12 Hz) activity in magneto- and electroencephalography (MEG and EEG) may reflect E and I processes and, thus, can be of particular interest in ASD research. Previous findings indicated alterations in event-related and baseline alpha activity in different cortical systems in individuals with ASD, and these abnormalities were associated with core and co-occurring conditions of ASD. However, the knowledge on auditory alpha oscillations in this population is limited. This MEG study investigated stimulus-induced (Event-Related Desynchronization, ERD) and baseline alpha-band activity (both periodic and aperiodic) in the auditory cortex and also the relationships between these neural activities and behavioral measures of children with ASD. Ninety amplitude-modulated tones were presented to two groups of children: 20 children with ASD (5 girls, Mage = 10.03, SD = 1.7) and 20 typically developing controls (9 girls, Mage = 9.11, SD = 1.3). Children with ASD had a bilateral reduction of alpha-band ERD, reduced baseline aperiodic-adjusted alpha power, and flattened aperiodic exponent in comparison to TD children. Moreover, lower raw baseline alpha power and aperiodic offset in the language-dominant left auditory cortex were associated with better language skills of children with ASD measured in formal assessment. The findings highlighted the alterations of E / I balance metrics in response to basic auditory stimuli in children with ASD and also provided evidence for the contribution of low-level processing to language difficulties in ASD.
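The "aperiodic-adjusted" alpha power and "aperiodic exponent" referred to above come from decomposing the power spectrum into periodic (oscillatory) and 1/f-like aperiodic components. As a rough illustration only (not the authors' pipeline, which would typically use a dedicated tool such as specparam/FOOOF), the aperiodic exponent can be estimated as the negative slope of a straight line fit to the spectrum in log-log space:

```python
import numpy as np

def aperiodic_exponent(freqs, psd):
    """Estimate the aperiodic (1/f) exponent and offset of a power spectrum.

    Fits a straight line to the spectrum in log-log coordinates; the
    exponent is the negative slope and the offset is the intercept.
    Deliberately minimal: real analyses also model periodic peaks
    (e.g., the alpha band) before fitting the aperiodic component.
    """
    logf = np.log10(freqs)
    logp = np.log10(psd)
    slope, intercept = np.polyfit(logf, logp, 1)
    return -slope, intercept

# Synthetic spectrum with a known exponent of 1.5 and no oscillatory peak.
freqs = np.linspace(1, 40, 200)
psd = 10.0 / freqs**1.5
exponent, offset = aperiodic_exponent(freqs, psd)
print(round(exponent, 2))  # prints 1.5 for this noiseless power law
```

A "flattened" exponent, as reported for the ASD group, corresponds to a shallower log-log slope, which is commonly interpreted as a shift of the E/I balance toward excitation.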
27
Delussi M, Valt C, Silvestri A, Ricci K, Ladisa E, Ammendola E, Rampino A, Pergola G, de Tommaso M. Auditory mismatch negativity in pre-manifest and manifest Huntington's disease. Clin Neurophysiol 2024; 162:121-128. [PMID: 38603947] [DOI: 10.1016/j.clinph.2024.03.020]
Abstract
AIM The aim of this study was to investigate the characteristics of the electrophysiological brain response elicited in a passive acoustic oddball paradigm, i.e. mismatch negativity (MMN), in patients with Huntington's disease (HD) in the premanifest (pHD) and manifest (mHD) phases. In this regard, we correlated the results of event-related potentials (ERP) with disease characteristics. METHODS This was an observational cross-sectional MMN study. In addition to the MMN recording of the passive oddball task, all subjects with first-degree inheritance for HD underwent genetic testing for mutant HTT, the Huntington's Disease Rating Scale, the Total Functional Capacity Scale, the Problem Behaviors Assessment short form, and the Mini-Mental State Examination. RESULTS We found that global field power (GFP) was reduced in the MMN time window in mHD patients compared to pHD and normal controls (NC). In the pHD group, MMN amplitude was only slightly and not significantly increased compared to mHD, while pHD patients showed increased theta coherence between trials compared to mHD. In the entire sample of HD gene carriers, the main MMN traits were not correlated with motor performance, cognitive impairment and functional disability. CONCLUSION These results suggest an initial and subtle deterioration of pre-attentive mechanisms in the presymptomatic phase of HD, with an increasing phase shift in the MMN time frame. This result could indicate initial functional changes with a possible compensatory effect. SIGNIFICANCE An initial and slight decrease in MMN associated with increased phase coherence in the corresponding EEG frequencies could indicate an early functional involvement of pre-attentive resources that could precede the clinical expression of HD.
28
de la Torre A, Sanchez I, Alvarez IM, Segura JC, Valderrama JT, Muller N, Vargas JL. Multi-response deconvolution of auditory evoked potentials in a reduced representation space. J Acoust Soc Am 2024; 155:3639-3653. [PMID: 38836771] [DOI: 10.1121/10.0026228]
Abstract
The estimation of auditory evoked potentials requires deconvolution when the duration of the responses to be recovered exceeds the inter-stimulus interval. Starting from least squares deconvolution, in this article we extend the procedure to a multi-response convolutional model, that is, a model in which different categories of stimulus are expected to evoke different responses. The computational cost of multi-response deconvolution increases significantly with the number of responses to be deconvolved, which restricts its applicability in practical situations. To alleviate this restriction, we propose performing the multi-response deconvolution in a reduced representation space associated with a latency-dependent filtering of auditory responses, which provides a significant dimensionality reduction. We demonstrate the practical viability of multi-response deconvolution with auditory responses evoked by clicks presented at different levels and categorized according to their stimulation level. The multi-response deconvolution applied in a reduced representation space provides the least squares estimate of the responses at a reasonable computational load. MATLAB/Octave code implementing the proposed procedure is included as supplementary material.
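The multi-response convolutional model described above can be sketched as an ordinary linear model: stack one convolution-matrix block per stimulus category and solve for all responses jointly by least squares. This is an illustrative toy (arbitrary noiseless responses and onsets chosen here for the demo), not the paper's MATLAB/Octave implementation and without its reduced-representation speedup:

```python
import numpy as np

def deconvolve(eeg, onsets_by_category, resp_len):
    """Joint least-squares deconvolution of overlapping evoked responses.

    Builds one design-matrix block per stimulus category (each block is
    a sum of shifted identity blocks, one per stimulus onset) and solves
    the resulting linear system for all category responses at once.
    """
    n = len(eeg)
    blocks = []
    for onsets in onsets_by_category:
        X = np.zeros((n, resp_len))
        for t0 in onsets:
            for k in range(min(resp_len, n - t0)):
                X[t0 + k, k] += 1.0
        blocks.append(X)
    X = np.hstack(blocks)
    coef, *_ = np.linalg.lstsq(X, eeg, rcond=None)
    return coef.reshape(len(onsets_by_category), resp_len)

# Two categories with known responses; inter-stimulus intervals are
# shorter than the response length, so the responses overlap in the signal.
r1 = np.array([0.0, 1.0, 0.5, 0.25])
r2 = np.array([0.0, -1.0, -0.5, 0.0])
onsets = [[0, 7, 13], [3, 10, 18]]
sig = np.zeros(24)
for t0 in onsets[0]:
    sig[t0:t0 + 4] += r1
for t0 in onsets[1]:
    sig[t0:t0 + 4] += r2
est = deconvolve(sig, onsets, 4)  # recovers r1 and r2 exactly (no noise)
```

With jittered onsets the design matrix is full rank, so the noiseless toy recovers both responses exactly; strictly periodic, equal-period onsets would instead make columns of the two blocks collinear, which is one reason deconvolution paradigms use jittered inter-stimulus intervals.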
29
Bell A, Toh WL, Allen P, Cella M, Jardri R, Larøi F, Moseley P, Rossell SL. Examining the relationships between cognition and auditory hallucinations: A systematic review. Aust N Z J Psychiatry 2024; 58:467-497. [PMID: 38470085] [PMCID: PMC11128145] [DOI: 10.1177/00048674241235849]
Abstract
OBJECTIVE Auditory hallucinations (hearing voices) have been associated with a range of altered cognitive functions, pertaining to signal detection, source-monitoring, memory, inhibition and language processes. Yet, empirical results are inconsistent. Despite this, several theoretical models of auditory hallucinations persist, alongside increasing emphasis on the utility of a multidimensional framework. Thus, clarification of current evidence across the broad scope of proposed mechanisms is warranted. METHOD A systematic search of the Web of Science, PubMed and Scopus databases was conducted. Records were screened to confirm the use of an objective behavioural cognitive task, and valid measurement of hallucinations specific to the auditory modality. RESULTS Auditory hallucinations were primarily associated with difficulties in perceptual decision-making (i.e. reduced sensitivity/accuracy for signal-noise discrimination; liberal responding to ambiguity), source-monitoring (i.e. self-other and temporal context confusion), working memory and language function (i.e. reduced verbal fluency). Mixed or limited support was observed for perceptual feature discrimination, imagery vividness/illusion susceptibility, source-monitoring for stimulus form and spatial context, recognition and recall memory, executive functions (e.g. attention, inhibition), emotion processing and language comprehension/hemispheric organisation. CONCLUSIONS Findings were considered within predictive coding and self-monitoring frameworks. Of concern was the portion of studies which, despite offering auditory-hallucination-specific aims and inferences, employed modality-general measures and/or diagnostic-based contrasts with psychologically healthy individuals. This review highlights disparities within the literature between theoretical conceptualisations of auditory hallucinations and the body of rigorous empirical evidence supporting such inferences. Future cognitive investigations, beyond the schizophrenia-spectrum, which explicitly define and measure the timeframe and sensory modality of hallucinations, are recommended.
30
Wang B, Otten LJ, Schulze K, Afrah H, Varney L, Cotic M, Saadullah Khani N, Linden JF, Kuchenbaecker K, McQuillin A, Hall MH, Bramon E. Is auditory processing measured by the N100 an endophenotype for psychosis? A family study and a meta-analysis. Psychol Med 2024; 54:1559-1572. [PMID: 37997703] [DOI: 10.1017/s0033291723003409]
Abstract
BACKGROUND The N100, an early auditory event-related potential, has been found to be altered in patients with psychosis. However, it is unclear if the N100 is a psychosis endophenotype that is also altered in the relatives of patients. METHODS We conducted a family study using the auditory oddball paradigm to compare the N100 amplitude and latency across 243 patients with psychosis, 86 unaffected relatives, and 194 controls. We then conducted a systematic review and a random-effects meta-analysis pooling our results and 14 previously published family studies. We compared data from a total of 999 patients, 1192 relatives, and 1253 controls in order to investigate the evidence and degree of N100 differences. RESULTS In our family study, patients showed reduced N100 amplitudes and prolonged N100 latencies compared to controls, but no significant differences were found between unaffected relatives and controls. The meta-analysis revealed a significant reduction of the N100 amplitude and delay of the N100 latency in both patients with psychosis (standardized mean difference [s.m.d.] = -0.48 for N100 amplitude and s.m.d. = 0.43 for N100 latency) and their relatives (s.m.d. = -0.19 for N100 amplitude and s.m.d. = 0.33 for N100 latency). However, only the N100 latency changes in relatives remained significant when excluding studies with affected relatives. CONCLUSIONS N100 changes, especially prolonged N100 latencies, are present in both patients with psychosis and their relatives, making the N100 a promising endophenotype for psychosis. Such changes in the N100 may reflect changes in early auditory processing underlying the etiology of psychosis.
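Random-effects pooling of standardized mean differences like those reported above is conventionally done with an inverse-variance model such as DerSimonian-Laird. A minimal sketch with made-up study values (these numbers are illustrative, not the studies pooled in the paper):

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes with a DerSimonian-Laird
    random-effects model: estimate the between-study variance (tau^2)
    from Cochran's Q, then reweight studies by 1/(v_i + tau^2).
    """
    effects = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                  # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)      # fixed-effect pooled SMD
    q = np.sum(w * (effects - fixed) ** 2)       # Cochran's Q heterogeneity
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                # between-study variance
    w_re = 1.0 / (v + tau2)                      # random-effects weights
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se, tau2

# Hypothetical N100-amplitude SMDs from five studies (not the paper's data).
smd = [-0.48, -0.30, -0.55, -0.20, -0.40]
var = [0.02, 0.03, 0.04, 0.05, 0.02]
pooled, se, tau2 = dersimonian_laird(smd, var)
```

When estimated heterogeneity is low (Q below its degrees of freedom), tau^2 truncates to zero and the random-effects estimate coincides with the fixed-effect one; with substantial heterogeneity, small studies gain relative weight.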
31
Kong G, Aberkane C, Desoche C, Farnè A, Vernet M. No evidence in favor of the existence of "intentional" binding. J Exp Psychol Hum Percept Perform 2024; 50:626-635. [PMID: 38635224] [DOI: 10.1037/xhp0001204]
Abstract
Intentional binding refers to the subjective temporal compression between a voluntary action and its subsequent sensory outcome. Despite some studies challenging the link between temporal compression and intentional action, intentional binding is still widely used as an implicit measure for the sense of agency. The debate remains unsettled primarily because the experimental conditions used in previous studies were confounded with various alternative causes for temporal compression, and action intention has not yet been tested comprehensively against all potential alternative causes in a single study. Here, we address this puzzle by jointly comparing participants' estimates of the interval between three types of triggering events with comparable predictability (voluntary movement, passive movement, and external sensory event) and an external sensory outcome (auditory or visual across experiments). The results failed to show intentional binding, that is, no shorter interval estimation for the voluntary than the passive movement conditions. Instead, we observed temporal (but not intentional) binding when comparing both movement conditions with the external sensory condition. Thus, temporal binding appears to originate from sensory integration and temporal prediction, not from action intention. As such, these findings underscore the need to reconsider the use of "intentional binding" as a reliable proxy of the sense of agency.
32
Deniz B, Deniz R, Ataş A. Loudness Balancing Optimization for Better Speech Intelligibility, Music Perception, and Spectral Temporal Resolution in Cochlear Implant Users. Otol Neurotol 2024; 45:e385-e392. [PMID: 38518764] [DOI: 10.1097/mao.0000000000004164]
Abstract
HYPOTHESIS Behaviorally based programming with loudness balancing (LB) would result in better speech understanding, spectral-temporal resolution, and music perception scores, and there would be a relationship between these scores. BACKGROUND Loudness imbalances at upper stimulation levels may cause sounds to be perceived as irregular, gravelly, or overly echoed and may negatively affect the listening performance of the cochlear implant (CI) user. LB should be performed after fitting to overcome these problems. METHODS The study included 26 unilateral Med-EL CI users. Two different CI programs, one based on the objective electrically evoked stapedial reflex threshold (P1) and one programmed behaviorally with LB (P2), were recorded for each participant. The Turkish Matrix Sentence Test (TMS) was applied to evaluate speech perception; the Random Gap Detection Test (RGDT) and Spectral-Temporally Modulated Ripple Test (SMRT) were applied to evaluate spectral temporal resolution skills; the Mini Profile of Music Perception Skills (mini-PROMS) and Melodic Contour Identification (MCI) tests were applied to evaluate music perception, and the results were compared. RESULTS Significantly better scores were obtained with P2 in TMS tests performed in noise and quiet. SMRT scores were significantly correlated with TMS in quiet and noise, and with mini-PROMS sound perception results. Although better scores were obtained with P2 in the mini-PROMS total score and MCI, a significant difference was found only for MCI. CONCLUSION The data from the current study showed that equalization of loudness across CI electrodes leads to better perceptual acuity. It also revealed the relationship between speech perception, spectral-temporal resolution, and music perception.
33
Li X, Cai S, Chen Y, Tian X, Wang A. Enhancement of visual dominance effects at the response level in children with attention-deficit/hyperactivity disorder. J Exp Child Psychol 2024; 242:105897. [PMID: 38461557] [DOI: 10.1016/j.jecp.2024.105897]
Abstract
Previous studies have widely demonstrated that individuals with attention-deficit/hyperactivity disorder (ADHD) exhibit deficits in conflict control tasks. However, there is limited evidence regarding the performance of children with ADHD in cross-modal conflict processing tasks. The current study aimed to investigate whether children with ADHD have poor conflict control, which has an impact on sensory dominance effects at different levels of information processing under the influence of visual similarity. A total of 82 children aged 7 to 14 years, including 41 children with ADHD and 41 age- and sex-matched typically developing (TD) children, were recruited. We used the 2:1 mapping paradigm to separate levels of conflict, and the congruency of the audiovisual stimuli was divided into three conditions. In C trials, the target stimulus and the distractor stimulus were identical, and the bimodal stimuli corresponded to the same response keys. In PRIC trials, the distractor stimulus differed from the target stimulus and did not correspond to any response keys. In RIC trials, the distractor stimulus differed from the target stimulus, and the bimodal stimuli corresponded to different response keys. Therefore, we explicitly differentiated cross-modal conflict into a preresponse level (PRIC > C), corresponding to the encoding process, and a response level (RIC > PRIC), corresponding to the response selection process. Our results suggested that auditory distractors caused more interference during visual processing than visual distractors caused during auditory processing (i.e., typical auditory dominance) at the preresponse level regardless of group. However, visual dominance effects were observed in the ADHD group, whereas no visual dominance effects were observed in the TD group at the response level. 
A possible explanation is that the increased interference caused by visual similarity made it more difficult for children with ADHD to control conflict when simultaneously confronted with incongruent visual and auditory inputs. The current study highlights how children with ADHD process cross-modal conflicts at multiple levels of information processing, thereby shedding light on the mechanisms underlying ADHD.
34
Johansson RCG, Kelber P, Ulrich R. Speeded classification of visual events is sensitive to crossmodal intensity correspondence. J Exp Psychol Hum Percept Perform 2024; 50:554-569. [PMID: 38546625] [DOI: 10.1037/xhp0001183]
Abstract
Crossmodal correspondences refer to systematic associations between stimulus attributes encountered in different sensory modalities. These correspondences can be probed in the speeded classification task, where they tend to produce congruency effects. This study aimed to replicate and extend previous work conducted by Marks (1987, Experiment 3, Journal of Experimental Psychology: Human Perception and Performance, Vol. 13, No. 3, 384-394), which demonstrated a crossmodal correspondence between auditory and visual intensity attributes. Experiment 1 successfully replicates Marks' original finding that performance in a brightness classification task is affected by whether the loudness of a concurrently presented auditory distractor matches the brightness of the visual target. Furthermore, in line with the original study, we found that this effect was absent in a lightness classification task. In Experiment 2, we demonstrate that the loudness-brightness correspondence is robust even when the exact stimulus input changes. This finding suggests that there is a context-dependent mapping between loudness and brightness levels, rather than an absolute mapping between any particular intensity levels. Finally, exploratory analysis using the diffusion model for conflict tasks indicated that evidence from the task-irrelevant modality generates a burst of weak, short-lived automatic activation that can bias decision-making in difficult tasks, but not in easy tasks. Our results provide further evidence for the existence of a flexible crossmodal correspondence between brightness and loudness, which might be helpful in determining one's distance to a stimulus source during the early stages of multisensory integration.
35
Silcox JW, Bennett K, Copeland A, Ferguson SH, Payne BR. The Costs (and Benefits?) of Effortful Listening for Older Adults: Insights from Simultaneous Electrophysiology, Pupillometry, and Memory. J Cogn Neurosci 2024; 36:997-1020. [PMID: 38579256] [DOI: 10.1162/jocn_a_02161]
Abstract
Although the impact of acoustic challenge on speech processing and memory increases as a person ages, older adults may engage in strategies that help them compensate for these demands. In the current preregistered study, older adults (n = 48) listened to sentences-presented in quiet or in noise-that were high constraint with either expected or unexpected endings or were low constraint with unexpected endings. Pupillometry and EEG were simultaneously recorded, and subsequent sentence recognition and word recall were measured. Like young adults in prior work, we found that noise led to increases in pupil size, delayed and reduced ERP responses, and decreased recall for unexpected words. However, in contrast to prior work in young adults where a larger pupillary response predicted a recovery of the N400 at the cost of poorer memory performance in noise, older adults did not show an associated recovery of the N400 despite decreased memory performance. Instead, we found that in quiet, increases in pupil size were associated with delays in N400 onset latencies and increased recognition memory performance. In conclusion, we found that transient variation in pupil-linked arousal predicted trade-offs between real-time lexical processing and memory that emerged at lower levels of task demand in aging. Moreover, with increased acoustic challenge, older adults still exhibited costs associated with transient increases in arousal without the corresponding benefits.
36
Evers S. The Cerebellum in Musicology: a Narrative Review. Cerebellum 2024; 23:1165-1175. [PMID: 37594626] [PMCID: PMC11102367] [DOI: 10.1007/s12311-023-01594-6]
Abstract
The cerebellum is involved in cognitive processing, including music perception and music production. This narrative review aims to summarize the current knowledge on the activation of the cerebellum by different musical stimuli, on the involvement of the cerebellum in the cognitive loops underlying the analysis of music, and on the role of the cerebellum in the motor network underlying music production. A possible role of the cerebellum in therapeutic settings is also briefly discussed. In a second part, the cerebellum as an object of musicology (i.e., in classical music, in contemporary music, and in cerebellar disorders of musicians) is described.
37
Moura N, Fonseca P, Vilas-Boas JP, Serra S. Increased body movement equals better performance? Not always! Musical style determines motion degree perceived as optimal in music performance. Psychol Res 2024; 88:1314-1330. [PMID: 38329559] [PMCID: PMC11142955] [DOI: 10.1007/s00426-024-01928-x]
Abstract
Musicians' body behaviour plays a preponderant role in audience perception. We investigated how performers' motion is perceived depending on musical style and musical expertise. To further explore the effect of visual input, stimuli were presented in audio-only, audio-visual and visual-only conditions. We used motion and audio recordings of expert saxophone players playing two contrasting excerpts (positively and negatively valenced). For each excerpt, stimuli represented five motion degrees with increasing quantity of motion (QoM) and distinct predominant gestures. In the experiment (online and in-person), 384 participants rated performance recordings for expressiveness, professionalism and overall quality. Results revealed that, for the positively valenced excerpt, ratings increased as a function of QoM, whilst for the negatively valenced one, the recording with predominant flap motion was favoured. Musicianship did not have a significant effect on motion perception. Concerning multisensory integration, both musicians and non-musicians presented visual dominance for the positively valenced excerpt, whereas for the negatively valenced one, musicians shifted to auditory dominance. Our findings demonstrate that musical style not only determines the way observers perceive musicians' movement as adequate, but also that it can promote changes in multisensory integration.
38
Bartlett EL, Han EX, Parthasarathy A. Neurometric amplitude modulation detection in the inferior colliculus of Young and Aged rats. Hear Res 2024; 447:109028. [PMID: 38733711] [PMCID: PMC11129790] [DOI: 10.1016/j.heares.2024.109028]
Abstract
Amplitude modulation is an important acoustic cue for sound discrimination, and humans and animals are able to detect small modulation depths behaviorally. In the inferior colliculus (IC), both firing rate and phase-locking may be used to detect amplitude modulation. How the neural representations that support modulation detection change with age is poorly understood, including the extent to which age-related changes may be attributed to the inherited properties of ascending inputs to IC neurons. Here, simultaneous measures of local field potentials (LFPs) and single-unit responses were made from the inferior colliculus of Young and Aged rats, using both noise and tone carriers, in response to sinusoidally amplitude-modulated sounds of varying depths. We found that Young units had higher firing rates than Aged units for noise carriers, whereas Aged units had higher phase-locking (vector strength), especially for tone carriers. Sustained LFPs were larger in Young animals for modulation frequencies of 8-16 Hz and comparable at higher modulation frequencies. Onset LFP amplitudes were much larger in Young animals and were correlated with the evoked firing rates, while LFP onset latencies were shorter in Aged animals. Unit neurometric thresholds based on synchrony or firing rate measures did not differ significantly across age and were comparable to behavioral thresholds from previous studies, whereas LFP thresholds were lower than behavioral thresholds.
39
Akçay B, İnanç G. The effect of Schroth Best Practice exercises and Cheneau brace treatment on perceptual and cognitive asymmetry in adolescent idiopathic scoliosis with thoracic major curve. Ir J Med Sci 2024; 193:1479-1486. [PMID: 38123885] [DOI: 10.1007/s11845-023-03593-2]
Abstract
BACKGROUND Adolescent idiopathic scoliosis (AIS) patients have been found to exhibit cortical asymmetry. Although asymmetries in cortical structures have been found in patients with AIS, there has been no research on how conservative treatments affect cerebellar functional organization. AIMS This study aimed to examine the impact of conservative treatments on perceptual and cognitive asymmetry in the auditory system, assessed by dichotic listening, in AIS patients with thoracic major curves. METHOD This study involved 30 AIS patients and 21 healthy subjects. The intervention group used a Cheneau brace and performed 18 Schroth Best Practice (SBP) exercise sessions. Auditory lateralization was assessed using the Dichotic Listening Paradigm (DLP) in both groups before and after the intervention. RESULTS The 6-week intervention resulted in a significant increase in left ear responses in the force-left condition in the AIS group (p < 0.05). Left ear responses were lower in the AIS group at baseline (p < 0.05). The results at week 6 were similar in all conditions (p > 0.05). CONCLUSION The results of this study demonstrated that SBP exercises and Cheneau brace treatment can improve perceptual and cognitive asymmetry in the auditory system in AIS patients with a thoracic major curve. Scoliosis-associated changes in the spine and postural control may affect auditory perception by causing adaptations in sensory and motor networks. Future studies are needed to examine connectivity in brain regions related to motor control and auditory processing after conservative treatment. TRIAL REGISTRATION Clinical trials number: NCT06141759.
40
Laback B, Tabuchi H, Kohlrausch A. Evidence for proactive and retroactive temporal pattern analysis in simultaneous masking. J Acoust Soc Am 2024; 155:3742-3759. [PMID: 38856312] [DOI: 10.1121/10.0026240]
Abstract
Amplitude modulation (AM) of a masker reduces its masking of a simultaneously presented unmodulated pure-tone target, an effect that likely involves dip listening. This study tested the idea that dip-listening efficiency may depend on stimulus context, i.e., the match in AM peakedness (AMP) between the masker and a precursor or postcursor stimulus, implying a form of temporal pattern analysis. Masked thresholds were measured in normal-hearing listeners using Schroeder-phase harmonic complexes as maskers and precursors or postcursors. Experiment 1 showed threshold elevation (i.e., interference) when a flat cursor preceded or followed a peaked masker, suggesting both proactive and retroactive temporal pattern analysis. Threshold decline (facilitation) was observed when the masker AMP was matched to that of the precursor, irrespective of stimulus AMP, suggesting only proactive processing. Subsequent experiments showed that both interference and facilitation (1) remained robust when a temporal gap was inserted between masker and cursor, (2) disappeared when an F0 difference was introduced between masker and precursor, and (3) decreased when the presentation level was reduced. These results suggest an important role of envelope regularity in dip listening, especially when masker and cursor are F0-matched and, therefore, form one perceptual stream. The reported effects seem to represent a time-domain variant of comodulation masking release.
41
Muñoz-Caracuel M, Muñoz V, Ruiz-Martínez FJ, Vázquez Morejón AJ, Gómez CM. Systemic neurophysiological signals of auditory predictive coding. Psychophysiology 2024; 61:e14544. [PMID: 38351668] [DOI: 10.1111/psyp.14544]
Abstract
The predictive coding framework posits that our brain continuously monitors changes in the environment and updates its predictive models, minimizing prediction errors to efficiently adapt to environmental demands. However, the underlying neurophysiological mechanisms of these predictive phenomena remain unclear. The present study aimed to explore the systemic neurophysiological correlates of predictive coding processes during passive and active auditory processing. Electroencephalography (EEG), functional near-infrared spectroscopy (fNIRS), and autonomic nervous system (ANS) measures were analyzed using an auditory pattern-based novelty oddball paradigm. A sample of 32 healthy subjects was recruited. The results showed shared slow evoked potentials between passive and active conditions that could be interpreted as automatic predictive processes of anticipation and updating, independent of conscious attentional effort. A dissociated topography of cortical hemodynamic activity and distinctive evoked potentials upon auditory pattern violation were also found between the two conditions, whereas only conscious perception leading to imperative responses was accompanied by phasic ANS responses. These results suggest a systemic-level hierarchical reallocation of predictive coding neural resources as a function of contextual demands in the face of sensory stimulation. Principal component analysis made it possible to associate the variability of some of the recorded signals.
|
42
|
Pelzer L, Naefgen C, Herzig J, Gaschler R, Haider H. Can frequent long stimulus onset asynchronies (SOAs) foster the representation of two separated task-sets in dual-tasking? Psychol Res 2024; 88:1231-1252. [PMID: 38418590 PMCID: PMC11143036 DOI: 10.1007/s00426-024-01935-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2022] [Accepted: 01/28/2024] [Indexed: 03/01/2024]
Abstract
Recent findings suggest that in dual-tasking the elements of the two tasks are associated across tasks and are stored in a conjoint memory episode, meaning that the tasks are not represented as isolated task-sets. In the current study, we tested whether frequent long stimulus onset asynchronies (SOAs) can foster the representation of two separated task-sets, thereby reducing or even preventing participants from generating conjoint memory episodes, compared to an integrated task-set representation induced by frequent short SOAs. Alternatively, it is conceivable that conjoint memory episodes are an inevitable consequence of presenting two tasks within a single trial. In two dual-task experiments, we tested between consecutive trials whether repeating the stimulus-response bindings of both tasks would lead to faster responses than repeating only one of the two tasks' stimulus-response bindings. The dual-task consisted of a visual-manual search task (VST) and an auditory-manual discrimination task (ADT). Overall, the results suggest that, after processing two tasks within a single trial, generating a conjoint memory episode seems to be a default process, regardless of SOA frequency. However, the respective SOA frequency affected the participants' strategy of grouping the processing of the two tasks or not, thereby modulating the impact of the reactivated memory episode on task performance.
|
43
|
Costalunga G, Vallentin D, Benichov JI. A neuroethological view of the multifaceted sensory influences on birdsong. Curr Opin Neurobiol 2024; 86:102867. [PMID: 38520789 DOI: 10.1016/j.conb.2024.102867] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2023] [Revised: 02/13/2024] [Accepted: 03/07/2024] [Indexed: 03/25/2024]
Abstract
Learning and execution of complex motor skills are often modulated by sensory feedback and contextual cues arriving across multiple sensory modalities. Vocal motor behaviors, in particular, are primarily influenced by auditory inputs, both during learning and mature vocal production. The importance of auditory input in shaping vocal output has been investigated in several songbird species that acquire their adult song based on auditory exposure to a tutor during development. Recent studies have highlighted the influences of stimuli arriving through other sensory channels in juvenile song learning and in adult song production. Here, we review changes induced by diverse sensory stimuli during the song learning process and the production of adult song, considering the neuroethological significance of sensory channels in different species of songbirds. Additionally, we highlight advances, open questions, and possible future approaches for understanding the neural circuits that enable the multimodal shaping of singing behavior.
|
44
|
Hao Y, Hu L. Lower Childhood Socioeconomic Status Is Associated with Greater Neural Responses to Ambient Auditory Changes in Adulthood. J Cogn Neurosci 2024; 36:979-996. [PMID: 38579240 DOI: 10.1162/jocn_a_02151] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/07/2024]
Abstract
Humans' early life experience varies by socioeconomic status (SES), raising the question of how this difference is reflected in the adult brain. An important aspect of brain function is the ability to detect salient ambient changes while focusing on a task. Here, we ask whether subjective social status during childhood is reflected in the way young adults' brains detect changes in irrelevant information. In two studies (total n = 58), we examine electrical brain responses in the frontocentral region to a series of auditory tones, consisting of standard stimuli (80%) and deviant stimuli (20%) interspersed randomly, while participants were engaged in various visual tasks. Both studies showed stronger automatic change detection, indexed by the mismatch negativity (MMN), in lower-SES individuals, regardless of the unattended sound's feature, attended emotional content, or study type. Moreover, we observed a larger MMN in lower-SES participants even though they did not differ in brain and behavioral responses to the attended task. Lower-SES people also did not involuntarily orient more attention to sound changes (i.e., deviant stimuli), as indexed by the P3a. The study indicates that individuals with lower subjective social status may have an increased ability to automatically detect changes in their environment, which may reflect adaptation to their childhood environments.
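The MMN this study relies on is conventionally quantified as the deviant-minus-standard difference wave at frontocentral sites. A minimal sketch on synthetic single-channel epochs (the 80/20 standard/deviant split mirrors the paradigm above; the sampling rate, latency window, and amplitudes are illustrative assumptions):

```python
import math
import random

FS = 250  # sampling rate in Hz (illustrative)
random.seed(1)

def make_epoch(neg_amp):
    """Synthetic 0-400 ms epoch: Gaussian noise plus a negativity near 150 ms."""
    return [random.gauss(0.0, 0.5)
            - neg_amp * math.exp(-((i / FS - 0.150) ** 2) / (2 * 0.030 ** 2))
            for i in range(int(0.4 * FS))]

def grand_average(epochs):
    """Point-by-point average across epochs."""
    return [sum(col) / len(col) for col in zip(*epochs)]

# 80% standards (no mismatch response), 20% deviants (a 2 uV negativity)
standards = [make_epoch(0.0) for _ in range(80)]
deviants = [make_epoch(2.0) for _ in range(20)]

# Deviant-minus-standard difference wave
diff_wave = [d - s for d, s in zip(grand_average(deviants), grand_average(standards))]

# MMN amplitude: most negative point in the 100-250 ms window
mmn_amplitude = min(diff_wave[int(0.100 * FS):int(0.250 * FS)])
```

In real recordings the same subtraction is applied per channel after epoching and baseline correction, and a frontocentral average over electrodes such as Fz/FCz/Cz is typically reported.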
|
45
|
Michel L, Ricou C, Bonnet-Brilhault F, Houy-Durand E, Latinus M. Sounds Pleasantness Ratings in Autism: Interaction Between Social Information and Acoustical Noise Level. J Autism Dev Disord 2024; 54:2148-2157. [PMID: 37118645 DOI: 10.1007/s10803-023-05989-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/06/2023] [Indexed: 04/30/2023]
Abstract
A lack of response to voices and a great interest in music are among the behavioral expressions commonly (self-)reported in Autism Spectrum Disorder (ASD). These atypical interests in vocal and musical sounds could be attributable to different levels of acoustical noise, quantified by the harmonic-to-noise ratio (HNR). No previous study has investigated explicit auditory pleasantness in ASD by comparing vocal and non-vocal sounds in relation to acoustic noise level. The aim of this study was to objectively evaluate auditory pleasantness. Sixteen adults on the autism spectrum and 16 matched neurotypical (NT) adults rated the likeability of vocal and non-vocal sounds with varying harmonic-to-noise ratio levels. A group by category interaction in pleasantness judgements revealed that participants on the autism spectrum judged vocal sounds as less pleasant than non-vocal sounds, an effect not found for NT participants. A category by HNR level interaction revealed that participants in both groups rated non-vocal sounds with a high HNR as more pleasant. A significant group by HNR interaction revealed that people on the autism spectrum tended to judge high-HNR sounds as less pleasant, and low-HNR sounds as more pleasant, than NT participants did. The acoustical noise level of sounds alone therefore does not appear to explain the atypical interest in voices and greater interest in music in ASD.
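The harmonic-to-noise ratio used above has a simple autocorrelation-domain reading: if r is the peak normalized autocorrelation at the signal's period, r approximates the harmonic share of the power and 1 - r the noise share, so HNR = 10 * log10(r / (1 - r)) dB. A rough sketch on synthetic signals (a simplification of Praat-style HNR estimation, not the authors' pipeline; all parameters are illustrative):

```python
import math
import random

def hnr_db(x, min_lag, max_lag):
    """Harmonic-to-noise ratio from the peak normalized autocorrelation.

    r ~ harmonic fraction of the power; HNR = 10 * log10(r / (1 - r)) dB.
    """
    best_r = 0.0
    for lag in range(min_lag, max_lag + 1):
        a, b = x[:-lag], x[lag:]
        num = sum(p * q for p, q in zip(a, b))
        den = math.sqrt(sum(p * p for p in a) * sum(q * q for q in b))
        best_r = max(best_r, num / den)
    best_r = min(best_r, 1.0 - 1e-9)  # guard against log10(inf)
    return 10 * math.log10(best_r / (1.0 - best_r))

random.seed(2)
FS, F0 = 8000, 200  # 200 Hz "voice", period = 40 samples
tone = [math.sin(2 * math.pi * F0 * i / FS) for i in range(2000)]
clean = [s + random.gauss(0, 0.01) for s in tone]  # nearly harmonic -> high HNR
noisy = [s + random.gauss(0, 0.50) for s in tone]  # heavily noise-corrupted -> low HNR
```

Searching lags around the expected period (here 20-60 samples) picks out the periodicity peak; the nearly pure tone yields an HNR tens of decibels above the noise-corrupted one.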
|
46
|
Hannon E, Snyder J. What rhythm production can tell us about culture. Trends Cogn Sci 2024; 28:487-488. [PMID: 38664158 DOI: 10.1016/j.tics.2024.04.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2024] [Accepted: 04/08/2024] [Indexed: 06/07/2024]
Abstract
Jacoby and colleagues used an iterative rhythm reproduction paradigm with listeners from around the world to provide evidence for both rhythm universals (simple-integer ratios 1:1 and 2:1) and cross-cultural variation for specific rhythmic categories that can be linked to local music traditions in different regions of the world.
|
47
|
Jertberg RM, Begeer S, Geurts HM, Chakrabarti B, Van der Burg E. Age, not autism, influences multisensory integration of speech stimuli among adults in a McGurk/MacDonald paradigm. Eur J Neurosci 2024; 59:2979-2994. [PMID: 38570828 DOI: 10.1111/ejn.16319] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2023] [Revised: 02/27/2024] [Accepted: 02/28/2024] [Indexed: 04/05/2024]
Abstract
Differences between autistic and non-autistic individuals in perception of the temporal relationships between sights and sounds are theorized to underlie difficulties in integrating relevant sensory information. These, in turn, are thought to contribute to problems with speech perception and higher level social behaviour. However, the literature establishing this connection often involves limited sample sizes and focuses almost entirely on children. To determine whether these differences persist into adulthood, we compared 496 autistic and 373 non-autistic adults (aged 17 to 75 years). Participants completed an online version of the McGurk/MacDonald paradigm, a multisensory illusion indicative of the ability to integrate audiovisual speech stimuli. Audiovisual asynchrony was manipulated, and participants responded both to the syllable they perceived (revealing their susceptibility to the illusion) and to whether or not the audio and video were synchronized (allowing insight into temporal processing). In contrast with prior research with smaller, younger samples, we detected no evidence of impaired temporal or multisensory processing in autistic adults. Instead, we found that in both groups, multisensory integration correlated strongly with age. This contradicts prior presumptions that differences in multisensory perception persist and even increase in magnitude over the lifespan of autistic individuals. It also suggests that the compensatory role multisensory integration may play as the individual senses decline with age is intact. These findings challenge existing theories and provide an optimistic perspective on autistic development. They also underline the importance of expanding autism research to better reflect the age range of the autistic population.
|
48
|
Castaldi E, Tinelli F, Filippo G, Bartoli M, Anobile G. Auditory time perception impairment in children with developmental dyscalculia. Res Dev Disabil 2024; 149:104733. [PMID: 38663331 PMCID: PMC11155440 DOI: 10.1016/j.ridd.2024.104733] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/27/2023] [Revised: 02/19/2024] [Accepted: 04/09/2024] [Indexed: 05/21/2024]
Abstract
Developmental dyscalculia (DD) is a specific learning disability which prevents children from acquiring adequate numerical and arithmetical competences. We investigated whether difficulties in children with DD spread beyond the numerical domain and also impact their ability to perceive time. A group of 37 children/adolescents with and without DD were tested with an auditory categorization task measuring time perception thresholds in the sub-second (0.25-1 s) and supra-second (0.75-3 s) ranges. Results showed that auditory time perception was strongly impaired in children with DD at both time scales. The impairment remained even when age, non-verbal reasoning, and gender were regressed out. Overall, our results show that the difficulties of DD can affect magnitudes other than numerical and contribute to the increasing evidence that frames dyscalculia as a disorder affecting multiple neurocognitive and perceptual systems.
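Time-perception thresholds in a categorization task like this are typically read off a psychometric function: the proportion of "long" responses is plotted against stimulus duration and the threshold taken from the curve's steepness, e.g., half the span between the 25% and 75% points. A toy simulation with a Gaussian-noise observer (the observer model, duration grid, and trial counts are illustrative assumptions, not the study's method):

```python
import random

def simulate_jnd(noise_sd, durations, pse, n_trials=500, seed=3):
    """Estimate a duration-discrimination threshold (JND) by simulation.

    The simulated observer reports "long" when the duration, perturbed by
    Gaussian internal noise, exceeds the point of subjective equality (PSE).
    JND = half the 25%-75% span of the resulting psychometric function.
    """
    rng = random.Random(seed)
    p_long = [sum(d + rng.gauss(0.0, noise_sd) > pse for _ in range(n_trials)) / n_trials
              for d in durations]

    def crossing(level):
        # Linear interpolation at the first pair of points straddling `level`
        points = list(zip(durations, p_long))
        for (d0, p0), (d1, p1) in zip(points, points[1:]):
            if p0 <= level <= p1:
                return d0 + (level - p0) / (p1 - p0) * (d1 - d0)
        raise ValueError("level not crossed")

    return (crossing(0.75) - crossing(0.25)) / 2

DURATIONS = [0.25, 0.375, 0.5, 0.625, 0.75, 0.875, 1.0]  # seconds, sub-second range
sharp = simulate_jnd(noise_sd=0.10, durations=DURATIONS, pse=0.625)
blurry = simulate_jnd(noise_sd=0.30, durations=DURATIONS, pse=0.625)
```

A noisier internal clock flattens the psychometric function and inflates the estimated JND, which is the pattern the study reports for children with DD relative to controls.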
|
49
|
Sendesen E, Turkyilmaz D. Investigation of the behavior of tinnitus patients under varying listening conditions with simultaneous electroencephalography and pupillometry. Brain Behav 2024; 14:e3571. [PMID: 38841736 PMCID: PMC11154813 DOI: 10.1002/brb3.3571] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/15/2023] [Revised: 02/05/2024] [Accepted: 05/08/2024] [Indexed: 06/07/2024] Open
Abstract
OBJECTIVE This study aims to control all hearing thresholds, including extended high frequencies (EHFs), present stimuli of varying difficulty levels, and measure electroencephalography (EEG) and pupillometry responses to determine whether listening difficulty in tinnitus patients is effort- or fatigue-related. METHODS Twenty-one chronic tinnitus patients and 26 matched healthy controls with normal pure-tone averages and symmetrical hearing thresholds were included. Subjects were evaluated with 0.125-20 kHz pure-tone audiometry, the Montreal Cognitive Assessment Test (MoCA), the Tinnitus Handicap Inventory (THI), EEG, and pupillometry. RESULTS Pupil dilation and EEG alpha power during the "encoding" phase of the presented sentence were lower in tinnitus patients in all listening conditions (p < .05). Also, there was no statistically significant relationship between EEG and pupillometry components for any listening condition and THI or MoCA (p > .05). CONCLUSION The EEG and pupillometry results under various listening conditions indicate potential listening effort in tinnitus patients even when all frequencies, including EHFs, are controlled. We also suggest that pupillometry should be interpreted with caution in autonomic nervous system-related conditions such as tinnitus.
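The EEG alpha power compared above is band-limited spectral power. As a minimal illustration (a naive DFT rather than the Welch or multitaper estimators typically used in EEG work, with made-up sampling parameters), alpha power can be computed by summing squared DFT magnitudes over 8-12 Hz:

```python
import math
import random

FS = 250   # sampling rate (Hz), illustrative
DUR = 2.0  # 2 s window -> 0.5 Hz frequency resolution

def band_power(x, f_lo, f_hi):
    """Sum of squared DFT magnitudes over [f_lo, f_hi] Hz (naive DFT)."""
    n = len(x)
    power = 0.0
    for k in range(int(f_lo * n / FS), int(f_hi * n / FS) + 1):
        re = sum(x[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = sum(x[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        power += (re * re + im * im) / (n * n)
    return power

random.seed(4)
n = int(FS * DUR)
noise = [random.gauss(0, 0.2) for _ in range(n)]
with_alpha = [noise[i] + math.sin(2 * math.pi * 10 * i / FS) for i in range(n)]

alpha_rich = band_power(with_alpha, 8, 12)  # contains a 10 Hz rhythm
alpha_poor = band_power(noise, 8, 12)       # broadband noise only
```

The signal carrying a 10 Hz rhythm yields far more power in the 8-12 Hz band than the noise-only signal, which is the quantity contrasted between patients and controls above.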
|
50
|
Lialiou M, Grice M, Röhr CT, Schumacher PB. Auditory Processing of Intonational Rises and Falls in German: Rises Are Special in Attention Orienting. J Cogn Neurosci 2024; 36:1099-1122. [PMID: 38358004 DOI: 10.1162/jocn_a_02129] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/16/2024]
Abstract
This article investigates the processing of intonational rises and falls when presented unexpectedly in a stream of repetitive auditory stimuli. It examines the neurophysiological correlates (ERPs) of attention to these unexpected stimuli through the use of an oddball paradigm where sequences of repetitive stimuli are occasionally interspersed with a deviant stimulus, allowing for elicitation of an MMN. Whereas previous oddball studies on attention toward unexpected sounds involving pitch rises were conducted on nonlinguistic stimuli, the present study uses as stimuli lexical items in German with naturalistic intonation contours. Results indicate that rising intonation plays a special role in attention orienting at a pre-attentive processing stage, whereas contextual meaning (here a list of items) is essential for activating attentional resources at a conscious processing stage. This is reflected in the activation of distinct brain responses: Rising intonation evokes the largest MMN, whereas falling intonation elicits a less pronounced MMN followed by a P3 (reflecting a conscious processing stage). Subsequently, we also find a complex interplay between the phonological status (i.e., accent/head marking vs. boundary/edge marking) and the direction of pitch change in their contribution to attention orienting: Attention is not oriented necessarily toward a specific position in prosodic structure (head or edge). Rather, we find that the intonation contour itself and the appropriateness of the contour in the linguistic context are the primary cues to two core mechanisms of attention orienting, pre-attentive and conscious orientation respectively, whereas the phonological status of the pitch event plays only a supplementary role.
|