1. Revisiting alpha resting state dynamics underlying hallucinatory vulnerability: Insights from Hidden semi-Markov Modeling. J Neurosci Methods 2024; 407:110138. PMID: 38648892. DOI: 10.1016/j.jneumeth.2024.110138.
Abstract
BACKGROUND: Resting state (RS) brain activity is inherently non-stationary. Hidden semi-Markov Models (HsMM) can characterize continuous RS data as a sequence of recurring and distinct brain states along with their spatio-temporal dynamics.
NEW METHOD: Recent explorations suggest that HsMM state dynamics in the alpha frequency band link to auditory hallucination proneness (HP) in non-clinical individuals. The present study aimed to replicate these findings to elucidate robust neural correlates of hallucinatory vulnerability. Specifically, we aimed to investigate the reproducibility of HsMM states across different data sets and within-data set variants, as well as the replicability of the association between alpha brain state dynamics and HP.
RESULTS: We found that most brain states are reproducible in different data sets, confirming that the HsMM characterized robust and generalizable EEG RS dynamics on a sub-second timescale. Brain state topographies and temporal dynamics of different within-data set variants showed substantial similarities and were robust against reduced data length and number of electrodes. However, the association with HP was not directly reproducible across data sets.
COMPARISON WITH EXISTING METHODS: The HsMM optimally leverages the high temporal resolution of EEG data and overcomes time-domain restrictions of other state allocation methods.
CONCLUSION: The results indicate that the sensitivity of brain state dynamics to capture individual variability in HP may depend on the data recording characteristics and individual variability in RS cognition, such as mind wandering. Future studies should consider that the order in which eyes-open and eyes-closed RS data are acquired directly influences an individual's attentional state and generation of spontaneous thoughts, and thereby might mediate the link to hallucinatory vulnerability.
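The method in entry 1 hinges on the defining property of a hidden semi-Markov model: unlike a standard HMM, whose implicit state dwell times are geometric, an HsMM models dwell-time distributions explicitly. A minimal generative sketch in Python (all parameters are hypothetical, not fitted to any EEG data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state semi-Markov chain. Transitions occur only between
# distinct states; the dwell time within a state is drawn from an explicit,
# state-specific duration distribution (here a shifted Poisson), rather than
# the geometric dwell times implied by a standard HMM.
n_states = 3
trans = np.array([[0.0, 0.6, 0.4],
                  [0.5, 0.0, 0.5],
                  [0.3, 0.7, 0.0]])   # rows sum to 1, no self-transitions
mean_dur = np.array([5, 10, 3])       # mean dwell time per state, in samples

def sample_state_sequence(n_samples, state=0):
    """Assign a state label to every time point of a sequence."""
    seq = []
    while len(seq) < n_samples:
        dwell = 1 + rng.poisson(mean_dur[state])   # at least one sample
        seq.extend([state] * dwell)
        state = rng.choice(n_states, p=trans[state])
    return np.array(seq[:n_samples])

states = sample_state_sequence(2000)

# Run-length encode the sequence to inspect empirical dwell times,
# which should roughly track `mean_dur`.
boundaries = np.flatnonzero(np.r_[True, np.diff(states) != 0, True])
run_lengths = np.diff(boundaries)
```

Fitting such a model to real EEG would additionally require per-state observation models (e.g., multivariate Gaussians over channel amplitudes) and an EM-style inference procedure, which this sketch omits.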
2. Mobile version of the Battery for the Assessment of Auditory Sensorimotor and Timing Abilities (BAASTA): Implementation and adult norms. Behav Res Methods 2024. PMID: 38459221. DOI: 10.3758/s13428-024-02363-x.
Abstract
Timing and rhythm abilities are complex and multidimensional skills that are highly widespread in the general population. This complexity can be partly captured by the Battery for the Assessment of Auditory Sensorimotor and Timing Abilities (BAASTA). The battery, consisting of four perceptual and five sensorimotor (finger-tapping) tests, has been used in healthy adults and in clinical populations (e.g., Parkinson's disease, ADHD, developmental dyslexia, stuttering), and shows sensitivity to individual differences and impairment. However, major limitations for the generalized use of this tool are the lack of reliable and standardized norms and of a version of the battery that can be used outside the lab. To address these limitations, we put forward a new version of BAASTA on a tablet device capable of ensuring lab-equivalent measurements of timing and rhythm abilities. We present normative data obtained with this version of BAASTA from over 100 healthy adults between the ages of 18 and 87 years in a test-retest protocol. Moreover, we propose a new composite score to summarize beat-based rhythm capacities, the Beat Tracking Index (BTI), with close to excellent test-retest reliability. The BTI derives from two BAASTA tests (beat alignment and paced tapping) and offers a swift and practical way of measuring rhythmic abilities when research imposes strong time constraints. This mobile BAASTA implementation is more inclusive and far-reaching, while opening new possibilities for reliable remote testing of rhythmic abilities by leveraging accessible and cost-efficient technologies.
3. Variability allows for adaptation in dynamic environments: comment on 'From neural noise to co-adaptability: Rethinking the multifaceted architecture of motor variability' by L. Casartelli, C. Maronati & A. Cavallo. Phys Life Rev 2024; 48:104-105. PMID: 38176320. DOI: 10.1016/j.plrev.2023.12.011.
4. Unravelling individual rhythmic abilities using machine learning. Sci Rep 2024; 14:1135. PMID: 38212632. PMCID: PMC10784578. DOI: 10.1038/s41598-024-51257-7.
Abstract
Humans can easily extract the rhythm of a complex sound, like music, and move to its regular beat, as in dance. These abilities are modulated by musical training and vary significantly in untrained individuals. The causes of this variability are multidimensional and typically hard to grasp in single tasks. To date, we lack a comprehensive model capturing the rhythmic fingerprints of both musicians and non-musicians. Here, we harnessed machine learning to extract a parsimonious model of rhythmic abilities, based on behavioral testing (with perceptual and motor tasks) of individuals with and without formal musical training (n = 79). We demonstrate that variability in rhythmic abilities, and their link with formal and informal music experience, can be successfully captured by profiles comprising a minimal set of behavioral measures. These findings highlight that machine learning techniques can be employed successfully to distill profiles of rhythmic abilities, and ultimately shed light on individual variability and its relationship with both formal musical training and informal musical experiences.
5. Time-travel to "A review and proposal for a model of sensory predictability in auditory language perception". Cortex 2024; 170:53-56. PMID: 38101972. DOI: 10.1016/j.cortex.2023.11.008.
Abstract
Since its inception 60 years ago, the mission of Cortex has been to foster a better understanding of cognition and the relationship between the nervous system, behavior in general, and mental processes in particular. Almost 15 years ago, we submitted "a review and proposal" along these lines to the journal, in which we sought to integrate two components that are not often discussed together, namely the basal ganglia and syntactic language functions (Kotz et al., 2009). One of the main motivations was to find potential explanations for two relatively straightforward earlier empirical observations: (i) electroencephalographic event-related potential responses (EEG/ERPs) known to be sensitive markers of syntactic violations in auditory language processing were found to be absent in persons with focal basal ganglia lesions (Friederici et al., 1999; Frisch et al., 2003; Kotz et al., 2003), and (ii) temporally regular rhythmic tone sequences presented before language stimuli were found to compensate for this effect (Kotz et al., 2005; Kotz & Gunter, 2015; Kotz & Schmidt-Kassow, 2015). The critical question was how to reconcile these specific components, the basal ganglia typically associated with motor behavior and language-related syntactic processes, under one hood to foster a better understanding of how the basal ganglia system contributes to auditory language processing. This core question was the starting point for our own further research, and trying to solve it, unsurprisingly, led to many more questions and rather few answers. It also changed perspectives and established collaborative efforts, sometimes in unsuspected ways and directions. In light of the journal's anniversary, we therefore want to take this exciting opportunity for some time travel, looking back at our original conception while linking it to more recent considerations, thereby providing some insights that might be useful for future research.
6. Auditory attention measured by EEG in neurological populations: systematic review of literature and meta-analysis. Sci Rep 2023; 13:21064. PMID: 38030693. PMCID: PMC10687139. DOI: 10.1038/s41598-023-47597-5.
Abstract
Sensorimotor synchronization strategies have been frequently used for gait rehabilitation in different neurological populations. Despite these positive effects on gait, the attentional processes required to dynamically attend to auditory stimuli need elaboration. Here, we investigate auditory attention in neurological populations compared to healthy controls, quantified by EEG recordings. Literature was systematically searched in the databases PubMed and Web of Science. Inclusion criteria were investigation of auditory attention quantified by EEG recordings in neurological populations in cross-sectional studies. In total, 35 studies were included, with participants with Parkinson's disease (PD), stroke, Traumatic Brain Injury (TBI), Multiple Sclerosis (MS), and Amyotrophic Lateral Sclerosis (ALS). A meta-analysis compared P3 amplitude and latency between neurological populations and healthy controls. Overall, neurological populations showed impairments in auditory processing, in terms of magnitude and delay, compared to healthy controls. Consideration of individual auditory processes, and thereafter selecting and/or designing the auditory structure during sensorimotor synchronization paradigms in neurological physical rehabilitation, is recommended.
7. Individual differences in neural markers of beat processing relate to spoken grammar skills in six-year-old children. Brain and Language 2023; 246:105345. PMID: 37994830. DOI: 10.1016/j.bandl.2023.105345.
Abstract
Based on the idea that neural entrainment establishes regular attentional fluctuations that facilitate hierarchical processing in both music and language, we hypothesized that individual differences in syntactic (grammatical) skills will be partly explained by patterns of neural responses to musical rhythm. To test this hypothesis, we recorded neural activity using electroencephalography (EEG) while children (N = 25) listened passively to rhythmic patterns that induced different beat percepts. Analysis of evoked beta and gamma activity revealed that individual differences in the magnitude of neural responses to rhythm explained variance in six-year-olds' expressive grammar abilities, beyond and complementarily to their performance in a behavioral rhythm perception task. These results reinforce the idea that mechanisms of neural beat entrainment may be a shared neural resource supporting hierarchical processing across music and language and suggest a relevant marker of the relationship between rhythm processing and grammar abilities in elementary-school-age children, previously observed only behaviorally.
8. Macaque monkeys and humans sample temporal regularities in the acoustic environment. Prog Neurobiol 2023; 229:102502. PMID: 37442410. DOI: 10.1016/j.pneurobio.2023.102502.
Abstract
Many animal species show comparable abilities to detect basic rhythms and produce rhythmic behavior. Yet, the capacities to process complex rhythms and synchronize rhythmic behavior appear to be species-specific: vocal learning animals can, but some primates might not. This discrepancy is of high interest as there is a putative link between rhythm processing and the development of sophisticated sensorimotor behavior in humans. Do our closest relatives show comparable endogenous dispositions to sample the acoustic environment in the absence of task instructions and training? We recorded EEG from macaque monkeys and humans while they passively listened to isochronous equitone sequences. Individual- and trial-level analyses showed that macaque monkeys' and humans' delta-band neural oscillations encoded and tracked the timing of auditory events. Further, mu- (8-15 Hz) and beta-band (12-20 Hz) oscillations revealed the superimposition of varied accentuation patterns on a subset of trials. These observations suggest convergence in the encoding and dynamic attending of temporal regularities in the acoustic environment, bridging a gap in the phylogenesis of rhythm cognition.
9. Aesthetic and physiological effects of naturalistic multimodal music listening. Cognition 2023; 239:105537. PMID: 37487303. DOI: 10.1016/j.cognition.2023.105537.
Abstract
Compared to audio only (AO) conditions, audiovisual (AV) information can enhance the aesthetic experience of a music performance. However, such beneficial multimodal effects have yet to be studied in naturalistic music performance settings. Further, peripheral physiological correlates of aesthetic experiences are not well-understood. Here, participants were invited to a concert hall for piano performances of Bach, Messiaen, and Beethoven, which were presented in two conditions: AV and AO. They rated their aesthetic experience (AE) after each piece (Experiments 1 and 2), while peripheral signals (cardiorespiratory measures, skin conductance, and facial muscle activity) were continuously measured (Experiment 2). Factor scores of AE were significantly higher in the AV condition in both experiments. The LF/HF ratio, a heart rate variability measure that reflects activation of the sympathetic nervous system, was higher in the AO condition, suggesting increased arousal, likely caused by less predictable sound onsets in the AO condition. We present partial evidence that breathing was faster and facial muscle activity was higher in the AV condition, suggesting that observing a performer's movements likely enhances motor mimicry in these more voluntary peripheral measures. Further, zygomaticus ('smiling') muscle activity was a significant predictor of AE. Thus, we suggest physiological measures are related to AE, but at different levels: the more involuntary measures (i.e., heart rhythms) may reflect more sensory aspects, while the more voluntary measures (i.e., muscular control of breathing and facial responses) may reflect the liking aspect of an AE. In summary, we replicate and extend previous findings that AV information enhances AE in a naturalistic music performance setting. We further show that a combination of self-report and peripheral measures benefits a meaningful assessment of AE in naturalistic music performance settings.
10.
Abstract
Sociality and timing are tightly interrelated in human interaction as seen in turn-taking or synchronised dance movements. Sociality and timing also show in communicative acts of other species that might be pleasurable, but also necessary for survival. Sociality and timing often co-occur, but their shared phylogenetic trajectory is unknown: How, when, and why did they become so tightly linked? Answering these questions is complicated by several constraints; these include the use of divergent operational definitions across fields and species, the focus on diverse mechanistic explanations (e.g., physiological, neural, or cognitive), and the frequent adoption of anthropocentric theories and methodologies in comparative research. These limitations hinder the development of an integrative framework on the evolutionary trajectory of social timing and make comparative studies not as fruitful as they could be. Here, we outline a theoretical and empirical framework to test contrasting hypotheses on the evolution of social timing with species-appropriate paradigms and consistent definitions. To facilitate future research, we introduce an initial set of representative species and empirical hypotheses. The proposed framework aims at building and contrasting evolutionary trees of social timing toward and beyond the crucial branch represented by our own lineage. Given the integration of cross-species and quantitative approaches, this research line might lead to an integrated empirical-theoretical paradigm and, as a long-term goal, explain why humans are such socially coordinated animals.
11. Validation of the Dutch Sensory Gating Inventory (D-SGI): Psychometric properties and a confirmatory factor analysis. Applied Neuropsychology: Adult 2023:1-10. PMID: 37453801. DOI: 10.1080/23279095.2023.2235453.
Abstract
The Sensory Gating Inventory (SGI) is an established self-report questionnaire used to assess the capacity for filtering redundant or irrelevant environmental stimuli. Translation and cross-cultural validation of the SGI are necessary to make this tool available to Dutch-speaking populations. This study, therefore, aimed to design and validate a Dutch Sensory Gating Inventory (D-SGI). To this end, a forward-backward translation was performed and 469 native Dutch speakers filled in the questionnaire. A confirmatory factor analysis assessed the psychometric properties of the D-SGI. Additionally, test-retest reliability was measured. Results confirmed satisfactory similarity between the original English SGI and the D-SGI in terms of factor structure. Internal consistency and discriminant validity were also satisfactory. Overall test-retest reliability was excellent (ICC = 0.91, p < 0.001, 95% CI [0.87-0.93]). These findings confirm that the D-SGI is a psychometrically sound self-report measure for assessing the phenomenological dimensions of sensory gating in Dutch. Moreover, the D-SGI is publicly available. This establishes the D-SGI as a new tool for the assessment of sensory gating dimensions in general and clinical Dutch-speaking populations.
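The reported test-retest reliability (ICC = 0.91) is an intraclass correlation. As an illustration of the underlying computation — not the authors' code — ICC(2,1) (two-way random effects, absolute agreement, single measurement; Shrout & Fleiss) can be derived from an ANOVA decomposition of a subjects-by-sessions score matrix:

```python
import numpy as np

def icc_2_1(Y):
    """ICC(2,1): two-way random-effects, absolute-agreement, single-measure
    intraclass correlation (Shrout & Fleiss). Y is subjects x sessions."""
    n, k = Y.shape
    grand = Y.mean()
    ssr = k * ((Y.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ssc = n * ((Y.mean(axis=0) - grand) ** 2).sum()   # between sessions
    sse = ((Y - grand) ** 2).sum() - ssr - ssc        # residual
    msr, msc = ssr / (n - 1), ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Simulated test-retest data (not the study's data): 50 subjects x 2
# sessions with a strong, stable subject effect -> ICC close to 1.
rng = np.random.default_rng(1)
scores = rng.normal(0, 2, (50, 1)) + rng.normal(0, 0.5, (50, 2))
icc = icc_2_1(scores)
```

With a between-subject standard deviation of 2 and session-level noise of 0.5, the true ICC is about 0.94, so the estimate should land near that value.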
12. Investigating the lateralisation of experimentally induced auditory verbal hallucinations. Front Neurosci 2023; 17:1193402. PMID: 37483346. PMCID: PMC10359906. DOI: 10.3389/fnins.2023.1193402.
Abstract
Introduction: Auditory verbal hallucinations (AVHs), or hearing non-existent voices, are a common symptom in psychosis. Recent research suggests that AVHs are also experienced by neurotypical individuals. Individuals with schizophrenia experiencing AVHs and neurotypicals who are highly prone to hallucinate both produce false positive responses in auditory signal detection. These findings suggest that voice-hearing may lie on a continuum, with similar mechanisms underlying AVHs in both populations.
Methods: The current study used a monaural auditory stimulus in a signal detection task to test to what extent experimentally induced verbal hallucinations are (1) left-lateralised (i.e., more likely to occur when presented to the right ear than to the left ear, given the left-hemisphere dominance for language processing), and (2) predicted by self-reported hallucination proneness and auditory imagery tendencies. In a conditioning task, fifty neurotypical participants associated a negative word on-screen with the same word being played via headphones through successive simultaneous audio-visual presentations. In the signal detection task that followed, participants were presented with a target word on-screen and indicated whether they heard the word being played concurrently amongst white noise.
Results: Pavlovian audio-visual conditioning reliably elicited a significant number of false positives (FPs). However, FP rates, perceptual sensitivities, and response biases did not differ between ears, and were predicted neither by hallucination proneness nor by auditory imagery.
Discussion: The results show that experimentally induced FPs in neurotypicals are not left-lateralised, adding further weight to the argument that lateralisation may not be a defining feature of hallucinations in clinical or non-clinical populations. The findings also support the idea that AVHs may be a continuous phenomenon that varies in severity and frequency across the population. Studying induced AVHs in neurotypicals may help identify the underlying cognitive and neural mechanisms contributing to AVHs in individuals with psychotic disorders.
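The false-positive rates, perceptual sensitivities, and response biases in entry 12 are standard signal detection theory quantities. A sketch of how sensitivity d′ and criterion c are conventionally computed from yes/no response counts (the counts below are made up for illustration, not taken from the study):

```python
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') and response bias (criterion c) for a yes/no
    signal detection task. A log-linear correction keeps hit and
    false-alarm rates away from 0 and 1 so the z-transform stays finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa
    criterion = -0.5 * (z_hit + z_fa)
    return d_prime, criterion

# Hypothetical counts for one participant:
# 50 signal trials and 50 noise trials.
d_prime, criterion = sdt_measures(hits=40, misses=10,
                                  false_alarms=15, correct_rejections=35)
```

A negative criterion indicates a liberal bias toward "yes" responses, which is the pattern of interest when conditioning induces false positives.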
13. Sensory gating functions of the auditory thalamus: adaptation and modulations through noise-exposure and high-frequency stimulation in rats. Behav Brain Res 2023; 450:114498. PMID: 37201892. DOI: 10.1016/j.bbr.2023.114498.
Abstract
The medial geniculate body (MGB) of the thalamus is an obligatory relay for auditory processing. A breakdown of adaptive filtering and sensory gating at this level may lead to multiple auditory dysfunctions, while high-frequency stimulation (HFS) of the MGB might mitigate aberrant sensory gating. To further investigate the sensory gating functions of the MGB, this study (i) recorded electrophysiological evoked potentials (EPs) in response to continuous auditory stimulation, and (ii) assessed the effect of MGB HFS on these responses in noise-exposed and control animals. Pure-tone sequences were presented to assess differential sensory gating functions associated with stimulus pitch, grouping (pairing), and temporal regularity. EPs were recorded from the MGB and acquired before and after HFS (100 Hz). All animals (unexposed and noise-exposed, pre- and post-HFS) showed gating for pitch and grouping. Unexposed animals also showed gating for temporal regularity, which was not found in noise-exposed animals. Moreover, only noise-exposed animals showed restoration comparable to the typical EP amplitude suppression pattern following MGB HFS. The current findings confirm adaptive thalamic sensory gating based on different sound characteristics and provide evidence that temporal regularity affects MGB auditory signaling.
14. Sensitivity to syllable stress regularities in externally but not self-triggered speech in Dutch. Eur J Neurosci 2023. PMID: 37122233. DOI: 10.1111/ejn.16003.
Abstract
Several theories of predictive processing propose reduced sensory and neural responses to anticipated events. Support comes from M/EEG studies, showing reduced auditory N1 and P2 responses to self- compared to externally generated events, or when the timing and form of stimuli are more predictable. The current study examined the sensitivity of N1 and P2 responses to statistical speech regularities. We employed a motor-to-auditory paradigm comparing ERP responses to externally and self-triggered pseudowords. Participants were presented with a cue indicating which button to press (motor-auditory condition) or which pseudoword would be presented (auditory-only condition). Stimuli consisted of the participant's own voice uttering pseudowords that varied in phonotactic probability and syllable stress. We expected to see N1 and P2 suppression for self-triggered stimuli, with greater suppression effects for more predictable features such as high phonotactic probability and first syllable stress in pseudowords. In a temporal principal component analysis (PCA), we observed an interaction between syllable stress and condition for the N1, where second syllable stress items elicited a larger N1 than first syllable stress items, but only for externally generated stimuli. We further observed an effect of syllable stress on the P2, where first syllable stress items elicited a larger P2. Strikingly, we did not observe motor-induced suppression for self-triggered stimuli for either the N1 or the P2 component, likely due to the temporal predictability of the stimulus onset in both conditions. Taking into account previous findings, the current results suggest that sensitivity to syllable stress regularities depends on task demands.
15. Individual neurophysiological signatures of spontaneous rhythm processing. Neuroimage 2023; 273:120090. PMID: 37028735. DOI: 10.1016/j.neuroimage.2023.120090.
Abstract
When sensory input conveys rhythmic regularity, we can form predictions about the timing of upcoming events. Although rhythm processing capacities differ considerably between individuals, these differences are often obscured by participant- and trial-level data averaging procedures in M/EEG research. Here, we systematically assessed neurophysiological variability displayed by individuals listening to isochronous (1.54 Hz) equitone sequences interspersed with unexpected (amplitude-attenuated) deviant tones. Our approach aimed at revealing time-varying adaptive neural mechanisms for sampling the acoustic environment at multiple timescales. Rhythm tracking analyses confirmed that individuals encode temporal regularities and form temporal expectations, as indicated in delta-band (1.54 Hz) power and its anticipatory phase alignment to expected tone onsets. Zooming into tone- and participant-level data, we further characterized intra- and inter-individual variabilities in phase-alignment across auditory sequences. Further, individual modelling of beta-band tone-locked responses showed that a subset of auditory sequences was sampled rhythmically by superimposing binary (strong-weak; S-w), ternary (S-w-w) and mixed accentuation patterns. In these sequences, neural responses to standard and deviant tones were modulated by a binary accentuation pattern, thus pointing towards a mechanism of dynamic attending. Altogether, the current results point toward complementary roles of delta- and beta-band activity in rhythm processing and further highlight diverse and adaptive mechanisms to track and sample the acoustic environment at multiple timescales, even in the absence of task-specific instructions.
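Anticipatory phase alignment of delta-band activity to expected onsets, as analyzed in entry 15, is commonly quantified with inter-trial phase coherence (ITPC): the length of the mean resultant vector of per-trial phases at the stimulus rate. A synthetic-data sketch (sampling rate, trial count, and noise level are assumptions, not the study's parameters):

```python
import numpy as np

fs = 250.0          # sampling rate in Hz (assumed)
stim_rate = 1.54    # Hz, the isochronous stimulation rate
n_trials, n_samp = 60, 1000                 # 60 trials of 4 s each
t = np.arange(n_samp) / fs
rng = np.random.default_rng(2)

# Synthetic single-channel trials: a delta component phase-locked to the
# 1.54 Hz stimulus rate, buried in broadband noise.
trials = np.sin(2 * np.pi * stim_rate * t) + rng.normal(0, 2.0, (n_trials, n_samp))

# Per-trial phase at (the FFT bin nearest to) the stimulus rate.
freqs = np.fft.rfftfreq(n_samp, 1 / fs)
k = np.argmin(np.abs(freqs - stim_rate))
phases = np.angle(np.fft.rfft(trials, axis=1)[:, k])

# ITPC: 1 = perfect phase alignment across trials, ~0 = random phases.
itpc = np.abs(np.mean(np.exp(1j * phases)))

# Control: pure noise yields low coherence.
noise = rng.normal(0, 2.0, (n_trials, n_samp))
itpc_noise = np.abs(np.mean(np.exp(1j * np.angle(np.fft.rfft(noise, axis=1)[:, k]))))
```

Because the phase-locked component is identical across trials while the noise phase is random, the ITPC of the signal trials should approach 1 and the noise control should stay near 1/√(number of trials).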
16.
Abstract
Appraisals can be influenced by cultural beliefs and stereotypes. In line with this, past research has shown that judgments about the emotional expression of a face are influenced by the face's sex, and vice versa that judgments about the sex of a person somewhat depend on the person's facial expression. For example, participants associate anger with male faces, and female faces with happiness or sadness. However, the strength and the bidirectionality of these effects remain debated. Moreover, the interplay of a stimulus' emotion and sex remains mostly unknown in the auditory domain. To investigate these questions, we created a novel stimulus set of 121 avatar faces and 121 human voices (available at https://bit.ly/2JkXrpy) with matched, fine-scale changes along the emotional (happy to angry) and sexual (male to female) dimensions. In a first experiment (N = 76), we found clear evidence for the mutual influence of facial emotion and sex cues on ratings, and moreover for larger implicit (task-irrelevant) effects of stimulus' emotion than of sex. These findings were replicated and extended in two preregistered studies-one laboratory categorization study using the same face stimuli (N = 108; https://osf.io/ve9an), and one online study with vocalizations (N = 72; https://osf.io/vhc9g). Overall, results show that the associations of maleness-anger and femaleness-happiness exist across sensory modalities, and suggest that emotions expressed in the face and voice cannot be entirely disregarded, even when attention is mainly focused on determining stimulus' sex. We discuss the relevance of these findings for cognitive and neural models of face and voice processing.
17. Age interferes with sensorimotor timing and error correction in the supra-second range. Front Aging Neurosci 2023; 14:1048610. PMID: 36704500. PMCID: PMC9871492. DOI: 10.3389/fnagi.2022.1048610.
Abstract
Introduction Precise motor timing including the ability to adjust movements after changes in the environment is fundamental to many daily activities. Sensorimotor timing in the sub-and supra-second range might rely on at least partially distinct brain networks, with the latter including the basal ganglia (BG) and the prefrontal cortex (PFC). Since both structures are particularly vulnerable to age-related decline, the present study investigated whether age might distinctively affect sensorimotor timing and error correction in the supra-second range. Methods A total of 50 healthy right-handed volunteers with 22 older (age range: 50-60 years) and 28 younger (age range: 20-36 years) participants synchronized the tap-onsets of their right index finger with an isochronous auditory pacing signal. Stimulus onset asynchronies were either 900 or 1,600 ms. Positive or negative step-changes that were perceivable or non-perceivable were occasionally interspersed to the fixed intervals to induce error correction. A simple reaction time task served as control condition. Results and Discussion In line with our hypothesis, synchronization variability in trials with supra-second intervals was larger in the older group. While reaction times were not affected by age, the mean negative asynchrony was significantly smaller in the elderly in trials with positive step-changes, suggesting more pronounced tolerance of positive deviations at older age. The analysis of error correction by means of the phase correction response (PCR) suggests reduced error correction in the older group. This effect emerged in trials with supra-second intervals and large positive step-changes, only. 
Overall, these results support the hypothesis that sensorimotor synchronization in the sub-second range is maintained in the elderly, but that synchronization accuracy and error correction in the supra-second range are reduced as early as the fifth decade of life, suggesting that these measures are suitable for the early detection of age-related changes in the motor system.
|
18
|
The cheese was green with… envy: An EEG study on minimal fictional descriptions. BRAIN AND LANGUAGE 2023; 236:105218. [PMID: 36571932 DOI: 10.1016/j.bandl.2022.105218] [Received: 05/09/2022] [Revised: 10/24/2022] [Accepted: 12/14/2022] [Indexed: 06/17/2023]
Abstract
Inconsistent information can be hard to understand, but in contexts like fiction, readers can integrate it with little to no difficulty. The present study aimed to examine whether perspective switching can take place when only a minimal fictional description is provided (fictional world condition), as compared with general world knowledge (real world condition). Participants read sentences in which food items had animate or inanimate features while EEG was recorded, and then performed a sentence completion task to evaluate recall. In the real-world condition, the N400 was significantly larger for sentences incongruent, rather than congruent, with general world knowledge. In the fictional world condition, the N400 elicited by congruent and incongruent sentences did not differ, confirming that the minimal description impacted online information processing. Information consistent with general knowledge was better recalled in both conditions. The current results highlight how contextual information is integrated during sentence comprehension.
|
19
|
Sleeping with time in mind? A literature review and a proposal for a screening questionnaire on self-awakening. PLoS One 2023; 18:e0283221. [PMID: 36952462 PMCID: PMC10035927 DOI: 10.1371/journal.pone.0283221] [Received: 06/07/2022] [Accepted: 03/03/2023] [Indexed: 03/25/2023] Open
Abstract
Some people report being able to spontaneously "time" the end of their sleep. This ability to self-awaken challenges the idea of sleep as a passive cognitive state. Yet, current evidence on this phenomenon is limited, partly because of the varied definitions of self-awakening and experimental approaches used to study it. Here, we provide a review of the literature on self-awakening. Our aim is to i) contextualise the phenomenon, ii) propose an operational definition, and iii) summarise the scientific approaches used so far. The literature review identified 17 studies on self-awakening. Most of them adopted an objective sleep evaluation (76%), targeted nocturnal sleep (76%), and used a single criterion to define the success of awakening (82%); for most studies, this corresponded to awakening occurring in a time window of 30 minutes around the expected awakening time. Out of 715 total participants, 125 (17%) reported being self-awakeners, with an average age of 23.24 years and a slight predominance of males over females. These results reveal self-awakening as a relatively rare phenomenon. To facilitate the study of self-awakening, and based on the results of the literature review, we propose a quick paper-and-pencil screening questionnaire for self-awakeners and provide an initial validation for it. Taken together, the combined results of the literature review and the proposed questionnaire help in characterising a theoretical framework for self-awakenings, while providing a useful tool and empirical suggestions for future experimental studies, which should ideally employ objective measurements.
|
20
|
Attention and emotion shape self-voice prioritization in speech processing. Cortex 2023; 158:83-95. [PMID: 36473276 DOI: 10.1016/j.cortex.2022.10.006] [Received: 04/25/2022] [Revised: 09/27/2022] [Accepted: 10/06/2022] [Indexed: 01/18/2023]
Abstract
Both self-voice and emotional speech are salient signals that are prioritized in perception. Surprisingly, self-voice perception has been investigated to a lesser extent than the self-face. Therefore, it remains to be clarified whether self-voice prioritization is boosted by emotion, and whether self-relevance and emotion interact differently when attention is focused on who is speaking vs. what is being said. Thirty participants listened to 210 prerecorded words, spoken in their own or an unfamiliar voice and differing in emotional valence, in two tasks manipulating the attentional focus on either speaker identity or speech emotion. Event-related potentials (ERPs) derived from the electroencephalogram (EEG) revealed the temporal dynamics of self-relevance, emotion, and attention effects. Words spoken in the participant's own voice elicited a larger N1 and Late Positive Potential (LPP), but a smaller N400. Identity and emotion interactively modulated the P2 (self-positivity bias) and LPP (self-negativity bias). Attention to speaker identity more strongly modulated ERP responses within 600 ms post-word onset (N1, P2, N400), whereas attention to speech emotion altered the late component (LPP). However, attention did not modulate the interaction of self-relevance and emotion. These findings suggest that the self-voice is prioritized for neural processing at early sensory stages, and that both emotion and attention shape self-voice prioritization in speech processing. They also confirm involuntary processing of salient signals (self-relevance and emotion) even in situations in which attention is deliberately directed away from those cues. These findings have important implications for a better understanding of symptoms thought to arise from aberrant self-voice monitoring such as auditory verbal hallucinations.
|
21
|
Magnetoencephalography recordings reveal the spatiotemporal dynamics of recognition memory for complex versus simple auditory sequences. Commun Biol 2022; 5:1272. [PMID: 36402843 PMCID: PMC9675809 DOI: 10.1038/s42003-022-04217-8] [Received: 05/06/2022] [Accepted: 11/02/2022] [Indexed: 11/21/2022] Open
Abstract
Auditory recognition is a crucial cognitive process that relies on the organization of single elements over time. However, little is known about the spatiotemporal dynamics underlying the conscious recognition of auditory sequences varying in complexity. To study this, we asked 71 participants to learn and recognize simple tonal musical sequences and matched complex atonal sequences while their brain activity was recorded using magnetoencephalography (MEG). Results reveal qualitative changes in neural activity dependent on stimulus complexity: recognition of tonal sequences engages hippocampal and cingulate areas, whereas recognition of atonal sequences mainly activates the auditory processing network. Our findings reveal the involvement of a cortico-subcortical brain network for auditory recognition and support the idea that stimulus complexity qualitatively alters the neural pathways of recognition memory.
|
22
|
Interhemispheric Brain Communication and the Evolution of Turn-Taking in Mammals. Front Ecol Evol 2022. [DOI: 10.3389/fevo.2022.916956] [Indexed: 11/13/2022] Open
Abstract
In the last 20 years, research on turn-taking and duetting has flourished in at least three, historically separate disciplines: animal behavior, language sciences, and music cognition. While different in scope and methods, all three ultimately share one goal—namely the understanding of timed interactions among conspecifics. In this perspective, we aim to connect turn-taking and duetting across species from a neural perspective. While we are still far from a defined neuroethology of turn-taking, we argue that the human neuroscience of turn-taking and duetting can inform animal bioacoustics. For this, we focus on a particular concept, interhemispheric connectivity, and its main white-matter substrate, the corpus callosum. We provide an overview of the role of the corpus callosum in human neuroscience and interactive music and speech. We hypothesize its mechanistic connection to turn-taking and duetting in our species, and a potential translational link to mammalian research. We conclude by illustrating empirical avenues for neuroethological research of turn-taking and duetting in mammals.
|
23
|
Hypersensitivity to passive voice hearing in hallucination proneness. Front Hum Neurosci 2022; 16:859731. [PMID: 35966990 PMCID: PMC9366353 DOI: 10.3389/fnhum.2022.859731] [Received: 01/21/2022] [Accepted: 06/29/2022] [Indexed: 11/21/2022] Open
Abstract
Voices are a complex and rich acoustic signal processed in an extensive cortical brain network. Specialized regions within this network support voice perception and production and may be differentially affected in pathological voice processing. For example, the experience of hallucinating voices has been linked to hyperactivity in temporal and extra-temporal voice areas, possibly extending into regions associated with vocalization. Predominant self-monitoring hypotheses ascribe a primary role of voice production regions to auditory verbal hallucinations (AVH). Alternative postulations view a generalized perceptual salience bias as causal to AVH. These theories are not mutually exclusive as both ascribe the emergence and phenomenology of AVH to unbalanced top-down and bottom-up signal processing. The focus of the current study was to investigate the neurocognitive mechanisms underlying predisposition brain states for emergent hallucinations, detached from the effects of inner speech. Using the temporal voice area (TVA) localizer task, we explored putative hypersalient responses to passively presented sounds in relation to hallucination proneness (HP). Furthermore, to avoid confounds commonly found in clinical samples, we employed the Launay-Slade Hallucination Scale (LSHS) for the quantification of HP levels in healthy people across an experiential continuum spanning the general population. We report increased activation in the right posterior superior temporal gyrus (pSTG) during the perception of voice features that positively correlates with increased HP scores. In line with prior results, we propose that this right-lateralized pSTG activation might indicate early hypersensitivity to acoustic features coding speaker identity that extends beyond own voice production to perception in healthy participants prone to experience AVH.
|
24
|
Cognition through the lens of a body–brain dynamic system. Trends Neurosci 2022; 45:667-677. [DOI: 10.1016/j.tins.2022.06.004] [Received: 01/22/2022] [Revised: 06/07/2022] [Accepted: 06/13/2022] [Indexed: 12/01/2022]
|
25
|
ERPs reveal an iconic relation between sublexical phonology and affective meaning. Cognition 2022; 226:105182. [PMID: 35689874 DOI: 10.1016/j.cognition.2022.105182] [Received: 07/10/2021] [Revised: 05/15/2022] [Accepted: 05/25/2022] [Indexed: 11/03/2022]
Abstract
Classical linguistic theory assumes that formal aspects, like sound, are not internally related to the meaning of words. However, recent research suggests language might code affective meaning such as threat and alert sublexically. Positing affective phonological iconicity as a systematic organization principle of the German lexicon, we calculated sublexical affective values for sub-syllabic phonological word segments from a large-scale affective lexical German database by averaging valence and arousal ratings of all words any phonological segment appears in. We tested word stimuli with either consistent or inconsistent mappings between lexical affective meaning and sublexical affective values (negative-valence/high-arousal vs. neutral-valence/low-arousal) in an EEG visual-lexical-decision task. A mismatch between sublexical and lexical affective values elicited an increased N400 response. These results reveal that systematic affective phonological iconicity - extracted from the lexicon - impacts the extraction of lexical word meaning during reading.
|
26
|
Identifying a brain network for musical rhythm: A functional neuroimaging meta-analysis and systematic review. Neurosci Biobehav Rev 2022; 136:104588. [PMID: 35259422 PMCID: PMC9195154 DOI: 10.1016/j.neubiorev.2022.104588] [Received: 08/25/2021] [Revised: 01/31/2022] [Accepted: 02/14/2022] [Indexed: 01/05/2023]
Abstract
We conducted a systematic review and meta-analysis of 30 functional magnetic resonance imaging studies investigating processing of musical rhythms in neurotypical adults. First, we identified a general network for musical rhythm, encompassing all relevant sensory and motor processes (beat-based vs. rest baseline, 12 contrasts); this analysis revealed a large network involving auditory and motor regions, including the bilateral superior temporal cortices, supplementary motor area (SMA), putamen, and cerebellum. Second, we identified more precise loci for beat-based musical rhythms (beat-based vs. audio-motor control, 8 contrasts) in the bilateral putamen. Third, we identified regions modulated by beat-based rhythmic complexity (complexity, 16 contrasts), which included the bilateral SMA-proper/pre-SMA, cerebellum, inferior parietal regions, and right temporal areas. This meta-analysis suggests that musical rhythm is largely represented in a bilateral cortico-subcortical network. Our findings align with existing theoretical frameworks about auditory-motor coupling to a musical beat and provide a foundation for studying how the neural bases of musical rhythm may overlap with other cognitive domains.
|
27
|
Prediction in the aging brain: Merging cognitive, neurological, and evolutionary perspectives. J Gerontol B Psychol Sci Soc Sci 2022; 77:1580-1591. [PMID: 35429160 PMCID: PMC9434449 DOI: 10.1093/geronb/gbac062] [Received: 10/08/2021] [Indexed: 12/02/2022] Open
Abstract
Although the aging brain is typically characterized by declines in a variety of cognitive functions, there has been growing attention to cognitive functions that may stabilize or improve with age. We integrate evidence from behavioral, computational, and neurological domains under the hypothesis that over the life span the brain becomes more effective at predicting (i.e., utilizing knowledge) compared to learning. Moving beyond mere description of the empirical literature—with the aim of arriving at a deeper understanding of cognitive aging—we provide potential explanations for a learning-to-prediction shift based on evolutionary models and principles of senescence and plasticity. The proposed explanations explore whether the occurrence of a learning-to-prediction shift can be explained by (changes in) the fitness effects of learning and prediction over the life span. Prediction may optimize (a) the allocation of limited resources across the life span, and/or (b) late-life knowledge transfer (social learning). Alternatively, late-life prediction may reflect a slower decline in prediction compared to learning. By discussing these hypotheses, we aim to provide a foundation for an integrative neurocognitive–evolutionary perspective on aging and to stimulate further theoretical and empirical work.
|
28
|
Abstract
Introduction: Auditory verbal hallucinations (AVH) are a cardinal symptom of schizophrenia but are also reported in the general population without need for psychiatric care. Previous evidence suggests that AVH may reflect an imbalance of prior expectation and sensory information, and that altered salience processing is characteristic of both psychotic and non-clinical voice hearers. However, it remains to be shown how such an imbalance affects the categorisation of vocal emotions in perceptual ambiguity. Methods: Neutral and emotional nonverbal vocalisations were morphed along two continua differing in valence (anger; pleasure), each including 11 morphing steps at intervals of 10%. College students (N = 234) differing in AVH proneness (measured with the Launay-Slade Hallucination Scale) evaluated the emotional quality of the vocalisations. Results: Increased AVH proneness was associated with more frequent categorisation of ambiguous vocalisations as 'neutral', irrespective of valence. Similarly, the perceptual boundary for emotional classification was shifted by AVH proneness: participants needed more emotional information to categorise a voice as emotional. Conclusions: These findings suggest that emotional salience in vocalisations is dampened as a function of increased AVH proneness. This could be related to changes in the acoustic representations of emotions or reflect top-down expectations of less salient information in the social environment.
|
29
|
Cortical thickness in default mode network hubs correlates with clinical features of dissociative seizures. Epilepsy Behav 2022; 128:108605. [PMID: 35152170 DOI: 10.1016/j.yebeh.2022.108605] [Received: 09/30/2021] [Revised: 01/21/2022] [Accepted: 01/26/2022] [Indexed: 11/03/2022]
Abstract
BACKGROUND Dissociative seizures (DS) are a common subtype of functional neurological disorder (FND) with an incompletely understood pathophysiology. Here, gray matter variations and their relationship to clinical features were investigated. METHODS Forty-eight patients with DS without neurological comorbidities and 43 matched clinical control patients with syncope with structural brain MRIs were identified retrospectively. FreeSurfer-based cortical thickness and FSL FIRST-based subcortical volumes were used for quantitative analyses, and all findings were age and sex adjusted, and corrected for multiple comparisons. RESULTS Groups were not statistically different in cortical thickness or subcortical volumes. For patients with DS, illness duration was inversely correlated with cortical thickness of left-sided anterior and posterior cortical midline structures (perigenual/dorsal anterior cingulate cortex, superior parietal cortex, precuneus), and clusters at the left temporoparietal junction (supramarginal gyrus, postcentral gyrus, superior temporal gyrus), left postcentral gyrus, and right pericalcarine cortex. Dissociative seizure duration was inversely correlated with cortical thickness in the left perigenual anterior cingulate cortex, superior/middle frontal gyri, precentral gyrus and lateral occipital cortex, along with the right isthmus-cingulate and posterior-cingulate, middle temporal gyrus, and precuneus. Seizure frequency did not show any significant correlations. CONCLUSIONS In patients with DS, illness duration inversely correlated with cortical thickness of left-sided default mode network cortical hubs, while seizure duration correlated with left frontopolar and right posteromedial areas, among others. Etiological factors contributing to neuroanatomical variations in areas related to self-referential processing in patients with DS require more research inquiry.
|
30
|
Overt Oculomotor Behavior Reveals Covert Temporal Predictions. Front Hum Neurosci 2022; 16:758138. [PMID: 35221954 PMCID: PMC8874352 DOI: 10.3389/fnhum.2022.758138] [Received: 08/13/2021] [Accepted: 01/14/2022] [Indexed: 11/21/2022] Open
Abstract
Our eyes move in response to stimulus statistics, reacting to surprising events, and adapting to predictable ones. Cortical and subcortical pathways contribute to generating context-specific eye-movement dynamics, and oculomotor dysfunction is recognized as one of the early clinical markers of Parkinson's disease (PD). We asked if covert computations of environmental statistics generating temporal expectations for a potential target are registered by eye movements, and if so, assuming that temporal expectations rely on motor system efficiency, whether they are impaired in PD. We used a repeating tone sequence, which generates a hazard rate distribution of target probability, and analyzed the distribution of blinks when participants were waiting for the target, but the target did not appear. Results show that, although PD participants tend to produce fewer and less temporally organized blink events relative to healthy controls, in both groups blinks became more suppressed with increasing target probability, producing a hazard-rate pattern of oculomotor inhibition. The covert generation of temporal predictions may reflect a key feature of cognitive resilience in Parkinson's disease.
|
31
|
Dysfunctional Timing in Traumatic Brain Injury Patients: Co-occurrence of Cognitive, Motor, and Perceptual Deficits. Front Psychol 2021; 12:731898. [PMID: 34733208 PMCID: PMC8558219 DOI: 10.3389/fpsyg.2021.731898] [Received: 06/28/2021] [Accepted: 09/27/2021] [Indexed: 11/21/2022] Open
Abstract
Timing is an essential part of human cognition and of everyday life activities, such as walking or holding a conversation. Previous studies showed that traumatic brain injury (TBI) often affects cognitive functions such as processing speed and time-sensitive abilities, causing long-term sequelae as well as daily impairments. However, the existing evidence on timing capacities in TBI is mostly limited to perception and the processing of isolated intervals. It therefore remains open whether the observed deficits extend to motor timing and to continuous dynamic tasks that more closely match daily life activities. The current study set out to answer these questions by assessing audio-motor timing abilities and their relationship with cognitive functioning in a group of TBI patients (n = 15) and healthy matched controls. We employed a comprehensive set of tasks aiming at testing timing abilities across perception and production and from single intervals to continuous auditory sequences. In line with previous research, we report functional impairments in TBI patients concerning cognitive processing speed and perceptual timing. Critically, these deficits extended to motor timing: The ability to adjust to tempo changes in an auditory pacing sequence was impaired in TBI patients, and this motor timing deficit covaried with measures of processing speed. These findings confirm previous evidence on perceptual and cognitive timing deficits resulting from TBI and provide first evidence for comparable deficits in motor behavior. This suggests basic co-occurring perceptual and motor timing impairments that may factor into a wide range of daily activities. Our results thus place TBI into the wider range of pathologies with well-documented timing deficits (such as Parkinson’s disease) and encourage the search for novel timing-based therapeutic interventions (e.g., employing dynamic and/or musical stimuli) with high transfer potential to everyday life activities.
|
32
|
An ecological approach to measuring synchronization abilities across the animal kingdom. Philos Trans R Soc Lond B Biol Sci 2021; 376:20200336. [PMID: 34420382 PMCID: PMC8380968 DOI: 10.1098/rstb.2020.0336] [Indexed: 02/03/2023] Open
Abstract
In this perspective paper, we focus on the study of synchronization abilities across the animal kingdom. We propose an ecological approach to studying nonhuman animal synchronization that begins from observations about when, how and why an animal might synchronize spontaneously with natural environmental rhythms. We discuss what we consider to be the most important, but thus far largely understudied, temporal, physical, perceptual and motivational constraints that must be taken into account when designing experiments to test synchronization in nonhuman animals. First and foremost, different species are likely to be sensitive to and therefore capable of synchronizing at different timescales. We also argue that it is fruitful to consider the latent flexibility of animal synchronization. Finally, we discuss the importance of an animal's motivational state for showcasing synchronization abilities. We demonstrate that the likelihood that an animal can successfully synchronize with an environmental rhythm is context-dependent and suggest that the list of species capable of synchronization is likely to grow when tested with ecologically honest, species-tuned experiments. This article is part of the theme issue ‘Synchrony and rhythm interaction: from the brain to behavioural ecology’.
|
33
|
Synchrony and rhythm interaction: from the brain to behavioural ecology. Philos Trans R Soc Lond B Biol Sci 2021; 376:20200324. [PMID: 34420379 DOI: 10.1098/rstb.2020.0324] [Indexed: 12/17/2022] Open
Abstract
This theme issue assembles current studies that ask how and why precise synchronization and related forms of rhythm interaction are expressed in a wide range of behaviour. The studies cover human activity, with an emphasis on music, and social behaviour, reproduction and communication in non-human animals. In most cases, the temporally aligned rhythms have short periods, from several seconds down to a fraction of a second, and are regulated by central nervous system pacemakers, but interactions involving rhythms that are 24 h or longer and originate in biological clocks also occur. Across this spectrum of activities, species and time scales, empirical work and modelling suggest that synchrony arises from a limited number of coupled-oscillator mechanisms with which individuals mutually entrain. Phylogenetic distribution of these common mechanisms points towards convergent evolution. Studies of animal communication indicate that many synchronous interactions between the signals of neighbouring individuals are specifically favoured by selection. However, synchronous displays are often emergent properties of entrainment between signalling individuals, and in some situations, the very signallers who produce a display might not gain any benefit from the collective timing of their production. This article is part of the theme issue 'Synchrony and rhythm interaction: from the brain to behavioural ecology'.
|
34
|
Temporo-cerebellar connectivity underlies timing constraints in audition. eLife 2021; 10:67303. [PMID: 34542407 PMCID: PMC8480974 DOI: 10.7554/elife.67303] [Received: 02/07/2021] [Accepted: 09/09/2021] [Indexed: 12/26/2022] Open
Abstract
The flexible and efficient adaptation to dynamic, rapid changes in the auditory environment likely involves the generation and updating of internal models. Such models arguably exploit connections between the neocortex and the cerebellum, supporting proactive adaptation. Here, we tested whether temporo-cerebellar disconnection is associated with the processing of sound at short timescales. First, we identified lesion-specific deficits for the encoding of short timescale spectro-temporal non-speech and speech properties in patients with left posterior temporal cortex stroke. Second, using lesion-guided probabilistic tractography in healthy participants, we revealed bidirectional temporo-cerebellar connectivity with cerebellar dentate nuclei and crura I/II. These findings support the view that the encoding and modeling of rapidly modulated auditory spectro-temporal properties can rely on a temporo-cerebellar interface. We discuss these findings in view of the conjecture that proactive adaptation to a dynamic environment via internal models is a generalizable principle.
|
35
|
Resting functional connectivity in the semantic appraisal network predicts accuracy of emotion identification. NEUROIMAGE-CLINICAL 2021; 31:102755. [PMID: 34274726 PMCID: PMC8319356 DOI: 10.1016/j.nicl.2021.102755] [Received: 03/29/2021] [Revised: 07/01/2021] [Accepted: 07/03/2021] [Indexed: 11/27/2022]
Abstract
OBJECTIVE Structural and task-based functional studies associate emotion reading with frontotemporal brain networks, though it remains unclear whether functional connectivity (FC) alone predicts emotion reading ability. The predominantly frontotemporal salience and semantic appraisal (SAN) networks are selectively impacted in neurodegenerative disease syndromes like behavioral-variant frontotemporal dementia (bvFTD) and semantic-variant primary progressive aphasia (svPPA). Accurate emotion identification diminishes in some of these patients, but studies investigating the source of this symptom have predominantly examined structural rather than functional brain changes. Thus, we investigated the impact of altered connectivity on emotion reading. METHODS One hundred eighty-five participants (26 bvFTD, 21 svPPA, 24 non-fluent variant PPA, 24 progressive supranuclear palsy, 49 Alzheimer's disease, 41 neurologically healthy older controls) underwent task-free fMRI and completed the Emotion Evaluation subtest of The Awareness of Social Inference Test (TASIT-EET), watching videos and selecting labels for actors' emotions. RESULTS As expected, patients averaged significantly worse on emotion reading, but with wide inter-individual variability. Across all groups, lower mean FC in the SAN, but not other ICNs, predicted worse TASIT-EET performance. Node-pair analysis revealed that emotion identification was predicted by FC between 1) right anterior temporal lobe (RaTL) and right anterior orbitofrontal cortex (OFC), 2) RaTL and right posterior OFC, and 3) left basolateral amygdala and left posterior OFC. CONCLUSION FC in specific SAN regions mediating socioemotional semantics, personalized evaluations, and salience-driven attention predicts emotion reading performance, highlighting the value of emotion testing in clinical and research settings to index neural circuit dysfunction in patients with neurodegeneration and other neurologic disorders.
|
36
|
Auditory thalamus dysfunction and pathophysiology in tinnitus: a predictive network hypothesis. Brain Struct Funct 2021; 226:1659-1676. [PMID: 33934235 PMCID: PMC8203542 DOI: 10.1007/s00429-021-02284-x]
Abstract
Tinnitus is the perception of a 'ringing' sound without an acoustic source. It is generally accepted that tinnitus develops after peripheral hearing loss and is associated with altered auditory processing. The thalamus is a crucial relay in the underlying pathways that actively shapes processing of auditory signals before the respective information reaches the cerebral cortex. Here, we review animal and human evidence to define thalamic function in tinnitus. Overall, increased spontaneous firing patterns and altered coherence between the thalamic medial geniculate body (MGB) and auditory cortices are observed in animal models of tinnitus. It is likely that the functional connectivity between the MGB and primary and secondary auditory cortices is reduced in humans. Conversely, there are indications of increased connectivity between the MGB and several areas in the cingulate cortex and posterior cerebellar regions, as well as variability in connectivity between the MGB and frontal areas regarding laterality and orientation in the inferior, medial and superior frontal gyrus. We suggest that these changes affect adaptive sensory gating of temporal and spectral sound features along the auditory pathway, reflecting dysfunction in an extensive thalamo-cortical network implicated in predictive temporal adaptation to the auditory environment. Modulation of temporal characteristics of input signals might hence factor into a thalamo-cortical dysrhythmia profile of tinnitus, but could ultimately also establish new directions for treatment options for persons with tinnitus.
|
37
|
Reading direct speech quotes increases theta phase-locking: Evidence for cortical tracking of inner speech? Neuroimage 2021; 239:118313. [PMID: 34175425 DOI: 10.1016/j.neuroimage.2021.118313]
Abstract
Growing evidence shows that theta-band (4-7 Hz) activity in the auditory cortex phase-locks to rhythms of overt speech. Does theta activity also encode the rhythmic dynamics of inner speech? Previous research established that silent reading of direct speech quotes (e.g., Mary said: "This dress is lovely!") elicits more vivid inner speech than indirect speech quotes (e.g., Mary said that the dress was lovely). As we cannot directly track the phase alignment between theta activity and inner speech over time, we used EEG to measure the brain's phase-locked responses to the onset of speech quote reading. We found that direct (vs. indirect) quote reading was associated with increased theta phase synchrony over trials at 250-500 ms post-reading onset, with sources of the evoked activity estimated in the speech processing network. An eye-tracking control experiment confirmed that increased theta phase synchrony in direct quote reading was not driven by eye movement patterns, and more likely reflects synchronous phase resetting at the onset of inner speech. These findings suggest a functional role of theta phase modulation in reading-induced inner speech.
|
38
|
The impact of capitalized German words on lexical access. Psychol Res 2021; 86:891-902. [PMID: 34091714 DOI: 10.1007/s00426-021-01540-3]
Abstract
Leading models of visual word recognition assume that the process of word identification is driven by abstract, case-invariant units (e.g., table and TABLE activate the same abstract representation). But do these models need to be modified to accommodate orthographic nuances such as those of German, where the first letter of common nouns is capitalized (e.g., Buch [book] and Hund [dog], but blau [blue])? To examine the role of initial capitalization of German words in lexical access, we chose a semantic categorization task ("is the word an animal name?"). In Experiment 1, we compared German words in all-lowercase vs. initial capitalization (hund, buch, blau vs. Hund, Buch, Blau). Results showed faster responses for animal nouns with initial capitalization (Hund < hund) and faster responses for lowercase non-nouns (blau < Blau). Surprisingly, we also found faster responses for lowercase non-animal nouns (buch < Buch). As the latter difference could derive from task demands (i.e., buch does not follow German orthographic rules and requires a "no" response), we replaced the all-lowercase format with an orthographically legal all-uppercase format in Experiment 2. Results showed an advantage for all nouns with initial capitalization (Hund < HUND and Buch < BUCH). These findings clearly show that initial capitalization in German words constitutes an essential part of the words' representations and is used during lexical access. Thus, models of visual word recognition, primarily focused on English orthography, should be expanded to accommodate the idiosyncrasies of other Latin-based orthographies.
|
39
|
Dissociating embodiment and emotional reactivity in motor responses to artworks. Cognition 2021; 212:104663. [PMID: 33761410 DOI: 10.1016/j.cognition.2021.104663]
Abstract
Perceiving art is known to elicit motor cortex activation in an observer's brain. This motor activation has often been attributed to a covert approach response associated with the emotional valence of an art piece (emotional reaction hypothesis). However, recent accounts have proposed that aesthetic experiences could be grounded in the motor simulation of actions required to produce an art piece and of the sensorimotor states embedded in its subject (embodied aesthetic hypothesis). Here, we aimed to test these two hypotheses by assessing whether motor facilitation during artwork perception mirrors emotional or motor simulation processes. To this aim, we capitalized on single pulse transcranial magnetic stimulation revealing a two-stage motor coding of emotional body postures: an early, non-specific activation related to emotion processing and a later action-specific activation reflecting motor simulation. We asked art-naïve individuals to rate how much they liked a series of pointillist and brushstroke canvases; photographs of artistic gardens served as control natural stimuli. After an early (150 ms) or a later (300 ms) post-stimulus delay, motor evoked potentials were recorded from wrist-extensor and finger muscles that were more involved in brushstroke- and pointillist-like painting, respectively. Results showed that observing the two canvas styles did not elicit differential motor activation in the early time window for either muscle, not supporting the emotional reaction hypothesis. However, in support of the embodied aesthetic hypothesis, we found in the later time window greater motor activation responses to brushstroke than pointillist canvases for the wrist-extensor, but not for the finger muscle. Furthermore, this muscle-selective facilitation was associated with lower liking ratings of brushstroke canvases and with greater empathy dispositions. These findings support the claim that simulation of the painter's movements is crucial for aesthetic experience, by documenting a link between motor simulation, dispositional empathy, and subjective appreciation in artwork perception.
|
40
|
|
41
|
Expectancy changes the self-monitoring of voice identity. Eur J Neurosci 2021; 53:2681-2695. [PMID: 33638190 PMCID: PMC8252045 DOI: 10.1111/ejn.15162]
Abstract
Self-voice attribution can become difficult when voice characteristics are ambiguous, but functional magnetic resonance imaging (fMRI) investigations of such ambiguity are sparse. We utilized voice-morphing (self-other) to manipulate (un-)certainty in self-voice attribution in a button-press paradigm. This allowed investigating how levels of self-voice certainty alter activation in brain regions monitoring voice identity and unexpected changes in voice playback quality. fMRI results confirmed a self-voice suppression effect in the right anterior superior temporal gyrus (aSTG) when self-voice attribution was unambiguous. Although the right inferior frontal gyrus (IFG) was more active during a self-generated compared to a passively heard voice, the putative role of this region in detecting unexpected self-voice changes during the action was demonstrated only when hearing the voice of another speaker and not when attribution was uncertain. Further research on the link between the right aSTG and IFG is required and may establish how voice identity is monitored during action. The current results have implications for a better understanding of the altered experience of self-voice feedback in auditory verbal hallucinations.
|
42
|
Abstract
Neurocognitive models (e.g., Schirmer & Kotz, 2006) have helped to characterize how listeners incrementally derive meaning from vocal expressions of emotion in spoken language, what neural mechanisms are involved at different processing stages, and their relative time course. But how can these insights be applied to communicative situations in which prosody serves a predominantly interpersonal function? This comment examines recent data highlighting the dynamic interplay of prosody and language, when vocal attributes serve the sociopragmatic goals of the speaker or reveal interpersonal information that listeners use to construct a mental representation of what is being communicated. Our comment serves as a beacon to researchers interested in how the neurocognitive system “makes sense” of socioemotive aspects of prosody.
|
43
|
Dissonance in Music Impairs Spatial Gait Parameters in Patients with Parkinson's Disease. J Parkinsons Dis 2020; 11:363-372. [PMID: 33285641 DOI: 10.3233/jpd-202413]
Abstract
BACKGROUND It is known that music influences gait parameters in Parkinson's disease (PD). However, it remains unclear whether this effect is merely due to temporal aspects of music (rhythm and tempo) or to other musical parameters. OBJECTIVE To examine the influence of pleasant and unpleasant music on spatiotemporal gait parameters in PD, while controlling for rhythmic aspects of the musical signal. METHODS We measured spatiotemporal gait parameters of 18 patients with mild PD (50% men; mean ± SD age of 64 ± 6 years; mean disease duration of 6 ± 5 years; mean Unified PD Rating Scale (UPDRS) motor score of 15 ± 7) who listened to eight different pieces of music. Music pieces varied in harmonic consonance/dissonance to create the experience of pleasant/unpleasant feelings. To measure gait parameters, we used an established analysis of spatiotemporal gait, which consists of a walkway containing pressure-receptive sensors (GAITRite®). Repeated measures analyses of variance were used to evaluate effects of auditory stimuli. In addition, linear regression was used to evaluate effects of valence on gait. RESULTS Sensory dissonance modulated spatiotemporal and spatial gait parameters, namely velocity and stride length, while temporal gait parameters (cadence, swing duration) were not affected. In contrast, valence in music as perceived by patients was not associated with gait parameters. Motor and musical abilities did not relevantly influence the modulation of gait by auditory stimuli. CONCLUSION Our observations suggest that dissonant music negatively affects particularly spatial gait parameters in PD by as yet unknown mechanisms, putatively through increased cognitive interference reducing attention during auditory cueing.
|
44
|
Dynamic acoustic salience evokes motor responses. Cortex 2020; 134:320-332. [PMID: 33340879 DOI: 10.1016/j.cortex.2020.10.019]
Abstract
Audio-motor integration is currently viewed as a predictive process in which the brain simulates upcoming sounds based on voluntary actions. This perspective does not consider how our auditory environment may trigger involuntary action in the absence of prediction. We address this issue by examining the relationship between acoustic salience and involuntary motor responses. We investigate how acoustic features in music contribute to the perception of salience, and whether those features trigger involuntary peripheral motor responses. Participants with little-to-no musical training listened to musical excerpts once while remaining still during the recording of their muscle activity with surface electromyography (sEMG), and again while they continuously rated perceived salience within the music using a slider. We show cross-correlations between 1) salience ratings and acoustic features, 2) acoustic features and spontaneous muscle activity, and 3) salience ratings and spontaneous muscle activity. Amplitude, intensity, and spectral centroid were perceived as the most salient features in music, and fluctuations in these features evoked involuntary peripheral muscle responses. Our results suggest an involuntary mechanism for audio-motor integration, which may rely on brainstem-spinal or brainstem-cerebellar-spinal pathways. Based on these results, we argue that a new framework is needed to explain the full range of human sensorimotor capabilities. This goal can be achieved by considering how predictive and reactive audio-motor integration mechanisms could operate independently or interactively to optimize human behavior.
|
45
|
Cerebellar circuitry and auditory verbal hallucinations: An integrative synthesis and perspective. Neurosci Biobehav Rev 2020; 118:485-503. [DOI: 10.1016/j.neubiorev.2020.08.004]
|
46
|
|
47
|
An open-source toolbox for measuring dynamic video framerates and synchronizing video stimuli with neural and behavioral responses. J Neurosci Methods 2020; 343:108830. [DOI: 10.1016/j.jneumeth.2020.108830]
|
48
|
Real and imagined sensory feedback have comparable effects on action anticipation. Cortex 2020; 130:290-301. [PMID: 32698087 DOI: 10.1016/j.cortex.2020.04.030]
Abstract
The forward model monitors the success of sensory feedback to an action and links it to an efference copy originating in the motor system. The Readiness Potential (RP) of the electroencephalogram has been denoted as a neural signature of the efference copy. An open question is whether imagined sensory feedback works similarly to real sensory feedback. We investigated the RP to audible and imagined sounds in a button-press paradigm and assessed the role of sound complexity (vocal vs. non-vocal sound). Sensory feedback (both audible and imagined) in response to a voluntary action modulated the RP amplitude time-locked to the button press. The RP amplitude increase was larger for actions with expected sensory feedback (audible and imagined) than those without sensory feedback, and associated with N1 suppression for audible sounds. Further, the early RP phase was increased when actions elicited an imagined vocal (self-voice) compared to non-vocal sound. Our results support the notion that sensory feedback is anticipated before voluntary actions. This is the case for both audible and imagined sensory feedback and confirms a role of overt and covert feedback in the forward model.
|
49
|
Changes in motor preparation affect the sensory consequences of voice production in voice hearers. Neuropsychologia 2020; 146:107531. [PMID: 32553846 DOI: 10.1016/j.neuropsychologia.2020.107531]
Abstract
BACKGROUND Auditory verbal hallucinations (AVH) are a cardinal symptom of psychosis but are also present in 6-13% of the general population. Alterations in sensory feedback processing are a likely cause of AVH, indicative of changes in the forward model. However, it is unknown whether such alterations are related to anomalies in forming an efference copy during action preparation, selective for voices, and similar along the psychosis continuum. By directly comparing psychotic and nonclinical voice hearers (NCVH), the current study specifies whether and how AVH proneness modulates both the efference copy (Readiness Potential) and sensory feedback processing for voices and tones (N1, P2) with event-related brain potentials (ERPs). METHODS Controls with low AVH proneness (n = 15), NCVH (n = 16) and first-episode psychotic patients with AVH (n = 16) engaged in a button-press task with two types of stimuli: self-initiated and externally generated self-voices or tones during EEG recordings. RESULTS Groups differed in sensory feedback processing of expected and actual feedback: NCVH displayed an atypically enhanced N1 to self-initiated voices, while N1 suppression was reduced in psychotic patients. P2 suppression for voices and tones was strongest in NCVH, but absent for voices in patients. Motor activity preceding the button press was reduced in NCVH and patients, specifically for sensory feedback to self-voice in NCVH. CONCLUSIONS These findings suggest that selective changes in sensory feedback to voice are core to AVH. These changes already show in preparatory motor activity, potentially reflecting changes in forming an efference copy. The results provide partial support for continuum models of psychosis.
|
50
|
ERP mismatch response to phonological and temporal regularities in speech. Sci Rep 2020; 10:9917. [PMID: 32555256 PMCID: PMC7303198 DOI: 10.1038/s41598-020-66824-x]
Abstract
Predictions of our sensory environment facilitate perception across domains. During speech perception, formal and temporal predictions may be made for phonotactic probability and syllable stress patterns, respectively, contributing to the efficient processing of speech input. The current experiment employed a passive EEG oddball paradigm to probe the neurophysiological processes underlying temporal and formal predictions simultaneously. The component of interest, the mismatch negativity (MMN), is considered a marker for experience-dependent change detection, where its timing and amplitude are indicative of the perceptual system’s sensitivity to presented stimuli. We hypothesized that more predictable stimuli (i.e. high phonotactic probability and first syllable stress) would facilitate change detection, indexed by shorter peak latencies or greater peak amplitudes of the MMN. This hypothesis was confirmed for phonotactic probability: high phonotactic probability deviants elicited an earlier MMN than low phonotactic probability deviants. We did not observe a significant modulation of the MMN by variations in syllable stress. Our findings confirm that speech perception is shaped by formal and temporal predictability. This paradigm may be useful to investigate the contribution of implicit processing of statistical regularities during (a)typical language development.
|