1. Teng X, Larrouy-Maestri P, Poeppel D. Segmenting and Predicting Musical Phrase Structure Exploits Neural Gain Modulation and Phase Precession. J Neurosci 2024; 44:e1331232024. PMID: 38926087; PMCID: PMC11270514; DOI: 10.1523/jneurosci.1331-23.2024.
Abstract
Music, like spoken language, is often characterized by hierarchically organized structure. Previous experiments have shown neural tracking of notes and beats, but little work touches on the more abstract question: how does the brain establish high-level musical structures in real time? We presented Bach chorales to participants (20 females and 9 males) undergoing electroencephalogram (EEG) recording to investigate how the brain tracks musical phrases. We removed the main temporal cues to phrasal structures, so that listeners could only rely on harmonic information to parse a continuous musical stream. Phrasal structures were disrupted by locally or globally reversing the harmonic progression, so that our observations on the original music could be controlled and compared. We first replicated the findings on neural tracking of musical notes and beats, substantiating the positive correlation between musical training and neural tracking. Critically, we discovered a neural signature in the frequency range ∼0.1 Hz (modulations of EEG power) that reliably tracks musical phrasal structure. Next, we developed an approach to quantify the phrasal phase precession of the EEG power, revealing that phrase tracking is indeed an operation of active segmentation involving predictive processes. We demonstrate that the brain establishes complex musical structures online over long timescales (>5 s) and actively segments continuous music streams in a manner comparable to language processing. These two neural signatures, phrase tracking and phrasal phase precession, provide new conceptual and technical tools to study the processes underpinning high-level structure building using noninvasive recording techniques.
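The key dependent measure here, slow (~0.1 Hz) modulations of EEG power that track phrasal structure, can be sketched in a few lines: band-pass the EEG, take its Hilbert amplitude envelope, then band-pass that envelope around the phrase rate. The sketch below is an illustration of the general technique, not the authors' exact pipeline; the carrier band and modulation band are assumptions.

```python
import numpy as np
from scipy.signal import butter, hilbert, sosfiltfilt

def slow_power_modulation(eeg, fs, band=(1.0, 8.0), mod_band=(0.05, 0.2)):
    """Slow modulations of band-limited EEG power (illustrative sketch)."""
    # 1) Band-pass the EEG and take the analytic amplitude (power envelope).
    sos = butter(4, band, btype="band", fs=fs, output="sos")
    envelope = np.abs(hilbert(sosfiltfilt(sos, eeg)))
    # 2) Band-pass that envelope around the putative phrase rate (~0.1 Hz).
    sos_mod = butter(2, mod_band, btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos_mod, envelope)
```

Phrase tracking would then be assessed by relating this slow power signal to the timing of phrase boundaries in the stimulus.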
Affiliation(s)
- Xiangbin Teng
- Department of Psychology, The Chinese University of Hong Kong, Shatin, Hong Kong SAR, China
- Pauline Larrouy-Maestri
- Music Department, Max-Planck-Institute for Empirical Aesthetics, Frankfurt 60322, Germany
- Center for Language, Music, and Emotion (CLaME), New York, New York 10003
- David Poeppel
- Center for Language, Music, and Emotion (CLaME), New York, New York 10003
- Department of Psychology, New York University, New York, New York 10003
- Ernst Struengmann Institute for Neuroscience, Frankfurt 60528, Germany
- Music and Audio Research Laboratory (MARL), New York, New York 11201
2. MacLean J, Stirn J, Bidelman GM. Auditory-motor entrainment and listening experience shape the perceptual learning of concurrent speech. bioRxiv [Preprint] 2024:2024.07.18.604167. PMID: 39071391; PMCID: PMC11275804; DOI: 10.1101/2024.07.18.604167.
Abstract
Background: Plasticity from auditory experience shapes the brain's encoding and perception of sound. Though prior research demonstrates that neural entrainment (i.e., brain-to-acoustic synchronization) aids speech perception, how long- and short-term plasticity influence entrainment to concurrent speech has not been investigated. Here, we explored neural entrainment mechanisms and the interplay between short- and long-term neuroplasticity for rapid auditory perceptual learning of concurrent speech sounds in young, normal-hearing musicians and nonmusicians. Method: Participants learned to identify double-vowel mixtures during ∼45 min training sessions with concurrent high-density EEG recordings. We examined the degree to which brain responses entrained to the speech-stimulus train (∼9 Hz) to investigate whether entrainment to speech prior to the behavioral decision predicted task performance. Source and directed functional connectivity analyses of the EEG probed whether behavior was driven by group differences in auditory-motor coupling. Results: Both musicians and nonmusicians showed rapid perceptual learning in accuracy with training. Interestingly, listeners' neural entrainment strength prior to target speech mixtures predicted behavioral identification performance; stronger neural synchronization was observed preceding incorrect compared to correct trial responses. We also found stark hemispheric biases in auditory-motor coupling during speech entrainment, with greater auditory-motor connectivity in the right compared to the left hemisphere for musicians (R > L) but not for nonmusicians (R = L). Conclusions: Our findings confirm stronger neuroacoustic synchronization and auditory-motor coupling during speech processing in musicians. Stronger neural entrainment to rapid stimulus trains preceding incorrect behavioral responses supports the notion that alpha-band (∼10 Hz) arousal/suppression in brain activity is an important modulator of trial-by-trial success in perceptual processing.
3. Edalati M, Wallois F, Ghostine G, Kongolo G, Trainor LJ, Moghimi S. Neural oscillations suggest periodicity encoding during auditory beat processing in the premature brain. Dev Sci 2024:e13550. PMID: 39010656; DOI: 10.1111/desc.13550.
Abstract
When exposed to rhythmic patterns with temporal regularity, adults exhibit an inherent ability to extract and anticipate an underlying sequence of regularly spaced beats, which is internally constructed, as beats are experienced even when no events occur at beat positions (e.g., in the case of rests). Perception of rhythm and synchronization to periodicity are indispensable for the development of cognitive functions, social interaction, and adaptive behavior. We evaluated neural oscillatory activity in premature newborns (n = 19; mean gestational age, 32 ± 2.59 weeks) during exposure to an auditory rhythmic sequence, aiming to identify early traces of periodicity encoding and rhythm processing through entrainment of neural oscillations at this stage of neurodevelopment. The rhythmic sequence elicited a systematic modulation of alpha power, synchronized to expected beat locations coinciding with both tones and rests, and independent of whether the beat was preceded by a tone or a rest. In addition, the periodic alpha-band fluctuations reached maximal power slightly before the corresponding beat onset times. Together, our results show neural encoding of periodicity in the premature brain involving neural oscillations in the alpha range that are much faster than the beat tempo, through alignment of alpha power to the beat tempo, consistent with observations in adults on predictive processing of temporal regularities in auditory rhythms. RESEARCH HIGHLIGHTS: In response to the presented rhythmic pattern, systematic modulations of alpha power showed that the premature brain extracted the temporal regularity of the underlying beat. In contrast to evoked potentials, which are greatly reduced when there is no sound event, the modulation of alpha power occurred for beats coinciding with both tones and rests in a predictive way. The findings provide the first evidence for the neural coding of periodicity in auditory rhythm perception before the age of term.
Affiliation(s)
- Mohammadreza Edalati
- Inserm UMR1105, Groupe de Recherches sur l'Analyse Multimodale de la Fonction Cérébrale, Université de Picardie Jules Verne, Amiens Cedex, France
- Fabrice Wallois
- Inserm UMR1105, Groupe de Recherches sur l'Analyse Multimodale de la Fonction Cérébrale, Université de Picardie Jules Verne, Amiens Cedex, France
- Inserm UMR1105, EFSN Pédiatriques, Amiens University Hospital, Amiens Cedex, France
- Ghida Ghostine
- Inserm UMR1105, Groupe de Recherches sur l'Analyse Multimodale de la Fonction Cérébrale, Université de Picardie Jules Verne, Amiens Cedex, France
- Guy Kongolo
- Inserm UMR1105, Groupe de Recherches sur l'Analyse Multimodale de la Fonction Cérébrale, Université de Picardie Jules Verne, Amiens Cedex, France
- Laurel J Trainor
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, Canada
- McMaster Institute for Music and the Mind, McMaster University, Hamilton, Ontario, Canada
- Rotman Research Institute, Baycrest Hospital, Toronto, Ontario, Canada
- Sahar Moghimi
- Inserm UMR1105, Groupe de Recherches sur l'Analyse Multimodale de la Fonction Cérébrale, Université de Picardie Jules Verne, Amiens Cedex, France
- Inserm UMR1105, EFSN Pédiatriques, Amiens University Hospital, Amiens Cedex, France
4. Xiao Q, Zheng X, Wen Y, Yuan Z, Chen Z, Lan Y, Li S, Huang X, Zhong H, Xu C, Zhan C, Pan J, Xie Q. Individualized music induces theta-gamma phase-amplitude coupling in patients with disorders of consciousness. Front Neurosci 2024; 18:1395627. PMID: 39010944; PMCID: PMC11248187; DOI: 10.3389/fnins.2024.1395627.
Abstract
Objective: This study aimed to determine whether patients with disorders of consciousness (DoC) could experience neural entrainment to individualized music, exploring the cross-modal influences of music on patients with DoC through phase-amplitude coupling (PAC). Furthermore, the study assessed the efficacy of individualized or preferred music (PM) versus relaxing music (RM) in impacting patient outcomes, and examined the role of cross-modal influences in determining these outcomes. Methods: Thirty-two patients with DoC [17 with vegetative state/unresponsive wakefulness syndrome (VS/UWS) and 15 with minimally conscious state (MCS)], alongside 16 healthy controls (HCs), were recruited for this study. Neural activity in the frontal-parietal network was recorded using scalp electroencephalography (EEG) during baseline (BL), RM, and PM conditions. Cerebral-acoustic coherence (CACoh) was used to investigate participants' ability to track the music, while PAC was used to evaluate the cross-modal influences of music. Three months post-intervention, the outcomes of patients with DoC were followed up using the Coma Recovery Scale-Revised (CRS-R). Results: HCs and patients with MCS showed higher CACoh compared to VS/UWS patients within the musical pulse frequency (p = 0.016, p = 0.045; p < 0.001, p = 0.048, for RM and PM, respectively, following Bonferroni correction). Only theta-gamma PAC demonstrated a significant interaction effect between groups and music conditions (F(2,44) = 2.685, p = 0.036). For HCs, theta-gamma PAC in the frontal-parietal network was stronger in the PM condition than in the RM (p = 0.016) and BL conditions (p < 0.001). For patients with MCS, theta-gamma PAC was stronger in the PM than in the BL condition (p = 0.040), while no difference was observed among the three conditions in patients with VS/UWS. Additionally, MCS patients who showed improved outcomes after 3 months exhibited evident neural responses to preferred music (p = 0.019), and the ratio of theta-gamma coupling changes in PM relative to BL predicted clinical outcomes in MCS patients (r = 0.992, p < 0.001). Conclusion: Individualized music may serve as a potential therapeutic method for patients with DoC through cross-modal influences, which rely on enhanced theta-gamma PAC within the consciousness-related network.
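Theta-gamma PAC of the kind reported here is commonly quantified with the mean vector length (Canolty et al., 2006): weight each theta-phase unit vector by the concurrent gamma amplitude and take the magnitude of the average. The sketch below shows that generic measure; the band limits are assumptions, not necessarily the study's exact estimator.

```python
import numpy as np
from scipy.signal import butter, hilbert, sosfiltfilt

def pac_mvl(x, fs, phase_band=(4.0, 8.0), amp_band=(30.0, 45.0)):
    """Theta-gamma phase-amplitude coupling via the mean vector length."""
    def narrowband(sig, band):
        sos = butter(4, band, btype="band", fs=fs, output="sos")
        return sosfiltfilt(sos, sig)
    theta_phase = np.angle(hilbert(narrowband(x, phase_band)))
    gamma_amp = np.abs(hilbert(narrowband(x, amp_band)))
    # |mean of amplitude-weighted unit phase vectors|: larger = stronger coupling.
    return np.abs(np.mean(gamma_amp * np.exp(1j * theta_phase)))
```

In practice the raw value is compared against surrogate data (e.g., trial-shuffled or time-shifted signals) to assess significance.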
Affiliation(s)
- Qiuyi Xiao
- Joint Research Centre for Disorders of Consciousness, Department of Rehabilitation Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, Guangdong, China
- School of Laboratory Medicine and Biotechnology, Southern Medical University, Guangzhou, Guangdong, China
- Xiaochun Zheng
- Joint Research Centre for Disorders of Consciousness, Department of Rehabilitation Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, Guangdong, China
- School of Laboratory Medicine and Biotechnology, Southern Medical University, Guangzhou, Guangdong, China
- Yun Wen
- Music and Reflection Incorporated, Guangzhou, Guangdong, China
- Zhanxing Yuan
- Joint Research Centre for Disorders of Consciousness, Department of Rehabilitation Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, Guangdong, China
- Zerong Chen
- Joint Research Centre for Disorders of Consciousness, Department of Rehabilitation Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, Guangdong, China
- School of Laboratory Medicine and Biotechnology, Southern Medical University, Guangzhou, Guangdong, China
- Yue Lan
- Joint Research Centre for Disorders of Consciousness, Department of Rehabilitation Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, Guangdong, China
- Shuiyan Li
- Joint Research Centre for Disorders of Consciousness, Department of Rehabilitation Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, Guangdong, China
- School of Laboratory Medicine and Biotechnology, Southern Medical University, Guangzhou, Guangdong, China
- Xiyan Huang
- Joint Research Centre for Disorders of Consciousness, Department of Rehabilitation Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, Guangdong, China
- Haili Zhong
- Joint Research Centre for Disorders of Consciousness, Department of Rehabilitation Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, Guangdong, China
- Chengwei Xu
- Joint Research Centre for Disorders of Consciousness, Department of Rehabilitation Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, Guangdong, China
- Chang'an Zhan
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, China
- Jiahui Pan
- School of Software, South China Normal University, Guangzhou, Guangdong, China
- Qiuyou Xie
- Joint Research Centre for Disorders of Consciousness, Department of Rehabilitation Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, Guangdong, China
- School of Laboratory Medicine and Biotechnology, Southern Medical University, Guangzhou, Guangdong, China
- Department of Hyperbaric Oxygen, Zhujiang Hospital, Southern Medical University, Guangzhou, Guangdong, China
- School of Rehabilitation Sciences, Southern Medical University, Guangzhou, Guangdong, China
5. Mizokuchi K, Tanaka T, Sato TG, Shiraki Y. Alpha band modulation caused by selective attention to music enables EEG classification. Cogn Neurodyn 2024; 18:1005-1020. PMID: 38826648; PMCID: PMC11143110; DOI: 10.1007/s11571-023-09955-x.
Abstract
Humans are able to pay selective attention to music or speech in the presence of multiple sounds. It has been reported that in the speech domain, selective attention enhances the cross-correlation between the envelope of speech and the electroencephalogram (EEG) while also affecting the spatial modulation of the alpha band. However, when multiple pieces of music are performed at the same time, it is unclear how selective attention affects neural entrainment and spatial modulation. In this paper, we hypothesized that entrainment to attended music differs from entrainment to unattended music and that spatial modulation in the alpha band occurs in conjunction with attention. We conducted experiments in which we presented musical excerpts to 15 participants, each listening to two excerpts simultaneously but paying attention to one of the two. The results showed that the cross-correlation function between the EEG signal and the envelope of the unattended melody had a more prominent peak than that of the attended melody, contrary to the findings for speech. In addition, spatial modulation in the alpha band was found with a data-driven approach called the common spatial pattern method. Classification of the EEG signal with a support vector machine identified attended melodies with an accuracy of 100% for 11 of the 15 participants. These results suggest that selective attention to music suppresses entrainment to the melody and that spatial modulation of the alpha band occurs in conjunction with attention. To the best of our knowledge, this is the first report to detect attended music consisting of several types of musical notes using only EEG.
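The envelope-tracking analysis rests on the cross-correlation between the EEG and a melody's amplitude envelope as a function of lag. A generic sketch follows; the normalization and lag window are illustrative choices, not the paper's exact settings.

```python
import numpy as np
from scipy.signal import correlate

def xcorr_eeg_envelope(eeg, env, fs, max_lag_s=0.5):
    """Normalized cross-correlation between an EEG channel and a stimulus
    envelope, returned as (lags_in_seconds, correlation)."""
    eeg = (eeg - eeg.mean()) / eeg.std()
    env = (env - env.mean()) / env.std()
    full = correlate(eeg, env, mode="full") / len(eeg)
    lags = np.arange(-len(env) + 1, len(eeg))
    keep = np.abs(lags) <= int(max_lag_s * fs)
    # Positive lags: the EEG follows the envelope (a neural response delay).
    return lags[keep] / fs, full[keep]
```

The prominence of the peak within the lag window is then compared between attended and unattended melodies.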
Affiliation(s)
- Kana Mizokuchi
- Department of Electrical and Electronic Engineering, Tokyo University of Agriculture and Technology, Tokyo, Japan
| | - Toshihisa Tanaka
- Department of Electrical Engineering and Computer Science, Tokyo University of Agriculture and Technology, Tokyo, Japan
| | - Takashi G. Sato
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Kanagawa, Japan
| | - Yoshifumi Shiraki
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Kanagawa, Japan
| |
6. Spiech C, Danielsen A, Laeng B, Endestad T. Oscillatory attention in groove. Cortex 2024; 174:137-148. PMID: 38547812; DOI: 10.1016/j.cortex.2024.02.013.
Abstract
Attention is not constant but rather fluctuates over time and these attentional fluctuations may prioritize the processing of certain events over others. In music listening, the pleasurable urge to move to music (termed 'groove' by music psychologists) offers a particularly convenient case study of oscillatory attention because it engenders synchronous and oscillatory movements which also vary predictably with stimulus complexity. In this study, we simultaneously recorded pupillometry and scalp electroencephalography (EEG) from participants while they listened to drumbeats of varying complexity that they rated in terms of groove afterwards. Using the intertrial phase coherence of the beat frequency, we found that while subjects were listening, their pupil activity became entrained to the beat of the drumbeats and this entrained attention persisted in the EEG even as subjects imagined the drumbeats continuing through subsequent silent periods. This entrainment in both the pupillometry and EEG worsened with increasing rhythmic complexity, indicating poorer sensory precision as the beat became more obscured. Additionally, sustained pupil dilations revealed the expected, inverted U-shaped relationship between rhythmic complexity and groove ratings. Taken together, this work bridges oscillatory attention to rhythmic complexity in relation to musical groove.
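Intertrial phase coherence (ITPC) at the beat frequency, the entrainment index used here, measures how consistently the response phase at that frequency aligns across trials. Below is a minimal sketch of the standard measure, not necessarily the authors' exact implementation.

```python
import numpy as np

def itpc(trials, fs, freq):
    """Intertrial phase coherence at `freq` for a (trials x time) array."""
    n_times = trials.shape[1]
    freqs = np.fft.rfftfreq(n_times, 1 / fs)
    k = int(np.argmin(np.abs(freqs - freq)))  # FFT bin nearest the target
    phases = np.angle(np.fft.rfft(trials, axis=1)[:, k])
    # Length of the mean unit phase vector: 1 = perfect alignment, ~0 = random.
    return np.abs(np.mean(np.exp(1j * phases)))
```

The same quantity can be computed from pupillometry epochs or from EEG during the silent continuation periods, with lower values expected for more complex rhythms.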
Affiliation(s)
- Connor Spiech
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Norway; Department of Psychology, University of Oslo, Norway.
| | - Anne Danielsen
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Norway; Department of Musicology, University of Oslo, Norway
| | - Bruno Laeng
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Norway; Department of Psychology, University of Oslo, Norway
| | - Tor Endestad
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Norway; Department of Psychology, University of Oslo, Norway
| |
7. Chang A, Teng X, Assaneo MF, Poeppel D. The human auditory system uses amplitude modulation to distinguish music from speech. PLoS Biol 2024; 22:e3002631. PMID: 38805517; PMCID: PMC11132470; DOI: 10.1371/journal.pbio.3002631.
Abstract
Music and speech are complex and distinct auditory signals that are both foundational to the human experience. The mechanisms underpinning each domain are widely investigated. However, what perceptual mechanism transforms a sound into music or speech, and what basic acoustic information is required to distinguish between them, remain open questions. Here, we hypothesized that a sound's amplitude modulation (AM), an essential temporal acoustic feature driving the auditory system across processing levels, is critical for distinguishing music and speech. Specifically, in contrast to paradigms using naturalistic acoustic signals (which can be challenging to interpret), we used a noise-probing approach to untangle the auditory mechanism: if AM rate and regularity are critical for perceptually distinguishing music and speech, judgments of artificially noise-synthesized ambiguous audio signals should align with their AM parameters. Across 4 experiments (N = 335), signals with a higher peak AM frequency tended to be judged as speech, and those with a lower peak as music. Interestingly, this principle was consistently used by all listeners for speech judgments, but only by musically sophisticated listeners for music. In addition, signals with more regular AM were judged as music over speech, and this feature was more critical for music judgment, regardless of musical sophistication. The data suggest that the auditory system can rely on an acoustic property as low-level as AM to distinguish music from speech, a simple principle that provokes both neurophysiological and evolutionary experiments and speculations.
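The peak AM frequency driving these judgments can be estimated from a signal's modulation spectrum, i.e., the spectrum of its amplitude envelope. The sketch below illustrates the general idea; the Hilbert-envelope method and the search range are simplifying assumptions, not the study's synthesis or analysis pipeline.

```python
import numpy as np
from scipy.signal import hilbert

def peak_am_frequency(audio, fs, lo=0.5, hi=20.0):
    """Dominant amplitude-modulation rate via the envelope spectrum."""
    env = np.abs(hilbert(audio))          # amplitude envelope
    env = env - env.mean()                # drop the DC component
    spec = np.abs(np.fft.rfft(env))
    freqs = np.fft.rfftfreq(len(env), 1 / fs)
    mask = (freqs >= lo) & (freqs <= hi)  # search the modulation range
    return float(freqs[mask][np.argmax(spec[mask])])
```

Under the paper's account, estimates nearer ~4-5 Hz would bias judgments toward speech and slower, more regular modulation toward music.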
Affiliation(s)
- Andrew Chang
- Department of Psychology, New York University, New York, New York, United States of America
- Xiangbin Teng
- Department of Psychology, Chinese University of Hong Kong, Hong Kong SAR, China
- M. Florencia Assaneo
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Juriquilla, Querétaro, México
- David Poeppel
- Department of Psychology, New York University, New York, New York, United States of America
- Ernst Struengmann Institute for Neuroscience, Frankfurt am Main, Germany
- Center for Language, Music, and Emotion (CLaME), New York University, New York, New York, United States of America
- Music and Audio Research Lab (MARL), New York University, New York, New York, United States of America
8. Momtaz S, Bidelman GM. Effects of Stimulus Rate and Periodicity on Auditory Cortical Entrainment to Continuous Sounds. eNeuro 2024; 11:ENEURO.0027-23.2024. PMID: 38253583; PMCID: PMC10913036; DOI: 10.1523/eneuro.0027-23.2024.
Abstract
The neural mechanisms underlying the exogenous coding and neural entrainment to repetitive auditory stimuli have seen a recent surge of interest. However, few studies have characterized how parametric changes in stimulus presentation alter entrained responses. We examined the degree to which the brain entrains to repeated speech (i.e., /ba/) and nonspeech (i.e., click) sounds using phase-locking value (PLV) analysis applied to multichannel human electroencephalogram (EEG) data. Passive cortico-acoustic tracking was investigated in N = 24 normal young adults utilizing EEG source analyses that isolated neural activity stemming from both auditory temporal cortices. We parametrically manipulated the rate and periodicity of repetitive, continuous speech and click stimuli to investigate how speed and jitter in ongoing sound streams affect oscillatory entrainment. Neuronal synchronization to speech was enhanced at 4.5 Hz (the putative universal rate of speech) and showed a differential pattern to that of clicks, particularly at higher rates. PLV to speech decreased with increasing jitter but remained superior to clicks. Surprisingly, PLV entrainment to clicks was invariant to periodicity manipulations. Our findings provide evidence that the brain's neural entrainment to complex sounds is enhanced and more sensitized when processing speech-like stimuli, even at the syllable level, relative to nonspeech sounds. The fact that this specialization is apparent even under passive listening suggests a priority of the auditory system for synchronizing to behaviorally relevant signals.
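The phase-locking value (PLV) used here indexes the consistency of the phase relation between neural response and stimulus: 1 means a fixed phase lag, values near 0 mean no consistent relation. Below is a generic two-signal sketch (the analysis band is an assumption; the study applied PLV to source-localized multichannel EEG).

```python
import numpy as np
from scipy.signal import butter, hilbert, sosfiltfilt

def plv(x, y, fs, band):
    """Phase-locking value between two signals within a frequency band."""
    sos = butter(4, band, btype="band", fs=fs, output="sos")
    phase_x = np.angle(hilbert(sosfiltfilt(sos, x)))
    phase_y = np.angle(hilbert(sosfiltfilt(sos, y)))
    # 1 = constant phase lag across time; ~0 = no consistent phase relation.
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))
```

Comparing this value across stimulus rates (e.g., around the ~4.5 Hz syllable rate) and jitter levels reproduces the logic of the rate and periodicity manipulations described above.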
Affiliation(s)
- Sara Momtaz
- School of Communication Sciences & Disorders, University of Memphis, Memphis, Tennessee 38152
- Boys Town National Research Hospital, Boys Town, Nebraska 68131
| | - Gavin M Bidelman
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, Indiana 47408
- Program in Neuroscience, Indiana University, Bloomington, Indiana 47405
| |
9. Zaatar MT, Alhakim K, Enayeh M, Tamer R. The transformative power of music: Insights into neuroplasticity, health, and disease. Brain Behav Immun Health 2024; 35:100716. PMID: 38178844; PMCID: PMC10765015; DOI: 10.1016/j.bbih.2023.100716.
Abstract
Music is a universal language that can elicit profound emotional and cognitive responses. In this literature review, we explore the intricate relationship between music and the brain, from how it is decoded by the nervous system to its therapeutic potential in various disorders. Music engages a diverse network of brain regions and circuits, including sensory-motor processing, cognitive, memory, and emotional components. Music-induced brain network oscillations occur in specific frequency bands, and listening to one's preferred music can grant easier access to these brain functions. Moreover, music training can bring about structural and functional changes in the brain, and studies have shown its positive effects on social bonding, cognitive abilities, and language processing. We also discuss how music therapy can be used to retrain impaired brain circuits in different disorders. Understanding how music affects the brain can open up new avenues for music-based interventions in healthcare, education, and wellbeing.
Affiliation(s)
- Muriel T. Zaatar
- Department of Biological and Physical Sciences, American University in Dubai, Dubai, United Arab Emirates
| | | | | | | |
10. Zoefel B, Kösem A. Neural tracking of continuous acoustics: properties, speech-specificity and open questions. Eur J Neurosci 2024; 59:394-414. PMID: 38151889; DOI: 10.1111/ejn.16221.
Abstract
Human speech is a particularly relevant acoustic stimulus for our species, due to its role in information transmission during communication. Speech is inherently a dynamic signal, and a recent line of research has focused on neural activity following the temporal structure of speech. We review findings that characterise neural dynamics in the processing of continuous acoustics and that allow us to compare these dynamics with temporal aspects of human speech. We highlight properties and constraints that both neural and speech dynamics share, suggesting that auditory neural systems are optimised to process human speech. We then discuss the speech-specificity of neural dynamics and their potential mechanistic origins, and summarise open questions in the field.
Affiliation(s)
- Benedikt Zoefel
- Centre de Recherche Cerveau et Cognition (CerCo), CNRS UMR 5549, Toulouse, France
- Université de Toulouse III Paul Sabatier, Toulouse, France
| | - Anne Kösem
- Lyon Neuroscience Research Center (CRNL), INSERM U1028, Bron, France
| |
11. Gao J, Chen H, Fang M, Ding N. Original speech and its echo are segregated and separately processed in the human brain. PLoS Biol 2024; 22:e3002498. PMID: 38358954; PMCID: PMC10868781; DOI: 10.1371/journal.pbio.3002498.
Abstract
Speech recognition crucially relies on slow temporal modulations (<16 Hz) in speech. Recent studies, however, have demonstrated that long-delay echoes, which are common during online conferencing, can eliminate crucial temporal modulations in speech but do not affect speech intelligibility. Here, we investigated the underlying neural mechanisms. MEG experiments demonstrated that cortical activity can effectively track the temporal modulations eliminated by an echo, which cannot be fully explained by basic neural adaptation mechanisms. Furthermore, cortical responses to echoic speech were better explained by a model that segregates speech from its echo than by a model that encodes echoic speech as a whole. The speech segregation effect was observed even when attention was diverted, but disappeared when segregation cues, i.e., the speech fine structure, were removed. These results strongly suggest that, through mechanisms such as stream segregation, the auditory system can build an echo-insensitive representation of the speech envelope, which can support reliable speech recognition.
Affiliation(s)
- Jiaxin Gao
- Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou, China
| | - Honghua Chen
- Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou, China
| | - Mingxuan Fang
- Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou, China
| | - Nai Ding
- Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou, China
- Nanhu Brain-computer Interface Institute, Hangzhou, China
- The State key Lab of Brain-Machine Intelligence; The MOE Frontier Science Center for Brain Science & Brain-machine Integration, Zhejiang University, Hangzhou, China
| |
12. Wang K, Fang Y, Guo Q, Shen L, Chen Q. Superior Attentional Efficiency of Auditory Cue via the Ventral Auditory-thalamic Pathway. J Cogn Neurosci 2024; 36:303-326. PMID: 38010315; DOI: 10.1162/jocn_a_02090.
Abstract
Auditory commands are often executed more efficiently than visual commands, but empirical evidence on the underlying behavioral and neural mechanisms remains scarce. In two experiments, we manipulated the delivery modality of informative cues and the prediction violation effect, and found consistently enhanced reaction time (RT) benefits for matched auditory cues compared with matched visual cues. At the neural level, when the bottom-up perceptual input matched the prior prediction induced by the auditory cue, the auditory-thalamic pathway was significantly activated. Moreover, the stronger the auditory-thalamic connectivity, the higher the behavioral benefits of the matched auditory cue. When the bottom-up input violated the prior prediction induced by the auditory cue, the ventral auditory pathway was specifically involved. Moreover, the stronger the ventral auditory-prefrontal connectivity, the larger the behavioral costs caused by the violation of the auditory cue. In addition, the dorsal frontoparietal network showed a supramodal function in reacting to the violation of informative cues irrespective of the delivery modality of the cue. Taken together, the results reveal novel behavioral and neural evidence that the superior efficiency of the auditory cue is twofold: the auditory-thalamic pathway is associated with improvements in task performance when the bottom-up input matches the auditory cue, whereas the ventral auditory-prefrontal pathway is involved when the auditory cue is violated.
Affiliation(s)
- Ke Wang
- South China Normal University, Guangzhou, China
- Ying Fang
- South China Normal University, Guangzhou, China
- Qiang Guo
- Guangdong Sanjiu Brain Hospital, Guangzhou, China
- Lu Shen
- South China Normal University, Guangzhou, China
- Qi Chen
- South China Normal University, Guangzhou, China
13
Cabral-Calderin Y, van Hinsberg D, Thielscher A, Henry MJ. Behavioral entrainment to rhythmic auditory stimulation can be modulated by tACS depending on the electrical stimulation field properties. eLife 2024; 12:RP87820. [PMID: 38289225 PMCID: PMC10945705 DOI: 10.7554/elife.87820] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/01/2024] Open
Abstract
Synchronization between auditory stimuli and brain rhythms is beneficial for perception. In principle, auditory perception could be improved by facilitating neural entrainment to sounds via brain stimulation. However, high inter-individual variability of brain stimulation effects calls the usefulness of this approach into question. Here we aimed to improve auditory perception by modulating neural entrainment to frequency-modulated (FM) sounds using transcranial alternating current stimulation (tACS). In addition, we evaluated the advantage of tACS montages spatially optimized for each individual's anatomy and functional data over a standard montage applied to all participants. Across two sessions, 2 Hz tACS was applied targeting auditory brain regions. Concurrent with tACS, participants listened to FM stimuli with a modulation rate matching the tACS frequency but with different phase lags relative to the tACS, and detected silent gaps embedded in the FM sound. We observed that tACS modulated the strength of behavioral entrainment to the FM sound in a phase-lag-specific manner. Both the optimal tACS lag and the magnitude of the tACS effect varied across participants and sessions. Inter-individual variability of tACS effects was best explained by the strength of the inward electric field, which depended on field focality and proximity to the target brain region. Although additional evidence is necessary, our results suggest that spatially optimizing the electrode montage could be a promising tool for reducing inter-individual variability of tACS effects. This work demonstrates that tACS effectively modulates entrainment to sounds depending on the optimality of the electric field. However, the unreliability of optimal tACS lags across sessions calls for caution when planning tACS experiments based on separate sessions.
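Phase-lag-specific modulation of the kind reported above is commonly summarized per participant by fitting a cosine to performance as a function of tACS phase lag; the modulation depth and the preferred lag then quantify the effect. A hedged sketch with made-up accuracy values (purely illustrative, not the study's data):

```python
import numpy as np

# Hypothetical gap-detection accuracies at six tACS-FM phase lags (radians);
# the values are invented for illustration, not the study's data.
lags = np.linspace(0, 2 * np.pi, 6, endpoint=False)
acc = np.array([0.62, 0.71, 0.78, 0.74, 0.65, 0.58])

# Least-squares cosine fit acc ~ m + a*cos(lag) + b*sin(lag); the modulation
# depth and the best (performance-maximizing) lag summarize the effect.
X = np.column_stack([np.ones_like(lags), np.cos(lags), np.sin(lags)])
m, a, b = np.linalg.lstsq(X, acc, rcond=None)[0]
depth = np.hypot(a, b)                      # strength of phase-lag modulation
best_lag = np.arctan2(b, a) % (2 * np.pi)   # lag predicted to maximize accuracy
print(round(depth, 3), round(best_lag, 2))
```

With equally spaced lags the design-matrix columns are orthogonal, so the intercept equals the mean accuracy and the cosine coefficients are simple projections.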
Affiliation(s)
- Axel Thielscher
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Amager and Hvidovre, Copenhagen, Denmark
- Section for Magnetic Resonance, DTU Health Tech, Technical University of Denmark, Copenhagen, Denmark
- Molly J Henry
- Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Toronto Metropolitan University, Toronto, Canada
14
Pando-Naude V, Matthews TE, Højlund A, Jakobsen S, Østergaard K, Johnsen E, Garza-Villarreal EA, Witek MAG, Penhune V, Vuust P. Dopamine dysregulation in Parkinson's disease flattens the pleasurable urge to move to musical rhythms. Eur J Neurosci 2024; 59:101-118. [PMID: 37724707 DOI: 10.1111/ejn.16128] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2023] [Revised: 07/12/2023] [Accepted: 08/08/2023] [Indexed: 09/21/2023]
Abstract
The pleasurable urge to move to music (PLUMM) activates motor and reward areas of the brain and is thought to be driven by predictive processes. Dopamine in motor and limbic networks is implicated in beat-based timing and music-induced pleasure, suggesting a central role of basal ganglia (BG) dopaminergic systems in PLUMM. This study tested this hypothesis by comparing PLUMM in participants with Parkinson's disease (PD), age-matched controls, and young controls. Participants listened to musical sequences with varying rhythmic and harmonic complexity (low, medium, and high) and rated their experienced pleasure and urge to move to the rhythm. In line with previous results, healthy younger participants showed an inverted-U-shaped relationship between rhythmic complexity and ratings, with a preference for medium-complexity rhythms, while age-matched controls showed a similar, but weaker, inverted-U-shaped response. Conversely, the PD group showed a significantly flattened response for both the urge to move and pleasure. Crucially, this flattened response could not be attributed to differences in rhythm discrimination and did not reflect an overall decrease in ratings. For harmonic complexity, the PD group showed a negative linear pattern for both the urge to move and pleasure, while healthy age-matched controls showed the same pattern for pleasure and an inverted U for the urge to move. This contrasts with the pattern observed in young healthy controls in previous studies, suggesting that both healthy aging and PD also influence affective responses to harmonic complexity. Together, these results support the role of dopamine within cortico-striatal circuits in the predictive processes that link the perceptual processing of rhythmic patterns to the affective and motor responses to rhythmic music.
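The inverted-U vs. flattened profiles described above can be captured by the leading coefficient of a quadratic fit of ratings against complexity: strongly negative for an inverted U, near zero for a flat response. A toy sketch with invented ratings (purely illustrative, not the study's data):

```python
import numpy as np

# Hypothetical mean "urge to move" ratings at three rhythmic-complexity levels
# (low, medium, high); the values are invented for illustration only.
complexity = np.array([1.0, 2.0, 3.0])
young = np.array([3.1, 5.6, 3.4])     # inverted U: peak at medium complexity
pd_grp = np.array([4.0, 4.1, 3.9])    # flattened response

def quad_coef(x, y):
    """Leading coefficient of the least-squares quadratic fit; a strongly
    negative value indicates an inverted-U relationship."""
    return np.polyfit(x, y, 2)[0]

print(quad_coef(complexity, young), quad_coef(complexity, pd_grp))
```

With only three complexity levels the quadratic fit interpolates exactly; with more levels it becomes a least-squares summary of the curvature.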
Affiliation(s)
- Victor Pando-Naude
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
- Tomas Edward Matthews
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
- Andreas Højlund
- Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University Hospital, Aarhus, Denmark
- Department of Linguistics, Cognitive Science and Semiotics, School of Communication and Culture, Aarhus University, Aarhus, Denmark
- Sebastian Jakobsen
- Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University Hospital, Aarhus, Denmark
- Department of Linguistics, Cognitive Science and Semiotics, School of Communication and Culture, Aarhus University, Aarhus, Denmark
- Karen Østergaard
- Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University Hospital, Aarhus, Denmark
- Department of Neurology, Aarhus University Hospital, Aarhus, Denmark
- Sano, Private Hospital, Aarhus, Denmark
- Erik Johnsen
- Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University Hospital, Aarhus, Denmark
- Department of Neurology, Aarhus University Hospital, Aarhus, Denmark
- Eduardo A Garza-Villarreal
- Instituto de Neurobiología, Universidad Nacional Autónoma de México (UNAM), Juriquilla, Querétaro, Mexico
- Maria A G Witek
- Department of Music, School of Languages, Cultures, Art History and Music, University of Birmingham, Birmingham, UK
- Virginia Penhune
- Department of Psychology, Concordia University, Montreal, Quebec, Canada
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
15
Charalambous E, Djebbara Z. On natural attunement: Shared rhythms between the brain and the environment. Neurosci Biobehav Rev 2023; 155:105438. [PMID: 37898445 DOI: 10.1016/j.neubiorev.2023.105438] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2023] [Revised: 10/19/2023] [Accepted: 10/24/2023] [Indexed: 10/30/2023]
Abstract
Rhythms exist both in the embodied brain and the built environment. Becoming attuned to the rhythms of the environment, such as repetitive columns, can greatly affect perception. Here, we explore how the built environment affects human cognition and behavior through the concept of natural attunement, often resulting from the coordination of a person's sensory and motor systems with the rhythmic elements of the environment. We argue that the built environment should not be reduced to mere states, representations, and single variables but instead be considered a bundle of highly related continuous signals with which we can resonate. Resonance and entrainment are dynamic processes observed when intrinsic frequencies of the oscillatory brain are influenced by the oscillations of an external signal. This allows visual rhythmic stimulations of the environment to affect the brain and body through neural entrainment, cross-frequency coupling, and phase resetting. We review how real-world architectural settings can affect neural dynamics, cognitive processes, and behavior in people, suggesting the crucial role of everyday rhythms in the brain-body-environment relationship.
Affiliation(s)
- Zakaria Djebbara
- Aalborg University, Department of Architecture, Design, Media, and Technology, Denmark; Technical University of Berlin, Biological Psychology and Neuroergonomics, Germany.
16
Meng J, Zhao Y, Wang K, Sun J, Yi W, Xu F, Xu M, Ming D. Rhythmic temporal prediction enhances neural representations of movement intention for brain-computer interface. J Neural Eng 2023; 20:066004. [PMID: 37875107 DOI: 10.1088/1741-2552/ad0650] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2023] [Accepted: 10/24/2023] [Indexed: 10/26/2023]
Abstract
Objective. Detecting movement intention is a typical use of brain-computer interfaces (BCI). However, as an endogenous electroencephalography (EEG) feature, the neural representation of movement is insufficient for improving motor-based BCI. This study aimed to develop a new movement augmentation BCI encoding paradigm by incorporating the cognitive function of rhythmic temporal prediction, and to test the feasibility of this new paradigm in optimizing detection of movement intention. Methods. A visual-motion synchronization task was designed with two movement intentions (left vs. right) and three rhythmic temporal prediction conditions (1000 ms vs. 1500 ms vs. no temporal prediction). Behavioural and EEG data from 24 healthy participants were recorded. Event-related potentials (ERPs), event-related spectral perturbations induced by left- and right-finger movements, the common spatial pattern (CSP) with a support vector machine, and the Riemann tangent space algorithm with logistic regression were used and compared across the three temporal prediction conditions to test the impact of temporal prediction on movement detection. Results. Behavioural results showed significantly smaller deviation times for the 1000 ms and 1500 ms conditions. ERP analyses revealed that the 1000 ms and 1500 ms conditions led to rhythmic oscillations with a time lag in areas contralateral and ipsilateral to the movement. Compared with no temporal prediction, the 1000 ms condition exhibited greater beta event-related desynchronization (ERD) lateralization in the motor area (P < 0.001) and larger beta ERD in the frontal area (P < 0.001). The 1000 ms condition achieved an averaged left-right decoding accuracy of 89.71% using CSP and 97.30% using the Riemann tangent space algorithm, both significantly higher than no temporal prediction. Moreover, movement and temporal information could be decoded simultaneously, achieving 88.51% four-class accuracy. Significance. The results not only confirm the effectiveness of rhythmic temporal prediction in enhancing the detection ability of motor-based BCI, but also highlight the dual encoding of movement and temporal information within a single BCI paradigm, which is promising for expanding the range of intentions that can be decoded by the BCI.
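The CSP step used above for left-right decoding can be sketched as an eigendecomposition of one class covariance after whitening by the composite covariance; this is the generic textbook construction, not the paper's code, and the toy trials below are simulated:

```python
import numpy as np

rng = np.random.default_rng(1)

def csp(C1, C2):
    """Common spatial pattern filters from two class covariance matrices.
    Columns of W are spatial filters; w[i] is the class-1 variance ratio."""
    d, U = np.linalg.eigh(C1 + C2)
    P = U / np.sqrt(d)                   # whitens the composite covariance
    w, V = np.linalg.eigh(P.T @ C1 @ P)  # ascending eigenvalues in (0, 1)
    return P @ V, w

def trials(strong_ch, n=40, ch=4, T=200):
    """Toy EEG trials (n, channels, samples) with one high-variance channel."""
    X = rng.standard_normal((n, ch, T))
    X[:, strong_ch] *= 3.0
    return X

avg_cov = lambda X: np.mean([np.cov(tr) for tr in X], axis=0)
W, w = csp(avg_cov(trials(0)), avg_cov(trials(1)))
print(w[0], w[-1])  # the extreme filters separate the two classes best
```

Features for a downstream classifier (e.g., an SVM) are typically the log-variances of each trial after projection onto the first and last few filter columns.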
Affiliation(s)
- Jiayuan Meng
- The Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China
- Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin 300392, People's Republic of China
- Yingru Zhao
- The Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China
- Kun Wang
- The Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China
- Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin 300392, People's Republic of China
- Jinsong Sun
- The Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China
- Weibo Yi
- Beijing Machine and Equipment Institute, Beijing, People's Republic of China
- Fangzhou Xu
- International School for Optoelectronic Engineering, Qilu University of Technology (Shandong Academy of Sciences), Jinan, People's Republic of China
- Minpeng Xu
- The Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China
- Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin 300392, People's Republic of China
- International School for Optoelectronic Engineering, Qilu University of Technology (Shandong Academy of Sciences), Jinan, People's Republic of China
- Dong Ming
- The Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China
- Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin 300392, People's Republic of China
17
Belo J, Clerc M, Schön D. The effect of familiarity on neural tracking of music stimuli is modulated by mind wandering. AIMS Neurosci 2023; 10:319-331. [PMID: 38188009 PMCID: PMC10767062 DOI: 10.3934/neuroscience.2023025] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2023] [Revised: 10/29/2023] [Accepted: 11/06/2023] [Indexed: 01/09/2024] Open
Abstract
One way to investigate the cortical tracking of continuous auditory stimuli is the stimulus reconstruction approach. However, the cognitive and behavioral factors impacting this cortical representation remain largely overlooked. Two possible candidates are familiarity with the stimulus and the ability to resist internal distractions. To explore the possible impact of these two factors on the cortical representation of natural music stimuli, we recorded the neural activity of forty-one participants while they listened to monodic natural music stimuli. Using the stimulus reconstruction approach and linear mixed models, we found that familiarity positively impacted the reconstruction accuracy of music stimuli and that this effect of familiarity was modulated by mind wandering.
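The stimulus reconstruction approach mentioned above trains a backward model mapping multichannel neural data to the stimulus envelope and scores the correlation between the reconstructed and actual envelope. A minimal single-lag ridge sketch on simulated data (real decoders also include time-lagged regressors; everything here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: a smoothed-noise "stimulus envelope" and 8 "EEG channels" that
# carry it at random gains plus noise (all parameters are illustrative).
T = 2000
stim = np.convolve(rng.standard_normal(T), np.ones(9) / 9, mode="same")
gains = rng.standard_normal(8)
eeg = np.outer(gains, stim) + 0.5 * rng.standard_normal((8, T))

# Backward (decoding) model: ridge regression from channels to stimulus,
# trained on the first half of the data, evaluated on the held-out half.
half = T // 2
Xtr, Xte = eeg[:, :half], eeg[:, half:]
w = np.linalg.solve(Xtr @ Xtr.T + 1.0 * np.eye(8), Xtr @ stim[:half])
recon = w @ Xte

# Reconstruction accuracy: Pearson r between actual and reconstructed envelope
r = np.corrcoef(stim[half:], recon)[0, 1]
print(round(float(r), 2))
```

Held-out correlation is the "reconstruction accuracy" that effects such as familiarity can then be regressed against.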
Affiliation(s)
- Joan Belo
- Athena Project Team, INRIA, Université Côte d'Azur, Nice, France
- Aix Marseille University, Inserm, INS, Institut de Neurosciences des Systèmes, Marseille, France
- Maureen Clerc
- Athena Project Team, INRIA, Université Côte d'Azur, Nice, France
- Daniele Schön
- Aix Marseille University, Inserm, INS, Institut de Neurosciences des Systèmes, Marseille, France
- Institute for Language, Communication, and the Brain, Aix-en-Provence, France
18
Santoyo AE, Gonzales MG, Iqbal ZJ, Backer KC, Balasubramaniam R, Bortfeld H, Shahin AJ. Neurophysiological time course of timbre-induced music-like perception. J Neurophysiol 2023; 130:291-302. [PMID: 37377190 PMCID: PMC10396220 DOI: 10.1152/jn.00042.2023] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2023] [Revised: 06/26/2023] [Accepted: 06/26/2023] [Indexed: 06/29/2023] Open
Abstract
Traditionally, pitch variation in a sound stream has been integral to music identity. We attempt to expand music's definition by demonstrating that the neural code for musicality is independent of pitch encoding. That is, pitchless sound streams can still induce music-like perception and a neurophysiological hierarchy similar to that of pitched melodies. Previous work reported that neural processing of sounds with no-pitch, fixed-pitch, and irregular-pitch (melodic) patterns exhibits a right-lateralized hierarchical shift, with pitchless sounds favorably processed in Heschl's gyrus (HG), ascending laterally to nonprimary auditory areas for fixed-pitch and even more laterally for melodic patterns. The objective of this EEG study was to assess whether sound encoding maintains a similar hierarchical profile when musical perception is driven by timbre irregularities in the absence of pitch changes. Individuals listened to repetitions of three musical and three nonmusical sound streams. The nonmusical streams comprised seven 200-ms segments of white, pink, or brown noise, separated by silent gaps. Musical streams were created similarly, but with all three noise types combined in a unique order within each stream to induce timbre variations and music-like perception. Subjects classified the sound streams as musical or nonmusical. Musical processing exhibited right-dominant α power enhancement, followed by a lateralized increase in θ phase-locking and spectral power. The θ phase-locking was stronger in musicians than in nonmusicians. The lateralization of activity suggests higher-level auditory processing. Our findings validate the existence of a hierarchical shift, traditionally observed with pitched melodic perception, underscoring that musicality can be achieved with timbre irregularities alone.

NEW & NOTEWORTHY Streams of pitchless noise segments varying in timbre were classified as music-like, and the EEG they induced exhibited a right-lateralized processing hierarchy similar to that of pitched melodic processing. This study provides evidence that the neural code of musicality is independent of pitch encoding. The results have implications for understanding music processing in individuals with degraded pitch perception, such as cochlear-implant listeners, as well as the role of nonpitched sounds in the induction of music-like perceptual states.
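The θ phase-locking reported above is commonly quantified as inter-trial phase coherence (ITC): the magnitude of the mean unit-length phasor at the frequency of interest across trials. A self-contained toy sketch with simulated trials (all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
fs, T, n_trials, f_theta = 250.0, 1.0, 50, 6.0
t = np.arange(0, T, 1 / fs)

def itc(trials, f, fs):
    """Inter-trial phase coherence at frequency f: magnitude of the mean
    unit phasor across trials (1 = perfectly phase-locked, ~0 = random)."""
    n = np.arange(trials.shape[1])
    phasors = trials @ np.exp(-2j * np.pi * f * n / fs)
    return np.abs(np.mean(phasors / np.abs(phasors)))

# Phase-locked trials: the same 6 Hz phase on every trial, plus noise
locked = np.array([np.cos(2 * np.pi * f_theta * t) + rng.standard_normal(t.size)
                   for _ in range(n_trials)])
# Non-locked trials: a random 6 Hz phase on each trial
rand = np.array([np.cos(2 * np.pi * f_theta * t + rng.uniform(0, 2 * np.pi))
                 + rng.standard_normal(t.size) for _ in range(n_trials)])
print(itc(locked, f_theta, fs), itc(rand, f_theta, fs))
```

ITC near 1 indicates consistent stimulus-locked phase across trials; values near 1/√(n_trials) are expected under random phase.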
Affiliation(s)
- Alejandra E Santoyo
- Department of Cognitive and Information Sciences, University of California, Merced, California, United States
- Mariel G Gonzales
- Department of Cognitive and Information Sciences, University of California, Merced, California, United States
- Zunaira J Iqbal
- Department of Cognitive and Information Sciences, University of California, Merced, California, United States
- Kristina C Backer
- Department of Cognitive and Information Sciences, University of California, Merced, California, United States
- Health Sciences Research Institute, University of California, Merced, California, United States
- Ramesh Balasubramaniam
- Department of Cognitive and Information Sciences, University of California, Merced, California, United States
- Health Sciences Research Institute, University of California, Merced, California, United States
- Heather Bortfeld
- Department of Cognitive and Information Sciences, University of California, Merced, California, United States
- Health Sciences Research Institute, University of California, Merced, California, United States
- Department of Psychology, University of California, Merced, California, United States
- Antoine J Shahin
- Department of Cognitive and Information Sciences, University of California, Merced, California, United States
- Health Sciences Research Institute, University of California, Merced, California, United States
19
Northoff G, Klar P, Bein M, Safron A. As without, so within: how the brain's temporo-spatial alignment to the environment shapes consciousness. Interface Focus 2023; 13:20220076. [PMID: 37065263 PMCID: PMC10102730 DOI: 10.1098/rsfs.2022.0076] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2022] [Accepted: 03/02/2023] [Indexed: 04/18/2023] Open
Abstract
Consciousness is constituted by a structure that includes contents as foreground and the environment as background. This structural relation between the experiential foreground and background presupposes a relationship between the brain and the environment that is often neglected in theories of consciousness. The temporo-spatial theory of consciousness addresses the brain-environment relation through a concept labelled 'temporo-spatial alignment'. Briefly, temporo-spatial alignment refers to the interaction of the brain's neuronal activity with, and its adaptation to, interoceptive bodily and exteroceptive environmental stimuli, including their symmetry, as key for consciousness. Combining theory and empirical data, this article attempts to demonstrate the still unclear neuro-phenomenal mechanisms of temporo-spatial alignment. First, we suggest three neuronal layers of the brain's temporo-spatial alignment to the environment. These neuronal layers span a continuum from longer to shorter timescales. (i) The background layer comprises longer and more powerful timescales mediating topographic-dynamic similarities between different subjects' brains. (ii) The intermediate layer includes a mixture of medium-scaled timescales allowing for stochastic matching between environmental inputs and neuronal activity through the brain's intrinsic neuronal timescales and temporal receptive windows. (iii) The foreground layer comprises shorter and less powerful timescales for neuronal entrainment to the temporal onsets of stimuli through neuronal phase shifting and resetting. Second, we elaborate on how the three neuronal layers of temporo-spatial alignment correspond to their respective phenomenal layers of consciousness: (i) the inter-subjectively shared contextual background of consciousness; (ii) an intermediate layer that mediates the relationship between different contents of consciousness; and (iii) a foreground layer that includes specific fast-changing contents of consciousness. Overall, temporo-spatial alignment may provide a mechanism whose different neuronal layers modulate corresponding phenomenal layers of consciousness. Temporo-spatial alignment can thus provide a bridging principle linking physical-energetic (free energy), dynamic (symmetry), neuronal (three layers of distinct time-space scales), and phenomenal (form featured by background-intermediate-foreground) mechanisms of consciousness.
Affiliation(s)
- Georg Northoff
- Mind, Brain Imaging and Neuroethics Research Unit, The Royal's Institute of Mental Health Research, University of Ottawa, Ottawa, ON, Canada K1Z 7K4
- Mental Health Centre, Zhejiang University School of Medicine, Hangzhou 310053, People's Republic of China
- Centre for Cognition and Brain Disorders, Hangzhou Normal University, Hangzhou 310053, People's Republic of China
- Philipp Klar
- Medical Faculty, C. & O. Vogt-Institute for Brain Research, Heinrich Heine University of Düsseldorf, 40225 Düsseldorf, Germany
- Magnus Bein
- Department of Biology and Department of Psychiatry, McGill University, Quebec, Canada H3A 0G4
- Adam Safron
- Center for Psychedelic and Consciousness Research, Department of Psychiatry & Behavioral Sciences, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Cognitive Science Program, Indiana University, Bloomington, IN 47405, USA
- Institute for Advanced Consciousness Studies, Santa Monica, CA 90403, USA
20
Forrester N. Sounds of science: how music at work can fine-tune your research. Nature 2023; 616:399-401. [PMID: 37024686 DOI: 10.1038/d41586-023-00984-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/08/2023]
21
Schmidt-Kassow M, White TN, Abel C, Kaiser J. Pre-stimulus beta power varies as a function of auditory-motor synchronization and temporal predictability. Front Neurosci 2023; 17:1128197. [PMID: 36992854 PMCID: PMC10042076 DOI: 10.3389/fnins.2023.1128197] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2022] [Accepted: 02/22/2023] [Indexed: 03/11/2023] Open
Abstract
Introduction. Auditory-motor interactions can support the preparation for expected sensory input. We investigated the periodic modulation of beta activity in the electroencephalogram to assess the role of active auditory-motor synchronization. Pre-stimulus beta activity (13–30 Hz) has been interpreted as a neural signature of the preparation for expected sensory input. Methods. In the current study, participants silently counted frequency deviants in sequences of pure tones either during a physically inactive control condition or while pedaling on a cycling ergometer. Tones were presented either rhythmically (at 1 Hz) or arrhythmically with variable intervals. In addition to the pedaling conditions with rhythmic (auditory-motor synchronization, AMS) or arrhythmic stimulation, a self-generated stimulus condition was used in which tones were presented in sync with the participants' spontaneous pedaling. This condition served to explore whether sensory predictions are driven primarily by the auditory or by the motor system. Results. Pre-stimulus beta power increased for rhythmic compared to arrhythmic stimulus presentation in both sitting and pedaling conditions but was strongest in the AMS condition. Furthermore, beta power in the AMS condition correlated with motor performance: the better participants synchronized with the rhythmic stimulus sequence, the higher the pre-stimulus beta power. Additionally, beta power was increased for the self-generated stimulus condition compared with arrhythmic pedaling, but there was no difference between the self-generated and the AMS condition. Discussion. The current data pattern indicates that pre-stimulus beta power is not limited to neuronal entrainment (i.e., periodic stimulus presentation) but represents a more general correlate of temporal anticipation. Its association with the precision of AMS supports the role of active behavior in auditory prediction.
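Pre-stimulus band power of the kind compared above can be computed from a short window before each tone via the DFT. A toy sketch in which "rhythmic" trials carry an anticipatory 20 Hz component (the data and effect size are invented, purely to show the computation):

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 500.0
win = np.arange(int(0.5 * fs))          # 500 ms pre-stimulus window (samples)

def beta_power(x, fs, lo=13.0, hi=30.0):
    """Mean spectral power of x within the beta band, via the DFT."""
    spec = np.abs(np.fft.rfft(x - x.mean())) ** 2 / x.size
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    return spec[band].mean()

# Toy comparison: "rhythmic" trials carry an anticipatory 20 Hz component
noise = lambda: rng.standard_normal(win.size)
rhythmic = [2.0 * np.sin(2 * np.pi * 20 * win / fs + rng.uniform(0, 2 * np.pi))
            + noise() for _ in range(30)]
arrhythmic = [noise() for _ in range(30)]
p_r = np.mean([beta_power(x, fs) for x in rhythmic])
p_a = np.mean([beta_power(x, fs) for x in arrhythmic])
print(p_r > p_a)    # anticipatory beta boosts pre-stimulus band power
```

Averaging the band power across trials per condition gives the condition contrast; correlating per-participant power with synchronization precision then mirrors the brain-behavior analysis.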
Affiliation(s)
- Maren Schmidt-Kassow
- Institute of Medical Psychology, Goethe University Frankfurt, Frankfurt, Germany
- Department of Psychiatry, Psychosomatic Medicine and Psychotherapy, University Hospital, Goethe University Frankfurt, Frankfurt, Germany
- Cornelius Abel
- Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Jochen Kaiser
- Institute of Medical Psychology, Goethe University Frankfurt, Frankfurt, Germany
22
Wohltjen S, Toth B, Boncz A, Wheatley T. Synchrony to a beat predicts synchrony with other minds. Sci Rep 2023; 13:3591. [PMID: 36869056 PMCID: PMC9984464 DOI: 10.1038/s41598-023-29776-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2022] [Accepted: 02/10/2023] [Indexed: 03/05/2023] Open
Abstract
Synchrony has been used to describe simple beat entrainment as well as correlated mental processes between people, leading some to question whether the term conflates distinct phenomena. Here we ask whether simple synchrony (beat entrainment) predicts more complex attentional synchrony, consistent with a common mechanism. While eye-tracked, participants listened to regularly spaced tones and indicated changes in volume. Across multiple sessions, we found a reliable individual difference: some people entrained their attention more than others, as reflected in beat-matched pupil dilations that predicted performance. In a second study, eye-tracked participants completed the beat task and then listened to a storyteller, who had been previously recorded while eye-tracked. An individual's tendency to entrain to a beat predicted how strongly their pupils synchronized with those of the storyteller, a corollary of shared attention. The tendency to synchronize is a stable individual difference that predicts attentional synchrony across contexts and complexity.
Affiliation(s)
- Sophie Wohltjen
- Psychological and Brain Sciences Department, Dartmouth College, 6207 Moore Hall, Hanover, NH, 03755, USA.
- Psychology Department, University of Wisconsin, 1202 West Johnson St., Madison, WI, 53706, USA.
- Brigitta Toth
- Psychological and Brain Sciences Department, Dartmouth College, 6207 Moore Hall, Hanover, NH, 03755, USA
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Magyar Tudósok Körútja 2, Budapest, 1117, Hungary
- Adam Boncz
- Psychological and Brain Sciences Department, Dartmouth College, 6207 Moore Hall, Hanover, NH, 03755, USA
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Magyar Tudósok Körútja 2, Budapest, 1117, Hungary
- Thalia Wheatley
- Psychological and Brain Sciences Department, Dartmouth College, 6207 Moore Hall, Hanover, NH, 03755, USA
- Santa Fe Institute, Santa Fe, NM, USA
23
Cortical encoding of rhythmic kinematic structures in biological motion. Neuroimage 2023; 268:119893. [PMID: 36693597 DOI: 10.1016/j.neuroimage.2023.119893] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2022] [Revised: 01/04/2023] [Accepted: 01/20/2023] [Indexed: 01/22/2023] Open
Abstract
Biological motion (BM) perception is of great survival value to human beings. The critical characteristics of BM information lie in kinematic cues containing rhythmic structures. However, how rhythmic kinematic structures of BM are dynamically represented in the brain and contribute to visual BM processing remains largely unknown. Here, we probed this issue in three experiments using electroencephalogram (EEG). We found that neural oscillations of observers entrained to the hierarchical kinematic structures of the BM sequences (i.e., step-cycle and gait-cycle for point-light walkers). Notably, only the cortical tracking of the higher-level rhythmic structure (i.e., gait-cycle) exhibited a BM processing specificity, manifested by enhanced neural responses to upright over inverted BM stimuli. This effect could be extended to different motion types and tasks, with its strength positively correlated with the perceptual sensitivity to BM stimuli at the right temporal brain region dedicated to visual BM processing. Modeling results further suggest that the neural encoding of spatiotemporally integrative kinematic cues, in particular the opponent motions of bilateral limbs, drives the selective cortical tracking of BM information. These findings underscore the existence of a cortical mechanism that encodes periodic kinematic features of body movements, which underlies the dynamic construction of visual BM perception.
24
Xu N, Zhao B, Luo L, Zhang K, Shao X, Luan G, Wang Q, Hu W, Wang Q. Two stages of speech envelope tracking in human auditory cortex modulated by speech intelligibility. Cereb Cortex 2023; 33:2215-2228. [PMID: 35695785 DOI: 10.1093/cercor/bhac203] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2022] [Revised: 05/01/2022] [Accepted: 05/02/2022] [Indexed: 11/13/2022] Open
Abstract
The envelope is essential for speech perception. Recent studies have shown that cortical activity can track the acoustic envelope. However, whether the tracking strength reflects the extent of speech intelligibility processing remains controversial. Here, using stereo-electroencephalography, we directly recorded activity in human auditory cortex while subjects listened to either natural or noise-vocoded speech. These two stimuli have approximately identical envelopes, but the noise-vocoded speech is unintelligible. According to the tracking lags, we revealed two stages of envelope tracking: an early high-γ (60-140 Hz) power stage that preferred the noise-vocoded speech and a late θ (4-8 Hz) phase stage that preferred the natural speech. Furthermore, the decoding performance of high-γ power was better in primary than in nonprimary auditory cortex, consistent with its short tracking delay, while θ phase showed better decoding performance in right auditory cortex. In addition, high-γ responses with sustained temporal profiles in nonprimary auditory cortex were dominant in both envelope tracking and decoding. In sum, we suggest a functional dissociation between high-γ power and θ phase: the former reflects fast and automatic processing of brief acoustic features, while the latter correlates with slow build-up processing facilitated by speech intelligibility.
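The two neural signatures compared here can be illustrated with a minimal sketch of how θ phase and high-γ power are typically extracted from a recording (a generic bandpass-plus-Hilbert approach, not the authors' pipeline; the sample rate and toy signal below are assumptions):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, fs, low, high, order=4):
    """Zero-phase Butterworth bandpass."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def theta_phase(x, fs):
    """Instantaneous phase of the 4-8 Hz theta band."""
    return np.angle(hilbert(bandpass(x, fs, 4, 8)))

def high_gamma_power(x, fs):
    """Instantaneous power of the 60-140 Hz high-gamma band."""
    return np.abs(hilbert(bandpass(x, fs, 60, 140))) ** 2

# Toy signal at 1 kHz: a 6 Hz theta-band component plus broadband noise
fs = 1000
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 6 * t) + 0.1 * rng.normal(size=t.size)
phase = theta_phase(x, fs)       # values in (-pi, pi]
power = high_gamma_power(x, fs)  # nonnegative
```

Either quantity could then be related to the stimulus envelope at varying lags to estimate a tracking delay.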
Affiliation(s)
- Na Xu
- Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, No. 119 South Fourth Ring West Road, Fengtai District, Beijing 100070, China; National Clinical Research Center for Neurological Diseases, No. 119 South Fourth Ring West Road, Fengtai District, Beijing 100070, China
- Baotian Zhao
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, No. 119 South Fourth Ring West Road, Fengtai District, Beijing 100070, China
- Lu Luo
- School of Psychology, Beijing Sport University, No. 48 Xinxi Road, Haidian District, Beijing 100084, China
- Kai Zhang
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, No. 119 South Fourth Ring West Road, Fengtai District, Beijing 100070, China
- Xiaoqiu Shao
- Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, No. 119 South Fourth Ring West Road, Fengtai District, Beijing 100070, China
- Guoming Luan
- Beijing Key Laboratory of Epilepsy, Epilepsy Center, Sanbo Brain Hospital, Capital Medical University, No. 50 Yikesong Xiangshan Road, Haidian District, Beijing 100093, China; Beijing Institute of Brain Disorders, Collaborative Innovation Center for Brain Disorders, Capital Medical University, No. 10 Xitoutiao, You An Men, Beijing 100069, China
- Qian Wang
- Beijing Key Laboratory of Epilepsy, Epilepsy Center, Sanbo Brain Hospital, Capital Medical University, No. 50 Yikesong Xiangshan Road, Haidian District, Beijing 100093, China; School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, No. 5 Yiheyuan Road, Haidian District, Beijing 100871, China; IDG/McGovern Institute for Brain Research, Peking University, No. 5 Yiheyuan Road, Haidian District, Beijing 100871, China
- Wenhan Hu
- Beijing Neurosurgical Institute, Capital Medical University, No. 119 South Fourth Ring West Road, Fengtai District, Beijing 100070, China
- Qun Wang
- Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, No. 119 South Fourth Ring West Road, Fengtai District, Beijing 100070, China; National Clinical Research Center for Neurological Diseases, No. 119 South Fourth Ring West Road, Fengtai District, Beijing 100070, China; Beijing Institute of Brain Disorders, Collaborative Innovation Center for Brain Disorders, Capital Medical University, No. 10 Xitoutiao, You An Men, Beijing 100069, China
25
Musical tempo affects EEG spectral dynamics during subsequent time estimation. Biol Psychol 2023; 178:108517. [PMID: 36801434 DOI: 10.1016/j.biopsycho.2023.108517] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2022] [Revised: 01/24/2023] [Accepted: 02/12/2023] [Indexed: 02/19/2023]
Abstract
The perception of time depends on the rhythmicity of internal and external synchronizers. One external synchronizer that affects time estimation is music. This study analyzed the effects of musical tempo on EEG spectral dynamics during subsequent time estimation. Participants performed a time production task after (i) silence and (ii) listening to music at different tempi (90, 120, and 150 bpm) while EEG activity was recorded. While listening, alpha power increased at all tempi compared to the resting state, and beta power increased at the fastest tempo. The beta increase persisted during the subsequent time estimations, with higher beta power during the task after listening to music at the fastest tempo than during task performance without music. Spectral dynamics in frontal regions showed lower alpha activity in the final stages of time estimation after listening to music at 90 and 120 bpm than in the silence condition, and higher beta in the early stages at 150 bpm. Behaviorally, the 120 bpm tempo produced slight improvements. Listening to music thus modified tonic EEG activity that subsequently affected EEG dynamics during time production. Music at a near-optimal rate may have benefited temporal expectation and anticipation, whereas the fastest tempo may have generated an over-activated state that affected subsequent time estimations. These results emphasize the importance of music as an external stimulus that can affect the brain's functional organization during time perception even after listening has ended.
26
Malekmohammadi A, Ehrlich SK, Cheng G. Modulation of theta and gamma oscillations during familiarization with previously unknown music. Brain Res 2023; 1800:148198. [PMID: 36493897 DOI: 10.1016/j.brainres.2022.148198] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2022] [Revised: 11/24/2022] [Accepted: 12/04/2022] [Indexed: 12/12/2022]
Abstract
Repeated listening to unknown music leads to gradual familiarization with musical sequences, engaging an array of dynamic neural responses along the way. This study elucidates that dynamic brain response and its variation over time by investigating electrophysiological changes during familiarization with initially unknown music. Twenty subjects were asked to familiarize themselves with previously unknown 10 s classical music excerpts over three repetitions while their electroencephalogram was recorded. Dynamic spectral changes in neural oscillations were monitored by time-frequency analyses across all frequency bands (theta: 5-9 Hz, alpha: 9-13 Hz, low beta: 13-21 Hz, high beta: 21-32 Hz, and gamma: 32-50 Hz). The analyses reveal sustained theta event-related desynchronization (ERD) in frontal-midline and left-prefrontal electrodes that decreased gradually from the first to the third repetition of the same excerpts (frontal-midline: 57.90 %, left-prefrontal: 75.93 %). Similarly, sustained gamma ERD decreased in frontal-midline and bilateral frontal/temporal areas (frontal-midline: 61.47 %, left-frontal: 90.88 %, right-frontal: 87.74 %). During familiarization, the decrease of theta ERD was greater in the first part (1-5 s) of the excerpts, whereas the decrease of gamma ERD was greater in the second part (5-9 s). The results suggest that decreased theta ERD is associated with successfully identifying familiar sequences, whereas decreased gamma ERD is related to forming representations of unfamiliar sequences.
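ERD of the kind reported here is conventionally quantified as the percent change in band power relative to a reference period, with negative values indicating desynchronization. A minimal sketch (generic Welch-based band power, not the authors' exact time-frequency method; the sample rate and toy signals are assumptions):

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, band, nperseg=256):
    """Average Welch PSD within a frequency band."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), nperseg))
    mask = (f >= band[0]) & (f <= band[1])
    return pxx[mask].mean()

def erd_percent(baseline, event, fs, band):
    """Event-related (de)synchronization as percent band-power change
    from a reference period; negative values indicate ERD."""
    pb = band_power(baseline, fs, band)
    return 100.0 * (band_power(event, fs, band) - pb) / pb

# Toy check: a 6 Hz (theta-band) rhythm attenuated during the event
fs = 250
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
baseline = np.sin(2 * np.pi * 6 * t) + 0.05 * rng.normal(size=t.size)
event = 0.5 * np.sin(2 * np.pi * 6 * t) + 0.05 * rng.normal(size=t.size)
erd = erd_percent(baseline, event, fs, band=(5, 9))  # strongly negative
```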
Affiliation(s)
- Alireza Malekmohammadi
- Chair for Cognitive System, Technical University of Munich, Electrical Engineering, Munich, 80333, Germany.
- Stefan K Ehrlich
- Chair for Cognitive System, Technical University of Munich, Electrical Engineering, Munich, 80333, Germany
- Gordon Cheng
- Chair for Cognitive System, Technical University of Munich, Electrical Engineering, Munich, 80333, Germany
27
Weise A, Grimm S, Maria Rimmele J, Schröger E. Auditory representations for long lasting sounds: Insights from event-related brain potentials and neural oscillations. BRAIN AND LANGUAGE 2023; 237:105221. [PMID: 36623340 DOI: 10.1016/j.bandl.2022.105221] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/23/2021] [Revised: 12/26/2022] [Accepted: 12/27/2022] [Indexed: 06/17/2023]
Abstract
The basic features of short sounds, such as frequency and intensity, including their temporal dynamics, are integrated into a unitary representation. Knowledge of how our brain processes long-lasting sounds is scarce. We review research utilizing the Mismatch Negativity event-related potential and neural oscillatory activity to study representations of long-lasting simple versus complex sounds, such as sinusoidal tones versus speech. There is evidence for a temporal constraint in the formation of auditory representations: auditory edges, like sound onsets within long-lasting sounds, open a temporal window of about 350 ms in which the sound's dynamics are integrated into a representation, while information beyond that window contributes less to that representation. This integration window segments the auditory input into short chunks. We argue that the representations established in adjacent integration windows can be concatenated into an auditory representation of a long sound, thus overcoming the temporal constraint.
Affiliation(s)
- Annekathrin Weise
- Department of Psychology, Ludwig-Maximilians-University Munich, Germany; Wilhelm Wundt Institute for Psychology, Leipzig University, Germany.
- Sabine Grimm
- Wilhelm Wundt Institute for Psychology, Leipzig University, Germany.
- Johanna Maria Rimmele
- Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Germany; Center for Language, Music and Emotion, New York University, Max Planck Institute, Department of Psychology, 6 Washington Place, New York, NY 10003, United States.
- Erich Schröger
- Wilhelm Wundt Institute for Psychology, Leipzig University, Germany.
28
Snapiri L, Kaplan Y, Shalev N, Landau AN. Rhythmic modulation of visual discrimination is linked to individuals' spontaneous motor tempo. Eur J Neurosci 2023; 57:646-656. [PMID: 36512369 DOI: 10.1111/ejn.15898] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2022] [Revised: 11/17/2022] [Accepted: 12/01/2022] [Indexed: 12/15/2022]
Abstract
The impact of external rhythmic structure on perception has been demonstrated across different modalities and experimental paradigms. However, recent findings emphasize substantial individual differences in rhythm-based perceptual modulation. Here, we examine the link between spontaneous rhythmic preferences, as measured through the motor system, and individual differences in rhythmic modulation of visual discrimination. As a first step, we measure individual rhythmic preferences using the spontaneous tapping task. Then we assess perceptual rhythmic modulation using a visual discrimination task in which targets can appear either in-phase or out-of-phase with a preceding rhythmic stream of visual stimuli. The tempo of the preceding stream was manipulated over different experimental blocks (0.77 Hz, 1.4 Hz, 2 Hz). We find that visual rhythmic stimulation modulates discrimination performance. The modulation is dependent on the tempo of stimulation, with maximal perceptual benefits for the slowest tempo of stimulation (0.77 Hz). Most importantly, the strength of modulation is also linked to individuals' spontaneous motor tempo. Individuals with slower spontaneous tempi show greater rhythmic modulation compared to individuals with faster spontaneous tempi. This finding suggests that different tempi affect the cognitive system with varying levels of efficiency and that self-generated rhythms impact our ability to utilize rhythmic structure in the environment for guiding perception and performance.
Affiliation(s)
- Leah Snapiri
- Department of Psychology, The Hebrew University of Jerusalem, Jerusalem, Israel
- Yael Kaplan
- Department of Psychology, The Hebrew University of Jerusalem, Jerusalem, Israel
- Nir Shalev
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Ayelet N Landau
- Department of Psychology, The Hebrew University of Jerusalem, Jerusalem, Israel; Department of Cognitive Science, The Hebrew University of Jerusalem, Jerusalem, Israel
29
Dovorany N, Brannick S, Johnson N, Ratiu I, LaCroix AN. Happy and sad music acutely modulate different types of attention in older adults. Front Psychol 2023; 14:1029773. [PMID: 36777231 PMCID: PMC9909555 DOI: 10.3389/fpsyg.2023.1029773] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2022] [Accepted: 01/10/2023] [Indexed: 01/27/2023] Open
Abstract
Of the three subtypes of attention outlined by the attentional subsystems model, alerting (the vigilance or arousal needed for task completion) and executive control (the ability to inhibit distracting information while completing a goal) are susceptible to age-related decline, while orienting remains relatively stable. Yet few studies have investigated strategies that may acutely maintain or promote attention in typically aging older adults. Music listening may be one such strategy, as past research shows that listening to happy music, characterized by a fast tempo and major mode, increases cognitive task performance, likely by increasing cognitive arousal. The present study investigated whether listening to happy music (fast tempo, major mode) impacts alerting, orienting, and executive control attention in 57 middle-aged and older adults (M = 61.09 years, SD = 7.16). Participants completed the Attention Network Test (ANT) before and after listening to music rated as happy or sad (slow tempo, minor mode), or to no music (i.e., silence), for 10 min. Our results demonstrate that happy music increased alerting attention, particularly when relevant and irrelevant information conflicted within a trial. Contrary to what was predicted, sad music modulated executive control performance. Overall, our findings indicate that music written in the major mode with a fast tempo (happy) and in the minor mode with a slow tempo (sad) modulates different aspects of attention in the short term.
Affiliation(s)
- Nicholas Dovorany
- College of Graduate Studies, Midwestern University, Glendale, AZ, United States
- Schea Brannick
- College of Health Sciences, Midwestern University, Glendale, AZ, United States
- Nathan Johnson
- College of Graduate Studies, Midwestern University, Glendale, AZ, United States
- Ileana Ratiu
- College of Health Sciences, Midwestern University, Glendale, AZ, United States; College of Health Solutions, Arizona State University, Tempe, AZ, United States
- Arianna N. LaCroix
- Department of Speech, Language, and Hearing Sciences, College of Health and Human Sciences, Purdue University, West Lafayette, IN, United States
- Correspondence: Arianna N. LaCroix
30
Cavalcanti JC, Eriksson A, Barbosa PA. On the speaker discriminatory power asymmetry regarding acoustic-phonetic parameters and the impact of speaking style. Front Psychol 2023; 14:1101187. [PMID: 37138997 PMCID: PMC10150585 DOI: 10.3389/fpsyg.2023.1101187] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2022] [Accepted: 03/29/2023] [Indexed: 05/05/2023] Open
Abstract
This study aimed to assess what we refer to as the speaker discriminatory power asymmetry and its forensic implications in comparisons performed in different speaking styles: spontaneous dialogues vs. interviews. We also addressed the impact of data sampling on speaker discriminatory performance concerning different acoustic-phonetic estimates. The participants were 20 male speakers of Brazilian Portuguese from the same dialectal area. The speech material consisted of spontaneous telephone conversations between familiar individuals and interviews conducted between each individual participant and the researcher. Nine acoustic-phonetic parameters were chosen for the comparisons, spanning temporal, melodic, and spectral acoustic-phonetic estimates. An analysis based on the combination of different parameters was also conducted. Two speaker discriminatory metrics were examined: the log-likelihood-ratio cost (Cllr) and the equal error rate (EER). A general speaker discriminatory trend was suggested when assessing the parameters individually. Parameters in the temporal acoustic-phonetic class showed the weakest speaker-contrasting power, as evidenced by relatively higher Cllr and EER values. Moreover, from the set of acoustic parameters assessed, spectral parameters, mainly high formant frequencies (F3 and F4), performed best in terms of speaker discrimination, showing the lowest EER and Cllr scores. The results suggest a speaker discriminatory power asymmetry across acoustic-phonetic classes, with temporal parameters tending to present lower discriminatory power. The speaking-style mismatch also considerably impacted the speaker comparison task by undermining overall discriminatory performance. A statistical model based on the combination of different acoustic-phonetic estimates performed best in this case. Finally, data sampling proved to be of crucial relevance for the reliability of discriminatory power assessment.
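Of the two metrics, the EER is the simpler to sketch: it is the operating point at which false acceptances and false rejections are equally likely. A minimal threshold-sweep implementation (illustrative only; the score distributions below are invented, not the study's data):

```python
import numpy as np

def equal_error_rate(same_scores, diff_scores):
    """Equal error rate: sweep a decision threshold over all observed
    scores and return the point where the false-acceptance rate
    (different-speaker scores at/above threshold) best matches the
    false-rejection rate (same-speaker scores below it)."""
    best_gap, eer = np.inf, None
    for thr in np.sort(np.concatenate([same_scores, diff_scores])):
        far = np.mean(diff_scores >= thr)
        frr = np.mean(same_scores < thr)
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

# Toy check: well-separated score distributions give a low EER
rng = np.random.default_rng(0)
same = rng.normal(2.0, 1.0, 1000)   # same-speaker comparison scores
diff = rng.normal(-2.0, 1.0, 1000)  # different-speaker scores
eer = equal_error_rate(same, diff)  # small for this separation
```

A parameter with weak discriminatory power would produce heavily overlapping score distributions and an EER approaching 0.5.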
Affiliation(s)
- Julio Cesar Cavalcanti
- Laboratory of Phonetics, Department of Linguistics, Stockholm University, Stockholm, Sweden
- Institute of Language Studies, Department of Linguistics, University of Campinas, Campinas, Brazil
- Correspondence: Julio Cesar Cavalcanti
- Anders Eriksson
- Laboratory of Phonetics, Department of Linguistics, Stockholm University, Stockholm, Sweden
- Plinio A. Barbosa
- Institute of Language Studies, Department of Linguistics, University of Campinas, Campinas, Brazil
31
Tichko P, Page N, Kim JC, Large EW, Loui P. Neural Entrainment to Musical Pulse in Naturalistic Music Is Preserved in Aging: Implications for Music-Based Interventions. Brain Sci 2022; 12:brainsci12121676. [PMID: 36552136 PMCID: PMC9775503 DOI: 10.3390/brainsci12121676] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2022] [Revised: 11/21/2022] [Accepted: 12/01/2022] [Indexed: 12/12/2022] Open
Abstract
Neural entrainment to musical rhythm is thought to underlie the perception and production of music. In aging populations, the strength of neural entrainment to rhythm has been found to be attenuated, particularly during attentive listening to auditory streams. However, previous studies on neural entrainment to rhythm and aging have often employed artificial auditory rhythms or limited pieces of recorded, naturalistic music, failing to account for the diversity of rhythmic structures found in natural music. As part of a larger project assessing a novel music-based intervention for healthy aging, we investigated neural entrainment to musical rhythms in the electroencephalogram (EEG) while a sample of younger and older adults listened to self-selected musical recordings. We specifically measured neural entrainment at the level of the musical pulse, quantified here as the phase-locking value (PLV), after normalizing the PLVs to each musical recording's detected pulse frequency. As predicted, we observed strong neural phase-locking to the musical pulse, and to the sub-harmonic and harmonic levels of the musical meter. Overall, PLVs did not differ significantly between older and younger adults. This preserved neural entrainment to musical pulse and rhythm could support the design of music-based interventions that aim to modulate endogenous brain activity via self-selected music for healthy cognitive aging.
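The phase-locking value has a compact definition: the magnitude of the average unit phasor of the instantaneous phase difference between the EEG and the stimulus rhythm. A minimal sketch (a generic Hilbert-based PLV, not the study's normalization pipeline; the sample rate and signals are invented):

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(eeg, reference):
    """Phase-locking value between an EEG channel and a reference
    rhythm at the same sample rate: the mean resultant length of the
    instantaneous phase difference (0 = no locking, 1 = perfect)."""
    dphi = np.angle(hilbert(eeg)) - np.angle(hilbert(reference))
    return np.abs(np.mean(np.exp(1j * dphi)))

# Toy check at a 2 Hz pulse: a constant phase lag still yields a PLV
# near 1, while unrelated noise yields a PLV near 0
fs, f = 250, 2.0
t = np.arange(0, 20, 1 / fs)
pulse = np.sin(2 * np.pi * f * t)
locked = np.sin(2 * np.pi * f * t + 0.4)  # constant phase lag
rng = np.random.default_rng(0)
plv_locked = phase_locking_value(locked, pulse)
plv_noise = phase_locking_value(rng.normal(size=t.size), pulse)
```

Normalizing to each recording's detected pulse frequency, as the study does, would amount to evaluating this quantity at the tempo-specific frequency of each piece.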
Affiliation(s)
- Parker Tichko
- Department of Music, Northeastern University, Boston, MA 02115, USA
- Nicole Page
- Department of Music, Northeastern University, Boston, MA 02115, USA
- Ji Chul Kim
- Department of Psychological Sciences, University of Connecticut, Storrs, CT 06269, USA
- Edward W. Large
- Department of Psychological Sciences, University of Connecticut, Storrs, CT 06269, USA
- Psyche Loui
- Department of Music, Northeastern University, Boston, MA 02115, USA
32
Daikoku T, Goswami U. Hierarchical amplitude modulation structures and rhythm patterns: Comparing Western musical genres, song, and nature sounds to Babytalk. PLoS One 2022; 17:e0275631. [PMID: 36240225 PMCID: PMC9565671 DOI: 10.1371/journal.pone.0275631] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2021] [Accepted: 09/20/2022] [Indexed: 11/19/2022] Open
Abstract
Statistical learning of physical stimulus characteristics is important for the development of cognitive systems like language and music. Rhythm patterns are a core component of both systems, and rhythm is key to language acquisition by infants. Accordingly, the physical stimulus characteristics that yield speech rhythm in "Babytalk" may also describe the hierarchical rhythmic relationships that characterize human music and song. Computational modelling of the amplitude envelope of "Babytalk" (infant-directed speech, IDS) using a demodulation approach (the Spectral-Amplitude Modulation Phase Hierarchy model, S-AMPH) can describe these characteristics. S-AMPH modelling of Babytalk has shown previously that bands of amplitude modulations (AMs) at different temporal rates, and their phase relations, help to create its structured inherent rhythms. Additionally, S-AMPH modelling of children's nursery rhymes shows that different rhythm patterns (trochaic, iambic, dactylic) depend on the phase relations between AM bands centred on ~2 Hz and ~5 Hz. The importance of these AM phase relations was confirmed via a second demodulation approach (Probabilistic Amplitude Demodulation, PAD). Here we apply both S-AMPH and PAD to demodulate the amplitude envelopes of Western musical genres and songs. Quasi-rhythmic, non-human sounds found in nature (birdsong, rain, wind) were utilized for control analyses. We expected that, from an AM perspective, the physical stimulus characteristics of human music and song would match those of IDS. Given prior speech-based analyses, we also expected that AM cycles derived from the modelling might identify musical units like crotchets, quavers and demi-quavers. Both models revealed a hierarchically nested AM structure for music and song, but not for nature sounds. This AM structure for music and song matched IDS. Both models also generated systematic AM cycles yielding musical units like crotchets and quavers. Both music and language are created by humans and shaped by culture. Acoustic rhythm in IDS and music appears to depend on many of the same physical characteristics, facilitating learning.
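The common first step in both demodulation approaches, recovering an amplitude envelope whose slow fluctuations carry the rhythm, can be sketched generically (Hilbert demodulation plus a low-pass filter; this is not the S-AMPH or PAD implementation, and the cutoff and toy signal are assumptions):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def amplitude_envelope(audio, fs, cutoff=10.0, order=4):
    """Amplitude envelope by Hilbert demodulation followed by a
    low-pass filter, keeping only the slow modulations (< cutoff Hz)
    that carry speech and musical rhythm."""
    env = np.abs(hilbert(audio))
    b, a = butter(order, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, env)

# Toy check: a 4 Hz amplitude modulation of a 440 Hz carrier is
# recovered in the envelope
fs = 8000
t = np.arange(0, 2, 1 / fs)
am = 1.0 + 0.5 * np.sin(2 * np.pi * 4 * t)  # 4 Hz modulator
audio = am * np.sin(2 * np.pi * 440 * t)    # 440 Hz carrier
env = amplitude_envelope(audio, fs)
```

S-AMPH then decomposes such an envelope into AM bands at different temporal rates, whose phase relations are compared across rates.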
Affiliation(s)
- Tatsuya Daikoku
- Centre for Neuroscience in Education, University of Cambridge, Cambridge, United Kingdom
- International Research Center for Neurointelligence, The University of Tokyo, Bunkyo City, Tokyo, Japan
- Center for Brain, Mind and KANSEI Sciences Research, Hiroshima University, Hiroshima, Japan
- Usha Goswami
- Centre for Neuroscience in Education, University of Cambridge, Cambridge, United Kingdom
33
Broderick MP, Zuk NJ, Anderson AJ, Lalor EC. More than words: Neurophysiological correlates of semantic dissimilarity depend on comprehension of the speech narrative. Eur J Neurosci 2022; 56:5201-5214. [PMID: 35993240 DOI: 10.1111/ejn.15805] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2022] [Revised: 08/15/2022] [Accepted: 08/18/2022] [Indexed: 12/14/2022]
Abstract
Speech comprehension relies on the ability to understand words within a coherent context. Recent studies have attempted to obtain electrophysiological indices of this process by modelling how brain activity is affected by a word's semantic dissimilarity to preceding words. Although the resulting indices appear robust and are strongly modulated by attention, it remains possible that, rather than capturing the contextual understanding of words, they may actually reflect word-to-word changes in semantic content without the need for a narrative-level understanding on the part of the listener. To test this, we recorded electroencephalography from subjects who listened to speech presented in either its original, narrative form, or after scrambling the word order by varying amounts. This manipulation affected the ability of subjects to comprehend the speech narrative but not the ability to recognise individual words. Neural indices of semantic understanding and low-level acoustic processing were derived for each scrambling condition using the temporal response function. Signatures of semantic processing were observed when speech was unscrambled or minimally scrambled and subjects understood the speech. The same markers were absent for higher scrambling levels as speech comprehension dropped. In contrast, word recognition remained high and neural measures related to envelope tracking did not vary significantly across scrambling conditions. This supports the previous claim that electrophysiological indices based on the semantic dissimilarity of words to their context reflect a listener's understanding of those words relative to that context. It also highlights the relative insensitivity of neural measures of low-level speech processing to speech comprehension.
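The temporal response function referred to here is, in its simplest form, a regularized linear mapping from time-lagged stimulus features to the EEG. A single-feature ridge-regression sketch (toy data; real analyses use multichannel EEG, richer feature sets such as semantic dissimilarity, and cross-validated regularization):

```python
import numpy as np

def estimate_trf(stimulus, eeg, n_lags, ridge=1.0):
    """Temporal response function via ridge regression: map
    time-lagged stimulus samples to the EEG, returning one weight
    per lag (a minimal single-feature sketch)."""
    n = len(stimulus)
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stimulus[:n - lag]
    XtX = X.T @ X + ridge * np.eye(n_lags)
    return np.linalg.solve(XtX, X.T @ eeg)

# Toy check: EEG built as a known lagged response is recovered
rng = np.random.default_rng(1)
stim = rng.normal(size=2000)
true_trf = np.array([0.0, 1.0, 0.5, 0.0, -0.3])
eeg = np.convolve(stim, true_trf)[:2000] + 0.01 * rng.normal(size=2000)
est = estimate_trf(stim, eeg, n_lags=5)  # approximately true_trf
```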
Affiliation(s)
- Michael P Broderick
- School of Engineering, Trinity Centre for Biomedical Engineering and Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- Nathaniel J Zuk
- School of Engineering, Trinity Centre for Biomedical Engineering and Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- Andrew J Anderson
- Del Monte Institute for Neuroscience, Department of Neuroscience, Department of Biomedical Engineering, University of Rochester, Rochester, New York, USA
- Edmund C Lalor
- School of Engineering, Trinity Centre for Biomedical Engineering and Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland; Del Monte Institute for Neuroscience, Department of Neuroscience, Department of Biomedical Engineering, University of Rochester, Rochester, New York, USA
34
Bugos JA, Bidelman GM, Moreno S, Shen D, Lu J, Alain C. Music and Visual Art Training Increase Auditory-Evoked Theta Oscillations in Older Adults. Brain Sci 2022; 12:brainsci12101300. [PMID: 36291234 PMCID: PMC9599228 DOI: 10.3390/brainsci12101300] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2022] [Revised: 09/18/2022] [Accepted: 09/20/2022] [Indexed: 11/30/2022] Open
Abstract
Music training has been shown to induce changes in auditory processing in older adults. However, most findings stem from correlational studies, and fewer examine long-term, sustainable benefits. Moreover, research shows small and variable changes in auditory event-related potential (ERP) amplitudes and/or latencies in older adults. Conventional time-domain analysis methods, however, are susceptible to latency jitter in evoked responses and may miss important information about brain processing. Here, we used time-frequency analyses to examine training-related changes in auditory-evoked oscillatory activity in healthy older adults (N = 50) assigned to a music training (n = 16), visual art training (n = 17), or no-treatment control (n = 17) group. All three groups were presented with oddball auditory paradigms with synthesized piano tones or vowels during the acquisition of high-density EEG. Neurophysiological measures were collected at three time points: pre-training, post-training, and a three-month follow-up. Training programs were administered for 12 weeks. Theta power increased from pre- to post-training in the music (p = 0.010) and visual art (p = 0.010) groups, but not in controls (p = 0.776), and the increase was maintained at the three-month follow-up. Results show training-related plasticity of auditory processing in aging adults. Neuroplastic changes were maintained three months post-training, suggesting music and visual art programs yield lasting benefits that might facilitate encoding, retention, and memory retrieval.
Affiliation(s)
- Jennifer A. Bugos
- School of Music, University of South Florida, Tampa, FL 33620, USA
- Correspondence: ; Tel.: +1-352-339-4076
- Gavin M. Bidelman
- Department of Speech, Language, and Hearing Sciences, Indiana University, Bloomington, IN 47408, USA
- Sylvain Moreno
- School of Interactive Arts and Technology, Simon Fraser University, Burnaby, BC V3T 0A3, Canada
- Circle Innovation, Burnaby, BC V3T 0A3, Canada
- Dawei Shen
- Rotman Research Institute, Toronto, ON M6A 2E1, Canada
- Jing Lu
- MOE Key Lab for Neuroinformation, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 611731, China
- Claude Alain
- Rotman Research Institute, Toronto, ON M6A 2E1, Canada
- Department of Psychology, University of Toronto, Toronto, ON M5S 3G3, Canada

35
Weineck K, Wen OX, Henry MJ. Neural synchronization is strongest to the spectral flux of slow music and depends on familiarity and beat salience. eLife 2022; 11:e75515. [PMID: 36094165] [PMCID: PMC9467512] [DOI: 10.7554/elife.75515]
Abstract
Neural activity in the auditory system synchronizes to sound rhythms, and brain–environment synchronization is thought to be fundamental to successful auditory perception. Sound rhythms are often operationalized in terms of the sound’s amplitude envelope. We hypothesized that – especially for music – the envelope might not best capture the complex spectro-temporal fluctuations that give rise to beat perception and synchronized neural activity. This study investigated (1) neural synchronization to different musical features, (2) tempo-dependence of neural synchronization, and (3) dependence of synchronization on familiarity, enjoyment, and ease of beat perception. In this electroencephalography study, 37 human participants listened to tempo-modulated music (1–4 Hz). Independent of whether the analysis approach was based on temporal response functions (TRFs) or reliable components analysis (RCA), the spectral flux of music – as opposed to the amplitude envelope – evoked strongest neural synchronization. Moreover, music with slower beat rates, high familiarity, and easy-to-perceive beats elicited the strongest neural response. Our results demonstrate the importance of spectro-temporal fluctuations in music for driving neural synchronization, and highlight its sensitivity to musical tempo, familiarity, and beat salience. When we listen to a melody, the activity of our neurons synchronizes to the music: in fact, it is likely that the closer the match, the better we can perceive the piece. However, it remains unclear exactly which musical features our brain cells synchronize to. Previous studies, which have often used ‘simplified’ music, have highlighted that the amplitude envelope (how the intensity of the sounds changes over time) could be involved in this phenomenon, alongside factors such as musical training, attention, familiarity with the piece or even enjoyment. 
Whether differences in neural synchronization could explain why musical tastes vary between people is also still a matter of debate. In their study, Weineck et al. aim to better understand what drives neuronal synchronization to music. A technique known as electroencephalography was used to record brain activity in 37 volunteers listening to instrumental music whose tempo ranged from 60 to 240 beats per minute. The tunes varied across an array of features such as familiarity, enjoyment and how easy the beat was to perceive. Two different approaches were then used to calculate neural synchronization, which yielded converging results. The analyses revealed that three types of factors were associated with a strong neural synchronization. First, amongst the various cadences, a tempo of 60-120 beats per minute elicited the strongest match with neuronal activity. Interestingly, this beat is commonly found in Western pop music, is usually preferred by listeners, and often matches spontaneous body rhythms such as walking pace. Second, synchronization was linked to variations in pitch and sound quality (known as ‘spectral flux’) rather than in the amplitude envelope. And finally, familiarity and perceived beat saliency – but not enjoyment or musical expertise – were connected to stronger synchronization. These findings help to better understand how our brains allow us to perceive and connect with music. The work conducted by Weineck et al. should help other researchers to investigate this field; in particular, it shows how important it is to consider spectral flux rather than amplitude envelope in experiments that use actual music.
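Spectral flux, the feature this entry highlights, can be computed from a short-time Fourier transform as the half-wave-rectified frame-to-frame change in the magnitude spectrum. The sketch below is one plausible variant, not the study's implementation; the frame size, hop, and two-tone toy signal are illustrative.

```python
import numpy as np

def spectral_flux(x, frame=1024, hop=512):
    """Half-wave-rectified frame-to-frame change of the magnitude spectrum:
    one common operationalization of spectral flux (details vary by study)."""
    win = np.hanning(frame)
    n_frames = 1 + (len(x) - frame) // hop
    spec = np.stack([np.abs(np.fft.rfft(win * x[i * hop:i * hop + frame]))
                     for i in range(n_frames)])
    diff = np.diff(spec, axis=0)
    return np.maximum(diff, 0.0).sum(axis=1)   # one value per frame transition

# toy audio: an abrupt pitch change at 0.5 s produces a flux peak near there,
# even though the amplitude envelope is essentially flat throughout
sr = 8000
t = np.arange(sr) / sr
x = np.where(t < 0.5, np.sin(2 * np.pi * 440 * t), np.sin(2 * np.pi * 660 * t))
flux = spectral_flux(x)
```

This is why flux can capture beat-related spectro-temporal change that the envelope misses: here the envelope barely moves while flux spikes at the note change.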
Affiliation(s)
- Kristin Weineck
- Research Group "Neural and Environmental Rhythms", Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany; Goethe University Frankfurt, Institute for Cell Biology and Neuroscience, Frankfurt am Main, Germany
- Olivia Xin Wen
- Research Group "Neural and Environmental Rhythms", Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Molly J Henry
- Research Group "Neural and Environmental Rhythms", Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany; Department of Psychology, Toronto Metropolitan University, Toronto, Canada

36
Gugnowska K, Novembre G, Kohler N, Villringer A, Keller PE, Sammler D. Endogenous sources of interbrain synchrony in duetting pianists. Cereb Cortex 2022; 32:4110-4127. [PMID: 35029645] [PMCID: PMC9476614] [DOI: 10.1093/cercor/bhab469]
Abstract
When people interact with each other, their brains synchronize. However, it remains unclear whether interbrain synchrony (IBS) is functionally relevant for social interaction or stems from exposure of individual brains to identical sensorimotor information. To disentangle these views, the current dual-EEG study investigated amplitude-based IBS in pianists jointly performing duets containing a silent pause followed by a tempo change. First, we manipulated the similarity of the anticipated tempo change and measured IBS during the pause, hence, capturing the alignment of purely endogenous, temporal plans without sound or movement. Notably, right posterior gamma IBS was higher when partners planned similar tempi, it predicted whether partners' tempi matched after the pause, and it was modulated only in real, not in surrogate pairs. Second, we manipulated the familiarity with the partner's actions and measured IBS during joint performance with sound. Although sensorimotor information was similar across conditions, gamma IBS was higher when partners were unfamiliar with each other's part and had to attend more closely to the sound of the performance. These combined findings demonstrate that IBS is not merely an epiphenomenon of shared sensorimotor information but can also hinge on endogenous, cognitive processes crucial for behavioral synchrony and successful social interaction.
Affiliation(s)
- Katarzyna Gugnowska
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Research Group Neurocognition of Music and Language, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main 60322, Germany
- Giacomo Novembre
- Neuroscience of Perception and Action Lab, Italian Institute of Technology (IIT), Rome 00161, Italy
- Natalie Kohler
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Research Group Neurocognition of Music and Language, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main 60322, Germany
- Arno Villringer
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Peter E Keller
- Department of Clinical Medicine, Center for Music in the Brain, Aarhus University, Aarhus 8000, Denmark
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW 2751, Australia
- Daniela Sammler
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Research Group Neurocognition of Music and Language, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main 60322, Germany

37
Comstock DC, Balasubramaniam R. Differential motor system entrainment to auditory and visual rhythms. J Neurophysiol 2022; 128:326-335. [PMID: 35766371] [PMCID: PMC9342137] [DOI: 10.1152/jn.00432.2021]
Abstract
Perception of, and synchronization to, auditory rhythms is known to be more accurate than for flashing visual rhythms. The motor system is known to play a role in the processing of timing information for auditory rhythm perception, but it is unclear whether the motor system plays the same role for visual rhythm perception. One demonstrated component of auditory rhythm perception is neural entrainment at the frequency of the auditory rhythm. In this study, we use EEG to measure entrainment to both auditory and visual rhythms in the motor cortex while subjects either tapped in synchrony with or passively attended to the presented rhythms. To isolate activity from motor cortex, we used independent component analysis to first separate out neural sources, then selected components using a combination of component topography, dipole location, mu activation, and beta modulation. This process took advantage of the fact that tapping activity results in reduced mu power and characteristic beta modulation, which helped select motor components. Our findings suggest neural entrainment in motor components was stronger for visual rhythms than auditory rhythms and strongest during the tapping conditions for both modalities. We also find that mu power increased in response to both auditory and visual rhythms. These findings indicate that the generally greater rhythm-perception capabilities of the auditory system over the visual system may not depend entirely on neural entrainment in the motor system, but rather on how the motor system is able to use the timing information made available to it. NEW & NOTEWORTHY We investigated neural entrainment in the motor system for both auditory and visual isochronous rhythms using electroencephalography. Counter to expectations, our findings suggest stronger entrainment for visual rhythms than for auditory rhythms. Motor system activity was isolated with a novel procedure using independent component analysis as a means of blind source separation, along with known markers of mu activity from the motor system to identify motor components.
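The component-selection idea in this abstract can be sketched with a toy blind-source-separation example. Everything below is illustrative: the simulated three-channel mixture, scikit-learn's FastICA as the ICA implementation, and the reduction of the paper's multi-criterion selection (topography, dipole location, mu activation, beta modulation) to a single relative mu-band-power score.

```python
import numpy as np
from sklearn.decomposition import FastICA

# simulate 3 "EEG channels" mixing a 10 Hz mu-like source, a 2 Hz source, and noise
rng = np.random.default_rng(1)
sr, dur = 250, 8
t = np.arange(sr * dur) / sr
S = np.c_[np.sin(2 * np.pi * 10 * t),
          np.sin(2 * np.pi * 2 * t + 1.0),
          rng.normal(size=t.size)]
A = rng.normal(size=(3, 3))          # hypothetical mixing matrix
X = S @ A.T                          # channels = mixed sources

# unmix, then pick the "motor" component as the one with the largest
# relative mu-band (8-13 Hz) power
ica = FastICA(n_components=3, random_state=0, max_iter=1000)
comps = ica.fit_transform(X)

def relative_band_power(x, sr, lo, hi):
    pw = np.abs(np.fft.rfft(x)) ** 2
    f = np.fft.rfftfreq(len(x), 1 / sr)
    return pw[(f >= lo) & (f <= hi)].sum() / pw.sum()

mu_scores = [relative_band_power(comps[:, i], sr, 8, 13) for i in range(3)]
motor_idx = int(np.argmax(mu_scores))
```

In real data the mu score alone would not suffice, which is why the paper combines it with topographic and dipole criteria.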
Affiliation(s)
- Daniel C Comstock
- Center for Mind and Brain, University of California, Davis, California; Cognitive and Information Sciences, University of California, Merced, California

38
Hervé E, Mento G, Desnous B, François C. Challenges and new perspectives of developmental cognitive EEG studies. Neuroimage 2022; 260:119508. [PMID: 35882267] [DOI: 10.1016/j.neuroimage.2022.119508]
Abstract
Despite shared procedures with adults, electroencephalography (EEG) in early development presents many specificities that need to be considered for good-quality data collection. In this paper, we provide an overview of the most representative early cognitive developmental EEG studies, focusing on the specificities of this neuroimaging technique in young participants, such as attrition and artifacts. We also summarize the most representative results in developmental EEG research obtained in the time and time-frequency domains, as well as those obtained with more advanced signal-processing methods. Finally, we briefly introduce three recent standardized pipelines that will help promote replicability and comparability across experiments and ages. While this paper does not claim to be exhaustive, it aims to give a sufficiently large overview of the challenges and solutions available for conducting robust cognitive developmental EEG studies.
Affiliation(s)
- Estelle Hervé
- CNRS, LPL, Aix-Marseille University, 5 Avenue Pasteur, Aix-en-Provence 13100, France
- Giovanni Mento
- Department of General Psychology, University of Padova, Padova 35131, Italy; Padua Neuroscience Center (PNC), University of Padova, Padova 35131, Italy
- Béatrice Desnous
- APHM, Reference Center for Rare Epilepsies, Timone Children Hospital, Aix-Marseille University, Marseille 13005, France; Inserm, INS, Aix-Marseille University, Marseille 13005, France
- Clément François
- CNRS, LPL, Aix-Marseille University, 5 Avenue Pasteur, Aix-en-Provence 13100, France

39
Kabdebon C, Fló A, de Heering A, Aslin R. The power of rhythms: how steady-state evoked responses reveal early neurocognitive development. Neuroimage 2022; 254:119150. [PMID: 35351649] [PMCID: PMC9294992] [DOI: 10.1016/j.neuroimage.2022.119150]
Abstract
Electroencephalography (EEG) is a non-invasive and painless recording of cerebral activity, particularly well-suited for studying young infants, allowing the inspection of cerebral responses in a constellation of different ways. Of particular interest for developmental cognitive neuroscientists is the use of rhythmic stimulation, and the analysis of steady-state evoked potentials (SS-EPs) - an approach also known as frequency tagging. In this paper we rely on the existing SS-EP early developmental literature to illustrate the important advantages of SS-EPs for studying the developing brain. We argue that (1) the technique is both objective and predictive: the response is expected at the stimulation frequency (and/or higher harmonics), (2) its high spectral specificity makes the computed responses particularly robust to artifacts, and (3) the technique allows for short and efficient recordings, compatible with infants' limited attentional spans. We additionally provide an overview of some recent inspiring use of the SS-EP technique in adult research, in order to argue that (4) the SS-EP approach can be implemented creatively to target a wide range of cognitive and neural processes. For all these reasons, we expect SS-EPs to play an increasing role in the understanding of early cognitive processes. Finally, we provide practical guidelines for implementing and analyzing SS-EP studies.
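The objectivity and spectral specificity claimed for SS-EPs come from evaluating power exactly at the stimulation frequency and its harmonics against neighboring frequency bins. Below is a minimal sketch of that computation; the 6 Hz tag, the number of harmonics, and the choice of noise bins are illustrative assumptions rather than recommendations from the paper.

```python
import numpy as np

def tag_snr(sig, sr, f_tag, n_harm=3, neighbors=5):
    """SS-EP signal-to-noise ratio: power at the tagged frequency (and its
    harmonics) divided by mean power in surrounding bins. Minimal sketch;
    real pipelines average over epochs and electrodes first."""
    pw = np.abs(np.fft.rfft(sig)) ** 2
    n = len(sig)
    snrs = []
    for h in range(1, n_harm + 1):
        k = int(round(h * f_tag * n / sr))   # bin index of the h-th harmonic
        # noise estimate: nearby bins, excluding the immediately adjacent ones
        noise = np.r_[pw[k - neighbors:k - 1], pw[k + 2:k + neighbors + 1]].mean()
        snrs.append(pw[k] / noise)
    return np.array(snrs)

# simulate 20 s of EEG-like noise carrying a 6 Hz tagged response
rng = np.random.default_rng(0)
sr, dur, f_tag = 250, 20, 6.0
t = np.arange(sr * dur) / sr
eeg = rng.normal(size=t.size) + 0.5 * np.sin(2 * np.pi * f_tag * t)
snr = tag_snr(eeg, sr, f_tag)
```

Because the response is confined to a few known bins, broadband artifacts inflate signal and noise estimates alike, which is the robustness property the abstract emphasizes.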
Affiliation(s)
- Claire Kabdebon
- Laboratoire de Sciences Cognitives et Psycholinguistique, Département d'études cognitives, ENS, EHESS, CNRS, PSL University, Paris, France; Haskins Laboratories, New Haven, CT, USA
- Ana Fló
- Cognitive Neuroimaging Unit, CNRS ERL 9003, INSERM U992, CEA, Université Paris-Saclay, NeuroSpin Center, Gif/Yvette, France
- Adélaïde de Heering
- Center for Research in Cognition & Neuroscience (CRCN), Université libre de Bruxelles (ULB), Brussels, Belgium
- Richard Aslin
- Haskins Laboratories, New Haven, CT, USA; Department of Psychology, Yale University, New Haven, CT, USA

40
Criscuolo A, Schwartze M, Kotz SA. Cognition through the lens of a body–brain dynamic system. Trends Neurosci 2022; 45:667-677. [DOI: 10.1016/j.tins.2022.06.004]
42
Lin WM, Oetringer DA, Bakker‐Marshall I, Emmerzaal J, Wilsch A, ElShafei HA, Rassi E, Haegens S. No behavioural evidence for rhythmic facilitation of perceptual discrimination. Eur J Neurosci 2022; 55:3352-3364. [PMID: 33772897] [PMCID: PMC9540985] [DOI: 10.1111/ejn.15208]
Abstract
It has been hypothesized that internal oscillations can synchronize (i.e., entrain) to external environmental rhythms, thereby facilitating perception and behaviour. To date, evidence for the link between the phase of neural oscillations and behaviour has been scarce and contradictory; moreover, it remains an open question whether the brain can use this tentative mechanism for active temporal prediction. In our present study, we conducted a series of auditory pitch discrimination tasks with 181 healthy participants in an effort to shed light on the proposed behavioural benefits of rhythmic cueing and entrainment. In the three versions of our task, we observed no perceptual benefit of purported entrainment: targets occurring in-phase with a rhythmic cue provided no perceptual benefits in terms of discrimination accuracy or reaction time when compared with targets occurring out-of-phase or targets occurring randomly, nor did we find performance differences for targets preceded by rhythmic versus random cues. However, we found a surprising effect of cueing frequency on reaction time, in which participants showed faster responses to cue rhythms presented at higher frequencies. We therefore provide no evidence of entrainment, but instead a tentative effect of covert active sensing in which a faster external rhythm leads to a faster communication rate between motor and sensory cortices, allowing for sensory inputs to be sampled earlier in time.
Affiliation(s)
- Wy Ming Lin
- Graduate Training Centre of Neuroscience, University of Tübingen, Tübingen, Germany; Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands
- Djamari A. Oetringer
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands
- Iske Bakker‐Marshall
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands
- Jill Emmerzaal
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands
- Anna Wilsch
- Department of Psychology, New York University, New York, NY, USA
- Hesham A. ElShafei
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands
- Elie Rassi
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands
- Saskia Haegens
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands; Department of Psychiatry, Columbia University, New York, NY, USA; Division of Systems Neuroscience, New York State Psychiatric Institute, New York, NY, USA

43
Tichko P, Kim JC, Large E, Loui P. Integrating music-based interventions with Gamma-frequency stimulation: Implications for healthy ageing. Eur J Neurosci 2022; 55:3303-3323. [PMID: 33236353] [PMCID: PMC9899516] [DOI: 10.1111/ejn.15059]
Abstract
In recent years, music-based interventions (MBIs) have risen in popularity as a non-invasive, sustainable form of care for treating dementia-related disorders, such as Mild Cognitive Impairment (MCI) and Alzheimer's disease (AD). Despite their clinical potential, evidence regarding the efficacy of MBIs on patient outcomes is mixed. Recently, a line of related research has begun to investigate the clinical impact of non-invasive Gamma-frequency (e.g., 40 Hz) sensory stimulation on dementia. Current work, using non-human-animal models of AD, suggests that non-invasive Gamma-frequency stimulation can remediate multiple pathophysiologies of dementia at the molecular, cellular and neural-systems scales, and, importantly, improve cognitive functioning. These findings suggest that the efficacy of MBIs could, in theory, be enhanced by incorporating Gamma-frequency stimulation into current MBI protocols. In the current review, we propose a novel clinical framework for non-invasively treating dementia-related disorders that combines previous MBIs with current approaches employing Gamma-frequency sensory stimulation. We theorize that combining MBIs with Gamma-frequency stimulation could increase the therapeutic power of MBIs by simultaneously targeting multiple biomarkers of dementia, restoring neural activity that underlies learning and memory (e.g., Gamma-frequency neural activity, Theta-Gamma coupling), and actively engaging auditory and reward networks in the brain to promote behavioural change.
Affiliation(s)
- Parker Tichko
- Department of Music, Northeastern University, Boston, MA, USA
- Ji Chul Kim
- Perception, Action, Cognition (PAC) Division, Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA
- Edward Large
- Perception, Action, Cognition (PAC) Division, Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA; Center for the Ecological Study of Perception & Action (CESPA), Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA; Department of Physics, University of Connecticut, Storrs, CT, USA
- Psyche Loui
- Department of Music, Northeastern University, Boston, MA, USA

44
Samiee S, Vuvan D, Florin E, Albouy P, Peretz I, Baillet S. Cross-Frequency Brain Network Dynamics Support Pitch Change Detection. J Neurosci 2022; 42:3823-3835. [PMID: 35351829] [PMCID: PMC9087716] [DOI: 10.1523/jneurosci.0630-21.2022]
Abstract
Processing auditory sequences involves multiple brain networks and is crucial to complex perception associated with music appreciation and speech comprehension. We used time-resolved cortical imaging in a pitch change detection task to detail the underlying nature of human brain network activity, at the rapid time scales of neurophysiology. In response to tone sequence presentation to the participants, we observed slow inter-regional signaling at the pace of tone presentations (2-4 Hz) that was directed from auditory cortex toward both inferior frontal and motor cortices. Symmetrically, motor cortex manifested directed influence onto auditory and inferior frontal cortices via bursts of faster (15-35 Hz) activity. These bursts occurred precisely at the expected latencies of each tone in a sequence. This expression of interdependency between slow/fast neurophysiological activity yielded a form of local cross-frequency phase-amplitude coupling in auditory cortex, whose strength varied dynamically and peaked when pitch changes were anticipated. We clarified the mechanistic relevance of these observations in relation to behavior by including a group of individuals afflicted by congenital amusia, as a model of altered function in processing sound sequences. In amusia, we found a depression of inter-regional slow signaling toward motor and inferior frontal cortices, and a chronic overexpression of slow/fast phase-amplitude coupling in auditory cortex. These observations are compatible with a misalignment between the respective neurophysiological mechanisms of stimulus encoding and internal predictive signaling, which was absent in controls. In summary, our study provides a functional and mechanistic account of neurophysiological activity for predictive, sequential timing of auditory inputs. SIGNIFICANCE STATEMENT Auditory sequences are processed by extensive brain networks, involving multiple systems. In particular, fronto-temporal brain connections participate in the encoding of sequential auditory events, but so far their study has been limited to static depictions. This study details the nature of oscillatory brain activity involved in these inter-regional interactions in human participants. It demonstrates how directed, polyrhythmic oscillatory interactions between auditory and motor cortical regions provide a functional account for predictive timing of incoming items in an auditory sequence. In addition, we show the functional relevance of these observations in relation to behavior, with data from both normal-hearing participants and a rare cohort of individuals afflicted by congenital amusia, which we considered here as a model of altered function in processing sound sequences.
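The cross-frequency phase-amplitude coupling described here is commonly quantified with a mean-vector-length index: phase from the slow band, amplitude from the fast band, both via the Hilbert transform. The sketch below is a generic version of that computation, not the authors' pipeline; band edges, filter order, and the toy coupled signal are illustrative.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def pac_mvl(sig, sr, f_phase, f_amp):
    """Mean-vector-length estimate of phase-amplitude coupling:
    |mean(A_fast(t) * exp(i*phi_slow(t)))|. Sketch only; published analyses
    add surrogate-based normalization and careful band selection."""
    def band(x, lo, hi):
        sos = butter(4, [lo, hi], btype="bandpass", output="sos", fs=sr)
        return sosfiltfilt(sos, x)
    phi = np.angle(hilbert(band(sig, *f_phase)))   # phase of the slow band
    amp = np.abs(hilbert(band(sig, *f_amp)))       # amplitude of the fast band
    return np.abs(np.mean(amp * np.exp(1j * phi)))

# toy signal: 25 Hz amplitude rises and falls with the phase of a 3 Hz rhythm
sr = 500
t = np.arange(10 * sr) / sr
slow = np.sin(2 * np.pi * 3 * t)
coupled = slow + (1 + slow) * 0.5 * np.sin(2 * np.pi * 25 * t)
uncoupled = slow + 0.5 * np.sin(2 * np.pi * 25 * t)
mi_coupled = pac_mvl(coupled, sr, (2, 4), (20, 30))
mi_uncoupled = pac_mvl(uncoupled, sr, (2, 4), (20, 30))
```

A raw mean-vector length depends on overall amplitude, which is why real analyses normalize it against phase-shuffled surrogates before comparing conditions or groups.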
Affiliation(s)
- Soheila Samiee
- McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montreal, Quebec H3A 2B4, Canada
- Mila, Quebec AI Institute, Montreal, Quebec H2S 3H1, Canada
- Dominique Vuvan
- International Laboratory for Brain, Music, and Sound Research, University of Montreal, Montreal, Quebec H3C 3J7, Canada
- Psychology Department, Skidmore College, Saratoga Springs, New York 12866
- Esther Florin
- McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montreal, Quebec H3A 2B4, Canada
- Institute of Clinical Neuroscience and Medical Psychology, Medical Faculty, Heinrich Heine University, Düsseldorf 40225, Germany
- Philippe Albouy
- McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montreal, Quebec H3A 2B4, Canada
- International Laboratory for Brain, Music, and Sound Research, University of Montreal, Montreal, Quebec H3C 3J7, Canada
- Psychology Department, CERVO Brain Research Center, Laval University, Montreal, Quebec G1V 0A6, Canada
- Isabelle Peretz
- International Laboratory for Brain, Music, and Sound Research, University of Montreal, Montreal, Quebec H3C 3J7, Canada
- Sylvain Baillet
- McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montreal, Quebec H3A 2B4, Canada

45
Young A, Robbins I, Shelat S. From Micro to Macro: The Combination of Consciousness. Front Psychol 2022; 13:755465. [PMID: 35432082] [PMCID: PMC9008346] [DOI: 10.3389/fpsyg.2022.755465]
Abstract
Crick and Koch’s 1990 “neurobiological theory of consciousness” sparked the race for the physical correlates of subjective experience. 30 years later, cognitive sciences trend toward consideration of the brain’s electromagnetic field as the primary seat of consciousness, the “to be” of the individual. Recent advancements in laboratory tools have preceded an influx of studies reporting a synchronization between the neuronally generated EM fields of interacting individuals. An embodied and enactive neuroscientific approach has gained traction in the wake of these findings wherein consciousness and cognition are theorized to be regulated and distributed beyond the individual. We approach this frontier to extend the implications of person-to-person synchrony to propose a process of combination whereby coupled individual agents merge into a hierarchical cognitive system to which they are subsidiary. Such is to say, the complex mammalian consciousness humans possess may not be the tip of the iceberg, but another step in a succeeding staircase. To this end, the axioms and conjectures of General Resonance Theory are utilized to describe this phenomenon of interpersonal resonant combination. Our proposal describes a coupled system of spatially distributed EM fields that are synchronized through recurrent, entraining behavioral interactions. The system, having achieved sufficient synchronization, enjoys an optimization of information flow that alters the conscious states of its merging agents and enhances group performance capabilities. In the race for the neurobiological correlates of subjective experience, we attempt the first steps in the journey toward defining the physical basis of “group consciousness.” The establishment of a concrete account of the combination of consciousness at a scale superseding individual human consciousness remains speculation, but our suggested approach provides a framework for empirical testing of these possibilities.
Affiliation(s)
- Asa Young
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA, United States
- Isabella Robbins
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA, United States
- Shivang Shelat
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA, United States
46
Tichko P, Kim JC, Large EW. A Dynamical, Radically Embodied, and Ecological Theory of Rhythm Development. Front Psychol 2022; 13:653696. [PMID: 35282203] [PMCID: PMC8907845] [DOI: 10.3389/fpsyg.2022.653696] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 01/15/2021] [Accepted: 01/03/2022] [Indexed: 11/13/2022] Open
Abstract
Musical rhythm abilities, the perception of and coordinated action to the rhythmic structure of music, undergo remarkable change over human development. In the current paper, we introduce a theoretical framework for modeling the development of musical rhythm. The framework, based on Neural Resonance Theory (NRT), explains rhythm development in terms of resonance and attunement, which are formalized using a general theory that includes non-linear resonance and Hebbian plasticity. First, we review the developmental literature on musical rhythm, highlighting several developmental processes related to rhythm perception and action. Next, we offer an exposition of Neural Resonance Theory and argue that elements of the theory are consistent with dynamical, radically embodied (i.e., non-representational), and ecological approaches to cognition and development. We then discuss how dynamical models, implemented as self-organizing networks of neural oscillators with Hebbian plasticity, predict key features of music development. We conclude by illustrating how the notions of dynamical embodiment, resonance, and attunement provide a conceptual language for characterizing musical rhythm development and, when formalized in physiologically informed dynamical models, provide a theoretical framework for generating testable empirical predictions about musical rhythm development: the kinds of native and non-native rhythmic structures infants and children can learn, steady-state evoked potentials to native and non-native musical rhythms, and the effects of short-term (e.g., infant bouncing, infant music classes), long-term (e.g., perceptual narrowing to musical rhythm), and very-long-term (e.g., music enculturation, musical training) learning on music perception and action.
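As a concrete illustration of the dynamical language in this abstract, the following is a minimal sketch of a single Hopf-type canonical oscillator entraining to a periodic stimulus. All parameter names and values are assumptions chosen for the demo, not the published NRT models.

```python
import numpy as np

# One canonical (Hopf-type) oscillator driven by a periodic "rhythm".
# Illustrative parameters only; not the published NRT model equations.
def simulate(stim_freq=2.0, nat_freq=2.2, alpha=-1.0, beta=-1.0,
             coupling=1.5, dt=0.001, duration=20.0):
    n = int(round(duration / dt))
    t = np.arange(n) * dt
    stim = np.cos(2 * np.pi * stim_freq * t)  # periodic stimulus
    z = 0.1 + 0.0j                            # complex oscillator state
    zs = np.empty(n, dtype=complex)
    for i in range(n):
        # dz/dt = z * (alpha + i*2*pi*nat_freq + beta*|z|^2) + coupling*stim
        dz = z * (alpha + 1j * 2 * np.pi * nat_freq + beta * abs(z) ** 2) \
            + coupling * stim[i]
        z = z + dt * dz                       # forward Euler step
        zs[i] = z
    return t, zs

t, zs = simulate()
# Discard the first half (transient); the oscillator's phase then advances
# at roughly the stimulus frequency rather than its own natural frequency.
half = len(zs) // 2
phase = np.unwrap(np.angle(zs[half:]))
locked_freq = (phase[-1] - phase[0]) / (2 * np.pi * (t[-1] - t[half]))
```

With the oscillator's natural frequency at 2.2 Hz and the stimulus at 2.0 Hz, the driven phase advances at approximately the stimulus rate: this frequency pulling is the entrainment behavior that resonance-based accounts of rhythm build on.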
Affiliation(s)
- Parker Tichko
- Department of Music, Northeastern University, Boston, MA, United States
- Ji Chul Kim
- Perception, Action, Cognition (PAC) Division, Department of Psychological Sciences, University of Connecticut, Mansfield, CT, United States
- Edward W. Large
- Perception, Action, Cognition (PAC) Division, Department of Psychological Sciences, University of Connecticut, Mansfield, CT, United States
- Center for the Ecological Study of Perception and Action (CESPA), Department of Psychological Sciences, University of Connecticut, Mansfield, CT, United States
- Department of Physics, University of Connecticut, Mansfield, CT, United States
47
Marimon M, Höhle B, Langus A. Pupillary entrainment reveals individual differences in cue weighting in 9-month-old German-learning infants. Cognition 2022; 224:105054. [PMID: 35217262] [DOI: 10.1016/j.cognition.2022.105054] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Received: 01/09/2021] [Revised: 01/30/2022] [Accepted: 01/31/2022] [Indexed: 02/08/2023]
Abstract
Young infants can segment continuous speech using statistical as well as prosodic cues. Understanding how these cues interact can be informative about how infants solve the segmentation problem. Here we investigate how German-speaking adults and 9-month-old German-learning infants weigh statistical and prosodic cues when segmenting continuous speech. We measured participants' pupil size while they were familiarized with a continuous speech stream in which prosodic cues were pitted against transitional probabilities. Adults' changes in pupil size synchronized with the occurrence of prosodic words during familiarization, and the temporal alignment of these pupillary changes was predictive of their performance at test. Further, 9-month-olds as a group failed to consistently segment the familiarization stream with prosodic or statistical cues. However, the variability in the temporal alignment of pupillary changes at the word frequency showed that prosodic and statistical cues compete for dominance when infants segment continuous speech. A follow-up language development questionnaire at 40 months of age suggested that infants who had entrained to prosodic words performed better on a vocabulary task, while infants who had relied more on statistical cues performed better on grammatical tasks. Together, these results suggest that statistics and prosody may serve different roles in speech segmentation in infancy.
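The statistical cue this abstract refers to, transitional probabilities (TPs) between adjacent syllables, can be sketched as follows. The syllable inventory and the two toy "words" are invented for illustration.

```python
import random
from collections import Counter

# TP(x -> y) = count(xy) / count(x): high inside words, low at word boundaries.
def transitional_probs(syllables):
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

# Toy familiarization stream built from two hypothetical trisyllabic "words".
words = [["tu", "pi", "ro"], ["go", "la", "bu"]]
random.seed(0)  # deterministic toy stream
stream = []
for _ in range(200):
    stream.extend(random.choice(words))

tps = transitional_probs(stream)
within_tp = tps[("tu", "pi")]     # within-word transition (deterministic)
boundary_tp = tps[("ro", "go")]   # word-boundary transition (diluted)
```

A learner tracking these statistics can posit word boundaries at TP dips; in the study above, this cue was pitted against prosodic boundary cues to see which dominates.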
Affiliation(s)
- Mireia Marimon
- University of Potsdam, Cognitive Sciences, Department of Linguistics, Karl-Liebknecht-Str. 24-25, D-14476 Potsdam, Germany
- Barbara Höhle
- University of Potsdam, Cognitive Sciences, Department of Linguistics, Karl-Liebknecht-Str. 24-25, D-14476 Potsdam, Germany
- Alan Langus
- University of Potsdam, Cognitive Sciences, Department of Linguistics, Karl-Liebknecht-Str. 24-25, D-14476 Potsdam, Germany
48
Fiveash A, Burger B, Canette LH, Bedoin N, Tillmann B. When Visual Cues Do Not Help the Beat: Evidence for a Detrimental Effect of Moving Point-Light Figures on Rhythmic Priming. Front Psychol 2022; 13:807987. [PMID: 35185727] [PMCID: PMC8855071] [DOI: 10.3389/fpsyg.2022.807987] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 11/02/2021] [Accepted: 01/10/2022] [Indexed: 11/13/2022] Open
Abstract
Rhythm perception involves strong auditory-motor connections that can be enhanced with movement. However, it is unclear whether just seeing someone moving to a rhythm can enhance auditory-motor coupling, resulting in stronger entrainment. Rhythmic priming studies show that presenting regular rhythms before naturally spoken sentences can enhance grammaticality judgments compared to irregular rhythms or other baseline conditions. The current study investigated whether introducing a point-light figure moving in time with regular rhythms could enhance the rhythmic priming effect. Three experiments revealed that the addition of a visual cue did not benefit rhythmic priming in comparison to auditory conditions with a static image. In Experiment 1 (27 7–8-year-old children), grammaticality judgments were poorer after audio-visual regular rhythms (with a bouncing point-light figure) compared to auditory-only regular rhythms. In Experiments 2 (31 adults) and 3 (31 different adults), there was no difference in grammaticality judgments after audio-visual regular rhythms compared to auditory-only irregular rhythms for either a bouncing point-light figure (Experiment 2) or a swaying point-light figure (Experiment 3). Comparison of the observed performance with previous data suggested that the audio-visual component removed the regular-prime benefit. These findings suggest that the visual cues used in this study do not enhance rhythmic priming and may hinder the effect by creating a dual-task situation. In addition, individual differences on the sensory-motor and social scales of music reward influenced the effect of the visual cue. Implications for future audio-visual experiments aiming to enhance beat processing, and the importance of individual differences, are discussed.
Affiliation(s)
- Anna Fiveash
- Lyon Neuroscience Research Center, CNRS, UMR 5292, INSERM, U1028, Lyon, France
- University of Lyon 1, Lyon, France
- Birgitta Burger
- Institute for Systematic Musicology, University of Hamburg, Hamburg, Germany
- Laure-Hélène Canette
- Lyon Neuroscience Research Center, CNRS, UMR 5292, INSERM, U1028, Lyon, France
- University of Lyon 1, Lyon, France
- LEAD-CNRS UMR 5022, University of Burgundy, F-21000 Dijon, France
- Nathalie Bedoin
- Lyon Neuroscience Research Center, CNRS, UMR 5292, INSERM, U1028, Lyon, France
- University of Lyon 1, Lyon, France
- University of Lyon 2, Lyon, France
- Barbara Tillmann
- Lyon Neuroscience Research Center, CNRS, UMR 5292, INSERM, U1028, Lyon, France
- University of Lyon 1, Lyon, France
49
Mosabbir AA, Braun Janzen T, Al Shirawi M, Rotzinger S, Kennedy SH, Farzan F, Meltzer J, Bartel L. Investigating the Effects of Auditory and Vibrotactile Rhythmic Sensory Stimulation on Depression: An EEG Pilot Study. Cureus 2022; 14:e22557. [PMID: 35371676] [PMCID: PMC8958118] [DOI: 10.7759/cureus.22557] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Accepted: 02/23/2022] [Indexed: 12/18/2022] Open
Abstract
Background: Major depressive disorder (MDD) is a persistent psychiatric condition and one of the leading causes of global disease burden. In a previous study, we investigated the effects of a five-week intervention consisting of rhythmic gamma-frequency (30-70 Hz) vibroacoustic stimulation in 20 patients formally diagnosed with MDD. The findings of that study suggested a significant clinical improvement in depression symptoms as measured with the Montgomery-Asberg Depression Rating Scale (MADRS), with 37% of participants meeting the criteria for clinical response. The goal of the present research was to examine possible changes from baseline to posttreatment in resting-state electroencephalography (EEG) recordings using the same treatment protocol and to characterize basic changes in EEG related to treatment response. Materials and methods: The study sample consisted of 19 individuals aged 18-70 years with a clinical diagnosis of MDD. The participants were assessed before and after a five-week treatment period, which consisted of listening to an instrumental musical track on a vibroacoustic device delivering auditory and vibrotactile stimulation in the gamma-band range (30-70 Hz, with particular emphasis on 40 Hz). The primary outcome measures were the change in MADRS score from baseline to posttreatment and resting-state EEG. Results: Analysis comparing MADRS scores at baseline and post-intervention indicated a significant change in the severity of depression symptoms after five weeks (t = 3.9923, df = 18, p = 0.0009). The clinical response rate was 36.85%. Resting-state EEG power analysis revealed a significant increase in occipital alpha power (t = -2.149, df = 18, p = 0.04548), as well as an increase in the prefrontal gamma power of the responders (t = 2.8079, df = 13.431, p = 0.01442).
Conclusions: The results indicate that improvements in MADRS scores after rhythmic sensory stimulation (RSS) were accompanied by an increase in alpha power in the occipital region and an increase in gamma power in the prefrontal region, suggesting treatment effects on cortical activity in depression. The results of this pilot study will help inform subsequent controlled studies evaluating whether treatment response to vibroacoustic stimulation constitutes a real and replicable reduction of depressive symptoms and characterizing the underlying mechanisms.
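The baseline-versus-posttreatment comparison reported above is a paired (dependent-samples) t-test with df = n - 1. A self-contained sketch follows; the MADRS-like scores are invented for illustration and are not the study's data.

```python
import math
from statistics import mean, stdev

# Paired t-test: t = mean(differences) / (sd(differences) / sqrt(n)).
def paired_t(before, after):
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    t_stat = mean(diffs) / (stdev(diffs) / math.sqrt(n))
    return t_stat, n - 1  # t statistic and degrees of freedom

# Hypothetical scores for 10 participants (higher = more severe symptoms).
before = [34, 30, 28, 33, 29, 31, 35, 27, 32, 30]  # baseline
after = [25, 27, 20, 30, 22, 26, 28, 24, 23, 25]   # posttreatment
t_stat, df = paired_t(before, after)
# A large positive t with df = n - 1 indicates a symptom reduction, which
# would then be referred to the t distribution for a p-value.
```

In the study itself, n = 19 participants gave df = 18, matching the reported t = 3.9923, df = 18.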
Affiliation(s)
- Susan Rotzinger
- Department of Psychiatry, University Health Network, Toronto, CAN
- Sidney H Kennedy
- Centre for Depression and Suicide Studies, St. Michael's Hospital, Toronto, CAN
- Faranak Farzan
- School of Mechatronic Systems Engineering, Simon Fraser University, Surrey, CAN
- Jed Meltzer
- Rotman Research Institute, Baycrest Health Sciences, Toronto, CAN
- Lee Bartel
- Faculty of Music, University of Toronto, Toronto, CAN
50
Cavalcanti JC, Eriksson A, Barbosa PA. Multi-parametric analysis of speech timing in inter-talker identical twin pairs and cross-pair comparisons: Some forensic implications. PLoS One 2022; 17:e0262800. [PMID: 35061853] [PMCID: PMC8782339] [DOI: 10.1371/journal.pone.0262800] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/22/2020] [Accepted: 01/06/2022] [Indexed: 11/19/2022] Open
Abstract
The purpose of this study was to assess the speaker-discriminatory potential of a set of speech timing parameters while probing their suitability for forensic speaker comparison applications. The recordings comprised spontaneous dialogues between twin pairs over mobile phones, captured directly with professional headset microphones. Speaker comparisons were performed between twin speakers engaged in a dialogue (i.e., intra-twin pairs) and among all subjects (i.e., cross-twin pairs). The participants were 20 Brazilian Portuguese speakers, ten male identical twin pairs from the same dialectal area. A set of 11 speech timing parameters was extracted and analyzed, including speech rate, articulation rate, syllable duration (V-V unit), vowel duration, and pause duration. Three system performance estimates were considered for assessing the suitability of the parameters for speaker comparison purposes, namely global Cllr, EER, and AUC values. These were interpreted while also taking into consideration the analysis of effect sizes. Overall, speech rate and articulation rate were found to be the most reliable parameters, displaying the largest effect sizes for the factor "speaker" and the best system performance outcomes, namely the lowest Cllr and EER and the highest AUC values. Conversely, smaller effect sizes were found for the other parameters, which is compatible with a lower explanatory potential of speaker identity for the duration of such units and a possibly higher degree of linguistic control over their temporal variation. In addition, there was a tendency for speech timing estimates based on larger temporal intervals to present larger effect sizes and better speaker-discriminatory performance. Finally, identical twin pairs were found to be remarkably similar in their speech temporal patterns at the macro and micro levels while engaging in a dialogue, resulting in poor system discriminatory performance. Possible underlying factors for such a striking convergence in identical twins' speech timing patterns are presented and discussed.
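One of the system performance estimates named above, the equal error rate (EER), can be computed from same-speaker and different-speaker comparison scores by sweeping a decision threshold. The scores below are invented for illustration (higher score = more likely the same speaker).

```python
# EER: the operating point where the false-rejection rate (same-speaker
# comparisons rejected) equals the false-acceptance rate (different-speaker
# comparisons accepted). A simple threshold sweep over observed scores.
def equal_error_rate(same_scores, diff_scores):
    best_gap, eer = 1.0, None
    for thr in sorted(same_scores + diff_scores):
        frr = sum(s < thr for s in same_scores) / len(same_scores)
        far = sum(s >= thr for s in diff_scores) / len(diff_scores)
        if abs(frr - far) < best_gap:
            best_gap, eer = abs(frr - far), (frr + far) / 2
    return eer

same = [0.9, 0.8, 0.85, 0.7, 0.95, 0.6]  # hypothetical same-speaker scores
diff = [0.2, 0.4, 0.35, 0.5, 0.1, 0.65]  # hypothetical different-speaker scores
eer = equal_error_rate(same, diff)
```

A lower EER means better speaker discrimination; the abstract's finding of poor performance on twin pairs corresponds to heavily overlapping score distributions and thus a high EER.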
Affiliation(s)
- Julio Cesar Cavalcanti
- Department of Linguistics, Stockholm University, Stockholm, Sweden
- Institute of Language Studies, Campinas State University, Campinas, Brazil
- Anders Eriksson
- Department of Linguistics, Stockholm University, Stockholm, Sweden
- Plinio A. Barbosa
- Institute of Language Studies, Campinas State University, Campinas, Brazil