1
Chalehchaleh A, Winchester MM, Di Liberto GM. Robust assessment of the cortical encoding of word-level expectations using the temporal response function. J Neural Eng 2025; 22:016004. [PMID: 39719122] [DOI: 10.1088/1741-2552/ada30a] [Received: 04/16/2024] [Accepted: 12/24/2024]
Abstract
Objective. Speech comprehension involves detecting words and interpreting their meaning according to the preceding semantic context. This process is thought to be underpinned by a predictive neural system that uses that context to anticipate upcoming words. However, previous studies relied on evaluation metrics designed for continuous univariate sound features, overlooking the discrete and sparse nature of word-level features. This mismatch has limited effect sizes and hampered progress in understanding lexical prediction mechanisms in ecologically valid experiments. Approach. We investigate these limitations by analyzing both simulated and actual electroencephalography (EEG) signals recorded during a speech comprehension task. We then introduce two novel assessment metrics tailored to capture the neural encoding of lexical surprise, improving upon traditional evaluation approaches. Main results. The proposed metrics demonstrated effect sizes over 140% larger than those achieved with the conventional temporal response function (TRF) evaluation. These improvements were consistent across both simulated and real EEG datasets. Significance. Our findings substantially advance methods for evaluating lexical prediction in neural data, enabling more precise measurements and deeper insights into how the brain builds predictive representations during speech comprehension. These contributions open new avenues for research into predictive coding mechanisms in naturalistic language processing.
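The abstract does not specify the two proposed metrics, but the underlying problem can be illustrated in a toy simulation. The sketch below (all parameter values and the onset-restricted scoring rule are illustrative assumptions, not the authors' method) fits a ridge-regression TRF to a sparse word-level regressor and compares the conventional prediction correlation, computed over every sample, with one computed only around word onsets, where the response of interest actually lives:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 64                                   # sampling rate in Hz (illustrative)
n = fs * 240                              # 4 minutes of simulated data

# Sparse word-level regressor: impulses at word onsets, scaled by surprise.
onsets = np.sort(rng.choice(n - fs, size=100, replace=False))
stim = np.zeros(n)
stim[onsets] = rng.gamma(2.0, 1.0, size=onsets.size)

# Simulated "EEG": regressor convolved with a response kernel, plus noise.
n_lags = fs // 2                          # 0-500 ms of time lags
lags = np.arange(n_lags)
kernel = np.sin(2 * np.pi * lags / n_lags) * np.exp(-lags / (0.2 * fs))
eeg = np.convolve(stim, kernel)[:n] + rng.normal(0.0, 1.0, n)

# Ridge-regression TRF on a time-lagged design matrix.
X = np.zeros((n, n_lags))
for j in range(n_lags):
    X[j:, j] = stim[:n - j]
w = np.linalg.solve(X.T @ X + 100.0 * np.eye(n_lags), X.T @ eeg)
pred = X @ w

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

r_all = corr(eeg, pred)                   # conventional: every sample scored

# Sparsity-aware alternative: score only samples near word onsets, so the
# long stretches where the model predicts nothing do not dilute the metric.
mask = np.zeros(n, dtype=bool)
for t in onsets:
    mask[t:t + n_lags] = True
r_onsets = corr(eeg[mask], pred[mask])
```

In simulations like this one, restricting the evaluation to onset-locked windows tends to yield larger scores for genuinely encoded sparse features, which is the kind of effect-size gain the abstract reports.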
2
Marion G, Gao F, Gold BP, Di Liberto GM, Shamma S. IDyOMpy: A new Python-based model for the statistical analysis of musical expectations. J Neurosci Methods 2024; 415:110347. [PMID: 39709074] [DOI: 10.1016/j.jneumeth.2024.110347] [Received: 05/29/2024] [Revised: 11/20/2024] [Accepted: 12/12/2024]
Abstract
BACKGROUND IDyOM (Information Dynamics of Music) is the most widely used statistical model of music in the music-neuroscience community. Its output has been shown to correlate significantly with EEG (Marion, 2021), ECoG (Di Liberto, 2020), and fMRI (Cheung, 2019) recordings of human music listening. However, IDyOM is written in Lisp, a language unfamiliar to much of the neuroscience community, which makes the model hard to use and, more importantly, hard to modify. NEW METHOD IDyOMpy is a new Python re-implementation and extension of IDyOM. After training on a corpus of melodies, the model computes the information content and entropy of each note in a melody. Two new features are also introduced: probability estimation for silences and enculturation modeling. RESULTS We first describe the mathematical details of the implementation. We then extensively compare the two models and show that they generate very similar outputs. We further support the validity of IDyOMpy by using its output to replicate previous EEG and behavioral results that relied on the original Lisp version (Gold, 2019; Di Liberto, 2020; Marion, 2021). Finally, IDyOMpy reproduced the cultural distances computed between two datasets in previous studies (Pearce, 2018). COMPARISON WITH EXISTING METHODS AND CONCLUSIONS IDyOMpy replicates the behavior of IDyOM in a modern, easy-to-use language, Python, and adds new features. We believe this new version will be of great use to the music-neuroscience community.
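The two quantities IDyOM-style models output, per-note information content and entropy, are easy to illustrate. IDyOM and IDyOMpy use variable-order (PPM-like) models; the sketch below is only a minimal first-order Markov stand-in with add-one smoothing, on a made-up toy corpus, to show what the two numbers mean:

```python
import math
from collections import defaultdict

# Toy corpus of melodies as MIDI pitch sequences (illustrative data only).
corpus = [
    [60, 62, 64, 65, 67, 65, 64, 62, 60],
    [60, 64, 67, 64, 60, 62, 64, 62, 60],
    [67, 65, 64, 62, 60, 62, 64, 65, 67],
]

# First-order Markov transition counts over the observed pitch alphabet.
alphabet = sorted({p for mel in corpus for p in mel})
counts = defaultdict(lambda: defaultdict(int))
for mel in corpus:
    for prev, nxt in zip(mel, mel[1:]):
        counts[prev][nxt] += 1

def predictive_dist(prev):
    """Smoothed (add-one) distribution over the next pitch given the last."""
    total = sum(counts[prev].values()) + len(alphabet)
    return {p: (counts[prev][p] + 1) / total for p in alphabet}

def ic_and_entropy(melody):
    """Information content (-log2 p) of each note, and the entropy of the
    model's predictive distribution just before that note is heard."""
    ics, ents = [], []
    for prev, nxt in zip(melody, melody[1:]):
        dist = predictive_dist(prev)
        ics.append(-math.log2(dist[nxt]))
        ents.append(-sum(q * math.log2(q) for q in dist.values()))
    return ics, ents

ics, ents = ic_and_entropy([60, 62, 64, 65, 67])
```

Information content is the model's surprise at the note that actually occurred; entropy is the model's uncertainty before hearing it. These are the note-level regressors typically correlated with neural recordings.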
Affiliation(s)
- Guilhem Marion
- Laboratoire des Systèmes Perceptifs, Département d'Étude Cognitive, École Normale Supérieure, PSL, Paris, France; Department of Psychology, New York University, New York City, USA; Institute for Systems Research, Electrical and Computer Engineering, University of Maryland, College Park, USA.
- Fei Gao
- Department of Computational Linguistics, University of Zurich, Zurich, Switzerland
- Benjamin P Gold
- Neuroimaging & Brain Dynamics Lab, Vanderbilt University, Nashville, USA
- Giovanni M Di Liberto
- Laboratoire des Systèmes Perceptifs, Département d'Étude Cognitive, École Normale Supérieure, PSL, Paris, France; School of Computer Science and Statistics, Trinity College, The University of Dublin, Ireland; ADAPT Centre, Ireland; Trinity College Institute of Neuroscience, Dublin, Ireland
- Shihab Shamma
- Laboratoire des Systèmes Perceptifs, Département d'Étude Cognitive, École Normale Supérieure, PSL, Paris, France; Institute for Systems Research, Electrical and Computer Engineering, University of Maryland, College Park, USA
3
Quiroga Martinez DR, Fernández Rubio G, Bonetti L, Achyutuni KG, Tzovara A, Knight RT, Vuust P. Decoding reveals the neural representation of perceived and imagined musical sounds. bioRxiv: the preprint server for biology 2024:2023.08.15.553456. [PMID: 37645733] [PMCID: PMC10462096] [DOI: 10.1101/2023.08.15.553456]
Abstract
Vividly imagining a song or a melody is a skill that many people accomplish with relatively little effort. However, we are only beginning to understand how the brain represents, holds, and manipulates these musical "thoughts". Here, we decoded perceived and imagined melodies from magnetoencephalography (MEG) brain data (N = 71) to characterize their neural representation. We found that, during perception, auditory regions represent the sensory properties of individual sounds. In contrast, a widespread network including the fronto-parietal cortex, hippocampus, basal nuclei, and sensorimotor regions holds the melody as an abstract unit during both perception and imagination. Furthermore, the mental manipulation of a melody systematically changes its neural representation, reflecting volitional control of auditory images. Our work sheds light on the nature and dynamics of auditory representations, informing future research on neural decoding of auditory imagination.
Affiliation(s)
- David R. Quiroga Martinez
- Helen Wills Neuroscience Institute & Department of Psychology and Neuroscience, University of California Berkeley, Berkeley, CA
- Psychology Department, University of Copenhagen, Copenhagen, Denmark
- Gemma Fernández Rubio
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music, Aarhus, Denmark
- Leonardo Bonetti
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music, Aarhus, Denmark
- Center for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, UK
- Department of Psychiatry, University of Oxford, Oxford, UK
- Kriti G. Achyutuni
- Helen Wills Neuroscience Institute & Department of Psychology and Neuroscience, University of California Berkeley, Berkeley, CA
- Athina Tzovara
- Helen Wills Neuroscience Institute & Department of Psychology and Neuroscience, University of California Berkeley, Berkeley, CA
- Institute of Computer Science, University of Bern, Bern, Switzerland
- Center for Experimental Neurology, Sleep Wake Epilepsy Center, NeuroTec, Department of Neurology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Robert T. Knight
- Helen Wills Neuroscience Institute & Department of Psychology and Neuroscience, University of California Berkeley, Berkeley, CA
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music, Aarhus, Denmark
4
Mori K, Zatorre R. State-dependent connectivity in auditory-reward networks predicts peak pleasure experiences to music. PLoS Biol 2024; 22:e3002732. [PMID: 39133721] [PMCID: PMC11318860] [DOI: 10.1371/journal.pbio.3002732] [Received: 01/30/2024] [Accepted: 07/03/2024]
Abstract
Music can evoke pleasurable and rewarding experiences. Past studies that examined task-related brain activity revealed individual differences in musical reward sensitivity traits and linked them to interactions between the auditory and reward systems. However, state-dependent fluctuations in spontaneous neural activity in relation to music-driven rewarding experiences have not been studied. Here, we used functional MRI to examine whether the coupling of auditory-reward networks during a silent period immediately before music listening can predict the degree of musical rewarding experience of human participants (N = 49). We used machine learning models and showed that the functional connectivity between auditory and reward networks, but not others, could robustly predict subjective, physiological, and neurobiological aspects of the strong musical reward of chills. Specifically, the right auditory cortex-striatum/orbitofrontal connections predicted the reported duration of chills and the activation level of the nucleus accumbens and insula, whereas the auditory-amygdala connection was associated with psychophysiological arousal. Furthermore, the predictive model derived from the first sample of individuals generalized to an independent dataset using different music samples. The generalization was successful only for state-like, pre-listening functional connectivity but not for stable, intrinsic functional connectivity. The current study reveals the critical role of sensory-reward connectivity in pre-task brain state in modulating subsequent rewarding experience.
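The abstract does not say which machine-learning models were used, so the sketch below is only an illustration of the general analysis style on synthetic stand-in data (the ridge model, edge count, and noise levels are all assumptions): predicting a per-participant chills score from pre-listening connectivity features under cross-validation, then scoring the prediction with a correlation:

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)

# Synthetic stand-in data: one row per participant, columns are
# pre-listening functional-connectivity edges (auditory-reward network).
n_subj, n_edges = 49, 20
fc = rng.normal(size=(n_subj, n_edges))

# Simulated target: chills duration driven by a few edges plus noise.
true_w = np.zeros(n_edges)
true_w[:3] = [1.5, -1.0, 0.8]
chills = fc @ true_w + rng.normal(0.0, 0.5, n_subj)

# Cross-validated prediction: each participant's score is predicted by a
# model that never saw that participant during training.
model = RidgeCV(alphas=np.logspace(-2, 3, 20))
pred = cross_val_predict(model, fc, chills, cv=7)
r = float(np.corrcoef(chills, pred)[0, 1])
```

Testing the trained model on a fully independent dataset, as the paper does for generalization, would be the stricter version of the same check.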
Affiliation(s)
- Kazuma Mori
- Institute for Quantum Life Science, National Institutes for Quantum Science and Technology, Chiba, Japan
- Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology (NICT), Osaka, Japan
- Robert Zatorre
- Montréal Neurological Institute, McGill University, Montréal, Canada
- International Laboratory for Brain, Music and Sound Research, Montréal, Canada
- Centre for Research in Brain, Language and Music, Montréal, Canada
5
Cruyt E, De Vriendt P, De Geyter N, Van Leirsberghe J, Santens P, De Baets S, De Letter M, Vlerick P, Calders P, De Pauw R, Oostra K, Van de Velde D. The underpinning of meaningful activities by brain correlates: a systematic review. Front Psychol 2023; 14:1136754. [PMID: 37179882] [PMCID: PMC10169732] [DOI: 10.3389/fpsyg.2023.1136754] [Received: 01/03/2023] [Accepted: 03/29/2023]
Abstract
Introduction Engaging in meaningful activities contributes to health and wellbeing. Research typically identifies meaningfulness by analysing retrospective and subjective data, such as personal experiences of activities. Objectively measuring meaningful activities by registering brain activity (fNIRS, EEG, PET, fMRI) remains poorly investigated. Methods A systematic review was conducted using PubMed, Web of Science, CINAHL, and the Cochrane Library. Findings Thirty-one studies were identified that investigated the correlations between daily activities in adults, their degree of meaningfulness for the participant, and the brain areas involved. The activities could be classified according to their degree of meaningfulness, using the attributes of meaningfulness described in the literature. Eleven study activities contained all attributes, meaning that these can be assumed to be meaningful for the participant. The brain areas involved in these activities were generally related to emotional and affective processing, motivation, and reward. Conclusion Although it has been demonstrated that neural correlates of meaningful activities can be measured objectively with neurophysiological registration techniques, "meaning" as such has not yet been investigated explicitly. Further neurophysiological research on the objective monitoring of meaningful activities is recommended.
Affiliation(s)
- Ellen Cruyt
- Department of Rehabilitation Sciences, Faculty of Medicine and Health Sciences, Occupational Therapy Research Group, Physiotherapy and Speech-Language Pathology/Audiology, Ghent University, Ghent, Belgium
- Patricia De Vriendt
- Department of Rehabilitation Sciences, Faculty of Medicine and Health Sciences, Occupational Therapy Research Group, Physiotherapy and Speech-Language Pathology/Audiology, Ghent University, Ghent, Belgium
- Department of Occupational Therapy, Artevelde University of Applied Sciences, Ghent, Belgium
- Mental Health Research Group, Vrije Universiteit Brussel, Brussels, Belgium
- Frailty in Ageing Research Group, Vrije Universiteit Brussel, Brussels, Belgium
- Nele De Geyter
- Department of Rehabilitation Sciences, Faculty of Medicine and Health Sciences, Occupational Therapy Research Group, Physiotherapy and Speech-Language Pathology/Audiology, Ghent University, Ghent, Belgium
- Janne Van Leirsberghe
- Department of Rehabilitation Sciences, Faculty of Medicine and Health Sciences, Occupational Therapy Research Group, Physiotherapy and Speech-Language Pathology/Audiology, Ghent University, Ghent, Belgium
- Patrick Santens
- Department of Neurology, Ghent University Hospital, Ghent, Belgium
- Stijn De Baets
- Department of Rehabilitation Sciences, Faculty of Medicine and Health Sciences, Occupational Therapy Research Group, Physiotherapy and Speech-Language Pathology/Audiology, Ghent University, Ghent, Belgium
- Frailty in Ageing Research Group, Vrije Universiteit Brussel, Brussels, Belgium
- Miet De Letter
- Department of Rehabilitation Sciences, Faculty of Medicine and Health Sciences, Occupational Therapy Research Group, Physiotherapy and Speech-Language Pathology/Audiology, Ghent University, Ghent, Belgium
- Peter Vlerick
- Department of Work, Organization and Society, Faculty of Psychology and Educational Sciences, Ghent University, Ghent, Belgium
- Patrick Calders
- Department of Rehabilitation Sciences, Faculty of Medicine and Health Sciences, Occupational Therapy Research Group, Physiotherapy and Speech-Language Pathology/Audiology, Ghent University, Ghent, Belgium
- Robby De Pauw
- Department of Rehabilitation Sciences, Faculty of Medicine and Health Sciences, Occupational Therapy Research Group, Physiotherapy and Speech-Language Pathology/Audiology, Ghent University, Ghent, Belgium
- Lifestyle and Chronic Diseases, Department of Epidemiology and Public Health, Sciensano, Brussels, Belgium
- Kristine Oostra
- Department of Physical and Rehabilitation Medicine, Ghent University Hospital, Ghent, Belgium
- Dominique Van de Velde
- Department of Rehabilitation Sciences, Faculty of Medicine and Health Sciences, Occupational Therapy Research Group, Physiotherapy and Speech-Language Pathology/Audiology, Ghent University, Ghent, Belgium
6
Mesik J, Wojtczak M. The effects of data quantity on performance of temporal response function analyses of natural speech processing. Front Neurosci 2023; 16:963629. [PMID: 36711133] [PMCID: PMC9878558] [DOI: 10.3389/fnins.2022.963629] [Received: 06/07/2022] [Accepted: 12/26/2022]
Abstract
In recent years, temporal response function (TRF) analyses of neural activity recordings evoked by continuous naturalistic stimuli have become increasingly popular for characterizing response properties within the auditory hierarchy. However, despite this rise in TRF usage, relatively few educational resources for these tools exist. Here we use a dual-talker continuous speech paradigm to demonstrate how a key parameter of experimental design, the quantity of acquired data, influences TRF analyses fit to either individual data (subject-specific analyses), or group data (generic analyses). We show that although model prediction accuracy increases monotonically with data quantity, the amount of data required to achieve significant prediction accuracies can vary substantially based on whether the fitted model contains densely (e.g., acoustic envelope) or sparsely (e.g., lexical surprisal) spaced features, especially when the goal of the analyses is to capture the aspect of neural responses uniquely explained by specific features. Moreover, we demonstrate that generic models can exhibit high performance on small amounts of test data (2-8 min), if they are trained on a sufficiently large data set. As such, they may be particularly useful for clinical and multi-task study designs with limited recording time. Finally, we show that the regularization procedure used in fitting TRF models can interact with the quantity of data used to fit the models, with larger training quantities resulting in systematically larger TRF amplitudes. Together, demonstrations in this work should aid new users of TRF analyses, and in combination with other tools, such as piloting and power analyses, may serve as a detailed reference for choosing acquisition duration in future studies.
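The interaction the authors describe between regularization and data quantity can be reproduced in a toy simulation. The sketch below is not their pipeline; all parameters are illustrative. With the ridge parameter held fixed, a TRF estimated from more data is shrunk less toward zero, so its amplitude grows:

```python
import numpy as np

rng = np.random.default_rng(2)
fs, n_lags = 64, 32
kernel = np.exp(-np.arange(n_lags) / 8.0)     # ground-truth response shape

def fit_trf(n_samples, lam=5e3):
    """Fit a ridge-regularized TRF to n_samples of simulated data."""
    stim = rng.normal(size=n_samples)
    eeg = np.convolve(stim, kernel)[:n_samples] + rng.normal(0.0, 2.0, n_samples)
    X = np.zeros((n_samples, n_lags))          # time-lagged design matrix
    for j in range(n_lags):
        X[j:, j] = stim[:n_samples - j]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ eeg)

# Fixed regularization, increasing training duration (2, 8, 32 minutes):
# relative shrinkage lam / (n + lam) decreases, so TRF amplitude grows
# toward the true kernel's magnitude.
norms = [float(np.linalg.norm(fit_trf(m * 60 * fs))) for m in (2, 8, 32)]
```

This is one reason TRF amplitudes should only be compared across conditions or groups when the amount of training data (or the effective regularization) is matched.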
Affiliation(s)
- Juraj Mesik
- Department of Psychology, University of Minnesota, Minneapolis, MN, United States
7
Abstract
Neural decoding models can be used to decode neural representations of visual, acoustic, or semantic information. Recent studies have demonstrated neural decoders that are able to decode acoustic information from a variety of neural signal types, including electrocorticography (ECoG) and the electroencephalogram (EEG). In this study we explore how functional magnetic resonance imaging (fMRI) can be combined with EEG to develop an acoustic decoder. Specifically, we first used a joint EEG-fMRI paradigm to record brain activity while participants listened to music. We then used fMRI-informed EEG source localisation and a bidirectional long short-term memory deep learning network to first extract neural information from the EEG related to music listening and then to decode and reconstruct the individual pieces of music an individual was listening to. We further validated our decoding model by evaluating its performance on a separate dataset of EEG-only recordings. We were able to reconstruct music, via our fMRI-informed EEG source analysis approach, with a mean rank accuracy of 71.8% ([Formula: see text], [Formula: see text]). Using only EEG data, without participant-specific fMRI-informed source analysis, we were able to identify the music a participant was listening to with a mean rank accuracy of 59.2% ([Formula: see text], [Formula: see text]). This demonstrates that our decoding model can use fMRI-informed source analysis to aid EEG-based decoding and reconstruction of acoustic information from brain activity, and takes a step towards building EEG-based neural decoders for other acoustic, visual, or semantic information domains.
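The abstract's headline numbers are mean rank accuracies, where 50% is chance. The exact formulation used in the study is not given here; the sketch below shows a common version on synthetic stand-in data (the signals and noise level are assumptions): each reconstruction is correlated with every candidate stimulus, and the score reflects how highly the true stimulus ranks.

```python
import numpy as np

def mean_rank_accuracy(reconstructions, candidates):
    """Rank-based identification score in percent: 100 means the matching
    stimulus always ranks first, 50 is chance. reconstructions[i] is the
    decoded signal for trial i; candidates[i] is its true stimulus."""
    n = len(candidates)
    scores = []
    for i, rec in enumerate(reconstructions):
        rs = [np.corrcoef(rec, c)[0, 1] for c in candidates]
        rank = sorted(rs, reverse=True).index(rs[i])   # 0 = ranked first
        scores.append(1.0 - rank / (n - 1))
    return 100.0 * float(np.mean(scores))

# Stand-in "decoder output": the true stimuli plus noise.
rng = np.random.default_rng(3)
stims = rng.normal(size=(10, 200))
noisy = stims + rng.normal(0.0, 1.0, stims.shape)
acc = mean_rank_accuracy(noisy, stims)
```

With this formulation, 71.8% means the correct piece was, on average, ranked well above the middle of the candidate list.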
8
Di Liberto GM, Hjortkjær J, Mesgarani N. Editorial: Neural Tracking: Closing the Gap Between Neurophysiology and Translational Medicine. Front Neurosci 2022; 16:872600. [PMID: 35368278] [PMCID: PMC8966872] [DOI: 10.3389/fnins.2022.872600] [Received: 02/09/2022] [Accepted: 02/17/2022]
Affiliation(s)
- Giovanni M. Di Liberto
- School of Computer Science and Statistics, Trinity College Dublin, Dublin, Ireland
- ADAPT Centre, d-real, Trinity College Institute for Neuroscience, Dublin, Ireland
- Correspondence: Giovanni M. Di Liberto
- Jens Hjortkjær
- Hearing Systems Group, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, Denmark
- Nima Mesgarani
- Electrical Engineering Department, Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States
9
Di Liberto GM, Marion G, Shamma SA. Accurate Decoding of Imagined and Heard Melodies. Front Neurosci 2021; 15:673401. [PMID: 34421512] [PMCID: PMC8375770] [DOI: 10.3389/fnins.2021.673401] [Received: 02/27/2021] [Accepted: 06/17/2021]
Abstract
Music perception requires the human brain to process a variety of acoustic and music-related properties. Recent research used encoding models to tease apart and study the various cortical contributors to music perception. To do so, such approaches study temporal response functions that summarise the neural activity over several minutes of data. Here we tested the possibility of assessing the neural processing of individual musical units (bars) with electroencephalography (EEG). We devised a decoding methodology based on a maximum correlation metric across EEG segments (maxCorr) and used it to decode melodies from EEG, based on an experiment where professional musicians listened to and imagined four Bach melodies multiple times. We demonstrate here that accurate decoding of melodies in single subjects and at the level of individual musical units is possible, both from EEG signals recorded during listening and during imagination. Furthermore, we find that greater decoding accuracies are measured for the maxCorr method than for an envelope reconstruction approach based on backward temporal response functions (bTRFenv). These results indicate that low-frequency neural signals encode information beyond note timing, especially cortical signals below 1 Hz, which are shown to encode pitch-related information. Along with the theoretical implications of these results, we discuss the potential applications of this decoding methodology in the context of novel brain-computer interface solutions.
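The abstract describes maxCorr only as a maximum-correlation metric across EEG segments. The following is a minimal sketch of that idea on synthetic data; the waveform templates, noise level, and leave-one-out averaging are assumptions for illustration, not the paper's exact procedure. Each test segment is assigned to the melody whose averaged reference waveform it correlates with most:

```python
import numpy as np

rng = np.random.default_rng(4)
n_mel, n_trials, n_samp = 4, 8, 256

# Synthetic stand-in: each melody evokes a characteristic waveform;
# individual trials are that waveform plus noise.
templates = rng.normal(size=(n_mel, n_samp))
trials = templates[:, None, :] + rng.normal(0.0, 1.5, (n_mel, n_trials, n_samp))

correct = 0
for m in range(n_mel):
    for t in range(n_trials):
        seg = trials[m, t]
        rs = []
        for k in range(n_mel):
            # Reference for melody k = average of its trials; the test
            # segment is held out of its own melody's average.
            others = np.delete(trials[k], t, axis=0) if k == m else trials[k]
            rs.append(np.corrcoef(seg, others.mean(axis=0))[0, 1])
        correct += int(np.argmax(rs) == m)       # max-correlation decision

accuracy = correct / (n_mel * n_trials)          # chance = 1 / n_mel = 25%
```

Unlike backward-TRF envelope reconstruction, this kind of classifier needs no explicit stimulus-reconstruction step, which may be one reason it can achieve higher single-unit accuracies.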
Affiliation(s)
- Giovanni M Di Liberto
- Laboratoire des Systèmes Perceptifs, CNRS, Paris, France; Ecole Normale Supérieure, PSL University, Paris, France; Department of Mechanical, Manufacturing and Biomedical Engineering, Trinity Centre for Biomedical Engineering, Trinity College, Trinity Institute of Neuroscience, The University of Dublin, Dublin, Ireland; Centre for Biomedical Engineering, School of Electrical and Electronic Engineering and UCD University College Dublin, Dublin, Ireland
- Guilhem Marion
- Laboratoire des Systèmes Perceptifs, CNRS, Paris, France
- Shihab A Shamma
- Laboratoire des Systèmes Perceptifs, CNRS, Paris, France; Institute for Systems Research, Electrical and Computer Engineering, University of Maryland, College Park, College Park, MD, United States
10
The Music of Silence: Part II: Music Listening Induces Imagery Responses. J Neurosci 2021; 41:7449-7460. [PMID: 34341154] [DOI: 10.1523/jneurosci.0184-21.2021] [Received: 01/22/2021] [Revised: 06/22/2021] [Accepted: 06/24/2021]
Abstract
During music listening, humans routinely acquire the regularities of the acoustic sequences and use them to anticipate and interpret the ongoing melody. Specifically, in line with this predictive framework, it is thought that brain responses during such listening reflect a comparison between the bottom-up sensory responses and top-down prediction signals generated by an internal model that embodies the music exposure and expectations of the listener. To attain a clear view of these predictive responses, previous work has eliminated the sensory inputs by inserting artificial silences (or sound omissions) that leave behind only the corresponding predictions of the thwarted expectations. Here, we demonstrate a new alternative approach in which we decode the predictive electroencephalography (EEG) responses to the silent intervals that are naturally interspersed within the music. We did this as participants (experiment 1, 20 participants, 10 female; experiment 2, 21 participants, 6 female) listened to or imagined Bach piano melodies. Prediction signals were quantified and assessed via a computational model of the melodic structure of the music and were shown to exhibit the same response characteristics when measured during listening or imagining. These include an inverted polarity for both silence and imagined responses relative to listening, as well as response magnitude modulations that precisely reflect the expectations of notes and silences in both listening and imagery conditions. These findings therefore provide a unifying view that links results from many previous paradigms, including omission reactions and the expectation modulation of sensory responses, all in the context of naturalistic music listening. SIGNIFICANCE STATEMENT Music perception depends on our ability to learn and detect melodic structures. It has been suggested that our brain does so by actively predicting upcoming music notes, a process inducing instantaneous neural responses as the music confronts these expectations. Here, we studied this prediction process using EEG recorded while participants listened to and imagined Bach melodies. Specifically, we examined neural signals during the ubiquitous musical pauses (or silent intervals) in a music stream and analyzed them in contrast to the imagery responses. We find that imagined predictive responses are routinely co-opted during ongoing music listening. These conclusions are revealed by a new paradigm using listening and imagery of naturalistic melodies.