1
Fakhar K, Dixit S, Hadaeghi F, Kording KP, Hilgetag CC. Downstream network transformations dissociate neural activity from causal functional contributions. Sci Rep 2024; 14:2103. PMID: 38267481; PMCID: PMC10808222; DOI: 10.1038/s41598-024-52423-7.
Abstract
Neuroscientists rely on distributed spatio-temporal patterns of neural activity to understand how neural units contribute to cognitive functions and behavior. However, the extent to which neural activity reliably indicates a unit's causal contribution to the behavior is not well understood. To address this issue, we provide a systematic multi-site perturbation framework that captures time-varying causal contributions of elements to a collectively produced outcome. Applying our framework to intuitive toy examples and artificial neural networks revealed that recorded activity patterns of neural elements may not be generally informative of their causal contribution due to activity transformations within a network. Overall, our findings emphasize the limitations of inferring causal mechanisms from neural activities and offer a rigorous lesioning framework for elucidating causal neural contributions.
Affiliation(s)
- Kayson Fakhar: Institute of Computational Neuroscience, University Medical Center Eppendorf, Hamburg University, Hamburg Center of Neuroscience, Hamburg, Germany
- Shrey Dixit: Institute of Computational Neuroscience, University Medical Center Eppendorf, Hamburg University, Hamburg Center of Neuroscience, Hamburg, Germany
- Fatemeh Hadaeghi: Institute of Computational Neuroscience, University Medical Center Eppendorf, Hamburg University, Hamburg Center of Neuroscience, Hamburg, Germany
- Konrad P Kording: Departments of Bioengineering and Neuroscience, University of Pennsylvania, Philadelphia, PA, USA; Learning in Machines & Brains, CIFAR, Toronto, ON, Canada
- Claus C Hilgetag: Institute of Computational Neuroscience, University Medical Center Eppendorf, Hamburg University, Hamburg Center of Neuroscience, Hamburg, Germany; Department of Health Sciences, Boston University, Boston, MA, USA
2
Fakhar K, Dixit S, Hadaeghi F, Kording KP, Hilgetag CC. When Neural Activity Fails to Reveal Causal Contributions. bioRxiv [preprint] 2023:2023.06.06.543895. PMID: 37333375; PMCID: PMC10274733; DOI: 10.1101/2023.06.06.543895.
Abstract
Neuroscientists rely on distributed spatio-temporal patterns of neural activity to understand how neural units contribute to cognitive functions and behavior. However, the extent to which neural activity reliably indicates a unit's causal contribution to the behavior is not well understood. To address this issue, we provide a systematic multi-site perturbation framework that captures time-varying causal contributions of elements to a collectively produced outcome. Applying our framework to intuitive toy examples and artificial neuronal networks revealed that recorded activity patterns of neural elements may not be generally informative of their causal contribution due to activity transformations within a network. Overall, our findings emphasize the limitations of inferring causal mechanisms from neural activities and offer a rigorous lesioning framework for elucidating causal neural contributions.
Affiliation(s)
- Kayson Fakhar: Institute of Computational Neuroscience, University Medical Center Eppendorf, Hamburg University, Hamburg Center of Neuroscience, Germany
- Shrey Dixit: Institute of Computational Neuroscience, University Medical Center Eppendorf, Hamburg University, Hamburg Center of Neuroscience, Germany
- Fatemeh Hadaeghi: Institute of Computational Neuroscience, University Medical Center Eppendorf, Hamburg University, Hamburg Center of Neuroscience, Germany
- Konrad P. Kording: Departments of Bioengineering and Neuroscience, University of Pennsylvania, Philadelphia, PA, USA; Learning in Machines & Brains, CIFAR, Toronto, ON, Canada
- Claus C. Hilgetag: Institute of Computational Neuroscience, University Medical Center Eppendorf, Hamburg University, Hamburg Center of Neuroscience, Germany; Department of Health Sciences, Boston University, Boston, MA, USA
3
Renvall H, Seol J, Tuominen R, Sorger B, Riecke L, Salmelin R. Selective auditory attention within naturalistic scenes modulates reactivity to speech sounds. Eur J Neurosci 2021; 54:7626-7641. PMID: 34697833; PMCID: PMC9298413; DOI: 10.1111/ejn.15504.
Abstract
Rapid recognition and categorization of sounds are essential for humans and animals alike, both for understanding and reacting to our surroundings and for daily communication and social interaction. For humans, perception of speech sounds is of crucial importance. In real life, this task is complicated by the presence of a multitude of meaningful non-speech sounds. The present behavioural, magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) study set out to address how attention to speech versus attention to natural non-speech sounds within complex auditory scenes influences cortical processing. The stimuli were superimpositions of spoken words and environmental sounds, with parametric variation of the speech-to-environmental sound intensity ratio. The participants' task was to detect a repetition in either the speech or the environmental sound. We found that, specifically when participants attended to speech within the superimposed stimuli, higher speech-to-environmental sound ratios resulted in shorter sustained MEG responses, stronger BOLD fMRI signals (especially in the left supratemporal auditory cortex), and improved behavioural performance. No such effects of speech-to-environmental sound ratio were observed when participants attended to the environmental sound part within the exact same stimuli. These findings suggest stronger saliency of speech compared with other meaningful sounds during processing of natural auditory scenes, likely linked to speech-specific top-down and bottom-up mechanisms activated during speech perception that are needed for tracking speech in real-life-like auditory environments.
Affiliation(s)
- Hanna Renvall: Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland; Aalto NeuroImaging, Aalto University, Espoo, Finland; BioMag Laboratory, HUS Diagnostic Center, Helsinki University Hospital, University of Helsinki and Aalto University School of Science, Helsinki, Finland
- Jaeho Seol: Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland; Aalto NeuroImaging, Aalto University, Espoo, Finland
- Riku Tuominen: Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland; Aalto NeuroImaging, Aalto University, Espoo, Finland
- Bettina Sorger: Department of Cognitive Neuroscience, Maastricht University, Maastricht, The Netherlands
- Lars Riecke: Department of Cognitive Neuroscience, Maastricht University, Maastricht, The Netherlands
- Riitta Salmelin: Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland; Aalto NeuroImaging, Aalto University, Espoo, Finland
4
Boos M, Lücke J, Rieger JW. Generalizable dimensions of human cortical auditory processing of speech in natural soundscapes: A data-driven ultra high field fMRI approach. Neuroimage 2021; 237:118106. PMID: 33991696; DOI: 10.1016/j.neuroimage.2021.118106.
Abstract
Speech comprehension in natural soundscapes rests on the ability of the auditory system to extract speech information from a complex acoustic signal with overlapping contributions from many sound sources. Here we reveal the canonical processing of speech in natural soundscapes on multiple scales, using data-driven modeling approaches to characterize sounds and to analyze ultra-high-field fMRI recorded while participants listened to the audio soundtrack of a movie. We show that at the functional level the neuronal processing of speech in natural soundscapes can be surprisingly low dimensional in the human cortex, highlighting the functional efficiency of the auditory system for a seemingly complex task. In particular, we find that a model comprising three functional dimensions of auditory processing in the temporal lobes is shared across participants' fMRI activity. We further demonstrate that the three functional dimensions are implemented in anatomically overlapping networks that process different aspects of speech in natural soundscapes: one is most sensitive to complex auditory features present in speech, another to complex auditory features and fast temporal modulations that are not specific to speech, and the third mainly codes sound level. These results were derived with few a priori assumptions and provide a detailed and computationally reproducible account of the cortical activity in the temporal lobe elicited by the processing of speech in natural soundscapes.
Affiliation(s)
- Moritz Boos: Applied Neurocognitive Psychology Lab, University of Oldenburg, Oldenburg, Germany; Cluster of Excellence "Hearing4all", University of Oldenburg, Oldenburg, Germany
- Jörg Lücke: Machine Learning Division, University of Oldenburg, Oldenburg, Germany; Cluster of Excellence "Hearing4all", University of Oldenburg, Oldenburg, Germany
- Jochem W Rieger: Applied Neurocognitive Psychology Lab, University of Oldenburg, Oldenburg, Germany; Cluster of Excellence "Hearing4all", University of Oldenburg, Oldenburg, Germany
5
Tremblay P, Brisson V, Deschamps I. Brain aging and speech perception: Effects of background noise and talker variability. Neuroimage 2020; 227:117675. PMID: 33359849; DOI: 10.1016/j.neuroimage.2020.117675.
Abstract
Speech perception can be challenging, especially for older adults. Despite the importance of speech perception in social interactions, the mechanisms underlying these difficulties remain unclear and treatment options are scarce. While several studies have suggested that decline within cortical auditory regions may be a hallmark of these difficulties, a growing number of studies have reported decline in regions beyond the auditory processing network, including regions involved in speech processing and executive control, suggesting a potentially diffuse underlying neural disruption, though no consensus exists regarding the underlying dysfunctions. To address this issue, we conducted two experiments in which we investigated age differences in speech perception when background noise and talker variability are manipulated, two factors known to be detrimental to speech perception. In Experiment 1, we examined the relationship between speech perception, hearing and auditory attention in 88 healthy participants aged 19 to 87 years. In Experiment 2, we examined cortical thickness and BOLD signal using magnetic resonance imaging (MRI) and related these measures to speech perception performance using a simple mediation approach in 32 participants from Experiment 1. Our results show that, even after accounting for hearing thresholds and two measures of auditory attention, speech perception significantly declined with age. Age-related decline in speech perception in noise was associated with thinner cortex in auditory and speech processing regions (including the superior temporal cortex, ventral premotor cortex and inferior frontal gyrus) as well as in regions involved in executive control (including the dorsal anterior insula, the anterior cingulate cortex and medial frontal cortex). Further, our results show that speech perception performance was associated with a reduced brain response in the right superior temporal cortex in older compared with younger adults, and with an increased response to noise in the left anterior temporal cortex in older adults. Talker variability was not associated with different activation patterns in older compared with younger adults. Together, these results support the notion of a diffuse rather than a focal dysfunction underlying speech-perception-in-noise difficulties in older adults.
Affiliation(s)
- Pascale Tremblay: CERVO Brain Research Center, Québec City, QC, Canada; Université Laval, Département de réadaptation, Québec City, QC, Canada
- Valérie Brisson: CERVO Brain Research Center, Québec City, QC, Canada; Université Laval, Département de réadaptation, Québec City, QC, Canada
6
Zuk NJ, Teoh ES, Lalor EC. EEG-based classification of natural sounds reveals specialized responses to speech and music. Neuroimage 2020; 210:116558. DOI: 10.1016/j.neuroimage.2020.116558.
7
Rampinini AC, Handjaras G, Leo A, Cecchetti L, Betta M, Marotta G, Ricciardi E, Pietrini P. Formant Space Reconstruction From Brain Activity in Frontal and Temporal Regions Coding for Heard Vowels. Front Hum Neurosci 2019; 13:32. PMID: 30837851; PMCID: PMC6383050; DOI: 10.3389/fnhum.2019.00032.
Abstract
Classical studies have isolated a distributed network of temporal and frontal areas engaged in the neural representation of speech perception and production. With modern literature arguing against unique roles for these cortical regions, different theories have favored either neural code-sharing or cortical space-sharing, thus trying to explain the intertwined spatial and functional organization of motor and acoustic components across the fronto-temporal cortical network. In this context, the focus of attention has recently shifted toward specific model fitting, aimed at reconstructing motor and/or acoustic spaces from brain activity within the language network. Here, we tested a model based on acoustic properties (formants) and one based on motor properties (articulation parameters), for which model-free decoding of evoked fMRI activity during perception, imagery, and production of vowels had been successful. Results revealed that phonological information organizes around formant structure during the perception of vowels; interestingly, such a model was reconstructed in a broad temporal region, outside of the primary auditory cortex, but also in the pars triangularis of the left inferior frontal gyrus. Conversely, articulatory features were not associated with brain activity in these regions. Overall, our results call for a degree of interdependence, based on acoustic information, between the frontal and temporal ends of the language network.
Affiliation(s)
- Andrea Leo: IMT School for Advanced Studies Lucca, Lucca, Italy
- Monica Betta: IMT School for Advanced Studies Lucca, Lucca, Italy
- Giovanna Marotta: Department of Philology, Literature and Linguistics, University of Pisa, Pisa, Italy
8
Varoquaux G, Poldrack RA. Predictive models avoid excessive reductionism in cognitive neuroimaging. Curr Opin Neurobiol 2018; 55:1-6. PMID: 30513462; DOI: 10.1016/j.conb.2018.11.002.
Abstract
Understanding the organization of complex behavior as it relates to the brain requires modeling the behavior, the relevant mental processes, and the corresponding neural activity. Experiments in cognitive neuroscience typically study a psychological process via controlled manipulations, reducing behavior to one of its components. Such reductionism can easily lead to paradigm-bound theories. Predictive models can generalize brain-mind associations to arbitrary new tasks and stimuli. We argue that they are needed to broaden theories beyond specific paradigms. Predicting behavior from neural activity can support robust reverse inference, isolating brain structures that support particular mental processes. The converse prediction enables modeling brain responses as a function of a complete description of the task, rather than building on oppositions.
9
Cortical tracking of multiple streams outside the focus of attention in naturalistic auditory scenes. Neuroimage 2018; 181:617-626. DOI: 10.1016/j.neuroimage.2018.07.052.
10
de Souza AP, Soares QB, Felix LB, Mendes EMAM. Classification of auditory selective attention using spatial coherence and modular attention index. Comput Methods Programs Biomed 2018; 166:107-113. PMID: 30415710; DOI: 10.1016/j.cmpb.2018.10.002.
Abstract
BACKGROUND AND OBJECTIVE: Brain-Computer Interfaces (BCIs) based on auditory selective attention have been receiving much attention because (i) they are useful for completely paralyzed users, since they do not require muscular effort or gaze, and (ii) focusing attention is a natural human ability. Several techniques, such as the recently developed Spatial Coherence (SC), have been proposed in order to optimize the BCI procedure. This work therefore investigates and compares two strategies based on spatial coherence detection: contralateral and modular classifiers. The latter is a new method using a modular attention index. The new classifier was developed to implement an auditory BCI in which a volunteer makes binary choices using selective attention under amplitude-modulated tone stimulation. METHODS: Contralateral and modular classifiers were applied to the electroencephalogram (EEG) recorded from 144 subjects under the BCI protocol. The best set of parameters (stimulus carriers, channels, and signal trials) for this BCI was investigated, taking into consideration the hit rate and the information transfer rate. RESULTS: The best result obtained with the modular classifier was a hit rate of 91.67% and an information transfer rate of 6.74 bits/min, using 0.5 kHz/4.0 kHz as stimuli and three windows (5.10 s of EEG signal). These results were obtained with five electrodes (C3, P3, F8, P4, O2), using exhaustive search to identify the regions with greatest coherence. CONCLUSION: The modular classifier, using electroencephalogram channels from the central, frontal, occipital and parietal areas, improves the performance of auditory BCIs based on selective attention.
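The reported operating point (91.67% hit rate, binary choice, about 5.10 s of EEG per selection, 6.74 bits/min) can be sanity-checked against the standard Wolpaw information-transfer-rate formula. This is a sketch under assumptions: the abstract does not state which ITR definition the authors used, and the function name below is illustrative.

```python
import math

def wolpaw_itr_bits_per_min(p: float, n_choices: int, selection_secs: float) -> float:
    """Wolpaw ITR: bits conveyed per selection, scaled to bits per minute.

    p: classification accuracy (hit rate), n_choices: number of selectable
    targets, selection_secs: time required for one selection.
    """
    bits = math.log2(n_choices)
    if 0.0 < p < 1.0:
        bits += p * math.log2(p) + (1.0 - p) * math.log2((1.0 - p) / (n_choices - 1))
    # Convert bits per selection into bits per minute.
    return bits * (60.0 / selection_secs)

# Reported operating point: 91.67% hit rate, binary choice, 5.10 s of EEG.
itr = wolpaw_itr_bits_per_min(0.9167, 2, 5.10)
```

This yields roughly 6.9 bits/min, close to but not exactly the reported 6.74 bits/min, which is consistent with the paper using a slightly longer effective selection time (e.g. including inter-trial gaps) or a variant of the formula.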
Affiliation(s)
- Ana Paula de Souza: Programa de Pós-Graduação em Engenharia Elétrica, Universidade Federal de Minas Gerais, Av. Presidente Antônio Carlos, 6627, Pampulha, Belo Horizonte, MG 31270-901, Brazil; Instituto de Ciências Exatas e Tecnológicas, Universidade Federal de Viçosa/Campus Florestal, Rodovia LMG 818 - km 6, Florestal, MG 35690-000, Brazil; Núcleo Interdisciplinar de Análise de Sinais, Departamento de Engenharia Elétrica, Universidade Federal de Viçosa, Av. Peter Henry Rolfs s/n, Viçosa, MG 36570-900, Brazil
- Quenaz B Soares: Núcleo Interdisciplinar de Análise de Sinais, Departamento de Engenharia Elétrica, Universidade Federal de Viçosa, Av. Peter Henry Rolfs s/n, Viçosa, MG 36570-900, Brazil
- Leonardo B Felix: Núcleo Interdisciplinar de Análise de Sinais, Departamento de Engenharia Elétrica, Universidade Federal de Viçosa, Av. Peter Henry Rolfs s/n, Viçosa, MG 36570-900, Brazil
- Eduardo M A M Mendes: Programa de Pós-Graduação em Engenharia Elétrica, Universidade Federal de Minas Gerais, Av. Presidente Antônio Carlos, 6627, Pampulha, Belo Horizonte, MG 31270-901, Brazil