1.
Norman-Haignere SV, Keshishian MK, Devinsky O, Doyle W, McKhann GM, Schevon CA, Flinker A, Mesgarani N. Temporal integration in human auditory cortex is predominantly yoked to absolute time, not structure duration. bioRxiv 2024:2024.09.23.614358. PMID: 39386565; PMCID: PMC11463558; DOI: 10.1101/2024.09.23.614358.
Abstract
Sound structures such as phonemes and words have highly variable durations. Thus, there is a fundamental difference between integrating across absolute time (e.g., 100 ms) vs. sound structure (e.g., phonemes). Auditory and cognitive models have traditionally cast neural integration in terms of time and structure, respectively, but the extent to which cortical computations reflect time or structure remains unknown. To answer this question, we rescaled the duration of all speech structures using time stretching/compression and measured integration windows in the human auditory cortex using a new experimental/computational method applied to spatiotemporally precise intracranial recordings. We observed significantly longer integration windows for stretched speech, but this lengthening was very small (∼5%) relative to the change in structure durations, even in non-primary regions strongly implicated in speech-specific processing. These findings demonstrate that time-yoked computations dominate throughout the human auditory cortex, placing important constraints on neurocomputational models of structure processing.
2.
Howard MW, Esfahani ZG, Le B, Sederberg PB. Learning temporal relationships between symbols with Laplace Neural Manifolds. arXiv 2024:arXiv:2302.10163v4. PMID: 36866224; PMCID: PMC9980275.
Abstract
Firing across populations of neurons in many regions of the mammalian brain maintains a temporal memory, a neural timeline of the recent past. Behavioral results demonstrate that people can both remember the past and anticipate the future over an analogous internal timeline. This paper presents a mathematical framework for building this timeline of the future. We assume that the input to the system is a time series of symbols (sparse, tokenized representations of the present) in continuous time. The goal is to record pairwise temporal relationships between symbols over a wide range of time scales. We assume that the brain has access to a temporal memory in the form of the real Laplace transform. Hebbian associations with a diversity of synaptic time scales are formed between the past timeline and the present symbol. The associative memory stores the convolution between the past and the present. Knowing the temporal relationship between the past and the present allows one to infer relationships between the present and the future. With appropriate normalization, this Hebbian associative matrix can store a Laplace successor representation and a Laplace predecessor representation from which measures of temporal contingency can be evaluated. The diversity of synaptic time constants allows for learning of non-stationary statistics as well as joint statistics between triplets of symbols. This framework synthesizes a number of recent neuroscientific findings, including results from dopamine neurons in the mesolimbic forebrain.
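The real-Laplace-transform memory this abstract builds on can be illustrated with a minimal numerical sketch. All names and parameter values below are illustrative assumptions, not the paper's implementation: a bank of leaky integrators with a diversity of decay rates `s` holds the Laplace transform of the recent input, so slow rates retain an event longer than fast ones.

```python
import numpy as np

def laplace_memory(spikes, dt, s_rates):
    """Toy Laplace memory: spikes is a (T,) 0/1 input in continuous time;
    returns a (T, len(s_rates)) array F, the running real Laplace transform."""
    F = np.zeros((len(spikes), len(s_rates)))
    decay = np.exp(-s_rates * dt)          # per-step exponential decay for each rate
    for t in range(1, len(spikes)):
        F[t] = F[t - 1] * decay + spikes[t] * dt
    return F

dt = 0.01
s_rates = np.geomspace(0.5, 50.0, 20)      # a diversity of synaptic time scales
spikes = np.zeros(500)
spikes[100] = 1.0                          # a single symbol presented at t = 1 s
F = laplace_memory(spikes, dt, s_rates)
```

Hebbian association between such a memory and the current symbol (the abstract's next step) would then be an outer product between `F[t]` and a one-hot symbol vector; the sketch above covers only the memory itself.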
Affiliation(s)
- Marc W Howard: Department of Psychological and Brain Sciences, Boston University, 610 Commonwealth Ave, Boston, MA 02215, USA
- Zahra Gh Esfahani: Department of Psychological and Brain Sciences, Boston University, 610 Commonwealth Ave, Boston, MA 02215, USA
- Bao Le: Department of Psychology, University of Virginia, 409 McCormick Road, Charlottesville, VA 22904, USA
- Per B Sederberg: Department of Psychology, University of Virginia, 409 McCormick Road, Charlottesville, VA 22904, USA
3.
Jain S, Vo VA, Wehbe L, Huth AG. Computational Language Modeling and the Promise of In Silico Experimentation. Neurobiology of Language 2024;5:80-106. PMID: 38645624; PMCID: PMC11025654; DOI: 10.1162/nol_a_00101.
Abstract
Language neuroscience currently relies on two major experimental paradigms: controlled experiments using carefully hand-designed stimuli, and natural stimulus experiments. These approaches have complementary advantages which allow them to address distinct aspects of the neurobiology of language, but each approach also comes with drawbacks. Here we discuss a third paradigm, in silico experimentation using deep learning-based encoding models, which has been enabled by recent advances in cognitive computational neuroscience. This paradigm promises to combine the interpretability of controlled experiments with the generalizability and broad scope of natural stimulus experiments. We show four examples of simulating language neuroscience experiments in silico and then discuss both the advantages and caveats of this approach.
Affiliation(s)
- Shailee Jain: Department of Computer Science, University of Texas at Austin, Austin, TX, USA
- Vy A. Vo: Brain-Inspired Computing Lab, Intel Labs, Hillsboro, OR, USA
- Leila Wehbe: Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA; Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Alexander G. Huth: Department of Computer Science, University of Texas at Austin, Austin, TX, USA; Department of Neuroscience, University of Texas at Austin, Austin, TX, USA
4.
Klar P, Çatal Y, Fogel S, Jocham G, Langner R, Owen AM, Northoff G. Auditory inputs modulate intrinsic neuronal timescales during sleep. Commun Biol 2023;6:1180. PMID: 37985812; PMCID: PMC10661171; DOI: 10.1038/s42003-023-05566-8.
Abstract
Functional magnetic resonance imaging (fMRI) studies have demonstrated that intrinsic neuronal timescales (INT) undergo modulation by external stimulation during consciousness. It remains unclear whether INT retain the capacity for significant stimulus-induced modulation during unconscious states such as sleep. This fMRI analysis addresses this question via a dataset that comprises an awake resting state plus rest and stimulus states during sleep. We analyzed INT measured via temporal autocorrelation, supported by median frequency (MF) in the frequency domain. Our results were replicated using a biophysical model. There were two main findings: (1) INT lengthened while MF decreased from the awake resting state to the N2 resting state, and (2) INT shortened while MF increased during the auditory stimulus in sleep. The biophysical model supported these results by demonstrating prolonged INT in slowed neuronal populations that simulate the sleep resting state compared to an awake state. Conversely, under sine-wave input simulating the stimulus state during sleep, the model's regions yielded shortened INT that returned to the awake resting-state level. Our results highlight that INT preserve reactivity to stimuli in states of unconsciousness such as sleep, enhancing our understanding of unconscious brain dynamics and their reactivity to stimuli.
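As a rough illustration of what an autocorrelation-based INT estimate involves (one common estimator; the paper's exact pipeline may differ, and the function and signal names here are hypothetical), one can sum the autocorrelation function over positive lags until it first turns negative. A slower, smoother signal then yields a longer INT than a fast one:

```python
import numpy as np

def intrinsic_timescale(x, tr=1.0, max_lag=50):
    """Sum of the ACF over positive lags until it first dips below zero,
    in units of the sampling interval `tr` (a common INT estimator)."""
    x = x - x.mean()
    acf = np.array([np.corrcoef(x[:-k], x[k:])[0, 1] for k in range(1, max_lag)])
    below = np.where(acf < 0)[0]
    cut = below[0] if below.size else len(acf)
    return acf[:cut].sum() * tr

rng = np.random.default_rng(0)
noise = rng.standard_normal(2000)
slow = np.convolve(noise, np.ones(20) / 20, mode="valid")  # smoothed = slower dynamics
fast = noise[: len(slow)]                                   # white noise = fast dynamics
```

With these synthetic signals, `intrinsic_timescale(slow)` comes out well above `intrinsic_timescale(fast)`, mirroring the INT lengthening versus shortening contrasts the abstract reports.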
Affiliation(s)
- Philipp Klar: Faculty of Mathematics and Natural Sciences, Institute of Experimental Psychology, Heinrich Heine University of Düsseldorf, Düsseldorf, Germany; Institute of Neuroscience and Medicine, Brain & Behaviour (INM-7), Research Centre Jülich, Jülich, Germany
- Yasir Çatal: The Royal's Institute of Mental Health Research & University of Ottawa, Brain and Mind Research Institute, Centre for Neural Dynamics, Faculty of Medicine, University of Ottawa, 145 Carling Avenue, Room 6435, Ottawa, ON K1Z 7K4, Canada
- Stuart Fogel: Sleep Unit, University of Ottawa Institute of Mental Health Research at The Royal, Ottawa, ON K1Z 7K4, Canada
- Gerhard Jocham: Faculty of Mathematics and Natural Sciences, Institute of Experimental Psychology, Heinrich Heine University of Düsseldorf, Düsseldorf, Germany
- Robert Langner: Institute of Neuroscience and Medicine, Brain & Behaviour (INM-7), Research Centre Jülich, Jülich, Germany; Institute of Systems Neuroscience, Medical Faculty, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Adrian M Owen: Departments of Physiology and Pharmacology and Psychology, Western University, London, ON N6A 5B7, Canada
- Georg Northoff: The Royal's Institute of Mental Health Research & University of Ottawa, Brain and Mind Research Institute, Centre for Neural Dynamics, Faculty of Medicine, University of Ottawa, 145 Carling Avenue, Room 6435, Ottawa, ON K1Z 7K4, Canada; Centre for Cognition and Brain Disorders, Hangzhou Normal University, Tianmu Road 305, Hangzhou, Zhejiang Province, 310013, China
5.
Northoff G, Klar P, Bein M, Safron A. As without, so within: how the brain's temporo-spatial alignment to the environment shapes consciousness. Interface Focus 2023;13:20220076. PMID: 37065263; PMCID: PMC10102730; DOI: 10.1098/rsfs.2022.0076.
Abstract
Consciousness is constituted by a structure that includes contents as foreground and the environment as background. This structural relation between the experiential foreground and background presupposes a relationship between the brain and the environment, often neglected in theories of consciousness. The temporo-spatial theory of consciousness addresses the brain-environment relation through a concept labelled 'temporo-spatial alignment'. Briefly, temporo-spatial alignment refers to the interaction of the brain's neuronal activity with, and its adaptation to, interoceptive bodily and exteroceptive environmental stimuli, including their symmetry as key for consciousness. Combining theory and empirical data, this article attempts to demonstrate the as yet unclear neuro-phenomenal mechanisms of temporo-spatial alignment. First, we suggest three neuronal layers of the brain's temporo-spatial alignment to the environment. These neuronal layers span a continuum from longer to shorter timescales. (i) The background layer comprises longer and more powerful timescales mediating topographic-dynamic similarities between different subjects' brains. (ii) The intermediate layer includes a mixture of medium-scaled timescales allowing for stochastic matching between environmental inputs and neuronal activity through the brain's intrinsic neuronal timescales and temporal receptive windows. (iii) The foreground layer comprises shorter and less powerful timescales for neuronal entrainment to the temporal onset of stimuli through neuronal phase shifting and resetting. Second, we elaborate on how the three neuronal layers of temporo-spatial alignment correspond to their respective phenomenal layers of consciousness. (i) The inter-subjectively shared contextual background of consciousness. (ii) An intermediate layer that mediates the relationship between different contents of consciousness. (iii) A foreground layer that includes specific fast-changing contents of consciousness.
Overall, temporo-spatial alignment may provide a mechanism whose different neuronal layers modulate corresponding phenomenal layers of consciousness. Temporo-spatial alignment can provide a bridging principle for linking physical-energetic (free energy), dynamic (symmetry), neuronal (three layers of distinct time-space scales) and phenomenal (form featured by background-intermediate-foreground) mechanisms of consciousness.
Affiliation(s)
- Georg Northoff: Mind, Brain Imaging and Neuroethics Research Unit, The Royal's Institute of Mental Health Research, University of Ottawa, Ottawa, ON K1Z 7K4, Canada; Mental Health Centre, Zhejiang University School of Medicine, Hangzhou 310053, People's Republic of China; Centre for Cognition and Brain Disorders, Hangzhou Normal University, Hangzhou 310053, People's Republic of China
- Philipp Klar: Medical Faculty, C. & O. Vogt-Institute for Brain Research, Heinrich Heine University of Düsseldorf, 40225 Düsseldorf, Germany
- Magnus Bein: Department of Biology and Department of Psychiatry, McGill University, Quebec, Canada H3A 0G4
- Adam Safron: Center for Psychedelic and Consciousness Research, Department of Psychiatry & Behavioral Sciences, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA; Cognitive Science Program, Indiana University, Bloomington, IN 47405, USA; Institute for Advanced Consciousness Studies, Santa Monica, CA 90403, USA
6.
Truzzi A, Cusack R. The development of intrinsic timescales: A comparison between the neonate and adult brain. Neuroimage 2023;275:120155. PMID: 37169116; DOI: 10.1016/j.neuroimage.2023.120155.
Abstract
In human adults and other mammals, different brain regions have distinct intrinsic timescales over which they integrate information, from shorter in unimodal sensory-motor regions to longer in transmodal higher-order regions. These have been related to cognitive performance and clinical symptoms, but it remains unclear how they develop. We asked if there are regional differences in timescales at birth that could shape learning by acting as an inductive bias, or if they develop later as the temporal statistics of the environment are learned. We used resting-state fMRI to characterise timescales in human neonates and adults. Timescales were highly consistent across two independent neonatal groups, but in both sensory-motor and higher-order areas, timescales were longer in infants compared to adults, as might be expected from their less developed myelination and from recent evidence of longer neural segments in infants watching naturalistic stimuli. In adults, we replicated the finding that transmodal areas have longer timescales than sensory-motor areas, but in infants the opposite pattern was found, driven by long infant timescales in the somatomotor network. Across regions within single brain networks, both positive (limbic) and negative (visual) correlations were found between neonates and adults. In conclusion, neonatal timescales were found to be highly structured but distinct from adults, suggesting they act as an inductive bias that favours learning on longer timescales, particularly in unimodal regions, and then develop with experience or maturation. This "take it slow" initial approach might help human infants to create more regularised, holistic representations of the input less bound to fleeting details, which would favour the development of abstract and contextual representations.
Affiliation(s)
- Anna Truzzi: School of Psychology, Trinity College Dublin, Dublin, Ireland; Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- Rhodri Cusack: School of Psychology, Trinity College Dublin, Dublin, Ireland; Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
7.
Wang B, Chen Y, Chen K, Lu H, Zhang Z. From local properties to brain-wide organization: A review of intraregional temporal features in functional magnetic resonance imaging data. Hum Brain Mapp 2023;44:3926-3938. PMID: 37086446; DOI: 10.1002/hbm.26302.
Abstract
Based on fluctuations ensembled over neighbouring neurons, the blood oxygen level-dependent (BOLD) signal is a mesoscale measurement of brain signals. Intraregional temporal features (IRTFs) of the BOLD signal, extracted from regional neural activities, are utilized to investigate how the brain functions in local brain areas. This review highlights four types of IRTFs and their representative calculations: variability in the temporal domain, variability in the frequency domain, entropy, and intrinsic neural timescales, all of which are tightly related to cognition. In the brain-wide spatial organization, these features generally organize into two spatial hierarchies, reflecting structural constraints on regional dynamics and the hierarchical functional processing workflow in the brain. Meanwhile, the spatial organization gives rise to the link between neuronal properties and cognitive performance. Disrupted or unbalanced spatial conditions of IRTFs emerge with suboptimal cognitive states, which improves our understanding of the aging process and/or the neuropathology of brain disease. This review concludes that IRTFs are important properties of the brain's functional system and should be considered in a brain-wide manner.
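The four IRTF families the review names can be illustrated with toy, single-time-series versions (these simplified formulas and names are assumptions for illustration, not the reviewed measures themselves: e.g., a lag-1 autocorrelation stands in for intrinsic neural timescales, and a histogram entropy for the entropy measures):

```python
import numpy as np

def irtf_features(x, fs=0.5):
    """Toy stand-ins for the four IRTF families of a BOLD-like series x,
    sampled at fs Hz (fs = 0.5 corresponds to a 2-s TR)."""
    sd = x.std()                                           # temporal-domain variability
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    power = np.abs(np.fft.rfft(x - x.mean())) ** 2
    medfreq = freqs[np.searchsorted(np.cumsum(power), power.sum() / 2)]  # frequency-domain
    hist, _ = np.histogram(x, bins=16)
    p = hist / hist.sum()
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))        # signal entropy (histogram-based)
    acf1 = np.corrcoef(x[:-1], x[1:])[0, 1]                # intrinsic-timescale proxy
    return sd, medfreq, entropy, acf1

rng = np.random.default_rng(0)
noise = rng.standard_normal(2000)
slow = np.convolve(noise, np.ones(20) / 20, mode="valid")  # sluggish regional dynamics
fast = noise[: len(slow)]                                   # fast regional dynamics
sd_s, mf_s, ent_s, acf_s = irtf_features(slow)
sd_f, mf_f, ent_f, acf_f = irtf_features(fast)
```

A region with slower dynamics shows a higher timescale proxy (`acf_s > acf_f`) and a lower median frequency (`mf_s < mf_f`), the kind of regional contrast these features are meant to capture.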
Affiliation(s)
- Bolong Wang: State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China; BABRI Centre, Beijing Normal University, Beijing, China
- Yaojing Chen: State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China; BABRI Centre, Beijing Normal University, Beijing, China
- Kewei Chen: Banner Alzheimer's Institute, Phoenix, Arizona, USA
- Hui Lu: State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China; BABRI Centre, Beijing Normal University, Beijing, China
- Zhanjun Zhang: State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China; BABRI Centre, Beijing Normal University, Beijing, China
8.
Giroud J, Lerousseau JP, Pellegrino F, Morillon B. The channel capacity of multilevel linguistic features constrains speech comprehension. Cognition 2023;232:105345. PMID: 36462227; DOI: 10.1016/j.cognition.2022.105345.
Abstract
Humans are experts at processing speech, but how this feat is accomplished remains a major question in cognitive neuroscience. Capitalizing on the concept of channel capacity, we developed a unified measurement framework to investigate the respective influence of seven acoustic and linguistic features on speech comprehension, encompassing acoustic, sub-lexical, lexical and supra-lexical levels of description. We show that comprehension is independently impacted by all these features, but to varying degrees and with a clear dominance of the syllabic rate. Comparing comprehension of French words and sentences further reveals that when supra-lexical contextual information is present, the impact of all other features is dramatically reduced. Finally, we estimated the channel capacity associated with each linguistic feature and compared them with their generic distribution in natural speech. Our data reveal that while acoustic modulation, syllabic and phonemic rates unfold respectively at 5, 5, and 12 Hz in natural speech, they are associated with independent processing bottlenecks whose channel capacities are 15, 15 and 35 Hz, respectively, as suggested by neurophysiological theories. They moreover point towards supra-lexical contextual information as the feature limiting the flow of natural speech. Overall, this study reveals how multilevel linguistic features constrain speech comprehension.
Affiliation(s)
- Jérémy Giroud: Aix Marseille Univ, Inserm, INS, Inst Neurosci Syst, Marseille, France
- François Pellegrino: Laboratoire Dynamique du Langage UMR 5596, CNRS, University of Lyon, 14 Avenue Berthelot, 69007 Lyon, France
- Benjamin Morillon: Aix Marseille Univ, Inserm, INS, Inst Neurosci Syst, Marseille, France
9.
Weise A, Grimm S, Maria Rimmele J, Schröger E. Auditory representations for long lasting sounds: Insights from event-related brain potentials and neural oscillations. Brain and Language 2023;237:105221. PMID: 36623340; DOI: 10.1016/j.bandl.2022.105221.
Abstract
The basic features of short sounds, such as frequency and intensity, including their temporal dynamics, are integrated in a unitary representation. Knowledge of how our brain processes long-lasting sounds is scarce. We review research utilizing the Mismatch Negativity event-related potential and neural oscillatory activity for studying representations of long-lasting simple versus complex sounds, such as sinusoidal tones versus speech. There is evidence for a temporal constraint in the formation of auditory representations: auditory edges like sound onsets within long-lasting sounds open a temporal window of about 350 ms in which the sound's dynamics are integrated into a representation, while information beyond that window contributes less to that representation. This integration window segments the auditory input into short chunks. We argue that the representations established in adjacent integration windows can be concatenated into an auditory representation of a long sound, thus overcoming the temporal constraint.
Affiliation(s)
- Annekathrin Weise: Department of Psychology, Ludwig-Maximilians-University Munich, Germany; Wilhelm Wundt Institute for Psychology, Leipzig University, Germany
- Sabine Grimm: Wilhelm Wundt Institute for Psychology, Leipzig University, Germany
- Johanna Maria Rimmele: Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Germany; Center for Language, Music and Emotion, New York University, Max Planck Institute, Department of Psychology, 6 Washington Place, New York, NY 10003, United States
- Erich Schröger: Wilhelm Wundt Institute for Psychology, Leipzig University, Germany
10.
Suomala J, Kauttonen J. Computational meaningfulness as the source of beneficial cognitive biases. Front Psychol 2023;14:1189704. PMID: 37205079; PMCID: PMC10187636; DOI: 10.3389/fpsyg.2023.1189704.
Abstract
The human brain has evolved to solve the problems it encounters in multiple environments. In solving these challenges, it forms mental simulations of multidimensional information about the world. These processes produce context-dependent behaviors. The brain, as an overparameterized modeling organ, is an evolutionary solution for producing behavior in a complex world. One of the most essential characteristics of living creatures is that they compute the value of the information they receive from external and internal contexts. As a result of this computation, a creature can behave optimally in each environment. Whereas most other living creatures compute almost exclusively biological values (e.g., how to get food), humans, as cultural creatures, compute meaningfulness from the perspective of their own activity. Computational meaningfulness refers to the process by which the human brain makes a given situation comprehensible to the individual, so that the individual knows how to behave optimally. This paper challenges the bias-centric approach of behavioral economics by exploring the different possibilities opened up by computational meaningfulness, with insight into wider perspectives. We concentrate on confirmation bias and the framing effect as behavioral-economics examples of cognitive biases. We conclude that, from the computational meaningfulness perspective, these biases are an indispensable property of an optimally designed computational system such as the human brain. From this perspective, cognitive biases can be rational under some conditions. Whereas the bias-centric approach relies on small-scale interpretable models which include only a few explanatory variables, the computational meaningfulness perspective emphasizes behavioral models which allow multiple variables. People are used to working in multidimensional and varying environments. The human brain is at its best in such environments, and scientific study should increasingly take place in situations simulating the real environment. By using naturalistic stimuli (e.g., videos and VR) we can create more realistic, life-like contexts for research purposes and analyze the resulting data using machine learning algorithms. In this manner, we can better explain, understand and predict human behavior and choice in different contexts.
Affiliation(s)
- Jyrki Suomala (corresponding author): Department of NeuroLab, Laurea University of Applied Sciences, Vantaa, Finland
- Janne Kauttonen: Competences, RDI and Digitalization, Haaga-Helia University of Applied Sciences, Helsinki, Finland
11.
Information flow across the cortical timescale hierarchy during narrative construction. Proc Natl Acad Sci U S A 2022;119:e2209307119. PMID: 36508677; PMCID: PMC9907070; DOI: 10.1073/pnas.2209307119.
Abstract
When listening to spoken narratives, we must integrate information over multiple, concurrent timescales, building up from words to sentences to paragraphs to a coherent narrative. Recent evidence suggests that the brain relies on a chain of hierarchically organized areas with increasing temporal receptive windows to process naturalistic narratives. We hypothesized that the structure of this cortical processing hierarchy should result in an observable sequence of response lags between networks comprising the hierarchy during narrative comprehension. This study uses functional MRI to estimate the response lags between functional networks during narrative comprehension. We use intersubject cross-correlation analysis to capture network connectivity driven by the shared stimulus. We found a fixed temporal sequence of response lags, on the scale of several seconds, starting in early auditory areas, followed by language areas, the attention network, and lastly the default mode network. This gradient is consistent across eight distinct stories but absent in data acquired during rest or using a scrambled story stimulus, supporting our hypothesis that narrative construction gives rise to internetwork lags. Finally, we build a simple computational model for the neural dynamics underlying the construction of nested narrative features. Our simulations illustrate how the gradual accumulation of information within the boundaries of nested linguistic events, accompanied by increased activity at each level of the processing hierarchy, can give rise to the observed lag gradient.
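The lag estimation at the heart of such a cross-correlation analysis can be sketched with synthetic signals (the variable names and the 3-sample delay below are illustrative assumptions, not values from the study): cross-correlate two regions' stimulus-driven time courses and take the lag at which the correlation peaks.

```python
import numpy as np

def peak_lag(a, b, max_lag=10):
    """Lag k (in samples) maximizing the correlation of a[t] with b[t + k]."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    lags = np.arange(-max_lag, max_lag + 1)
    xc = [np.mean(a[max(0, -k):len(a) - max(0, k)] *
                  b[max(0, k):len(b) - max(0, -k)]) for k in lags]
    return lags[int(np.argmax(xc))]

rng = np.random.default_rng(1)
# A smoothed "sensory" time course, and a "higher-order" copy responding 3 samples later.
sensory = np.convolve(rng.standard_normal(500), np.ones(5) / 5, mode="same")
higher = np.roll(sensory, 3)
```

Here `peak_lag(sensory, higher)` recovers the imposed 3-sample delay, the kind of internetwork lag the study estimates between auditory, language, attention, and default mode networks.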
12.
Hassall CD, Harley J, Kolling N, Hunt LT. Temporal scaling of human scalp-recorded potentials. Proc Natl Acad Sci U S A 2022;119:e2214638119. PMID: 36256817; PMCID: PMC9618087; DOI: 10.1073/pnas.2214638119.
Abstract
Much of human behavior is governed by common processes that unfold over varying timescales. Standard event-related potential analysis assumes fixed-duration responses relative to experimental events. However, recent single-unit recordings in animals have revealed that neural activity scales to span different durations during behaviors demanding flexible timing. Here, we employed a general linear modeling approach using a combination of fixed-duration and variable-duration regressors to unmix fixed-time and scaled-time components in human magneto-/electroencephalography (M/EEG) data. We use this to reveal consistent temporal scaling of human scalp-recorded potentials across four independent electroencephalogram (EEG) datasets, including interval perception, production, prediction, and value-based decision making. Between-trial variation in the temporally scaled response predicts between-trial variation in subject reaction times, demonstrating the relevance of this temporally scaled signal for temporal variation in behavior. Our results provide a general approach for studying flexibly timed behavior in the human brain.
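A toy version of this unmixing idea can be built in a few lines (the canonical shapes, trial durations, and weights below are hypothetical, not the authors' actual design matrix): each trial's response is modeled as a fixed-duration component locked to the event plus a component stretched to span the trial, and the two are separated by least squares.

```python
import numpy as np

def scaled_regressor(shape, duration):
    """Resample a canonical shape so it spans `duration` samples."""
    t = np.linspace(0, len(shape) - 1, duration)
    return np.interp(t, np.arange(len(shape)), shape)

rng = np.random.default_rng(2)
canon = np.hanning(20)                     # canonical scaled-time shape (toy)
fixed = np.exp(-np.arange(10) / 3.0)       # canonical fixed-time shape (toy)

# Simulate trials of varying duration containing both components plus noise.
durations = [20, 30, 40]
X_rows, y = [], []
for d in durations:
    trial = np.zeros(50)
    trial[:d] += 2.0 * scaled_regressor(canon, d)   # true scaled-time weight = 2
    trial[:10] += 1.0 * fixed                       # true fixed-time weight = 1
    for t in range(50):
        row_scaled = scaled_regressor(canon, d)[t] if t < d else 0.0
        row_fixed = fixed[t] if t < 10 else 0.0
        X_rows.append([row_scaled, row_fixed])
        y.append(trial[t] + 0.01 * rng.standard_normal())

# Least-squares fit recovers the two weights, unmixing the components.
beta, *_ = np.linalg.lstsq(np.array(X_rows), np.array(y), rcond=None)
```

Because the scaled and fixed regressors overlap differently across the three trial durations, the fit recovers `beta` close to the true weights (2.0 and 1.0).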
Affiliation(s)
- Cameron D. Hassall: Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford OX3 7JX, United Kingdom
- Jack Harley: Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford OX3 7JX, United Kingdom
- Nils Kolling: Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford OX3 7JX, United Kingdom
- Laurence T. Hunt: Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford OX3 7JX, United Kingdom
13.
Tinnemore AR, Montero L, Gordon-Salant S, Goupell MJ. The recognition of time-compressed speech as a function of age in listeners with cochlear implants or normal hearing. Front Aging Neurosci 2022;14:887581. PMID: 36247992; PMCID: PMC9557069; DOI: 10.3389/fnagi.2022.887581.
Abstract
Speech recognition is diminished when a listener has an auditory temporal processing deficit. Such deficits occur in listeners over 65 years old with normal hearing (NH) and with age-related hearing loss, but their source is still unclear. These deficits may be especially apparent when speech occurs at a rapid rate and when a listener is mostly reliant on temporal information to recognize speech, such as when listening with a cochlear implant (CI) or to vocoded speech (a CI simulation). Assessment of the auditory temporal processing abilities of adults with CIs across a wide range of ages should better reveal central or cognitive sources of age-related deficits with rapid speech because CI stimulation bypasses much of the cochlear encoding that is affected by age-related peripheral hearing loss. This study used time-compressed speech at four different degrees of time compression (0, 20, 40, and 60%) to challenge the auditory temporal processing abilities of younger, middle-aged, and older listeners with CIs or with NH. Listeners with NH were presented vocoded speech at four degrees of spectral resolution (unprocessed, 16, 8, and 4 channels). Results showed an interaction between age and degree of time compression. The reduction in speech recognition associated with faster rates of speech was greater for older adults than younger adults. The performance of the middle-aged listeners was more similar to that of the older listeners than to that of the younger listeners, especially at higher degrees of time compression. A measure of cognitive processing speed did not predict the effects of time compression. These results suggest that central auditory changes related to the aging process are at least partially responsible for the auditory temporal processing deficits seen in older listeners, rather than solely peripheral age-related changes.
Affiliation(s)
- Anna R. Tinnemore
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, College Park, MD, United States
- Department of Hearing and Speech Sciences, University of Maryland, College Park, College Park, MD, United States
- *Correspondence: Anna R. Tinnemore,
- Lauren Montero
- Department of Hearing and Speech Sciences, University of Maryland, College Park, College Park, MD, United States
- Sandra Gordon-Salant
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, College Park, MD, United States
- Department of Hearing and Speech Sciences, University of Maryland, College Park, College Park, MD, United States
- Matthew J. Goupell
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, College Park, MD, United States
- Department of Hearing and Speech Sciences, University of Maryland, College Park, College Park, MD, United States
14
Encoding time in neural dynamic regimes with distinct computational tradeoffs. PLoS Comput Biol 2022; 18:e1009271. [PMID: 35239644 PMCID: PMC8893702 DOI: 10.1371/journal.pcbi.1009271] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2021] [Accepted: 02/08/2022] [Indexed: 11/19/2022] Open
Abstract
Converging evidence suggests the brain encodes time in dynamic patterns of neural activity, including neural sequences, ramping activity, and complex dynamics. Most temporal tasks, however, require more than just encoding time, and can have distinct computational requirements, including the need to exhibit temporal scaling, generalize to novel contexts, or remain robust to noise. It is not known how neural circuits can encode time and satisfy distinct computational requirements, nor is it known whether similar patterns of neural activity at the population level can exhibit dramatically different computational or generalization properties. To begin to answer these questions, we trained recurrent neural networks (RNNs) on two timing tasks based on behavioral studies. The tasks had different input structures but required producing identically timed output patterns. Using a novel framework, we quantified whether RNNs encoded two intervals using one of three different timing strategies: scaling, absolute, or stimulus-specific dynamics. We found that similar neural dynamic patterns at the level of single intervals could exhibit fundamentally different properties, including generalization, the connectivity structure of the trained networks, and the contributions of excitatory and inhibitory neurons. Critically, depending on the task structure, RNNs were better suited for generalization or robustness to noise. Further analysis revealed different connection patterns underlying the different regimes. Our results predict that apparently similar neural dynamic patterns at the population level (e.g., neural sequences) can exhibit fundamentally different computational properties with regard to their ability to generalize to novel stimuli and their robustness to noise, and that these differences are associated with differences in network connectivity and distinct contributions of excitatory and inhibitory neurons.
We also predict that the task structure used in different experimental studies accounts for some of the experimentally observed variability in how networks encode time. The abilities to tell time and to anticipate when external events will occur are among the most fundamental computations the brain performs. Converging evidence suggests the brain encodes time through changing patterns of neural activity. Different temporal tasks, however, have distinct computational requirements, such as the need to flexibly scale temporal patterns or generalize to novel inputs. To understand how networks can encode time and satisfy different computational requirements, we trained recurrent neural networks (RNNs) on two timing tasks that have previously been used in behavioral studies. Both tasks required producing identically timed output patterns. Using a novel framework to quantify how networks encode different intervals, we found that similar patterns of neural activity (neural sequences) were associated with fundamentally different underlying mechanisms, including the connectivity patterns of the RNNs. Critically, depending on the task the RNNs were trained on, they were better suited for generalization or robustness to noise. Our results predict that similar patterns of neural activity can be produced by distinct RNN configurations, which in turn have fundamentally different computational tradeoffs. Our results also predict that differences in task structure account for some of the experimentally observed variability in how networks encode time.
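The contrast between "scaling" and "absolute" timing codes can be illustrated with a toy analysis (a hypothetical sketch, not the paper's framework): compare a short-interval population trajectory against a long-interval one after either temporally rescaling the long trajectory or truncating it to the short duration.

```python
import numpy as np

def scaling_vs_absolute(short_traj, long_traj):
    """Contrast two timing codes (toy index, not the paper's framework).

    Inputs are (time, neurons) population trajectories for a short and a
    long interval. 'Scaling' predicts the long trajectory is a stretched
    copy of the short one; 'absolute' predicts their initial segments
    match sample-for-sample. Returns the mean per-neuron correlation
    under each alignment.
    """
    t_s, n = short_traj.shape
    t_l = long_traj.shape[0]
    src = np.linspace(0.0, 1.0, t_l)
    dst = np.linspace(0.0, 1.0, t_s)
    # Scaling alignment: resample the long trajectory onto the short time base.
    stretched = np.column_stack(
        [np.interp(dst, src, long_traj[:, i]) for i in range(n)])
    head = long_traj[:t_s]  # absolute alignment: overlapping initial segment
    mean_r = lambda a, b: float(np.mean(
        [np.corrcoef(a[:, i], b[:, i])[0, 1] for i in range(n)]))
    return mean_r(short_traj, stretched), mean_r(short_traj, head)

# Toy population whose dynamics genuinely rescale with interval duration.
n = 8
phases = np.random.default_rng(0).uniform(0, np.pi, n)
short = np.sin(np.linspace(0, 2 * np.pi, 50)[:, None] + phases)
long_ = np.sin(np.linspace(0, 2 * np.pi, 100)[:, None] + phases)
scal, absr = scaling_vs_absolute(short, long_)
print(f"scaling r = {scal:.2f}, absolute r = {absr:.2f}")
```

Because this toy population truly rescales its dynamics, the scaling alignment recovers a near-perfect match while the absolute alignment does not.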
15
Morales M, Patel T, Tamm A, Pickering MJ, Hoffman P. Similar Neural Networks Respond to Coherence during Comprehension and Production of Discourse. Cereb Cortex 2022; 32:4317-4330. [PMID: 35059718 PMCID: PMC9528896 DOI: 10.1093/cercor/bhab485] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2021] [Revised: 11/22/2021] [Accepted: 11/23/2021] [Indexed: 12/03/2022] Open
Abstract
When comprehending discourse, listeners engage default-mode regions associated with integrative semantic processing to construct a situation model of its content. We investigated how similar networks are engaged when we produce, as well as comprehend, discourse. During functional magnetic resonance imaging, participants spoke about a series of specific topics and listened to discourse on other topics. We tested how activation was predicted by natural fluctuations in the global coherence of the discourse, that is, the degree to which utterances conformed to the expected topic. The neural correlates of coherence were similar across speaking and listening, particularly in default-mode regions. This network showed greater activation when less coherent speech was heard or produced, reflecting updating of mental representations when discourse did not conform to the expected topic. In contrast, regions that exert control over semantic activation showed task-specific effects, correlating negatively with coherence during listening but not during production. Participants who showed greater activation in left inferior prefrontal cortex also produced more coherent discourse, suggesting a specific role for this region in goal-directed regulation of speech content. Results suggest strong correspondence of discourse representations during speaking and listening. However, they indicate that the semantic control network plays different roles in comprehension and production.
Affiliation(s)
- Matías Morales
- School of Philosophy, Psychology and Language Sciences, University of Edinburgh, Edinburgh EH8 9JZ, UK
- Tanvi Patel
- School of Philosophy, Psychology and Language Sciences, University of Edinburgh, Edinburgh EH8 9JZ, UK
- Andres Tamm
- School of Philosophy, Psychology and Language Sciences, University of Edinburgh, Edinburgh EH8 9JZ, UK
- Martin J Pickering
- School of Philosophy, Psychology and Language Sciences, University of Edinburgh, Edinburgh EH8 9JZ, UK
- Paul Hoffman
- Address correspondence to Dr Paul Hoffman, School of Philosophy, Psychology and Language Sciences, University of Edinburgh, 7 George Square, Edinburgh EH8 9JZ, UK.
16
Wolff A, Berberian N, Golesorkhi M, Gomez-Pilar J, Zilio F, Northoff G. Intrinsic neural timescales: temporal integration and segregation. Trends Cogn Sci 2022; 26:159-173. [PMID: 34991988 DOI: 10.1016/j.tics.2021.11.007] [Citation(s) in RCA: 72] [Impact Index Per Article: 36.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2021] [Revised: 11/19/2021] [Accepted: 11/23/2021] [Indexed: 12/11/2022]
Abstract
We are continuously bombarded by external inputs from the environment spanning various timescales. How does the brain process this multitude of timescales? Recent resting-state studies show a hierarchy of intrinsic neural timescales (INT), with shorter durations in unimodal regions (e.g., visual cortex and auditory cortex) and longer durations in transmodal regions (e.g., default mode network). This unimodal-transmodal hierarchy is present across acquisition modalities [electroencephalogram (EEG)/magnetoencephalogram (MEG) and fMRI] and can be found in different species and during a variety of different task states. Together, this suggests that the hierarchy of INT is central to the temporal integration (combining successive stimuli) and segregation (separating successive stimuli) of external inputs from the environment, leading to temporal segmentation and prediction in perception and cognition.
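One common operationalization of an intrinsic neural timescale, used here only as an illustrative sketch (published INT pipelines differ in detail, e.g., some sum significant autocorrelation lags instead), is the lag at which a signal's autocorrelation first decays below 1/e:

```python
import numpy as np

def intrinsic_timescale(x, dt=1.0, max_lag=200):
    """Lag (in units of dt) at which autocorrelation first drops below 1/e.

    One common INT operationalization; this is a sketch, and published
    pipelines differ in detail.
    """
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.dot(x, x)
    for lag in range(1, min(max_lag, len(x))):
        if np.dot(x[:-lag], x[lag:]) / denom < 1.0 / np.e:
            return lag * dt
    return max_lag * dt

def ar1(a, n=20000, seed=1):
    """AR(1) noise; its autocorrelation decays as a**lag."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = a * x[t - 1] + rng.standard_normal()
    return x

# A slower (larger-coefficient) process yields a longer intrinsic timescale.
print(intrinsic_timescale(ar1(0.5)), intrinsic_timescale(ar1(0.95)))
```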
Affiliation(s)
- Annemarie Wolff
- Mind, Brain Imaging, and Neuroethics Research Unit, Institute of Mental Health Research, The Royal Ottawa Mental Health Centre and University of Ottawa, Ottawa, Canada
- Nareg Berberian
- Mind, Brain Imaging, and Neuroethics Research Unit, Institute of Mental Health Research, The Royal Ottawa Mental Health Centre and University of Ottawa, Ottawa, Canada
- Mehrshad Golesorkhi
- Mind, Brain Imaging, and Neuroethics Research Unit, Institute of Mental Health Research, The Royal Ottawa Mental Health Centre and University of Ottawa, Ottawa, Canada
- Javier Gomez-Pilar
- Biomedical Engineering Group, University of Valladolid, Paseo de Belén, 15, 47011 Valladolid, Spain; Centro de Investigación Biomédica en Red en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Madrid, Spain
- Federico Zilio
- Department of Philosophy, Sociology, Education, and Applied Psychology, University of Padova, Padua, Italy
- Georg Northoff
- Mind, Brain Imaging, and Neuroethics Research Unit, Institute of Mental Health Research, The Royal Ottawa Mental Health Centre and University of Ottawa, Ottawa, Canada; Centre for Cognition and Brain Disorders, Hangzhou Normal University, Hangzhou, China; Mental Health Centre, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China.
17
Perez-Nieves N, Leung VCH, Dragotti PL, Goodman DFM. Neural heterogeneity promotes robust learning. Nat Commun 2021; 12:5791. [PMID: 34608134 PMCID: PMC8490404 DOI: 10.1038/s41467-021-26022-3] [Citation(s) in RCA: 50] [Impact Index Per Article: 16.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2021] [Accepted: 09/10/2021] [Indexed: 11/24/2022] Open
Abstract
The brain is a hugely diverse, heterogeneous structure. Whether or not heterogeneity at the neural level plays a functional role remains unclear, and it has been relatively little explored in models, which are often highly homogeneous. We compared the performance of spiking neural networks trained to carry out tasks of real-world difficulty, with varying degrees of heterogeneity, and found that heterogeneity substantially improved task performance. Learning with heterogeneity was more stable and robust, particularly for tasks with a rich temporal structure. In addition, the distributions of neuronal parameters in the trained networks are similar to those observed experimentally. We suggest that the heterogeneity observed in the brain may not be merely a byproduct of noisy processes, but rather may serve an active and important role in allowing animals to learn in changing environments.
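A minimal sketch of why heterogeneous time constants matter (a rate-based leaky integrator, not the paper's trained spiking networks): neurons with different membrane time constants integrate the same input over different temporal windows.

```python
import numpy as np

def leaky_layer(inputs, tau, dt=1.0):
    """Rate-based leaky integration with per-neuron time constants.

    inputs: (time, neurons); tau: (neurons,). Heterogeneous tau lets
    different neurons integrate the same input over different windows.
    (A sketch only: the paper trains spiking networks; spiking and
    reset are omitted here.)
    """
    alpha = np.exp(-dt / np.asarray(tau, dtype=float))
    v = np.zeros(inputs.shape[1])
    out = np.empty_like(inputs, dtype=float)
    for t, x in enumerate(inputs):
        v = alpha * v + (1.0 - alpha) * x  # exponential smoothing per neuron
        out[t] = v
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((5000, 4))           # shared noisy input
taus = np.array([2.0, 5.0, 20.0, 50.0])      # heterogeneous time constants
y = leaky_layer(x, taus)
# Slower (larger-tau) neurons produce smoother traces of the same input.
print(np.round(y.std(axis=0), 3))
```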
Affiliation(s)
- Nicolas Perez-Nieves
- Department of Electrical and Electronic Engineering, Imperial College London, London, SW7 2AZ, UK.
- Vincent C H Leung
- Department of Electrical and Electronic Engineering, Imperial College London, London, SW7 2AZ, UK
- Pier Luigi Dragotti
- Department of Electrical and Electronic Engineering, Imperial College London, London, SW7 2AZ, UK
- Dan F M Goodman
- Department of Electrical and Electronic Engineering, Imperial College London, London, SW7 2AZ, UK.
18
Owen LLW, Chang TH, Manning JR. High-level cognition during story listening is reflected in high-order dynamic correlations in neural activity patterns. Nat Commun 2021; 12:5728. [PMID: 34593791 PMCID: PMC8484677 DOI: 10.1038/s41467-021-25876-x] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2019] [Accepted: 08/24/2021] [Indexed: 02/08/2023] Open
Abstract
Our thoughts arise from coordinated patterns of interactions between brain structures that change with our ongoing experiences. High-order dynamic correlations in neural activity patterns reflect different subgraphs of the brain's functional connectome that display homologous lower-level dynamic correlations. Here we test the hypothesis that high-level cognition is reflected in high-order dynamic correlations in brain activity patterns. We develop an approach to estimating high-order dynamic correlations in time series data, and we apply the approach to neuroimaging data collected as human participants either listen to a ten-minute story or listen to a temporally scrambled version of the story. We train across-participant pattern classifiers to decode (in held-out data) when in the session each neural activity snapshot was collected. We find that classifiers trained to decode from high-order dynamic correlations yield the best performance on data collected as participants listened to the (unscrambled) story. By contrast, classifiers trained to decode data from scrambled versions of the story yield the best performance when they were trained using first-order dynamic correlations or non-correlational activity patterns. We suggest that as our thoughts become more complex, they are reflected in higher-order patterns of dynamic network interactions throughout the brain.
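The core idea of correlations-of-correlations can be sketched with plain sliding-window estimates (the paper's actual estimator is more sophisticated; this toy version is only illustrative): compute windowed pairwise correlations, then treat those correlation time series as a new dataset and correlate them again.

```python
import numpy as np

def dynamic_correlations(data, window=20):
    """First-order dynamic correlations via sliding windows.

    data: (time, features) array. Returns (n_windows, n_pairs) holding the
    upper-triangle correlations of each window. Feeding the output back
    into this function gives a rough second-order analogue of the
    correlations-of-correlations idea (a sketch, not the paper's
    kernel-weighted, dimensionality-reduced estimator).
    """
    t, k = data.shape
    iu = np.triu_indices(k, 1)
    rows = []
    for start in range(t - window + 1):
        c = np.corrcoef(data[start:start + window].T)
        rows.append(c[iu])
    return np.array(rows)

rng = np.random.default_rng(0)
x = rng.standard_normal((200, 5))        # toy "neural" time series
order1 = dynamic_correlations(x)         # pairwise correlation dynamics
order2 = dynamic_correlations(order1)    # correlations among correlations
print(order1.shape, order2.shape)
```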
Affiliation(s)
- Lucy L W Owen
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Thomas H Chang
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Amazon.com, Seattle, WA, USA
- Jeremy R Manning
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA.
19
Nastase SA, Liu YF, Hillman H, Zadbood A, Hasenfratz L, Keshavarzian N, Chen J, Honey CJ, Yeshurun Y, Regev M, Nguyen M, Chang CHC, Baldassano C, Lositsky O, Simony E, Chow MA, Leong YC, Brooks PP, Micciche E, Choe G, Goldstein A, Vanderwal T, Halchenko YO, Norman KA, Hasson U. The "Narratives" fMRI dataset for evaluating models of naturalistic language comprehension. Sci Data 2021; 8:250. [PMID: 34584100 PMCID: PMC8479122 DOI: 10.1038/s41597-021-01033-3] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2021] [Accepted: 08/18/2021] [Indexed: 02/08/2023] Open
Abstract
The "Narratives" collection aggregates a variety of functional MRI datasets collected while human subjects listened to naturalistic spoken stories. The current release includes 345 subjects, 891 functional scans, and 27 diverse stories of varying duration totaling ~4.6 hours of unique stimuli (~43,000 words). This data collection is well-suited for naturalistic neuroimaging analysis, and is intended to serve as a benchmark for models of language and narrative comprehension. We provide standardized MRI data accompanied by rich metadata, preprocessed versions of the data ready for immediate use, and the spoken story stimuli with time-stamped phoneme- and word-level transcripts. All code and data are publicly available with full provenance in keeping with current best practices in transparent and reproducible neuroimaging.
Affiliation(s)
- Samuel A Nastase
- Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, NJ, USA.
- Yun-Fei Liu
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Hanna Hillman
- Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, NJ, USA
- Asieh Zadbood
- Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, NJ, USA
- Liat Hasenfratz
- Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, NJ, USA
- Neggin Keshavarzian
- Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, NJ, USA
- Janice Chen
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Christopher J Honey
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Yaara Yeshurun
- School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel
- Mor Regev
- Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- Mai Nguyen
- Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, NJ, USA
- Claire H C Chang
- Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, NJ, USA
- Olga Lositsky
- Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Providence, RI, USA
- Erez Simony
- Faculty of Electrical Engineering, Holon Institute of Technology, Holon, Israel
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
- Yuan Chang Leong
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA
- Paula P Brooks
- Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, NJ, USA
- Emily Micciche
- Peabody College, Vanderbilt University, Nashville, TN, USA
- Gina Choe
- Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, NJ, USA
- Ariel Goldstein
- Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, NJ, USA
- Tamara Vanderwal
- Department of Psychiatry, University of British Columbia, and BC Children's Hospital Research Institute, Vancouver, BC, Canada
- Yaroslav O Halchenko
- Department of Psychological and Brain Sciences and Department of Computer Science, Dartmouth College, Hanover, NH, USA
- Kenneth A Norman
- Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, NJ, USA
- Uri Hasson
- Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, NJ, USA
20
Baumgarten TJ, Maniscalco B, Lee JL, Flounders MW, Abry P, He BJ. Neural integration underlying naturalistic prediction flexibly adapts to varying sensory input rate. Nat Commun 2021; 12:2643. [PMID: 33976118 PMCID: PMC8113607 DOI: 10.1038/s41467-021-22632-z] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2020] [Accepted: 03/16/2021] [Indexed: 02/03/2023] Open
Abstract
Prediction of future sensory input based on past sensory information is essential for organisms to effectively adapt their behavior in dynamic environments. Humans successfully predict future stimuli in various natural settings. Yet, it remains elusive how the brain achieves effective prediction despite enormous variations in sensory input rate, which directly affect how fast sensory information can accumulate. We presented participants with acoustic sequences capturing temporal statistical regularities prevalent in nature and investigated neural mechanisms underlying predictive computation using MEG. By parametrically manipulating sequence presentation speed, we tested two hypotheses: neural prediction relies on integrating past sensory information over fixed time periods or fixed amounts of information. We demonstrate that across halved and doubled presentation speeds, predictive information in neural activity stems from integration over fixed amounts of information. Our findings reveal the neural mechanisms enabling humans to robustly predict dynamic stimuli in natural environments despite large sensory input rate variations.
Affiliation(s)
- Thomas J Baumgarten
- Neuroscience Institute, New York University School of Medicine, New York, NY, USA
- Institute of Clinical Neuroscience and Medical Psychology, Medical Faculty, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Brian Maniscalco
- Neuroscience Institute, New York University School of Medicine, New York, NY, USA
- Jennifer L Lee
- Neuroscience Graduate Program, New York University, New York, NY, USA
- Matthew W Flounders
- Neuroscience Institute, New York University School of Medicine, New York, NY, USA
- Patrice Abry
- CNRS, Laboratoire de Physique, Université de Lyon, ENS Lyon, Lyon, France
- Biyu J He
- Neuroscience Institute, New York University School of Medicine, New York, NY, USA.
- Departments of Neurology, Neuroscience and Physiology, and Radiology, New York University School of Medicine, New York, NY, USA.
21
Lee CS, Aly M, Baldassano C. Anticipation of temporally structured events in the brain. eLife 2021; 10:e64972. [PMID: 33884953 PMCID: PMC8169103 DOI: 10.7554/elife.64972] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2020] [Accepted: 04/21/2021] [Indexed: 01/17/2023] Open
Abstract
Learning about temporal structure is adaptive because it enables the generation of expectations. We examined how the brain uses experience in structured environments to anticipate upcoming events. During fMRI (functional magnetic resonance imaging), individuals watched a 90 s movie clip six times. Using a hidden Markov model applied to searchlights across the whole brain, we identified temporal shifts between activity patterns evoked by the first vs. repeated viewings of the movie clip. In many regions throughout the cortex, neural activity patterns for repeated viewings shifted to precede those of initial viewing by up to 15 s. This anticipation varied hierarchically in a posterior (less anticipation) to anterior (more anticipation) fashion. We also identified specific regions in which the timing of the brain's event boundaries was related to those of human-labeled event boundaries, with the timing of this relationship shifting on repeated viewings. With repeated viewing, the brain's event boundaries came to precede human-annotated boundaries by 1-4 s on average. Together, these results demonstrate a hierarchy of anticipatory signals in the human brain and link them to subjective experiences of events.
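The direction of the reported shift can be illustrated with a toy lag analysis (a hypothetical cross-correlation stand-in; the paper itself used hidden Markov models on multivoxel patterns): find the shift at which repeat-viewing activity best matches first-viewing activity.

```python
import numpy as np

def anticipation_lag(first_view, repeat_view, max_lag=20):
    """Lag (in samples) by which repeat-view activity leads first-view.

    A hypothetical cross-correlation stand-in for the paper's HMM
    analysis: find the forward shift of first_view that best matches
    repeat_view. Inputs are 1-D regional activity time courses.
    """
    best_lag, best_r = 0, -np.inf
    for lag in range(max_lag + 1):
        a = first_view[lag:]
        b = repeat_view[:len(a)]
        r = np.corrcoef(a[:len(b)], b)[0, 1]
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag

# Toy check: the repeat-view trace carries the same content 5 samples earlier.
t = np.arange(305)
trace = np.sin(t / 7.0) + 0.5 * np.sin(t / 3.0)
first = trace[:-5]   # content as it unfolds on first viewing
repeat = trace[5:]   # same content, arriving 5 samples sooner
print(anticipation_lag(first, repeat))
```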
Affiliation(s)
- Caroline S Lee
- Columbia University, Department of Psychology, New York, United States
- Dartmouth College, Department of Psychological and Brain Sciences, Hanover, United States
- Mariam Aly
- Columbia University, Department of Psychology, New York, United States
22
Liu X, Chen GD, Salvi R. Neuroplastic changes in auditory cortex induced by long-duration "non-traumatic" noise exposures are triggered by deficits in the neural output of the cochlea. Hear Res 2021; 404:108203. [PMID: 33618162 DOI: 10.1016/j.heares.2021.108203] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/16/2020] [Revised: 01/14/2021] [Accepted: 02/02/2021] [Indexed: 11/16/2022]
Abstract
Long-term exposure to moderate-intensity noise that does not cause measurable hearing loss can cause striking changes in sound-evoked neural activity in auditory cortex. It is unclear if these changes originate in the cortex or result from functional deficits in the neural output of the cochlea. To explore this issue, rats were exposed for 6 weeks to 18-24 kHz noise at 45, 65, or 85 dB SPL, and the noise-induced changes in the cochlear compound action potential (CAP) were compared with the neurophysiological alterations in the anterior auditory field (AAF) of auditory cortex. The 45-dB exposure, which had no effect on the cochlear CAP, also had no effect on the AAF. In contrast, the 85-dB exposure greatly reduced CAP amplitudes at high frequencies, but had little or no effect on low frequencies. Despite the large reduction in high-frequency CAP neural responses, high-frequency AAF neural responses (spike rate and local field potential amplitude) remained largely within normal limits, evidence of central gain compensation. AAF responses were also enhanced at the low frequencies even though CAP responses were normal; this AAF hyperactivity only occurred at low-to-moderate intensities (level-dependent enhanced central gain). The 65-dB exposure also caused a moderate reduction in high-frequency CAP amplitudes. Notwithstanding this cochlear loss, AAF responses were boosted into the normal range, evidence of homeostatic gain compensation. Our results suggest that the noise-induced neuroplastic changes in the auditory cortex from so-called "non-traumatic" exposures are triggered by functional deficits in the neural output of the cochlea.
Affiliation(s)
- Xiaopeng Liu
- Center for Hearing and Deafness, SUNY at Buffalo, 137 Cary Hall, 3435 Main Street, Buffalo, NY 14214, USA
- Guang-Di Chen
- Center for Hearing and Deafness, SUNY at Buffalo, 137 Cary Hall, 3435 Main Street, Buffalo, NY 14214, USA.
- Richard Salvi
- Center for Hearing and Deafness, SUNY at Buffalo, 137 Cary Hall, 3435 Main Street, Buffalo, NY 14214, USA
23
Movies and narratives as naturalistic stimuli in neuroimaging. Neuroimage 2020; 224:117445. [PMID: 33059053 PMCID: PMC7805386 DOI: 10.1016/j.neuroimage.2020.117445] [Citation(s) in RCA: 54] [Impact Index Per Article: 13.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2020] [Revised: 10/06/2020] [Accepted: 10/09/2020] [Indexed: 01/06/2023] Open
Abstract
Using movies and narratives as naturalistic stimuli in human neuroimaging studies has yielded significant advances in understanding of cognitive and emotional functions. The relevant literature was reviewed, with emphasis on how the use of naturalistic stimuli has helped advance scientific understanding of human memory, attention, language, emotions, and social cognition in ways that would have been difficult otherwise. These advances include discovering a cortical hierarchy of temporal receptive windows, which supports processing of dynamic information that accumulates over several time scales, such as immediate reactions vs. slowly emerging patterns in social interactions. Naturalistic stimuli have also helped elucidate how the hippocampus supports segmentation and memorization of events in day-to-day life and have afforded insights into attentional brain mechanisms underlying our ability to adopt specific perspectives during natural viewing. Further, neuroimaging studies with naturalistic stimuli have revealed the role of the default-mode network in narrative-processing and in social cognition. Finally, by robustly eliciting genuine emotions, these stimuli have helped elucidate the brain basis of both basic and social emotions apparently manifested as highly overlapping yet distinguishable patterns of brain activity.
24
Blank IA, Fedorenko E. No evidence for differences among language regions in their temporal receptive windows. Neuroimage 2020; 219:116925. [PMID: 32407994 PMCID: PMC9392830 DOI: 10.1016/j.neuroimage.2020.116925] [Citation(s) in RCA: 36] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2019] [Revised: 03/20/2020] [Accepted: 05/06/2020] [Indexed: 10/24/2022] Open
Abstract
The "core language network" consists of left frontal and temporal regions that are selectively engaged in linguistic processing. Whereas functional differences among these regions have long been debated, many accounts propose distinctions in terms of representational grain-size-e.g., words vs. phrases/sentences-or processing time-scale, i.e., operating on local linguistic features vs. larger spans of input. Indeed, the topography of language regions appears to overlap with a cortical hierarchy reported by Lerner et al. (2011) wherein mid-posterior temporal regions are sensitive to low-level features of speech, surrounding areas-to word-level information, and inferior frontal areas-to sentence-level information and beyond. However, the correspondence between the language network and this hierarchy of "temporal receptive windows" (TRWs) is difficult to establish because the precise anatomical locations of language regions vary across individuals. To directly test this correspondence, we first identified language regions in each participant with a well-validated task-based localizer, which confers high functional resolution to the study of TRWs (traditionally based on stereotactic coordinates); then, we characterized regional TRWs with the naturalistic story listening paradigm of Lerner et al. (2011), which augments task-based characterizations of the language network by more closely resembling comprehension "in the wild". We find no region-by-TRW interactions across temporal and inferior frontal regions, which are all sensitive to both word-level and sentence-level information. Therefore, the language network as a whole constitutes a unique stage of information integration within a broader cortical hierarchy.
Affiliation(s)
- Idan A Blank
- Department of Brain and Cognitive Sciences and McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA.
- Evelina Fedorenko
- Department of Brain and Cognitive Sciences and McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
25
Hierarchical dynamics as a macroscopic organizing principle of the human brain. Proc Natl Acad Sci U S A 2020; 117:20890-20897. [PMID: 32817467 DOI: 10.1073/pnas.2003383117] [Citation(s) in RCA: 97] [Impact Index Per Article: 24.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022] Open
Abstract
Multimodal evidence suggests that brain regions accumulate information over timescales that vary according to anatomical hierarchy. Thus, these experimentally defined "temporal receptive windows" are longest in cortical regions that are distant from sensory input. Interestingly, spontaneous activity in these regions also plays out over relatively slow timescales (i.e., exhibits slower temporal autocorrelation decay). These findings raise the possibility that hierarchical timescales represent an intrinsic organizing principle of brain function. Here, using resting-state functional MRI, we show that the timescale of ongoing dynamics follows hierarchical spatial gradients throughout human cerebral cortex. These intrinsic timescale gradients give rise to systematic frequency differences among large-scale cortical networks and predict individual-specific features of functional connectivity. Whole-brain coverage permitted us to further investigate the large-scale organization of subcortical dynamics. We show that cortical timescale gradients are topographically mirrored in striatum, thalamus, and cerebellum. Finally, timescales in the hippocampus followed a posterior-to-anterior gradient, corresponding to the longitudinal axis of increasing representational scale. Thus, hierarchical dynamics emerge as a global organizing principle of mammalian brains.
Collapse
|
26
|
Bellmund JLS, Polti I, Doeller CF. Sequence Memory in the Hippocampal-Entorhinal Region. J Cogn Neurosci 2020; 32:2056-2070. [PMID: 32530378 DOI: 10.1162/jocn_a_01592] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Episodic memories are constructed from sequences of events. When recalling such a memory, we not only recall individual events, but we also retrieve information about how the sequence of events unfolded. Here, we focus on the role of the hippocampal-entorhinal region in processing and remembering sequences of events, which are thought to be stored in relational networks. We summarize evidence that temporal relations are a central organizational principle for memories in the hippocampus. Importantly, we incorporate novel insights from recent studies about the role of the adjacent entorhinal cortex in sequence memory. In rodents, the lateral entorhinal subregion carries temporal information during ongoing behavior. The human homologue is recruited during memory recall, where its representations reflect the temporal relationships between events encountered in a sequence. We further introduce the idea that the hippocampal-entorhinal region might enable temporal scaling of sequence representations. Flexible changes of sequence progression speed could underlie the traversal of episodic memories and mental simulations at different paces. In conclusion, we describe how the entorhinal cortex and hippocampus contribute to remembering event sequences, a core component of episodic memory.
Collapse
Affiliation(s)
- Jacob L S Bellmund
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Ignacio Polti
- Kavli Institute for Systems Neuroscience, Centre for Neural Computation, The Egil and Pauline Braathen and Fred Kavli Centre for Cortical Microcircuits, Norwegian University of Science and Technology, Trondheim, Norway
| | - Christian F Doeller
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Kavli Institute for Systems Neuroscience, Centre for Neural Computation, The Egil and Pauline Braathen and Fred Kavli Centre for Cortical Microcircuits, Norwegian University of Science and Technology, Trondheim, Norway
| |
Collapse
|
27
|
Temporal integration of narrative information in a hippocampal amnesic patient. Neuroimage 2020; 213:116658. [PMID: 32084563 DOI: 10.1016/j.neuroimage.2020.116658] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2019] [Revised: 02/06/2020] [Accepted: 02/13/2020] [Indexed: 11/21/2022] Open
Abstract
Default network regions appear to integrate information over time windows of 30 s or more during narrative listening. Does this long-timescale capability require the hippocampus? Amnesic behavior suggests that regions other than the hippocampus can independently support some online processing when input is continuous and semantically rich: amnesics can participate in conversations and tell stories spanning minutes, and when tested immediately on recently heard prose they are able to retain some information. We hypothesized that default network regions can integrate the semantically coherent information of a narrative across long time windows, even in the absence of an intact hippocampus. To test this prediction, we measured BOLD activity in the brain of a hippocampal amnesic patient (D.A.) and healthy control participants while they listened to a 7 min narrative. The narrative was played either in its intact form or as a paragraph-scrambled version, which has previously been shown to interfere with the long-range temporal dependencies in default network activity. In the intact story condition, the moment-by-moment spatial patterns of D.A.'s BOLD activity were similar to those of controls in low-level auditory cortex as well as in some high-level default network regions (including lateral and medial posterior parietal cortex). Moreover, as in controls, D.A.'s response patterns in medial and lateral posterior parietal cortex were disrupted when paragraphs of the story were presented in a shuffled order, suggesting that activity in these areas did depend on information from 30 s or more in the past. Together, these results suggest that some default network cortical areas can integrate information across long timescales, even when the hippocampus is severely damaged.
Collapse
|
28
|
Slayton MA, Romero-Sosa JL, Shore K, Buonomano DV, Viskontas IV. Musical expertise generalizes to superior temporal scaling in a Morse code tapping task. PLoS One 2020; 15:e0221000. [PMID: 31905200 PMCID: PMC6944339 DOI: 10.1371/journal.pone.0221000] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2019] [Accepted: 12/10/2019] [Indexed: 11/26/2022] Open
Abstract
A key feature of the brain’s ability to tell time and generate complex temporal patterns is its capacity to produce similar temporal patterns at different speeds. For example, humans can tie a shoe, type, or play an instrument at different speeds or tempi—a phenomenon referred to as temporal scaling. While it is well established that training improves timing precision and accuracy, it is not known whether expertise improves temporal scaling, and if so, whether it generalizes across skill domains. We quantified temporal scaling and timing precision in musicians and non-musicians as they learned to tap a Morse code sequence. We found that non-musicians improved significantly over the course of days of training at the standard speed. In contrast, musicians exhibited a high level of temporal precision on the first day, which did not improve significantly with training. Although there was no significant difference in performance at the end of training at the standard speed, musicians were significantly better at temporal scaling—i.e., at reproducing the learned Morse code pattern at faster and slower speeds. Interestingly, both musicians and non-musicians exhibited a Weber-speed effect, where temporal precision at the same absolute time was higher when producing patterns at the faster speed. These results are the first to establish that the ability to generate the same motor patterns at different speeds improves with extensive training and generalizes to non-musical domains.
Collapse
Affiliation(s)
- Matthew A. Slayton
- San Francisco Conservatory of Music, San Francisco, CA, United States of America
| | - Juan L. Romero-Sosa
- Department of Neurobiology, University of California Los Angeles, Los Angeles, CA, United States of America
- Neuroscience Interdepartmental Program, University of California Los Angeles, Los Angeles, CA, United States of America
| | - Katrina Shore
- San Francisco Conservatory of Music, San Francisco, CA, United States of America
| | - Dean V. Buonomano
- Department of Neurobiology, University of California Los Angeles, Los Angeles, CA, United States of America
- Neuroscience Interdepartmental Program, University of California Los Angeles, Los Angeles, CA, United States of America
- Department of Psychology, University of California Los Angeles, Los Angeles, CA, United States of America
- * E-mail: (DVB); (IVV)
| | - Indre V. Viskontas
- San Francisco Conservatory of Music, San Francisco, CA, United States of America
- Department of Psychology, University of San Francisco, San Francisco, CA, United States of America
- * E-mail: (DVB); (IVV)
| |
Collapse
|
29
|
Heeger DJ, Mackey WE. Oscillatory recurrent gated neural integrator circuits (ORGaNICs), a unifying theoretical framework for neural dynamics. Proc Natl Acad Sci U S A 2019; 116:22783-22794. [PMID: 31636212 PMCID: PMC6842604 DOI: 10.1073/pnas.1911633116] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Working memory is an example of a cognitive and neural process that is not static but evolves dynamically with changing sensory inputs; another example is motor preparation and execution. We introduce a theoretical framework for neural dynamics, based on oscillatory recurrent gated neural integrator circuits (ORGaNICs), and apply it to simulate key phenomena of working memory and motor control. The model circuits simulate neural activity with complex dynamics, including sequential activity and traveling waves of activity, that manipulate (as well as maintain) information during working memory. The same circuits convert spatial patterns of premotor activity to temporal profiles of motor control activity and manipulate (e.g., time warp) the dynamics. Derivative-like recurrent connectivity, in particular, serves to manipulate and update internal models, an essential feature of working memory and motor execution. In addition, these circuits incorporate recurrent normalization, to ensure stability over time and robustness with respect to perturbations of synaptic weights.
Collapse
Affiliation(s)
- David J Heeger
- Department of Psychology, New York University, New York, NY 10003
- Center for Neural Science, New York University, New York, NY 10003
| | - Wayne E Mackey
- Department of Psychology, New York University, New York, NY 10003
- Center for Neural Science, New York University, New York, NY 10003
| |
Collapse
|
30
|
Nastase SA, Gazzola V, Hasson U, Keysers C. Measuring shared responses across subjects using intersubject correlation. Soc Cogn Affect Neurosci 2019; 14:667-685. [PMID: 31099394 PMCID: PMC6688448 DOI: 10.1093/scan/nsz037] [Citation(s) in RCA: 112] [Impact Index Per Article: 22.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2019] [Revised: 05/10/2019] [Accepted: 05/13/2019] [Indexed: 12/18/2022] Open
Abstract
Our capacity to jointly represent information about the world underpins our social experience. By leveraging one individual's brain activity to model another's, we can measure shared information across brains, even in dynamic, naturalistic scenarios where an explicit response model may be unobtainable. Introducing experimental manipulations allows us to measure, for example, shared responses between speakers and listeners or between perception and recall. In this tutorial, we develop the logic of intersubject correlation (ISC) analysis and discuss the family of neuroscientific questions that stem from this approach. We also extend this logic to spatially distributed response patterns and functional network estimation. We provide a thorough and accessible treatment of methodological considerations specific to ISC analysis and outline best practices.
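The core computation can be sketched in a few lines of numpy. This is the leave-one-out variant discussed in such tutorials: each subject's regional time course is correlated with the average time course of all other subjects (the array shapes and toy data here are illustrative):

```python
import numpy as np

def leave_one_out_isc(data):
    """Leave-one-out intersubject correlation (ISC).

    data: (n_subjects, n_timepoints) response time courses for one
    brain region. Returns one Pearson r per subject, correlating that
    subject with the mean time course of everyone else."""
    data = np.asarray(data, dtype=float)
    r = np.empty(data.shape[0])
    for s in range(data.shape[0]):
        others = np.delete(data, s, axis=0).mean(axis=0)
        r[s] = np.corrcoef(data[s], others)[0, 1]
    return r

# Toy subjects = shared stimulus-driven signal + idiosyncratic noise.
rng = np.random.default_rng(1)
shared = rng.standard_normal(300)
subjects = shared + 0.5 * rng.standard_normal((10, 300))
isc = leave_one_out_isc(subjects)
```

Because only the stimulus-driven component is shared across subjects, ISC isolates it from idiosyncratic noise, which is what makes the method useful when no explicit response model exists.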
Collapse
Affiliation(s)
- Samuel A Nastase
- Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, NJ 08544, USA
| | - Valeria Gazzola
- Social Brain Lab, Netherlands Institute for Neuroscience, KNAW, 105BA Amsterdam, The Netherlands
- Department of Psychology, University of Amsterdam, 1018 WV Amsterdam, The Netherlands
| | - Uri Hasson
- Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, NJ 08544, USA
| | - Christian Keysers
- Social Brain Lab, Netherlands Institute for Neuroscience, KNAW, 105BA Amsterdam, The Netherlands
- Department of Psychology, University of Amsterdam, 1018 WV Amsterdam, The Netherlands
| |
Collapse
|
31
|
Itoh K, Nejime M, Konoike N, Nakamura K, Nakada T. Evolutionary Elongation of the Time Window of Integration in Auditory Cortex: Macaque vs. Human Comparison of the Effects of Sound Duration on Auditory Evoked Potentials. Front Neurosci 2019; 13:630. [PMID: 31293370 PMCID: PMC6601703 DOI: 10.3389/fnins.2019.00630] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2019] [Accepted: 05/31/2019] [Indexed: 11/29/2022] Open
Abstract
The auditory cortex integrates auditory information over time to obtain neural representations of sound events, the time scale of which critically affects perception. This work investigated the species differences in the time scale of integration by comparing humans and monkeys regarding how their scalp-recorded cortical auditory evoked potentials (CAEPs) decrease in amplitude as stimulus duration is shortened from 100 ms (or longer) to 2 ms. Cortical circuits tuned to processing sounds at short time scales would continue to produce large CAEPs to brief sounds whereas those tuned to longer time scales would produce diminished responses. Four peaks were identified in the CAEPs and labeled P1, N1, P2, and N2 in humans and mP1, mN1, mP2, and mN2 in monkeys. In humans, the N1 diminished in amplitude as sound duration was decreased, consistent with the previously described temporal integration window of N1 (>50 ms). In macaques, by contrast, the mN1 was unaffected by sound duration, and it was clearly elicited by even the briefest sounds. Brief sounds also elicited significant mN2 in the macaque, but not the human N2. Regarding earlier latencies, both P1 (humans) and mP1 (macaques) were elicited at their full amplitudes even by the briefest sounds. These findings suggest an elongation of the time scale of late stages of human auditory cortical processing, as reflected by N1/mN1 and later CAEP components. Longer time scales of integration would allow neural representations of complex auditory features that characterize speech and music.
Collapse
Affiliation(s)
- Kosuke Itoh
- Center for Integrated Human Brain Science, Brain Research Institute, Niigata University, Niigata, Japan
| | - Masafumi Nejime
- Cognitive Neuroscience Section, Primate Research Institute, Kyoto University, Kyoto, Japan
| | - Naho Konoike
- Cognitive Neuroscience Section, Primate Research Institute, Kyoto University, Kyoto, Japan
| | - Katsuki Nakamura
- Cognitive Neuroscience Section, Primate Research Institute, Kyoto University, Kyoto, Japan
| | - Tsutomu Nakada
- Center for Integrated Human Brain Science, Brain Research Institute, Niigata University, Niigata, Japan
| |
Collapse
|
32
|
Tóth B, Farkas D, Urbán G, Szalárdy O, Orosz G, Hunyadi L, Hajdu B, Kovács A, Szabó BT, Shestopalova LB, Winkler I. Attention and speech-processing related functional brain networks activated in a multi-speaker environment. PLoS One 2019; 14:e0212754. [PMID: 30818389 PMCID: PMC6394951 DOI: 10.1371/journal.pone.0212754] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2018] [Accepted: 02/10/2019] [Indexed: 11/19/2022] Open
Abstract
Human listeners can focus on one speech stream out of several concurrent ones. The present study aimed to assess the whole-brain functional networks underlying a) the process of focusing attention on a single speech stream vs. dividing attention between two streams and b) speech processing at different timescales and depths. Two spoken narratives were presented simultaneously while listeners were instructed to a) track and memorize the contents of a speech stream and b) detect the presence of numerals or syntactic violations in the same ("focused attention condition") or in the parallel stream ("divided attention condition"). Speech content tracking was found to be associated with stronger connectivity in lower frequency bands (delta band, 0.5-4 Hz), whereas the detection tasks were linked with networks operating in the faster alpha (8-10 Hz) and beta (13-30 Hz) bands. These results suggest that the oscillation frequencies of the dominant brain networks during speech processing may be related to the duration of the time window within which information is integrated. We also found that focusing attention on a single speaker compared to dividing attention between two concurrent speakers was predominantly associated with connections involving the frontal cortices in the delta (0.5-4 Hz), alpha (8-10 Hz), and beta bands (13-30 Hz), whereas dividing attention between two parallel speech streams was linked with stronger connectivity involving the parietal cortices in the delta and beta frequency bands. Overall, connections strengthened by focused attention may reflect control over information selection, whereas connections strengthened by divided attention may reflect the need for maintaining two streams in parallel and the related control processes necessary for performing the tasks.
Collapse
Affiliation(s)
- Brigitta Tóth
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, Hungary
| | - Dávid Farkas
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, Hungary
- Department of Cognitive Science, Faculty of Natural Sciences, Budapest University of Technology and Economics, Budapest, Hungary
| | - Gábor Urbán
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, Hungary
- Department of Cognitive Science, Faculty of Natural Sciences, Budapest University of Technology and Economics, Budapest, Hungary
| | - Orsolya Szalárdy
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, Hungary
- Institute of Behavioural Sciences, Faculty of Medicine, Semmelweis University, Budapest, Hungary
| | - Gábor Orosz
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, Hungary
- Department of Social and Educational Psychology, Eötvös Loránd University, Budapest, Hungary
| | - László Hunyadi
- Department of General and Applied Linguistic, University of Debrecen, Debrecen, Hungary
| | - Botond Hajdu
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, Hungary
| | - Annamária Kovács
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, Hungary
- Department of Telecommunication and Media Informatics, Budapest University of Technology and Economics, Budapest, Hungary
| | - Beáta Tünde Szabó
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, Hungary
- Faculty of Information Technology and Bionics, Pázmány Péter Catholic University, Piliscsaba, Hungary
| | | | - István Winkler
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, Hungary
| |
Collapse
|
33
|
Bola M, Orłowski P, Baranowska K, Schartner M, Marchewka A. Informativeness of Auditory Stimuli Does Not Affect EEG Signal Diversity. Front Psychol 2018; 9:1820. [PMID: 30319513 PMCID: PMC6168660 DOI: 10.3389/fpsyg.2018.01820] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2018] [Accepted: 09/06/2018] [Indexed: 11/13/2022] Open
Abstract
Brain signal diversity constitutes a robust neuronal marker of the global states of consciousness. It has been demonstrated that, in comparison to resting wakefulness, signal diversity is lower during unconscious states and higher during psychedelic states. A plausible interpretation of these findings is that neuronal diversity corresponds to the diversity of subjective conscious experiences. Therefore, in the present study we varied the information rate processed by the subjects and hypothesized that a greater information rate would be related to richer and more differentiated phenomenology and, consequently, to greater signal diversity. To test this hypothesis, speech recordings (excerpts from an audio-book) were presented to subjects at five different speeds (65, 83, 100, 117, and 135% of the original speed). By increasing or decreasing the speed of the recordings we were able to, respectively, increase or decrease the presented information rate. We also included a backward (unintelligible) speech presentation and a resting-state condition (no auditory stimulation). We tested 19 healthy subjects and analyzed the recorded EEG signal (64 channels) in terms of Lempel-Ziv diversity (LZs). We report the following findings. First, our main hypothesis was not confirmed, as the Bayes Factor indicated evidence for no effect when comparing LZs among the five presentation speeds. Second, we found that LZs during the resting state was greater than during processing of both meaningful and unintelligible speech. Third, an additional analysis uncovered a gradual decrease of diversity over the time course of the experiment, which might reflect a decrease in vigilance. We thus speculate that higher signal diversity during the unconstrained resting state might be due to a greater variety of experiences, involving spontaneous attention switching and mind wandering.
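Signal diversity measures of this kind are typically computed by binarizing each channel and counting Lempel-Ziv phrases. The sketch below uses a simple LZ78-style parsing and normalizes by a shuffled copy of the same sequence, one common convention; it illustrates the idea rather than reproducing the study's exact LZs implementation:

```python
import numpy as np

def lz_complexity(bits):
    """Count distinct phrases in an LZ78-style parsing of a binary sequence."""
    phrases, phrase = set(), ""
    for b in bits:
        phrase += "1" if b else "0"
        if phrase not in phrases:
            phrases.add(phrase)
            phrase = ""
    return len(phrases)

def signal_diversity(x):
    """Binarize a signal around its median, then normalize its LZ
    complexity by that of a randomly shuffled copy (values near 1
    indicate maximally diverse, noise-like activity)."""
    x = np.asarray(x, dtype=float)
    b = x > np.median(x)
    shuffled = np.random.default_rng(0).permutation(b)
    return lz_complexity(b) / lz_complexity(shuffled)

rng = np.random.default_rng(2)
noisy = rng.standard_normal(1000)                    # diverse, noise-like
regular = np.sin(np.linspace(0, 20 * np.pi, 1000))   # slow, repetitive
```

Shuffling destroys temporal structure while preserving the proportion of ones, so the normalized score reflects diversity of the temporal pattern rather than overall activation level.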
Collapse
Affiliation(s)
- Michał Bola
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
| | - Paweł Orłowski
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- Institute of Philosophy, University of Warsaw, Warsaw, Poland
- Faculty of Electronics and Information Technology, Warsaw University of Technology, Warsaw, Poland
| | - Karolina Baranowska
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- Faculty of Physics, Warsaw University of Technology, Warsaw, Poland
| | - Michael Schartner
- Département des Neurosciences Fondamentales, Université de Genève, Geneva, Switzerland
| | - Artur Marchewka
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
| |
Collapse
|
34
|
Hertrich I, Dietrich S, Ackermann H. Cortical phase locking to accelerated speech in blind and sighted listeners prior to and after training. BRAIN AND LANGUAGE 2018; 185:19-29. [PMID: 30025355 DOI: 10.1016/j.bandl.2018.07.002] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/05/2017] [Revised: 07/06/2018] [Accepted: 07/06/2018] [Indexed: 06/08/2023]
Abstract
Cross-correlation of magnetoencephalography (MEG) with time courses derived from the speech signal has shown differences in phase locking between blind subjects able to comprehend accelerated speech and sighted controls. The present training study helps to disentangle the effects of blindness and training. Both subject groups (baseline: n = 16 blind, 13 sighted; trained: 10 blind, 3 sighted) were able to enhance speech comprehension up to ca. 18 syllables per second. MEG responses phase-locked to syllable onsets were captured in five pre-defined source locations comprising left and right auditory cortex (A1), right visual cortex (V1), left inferior frontal gyrus (IFG), and left pre-supplementary motor area. Phase locking in A1 was consistently increased, whereas V1 showed opposite training effects in blind and sighted subjects. The IFG also showed group differences, suggesting enhanced top-down strategies in sighted subjects, while blind subjects may rely on a more fine-grained bottom-up resolution for accelerated speech.
Collapse
Affiliation(s)
- Ingo Hertrich
- Department of Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Germany.
| | - Susanne Dietrich
- Department of Psychology, Evolutionary Cognition (Cognitive Sciences), University of Tübingen, Germany
| | - Hermann Ackermann
- Department of Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Germany
| |
Collapse
|
35
|
Alexandrou AM, Saarinen T, Kujala J, Salmelin R. Cortical Tracking of Global and Local Variations of Speech Rhythm during Connected Natural Speech Perception. J Cogn Neurosci 2018; 30:1704-1719. [PMID: 29916785 DOI: 10.1162/jocn_a_01295] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
During natural speech perception, listeners must track the global speaking rate, that is, the overall rate of incoming linguistic information, as well as transient, local speaking rate variations occurring within the global speaking rate. Here, we address the hypothesis that this tracking mechanism is achieved through coupling of cortical signals to the amplitude envelope of the perceived acoustic speech signals. Cortical signals were recorded with magnetoencephalography (MEG) while participants perceived spontaneously produced speech stimuli at three global speaking rates (slow, normal/habitual, and fast). Inherently to spontaneously produced speech, these stimuli also featured local variations in speaking rate. The coupling between cortical and acoustic speech signals was evaluated using audio-MEG coherence. Modulations in audio-MEG coherence spatially differentiated between tracking of global speaking rate, highlighting the temporal cortex bilaterally and the right parietal cortex, and sensitivity to local speaking rate variations, emphasizing the left parietal cortex. Cortical tuning to the temporal structure of natural connected speech thus seems to require the joint contribution of both auditory and parietal regions. These findings suggest that cortical tuning to speech rhythm operates on two functionally distinct levels: one encoding the global rhythmic structure of speech and the other associated with online, rapidly evolving temporal predictions. Thus, it may be proposed that speech perception is shaped by evolutionary tuning, a preference for certain speaking rates, and predictive tuning, associated with cortical tracking of the constantly changing rate of linguistic information in a speech stream.
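Audio-MEG coherence of this kind reduces to magnitude-squared coherence between the speech amplitude envelope and the cortical signal. A numpy-only, Welch-style sketch follows; the window length, overlap, and the toy 4 Hz "syllable rate" are illustrative assumptions, not the study's analysis parameters:

```python
import numpy as np

def coherence(x, y, fs, nperseg=256):
    """Magnitude-squared coherence via Welch-style averaging of
    half-overlapping, Hann-windowed segments."""
    step, win = nperseg // 2, np.hanning(nperseg)
    pxx = pyy = 0.0
    pxy = 0.0 + 0.0j
    for start in range(0, len(x) - nperseg + 1, step):
        X = np.fft.rfft(win * x[start:start + nperseg])
        Y = np.fft.rfft(win * y[start:start + nperseg])
        pxx = pxx + np.abs(X) ** 2
        pyy = pyy + np.abs(Y) ** 2
        pxy = pxy + X * np.conj(Y)
    freqs = np.fft.rfftfreq(nperseg, 1.0 / fs)
    # small constant guards against division by zero in empty bins
    return freqs, np.abs(pxy) ** 2 / (pxx * pyy + 1e-20)

# Toy "envelope" oscillating at a 4 Hz syllable rate, and a "cortical"
# signal that tracks it through independent noise.
fs = 256
t = np.arange(10 * fs) / fs
envelope = np.sin(2 * np.pi * 4 * t)
rng = np.random.default_rng(3)
cortical = envelope + 0.5 * rng.standard_normal(t.size)
freqs, coh = coherence(envelope, cortical, fs)
```

Coherence peaks at the shared modulation frequency and stays near the chance floor elsewhere, which is what lets the method localize cortical tracking of specific speech rhythms.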
Collapse
|
36
|
Himberger KD, Chien HY, Honey CJ. Principles of Temporal Processing Across the Cortical Hierarchy. Neuroscience 2018; 389:161-174. [PMID: 29729293 DOI: 10.1016/j.neuroscience.2018.04.030] [Citation(s) in RCA: 46] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2017] [Revised: 04/17/2018] [Accepted: 04/19/2018] [Indexed: 12/20/2022]
Abstract
The world is richly structured on multiple spatiotemporal scales. In order to represent spatial structure, many machine-learning models repeat a set of basic operations at each layer of a hierarchical architecture. These iterated spatial operations - including pooling, normalization and pattern completion - enable these systems to recognize and predict spatial structure, while robust to changes in the spatial scale, contrast and noisiness of the input signal. Because our brains also process temporal information that is rich and occurs across multiple time scales, might the brain employ an analogous set of operations for temporal information processing? Here we define a candidate set of temporal operations, and we review evidence that they are implemented in the mammalian cerebral cortex in a hierarchical manner. We conclude that multiple consecutive stages of cortical processing can be understood to perform temporal pooling, temporal normalization and temporal pattern completion.
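Two of these candidate operations are easy to state concretely. The sketch below implements causal temporal max-pooling and temporal (divisive) normalization on a 1-D signal; the window sizes and the gain-invariance demonstration are illustrative assumptions, not claims about specific cortical parameters:

```python
import numpy as np

def temporal_pool(x, window):
    """Causal temporal max-pooling: each output is the max of the
    current sample and the preceding window-1 samples."""
    x = np.asarray(x, dtype=float)
    return np.array([x[max(0, t - window + 1): t + 1].max()
                     for t in range(len(x))])

def temporal_normalize(x, window, eps=1e-12):
    """Causal divisive normalization: each sample is divided by the
    mean magnitude of the recent past, discounting slow gain changes."""
    x = np.asarray(x, dtype=float)
    gain = np.array([np.abs(x[max(0, t - window + 1): t + 1]).mean()
                     for t in range(len(x))])
    return x / (gain + eps)

x = np.sin(np.linspace(0, 6 * np.pi, 200))
```

Divisive normalization makes the output robust to input gain: doubling the input leaves the normalized signal essentially unchanged (up to the eps guard), paralleling the contrast robustness the authors attribute to the spatial analogue of this operation.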
Collapse
Affiliation(s)
- Kevin D Himberger
- Department of Psychological & Brain Sciences, Johns Hopkins University, Baltimore, MD, United States
| | - Hsiang-Yun Chien
- Department of Psychological & Brain Sciences, Johns Hopkins University, Baltimore, MD, United States
| | - Christopher J Honey
- Department of Psychological & Brain Sciences, Johns Hopkins University, Baltimore, MD, United States.
| |
Collapse
|
37
|
White PA. Is conscious perception a series of discrete temporal frames? Conscious Cogn 2018; 60:98-126. [PMID: 29549714 DOI: 10.1016/j.concog.2018.02.012] [Citation(s) in RCA: 33] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2018] [Revised: 02/21/2018] [Accepted: 02/21/2018] [Indexed: 10/17/2022]
Abstract
This paper reviews proposals that conscious perception consists, in whole or part, of successive discrete temporal frames on the sub-second time scale, each frame containing information registered as simultaneous or static. Although the idea of discrete frames in conscious perception cannot be regarded as falsified, there are many problems. Evidence does not consistently support any proposed duration or range of durations for frames. EEG waveforms provide evidence of periodicity in brain activity, but not necessarily in conscious perception. Temporal properties of perceptual processes are flexible in response to competing processing demands, which is hard to reconcile with the relative inflexibility of regular frames. There are also problems concerning the definition of frames, the need for informational connections between frames, the means by which boundaries between frames are established, and the apparent requirement for a storage buffer for information awaiting entry to the next frame.
Collapse
Affiliation(s)
- Peter A White
- School of Psychology, Cardiff University, Tower Building, Park Place, Cardiff CF10 3YG, Wales, UK.
| |
Collapse
|
38
|
Goudar V, Buonomano DV. Encoding sensory and motor patterns as time-invariant trajectories in recurrent neural networks. eLife 2018. [PMID: 29537963 PMCID: PMC5851701 DOI: 10.7554/elife.31134] [Citation(s) in RCA: 53] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/03/2023] Open
Abstract
Much of the information the brain processes and stores is temporal in nature—a spoken word or a handwritten signature, for example, is defined by how it unfolds in time. However, it remains unclear how neural circuits encode complex time-varying patterns. We show that by tuning the weights of a recurrent neural network (RNN), it can recognize and then transcribe spoken digits. The model elucidates how neural dynamics in cortical networks may resolve three fundamental challenges: first, encode multiple time-varying sensory and motor patterns as stable neural trajectories; second, generalize across relevant spatial features; third, identify the same stimuli played at different speeds—we show that this temporal invariance emerges because the recurrent dynamics generate neural trajectories with appropriately modulated angular velocities. Together our results generate testable predictions as to how recurrent networks may use different mechanisms to generalize across the relevant spatial and temporal features of complex time-varying stimuli.
Collapse
Affiliation(s)
- Vishwa Goudar
- Department of Neurobiology, University of California, Los Angeles, Los Angeles, United States
- Dean V Buonomano
- Department of Neurobiology, University of California, Los Angeles, Los Angeles, United States; Integrative Center for Learning and Memory, University of California, Los Angeles, Los Angeles, United States; Department of Psychology, University of California, Los Angeles, Los Angeles, United States
|
39
|
|
40
|
Breakdown of long-range temporal correlations in brain oscillations during general anesthesia. Neuroimage 2017; 159:146-158. [DOI: 10.1016/j.neuroimage.2017.07.047] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2016] [Revised: 07/13/2017] [Accepted: 07/22/2017] [Indexed: 01/19/2023] Open
|
41
|
Northoff G. Personal Identity and Cortical Midline Structure (CMS): Do Temporal Features of CMS Neural Activity Transform Into “Self-Continuity”? PSYCHOLOGICAL INQUIRY 2017. [DOI: 10.1080/1047840x.2017.1337396] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Affiliation(s)
- Georg Northoff
- Mental Health Centre, Zhejiang University School of Medicine, Hangzhou, China
- Institute of Mental Health Research, University of Ottawa, Ottawa, Ontario, Canada
- Centre for Cognition and Brain Disorders, Hangzhou Normal University, Hangzhou, China
- Centre for Brain and Consciousness, Taipei Medical University, Taipei, Taiwan
- College for Humanities and Medicine, Taipei Medical University, Taipei, Taiwan
|
42
|
Amplification of local changes along the timescale processing hierarchy. Proc Natl Acad Sci U S A 2017; 114:9475-9480. [PMID: 28811367 DOI: 10.1073/pnas.1701652114] [Citation(s) in RCA: 50] [Impact Index Per Article: 7.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Small changes in word choice can lead to dramatically different interpretations of narratives. How does the brain accumulate and integrate such local changes to construct unique neural representations for different stories? In this study, we created two distinct narratives by changing only a few words in each sentence (e.g., "he" to "she" or "sobbing" to "laughing") while preserving the grammatical structure across stories. We then measured changes in neural responses between the two stories. We found that differences in neural responses between the two stories gradually increased along the hierarchy of processing timescales. For areas with short integration windows, such as early auditory cortex, the differences in neural responses between the two stories were relatively small. In contrast, in areas with the longest integration windows at the top of the hierarchy, such as the precuneus, temporal parietal junction, and medial frontal cortices, there were large differences in neural responses between stories. Furthermore, this gradual increase in neural differences between the stories was highly correlated with an area's ability to integrate information over time. Amplification of neural differences did not occur when changes in words did not alter the interpretation of the story (e.g., "sobbing" to "crying"). Our results demonstrate how subtle differences in words are gradually accumulated and amplified along the cortical hierarchy as the brain constructs a narrative over time.
|
43
|
Hasson U, Chen J, Honey CJ. Hierarchical process memory: memory as an integral component of information processing. Trends Cogn Sci 2015; 19:304-13. [PMID: 25980649 PMCID: PMC4457571 DOI: 10.1016/j.tics.2015.04.006] [Citation(s) in RCA: 347] [Impact Index Per Article: 38.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2015] [Revised: 04/07/2015] [Accepted: 04/10/2015] [Indexed: 11/28/2022]
Abstract
Models of working memory (WM) commonly focus on how information is encoded into and retrieved from storage at specific moments. However, in the majority of real-life processes, past information is used continuously to process incoming information across multiple timescales. Considering single-unit, electrocorticography, and functional imaging data, we argue that (i) virtually all cortical circuits can accumulate information over time, and (ii) the timescales of accumulation vary hierarchically, from early sensory areas with short processing timescales (tens to hundreds of milliseconds) to higher-order areas with long processing timescales (many seconds to minutes). In this hierarchical systems perspective, memory is not restricted to a few localized stores, but is intrinsic to information processing that unfolds throughout the brain on multiple timescales.
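The hierarchy of accumulation timescales argued for here can be caricatured with first-order leaky integrators whose time constants grow up the hierarchy. This is a minimal sketch of the idea only; the area labels and time constants are illustrative, not values from the paper:

```python
import numpy as np

def leaky_integrate(signal, tau, dt=0.01):
    """Exponentially weighted accumulation: a larger time constant tau
    means the unit's state reflects a longer window of the signal's past."""
    alpha = dt / tau
    state = 0.0
    out = np.empty_like(signal)
    for i, x in enumerate(signal):
        state += alpha * (x - state)  # first-order leaky integration
        out[i] = state
    return out

rng = np.random.default_rng(0)
signal = rng.standard_normal(5000)  # rapidly fluctuating input

# Illustrative hierarchy: short tau for early sensory areas,
# long tau for higher-order areas (seconds).
windows = {"early sensory": 0.05, "mid-level": 0.5, "higher-order": 5.0}
responses = {area: leaky_integrate(signal, tau) for area, tau in windows.items()}

# Longer accumulation windows average over more of the past, so the
# response tracks moment-to-moment input less and less as we ascend.
variances = {area: r.var() for area, r in responses.items()}
```

On white-noise input the response variance shrinks monotonically with `tau`, which is one operational signature of the short-to-long timescale gradient the authors describe.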
Affiliation(s)
- Uri Hasson
- Department of Psychology and the Neuroscience Institute, Princeton University, NJ 08544-1010, USA.
- Janice Chen
- Department of Psychology and the Neuroscience Institute, Princeton University, NJ 08544-1010, USA
- Christopher J Honey
- Department of Psychology, University of Toronto, Toronto, ON M5S 3G3, Canada
|
44
|
Bibikov NG. Some features of the sound-signal envelope extracted by cochlear nucleus neurons in grass frog. Biophysics (Nagoya-shi) 2015. [DOI: 10.1134/s0006350915030045] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
|