1. Jha A, Ashwood ZC, Pillow JW. Active Learning for Discrete Latent Variable Models. Neural Comput 2024;36:437-474. PMID: 38363661. DOI: 10.1162/neco_a_01646.
Abstract
Active learning seeks to reduce the amount of data required to fit the parameters of a model, thus forming an important class of techniques in modern machine learning. However, past work on active learning has largely overlooked latent variable models, which play a vital role in neuroscience, psychology, and a variety of other engineering and scientific disciplines. Here we address this gap by proposing a novel framework for maximum-mutual-information input selection for discrete latent variable regression models. We first apply our method to a class of models known as mixtures of linear regressions (MLR). While it is well known that active learning confers no advantage for linear-gaussian regression models, we use Fisher information to show analytically that active learning can nevertheless achieve large gains for mixtures of such models, and we validate this improvement using both simulations and real-world data. We then consider a powerful class of temporally structured latent variable models given by a hidden Markov model (HMM) with generalized linear model (GLM) observations, which has recently been used to identify discrete states from animal decision-making data. We show that our method substantially reduces the amount of data needed to fit GLM-HMMs and outperforms a variety of approximate methods based on variational and amortized inference. Infomax learning for latent variable models thus offers a powerful approach for characterizing temporally structured latent states, with a wide variety of applications in neuroscience and beyond.
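The selection criterion described in this abstract can be sketched numerically. The toy below is not the authors' code: it scores candidate inputs for a two-component mixture of linear regressions by the mutual information between the next observation and the unknown slopes, computed from a grid posterior; the grid, noise level, and candidate set are all invented for illustration.

```python
import numpy as np

# Grid posterior over the two unknown slopes (w1, w2) of the mixture
# y = w_z * x + noise, z ~ Bernoulli(0.5), with known noise sigma.
w_grid = np.linspace(-2.0, 2.0, 21)
W1, W2 = np.meshgrid(w_grid, w_grid, indexing="ij")
post = np.full(W1.shape, 1.0 / W1.size)        # flat prior over (w1, w2)
sigma = 0.5
y_grid = np.linspace(-6.0, 6.0, 241)           # outcome grid for integration
dy = y_grid[1] - y_grid[0]

def gauss(y, mu):
    return np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def mutual_information(x, post):
    # I(y; w | x) = E_w[ KL( p(y | w, x) || p(y | x) ) ]
    p_y_w = np.stack([0.5 * gauss(y, W1 * x) + 0.5 * gauss(y, W2 * x)
                      for y in y_grid])                      # (ny, 21, 21)
    p_y = np.einsum("ijk,jk->i", p_y_w, post)                # marginal p(y | x)
    log_ratio = np.log(p_y_w + 1e-300) - np.log(p_y + 1e-300)[:, None, None]
    kl = (p_y_w * log_ratio).sum(axis=0) * dy                # one KL per (w1, w2)
    return float((post * kl).sum())

# Infomax picks the candidate input with the largest expected information gain.
candidates = np.linspace(-1.0, 1.0, 9)
mi = np.array([mutual_information(x, post) for x in candidates])
best = candidates[mi.argmax()]
```

Note the contrast the paper draws with plain linear regression: here the input x = 0 yields exactly zero information (every slope pair predicts y = 0), so an infomax learner avoids it, whereas for a single linear-Gaussian model input selection would not matter.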
Affiliation(s)
- Aditi Jha
- Princeton Neuroscience Institute and Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, U.S.A.
- Zoe C Ashwood
- Princeton Neuroscience Institute and Department of Computer Science, Princeton University, Princeton, NJ 08544, U.S.A.
- Jonathan W Pillow
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, U.S.A.
2. Weiss DA, Borsa AM, Pala A, Sederberg AJ, Stanley GB. A machine learning approach for real-time cortical state estimation. J Neural Eng 2024;21:016016. PMID: 38232377. PMCID: PMC10868597. DOI: 10.1088/1741-2552/ad1f7b.
Abstract
Objective. Cortical function is under constant modulation by internally driven, latent variables that regulate excitability, collectively known as 'cortical state'. Despite a vast literature in this area, the estimation of cortical state remains relatively ad hoc and not amenable to real-time implementation. Here, we implement robust, data-driven, and fast algorithms that address several technical challenges for online cortical state estimation. Approach. We use unsupervised Gaussian mixture models to identify discrete, emergent clusters in spontaneous local field potential signals in cortex. We then extend our approach to a temporally informed hidden semi-Markov model (HSMM) with Gaussian observations to better model and infer cortical state transitions. Finally, we implement our HSMM cortical state inference algorithms in a real-time system, evaluating their performance in emulation experiments. Main results. Unsupervised clustering approaches reveal emergent state-like structure in spontaneous electrophysiological data that recapitulates arousal-related cortical states as indexed by behavioral indicators. HSMMs enable cortical state inferences in a real-time context by modeling the temporal dynamics of cortical state switching, and they provide robustness to state estimates arising from noisy, sequential electrophysiological data. Significance. To our knowledge, this work represents the first implementation of a real-time software tool for continuously decoding cortical states with high temporal resolution (40 ms). The software tools that we provide can facilitate our understanding of how cortical states dynamically modulate cortical function on a moment-by-moment basis, and provide a basis for state-aware brain-machine interfaces across health and disease.
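The first stage of this pipeline, unsupervised Gaussian-mixture clustering of spontaneous activity features, can be sketched with a minimal EM implementation on synthetic two-band "power" features. Nothing below is the paper's data or code; the feature construction, cluster geometry, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for windowed LFP features (two spectral-power bands)
# drawn from two arousal-like states with distinct signatures.
n = 400
state = rng.integers(0, 2, n)
true_means = np.array([[1.0, 3.0], [3.0, 1.0]])
X = true_means[state] + 0.4 * rng.standard_normal((n, 2))

# EM for a two-component spherical Gaussian mixture.
mu = np.stack([X[0], X[((X - X[0]) ** 2).sum(1).argmax()]])  # far-apart init
var = np.ones(2)
pi = np.array([0.5, 0.5])
for _ in range(50):
    # E-step: responsibilities in log space (D = 2 dimensions)
    d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)           # (n, 2)
    log_r = np.log(pi) - d2 / (2 * var) - np.log(var)
    log_r -= log_r.max(1, keepdims=True)
    r = np.exp(log_r)
    r /= r.sum(1, keepdims=True)
    # M-step: weights, means, then per-component spherical variances
    nk = r.sum(0)
    pi = nk / n
    mu = (r.T @ X) / nk[:, None]
    d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
    var = (r * d2).sum(0) / (2 * nk)

labels = r.argmax(1)
# Component identities are arbitrary, so align to ground truth before scoring.
acc = max(np.mean(labels == state), np.mean(labels == 1 - state))
```

The paper's contribution then goes further: replacing the per-window independent assignment above with a hidden semi-Markov model over the same emissions, so that dwell-time structure smooths the moment-to-moment state estimate.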
Affiliation(s)
- David A Weiss
- Program in Bioengineering, Georgia Institute of Technology, Atlanta, GA, United States of America
- Wallace H Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, United States of America
- Adriano Mf Borsa
- Program in Bioengineering, Georgia Institute of Technology, Atlanta, GA, United States of America
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, United States of America
- Aurélie Pala
- Department of Biology, Emory University, Atlanta, GA, United States of America
- Audrey J Sederberg
- Department of Neuroscience, University of Minnesota Medical School, Minneapolis, MN, United States of America
- Medical Discovery Team in Optical Imaging and Brain Science, University of Minnesota, Minneapolis, MN, United States of America
- Garrett B Stanley
- Wallace H Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, United States of America
3. Weng G, Clark K, Akbarian A, Noudoost B, Nategh N. Time-varying generalized linear models: characterizing and decoding neuronal dynamics in higher visual areas. Front Comput Neurosci 2024;18:1273053. PMID: 38348287. PMCID: PMC10859875. DOI: 10.3389/fncom.2024.1273053.
Abstract
To create a behaviorally relevant representation of the visual world, neurons in higher visual areas exhibit dynamic response changes to account for the time-varying interactions between external (e.g., visual input) and internal (e.g., reward value) factors. The resulting high-dimensional representational space poses challenges for precisely quantifying individual factors' contributions to the representation and readout of sensory information during a behavior. The widely used point process generalized linear model (GLM) approach provides a powerful framework for a quantitative description of neuronal processing as a function of various sensory and non-sensory inputs (encoding), as well as for linking particular response components to particular behaviors (decoding), at the level of single trials and individual neurons. However, most existing variations of GLMs assume the neural systems to be time-invariant, making them inadequate for modeling nonstationary characteristics of neuronal sensitivity in higher visual areas. In this review, we summarize some of the existing GLM variations, with a focus on time-varying extensions. We highlight their applications to understanding neural representations in higher visual areas and decoding transient neuronal sensitivity, as well as linking physiology to behavior through manipulation of model components. This time-varying class of statistical models provides valuable insights into the neural basis of various visual behaviors in higher visual areas and holds significant potential for uncovering the fundamental computational principles that govern neuronal processing underlying various behaviors in different regions of the brain.
Affiliation(s)
- Geyu Weng
- Department of Biomedical Engineering, University of Utah, Salt Lake City, UT, United States
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Kelsey Clark
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Amir Akbarian
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Behrad Noudoost
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Neda Nategh
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Department of Electrical and Computer Engineering, University of Utah, Salt Lake City, UT, United States
4. Breffle J, Mokashe S, Qiu S, Miller P. Multistability in neural systems with random cross-connections. Biol Cybern 2023;117:485-506. PMID: 38133664. DOI: 10.1007/s00422-023-00981-w.
Abstract
Neural circuits with multiple discrete attractor states could support a variety of cognitive tasks according to both empirical data and model simulations. We assess the conditions for such multistability in neural systems using a firing rate model framework, in which clusters of similarly responsive neurons are represented as single units, which interact with each other through independent random connections. We explore the range of conditions in which multistability arises via recurrent input from other units while individual units, typically with some degree of self-excitation, lack sufficient self-excitation to become bistable on their own. We find many cases of multistability, defined as the system possessing more than one stable fixed point, in which stable states arise via a network effect, allowing subsets of units to maintain each other's activity because their net input to each other when active is sufficiently positive. In terms of the strength of within-unit self-excitation and the standard deviation of random cross-connections, the region of multistability depends on the response function of units. Indeed, multistability can arise with zero self-excitation, purely through zero-mean random cross-connections, if the response function rises supralinearly at low inputs from a value near zero at zero input. We simulate and analyze finite systems, showing that the probability of multistability can peak at intermediate system size, and connect with other literature analyzing similar systems in the infinite-size limit. We find regions of multistability with a bimodal distribution for the number of active units in a stable state. Finally, we find evidence for a log-normal distribution of sizes of attractor basins, which produces Zipf's Law when enumerating the proportion of trials within which random initial conditions lead to a particular stable state of the system.
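A minimal simulation in the spirit of this setup (illustrative parameters only, not the authors' model code): units with zero self-excitation, zero-mean random cross-connections, and a response function that rises supralinearly from zero, probed from random initial conditions to collect the activity patterns the network settles into.

```python
import numpy as np

rng = np.random.default_rng(2)

N = 40
g = 4.0
W = g * rng.standard_normal((N, N)) / np.sqrt(N)   # zero-mean cross-connections
np.fill_diagonal(W, 0.0)                           # no self-excitation

def f(x):
    # Response function: zero at zero input, supralinear (quadratic) at low
    # input, saturating at 1 -- the shape the paper identifies as permitting
    # multistability without self-excitation.
    x = np.clip(x, 0.0, None)
    return x * x / (1.0 + x * x)

def settle(r0, steps=4000, dt=0.1):
    # Euler-integrate dr/dt = -r + f(W r) toward an attractor.
    r = r0.copy()
    for _ in range(steps):
        r = r + dt * (f(W @ r) - r)
    return r

# Probe the landscape from random initial conditions; round before
# de-duplicating so nearby endpoints count as one state.
patterns = {tuple(np.round(settle(rng.uniform(0, 1, N)), 3)) for _ in range(20)}
n_states = len(patterns)

# The quiescent state is always stable here: near zero, f is quadratic,
# so weak activity decays back to silence.
quiet = settle(np.full(N, 0.01))
```

Whether nonzero attractors appear for a given draw of W depends on the realization and on g, which is exactly the probabilistic question the paper analyzes; the guaranteed structure is the stable quiescent state that the supralinear rise of f creates.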
Affiliation(s)
- Jordan Breffle
- Neuroscience Program, Brandeis University, 415 South St, Waltham, MA 02454, USA
- Subhadra Mokashe
- Neuroscience Program, Brandeis University, 415 South St, Waltham, MA 02454, USA
- Siwei Qiu
- Volen National Center for Complex Systems, Brandeis University, 415 South St, Waltham, MA 02454, USA
- Department of Neurology, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Paul Miller
- Neuroscience Program, Brandeis University, 415 South St, Waltham, MA 02454, USA
- Volen National Center for Complex Systems, Brandeis University, 415 South St, Waltham, MA 02454, USA
- Department of Biology, Brandeis University, 415 South St, Waltham, MA 02454, USA
5. Luo TZ, Kim TD, Gupta D, Bondy AG, Kopec CD, Elliot VA, DePasquale B, Brody CD. Transitions in dynamical regime and neural mode underlie perceptual decision-making. bioRxiv [Preprint] 2023:2023.10.15.562427. PMID: 37904994. PMCID: PMC10614809. DOI: 10.1101/2023.10.15.562427.
Abstract
Perceptual decision-making is the process by which an animal uses sensory stimuli to choose an action or mental proposition. This process is thought to be mediated by neurons organized as attractor networks1,2. However, whether attractor dynamics underlie decision behavior and the complex neuronal responses remains unclear. Here we use an unsupervised, deep learning-based method to discover decision-related dynamics from the simultaneous activity of neurons in frontal cortex and striatum of rats while they accumulate pulsatile auditory evidence. We show that, contrary to prevailing hypotheses, attractors play a role only after a transition from a regime in the dynamics that is strongly driven by inputs to one dominated by the intrinsic dynamics. The initial regime mediates evidence accumulation, and the subsequent intrinsic-dominant regime subserves decision commitment. This regime transition is coupled to a rapid reorganization in the representation of the decision process in the neural population (a change in the "neural mode" along which the process develops). A simplified model approximating the coupled transition in the dynamics and neural mode allows inferring, from each trial's neural activity, the internal decision commitment time in that trial, and captures diverse and complex single-neuron temporal profiles, such as ramping and stepping3-5. It also captures trial-averaged curved trajectories6-8 and reveals distinctions between brain regions. Our results show that the formation of a perceptual choice involves a rapid, coordinated transition in both the dynamical regime and the neural mode of the decision process, and suggest pairing deep learning and parsimonious models as a promising approach for understanding complex data.
6. Breffle J, Mokashe S, Qiu S, Miller P. Multistability in neural systems with random cross-connections. bioRxiv [Preprint] 2023:2023.06.05.543727. PMID: 37333310. PMCID: PMC10274702. DOI: 10.1101/2023.06.05.543727.
Abstract
Neural circuits with multiple discrete attractor states could support a variety of cognitive tasks according to both empirical data and model simulations. We assess the conditions for such multistability in neural systems, using a firing-rate model framework in which clusters of neurons with net self-excitation are represented as units, which interact with each other through random connections. We focus on conditions in which individual units lack sufficient self-excitation to become bistable on their own. Rather, multistability can arise via recurrent input from other units as a network effect for subsets of units whose net input to each other when active is sufficiently positive to maintain such activity. In terms of the strength of within-unit self-excitation and the standard deviation of random cross-connections, the region of multistability depends on the firing-rate curve of units. Indeed, bistability can arise with zero self-excitation, purely through zero-mean random cross-connections, if the firing-rate curve rises supralinearly at low inputs from a value near zero at zero input. We simulate and analyze finite systems, showing that the probability of multistability can peak at intermediate system size, and connect with other literature analyzing similar systems in the infinite-size limit. We find regions of multistability with a bimodal distribution for the number of active units in a stable state. Finally, we find evidence for a log-normal distribution of sizes of attractor basins, which can appear as Zipf's Law when sampled as the proportion of trials within which random initial conditions lead to a particular stable state of the system.
Affiliation(s)
- Jordan Breffle
- Neuroscience Program, Brandeis University, 415 South St, Waltham, MA 02454
- Subhadra Mokashe
- Neuroscience Program, Brandeis University, 415 South St, Waltham, MA 02454
- Siwei Qiu
- Volen National Center for Complex Systems, Brandeis University, 415 South St, Waltham, MA 02454
- Current address: Department of Neurology, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Paul Miller
- Neuroscience Program, Brandeis University, 415 South St, Waltham, MA 02454
- Volen National Center for Complex Systems, Brandeis University, 415 South St, Waltham, MA 02454
- Department of Biology, Brandeis University, 415 South St, Waltham, MA 02454
7. Albers JL, Steibel JP, Klingler RH, Ivan LN, Garcia-Reyero N, Carvan MJ, Murphy CA. Altered Larval Yellow Perch Swimming Behavior Due to Methylmercury and PCB126 Detected Using Hidden Markov Chain Models. Environ Sci Technol 2022;56:3514-3523. PMID: 35201763. DOI: 10.1021/acs.est.1c07505.
Abstract
Fish swimming behavior is a commonly measured response in aquatic ecotoxicology because behavior is considered a whole-organism-level effect that integrates many sensory systems. Recent advancements in animal behavior models, such as hidden Markov chain models (HMMs), suggest an improved analytical approach for toxicology. Using both new and traditional approaches, we examined the sublethal effects of PCB126 and methylmercury on yellow perch (YP) larvae (Perca flavescens) at three doses. Both approaches indicate that larvae increase activity after exposure to either chemical. The middle methylmercury-dosed larvae showed multiple altered behavior patterns. First, larvae had a general increase in activity, typically performing more behavior states, more time swimming, and more swimming bouts per second. Second, when larvae were in a slow or medium swimming state, they tended to switch between these states more often. Third, larvae swam slower during the swimming bouts. The upper PCB126-dosed larvae exhibited a higher proportion of a fast swimming state, but the total time spent swimming fast decreased. The middle PCB126-dosed larvae transitioned from fast to slow swimming states less often than the control larvae. These results indicate that developmental exposure to very low doses of these neurotoxicants alters the overall swimming behavior of YP larvae, suggesting neurodevelopmental alteration.
Affiliation(s)
- Janice L Albers
- Department of Fisheries and Wildlife, Michigan State University, East Lansing, Michigan 48824, United States
- Juan P Steibel
- Department of Fisheries and Wildlife, Michigan State University, East Lansing, Michigan 48824, United States
- Rebekah H Klingler
- School of Freshwater Sciences, University of Wisconsin-Milwaukee, Milwaukee, Wisconsin 53204, United States
- Lori N Ivan
- Department of Fisheries and Wildlife, Michigan State University, East Lansing, Michigan 48824, United States
- Natàlia Garcia-Reyero
- Environmental Laboratory, US Army Engineer Research and Development Center, Vicksburg, Mississippi 39180, United States
- Michael J Carvan
- School of Freshwater Sciences, University of Wisconsin-Milwaukee, Milwaukee, Wisconsin 53204, United States
- Cheryl A Murphy
- Department of Fisheries and Wildlife, Michigan State University, East Lansing, Michigan 48824, United States
8. Brinkman BAW, Yan H, Maffei A, Park IM, Fontanini A, Wang J, La Camera G. Metastable dynamics of neural circuits and networks. Appl Phys Rev 2022;9:011313. PMID: 35284030. PMCID: PMC8900181. DOI: 10.1063/5.0062603.
Abstract
Cortical neurons emit seemingly erratic trains of action potentials or "spikes," and neural network dynamics emerge from the coordinated spiking activity within neural circuits. These rich dynamics manifest themselves in a variety of patterns, which emerge spontaneously or in response to incoming activity produced by sensory inputs. In this Review, we focus on neural dynamics that is best understood as a sequence of repeated activations of a number of discrete hidden states. These transiently occupied states are termed "metastable" and have been linked to important sensory and cognitive functions. In the rodent gustatory cortex, for instance, metastable dynamics have been associated with stimulus coding, with states of expectation, and with decision making. In frontal, parietal, and motor areas of macaques, metastable activity has been related to behavioral performance, choice behavior, task difficulty, and attention. In this article, we review the experimental evidence for neural metastable dynamics together with theoretical approaches to the study of metastable activity in neural circuits. These approaches include (i) a theoretical framework based on non-equilibrium statistical physics for network dynamics; (ii) statistical approaches to extract information about metastable states from a variety of neural signals; and (iii) recent neural network approaches, informed by experimental results, to model the emergence of metastable dynamics. By discussing these topics, we aim to provide a cohesive view of how transitions between different states of activity may provide the neural underpinnings for essential functions such as perception, memory, expectation, or decision making, and more generally, how the study of metastable neural activity may advance our understanding of neural circuit function in health and disease.
Affiliation(s)
- H. Yan
- State Key Laboratory of Electroanalytical Chemistry, Changchun Institute of Applied Chemistry, Chinese Academy of Sciences, Changchun, Jilin 130022, People's Republic of China
- J. Wang
- G. La Camera
9. Bolkan SS, Stone IR, Pinto L, Ashwood ZC, Iravedra Garcia JM, Herman AL, Singh P, Bandi A, Cox J, Zimmerman CA, Cho JR, Engelhard B, Pillow JW, Witten IB. Opponent control of behavior by dorsomedial striatal pathways depends on task demands and internal state. Nat Neurosci 2022;25:345-357. PMID: 35260863. PMCID: PMC8915388. DOI: 10.1038/s41593-022-01021-9.
Abstract
A classic view of the striatum holds that activity in direct and indirect pathways oppositely modulates motor output. Whether this involves direct control of movement, or reflects a cognitive process underlying movement, remains unresolved. Here we find that strong, opponent control of behavior by the two pathways of the dorsomedial striatum depends on the cognitive requirements of a task. Furthermore, a latent state model (a hidden Markov model with generalized linear model observations) reveals that, even within a single task, the contribution of the two pathways to behavior is state dependent. Specifically, the two pathways have large contributions in one of two states associated with a strategy of evidence accumulation, compared to a state associated with a strategy of repeating previous choices. Thus, both the demands imposed by a task, as well as the internal state of mice when performing a task, determine whether dorsomedial striatum pathways provide strong and opponent control of behavior.
Affiliation(s)
- Scott S Bolkan
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Iris R Stone
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Lucas Pinto
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Zoe C Ashwood
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Alison L Herman
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Priyanka Singh
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Akhil Bandi
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Julia Cox
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Jounhong Ryan Cho
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Ben Engelhard
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Jonathan W Pillow
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Department of Psychology, Princeton University, Princeton, NJ, USA
- Ilana B Witten
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Department of Psychology, Princeton University, Princeton, NJ, USA
10. Cortical ensembles orchestrate social competition through hypothalamic outputs. Nature 2022;603:667-671. PMID: 35296862. PMCID: PMC9576144. DOI: 10.1038/s41586-022-04507-5.
Abstract
Most social species self-organize into dominance hierarchies1,2, which decreases aggression and conserves energy3,4, but it is not clear how individuals know their social rank. We have only begun to learn how the brain represents social rank5-9 and guides behaviour on the basis of this representation. The medial prefrontal cortex (mPFC) is involved in social dominance in rodents7,8 and humans10,11. Yet, precisely how the mPFC encodes relative social rank and which circuits mediate this computation is not known. We developed a social competition assay in which mice compete for rewards, as well as a computer vision tool (AlphaTracker) to track multiple, unmarked animals. A hidden Markov model combined with generalized linear models was able to decode social competition behaviour from mPFC ensemble activity. Population dynamics in the mPFC predicted social rank and competitive success. Finally, we demonstrate that mPFC cells that project to the lateral hypothalamus promote dominance behaviour during reward competition. Thus, we reveal a cortico-hypothalamic circuit by which the mPFC exerts top-down modulation of social dominance.
11. Ashwood ZC, Roy NA, Stone IR, Urai AE, Churchland AK, Pouget A, Pillow JW. Mice alternate between discrete strategies during perceptual decision-making. Nat Neurosci 2022;25:201-212. PMID: 35132235. PMCID: PMC8890994. DOI: 10.1038/s41593-021-01007-z.
Abstract
Classical models of perceptual decision-making assume that subjects use a single, consistent strategy to form decisions, or that decision-making strategies evolve slowly over time. Here we present new analyses suggesting that this common view is incorrect. We analyzed data from mouse and human decision-making experiments and found that choice behavior relies on an interplay among multiple interleaved strategies. These strategies, characterized by states in a hidden Markov model, persist for tens to hundreds of trials before switching, and often switch multiple times within a session. The identified decision-making strategies were highly consistent across mice and comprised a single 'engaged' state, in which decisions relied heavily on the sensory stimulus, and several biased states in which errors frequently occurred. These results provide a powerful alternate explanation for 'lapses' often observed in rodent behavioral experiments, and suggest that standard measures of performance mask the presence of major changes in strategy across trials.
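The model class behind this result, a hidden Markov model whose states each carry their own Bernoulli GLM for choice, can be illustrated with a small simulation plus filtered state decoding. This is a sketch of the model family only, not the authors' implementation or fitting procedure; the weights, transition probabilities, and session length are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Sticky 2-state transitions and per-state GLM weights [stimulus, bias]
A = np.array([[0.98, 0.02],
              [0.02, 0.98]])
w = np.array([[4.0, 0.0],    # "engaged": choices track the stimulus
              [0.0, 2.0]])   # "biased": stimulus ignored, rightward bias
pi0 = np.array([0.5, 0.5])

# Simulate one session of binary choices
T = 500
stim = rng.uniform(-1.0, 1.0, T)
z = np.zeros(T, dtype=int)   # latent strategy state
y = np.zeros(T, dtype=int)   # choice (1 = right)
for t in range(T):
    z[t] = rng.choice(2, p=pi0 if t == 0 else A[z[t - 1]])
    y[t] = rng.random() < sigmoid(w[z[t], 0] * stim[t] + w[z[t], 1])

def forward(stim, y, A, w, pi0):
    # Forward algorithm: filtered posterior P(z_t | choices up to t)
    alpha = np.zeros((len(y), 2))
    for t in range(len(y)):
        p_right = sigmoid(w[:, 0] * stim[t] + w[:, 1])   # per-state P(y=1)
        lik = p_right if y[t] == 1 else 1.0 - p_right
        prior = pi0 if t == 0 else alpha[t - 1] @ A
        alpha[t] = prior * lik
        alpha[t] /= alpha[t].sum()
    return alpha

alpha = forward(stim, y, A, w, pi0)
acc = np.mean(alpha.argmax(1) == z)  # how often the filtered MAP state is right
```

Because the transitions are sticky, single-trial choices that look like "lapses" under a static model instead accumulate into confident, persistent state assignments, which is the paper's reinterpretation of lapse behavior.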
Affiliation(s)
- Zoe C Ashwood
- Department of Computer Science, Princeton University, Princeton, NJ, USA
- Princeton Neuroscience Institute, Princeton, NJ, USA
- Iris R Stone
- Princeton Neuroscience Institute, Princeton, NJ, USA
- Anne E Urai
- Cognitive Psychology Unit, Leiden University, Leiden, Netherlands
- Anne K Churchland
- David Geffen School of Medicine, The University of California, Los Angeles, Los Angeles, CA, USA
- Alexandre Pouget
- Faculty of Medicine & Department of Basic Neurosciences, University of Geneva, Geneva, Switzerland
- Jonathan W Pillow
- Princeton Neuroscience Institute, Princeton, NJ, USA
- Department of Psychology, Princeton University, Princeton, NJ, USA
12. Teşileanu T, Golkar S, Nasiri S, Sengupta AM, Chklovskii DB. Neural Circuits for Dynamics-Based Segmentation of Time Series. Neural Comput 2022;34:891-938. PMID: 35026035. DOI: 10.1162/neco_a_01476.
Abstract
The brain must extract behaviorally relevant latent variables from the signals streamed by the sensory organs. Such latent variables are often encoded in the dynamics that generated the signal rather than in the specific realization of the waveform. Therefore, one problem faced by the brain is to segment time series based on underlying dynamics. We present two algorithms for performing this segmentation task that are biologically plausible, which we define as acting in a streaming setting and all learning rules being local. One algorithm is model based and can be derived from an optimization problem involving a mixture of autoregressive processes. This algorithm relies on feedback in the form of a prediction error and can also be used for forecasting future samples. In some brain regions, such as the retina, the feedback connections necessary to use the prediction error for learning are absent. For this case, we propose a second, model-free algorithm that uses a running estimate of the autocorrelation structure of the signal to perform the segmentation. We show that both algorithms do well when tasked with segmenting signals drawn from autoregressive models with piecewise-constant parameters. In particular, the segmentation accuracy is similar to that obtained from oracle-like methods in which the ground-truth parameters of the autoregressive models are known. We also test our methods on data sets generated by alternating snippets of voice recordings. We provide implementations of our algorithms at https://github.com/ttesileanu/bio-time-series.
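The model-based idea here can be caricatured in a few lines: when the candidate AR dynamics are known, each sample is assigned to whichever model currently yields the smaller smoothed one-step prediction error. Unlike the paper's algorithms, the sketch below does not learn the dynamics with local rules; the coefficients, noise level, and smoothing constant are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Signal alternates between two known AR(1) dynamics every `block` samples.
a = (0.95, -0.6)                  # AR(1) coefficients of the two regimes
T, block = 1200, 300
x = np.zeros(T)
true_seg = np.zeros(T, dtype=int)
for t in range(1, T):
    k = (t // block) % 2
    true_seg[t] = k
    x[t] = a[k] * x[t - 1] + 0.5 * rng.standard_normal()

# Squared one-step prediction error of each candidate AR model
err = np.stack([(x[1:] - ak * x[:-1]) ** 2 for ak in a])      # (2, T-1)

# Exponentially smooth the errors (a streaming-friendly running average),
# then label each sample by the currently better-predicting model.
lam = 0.05
s = np.zeros_like(err)
s[:, 0] = err[:, 0]
for t in range(1, err.shape[1]):
    s[:, t] = (1 - lam) * s[:, t - 1] + lam * err[:, t]

seg = s.argmin(axis=0)
acc = np.mean(seg == true_seg[1:])
```

The exponential smoothing is what trades segmentation accuracy against switch-detection latency, the same tension the paper's streaming setting faces.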
Affiliation(s)
- Tiberiu Teşileanu
- Center for Computational Neuroscience, Flatiron Institute, New York, NY 10010, U.S.A.
| | - Siavash Golkar
- Center for Computational Neuroscience, Flatiron Institute, New York, NY 10010, U.S.A.
| | - Samaneh Nasiri
- Department of Neurology, Harvard Medical School, Boston, MA 02115, U.S.A.
| | - Anirvan M Sengupta
- Center for Computational Neuroscience, Flatiron Institute, New York, NY 10010, and Department of Physics and Astronomy, Rutgers University, Piscataway, NJ 08854, U.S.A.
| | - Dmitri B Chklovskii
- Center for Computational Neuroscience, Flatiron Institute, New York, NY 10010, and Neuroscience Institute, NYU Langone Medical Center, New York, NY, U.S.A.
| |
Collapse
13
Lin JY, Mukherjee N, Bernstein MJ, Katz DB. Perturbation of amygdala-cortical projections reduces ensemble coherence of palatability coding in gustatory cortex. eLife 2021; 10:e65766. [PMID: 34018924 PMCID: PMC8139825 DOI: 10.7554/elife.65766] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2020] [Accepted: 04/30/2021] [Indexed: 01/01/2023] Open
Abstract
Taste palatability is centrally involved in consumption decisions: we ingest foods that taste good and reject those that don't. Gustatory cortex (GC) and basolateral amygdala (BLA) almost certainly work together to mediate palatability-driven behavior, but the precise nature of their interplay during taste decision-making is still unknown. To probe this issue, we discretely perturbed (with optogenetics) activity in rats' BLA→GC axons during taste deliveries. This perturbation strongly altered GC taste responses, but while the perturbation itself was tonic (2.5 s), the alterations were not: changes preferentially aligned with the onset times of previously described taste response epochs, and reduced evidence of palatability-related activity in the 'late epoch' of the responses without reducing the amount of taste identity information available in the 'middle epoch.' Finally, BLA→GC perturbations changed behavior-linked taste response dynamics themselves, distinctively diminishing the abruptness of ensemble transitions into the late epoch. These results suggest that BLA 'organizes' behavior-related GC taste dynamics.
Affiliation(s)
- Jian-You Lin, Department of Psychology, Waltham, United States; The Volen National Center for Complex Systems, Brandeis University, Waltham, United States
- Narendra Mukherjee, The Volen National Center for Complex Systems, Brandeis University, Waltham, United States
- Max J Bernstein, Department of Psychology, Waltham, United States; The Volen National Center for Complex Systems, Brandeis University, Waltham, United States
- Donald B Katz, Department of Psychology, Waltham, United States; The Volen National Center for Complex Systems, Brandeis University, Waltham, United States
14
Chen B, Miller P. Attractor-state itinerancy in neural circuits with synaptic depression. J Math Neurosci 2020; 10:15. [PMID: 32915327 PMCID: PMC7486362 DOI: 10.1186/s13408-020-00093-w] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/25/2019] [Accepted: 08/28/2020] [Indexed: 06/11/2023]
Abstract
Neural populations with strong excitatory recurrent connections can support bistable states in their mean firing rates. Multiple fixed points in a network of such bistable units can be used to model memory retrieval and pattern separation. The stability of fixed points may change on a slower timescale than that of the dynamics due to short-term synaptic depression, leading to transitions between quasi-stable point attractor states in a sequence that depends on the history of stimuli. To better understand these behaviors, we study a minimal model, which characterizes multiple fixed points and transitions between them in response to stimuli with diverse time- and amplitude-dependencies. The interplay between the fast dynamics of firing rate and synaptic responses and the slower timescale of synaptic depression makes the neural activity sensitive to the amplitude and duration of square-pulse stimuli in a nontrivial, history-dependent manner. Weak cross-couplings further deform the basins of attraction for different fixed points into intricate shapes. We find that while short-term synaptic depression can reduce the total number of stable fixed points in a network, it tends to strongly increase the number of fixed points visited upon repetitions of fixed stimuli. Our analysis provides a natural explanation for the system's rich responses to stimuli of different durations and amplitudes while demonstrating the encoding capability of bistable neural populations for dynamical features of incoming stimuli.
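The destabilizing effect of depression on an otherwise stable "up" state can be illustrated with a single-population rate model. This is a hedged sketch with illustrative parameters, not the paper's model: without depression, a brief stimulus pulse switches the bistable population into a persistent high-rate state; with depression, the same pulse produces only a transient response.

```python
import numpy as np

def f(x):
    """Sigmoidal firing-rate nonlinearity."""
    return 1.0 / (1.0 + np.exp(-x))

def run(u, dt=1e-3, T=2000):
    """Excitatory population with self-weight w, threshold theta, and a
    synaptic-resource variable s; u sets the depression strength."""
    w, theta, tau_r, tau_d = 10.0, 5.0, 0.02, 0.5
    r, s = 0.0, 1.0
    for step in range(T):
        t = step * dt
        pulse = 10.0 if 0.1 <= t < 0.3 else 0.0      # brief stimulus pulse
        r += dt / tau_r * (-r + f(w * s * r + pulse - theta))
        s += dt * ((1.0 - s) / tau_d - u * s * r)    # resource depletion/recovery
        s = min(max(s, 0.0), 1.0)
    return r

r_no_depression = run(u=0.0)  # bistable: the up state persists after the pulse
r_depression = run(u=5.0)     # depression destabilizes the up state
```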
Affiliation(s)
- Bolun Chen, Volen National Center for Complex Systems, Brandeis University, Waltham, MA 02453, USA
- Paul Miller, Volen National Center for Complex Systems, Brandeis University, Waltham, MA 02453, USA; Department of Biology, Brandeis University, Waltham, MA 02453, USA
15
Kadmon Harpaz N, Ungarish D, Hatsopoulos NG, Flash T. Movement Decomposition in the Primary Motor Cortex. Cereb Cortex 2020; 29:1619-1633. [PMID: 29668846 DOI: 10.1093/cercor/bhy060] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2017] [Revised: 02/16/2018] [Accepted: 02/22/2018] [Indexed: 02/06/2023] Open
Abstract
A complex action can be described as the composition of a set of elementary movements. While both kinematic and dynamic elements have been proposed to compose complex actions, the structure of movement decomposition and its neural representation remain unknown. Here, we examined movement decomposition by modeling the temporal dynamics of neural populations in the primary motor cortex of macaque monkeys performing forelimb reaching movements. Using a hidden Markov model, we found that global transitions in the neural population activity are associated with a consistent segmentation of the behavioral output into acceleration and deceleration epochs with directional selectivity. Single cells exhibited modulation of firing rates between the kinematic epochs, with abrupt changes in spiking activity timed with the identified transitions. These results reveal distinct encoding of acceleration and deceleration phases at the level of M1, and point to a specific pattern of movement decomposition that arises from the underlying neural activity. A similar approach can be used to probe the structure of movement decomposition in different brain regions, possibly controlling different temporal scales, to reveal the hierarchical structure of movement composition.
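A minimal sketch of the decoding step: a two-state HMM with Gaussian emissions on acceleration, decoded by log-space Viterbi, segments a bell-shaped speed profile into acceleration and deceleration epochs. The emission means, variance, and sticky transition matrix below are illustrative, not fitted to neural data as in the paper:

```python
import numpy as np

def viterbi(log_A, log_pi, log_lik):
    """Most likely state path of a discrete-state HMM (log-space Viterbi)."""
    T, K = log_lik.shape
    delta = log_pi + log_lik[0]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A        # scores[i, j]: prev i -> current j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_lik[t]
    path = np.empty(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path

# Bell-shaped speed profile of a reach; acceleration is its time derivative.
t = np.linspace(0.0, 1.0, 200)
speed = np.exp(-((t - 0.5) ** 2) / 0.02)
acc = np.gradient(speed, t)

# State 0 = acceleration epoch, state 1 = deceleration epoch (Gaussian emissions).
means, var = np.array([4.0, -4.0]), 4.0
log_lik = -0.5 * (acc[:, None] - means) ** 2 / var
log_A = np.log(np.array([[0.99, 0.01], [0.01, 0.99]]))  # sticky transitions
log_pi = np.log(np.array([0.5, 0.5]))
states = viterbi(log_A, log_pi, log_lik)
```

The decoded path switches once, near the speed peak, recovering the acceleration/deceleration segmentation from the kinematics alone.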
Affiliation(s)
- Naama Kadmon Harpaz, Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot, Israel
- David Ungarish, Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot, Israel
- Nicholas G Hatsopoulos, Committee on Computational Neuroscience, University of Chicago, Chicago, IL, USA; Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL, USA
- Tamar Flash, Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot, Israel
16
Calhoun AJ, Pillow JW, Murthy M. Unsupervised identification of the internal states that shape natural behavior. Nat Neurosci 2019; 22:2040-2049. [PMID: 31768056 DOI: 10.1038/s41593-019-0533-x] [Citation(s) in RCA: 90] [Impact Index Per Article: 18.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2019] [Accepted: 10/07/2019] [Indexed: 02/02/2023]
Abstract
Internal states shape stimulus responses and decision-making, but we lack methods to identify them. To address this gap, we developed an unsupervised method to identify internal states from behavioral data and applied it to a dynamic social interaction. During courtship, Drosophila melanogaster males pattern their songs using feedback cues from their partner. Our model uncovers three latent states underlying this behavior and is able to predict moment-to-moment variation in song-patterning decisions. These states correspond to different sensorimotor strategies, each of which is characterized by different mappings from feedback cues to song modes. We show that a pair of neurons previously thought to be command neurons for song production are sufficient to drive switching between states. Our results reveal how animals compose behavior from previously unidentified internal states, which is a necessary step for quantitative descriptions of animal behavior that link environmental cues, internal needs, neuronal activity and motor outputs.
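The flavor of such a model can be sketched as a toy two-state GLM-HMM: each latent state applies its own logistic mapping from a feedback cue to a binary behavioral output, and a sticky Markov chain switches between states. The state names and parameters below are hypothetical illustrations, not the fitted model from the paper:

```python
import numpy as np

def sample_glmhmm(weights, A, n_trials, rng):
    """Sample latent states and binary choices from a toy GLM-HMM: each
    state applies its own logistic regression to the trial's stimulus."""
    K = len(weights)
    z = rng.integers(K)
    stims = rng.uniform(-1.0, 1.0, n_trials)
    states = np.empty(n_trials, dtype=int)
    choices = np.empty(n_trials, dtype=int)
    for n in range(n_trials):
        bias, slope = weights[z]
        p_right = 1.0 / (1.0 + np.exp(-(bias + slope * stims[n])))
        choices[n] = rng.random() < p_right
        states[n] = z
        z = rng.choice(K, p=A[z])  # sticky Markov transition
    return stims, states, choices

rng = np.random.default_rng(0)
weights = [(0.0, 8.0),   # hypothetical "engaged" state: output tracks the cue
           (2.0, 0.0)]   # hypothetical "biased" state: cue-independent
A = np.array([[0.98, 0.02], [0.02, 0.98]])
stims, states, choices = sample_glmhmm(weights, A, 5000, rng)

# The cue-to-output mapping differs sharply by state.
acc_engaged = (choices[states == 0] == (stims[states == 0] > 0)).mean()
acc_biased = (choices[states == 1] == (stims[states == 1] > 0)).mean()
```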
Affiliation(s)
- Adam J Calhoun, Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Jonathan W Pillow, Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Mala Murthy, Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
17
Characterizing and dissociating multiple time-varying modulatory computations influencing neuronal activity. PLoS Comput Biol 2019; 15:e1007275. [PMID: 31513570 PMCID: PMC6759185 DOI: 10.1371/journal.pcbi.1007275] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2018] [Revised: 09/24/2019] [Accepted: 07/18/2019] [Indexed: 11/19/2022] Open
Abstract
In many brain areas, sensory responses are heavily modulated by factors including attentional state, context, reward history, motor preparation, learned associations, and other cognitive variables. Modelling the effect of these modulatory factors on sensory responses has proven challenging, mostly due to the time-varying and nonlinear nature of the underlying computations. Here we present a computational model capable of capturing and dissociating multiple time-varying modulatory effects on neuronal responses on the order of milliseconds. The model’s performance is tested on extrastriate perisaccadic visual responses in nonhuman primates. Visual neurons respond to stimuli presented around the time of saccades differently than during fixation. These perisaccadic changes include sensitivity to the stimuli presented at locations outside the neuron’s receptive field, which suggests a contribution of multiple sources to perisaccadic response generation. Current computational approaches cannot quantitatively characterize the contribution of each modulatory source in response generation, mainly due to the very short timescale on which the saccade takes place. In this study, we use a high spatiotemporal resolution experimental paradigm along with a novel extension of the generalized linear model framework (GLM), termed the sparse-variable GLM, to allow for time-varying model parameters representing the temporal evolution of the system with a resolution on the order of milliseconds. We used this model framework to precisely map the temporal evolution of the spatiotemporal receptive field of visual neurons in the middle temporal area during the execution of a saccade. Moreover, an extended model based on a factorization of the sparse-variable GLM allowed us to dissociate and quantify the contribution of individual sources to the perisaccadic response. Our results show that our novel framework can precisely capture the changes in sensitivity of neurons around the time of saccades, and provide a general framework to quantitatively track the role of multiple modulatory sources over time.
The sensory responses of neurons in many brain areas, particularly those in higher prefrontal or parietal areas, are strongly influenced by factors including task rules, attentional state, context, reward history, motor preparation, learned associations, and other cognitive variables. These modulations often occur in combination, or on fast timescales, which present a challenge for both experimental and modelling approaches aiming to describe the underlying mechanisms or computations. Here we present a computational model capable of capturing and dissociating multiple time-varying modulatory effects on spiking responses on the order of milliseconds. The model’s performance is evaluated by testing its ability to reproduce and dissociate multiple changes in visual sensitivity occurring in extrastriate visual cortex around the time of rapid eye movements. No previous model is capable of capturing these changes with as fine a resolution as that presented here. Our model both provides specific insight into the nature and time course of changes in visual sensitivity around the time of eye movements, and offers a general framework applicable to a wide variety of contexts in which sensory processing is modulated dynamically by multiple time-varying cognitive or behavioral factors, to understand the neuronal computations underpinning these modulations and make predictions about the underlying mechanisms.
18
Marcos E, Londei F, Genovesio A. Hidden Markov Models Predict the Future Choice Better Than a PSTH-Based Method. Neural Comput 2019; 31:1874-1890. [PMID: 31335289 DOI: 10.1162/neco_a_01216] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Beyond average firing rate, other measurable signals of neuronal activity are fundamental to an understanding of behavior. Recently, hidden Markov models (HMMs) have been applied to neural recordings and have described how neuronal ensembles process information by going through sequences of different states. Such collective dynamics are impossible to capture by just looking at the average firing rate. To estimate how well HMMs can decode information contained in single trials, we compared HMMs with a recently developed classification method based on the peristimulus time histogram (PSTH). The accuracy of the two methods was tested by using the activity of prefrontal neurons recorded while two monkeys were engaged in a strategy task. In this task, the monkeys had to select one of three spatial targets based on an instruction cue and on their previous choice. We show that by using the single trial's neural activity in a period preceding action execution, both models were able to classify the monkeys' choice with an accuracy higher than by chance. Moreover, the HMM was significantly more accurate than the PSTH-based method, even in cases in which the HMM performance was low, although always above chance. Furthermore, the accuracy of both methods was related to the number of neurons exhibiting spatial selectivity within an experimental session. Overall, our study shows that neural activity is better described when not only the mean activity of individual neurons is considered and that therefore, the study of other signals rather than only the average firing rate is fundamental to an understanding of the dynamics of neuronal ensembles.
Affiliation(s)
- Encarni Marcos, Department of Physiology and Pharmacology, Sapienza University of Rome, Rome 00185, Italy; Instituto de Neurociencias de Alicante, Consejo Superior de Investigaciones Científicas-Universidad Miguel Hernández de Elche, Sant Joan d'Alacant, Alicante 03550, Spain
- Fabrizio Londei, Department of Physiology and Pharmacology, Sapienza University of Rome, Rome 00185, Italy
- Aldo Genovesio, Department of Physiology and Pharmacology, Sapienza University of Rome, Rome 00185, Italy
19
La Camera G, Fontanini A, Mazzucato L. Cortical computations via metastable activity. Curr Opin Neurobiol 2019; 58:37-45. [PMID: 31326722 DOI: 10.1016/j.conb.2019.06.007] [Citation(s) in RCA: 29] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2018] [Accepted: 06/22/2019] [Indexed: 12/27/2022]
Abstract
Metastable brain dynamics are characterized by abrupt, jump-like modulations so that the neural activity in single trials appears to unfold as a sequence of discrete, quasi-stationary 'states'. Evidence that cortical neural activity unfolds as a sequence of metastable states is accumulating at a fast pace. Metastable activity occurs both in response to an external stimulus and during ongoing, self-generated activity. These spontaneous metastable states are increasingly found to subserve internal representations that are not locked to external triggers, including states of deliberation, attention and expectation. Moreover, decoding stimuli or decisions via metastable states can be carried out trial-by-trial. Focusing on metastability will allow us to shift our perspective on neural coding from traditional concepts based on trial-averaging to models based on dynamic ensemble representations. Recent theoretical work has started to characterize the mechanistic origin and potential roles of metastable representations. In this article we review recent findings on metastable activity, how it may arise in biologically realistic models, and its potential role for representing internal states as well as relevant task variables.
Affiliation(s)
- Giancarlo La Camera, Department of Neurobiology and Behavior, State University of New York at Stony Brook, Stony Brook, NY 11794, United States; Graduate Program in Neuroscience, State University of New York at Stony Brook, Stony Brook, NY 11794, United States
- Alfredo Fontanini, Department of Neurobiology and Behavior, State University of New York at Stony Brook, Stony Brook, NY 11794, United States; Graduate Program in Neuroscience, State University of New York at Stony Brook, Stony Brook, NY 11794, United States
- Luca Mazzucato, Departments of Biology and Mathematics and Institute of Neuroscience, University of Oregon, Eugene, OR 97403, United States
20
Dynamic Brain Interactions during Picture Naming. eNeuro 2019; 6:ENEURO.0472-18.2019. [PMID: 31196941 PMCID: PMC6624411 DOI: 10.1523/eneuro.0472-18.2019] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2018] [Revised: 04/04/2019] [Accepted: 05/17/2019] [Indexed: 11/21/2022] Open
Abstract
Brain computations involve multiple processes by which sensory information is encoded and transformed to drive behavior. These computations are thought to be mediated by dynamic interactions between populations of neurons. Here, we demonstrate that human brains exhibit a reliable sequence of neural interactions during speech production. We use an autoregressive Hidden Markov Model (ARHMM) to identify dynamical network states exhibited by electrocorticographic signals recorded from human neurosurgical patients. Our method resolves dynamic latent network states on a trial-by-trial basis. We characterize individual network states according to the patterns of directional information flow between cortical regions of interest. These network states occur consistently and in a specific, interpretable sequence across trials and subjects: the data support the hypothesis of a fixed-length visual processing state, followed by a variable-length language state, and then by a terminal articulation state. This empirical evidence validates classical psycholinguistic theories that have posited such intermediate states during speaking. It further reveals these state dynamics are not localized to one brain area or one sequence of areas, but are instead a network phenomenon.
21
Tucker HR, Mahoney E, Chhetri A, Unger K, Mamone G, Kim G, Audil A, Moolick B, Molho ES, Pilitsis JG, Shin DS. Deep brain stimulation of the ventroanterior and ventrolateral thalamus improves motor function in a rat model of Parkinson's disease. Exp Neurol 2019; 317:155-167. [PMID: 30890329 DOI: 10.1016/j.expneurol.2019.03.008] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2019] [Revised: 02/26/2019] [Accepted: 03/14/2019] [Indexed: 12/21/2022]
Abstract
Parkinson's disease (PD) is a neurodegenerative disease with affected individuals exhibiting motor symptoms of bradykinesia, muscle rigidity, tremor, postural instability and gait dysfunction. The current gold standard treatment is pharmacotherapy with levodopa, but long-term use is associated with motor response fluctuations and can cause abnormal movements called dyskinesias. An alternative treatment option is deep brain stimulation (DBS), with the two FDA-approved brain targets for PD situated in the basal ganglia; specifically, in the subthalamic nucleus (STN) and globus pallidus pars interna (GPi). Both improve quality of life and motor scores by ~50-70% in well-selected patients but can also elicit adverse effects on cognition and other non-motor symptoms. Therefore, identifying a novel DBS target that is efficacious for patients not optimally responsive to current DBS targets, with fewer side-effects, has clear clinical merit. Here, we investigate whether the ventroanterior (VA) and ventrolateral (VL) motor nuclei of the thalamus can serve as novel and effective DBS targets for PD. In the limb-use asymmetry test (LAT), hemiparkinsonian rats showcased left forelimb akinesia and touched only 6.5 ± 1.3% with that paw. However, these animals touched equally with both forepaws with DBS at 10 Hz, 100 μsec pulse width and 100 μA cathodic stimulation in the VA (n = 7), VL (n = 8) or at the interface between the two thalamic nuclei, which we refer to as the VA|VL (n = 12). With whole-cell patch-clamp recordings, we noted that VA|VL stimulation in vitro increased the number of induced action potentials in proximal neurons in both areas, albeit VL neurons transitioned from bursting to non-bursting action potentials (APs) with large excitatory postsynaptic potentials time-locked to stimulation. In contrast, VA neurons were excited with VA|VL electrical stimulation but with little change in spiking phenotype. Overall, our findings show that DBS in the VA, VL or VA|VL improved motor function in a rat model of PD, plausibly via increased excitation of residing neurons.
Affiliation(s)
- Heidi R Tucker, Department of Neuroscience & Experimental Therapeutics, Albany Medical College, Albany, NY, United States of America
- Emily Mahoney, Department of Neuroscience & Experimental Therapeutics, Albany Medical College, Albany, NY, United States of America
- Ashok Chhetri, Department of Neuroscience & Experimental Therapeutics, Albany Medical College, Albany, NY, United States of America
- Kristen Unger, Department of Neuroscience & Experimental Therapeutics, Albany Medical College, Albany, NY, United States of America
- Gianna Mamone, Department of Neuroscience & Experimental Therapeutics, Albany Medical College, Albany, NY, United States of America
- Gabrielle Kim, Department of Neuroscience & Experimental Therapeutics, Albany Medical College, Albany, NY, United States of America
- Aliyah Audil, Department of Neuroscience & Experimental Therapeutics, Albany Medical College, Albany, NY, United States of America
- Benjamin Moolick, Department of Neuroscience & Experimental Therapeutics, Albany Medical College, Albany, NY, United States of America
- Eric S Molho, Department of Neurology, Albany Medical Center, Albany, NY, United States of America
- Julie G Pilitsis, Department of Neuroscience & Experimental Therapeutics, Albany Medical College, Albany, NY, United States of America; Department of Neurosurgery, Albany Medical Center, Albany, NY, United States of America
- Damian S Shin, Department of Neuroscience & Experimental Therapeutics, Albany Medical College, Albany, NY, United States of America; Department of Neurology, Albany Medical Center, Albany, NY, United States of America
22
Somers J, Harper REF, Albert JT. How Many Clocks, How Many Times? On the Sensory Basis and Computational Challenges of Circadian Systems. Front Behav Neurosci 2018; 12:211. [PMID: 30258357 PMCID: PMC6143808 DOI: 10.3389/fnbeh.2018.00211] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2018] [Accepted: 08/21/2018] [Indexed: 11/13/2022] Open
Abstract
A vital task for every organism is not only to decide what to do but also when to do it. For this reason, "circadian clocks" have evolved in virtually all forms of life. Conceptually, circadian clocks can be divided into two functional domains: an autonomous oscillator creates a ~24 h self-sustained rhythm, and sensory machinery interprets external information to alter the phase of the autonomous oscillation. It is through this simple design that variations in external stimuli (for example, daylight) can alter our sense of time. However, the clock's simplicity ends with its basic concept. In metazoan animals, multiple external and internal stimuli, from light to temperature and even metabolism, have been shown to affect clock time. This raises the fundamental question of cue integration: how are the many, and potentially conflicting, sources of information combined to sense a single time of day? Moreover, individual stimuli are often detected through various sensory pathways. Some sensory cells, such as insect chordotonal neurons, provide the clock with both temperature and mechanical information. Adding confusion to complexity, there seems to be not only one central clock in the animal's brain but numerous additional clocks in the body's periphery. It is currently not clear how (or if) these "peripheral clocks" are synchronized to their central counterparts or if both clocks "tick" independently from one another. In this review article, we would like to leave the comfort zones of conceptual simplicity and assume a more holistic perspective of circadian clock function. Focusing on recent results from Drosophila melanogaster, we will discuss some of the sensory, and computational, challenges organisms face when keeping track of time.
Affiliation(s)
- Jason Somers, Ear Institute, University College London, London, United Kingdom; The Francis Crick Institute, London, United Kingdom
- Ross E. F. Harper, Ear Institute, University College London, London, United Kingdom; Centre for Mathematics and Physics in the Life Sciences and Experimental Biology (CoMPLEX), University College London, London, United Kingdom
- Joerg T. Albert, Ear Institute, University College London, London, United Kingdom; The Francis Crick Institute, London, United Kingdom; Centre for Mathematics and Physics in the Life Sciences and Experimental Biology (CoMPLEX), University College London, London, United Kingdom; Department of Cell and Developmental Biology, University College London, London, United Kingdom
23
Abouelseoud G, Abouelseoud Y, Shoukry A, Ismail N, Mekky J. A Mixed Integer Linear Programming Approach to Electrical Stimulation Optimization Problems. IEEE Trans Neural Syst Rehabil Eng 2018; 26:527-537. [PMID: 29432118 DOI: 10.1109/tnsre.2018.2789380] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Electrical stimulation optimization is a challenging problem. Even when a single region is targeted for excitation, the problem remains a constrained multi-objective optimization problem. The constrained nature of the problem results from safety concerns while its multi-objectives originate from the requirement that non-targeted regions should remain unaffected. In this paper, we propose a mixed integer linear programming formulation that can successfully address the challenges facing this problem. Moreover, the proposed framework can conclusively check the feasibility of the stimulation goals. This helps researchers to avoid wasting time trying to achieve goals that are impossible under a chosen stimulation setup. The superiority of the proposed framework over alternative methods is demonstrated through simulation examples.
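The structure of such a problem can be sketched on a toy instance: choose integer current levels on a few electrodes to maximize the effect on a target region while keeping off-target effects and total injected current bounded. Exhaustive search over the small integer grid stands in for a MILP solver here, and the "lead field" matrix and limits are entirely hypothetical:

```python
import itertools

# Hypothetical linear "lead field": field[i][j] = effect of electrode j on region i.
field = [[0.9, 0.4, 0.2],   # region 0: the excitation target
         [0.5, 0.1, 0.3],   # region 1: must stay below the safety limit
         [0.2, 0.6, 0.1]]   # region 2: must stay below the safety limit
LEVELS = (-2, -1, 0, 1, 2)  # allowed integer current levels per electrode
MAX_TOTAL, OFF_LIMIT = 4, 1.0

def best_assignment():
    """Exhaustive search standing in for a MILP solver: maximize the target
    effect subject to off-target and total-current constraints. Returns
    (None, None) when no assignment is feasible."""
    best_val, best_x = None, None
    for x in itertools.product(LEVELS, repeat=3):
        if sum(abs(c) for c in x) > MAX_TOTAL:
            continue  # safety limit on total injected current
        effects = [sum(f * c for f, c in zip(row, x)) for row in field]
        if any(abs(e) > OFF_LIMIT for e in effects[1:]):
            continue  # non-targeted regions must remain (nearly) unaffected
        if best_val is None or effects[0] > best_val:
            best_val, best_x = effects[0], x
    return best_x, best_val

x, target_effect = best_assignment()
```

Because the search is exhaustive, it also checks feasibility conclusively on this toy instance: if no assignment survives the constraint checks, the stimulation goal is unreachable under this setup, mirroring the feasibility-certification property the paper highlights.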
24
Chen X, Beck JM, Pearson JM. Neuron's eye view: Inferring features of complex stimuli from neural responses. PLoS Comput Biol 2017; 13:e1005645. [PMID: 28827790 PMCID: PMC5578681 DOI: 10.1371/journal.pcbi.1005645] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2016] [Revised: 08/31/2017] [Accepted: 06/17/2017] [Indexed: 11/26/2022] Open
Abstract
Experiments that study neural encoding of stimuli at the level of individual neurons typically choose a small set of features present in the world—contrast and luminance for vision, pitch and intensity for sound—and assemble a stimulus set that systematically varies along these dimensions. Subsequent analysis of neural responses to these stimuli typically focuses on regression models, with experimenter-controlled features as predictors and spike counts or firing rates as responses. Unfortunately, this approach requires knowledge in advance about the relevant features coded by a given population of neurons. For domains as complex as social interaction or natural movement, however, the relevant feature space is poorly understood, and an arbitrary a priori choice of features may give rise to confirmation bias. Here, we present a Bayesian model for exploratory data analysis that is capable of automatically identifying the features present in unstructured stimuli based solely on neuronal responses. Our approach is unique within the class of latent state space models of neural activity in that it assumes that firing rates of neurons are sensitive to multiple discrete time-varying features tied to the stimulus, each of which has Markov (or semi-Markov) dynamics. That is, we are modeling neural activity as driven by multiple simultaneous stimulus features rather than intrinsic neural dynamics. We derive a fast variational Bayesian inference algorithm and show that it correctly recovers hidden features in synthetic data, as well as ground-truth stimulus features in a prototypical neural dataset. To demonstrate the utility of the algorithm, we also apply it to cluster neural responses and demonstrate successful recovery of features corresponding to monkeys and faces in the image set.
Many neuroscience experiments begin with a set of reduced stimuli designed to vary only along a small set of variables. Yet many phenomena of interest—natural movies, objects—are not easily parameterized by a small number of dimensions. Here, we develop a novel Bayesian model for clustering stimuli based solely on neural responses, allowing us to discover which latent features of complex stimuli actually drive neural activity. We demonstrate that this model allows us to recover key features of neural responses in a pair of well-studied paradigms.
Affiliation(s)
- Xin Chen, Duke Institute for Brain Sciences, Duke University, Durham, North Carolina, United States of America; Center for Cognitive Neuroscience, Duke University, Durham, North Carolina, United States of America
- Jeffrey M. Beck, Center for Cognitive Neuroscience, Duke University, Durham, North Carolina, United States of America; Department of Neurobiology, Duke University Medical Center, Durham, North Carolina, United States of America
- John M. Pearson, Duke Institute for Brain Sciences, Duke University, Durham, North Carolina, United States of America; Center for Cognitive Neuroscience, Duke University, Durham, North Carolina, United States of America
|
25
|
Truccolo W. From point process observations to collective neural dynamics: Nonlinear Hawkes process GLMs, low-dimensional dynamics and coarse graining. JOURNAL OF PHYSIOLOGY, PARIS 2016; 110:336-347. [PMID: 28336305 PMCID: PMC5610574 DOI: 10.1016/j.jphysparis.2017.02.004] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/31/2016] [Revised: 02/20/2017] [Accepted: 02/26/2017] [Indexed: 01/15/2023]
Abstract
This review presents a perspective on capturing collective dynamics in recorded neuronal ensembles based on multivariate point process models, inference of low-dimensional dynamics and coarse graining of spatiotemporal measurements. A general probabilistic framework for continuous-time point processes is reviewed, with an emphasis on multivariate nonlinear Hawkes processes with exogenous inputs. A point process generalized linear model (PP-GLM) framework for the estimation of discrete time multivariate nonlinear Hawkes processes is described. The approach is illustrated with the modeling of collective dynamics in neocortical neuronal ensembles recorded in human and non-human primates, and prediction of single-neuron spiking. A complementary approach to capture collective dynamics based on low-dimensional dynamics ("order parameters") inferred via latent state-space models with point process observations is presented. The approach is illustrated by inferring and decoding low-dimensional dynamics in primate motor cortex during naturalistic reach and grasp movements. Finally, we briefly review hypothesis tests based on conditional inference and spatiotemporal coarse graining for assessing collective dynamics in recorded neuronal ensembles.
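In discrete time, the nonlinear-Hawkes/PP-GLM view reduces to a conditional intensity that passes a weighted sum of each neuron's recent spike history through a nonlinearity. A minimal two-neuron forward-simulation sketch, where the coupling weights, history kernel, and baselines are all invented for illustration rather than taken from the review:

```python
import numpy as np

rng = np.random.default_rng(1)
T, n = 1000, 2
dt = 0.001                                      # 1 ms bins
baseline = np.log(np.array([5.0, 5.0]))         # ~5 Hz baseline rates
W = np.array([[-1.0, 0.8],                      # (post, pre) coupling weights:
              [0.5, -1.0]])                     # self-inhibition, cross-excitation
kernel = np.exp(-np.arange(1, 51) / 10.0)       # 50-bin exponential history filter
spikes = np.zeros((n, T))

for t in range(T):
    # history term: filtered past spiking of every presynaptic neuron
    past = spikes[:, max(0, t - 50):t]
    h = past[:, ::-1] @ kernel[:past.shape[1]]  # most recent bin hits kernel[0]
    rate = np.exp(baseline + W @ h)             # conditional intensity (Hz)
    spikes[:, t] = rng.poisson(rate * dt).clip(0, 1)

print(spikes.sum(axis=1))                       # spike count per neuron
```

Estimation then runs this logic in reverse: the same history-filtered regressors become columns of a design matrix, and the weights are fit by maximizing the point-process (Poisson) likelihood.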
Collapse
Affiliation(s)
- Wilson Truccolo
- Department of Neuroscience and Institute for Brain Science, Brown University, Providence, USA; Center for Neurorestoration and Neurotechnology, U.S. Department of Veterans Affairs, Providence, USA.
| |
Collapse
|
26
|
A gustocentric perspective to understanding primary sensory cortices. Curr Opin Neurobiol 2016; 40:118-124. [PMID: 27455038 DOI: 10.1016/j.conb.2016.06.008] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2016] [Revised: 06/08/2016] [Accepted: 06/09/2016] [Indexed: 12/27/2022]
Abstract
Most of the general principles used to explain sensory cortical function have been inferred from experiments performed on neocortical, primary sensory areas. Attempts to apply a neocortical view to the study of the gustatory cortex (GC) have provided only a limited understanding of this area. Failures to conform GC to classical neocortical principles have been implicitly interpreted as a demonstration of GC's uniqueness. Here we propose to take the opposite perspective, dismissing GC's uniqueness and using principles extracted from its study as a lens for looking at neocortical sensory function. In this review, we describe three significant findings related to gustatory cortical function and advocate their relevance for understanding neocortical sensory areas.
Collapse
|
27
|
Miller P. Itinerancy between attractor states in neural systems. Curr Opin Neurobiol 2016; 40:14-22. [PMID: 27318972 DOI: 10.1016/j.conb.2016.05.005] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2016] [Revised: 05/20/2016] [Accepted: 05/27/2016] [Indexed: 11/25/2022]
Abstract
Converging evidence from neural, perceptual and simulated data suggests that discrete attractor states form within neural circuits through learning and development. External stimuli may bias neural activity to one attractor state or cause activity to transition between several discrete states. Evidence for such transitions, whose timing can vary across trials, is best accrued through analyses that avoid any trial-averaging of data. One such method, hidden Markov modeling, has been effective in this context, revealing state transitions in many neural circuits during many tasks. Concurrently, modeling efforts have revealed computational benefits of stimulus processing via transitions between attractor states. This review describes the current state of the field, with comments on how its perceived limitations have been addressed.
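The hidden Markov analysis mentioned above rests on a forward pass over binned ensemble spike counts. A minimal forward-algorithm sketch for a Poisson-emission HMM follows; the parameters are illustrative, not fit to any dataset, and a full fit would add the backward pass and EM parameter updates:

```python
import numpy as np
from scipy.stats import poisson

def forward_loglik(counts, pi, A, rates):
    """Log-likelihood of binned spike counts (n_neurons, T) under a Poisson HMM."""
    n_states, T = A.shape[0], counts.shape[1]
    # log emission probability of each time bin under each hidden state
    log_b = np.array([poisson.logpmf(counts, rates[:, [s]]).sum(axis=0)
                      for s in range(n_states)])              # (n_states, T)
    log_alpha = np.log(pi) + log_b[:, 0]
    for t in range(1, T):
        log_alpha = log_b[:, t] + np.logaddexp.reduce(
            log_alpha[:, None] + np.log(A), axis=0)
    return np.logaddexp.reduce(log_alpha)

# toy example: 2 attractor-like states, 3 neurons with state-dependent rates
pi = np.array([0.5, 0.5])
A = np.array([[0.98, 0.02], [0.02, 0.98]])          # sticky transitions
rates = np.array([[1.0, 5.0], [5.0, 1.0], [2.0, 2.0]])   # (n_neurons, n_states)
counts = np.array([[1, 0, 6, 5], [4, 6, 1, 0], [2, 3, 2, 1]])
print(forward_loglik(counts, pi, A, rates))
```

Working in log space with `logaddexp` keeps the recursion numerically stable, which matters once trials run to hundreds of bins.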
Collapse
Affiliation(s)
- Paul Miller
- Volen National Center for Complex Systems, Brandeis University, Waltham, MA 02454-9110, USA
| |
Collapse
|
28
|
Abstract
Whereas many laboratory-studied decisions involve a highly trained animal identifying an ambiguous stimulus, many naturalistic decisions do not. Consumption decisions, for instance, involve determining whether to eject or consume an already identified stimulus in the mouth and are decisions that can be made without training. By standard analyses, rodent cortical single-neuron taste responses come to predict such consumption decisions across the 500 ms preceding the consumption or rejection itself; decision-related firing emerges well after stimulus identification. Analyzing single-trial ensemble activity using hidden Markov models, we show these decision-related cortical responses to be part of a reliable sequence of states (each defined by the firing rates within the ensemble) separated by brief state-to-state transitions, the latencies of which vary widely between trials. When we aligned data to the onset of the (late-appearing) state that dominates during the time period in which single-neuron firing is correlated to taste palatability, the apparent ramp in stimulus-aligned choice-related firing was shown to be a much more precipitous coherent jump. This jump in choice-related firing resembled a step function more than it did the output of a standard (ramping) decision-making model, and provided a robust prediction of decision latency in single trials. Together, these results demonstrate that activity related to naturalistic consumption decisions emerges nearly instantaneously in cortical ensembles. Significance statement: This paper provides a description of how the brain makes evaluative decisions. The majority of work on the neurobiology of decision making deals with "what is it?" decisions; out of this work has emerged a model whereby neurons accumulate information about the stimulus in the form of slowly increasing firing rates and reach a decision when those firing rates reach a threshold. 
Here, we study a different kind of more naturalistic decision--a decision to evaluate "what shall I do with it?" after the identity of a taste in the mouth has been identified--and show that this decision is not made through the gradual increasing of stimulus-related firing, but rather that this decision appears to be made in a sudden moment of "insight."
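Aligning single trials to state onsets, as done in this study, requires the most likely hidden-state path on each trial; Viterbi decoding provides exactly that, and the onset of the late state becomes the alignment point. A self-contained sketch with toy emission probabilities (not the authors' fitted model):

```python
import numpy as np

def viterbi(log_b, pi, A):
    """Most likely state path given log emission probabilities (n_states, T)."""
    n_states, T = log_b.shape
    delta = np.log(pi) + log_b[:, 0]
    back = np.zeros((n_states, T), dtype=int)
    for t in range(1, T):
        trans = delta[:, None] + np.log(A)       # (from_state, to_state)
        back[:, t] = trans.argmax(axis=0)
        delta = trans.max(axis=0) + log_b[:, t]
    path = np.zeros(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 1, 0, -1):                # trace back-pointers
        path[t - 1] = back[path[t], t]
    return path

# toy 2-state trial whose emissions favor state 0 early and state 1 late
log_b = np.log(np.array([[0.9, 0.9, 0.1, 0.1],
                         [0.1, 0.1, 0.9, 0.9]]))
path = viterbi(log_b, np.array([0.5, 0.5]), np.array([[0.9, 0.1], [0.1, 0.9]]))
onset = int(np.argmax(path == 1))   # single-trial transition latency (bin index)
print(path, onset)
```

Because `onset` is computed per trial, trial-to-trial variability in transition timing survives, which is what trial-averaged analyses smear into an apparent ramp.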
Collapse
|
29
|
Matsubara T, Torikai H. An Asynchronous Recurrent Network of Cellular Automaton-Based Neurons and Its Reproduction of Spiking Neural Network Activities. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2016; 27:836-852. [PMID: 25974951 DOI: 10.1109/tnnls.2015.2425893] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
Modeling and implementation approaches for the reproduction of input-output relationships in biological nervous tissues contribute to the development of engineering and clinical applications. However, because of high nonlinearity, the traditional modeling and implementation approaches encounter difficulties in terms of generalization ability (i.e., performance when reproducing an unknown data set) and computational resources (i.e., computation time and circuit elements). To overcome these difficulties, asynchronous cellular automaton-based neuron (ACAN) models, which are described as special kinds of cellular automata that can be implemented as small asynchronous sequential logic circuits, have been proposed. This paper presents a novel type of such ACAN and a theoretical analysis of its excitability. This paper also presents a novel network of such neurons, which can mimic input-output relationships of biological and nonlinear ordinary differential equation model neural networks. Numerical analyses confirm that the presented network has a higher generalization ability than other major modeling and implementation approaches. In addition, Field-Programmable Gate Array implementations confirm that the presented network requires fewer computational resources.
Collapse
|
30
|
Mazzucato L, Fontanini A, La Camera G. Stimuli Reduce the Dimensionality of Cortical Activity. Front Syst Neurosci 2016; 10:11. [PMID: 26924968 PMCID: PMC4756130 DOI: 10.3389/fnsys.2016.00011] [Citation(s) in RCA: 63] [Impact Index Per Article: 7.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2015] [Accepted: 02/02/2016] [Indexed: 12/31/2022] Open
Abstract
The activity of ensembles of simultaneously recorded neurons can be represented as a set of points in the space of firing rates. Even though the dimension of this space is equal to the ensemble size, neural activity can be effectively localized on smaller subspaces. The dimensionality of the neural space is an important determinant of the computational tasks supported by the neural activity. Here, we investigate the dimensionality of neural ensembles from the sensory cortex of alert rats during periods of ongoing (inter-trial) and stimulus-evoked activity. We find that dimensionality grows linearly with ensemble size, and grows significantly faster during ongoing activity compared to evoked activity. We explain these results using a spiking network model based on a clustered architecture. The model captures the difference in growth rate between ongoing and evoked activity and predicts a characteristic scaling with ensemble size that could be tested in high-density multi-electrode recordings. Moreover, we present a simple theory that predicts the existence of an upper bound on dimensionality. This upper bound is inversely proportional to the amount of pair-wise correlations and, compared to a homogeneous network without clusters, it is larger by a factor equal to the number of clusters. The empirical estimation of such bounds depends on the number and duration of trials and is well predicted by the theory. Together, these results provide a framework to analyze neural dimensionality in alert animals, its behavior under stimulus presentation, and its theoretical dependence on ensemble size, number of clusters, and correlations in spiking network models.
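A common scalar summary of this kind of dimensionality in the population-coding literature is the participation ratio of the covariance eigenvalues, PR = (Σᵢλᵢ)² / Σᵢλᵢ². The paper's exact estimator may differ; this is a generic sketch on synthetic low-dimensional data:

```python
import numpy as np

def participation_ratio(X):
    """Dimensionality of X (n_samples, n_neurons) via covariance eigenvalues."""
    lam = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    lam = np.clip(lam, 0.0, None)        # guard against tiny negative eigenvalues
    return lam.sum() ** 2 / (lam ** 2).sum()

rng = np.random.default_rng(0)
# 50 "neurons" driven by 3 shared latent signals plus weak private noise
latents = rng.normal(size=(500, 3))
X = latents @ rng.normal(size=(3, 50)) + 0.1 * rng.normal(size=(500, 50))
print(participation_ratio(X))   # close to 3, far below the 50-neuron ceiling
```

The measure equals the ensemble size when all eigenvalues are equal and collapses toward 1 as variance concentrates in one mode, which is why pair-wise correlations bound it from above, as the abstract describes.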
Collapse
Affiliation(s)
- Luca Mazzucato
- Department of Neurobiology and Behavior, State University of New York at Stony Brook, Stony Brook, NY, USA
| | - Alfredo Fontanini
- Department of Neurobiology and Behavior, State University of New York at Stony Brook, Stony Brook, NY, USA; Graduate Program in Neuroscience, State University of New York at Stony Brook, Stony Brook, NY, USA
| | - Giancarlo La Camera
- Department of Neurobiology and Behavior, State University of New York at Stony Brook, Stony Brook, NY, USA; Graduate Program in Neuroscience, State University of New York at Stony Brook, Stony Brook, NY, USA
| |
Collapse
|
31
|
Abstract
Single-trial analyses of ensemble activity in alert animals demonstrate that cortical circuit dynamics evolve through temporal sequences of metastable states. Metastability has been studied for its potential role in sensory coding, memory, and decision-making. Yet, very little is known about the network mechanisms responsible for its genesis. It is often assumed that the onset of state sequences is triggered by an external stimulus. Here we show that state sequences can also be observed in the absence of overt sensory stimulation. Analysis of multielectrode recordings from the gustatory cortex of alert rats revealed ongoing sequences of states, where single neurons spontaneously attain several firing rates across different states. This single-neuron multistability represents a challenge to existing spiking network models, where typically each neuron is at most bistable. We present a recurrent spiking network model that accounts for both the spontaneous generation of state sequences and the multistability in single-neuron firing rates. Each state results from the activation of neural clusters with potentiated intracluster connections, with the firing rate in each cluster depending on the number of active clusters. Simulations show that the model's ensemble activity hops among the different states, reproducing the ongoing dynamics observed in the data. When probed with external stimuli, the model predicts the quenching of single-neuron multistability into bistability and the reduction of trial-by-trial variability. Both predictions were confirmed in the data. Together, these results provide a theoretical framework that captures both ongoing and evoked network dynamics in a single mechanistic model.
Collapse
|
32
|
Wang W, Tripathy SJ, Padmanabhan K, Urban NN, Kass RE. An Empirical Model for Reliable Spiking Activity. Neural Comput 2015; 27:1609-23. [PMID: 26079749 DOI: 10.1162/neco_a_00754] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Understanding a neuron's transfer function, which relates a neuron's inputs to its outputs, is essential for understanding the computational role of single neurons. Recently, statistical models, based on point processes and using generalized linear model (GLM) technology, have been widely applied to predict dynamic neuronal transfer functions. However, the standard version of these models fails to capture important features of neural activity, such as responses to stimuli that elicit highly reliable trial-to-trial spiking. Here, we consider a generalization of the usual GLM that incorporates nonlinearity by modeling reliable and unreliable spikes as being generated by distinct stimulus features. We develop and apply these models to spike trains from olfactory bulb mitral cells recorded in vitro. We find that spike generation in these neurons is better modeled when reliable and unreliable spikes are considered separately and that this effect is most pronounced for neurons with a large number of both reliable and unreliable spikes.
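The baseline model in this line of work is a Poisson GLM with an exponential link, whose log-likelihood is concave, so plain gradient ascent recovers the stimulus filter. A self-contained sketch on synthetic data (the paper's reliable/unreliable split is not implemented here; the filter values and learning rate are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 5000, 5
X = rng.normal(size=(T, d))                 # stimulus design matrix
w_true = np.array([0.5, -0.3, 0.2, 0.0, 0.1])
y = rng.poisson(np.exp(X @ w_true - 1.0))   # spike counts, baseline exp(-1)

# gradient ascent on the Poisson log-likelihood (exponential link)
Xb = np.hstack([X, np.ones((T, 1))])        # append intercept column
w = np.zeros(d + 1)
for _ in range(200):
    rate = np.exp(Xb @ w)
    grad = Xb.T @ (y - rate) / T            # score of the Poisson likelihood
    w += 0.1 * grad
print(w)                                    # approaches [0.5, -0.3, 0.2, 0.0, 0.1, -1.0]
```

The paper's generalization would amount to fitting two such filters, one for bins labeled reliable across trials and one for the rest, and comparing held-out likelihoods against the single-filter model.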
Collapse
Affiliation(s)
- Wanjie Wang
- Department of Statistics, Carnegie Mellon University, Pittsburgh, PA 15213, U.S.A.
| | - Shreejoy J Tripathy
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA 15213, U.S.A.
| | | | - Nathaniel N Urban
- Center for the Neural Basis of Cognition and Department of Biology, Carnegie Mellon University, Pittsburgh, PA 15213, U.S.A.
| | - Robert E Kass
- Department of Statistics, Machine Learning Department, and Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA 15213, U.S.A.
| |
Collapse
|
33
|
Volgushev M, Ilin V, Stevenson IH. Identifying and tracking simulated synaptic inputs from neuronal firing: insights from in vitro experiments. PLoS Comput Biol 2015; 11:e1004167. [PMID: 25823000 PMCID: PMC4379067 DOI: 10.1371/journal.pcbi.1004167] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2014] [Accepted: 02/02/2015] [Indexed: 11/18/2022] Open
Abstract
Accurately describing synaptic interactions between neurons and how interactions change over time are key challenges for systems neuroscience. Although intracellular electrophysiology is a powerful tool for studying synaptic integration and plasticity, it is limited by the small number of neurons that can be recorded simultaneously in vitro and by the technical difficulty of intracellular recording in vivo. One way around these difficulties may be to use large-scale extracellular recording of spike trains and apply statistical methods to model and infer functional connections between neurons. These techniques have the potential to reveal large-scale connectivity structure based on the spike timing alone. However, the interpretation of functional connectivity is often approximate, since only a small fraction of presynaptic inputs are typically observed. Here we use in vitro current injection in layer 2/3 pyramidal neurons to validate methods for inferring functional connectivity in a setting where input to the neuron is controlled. In experiments with partially-defined input, we inject a single simulated input with known amplitude on a background of fluctuating noise. In a fully-defined input paradigm, we then control the synaptic weights and timing of many simulated presynaptic neurons. By analyzing the firing of neurons in response to these artificial inputs, we ask 1) How does functional connectivity inferred from spikes relate to simulated synaptic input? and 2) What are the limitations of connectivity inference? We find that individual current-based synaptic inputs are detectable over a broad range of amplitudes and conditions. Detectability depends on input amplitude and output firing rate, and excitatory inputs are detected more readily than inhibitory. Moreover, as we model increasing numbers of presynaptic inputs, we are able to estimate connection strengths more accurately and detect the presence of connections more quickly. 
These results illustrate the possibilities and outline the limits of inferring synaptic input from spikes.
Collapse
Affiliation(s)
- Maxim Volgushev
- Department of Psychology, University of Connecticut, Storrs, Connecticut, United States of America
| | - Vladimir Ilin
- Department of Psychology, University of Connecticut, Storrs, Connecticut, United States of America
| | - Ian H. Stevenson
- Department of Psychology, University of Connecticut, Storrs, Connecticut, United States of America
| |
Collapse
|
34
|
Standage D, You H, Wang DH, Dorris MC. Trading speed and accuracy by coding time: a coupled-circuit cortical model. PLoS Comput Biol 2013; 9:e1003021. [PMID: 23592967 PMCID: PMC3617027 DOI: 10.1371/journal.pcbi.1003021] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2011] [Accepted: 02/21/2013] [Indexed: 11/19/2022] Open
Abstract
Our actions take place in space and time, but despite the role of time in decision theory and the growing acknowledgement that the encoding of time is crucial to behaviour, few studies have considered the interactions between neural codes for objects in space and for elapsed time during perceptual decisions. The speed-accuracy trade-off (SAT) provides a window into spatiotemporal interactions. Our hypothesis is that temporal coding determines the rate at which spatial evidence is integrated, controlling the SAT by gain modulation. Here, we propose that local cortical circuits are inherently suited to the relevant spatial and temporal coding. In simulations of an interval estimation task, we use a generic local-circuit model to encode time by ‘climbing’ activity, seen in cortex during tasks with a timing requirement. The model is a network of simulated pyramidal cells and inhibitory interneurons, connected by conductance synapses. A simple learning rule enables the network to quickly produce new interval estimates, which show signature characteristics of estimates by experimental subjects. Analysis of network dynamics formally characterizes this generic, local-circuit timing mechanism. In simulations of a perceptual decision task, we couple two such networks. Network function is determined only by spatial selectivity and NMDA receptor conductance strength; all other parameters are identical. To trade speed and accuracy, the timing network simply learns longer or shorter intervals, driving the rate of downstream decision processing by spatially non-selective input, an established form of gain modulation. Like the timing network's interval estimates, decision times show signature characteristics of those by experimental subjects. Overall, we propose, demonstrate and analyse a generic mechanism for timing, a generic mechanism for modulation of decision processing by temporal codes, and we make predictions for experimental verification. 
Studies in neuroscience have characterized how the brain represents objects in space and how these objects are selected for detailed perceptual processing. The selection process entails a decision about which object is favoured by the available evidence over time. This period of time is typically in the range of hundreds of milliseconds and is widely believed to be crucial for decisions, allowing neurons to filter noise in the evidence. Despite the widespread belief that time plays this role in decisions and the growing recognition that the brain estimates elapsed time during perceptual tasks, few studies have considered how the encoding of time affects decision making. We propose that neurons encode time in this range by the same general mechanisms used to select objects for detailed processing, and that these temporal representations determine how long evidence is filtered. To this end, we simulate a perceptual decision by coupling two instances of a neural network widely used to simulate localized regions of the cerebral cortex. One network encodes the passage of time and the other makes decisions based on noisy evidence. The former influences the performance of the latter, reproducing signature characteristics of temporal estimates and perceptual decisions.
Collapse
Affiliation(s)
- Dominic Standage
- Department of Biomedical and Molecular Sciences and Center for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- * E-mail: (DS); (DHW)
| | - Hongzhi You
- Department of Systems Science and National Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
| | - Da-Hui Wang
- Department of Systems Science and National Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- * E-mail: (DS); (DHW)
| | - Michael C. Dorris
- Department of Biomedical and Molecular Sciences and Center for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Institute of Neuroscience, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai, China
| |
Collapse
|
35
|
Smith C, Paninski L. Computing loss of efficiency in optimal Bayesian decoders given noisy or incomplete spike trains. NETWORK (BRISTOL, ENGLAND) 2013; 24:75-98. [PMID: 23742213 DOI: 10.3109/0954898x.2013.789568] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
We investigate Bayesian methods for optimal decoding of noisy or incompletely observed spike trains. Information about neural identity or temporal resolution may be lost during spike detection and sorting, or spike times measured near the soma may be corrupted with noise due to stochastic membrane channel effects in the axon. We focus on neural encoding models in which the (discrete) neural state evolves according to stimulus-dependent Markovian dynamics. Such models are sufficiently flexible that we may incorporate realistic stimulus encoding and spiking dynamics, but nonetheless permit exact computation via efficient hidden Markov model forward-backward methods. We analyze two types of signal degradation. First, we quantify the information lost due to jitter or downsampling in the spike times. Second, we quantify the information lost when knowledge of the identities of different spiking neurons is corrupted. In each case the methods introduced here make it possible to quantify the dependence of the information loss on biophysical parameters such as firing rate, spike jitter amplitude, spike observation noise, etc. In particular, decoders that model the probability distribution of spike-neuron assignments significantly outperform decoders that use only the most likely spike assignments, and are ignorant of the posterior spike assignment uncertainty.
Collapse
Affiliation(s)
- Carl Smith
- Department of Chemistry, Columbia University, New York, NY 10027, USA.
| | | |
Collapse
|
36
|
Mishchenko Y, Paninski L. Efficient methods for sampling spike trains in networks of coupled neurons. Ann Appl Stat 2011. [DOI: 10.1214/11-aoas467] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
|
37
|
Vidaurre D, Bielza C, Larrañaga P. On nonlinearity in neural encoding models applied to the primary visual cortex. NETWORK (BRISTOL, ENGLAND) 2011; 22:97-125. [PMID: 22149671 DOI: 10.3109/0954898x.2011.637606] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
Within the regression framework, we show how different levels of nonlinearity influence the instantaneous firing rate prediction of single neurons. Nonlinearity can be achieved in several ways. In particular, we can enrich the predictor set with basis expansions of the input variables (enlarging the number of inputs) or train a simple but different model for each area of the data domain. Spline-based models are popular within the first category. Kernel smoothing methods fall into the second category. Whereas the first choice is useful for globally characterizing complex functions, the second is very handy for temporal data and is able to include inner-state subject variations. Also, interactions among stimuli are considered. We compare state-of-the-art firing rate prediction methods with some more sophisticated spline-based nonlinear methods: multivariate adaptive regression splines and sparse additive models. We also study the impact of kernel smoothing. Finally, we explore the combination of various local models in an incremental learning procedure. Our goal is to demonstrate that appropriate nonlinearity treatment can greatly improve the results. We test our hypothesis on both synthetic data and real neuronal recordings in cat primary visual cortex, giving a plausible explanation of the results from a biological perspective.
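The basis-expansion route to nonlinearity can be illustrated in a few lines: augmenting a linear firing-rate predictor with a truncated-power (spline-style) basis sharply reduces prediction error on a nonlinearly tuned response. This toy uses ordinary least squares on synthetic data; the paper's actual methods (MARS, sparse additive models, kernel smoothing) are more sophisticated:

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.uniform(-2, 2, size=400)                          # 1-D stimulus
y = np.maximum(0.0, s) ** 2 + 0.1 * rng.normal(size=400)  # nonlinear tuning + noise

def fit_mse(X, y):
    """Least-squares fit; return in-sample mean squared error."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.mean((X @ w - y) ** 2))

# purely linear predictor vs. a quadratic-spline basis with a knot at 0
linear = np.column_stack([np.ones_like(s), s])
expanded = np.column_stack([np.ones_like(s), s, s**2, np.maximum(0.0, s)**2])
print(fit_mse(linear, y), fit_mse(expanded, y))  # expansion fits far better
```

The expanded design matrix is still fit by a linear solver, which is the point of the approach: the nonlinearity lives entirely in the features, so estimation stays convex.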
Collapse
Affiliation(s)
- Diego Vidaurre
- Computational Intelligence Group, Departamento de Inteligencia Artificial, Universidad Politécnica de Madrid, Boadilla del Monte, Spain.
| | | | | |
Collapse
|