1
Jha A, Ashwood ZC, Pillow JW. Active Learning for Discrete Latent Variable Models. Neural Comput 2024; 36:437-474. PMID: 38363661. DOI: 10.1162/neco_a_01646.
Abstract
Active learning seeks to reduce the amount of data required to fit the parameters of a model, thus forming an important class of techniques in modern machine learning. However, past work on active learning has largely overlooked latent variable models, which play a vital role in neuroscience, psychology, and a variety of other engineering and scientific disciplines. Here we address this gap by proposing a novel framework for maximum-mutual-information input selection for discrete latent variable regression models. We first apply our method to a class of models known as mixtures of linear regressions (MLR). While it is well known that active learning confers no advantage for linear-gaussian regression models, we use Fisher information to show analytically that active learning can nevertheless achieve large gains for mixtures of such models, and we validate this improvement using both simulations and real-world data. We then consider a powerful class of temporally structured latent variable models given by a hidden Markov model (HMM) with generalized linear model (GLM) observations, which has recently been used to identify discrete states from animal decision-making data. We show that our method substantially reduces the amount of data needed to fit GLM-HMMs and outperforms a variety of approximate methods based on variational and amortized inference. Infomax learning for latent variable models thus offers a powerful approach for characterizing temporally structured latent states, with a wide variety of applications in neuroscience and beyond.
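For orientation, the core infomax idea can be sketched numerically. The snippet below is not the authors' implementation; it assumes a two-component mixture of linear regressions with a known noise level and a sample-based posterior over the component weights, and it scores each candidate input by a Monte Carlo estimate of the mutual information between the next response and the parameters (marginal predictive entropy minus expected conditional entropy).

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_pdf(y, mu, sigma):
    return np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def predictive_density(y_grid, x, w1, w2, pi, sigma):
    # p(y | x, theta) for one posterior sample theta = (w1, w2, pi).
    return pi * gauss_pdf(y_grid, w1 @ x, sigma) + (1 - pi) * gauss_pdf(y_grid, w2 @ x, sigma)

def infomax_score(x, samples, sigma=0.5, y_grid=np.linspace(-8.0, 8.0, 1601)):
    # Monte Carlo estimate of I(y; theta | x) = H[p(y|x)] - E_theta H[p(y|x,theta)].
    dy = y_grid[1] - y_grid[0]
    dens = np.array([predictive_density(y_grid, x, w1, w2, pi, sigma)
                     for (w1, w2, pi) in samples])
    marginal = dens.mean(axis=0)
    h_marg = -np.sum(marginal * np.log(marginal + 1e-300)) * dy
    h_cond = -np.mean(np.sum(dens * np.log(dens + 1e-300), axis=1) * dy)
    return h_marg - h_cond

# Toy "posterior": samples of the two regression weight vectors and mixing weight.
samples = [(rng.normal([1.0, 2.0], 0.2), rng.normal([-1.0, 0.5], 0.2), 0.5)
           for _ in range(50)]

# Candidate 2D inputs on the unit circle; choose the most informative one.
angles = np.linspace(0.0, 2.0 * np.pi, 36, endpoint=False)
candidates = np.stack([np.cos(angles), np.sin(angles)], axis=1)
scores = [infomax_score(x, samples) for x in candidates]
print("most informative input:", candidates[int(np.argmax(scores))].round(2))
```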
Affiliation(s)
- Aditi Jha
- Princeton Neuroscience Institute and Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, U.S.A.
- Zoe C Ashwood
- Princeton Neuroscience Institute and Department of Computer Science, Princeton University, Princeton, NJ 08544, U.S.A.
- Jonathan W Pillow
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, U.S.A.
2
Tauber JM, Brincat SL, Stephen EP, Donoghue JA, Kozachkov L, Brown EN, Miller EK. Propofol-mediated Unconsciousness Disrupts Progression of Sensory Signals through the Cortical Hierarchy. J Cogn Neurosci 2024; 36:394-413. PMID: 37902596. PMCID: PMC11161138. DOI: 10.1162/jocn_a_02081.
Abstract
A critical component of anesthesia is the loss of sensory perception. Propofol is the most widely used drug for general anesthesia, but the neural mechanisms of how and when it disrupts sensory processing are not fully understood. We analyzed local field potential and spiking recorded from Utah arrays in auditory cortex, associative cortex, and cognitive cortex of nonhuman primates before and during propofol-mediated unconsciousness. Sensory stimuli elicited robust and decodable stimulus responses and triggered periods of stimulus-related synchronization between brain areas in the local field potential of awake animals. By contrast, propofol-mediated unconsciousness eliminated stimulus-related synchrony and drastically weakened stimulus responses and information in all brain areas except for auditory cortex, where responses and information persisted. However, we found that stimuli occurring during spiking Up states triggered weaker spiking responses than in awake animals in auditory cortex, and little or no spiking response in higher-order areas. These results suggest that propofol's effect on sensory processing is not simply due to asynchronous Down states. Rather, both Down states and Up states reflect disrupted dynamics.
Affiliation(s)
- John M Tauber
- Massachusetts Institute of Technology, Cambridge, MA
- Leo Kozachkov
- Massachusetts Institute of Technology, Cambridge, MA
- Emery N Brown
- Massachusetts Institute of Technology, Cambridge, MA
- Massachusetts General Hospital, Boston
- Harvard University, Cambridge, MA
- Earl K Miller
- Massachusetts Institute of Technology, Cambridge, MA
3
Weiss DA, Borsa AM, Pala A, Sederberg AJ, Stanley GB. A machine learning approach for real-time cortical state estimation. J Neural Eng 2024; 21:016016. PMID: 38232377. PMCID: PMC10868597. DOI: 10.1088/1741-2552/ad1f7b.
Abstract
Objective. Cortical function is under constant modulation by internally driven, latent variables that regulate excitability, collectively known as 'cortical state'. Despite a vast literature in this area, the estimation of cortical state remains relatively ad hoc and is not amenable to real-time implementation. Here, we implement robust, data-driven, and fast algorithms that address several technical challenges for online cortical state estimation. Approach. We use unsupervised Gaussian mixture models to identify discrete, emergent clusters in spontaneous local field potential signals in cortex. We then extend our approach to a temporally informed hidden semi-Markov model (HSMM) with Gaussian observations to better model and infer cortical state transitions. Finally, we implement our HSMM cortical state inference algorithms in a real-time system, evaluating their performance in emulation experiments. Main results. Unsupervised clustering approaches reveal emergent state-like structure in spontaneous electrophysiological data that recapitulates arousal-related cortical states as indexed by behavioral indicators. HSMMs enable cortical state inferences in a real-time context by modeling the temporal dynamics of cortical state switching. Using HSMMs provides robustness to state estimates arising from noisy, sequential electrophysiological data. Significance. To our knowledge, this work represents the first implementation of a real-time software tool for continuously decoding cortical states with high temporal resolution (40 ms). The software tools that we provide can facilitate our understanding of how cortical states dynamically modulate cortical function on a moment-by-moment basis and provide a basis for state-aware brain-machine interfaces across health and disease.
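The clustering step described in the Approach can be illustrated with a minimal sketch (not the authors' released software): an unsupervised Gaussian mixture is fit to hypothetical LFP band-power features, and each time window receives a discrete state label plus soft posterior probabilities. The two-band feature choice and the number of components are assumptions made for the example.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Hypothetical features: log band power in two bands (e.g., low/high frequency)
# for 1000 time windows, drawn from two synthetic "cortical states".
state = rng.integers(0, 2, size=1000)
means = np.array([[2.0, -1.0],    # synchronized-like: high low-frequency power
                  [-1.0, 1.5]])   # desynchronized-like: high high-frequency power
features = means[state] + rng.normal(scale=0.7, size=(1000, 2))

# Unsupervised GMM clustering into discrete, emergent clusters.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
labels = gmm.fit_predict(features)

# Posterior state probabilities per window (soft assignments).
posteriors = gmm.predict_proba(features)
print("cluster counts:", np.bincount(labels))
print("first window posterior:", posteriors[0].round(3))
```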
Affiliation(s)
- David A Weiss
- Program in Bioengineering, Georgia Institute of Technology, Atlanta, GA, United States of America
- Wallace H Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, United States of America
- Adriano MF Borsa
- Program in Bioengineering, Georgia Institute of Technology, Atlanta, GA, United States of America
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, United States of America
- Aurélie Pala
- Department of Biology, Emory University, Atlanta, GA, United States of America
- Audrey J Sederberg
- Department of Neuroscience, University of Minnesota Medical School, Minneapolis, MN, United States of America
- Medical Discovery Team in Optical Imaging and Brain Science, University of Minnesota, Minneapolis, MN, United States of America
- Garrett B Stanley
- Wallace H Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, United States of America
4
Garwood IC, Major AJ, Antonini MJ, Correa J, Lee Y, Sahasrabudhe A, Mahnke MK, Miller EK, Brown EN, Anikeeva P. Multifunctional fibers enable modulation of cortical and deep brain activity during cognitive behavior in macaques. Sci Adv 2023; 9:eadh0974. PMID: 37801492. PMCID: PMC10558126. DOI: 10.1126/sciadv.adh0974.
Abstract
Recording and modulating neural activity in vivo enables investigations of the neurophysiology underlying behavior and disease. However, there is a dearth of translational tools for simultaneous recording and localized receptor-specific modulation. We address this limitation by translating multifunctional fiber neurotechnology previously only available for rodent studies to enable cortical and subcortical neural recording and modulation in macaques. We record single-neuron and broader oscillatory activity during intracranial GABA infusions in the premotor cortex and putamen. By applying state-space models to characterize changes in electrophysiology, we uncover that neural activity evoked by a working memory task is reshaped by even a modest local inhibition. The recordings provide detailed insight into the electrophysiological effect of neurotransmitter receptor modulation in both cortical and subcortical structures in an awake macaque. Our results demonstrate a first-time application of multifunctional fibers for causal studies of neuronal activity in behaving nonhuman primates and pave the way for clinical translation of fiber-based neurotechnology.
Affiliation(s)
- Indie C. Garwood
- Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, USA
- Alex J. Major
- The Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Marc-Joseph Antonini
- Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Josefina Correa
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Youngbin Lee
- Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Materials Science and Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA
- Atharva Sahasrabudhe
- Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Chemistry, Massachusetts Institute of Technology, Cambridge, MA, USA
- Meredith K. Mahnke
- The Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Earl K. Miller
- The Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Emery N. Brown
- Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, USA
- The Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Anesthesia, Critical Care, and Pain Medicine, Massachusetts General Hospital, Boston, MA, USA
- Institute for Medical Engineering and Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Anaesthesia, Harvard Medical School, Boston, MA, USA
- Polina Anikeeva
- Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Materials Science and Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA
5
He M, Das P, Hotan G, Purdon PL. Switching state-space modeling of neural signal dynamics. PLoS Comput Biol 2023; 19:e1011395. PMID: 37639391. PMCID: PMC10491408. DOI: 10.1371/journal.pcbi.1011395.
Abstract
Linear parametric state-space models are a ubiquitous tool for analyzing neural time series data, providing a way to characterize the underlying brain dynamics with much greater statistical efficiency than non-parametric data analysis approaches. However, neural time series data are frequently time-varying, exhibiting rapid changes in dynamics, with transient activity that is often the key feature of interest in the data. Stationary methods can be adapted to time-varying scenarios by employing fixed-duration windows under an assumption of quasi-stationarity. But time-varying dynamics can be explicitly modeled by switching state-space models, i.e., by using a pool of state-space models with different dynamics selected by a probabilistic switching process. Unfortunately, exact solutions for state inference and parameter learning with switching state-space models are intractable. Here we revisit a switching state-space model inference approach first proposed by Ghahramani and Hinton. We provide explicit derivations for solving the inference problem iteratively after applying a variational approximation on the joint posterior of the hidden states and the switching process. We introduce a novel initialization procedure using an efficient leave-one-out strategy to compare among candidate models, which significantly improves performance compared to the existing method that relies on deterministic annealing. We then utilize this state inference solution within a generalized expectation-maximization algorithm to estimate model parameters of the switching process and the linear state-space models with dynamics potentially shared among candidate models. We perform extensive simulations under different settings to benchmark performance against existing switching inference methods and further validate the robustness of our switching inference solution outside the generative switching model class. Finally, we demonstrate the utility of our method for sleep spindle detection in real recordings, showing how switching state-space models can be used to detect and extract transient spindles from human sleep electroencephalograms in an unsupervised manner.
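For readers unfamiliar with the model class, the sketch below simulates data from a switching state-space model: a discrete Markov chain selects which of two linear-Gaussian dynamical regimes drives the latent state and the observation at each time step. It is a generative illustration only, not the variational inference procedure developed in the paper, and all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

T = 500
# Two candidate linear-Gaussian dynamics (AR(1) coefficients and noise scales).
a = np.array([0.99, 0.70])        # slow vs. fast mean reversion
q = np.array([0.05, 0.50])        # process noise std per model
r = 0.10                          # shared observation noise std
P = np.array([[0.98, 0.02],       # switching-process transition matrix
              [0.05, 0.95]])

s = np.zeros(T, dtype=int)        # discrete switching state
x = np.zeros(T)                   # hidden continuous state
y = np.zeros(T)                   # observations
for t in range(1, T):
    s[t] = rng.choice(2, p=P[s[t - 1]])
    x[t] = a[s[t]] * x[t - 1] + rng.normal(scale=q[s[t]])
    y[t] = x[t] + rng.normal(scale=r)

print("fraction of time in model 0: %.2f" % (s == 0).mean())
```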
Affiliation(s)
- Mingjian He
- Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, Massachusetts, United States of America
- Harvard-MIT Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Proloy Das
- Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, Massachusetts, United States of America
- Harvard Medical School, Boston, Massachusetts, United States of America
- Department of Anesthesia, Perioperative and Pain Medicine, Stanford University School of Medicine, Stanford, California, United States of America
- Gladia Hotan
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore
- Patrick L. Purdon
- Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, Massachusetts, United States of America
- Harvard Medical School, Boston, Massachusetts, United States of America
- Department of Anesthesia, Perioperative and Pain Medicine, Stanford University School of Medicine, Stanford, California, United States of America
6
Tauber JM, Brincat SL, Stephen EP, Donoghue JA, Kozachkov L, Brown EN, Miller EK. Propofol-Mediated Unconsciousness Disrupts Progression of Sensory Signals through the Cortical Hierarchy. bioRxiv 2023:2023.06.25.546463. PMID: 37425684. PMCID: PMC10327085. DOI: 10.1101/2023.06.25.546463.
Abstract
A critical component of anesthesia is the loss of sensory perception. Propofol is the most widely used drug for general anesthesia, but the neural mechanisms of how and when it disrupts sensory processing are not fully understood. We analyzed local field potential (LFP) and spiking recorded from Utah arrays in auditory cortex, associative cortex, and cognitive cortex of non-human primates before and during propofol-mediated unconsciousness. Sensory stimuli elicited robust and decodable stimulus responses and triggered periods of stimulus-induced coherence between brain areas in the LFP of awake animals. By contrast, propofol-mediated unconsciousness eliminated stimulus-induced coherence and drastically weakened stimulus responses and information in all brain areas except for auditory cortex, where responses and information persisted. However, we found that stimuli occurring during spiking Up states triggered weaker spiking responses than in awake animals in auditory cortex, and little or no spiking response in higher-order areas. These results suggest that propofol's effect on sensory processing is not just due to asynchronous Down states. Rather, both Down states and Up states reflect disrupted dynamics.
Affiliation(s)
- John M. Tauber
- The Picower Institute for Learning & Memory, Massachusetts Institute of Technology (MIT), Cambridge, MA 02139, USA
- Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology (MIT), Cambridge, MA 02139, USA
- Institute for Data, Systems, and Society, Massachusetts Institute of Technology (MIT), Cambridge, MA 02139, USA
- Scott L. Brincat
- The Picower Institute for Learning & Memory, Massachusetts Institute of Technology (MIT), Cambridge, MA 02139, USA
- Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology (MIT), Cambridge, MA 02139, USA
- Emily P. Stephen
- Department of Mathematics and Statistics, Boston University, Boston, MA 02215, USA
- Jacob A. Donoghue
- The Picower Institute for Learning & Memory, Massachusetts Institute of Technology (MIT), Cambridge, MA 02139, USA
- Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology (MIT), Cambridge, MA 02139, USA
- Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology (MIT), Cambridge, MA 02139, USA
- Leo Kozachkov
- Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology (MIT), Cambridge, MA 02139, USA
- Emery N. Brown
- The Picower Institute for Learning & Memory, Massachusetts Institute of Technology (MIT), Cambridge, MA 02139, USA
- Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology (MIT), Cambridge, MA 02139, USA
- Institute for Data, Systems, and Society, Massachusetts Institute of Technology (MIT), Cambridge, MA 02139, USA
- Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology (MIT), Cambridge, MA 02139, USA
- Department of Anesthesia, Massachusetts General Hospital, Boston, MA 02114, USA
- Harvard Medical School, Harvard University, Boston, MA 02115, USA
- Earl K. Miller
- The Picower Institute for Learning & Memory, Massachusetts Institute of Technology (MIT), Cambridge, MA 02139, USA
- Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology (MIT), Cambridge, MA 02139, USA
7
Joo P, Lee H, Wang S, Kim S, Hudetz AG. Network Model With Reduced Metabolic Rate Predicts Spatial Synchrony of Neuronal Activity. Front Comput Neurosci 2021; 15:738362. PMID: 34690730. PMCID: PMC8529180. DOI: 10.3389/fncom.2021.738362.
Abstract
In a cerebral hypometabolic state, cortical neurons exhibit slow synchronous oscillatory activity with sparse firing. How such synchronization becomes spatially organized as the cerebral metabolic rate decreases has not been systematically investigated. We developed a network model of leaky integrate-and-fire neurons with an additional dependency on ATP dynamics. Neurons were scattered in a 2D space, and their population activity patterns at varying ATP levels were simulated. The model predicted a decrease in firing activity as the ATP production rate was lowered. Under hypometabolic conditions, an oscillatory firing pattern, that is, an ON-OFF cycle, arose through a failure of sustained firing due to reduced excitatory positive feedback and rebound firing after the slow recovery of ATP concentration. The firing-rate oscillations of distant neurons at first developed asynchronously and then changed into burst suppression and global synchronization as ATP production decreased further. These changes resembled experimental data obtained from anesthetized rats, as an example of a metabolically suppressed brain. Together, this study substantiates a novel biophysical mechanism of neuronal network synchronization under limited energy supply conditions.
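The metabolic constraint at the heart of this model can be caricatured with a single leaky integrate-and-fire unit whose drive is scaled by an ATP variable: each spike consumes ATP, ATP recovers slowly, and low ATP weakens the effective drive, so an initially high firing rate decays to a sparse, ATP-limited level. This toy omits the network interactions that produce the ON-OFF cycling and spatial synchronization reported in the paper, and every constant below is an assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

dt, T = 1e-3, 10.0                 # 1 ms steps, 10 s of simulated time
n = int(T / dt)
v, atp = 0.0, 1.0                  # membrane potential (a.u.), normalized ATP
tau_v, v_thresh, v_reset = 0.02, 1.0, 0.0
tau_atp, atp_cost, drive, noise = 1.0, 0.05, 1.5, 0.3
atp_prod = 0.1                     # slow ATP production rate
spikes = []

for t in range(1, n):
    i_eff = drive * atp + noise * rng.normal()      # ATP scales the drive
    v += dt * (-v + i_eff) / tau_v
    atp += dt * atp_prod * (1.0 - atp) / tau_atp    # slow recovery toward 1
    if v >= v_thresh:
        spikes.append(t * dt)
        v = v_reset
        atp = max(atp - atp_cost, 0.0)              # each spike consumes ATP

spikes = np.array(spikes)
print("spikes in first second: %d, in last second: %d"
      % (np.sum(spikes < 1.0), np.sum(spikes > T - 1.0)))
```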
Affiliation(s)
- Pangyu Joo
- Center for Consciousness Science, Department of Anesthesiology, University of Michigan, Ann Arbor, MI, United States
- Department of Physics, Pohang University of Science and Technology, Pohang, South Korea
- Heonsoo Lee
- Center for Consciousness Science, Department of Anesthesiology, University of Michigan, Ann Arbor, MI, United States
- Shiyong Wang
- Center for Consciousness Science, Department of Anesthesiology, University of Michigan, Ann Arbor, MI, United States
- Seunghwan Kim
- Department of Physics, Pohang University of Science and Technology, Pohang, South Korea
- Anthony G Hudetz
- Center for Consciousness Science, Department of Anesthesiology, University of Michigan, Ann Arbor, MI, United States
8
Garwood IC, Chakravarty S, Donoghue J, Mahnke M, Kahali P, Chamadia S, Akeju O, Miller EK, Brown EN. A hidden Markov model reliably characterizes ketamine-induced spectral dynamics in macaque local field potentials and human electroencephalograms. PLoS Comput Biol 2021; 17:e1009280. PMID: 34407069. PMCID: PMC8405019. DOI: 10.1371/journal.pcbi.1009280.
Abstract
Ketamine is an NMDA receptor antagonist commonly used to maintain general anesthesia. At anesthetic doses, ketamine causes high-power gamma (25-50 Hz) oscillations alternating with slow-delta (0.1-4 Hz) oscillations. These dynamics are readily observed in local field potentials (LFPs) of non-human primates (NHPs) and electroencephalogram (EEG) recordings from human subjects. However, a detailed statistical analysis of these dynamics has not been reported. We characterize ketamine's neural dynamics using a hidden Markov model (HMM). The HMM observations are sequences of spectral power in seven canonical frequency bands between 0 and 50 Hz, where power is averaged within each band and scaled between 0 and 1. We model the observations as realizations of multivariate beta probability distributions that depend on a discrete-valued latent state process whose state transitions obey Markov dynamics. Using an expectation-maximization algorithm, we fit this beta-HMM to LFP recordings from 2 NHPs, and separately, to EEG recordings from 9 human subjects who received anesthetic doses of ketamine. Our beta-HMM framework provides a useful tool for experimental data analysis. Together, the estimated beta-HMM parameters and optimal state trajectory revealed an alternating pattern of states characterized primarily by gamma and slow-delta activities. The mean duration of the gamma activity was 2.2 s ([1.7, 2.8] s) and 1.2 s ([0.9, 1.5] s) for the two NHPs, and 2.5 s ([1.7, 3.6] s) for the human subjects. The mean duration of the slow-delta activity was 1.6 s ([1.2, 2.0] s) and 1.0 s ([0.8, 1.2] s) for the two NHPs, and 1.8 s ([1.3, 2.4] s) for the human subjects. Our characterization of the alternating gamma and slow-delta activities revealed five sub-states that show regular sequential transitions. These quantitative insights can inform the development of rhythm-generating neuronal circuit models that give mechanistic insight into this phenomenon and how ketamine produces altered states of arousal.
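To make the observation model concrete, the sketch below generates data from a beta-HMM of the kind described: a discrete Markov state selects, for each of seven frequency bands, the parameters of a beta distribution from which the scaled band power is drawn. The number of states and all parameter values are invented for illustration; this is not the model fitted in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

T, n_bands, n_states = 300, 7, 2
# Beta parameters (a, b) per state and band; state 0 favors high power in the
# last (gamma-like) band, state 1 in the first (slow/delta-like) band.
a = np.full((n_states, n_bands), 2.0)
b = np.full((n_states, n_bands), 5.0)
a[0, -1], b[0, -1] = 8.0, 2.0
a[1, 0], b[1, 0] = 8.0, 2.0

P = np.array([[0.95, 0.05],
              [0.05, 0.95]])      # Markov state transitions

z = np.zeros(T, dtype=int)
obs = np.zeros((T, n_bands))
for t in range(T):
    if t > 0:
        z[t] = rng.choice(n_states, p=P[z[t - 1]])
    obs[t] = rng.beta(a[z[t]], b[z[t]])   # band powers scaled to (0, 1)

print("mean gamma-band power by state:",
      [round(float(obs[z == k, -1].mean()), 2) for k in range(n_states)])
```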
Affiliation(s)
- Indie C. Garwood
- Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- The Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Sourish Chakravarty
- The Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Department of Anesthesia, Critical Care, and Pain Medicine, Massachusetts General Hospital, Boston, Massachusetts, United States of America
- Jacob Donoghue
- Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- The Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Meredith Mahnke
- The Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Pegah Kahali
- The Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Department of Anesthesia, Critical Care, and Pain Medicine, Massachusetts General Hospital, Boston, Massachusetts, United States of America
- Shubham Chamadia
- Department of Anesthesia, Critical Care, and Pain Medicine, Massachusetts General Hospital, Boston, Massachusetts, United States of America
- Oluwaseun Akeju
- Department of Anesthesia, Critical Care, and Pain Medicine, Massachusetts General Hospital, Boston, Massachusetts, United States of America
- Earl K. Miller
- The Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Emery N. Brown
- Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- The Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Department of Anesthesia, Critical Care, and Pain Medicine, Massachusetts General Hospital, Boston, Massachusetts, United States of America
9
Ratanov N. Mean-reverting neuronal model based on two alternating patterns. Biosystems 2020; 196:104190. PMID: 32574580. DOI: 10.1016/j.biosystems.2020.104190.
Abstract
A neuronal action potential model based on the generalised two-state Ornstein-Uhlenbeck process is studied. The model describes all phases of a neuronal spike cycle well, and the intrinsic parameters of the model are clearly specified. Laplace transforms of the firing time are obtained explicitly. Formulae for the mean interspike intervals and their variances, as well as for the average duration of the relative refractory period, are also obtained.
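A minimal simulation in the spirit of the two-pattern, mean-reverting idea is given below: an Ornstein-Uhlenbeck process whose reversion level alternates between two values according to a two-state Markov (telegraph) signal, with a firing event registered at upward threshold crossings. The thresholding rule and all constants are assumptions for illustration, not the analytical model of the paper, which derives firing-time Laplace transforms in closed form.

```python
import numpy as np

rng = np.random.default_rng(5)

dt, T = 1e-3, 10.0
n = int(T / dt)
v = np.zeros(n)
pattern = 0                        # which of the two alternating patterns is active
mu = np.array([-0.5, 1.5])         # reversion levels for the two patterns
theta, sigma = 5.0, 0.6            # mean-reversion rate and noise amplitude
switch_rate = 2.0                  # pattern switches per second (each direction)
threshold = 1.0
spike_times = []

for t in range(1, n):
    if rng.random() < switch_rate * dt:
        pattern = 1 - pattern
    v[t] = v[t - 1] + theta * (mu[pattern] - v[t - 1]) * dt \
           + sigma * np.sqrt(dt) * rng.normal()
    if v[t] >= threshold and v[t - 1] < threshold:
        spike_times.append(t * dt)

isis = np.diff(spike_times)
print("number of firing events: %d, mean interval: %.3f s" % (len(spike_times), isis.mean()))
```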
Affiliation(s)
- Nikita Ratanov
- Chelyabinsk State University, Br. Kashirinykh str., 129, Chelyabinsk, Russia.
10
Abstract
'Bursting', defined as periods of high-frequency firing of a neuron separated by periods of quiescence, has been observed in various neuronal systems, both in vitro and in vivo. It has been associated with a range of neuronal processes, including efficient information transfer and the formation of functional networks during development, and has been shown to be sensitive to genetic and pharmacological manipulations. Accurate detection of periods of bursting activity is thus an important aspect of characterising both spontaneous and evoked neuronal network activity. A wide variety of computational methods have been developed to detect periods of bursting in spike trains recorded from neuronal networks. In this chapter, we review several of the most popular and successful of these methods.
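One of the simplest and most widely used families of burst detectors covered by such reviews is based on interspike-interval (ISI) thresholds. The sketch below groups consecutive spikes whose ISIs stay below a threshold and keeps groups containing a minimum number of spikes; the particular threshold and minimum count are arbitrary choices for the example, not recommendations from the chapter.

```python
import numpy as np

def detect_bursts(spike_times, max_isi=0.01, min_spikes=3):
    """Return (start, end) times of bursts: runs of >= min_spikes spikes
    whose consecutive interspike intervals are all <= max_isi seconds."""
    spike_times = np.asarray(spike_times)
    bursts, run_start = [], 0
    for i in range(1, len(spike_times) + 1):
        end_of_run = (i == len(spike_times)
                      or spike_times[i] - spike_times[i - 1] > max_isi)
        if end_of_run:
            if i - run_start >= min_spikes:
                bursts.append((spike_times[run_start], spike_times[i - 1]))
            run_start = i
    return bursts

# Toy spike train: two dense clusters embedded in sparse background firing.
rng = np.random.default_rng(6)
train = np.sort(np.concatenate([
    0.50 + np.cumsum(rng.exponential(0.005, 8)),   # burst-like cluster
    2.00 + np.cumsum(rng.exponential(0.004, 10)),  # another cluster
    rng.uniform(0, 3, 15),                         # background spikes
]))
print(detect_bursts(train))
```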
11
Yousefi A, Dougherty DD, Eskandar EN, Widge AS, Eden UT. Estimating Dynamic Signals From Trial Data With Censored Values. Comput Psychiatr 2017; 1:58-81. PMID: 29601047. PMCID: PMC5774187. DOI: 10.1162/cpsy_a_00003.
Abstract
Censored data occur commonly in trial-structured behavioral experiments and many other forms of longitudinal data. They can lead to severe bias and reduction of statistical power in subsequent analyses. Principled approaches for dealing with censored data, such as data imputation and methods based on the complete data's likelihood, work well for estimating fixed features of statistical models but have not been extended to dynamic measures, such as serial estimates of an underlying latent variable over time. Here we propose an approach to the censored-data problem for dynamic behavioral signals. We developed a state-space modeling framework with a censored observation process at the trial timescale. We then developed a filter algorithm to compute the posterior distribution of the state process using the available data. We showed that special cases of this framework can incorporate the three most common approaches to censored observations: ignoring trials with censored data, imputing the censored data values, or using the full information available in the data likelihood. Finally, we derived a computationally efficient approximate Gaussian filter that is similar in structure to a Kalman filter, but that efficiently accounts for censored data. We compared the performance of these methods in a simulation study and provide recommendations on which approach to use based on the expected amount of censored data in an experiment. These new techniques can be broadly applied in many research domains in which censored data interfere with estimation, including survival analysis and other clinical trial applications.
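As a simplified, concrete instance of the problem setup (not the filter derived in the paper), the sketch below runs a one-dimensional Kalman filter over trials in which some observations are censored at an upper bound, handling censored trials with the two naive strategies mentioned in the abstract, skipping the update or imputing the bound, so their accuracies can be compared.

```python
import numpy as np

rng = np.random.default_rng(7)

# Latent random-walk state observed with Gaussian noise; values above the
# censoring bound are recorded only as "censored".
T, q, r, bound = 200, 0.05, 0.5, 1.0
x = np.cumsum(rng.normal(scale=np.sqrt(q), size=T))
y = x + rng.normal(scale=np.sqrt(r), size=T)
censored = y > bound

def kalman_1d(y, censored, mode, q=q, r=r, bound=bound):
    m, p, out = 0.0, 1.0, []
    for t in range(len(y)):
        p = p + q                            # predict (random-walk state)
        obs, use = y[t], True
        if censored[t]:
            if mode == "skip":
                use = False                  # ignore censored trials
            else:
                obs = bound                  # impute the censoring bound
        if use:
            k = p / (p + r)                  # Kalman gain
            m, p = m + k * (obs - m), (1 - k) * p
        out.append(m)
    return np.array(out)

for mode in ("skip", "impute"):
    est = kalman_1d(y, censored, mode)
    print(mode, "RMSE: %.3f" % np.sqrt(np.mean((est - x) ** 2)))
```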
Affiliation(s)
- Ali Yousefi
- Department of Neurological Surgery, Massachusetts General Hospital and Harvard Medical School, Boston, MA
- Department of Mathematics and Statistics, Boston University, Boston, MA
- Darin D. Dougherty
- Department of Psychiatry, Massachusetts General Hospital and Harvard Medical School, Boston, MA
- Emad N. Eskandar
- Department of Neurological Surgery, Massachusetts General Hospital and Harvard Medical School, Boston, MA
- Alik S. Widge
- Department of Psychiatry, Massachusetts General Hospital and Harvard Medical School, Boston, MA
- Picower Institute for Learning & Memory, Massachusetts Institute of Technology, Cambridge, MA
- Uri T. Eden
- Department of Mathematics and Statistics, Boston University, Boston, MA
12
Jercog D, Roxin A, Barthó P, Luczak A, Compte A, de la Rocha J. UP-DOWN cortical dynamics reflect state transitions in a bistable network. eLife 2017; 6:22425. PMID: 28826485. PMCID: PMC5582872. DOI: 10.7554/elife.22425.
Abstract
In the idling brain, neuronal circuits transition between periods of sustained firing (UP state) and quiescence (DOWN state), a pattern the mechanisms of which remain unclear. Here we analyzed spontaneous cortical population activity from anesthetized rats and found that UP and DOWN durations were highly variable and that population rates showed no significant decay during UP periods. We built a network rate model with excitatory (E) and inhibitory (I) populations exhibiting a novel bistable regime between a quiescent and an inhibition-stabilized state of arbitrarily low rate. Fluctuations triggered state transitions, while adaptation in E cells paradoxically caused a marginal decay of E-rate but a marked decay of I-rate in UP periods, a prediction that we validated experimentally. A spiking network implementation further predicted that DOWN-to-UP transitions must be caused by synchronous high-amplitude events. Our findings provide evidence of bistable cortical networks that exhibit non-rhythmic state transitions when the brain rests.
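Some intuition for noise- and adaptation-driven UP-DOWN alternation in rate models can be gained from the toy below: a single population with a sigmoidal transfer function, weak noise, and a slow adaptation variable that terminates high-rate episodes. It is a generic caricature with invented parameters, not the two-population inhibition-stabilized model analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(8)

def F(x):
    # Sigmoidal rate transfer function with output in (0, 1).
    return 1.0 / (1.0 + np.exp(-10.0 * (x - 0.5)))

dt, T = 1e-3, 10.0
n = int(T / dt)
r = np.zeros(n)                        # population rate (normalized)
a = np.zeros(n)                        # slow adaptation variable

tau_r, tau_a = 0.01, 1.0               # fast rate dynamics, slow adaptation (s)
w, g, I, noise = 1.0, 0.8, 0.45, 0.05  # recurrent gain, adaptation gain, drive

for t in range(1, n):
    drive = w * r[t - 1] - g * a[t - 1] + I + noise * rng.normal()
    r[t] = r[t - 1] + dt * (-r[t - 1] + F(drive)) / tau_r
    a[t] = a[t - 1] + dt * (-a[t - 1] + r[t - 1]) / tau_a

up = r > 0.5                           # label high-rate (UP-like) periods
n_episodes = int(np.sum(np.diff(up.astype(int)) == 1))
print("fraction of time UP: %.2f, UP episodes: %d" % (up.mean(), n_episodes))
```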
Affiliation(s)
- Daniel Jercog
- Institut d'Investigacions Biomèdiques August Pi i Sunyer, Barcelona, Spain
- Alex Roxin
- Centre de Recerca Matemàtica, Bellaterra, Spain
- Peter Barthó
- MTA TTK NAP B Research Group of Sleep Oscillations, Budapest, Hungary
- Artur Luczak
- Canadian Center for Behavioural Neuroscience, University of Lethbridge, Lethbridge, Canada
- Albert Compte
- Institut d'Investigacions Biomèdiques August Pi i Sunyer, Barcelona, Spain
- Jaime de la Rocha
- Institut d'Investigacions Biomèdiques August Pi i Sunyer, Barcelona, Spain
13
Dragomir A, Akay YM, Zhang D, Akay M. Ventral Tegmental Area Dopamine Neurons Firing Model Reveals Prenatal Nicotine Induced Alterations. IEEE Trans Neural Syst Rehabil Eng 2016; 25:1387-1396. PMID: 28114025. DOI: 10.1109/tnsre.2016.2636133.
Abstract
The dopamine (DA) neurons found in the ventral tegmental area (VTA) are widely involved in the addiction and natural reward circuitry of the brain. Their firing patterns were shown to be important modulators of dopamine release, and repetitive burst-like firing was highlighted as a major firing pattern of DA neurons in the VTA. In the present study we use a state-space model to characterize DA neuron firing patterns and trace transitions of neural activity through bursting and non-bursting states. The hidden semi-Markov model (HSMM) framework that we use offers statistically principled inference of bursting states and considers VTA DA firing patterns to be generated according to a Gamma process. Additionally, the explicit Gamma-based modeling of state durations allows efficient decoding of the underlying neural information. Consequently, we decode and segment our single-unit recordings from DA neurons in the VTA according to the sequence of statistically discriminated HSMM states. The segmentation is used to study bursting-state characteristics in data recorded from rats prenatally exposed to nicotine (6 mg/kg/day starting on gestational day 3) and rats from saline-treated dams. Our results indicate that prenatal nicotine exposure significantly alters the burst firing patterns of a subset of DA neurons in adolescent rats, suggesting that nicotine exposure during gestation may induce severe effects on the neural networks involved in addiction and reward.
14
Gigante G, Deco G, Marom S, Del Giudice P. Network Events on Multiple Space and Time Scales in Cultured Neural Networks and in a Stochastic Rate Model. PLoS Comput Biol 2015; 11:e1004547. PMID: 26558616. PMCID: PMC4641680. DOI: 10.1371/journal.pcbi.1004547.
Abstract
Cortical networks, in vitro as well as in vivo, can spontaneously generate a variety of collective dynamical events such as network spikes, UP and DOWN states, global oscillations, and avalanches. Though each of them has been variously recognized in previous works as an expression of the excitability of the cortical tissue and the associated nonlinear dynamics, a unified picture of the determinant factors (dynamical and architectural) is desirable and not yet available. Progress has also been partially hindered by the use of a variety of statistical measures to define the network events of interest. We propose here a common probabilistic definition of network events that, applied to the firing activity of cultured neural networks, highlights the co-occurrence of network spikes, power-law distributed avalanches, and exponentially distributed 'quasi-orbits', which offer a third type of collective behavior. A rate model, including synaptic excitation and inhibition with no imposed topology, synaptic short-term depression, and finite-size noise, accounts for all these different, coexisting phenomena. We find that their emergence is largely regulated by the proximity to an oscillatory instability of the dynamics, where the non-linear excitable behavior leads to a self-amplification of activity fluctuations over a wide range of scales in space and time. In this sense, the cultured network dynamics is compatible with an excitation-inhibition balance corresponding to a slightly sub-critical regime. Finally, we propose and test a method to infer the characteristic time of the fatigue process from the observed time course of the network's firing rate. Unlike the model, which possesses a single fatigue mechanism, the cultured network appears to show multiple time scales, signalling the possible coexistence of different fatigue mechanisms.
Affiliation(s)
- Guido Gigante
- Italian Institute of Health, Rome, Italy
- Mperience srl, Rome, Italy
- Gustavo Deco
- Center for Brain and Cognition, Computational Neuroscience Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Institució Catalana de la Recerca i Estudis Avançats (ICREA), Universitat Pompeu Fabra, Passeig Lluís Companys 23, Barcelona, Spain
- Shimon Marom
- Technion - Israel Institute of Technology, Haifa, Israel
- Paolo Del Giudice
- Italian Institute of Health, Rome, Italy
- Istituto Nazionale di Fisica Nucleare (INFN), Rome, Italy
15
Estimating latent attentional states based on simultaneous binary and continuous behavioral measures. Comput Intell Neurosci 2015; 2015:493769. PMID: 25883639. PMCID: PMC4391722. DOI: 10.1155/2015/493769.
Abstract
Cognition is a complex and dynamic process. It is an essential goal to estimate latent attentional states based on behavioral measures in many sequences of behavioral tasks. Here, we propose a probabilistic modeling and inference framework for estimating the attentional state using simultaneous binary and continuous behavioral measures. The proposed model extends the standard hidden Markov model (HMM) by explicitly modeling the state duration distribution, which yields a special example of the hidden semi-Markov model (HSMM). We validate our methods using computer simulations and experimental data. In computer simulations, we systematically investigate the impacts of model mismatch and the latency distribution. For the experimental data collected from a rodent visual detection task, we validate the results with predictive log-likelihood. Our work is useful for many behavioral neuroscience experiments, where the common goal is to infer the discrete (binary or multinomial) state sequences from multiple behavioral measures.
16
Chen Z, Gomperts SN, Yamamoto J, Wilson MA. Neural representation of spatial topology in the rodent hippocampus. Neural Comput 2014; 26:1-39. PMID: 24102128. PMCID: PMC3967246. DOI: 10.1162/neco_a_00538.
Abstract
Pyramidal cells in the rodent hippocampus often exhibit clear spatial tuning in navigation. Although it has been long suggested that pyramidal cell activity may underlie a topological code rather than a topographic code, it remains unclear whether an abstract spatial topology can be encoded in the ensemble spiking activity of hippocampal place cells. Using a statistical approach developed previously, we investigate this question and related issues in greater detail. We recorded ensembles of hippocampal neurons as rodents freely foraged in one- and two-dimensional spatial environments and used a "decode-to-uncover" strategy to examine the temporally structured patterns embedded in the ensemble spiking activity in the absence of observed spatial correlates during periods of rodent navigation or awake immobility. Specifically, the spatial environment was represented by a finite discrete state space. Trajectories across spatial locations ("states") were associated with consistent hippocampal ensemble spiking patterns, which were characterized by a state transition matrix. From this state transition matrix, we inferred a topology graph that defined the connectivity in the state space. In both one- and two-dimensional environments, the extracted behavior patterns from the rodent hippocampal population codes were compared against randomly shuffled spike data. In contrast to a topographic code, our results support the efficiency of topological coding in the presence of sparse sample size and fuzzy space mapping. This computational approach allows us to quantify the variability of ensemble spiking activity, examine hippocampal population codes during off-line states, and quantify the topological complexity of the environment.
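The final step described above, turning an estimated state-transition matrix into a topology graph over spatial "states", can be illustrated in a few lines: off-diagonal transition probabilities above a threshold are treated as edges of an undirected connectivity graph. The example matrix and the threshold are arbitrary; this is only the graph-extraction idea, not the authors' estimation procedure.

```python
import numpy as np

# Hypothetical 5-state transition matrix (rows sum to 1), e.g. estimated from
# an HMM fit to ensemble spiking during navigation on a linear track.
P = np.array([
    [0.80, 0.18, 0.00, 0.01, 0.01],
    [0.15, 0.70, 0.14, 0.00, 0.01],
    [0.01, 0.15, 0.70, 0.13, 0.01],
    [0.00, 0.01, 0.15, 0.70, 0.14],
    [0.01, 0.00, 0.01, 0.18, 0.80],
])

threshold = 0.05
# Symmetrize and threshold the off-diagonal transition mass to define connectivity.
C = np.maximum(P, P.T)
adjacency = (C > threshold) & ~np.eye(len(P), dtype=bool)

edges = [(i, j) for i in range(len(P)) for j in range(i + 1, len(P)) if adjacency[i, j]]
print("topology edges:", edges)   # expect a chain 0-1-2-3-4 for a linear track
```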
Affiliation(s)
- Zhe Chen
- Department of Brain and Cognitive Sciences and Picower Institute for Learning and Memory, MIT, Cambridge, MA 02139, U.S.A.
17
Indic P, Paydarfar D, Barbieri R. Point process modeling of interbreath interval: a new approach for the assessment of instability of breathing in neonates. IEEE Trans Biomed Eng 2013; 60:2858-66. PMID: 23739777. DOI: 10.1109/tbme.2013.2264162.
Abstract
Interbreath interval (IBI), the time interval between breaths, is an important measure used to analyze irregular breathing patterns in neonates. The discrete bursts of neural activity generate the IBI time series, which exhibits stochastic as well as deterministic dynamics. To quantify the irregularity of breathing, we propose a point process model of IBI using a comprehensive stochastic dynamic modeling framework. The IBIs of immature breathing patterns exhibit a long tail distribution and within a point process model, we have considered the lognormal distribution to represent the stochastic IBI characteristics. An autoregressive (AR) function is embedded within the model to capture the short-term IBI dynamics including abrupt IBI prolongations related to sporadic and periodic apneas that are common in neonates. We tested the utility of our paradigm for depicting the respiratory dynamics in neonatal rats and in preterm infants. Kolmogorov-Smirnov (KS) and independence tests reveal that the model accurately tracks the dynamic characteristics of the signals. In preterm infants, our model-derived indices of IBI instability strongly correlate with clinically derived indices of maturation. Our results validate a new class of algorithms, based on the point process theory, for defining instantaneous measures of breathing irregularity in neonates.
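The flavor of the observation model can be mimicked with a short simulation: interbreath intervals are drawn from a lognormal distribution whose log-mean follows an AR(1) process, producing the heavy right tail and occasional prolonged intervals described above. The coefficients are invented, and this generative toy is not the point-process estimation framework developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(9)

n = 500                                        # number of breaths
rho, base, sigma_ar = 0.8, np.log(0.6), 0.15   # AR(1) on the log-mean (seconds)
sigma_obs = 0.25                               # lognormal shape parameter

mu = np.zeros(n)
ibi = np.zeros(n)
mu[0] = base
for k in range(n):
    if k > 0:
        mu[k] = base + rho * (mu[k - 1] - base) + sigma_ar * rng.normal()
    ibi[k] = rng.lognormal(mean=mu[k], sigma=sigma_obs)

print("median IBI: %.2f s, 99th percentile: %.2f s (long-tailed)"
      % (np.median(ibi), np.percentile(ibi, 99)))
```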
18
Smith C, Paninski L. Computing loss of efficiency in optimal Bayesian decoders given noisy or incomplete spike trains. Network 2013; 24:75-98. PMID: 23742213. DOI: 10.3109/0954898x.2013.789568.
Abstract
We investigate Bayesian methods for optimal decoding of noisy or incompletely-observed spike trains. Information about neural identity or temporal resolution may be lost during spike detection and sorting, or spike times measured near the soma may be corrupted with noise due to stochastic membrane channel effects in the axon. We focus on neural encoding models in which the (discrete) neural state evolves according to stimulus-dependent Markovian dynamics. Such models are sufficiently flexible that we may incorporate realistic stimulus encoding and spiking dynamics, but nonetheless permit exact computation via efficient hidden Markov model forward-backward methods. We analyze two types of signal degradation. First, we quantify the information lost due to jitter or downsampling in the spike-times. Second, we quantify the information lost when knowledge of the identities of different spiking neurons is corrupted. In each case the methods introduced here make it possible to quantify the dependence of the information loss on biophysical parameters such as firing rate, spike jitter amplitude, spike observation noise, etc. In particular, decoders that model the probability distribution of spike-neuron assignments significantly outperform decoders that use only the most likely spike assignments, and are ignorant of the posterior spike assignment uncertainty.
Affiliation(s)
- Carl Smith
- Department of Chemistry, Columbia University, New York, NY 10027, USA.
19
Kelly RC, Kass RE. A framework for evaluating pairwise and multiway synchrony among stimulus-driven neurons. Neural Comput 2012; 24:2007-32. PMID: 22509967. PMCID: PMC3374919. DOI: 10.1162/neco_a_00307.
Abstract
Several authors have previously discussed the use of log-linear models, often called maximum entropy models, for analyzing spike train data to detect synchrony. The usual log-linear modeling techniques, however, do not allow time-varying firing rates that typically appear in stimulus-driven (or action-driven) neurons, nor do they incorporate non-Poisson history effects or covariate effects. We generalize the usual approach, combining point-process regression models of individual neuron activity with log-linear models of multiway synchronous interaction. The methods are illustrated with results found in spike trains recorded simultaneously from primary visual cortex. We then assess the amount of data needed to reliably detect multiway spiking.
Affiliation(s)
- Ryan C Kelly
- Department of Statistics and Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA 15213, U.S.A.
20
Chen Z, Kloosterman F, Brown EN, Wilson MA. Uncovering spatial topology represented by rat hippocampal population neuronal codes. J Comput Neurosci 2012; 33:227-55. PMID: 22307459. DOI: 10.1007/s10827-012-0384-x.
Abstract
Hippocampal population codes play an important role in the representation of the spatial environment and in spatial navigation. Uncovering the internal representation of hippocampal population codes will help us understand the neural mechanisms of the hippocampus. For instance, uncovering the patterns represented by rat hippocampal (CA1) pyramidal cells during periods of either navigation or sleep has been an active research topic over the past decades. However, previous approaches to analyzing or decoding firing patterns of population neurons all assume knowledge of the place fields, which are estimated from training data a priori. It remains unclear how we can extract information from population neuronal responses either without a priori knowledge or in the presence of finite sampling constraints. Finding the answer to this question would strengthen our ability to examine population neuronal codes under different experimental conditions. Using the rat hippocampus as a model system, we attempt to uncover the hidden "spatial topology" represented by the hippocampal population codes. We develop a hidden Markov model (HMM) and a variational Bayesian (VB) inference algorithm to achieve this computational goal, and we apply the analysis to extensive simulation and experimental data. Our empirical results show a promising direction for discovering structural patterns of ensemble spike activity during periods of active navigation. This study would also provide useful insights for future exploratory data analysis of population neuronal codes during periods of sleep.
Affiliation(s)
- Zhe Chen
- Neuroscience Statistics Research Lab, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, USA.
21
Ghorbani M, Mehta M, Bruinsma R, Levine AJ. Nonlinear-dynamics theory of up-down transitions in neocortical neural networks. Phys Rev E 2012; 85:021908. PMID: 22463245. DOI: 10.1103/physreve.85.021908.
Abstract
The neurons of the neocortex show ~1-Hz synchronized transitions between an active up state and a quiescent down state. The up-down state transitions are highly coherent over large sections of the cortex, yet they are accompanied by pronounced, incoherent noise. We propose a simple model for the up-down state oscillations that allows analysis by straightforward dynamical systems theory. An essential feature is a nonuniform network geometry composed of groups of excitatory and inhibitory neurons with strong coupling inside a group and weak coupling between groups. The enhanced deterministic noise of the up state appears as the natural result of the proximity of a partial synchronization transition. The synchronization transition takes place as a function of the long-range synaptic strength linking different groups of neurons.
Affiliation(s)
- Maryam Ghorbani
- Department of Physics and Astronomy, University of California, Los Angeles, Los Angeles, California 90095-1547, USA
22
Eldar E, Morris G, Niv Y. The effects of motivation on response rate: A hidden semi-Markov model analysis of behavioral dynamics. J Neurosci Methods 2011; 201:251-61. DOI: 10.1016/j.jneumeth.2011.06.028.
23
McFarland JM, Hahn TTG, Mehta MR. Explicit-duration hidden Markov model inference of UP-DOWN states from continuous signals. PLoS One 2011; 6:e21606. PMID: 21738730. PMCID: PMC3125293. DOI: 10.1371/journal.pone.0021606.
Abstract
Neocortical neurons show UP-DOWN state (UDS) oscillations under a variety of conditions. These UDS have been extensively studied because of the insight they can yield into the functioning of cortical networks, and their proposed role in putative memory formation. A key element in these studies is determining the precise duration and timing of the UDS. These states are typically determined from the membrane potential of one or a small number of cells, which is often not sufficient to reliably estimate the state of an ensemble of neocortical neurons. The local field potential (LFP) provides an attractive method for determining the state of a patch of cortex with high spatio-temporal resolution; however current methods for inferring UDS from LFP signals lack the robustness and flexibility to be applicable when UDS properties may vary substantially within and across experiments. Here we present an explicit-duration hidden Markov model (EDHMM) framework that is sufficiently general to allow statistically principled inference of UDS from different types of signals (membrane potential, LFP, EEG), combinations of signals (e.g., multichannel LFP recordings) and signal features over long recordings where substantial non-stationarities are present. Using cortical LFPs recorded from urethane-anesthetized mice, we demonstrate that the proposed method allows robust inference of UDS. To illustrate the flexibility of the algorithm we show that it performs well on EEG recordings as well. We then validate these results using simultaneous recordings of the LFP and membrane potential (MP) of nearby cortical neurons, showing that our method offers significant improvements over standard methods. These results could be useful for determining functional connectivity of different brain regions, as well as understanding network dynamics.
Affiliation(s)
- James M. McFarland
- Department of Physics, Brown University, Providence, Rhode Island, United States of America
- Department of Physics and Astronomy, and Integrative Center for Learning and Memory, University of California Los Angeles, Los Angeles, California, United States of America
- Thomas T. G. Hahn
- Department of Psychiatry, Central Institute for Mental Health, Mannheim, Germany
- Behavioural Neurophysiology, Max Planck Institute for Medical Research, Heidelberg, Germany
- Mayank R. Mehta
- Department of Physics and Astronomy, and Integrative Center for Learning and Memory, University of California Los Angeles, Los Angeles, California, United States of America
- Departments of Neurology and Neurobiology, University of California Los Angeles, Los Angeles, California, United States of America
24
Escola S, Fontanini A, Katz D, Paninski L. Hidden Markov models for the stimulus-response relationships of multistate neural systems. Neural Comput 2011; 23:1071-132. PMID: 21299424. DOI: 10.1162/neco_a_00118.
Abstract
Given recent experimental results suggesting that neural circuits may evolve through multiple firing states, we develop a framework for estimating state-dependent neural response properties from spike train data. We modify the traditional hidden Markov model (HMM) framework to incorporate stimulus-driven, non-Poisson point-process observations. For maximal flexibility, we allow external, time-varying stimuli and the neurons' own spike histories to drive both the spiking behavior in each state and the transitioning behavior between states. We employ an appropriately modified expectation-maximization algorithm to estimate the model parameters. The expectation step is solved by the standard forward-backward algorithm for HMMs. The maximization step reduces to a set of separable concave optimization problems if the model is restricted slightly. We first test our algorithm on simulated data and are able to fully recover the parameters used to generate the data and accurately recapitulate the sequence of hidden states. We then apply our algorithm to a recently published data set in which the observed neuronal ensembles displayed multistate behavior and show that inclusion of spike history information significantly improves the fit of the model. Additionally, we show that a simple reformulation of the state space of the underlying Markov chain allows us to implement a hybrid half-multistate, half-histogram model that may be more appropriate for capturing the complexity of certain data sets than either a simple HMM or a simple peristimulus time histogram model alone.
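The expectation step described above is a standard HMM smoother applied to stimulus-driven point-process likelihoods. The sketch below is my simplification, not the paper's implementation: it computes per-bin Poisson log-likelihoods under a state-specific GLM and runs a log-domain forward-backward pass to get posterior state probabilities; an M-step would then refit each state's GLM weights (a concave problem when the model is suitably restricted, as the abstract notes) using those posteriors as weights. All parameter values are made up for the demo.

```python
# A minimal sketch of the E-step for an HMM with stimulus-driven Poisson
# (point-process-like) observations; parameters below are illustrative only.
import numpy as np
from scipy.special import logsumexp

def poisson_glm_loglik(spikes, stim, weight, bias):
    """Per-bin Poisson log-likelihood with rate = exp(weight*stim + bias)."""
    log_rate = weight * stim + bias
    return spikes * log_rate - np.exp(log_rate)    # log(y!) constant dropped

def forward_backward(log_obs, log_A, log_pi):
    """Standard HMM smoother in the log domain; log_obs has shape (T, K)."""
    T, K = log_obs.shape
    log_alpha = np.zeros((T, K))
    log_beta = np.zeros((T, K))
    log_alpha[0] = log_pi + log_obs[0]
    for t in range(1, T):
        log_alpha[t] = log_obs[t] + logsumexp(log_alpha[t - 1][:, None] + log_A, axis=0)
    for t in range(T - 2, -1, -1):
        log_beta[t] = logsumexp(log_A + log_obs[t + 1] + log_beta[t + 1], axis=1)
    log_post = log_alpha + log_beta
    return np.exp(log_post - logsumexp(log_post, axis=1, keepdims=True))

rng = np.random.default_rng(1)
T = 2000
stim = rng.normal(size=T)
true_state = np.cumsum(rng.random(T) < 0.01) % 2          # slow two-state switching
weights, biases = np.array([2.0, -2.0]), np.array([-1.0, -1.0])
spikes = rng.poisson(np.exp(weights[true_state] * stim + biases[true_state]))

log_obs = np.stack([poisson_glm_loglik(spikes, stim, w, b)
                    for w, b in zip(weights, biases)], axis=1)
log_A = np.log(np.array([[0.99, 0.01], [0.01, 0.99]]))
posteriors = forward_backward(log_obs, log_A, np.log([0.5, 0.5]))
print("state decoding accuracy:", (posteriors.argmax(1) == true_state).mean())
```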
Collapse
Affiliation(s)
- Sean Escola
- Center for Theoretical Neuroscience and Department of Psychiatry, Columbia University, New York, NY 10032, USA.
| | | | | | | |
Collapse
|
25
|
Takiyama K, Okada M. Detection of hidden structures in nonstationary spike trains. Neural Comput 2011; 23:1205-33. [PMID: 21299427 DOI: 10.1162/neco_a_00109] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
We propose an algorithm for simultaneously estimating state transitions among neural states and nonstationary firing rates using a switching state-space model (SSSM). This algorithm enables us to detect state transitions on the basis of not only discontinuous changes in mean firing rates but also discontinuous changes in the temporal profiles of firing rates (e.g., temporal correlation). We construct estimation and learning algorithms for a nongaussian SSSM, whose nongaussian property is caused by binary spike events. Local variational methods can transform the binary observation process into a quadratic form. The transformed observation process enables us to construct a variational Bayes algorithm that can determine the number of neural states based on automatic relevance determination. Additionally, our algorithm can estimate model parameters from single-trial data using a priori knowledge about state transitions and firing rates. Synthetic data analysis reveals that our algorithm has higher performance for estimating nonstationary firing rates than previous methods. The analysis also confirms that our algorithm can detect state transitions on the basis of discontinuous changes in temporal correlation, which are transitions that previous hidden Markov models could not detect. We also analyze neural data recorded from the medial temporal area. The statistically detected neural states probably coincide with transient and sustained states that have been detected heuristically. Estimated parameters suggest that our algorithm detects the state transitions on the basis of discontinuous changes in the temporal correlation of firing rates. These results suggest that our algorithm is advantageous in real-data analysis.
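The phrase "local variational methods can transform the binary observation process into a quadratic form" refers to bounding the Bernoulli likelihood by a quadratic in its argument. One standard way to do this (used here purely as an illustration; the paper's full SSSM machinery is not reproduced) is the Jaakkola-Jordan bound, which the snippet below verifies numerically; the grid and the xi values are arbitrary demo choices.

```python
# A minimal sketch: the Jaakkola-Jordan local variational bound gives a
# quadratic-in-x lower bound on log sigma(x), tight at |x| = xi.
import numpy as np

def log_sigmoid(x):
    return -np.log1p(np.exp(-x))

def jj_lower_bound(x, xi):
    """log sigma(xi) + (x - xi)/2 - lambda(xi) * (x^2 - xi^2), lambda = tanh(xi/2)/(4 xi)."""
    lam = np.tanh(xi / 2.0) / (4.0 * xi)
    return log_sigmoid(xi) + (x - xi) / 2.0 - lam * (x ** 2 - xi ** 2)

x = np.linspace(-6, 6, 201)
for xi in (0.5, 1.0, 3.0):
    gap = log_sigmoid(x) - jj_lower_bound(x, xi)
    assert gap.min() >= -1e-10               # it really is a lower bound
    near_xi = np.argmin(np.abs(x - xi))
    print(f"xi={xi}: max gap {gap.max():.3f}, gap near x=xi {gap[near_xi]:.2e}")
```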
Collapse
Affiliation(s)
- Ken Takiyama
- The University of Tokyo, Kashiwanoha 5-1-5, Kashiwa-shi, Chiba 277-8561, Japan.
| | | |
Collapse
|
26
|
Abstract
Neurons perform computations, and convey the results of those computations through the statistical structure of their output spike trains. Here we present a practical method, grounded in the information-theoretic analysis of prediction, for inferring a minimal representation of that structure and for characterizing its complexity. Starting from spike trains, our approach finds their causal state models (CSMs), the minimal hidden Markov models or stochastic automata capable of generating statistically identical time series. We then use these CSMs to objectively quantify both the generalizable structure and the idiosyncratic randomness of the spike train. Specifically, we show that the expected algorithmic information content (the information needed to describe the spike train exactly) can be split into three parts describing (1) the time-invariant structure (complexity) of the minimal spike-generating process, which describes the spike train statistically; (2) the randomness (internal entropy rate) of the minimal spike-generating process; and (3) a residual pure noise term not described by the minimal spike-generating process. We use CSMs to approximate each of these quantities. The CSMs are inferred nonparametrically from the data, making only mild regularity assumptions, via the causal state splitting reconstruction algorithm. The methods presented here complement more traditional spike train analyses by describing not only spiking probability and spike train entropy, but also the complexity of a spike train's structure. We demonstrate our approach using both simulated spike trains and experimental data recorded in rat barrel cortex during vibrissa stimulation.
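The core idea behind causal states is that spike-train histories belong to the same state when they yield the same predictive distribution over the next symbol. The sketch below is an illustration only, not the causal state splitting reconstruction (CSSR) algorithm: it tabulates P(spike next | last L bins) for a simulated binary spike train with a refractory-like rule and merges histories whose predictions agree within a tolerance. The history length, tolerance, and spike probabilities are arbitrary demo values; CSSR replaces this crude merging with statistically principled splitting.

```python
# A minimal sketch of grouping histories by predictive distribution,
# a crude stand-in for causal state reconstruction.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(2)

# Simulate a spike train whose spike probability drops after a spike
# (a simple hidden structure for the grouping to recover).
T, p_base, p_refr = 50_000, 0.3, 0.05
spikes = np.zeros(T, dtype=int)
for t in range(1, T):
    p = p_refr if spikes[t - 1] else p_base
    spikes[t] = rng.random() < p

L = 3                                     # history length considered
counts = defaultdict(lambda: [0, 0])      # history -> [n_next=0, n_next=1]
for t in range(L, T):
    hist = tuple(int(v) for v in spikes[t - L:t])
    counts[hist][spikes[t]] += 1

# Estimate P(next spike | history) and merge histories with similar predictions.
pred = {h: c[1] / (c[0] + c[1]) for h, c in counts.items()}
groups = defaultdict(list)
for h, p in sorted(pred.items(), key=lambda kv: kv[1]):
    placed = False
    for key in groups:
        if abs(key - p) < 0.02:           # tolerance for "same prediction"
            groups[key].append(h)
            placed = True
            break
    if not placed:
        groups[p].append(h)

# With this refractory rule, the 2**L histories should collapse into two
# groups: those ending in a spike and those ending in silence.
for key, hists in groups.items():
    print(f"~P(spike)={key:.2f}: {len(hists)} histories, e.g. {hists[:3]}")
```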
Collapse
Affiliation(s)
- Robert Haslinger
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA 02129, USA.
| | | | | |
Collapse
|
27
|
Detection of bursts in extracellular spike trains using hidden semi-Markov point process models. J Comput Neurosci 2009; 29:203-212. [PMID: 19697116 DOI: 10.1007/s10827-009-0182-2] [Citation(s) in RCA: 27] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2008] [Revised: 07/01/2009] [Accepted: 08/07/2009] [Indexed: 10/20/2022]
Abstract
Neurons in vitro and in vivo have epochs of bursting or "up state" activity during which firing rates are dramatically elevated. Various methods of detecting bursts in extracellular spike trains have appeared in the literature, the most widely used apparently being Poisson Surprise (PS). A natural description of the phenomenon assumes (1) there are two hidden states, which we label "burst" and "non-burst," (2) the neuron evolves stochastically, switching at random between these two states, and (3) within each state the spike train follows a time-homogeneous point process. If in (2) the transitions from non-burst to burst and burst to non-burst states are memoryless, this becomes a hidden Markov model (HMM). For HMMs, the times between state transitions follow exponential distributions and are therefore highly irregular. Because observed bursting may in some cases be fairly regular, exhibiting inter-burst intervals with small variation, we relaxed this assumption. When more general probability distributions are used to describe the state transitions, the two-state point process model becomes a hidden semi-Markov model (HSMM). We developed an efficient Bayesian computational scheme to fit HSMMs to spike train data. Numerical simulations indicate that the method can perform well, sometimes yielding very different results than those based on PS.
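The motivation for the semi-Markov relaxation is easy to see numerically. The sketch below (illustration only; the dwell-time mean and gamma shape are arbitrary demo values, not taken from the paper) compares the coefficient of variation of exponential dwell times, as implied by a plain HMM, with gamma dwell times of the same mean, as an HSMM can model.

```python
# A minimal sketch: exponential (memoryless) state durations are highly
# irregular (CV ~ 1), while gamma durations with the same mean can be much
# more regular (CV ~ 1/sqrt(shape)) -- the regime where an HSMM is the more
# natural description of regular bursting.
import numpy as np

rng = np.random.default_rng(3)
mean_dwell = 200.0                          # arbitrary demo value (ms)

exp_dwells = rng.exponential(mean_dwell, size=100_000)
gamma_dwells = rng.gamma(shape=8.0, scale=mean_dwell / 8.0, size=100_000)

for name, d in [("exponential (HMM)", exp_dwells),
                ("gamma, shape=8 (HSMM)", gamma_dwells)]:
    cv = d.std() / d.mean()
    print(f"{name:22s} mean={d.mean():6.1f} ms  CV={cv:.2f}")
```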
Collapse
|