1. Londei F, Arena G, Ferrucci L, Russo E, Ceccarelli F, Genovesio A. Connecting the dots in the zona incerta: A study of neural assemblies and motifs of inter-area coordination in mice. iScience 2024; 27:108761. PMID: 38274403; PMCID: PMC10808920; DOI: 10.1016/j.isci.2023.108761.
Abstract
The zona incerta (ZI), a subthalamic area connected to numerous brain regions, has attracted clinical interest because its stimulation alleviates the motor symptoms of Parkinson's disease. To explore its coordinative nature, we studied assembly formation in a dataset of neural recordings in mice and quantified the degree of functional coordination of the ZI with 24 other brain areas. We found that the ZI is a highly integrative area. Analysis of "loop-like" motifs, directional assemblies composed of three neurons spanning two areas, revealed reciprocal functional interactions with reentrant signals that, in most cases, start and end with the activation of ZI units. In support of its proposed integrative role, we found that almost one-third of ZI neurons formed assemblies with more than half of the other recorded areas, and that loop-like assemblies may stand out as hyper-integrative motifs compared to other types of activation patterns.
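Directional coordination between areas of the kind described above is often first screened with lagged cross-correlation between spike trains. As a toy illustration (this is not the assembly-detection method used in the study; data and parameters are synthetic), the sketch below histograms pairwise spike-time lags between two units; a peak at a positive lag suggests the first unit tends to lead the second:

```python
import numpy as np

def cross_correlogram(spikes_a, spikes_b, bin_size=0.005, max_lag=0.05):
    """Count spike pairs (t_b - t_a) falling into each lag bin."""
    edges = np.arange(-max_lag, max_lag + bin_size, bin_size)
    lags = (np.asarray(spikes_b)[None, :] - np.asarray(spikes_a)[:, None]).ravel()
    counts, _ = np.histogram(lags, bins=edges)
    centers = edges[:-1] + bin_size / 2
    return centers, counts

# Toy example: unit B tends to fire ~12 ms after unit A.
rng = np.random.default_rng(0)
a = np.sort(rng.uniform(0, 10, 200))
b = np.sort(a + 0.012 + rng.normal(0, 0.001, a.size))
centers, counts = cross_correlogram(a, b)
peak_lag = centers[np.argmax(counts)]  # positive lag: A leads B
```

A "loop-like" interaction would show up as significant peaks in both directions (A leads B and B leads A) across the relevant unit pairs.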
Affiliation(s)
- Fabrizio Londei: Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy; PhD Program in Behavioral Neuroscience, Sapienza University of Rome, Rome, Italy
- Giulia Arena: Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy; PhD Program in Behavioral Neuroscience, Sapienza University of Rome, Rome, Italy
- Lorenzo Ferrucci: Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
- Eleonora Russo: The BioRobotics Institute, Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, 56127 Pisa, Italy
- Francesco Ceccarelli: Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
- Aldo Genovesio: Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
2. Donati E, Valle G. Neuromorphic hardware for somatosensory neuroprostheses. Nat Commun 2024; 15:556. PMID: 38228580; PMCID: PMC10791662; DOI: 10.1038/s41467-024-44723-3.
Abstract
In individuals with sensory-motor impairments, missing limb functions can be restored using neuroprosthetic devices that interface directly with the nervous system. However, restoring the natural tactile experience through electrical neural stimulation requires complex encoding strategies, and current approaches are limited by bandwidth constraints in how effectively they can convey or restore tactile sensations. Neuromorphic technology, which mimics the natural behavior of neurons and synapses, holds promise for replicating the encoding of natural touch and could inform neurostimulation design. In this perspective, we propose that incorporating neuromorphic technologies into neuroprostheses could be an effective approach for developing more natural human-machine interfaces, potentially leading to advancements in device performance, acceptability, and embeddability. We also highlight ongoing challenges and the actions required to facilitate the future integration of these advanced technologies.
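One building block of such neuromorphic encoders is a spiking neuron model that converts an analog sensor reading into spike times. A minimal sketch with a leaky integrate-and-fire unit; the parameters are illustrative and not drawn from the paper:

```python
import numpy as np

def lif_encode(signal, dt=0.001, tau=0.02, threshold=1.0):
    """Convert an analog input trace into spike times with a leaky
    integrate-and-fire unit (membrane resets to 0 after each spike)."""
    v, spikes = 0.0, []
    for i, x in enumerate(signal):
        v += dt * (-v / tau + x)   # Euler step of leaky integration
        if v >= threshold:
            spikes.append(i * dt)
            v = 0.0
    return spikes

# Stronger (sustained) pressure yields a higher spike rate.
t = np.arange(0, 1, 0.001)
weak = lif_encode(np.full(t.size, 60.0))
strong = lif_encode(np.full(t.size, 120.0))
```

The input-strength-to-rate mapping that emerges here is the kind of naturalistic code a neuromorphic stimulation encoder could target.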
Affiliation(s)
- Elisa Donati: Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Giacomo Valle: Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL, USA
3. Sotomayor-Gómez B, Battaglia FP, Vinck M. SpikeShip: A method for fast, unsupervised discovery of high-dimensional neural spiking patterns. PLoS Comput Biol 2023; 19:e1011335. PMID: 37523401; PMCID: PMC10414626; DOI: 10.1371/journal.pcbi.1011335.
Abstract
Neural coding and memory formation depend on temporal spiking sequences that span high-dimensional neural ensembles. The unsupervised discovery and characterization of these spiking sequences requires a suitable dissimilarity measure for spiking patterns, which can then be used for clustering and decoding. Here, we present a new dissimilarity measure based on optimal transport theory, called SpikeShip, which compares multi-neuron spiking patterns based on all the relative spike-timing relationships among neurons. SpikeShip computes the optimal transport cost required to make all the relative spike-timing relationships (across neurons) identical between two spiking patterns. We show that this transport cost can be decomposed into a temporal rigid-translation term, which captures global latency shifts, and a vector of neuron-specific transport flows, which reflect inter-neuronal spike-timing differences. SpikeShip can be computed effectively for high-dimensional neuronal ensembles, has a low computational cost that is linear in the number of spikes, and is sensitive to higher-order correlations. Furthermore, SpikeShip is binless, can handle any form of spike-time distribution, is not affected by firing-rate fluctuations, can detect patterns with a low signal-to-noise ratio, and can be effectively combined with a sliding-window approach. We compare the advantages and differences between SpikeShip and other measures such as SPIKE and the Victor-Purpura distance. We applied SpikeShip to large-scale Neuropixels recordings during spontaneous activity and visual encoding, and show that high-dimensional spiking sequences detected via SpikeShip reliably distinguish between different natural images and different behavioral states. These spiking sequences carried information complementary to conventional firing-rate codes. SpikeShip opens new avenues for studying neural coding and memory consolidation through rapid, unsupervised detection of temporal spiking patterns in high-dimensional neural ensembles.
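The decomposition into a global latency shift plus neuron-specific flows can be illustrated in a drastically simplified setting with one spike per neuron, where the per-neuron transport reduces to a single spike-time difference. This is only a sketch of the idea; the full method handles arbitrary spike counts per neuron:

```python
import numpy as np

def spikeship_toy(pattern_a, pattern_b):
    """Toy version of the SpikeShip idea with one spike per neuron:
    the per-neuron flow is the spike-time difference, a rigid global
    shift (here the plain median flow) captures shared latency, and the
    residual flows quantify genuine pattern dissimilarity."""
    flows = np.asarray(pattern_b) - np.asarray(pattern_a)
    global_shift = np.median(flows)            # rigid-translation term
    dissimilarity = np.mean(np.abs(flows - global_shift))
    return global_shift, dissimilarity

# The same pattern delayed by 50 ms is a pure latency shift:
a = np.array([0.01, 0.03, 0.07, 0.12])
shift, d = spikeship_toy(a, a + 0.05)          # shift ~ 0.05, d ~ 0
```

Because the rigid shift is factored out, the measure ignores global latency and firing-onset differences and reacts only to changed relative timing between neurons.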
Affiliation(s)
- Boris Sotomayor-Gómez: Donders Centre for Neuroscience, Department of Neurophysics, Radboud University Nijmegen, Nijmegen, Netherlands; Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Frankfurt, Germany
- Francesco P. Battaglia: Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Martin Vinck: Donders Centre for Neuroscience, Department of Neurophysics, Radboud University Nijmegen, Nijmegen, Netherlands; Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Frankfurt, Germany
4. Precise Spiking Motifs in Neurobiological and Neuromorphic Data. Brain Sci 2022; 13:68. PMID: 36672049; PMCID: PMC9856822; DOI: 10.3390/brainsci13010068.
Abstract
Why do neurons communicate through spikes? By definition, spikes are all-or-none neural events that occur at continuous times. In other words, spikes are on one hand binary, either occurring or not, and on the other can occur at any asynchronous time, without the need for a centralized clock. This stands in stark contrast to the analog representation of values and the discretized timing classically used in digital processing, which underlie modern-day neural networks. Since neural systems in the living world almost universally use this so-called event-based representation, a better understanding of the phenomenon remains a fundamental challenge in neurobiology and is needed to better interpret the profusion of recorded data. With the growing need for intelligent embedded systems, it also emerges as a new computing paradigm enabling the efficient operation of a new class of sensors and event-based computers, called neuromorphic, which could yield significant gains in computation time and energy consumption, a major societal issue in the era of the digital economy and global warming. In this review paper, we provide evidence from biology, theory, and engineering that the precise timing of spikes plays a crucial role in our understanding of the efficiency of neural networks.
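The contrast between clocked sampling and event-based representation can be made concrete with a send-on-delta encoder, which in spirit is how event-based sensors operate: a value is transmitted only when the signal has moved by more than a threshold since the last event. A minimal illustrative sketch, not taken from the review:

```python
import numpy as np

def send_on_delta(signal, times, threshold=0.1):
    """Emit an event only when the signal moves more than `threshold`
    away from its value at the last event (sign gives ON/OFF polarity)."""
    events, ref = [], signal[0]
    for t, x in zip(times, signal):
        if abs(x - ref) >= threshold:
            events.append((t, 1 if x > ref else -1))
            ref = x
    return events

t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * t)
events = send_on_delta(x, t)
# Events cluster where the signal changes fastest; flat stretches stay silent.
```

Unlike a fixed-rate sampler, the output is sparse and asynchronous: information lives in the event times themselves, which is exactly the property the review argues neural spikes exploit.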
5. Bouhadjar Y, Wouters DJ, Diesmann M, Tetzlaff T. Sequence learning, prediction, and replay in networks of spiking neurons. PLoS Comput Biol 2022; 18:e1010233. PMID: 35727857; PMCID: PMC9273101; DOI: 10.1371/journal.pcbi.1010233.
Abstract
Sequence learning, prediction, and replay have been proposed to constitute the universal computations performed by the neocortex. The Hierarchical Temporal Memory (HTM) algorithm realizes these forms of computation. It learns sequences in an unsupervised and continuous manner using local learning rules, permits a context-specific prediction of future sequence elements, and generates mismatch signals when predictions are not met. While the HTM algorithm accounts for a number of biological features such as topographic receptive fields, nonlinear dendritic processing, and sparse connectivity, it is based on abstract discrete-time neuron and synapse dynamics, as well as on plasticity mechanisms that can only partly be related to known biological mechanisms. Here, we devise a continuous-time implementation of the temporal-memory (TM) component of the HTM algorithm, based on a recurrent network of spiking neurons with biophysically interpretable variables and parameters. The model learns high-order sequences by means of a structural Hebbian synaptic plasticity mechanism supplemented with rate-based homeostatic control. In combination with nonlinear dendritic input integration and local inhibitory feedback, this type of plasticity leads to the dynamic self-organization of narrow sequence-specific subnetworks. These subnetworks provide the substrate for a faithful propagation of sparse, synchronous activity and, thereby, for a robust, context-specific prediction of future sequence elements as well as for the autonomous replay of previously learned sequences. By strengthening the link to biology, our implementation facilitates the evaluation of the TM hypothesis based on experimentally accessible quantities. The continuous-time implementation permits, in particular, an investigation of the role of sequence timing for sequence learning, prediction, and replay. We demonstrate this aspect by studying the effect of sequence speed on learning performance and on the speed of autonomous replay.

Author summary: Essentially all data processed by mammals and many other living organisms are sequential. This holds for all types of sensory input as well as motor output. Being able to form memories of such sequential data, to predict future sequence elements, and to replay learned sequences is a necessary prerequisite for survival. It has been hypothesized that sequence learning, prediction, and replay constitute the fundamental computations performed by the neocortex. The Hierarchical Temporal Memory (HTM) is a powerful abstract algorithm implementing this form of computation that has been proposed as a model of neocortical processing. In this study, we reformulate this algorithm in terms of known biological ingredients and mechanisms to foster the verifiability of the HTM hypothesis against electrophysiological and behavioral data. The proposed model learns continuously in an unsupervised manner through biologically plausible local plasticity mechanisms, and successfully predicts and replays complex sequences. Apart from establishing contact with biology, the study sheds light on the mechanisms that determine the speed at which sequences can be processed and provides an explanation of the fast sequence replay observed in the hippocampus and neocortex.
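The defining capability of a temporal memory, distinguishing identical inputs by the context in which they occur (high-order sequences), can be sketched without any neuron model at all. The toy class below mirrors only this functional role of HTM contexts, not the spiking mechanics of the paper:

```python
from collections import defaultdict

class HighOrderMemory:
    """Minimal high-order sequence memory: each element is stored together
    with the full history in which it occurred, so shared subsequences
    such as 'B C' in 'A B C D' and 'X B C Y' remain disambiguated."""
    def __init__(self):
        self.transitions = defaultdict(set)

    def learn(self, sequence):
        for i in range(1, len(sequence)):
            context = tuple(sequence[:i])       # history as explicit context
            self.transitions[context].add(sequence[i])

    def predict(self, history):
        return self.transitions.get(tuple(history), set())

tm = HighOrderMemory()
tm.learn("ABCD")
tm.learn("XBCY")
# First-order overlap ('C' follows 'B' in both) is resolved by context:
tm.predict("ABC")   # -> {'D'}
tm.predict("XBC")   # -> {'Y'}
```

A first-order (Markov) learner would predict {'D', 'Y'} after any 'C'; in the paper this disambiguation is achieved by sequence-specific subnetworks rather than explicit history tuples.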
Affiliation(s)
- Younes Bouhadjar: Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), and JARA BRAIN Institute Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany; Peter Grünberg Institute (PGI-7,10), Jülich Research Centre and JARA, Jülich, Germany; RWTH Aachen University, Aachen, Germany
- Dirk J. Wouters: Institute of Electronic Materials (IWE 2) and JARA-FIT, RWTH Aachen University, Aachen, Germany
- Markus Diesmann: Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), and JARA BRAIN Institute Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany; Department of Physics, Faculty 1, and Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany
- Tom Tetzlaff: Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), and JARA BRAIN Institute Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
6. He L, Caudill MS, Jing J, Wang W, Sun Y, Tang J, Jiang X, Zoghbi HY. A weakened recurrent circuit in the hippocampus of Rett syndrome mice disrupts long-term memory representations. Neuron 2022; 110:1689-1699.e6. PMID: 35290792; PMCID: PMC9930308; DOI: 10.1016/j.neuron.2022.02.014.
Abstract
Successful recall of a contextual memory requires reactivating ensembles of hippocampal cells that were allocated during memory formation. Altering the excitation-to-inhibition (E/I) ratio during memory retrieval can bias cell participation in an ensemble and hinder memory recall. In Rett syndrome (RTT), a neurological disorder with severe learning and memory deficits, the E/I balance is altered, but the source of this imbalance is unknown. Using in vivo imaging during an associative memory task, we show that during long-term memory retrieval, RTT CA1 cells poorly distinguish mnemonic context and form larger ensembles than wild-type cells. Simultaneous multiple whole-cell recordings revealed that mutant somatostatin-expressing (SOM) interneurons are poorly recruited by CA1 pyramidal cells and are less active during long-term memory retrieval in vivo. Chemogenetic manipulation revealed that reduced SOM activity underlies poor long-term memory recall. Our findings reveal a disrupted recurrent CA1 circuit contributing to RTT memory impairment.
Affiliation(s)
- Lingjie He: Department of Molecular and Human Genetics, Baylor College of Medicine, Houston, TX, USA; Jan and Dan Duncan Neurological Research Institute at Texas Children's Hospital, Houston, TX, USA; Howard Hughes Medical Institute, Baylor College of Medicine, Houston, TX, USA
- Matthew S Caudill: Jan and Dan Duncan Neurological Research Institute at Texas Children's Hospital, Houston, TX, USA; Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Junzhan Jing: Jan and Dan Duncan Neurological Research Institute at Texas Children's Hospital, Houston, TX, USA; Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Wei Wang: Department of Molecular and Human Genetics, Baylor College of Medicine, Houston, TX, USA; Jan and Dan Duncan Neurological Research Institute at Texas Children's Hospital, Houston, TX, USA
- Yaling Sun: Jan and Dan Duncan Neurological Research Institute at Texas Children's Hospital, Houston, TX, USA; Howard Hughes Medical Institute, Baylor College of Medicine, Houston, TX, USA
- Jianrong Tang: Jan and Dan Duncan Neurological Research Institute at Texas Children's Hospital, Houston, TX, USA; Department of Pediatrics, Baylor College of Medicine, Houston, TX, USA
- Xiaolong Jiang: Jan and Dan Duncan Neurological Research Institute at Texas Children's Hospital, Houston, TX, USA; Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Huda Y Zoghbi: Department of Molecular and Human Genetics, Baylor College of Medicine, Houston, TX, USA; Jan and Dan Duncan Neurological Research Institute at Texas Children's Hospital, Houston, TX, USA; Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA; Department of Pediatrics, Baylor College of Medicine, Houston, TX, USA; Department of Neurology, Baylor College of Medicine, Houston, TX, USA; Howard Hughes Medical Institute, Baylor College of Medicine, Houston, TX, USA
7. Porrmann F, Pilz S, Stella A, Kleinjohann A, Denker M, Hagemeyer J, Rückert U. Acceleration of the SPADE Method Using a Custom-Tailored FP-Growth Implementation. Front Neuroinform 2021; 15:723406. PMID: 34603002; PMCID: PMC8483730; DOI: 10.3389/fninf.2021.723406.
Abstract
The SPADE (spatio-temporal Spike PAttern Detection and Evaluation) method was developed to find recurring spatio-temporal patterns in neuronal spike activity (parallel spike trains). However, depending on the number of spike trains and the length of the recording, the method can exhibit long runtimes. Based on a realistic benchmark data set, we found that the combination of pattern mining (using the FP-Growth algorithm) and result filtering accounts for 85-90% of the method's total runtime. In this paper, we therefore propose a customized FP-Growth implementation tailored to the requirements of SPADE, which significantly accelerates pattern mining and result filtering. Our version allows for parallel and distributed execution, and the improvements now also make execution on heterogeneous and low-power embedded devices possible. The implementation was evaluated using a traditional workstation based on an Intel Broadwell Xeon E5-1650 v4 as a baseline. Furthermore, the heterogeneous microserver platform RECS|Box was used to evaluate the implementation on two HiSilicon Hi1616 (Kunpeng 916) processors, an Intel Coffee Lake-ER Xeon E-2276ME, an Intel Broadwell Xeon D-D1577, and three NVIDIA Tegra devices (Jetson AGX Xavier, Jetson Xavier NX, and Jetson TX2). Depending on the platform, our implementation is between 27 and 200 times faster than the original. At the same time, energy consumption was reduced by up to two orders of magnitude.
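The mining step being accelerated counts how often the same set of units is co-active across time windows. A brute-force sketch of that computation on toy binned data; FP-Growth produces the same counts far more efficiently, so this is for illustration only:

```python
from itertools import combinations
from collections import Counter

def frequent_patterns(windows, min_support=2, max_size=3):
    """Count how often each set of co-active units recurs across time
    windows; keep only patterns meeting the support threshold."""
    counts = Counter()
    for active_units in windows:
        for k in range(2, max_size + 1):
            for pattern in combinations(sorted(active_units), k):
                counts[pattern] += 1
    return {p: c for p, c in counts.items() if c >= min_support}

# Binned spike data: each window lists the units that fired in it.
windows = [{1, 2, 3}, {1, 2, 3, 5}, {2, 4}, {1, 2, 3}]
frequent = frequent_patterns(windows)
# (1, 2, 3) recurs in three windows and survives the support threshold.
```

SPADE then subjects the surviving candidate patterns to statistical evaluation; the brute-force enumeration above is exactly the part whose combinatorial cost motivates FP-Growth.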
Affiliation(s)
- Florian Porrmann: Cognitronics and Sensor Systems, CITEC, Bielefeld University, Bielefeld, Germany
- Sarah Pilz: Cognitronics and Sensor Systems, CITEC, Bielefeld University, Bielefeld, Germany
- Alessandra Stella: Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Center, Jülich, Germany; RWTH Aachen University, Aachen, Germany
- Alexander Kleinjohann: Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Center, Jülich, Germany; RWTH Aachen University, Aachen, Germany
- Michael Denker: Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Center, Jülich, Germany
- Jens Hagemeyer: Cognitronics and Sensor Systems, CITEC, Bielefeld University, Bielefeld, Germany
- Ulrich Rückert: Cognitronics and Sensor Systems, CITEC, Bielefeld University, Bielefeld, Germany
8. Williams AH, Linderman SW. Statistical neuroscience in the single trial limit. Curr Opin Neurobiol 2021; 70:193-205. PMID: 34861596; DOI: 10.1016/j.conb.2021.10.008.
Abstract
Individual neurons often produce highly variable responses over nominally identical trials, reflecting a mixture of intrinsic 'noise' and systematic changes in the animal's cognitive and behavioral state. Disentangling these sources of variability is of great scientific interest in its own right, but it is also increasingly inescapable as neuroscientists aspire to study more complex and naturalistic animal behaviors. In these settings, behavioral actions never repeat themselves exactly and may rarely do so even approximately. Thus, new statistical methods are needed that extract reliable features of neural activity using few, if any, repeated trials. Accurate statistical modeling in this severely trial-limited regime is challenging, but still possible if simplifying structure in neural data can be exploited. We review recent works that have identified different forms of simplifying structure (including shared gain modulations across neural subpopulations, temporal smoothness in neural firing rates, and correlations in responses across behavioral conditions) and exploited them to reveal novel insights into the trial-by-trial operation of neural circuits.
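Of the forms of simplifying structure mentioned, temporal smoothness is the simplest to exploit: a single trial's firing rate can be estimated by smoothing binned spike counts with a Gaussian kernel rather than averaging over repeats. A minimal sketch with arbitrary, illustrative parameters:

```python
import numpy as np

def smooth_rate(spike_counts, bin_size=0.01, sigma=0.05):
    """Estimate a smooth firing rate (Hz) from one trial by convolving
    binned spike counts with a Gaussian kernel that integrates to 1."""
    half = int(4 * sigma / bin_size)
    t = np.arange(-half, half + 1) * bin_size
    kernel = np.exp(-t**2 / (2 * sigma**2))
    kernel /= kernel.sum() * bin_size          # normalization -> units of Hz
    return np.convolve(spike_counts, kernel, mode="same")

rng = np.random.default_rng(1)
true_rate = 20 + 15 * np.sin(np.linspace(0, 2 * np.pi, 300))   # Hz
counts = rng.poisson(true_rate * 0.01)         # one noisy trial, 10 ms bins
estimate = smooth_rate(counts)
```

The smoothness assumption trades temporal resolution for variance reduction; the reviewed methods make this trade-off explicitly, for example through Gaussian-process priors on latent rates.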
Affiliation(s)
- Alex H Williams: Department of Statistics and Wu Tsai Neurosciences Institute, Stanford University, USA
- Scott W Linderman: Department of Statistics and Wu Tsai Neurosciences Institute, Stanford University, USA
9. Wason TD. A model integrating multiple processes of synchronization and coherence for information instantiation within a cortical area. Biosystems 2021; 205:104403. PMID: 33746019; DOI: 10.1016/j.biosystems.2021.104403.
Abstract
What is the form of dynamic (e.g., sensory) information in the mammalian cortex? Information in the cortex is modeled as a coherence map of a mixed chimera state of synchronous, phasic, and disordered minicolumns. The theoretical model is built on neurophysiological evidence. Complex spatiotemporal information is instantiated through a system of interacting biological processes that generate a synchronized cortical area, a coherent aperture. Minicolumn elements are grouped in macrocolumns in an array analogous to a phased-array radar, modeled as an aperture, a "hole through which radiant energy flows." Coherence maps in a cortical area transform inputs from multiple sources into outputs to multiple targets, while reducing complexity and entropy. Coherent apertures can assume extremely large numbers of different information states as coherence maps, which can be communicated among apertures with correspondingly large bandwidths. The coherent aperture model incorporates considerable reported research, integrating five conceptually and mathematically independent processes: 1) a damped Kuramoto network model, 2) a pumped area field potential, 3) the gating of nearly coincident spikes, 4) the coherence of activity across cortical laminae, and 5) complex information formed through functions in macrocolumns. Biological processes and their interactions are described in equations and a functional circuit such that the mathematical pieces can be assembled the same way the neurophysiological ones are. The model can be conceptually convolved over the specifics of local cortical areas within and across species. A coherent aperture becomes a node in a graph of cortical areas with a corresponding distribution of information.
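The Kuramoto network is the first of the five ingredients; even the classic undamped form (a sketch, not the damped variant the model uses) shows how coupling drives a disordered population toward the kind of synchrony measured by the order parameter r:

```python
import numpy as np

def kuramoto_step(theta, omega, coupling, dt=0.01):
    """One Euler step of the classic Kuramoto model: each phase is
    pulled toward the mean field of all other oscillators."""
    mean_field = np.mean(np.sin(theta[None, :] - theta[:, None]), axis=1)
    return theta + dt * (omega + coupling * mean_field)

def coherence(theta):
    """Order parameter r in [0, 1]: 0 = disordered, 1 = fully synchronous."""
    return abs(np.mean(np.exp(1j * theta)))

rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, 100)        # random initial phases
omega = rng.normal(1.0, 0.1, 100)             # similar natural frequencies
r0 = coherence(theta)                          # near 0 for random phases
for _ in range(2000):
    theta = kuramoto_step(theta, omega, coupling=2.0)
r1 = coherence(theta)                          # strong coupling synchronizes
```

In the paper's terms, a coherence map would assign such a phase state to each minicolumn; the sketch only illustrates how coupling above threshold produces a coherent group.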
Affiliation(s)
- Thomas D Wason: North Carolina State University, Department of Biological Sciences, Meitzen Laboratory, Campus Box 7617, 128 David Clark Labs, Raleigh, NC 27695-7617, USA
10. Williams AH, Degleris A, Wang Y, Linderman SW. Point process models for sequence detection in high-dimensional neural spike trains. Adv Neural Inf Process Syst 2020; 33:14350-14361. PMID: 35002191; PMCID: PMC8734964.
Abstract
Sparse sequences of neural spikes are posited to underlie aspects of working memory [1], motor production [2], and learning [3, 4]. Discovering these sequences in an unsupervised manner is a longstanding problem in statistical neuroscience [5-7]. Promising recent work [4, 8] utilized a convolutive nonnegative matrix factorization model [9] to tackle this challenge. However, this model requires spike times to be discretized, utilizes a sub-optimal least-squares criterion, and does not provide uncertainty estimates for model predictions or estimated parameters. We address each of these shortcomings by developing a point process model that characterizes fine-scale sequences at the level of individual spikes and represents sequence occurrences as a small number of marked events in continuous time. This ultra-sparse representation of sequence events opens new possibilities for spike train modeling. For example, we introduce learnable time warping parameters to model sequences of varying duration, which have been experimentally observed in neural circuits [10]. We demonstrate these advantages on experimental recordings from songbird higher vocal center and rodent hippocampus.
Affiliation(s)
- Alex H Williams: Department of Statistics, Stanford University, Stanford, CA 94305
- Anthony Degleris: Department of Electrical Engineering, Stanford University, Stanford, CA 94305
- Yixin Wang: Department of Statistics, Columbia University, New York, NY 10027
11. Tingley D, Peyrache A. On the methods for reactivation and replay analysis. Philos Trans R Soc Lond B Biol Sci 2020; 375:20190231. PMID: 32248787; DOI: 10.1098/rstb.2019.0231.
Abstract
A major task in the history of neurophysiology has been to relate patterns of neural activity to ongoing external stimuli. More recently, this approach has branched out to relating current neural activity patterns to external stimuli or experiences that occurred in the past or future. Here, we review the large body of methodological approaches used towards this goal and assess the assumptions each makes with reference to the statistics of neural data that are commonly observed. These methods fall primarily into two categories: those that quantify zero-lag relationships without examining temporal evolution, termed reactivation, and those that quantify the temporal structure of changing activity patterns, termed replay. However, no two studies use the exact same approach, which prevents an unbiased comparison between findings. Findings should instead be validated by multiple and, if possible, previously established tests. This will help the community to speak a common language and will eventually provide tools to study, more generally, the organization of neuronal patterns in the brain. This article is part of the Theo Murphy meeting issue 'Memory reactivation: replaying events past, present and future'.
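One widely used zero-lag reactivation measure of the kind reviewed extracts an assembly pattern from a template epoch (here via PCA of the neuron-by-neuron correlation matrix) and projects later activity onto it. This is a sketch of that family of methods with synthetic data; implementations differ across studies:

```python
import numpy as np

def reactivation_strength(template_z, match_z):
    """Zero-lag reactivation: take the leading principal component of the
    z-scored template-epoch activity (neurons x bins) and project the
    match-epoch bins onto it, removing each neuron's own contribution."""
    corr = np.corrcoef(template_z)               # neuron-by-neuron correlations
    eigvals, eigvecs = np.linalg.eigh(corr)
    pc = eigvecs[:, -1]                          # leading assembly pattern
    projector = np.outer(pc, pc)
    np.fill_diagonal(projector, 0)               # drop single-neuron terms
    return np.einsum("it,ij,jt->t", match_z, projector, match_z)

rng = np.random.default_rng(3)
z = rng.normal(size=(20, 500))
z[:5] += rng.normal(size=500)                    # 5 co-fluctuating neurons
z = (z - z.mean(axis=1, keepdims=True)) / z.std(axis=1, keepdims=True)
strength = reactivation_strength(z, z)           # same epoch, for illustration
```

In a real analysis the template and match epochs differ (e.g. task vs. sleep), and the paper's point is precisely that such choices, and the null distributions used, should be standardized and cross-validated.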
Affiliation(s)
- David Tingley: Neuroscience Institute, New York University, New York, NY, USA
- Adrien Peyrache: Montreal Neurological Institute, McGill University, Montreal, QC, Canada
12. Unakafova VA, Gail A. Comparing Open-Source Toolboxes for Processing and Analysis of Spike and Local Field Potentials Data. Front Neuroinform 2019; 13:57. PMID: 31417389; PMCID: PMC6682703; DOI: 10.3389/fninf.2019.00057.
Abstract
Analysis of spike and local field potential (LFP) data is an essential part of neuroscientific research. Many open-source toolboxes now exist for spike and LFP data analysis, implementing various functionality. Here we aim to provide practical guidance for neuroscientists in choosing the open-source toolbox that best satisfies their needs. We review major open-source toolboxes for spike and LFP data analysis, as well as toolboxes with tools for connectivity analysis, dimensionality reduction, and generalized linear modeling. We focus on comparing the toolboxes' functionality, statistical and visualization tools, documentation, and support quality. To give better insight, we compare and illustrate the functionality of the toolboxes on an open-access dataset and simulated data, and make the corresponding MATLAB scripts publicly available.
Affiliation(s)
- Alexander Gail: Cognitive Neurosciences Laboratory, German Primate Center, Göttingen, Germany; Primate Cognition, Göttingen, Germany; Georg-Elias-Mueller-Institute of Psychology, University of Goettingen, Göttingen, Germany; Bernstein Center for Computational Neuroscience, Göttingen, Germany
13. Mackevicius EL, Bahle AH, Williams AH, Gu S, Denisenko NI, Goldman MS, Fee MS. Unsupervised discovery of temporal sequences in high-dimensional datasets, with applications to neuroscience. eLife 2019; 8:e38471. PMID: 30719973; PMCID: PMC6363393; DOI: 10.7554/eLife.38471.
Abstract
Identifying low-dimensional features that describe large-scale neural recordings is a major challenge in neuroscience. Repeated temporal patterns (sequences) are thought to be a salient feature of neural dynamics, but they are not succinctly captured by traditional dimensionality reduction techniques. Here, we describe a software toolbox, called seqNMF, with new methods for extracting informative, non-redundant sequences from high-dimensional neural data, testing the significance of the extracted patterns, and assessing the prevalence of sequential structure in data. We test these methods on simulated data under multiple noise conditions and on several real neural and behavioral data sets. In hippocampal data, seqNMF identifies neural sequences that match those calculated manually by reference to behavioral events. In songbird data, seqNMF discovers neural sequences in untutored birds that lack stereotyped songs. Thus, by identifying temporal structure directly from neural data, seqNMF enables dissection of complex neural circuits without relying on temporal references from stimuli or behavioral outputs.
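seqNMF is built on the convolutive NMF model, in which the data matrix is approximated by spatiotemporal templates convolved with activation time courses. The sketch below implements only the reconstruction operator of that model, not seqNMF's penalized fitting procedure:

```python
import numpy as np

def conv_reconstruct(W, H):
    """Convolutive NMF reconstruction: each factor's N x L spatiotemporal
    template W[k] is slid along the timeline, weighted by its activation
    trace H[k].  W: (K, N, L), H: (K, T) -> X_hat: (N, T)."""
    K, N, L = W.shape
    T = H.shape[1]
    X_hat = np.zeros((N, T))
    for k in range(K):
        for lag in range(L):
            X_hat[:, lag:] += np.outer(W[k, :, lag], H[k, : T - lag])
    return X_hat

# One factor encoding a 3-neuron sequence, activated at t = 2 and t = 7.
W = np.zeros((1, 3, 3))
W[0, 0, 0] = W[0, 1, 1] = W[0, 2, 2] = 1.0    # neuron i fires at lag i
H = np.zeros((1, 12))
H[0, [2, 7]] = 1.0
X = conv_reconstruct(W, H)                     # the sequence appears twice
```

Fitting W and H from data by minimizing the reconstruction error is what standard convolutive NMF does; seqNMF adds a cross-factor penalty so that a sequence is not redundantly split across several factors.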
Affiliation(s)
- Emily L Mackevicius
- McGovern Institute for Brain Research, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
| | - Andrew H Bahle
- McGovern Institute for Brain Research, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
| | - Alex H Williams
- Neurosciences Program, Stanford University, Stanford, United States
| | - Shijie Gu
- McGovern Institute for Brain Research, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States; School of Life Sciences and Technology, ShanghaiTech University, Shanghai, China
| | - Natalia I Denisenko
- McGovern Institute for Brain Research, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
| | - Mark S Goldman
- Center for Neuroscience, Department of Neurobiology, Physiology and Behavior, University of California, Davis, Davis, United States; Department of Ophthalmology and Vision Science, University of California, Davis, Davis, United States
| | - Michale S Fee
- McGovern Institute for Brain Research, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
14
Gutzen R, von Papen M, Trensch G, Quaglio P, Grün S, Denker M. Reproducible Neural Network Simulations: Statistical Methods for Model Validation on the Level of Network Activity Data. Front Neuroinform 2018; 12:90. [PMID: 30618696 PMCID: PMC6305903 DOI: 10.3389/fninf.2018.00090] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Received: 06/18/2018] [Accepted: 11/14/2018] [Indexed: 11/13/2022] Open
Abstract
Computational neuroscience relies on simulations of neural network models to bridge the gap between the theory of neural networks and the experimentally observed activity dynamics in the brain. The rigorous validation of simulation results against reference data is thus an indispensable part of any simulation workflow. Moreover, the availability of different simulation environments and levels of model description also requires validating model implementations against each other to evaluate their equivalence. Despite rapid advances in the formalized description of models, data, and analysis workflows, there is no accepted consensus regarding the terminology and practical implementation of validation workflows in the context of neural simulations. This situation prevents the generic, unbiased comparison between published models, which is a key element of enhancing reproducibility of computational research in neuroscience. In this study, we argue for the establishment of standardized statistical test metrics that enable the quantitative validation of network models on the level of the population dynamics. Although validating the elementary components of a simulation, such as single-cell dynamics, is important, building networks from validated building blocks does not entail the validity of the simulation on the network scale. Therefore, we introduce a corresponding set of validation tests and present an example workflow that practically demonstrates the iterative model validation of a spiking neural network model against its reproduction on the SpiNNaker neuromorphic hardware system. We formally implement the workflow using a generic Python library that we introduce for validation tests on neural network activity data. Together with the companion study (Trensch et al., 2018), the work presents a consistent definition, formalization, and implementation of the verification and validation process for neural network simulations.
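The core idea of a statistical test metric on population dynamics can be sketched generically with SciPy; the following two-sample test on firing-rate distributions is a hypothetical stand-in for illustration, not the paper's library or workflow:

```python
import numpy as np
from scipy.stats import ks_2samp

def firing_rates(spike_trains, duration):
    """Per-unit firing rate (spikes/second) from arrays of spike times."""
    return np.array([len(st) / duration for st in spike_trains])

rng = np.random.default_rng(0)
duration = 10.0  # seconds of simulated activity

# Stand-ins for two implementations of the same network model:
# 100 units each, spike counts drawn from the same Poisson statistics.
sim_a = [rng.uniform(0, duration, size=rng.poisson(50)) for _ in range(100)]
sim_b = [rng.uniform(0, duration, size=rng.poisson(50)) for _ in range(100)]

# Kolmogorov-Smirnov two-sample test: are the two implementations'
# firing-rate distributions statistically indistinguishable?
stat, p = ks_2samp(firing_rates(sim_a, duration),
                   firing_rates(sim_b, duration))
print(f"KS statistic: {stat:.3f}, p = {p:.3f}")
```

A small KS statistic (large p) is consistent with the two implementations being equivalent at the level of this population statistic; the paper's point is that a validated workflow needs a whole battery of such tests, not a single metric.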
Affiliation(s)
- Robin Gutzen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany; Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany
| | - Michael von Papen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
| | - Guido Trensch
- Simulation Lab Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, JARA, Jülich Research Centre, Jülich, Germany
| | - Pietro Quaglio
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany; Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany
| | - Sonja Grün
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany; Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany
| | - Michael Denker
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany