1. Jha A, Ashwood ZC, Pillow JW. Active Learning for Discrete Latent Variable Models. Neural Comput 2024; 36:437-474. PMID: 38363661. DOI: 10.1162/neco_a_01646.
Abstract
Active learning seeks to reduce the amount of data required to fit the parameters of a model, thus forming an important class of techniques in modern machine learning. However, past work on active learning has largely overlooked latent variable models, which play a vital role in neuroscience, psychology, and a variety of other engineering and scientific disciplines. Here we address this gap by proposing a novel framework for maximum-mutual-information input selection for discrete latent variable regression models. We first apply our method to a class of models known as mixtures of linear regressions (MLR). While it is well known that active learning confers no advantage for linear-Gaussian regression models, we use Fisher information to show analytically that active learning can nevertheless achieve large gains for mixtures of such models, and we validate this improvement using both simulations and real-world data. We then consider a powerful class of temporally structured latent variable models given by a hidden Markov model (HMM) with generalized linear model (GLM) observations, which has recently been used to identify discrete states from animal decision-making data. We show that our method substantially reduces the amount of data needed to fit GLM-HMMs and outperforms a variety of approximate methods based on variational and amortized inference. Infomax learning for latent variable models thus offers a powerful approach for characterizing temporally structured latent states, with a wide variety of applications in neuroscience and beyond.
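The infomax idea in this abstract can be sketched with a toy discrete approximation: maintain a posterior over a small grid of mixture-of-linear-regressions hypotheses and pick the input that maximizes the mutual information between the next (binned) observation and the hypothesis. Everything below (the hypothesis grid, mixture weights, noise level, candidate inputs) is invented for illustration; this is not the authors' code.

```python
import math

# Toy infomax loop for a two-component mixture of 1-D linear regressions
# y = w_z * x + noise (z is the latent component).
W_GRID = [(-1.0, 1.0), (-1.0, 2.0), (0.5, 1.0), (0.5, 2.0)]  # (w_1, w_2) pairs
PRIOR = [0.25, 0.25, 0.25, 0.25]   # posterior over hypotheses (uniform here)
MIX = (0.5, 0.5)                   # mixture weights, assumed known
SIGMA = 0.5                        # observation noise, assumed known
Y_BINS = [-4.0 + 0.25 * i for i in range(33)]  # coarse discretization of y

def gauss(y, mu):
    return math.exp(-0.5 * ((y - mu) / SIGMA) ** 2)

def likelihood_row(w_pair, x):
    """Binned p(y | theta, x): a two-component Gaussian mixture, normalized."""
    row = [sum(p * gauss(y, w * x) for p, w in zip(MIX, w_pair)) for y in Y_BINS]
    z = sum(row)
    return [r / z for r in row]

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def mutual_information(x, posterior):
    """I(y; theta | x), computed on the discretized y grid."""
    rows = [likelihood_row(w, x) for w in W_GRID]
    marginal = [sum(p * row[i] for p, row in zip(posterior, rows))
                for i in range(len(Y_BINS))]
    return entropy(marginal) - sum(p * entropy(row)
                                   for p, row in zip(posterior, rows))

# Infomax selection: score each candidate input, present the most informative.
candidates = [0.0, 0.5, 1.0, 2.0]
best_x = max(candidates, key=lambda x: mutual_information(x, PRIOR))
```

At x = 0 every hypothesis predicts the same response distribution, so the mutual information is exactly zero; informative inputs are those that separate the component regressions. For a single linear-Gaussian model the information an input provides does not depend on the current parameter estimate, so there is nothing to adapt; in the mixture, which inputs are informative changes with the posterior, which is where active learning gains traction.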
Affiliation(s)
- Aditi Jha: Princeton Neuroscience Institute and Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, U.S.A.
- Zoe C Ashwood: Princeton Neuroscience Institute and Department of Computer Science, Princeton University, Princeton, NJ 08544, U.S.A.
- Jonathan W Pillow: Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, U.S.A.
2. Tuckute G, Sathe A, Srikant S, Taliaferro M, Wang M, Schrimpf M, Kay K, Fedorenko E. Driving and suppressing the human language network using large language models. bioRxiv 2023:2023.04.16.537080. PMID: 37090673. PMCID: PMC10120732. DOI: 10.1101/2023.04.16.537080.
Abstract
Transformer models such as GPT generate human-like language and are highly predictive of human brain responses to language. Here, using fMRI-measured brain responses to 1,000 diverse sentences, we first show that a GPT-based encoding model can predict the magnitude of brain response associated with each sentence. Then, we use the model to identify new sentences that are predicted to drive or suppress responses in the human language network. We show that these model-selected novel sentences indeed strongly drive and suppress activity of human language areas in new individuals. A systematic analysis of the model-selected sentences reveals that surprisal and well-formedness of linguistic input are key determinants of response strength in the language network. These results establish the ability of neural network models to not only mimic human language but also noninvasively control neural activity in higher-level cortical areas, like the language network.
Affiliation(s)
- Greta Tuckute: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- Aalok Sathe: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- Shashank Srikant: Computer Science & Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139 USA; MIT-IBM Watson AI Lab, Cambridge, MA 02142, USA
- Maya Taliaferro: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- Mingye Wang: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139 USA
- Martin Schrimpf: McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139 USA; Quest for Intelligence, Massachusetts Institute of Technology, Cambridge, MA 02139 USA; Neuro-X Institute, École Polytechnique Fédérale de Lausanne, CH-1015 Lausanne, Switzerland
- Kendrick Kay: Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN 55455 USA
- Evelina Fedorenko: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139 USA; The Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA 02138 USA
3. Gomez-Villa A, Martín A, Vazquez-Corral J, Bertalmío M, Malo J. On the synthesis of visual illusions using deep generative models. J Vis 2022; 22:2. PMID: 35833884. PMCID: PMC9290318. DOI: 10.1167/jov.22.8.2.
Abstract
Visual illusions expand our understanding of the visual system by imposing constraints on models in two different ways: (i) visual illusions for humans should induce equivalent illusions in the model, and (ii) illusions synthesized from the model should be compelling for human viewers too. These constraints offer alternative strategies for finding good vision models. Following the first strategy, recent studies have shown that artificial neural network architectures also produce human-like illusory percepts when stimulated with classical hand-crafted stimuli designed to fool humans. In this work we focus on the second (less explored) strategy: we propose a framework to synthesize new visual illusions using the optimization abilities of current automatic differentiation techniques. The proposed framework can be used with classical vision models as well as with more recent artificial neural network architectures. This framework, validated by psychophysical experiments, can be used to study the differences between a vision model and actual human perception and to optimize the vision model to decrease these differences.
Affiliation(s)
- Alex Gomez-Villa: Computer Vision Center, Universitat Autónoma de Barcelona, Barcelona, Spain
- Adrián Martín: Department of Information and Communications Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Javier Vazquez-Corral: Computer Science Department, Universitat Autónoma de Barcelona and Computer Vision Center, Barcelona, Spain
- Jesús Malo: Image Processing Lab, Faculty of Physics, Universitat de Valéncia, Spain
4. Ouyang G, Dien J, Lorenz R. Handling EEG artifacts and searching individually optimal experimental parameter in real time: a system development and demonstration. J Neural Eng 2022; 19. PMID: 34902847. DOI: 10.1088/1741-2552/ac42b6.
Abstract
Objective. Neuroadaptive paradigms that systematically assess event-related potential (ERP) features across many different experimental parameters have the potential to improve the generalizability of ERP findings and may help accelerate ERP-based biomarker discovery by identifying the exact experimental conditions for which ERPs differ most in a given clinical population. Obtaining robust and reliable ERPs online is a prerequisite for ERP-based neuroadaptive research. One key step is correctly isolating electroencephalography artifacts in real time, because they contribute a large amount of variance that, if not removed, will greatly distort the ERP. Another key concern is the computational cost of the online artifact-handling method. This work aims to develop and validate a cost-efficient system to support ERP-based neuroadaptive research. Approach. We developed a simple online artifact-handling method, single-trial PCA-based artifact removal (SPA), based on a variance-distribution dichotomy to distinguish artifacts from neural activity. We then applied this method in an ERP-based neuroadaptive paradigm in which Bayesian optimization was used to search for the individually optimal inter-stimulus interval (ISI), i.e., the ISI that generates the ERP with the highest signal-to-noise ratio. Main results. SPA was compared to other offline and online algorithms. The results showed that SPA performed well in both computational efficiency and preservation of the ERP pattern. Based on SPA, the Bayesian optimization procedure was able to quickly find the individually optimal ISI. Significance. The current work presents a simple yet highly cost-efficient method, validated in its ability to extract ERPs, preserve ERP effects, and better support ERP-based neuroadaptive paradigms.
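The variance-dichotomy intuition behind SPA (in a single trial, artifact components carry a disproportionate share of the variance) can be sketched with ordinary PCA. This is a schematic reimplementation under assumed names and thresholds, not the published SPA code:

```python
import numpy as np

def pca_artifact_removal(trial, var_threshold=0.5):
    """Schematic single-trial cleanup: principal components whose variance
    share exceeds `var_threshold` are treated as artifacts and discarded.
    Returns the mean-removed, reconstructed trial (channels x samples)."""
    centered = trial - trial.mean(axis=1, keepdims=True)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    var_share = s ** 2 / np.sum(s ** 2)
    keep = var_share < var_threshold   # artifact components dominate variance
    return (u[:, keep] * s[keep]) @ vt[keep]

# Toy trial: 8 channels of a weak 10 Hz signal plus one huge blink-like
# artifact confined to the middle of the epoch.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
signal = np.sin(2 * np.pi * 10 * t) * rng.standard_normal((8, 1))
artifact = 50.0 * np.exp(-((t - 0.5) ** 2) / 0.001) * rng.standard_normal((8, 1))
trial = signal + artifact
cleaned = pca_artifact_removal(trial)
```

On this toy trial the artifact occupies a single dominant principal component, so one SVD (cheap enough for real-time use on a modest channel count) removes it while leaving the low-variance signal subspace intact.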
Affiliation(s)
- Guang Ouyang: Faculty of Education, The University of Hong Kong, Hong Kong, People's Republic of China
- Joseph Dien: Department of Human Development and Quantitative Methodology, University of Maryland, College Park, MD, United States of America
- Romy Lorenz: MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom; Department of Neurophysics, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Department of Psychology, Stanford University, Stanford, CA, United States of America
5. Draelos A, Gupta P, Jun NY, Sriworarat C, Pearson J. Bubblewrap: Online tiling and real-time flow prediction on neural manifolds. Adv Neural Inf Process Syst 2021; 34:6062-6074. PMID: 35785106. PMCID: PMC9247712.
Abstract
While most classic studies of function in experimental neuroscience have focused on the coding properties of individual neurons, recent developments in recording technologies have resulted in an increasing emphasis on the dynamics of neural populations. This has given rise to a wide variety of models for analyzing population activity in relation to experimental variables, but direct testing of many neural population hypotheses requires intervening in the system based on current neural state, necessitating models capable of inferring neural state online. Existing approaches, primarily based on dynamical systems, require strong parametric assumptions that are easily violated in the noise-dominated regime and do not scale well to the thousands of data channels in modern experiments. To address this problem, we propose a method that combines fast, stable dimensionality reduction with a soft tiling of the resulting neural manifold, allowing dynamics to be approximated as a probability flow between tiles. This method can be fit efficiently using online expectation maximization, scales to tens of thousands of tiles, and outperforms existing methods when dynamics are noise-dominated or feature multi-modal transition probabilities. The resulting model can be trained at kilohertz data rates, produces accurate approximations of neural dynamics within minutes, and generates predictions on submillisecond time scales. It retains predictive performance throughout many time steps into the future and is fast enough to serve as a component of closed-loop causal experiments.
Affiliation(s)
- John Pearson: Biostatistics & Bioinformatics, Electrical & Computer Engineering, Neurobiology, Psychology & Neuroscience, Duke University
6. Estimating the Parameters of Fitzhugh-Nagumo Neurons from Neural Spiking Data. Brain Sci 2019; 9:brainsci9120364. PMID: 31835351. PMCID: PMC6956007. DOI: 10.3390/brainsci9120364.
Abstract
A theoretical and computational study on the estimation of the parameters of a single FitzHugh-Nagumo model is presented. Unlike conventional system identification, the measured data consist only of discrete and noisy neural spiking (spike-time) data, which carry no amplitude information. The goal is achieved by applying a maximum likelihood estimation approach in which the likelihood function is derived from point-process statistics. The neuron's firing rate is related to the membrane potential variable through a nonlinear map (a logistic sigmoid). The stimulus data were generated by a phased cosine Fourier series with fixed amplitude and frequency but a phase drawn at random on each repeated trial. Various values of amplitude, stimulus component size, and sample size were applied to examine the effect of the stimulus on the identification process. Results are presented in tabular and graphical forms, including statistical analysis (mean and standard deviation of the estimates). We also tested the model on realistic data from previous research (H1 neurons of blowflies) and found that the estimates tend to converge.
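A stripped-down version of the pipeline this abstract describes: integrate a FitzHugh-Nagumo neuron, map voltage to firing rate through a logistic sigmoid, generate binned spikes, and score candidate parameters with the Poisson point-process log-likelihood. The parameter values, link gain, and search grid are invented for the sketch and are not the paper's settings.

```python
import math
import random

def fn_voltage(a, i_ext, dt=0.01, steps=2000):
    """Euler-integrate a FitzHugh-Nagumo neuron; return the voltage trace.
    (Illustrative parameterization.)"""
    v, w, trace = -1.0, 1.0, []
    for _ in range(steps):
        dv = v - v ** 3 / 3 - w + i_ext
        dw = a * (v + 0.7 - 0.8 * w)
        v, w = v + dt * dv, w + dt * dw
        trace.append(v)
    return trace

def rate(v, lam_max=20.0, gain=4.0):
    """Logistic-sigmoid link from membrane potential to firing rate (Hz)."""
    return lam_max / (1.0 + math.exp(-gain * v))

def log_likelihood(spikes, voltage, dt=0.01):
    """Poisson point-process log-likelihood of binned 0/1 spikes."""
    ll = 0.0
    for s, v in zip(spikes, voltage):
        lam_dt = rate(v) * dt
        ll += s * math.log(lam_dt) - lam_dt
    return ll

# Simulate spikes from a "true" neuron, then score a grid of candidate
# parameter values by their point-process likelihood (grid-search MLE).
random.seed(1)
true_a = 0.08
v_true = fn_voltage(true_a, i_ext=0.5)
spikes = [1 if random.random() < rate(v) * 0.01 else 0 for v in v_true]

grid = [0.02, 0.05, 0.08, 0.12, 0.2]
a_hat = max(grid, key=lambda a: log_likelihood(spikes, fn_voltage(a, i_ext=0.5)))
```

Because the observations are spike times rather than voltages, the likelihood (sum of log-rates at spike times minus the integrated rate) is all the data can support; no amplitude information enters the fit.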
7. Vila CH, Williamson RS, Hancock KE, Polley DB. Optimizing optogenetic stimulation protocols in auditory corticofugal neurons based on closed-loop spike feedback. J Neural Eng 2019; 16:066023. PMID: 31394519. PMCID: PMC6956656. DOI: 10.1088/1741-2552/ab39cf.
Abstract
OBJECTIVE Optogenetics provides a means to probe functional connections between brain areas. By activating a set of presynaptic neurons and recording the activity from a downstream brain area, one can establish the sign and strength of a feedforward connection. One challenge is that there are virtually limitless patterns that can be used to stimulate a presynaptic brain area. Functional influences on downstream brain areas can depend not just on whether presynaptic neurons were activated, but how they were activated. Corticofugal axons from the auditory cortex (ACtx) heavily innervate the auditory tectum, the inferior colliculus (IC). Here, we sought to determine whether different modes of corticocollicular activation could titrate the strength of feedforward modulation of sound processing in IC neurons. APPROACH We used multi-channel electrophysiology and optogenetics to record from multiple regions of the IC in awake head-fixed mice while optogenetically stimulating ACtx neurons expressing Chronos, an ultra-fast channelrhodopsin. To identify cortical activation patterns associated with the strongest effects on IC firing rates, we employed a closed-loop evolutionary optimization procedure that tailored the voltage command signal sent to the laser based on spike feedback from single IC neurons. MAIN RESULTS Within minutes, our evolutionary search procedure converged on ACtx stimulation configurations that produced more effective and widespread enhancement of IC unit activity than generic activation parameters. Cortical modulation of midbrain spiking was bi-directional, as the evolutionary search procedure could be programmed to converge on activation patterns that either suppressed or enhanced sound-evoked IC firing rate. SIGNIFICANCE This study introduces a closed-loop optimization procedure to probe functional connections between brain areas. Our findings demonstrate that the influence of descending feedback projections on subcortical sensory processing can vary both in sign and degree depending on how cortical neurons are activated in time.
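The closed-loop search itself is generic, so its skeleton can be sketched without any physiology: treat the evoked firing rate as a noisy black box and run a simple mutate-and-select loop. The "neuron" below is synthetic and every constant is invented; the real procedure optimized laser command waveforms against live spike feedback.

```python
import math
import random

random.seed(7)

def evoked_rate(pattern):
    """Synthetic stand-in for spike feedback: a noisy response that peaks
    when the stimulation-parameter vector is near a hidden optimum."""
    optimum = [0.8, 0.2, 0.5, 0.9]
    dist = sum((p - o) ** 2 for p, o in zip(pattern, optimum))
    return 30.0 * math.exp(-dist) + random.gauss(0.0, 0.5)

def mutate(pattern, scale=0.1):
    """Jitter each waveform parameter, clipped to the valid [0, 1] range."""
    return [min(1.0, max(0.0, p + random.gauss(0.0, scale))) for p in pattern]

def evolve(generations=60, offspring=8):
    """Closed-loop evolutionary search: each generation, mutate the current
    best pattern, 'stimulate', and keep whichever variant evokes most spikes."""
    best = [random.random() for _ in range(4)]
    best_rate = evoked_rate(best)
    for _ in range(generations):
        for cand in (mutate(best) for _ in range(offspring)):
            r = evoked_rate(cand)
            if r > best_rate:
                best, best_rate = cand, r
        # Re-measure the incumbent so a lucky noise sample cannot lock in.
        best_rate = 0.5 * (best_rate + evoked_rate(best))
    return best, best_rate

best_pattern, best_rate = evolve()
```

The same loop works whether the objective is to enhance or suppress the downstream rate; flipping the comparison to `r < best_rate` searches for suppressive patterns instead, mirroring the bidirectional result reported above.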
Affiliation(s)
- Charles-Henri Vila: Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston MA 02114 USA; Bertarelli Fellows Program, Ecole Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland
- Ross S Williamson: Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston MA 02114 USA; Department of Otolaryngology, Harvard Medical School, Boston MA 02114
- Kenneth E Hancock: Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston MA 02114 USA; Department of Otolaryngology, Harvard Medical School, Boston MA 02114
- Daniel B Polley: Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston MA 02114 USA; Department of Otolaryngology, Harvard Medical School, Boston MA 02114
8.
Abstract
Behavioral testing in perceptual or cognitive domains requires querying a subject multiple times in order to quantify his or her ability in the corresponding domain. These queries must be conducted sequentially, and any additional testing domains are also typically tested sequentially, such as with distinct tests comprising a test battery. As a result, existing behavioral tests are often lengthy and do not offer comprehensive evaluation. The use of active machine-learning kernel methods for behavioral assessment provides extremely flexible yet efficient estimation tools to more thoroughly investigate perceptual or cognitive processes without incurring the penalty of excessive testing time. Audiometry represents perhaps the simplest test case to demonstrate the utility of these techniques. In pure-tone audiometry, hearing is assessed in the two-dimensional input space of frequency and intensity, and the test is repeated for both ears. Although an individual's ears are not linked physiologically, they share many features in common that lead to correlations suitable for exploitation in testing. The bilateral audiogram estimates hearing thresholds in both ears simultaneously by conjoining their separate input domains into a single search space, which can be evaluated efficiently with modern machine-learning methods. The result is the introduction of the first conjoint psychometric function estimation procedure, which consistently delivers accurate results in significantly less time than sequential disjoint estimators.
9. Adesnik H, Naka A. Cracking the Function of Layers in the Sensory Cortex. Neuron 2019; 100:1028-1043. PMID: 30521778. DOI: 10.1016/j.neuron.2018.10.032.
Abstract
Understanding how cortical activity generates sensory perceptions requires a detailed dissection of the function of cortical layers. Despite our relatively extensive knowledge of their anatomy and wiring, we have a limited grasp of what each layer contributes to cortical computation. We need to develop a theory of cortical function that is rooted solidly in each layer's component cell types and fine circuit architecture and produces predictions that can be validated by specific perturbations. Here we briefly review the progress toward such a theory and suggest an experimental road map toward this goal. We discuss new methods for the all-optical interrogation of cortical layers, for correlating in vivo function with precise identification of transcriptional cell type, and for mapping local and long-range activity in vivo with synaptic resolution. The new technologies that can crack the function of cortical layers are finally on the immediate horizon.
Affiliation(s)
- Hillel Adesnik: Department of Molecular and Cell Biology, University of California, Berkeley, Berkeley, CA, USA; The Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA
- Alexander Naka: Department of Molecular and Cell Biology, University of California, Berkeley, Berkeley, CA, USA; The Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA
10. Doruk RO, Zhang K. Adaptive Stimulus Design for Dynamic Recurrent Neural Network Models. Front Neural Circuits 2019; 12:119. PMID: 30723397. PMCID: PMC6349832. DOI: 10.3389/fncir.2018.00119.
Abstract
We present an adaptive stimulus design method for efficiently estimating the parameters of a dynamic recurrent network model with interacting excitatory and inhibitory neuronal populations. Although stimuli that are optimized for model parameter estimation should, in theory, have advantages over nonadaptive random stimuli, in practice it remains unclear in what way and to what extent an optimal design of time-varying stimuli may actually improve parameter estimation for this common type of recurrent network model. Here we specified the time course of each stimulus by a Fourier series whose amplitudes and phases were determined by maximizing a utility function based on the Fisher information matrix. To facilitate the optimization process, we derived differential equations that govern the time evolution of the gradients of the utility function with respect to the stimulus parameters. The network parameters were estimated by maximum likelihood from spike train data generated by an inhomogeneous Poisson process from the continuous network state. The adaptive design process was repeated in a closed loop, alternating between optimal stimulus design and parameter estimation from the updated stimulus-response data. Our results confirmed that, compared with random stimuli, optimally designed stimuli elicited responses with significantly better likelihood values for parameter estimation. Furthermore, all individual parameters, including the time constants and the connection weights, were recovered more accurately by the optimal design method. We also examined how the errors of different parameter estimates were correlated, and proposed heuristic formulas to account for the correlation patterns by an approximate parameter-confounding theory. Our results suggest that although adaptive optimal stimulus design incurs considerable computational cost even for the simplest excitatory-inhibitory recurrent network model, it may potentially help save time in experiments by reducing the number of stimuli needed for network parameter estimation.
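For a fixed stimulus duration, the Fisher-information criterion in this abstract amounts to scoring candidate stimulus parameters by the determinant of the information matrix and picking the best (D-optimality). Below is a heavily simplified sketch using a Poisson GLM surrogate rather than the paper's recurrent network; the frequencies, phase grid, and weight estimate are all invented.

```python
import numpy as np

def design_matrix(phases, freqs=(1.0, 2.0), dt=0.05, T=200):
    """Stimulus regressors: a phased cosine series, one column per
    component plus a bias term. (Illustrative parameterization.)"""
    t = np.arange(T) * dt
    cols = [np.cos(2 * np.pi * f * t + ph) for f, ph in zip(freqs, phases)]
    return np.column_stack([np.ones(T)] + cols)

def fisher_logdet(phases, weights):
    """log-determinant of the Fisher information for a Poisson GLM with a
    log link: J = sum_t lambda_t x_t x_t^T at the current weight estimate."""
    X = design_matrix(phases)
    lam = np.exp(X @ weights)          # per-bin rate under the current model
    J = X.T @ (X * lam[:, None])
    return np.linalg.slogdet(J)[1]

# Greedy D-optimal choice: score a grid of candidate phase pairs and pick
# the one that maximizes the expected information about the weights.
w_current = np.array([0.5, 0.3, -0.2])   # running parameter estimate (made up)
grid = [(p1, p2) for p1 in np.linspace(0, np.pi, 5)
                 for p2 in np.linspace(0, np.pi, 5)]
best_phases = max(grid, key=lambda ph: fisher_logdet(ph, w_current))
```

In the paper the utility and its gradients are propagated through the network dynamics rather than grid-searched; the point of the sketch is only the criterion itself: candidate stimuli are ranked by how well-conditioned they make the information matrix at the current estimate, which is also why the procedure must alternate with re-estimation in a closed loop.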
Affiliation(s)
- R. Ozgur Doruk: Department of Electrical and Electronic Engineering, Atilim University, Golbasi, Turkey; Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, United States
- Kechen Zhang: Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, United States
11. Doruk RO, Zhang K. Fitting of dynamic recurrent neural network models to sensory stimulus-response data. J Biol Phys 2018; 44:449-469. PMID: 29860641. PMCID: PMC6082798. DOI: 10.1007/s10867-018-9501-z.
Abstract
We present a theoretical study of model fitting for sensory neurons. Conventional neural network training approaches are not applicable to this problem due to the lack of continuous data. Although the stimulus can be considered a smooth time-dependent variable, the associated response is a set of neural spike timings (roughly the instants of successive action potential peaks) that carry no amplitude information. A recurrent neural network model can be fitted to such a stimulus-response pair by maximum likelihood estimation, where the likelihood function is derived from the Poisson statistics of neural spiking. The universal approximation property of recurrent dynamical neuron network models allows us to describe the excitatory-inhibitory characteristics of an actual sensory neural network with any desired number of neurons. The stimulus data are generated by a phased cosine Fourier series having a fixed amplitude and frequency but a phase drawn at random on each trial. Various values of amplitude, stimulus component size, and sample size are applied in order to examine the effect of the stimulus on the identification process. Results are presented in tabular and graphical forms. In addition, to demonstrate the merits of this approach, we compare our results with those of a study involving the same model, nominal parameters, and stimulus structure, and with another study based on different models.
Affiliation(s)
- R Ozgur Doruk: Department of Electrical and Electronics Engineering, Atilim University, Kizilcasar Mahallesi, Incek, Golbasi, Ankara, 06836, Turkey
- Kechen Zhang: Department of Biomedical Engineering, Johns Hopkins School of Medicine, 720 Rutland Avenue, Traylor 407, Baltimore, MD, 21205, USA
12. Seu GP, Angotzi GN, Boi F, Raffo L, Berdondini L, Meloni P. Exploiting All Programmable SoCs in Neural Signal Analysis: A Closed-Loop Control for Large-Scale CMOS Multielectrode Arrays. IEEE Trans Biomed Circuits Syst 2018; 12:839-850. PMID: 29993584. DOI: 10.1109/tbcas.2018.2830659.
Abstract
Microelectrode array (MEA) systems with up to several thousands of recording electrodes and electrical or optical stimulation capabilities are commercially available or described in the literature. By exploiting their submillisecond temporal and micrometric spatial resolution to record bioelectrical signals, such emerging MEA systems are increasingly used in neuroscience to study the complex dynamics of neuronal networks and brain circuits. However, they typically lack the capability of implementing real-time feedback between the detection of neuronal spiking events and stimulation, thus restricting large-scale neural interfacing to open-loop conditions. In order to exploit the potential of such large-scale recording and stimulation systems, we designed and validated a fully reconfigurable FPGA-based processing system for closed-loop multichannel control. By adopting a Xilinx Zynq all-programmable system on chip that integrates reconfigurable logic and a dual-core ARM-based processor on the same device, the proposed platform permits low-latency preprocessing (filtering and detection) of spikes acquired simultaneously from several thousands of electrode sites. To demonstrate the platform, we tested its performance in ex vivo experiments on mouse retina using a state-of-the-art planar high-density MEA that samples 4096 electrodes at 18 kHz, recording light-evoked spikes from several thousands of retinal ganglion cells simultaneously. Results demonstrate that the platform provides a total latency from whole-array data acquisition to stimulus generation below 2 ms. This opens the opportunity to design closed-loop experiments on neural systems and biomedical applications using emerging generations of planar or implantable large-scale MEA systems.
13. Cooke JRH, Selen LPJ, van Beers RJ, Medendorp WP. Bayesian adaptive stimulus selection for dissociating models of psychophysical data. J Vis 2018; 18:12. PMID: 30372761. DOI: 10.1167/18.8.12.
Abstract
Comparing models facilitates testing different hypotheses regarding the computational basis of perception and action. Effective model comparison requires stimuli for which models make different predictions. Typically, experiments use a predetermined set of stimuli or sample stimuli randomly. Both methods have limitations; a predetermined set may not contain stimuli that dissociate the models, whereas random sampling may be inefficient. To overcome these limitations, we expanded the psi-algorithm (Kontsevich & Tyler, 1999) from estimating the parameters of a psychometric curve to distinguishing models. To test our algorithm, we applied it to two distinct problems. First, we investigated dissociating sensory noise models. We simulated ideal observers with different noise models performing a two-alternative forced-choice task. Stimuli were selected randomly or using our algorithm. We found that using our algorithm improved the accuracy of model comparison. We also validated the algorithm in subjects by inferring which noise model underlies speed perception. Our algorithm converged quickly to the model previously proposed (Stocker & Simoncelli, 2006), whereas if stimuli were selected randomly, model probabilities separated more slowly and sometimes supported alternative models. Second, we applied our algorithm to a different problem: comparing models of target selection under body acceleration. Previous work found that target choice preference is modulated by whole-body acceleration (Rincon-Gonzalez et al., 2016). However, the effect is subtle, making model comparison difficult. We show that selecting stimuli adaptively could have led to stronger conclusions in model comparison. We conclude that our technique is more efficient and more reliable than current methods of stimulus selection for dissociating models.
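The expanded psi-algorithm can be sketched in a few lines: keep a posterior over candidate models instead of over curve parameters, and choose each stimulus to maximize the mutual information between the next binary response and the model index. The two psychometric models below are invented stand-ins (a steep logistic versus a shallow logistic with a lapse rate), not the noise models of the paper.

```python
import math

def p_correct_model_a(x):
    """Model A: a steep logistic psychometric curve (illustrative)."""
    return 1 / (1 + math.exp(-(x - 1.0) / 0.3))

def p_correct_model_b(x):
    """Model B: a shallower curve with a lapse rate (illustrative)."""
    return 0.05 + 0.9 / (1 + math.exp(-(x - 1.0) / 0.8))

MODELS = [p_correct_model_a, p_correct_model_b]

def h2(p):
    """Binary entropy in nats."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

def model_mutual_information(x, posterior):
    """I(response; model | x): expected reduction in uncertainty about
    which model is correct after one binary response at stimulus x."""
    ps = [m(x) for m in MODELS]
    p_marg = sum(w * p for w, p in zip(posterior, ps))
    return h2(p_marg) - sum(w * h2(p) for w, p in zip(posterior, ps))

def update(x, response, posterior):
    """Bayes update of the model posterior after a 0/1 response at x."""
    like = [m(x) if response else 1 - m(x) for m in MODELS]
    joint = [w * l for w, l in zip(posterior, like)]
    z = sum(joint)
    return [j / z for j in joint]

# Adaptive selection: score candidate intensities, present the best one.
posterior = [0.5, 0.5]
candidates = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0]
x_star = max(candidates, key=lambda x: model_mutual_information(x, posterior))
```

Note that the selected stimulus is not the shared threshold at x = 1, where both models make identical predictions and the expected information gain is zero, but a flanking intensity where their predictions diverge most; this is exactly why adaptive selection dissociates models faster than random sampling.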
Affiliation(s)
- James R H Cooke: Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, the Netherlands
- Luc P J Selen: Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, the Netherlands
- Robert J van Beers: Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, the Netherlands; Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
- W Pieter Medendorp: Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, the Netherlands
14. Madi MK, Karameh FN. Adaptive optimal input design and parametric estimation of nonlinear dynamical systems: application to neuronal modeling. J Neural Eng 2018; 15:046028. PMID: 29749350. DOI: 10.1088/1741-2552/aac3f7.
Abstract
OBJECTIVE Many physical models of biological processes including neural systems are characterized by parametric nonlinear dynamical relations between driving inputs, internal states, and measured outputs of the process. Fitting such models using experimental data (data assimilation) is a challenging task since the physical process often operates in a noisy, possibly non-stationary environment; moreover, conducting multiple experiments under controlled and repeatable conditions can be impractical, time-consuming or costly. The accuracy of model identification, therefore, is dictated principally by the quality and dynamic richness of collected data over single or few experimental sessions. Accordingly, it is highly desirable to design efficient experiments that, by exciting the physical process with smart inputs, yield fast convergence and increased accuracy of the model. APPROACH We herein introduce an adaptive framework in which optimal input design is integrated with square root cubature Kalman filters (OID-SCKF) to develop an online estimation procedure that, first, converges significantly more quickly, thereby permitting model fitting over shorter time windows, and second, enhances model accuracy when only few process outputs are accessible. The methodology is demonstrated on common nonlinear models and on a four-area neural mass model with noisy and limited measurements. Estimation quality (speed and accuracy) is benchmarked against high-performance SCKF-based methods that commonly employ dynamically rich informed inputs for accurate model identification. MAIN RESULTS For all the tested models, simulated single-trial and ensemble averages showed that OID-SCKF exhibited (i) faster convergence of parameter estimates and (ii) lower dependence on inter-trial noise variability with gains up to around 1000 ms in speed and 81% increase in variability for the neural mass models.
In terms of accuracy, OID-SCKF estimation was superior, and exhibited considerably less variability across experiments, in identifying model parameters of (a) systems with challenging model inversion dynamics and (b) systems with fewer measurable outputs that directly relate to the underlying processes. SIGNIFICANCE Fast and accurate identification therefore carries particular promise for modeling of transient (short-lived) neuronal network dynamics using a spatially under-sampled set of noisy measurements, as is commonly encountered in neural engineering applications.
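The square-root cubature Kalman filter at the core of OID-SCKF propagates a Gaussian state estimate through nonlinear dynamics using deterministic "cubature points." The construction below is a minimal sketch of that point set only (the paper's contribution, the optimal input design wrapped around the filter, is not shown): for an n-dimensional Gaussian, the spherical-radial rule places 2n equally weighted points at mean ± √n times the columns of a square root of the covariance.

```python
import numpy as np

def cubature_points(mean, cov):
    """2n equally weighted cubature points for an n-dimensional Gaussian."""
    n = len(mean)
    S = np.linalg.cholesky(cov)                 # square-root factor of covariance
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])   # +/- unit directions
    return mean[:, None] + S @ xi               # shape (n, 2n), weights 1/(2n)

mean = np.array([1.0, -2.0])
cov = np.array([[2.0, 0.5], [0.5, 1.0]])
pts = cubature_points(mean, cov)

# The point set reproduces the Gaussian's first two moments exactly:
emp_mean = pts.mean(axis=1)
emp_cov = (pts - emp_mean[:, None]) @ (pts - emp_mean[:, None]).T / pts.shape[1]
```

Because the points match the mean and covariance exactly, pushing them through the system's nonlinearity gives a derivative-free approximation of the predicted state distribution, which is what makes SCKF-type filters attractive for the neural mass models discussed here.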
Affiliation(s)
- Mahmoud K Madi
- Department of Electrical and Computer Engineering, American University of Beirut, Beirut, Lebanon
15
Bertrán MA, Martínez NL, Wang Y, Dunson D, Sapiro G, Ringach D. Active learning of cortical connectivity from two-photon imaging data. PLoS One 2018; 13:e0196527. [PMID: 29718955 PMCID: PMC5931643 DOI: 10.1371/journal.pone.0196527] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2018] [Accepted: 04/13/2018] [Indexed: 11/19/2022] Open
Abstract
Understanding how groups of neurons interact within a network is a fundamental question in system neuroscience. Instead of passively observing the ongoing activity of a network, we can typically perturb its activity, either by external sensory stimulation or directly via techniques such as two-photon optogenetics. A natural question is how to use such perturbations to identify the connectivity of the network efficiently. Here we introduce a method to infer sparse connectivity graphs from in vivo two-photon imaging of population activity in response to external stimuli. A novel aspect of the work is the introduction of a recommended distribution, incrementally learned from the data, to optimally refine the inferred network. Unlike existing system identification techniques, this “active learning” method automatically focuses its attention on key undiscovered areas of the network, instead of targeting global uncertainty indicators like parameter variance. We show how active learning leads to faster inference while at the same time providing confidence intervals for the network parameters. We present simulations on artificial small-world networks to validate the method, and we apply it to real data. Analysis of the frequency of recovered motifs shows that cortical networks are consistent with a small-world topology model.
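The active-learning idea here — direct perturbations toward the least-resolved parts of the network rather than using a global uncertainty score — can be caricatured with a simple bookkeeping loop. This is a hypothetical sketch, not the paper's recommended-distribution machinery: each directed edge carries an uncertainty score, the next stimulation target is the neuron with the most unresolved outgoing edges, and observing a trial shrinks the uncertainty on that neuron's edges.

```python
import numpy as np

n = 5
uncertainty = np.ones((n, n))              # edge (i, j): i -> j, initially unknown
np.fill_diagonal(uncertainty, 0.0)         # no self-connections tracked

def choose_target(uncertainty):
    """Stimulate the neuron whose outgoing edges are currently least resolved."""
    return int(np.argmax(uncertainty.sum(axis=1)))

def record_trial(uncertainty, target, shrink=0.5):
    """A perturbation trial halves the uncertainty on the target's outgoing edges
    (a stand-in for the posterior update an actual inference step would perform)."""
    uncertainty[target, :] *= shrink
    return uncertainty

for _ in range(20):
    t = choose_target(uncertainty)
    uncertainty = record_trial(uncertainty, t)
```

The greedy rule naturally cycles attention across neurons as their edges become resolved, which is the qualitative behavior the abstract contrasts with variance-targeting heuristics.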
Affiliation(s)
- Martín A. Bertrán
- Electrical and Computer Engineering, Duke University, Durham, North Carolina, United States of America
- Natalia L. Martínez
- Electrical and Computer Engineering, Duke University, Durham, North Carolina, United States of America
- Ye Wang
- Statistical Science Program, Duke University, Durham, North Carolina, United States of America
- David Dunson
- Statistical Science Program, Duke University, Durham, North Carolina, United States of America
- Guillermo Sapiro
- Electrical and Computer Engineering, Duke University, Durham, North Carolina, United States of America
- BME, CS, and Math, Duke University, Durham, North Carolina, United States of America
- Dario Ringach
- Neurobiology and Psychology, Jules Stein Eye Institute, Biomedical Engineering Program, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, California, United States of America
16
Charles AS, Park M, Weller JP, Horwitz GD, Pillow JW. Dethroning the Fano Factor: A Flexible, Model-Based Approach to Partitioning Neural Variability. Neural Comput 2018; 30:1012-1045. [PMID: 29381442 DOI: 10.1162/neco_a_01062] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Neurons in many brain areas exhibit high trial-to-trial variability, with spike counts that are overdispersed relative to a Poisson distribution. Recent work (Goris, Movshon, & Simoncelli, 2014) has proposed to explain this variability in terms of a multiplicative interaction between a stochastic gain variable and a stimulus-dependent Poisson firing rate, which produces quadratic relationships between spike count mean and variance. Here we examine this quadratic assumption and propose a more flexible family of models that can account for a more diverse set of mean-variance relationships. Our model contains additive gaussian noise that is transformed nonlinearly to produce a Poisson spike rate. Different choices of the nonlinear function can give rise to qualitatively different mean-variance relationships, ranging from sublinear to linear to quadratic. Intriguingly, a rectified squaring nonlinearity produces a linear mean-variance function, corresponding to responses with a constant Fano factor. We describe a computationally efficient method for fitting this model to data and demonstrate that a majority of neurons in a V1 population are better described by a model with a nonquadratic relationship between mean and variance. Finally, we demonstrate a practical use of our model via an application to Bayesian adaptive stimulus selection in closed-loop neurophysiology experiments, which shows that accounting for overdispersion can lead to dramatic improvements in adaptive tuning curve estimation.
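The generative model in this abstract is easy to simulate: additive Gaussian noise passed through a nonlinearity f gives a Poisson rate, and the choice of f shapes the spike-count mean-variance relation. The sketch below (illustrative parameter values, not fit to any data) checks the claim about the rectified-squaring nonlinearity, for which the Fano factor should come out overdispersed but roughly constant across drive levels.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(mu, sigma=0.4, f=lambda x: np.maximum(x, 0.0) ** 2, trials=200_000):
    """Spike counts for mean drive mu: rate = f(mu + noise), counts ~ Poisson(rate)."""
    rate = f(mu + sigma * rng.standard_normal(trials))
    counts = rng.poisson(rate)
    return counts.mean(), counts.var()

# Fano factor (variance / mean) across a range of drive levels:
fanos = []
for mu in (1.0, 2.0, 3.0):
    m, v = simulate(mu)
    fanos.append(v / m)
```

For x ~ N(mu, sigma²) with rate = x² (rectification negligible when mu >> sigma), the count variance is mu² + sigma² + 4 mu² sigma² + 2 sigma⁴, so the Fano factor tends to the constant 1 + 4 sigma² for large mu — the linear mean-variance regime described above.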
Affiliation(s)
- Adam S Charles
- Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, NJ 08544, U.S.A.
- Mijung Park
- Gatsby Computational Neuroscience Unit, University College London, London W1T 4JG, U.K.
- J Patrick Weller
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195, U.S.A.
- Gregory D Horwitz
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195, U.S.A.
- Jonathan W Pillow
- Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, NJ 08544, U.S.A.
17
Sloas DC, Zhuo R, Xue H, Chambers AR, Kolaczyk E, Polley DB, Sen K. Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex. eNeuro 2016; 3:ENEURO.0124-16.2016. [PMID: 27622211 PMCID: PMC5008244 DOI: 10.1523/eneuro.0124-16.2016] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2016] [Revised: 07/28/2016] [Accepted: 08/07/2016] [Indexed: 11/21/2022] Open
Abstract
Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore single or few dimensions due to the difficulty of determining optimal stimuli in a high dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates using multidimensional stimuli in auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices.
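The evolutionary search the abstract pairs with the GAM can be sketched as a short loop. Everything here is a toy stand-in: the "response" function below plays the role of live spike-count feedback, and the preferred stimulus is a hypothetical fixed vector the algorithm does not see directly. Each generation keeps the best-responding stimuli and mutates them.

```python
import numpy as np

rng = np.random.default_rng(2)
optimum = np.array([0.3, -0.8, 1.5])       # hypothetical preferred stimulus

def response(stim):
    """Toy 'firing rate' peaked at the optimum (stands in for spike feedback)."""
    return np.exp(-np.sum((stim - optimum) ** 2))

# Each stimulus is a vector of (here, 3) stimulus-dimension values.
pop = rng.uniform(-2, 2, size=(30, 3))     # initial random stimulus population
for generation in range(40):
    fitness = np.array([response(s) for s in pop])
    elite = pop[np.argsort(fitness)[-10:]]                     # keep top third
    children = elite[rng.integers(0, 10, 20)] \
        + 0.1 * rng.standard_normal((20, 3))                   # mutate elites
    pop = np.vstack([elite, children])

best = pop[np.argmax([response(s) for s in pop])]
```

Because the loop only needs a scalar fitness per stimulus, it scales to the multidimensional spaces discussed above; the GAM is then fit to the (stimulus, response) pairs the search visited to quantify interactions across dimensions.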
Affiliation(s)
- David C. Sloas
- Hearing Research Center and Department of Biomedical Engineering, Boston University, Boston, Massachusetts 02215
- Ran Zhuo
- Department of Mathematics and Statistics, Boston University, Boston, Massachusetts 02215
- Hongbo Xue
- Department of Mathematics and Statistics, Boston University, Boston, Massachusetts 02215
- Anna R. Chambers
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, Massachusetts 02114
- Eric Kolaczyk
- Department of Mathematics and Statistics, Boston University, Boston, Massachusetts 02215
- Daniel B. Polley
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, Massachusetts 02114
- Department of Otolaryngology, Harvard Medical School, Boston, Massachusetts 02115
- Kamal Sen
- Hearing Research Center and Department of Biomedical Engineering, Boston University, Boston, Massachusetts 02215
18
Potter SM, El Hady A, Fetz EE. Closed-loop neuroscience and neuroengineering. Front Neural Circuits 2014; 8:115. [PMID: 25294988 PMCID: PMC4171982 DOI: 10.3389/fncir.2014.00115] [Citation(s) in RCA: 34] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2013] [Accepted: 09/01/2014] [Indexed: 01/18/2023] Open
Affiliation(s)
- Steve M Potter
- Laboratory for Neuroengineering, Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, GA, USA
- Ahmed El Hady
- Department of Nonlinear Dynamics, Max Planck Institute for Dynamics and Self-Organization, Goettingen, Germany
- Eberhard E Fetz
- Departments of Physiology and Biophysics and Bioengineering, Washington National Primate Research Center, University of Washington, Seattle, WA, USA
19
Online stimulus optimization rapidly reveals multidimensional selectivity in auditory cortical neurons. J Neurosci 2014; 34:8963-75. [PMID: 24990917 DOI: 10.1523/jneurosci.0260-14.2014] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022] Open
Abstract
Neurons in sensory brain regions shape our perception of the surrounding environment through two parallel operations: decomposition and integration. For example, auditory neurons decompose sounds by separately encoding their frequency, temporal modulation, intensity, and spatial location. Neurons also integrate across these various features to support a unified perceptual gestalt of an auditory object. At higher levels of a sensory pathway, neurons may select for a restricted region of feature space defined by the intersection of multiple, independent stimulus dimensions. To further characterize how auditory cortical neurons decompose and integrate multiple facets of an isolated sound, we developed an automated procedure that manipulated five fundamental acoustic properties in real time based on single-unit feedback in awake mice. Within several minutes, the online approach converged on regions of the multidimensional stimulus manifold that reliably drove neurons at significantly higher rates than predefined stimuli. Optimized stimuli were cross-validated against pure tone receptive fields and spectrotemporal receptive field estimates in the inferior colliculus and primary auditory cortex. We observed, from midbrain to cortex, increases in both level invariance and frequency selectivity, which may underlie equivalent sparseness of responses in the two areas. We found that onset and steady-state spike rates increased proportionately as the stimulus was tailored to the multidimensional receptive field. Separately evaluating the amount of leverage each sound feature exerted on the overall firing rate revealed interdependencies between stimulus features, as well as hierarchical shifts in selectivity and invariance that may go unnoticed with traditional approaches.
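An online search over several stimulus dimensions driven by spike feedback, as described above, can be sketched as coordinate ascent. This is a hypothetical sketch, not the study's actual update rule: a toy rate function stands in for single-unit feedback, and the loop adjusts one acoustic dimension at a time, keeping changes that raise the measured rate and shrinking the step size over sweeps.

```python
import numpy as np

# Hypothetical preferred values along five acoustic dimensions
# (unknown to the search; probed only through firing_rate).
optimum = np.array([4.0, 0.5, -1.0, 2.0, 0.0])

def firing_rate(stim):
    """Toy stand-in for the online spike-count feedback."""
    return 50.0 * np.exp(-0.5 * np.sum((stim - optimum) ** 2))

stim = np.zeros(5)                           # start from a neutral stimulus
step = 0.5
for sweep in range(30):
    for d in range(5):                       # adjust one dimension at a time
        for delta in (+step, -step):
            trial = stim.copy()
            trial[d] += delta
            if firing_rate(trial) > firing_rate(stim):
                stim = trial                 # keep changes that raise the rate
    step *= 0.9                              # refine the search over time
```

Greedy per-dimension updates like this converge quickly when dimensions interact weakly; the abstract's finding of interdependencies between features is precisely the regime where evaluating each feature's leverage separately, as the authors do, becomes informative.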