1. Trägenap S, Whitney DE, Fitzpatrick D, Kaschube M. The developmental emergence of reliable cortical representations. Nat Neurosci 2025; 28:394-405. PMID: 39905211. DOI: 10.1038/s41593-024-01857-3.
Abstract
The fundamental structure of cortical networks arises early in development before the onset of sensory experience. However, how endogenously generated networks respond to the onset of sensory experience and how they form mature sensory representations with experience remain unclear. In this study, we examined this 'nature-nurture transform' at the single-trial level using chronic in vivo calcium imaging in ferret visual cortex. At eye opening, visual stimulation evokes robust patterns of modular cortical network activity that are highly variable within and across trials, severely limiting stimulus discriminability. These initial stimulus-evoked modular patterns are distinct from spontaneous network activity patterns present before and at the time of eye opening. Within a week of normal visual experience, cortical networks develop low-dimensional, highly reliable stimulus representations that correspond with reorganized patterns of spontaneous activity. Using a computational model, we propose that reliable visual representations derive from the alignment of feedforward and recurrent cortical networks shaped by novel patterns of visually driven activity.
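The two quantities at the heart of this abstract, trial-to-trial reliability of evoked patterns and stimulus discriminability, can be illustrated with a short sketch on synthetic data. This is not the authors' analysis pipeline; the noise levels, pattern sizes, and the nearest-centroid decoder below are assumptions chosen only to show how lower trial-to-trial variability translates into better single-trial discriminability.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stim, n_trials, n_pix = 4, 20, 50    # stimuli, trials per stimulus, imaged "pixels"

def simulate(noise):
    """Evoked patterns = a fixed stimulus template plus trial-to-trial noise."""
    templates = rng.standard_normal((n_stim, n_pix))
    return templates[:, None, :] + noise * rng.standard_normal((n_stim, n_trials, n_pix))

def reliability(X):
    """Mean correlation between pairs of trials of the same stimulus."""
    cors = []
    for s in range(n_stim):
        C = np.corrcoef(X[s])                       # trials x trials correlation matrix
        cors.append(C[np.triu_indices(n_trials, 1)].mean())
    return np.mean(cors)

def discriminability(X):
    """Leave-one-out nearest-centroid decoding accuracy."""
    correct = 0
    for s in range(n_stim):
        for t in range(n_trials):
            centroids = [X[k, [i for i in range(n_trials) if not (k == s and i == t)]].mean(0)
                         for k in range(n_stim)]
            correct += int(np.argmin([np.linalg.norm(X[s, t] - c) for c in centroids]) == s)
    return correct / (n_stim * n_trials)

for label, noise in [("high trial-to-trial variability (eye opening)", 5.0),
                     ("low trial-to-trial variability (after experience)", 1.0)]:
    X = simulate(noise)
    print(f"{label}: reliability={reliability(X):.2f}, decoding accuracy={discriminability(X):.2f}")
```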
Affiliation(s)
- Sigrid Trägenap
- Frankfurt Institute for Advanced Studies (FIAS), Frankfurt, Germany
- International Max Planck Research School for Neural Circuits, Frankfurt, Germany
- Department of Physics, Goethe University Frankfurt, Frankfurt, Germany
- David E Whitney
- Department of Functional Architecture and Development of Cerebral Cortex, Max Planck Florida Institute for Neuroscience, Jupiter, FL, USA
- David Fitzpatrick
- Department of Functional Architecture and Development of Cerebral Cortex, Max Planck Florida Institute for Neuroscience, Jupiter, FL, USA
- Matthias Kaschube
- Frankfurt Institute for Advanced Studies (FIAS), Frankfurt, Germany
- Department of Computer Science and Mathematics, Goethe University Frankfurt, Frankfurt, Germany
2. Jurewicz K, Sleezer BJ, Mehta PS, Hayden BY, Ebitz RB. Irrational choices via a curvilinear representational geometry for value. Nat Commun 2024; 15:6424. PMID: 39080250. PMCID: PMC11289086. DOI: 10.1038/s41467-024-49568-4.
Abstract
We make decisions by comparing values, but it is not yet clear how value is represented in the brain. Many models assume, if only implicitly, that the representational geometry of value is linear. However, in part due to a historical focus on noisy single neurons rather than neuronal populations, this hypothesis has not been rigorously tested. Here, we examine the representational geometry of value in the ventromedial prefrontal cortex (vmPFC), a part of the brain linked to economic decision-making, in two male rhesus macaques. We find that values are encoded along a curved manifold in vmPFC. This curvilinear geometry predicts a specific pattern of irrational decision-making: that decision-makers will make worse choices when an irrelevant, decoy option is worse in value, compared to when it is better. We observe this type of irrational choice in behavior. Together, these results suggest not only that the representational geometry of value is nonlinear, but also that this nonlinearity could impose bounds on rational decision-making.
Affiliation(s)
- Katarzyna Jurewicz
- Department of Neurosciences, Faculté de médecine, and Centre interdisciplinaire de recherche sur le cerveau et l'apprentissage, Université de Montréal, Montréal, QC, Canada
- Department of Physiology, Faculty of Medicine and Health Sciences, McGill University, Montréal, QC, Canada
- Brianna J Sleezer
- Department of Neuroscience, Center for Magnetic Resonance Research, and Center for Neuroengineering, University of Minnesota, Minneapolis, MN, USA
- Priyanka S Mehta
- Department of Neuroscience, Center for Magnetic Resonance Research, and Center for Neuroengineering, University of Minnesota, Minneapolis, MN, USA
- Psychology Program, Department of Human Behavior, Justice, and Diversity, University of Wisconsin-Superior, Superior, WI, USA
- Benjamin Y Hayden
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, USA
- R Becket Ebitz
- Department of Neurosciences, Faculté de médecine, and Centre interdisciplinaire de recherche sur le cerveau et l'apprentissage, Université de Montréal, Montréal, QC, Canada
3. Greenidge CD, Scholl B, Yates JL, Pillow JW. Efficient Decoding of Large-Scale Neural Population Responses With Gaussian-Process Multiclass Regression. Neural Comput 2024; 36:175-226. PMID: 38101329. DOI: 10.1162/neco_a_01630.
Abstract
Neural decoding methods provide a powerful tool for quantifying the information content of neural population codes and the limits imposed by correlations in neural activity. However, standard decoding methods are prone to overfitting and scale poorly to high-dimensional settings. Here, we introduce a novel decoding method to overcome these limitations. Our approach, the gaussian process multiclass decoder (GPMD), is well suited to decoding a continuous low-dimensional variable from high-dimensional population activity and provides a platform for assessing the importance of correlations in neural population codes. The GPMD is a multinomial logistic regression model with a gaussian process prior over the decoding weights. The prior includes hyperparameters that govern the smoothness of each neuron's decoding weights, allowing automatic pruning of uninformative neurons during inference. We provide a variational inference method for fitting the GPMD to data, which scales to hundreds or thousands of neurons and performs well even in data sets with more neurons than trials. We apply the GPMD to recordings from primary visual cortex in three species: monkey, ferret, and mouse. Our decoder achieves state-of-the-art accuracy on all three data sets and substantially outperforms independent Bayesian decoding, showing that knowledge of the correlation structure is essential for optimal decoding in all three species.
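A minimal sketch of the idea behind the GPMD: multinomial logistic regression whose per-neuron decoding weights are smoothed across a circular stimulus variable by a Gaussian-process-style prior. The synthetic tuned population, the fixed kernel length scale, and plain penalized gradient descent below are simplifications of the paper's variational inference, used here only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_classes, n_trials = 80, 12, 600        # e.g. 12 orientation bins

# Synthetic tuned population: each neuron prefers one orientation bin.
prefs = rng.integers(0, n_classes, n_neurons)
y = rng.integers(0, n_classes, n_trials)
circ = np.abs(y[:, None] - prefs[None, :])
circ = np.minimum(circ, n_classes - circ)
X = np.exp(-circ**2 / 4.0) + 0.3 * rng.standard_normal((n_trials, n_neurons))
Y = np.eye(n_classes)[y]                            # one-hot labels

# GP-style smoothness prior over each neuron's weights across classes (wrapped RBF kernel).
theta = 2 * np.pi * np.arange(n_classes) / n_classes
d = np.angle(np.exp(1j * (theta[:, None] - theta[None, :])))
K = np.exp(-d**2 / (2 * 0.5**2)) + 1e-3 * np.eye(n_classes)
K_inv = np.linalg.inv(K)

W = np.zeros((n_neurons, n_classes))
lam, lr = 1e-2, 0.5
for _ in range(500):                                # penalized maximum likelihood
    logits = X @ W
    P = np.exp(logits - logits.max(1, keepdims=True))
    P /= P.sum(1, keepdims=True)
    grad = X.T @ (P - Y) / n_trials + lam * W @ K_inv   # gradient of NLL + smoothness penalty
    W -= lr * grad

print(f"decoding accuracy: {(np.argmax(X @ W, 1) == y).mean():.2f} (chance = {1/n_classes:.2f})")
```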
Affiliation(s)
- Benjamin Scholl
- University of Pennsylvania, Perelman School of Medicine, Philadelphia, PA 19104, U.S.A.
- Jacob L Yates
- University of California, Berkeley, School of Optometry, Berkeley, CA 94720, U.S.A.
4. To deconvolve, or not to deconvolve: Inferences of neuronal activities using calcium imaging data. J Neurosci Methods 2022; 366:109431. PMID: 34856319. DOI: 10.1016/j.jneumeth.2021.109431.
Abstract
BACKGROUND With the increasing popularity of calcium imaging in neuroscience research, choosing the right methods to analyze calcium imaging data is critical to address various scientific questions. Unlike spike trains measured using electrodes, fluorescence intensity traces provide an indirect and noisy measurement of the underlying neuronal activities. The observed calcium traces are either analyzed directly or deconvolved to spike trains to infer neuronal activities. When both approaches are applicable, it is unclear whether deconvolving calcium traces is a necessary step. METHODS In this article, we compare the performance of using calcium traces or their deconvolved spike trains for three common analyses: clustering, principal component analysis (PCA), and population decoding. RESULTS We found that (1) the two approaches lead to diverging results; (2) estimated spike trains, when smoothed or binned appropriately, usually lead to satisfactory performance, such as more accurate estimation of cluster membership; (3) although estimated spike trains produce results more similar to those from true spike data than trace data does, the PCA results from trace data might better reflect the underlying neuronal ensembles (clusters); and (4) for both approaches, decodability can be improved by using denoising or smoothing methods. COMPARISON WITH EXISTING METHODS Our simulations and applications to real data suggest that estimated spike data outperform trace data in cluster analysis and give comparable results for population decoding. In addition, the decodability of estimated spike data can be slightly better than that of calcium trace data with appropriate filtering/smoothing methods. CONCLUSION We conclude that spike detection might be a useful pre-processing step for certain problems such as clustering; however, the continuous nature of calcium imaging data provides a natural smoothness that might be helpful for problems such as dimensionality reduction.
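The comparison discussed in this abstract can be mocked up in a few lines: simulate spike trains, turn them into calcium-like traces with a slow kernel plus noise, apply a crude deconvolution, and decode a binary stimulus from both representations. The kernel time constant, the rectified-derivative "deconvolution", and the logistic-regression decoder are illustrative assumptions, not the methods evaluated in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_neurons, T = 200, 30, 100
stim = rng.integers(0, 2, n_trials)                       # binary stimulus label per trial
rates = 0.05 + 0.10 * stim[:, None, None] * rng.random(n_neurons)[None, :, None]
spikes = (rng.random((n_trials, n_neurons, T)) < rates).astype(float)

# Calcium-like traces: convolve spikes with a slow exponential kernel, add noise.
kernel = np.exp(-np.arange(30) / 10.0)
traces = np.apply_along_axis(lambda s: np.convolve(s, kernel)[:T], -1, spikes)
traces += 0.2 * rng.standard_normal(traces.shape)

# Crude "deconvolution": rectified derivative of the trace, then temporal smoothing.
est_spikes = np.clip(np.diff(traces, axis=-1, prepend=0.0), 0, None)
est_spikes = gaussian_filter1d(est_spikes, sigma=2, axis=-1)

for name, data in [("raw calcium traces", traces), ("deconvolved + smoothed", est_spikes)]:
    acc = cross_val_score(LogisticRegression(max_iter=2000),
                          data.reshape(n_trials, -1), stim, cv=5).mean()
    print(f"{name:24s} decoding accuracy: {acc:.2f}")
```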
5. Decoding neurobiological spike trains using recurrent neural networks: a case study with electrophysiological auditory cortex recordings. Neural Comput Appl 2021. DOI: 10.1007/s00521-021-06589-0.
Abstract
Recent advancements in multielectrode methods and spike-sorting algorithms enable the in vivo recording of the activities of many neurons at a high temporal resolution. These datasets offer new opportunities in the investigation of the biological neural code, including the direct testing of specific coding hypotheses, but they also reveal the limitations of present decoder algorithms. Classical methods rely on a manual feature extraction step, resulting in a feature vector, like the firing rates of an ensemble of neurons. In this paper, we present a recurrent neural-network-based decoder and evaluate its performance on experimental and artificial datasets. The experimental datasets were obtained by recording the auditory cortical responses of rats exposed to sound stimuli, while the artificial datasets represent preset encoding schemes. The task of the decoder was to classify the action potential time series according to the corresponding sound stimuli. It is illustrated that, depending on the coding scheme, the performance of the recurrent-network-based decoder can exceed the performance of the classical methods. We also show how randomized copies of the training datasets can be used to reveal the role of candidate spike-train features. We conclude that artificial neural network decoders can be a useful alternative to classical population vector-based techniques in studies of the biological neural code.
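A compact version of the decoding setup described here, classifying stimulus identity from binned spike counts with a recurrent network, can be written as a GRU classifier in PyTorch. The synthetic data, network size, and training schedule below are placeholders; they do not reproduce the study's datasets or architecture.

```python
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(3)
n_trials, T, n_units, n_stim = 400, 50, 20, 4

# Synthetic binned spike counts whose temporal rate profile depends on the stimulus.
labels = rng.integers(0, n_stim, n_trials)
t = np.linspace(0, 1, T)
profiles = np.stack([np.exp(-(t - 0.2 - 0.2 * s) ** 2 / 0.01) for s in range(n_stim)])
rates = 0.5 + 3.0 * profiles[labels][:, None, :] * rng.random((1, n_units, 1))
counts = rng.poisson(rates).astype(np.float32)            # (trials, units, time)

X = torch.tensor(counts).permute(0, 2, 1)                 # (trials, time, units)
y = torch.tensor(labels, dtype=torch.long)

class SpikeRNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(n_units, 32, batch_first=True)
        self.readout = nn.Linear(32, n_stim)
    def forward(self, x):
        _, h = self.rnn(x)                                 # final hidden state
        return self.readout(h[-1])

model = SpikeRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

acc = (model(X).argmax(1) == y).float().mean().item()
print(f"training accuracy: {acc:.2f} (chance = {1/n_stim:.2f})")
```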
6. Pokorny C, Ison MJ, Rao A, Legenstein R, Papadimitriou C, Maass W. STDP Forms Associations between Memory Traces in Networks of Spiking Neurons. Cereb Cortex 2021; 30:952-968. PMID: 31403679. PMCID: PMC7132978. DOI: 10.1093/cercor/bhz140.
Abstract
Memory traces and associations between them are fundamental for cognitive brain function. Neuron recordings suggest that distributed assemblies of neurons in the brain serve as memory traces for spatial information, real-world items, and concepts. However, there is conflicting evidence regarding neural codes for associated memory traces. Some studies suggest the emergence of overlaps between assemblies during an association, while others suggest that the assemblies themselves remain largely unchanged and new assemblies emerge as neural codes for associated memory items. Here we study the emergence of neural codes for associated memory items in a generic computational model of recurrent networks of spiking neurons with a data-constrained rule for spike-timing-dependent plasticity. The model depends critically on 2 parameters, which control the excitability of neurons and the scale of initial synaptic weights. By modifying these 2 parameters, the model can reproduce both experimental data from the human brain on the fast formation of associations through emergent overlaps between assemblies, and rodent data where new neurons are recruited to encode the associated memories. Hence, our findings suggest that the brain can use both of these 2 neural codes for associations, and dynamically switch between them during consolidation.
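The plasticity mechanism named here, pair-based spike-timing-dependent plasticity, is commonly implemented with exponentially decaying pre- and postsynaptic traces, as in the sketch below. The time constants and amplitudes are generic textbook values, not the data-constrained rule used in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
dt, T = 1.0, 1000                      # 1 ms steps, 1 s of simulated time
tau_pre, tau_post = 20.0, 20.0         # trace time constants (ms)
A_plus, A_minus = 0.01, 0.012          # potentiation / depression amplitudes
w, w_max = 0.5, 1.0

pre_spikes = rng.random(T) < 0.02      # ~20 Hz Poisson presynaptic spikes
post_spikes = rng.random(T) < 0.02     # ~20 Hz Poisson postsynaptic spikes
x_pre = x_post = 0.0

for t in range(T):
    # Exponentially decaying spike traces.
    x_pre += dt * (-x_pre / tau_pre) + pre_spikes[t]
    x_post += dt * (-x_post / tau_post) + post_spikes[t]
    # Pre-before-post potentiates, post-before-pre depresses.
    if post_spikes[t]:
        w = min(w_max, w + A_plus * x_pre)
    if pre_spikes[t]:
        w = max(0.0, w - A_minus * x_post)

print(f"synaptic weight after 1 s of uncorrelated firing: {w:.3f}")
```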
Affiliation(s)
- Christoph Pokorny
- Institute for Theoretical Computer Science, Graz University of Technology, 8010 Graz, Austria
- Matias J Ison
- School of Psychology, University of Nottingham, Nottingham, NG7 2RD, UK
- Arjun Rao
- Institute for Theoretical Computer Science, Graz University of Technology, 8010 Graz, Austria
- Robert Legenstein
- Institute for Theoretical Computer Science, Graz University of Technology, 8010 Graz, Austria
- Christos Papadimitriou
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA 94720-1770, USA
- Wolfgang Maass
- Institute for Theoretical Computer Science, Graz University of Technology, 8010 Graz, Austria
7. Generation of Sharp Wave-Ripple Events by Disinhibition. J Neurosci 2020; 40:7811-7836. PMID: 32913107. PMCID: PMC7548694. DOI: 10.1523/jneurosci.2174-19.2020.
Abstract
Sharp wave-ripple complexes (SWRs) are hippocampal network phenomena involved in memory consolidation. To date, the mechanisms underlying their occurrence remain obscure. Here, we show how the interactions between pyramidal cells, parvalbumin-positive (PV+) basket cells, and an unidentified class of anti-SWR interneurons can contribute to the initiation and termination of SWRs. Using a biophysically constrained model of a network of spiking neurons and a rate-model approximation, we demonstrate that SWRs emerge as a result of the competition between two interneuron populations and the resulting disinhibition of pyramidal cells. Our models explain how the activation of pyramidal cells or PV+ cells can trigger SWRs, as shown in vitro, and suggest that PV+ cell-mediated short-term synaptic depression influences the experimentally reported dynamics of SWR events. Furthermore, we predict that the silencing of anti-SWR interneurons can trigger SWRs. These results broaden our understanding of the microcircuits supporting the generation of memory-related network dynamics. SIGNIFICANCE STATEMENT The hippocampus is a part of the mammalian brain that is crucial for episodic memories. During periods of sleep and inactive waking, the extracellular activity of the hippocampus is dominated by sharp wave-ripple events (SWRs), which have been shown to be important for memory consolidation. The mechanisms regulating the emergence of these events are still unclear. We developed a computational model to study the emergence of SWRs and to explain the roles of different cell types in regulating them. The model accounts for several previously unexplained features of SWRs and thus advances the understanding of memory-related dynamics.
8. Quax SC, D'Asaro M, van Gerven MAJ. Adaptive time scales in recurrent neural networks. Sci Rep 2020; 10:11360. PMID: 32647161. PMCID: PMC7347927. DOI: 10.1038/s41598-020-68169-x.
Abstract
Recent experiments have revealed a hierarchy of time scales in the visual cortex, where different stages of the visual system process information at different time scales. Recurrent neural networks are ideal models to gain insight into how information is processed by such a hierarchy of time scales and have become widely used to model temporal dynamics both in machine learning and computational neuroscience. However, in the derivation of such models as discrete time approximations of the firing rate of a population of neurons, the time constants of the neuronal process are generally ignored. Learning these time constants could inform us about the time scales underlying temporal processes in the brain and enhance the expressive capacity of the network. To investigate the potential of adaptive time constants, we compare the standard approximations to a more lenient one that accounts for the time scales at which processes unfold. We show that such a model performs better on predicting simulated neural data and allows recovery of the time scales at which the underlying processes unfold. A hierarchy of time scales emerges when adapting to data with multiple underlying time scales, underscoring the importance of such a hierarchy in processing complex temporal information.
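The modelling choice discussed in this abstract, making the time constants of a rate RNN learnable, can be sketched as a leaky recurrent update h ← h + (Δt/τ)(−h + f(W_rec h + W_in x)) with τ a trainable per-unit parameter. The softplus parameterization of τ, the network size, and the toy integration task below are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LeakyRNN(nn.Module):
    """Rate RNN with a learnable time constant per hidden unit."""
    def __init__(self, n_in, n_hid, n_out, dt=1.0):
        super().__init__()
        self.W_in = nn.Linear(n_in, n_hid)
        self.W_rec = nn.Linear(n_hid, n_hid)
        self.readout = nn.Linear(n_hid, n_out)
        self.log_tau = nn.Parameter(torch.zeros(n_hid))    # tau = dt * (1 + softplus(log_tau))
        self.dt = dt

    def forward(self, x):                                  # x: (batch, time, n_in)
        h = torch.zeros(x.shape[0], self.W_rec.out_features, device=x.device)
        alpha = self.dt / (self.dt * (1.0 + F.softplus(self.log_tau)))   # per-unit step size
        for t in range(x.shape[1]):
            h = h + alpha * (-h + torch.tanh(self.W_rec(h) + self.W_in(x[:, t])))
        return self.readout(h)

# Toy usage: classify whether the temporal mean of a slow noisy input is positive.
x = torch.randn(64, 100, 1).cumsum(dim=1) / 10.0
y = (x.mean(dim=1).squeeze(-1) > 0).long()
model = LeakyRNN(1, 32, 2)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    opt.step()
print("learned time constants (first 5 units):", (1.0 + F.softplus(model.log_tau))[:5].data)
```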
Affiliation(s)
- Silvan C Quax
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands.
- Michele D'Asaro
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Marcel A J van Gerven
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
9. Valadez-Godínez S, Sossa H, Santiago-Montero R. On the accuracy and computational cost of spiking neuron implementation. Neural Netw 2019; 122:196-217. PMID: 31689679. DOI: 10.1016/j.neunet.2019.09.026.
Abstract
For more than a decade, three statements about spiking neuron (SN) implementations have been widely accepted: 1) the Hodgkin and Huxley (HH) model is computationally prohibitive, 2) the Izhikevich (IZH) artificial neuron is as efficient as the Leaky Integrate-and-Fire (LIF) model, and 3) the IZH model is more efficient than the HH model (Izhikevich, 2004). As suggested by Hodgkin and Huxley (1952), their model operates in two modes: by using the α and β rate functions directly (HH model) and by storing them in tables (HHT model) for computational cost reduction. Recently, it has been stated that: 1) the HHT model (HH using tables) is not prohibitive, 2) the IZH model is not efficient, and 3) the HHT and IZH models are comparable in computational cost (Skocik & Long, 2014). That controversy shows that there is no consensus concerning SN simulation capacities. Hence, in this work, we introduce a refined approach, based on multiobjective optimization theory, for describing SN simulation capacities and ultimately choosing optimal simulation parameters. We used normalized metrics to define the capacity levels of accuracy, computational cost, and efficiency. Normalized metrics allowed comparisons between SNs at the same level or scale. We conducted tests for balanced, lower, and upper boundary conditions under a regular spiking mode with constant and random current stimuli. We found optimal simulation parameters leading to a balance between computational cost and accuracy. Importantly, and in general, we found that 1) the HH model (without using tables) is the most accurate, computationally inexpensive, and efficient, 2) the IZH model is the most expensive and inefficient, 3) the LIF and HHT models are the most inaccurate, 4) the HHT model is more expensive and inaccurate than the HH model due to the discretization of the α and β tables, and 5) the HHT model is not comparable in computational cost to the IZH model. These results refute the theory formulated over a decade ago (Izhikevich, 2004) and examine in greater depth the statements formulated by Skocik and Long (2014). Our findings imply that the number of dimensions or FLOPS in SNs is a theoretical, but not a practical, indicator of the true computational cost. The metric we propose for computational cost is more precise than FLOPS and was found to be invariant to computer architecture. Moreover, we found that the firing frequency used in previous works is a necessary but insufficient metric for evaluating simulation accuracy. We also show that our results are consistent with the theory of numerical methods and the theory of SN discontinuity. Discontinuous SNs, such as the LIF and IZH models, introduce a considerable error every time a spike is generated. In addition, compared to a constant input current, a random input current increases the computational cost and inaccuracy. Furthermore, we found that the search for optimal simulation parameters is problem-specific. This is important because most previous works have sought a general and unique optimal simulation; here, we show that such a solution cannot exist because this is a multiobjective optimization problem that depends on several factors. This work sets up a renewed thesis concerning SN simulation that is useful to several related research areas, including the emerging field of deep spiking neural networks.
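Two of the model families compared in this study, LIF and Izhikevich, can be simulated side by side; a rough wall-clock comparison like the one below illustrates the kind of cost measurement under discussion, although it is far cruder than the normalized, architecture-invariant metric the authors propose, and the HH variants are omitted for brevity. The Izhikevich parameters are the standard regular-spiking values; the input current and the LIF threshold/reset are illustrative choices.

```python
import time

dt, T, I = 0.1, 100000, 10.0                   # 0.1 ms steps, 10 s of simulated time

def run_lif(tau=10.0, v_th=5.0, v_reset=0.0):  # illustrative threshold / reset values
    v, spikes = 0.0, 0
    for _ in range(T):
        v += dt * (-v + I) / tau
        if v >= v_th:
            v, spikes = v_reset, spikes + 1
    return spikes

def run_izh(a=0.02, b=0.2, c=-65.0, d=8.0):    # standard regular-spiking parameters
    v, u, spikes = -65.0, -13.0, 0
    for _ in range(T):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:
            v, u, spikes = c, u + d, spikes + 1
    return spikes

for name, fn in [("LIF", run_lif), ("Izhikevich", run_izh)]:
    t0 = time.perf_counter()
    n = fn()
    print(f"{name:10s}: {n} spikes, {time.perf_counter() - t0:.3f} s wall-clock")
```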
Affiliation(s)
- Sergio Valadez-Godínez
- Laboratorio de Robótica y Mecatrónica, Centro de Investigación en Computación, Instituto Politécnico Nacional, Av. Juan de Dios Bátiz, S/N, Col. Nva. Industrial Vallejo, Ciudad de México, México, 07738, Mexico; División de Ingeniería Informática, Instituto Tecnológico Superior de Purísima del Rincón, Gto., México, 36413, Mexico; División de Ingenierías de Educación Superior, Universidad Virtual del Estado de Guanajuato, Gto., México, 36400, Mexico.
- Humberto Sossa
- Laboratorio de Robótica y Mecatrónica, Centro de Investigación en Computación, Instituto Politécnico Nacional, Av. Juan de Dios Bátiz, S/N, Col. Nva. Industrial Vallejo, Ciudad de México, México, 07738, Mexico; Tecnológico de Monterrey, Campus Guadalajara, Av. Gral. Ramón Corona 2514, Zapopan, Jal., México, 45138, Mexico
- Raúl Santiago-Montero
- División de Estudios de Posgrado e Investigación, Instituto Tecnológico de León, Av. Tecnológico S/N, León, Gto., México, 37290, Mexico
10. Ogi M, Yamagishi T, Tsukano H, Nishio N, Hishida R, Takahashi K, Horii A, Shibuki K. Associative responses to visual shape stimuli in the mouse auditory cortex. PLoS One 2019; 14:e0223242. PMID: 31581242. PMCID: PMC6776301. DOI: 10.1371/journal.pone.0223242.
Abstract
Humans can recall various aspects of a characteristic sound as a whole when they see a visual shape stimulus that has been intimately associated with the sound. In subjects with audio-visual associative memory, auditory responses that code the associated sound may be induced in the auditory cortex in response to presentation of the associated visual shape stimulus. To test this possibility, mice were pre-exposed to a combination of an artificial sound mimicking a cat’s “meow” and a visual shape stimulus of concentric circles or stars for more than two weeks, since such passive exposure is known to be sufficient for inducing audio-visual associative memory in mice. After the exposure, we anesthetized the mice, and presented them with the associated visual shape stimulus. We found that associative responses in the auditory cortex were induced in response to the visual stimulus. The associative auditory responses were observed when complex sounds such as “meow” were used for formation of audio-visual associative memory, but not when a pure tone was used. These results suggest that associative auditory responses in the auditory cortex represent the characteristics of the complex sound stimulus as a whole.
Affiliation(s)
- Manabu Ogi
- Department of Neurophysiology, Brain Research Institute, Niigata University, Asahi-machi, Chuo-ku, Niigata, Japan
- Department of Otolaryngology, Head and Neck Surgery, Graduate School of Medical and Dental Sciences, Niigata University, Asahi-machi, Chuo-ku, Niigata, Japan
- Tatsuya Yamagishi
- Department of Neurophysiology, Brain Research Institute, Niigata University, Asahi-machi, Chuo-ku, Niigata, Japan
- Department of Otolaryngology, Head and Neck Surgery, Graduate School of Medical and Dental Sciences, Niigata University, Asahi-machi, Chuo-ku, Niigata, Japan
- Hiroaki Tsukano
- Department of Neurophysiology, Brain Research Institute, Niigata University, Asahi-machi, Chuo-ku, Niigata, Japan
- Nana Nishio
- Department of Neurophysiology, Brain Research Institute, Niigata University, Asahi-machi, Chuo-ku, Niigata, Japan
- Ryuichi Hishida
- Department of Neurophysiology, Brain Research Institute, Niigata University, Asahi-machi, Chuo-ku, Niigata, Japan
- Kuniyuki Takahashi
- Department of Otolaryngology, Head and Neck Surgery, Graduate School of Medical and Dental Sciences, Niigata University, Asahi-machi, Chuo-ku, Niigata, Japan
- Arata Horii
- Department of Otolaryngology, Head and Neck Surgery, Graduate School of Medical and Dental Sciences, Niigata University, Asahi-machi, Chuo-ku, Niigata, Japan
- Katsuei Shibuki
- Department of Neurophysiology, Brain Research Institute, Niigata University, Asahi-machi, Chuo-ku, Niigata, Japan
11. Baram Y. Primal categories of neural polarity codes. Cogn Neurodyn 2019; 14:125-135. PMID: 32015771. DOI: 10.1007/s11571-019-09552-x.
Abstract
Neuronal membrane and synapse polarities have been attracting considerable interest in recent years. Certain functional roles for such polarities have been suggested, yet, they have largely remained a subject for speculation and debate. Here, we note that neural circuit polarity codes, defined as sets of polarity permutations, divide into primal-size circuit polarity subcodes, which, sharing certain connectivity attributes, are called categories. Two long-debated, seemingly competing paradigms of neuronal self-feedback, namely, axonal discharge and synaptic mediation, are shown to jointly define the distinction between these categories. However, as the second paradigm contains the first, it is mathematically sufficient for complete specification of all categories. The analysis of primal-size circuit polarity categories is found to reveal, explain and extend experimentally observed cortical information capacity values termed "magical numbers", associated with "working memory". While these have been previously argued on grounds of psychological experiments, here they are further supported on analytic grounds by the so-called Hebbian memory paradigm. The information dimensionality associated with these capacities is found to be a consequence of prime factorization of composite circuit polarity code sizes. Different categories of circuit polarity, identical in size and neuronal parameters, are shown to generate different firing rate dynamics.
Affiliation(s)
- Yoram Baram
- Computer Science Department, Technion - Israel Institute of Technology, 32000 Haifa, Israel
12. Saxena S, Cunningham JP. Towards the neural population doctrine. Curr Opin Neurobiol 2019; 55:103-111. DOI: 10.1016/j.conb.2019.02.002.
13. Beiran M, Ostojic S. Contrasting the effects of adaptation and synaptic filtering on the timescales of dynamics in recurrent networks. PLoS Comput Biol 2019; 15:e1006893. PMID: 30897092. PMCID: PMC6445477. DOI: 10.1371/journal.pcbi.1006893.
Abstract
Neural activity in awake behaving animals exhibits a vast range of timescales that can be several fold larger than the membrane time constant of individual neurons. Two types of mechanisms have been proposed to explain this conundrum. One possibility is that large timescales are generated by a network mechanism based on positive feedback, but this hypothesis requires fine-tuning of the strength or structure of the synaptic connections. A second possibility is that large timescales in the neural dynamics are inherited from large timescales of underlying biophysical processes, two prominent candidates being intrinsic adaptive ionic currents and synaptic transmission. How the timescales of adaptation or synaptic transmission influence the timescale of the network dynamics has however not been fully explored. To address this question, here we analyze large networks of randomly connected excitatory and inhibitory units with additional degrees of freedom that correspond to adaptation or synaptic filtering. We determine the fixed points of the systems, their stability to perturbations and the corresponding dynamical timescales. Furthermore, we apply dynamical mean field theory to study the temporal statistics of the activity in the fluctuating regime, and examine how the adaptation and synaptic timescales transfer from individual units to the whole population. Our overarching finding is that synaptic filtering and adaptation in single neurons have very different effects at the network level. Unexpectedly, the macroscopic network dynamics do not inherit the large timescale present in adaptive currents. In contrast, the timescales of network activity increase proportionally to the time constant of the synaptic filter. Altogether, our study demonstrates that the timescales of different biophysical processes have different effects on the network level, so that the slow processes within individual neurons do not necessarily induce slow activity in large recurrent neural networks.
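The two single-unit mechanisms contrasted in this paper can be written down directly: a rate unit with a slow adaptation variable versus a unit whose input passes through a slow synaptic filter. The sketch below simply integrates both toy systems driven by noise and reports an autocorrelation-based activity timescale; it does not reproduce the network-level mean-field analysis, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
dt, T = 0.1, 50000                      # 0.1 ms steps, 5 s of simulated time
tau, tau_slow = 10.0, 200.0             # fast membrane vs slow (adaptation / synaptic) constant, ms
noise = rng.standard_normal(T)

def ac_timescale(x, max_lag=5000):
    """Lag (in ms) at which the autocorrelation of x first drops below 1/e."""
    x = x - x.mean()
    var = np.dot(x, x) / len(x)
    for lag in range(1, max_lag):
        if np.dot(x[:-lag], x[lag:]) / ((len(x) - lag) * var) < np.exp(-1):
            return lag * dt
    return float("nan")

# (1) Adaptation: tau dx/dt = -x - g*a + input,   tau_slow da/dt = -a + x
x, a, xs = 0.0, 0.0, np.empty(T)
for t in range(T):
    x += dt * (-x - 2.0 * a + 3.0 * noise[t]) / tau
    a += dt * (-a + x) / tau_slow
    xs[t] = x
print(f"adaptation:         activity timescale ~ {ac_timescale(xs):.1f} ms")

# (2) Synaptic filtering: tau_slow ds/dt = -s + input,   tau dx/dt = -x + s
x, s, xs = 0.0, 0.0, np.empty(T)
for t in range(T):
    s += dt * (-s + 3.0 * noise[t]) / tau_slow
    x += dt * (-x + s) / tau
    xs[t] = x
print(f"synaptic filtering: activity timescale ~ {ac_timescale(xs):.1f} ms")
```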
Affiliation(s)
- Manuel Beiran
- Group for Neural Theory, Laboratoire de Neurosciences Cognitives Computationnelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
- Srdjan Ostojic
- Group for Neural Theory, Laboratoire de Neurosciences Cognitives Computationnelles, Département d'Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
14.
Abstract
We formulate the computational processes of perception in the framework of the principle of least action by postulating the theoretical action as a time integral of the variational free energy in the neurosciences. The free energy principle is accordingly rephrased, on autopoietic grounds, as follows: all viable organisms attempt to minimize their sensory uncertainty about an unpredictable environment over a temporal horizon. By taking the variation of informational action, we derive neural recognition dynamics (RD), which by construction reduces to the Bayesian filtering of external states from noisy sensory inputs. Consequently, we effectively cast the gradient-descent scheme of minimizing the free energy into Hamiltonian mechanics by addressing only the positions and momenta of the organisms' representations of the causal environment. To demonstrate the utility of our theory, we show how the RD may be implemented in a neuronally based biophysical model at a single-cell level and subsequently in a coarse-grained, hierarchical architecture of the brain. We also present numerical solutions to the RD for a model brain and analyze the perceptual trajectories around attractors in neural state space.
Affiliation(s)
- Chang Sub Kim
- Department of Physics, Chonnam National University, Gwangju 61186, Republic of Korea
15. Rutishauser U, Slotine JJ, Douglas RJ. Solving Constraint-Satisfaction Problems with Distributed Neocortical-Like Neuronal Networks. Neural Comput 2018; 30:1359-1393. PMID: 29566357. PMCID: PMC5930080. DOI: 10.1162/neco_a_01074.
Abstract
Finding actions that satisfy the constraints imposed by both external inputs and internal representations is central to decision making. We demonstrate that some important classes of constraint satisfaction problems (CSPs) can be solved by networks composed of homogeneous cooperative-competitive modules that have connectivity similar to motifs observed in the superficial layers of neocortex. The winner-take-all modules are sparsely coupled by programming neurons that embed the constraints onto the otherwise homogeneous modular computational substrate. We show rules that embed any instance of the CSPs planar four-color graph coloring, maximum independent set, and Sudoku on this substrate, and we provide mathematical proofs guaranteeing that these graph coloring problems will converge to a solution. The network is composed of nonsaturating linear threshold neurons. Their lack of right saturation allows the overall network to explore the problem space, driven through the unstable dynamics generated by recurrent excitation. The direction of exploration is steered by the constraint neurons. While many problems can be solved using only linear inhibitory constraints, network performance on hard problems benefits significantly when these negative constraints are implemented by nonlinear multiplicative inhibition. Overall, our results demonstrate the importance of instability rather than stability in network computation and offer insight into the computational role of dual inhibitory mechanisms in neural circuits.
Affiliation(s)
- Ueli Rutishauser
- Computation and Neural Systems, Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA 91125, U.S.A., and Cedars-Sinai Medical Center, Departments of Neurosurgery, Neurology and Biomedical Sciences, Los Angeles, CA 90048, U.S.A.
- Jean-Jacques Slotine
- Nonlinear Systems Laboratory, Department of Mechanical Engineering and Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, U.S.A.
- Rodney J Douglas
- Institute of Neuroinformatics, University and ETH Zurich, Zurich 8057, Switzerland
16. Mizrahi A, Grollier J, Querlioz D, Stiles M. Overcoming device unreliability with continuous learning in a population coding based computing system. J Appl Phys 2018; 124. PMID: 39450140. PMCID: PMC11500185. DOI: 10.1063/1.5042250.
Abstract
The brain, which uses redundancy and continuous learning to overcome the unreliability of its components, provides a promising path to building computing systems that are robust to the unreliability of their constituent nanodevices. In this work, we illustrate this path by a computing system based on population coding with magnetic tunnel junctions that implement both neurons and synaptic weights. We show that equipping such a system with continuous learning enables it to recover from the loss of neurons and makes it possible to use unreliable synaptic weights (i.e. low energy barrier magnetic memories). There is a tradeoff between power consumption and precision because low energy barrier memories consume less energy than high barrier ones. For a given precision, there is an optimal number of neurons and an optimal energy barrier for the weights that leads to minimum power consumption.
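The robustness argument made here, that a redundant population code plus continuous learning of the readout can absorb the loss of unreliable components, can be illustrated in software. The Gaussian tuning curves, the delta-rule readout, and the fraction of "dead" neurons below are arbitrary choices, and nothing in this sketch models magnetic tunnel junctions.

```python
import numpy as np

rng = np.random.default_rng(6)
n_neurons, sigma, lr = 100, 0.1, 0.2
centers = np.linspace(0, 1, n_neurons)
w = np.zeros(n_neurons)
alive = np.ones(n_neurons, dtype=bool)

def population(x):
    """Noisy, redundant Gaussian population code for a scalar x in [0, 1]."""
    r = np.exp(-(x - centers) ** 2 / (2 * sigma ** 2))
    return (r + 0.1 * rng.standard_normal(n_neurons)) * alive

def online_step(x):
    """Continuous (delta-rule) learning of a linear readout; returns current error."""
    global w
    r = population(x)
    y = w @ r
    w += lr * (x - y) * r / (r @ r + 1e-9)
    return abs(x - y)

for phase, n_steps in [("initial learning", 3000), ("after losing 30% of the neurons", 3000)]:
    if phase.startswith("after"):
        alive[rng.random(n_neurons) < 0.3] = False          # simulated device failure
    errs = [online_step(rng.random()) for _ in range(n_steps)]
    print(f"{phase}: mean |error| over the last 500 steps = {np.mean(errs[-500:]):.3f}")
```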
Affiliation(s)
- Alice Mizrahi
- National Institute of Standards and Technology, Gaithersburg, USA
- Maryland NanoCenter, University of Maryland, College Park, USA
- Julie Grollier
- Unité Mixte de Physique CNRS, Thales, Univ. Paris-Sud, Université Paris-Saclay, 91767, Palaiseau, France
- Damien Querlioz
- Centre de Nanosciences et de Nanotechnologies, Univ. Paris-Sud, CNRS, Université Paris-Saclay, 91405, Orsay, France
- M.D. Stiles
- National Institute of Standards and Technology, Gaithersburg, USA
17. Arandia-Romero I, Nogueira R, Mochol G, Moreno-Bote R. What can neuronal populations tell us about cognition? Curr Opin Neurobiol 2017; 46:48-57. PMID: 28806694. DOI: 10.1016/j.conb.2017.07.008.
Abstract
Nowadays, it is possible to record the activity of hundreds of cells at the same time in behaving animals. However, these data are often treated and analyzed as if they consisted of many independently recorded neurons. How can neuronal populations be uniquely used to learn about cognition? We describe recent work that shows that populations of simultaneously recorded neurons are fundamental to understand the basis of decision-making, including processes such as ongoing deliberations and decision confidence, which generally fall outside the reach of single-cell analysis. Thus, neuronal population data allow addressing novel questions, but they also come with so far unsolved challenges.
Affiliation(s)
- Iñigo Arandia-Romero
- Center for Brain and Cognition & Department of Information and Communications Technologies, University Pompeu Fabra, 08018 Barcelona, Spain
- Ramon Nogueira
- Center for Brain and Cognition & Department of Information and Communications Technologies, University Pompeu Fabra, 08018 Barcelona, Spain
- Gabriela Mochol
- Center for Brain and Cognition & Department of Information and Communications Technologies, University Pompeu Fabra, 08018 Barcelona, Spain
- Rubén Moreno-Bote
- Center for Brain and Cognition & Department of Information and Communications Technologies, University Pompeu Fabra, 08018 Barcelona, Spain; Serra Húnter Fellow Programme, 08018 Barcelona, Spain
18. van der Scheer HT, Doelman A. Synapse fits neuron: joint reduction by model inversion. Biol Cybern 2017; 111:309-334. PMID: 28689352. PMCID: PMC5506247. DOI: 10.1007/s00422-017-0722-1.
Abstract
In this paper, we introduce a novel simplification method for dealing with physical systems that can be thought to consist of two subsystems connected in series, such as a neuron and a synapse. The aim of our method is to help find a simple, yet convincing model of the full cascade-connected system, assuming that a satisfactory model of one of the subsystems, e.g., the neuron, is already given. Our method allows us to validate a candidate model of the full cascade against data at a finer scale. In our main example, we apply our method to part of the squid's giant fiber system. We first postulate a simple, hypothetical model of cell-to-cell signaling based on the squid's escape response. Then, given a FitzHugh-type neuron model, we derive the verifiable model of the squid giant synapse that this hypothesis implies. We show that the derived synapse model accurately reproduces synaptic recordings, hence lending support to the postulated, simple model of cell-to-cell signaling, which thus, in turn, can be used as a basic building block for network models.
Affiliation(s)
- H. T. van der Scheer
- Mathematical Institute, Leiden University, P.O. Box 9512, 2300 RA Leiden, The Netherlands
- A. Doelman
- Mathematical Institute, Leiden University, P.O. Box 9512, 2300 RA Leiden, The Netherlands
19. Papadimitriou C, White RL, Snyder LH. Ghosts in the Machine II: Neural Correlates of Memory Interference from the Previous Trial. Cereb Cortex 2017; 27:2513-2527. PMID: 27114176. PMCID: PMC6059123. DOI: 10.1093/cercor/bhw106.
Abstract
Previous memoranda interfere with working memory. For example, spatial memories are biased toward locations memorized on the previous trial. We predicted, based on attractor network models of memory, that activity in the frontal eye fields (FEFs) encoding a previous target location can persist into the subsequent trial and that this ghost will then bias the readout of the current target. Contrary to this prediction, we find that FEF memory representations appear biased away from (not toward) the previous target location. The behavioral and neural data can be reconciled by a model in which receptive fields of memory neurons converge toward remembered locations, much as receptive fields converge toward attended locations. Convergence increases the resources available to encode the relevant memoranda and decreases overall error in the network, but the residual convergence from the previous trial can give rise to an attractive behavioral bias on the next trial.
Affiliation(s)
- Charalampos Papadimitriou
- Department of Anatomy and Neurobiology, Washington University in St. Louis, St. Louis, MO 63116, USA
- Robert L. White
- Department of Psychology, University of California Berkeley, Berkeley, CA 94720, USA
- Lawrence H. Snyder
- Department of Anatomy and Neurobiology, Washington University in St. Louis, St. Louis, MO 63116, USA
20. Baram Y. Asynchronous Segregation of Cortical Circuits and Their Function: A Life-long Role for Synaptic Death. AIMS Neurosci 2017. DOI: 10.3934/neuroscience.2017.2.87.
21. Baram Y. Developmental metaplasticity in neural circuit codes of firing and structure. Neural Netw 2016; 85:182-196. PMID: 27890605. DOI: 10.1016/j.neunet.2016.09.007.
Abstract
Firing-rate dynamics have been hypothesized to mediate inter-neural information transfer in the brain. While the Hebbian paradigm, relating learning and memory to firing activity, has put synaptic efficacy variation at the center of cortical plasticity, we suggest that the external expression of plasticity by changes in the firing-rate dynamics represents a more general notion of plasticity. Hypothesizing that time constants of plasticity and firing dynamics increase with age, and employing the filtering property of the neuron, we obtain the elementary code of global attractors associated with the firing-rate dynamics in each developmental stage. We define a neural circuit connectivity code as an indivisible set of circuit structures generated by membrane and synapse activation and silencing. Synchronous firing patterns under parameter uniformity, and asynchronous circuit firing are shown to be driven, respectively, by membrane and synapse silencing and reactivation, and maintained by the neuronal filtering property. Analytic, graphical and simulation representation of the discrete iteration maps and of the global attractor codes of neural firing rate are found to be consistent with previous empirical neurobiological findings, which have lacked, however, a specific correspondence between firing modes, time constants, circuit connectivity and cortical developmental stages.
Affiliation(s)
- Yoram Baram
- Computer Science Department, Technion - Israel Institute of Technology, Haifa 32000, Israel.
22. Luo X, Gee S, Sohal V, Small D. A point-process response model for spike trains from single neurons in neural circuits under optogenetic stimulation. Stat Med 2016; 35:455-74. PMID: 26411923. PMCID: PMC4713323. DOI: 10.1002/sim.6742.
Abstract
Optogenetics is a new tool to study neuronal circuits that have been genetically modified to allow stimulation by flashes of light. We study recordings from single neurons within neural circuits under optogenetic stimulation. The data from these experiments present a statistical challenge of modeling a high-frequency point process (neuronal spikes) while the input is another high-frequency point process (light flashes). We further develop a generalized linear model approach to model the relationships between two point processes, employing additive point-process response functions. The resulting model, point-process responses for optogenetics (PRO), provides explicit nonlinear transformations to link the input point process with the output one. Such response functions may provide important and interpretable scientific insights into the properties of the biophysical process that governs neural spiking in response to optogenetic stimulation. We validate and compare the PRO model using a real dataset and simulations, and our model yields a superior area-under-the-curve value as high as 93% for predicting every future spike. For our experiment on the recurrent layer V circuit in the prefrontal cortex, the PRO model provides evidence that neurons integrate their inputs in a sophisticated manner. Another use of the model is that it enables understanding how neural circuits are altered under various disease conditions and/or experimental conditions by comparing the PRO parameters.
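The modelling framework described here, a point-process GLM in which a neuron's spike count is Poisson with a rate driven by the filtered light-pulse input, can be sketched with an exponential-basis design matrix and a hand-rolled Poisson regression. The basis, bin size, and Newton-step fitting loop are illustrative stand-ins for the PRO model's additive response functions and estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(7)
T, dt = 20000, 0.001                              # 20 s in 1 ms bins
light = (rng.random(T) < 0.01).astype(float)      # optogenetic pulse train (input point process)

# Exponential basis functions applied to the input point process.
taus = [0.005, 0.03, 0.15]                        # seconds
t_kernel = np.arange(0, 0.5, dt)
X = np.column_stack([np.convolve(light, np.exp(-t_kernel / tau))[:T] for tau in taus])
X = np.column_stack([np.ones(T), X])              # add an intercept column

# Simulate a "true" neuron from this model class, then refit it by Poisson regression.
beta_true = np.array([-4.0, 2.0, 1.0, -0.5])
spikes = rng.poisson(np.exp(X @ beta_true))

beta = np.zeros(4)
for _ in range(25):                               # Newton steps on the Poisson log-likelihood
    rate = np.exp(X @ beta)
    grad = X.T @ (spikes - rate)
    hess = X.T @ (X * rate[:, None])
    beta += np.linalg.solve(hess, grad)

print("true coefficients:     ", beta_true)
print("estimated coefficients:", np.round(beta, 2))
```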
Affiliation(s)
- X. Luo
- Department of Biostatistics, Brown University, Providence, Rhode Island 02912, USA
- S. Gee
- Department of Psychiatry and Neuroscience Graduate Program, University of California, San Francisco, California 94143, USA
- V. Sohal
- Department of Psychiatry and Neuroscience Graduate Program, University of California, San Francisco, California 94143, USA
- D. Small
- Department of Statistics, The Wharton School, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
23. Carruthers IM, Laplagne DA, Jaegle A, Briguglio JJ, Mwilambwe-Tshilobo L, Natan RG, Geffen MN. Emergence of invariant representation of vocalizations in the auditory cortex. J Neurophysiol 2015; 114:2726-40. PMID: 26311178. DOI: 10.1152/jn.00095.2015.
Abstract
An essential task of the auditory system is to discriminate between different communication signals, such as vocalizations. In everyday acoustic environments, the auditory system needs to be capable of performing the discrimination under different acoustic distortions of vocalizations. To achieve this, the auditory system is thought to build a representation of vocalizations that is invariant to their basic acoustic transformations. The mechanism by which neuronal populations create such an invariant representation within the auditory cortex is only beginning to be understood. We recorded the responses of populations of neurons in the primary and nonprimary auditory cortex of rats to original and acoustically distorted vocalizations. We found that populations of neurons in the nonprimary auditory cortex exhibited greater invariance in encoding vocalizations over acoustic transformations than neuronal populations in the primary auditory cortex. These findings are consistent with the hypothesis that invariant representations are created gradually through hierarchical transformation within the auditory pathway.
Affiliation(s)
- Isaac M Carruthers
- Department of Otorhinolaryngology and Head and Neck Surgery, University of Pennsylvania, Philadelphia, Pennsylvania; Graduate Group in Physics, University of Pennsylvania, Philadelphia, Pennsylvania
- Diego A Laplagne
- Brain Institute, Federal University of Rio Grande do Norte, Natal, Brazil
- Andrew Jaegle
- Department of Otorhinolaryngology and Head and Neck Surgery, University of Pennsylvania, Philadelphia, Pennsylvania; Graduate Group in Neuroscience, University of Pennsylvania, Philadelphia, Pennsylvania
- John J Briguglio
- Department of Otorhinolaryngology and Head and Neck Surgery, University of Pennsylvania, Philadelphia, Pennsylvania; Graduate Group in Physics, University of Pennsylvania, Philadelphia, Pennsylvania
- Laetitia Mwilambwe-Tshilobo
- Department of Otorhinolaryngology and Head and Neck Surgery, University of Pennsylvania, Philadelphia, Pennsylvania
- Ryan G Natan
- Department of Otorhinolaryngology and Head and Neck Surgery, University of Pennsylvania, Philadelphia, Pennsylvania; Brain Institute, Federal University of Rio Grande do Norte, Natal, Brazil
- Maria N Geffen
- Department of Otorhinolaryngology and Head and Neck Surgery, University of Pennsylvania, Philadelphia, Pennsylvania; Graduate Group in Physics, University of Pennsylvania, Philadelphia, Pennsylvania; Graduate Group in Neuroscience, University of Pennsylvania, Philadelphia, Pennsylvania
24. Rutishauser U, Slotine JJ, Douglas R. Computation in dynamically bounded asymmetric systems. PLoS Comput Biol 2015; 11:e1004039. PMID: 25617645. PMCID: PMC4305289. DOI: 10.1371/journal.pcbi.1004039.
Abstract
Previous explanations of computations performed by recurrent networks have focused on symmetrically connected saturating neurons and their convergence toward attractors. Here we analyze the behavior of asymmetrically connected networks of linear threshold neurons, whose positive response is unbounded. We show that, for a wide range of parameters, this asymmetry brings interesting and computationally useful dynamical properties. When driven by input, the network explores potential solutions through highly unstable 'expansion' dynamics. This expansion is steered and constrained by negative divergence of the dynamics, which ensures that the dimensionality of the solution space continues to reduce until an acceptable solution manifold is reached. Then the system contracts stably on this manifold towards its final solution trajectory. The unstable positive feedback and cross inhibition that underlie expansion and divergence are common motifs in molecular and neuronal networks. Therefore we propose that very simple organizational constraints that combine these motifs can lead to spontaneous computation and so to the spontaneous modification of entropy that is characteristic of living systems.
Affiliation(s)
- Ueli Rutishauser
- Computation and Neural Systems, California Institute of Technology, Pasadena, California, United States of America
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, California, United States of America
- Departments of Neurosurgery, Neurology and Biomedical Sciences, Cedars-Sinai Medical Center, Los Angeles, California, United States of America
- Jean-Jacques Slotine
- Nonlinear Systems Laboratory, Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Rodney Douglas
- Institute of Neuroinformatics, University and ETH Zurich, Zurich, Switzerland
25. Varoquaux G, Thirion B. How machine learning is shaping cognitive neuroimaging. Gigascience 2014; 3:28. PMID: 25405022. PMCID: PMC4234525. DOI: 10.1186/2047-217x-3-28.
Abstract
Functional brain images are rich and noisy data that can capture indirect signatures of neural activity underlying cognition in a given experimental setting. Can data mining leverage them to build models of cognition? Only if it is applied to well-posed questions, crafted to reveal cognitive mechanisms. Here we review how predictive models have been used on neuroimaging data to ask new questions, i.e., to uncover new aspects of cognitive organization. We also give a statistical learning perspective on this progress and on the remaining gaping holes.
Affiliation(s)
- Gael Varoquaux
- Parietal, INRIA, NeuroSpin, bat 145 CEA Saclay, 91191 Gif sur Yvette, France
- Bertrand Thirion
- Parietal, INRIA, NeuroSpin, bat 145 CEA Saclay, 91191 Gif sur Yvette, France
26. Visual space is compressed in prefrontal cortex before eye movements. Nature 2014; 507:504-7. PMID: 24670771. PMCID: PMC4064801. DOI: 10.1038/nature13149.
Abstract
We experience the visual world through a series of saccadic eye movements, each one shifting our gaze to bring objects of interest to the fovea for further processing. Although such movements lead to frequent and substantial displacements of the retinal image, these displacements go unnoticed. It is widely assumed that a primary mechanism underlying this apparent stability is an anticipatory shifting of visual receptive fields (RFs) from their presaccadic to their postsaccadic locations before movement onset. Evidence of this predictive 'remapping' of RFs has been particularly apparent within brain structures involved in gaze control. However, critically absent among that evidence are detailed measurements of visual RFs before movement onset. Here we show that during saccade preparation, rather than remap, RFs of neurons in a prefrontal gaze control area massively converge towards the saccadic target. We mapped the visual RFs of prefrontal neurons during stable fixation and immediately before the onset of eye movements, using multi-electrode recordings in monkeys. Following movements from an initial fixation point to a target, RFs remained stationary in retinocentric space. However, in the period immediately before movement onset, RFs shifted by as much as 18 degrees of visual angle, and converged towards the target location. This convergence resulted in a threefold increase in the proportion of RFs responding to stimuli near the target region. In addition, like in human observers, the population of prefrontal neurons grossly mislocalized presaccadic stimuli as being closer to the target. Our results show that RF shifts do not predict the retinal displacements due to saccades, but instead reflect the overriding perception of target space during eye movements.
Collapse
|
27
|
Earland K, Lee M, Shaw P, Law J. Overlapping structures in sensory-motor mappings. PLoS One 2014; 9:e84240. [PMID: 24392118 PMCID: PMC3879306 DOI: 10.1371/journal.pone.0084240] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2013] [Accepted: 11/13/2013] [Indexed: 11/18/2022] Open
Abstract
This paper examines a biologically-inspired representation technique designed for the support of sensory-motor learning in developmental robotics. An interesting feature of the many topographic neural sheets in the brain is that closely packed receptive fields must overlap in order to fully cover a spatial region. This raises interesting scientific questions with engineering implications: e.g., is overlap detrimental? Does it have any benefits? This paper examines the effects and properties of overlap between elements arranged in arrays or maps. In particular, we investigate how overlap affects the representation and transmission of spatial location information on and between topographic maps. Through a series of experiments we determine the conditions under which overlap offers advantages and identify useful ranges of overlap for building mappings in cognitive robotic systems. Our motivation is to understand the phenomena of overlap in order to provide guidance for application in sensory-motor learning robots.
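To make the overlap question concrete, here is a small sketch (not the authors' framework) that encodes a scalar location with an array of overlapping Gaussian receptive fields and decodes it with a weighted average of the field centres; sweeping the width parameter gives a feel for how the amount of overlap affects reconstruction error. All parameters are illustrative.

```python
import numpy as np

def encode(x, centers, width):
    """Responses of overlapping Gaussian receptive fields to location x."""
    return np.exp(-0.5 * ((x - centers) / width) ** 2)

def decode(r, centers):
    """Weighted-average (centre-of-mass) readout of location from responses."""
    return np.sum(r * centers) / np.sum(r)

centers = np.linspace(0.0, 1.0, 15)            # 15 fields tiling the interval
xs = np.linspace(0.05, 0.95, 200)              # test locations
for width in (0.02, 0.05, 0.1, 0.2):           # little overlap -> heavy overlap
    err = [abs(decode(encode(x, centers, width), centers) - x) for x in xs]
    print(f"field width {width:.2f}: mean decoding error {np.mean(err):.4f}")
```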
Collapse
Affiliation(s)
- Kevin Earland
- Department of Computer Science, Aberystwyth University, Wales, United Kingdom
| | - Mark Lee
- Department of Computer Science, Aberystwyth University, Wales, United Kingdom
| | - Patricia Shaw
- Department of Computer Science, Aberystwyth University, Wales, United Kingdom
| | - James Law
- Department of Computer Science, Aberystwyth University, Wales, United Kingdom
| |
Collapse
|
28
|
Lehky SR, Sereno ME, Sereno AB. Population coding and the labeling problem: extrinsic versus intrinsic representations. Neural Comput 2013; 25:2235-64. [PMID: 23777516 DOI: 10.1162/neco_a_00486] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Current population coding methods, including weighted averaging and Bayesian estimation, are based on extrinsic representations. These require that neurons be labeled with response parameters, such as tuning curve peaks or noise distributions, which are tied to some external, world-based metric scale. Firing rates alone, without this external labeling, are insufficient to represent a variable. However, the extrinsic approach does not explain how such neural labeling is implemented. A radically different and perhaps more physiological approach is based on intrinsic representations, which have access only to firing rates. Because neurons are unlabeled, intrinsic coding represents relative, rather than absolute, values of a variable. We show that intrinsic coding has representational advantages, including invariance, categorization, and discrimination, and in certain situations it may also recover absolute stimulus values.
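As a toy illustration of the distinction (not the authors' formalism), the sketch below first decodes a stimulus extrinsically, using the labelled tuning-curve peaks, and then shows that the unlabelled firing-rate vectors alone still preserve the relative similarity structure of the stimuli, which is the kind of information an intrinsic code can carry. Tuning curves, noise level, and stimulus values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
peaks = np.linspace(0.0, 1.0, 20)                  # tuning-curve peaks (the external labels)

def responses(s, width=0.15):
    """Noisy Gaussian tuning-curve responses to stimulus value s."""
    return np.exp(-0.5 * ((s - peaks) / width) ** 2) + 0.05 * rng.normal(size=peaks.size)

# Extrinsic readout: requires the external labels (the peaks).
s_true = 0.63
r = np.clip(responses(s_true), 0, None)
s_hat = np.sum(r * peaks) / np.sum(r)
print(f"extrinsic estimate of s = {s_true}: {s_hat:.3f}")

# Intrinsic view: only firing-rate vectors, no labels. Relative structure
# (which stimuli are similar to which) is still recoverable from response similarity.
stimuli = np.linspace(0.1, 0.9, 9)
R = np.array([responses(s) for s in stimuli])
sim = np.corrcoef(R)                               # stimulus-by-stimulus similarity
print("similarity of s = 0.1 to the other stimuli (should fall off with distance):")
print(np.round(sim[0], 2))
```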
Collapse
Affiliation(s)
- Sidney R Lehky
- Computational Neurobiology Laboratory, Salk Institute, La Jolla, CA 92037, USA.
| | | | | |
Collapse
|
29
|
Neftci EO, Toth B, Indiveri G, Abarbanel HDI. Dynamic State and Parameter Estimation Applied to Neuromorphic Systems. Neural Comput 2012; 24:1669-94. [DOI: 10.1162/neco_a_00293] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Neuroscientists often propose detailed computational models to probe the properties of the neural systems they study. With the advent of neuromorphic engineering, there is an increasing number of hardware electronic analogs of biological neural systems being proposed as well. However, for both biological and hardware systems, it is often difficult to estimate the parameters of the model so that they are meaningful to the experimental system under study, especially when these models involve a large number of states and parameters that cannot be simultaneously measured. We have developed a procedure to solve this problem in the context of interacting neural populations using a recently developed dynamic state and parameter estimation (DSPE) technique. This technique uses synchronization as a tool for dynamically coupling experimentally measured data to its corresponding model to determine its parameters and internal state variables. Typically experimental data are obtained from the biological neural system and the model is simulated in software; here we show that this technique is also efficient in validating proposed network models for neuromorphic spike-based very large-scale integration (VLSI) chips and that it is able to systematically extract network parameters such as synaptic weights, time constants, and other variables that are not accessible by direct observation. Our results suggest that this method can become a very useful tool for model-based identification and configuration of neuromorphic multichip VLSI systems.
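The following is a deliberately simplified caricature of synchronization-based parameter estimation rather than the authors' DSPE machinery: a model leaky integrator with an unknown time constant is coupled to a 'measured' trace, and the candidate value that lets the coupled model track the data with the smallest residual is taken as the estimate. All signals and parameters are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, T = 1e-3, 2.0
t = np.arange(0, T, dt)
I = (np.sin(2 * np.pi * 3 * t) > 0).astype(float)        # shared square-wave drive

def leaky(tau, coupling=0.0, data=None):
    """Euler-integrated leaky integrator, optionally coupled to a data trace."""
    x = np.zeros_like(t)
    for k in range(1, len(t)):
        dx = (-x[k - 1] + I[k - 1]) / tau
        if data is not None:
            dx += coupling * (data[k - 1] - x[k - 1])      # synchronizing term
        x[k] = x[k - 1] + dt * dx
    return x

tau_true = 0.05
data = leaky(tau_true) + 0.01 * rng.normal(size=t.size)    # "experimental" trace

candidates = np.linspace(0.01, 0.15, 15)
errors = [np.mean((leaky(tau, coupling=5.0, data=data) - data) ** 2)
          for tau in candidates]
print(f"estimated tau = {candidates[int(np.argmin(errors))]:.3f} s (true {tau_true} s)")
```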
Collapse
Affiliation(s)
- Emre Ozgur Neftci
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich CH-8057, Switzerland
| | - Bryan Toth
- Department of Physics and Center for Theoretical Biological Physics, Scripps Institution of Oceanography, University of California, San Diego, La Jolla, CA 92093-0402, USA
| | - Giacomo Indiveri
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich CH-8057, Switzerland
| | - Henry D. I. Abarbanel
- Department of Physics and Marine Physical Laboratory, Scripps Institution of Oceanography, University of California, San Diego, La Jolla, CA 92093-0402, USA
| |
Collapse
|
30
|
|
31
|
Quian Quiroga R, Kreiman G. Measuring sparseness in the brain: comment on Bowers (2009). Psychol Rev 2010; 117:291-7. [PMID: 20063978 DOI: 10.1037/a0016917] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Bowers challenged the common view in favor of distributed representations in psychological modeling and the main arguments given against localist and grandmother cell coding schemes. He revisited the results of several single-cell studies, arguing that they do not support distributed representations. We praise the contribution of Bowers (2009) for joining evidence from psychological modeling and neurophysiological recordings, but we disagree with several of his claims. In this comment, we argue that distinctions between distributed, localist, and grandmother cell coding can be troublesome with real data. Moreover, these distinctions seem to lie along the same continuum, and we argue that it may be sensible to characterize coding schemes with a sparseness measure. We further argue that there may not be a unique coding scheme implemented in all brain areas and for all possible functions. In particular, current evidence suggests that the brain may use distributed codes in primary sensory areas and sparser and invariant representations in higher areas.
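The comment argues for characterizing codes by a sparseness measure rather than by discrete labels; one widely used choice (an assumption here, since the comment does not commit to a formula) is the Treves-Rolls measure, sketched below.

```python
import numpy as np

def treves_rolls_sparseness(rates):
    """a = (mean r)^2 / mean(r^2); equals 1/N for a single active unit, 1 for uniform rates."""
    r = np.asarray(rates, dtype=float)
    return (r.mean() ** 2) / np.mean(r ** 2)

dense = np.ones(100)                       # every unit equally active
sparse = np.zeros(100)
sparse[3] = 5.0                            # a single strongly active unit
print(treves_rolls_sparseness(dense))      # -> 1.0
print(treves_rolls_sparseness(sparse))     # -> 0.01
```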
Collapse
|
32
|
Eliasmith C, Thagard P. Integrating structure and meaning: a distributed model of analogical mapping. Cogn Sci 2010. [DOI: 10.1207/s15516709cog2502_3] [Citation(s) in RCA: 59] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/31/2022]
|
33
|
Rutishauser U, Douglas RJ. State-dependent computation using coupled recurrent networks. Neural Comput 2009; 21:478-509. [PMID: 19431267 DOI: 10.1162/neco.2008.03-08-734] [Citation(s) in RCA: 41] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Although conditional branching between possible behavioral states is a hallmark of intelligent behavior, very little is known about the neuronal mechanisms that support this processing. In a step toward solving this problem, we demonstrate by theoretical analysis and simulation how networks of richly interconnected neurons, such as those observed in the superficial layers of the neocortex, can embed reliable, robust finite state machines. We show how a multistable neuronal network containing a number of states can be created very simply by coupling two recurrent networks whose synaptic weights have been configured for soft winner-take-all (sWTA) performance. These two sWTAs have simple, homogeneous, locally recurrent connectivity except for a small fraction of recurrent cross-connections between them, which are used to embed the required states. This coupling between the maps allows the network to continue to express the current state even after the input that elicited that state is withdrawn. In addition, a small number of transition neurons implement the necessary input-driven transitions between the embedded states. We provide simple rules to systematically design and construct neuronal state machines of this kind. The significance of our finding is that it offers a method whereby the cortex could construct networks supporting a broad range of sophisticated processing by applying only small specializations to the same generic neuronal circuit.
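The paper's construction couples two soft winner-take-all networks; the toy sketch below is only an abstraction of the key ingredient, using a single softmax-style soft-WTA map with self-excitation to show that a state selected by a transient input persists after the input is withdrawn and can later be overwritten by a transition input. Gains and input amplitudes are arbitrary assumptions, not the authors' circuit parameters.

```python
import numpy as np

def softwta_step(x, ext, gain=4.0, self_exc=1.5):
    """One discrete-time update of a softmax-style soft winner-take-all map."""
    u = self_exc * x + ext
    e = np.exp(gain * (u - u.max()))
    return e / e.sum()

n_states = 3
x = np.full(n_states, 1.0 / n_states)            # start undecided
pulses = {5: np.array([2.0, 0.0, 0.0]),          # transient input selects state 0
          40: np.array([0.0, 0.0, 2.0])}         # later transition input drives state 2

for step in range(60):
    x = softwta_step(x, pulses.get(step, np.zeros(n_states)))
    if step in (10, 30, 50):
        print(f"step {step}: current state = {int(np.argmax(x))}, activity = {np.round(x, 3)}")
```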
Collapse
Affiliation(s)
- Ueli Rutishauser
- Computation and Neural Systems, California Institute of Technology, Pasadena, CA 91125, USA.
| | | |
Collapse
|
34
|
Extracting information from neuronal populations: information theory and decoding approaches. Nat Rev Neurosci 2009; 10:173-85. [PMID: 19229240 DOI: 10.1038/nrn2578] [Citation(s) in RCA: 492] [Impact Index Per Article: 30.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
35
|
Abstract
We have used a combination of theory and experiment to assess how information is represented in a realistic cortical population response, examining how motion direction and timing are encoded in groups of neurons in cortical area MT. Combining data from several single-unit experiments, we constructed model population responses in small time windows and represented the response in each window as a binary vector of 1s or 0s signifying spikes or no spikes from each cell. We found that patterns of spikes and silence across a population of nominally redundant neurons can carry up to twice as much information about visual motion as does the population spike count, even when the neurons respond independently to their sensory inputs. This extra information arises by virtue of the broad diversity of firing rate dynamics found in even very similarly tuned groups of MT neurons. Additionally, specific patterns of spiking and silence can carry more information than the sum of their parts (synergy), opening up the possibility for combinatorial coding in cortex. These results also held for populations in which we imposed levels of nonindependence (correlation) comparable to those found in cortical recordings. Our findings suggest that combinatorial codes are advantageous for representing stimulus information on short time scales, even when neurons have no complicated, stimulus-dependent correlation structure.
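As a worked toy version of this comparison (synthetic probabilities, not the MT data), the sketch below builds a two-bin binary response whose expected total spike count is identical for two stimuli, yet whose spike/silence pattern distinguishes them, so the pattern code carries information that the count code cannot.

```python
import numpy as np
from itertools import product

# Two binary response bins (think of two time bins of one neuron, or two similarly
# tuned neurons): both stimuli give the same expected total count, but stimulus A
# favours the first bin and stimulus B the second.
p_spike = {"A": np.array([0.8, 0.2]),     # P(spike) in bin 1, bin 2
           "B": np.array([0.2, 0.8])}

def pattern_probs(p):
    """Probability of each binary spike/silence pattern across the two bins."""
    return {w: float(np.prod(np.where(w, p, 1 - p))) for w in product((0, 1), repeat=2)}

def mutual_information(cond_dists, p_stim=0.5):
    """I(S;R) in bits for two equiprobable stimuli, given P(R|S) as dicts."""
    responses = list(cond_dists["A"].keys())
    p_r = {r: p_stim * cond_dists["A"][r] + p_stim * cond_dists["B"][r] for r in responses}
    info = 0.0
    for s in ("A", "B"):
        for r in responses:
            if cond_dists[s][r] > 0:
                info += p_stim * cond_dists[s][r] * np.log2(cond_dists[s][r] / p_r[r])
    return info

patterns = {s: pattern_probs(p) for s, p in p_spike.items()}
counts = {s: {k: sum(v for w, v in patterns[s].items() if sum(w) == k) for k in range(3)}
          for s in ("A", "B")}
print(f"information in spike count:   {mutual_information(counts):.3f} bits")
print(f"information in spike pattern: {mutual_information(patterns):.3f} bits")
```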
Collapse
|
36
|
van Hemmen JL, Schwartz AB. Population vector code: a geometric universal as actuator. Biol Cybern 2008; 98:509-518. [PMID: 18491163 DOI: 10.1007/s00422-008-0215-3] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/17/2008] [Accepted: 01/30/2008] [Indexed: 05/26/2023]
Abstract
The population vector code relates directional tuning of single cells and global, directional motion incited by an assembly of neurons. In this paper, three things are done. First, we analyze the population vector code as a purely geometric construct, focusing attention on its universality. Second, we generalize the algorithm on the basis of its geometrical realization so that the same construct that responds to sensation can function as an actuator for behavioral output. Third, we suggest at least a partial answer to the question of what many maps, neuronal representations of the outside sensory world in space-time, are good for: by encoding vectorial input, they enable a direct realization of the population vector code.
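For readers who want the construct in executable form, here is a minimal sketch of the standard population vector readout, with cosine-tuned units whose preferred directions are summed using baseline-subtracted firing rates as weights; the tuning model and parameters are generic assumptions rather than anything taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n_cells = 64
pref = rng.uniform(0, 2 * np.pi, n_cells)            # preferred directions
pref_vec = np.column_stack([np.cos(pref), np.sin(pref)])

def firing_rates(theta, baseline=10.0, gain=8.0):
    """Cosine tuning with Poisson variability (a generic assumption)."""
    mean = baseline + gain * np.cos(theta - pref)
    return rng.poisson(mean)

def population_vector(rates, baseline=10.0):
    """Sum of preferred-direction vectors weighted by baseline-subtracted rates."""
    w = rates - baseline
    v = (w[:, None] * pref_vec).sum(axis=0)
    return np.arctan2(v[1], v[0])

theta = np.deg2rad(130.0)
est = population_vector(firing_rates(theta))
print(f"true direction 130 deg, decoded {np.rad2deg(est) % 360:.1f} deg")
```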
Collapse
Affiliation(s)
- J Leo van Hemmen
- Physik Department, TU München, 85747, Garching bei München, Germany.
| | | |
Collapse
|
37
|
Hamker FH, Zirnsak M, Calow D, Lappe M. The peri-saccadic perception of objects and space. PLoS Comput Biol 2008; 4:e31. [PMID: 18282086 PMCID: PMC2242822 DOI: 10.1371/journal.pcbi.0040031] [Citation(s) in RCA: 76] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2007] [Accepted: 12/20/2007] [Indexed: 11/26/2022] Open
Abstract
Eye movements affect object localization and object recognition. Around saccade onset, briefly flashed stimuli appear compressed towards the saccade target, receptive fields dynamically change position, and the recognition of objects near the saccade target is improved. These effects have been attributed to different mechanisms. We provide a unifying account of peri-saccadic perception explaining all three phenomena by a quantitative computational approach simulating cortical cell responses on the population level. Contrary to the common view of spatial attention as a spotlight, our model suggests that oculomotor feedback alters the receptive field structure in multiple visual areas at an intermediate level of the cortical hierarchy to dynamically recruit cells for processing a relevant part of the visual field. The compression of visual space occurs at the expense of this locally enhanced processing capacity. Early in the vertebrate lineage, fast movements of the eye, called saccades, developed. The resulting gain in spatial selectivity came at the cost of having to process a sequence of different views. Recent experiments showed that the brain uses its knowledge about the upcoming eye movement to guide perception prior to the next saccade. They revealed an improved recognition of objects at the saccade target, a change of receptive fields, and a mislocalization of briefly flashed stimuli towards the saccade target. Here we offer a novel, unifying explanation for these phenomena and link them to a common neural mechanism. Our model predicts that the brain uses oculomotor feedback to transiently increase the processing capacity around the saccade target by changing the receptive field structure in visual areas and thus links the pre-saccadic scene to the post-saccadic one. A briefly flashed stimulus probes this change in the receptive field structure and demonstrates a close interaction of object and spatial perception.
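A heavily simplified caricature of the mechanism described above (not the authors' model): multiplying the gain of position-tuned units by a field centred on the saccade target and reading a flashed probe out with a population average shifts the decoded positions toward the target, i.e., a compression of visual space. All receptive-field and gain parameters are assumptions.

```python
import numpy as np

centers = np.linspace(-30.0, 30.0, 121)                  # RF centres (deg)
target = 10.0                                            # saccade target (deg)

def population_response(probe, rf_width=4.0, attn_width=8.0, attn_gain=3.0):
    """Gaussian RF responses to a flashed probe, scaled by a target-centred gain field."""
    r = np.exp(-0.5 * ((probe - centers) / rf_width) ** 2)
    gain = 1.0 + attn_gain * np.exp(-0.5 * ((centers - target) / attn_width) ** 2)
    return r * gain

def decode(r):
    """Population-average (centre-of-mass) position readout."""
    return np.sum(r * centers) / np.sum(r)

for probe in (-15.0, 0.0, 5.0, 20.0):
    perceived = decode(population_response(probe))
    print(f"flashed at {probe:+6.1f} deg -> perceived at {perceived:+6.1f} deg")
```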
Collapse
Affiliation(s)
- Fred H Hamker
- Institute of Psychology, Westfälische Wilhelms-Universität Münster, Münster, Germany.
| | | | | | | |
Collapse
|
38
|
Quiroga RQ, Kreiman G, Koch C, Fried I. Sparse but not 'grandmother-cell' coding in the medial temporal lobe. Trends Cogn Sci 2008; 12:87-91. [PMID: 18262826 DOI: 10.1016/j.tics.2007.12.003] [Citation(s) in RCA: 163] [Impact Index Per Article: 9.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2007] [Revised: 12/13/2007] [Accepted: 12/17/2007] [Indexed: 11/25/2022]
Abstract
Although a large number of neuropsychological and imaging studies have demonstrated that the medial temporal lobe (MTL) plays an important role in human memory, there are few data regarding the activity of neurons involved in this process. The MTL receives massive inputs from visual cortical areas, and evidence over the last decade has consistently shown that MTL neurons respond selectively to complex visual stimuli. Here, we focus on how the activity patterns of these cells might reflect the transformation of visual percepts into long-term memories. Given the very sparse and abstract representation of visual information by these neurons, they could in principle be considered as 'grandmother cells'. However, we give several arguments that make such an extreme interpretation unlikely.
Collapse
Affiliation(s)
- R Quian Quiroga
- Department of Engineering, University of Leicester, LE1 7RH, Leicester, UK.
| | | | | | | |
Collapse
|
39
|
Quiroga RQ, Reddy L, Koch C, Fried I. Decoding Visual Inputs From Multiple Neurons in the Human Temporal Lobe. J Neurophysiol 2007; 98:1997-2007. [PMID: 17671106 DOI: 10.1152/jn.00125.2007] [Citation(s) in RCA: 72] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
We investigated the representation of visual inputs by multiple simultaneously recorded single neurons in the human medial temporal lobe, using their firing rates to infer which images were shown to subjects. The selectivity of these neurons was quantified with a novel measure. About four spikes per neuron, triggered between 300 and 600 ms after image onset in a handful of units (7.8 on average), predicted the identity of images far above chance. Decoding performance increased linearly with the number of units considered, peaked between 400 and 500 ms, did not improve when considering correlations among simultaneously recorded units, and generalized to very different images. The feasibility of decoding sensory information from human extracellular recordings has implications for the development of brain–machine interfaces.
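To convey the flavour of this decoding analysis with entirely synthetic firing rates rather than the human recordings, the sketch below applies a nearest-mean readout to Poisson responses of sparsely selective units and shows accuracy growing as more units are included. Tuning values and trial counts are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n_images, n_units, n_trials = 8, 12, 40

# Sparse, selective tuning: each unit fires strongly for one preferred image
# (hypothetical mean spike counts per response window).
base, peak = 0.5, 4.0
tuning = np.full((n_units, n_images), base)
tuning[np.arange(n_units), rng.integers(0, n_images, n_units)] = peak

def simulate(image):
    """Poisson spike counts of all units for one presentation of `image`."""
    return rng.poisson(tuning[:, image])

def decode(rates, units):
    """Predict the image whose mean tuning best matches the observed rates (L1)."""
    d = np.abs(tuning[units] - rates[units, None]).sum(axis=0)
    return int(np.argmin(d))

for n in (2, 4, 8, 12):
    units = np.arange(n)
    correct = 0
    for _ in range(n_trials):
        img = int(rng.integers(0, n_images))
        correct += decode(simulate(img), units) == img
    print(f"{n:2d} units: accuracy {correct / n_trials:.2f} (chance {1 / n_images:.2f})")
```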
Collapse
Affiliation(s)
- R Quian Quiroga
- Department of Engineering, University of Leicester, Leicester, UK.
| | | | | | | |
Collapse
|
40
|
Manjarrez E, Vázquez M, Flores A. Computing the center of mass for traveling alpha waves in the human brain. Brain Res 2007; 1145:239-47. [PMID: 17320825 DOI: 10.1016/j.brainres.2007.01.114] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2006] [Revised: 01/25/2007] [Accepted: 01/26/2007] [Indexed: 11/28/2022]
Abstract
The phenomenon of traveling waves in the brain is an intriguing area of research, and its mechanisms and neurobiological bases have remained unclear since the 1950s. The present study offers a new method to compute traveling alpha waves using the center of mass algorithm. Electroencephalographic alpha waves are oscillations with a characteristic frequency range and reactivity to closed eyes. Several lines of evidence derived from qualitative observations suggest that alpha waves represent a spreading wave process with specific trajectories in the human brain. We found that, for a given alpha wave peak recorded with 30 electrodes, the trajectory starts and ends in distinct regions of the brain, mostly frontal-occipital, frontal-frontal, or occipital-frontal, whereas at the time of maximal positivity of the alpha wave the trajectory occupies a definite position near the central regions. Thus the trajectory always crossed the central zones while traveling from one region of the brain to another. A similar trajectory pattern was observed for different alpha wave peaks within one alpha burst, and in different subjects, with a mean velocity of 2.1 +/- 0.29 m/s. These results were clear and reproducible in all of the subjects. To our knowledge, the present method provides the first explicit description of a spreading wave process with a singular pattern in the human brain in terms of the center of mass algorithm.
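A minimal sketch of the centre-of-mass computation at the heart of the method (synthetic electrode layout and amplitudes, not the authors' EEG data): at each time point the electrode positions are averaged with their instantaneous positive amplitudes as weights, and the displacement of that point over time gives a propagation speed.

```python
import numpy as np

rng = np.random.default_rng(5)
n_electrodes = 30
pos = rng.uniform(-0.1, 0.1, (n_electrodes, 2))           # electrode x,y in metres (toy layout)

def center_of_mass(amplitudes, positions):
    """Amplitude-weighted mean electrode position (negative amplitudes clipped to zero)."""
    w = np.clip(amplitudes, 0.0, None)
    return (w[:, None] * positions).sum(axis=0) / w.sum()

# Toy travelling wave: a Gaussian bump of activity sweeping front to back in 100 ms.
times = np.linspace(0.0, 0.1, 50)
source = np.column_stack([np.linspace(-0.08, 0.08, times.size), np.zeros(times.size)])
trajectory = []
for k in range(times.size):
    amp = np.exp(-np.sum((pos - source[k]) ** 2, axis=1) / (2 * 0.03 ** 2))
    trajectory.append(center_of_mass(amp, pos))
trajectory = np.array(trajectory)

speed = np.linalg.norm(np.diff(trajectory, axis=0), axis=1) / np.diff(times)
print(f"mean centre-of-mass speed: {speed.mean():.2f} m/s")
```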
Collapse
Affiliation(s)
- Elías Manjarrez
- Instituto de Fisiología, Benemérita Universidad Autónoma de Puebla, 14 Sur 6301, Col. San Manuel, Apartado Postal 406, Puebla, Pue. CP 72570, Mexico.
| | | | | |
Collapse
|
41
|
Coyle JT. Substance use disorders and schizophrenia: A question of shared glutamatergic mechanisms. Neurotox Res 2006; 10:221-33. [PMID: 17197372 DOI: 10.1007/bf03033359] [Citation(s) in RCA: 53] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
Abstract
Schizophrenia is noted for the remarkably high prevalence of substance use disorders (SUDs) including nicotine (>85%), alcohol and stimulants. Mounting evidence supports the hypothesis that the endophenotype of schizophrenia involves hypofunction of a subpopulation of cortico-limbic NMDA receptors. Low doses of NMDA receptor antagonists such as ketamine replicate in normal volunteers positive, negative and cognitive symptoms of schizophrenia as well as associated physiologic abnormalities such as abnormal eye tracking and event-related potentials. Genetic studies have identified putative risk genes that directly or indirectly affect NMDA receptors including D-amino acid oxidase, its modulator G72, proline oxidase, mGluR3 and neuregulin. Clinical trials have shown that agents that directly or indirectly enhance the function of the NMDA receptor at its glycine modulatory site (GMS) reduce negative symptoms and, in the case of D-serine and sarcosine, improve cognition and reduce positive symptoms in schizophrenic subjects receiving concurrent antipsychotic medications. Notably, the GMS partial agonist D-cycloserine exacerbates negative symptoms in clozapine responders, whereas the full agonists glycine and D-serine have no effects, suggesting clozapine may act indirectly as a full agonist at the GMS of the NMDA receptor. Clozapine treatment is uniquely associated with decreased substance use in patients with schizophrenia, even without psychologic intervention. Given the role of NMDA receptors in the reward circuitry and in substance dependence, it is reasonable to speculate that NMDA receptor dysfunction is a shared pathologic process in schizophrenia and co-morbid SUDs.
Collapse
Affiliation(s)
- Joseph T Coyle
- Harvard Medical School, Department of Psychiatry, McLean Hospital, Belmont, MA 02478, USA.
| |
Collapse
|
42
|
Hung CP, Kreiman G, Poggio T, DiCarlo JJ. Fast Readout of Object Identity from Macaque Inferior Temporal Cortex. Science 2005; 310:863-6. [PMID: 16272124 DOI: 10.1126/science.1117593] [Citation(s) in RCA: 514] [Impact Index Per Article: 25.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
Abstract
Understanding the brain computations leading to object recognition requires quantitative characterization of the information represented in inferior temporal (IT) cortex. We used a biologically plausible, classifier-based readout technique to investigate the neural coding of selectivity and invariance at the IT population level. The activity of small neuronal populations (∼100 randomly selected cells) over very short time intervals (as small as 12.5 milliseconds) contained unexpectedly accurate and robust information about both object “identity” and “category.” This information generalized over a range of object positions and scales, even for novel objects. Coarse information about position and scale could also be read out from the same population.
Collapse
Affiliation(s)
- Chou P Hung
- McGovern Institute for Brain Research, Cambridge, MA 02139, USA.
| | | | | | | |
Collapse
|
43
|
Howard MW, Natu VS. Place from time: Reconstructing position from a distributed representation of temporal context. Neural Netw 2005; 18:1150-62. [PMID: 16198538 PMCID: PMC1444898 DOI: 10.1016/j.neunet.2005.08.002] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Abstract
The temporal context model (TCM) [Howard, M. W., & Kahana, M. J. (2002). A distributed representation of temporal context. Journal of Mathematical Psychology, 46(3), 269-299] was proposed to describe recency and associative effects observed in episodic recall. Episodic recall depends on an intact medial temporal lobe, a region of the brain that also supports a place code. Howard, Fotedar, Datey, and Hasselmo [(2005). The temporal context model in spatial navigation and relational learning: Toward a common explanation of medial temporal lobe function across domains. Psychological Review, 112(1), 75-116] demonstrated that the leaky integrator that supports a gradually changing representation of temporal context in TCM is sufficient to describe properties of cells observed in ventromedial entorhinal cortex during spatial navigation if it is provided with input about the animal's current velocity. This representation of temporal context generates noisy place cells in the open field, unlike the clearly defined place cells observed in the hippocampus. Here we demonstrate that a reasonably accurate spatial representation can be extracted from temporal context with as few as eight cells, suggesting that the spatial precision observed in the place code in the hippocampus is not inconsistent with the input from a representation of temporal-spatial context in entorhinal cortex.
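The core computational ingredient is a leaky integrator driven by velocity input; the sketch below (with assumed parameters and a synthetic trajectory, not the authors' fitted model) drifts a small bank of context cells with the velocity of a smooth 1-D path and then reconstructs position from eight cells by least squares.

```python
import numpy as np

n_cells, n_steps = 8, 4000
t = np.arange(n_steps)

# A smooth, bounded 1-D trajectory and its velocity (the model's input signal).
position = np.sin(2 * np.pi * t / 400) + 0.5 * np.sin(2 * np.pi * t / 130 + 1.0)
velocity = np.gradient(position)

# Context cells: leaky integrators of velocity with a spectrum of decay rates.
rho = np.linspace(0.90, 0.999, n_cells)
context = np.zeros((n_steps, n_cells))
for k in range(1, n_steps):
    context[k] = rho * context[k - 1] + (1 - rho) * velocity[k]

# Reconstruct position from the 8 cells by least squares: fit on the first half,
# evaluate on the second half.
half = n_steps // 2
coef, *_ = np.linalg.lstsq(context[:half], position[:half], rcond=None)
r = np.corrcoef(context[half:] @ coef, position[half:])[0, 1]
print(f"position reconstructed from {n_cells} context cells: r = {r:.3f}")
```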
Collapse
Affiliation(s)
- Marc W Howard
- Department of Psychology, Syracuse University, 430 Huntington Hall, Syracuse, NY 13244-2340, USA.
| | | |
Collapse
|
44
|
Hegdé J, Van Essen DC. Role of primate visual area V4 in the processing of 3-D shape characteristics defined by disparity. J Neurophysiol 2005; 94:2856-66. [PMID: 15987759 DOI: 10.1152/jn.00802.2004] [Citation(s) in RCA: 37] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
We studied the responses of V4 neurons in awake, fixating monkeys to a diverse set of stereoscopic stimuli, including zero-order disparity (frontoparallel) stimuli, surfaces oriented in depth, and convex and concave shapes presented at various mean disparities. The responses of many V4 cells were significantly modulated across each of these stimulus subsets. In general, V4 cells were broadly tuned for zero-order disparity, and at any given disparity value, about four-fifths of the cells responded significantly above background. The response modulation by flat surfaces oriented in depth was significant for about one-quarter of cells, and the responses of about one-third of the cells were significantly modulated by convex or concave surfaces at various mean disparities. However, we encountered no cells that unambiguously distinguished a given three-dimensional (3-D) shape independent of mean disparity. Thus 3-D shapes defined by disparity are unlikely to be represented explicitly at the level of individual V4 cells. Nonetheless, V4 cells likely play an important role in the processing of 3-D shape characteristics defined by disparity as a part of a distributed network.
Collapse
Affiliation(s)
- Jay Hegdé
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, MO 63110, USA
| | | |
Collapse
|
45
|
|
46
|
Abstract
The firing rate of visual cortical neurons typically changes substantially during a sustained visual stimulus. To assess whether, and to what extent, the information about shape conveyed by neurons in visual area V2 changes over the course of the response, we recorded the responses of V2 neurons in awake, fixating monkeys while presenting a diverse set of static shape stimuli within the classical receptive field. We analyzed the time course of various measures of responsiveness and stimulus-related response modulation at the level of individual cells and of the population. For a majority of V2 cells, the response modulation was maximal during the initial transient response (40-80 ms after stimulus onset). During the same period, the population response was relatively correlated, in that V2 cells tended to respond similarly to specific subsets of stimuli. Over the ensuing 80-100 ms, the signal-to-noise ratio of individual cells generally declined, but to a lesser degree than the evoked-response rate during the corresponding time bins, and the response profiles became decorrelated for many individual cells. Concomitantly, the population response became substantially decorrelated. Our results indicate that the information about stimulus shape evolves dynamically and relatively rapidly in V2 during static visual stimulation in ways that may contribute to form discrimination.
Collapse
Affiliation(s)
- Jay Hegdé
- Department of Anatomy and Neurobiology, Box 8108, Washington University School of Medicine, St. Louis, MO 63110, USA
| | | |
Collapse
|
47
|
Li YX, Wang YQ, Miura R. Clustering in small networks of excitatory neurons with heterogeneous coupling strengths. J Comput Neurosci 2003; 14:139-59. [PMID: 12567014 DOI: 10.1023/a:1021902717424] [Citation(s) in RCA: 20] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
Excitatory coupling with a slow rise time destabilizes synchrony between coupled neurons. Thus, the fully synchronous state is usually unstable in networks of excitatory neurons. Phase-clustered states, in which neurons are divided into multiple synchronized clusters, have also been found unstable in numerical studies of excitatory networks in the presence of noise. The question arises as to whether synchrony is possible in networks of neurons coupled through slow, excitatory synapses. In this paper, we show that robust, synchronous clustered states can occur in such networks. The effects of non-uniform distributions of coupling strengths are explored. Conditions for the existence and stability of clustered states are derived analytically. The analysis shows that a multi-cluster state can be stable in excitatory networks if the overall interactions between neurons in different clusters are stabilizing and strong enough to counteract the destabilizing interactions between neurons within each cluster. When heterogeneity in the coupling strengths strengthens the stabilizing inter-cluster interactions and/or weakens the destabilizing in-cluster interactions, robust clustered states can occur in excitatory networks of all known model neurons. Numerical simulations were carried out to support the analytical results.
Collapse
Affiliation(s)
- Yue-Xian Li
- Department of Mathematics, University of British Columbia, Vancouver, BC, Canada V6T 1Z2.
| | | | | |
Collapse
|
48
|
Buchholtz F, Schinor N, Schneider FW. Stochastic Nonlinear Dynamics: How Many Ion Channels are in a Single Neuron? J Phys Chem B 2002. [DOI: 10.1021/jp0120662] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Affiliation(s)
- F. Buchholtz
- Institute of Physical Chemistry, University of Wuerzburg, Am Hubland, 97074 Wuerzburg, Germany
| | - N. Schinor
- Institute of Physical Chemistry, University of Wuerzburg, Am Hubland, 97074 Wuerzburg, Germany
| | - F. W. Schneider
- Institute of Physical Chemistry, University of Wuerzburg, Am Hubland, 97074 Wuerzburg, Germany
| |
Collapse
|
49
|
Abstract
Weakly electric fish use an electric sense to navigate and capture prey in the dark. Objects in the surroundings of the fish produce distortions in their self-generated electric field; these distortions form a two-dimensional Gaussian-like electric image on the skin surface. To determine the distance of an object, the peak amplitude and width of its electric image must be estimated. These sensory features are encoded by a neuronal population in the early stages of the electrosensory pathway, but are not represented with classic bell-shaped neuronal tuning curves. In contrast, bell-shaped tuning curves do characterize the neuronal responses to the location of the electric image on the body surface, such that parallel two-dimensional maps of this feature are formed. In the case of such two-dimensional maps, theoretical results suggest that the width of neural tuning should have no effect on the accuracy of a population code. Here we show that although the spatial scale of the electrosensory maps does not affect the accuracy of encoding the body surface location of the electric image, maps with narrower tuning are better for estimating image width and those with wider tuning are better for estimating image amplitude. We quantitatively evaluate a two-step algorithm for distance perception involving the sequential estimation of peak amplitude and width of the electric image. This algorithm is best implemented by two neural maps with different tuning widths. These results suggest that multiple maps of sensory features may be specialized with different tuning widths, for encoding additional sensory features that are not explicitly mapped.
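The two-step estimate can be made concrete with a small sketch using synthetic electric images and assumed scaling of amplitude and width with distance: step one reads off the peak amplitude, step two estimates the width from the spatial spread, and together they yield a monotonic distance cue. The scaling laws and constants below are illustrative assumptions, not measured values.

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 401)                       # skin coordinate (cm)

def electric_image(distance, size=1.0):
    """Toy Gaussian image: wider and flatter as the object moves away (assumed scaling)."""
    width = 0.5 + 0.8 * distance
    amplitude = size / (1.0 + distance) ** 3
    return amplitude * np.exp(-0.5 * (x / width) ** 2)

def estimate_amplitude_and_width(image):
    """Step 1: peak amplitude. Step 2: width from the amplitude-weighted spread."""
    amp = image.max()
    width = np.sqrt(np.sum(image * x ** 2) / np.sum(image))
    return amp, width

for d in (0.5, 1.0, 2.0, 4.0):
    amp, width = estimate_amplitude_and_width(electric_image(d))
    print(f"distance {d:.1f} cm: peak {amp:.4f}, width {width:.2f}, width/peak {width / amp:.1f}")
```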
Collapse
|
50
|
Senn W, Markram H, Tsodyks M. An algorithm for modifying neurotransmitter release probability based on pre- and postsynaptic spike timing. Neural Comput 2001; 13:35-67. [PMID: 11177427 DOI: 10.1162/089976601300014628] [Citation(s) in RCA: 149] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
The precise times of occurrence of individual pre- and postsynaptic action potentials are known to play a key role in the modification of synaptic efficacy. Based on stimulation protocols of two synaptically connected neurons, we infer an algorithm that reproduces the experimental data by modifying the probability of vesicle discharge as a function of the relative timing of spikes in the pre- and postsynaptic neurons. The primary feature of this algorithm is an asymmetry with respect to the direction of synaptic modification depending on whether the presynaptic spikes precede or follow the postsynaptic spike. Specifically, if the presynaptic spike occurs up to 50 ms before the postsynaptic spike, the probability of vesicle discharge is upregulated, while the probability of vesicle discharge is downregulated if the presynaptic spike occurs up to 50 ms after the postsynaptic spike. When neurons fire irregularly with Poisson spike trains at constant mean firing rates, the probability of vesicle discharge converges toward a characteristic value determined by the pre- and postsynaptic firing rates. On the other hand, if the mean rates of the Poisson spike trains slowly change with time, our algorithm predicts modifications in the probability of release that generalize Hebbian and Bienenstock-Cooper-Munro rules. We conclude that the proposed spike-based synaptic learning algorithm provides a general framework for regulating neurotransmitter release probability.
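A minimal sketch of the asymmetric timing rule as described above; the learning rates and soft bounds are assumptions, and the paper's full algorithm additionally models the release machinery itself. The release probability is nudged up when the presynaptic spike precedes the postsynaptic one by up to 50 ms and nudged down when it follows within 50 ms.

```python
import numpy as np

def update_release_probability(p, dt_ms, lr_up=0.02, lr_down=0.02, window_ms=50.0):
    """dt_ms = t_post - t_pre. Potentiate for 0 < dt <= 50 ms, depress for -50 ms <= dt < 0."""
    if 0.0 < dt_ms <= window_ms:
        p += lr_up * (1.0 - p)           # pre before post: upregulate release probability
    elif -window_ms <= dt_ms < 0.0:
        p -= lr_down * p                 # pre after post: downregulate release probability
    return float(np.clip(p, 0.0, 1.0))

p = 0.3
for dt in (+10, +10, +10, -10, -10, +200):   # ms; the last pairing falls outside the window
    p = update_release_probability(p, dt)
    print(f"dt = {dt:+4d} ms -> release probability {p:.3f}")
```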
Collapse
Affiliation(s)
- W Senn
- Department of Neurobiology, Weizmann Institute, Rehovot, Israel
| | | | | |
Collapse
|