1
Wang R, Kang L. Multiple bumps can enhance robustness to noise in continuous attractor networks. PLoS Comput Biol 2022; 18:e1010547. PMID: 36215305; PMCID: PMC9584540; DOI: 10.1371/journal.pcbi.1010547.
Abstract
A central function of continuous attractor networks is encoding coordinates and accurately updating their values through path integration. To do so, these networks produce localized bumps of activity that move coherently in response to velocity inputs. In the brain, continuous attractors are believed to underlie grid cells and head direction cells, which maintain periodic representations of position and orientation, respectively. These representations can be achieved with any number of activity bumps, and the consequences of having more or fewer bumps are unclear. We address this knowledge gap by constructing 1D ring attractor networks with different bump numbers and characterizing their responses to three types of noise: fluctuating inputs, spiking noise, and deviations in connectivity away from ideal attractor configurations. Across all three types, networks with more bumps experience less noise-driven deviations in bump motion. This translates to more robust encodings of linear coordinates, like position, assuming that each neuron represents a fixed length no matter the bump number. Alternatively, we consider encoding a circular coordinate, like orientation, such that the network distance between adjacent bumps always maps onto 360 degrees. Under this mapping, bump number does not significantly affect the amount of error in the coordinate readout. Our simulation results are intuitively explained and quantitatively matched by a unified theory for path integration and noise in multi-bump networks. Thus, to suppress the effects of biologically relevant noise, continuous attractor networks can employ more bumps when encoding linear coordinates; this advantage disappears when encoding circular coordinates. Our findings provide motivation for multiple bumps in the mammalian grid network.
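As a concrete illustration of the kind of model this abstract describes, a 1D ring attractor whose bump number is set by the connectivity can be sketched in a few lines. This is a generic textbook construction, not the authors' code; the cosine weight profile, rectified-linear rate dynamics, and every parameter value below are illustrative assumptions.

```python
import numpy as np

def ring_attractor_bumps(n_neurons=256, n_bumps=4, steps=2000, dt=0.1,
                         j0=-2.0, j1=4.0, drive=1.0, seed=0):
    """Relax a 1D ring attractor whose connectivity sets the bump number.

    Weights are translation invariant: uniform inhibition (j0 < 0) plus a
    cosine profile of spatial frequency n_bumps, so rectified rate dynamics
    settle into n_bumps equally spaced activity bumps.
    """
    rng = np.random.default_rng(seed)
    theta = 2.0 * np.pi * np.arange(n_neurons) / n_neurons
    diff = theta[:, None] - theta[None, :]
    W = (j0 + j1 * np.cos(n_bumps * diff)) / n_neurons
    r = 0.1 * rng.random(n_neurons)  # small random state breaks the symmetry
    for _ in range(steps):
        r = r + dt * (-r + np.maximum(0.0, W @ r + drive))
    return r

def count_bumps(r):
    """Count contiguous regions above half the peak rate, with wrap-around."""
    active = r > 0.5 * r.max()
    return int(np.sum(active & ~np.roll(active, 1)))  # count rising edges
```

Changing `n_bumps` changes the spatial frequency of the weights and hence the number of bumps the network settles into, which is the quantity the paper varies when comparing noise robustness across configurations.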
Affiliation(s)
- Raymond Wang
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, California, United States of America
- Neural Circuits and Computations Unit, RIKEN Center for Brain Science, Wako, Saitama, Japan
- Louis Kang
- Neural Circuits and Computations Unit, RIKEN Center for Brain Science, Wako, Saitama, Japan
2
Low IIC, Williams AH, Campbell MG, Linderman SW, Giocomo LM. Dynamic and reversible remapping of network representations in an unchanging environment. Neuron 2021; 109:2967-2980.e11. PMID: 34363753; DOI: 10.1016/j.neuron.2021.07.005.
Abstract
Neurons in the medial entorhinal cortex alter their firing properties in response to environmental changes. This flexibility in neural coding is hypothesized to support navigation and memory by dividing sensory experience into unique episodes. However, it is unknown how the entorhinal circuit as a whole transitions between different representations when sensory information is not delineated into discrete contexts. Here we describe rapid and reversible transitions between multiple spatial maps of an unchanging task and environment. These remapping events were synchronized across hundreds of neurons, differentially affected navigational cell types, and correlated with changes in running speed. Despite widespread changes in spatial coding, remapping comprised a translation along a single dimension in population-level activity space, enabling simple decoding strategies. These findings provoke reconsideration of how the medial entorhinal cortex dynamically represents space and suggest a remarkable capacity of cortical circuits to rapidly and substantially reorganize their neural representations.
Affiliation(s)
- Isabel I C Low
- Department of Neurobiology, Stanford University School of Medicine, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Alex H Williams
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA; Department of Statistics, Stanford University, Stanford, CA, USA
- Malcolm G Campbell
- Department of Neurobiology, Stanford University School of Medicine, Stanford, CA, USA
- Scott W Linderman
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA; Department of Statistics, Stanford University, Stanford, CA, USA
- Lisa M Giocomo
- Department of Neurobiology, Stanford University School of Medicine, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
3
Natale JL, Hentschel HGE, Nemenman I. Precise spatial memory in local random networks. Phys Rev E 2020; 102:022405. PMID: 32942429; DOI: 10.1103/physreve.102.022405.
Abstract
Self-sustained, elevated neuronal activity persisting on timescales of 10 s or longer is thought to be vital for aspects of working memory, including brain representations of real space. Continuous-attractor neural networks, one of the most well-known modeling frameworks for persistent activity, have been able to model crucial aspects of such spatial memory. These models tend to require highly structured or regular synaptic architectures. In contrast, we study numerical simulations of a geometrically embedded model with a local, but otherwise random, connectivity profile; imposing a global regulation of our system's mean firing rate produces localized, finely spaced discrete attractors that effectively span a two-dimensional manifold. We demonstrate how the set of attracting states can reliably encode a representation of the spatial locations at which the system receives external input, thereby accomplishing spatial memory via attractor dynamics without synaptic fine-tuning or regular structure. We then measure the network's storage capacity numerically and find that the statistics of retrievable positions are also equivalent to a full tiling of the plane, something hitherto achievable only with (approximately) translationally invariant synapses, and which may be of interest in modeling such biological phenomena as visuospatial working memory in two dimensions.
Affiliation(s)
- Joseph L Natale
- Department of Physics, Emory University, Atlanta, Georgia 30322, USA
- Ilya Nemenman
- Department of Physics, Department of Biology, and Initiative in Theory and Modeling of Living Systems, Emory University, Atlanta, Georgia 30322, USA
4
Bayati M, Neher T, Melchior J, Diba K, Wiskott L, Cheng S. Storage fidelity for sequence memory in the hippocampal circuit. PLoS One 2018; 13:e0204685. PMID: 30286147; PMCID: PMC6171846; DOI: 10.1371/journal.pone.0204685.
Abstract
Episodic memories have been suggested to be represented by neuronal sequences, which are stored and retrieved from the hippocampal circuit. A special difficulty is that realistic neuronal sequences are strongly correlated with each other since computational memory models generally perform poorly when correlated patterns are stored. Here, we study in a computational model under which conditions the hippocampal circuit can perform this function robustly. During memory encoding, CA3 sequences in our model are driven by intrinsic dynamics, entorhinal inputs, or a combination of both. These CA3 sequences are hetero-associated with the input sequences, so that the network can retrieve entire sequences based on a single cue pattern. We find that overall memory performance depends on two factors: the robustness of sequence retrieval from CA3 and the circuit's ability to perform pattern completion through the feedforward connectivity, including CA3, CA1 and EC. The two factors, in turn, depend on the relative contribution of the external inputs and recurrent drive on CA3 activity. In conclusion, memory performance in our network model critically depends on the network architecture and dynamics in CA3.
Affiliation(s)
- Mehdi Bayati
- Institut für Neuroinformatik, Ruhr-University Bochum, Bochum, Germany
- Torsten Neher
- Mental Health Research and Treatment Center, Department of Clinical Child and Adolescent Psychology, Faculty of Psychology, Ruhr University Bochum, Bochum, Germany
- Jan Melchior
- Institut für Neuroinformatik, Ruhr-University Bochum, Bochum, Germany
- Kamran Diba
- Department of Anesthesiology, University of Michigan, Ann Arbor, United States of America
- Laurenz Wiskott
- Institut für Neuroinformatik, Ruhr-University Bochum, Bochum, Germany
- Sen Cheng
- Institut für Neuroinformatik, Ruhr-University Bochum, Bochum, Germany
5
Rolls ET, Mills WPC. Computations in the deep vs superficial layers of the cerebral cortex. Neurobiol Learn Mem 2017; 145:205-221. PMID: 29042296; DOI: 10.1016/j.nlm.2017.10.011.
Abstract
A fundamental question is how the cerebral neocortex operates functionally, computationally. The cerebral neocortex with its superficial and deep layers and highly developed recurrent collateral systems that provide a basis for memory-related processing might perform somewhat different computations in the superficial and deep layers. Here we take into account the quantitative connectivity within and between laminae. Using integrate-and-fire neuronal network simulations that incorporate this connectivity, we first show that attractor networks implemented in the deep layers that are activated by the superficial layers could be partly independent in that the deep layers might have a different time course, which might because of adaptation be more transient and useful for outputs from the neocortex. In contrast the superficial layers could implement more prolonged firing, useful for slow learning and for short-term memory. Second, we show that a different type of computation could in principle be performed in the superficial and deep layers, by showing that the superficial layers could operate as a discrete attractor network useful for categorisation and feeding information forward up a cortical hierarchy, whereas the deep layers could operate as a continuous attractor network useful for providing a spatially and temporally smooth output to output systems in the brain. A key advance is that we draw attention to the functions of the recurrent collateral connections between cortical pyramidal cells, often omitted in canonical models of the neocortex, and address principles of operation of the neocortex by which the superficial and deep layers might be specialized for different types of attractor-related memory functions implemented by the recurrent collaterals.
Affiliation(s)
- Edmund T Rolls
- Oxford Centre for Computational Neuroscience, Oxford, UK; University of Warwick, Department of Computer Science, Coventry, UK
6
Neher T, Azizi AH, Cheng S. From grid cells to place cells with realistic field sizes. PLoS One 2017; 12:e0181618. PMID: 28750005; PMCID: PMC5531553; DOI: 10.1371/journal.pone.0181618.
Abstract
While grid cells in the medial entorhinal cortex (MEC) of rodents have multiple, regularly arranged firing fields, place cells in the cornu ammonis (CA) regions of the hippocampus mostly have single spatial firing fields. Since there are extensive projections from MEC to the CA regions, many models have suggested that a feedforward network can transform grid cell firing into robust place cell firing. However, these models generate place fields that are consistently too small compared to those recorded in experiments. Here, we argue that it is implausible that grid cell activity alone can be transformed into place cells with robust place fields of realistic size in a feedforward network. We propose two solutions to this problem. Firstly, weakly spatially modulated cells, which are abundant throughout EC, provide input to downstream place cells along with grid cells. This simple model reproduces many place cell characteristics as well as results from lesion studies. Secondly, the recurrent connections between place cells in the CA3 network generate robust and realistic place fields. Both mechanisms could work in parallel in the hippocampal formation and this redundancy might account for the robustness of place cell responses to a range of disruptions of the hippocampal circuitry.
Affiliation(s)
- Torsten Neher
- Institute for Neural Computation, Ruhr University Bochum, Bochum, Germany
- International Graduate School of Neuroscience, Ruhr University Bochum, Bochum, Germany
- Department of Psychology, Ruhr University Bochum, Bochum, Germany
- Amir Hossein Azizi
- Institute for Neural Computation, Ruhr University Bochum, Bochum, Germany
- Sen Cheng
- Institute for Neural Computation, Ruhr University Bochum, Bochum, Germany
- International Graduate School of Neuroscience, Ruhr University Bochum, Bochum, Germany
7
Leibold C, Monsalve-Mercado MM. Asymmetry of Neuronal Combinatorial Codes Arises from Minimizing Synaptic Weight Change. Neural Comput 2016; 28:1527-52. PMID: 27348595; DOI: 10.1162/neco_a_00854.
Abstract
Synaptic change is a costly resource, particularly for brain structures that have a high demand of synaptic plasticity. For example, building memories of object positions requires efficient use of plasticity resources since objects can easily change their location in space and yet we can memorize object locations. But how should a neural circuit ideally be set up to integrate two input streams (object location and identity) in case the overall synaptic changes should be minimized during ongoing learning? This letter provides a theoretical framework on how the two input pathways should ideally be specified. Generally the model predicts that the information-rich pathway should be plastic and encoded sparsely, whereas the pathway conveying less information should be encoded densely and undergo learning only if a neuronal representation of a novel object has to be established. As an example, we consider hippocampal area CA1, which combines place and object information. The model thereby provides a normative account of hippocampal rate remapping, that is, modulations of place field activity by changes of local cues. It may as well be applicable to other brain areas (such as neocortical layer V) that learn combinatorial codes from multiple input streams.
Affiliation(s)
- Christian Leibold
- Department Biology II, Ludwig-Maximilians-Universität München, and Bernstein Center for Computational Neuroscience Munich, 82152 Martinsried, Germany
8
González M, Alonso-Almeida MDM, Avila C, Dominguez D. Modeling sustainability report scoring sequences using an attractor network. Neurocomputing 2015. DOI: 10.1016/j.neucom.2015.05.004.
9
Neher T, Cheng S, Wiskott L. Memory storage fidelity in the hippocampal circuit: the role of subregions and input statistics. PLoS Comput Biol 2015; 11:e1004250. PMID: 25954996; PMCID: PMC4425359; DOI: 10.1371/journal.pcbi.1004250.
Abstract
In the last decades a standard model regarding the function of the hippocampus in memory formation has been established and tested computationally. It has been argued that the CA3 region works as an auto-associative memory and that its recurrent fibers are the actual storing place of the memories. Furthermore, to work properly CA3 requires memory patterns that are mutually uncorrelated. It has been suggested that the dentate gyrus orthogonalizes the patterns before storage, a process known as pattern separation. In this study we review the model when random input patterns are presented for storage and investigate whether it is capable of storing patterns of more realistic entorhinal grid cell input. Surprisingly, we find that an auto-associative CA3 net is redundant for random inputs up to moderate noise levels and is only beneficial at high noise levels. When grid cell input is presented, auto-association is even harmful for memory performance at all levels. Furthermore, we find that Hebbian learning in the dentate gyrus does not support its function as a pattern separator. These findings challenge the standard framework and support an alternative view where the simpler EC-CA1-EC network is sufficient for memory storage.
Affiliation(s)
- Torsten Neher
- International Graduate School Neuroscience, Ruhr-University Bochum, Bochum, Germany
- Institute for Neural Computation, Ruhr-University Bochum, Bochum, Germany
- Sen Cheng
- International Graduate School Neuroscience, Ruhr-University Bochum, Bochum, Germany
- Mercator Research Group ‘Structure of Memory’, Department of Psychology, Ruhr-University Bochum, Bochum, Germany
- Laurenz Wiskott
- International Graduate School Neuroscience, Ruhr-University Bochum, Bochum, Germany
- Institute for Neural Computation, Ruhr-University Bochum, Bochum, Germany
10
Hardcastle K, Ganguli S, Giocomo LM. Environmental boundaries as an error correction mechanism for grid cells. Neuron 2015; 86:827-39. PMID: 25892299; DOI: 10.1016/j.neuron.2015.03.039.
Abstract
Medial entorhinal grid cells fire in periodic, hexagonally patterned locations and are proposed to support path-integration-based navigation. The recursive nature of path integration results in accumulating error and, without a corrective mechanism, a breakdown in the calculation of location. The observed long-term stability of grid patterns necessitates that the system either performs highly precise internal path integration or implements an external landmark-based error correction mechanism. To distinguish these possibilities, we examined grid cells in behaving rodents as they made long trajectories across an open arena. We found that error accumulates relative to time and distance traveled since the animal last encountered a boundary. This error reflects coherent drift in the grid pattern. Further, interactions with boundaries yield direction-dependent error correction, suggesting that border cells serve as a neural substrate for error correction. These observations, combined with simulations of an attractor network grid cell model, demonstrate that landmarks are crucial to grid stability.
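The accumulate-then-correct picture described here can be caricatured with a toy path integrator. This sketch is not the authors' analysis; the noise level, trial count, and fixed reset time are invented for illustration.

```python
import numpy as np

def drift_and_reset(n_trials=500, n_steps=1000, noise=0.05, reset_at=600, seed=1):
    """Toy path integrator: the position estimate accumulates Gaussian noise
    each step, so error grows ~ sqrt(time); a boundary encounter at step
    `reset_at` re-anchors the estimate, mimicking border-cell correction.
    Returns per-trial error trajectories, shape (n_trials, n_steps + 1)."""
    rng = np.random.default_rng(seed)
    err = np.zeros((n_trials, n_steps + 1))
    for t in range(1, n_steps + 1):
        err[:, t] = err[:, t - 1] + rng.normal(0.0, noise, n_trials)
        if t == reset_at:
            err[:, t] = 0.0  # landmark contact zeroes the accumulated error
    return err
```

Averaged over trials, the root-mean-square error grows roughly as the noise amplitude times the square root of elapsed steps, collapses at the reset, and then re-accumulates; qualitatively, this is the time-and-distance-since-boundary dependence the paper reports.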
Affiliation(s)
- Kiah Hardcastle
- Department of Neurobiology, Stanford University, Stanford, CA 94305, USA
- Surya Ganguli
- Department of Neurobiology, Stanford University, Stanford, CA 94305, USA; Department of Applied Physics, Stanford University, Stanford, CA 94305, USA
- Lisa M Giocomo
- Department of Neurobiology, Stanford University, Stanford, CA 94305, USA
11
González M, Dominguez D, Rodríguez FB, Sánchez Á. Retrieval of noisy fingerprint patterns using metric attractor networks. Int J Neural Syst 2014; 24:1450025. PMID: 25236929; DOI: 10.1142/s0129065714500257.
Abstract
This work experimentally analyzes the learning and retrieval capabilities of the diluted metric attractor neural network when applied to collections of fingerprint images. The computational cost of the network decreases with the dilution, so we can increase the region of interest to cover almost the complete fingerprint. The network retrieval was successfully tested for different noisy configurations of the fingerprints, and proved to be robust with a large basin of attraction. We showed that network topologies with a 2D-Grid arrangement adapt better to the fingerprints spatial structure, outperforming the typical 1D-Ring configuration. An optimal ratio of local connections to random shortcuts that better represent the intrinsic spatial structure of the fingerprints was found, and its influence on the retrieval quality was characterized in a phase diagram. Since the present model is a set of nonlinear equations, it is possible to go beyond the naïve static solution (consisting in matching two fingerprints using a fixed distance threshold value), and a crossing evolution of similarities was shown, leading to the retrieval of the right fingerprint from an apparently more distant candidate. This feature could be very useful for fingerprint verification to discriminate between fingerprints pairs.
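The storage-and-cleanup principle behind this approach can be demonstrated with a minimal dense Hopfield network. Note that the paper's model is a diluted metric network with local 2D connectivity and random shortcuts; this fully connected sketch, with invented pattern sizes, only illustrates how attractor retrieval cleans up a noisy probe.

```python
import numpy as np

def hopfield_store(patterns):
    """Hebbian outer-product weights for +/-1 patterns; zero self-coupling."""
    p = np.asarray(patterns, dtype=float)
    W = (p.T @ p) / p.shape[1]
    np.fill_diagonal(W, 0.0)
    return W

def hopfield_retrieve(W, probe, steps=20):
    """Iterate synchronous sign updates until the state (ideally) settles."""
    s = np.asarray(probe, dtype=float)
    for _ in range(steps):
        s = np.where(W @ s >= 0.0, 1.0, -1.0)
    return s
```

With a couple of random 100-unit patterns stored and a probe in which 10% of the bits are flipped, the dynamics fall back onto the stored pattern; the flipped bits play the role of the noisy fingerprint configurations tested in the paper.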
Affiliation(s)
- Mario González
- Universidad Estatal de Milagro, Milagro, Guayas, Ecuador
12
Abstract
One of the grand challenges in neuroscience is to comprehend neural computation in the association cortices, the parts of the cortex that have shown the largest expansion and differentiation during mammalian evolution and that are thought to contribute profoundly to the emergence of advanced cognition in humans. In this Review, we use grid cells in the medial entorhinal cortex as a gateway to understand network computation at a stage of cortical processing in which firing patterns are shaped not primarily by incoming sensory signals but to a large extent by the intrinsic properties of the local circuit.
13
Solstad T, Yousif HN, Sejnowski TJ. Place cell rate remapping by CA3 recurrent collaterals. PLoS Comput Biol 2014; 10:e1003648. PMID: 24902003; PMCID: PMC4046921; DOI: 10.1371/journal.pcbi.1003648.
Abstract
Episodic-like memory is thought to be supported by attractor dynamics in the hippocampus. A possible neural substrate for this memory mechanism is rate remapping, in which the spatial map of place cells encodes contextual information through firing rate variability. To test whether memories are stored as multimodal attractors in populations of place cells, recent experiments morphed one familiar context into another while observing the responses of CA3 cell ensembles. Average population activity in CA3 was reported to transition gradually rather than abruptly from one familiar context to the next, suggesting a lack of attractive forces associated with the two stored representations. On the other hand, individual CA3 cells showed a mix of gradual and abrupt transitions at different points along the morph sequence, and some displayed hysteresis which is a signature of attractor dynamics. To understand whether these seemingly conflicting results are commensurate with attractor network theory, we developed a neural network model of the CA3 with attractors for both position and discrete contexts. We found that for memories stored in overlapping neural ensembles within a single spatial map, position-dependent context attractors made transitions at different points along the morph sequence. Smooth transition curves arose from averaging across the population, while a heterogeneous set of responses was observed on the single unit level. In contrast, orthogonal memories led to abrupt and coherent transitions on both population and single unit levels as experimentally observed when remapping between two independent spatial maps. Strong recurrent feedback entailed a hysteretic effect on the network which diminished with the amount of overlap in the stored memories. These results suggest that context-dependent memory can be supported by overlapping local attractors within a spatial map of CA3 place cells. Similar mechanisms for context-dependent memory may also be found in other regions of the cerebral cortex.
The activity of ‘place cells’ in hippocampal area CA3 systematically changes as a function of the animal's position in an arena as well as contextual variables like the color or shape of enclosing walls. Large changes to the local environment, e.g. moving the animal to a different room, can induce a complete reorganization of place-cell firing locations. Such ‘global remapping’ reveals that memory for different environments is encoded as separate spatial maps. Smaller changes to features within an environment can induce a modulation of place cell firing rates without affecting their firing locations. This kind of ‘rate remapping’ is still poorly understood. In this paper we describe a computational model in which discrete memories for contextual features were stored locally within a spatial map of place cells. This network structure supports retrieval of both positional and contextual information from an arbitrary cue, as required by an episodic memory structure. The activity of the network qualitatively matches empirical data from rate remapping experiments, both on the population level and the level of single place cells. The results support the idea that CA3 rate remapping reflects content-addressable memories stored as multimodal attractor states in the hippocampus.
Affiliation(s)
- Trygve Solstad
- Howard Hughes Medical Institute, Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, California, United States of America
- Kavli Institute for Systems Neuroscience and Centre for Neural Computation, Norwegian University of Science and Technology, MTFS, Trondheim, Norway
- Hosam N. Yousif
- Howard Hughes Medical Institute, Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, California, United States of America
- Department of Physics, University of California at San Diego, La Jolla, California, United States of America
- Terrence J. Sejnowski
- Howard Hughes Medical Institute, Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, California, United States of America
- Division of Biological Sciences, University of California at San Diego, La Jolla, California, United States of America
14
Abstract
One of the major breakthroughs in neuroscience is the emerging understanding of how signals from the external environment are extracted and represented in the primary sensory cortices of the mammalian brain. The operational principles of the rest of the cortex, however, have essentially remained in the dark. The discovery of grid cells, and their functional organization, opens the door to some of the first insights into the workings of the association cortices, at a stage of neural processing where firing properties are shaped not primarily by the nature of incoming sensory signals but rather by internal self-organizing principles. Grid cells are place-modulated neurons whose firing locations define a periodic triangular array overlaid on the entire space available to a moving animal. The unclouded firing pattern of these cells is rare within the association cortices. In this paper, we shall review recent advances in our understanding of the mechanisms of grid-cell formation which suggest that the pattern originates by competitive network interactions, and we shall relate these ideas to new insights regarding the organization of grid cells into functionally segregated modules.
Affiliation(s)
- Edvard I Moser
- Centre for Neural Computation, Kavli Institute for Systems Neuroscience, Norwegian University of Science and Technology, MTFS, Olav Kyrres gate 9, NTNU, 7489 Trondheim, Norway
15
Azizi AH, Wiskott L, Cheng S. A computational model for preplay in the hippocampus. Front Comput Neurosci 2013; 7:161. PMID: 24282402; PMCID: PMC3824291; DOI: 10.3389/fncom.2013.00161.
Abstract
The hippocampal network produces sequences of neural activity even when there is no time-varying external drive. In offline states, the temporal sequence in which place cells fire spikes correlates with the sequence of their place fields. Recent experiments found this correlation even between offline sequential activity (OSA) recorded before the animal ran in a novel environment and the place fields in that environment. This preplay phenomenon suggests that OSA is generated intrinsically in the hippocampal network, and not established by external sensory inputs. Previous studies showed that continuous attractor networks with asymmetric patterns of connectivity, or with slow, local negative feedback, can generate sequential activity. This mechanism could account for preplay if the network only represented a single spatial map, or chart. However, global remapping in the hippocampus implies that multiple charts are represented simultaneously in the hippocampal network and it remains unknown whether the network with multiple charts can account for preplay. Here we show that it can. Driven with random inputs, the model generates sequences in every chart. Place fields in a given chart and OSA generated by the network are highly correlated. We also find significant correlations, albeit less frequently, even when the OSA is correlated with a new chart in which place fields are randomly scattered. These correlations arise from random correlations between the orderings of place fields in the new chart and those in a pre-existing chart. Our results suggest two different accounts for preplay. Either an existing chart is re-used to represent a novel environment or a new chart is formed.
Affiliation(s)
- Amir H Azizi
- Mercator Research Group "Structure of Memory," Department of Psychology, Ruhr-University Bochum, Bochum, Germany
16
Cerasti E, Treves A. The spatial representations acquired in CA3 by self-organizing recurrent connections. Front Cell Neurosci 2013; 7:112. [PMID: 23882184 PMCID: PMC3712127 DOI: 10.3389/fncel.2013.00112] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2013] [Accepted: 06/26/2013] [Indexed: 11/13/2022] Open
Abstract
Neural computation models have hypothesized that the dentate gyrus (DG) drives the storage in the CA3 network of new memories including, e.g., in rodents, spatial memories. Can recurrent CA3 connections self-organize, during storage, and form what have been called continuous attractors, or charts—so that they express spatial information later, when aside from a partial cue the information may not be available in the inputs? We use a simplified mathematical network model to contrast the properties of spatial representations self-organized through simulated Hebbian plasticity with those of charts pre-wired in the synaptic matrix, a control case closer to the ideal notion of continuous attractors. Both models form granular quasi-attractors, characterized by drift, which approach continuous ones only in the limit of an infinitely large network. The two models are comparable in terms of precision, but not of accuracy: with self-organized connections, the metric of space remains distorted, ill-adequate for accurate path integration, even when scaled up to the real hippocampus. While prolonged self-organization makes charts somewhat more informative about position in the environment, some positional information is surprisingly present also about environments never learned, borrowed, as it were, from unrelated charts. In contrast, context discrimination decreases with more learning, as different charts tend to collapse onto each other. These observations challenge the feasibility of the idealized CA3 continuous chart concept, and are consistent with a CA3 specialization for episodic memory rather than path integration.
Affiliation(s)
- Erika Cerasti
- SISSA, Cognitive Neuroscience Sector, Trieste, Italy; Collège de France, Paris, France
17
Stella F, Cerasti E, Treves A. Unveiling the metric structure of internal representations of space. Front Neural Circuits 2013; 7:81. [PMID: 23637653 PMCID: PMC3636501 DOI: 10.3389/fncir.2013.00081] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2012] [Accepted: 04/10/2013] [Indexed: 11/27/2022] Open
Abstract
How are neuronal representations of space organized in the hippocampus? The self-organization of such representations, thought to be driven in the CA3 network by the strong randomizing input from the Dentate Gyrus, appears to run against preserving the topology and even less the exact metric of physical space. We present a way to assess this issue quantitatively, and find that in a simple neural network model of CA3, the average topology is largely preserved, but the local metric is loose, retaining e.g., 10% of the optimal spatial resolution.
18
Stella F, Cerasti E, Si B, Jezek K, Treves A. Self-organization of multiple spatial and context memories in the hippocampus. Neurosci Biobehav Rev 2011; 36:1609-25. [PMID: 22192880 DOI: 10.1016/j.neubiorev.2011.12.002] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2011] [Revised: 12/03/2011] [Accepted: 12/07/2011] [Indexed: 11/16/2022]
Abstract
One obstacle to understanding the exact processes unfolding inside the hippocampus is that it is still difficult to clearly define what the hippocampus actually does, at the system level. Associated for a long time with the formation of episodic and semantic memories, and with their temporary storage, the hippocampus is also regarded as a structure involved in spatial navigation. These two independent perspectives on the hippocampus are not necessarily exclusive: proposals have been put forward to make them fit into the same conceptual frame. We review both approaches and argue that three critical developments need consideration: (a) recordings of neuronal activity in rodents, revealing beautiful spatial codes expressed in entorhinal cortex, upstream of the hippocampus; (b) comparative behavioral results suggesting, in an evolutionary perspective, qualitative similarity of function across homologous structures with a distinct internal organization; (c) quantitative measures of information, shifting the focus from who does what to how much each neuronal population expresses each code. These developments take the hippocampus away from philosophical discussions of all-or-none cause-effect relations, and into the quantitative mainstream of modern neural science.
19
Lehky SR, Sereno AB. Population coding of visual space: modeling. Front Comput Neurosci 2011; 4:155. [PMID: 21344012 PMCID: PMC3034232 DOI: 10.3389/fncom.2010.00155] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2010] [Accepted: 12/09/2010] [Indexed: 11/13/2022] Open
Abstract
We examine how the representation of space is affected by receptive field (RF) characteristics of the encoding population. Spatial responses were defined by overlapping Gaussian RFs. These responses were analyzed using multidimensional scaling to extract the representation of global space implicit in population activity. Spatial representations were based purely on firing rates, which were not labeled with RF characteristics (tuning curve peak location, for example), differentiating this approach from many other population coding models. Because responses were unlabeled, this model represents space using intrinsic coding, extracting relative positions amongst stimuli, rather than extrinsic coding where known RF characteristics provide a reference frame for extracting absolute positions. Two parameters were particularly important: RF diameter and RF dispersion, where dispersion indicates how broadly RF centers are spread out from the fovea. For large RFs, the model was able to form metrically accurate representations of physical space on low-dimensional manifolds embedded within the high-dimensional neural population response space, suggesting that in some cases the neural representation of space may be dimensionally isomorphic with 3D physical space. Smaller RF sizes degraded and distorted the spatial representation, with the smallest RF sizes (present in early visual areas) being unable to recover even a topologically consistent rendition of space on low-dimensional manifolds. Finally, although positional invariance of stimulus responses has long been associated with large RFs in object recognition models, we found RF dispersion rather than RF diameter to be the critical parameter. In fact, at a population level, the modeling suggests that higher ventral stream areas with highly restricted RF dispersion would be unable to achieve positionally-invariant representations beyond this narrow region around fixation.
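The pipeline described above (unlabeled Gaussian population responses analyzed with multidimensional scaling) can be reproduced in miniature using classical Torgerson MDS via double centering. The stimulus count, cell count, and RF width `sigma` below are illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stim, n_cells = 40, 200
pts = rng.uniform(-1, 1, size=(n_stim, 2))       # stimulus positions in 2D
centers = rng.uniform(-1, 1, size=(n_cells, 2))  # RF centres (dispersion)
sigma = 1.0                                      # a *large* RF diameter

# Unlabeled population responses from overlapping Gaussian receptive fields.
R = np.exp(-((pts[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
           / (2 * sigma ** 2))

# Classical MDS on distances between population response vectors:
# double-centre the squared distance matrix, then take top eigenvectors.
D2 = ((R[:, None, :] - R[None, :, :]) ** 2).sum(-1)
J = np.eye(n_stim) - 1.0 / n_stim
B = -0.5 * J @ D2 @ J
w, V = np.linalg.eigh(B)                          # ascending eigenvalues
emb = V[:, -2:] * np.sqrt(np.maximum(w[-2:], 0))  # 2D embedding

def pdist(X):
    """Condensed pairwise Euclidean distances."""
    return np.linalg.norm(X[:, None] - X[None, :], axis=-1)[
        np.triu_indices(len(X), 1)]

r = np.corrcoef(pdist(pts), pdist(emb))[0, 1]     # metric recovery score
```

Shrinking `sigma` toward the inter-centre spacing degrades this recovery score, mirroring the paper's finding that small RFs distort the recovered spatial metric.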
Affiliation(s)
- Sidney R Lehky
- Computational Neuroscience Laboratory, Salk Institute for Biological Studies, La Jolla, CA, USA
20
Romani S, Tsodyks M. Continuous attractors with morphed/correlated maps. PLoS Comput Biol 2010; 6:e1000869. [PMID: 20700490 PMCID: PMC2916844 DOI: 10.1371/journal.pcbi.1000869] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2010] [Accepted: 06/28/2010] [Indexed: 12/03/2022] Open
Abstract
Continuous attractor networks are used to model the storage and representation of analog quantities, such as position of a visual stimulus. The storage of multiple continuous attractors in the same network has previously been studied in the context of self-position coding. Several uncorrelated maps of environments are stored in the synaptic connections, and a position in a given environment is represented by a localized pattern of neural activity in the corresponding map, driven by a spatially tuned input. Here we analyze networks storing a pair of correlated maps, or a morph sequence between two uncorrelated maps. We find a novel state in which the network activity is simultaneously localized in both maps. In this state, a fixed cue presented to the network does not determine uniquely the location of the bump, i.e. the response is unreliable, with neurons not always responding when their preferred input is present. When the tuned input varies smoothly in time, the neuronal responses become reliable and selective for the environment: the subset of neurons responsive to a moving input in one map changes almost completely in the other map. This form of remapping is a non-trivial transformation between the tuned input to the network and the resulting tuning curves of the neurons. The new state of the network could be related to the formation of direction selectivity in one-dimensional environments and hippocampal remapping. The applicability of the model is not confined to self-position representations; we show an instance of the network solving a simple delayed discrimination task.

How is your position in an environment represented in the brain, and how does the representation distinguish between multiple environments? One of the proposed answers relies on continuous attractor neural networks. Consider the web page of your campus map as a network of pixels. Every pixel is a neuron, and nearby pixels excite each other, while distant pairs are inhibited.
As a result of their interactions, a bunch of close-by pixels will light up, indicating your current position as suggested by your web-cam (the sensory input). When you travel to another campus, the common assumption holds that pixels are completely scrambled and the excitatory/inhibitory pattern of connections is summed to the existing one. Now these connections and the sensory input will activate the pixels corresponding to your location in the new campus. The active pixels will look like noise in the old map. But what if the campuses are similar, i.e. the pixels are not completely scrambled? We show that the network has a novel way of distinguishing between the environments, by lighting up distinct subsets of pixels for each campus. This emergent selectivity for the environment could be a mechanism underlying hippocampal remapping and directional selectivity of place cells in 1D environments.
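The superposition of charts summarized in this entry can be illustrated with two ring maps stored in one weight matrix: one in the neurons' original order and one randomly permuted. This is a generic two-chart toy driven by a spatially tuned cue, not the correlated/morphed-map model itself; all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100
x = np.arange(N)

def ring_weights(order):
    """Local excitation / uniform inhibition for neurons in a given order."""
    pos = np.empty(N, int)
    pos[order] = x                      # each neuron's position in this chart
    diff = np.abs(pos[:, None] - pos[None, :])
    d = np.minimum(diff, N - diff)      # circular distance in the chart
    return np.exp(-(d / 6.0) ** 2) - 0.25

map1 = x                                # chart 1: original ordering
map2 = rng.permutation(N)               # chart 2: scrambled ordering
W = ring_weights(map1) + ring_weights(map2)   # both charts, one matrix

# Spatially tuned cue in chart 2: excite neurons near position 30 of map 2.
pos2 = np.empty(N, int); pos2[map2] = x
d2 = np.minimum(np.abs(pos2 - 30), N - np.abs(pos2 - 30))
cue = 0.5 * np.exp(-(d2 / 6.0) ** 2)

r = cue.copy()
for _ in range(30):
    r = np.clip(W @ r + cue, 0.0, 1.0)

def localization(r, order):
    """Resultant length of activity in a chart's coordinates (1 = one bump)."""
    pos = np.empty(N, int); pos[order] = x
    ang = 2 * np.pi * pos / N
    return np.abs(np.sum(r * np.exp(1j * ang))) / max(np.sum(r), 1e-12)

loc1 = localization(r, map1)   # activity looks scattered in chart 1
loc2 = localization(r, map2)   # a single bump in chart 2
```

The same neurons that form one compact bump in chart 2's coordinates look like noise in chart 1's coordinates, the signature of multi-chart storage described in the abstract.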
Affiliation(s)
- Sandro Romani
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
- Misha Tsodyks
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
21
Cerasti E, Treves A. How informative are spatial CA3 representations established by the dentate gyrus? PLoS Comput Biol 2010; 6:e1000759. [PMID: 20454678 PMCID: PMC2861628 DOI: 10.1371/journal.pcbi.1000759] [Citation(s) in RCA: 39] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2009] [Accepted: 03/24/2010] [Indexed: 11/18/2022] Open
Abstract
In the mammalian hippocampus, the dentate gyrus (DG) is characterized by sparse and powerful unidirectional projections to CA3 pyramidal cells, the so-called mossy fibers. Mossy fiber synapses appear to duplicate, in terms of the information they convey, what CA3 cells already receive from entorhinal cortex layer II cells, which project both to the dentate gyrus and to CA3. Computational models of episodic memory have hypothesized that the function of the mossy fibers is to enforce a new, well-separated pattern of activity onto CA3 cells, to represent a new memory, prevailing over the interference produced by the traces of older memories already stored on CA3 recurrent collateral connections. Can this hypothesis apply also to spatial representations, as described by recent neurophysiological recordings in rats? To address this issue quantitatively, we estimate the amount of information DG can impart on a new CA3 pattern of spatial activity, using both mathematical analysis and computer simulations of a simplified model. We confirm that, also in the spatial case, the observed sparse connectivity and level of activity are most appropriate for driving memory storage-and not to initiate retrieval. Surprisingly, the model also indicates that even when DG codes just for space, much of the information it passes on to CA3 acquires a non-spatial and episodic character, akin to that of a random number generator. It is suggested that further hippocampal processing is required to make full spatial use of DG inputs.
Affiliation(s)
- Erika Cerasti
- SISSA, Cognitive Neuroscience Sector, Trieste, Italy.
22
Minciacchi D, Del Tongo C, Carretta D, Nosi D, Granato A. Alterations of the cortico-cortical network in sensori-motor areas of dystrophin deficient mice. Neuroscience 2010; 166:1129-39. [DOI: 10.1016/j.neuroscience.2010.01.040] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2009] [Revised: 01/19/2010] [Accepted: 01/19/2010] [Indexed: 02/09/2023]
23
Voges N, Guijarro C, Aertsen A, Rotter S. Models of cortical networks with long-range patchy projections. J Comput Neurosci 2009; 28:137-54. [PMID: 19866352 DOI: 10.1007/s10827-009-0193-z] [Citation(s) in RCA: 24] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2008] [Revised: 08/25/2009] [Accepted: 10/01/2009] [Indexed: 10/20/2022]
Abstract
The cortex exhibits an intricate vertical and horizontal architecture, the latter often featuring spatially clustered projection patterns, so-called patches. Many network studies of cortical dynamics ignore such spatial structures and assume purely random wiring. Here, we focus on non-random network structures provided by long-range horizontal (patchy) connections that remain inside the gray matter. We investigate how the spatial arrangement of patchy projections influences global network topology and predict its impact on the activity dynamics of the network. Since neuroanatomical data on horizontal projections is rather sparse, we suggest and compare four candidate scenarios of how patchy connections may be established. To identify a set of characteristic network properties that enables us to pin down the differences between the resulting network models, we employ the framework of stochastic graph theory. We find that patchy projections provide an exceptionally efficient way of wiring, as the resulting networks tend to exhibit small-world properties with significantly reduced wiring costs. Furthermore, the eigenvalue spectra, as well as the structure of common in- and output of the networks suggest that different spatial connectivity patterns support distinct types of activity propagation.
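The wiring-cost advantage of patchy long-range projections reported above is easy to check with a toy comparison: equal out-degree per neuron, with targets drawn either uniformly at random or half locally and half into a single remote patch. The layout and numbers below are a hypothetical simplification of the four candidate scenarios studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
xy = rng.uniform(0, 1, size=(n, 2))      # neuron positions on a unit sheet
k = 20                                   # out-degree per neuron
dist = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)

# Random wiring: k targets drawn uniformly among all other neurons.
cost_random = 0.0
for i in range(n):
    targets = rng.choice(np.delete(np.arange(n), i), size=k, replace=False)
    cost_random += dist[i, targets].sum()

# Patchy wiring: half the axon targets the nearest neighbours, half goes
# to one remote "patch" -- the k/2 neurons closest to a random patch centre.
cost_patchy = 0.0
for i in range(n):
    order = np.argsort(dist[i])
    local = order[1:k // 2 + 1]                  # nearest neighbours
    centre = rng.integers(n)
    patch = np.argsort(dist[centre])[:k // 2]    # one compact remote patch
    cost_patchy += dist[i, local].sum() + dist[i, patch].sum()
```

Counting the shared axon trunk to the patch only once, as a real branching axon would allow, lowers the patchy cost further; even this per-edge accounting already shows the saving.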
Affiliation(s)
- Nicole Voges
- Bernstein Center for Computational Neuroscience Freiburg, Albert-Ludwig University, Freiburg, Germany.
24

25
Li N, Cox DD, Zoccolan D, DiCarlo JJ. What response properties do individual neurons need to underlie position and clutter "invariant" object recognition? J Neurophysiol 2009; 102:360-76. [PMID: 19439676 DOI: 10.1152/jn.90745.2008] [Citation(s) in RCA: 61] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Primates can easily identify visual objects over large changes in retinal position--a property commonly referred to as position "invariance." This ability is widely assumed to depend on neurons in inferior temporal cortex (IT) that can respond selectively to isolated visual objects over similarly large ranges of retinal position. However, in the real world, objects rarely appear in isolation, and the interplay between position invariance and the representation of multiple objects (i.e., clutter) remains unresolved. At the heart of this issue is the intuition that the representations of nearby objects can interfere with one another and that the large receptive fields needed for position invariance can exacerbate this problem by increasing the range over which interference acts. Indeed, most IT neurons' responses are strongly affected by the presence of clutter. While external mechanisms (such as attention) are often invoked as a way out of the problem, we show (using recorded neuronal data and simulations) that the intrinsic properties of IT population responses, by themselves, can support object recognition in the face of limited clutter. Furthermore, we carried out extensive simulations of hypothetical neuronal populations to identify the essential individual-neuron ingredients of a good population representation. These simulations show that the crucial neuronal property to support recognition in clutter is not preservation of response magnitude, but preservation of each neuron's rank-order object preference under identity-preserving image transformations (e.g., clutter). Because IT neuronal responses often exhibit that response property, while neurons in earlier visual areas (e.g., V1) do not, we suggest that preserving the rank-order object preference regardless of clutter, rather than the response magnitude, more precisely describes the goal of individual neurons at the top of the ventral visual stream.
Affiliation(s)
- Nuo Li
- McGovern Institute for Brain Research, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, 77 Massachusetts Ave., Cambridge, MA 02139, USA
26
Roudi Y, Tyrcha J, Hertz J. Ising model for neural data: model quality and approximate methods for extracting functional connectivity. Phys Rev E Stat Nonlin Soft Matter Phys 2009; 79:051915. [PMID: 19518488 DOI: 10.1103/physreve.79.051915] [Citation(s) in RCA: 76] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/18/2009] [Revised: 04/03/2009] [Indexed: 05/24/2023]
Abstract
We study pairwise Ising models for describing the statistics of multineuron spike trains, using data from a simulated cortical network. We explore efficient ways of finding the optimal couplings in these models and examine their statistical properties. To do this, we extract the optimal couplings for subsets of size up to 200 neurons, essentially exactly, using Boltzmann learning. We then study the quality of several approximate methods for finding the couplings by comparing their results with those found from Boltzmann learning. Two of these methods--inversion of the Thouless-Anderson-Palmer equations and an approximation proposed by Sessak and Monasson--are remarkably accurate. Using these approximations for larger subsets of neurons, we find that extracting couplings using data from a subset smaller than the full network tends systematically to overestimate their magnitude. This effect is described qualitatively by infinite-range spin-glass theory for the normal phase. We also show that a globally correlated input to the neurons in the network leads to a small increase in the average coupling. However, the pair-to-pair variation in the couplings is much larger than this and reflects intrinsic properties of the network. Finally, we study the quality of these models by comparing their entropies with that of the data. We find that they perform well for small subsets of the neurons in the network, but the fit quality starts to deteriorate as the subset size grows, signaling the need to include higher-order correlations to describe the statistics of large networks.
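For a system small enough to enumerate all states, the Boltzmann learning used in this paper reduces to exact moment matching by gradient ascent on the log-likelihood. The sketch below recovers the couplings of a known 5-spin Ising model; the coupling scale, learning rate, and iteration count are illustrative assumptions.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
n = 5
states = np.array(list(product([-1, 1], repeat=n)))   # all 2^5 spin patterns

def moments(h, J):
    """Exact <s_i> and <s_i s_j> for p(s) ~ exp(h.s + s.J.s / 2)."""
    E = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
    p = np.exp(E - E.max())
    p /= p.sum()
    m = p @ states
    C = states.T @ (states * p[:, None])
    return m, C

# A ground-truth model supplies the "data" moments.
J_true = rng.normal(0.0, 0.3, (n, n))
J_true = 0.5 * (J_true + J_true.T)        # symmetric couplings
np.fill_diagonal(J_true, 0.0)
h_true = rng.normal(0.0, 0.2, n)
m_data, C_data = moments(h_true, J_true)

# Boltzmann learning: raise couplings where the data correlate more than
# the model, lower them where they correlate less (exact, no sampling).
h = np.zeros(n)
J = np.zeros((n, n))
lr = 0.05
for _ in range(10000):
    m, C = moments(h, J)
    h += lr * (m_data - m)
    dJ = lr * (C_data - C)
    np.fill_diagonal(dJ, 0.0)             # no self-couplings
    J += dJ
```

Because the likelihood is concave and the generating model lies in the fitted family, the learned `h` and `J` converge to the true parameters; the approximate inversion methods compared in the paper trade this exactness for scalability.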
Affiliation(s)
- Yasser Roudi
- NORDITA, Roslagstullsbacken 23, 10691 Stockholm, Sweden
27
Kropff E, Treves A. The emergence of grid cells: Intelligent design or just adaptation? Hippocampus 2009; 18:1256-69. [PMID: 19021261 DOI: 10.1002/hipo.20520] [Citation(s) in RCA: 161] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
Individual medial entorhinal cortex (mEC) 'grid' cells provide a representation of space that appears to be essentially invariant across environments, modulo simple transformations, in contrast to multiple, rapidly acquired hippocampal maps; it may therefore be established gradually during rodent development. We explore with a simplified mathematical model the possibility that the self-organization of multiple grid fields into a triangular grid pattern may be a single-cell process, driven by firing rate adaptation and slowly varying spatial inputs. A simple analytical derivation indicates that triangular grids are favored asymptotic states of the self-organizing system, and computer simulations confirm that such states are indeed reached during a model learning process, provided it is sufficiently slow to effectively average out fluctuations. The interactions among local ensembles of grid units serve solely to stabilize a common grid orientation. Spatial information, in the real mEC network, may be provided by any combination of feedforward cortical afferents and feedback hippocampal projections from place cells, since either input alone is likely sufficient to yield grid fields.
Affiliation(s)
- Emilio Kropff
- Kavli Institute for Systems Neuroscience and Centre for the Biology of Memory, NTNU-Norwegian University of Science and Technology, 7489 Trondheim, Norway
28
Goris RLT, Op de Beeck HP. Neural representations that support invariant object recognition. Front Comput Neurosci 2009; 3:3. [PMID: 19242556 PMCID: PMC2647334 DOI: 10.3389/neuro.10.003.2009] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2008] [Accepted: 02/04/2009] [Indexed: 11/13/2022] Open
Abstract
Neural mechanisms underlying invariant behaviour such as object recognition are not well understood. For brain regions critical for object recognition, such as inferior temporal cortex (ITC), there is now ample evidence indicating that single cells code for many stimulus aspects, implying that only a moderate degree of invariance is present. However, recent theoretical and empirical work seems to suggest that integrating responses of multiple non-invariant units may produce invariant representations at population level. We provide an explicit test for the hypothesis that a linear read-out mechanism of a pool of units resembling ITC neurons may achieve invariant performance in an identification task. A linear classifier was trained to decode a particular value in a 2-D stimulus space using as input the response pattern across a population of units. Only one dimension was relevant for the task, and the stimulus location on the irrelevant dimension (ID) was kept constant during training. In a series of identification tests, the stimulus location on the relevant dimension (RD) and ID was manipulated, yielding estimates for both the level of sensitivity and tolerance reached by the network. We studied the effects of several single-cell characteristics as well as population characteristics typically considered in the literature, but found little support for the hypothesis. While the classifier averages out effects of idiosyncratic tuning properties and inter-unit variability, its invariance is very much determined by the (hypothetical) ‘average’ neuron. Consequently, even at population level there exists a fundamental trade-off between selectivity and tolerance, and invariant behaviour does not emerge spontaneously.
Affiliation(s)
- Robbe L T Goris
- Laboratory of Experimental Psychology, University of Leuven, Leuven, Belgium
29
Akrami A, Liu Y, Treves A, Jagadeesh B. Converging neuronal activity in inferior temporal cortex during the classification of morphed stimuli. Cereb Cortex 2008; 19:760-76. [PMID: 18669590 PMCID: PMC2651479 DOI: 10.1093/cercor/bhn125] [Citation(s) in RCA: 38] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
How does the brain dynamically convert incoming sensory data into a representation useful for classification? Neurons in inferior temporal (IT) cortex are selective for complex visual stimuli, but their response dynamics during perceptual classification is not well understood. We studied IT dynamics in monkeys performing a classification task. The monkeys were shown visual stimuli that were morphed (interpolated) between pairs of familiar images. Their ability to classify the morphed images depended systematically on the degree of morph. IT neurons were selected that responded more strongly to one of the 2 familiar images (the effective image). The responses tended to peak approximately 120 ms following stimulus onset with an amplitude that depended almost linearly on the degree of morph. The responses then declined, but remained above baseline for several hundred ms. This sustained component remained linearly dependent on morph level for stimuli more similar to the ineffective image but progressively converged to a single response profile, independent of morph level, for stimuli more similar to the effective image. Thus, these neurons represented the dynamic conversion of graded sensory information into a task-relevant classification. Computational models suggest that these dynamics could be produced by attractor states and firing rate adaptation within the population of IT neurons.
Affiliation(s)
- Athena Akrami
- SISSA International School for Advanced Studies, Trieste, Italy