1
Abstract
This commentary questions the target article's inferences from a limited set of empirical data to support its model and conceptual scheme. Especially questionable is the attribution of internal-representation properties to an assembly of cells in a discrete cortical module firing at a discrete attractor frequency. Alternative inferences are drawn from cortical-cooling and cell-firing data that point to the internal representation as a broad and specific cortical network defined by cortico-cortical connectivity. Active memory, it is proposed, consists in the sustained activation of the component neuron populations of the network.
2
Distributed cell assemblies and detailed cell models. Behav Brain Sci 2010. [DOI: 10.1017/s0140525x00040292]
Abstract
Hebbian cell-assembly theory and attractor networks are good starting points for modeling cortical processing. Detailed cell models can be useful in understanding the dynamics of attractor networks. Cell assemblies are likely to be distributed, with the cortical column as the local processing unit. Synaptic memory may be dominant in all but the first couple of seconds.
3
Another ANN model for the Miyashita experiments. Behav Brain Sci 2010. [DOI: 10.1017/s0140525x00040310]
Abstract
The Miyashita experiments are very interesting, and the results should be examined from the viewpoint of attractor dynamics. Amit's target article shows a path toward realistic modeling by artificial neural networks (ANN), but it is not necessarily the only one. I introduce another model that can explain a substantial part of the empirical observations and makes an interesting prediction. This model consists of units that have nonmonotonic input-output characteristics, realized with local inhibitory neurons.
4
Abstract
Recurrent excitation is experimentally well documented in cortical populations. It provides for intracortical excitatory biases that linearize negative feedback interactions and induce macroscopic state transitions during perception. The concept of the local neighborhood should be expanded to spatial patterns as the basis for perception, in which large areas of cortex are bound into cooperative behavior, with near-silent columns as important as the active columns revealed by unit recording.
5
Abstract
Interpreting the Miyashita et al. experiments in terms of a cell-assembly representation does not adequately explain the performance of Miyashita's monkeys on novel stimuli. We will argue that the latter observations point to a compositional representation and suggest a dynamics involving rapid and reversible binding of distinct activity patterns.
6
Reverberation reconsidered: On the path to cognitive theory. Behav Brain Sci 2010. [DOI: 10.1017/s0140525x0004019x]
Abstract
Amit's work addresses a critical issue in cognitive science: the structure of neural representations. The use of Hebbian cell assemblies is a positive step, and we now need to consider its role in a larger cognitive theory. When considering the dynamics of a system built out of attractors, a more limited version of reverberation becomes necessary.
7
Abstract
Cortical reverberations may induce synaptic changes that underlie developmental plasticity as well as long-term memory. They may be especially important for the consolidation of synaptic changes. Reverberations in cortical networks should have particular significance during development, when large numbers of new representations are formed. This includes the formation of representations across different sensory modalities.
8
How do local reverberations achieve global integration? Behav Brain Sci 2010. [DOI: 10.1017/s0140525x00040371]
Abstract
Amit's Hebbian model risks being overexplanatory, since it does not depend on specific physiological modelling of cortical ANNs, but concentrates on those phenomena which are modelled by a large class of ANNs. While offering a strong demonstration of the presence of Hebb's “cell assemblies,” it does not offer an equal account of Hebb's “phase sequence” concept.
9
Abstract
The concept of an attractor in a mathematical dynamical system is reviewed. Emphasis is placed on the distinction between a cell assembly, the corresponding attractor, and the attractor dynamics. The biological significance of these entities is discussed, especially the question of whether the representation of the stimulus requires the full attractor dynamics, or merely the cell assembly as a set of reverberating neurons. Comparison is made to Freeman's study of dynamic patterns in olfaction.
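The fixed-point-attractor picture under review can be made concrete with a toy example. The sketch below is an illustration under standard Hopfield-model assumptions, not this commentary's own formalism; the pattern count, network size, and corruption level are arbitrary choices. A few random patterns are stored with a Hebbian outer-product rule, and a corrupted cue relaxes back onto the stored pattern under the recurrent dynamics:

```python
import numpy as np

# Toy fixed-point attractor: store random +/-1 patterns with a Hebbian
# outer-product rule, then let a corrupted cue relax under the dynamics.
rng = np.random.default_rng(0)
n_units, n_patterns = 100, 3
patterns = rng.choice([-1, 1], size=(n_patterns, n_units))

W = patterns.T @ patterns / n_units        # Hebbian weights
np.fill_diagonal(W, 0)                     # no self-connections

cue = patterns[0].copy()
flipped = rng.choice(n_units, size=20, replace=False)
cue[flipped] *= -1                         # corrupt 20% of the bits

state = cue
for _ in range(10):                        # iterate the network dynamics
    state = np.where(W @ state >= 0, 1, -1)

overlap = state @ patterns[0] / n_units    # 1.0 means perfect recall
```

With only three stored patterns across a hundred units the load is far below capacity, so the corrupted cue falls cleanly into the basin of the first pattern.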
10
Abstract
The cell assembly as a simple attractor cannot explain many cognitive phenomena. It must be a highly structured network that can sustain highly structured excitation patterns. Moreover, a cell assembly must be distributed more widely in space than a single square millimeter.
11
Abstract
The neurophysiological evidence from the Miyashita group's experiments on monkeys, as well as cognitive experience common to us all, suggests that local neuronal spike rate distributions might persist in the absence of their eliciting stimulus. In Hebb's cell-assembly theory, learning dynamics stabilize such self-maintaining reverberations. Quasi-quantitative modeling of the experimental data on internal representations in association-cortex modules identifies the reverberations (delay spike activity) as the internal code (representation). This leads to cognitive and neurophysiological predictions, many following directly from the language used to describe the activity in the experimental delay period, others from the details of how the model captures the properties of the internal representations.
12
Additional tests of Amit's attractor neural networks. Behav Brain Sci 2010. [DOI: 10.1017/s0140525x00040255]
Abstract
Further tests of Amit's model are indicated. One strategy is to use the apparent coding sparseness of the model to make predictions about coding sparseness in Miyashita's network. A second approach is to use memory overload to induce false positive responses in modules and biological systems. In closing, the importance of temporal coding and timing requirements in developing biologically plausible attractor networks is mentioned.
13
Morita M. Computational study on the neural mechanism of sequential pattern memory. Brain Research. Cognitive Brain Research 1996; 5:137-46. [PMID: 9049080] [DOI: 10.1016/s0926-6410(96)00050-x]
Abstract
The brain stores various kinds of temporal sequences as long-term memories, such as motor sequences, episodes, and melodies. The present study aims at clarifying the general principle underlying such memories. For this purpose, the memory mechanism of sequential patterns is examined from the viewpoint of computational theory and neural network modeling, and a neural network model of sequential pattern memory based on a simple and reasonable principle is presented. Specifically, spatio-temporal patterns varying gradually with time are stably stored in a network consisting of pairs of excitatory and inhibitory cells with recurrent connections; such a pair can achieve the nonmonotonic input-output characteristics which are essential for smooth sequential recall. Storage is performed using a simple learning algorithm based on the covariance rule, which requires only that the sequence be input several times; retrieval is highly tolerant to noise. It is thought that a similar principle is used in cerebral memory systems, and the relevance of this model to the brain is discussed. Possible roles of the hippocampus and basal ganglia in memorizing sequences are also suggested.
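The covariance rule the abstract invokes can be sketched in a few lines. The function below is an illustrative sketch, not Morita's implementation; the names, rate vectors, and learning rate `eta` are invented for the example. Subtracting the mean rates means that correlated deviations from baseline, rather than raw coactivity, drive the weight change:

```python
import numpy as np

def covariance_update(W, pre, post, eta=0.1):
    """One covariance-rule step: dW_ij ~ (post_i - <post>) * (pre_j - <pre>).

    W: (n_post, n_pre) weight matrix; pre, post: firing-rate vectors.
    eta is an illustrative learning rate, not a value from the paper.
    """
    return W + eta * np.outer(post - post.mean(), pre - pre.mean())

# Example: one update from random rates onto an initially zero matrix.
rng = np.random.default_rng(1)
pre, post = rng.random(8), rng.random(5)
W = covariance_update(np.zeros((5, 8)), pre, post)
```

Because both factors are mean-centered, the update sums to zero over the whole matrix: potentiation at some synapses is balanced by depression at others.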
Affiliation(s)
- M Morita
- Institute of Information Sciences and Electronics, University of Tsukuba, Ibaraki, Japan.
14
Abstract
Conventional neural network models for temporal association generally do not work well in the absence of synchronizing neurons. This is because their dynamical properties are fundamentally not suitable for storing sequential patterns, no matter what storage or learning algorithm is used. The present article describes a nonmonotone neural network (NNN) model in which sequential patterns are stored by being embedded in a trajectory attractor of the dynamical system, and recalled stably and smoothly without synchronization; during recall the network state successively moves along the trajectory. A simple and natural learning algorithm for the NNN is also presented, in which one only has to vary the input pattern gradually and modify the synaptic weights according to a kind of covariance rule; the network state then follows slightly behind the input pattern, and its trajectory becomes an attractor after a small number of repetitions.
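The nonmonotone characteristic can be visualized with a simple stand-in function; the particular form below is an assumption for illustration, not the NNN's published activation. The response grows for small inputs but decays again as |u| becomes large, which is the property that keeps the state from locking into a fixed point and lets it slide along the stored trajectory:

```python
import numpy as np

# Illustrative nonmonotone activation: rises near u = 0, falls off for
# large |u|. The Gaussian envelope and its width are stand-in choices.
def f(u, width=1.0):
    return np.tanh(u) * np.exp(-0.5 * (u / width) ** 2)

u = np.linspace(0.0, 5.0, 501)
y = f(u)
u_peak = u[np.argmax(y)]    # the response peaks at a finite input ...
y_tail = f(5.0)             # ... and decays back toward zero beyond it
```

Contrast this with a monotone sigmoid, whose output saturates and stays high: here strong recurrent input suppresses the unit again, destabilizing any would-be fixed point.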
15
How representation works is more important than what representations are. Behav Brain Sci 1995. [DOI: 10.1017/s0140525x00040218]
Abstract
A theory of representation is incomplete if it states “representations are X,” where X can be symbols, cell assemblies, functional states, or the flock of birds from Theaetetus, without explaining the nature of the link between the universe of Xs and the world. Amit's thesis, equating representations with reverberations in Hebbian cell assemblies, will only be considered a solution to the problem of representation when it is complemented by a theory of how a reverberation in the brain can be a representation of anything.
16
The functional meaning of reverberations for sensoric and contextual encoding. Behav Brain Sci 1995. [DOI: 10.1017/s0140525x00040279]
Abstract
Amit argues that the local neuronal spike rate that persists (reverberates) in the absence of the eliciting stimulus represents the code of that stimulus. Based on the general argument that the inferred functional meaning of reverberation depends in part on the representational assumptions one makes, reverberations may be important only for the encoding of contextual information.
17
How to decide whether a neural representation is a cognitive concept? Behav Brain Sci 1995. [DOI: 10.1017/s0140525x00040346]
Abstract
A distinction should be made between the formation of stimulus-driven associations and cognitive concepts. To test the learning mode of a neural network, we propose a simple and classic input-output test: the discrimination shift task. Feed-forward PDP models appear to form stimulus-driven associations. A Hopfield network should be extended to apply the test.
18
Association and computation with cell assemblies. Behav Brain Sci 1995. [DOI: 10.1017/s0140525x0004036x]
Abstract
The cell assembly is an important concept for cognitive psychology. Cognitive processing will to a large extent depend on the relations that can exist between different assemblies. A potential relation between assemblies can already be seen in the occurrence of (classical) conditioning. However, the resulting associations between assemblies only produce behavioristic processing or so-called regular computation. Higher-level cognitive abilities most likely result from nonregular computation. I discuss the possibility of this form of computation in terms of cell assemblies.
19
Abstract
Persistent activity can be the product of mechanisms other than attractor reverberations. The single-unit data presented by Amit cannot discriminate between the different mechanisms. In fact, single-unit data do not appear to be adequate for testing neural network models.
20
Attractors – don't get sucked in. Behav Brain Sci 1995. [DOI: 10.1017/s0140525x00040309]
Abstract
Every immediate memory is unique; it is therefore unlikely to consist of an attractor or even a combination of attractors. In the present state of knowledge about the chemistry of synaptic transmission, there is no reason to look beyond neurons that directly receive sensory afferents for the afterdischarges that correspond to active memories.
21
Local or transcortical assemblies? Some evidence from cognitive neuroscience. Behav Brain Sci 1995. [DOI: 10.1017/s0140525x00040334]
Abstract
Amit defines cell assemblies as local cortical neuron populations with strong internal connections. However, Hebb himself proposed that cell assemblies are distributed over different cortical areas (nonlocal or transcortical assemblies). We review evidence from cognitive neuroscience and neuropsychology supporting the assumption that cell assemblies are transcortical.
22
Hebb's accomplishments misunderstood. Behav Brain Sci 1995. [DOI: 10.1017/s0140525x00040267]
Abstract
Amit's effort to provide stronger theoretical and empirical support for Hebb's cell-assembly concept is admirable, but we have serious reservations about the perspective presented in the target article. For Hebb, the cell assembly was a building block; by contrast, the framework proposed here eschews the need to fit the assembly into a broader picture of its function.
23
Abstract
Amit's “Attractor Neural Network” perspective on cognition raises difficult technical problems already met by prior dynamical models. This commentary sketches briefly some of them, concerning the internal topological structure of attractors, the constituency problem, the possibility of activating several attractors simultaneously, and the different kinds of dynamical structures one can use to model brain activity: point attractors, strange attractors, synchronized arrays of oscillators, synfire chains, and so forth.
24
An evolutionary perspective on Hebb's reverberatory representations. Behav Brain Sci 1995. [DOI: 10.1017/s0140525x00040280]
Abstract
Hebbian mechanisms are justified according to their functional utility in an evolutionary sense. The selective advantage of correlating content-contingent stimuli reflects the putative common cause of temporally or spatially contiguous inputs. The selective consequences of such correlations are discussed using examples from the evolution of signal form in sexual selection and model-mimic coevolution. We suggest that evolutionary justification might be considered in addition to neurophysiological plausibility when constructing representational models.
25
Empirical and theoretical active memory: The proper context. Behav Brain Sci 1995. [DOI: 10.1017/s0140525x00040383]
Abstract
The context of the target article is delimited again, underlining the intended location of the argument in the bottom-up hierarchy of brain study. The central message is that collective delay-activity distributions (reverberations) in cortical modules extend the role of a spike (a potential information carrier across long distances) to an active memory of structured, learned information that can be carried across long time intervals. Moreover, the population code of the reverberations makes them readable down the cortical processing stream. Most of the critical comments are then interpreted and addressed in relation to misreadings of the proper context. The price of limiting the context (in cognitive, behavioral, and computational terms) is weighed against the advantages of clear, direct contact with experiment on the one hand, and with a well-controlled body of modeling and analysis on the other.