1
Deep Intelligence: What AI Should Learn from Nature's Imagination. Cognit Comput 2023. DOI: 10.1007/s12559-023-10124-9
|
2
Hasani R, Ferrari G, Yamamoto H, Tanii T, Prati E. Role of Noise in Spontaneous Activity of Networks of Neurons on Patterned Silicon Emulated by Noise-Activated CMOS Neural Nanoelectronic Circuits. Nano Express 2021. DOI: 10.1088/2632-959x/abf2ae
Abstract
Background noise in biological cortical microcircuits constitutes a powerful resource for their computational tasks, including, for instance, the synchronization of spiking activity, the enhancement of the speed of information transmission, and the minimization of signal corruption. We explore the correlation of the spontaneous firing activity of ≈100 biological neurons adhering to engineered scaffolds by controlling the number of functionalized patterned connection pathways among groups of neurons. We then emulate the biological system with a series of noise-activated silicon neural network simulations. We show that by suitably tuning both the noise amplitude and the number of synapses between the silicon neurons, the same controlled correlation as in the biological population is achieved. Our results extend to a realistic silicon nanoelectronic neuron design using noise injection, to be exploited in artificial spiking neural networks such as liquid state machines and recurrent neural networks for stochastic computation.
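The noise-activated mechanism the abstract describes can be illustrated with a generic leaky integrate-and-fire neuron driven purely by Gaussian noise: larger noise amplitude produces more spontaneous spikes. This is a minimal sketch under assumed, illustrative parameters, not the authors' CMOS circuit model.

```python
import math
import random

def simulate_lif_with_noise(noise_sigma, n_steps=10000, seed=0):
    """Generic leaky integrate-and-fire neuron driven only by Gaussian
    noise; returns the number of spontaneous spikes.  All parameter
    values are illustrative, not taken from the paper."""
    rng = random.Random(seed)
    dt = 1e-3                     # time step, s
    tau = 20e-3                   # membrane time constant, s
    v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0
    v = v_rest
    spikes = 0
    for _ in range(n_steps):
        # leak toward rest plus a noise term scaled for the time step
        noise = noise_sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        v += dt * (-(v - v_rest) / tau) + noise
        if v >= v_thresh:
            spikes += 1
            v = v_reset
    return spikes

# Weak noise rarely crosses threshold; strong noise fires readily.
low = simulate_lif_with_noise(noise_sigma=1.0)
high = simulate_lif_with_noise(noise_sigma=8.0)
```

Tuning `noise_sigma` here plays the role that injected noise amplitude plays in the silicon neurons of the paper.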
3
Chen Q, Luley R, Wu Q, Bishop M, Linderman RW, Qiu Q. AnRAD: A Neuromorphic Anomaly Detection Framework for Massive Concurrent Data Streams. IEEE Trans Neural Netw Learn Syst 2018; 29:1622-1636. PMID: 28328516. DOI: 10.1109/tnnls.2017.2676110
Abstract
The evolution of high-performance computing technologies has enabled the large-scale implementation of neuromorphic models and pushed research in computational intelligence into a new era. Among machine learning applications, unsupervised detection of anomalous streams is especially challenging due to the requirements of detection accuracy and real-time performance. Designing a computing framework that harnesses the growing computing power of multicore systems while maintaining high sensitivity and specificity to anomalies is an urgent research topic. In this paper, we propose anomaly recognition and detection (AnRAD), a bio-inspired detection framework that performs probabilistic inferences. We analyze feature dependency and develop a self-structuring method that learns an efficient confabulation network from unlabeled data. This network is capable of fast incremental learning, continuously refining its knowledge base with streaming data. Compared with several existing anomaly detection approaches, our method provides competitive detection quality. Furthermore, we exploit the massively parallel structure of the AnRAD framework: our implementations of the detection algorithm on a graphics processing unit and on the Xeon Phi coprocessor both obtain substantial speedups over the sequential implementation on a general-purpose microprocessor. The framework provides real-time service to concurrent data streams within diversified knowledge contexts, and can be applied to large problems with multiple local patterns. Experimental results demonstrate high computing performance and memory efficiency. For vehicle behavior detection, the framework is able to monitor up to 16,000 vehicles (data streams) and their interactions in real time with a single commodity coprocessor, using less than 0.2 ms per test subject. Finally, the detection network is ported to our spiking neural network simulator to show the potential for adapting to emerging neuromorphic architectures.
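The core idea of likelihood-based anomaly scoring over learned feature co-occurrences can be sketched far more simply than the AnRAD framework itself. The class and record format below are hypothetical illustrations: counts of feature-value pairs are learned from unlabeled data, and a record is scored by the negative log-likelihood of its pairs.

```python
import math
from collections import defaultdict

class CooccurrenceAnomalyScorer:
    """Minimal sketch of probabilistic anomaly scoring over feature
    co-occurrences, in the spirit of (but far simpler than) AnRAD.
    Learns pairwise counts from unlabeled records; scores new records
    by how improbable their feature pairs are."""

    def __init__(self):
        self.pair_counts = defaultdict(int)
        self.total = 0

    def learn(self, record):
        # record: tuple of discrete feature values
        for i in range(len(record)):
            for j in range(i + 1, len(record)):
                self.pair_counts[(i, record[i], j, record[j])] += 1
        self.total += 1

    def score(self, record):
        # Higher score = more anomalous (negative log-likelihood,
        # with add-one smoothing for unseen pairs).
        s = 0.0
        for i in range(len(record)):
            for j in range(i + 1, len(record)):
                c = self.pair_counts.get((i, record[i], j, record[j]), 0)
                s += -math.log((c + 1) / (self.total + 1))
        return s

scorer = CooccurrenceAnomalyScorer()
for _ in range(100):
    scorer.learn(("slow", "lane1"))      # a frequent, normal pattern
normal = scorer.score(("slow", "lane1"))
odd = scorer.score(("fast", "lane1"))    # unseen feature combination
```

Incremental learning falls out naturally: `learn` can keep running on the stream while `score` serves queries, which is the property that makes this style of model attractive for concurrent data streams.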
4
Badin AS, Fermani F, Greenfield SA. The Features and Functions of Neuronal Assemblies: Possible Dependency on Mechanisms beyond Synaptic Transmission. Front Neural Circuits 2017; 10:114. PMID: 28119576. PMCID: PMC5223595. DOI: 10.3389/fncir.2016.00114
Abstract
"Neuronal assemblies" are defined here as coalitions within the brain of millions of neurons extending in space up to 1-2 mm, and lasting for hundreds of milliseconds: as such they could potentially link bottom-up, micro-scale with top-down, macro-scale events. The perspective first compares the features in vitro versus in vivo of this underappreciated "meso-scale" level of brain processing, secondly considers the various diverse functions in which assemblies may play a pivotal part, and thirdly analyses whether the surprisingly spatially extensive and prolonged temporal properties of assemblies can be described exclusively in terms of classic synaptic transmission or whether additional, different types of signaling systems are likely to operate. Based on our own voltage-sensitive dye imaging (VSDI) data acquired in vitro we show how restriction to only one signaling process, i.e., synaptic transmission, is unlikely to be adequate for modeling the full profile of assemblies. Based on observations from VSDI with its protracted spatio-temporal scales, we suggest that two other, distinct processes are likely to play a significant role in assembly dynamics: "volume" transmission (the passive diffusion of diverse bioactive transmitters, hormones, and modulators), as well as electrotonic spread via gap junctions. We hypothesize that a combination of all three processes has the greatest potential for deriving a realistic model of assemblies and hence elucidating the various complex brain functions that they may mediate.
Affiliation(s)
- Antoine-Scott Badin
- Neuro-Bio Ltd., Culham Science Centre, Abingdon, UK; Department of Physiology, Anatomy and Genetics, Mann Group, University of Oxford, Oxford, UK
5
Spike-Based Bayesian-Hebbian Learning of Temporal Sequences. PLoS Comput Biol 2016; 12:e1004954. PMID: 27213810. PMCID: PMC4877102. DOI: 10.1371/journal.pcbi.1004954
Abstract
Many cognitive and motor functions are enabled by the temporal representation and processing of stimuli, but it remains an open issue how neocortical microcircuits can reliably encode and replay such sequences of information. To better understand this, a modular attractor memory network is proposed in which meta-stable sequential attractor transitions are learned through changes to synaptic weights and intrinsic excitabilities via the spike-based Bayesian Confidence Propagation Neural Network (BCPNN) learning rule. We find that the formation of distributed memories, embodied by increased periods of firing in pools of excitatory neurons, together with asymmetrical associations between these distinct network states, can be acquired through plasticity. The model's feasibility is demonstrated using simulations of adaptive exponential integrate-and-fire (AdEx) model neurons. We show that the learning and speed of sequence replay depend on a confluence of biophysically relevant parameters, including stimulus duration, level of background noise, ratio of synaptic currents, and strengths of short-term depression and adaptation. Moreover, sequence elements are shown to flexibly participate multiple times in the sequence, suggesting that spiking attractor networks of this type can support an efficient combinatorial code. The model provides a principled approach towards understanding how multiple interacting plasticity mechanisms can coordinate hetero-associative learning in unison.

From one moment to the next, in an ever-changing world, and awash in a deluge of sensory data, the brain fluidly guides our actions throughout an astonishing variety of tasks. Processing this ongoing bombardment of information is a fundamental problem faced by its underlying neural circuits. Given that the structure of our actions, along with the organization of the environment in which they are performed, can be intuitively decomposed into sequences of simpler patterns, an encoding strategy reflecting the temporal nature of these patterns should offer an efficient approach for assembling more complex memories and behaviors. We present a model that demonstrates how activity could propagate through recurrent cortical microcircuits as a result of a learning rule based on neurobiologically plausible time courses and dynamics. The model predicts that the interaction between several learning and dynamical processes constitutes a compound mnemonic engram that can flexibly generate sequential step-wise increases of activity within neural populations.
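The probabilistic core of the BCPNN rule mentioned above is that a weight estimates the log-ratio between the joint and marginal activation probabilities of the pre- and postsynaptic units, w = log(P(pre, post) / (P(pre) P(post))). The sketch below is a simplified rate-based version with a single exponential trace per probability; the published rule is spike-based with cascaded traces and also learns intrinsic excitabilities.

```python
import math

def bcpnn_weight(pre_rates, post_rates, tau=10.0, dt=1.0, eps=1e-6):
    """Simplified rate-based BCPNN-style weight: exponentially
    filtered estimates of P(pre), P(post) and P(pre, post), combined
    as w = log(P(pre, post) / (P(pre) * P(post))).  A sketch of the
    probabilistic structure only, not the paper's spike-based rule."""
    a = dt / tau                  # filter gain per step
    pi = pj = pij = eps           # trace initialisation
    for x, y in zip(pre_rates, post_rates):
        pi += a * (x - pi)        # marginal trace, presynaptic
        pj += a * (y - pj)        # marginal trace, postsynaptic
        pij += a * (x * y - pij)  # joint trace
    return math.log((pij + eps) / ((pi + eps) * (pj + eps)))

# Correlated activity drives the weight positive;
# anti-correlated activity drives it strongly negative.
corr = bcpnn_weight([1, 1, 0, 0] * 25, [1, 1, 0, 0] * 25)
anti = bcpnn_weight([1, 1, 0, 0] * 25, [0, 0, 1, 1] * 25)
```

The sign structure is what lets such networks store asymmetric associations between attractor states: units that fire in sequence develop positive forward couplings, while units that never co-fire are suppressed.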
6
Clark I, Dumas G. The Regulation of Task Performance: A Trans-Disciplinary Review. Front Psychol 2016; 6:1862. PMID: 26779050. PMCID: PMC4703823. DOI: 10.3389/fpsyg.2015.01862
Abstract
Definitions of meta-cognition typically have two components: (1) knowledge about one's own cognitive functioning; and, (2) control over one's own cognitive activities. Since Flavell and his colleagues provided the empirical foundation on which to build studies of meta-cognition and the autonoetic (self) knowledge required for effective learning, the intervening years have seen the extensive dissemination of theoretical and empirical research on meta-cognition, which now encompasses a variety of issues and domains including educational psychology and neuroscience. Nevertheless, the psychological and neural underpinnings of meta-cognitive predictions and reflections that determine subsequent regulation of task performance remain ill understood. This article provides an outline of meta-cognition in the science of education with evidence drawn from neuroimaging, psycho-physiological, and psychological literature. We will rigorously explore research that addresses the pivotal role of the prefrontal cortex (PFC) in controlling the meta-cognitive processes that underpin the self-regulated learning (SRL) strategies learners employ to regulate task performance. The article delineates what those strategies are, and how the learning environment can facilitate or frustrate strategy use by influencing learners' self-efficacy.
Affiliation(s)
- Ian Clark
- Nagoya University of Commerce and Business, Nagoya, Japan
- Guillaume Dumas
- Human Genetics and Cognitive Functions Unit, Department of Neuroscience, Institut Pasteur, Paris, France; Genes, Synapses and Cognition, UMR 3571, Centre National de la Recherche Scientifique, Paris, France; Human Genetics and Cognitive Functions, Sorbonne Paris Cité, University Paris Diderot, Paris, France
7
Malagarriga D, Villa AEP, Garcia-Ojalvo J, Pons AJ. Mesoscopic segregation of excitation and inhibition in a brain network model. PLoS Comput Biol 2015; 11:e1004007. PMID: 25671573. PMCID: PMC4324935. DOI: 10.1371/journal.pcbi.1004007
Abstract
Neurons in the brain are known to operate under a careful balance of excitation and inhibition, which maintains neural microcircuits within the proper operational range. How this balance is played out at the mesoscopic level of neuronal populations is, however, less clear. In order to address this issue, here we use a coupled neural mass model to study computationally the dynamics of a network of cortical macrocolumns operating in a partially synchronized, irregular regime. The topology of the network is heterogeneous, with a few of the nodes acting as connector hubs while the rest are relatively poorly connected. Our results show that in this type of mesoscopic network excitation and inhibition spontaneously segregate, with some columns acting mainly in an excitatory manner while some others have predominantly an inhibitory effect on their neighbors. We characterize the conditions under which this segregation arises, and relate the character of the different columns with their topological role within the network. In particular, we show that the connector hubs are preferentially inhibitory, the more so the larger the node's connectivity. These results suggest a potential mesoscale organization of the excitation-inhibition balance in brain networks.
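A network of coupled cortical columns of the kind studied here can be sketched with generic Wilson-Cowan-style population dynamics: each "column" holds an excitatory and an inhibitory population, and columns interact through their excitatory outputs. This is an illustrative substrate under assumed parameters, not the specific neural mass model used in the paper.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def simulate_columns(coupling, n_cols=4, n_steps=2000, dt=0.01):
    """Generic Wilson-Cowan-style network: each column has an
    excitatory (E) and inhibitory (I) population, and columns are
    coupled all-to-all through their E populations.  Parameters and
    gains are illustrative only."""
    E = [0.1 * (k + 1) / n_cols for k in range(n_cols)]  # staggered init
    I = [0.0] * n_cols
    tau_e, tau_i = 1.0, 2.0
    for _ in range(n_steps):
        mean_E = sum(E) / n_cols          # mesoscopic coupling signal
        newE, newI = [], []
        for k in range(n_cols):
            inp_e = 1.5 * E[k] - 2.0 * I[k] + coupling * mean_E
            inp_i = 2.0 * E[k] - 0.5 * I[k]
            newE.append(E[k] + dt * (-E[k] + sigmoid(inp_e)) / tau_e)
            newI.append(I[k] + dt * (-I[k] + sigmoid(inp_i)) / tau_i)
        E, I = newE, newI
    return E, I

E, I = simulate_columns(coupling=0.5)
```

In such a model, whether a given column has a net excitatory or inhibitory effect on its neighbors depends on the operating point of its E and I populations, which is the kind of mesoscopic segregation the abstract characterizes as a function of network topology.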
Affiliation(s)
- Daniel Malagarriga
- Departament de Física i Enginyeria Nuclear, Universitat Politècnica de Catalunya, Terrassa, Spain
- Neuroheuristic Research Group, Faculty of Business and Economics, University of Lausanne, Lausanne, Switzerland
- Alessandro E. P. Villa
- Neuroheuristic Research Group, Faculty of Business and Economics, University of Lausanne, Lausanne, Switzerland
- Jordi Garcia-Ojalvo
- Department of Experimental and Health Sciences, Universitat Pompeu Fabra, Barcelona, Spain
- Antonio J. Pons
- Departament de Física i Enginyeria Nuclear, Universitat Politècnica de Catalunya, Terrassa, Spain
8
Rinkus GJ. Sparsey™: event recognition via deep hierarchical sparse distributed codes. Front Comput Neurosci 2014; 8:160. PMID: 25566046. PMCID: PMC4266026. DOI: 10.3389/fncom.2014.00160
Abstract
The visual cortex's hierarchical, multi-level organization is captured in many biologically inspired computational vision models, the general idea being that progressively larger scale (spatially/temporally) and more complex visual features are represented in progressively higher areas. However, most earlier models use localist representations (codes) in each representational field (which we equate with the cortical macrocolumn, “mac”), at each level. In localism, each represented feature/concept/event (hereinafter “item”) is coded by a single unit. The model we describe, Sparsey, is hierarchical as well but crucially, it uses sparse distributed coding (SDC) in every mac at all levels. In SDC, each represented item is coded by a small subset of the mac's units. The SDCs of different items can overlap, and the size of the overlap between items can be used to represent their similarity. The difference between localism and SDC is crucial because SDC allows the two essential operations of associative memory, storing a new item and retrieving the best-matching stored item, to be done in fixed time for the life of the model. Since the model's core algorithm, which does both storage and retrieval (inference), makes a single pass over all macs on each time step, the overall model's storage/retrieval operation is also fixed-time, a criterion we consider essential for scalability to huge (“Big Data”) problems. A 2010 paper described a nonhierarchical version of this model in the context of purely spatial pattern processing. Here, we elaborate a fully hierarchical model (arbitrary numbers of levels and macs per level), describing novel model principles like progressive critical periods, dynamic modulation of principal cells' activation functions based on a mac-level familiarity measure, representation of multiple simultaneously active hypotheses, and a novel method of time-warp-invariant recognition, and we report results showing learning/recognition of spatiotemporal patterns.
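The key representational idea, coding each item as a small subset of a mac's units and reading similarity off the overlap between codes, can be sketched in a few lines. The sizes and helper names below are illustrative, not Sparsey's actual parameters.

```python
import random

def make_sdc(mac_size=100, code_size=5, rng=None):
    """Draw a sparse distributed code (SDC): a small subset of active
    units from a macrocolumn's unit pool.  Sizes are illustrative."""
    rng = rng or random.Random()
    return frozenset(rng.sample(range(mac_size), code_size))

def similarity(code_a, code_b):
    """In SDC, overlap between two codes represents item similarity."""
    return len(code_a & code_b)

rng = random.Random(42)
item_a = make_sdc(rng=rng)
# A related item reuses four of item_a's five units plus one new draw.
item_b = frozenset(sorted(item_a)[:-1]) | make_sdc(code_size=1, rng=rng)
item_c = make_sdc(rng=rng)  # an unrelated item; only chance overlap
```

Because retrieval only has to compare a fixed-size active subset per mac, storage and best-match lookup cost does not grow with the number of stored items, which is the fixed-time property the abstract emphasizes.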
9
Opris I, Casanova MF. Prefrontal cortical minicolumn: from executive control to disrupted cognitive processing. Brain 2014; 137:1863-75. PMID: 24531625. DOI: 10.1093/brain/awt359
Abstract
The prefrontal cortex of the primate brain has a modular architecture based on the aggregation of neurons in minicolumnar arrangements having afferent and efferent connections distributed across many brain regions to represent, select and/or maintain behavioural goals and executive commands. Prefrontal cortical microcircuits are assumed to play a key role in the perception-to-action cycle that integrates relevant information about the environment and then selects and enacts behavioural responses. Thus, neurons within the interlaminar microcircuits participate in various functional states requiring the integration of signals across cortical layers and the selection of executive variables. Recent research suggests that executive abilities emerge from cortico-cortical interactions between interlaminar prefrontal cortical microcircuits, whereas their disruption is involved in a broad spectrum of neurological and psychiatric disorders such as autism, schizophrenia, Alzheimer's disease and drug addiction. The focus of this review is on structural, functional and pathological approaches involving cortical minicolumns. Recent technological progress has demonstrated that microstimulation of infragranular cortical layers with patterns of microcurrents derived from supragranular layers leads to an increase in cognitive performance, suggesting that interlaminar prefrontal cortical microcircuits play a causal role in improving cognitive performance. An important reason for the new interest in cortical modularity comes from both the impressive progress in understanding the anatomical, physiological and pathological facets of cortical microcircuits and the promise of neural prosthetics for patients with neurological and psychiatric disorders.
Affiliation(s)
- Ioan Opris
- Department of Physiology and Pharmacology, Wake Forest University Health Sciences, Winston-Salem, NC, USA
- Manuel F Casanova
- Department of Psychiatry and Behavioural Sciences, University of Louisville, Louisville, KY, USA
10
11
Gripon V, Berrou C. Sparse Neural Networks With Large Learning Diversity. IEEE Trans Neural Netw 2011; 22:1087-96. DOI: 10.1109/tnn.2011.2146789