1
Maass W. How can neuromorphic hardware attain brain-like functional capabilities? Natl Sci Rev 2024; 11:nwad301. PMID: 38577672; PMCID: PMC10989294; DOI: 10.1093/nsr/nwad301. Received: 08/15/2023; Accepted: 11/30/2023.
Abstract
The author proposes four design principles for endowing neuromorphic hardware with the functional capabilities of cortical microcircuits, shedding light on the design of next-generation neuromorphic hardware.
Affiliation(s)
- Wolfgang Maass
- Computer Science and Biomedical Engineering, Graz University of Technology, Austria
2
Subramoney A, Bellec G, Scherr F, Legenstein R, Maass W. Fast learning without synaptic plasticity in spiking neural networks. Sci Rep 2024; 14:8557. PMID: 38609429; PMCID: PMC11015027; DOI: 10.1038/s41598-024-55769-0. Received: 06/08/2023; Accepted: 02/27/2024.
Abstract
Spiking neural networks are of high current interest, both from the perspective of modelling neural networks of the brain and for porting their fast learning capability and energy efficiency into neuromorphic hardware. But so far the fast learning capabilities of the brain have not been reproduced in spiking neural networks. Biological data suggest that a synergy of synaptic plasticity on a slow time scale with network dynamics on a faster time scale is responsible for the fast learning capabilities of the brain. We show here that a suitable orchestration of this synergy between synaptic plasticity and network dynamics does in fact reproduce fast learning capabilities in generic recurrent networks of spiking neurons. This points to the important role of recurrent connections in spiking networks, since these are necessary for enabling salient network dynamics. We show more specifically that the proposed synergy enables synaptic weights to encode more general information such as priors and task structures, since moment-to-moment processing of new information can be delegated to the network dynamics.
Affiliation(s)
- Anand Subramoney
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Department of Computer Science, Royal Holloway University of London, Egham, UK
- Guillaume Bellec
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Laboratory of Computational Neuroscience, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Franz Scherr
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Robert Legenstein
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Wolfgang Maass
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
3
Stöckl C, Yang Y, Maass W. Local prediction-learning in high-dimensional spaces enables neural networks to plan. Nat Commun 2024; 15:2344. PMID: 38490999; PMCID: PMC10943103; DOI: 10.1038/s41467-024-46586-0. Received: 07/25/2023; Accepted: 03/01/2024.
Abstract
Planning and problem solving are cornerstones of higher brain function. But we do not know how the brain does that. We show that learning of a suitable cognitive map of the problem space suffices. Furthermore, this can be reduced to learning to predict the next observation through local synaptic plasticity. Importantly, the resulting cognitive map encodes relations between actions and observations, and its emergent high-dimensional geometry provides a sense of direction for reaching distant goals. This quasi-Euclidean sense of direction provides a simple heuristic for online planning that works almost as well as the best offline planning algorithms from AI. If the problem space is a physical space, this method automatically extracts structural regularities from the sequence of observations that it receives so that it can generalize to unseen parts. This speeds up learning of navigation in 2D mazes and of locomotion with complex actuator systems, such as legged bodies. The cognitive map learner that we propose does not require a teacher, similar to self-attention networks (Transformers). But in contrast to Transformers, it does not require backpropagation of errors or very large datasets for learning. Hence it provides a blueprint for future energy-efficient neuromorphic hardware that acquires advanced cognitive capabilities through autonomous on-chip learning.
Affiliation(s)
- Christoph Stöckl
- Institute of Theoretical Computer Science, Graz University of Technology, 8010 Graz, Austria
- Yukun Yang
- Institute of Theoretical Computer Science, Graz University of Technology, 8010 Graz, Austria
- Wolfgang Maass
- Institute of Theoretical Computer Science, Graz University of Technology, 8010 Graz, Austria
4
Galván Fraile J, Scherr F, Ramasco JJ, Arkhipov A, Maass W, Mirasso CR. Modeling circuit mechanisms of opposing cortical responses to visual flow perturbations. PLoS Comput Biol 2024; 20:e1011921. PMID: 38452057; PMCID: PMC10950248; DOI: 10.1371/journal.pcbi.1011921. Received: 10/12/2023; Accepted: 02/18/2024.
Abstract
In an ever-changing visual world, animals' survival depends on their ability to perceive and respond to rapidly changing motion cues. The primary visual cortex (V1) is at the forefront of this sensory processing, orchestrating neural responses to perturbations in visual flow. However, the underlying neural mechanisms that lead to distinct cortical responses to such perturbations remain enigmatic. In this study, our objective was to uncover the neural dynamics that govern V1 neurons' responses to visual flow perturbations using a biologically realistic computational model. By subjecting the model to sudden changes in visual input, we observed opposing cortical responses in excitatory layer 2/3 (L2/3) neurons, namely, depolarizing and hyperpolarizing responses. We found that this segregation was primarily driven by the competition between external visual input and recurrent inhibition, particularly within L2/3 and L4. This division was not observed in excitatory L5/6 neurons, suggesting a more prominent role for inhibitory mechanisms in the visual processing of the upper cortical layers. Our findings share similarities with recent experimental studies focusing on the opposing influence of top-down and bottom-up inputs in the mouse primary visual cortex during visual flow perturbations.
Affiliation(s)
- J. Galván Fraile
- Instituto de Física Interdisciplinar y Sistemas Complejos (IFISC), UIB-CSIC, Palma de Mallorca, Spain
- Franz Scherr
- Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
- José J. Ramasco
- Instituto de Física Interdisciplinar y Sistemas Complejos (IFISC), UIB-CSIC, Palma de Mallorca, Spain
- Anton Arkhipov
- Allen Institute, Seattle, Washington, United States of America
- Wolfgang Maass
- Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Claudio R. Mirasso
- Instituto de Física Interdisciplinar y Sistemas Complejos (IFISC), UIB-CSIC, Palma de Mallorca, Spain
5
Sandoval L, Jafri S, Balasubramanian JB, Bhawsar P, Edelson JL, Martins Y, Maass W, Chanock SJ, Garcia-Closas M, Almeida JS. PRScalc, a privacy-preserving calculation of raw polygenic risk scores from direct-to-consumer genomics data. Bioinform Adv 2023; 3:vbad145. PMID: 37868335; PMCID: PMC10589913; DOI: 10.1093/bioadv/vbad145. Received: 06/16/2023; Accepted: 10/07/2023.
Abstract
Motivation: Currently, the Polygenic Score (PGS) Catalog curates over 400 publications on over 500 traits, corresponding to over 3000 polygenic risk scores (PRSs). To assess the feasibility of privately calculating the underlying multivariate relative risk for individuals with consumer genomics data, we developed an in-browser PRS calculator for genomic data that does not circulate any data or engage in any computation outside of the user's personal device.
Results: A prototype personal risk score calculator, created for research purposes, was developed to demonstrate how the PGS Catalog can be privately applied to readily available direct-to-consumer genetic testing services, such as 23andMe. No software download, installation, or configuration is needed. The PRS web calculator matches individual PGS Catalog entries with an individual's 23andMe genome data, composed of 600 K to 1.4 M single-nucleotide polymorphisms (SNPs). Beta coefficients provide researchers with a convenient assessment of the risk associated with matched SNPs. This in-browser application was tested on a variety of personal devices, including smartphones, establishing the feasibility of privately calculating personal risk scores with up to a few thousand reference genetic variations and from the full 23andMe SNP data file (compressed or not).
Availability and implementation: The PRScalc web application is developed in JavaScript, HTML, and CSS and is available at the GitHub repository (https://episphere.github.io/prs) under an MIT license. The datasets were derived from sources in the public domain: PGS Catalog, Personal Genome Project.
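The underlying calculation the abstract describes is simple to state: a raw PRS is the sum, over SNPs matched between a scoring file and the individual's genotype data, of the effect size (beta) times the number of effect alleles carried. A minimal sketch in Python (not the PRScalc source, which is JavaScript; data structures and rsIDs here are hypothetical):

```python
# Illustrative sketch of a raw polygenic risk score: sum over matched
# variants of beta (effect weight) times effect-allele dosage (0, 1, 2).

def raw_prs(score_entries, genotypes):
    """score_entries: dict rsID -> (effect_allele, beta), e.g. from a
    PGS Catalog scoring file; genotypes: dict rsID -> genotype string
    such as 'AG', e.g. parsed from a 23andMe raw-data file."""
    total, matched = 0.0, 0
    for rsid, (effect_allele, beta) in score_entries.items():
        genotype = genotypes.get(rsid)
        if genotype is None:
            continue  # SNP not present on the consumer array
        dosage = genotype.count(effect_allele)  # copies of the effect allele
        total += beta * dosage
        matched += 1
    return total, matched

# Hypothetical two-SNP example.
score = {"rs429358": ("C", 1.12), "rs7412": ("T", -0.47)}
geno = {"rs429358": "CT", "rs7412": "CC"}
total, matched = raw_prs(score, geno)
```

Running entirely in the browser (as PRScalc does) means the genotype dictionary never leaves the user's device; only the scoring file is fetched.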
Affiliation(s)
- Lorena Sandoval
- Department of Biomedical Informatics, George Mason University, Fairfax, VA 22030, United States
- Division of Cancer Epidemiology and Genetics (DCEG), National Cancer Institute, Rockville, MD 20850, United States
- Saleet Jafri
- Department of Biomedical Informatics, George Mason University, Fairfax, VA 22030, United States
- Jeya Balaji Balasubramanian
- Division of Cancer Epidemiology and Genetics (DCEG), National Cancer Institute, Rockville, MD 20850, United States
- Praphulla Bhawsar
- Division of Cancer Epidemiology and Genetics (DCEG), National Cancer Institute, Rockville, MD 20850, United States
- Jacob L Edelson
- Division of Cancer Epidemiology and Genetics (DCEG), National Cancer Institute, Rockville, MD 20850, United States
- Yasmmin Martins
- Bioinformatics Laboratory, National Laboratory for Scientific Computing, Petropolis 25651, Brazil
- Stephen J Chanock
- Division of Cancer Epidemiology and Genetics (DCEG), National Cancer Institute, Rockville, MD 20850, United States
- Montserrat Garcia-Closas
- Division of Cancer Epidemiology and Genetics (DCEG), National Cancer Institute, Rockville, MD 20850, United States
- Jonas S Almeida
- Division of Cancer Epidemiology and Genetics (DCEG), National Cancer Institute, Rockville, MD 20850, United States
6
Chen G, Scherr F, Maass W. A data-based large-scale model for primary visual cortex enables brain-like robust and versatile visual processing. Sci Adv 2022; 8:eabq7592. PMID: 36322646; PMCID: PMC9629744; DOI: 10.1126/sciadv.abq7592. Received: 04/28/2022; Accepted: 09/15/2022.
Abstract
We analyze visual processing capabilities of a large-scale model for area V1 that arguably provides the most comprehensive accumulation of anatomical and neurophysiological data to date. We find that this brain-like neural network model can reproduce a number of characteristic visual processing capabilities of the brain, in particular the capability to solve diverse visual processing tasks, also on temporally dispersed visual information, with remarkable robustness to noise. This V1 model, whose architecture and neurons markedly differ from those of deep neural networks used in current artificial intelligence (AI), such as convolutional neural networks (CNNs), also reproduces a number of characteristic neural coding properties of the brain, which provides explanations for its superior noise robustness. Because visual processing is substantially more energy efficient in the brain compared with CNNs in AI, such brain-like neural networks are likely to have an impact on future technology: as blueprints for visual processing in more energy-efficient neuromorphic hardware.
7
Wolf M, Landgraeber S, Maass W, Orth P. Impact of Covid-19 on the global orthopaedic research output. Front Surg 2022; 9:962844. PMID: 35990096; PMCID: PMC9390087; DOI: 10.3389/fsurg.2022.962844. Received: 06/06/2022; Accepted: 07/11/2022.
Abstract
The pandemic led to a significant change in the clinical routine of many orthopaedic surgeons. To observe the impact of the pandemic on scientific output, all studies published in the field of orthopaedics and listed in the Web of Science databases were analysed with respect to the scientific output of the years 2019, 2020, and 2021. Subsequently, correlation analyses were performed with parameters of the regional pandemic situation (obtained from the WHO) and economic strength (obtained from the World Bank). The investigations revealed that the Covid-19 pandemic led to a decrease in the annual publication rate for the first time in 20 years (2020 to 2021: –5.69%). There were regional differences in the publication rate, which correlated significantly with the respective Covid-19 case count (r = –.77, p < 0.01), associated death count (r = –.63, p < 0.01), and the gross domestic product per capita (r = –.40, p < 0.01), but not with the number of vaccinations (r = .09, p = 0.30). Furthermore, there was a drastic decrease in funding from private agencies (relative share: 2019: 36.43%, 2020: 22.66%, 2021: 19.22%), and a balanced decrease in publication output for research areas of acute and elective patient care. The Covid-19 pandemic resulted in a decline in orthopaedic annual publication rates for the first time in 20 years. This reduction was subject to marked regional differences, correlated directly with the pandemic load, and was associated with decreased research funding from the private sector.
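The reported r values are Pearson correlation coefficients between regional pandemic indicators and publication output. A minimal sketch of that computation, with entirely synthetic numbers standing in for the study's data:

```python
# Pearson's r between hypothetical regional Covid-19 case counts and
# the change in publication output (synthetic data, not the study's).
from math import sqrt

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

cases = [120, 450, 300, 80, 600]           # hypothetical cases per region
pub_change = [2.1, -6.0, -3.5, 3.0, -8.2]  # hypothetical % publication change
r = pearson_r(cases, pub_change)           # strongly negative, as in the study
```

In practice one would also report a p-value (e.g. via `scipy.stats.pearsonr`), as the abstract does.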
Affiliation(s)
- Milan Wolf
- Department of Orthopaedics and Orthopaedic Surgery, Saarland University Medical Centre, Homburg, Germany
- Correspondence: Milan Anton Wolf
- Stefan Landgraeber
- Department of Orthopaedics and Orthopaedic Surgery, Saarland University Medical Centre, Homburg, Germany
- Wolfgang Maass
- German Research Center for Artificial Intelligence (DFKI), Saarland University, Saarbrücken, Germany
- Patrick Orth
- Department of Orthopaedics and Orthopaedic Surgery, Saarland University Medical Centre, Homburg, Germany
8
9
Salaj D, Subramoney A, Kraisnikovic C, Bellec G, Legenstein R, Maass W. Spike frequency adaptation supports network computations on temporally dispersed information. eLife 2021; 10:e65459. PMID: 34310281; PMCID: PMC8313230; DOI: 10.7554/elife.65459. Received: 12/04/2020; Accepted: 06/29/2021.
Abstract
For solving tasks such as recognizing a song, answering a question, or inverting a sequence of symbols, cortical microcircuits need to integrate and manipulate information that was dispersed over time during the preceding seconds. Creating biologically realistic models for the underlying computations, especially with spiking neurons and for behaviorally relevant integration time spans, is notoriously difficult. We examine the role of spike frequency adaptation in such computations and find that it has a surprisingly large impact. The inclusion of this well-known property of a substantial fraction of neurons in the neocortex - especially in higher areas of the human neocortex - moves the performance of spiking neural network models for computations on network inputs that are temporally dispersed from a fairly low level up to the performance level of the human brain.
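Spike frequency adaptation is commonly modelled as a firing threshold that jumps with each spike and decays back on a time scale of seconds, so that recent activity leaves a trace in the neuron. A minimal sketch of such an adaptive leaky integrate-and-fire neuron (parameter values are hypothetical, not those of the paper):

```python
# Adaptive LIF neuron: each spike raises the effective threshold, which
# then decays slowly, producing spike frequency adaptation.

def simulate_alif(input_current, dt=1.0, tau_m=20.0, tau_a=2000.0,
                  v_th0=1.0, beta=0.5):
    """Simulate one neuron; returns spike time indices."""
    v, a, spikes = 0.0, 0.0, []
    for t, i_in in enumerate(input_current):
        v += dt / tau_m * (-v + i_in)   # leaky membrane integration
        a -= dt / tau_a * a             # adaptation variable decays slowly
        if v > v_th0 + beta * a:        # threshold raised by adaptation
            spikes.append(t)
            a += 1.0                    # each spike increments adaptation
            v = 0.0                     # reset membrane potential
    return spikes

spikes = simulate_alif([2.0] * 200)  # constant drive for 200 time steps
# Inter-spike intervals grow as the threshold adapts.
```

The slowly decaying variable `a` is what lets the neuron carry information about inputs from the preceding seconds, the property the abstract identifies as supporting computations on temporally dispersed information.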
Affiliation(s)
- Darjan Salaj
- Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Anand Subramoney
- Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Ceca Kraisnikovic
- Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Guillaume Bellec
- Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Laboratory of Computational Neuroscience, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Robert Legenstein
- Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Wolfgang Maass
- Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
10
11
Pokorny C, Ison MJ, Rao A, Legenstein R, Papadimitriou C, Maass W. STDP Forms Associations between Memory Traces in Networks of Spiking Neurons. Cereb Cortex 2021; 30:952-968. PMID: 31403679; PMCID: PMC7132978; DOI: 10.1093/cercor/bhz140. Received: 09/19/2018; Accepted: 05/09/2019.
Abstract
Memory traces and associations between them are fundamental for cognitive brain function. Neuron recordings suggest that distributed assemblies of neurons in the brain serve as memory traces for spatial information, real-world items, and concepts. However, there is conflicting evidence regarding neural codes for associated memory traces. Some studies suggest the emergence of overlaps between assemblies during an association, while others suggest that the assemblies themselves remain largely unchanged and new assemblies emerge as neural codes for associated memory items. Here we study the emergence of neural codes for associated memory items in a generic computational model of recurrent networks of spiking neurons with a data-constrained rule for spike-timing-dependent plasticity. The model depends critically on 2 parameters, which control the excitability of neurons and the scale of initial synaptic weights. By modifying these 2 parameters, the model can reproduce both experimental data from the human brain on the fast formation of associations through emergent overlaps between assemblies, and rodent data where new neurons are recruited to encode the associated memories. Hence, our findings suggest that the brain can use both of these 2 neural codes for associations, and dynamically switch between them during consolidation.
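The plasticity mechanism referred to here, spike-timing-dependent plasticity (STDP), can be illustrated with the standard pair-based rule (the paper uses a data-constrained variant; the parameter values below are hypothetical): a synapse is potentiated when the presynaptic spike precedes the postsynaptic spike and depressed when the order is reversed, with exponentially decaying dependence on the time difference.

```python
# Pair-based STDP: weight change as a function of the pre/post spike
# time difference dt_ms = t_post - t_pre.
from math import exp

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012,
            tau_plus=20.0, tau_minus=20.0):
    if dt_ms > 0:    # pre before post -> potentiation (LTP)
        return a_plus * exp(-dt_ms / tau_plus)
    elif dt_ms < 0:  # post before pre -> depression (LTD)
        return -a_minus * exp(dt_ms / tau_minus)
    return 0.0

print(stdp_dw(10.0) > 0, stdp_dw(-10.0) < 0)  # True True
```

Repeated co-activation of two assemblies under such a rule strengthens the synapses between them, which is the substrate for the emergent overlaps described in the abstract.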
Affiliation(s)
- Christoph Pokorny
- Institute for Theoretical Computer Science, Graz University of Technology, 8010 Graz, Austria
- Matias J Ison
- School of Psychology, University of Nottingham, Nottingham, NG7 2RD, UK
- Arjun Rao
- Institute for Theoretical Computer Science, Graz University of Technology, 8010 Graz, Austria
- Robert Legenstein
- Institute for Theoretical Computer Science, Graz University of Technology, 8010 Graz, Austria
- Christos Papadimitriou
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA 94720-1770, USA
- Wolfgang Maass
- Institute for Theoretical Computer Science, Graz University of Technology, 8010 Graz, Austria
12
Zenke F, Bohté SM, Clopath C, Comşa IM, Göltz J, Maass W, Masquelier T, Naud R, Neftci EO, Petrovici MA, Scherr F, Goodman DFM. Visualizing a joint future of neuroscience and neuromorphic engineering. Neuron 2021; 109:571-575. PMID: 33600754; DOI: 10.1016/j.neuron.2021.01.009. Received: 11/20/2020; Accepted: 01/07/2021.
Abstract
Recent research resolves the challenging problem of building biophysically plausible spiking neural models that are also capable of complex information processing. This advance creates new opportunities in neuroscience and neuromorphic engineering, which we discussed at an online focus meeting.
Affiliation(s)
- Friedemann Zenke
- Friedrich Miescher Institute for Biomedical Research, Basel, Switzerland
- Sander M Bohté
- CWI, Amsterdam, the Netherlands
- Swammerdam Institute for Life Sciences (SILS), University of Amsterdam, Amsterdam, the Netherlands
- AI Department, Rijksuniversiteit Groningen, Groningen, the Netherlands
- Claudia Clopath
- Bioengineering Department, Imperial College London, London, UK
- Julian Göltz
- Kirchhoff Institute for Physics, Heidelberg University, Heidelberg, Germany
- Department of Physiology, University of Bern, Bern, Switzerland
- Wolfgang Maass
- Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Richard Naud
- Brain and Mind Research Institute of the University of Ottawa, Department of Cellular Molecular Medicine, University of Ottawa, Ottawa, Canada
- Emre O Neftci
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, USA
- Department of Computer Science, University of California, Irvine, Irvine, CA, USA
- Mihai A Petrovici
- Kirchhoff Institute for Physics, Heidelberg University, Heidelberg, Germany
- Department of Physiology, University of Bern, Bern, Switzerland
- Franz Scherr
- Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Dan F M Goodman
- Department of Electrical and Electronic Engineering, Imperial College London, London, UK
13
Kaiser J, Hoff M, Konle A, Vasquez Tieck JC, Kappel D, Reichard D, Subramoney A, Legenstein R, Roennau A, Maass W, Dillmann R. Embodied Synaptic Plasticity With Online Reinforcement Learning. Front Neurorobot 2019; 13:81. PMID: 31632262; PMCID: PMC6786305; DOI: 10.3389/fnbot.2019.00081. Received: 02/01/2019; Accepted: 09/13/2019.
Abstract
The endeavor to understand the brain involves multiple collaborating research fields. Classically, synaptic plasticity rules derived by theoretical neuroscientists are evaluated in isolation on pattern classification tasks. This contrasts with the biological brain, whose purpose is to control a body in closed loop. This paper contributes to bringing the fields of computational neuroscience and robotics closer together by integrating open-source software components from these two fields. The resulting framework makes it possible to evaluate the validity of biologically plausible plasticity models in closed-loop robotics environments. We demonstrate this framework by evaluating Synaptic Plasticity with Online REinforcement learning (SPORE), a reward-learning rule based on synaptic sampling, on two visuomotor tasks: reaching and lane following. We show that SPORE is capable of learning policies within the course of simulated hours for both tasks. Provisional parameter explorations indicate that the learning rate and the temperature driving the stochastic processes that govern synaptic learning dynamics need to be regulated for performance improvements to be retained. We conclude by discussing recent deep reinforcement learning techniques that would be beneficial for increasing the functionality of SPORE on visuomotor tasks.
Affiliation(s)
- Jacques Kaiser
- FZI Research Center for Information Technology, Karlsruhe, Germany
- Michael Hoff
- FZI Research Center for Information Technology, Karlsruhe, Germany
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Andreas Konle
- FZI Research Center for Information Technology, Karlsruhe, Germany
- David Kappel
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Bernstein Center for Computational Neuroscience, III Physikalisches Institut-Biophysik, Georg-August Universität, Göttingen, Germany
- Technische Universität Dresden, Chair of Highly Parallel VLSI Systems and Neuromorphic Circuits, Dresden, Germany
- Daniel Reichard
- FZI Research Center for Information Technology, Karlsruhe, Germany
- Anand Subramoney
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Robert Legenstein
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Arne Roennau
- FZI Research Center for Information Technology, Karlsruhe, Germany
- Wolfgang Maass
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Rüdiger Dillmann
- FZI Research Center for Information Technology, Karlsruhe, Germany
14
Yan Y, Kappel D, Neumarker F, Partzsch J, Vogginger B, Hoppner S, Furber S, Maass W, Legenstein R, Mayr C. Efficient Reward-Based Structural Plasticity on a SpiNNaker 2 Prototype. IEEE Trans Biomed Circuits Syst 2019; 13:579-591. PMID: 30932847; DOI: 10.1109/tbcas.2019.2906401.
Abstract
Advances in neuroscience uncover the mechanisms employed by the brain to efficiently solve complex learning tasks with very limited resources. However, the efficiency is often lost when one tries to port these findings to a silicon substrate, since brain-inspired algorithms often make extensive use of complex functions, such as random number generators, that are expensive to compute on standard general-purpose hardware. The prototype chip of the second-generation SpiNNaker system is designed to overcome this problem. Low-power advanced RISC machine (ARM) processors equipped with a random number generator and an exponential function accelerator enable the efficient execution of brain-inspired algorithms. We implement the recently introduced reward-based synaptic sampling model that employs structural plasticity to learn a function or task. The numerical simulation of the model requires updating the synapse variables in each time step, including an explorative random term. To the best of our knowledge, this is the most complex synapse model implemented so far on the SpiNNaker system. By making efficient use of the hardware accelerators and numerical optimizations, the computation time of one plasticity update is reduced by a factor of 2. This, combined with fitting the model into the local static random access memory (SRAM), leads to a 62% energy reduction compared to the case without accelerators and with the use of external dynamic random access memory (DRAM). The model implementation is integrated into the SpiNNaker software framework, allowing for scalability onto larger systems. The hardware-software system presented in this paper paves the way for power-efficient mobile and biomedical applications with biologically plausible brain-inspired algorithms.
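The per-synapse update the abstract refers to combines a reward-gradient term with an explorative Gaussian random term, which is why an on-chip random number generator and exponential accelerator pay off. A minimal sketch of one such Langevin-style update (not the SpiNNaker implementation; parameter values are hypothetical):

```python
# One synaptic-sampling step: drift along the reward gradient plus
# temperature-scaled Gaussian exploration noise.
import random
from math import sqrt

def sampling_step(theta, reward_grad, lr=1e-3, temperature=0.1, dt=1.0):
    """Update one synapse parameter theta; in the structural-plasticity
    setting, theta <= 0 corresponds to a retracted (pruned) synapse."""
    noise = random.gauss(0.0, 1.0)
    return theta + lr * reward_grad * dt + sqrt(2 * lr * temperature * dt) * noise

random.seed(0)
theta = 0.5
for _ in range(100):
    theta = sampling_step(theta, reward_grad=0.2)
active = theta > 0  # synapse is functional only while theta is positive
```

Every synapse performs this update in every time step, so drawing the Gaussian noise cheaply in hardware dominates the cost, matching the factor-of-2 speedup reported above.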
15
|
Abstract
Hyperparameters and learning algorithms for neuromorphic hardware are usually chosen by hand to suit a particular task. In contrast, networks of neurons in the brain were optimized through extensive evolutionary and developmental processes to work well on a range of computing and learning tasks. Occasionally this process has been emulated through genetic algorithms, but these themselves require hand-design of their details and tend to provide a limited range of improvements. We employ instead other powerful gradient-free optimization tools, such as cross-entropy methods and evolutionary strategies, in order to port the function of biological optimization processes to neuromorphic hardware. As an example, we show that these optimization algorithms enable neuromorphic agents to learn very efficiently from rewards. In particular, meta-plasticity, i.e., the optimization of the learning rule which they use, substantially enhances the reward-based learning capability of the hardware. In addition, we demonstrate for the first time Learning-to-Learn benefits on such hardware, in particular the capability to extract abstract knowledge from prior learning experiences that speeds up the learning of new but related tasks. Learning-to-Learn is especially suited for accelerated neuromorphic hardware, since it makes it feasible to carry out the required very large number of network computations.
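One of the gradient-free optimizers named above, the cross-entropy method, works by sampling candidate parameter vectors from a Gaussian, keeping the best-scoring ("elite") candidates, and refitting the Gaussian to them. A minimal sketch with a toy objective standing in for the negative reward of a neuromorphic agent (all parameter values are hypothetical):

```python
# Cross-entropy method: iterate sample -> select elites -> refit Gaussian.
import random

def cross_entropy_method(objective, dim, iters=30, pop=50, elite=10, seed=1):
    rng = random.Random(seed)
    mean, std = [0.0] * dim, [1.0] * dim
    for _ in range(iters):
        samples = [[rng.gauss(mean[d], std[d]) for d in range(dim)]
                   for _ in range(pop)]
        samples.sort(key=objective)          # lower objective is better
        elites = samples[:elite]
        for d in range(dim):                 # refit Gaussian to the elites
            mean[d] = sum(e[d] for e in elites) / elite
            var = sum((e[d] - mean[d]) ** 2 for e in elites) / elite
            std[d] = max(var ** 0.5, 1e-3)   # keep a little exploration
    return mean

# Toy objective with a hypothetical optimum at (2, -1); in the paper's
# setting the "objective" would be a run of the hardware agent.
best = cross_entropy_method(lambda x: (x[0] - 2) ** 2 + (x[1] + 1) ** 2, dim=2)
```

Because each candidate is evaluated by simply running the system, the method needs no gradients, which is what makes it applicable to learning rules and hyperparameters on neuromorphic hardware.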
Affiliation(s)
- Thomas Bohnstingl
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Franz Scherr
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Christian Pehle
- Kirchhoff-Institute for Physics, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
- Karlheinz Meier
- Kirchhoff-Institute for Physics, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
- Wolfgang Maass
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
16
Liu C, Bellec G, Vogginger B, Kappel D, Partzsch J, Neumärker F, Höppner S, Maass W, Furber SB, Legenstein R, Mayr CG. Memory-Efficient Deep Learning on a SpiNNaker 2 Prototype. Front Neurosci 2018; 12:840. PMID: 30505263; PMCID: PMC6250847; DOI: 10.3389/fnins.2018.00840. Received: 07/29/2018; Accepted: 10/29/2018.
Abstract
The memory requirement of deep learning algorithms is considered incompatible with the memory restrictions of energy-efficient hardware. A low memory footprint can be achieved by pruning obsolete connections or reducing the precision of connection strengths after the network has been trained. Yet, these techniques are not applicable when neural networks have to be trained directly on hardware under hard memory constraints. Deep Rewiring (DEEP R) is a training algorithm which continuously rewires the network while preserving very sparse connectivity throughout the training procedure. We apply DEEP R to a deep neural network implementation on a prototype chip of the 2nd-generation SpiNNaker system. The local memory of a single core on this chip is limited to 64 KB, and a deep network architecture is trained entirely within this constraint without the use of external memory. Throughout training, the proportion of active connections is limited to 1.3%. On the handwritten digits dataset MNIST, this extremely sparse network achieves 96.6% classification accuracy at convergence. Utilizing the multi-processor feature of the SpiNNaker system, we found very good scaling in terms of computation time, per-core memory consumption, and energy constraints. When compared to an X86 CPU implementation, neural network training on the SpiNNaker 2 prototype improves power and energy consumption by two orders of magnitude.
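The core rewiring idea behind DEEP R can be sketched in a few lines (simplified; the published algorithm also uses gradient updates, L1 regularization, and noise on the connection parameters): a fixed budget of active connections is maintained, and whenever a connection's signed parameter crosses zero it is pruned while a randomly chosen dormant connection is activated, keeping sparsity constant throughout training.

```python
# Simplified DEEP R-style rewiring step: prune connections whose signed
# parameter has crossed zero, activate random dormant ones in their place.
import random

def rewire(params, active, rng):
    """params: dict connection -> signed parameter; active: set of
    currently functional connections (hypothetical toy representation)."""
    died = {c for c in active if params[c] <= 0.0}
    active -= died
    dormant = [c for c in params if c not in active]
    for _ in died:                 # activate one dormant connection per pruned one
        c = rng.choice(dormant)
        dormant.remove(c)
        active.add(c)
        params[c] = 1e-3           # reactivate with a small positive weight
    return active

rng = random.Random(0)
params = {("a", "b"): 0.4, ("a", "c"): -0.1, ("b", "c"): 0.2, ("a", "d"): 0.0}
active = {("a", "b"), ("a", "c")}
active = rewire(params, active, rng)
assert len(active) == 2            # the sparsity budget is preserved
```

Because only the small set of active connections ever needs weights in memory, the working set fits in the 64 KB per-core SRAM mentioned above.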
Collapse
Affiliation(s)
- Chen Liu
- Chair of Highly-Parallel VLSI-Systems and Neuromorphic Circuits, Department of Electrical Engineering and Information Technology, Institute of Circuits and Systems, Technische Universität Dresden, Dresden, Germany
| | - Guillaume Bellec
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
| | - Bernhard Vogginger
- Chair of Highly-Parallel VLSI-Systems and Neuromorphic Circuits, Department of Electrical Engineering and Information Technology, Institute of Circuits and Systems, Technische Universität Dresden, Dresden, Germany
| | - David Kappel
- Chair of Highly-Parallel VLSI-Systems and Neuromorphic Circuits, Department of Electrical Engineering and Information Technology, Institute of Circuits and Systems, Technische Universität Dresden, Dresden, Germany.,Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria.,Bernstein Center for Computational Neuroscience, III Physikalisches Institut - Biophysik, Georg-August Universität, Göttingen, Germany
| | - Johannes Partzsch
- Chair of Highly-Parallel VLSI-Systems and Neuromorphic Circuits, Department of Electrical Engineering and Information Technology, Institute of Circuits and Systems, Technische Universität Dresden, Dresden, Germany
| | - Felix Neumärker
- Chair of Highly-Parallel VLSI-Systems and Neuromorphic Circuits, Department of Electrical Engineering and Information Technology, Institute of Circuits and Systems, Technische Universität Dresden, Dresden, Germany
| | - Sebastian Höppner
- Chair of Highly-Parallel VLSI-Systems and Neuromorphic Circuits, Department of Electrical Engineering and Information Technology, Institute of Circuits and Systems, Technische Universität Dresden, Dresden, Germany
| | - Wolfgang Maass
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
| | - Steve B Furber
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, United Kingdom
| | - Robert Legenstein
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
| | - Christian G Mayr
- Chair of Highly-Parallel VLSI-Systems and Neuromorphic Circuits, Department of Electrical Engineering and Information Technology, Institute of Circuits and Systems, Technische Universität Dresden, Dresden, Germany
| |
Collapse
|
17
|
Maass W, Parsons J, Purao S, Storey VC, Woo C. Data-Driven Meets Theory-Driven Research in the Era of Big Data: Opportunities and Challenges for Information Systems Research. J ASSOC INF SYST 2018. [DOI: 10.17705/1jais.00526] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
|
18
|
Jonke Z, Legenstein R, Habenschuss S, Maass W. Feedback Inhibition Shapes Emergent Computational Properties of Cortical Microcircuit Motifs. J Neurosci 2017; 37:8511-8523. [PMID: 28760861 PMCID: PMC6596876 DOI: 10.1523/jneurosci.2078-16.2017] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2016] [Revised: 07/18/2017] [Accepted: 07/23/2017] [Indexed: 01/28/2023] Open
Abstract
Cortical microcircuits are very complex networks, but they are composed of a relatively small number of stereotypical motifs. Hence, one strategy for throwing light on the computational function of cortical microcircuits is to analyze emergent computational properties of these stereotypical microcircuit motifs. We address here the question of how spike-timing-dependent plasticity shapes the computational properties of one motif that has frequently been studied experimentally: interconnected populations of pyramidal cells and parvalbumin-positive inhibitory cells in layer 2/3. Experimental studies suggest that these inhibitory neurons exert some form of divisive inhibition on the pyramidal cells. We show that this data-based form of feedback inhibition, which is softer than that of winner-take-all models commonly considered in theoretical analyses, contributes to the emergence of an important computational function through spike-timing-dependent plasticity: the capability to disentangle superimposed firing patterns in upstream networks, and to represent their information content through a sparse assembly code. SIGNIFICANCE STATEMENT We analyze emergent computational properties of a ubiquitous cortical microcircuit motif: populations of pyramidal cells that are densely interconnected with inhibitory neurons. Simulations of this model predict that sparse assembly codes emerge in this microcircuit motif under spike-timing-dependent plasticity. Furthermore, we show that different assemblies will represent different hidden sources of upstream firing activity. Hence, we propose that spike-timing-dependent plasticity enables this microcircuit motif to perform a fundamental computational operation on neural activity patterns.
Collapse
Affiliation(s)
- Zeno Jonke
- Institute for Theoretical Computer Science, Graz University of Technology, Inffeldgasse 16b/I, 8010 Graz, Austria
| | - Robert Legenstein
- Institute for Theoretical Computer Science, Graz University of Technology, Inffeldgasse 16b/I, 8010 Graz, Austria
| | - Stefan Habenschuss
- Institute for Theoretical Computer Science, Graz University of Technology, Inffeldgasse 16b/I, 8010 Graz, Austria
| | - Wolfgang Maass
- Institute for Theoretical Computer Science, Graz University of Technology, Inffeldgasse 16b/I, 8010 Graz, Austria
| |
Collapse
|
19
|
|
20
|
Abstract
Networks of neurons in the brain apply, unlike processors in our current generation of computer hardware, an event-based processing strategy, where short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it turns out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes, rather than rates of spikes. We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a transparent manner by composing the networks of simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes plays an essential role in their computations. Furthermore, for the Traveling Salesman Problem, networks of spiking neurons carry out a more efficient stochastic search for good solutions than stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling.
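The design principle, stripped of spikes, is to encode constraint violations in an energy function and let noise drive a search toward low-energy states. Below is a minimal non-spiking sketch on a toy graph-colouring instance; the graph and all parameters are invented for illustration, and the paper realizes this principle with networks of stochastically firing neurons rather than this generic annealer:

```python
import numpy as np

rng = np.random.default_rng(1)

# 3-colour a small graph: two triangles sharing vertex 2.
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)]

def energy(colours):
    """Number of violated constraints (monochromatic edges)."""
    return sum(int(colours[i] == colours[j]) for i, j in edges)

colours = rng.integers(0, 3, size=5)
best = colours.copy()
T = 1.0
for _ in range(5000):
    proposal = colours.copy()
    proposal[rng.integers(5)] = rng.integers(3)    # flip one unit's state
    dE = energy(proposal) - energy(colours)
    # noisy acceptance: downhill always, uphill with Boltzmann probability
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        colours = proposal
    if energy(colours) < energy(best):
        best = colours.copy()
    T = max(0.05, T * 0.999)
```

Noise is used as a computational resource exactly as in the abstract above: it lets the search escape local minima of the energy landscape instead of getting stuck in them.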
Collapse
Affiliation(s)
- Zeno Jonke
- Faculty of Computer Science and Biomedical Engineering, Institute for Theoretical Computer Science, Graz University of Technology Graz, Austria
| | - Stefan Habenschuss
- Faculty of Computer Science and Biomedical Engineering, Institute for Theoretical Computer Science, Graz University of Technology Graz, Austria
| | - Wolfgang Maass
- Faculty of Computer Science and Biomedical Engineering, Institute for Theoretical Computer Science, Graz University of Technology Graz, Austria
| |
Collapse
|
21
|
Abstract
General results from statistical learning theory suggest understanding not only brain computations, but also brain plasticity, as probabilistic inference. But a model for that has been missing. We propose that inherently stochastic features of synaptic plasticity and spine motility enable cortical networks of neurons to carry out probabilistic inference by sampling from a posterior distribution of network configurations. This model provides a viable alternative to existing models that propose convergence of parameters to maximum likelihood values. It explains how priors on weight distributions and connection probabilities can be merged optimally with learned experience, how cortical networks can generalize learned information so well to novel experiences, and how they can compensate continuously for unforeseen disturbances of the network. The resulting new theory of network plasticity explains from a functional perspective a number of experimental data on stochastic aspects of synaptic plasticity that previously appeared quite puzzling.
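The core mathematical idea, sampling network configurations from a posterior rather than converging to a point estimate, can be illustrated in one dimension. This is a generic Langevin-sampling sketch with invented numbers, not the paper's network model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Posterior over one "synaptic parameter" w: Gaussian prior, Gaussian
# likelihood with unit variance.  The stochastic update below has this
# posterior as its stationary distribution, so the parameter keeps
# wandering, but according to the posterior.
data = rng.normal(2.0, 1.0, size=50)
prior_mean, prior_var = 0.0, 4.0

def grad_log_posterior(w):
    return (prior_mean - w) / prior_var + np.sum(data - w)

dt = 1e-3
w, samples = 0.0, []
for step in range(20000):
    # drift toward high posterior density + diffusion (Langevin dynamics)
    w += dt * grad_log_posterior(w) + np.sqrt(2 * dt) * rng.standard_normal()
    if step > 5000:
        samples.append(w)

# conjugate closed-form posterior, for comparison
post_var = 1.0 / (1.0 / prior_var + len(data))
post_mean = post_var * (prior_mean / prior_var + data.sum())
```

The empirical distribution of `samples` matches the closed-form posterior; under this view, permanent parameter drift is a feature of learning, not noise to be averaged away.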
Collapse
Affiliation(s)
- David Kappel
- Institute for Theoretical Computer Science, Graz University of Technology, A-8010 Graz, Austria
| | - Stefan Habenschuss
- Institute for Theoretical Computer Science, Graz University of Technology, A-8010 Graz, Austria
| | - Robert Legenstein
- Institute for Theoretical Computer Science, Graz University of Technology, A-8010 Graz, Austria
| | - Wolfgang Maass
- Institute for Theoretical Computer Science, Graz University of Technology, A-8010 Graz, Austria
| |
Collapse
|
22
|
Bill J, Buesing L, Habenschuss S, Nessler B, Maass W, Legenstein R. Distributed Bayesian Computation and Self-Organized Learning in Sheets of Spiking Neurons with Local Lateral Inhibition. PLoS One 2015; 10:e0134356. [PMID: 26284370 PMCID: PMC4540468 DOI: 10.1371/journal.pone.0134356] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2015] [Accepted: 07/09/2015] [Indexed: 11/24/2022] Open
Abstract
During the last decade, Bayesian probability theory has emerged as a framework in cognitive science and neuroscience for describing perception, reasoning and learning of mammals. However, our understanding of how probabilistic computations could be organized in the brain, and how the observed connectivity structure of cortical microcircuits supports these calculations, is rudimentary at best. In this study, we investigate statistical inference and self-organized learning in a spatially extended spiking network model that accommodates both local competitive and large-scale associative aspects of neural information processing, under a unified Bayesian account. Specifically, we show how the spiking dynamics of a recurrent network with lateral excitation and local inhibition in response to distributed spiking input can be understood as sampling from a variational posterior distribution of a well-defined implicit probabilistic model. This interpretation further permits a rigorous analytical treatment of experience-dependent plasticity on the network level. Using machine learning theory, we derive update rules for neuron and synapse parameters which equate with Hebbian synaptic and homeostatic intrinsic plasticity rules in a neural implementation. In computer simulations, we demonstrate that the interplay of these plasticity rules leads to the emergence of probabilistic local experts that form distributed assemblies of similarly tuned cells communicating through lateral excitatory connections. The resulting sparse distributed spike code of a well-adapted network carries compressed information on salient input features combined with prior experience on correlations among them. Our theory predicts that the emergence of such efficient representations benefits from network architectures in which the range of local inhibition matches the spatial extent of pyramidal cells that share common afferent input.
Collapse
Affiliation(s)
- Johannes Bill
- Institute for Theoretical Computer Science, TU Graz, Graz, Austria
| | - Lars Buesing
- Department of Statistics, Columbia University, New York, New York, United States of America
| | | | - Bernhard Nessler
- Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
| | - Wolfgang Maass
- Institute for Theoretical Computer Science, TU Graz, Graz, Austria
| | | |
Collapse
|
23
|
Kappel D, Nessler B, Maass W. STDP installs in Winner-Take-All circuits an online approximation to hidden Markov model learning. PLoS Comput Biol 2014; 10:e1003511. [PMID: 24675787 PMCID: PMC3967926 DOI: 10.1371/journal.pcbi.1003511] [Citation(s) in RCA: 48] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2013] [Accepted: 01/24/2014] [Indexed: 11/18/2022] Open
Abstract
In order to cross a street without being run over, we need to be able to extract, very quickly, the hidden causes of dynamically changing multi-modal sensory stimuli, and to predict their future evolution. We show here that a generic cortical microcircuit motif, pyramidal cells with lateral excitation and inhibition, provides the basis for this difficult but all-important information processing capability. This capability emerges in the presence of noise automatically through effects of STDP on connections between pyramidal cells in Winner-Take-All circuits with lateral excitation. In fact, one can show that these motifs endow cortical microcircuits with functional properties of a hidden Markov model, a generic model for solving such tasks through probabilistic inference. Whereas in engineering applications this model is adapted to specific tasks through offline learning, we show here that a major portion of the functionality of hidden Markov models arises already from online applications of STDP, without any supervision or rewards. We demonstrate the emergent computing capabilities of the model through several computer simulations. The full power of hidden Markov model learning can be attained through reward-gated STDP. This is because these mechanisms enable a rejection-sampling approximation to theoretically optimal learning. We investigate the possible performance gain that can be achieved with this more accurate learning method for an artificial grammar task.
Collapse
Affiliation(s)
- David Kappel
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
| | - Bernhard Nessler
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
| | - Wolfgang Maass
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
| |
Collapse
|
24
|
Nessler B, Pfeiffer M, Buesing L, Maass W. Bayesian computation emerges in generic cortical microcircuits through spike-timing-dependent plasticity. PLoS Comput Biol 2013; 9:e1003037. [PMID: 23633941 PMCID: PMC3636028 DOI: 10.1371/journal.pcbi.1003037] [Citation(s) in RCA: 112] [Impact Index Per Article: 10.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2012] [Accepted: 03/04/2013] [Indexed: 11/24/2022] Open
Abstract
The principles by which networks of neurons compute, and how spike-timing-dependent plasticity (STDP) of synaptic weights generates and maintains their computational function, are unknown. Preceding work has shown that soft winner-take-all (WTA) circuits, where pyramidal neurons inhibit each other via interneurons, are a common motif of cortical microcircuits. We show through theoretical analysis and computer simulations that Bayesian computation is induced in these network motifs through STDP in combination with activity-dependent changes in the excitability of neurons. The fundamental components of this emergent Bayesian computation are priors that result from adaptation of neuronal excitability and implicit generative models for hidden causes that are created in the synaptic weights through STDP. In fact, a surprising result is that STDP is able to approximate a powerful principle for fitting such implicit generative models to high-dimensional spike inputs: expectation maximization. Our results suggest that the experimentally observed spontaneous activity and trial-to-trial variability of cortical neurons are essential features of their information processing capability, since their functional role is to represent probability distributions rather than static neural codes. Furthermore, our results suggest networks of Bayesian computation modules as a new model for distributed information processing in the cortex.
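A rate-based caricature of the emergent expectation maximization can be written down directly. The sketch below is a hand-rolled online mixture-of-Bernoullis learner with invented patterns and learning rates, not the paper's spiking STDP rule: competing units compute posterior responsibilities (the soft-WTA step), and responsible units drag their weights toward the current input (the Hebbian step).

```python
import numpy as np

rng = np.random.default_rng(3)

K, D, eta = 2, 8, 0.05
W = rng.random((K, D)) * 0.2 + 0.4            # per-unit Bernoulli parameters
proto = np.array([[1, 1, 1, 1, 0, 0, 0, 0],   # two hidden causes
                  [0, 0, 0, 0, 1, 1, 1, 1]], dtype=float)

for _ in range(4000):
    k = rng.integers(2)
    x = (rng.random(D) < 0.9 * proto[k] + 0.05).astype(float)  # noisy sample
    # E-step (soft WTA): posterior responsibility of each unit for x
    logp = x @ np.log(W).T + (1 - x) @ np.log(1 - W).T
    r = np.exp(logp - logp.max())
    r /= r.sum()
    # M-step (online, Hebbian-like): responsible units move toward x
    W += eta * r[:, None] * (x - W)
    W = np.clip(W, 0.01, 0.99)
```

After training, each unit's weight vector has specialized on one hidden cause, i.e. the implicit generative model has been fitted to the inputs without supervision.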
Collapse
Affiliation(s)
- Bernhard Nessler
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria.
| | | | | | | |
Collapse
|
25
|
Abstract
The brain faces the problem of inferring reliable hidden causes from large populations of noisy neurons, for example, the direction of a moving object from spikes in area MT. It is known that a theoretically optimal likelihood decoding could be carried out by simple linear readout neurons if weights of synaptic connections were set to certain values that depend on the tuning functions of sensory neurons. We show here that such theoretically optimal readout weights emerge autonomously through STDP in conjunction with lateral inhibition between readout neurons. In particular, we identify a class of optimal STDP learning rules with homeostatic plasticity, for which the autonomous emergence of optimal readouts can be explained on the basis of a rigorous learning theory. This theory shows that the network motif we consider approximates expectation-maximization for creating internal generative models for hidden causes of high-dimensional spike inputs. Notably, we find that this optimal functionality can be well approximated by a variety of STDP rules beyond those predicted by theory. Furthermore, we show that this learning process is very stable and automatically adjusts weights to changes in the number of readout neurons, the tuning functions of sensory neurons, and the statistics of external stimuli.
Collapse
Affiliation(s)
- Stefan Habenschuss
- Institute for Theoretical Computer Science, Graz University of Technology, A-8010 Graz, Austria.
| | | | | |
Collapse
|
26
|
Rückert EA, Neumann G, Toussaint M, Maass W. Learned graphical models for probabilistic planning provide a new class of movement primitives. Front Comput Neurosci 2013; 6:97. [PMID: 23293598 PMCID: PMC3534186 DOI: 10.3389/fncom.2012.00097] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2012] [Accepted: 12/04/2012] [Indexed: 11/24/2022] Open
Abstract
Biological movement generation combines three interesting aspects: its modular organization in movement primitives (MPs), its characteristics of stochastic optimality under perturbations, and its efficiency in terms of learning. A common approach to motor skill learning is to endow the primitives with dynamical systems. Here, the parameters of the primitive indirectly define the shape of a reference trajectory. We propose an alternative MP representation based on probabilistic inference in learned graphical models, with new and interesting properties that comply with salient features of biological movement control. Instead of endowing the primitives with dynamical systems, we propose to endow MPs with an intrinsic probabilistic planning system, integrating the power of stochastic optimal control (SOC) methods within an MP. The parameterization of the primitive is a graphical model that represents the dynamics and intrinsic cost function such that inference in this graphical model yields the control policy. We parameterize the intrinsic cost function using task-relevant features, such as the importance of passing through certain via-points. The system dynamics as well as intrinsic cost function parameters are learned in a reinforcement learning (RL) setting. We evaluate our approach on a complex 4-link balancing task. Our experiments show that our movement representation facilitates learning significantly and leads to better generalization to new task settings without re-learning.
Collapse
Affiliation(s)
- Elmar A Rückert
- Institute for Theoretical Computer Science, Graz University of Technology Austria
| | | | | | | |
Collapse
|
27
|
Hoerzer GM, Legenstein R, Maass W. Emergence of Complex Computational Structures From Chaotic Neural Networks Through Reward-Modulated Hebbian Learning. Cereb Cortex 2012; 24:677-90. [DOI: 10.1093/cercor/bhs348] [Citation(s) in RCA: 90] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
|
28
|
Hauser H, Ijspeert AJ, Füchslin RM, Pfeifer R, Maass W. The role of feedback in morphological computation with compliant bodies. Biol Cybern 2012; 106:595-613. [PMID: 22956025 DOI: 10.1007/s00422-012-0516-4] [Citation(s) in RCA: 35] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/06/2012] [Accepted: 08/10/2012] [Indexed: 05/26/2023]
Abstract
The generation of robust periodic movements of complex nonlinear robotic systems is inherently difficult, especially if parts of the robots are compliant. It has previously been proposed that complex nonlinear features of a robot, similarly to those of biological organisms, might facilitate its control. This bold hypothesis, commonly referred to as morphological computation, has recently received some theoretical support by Hauser et al. (Biol Cybern 105:355-370, doi:10.1007/s00422-012-0471-0, 2012). We show in this article that this theoretical support can be extended to cover not only the case of fading memory responses to external signals, but also the essential case of autonomous generation of adaptive periodic patterns, as, e.g., needed for locomotion. The theory predicts that feedback into the morphological computing system is necessary and sufficient for such tasks, for which a fading memory is insufficient. We demonstrate the viability of this theoretical analysis through computer simulations of complex nonlinear mass-spring systems that are trained to generate a large diversity of periodic movements by adapting the weights of a simple linear feedback device. Hence, the results of this article substantially enlarge the theoretically tractable application domain of morphological computation in robotics, and also provide new paradigms for understanding control principles of biological organisms.
Collapse
Affiliation(s)
- Helmut Hauser
- Artificial Intelligence Laboratory, Department of Informatics, University of Zurich, Andreasstrasse 15, 8050 Zurich, Switzerland.
| | | | | | | | | |
Collapse
|
29
|
Klampfl S, David SV, Yin P, Shamma SA, Maass W. A quantitative analysis of information about past and present stimuli encoded by spikes of A1 neurons. J Neurophysiol 2012; 108:1366-80. [PMID: 22696538 DOI: 10.1152/jn.00935.2011] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
To process the rich temporal structure of their acoustic environment, organisms have to integrate information over time into an appropriate neural response. Previous studies have addressed the modulation of responses of auditory neurons to a current sound as a function of the immediate stimulation history, but a quantitative analysis of this important computational process has been missing. In this study, we analyzed temporal integration of information in the spike output of 122 single neurons in primary auditory cortex (A1) of four awake ferrets in response to random tone sequences. We quantified the information contained in the responses about both current and preceding sounds in two ways: by estimating directly the mutual information between stimulus and response, and by training linear classifiers to decode information about the stimulus from the neural response. We found that 1) many neurons conveyed a significant amount of information not only about the current tone but also simultaneously about the previous tone, 2) the neural response to tone sequences was a nonlinear combination of responses to the tones in isolation, and 3) nevertheless, much of the information about current and previous tones could be extracted by linear decoders. Furthermore, our analysis of these experimental data shows that methods from information theory and the application of standard machine learning methods for extracting specific information yield quite similar results.
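Both kinds of analysis can be sketched on synthetic data. The following stands in for the recordings with an invented two-tone stimulus and Poisson spike counts; a plug-in information estimate and a trivial threshold decoder play the roles of the paper's estimators and linear classifiers:

```python
import numpy as np

rng = np.random.default_rng(4)

n_trials = 20000
stim = rng.integers(0, 2, size=n_trials)       # which of two tones was played
rate = np.where(stim == 0, 3.0, 8.0)           # tone-dependent firing rate
resp = rng.poisson(rate)                       # spike count per trial

def mutual_information(x, y):
    """Plug-in estimate of I(X;Y) in bits from paired discrete samples."""
    n = len(x)
    joint, px, py = {}, {}, {}
    for xi, yi in zip(x.tolist(), y.tolist()):
        joint[(xi, yi)] = joint.get((xi, yi), 0) + 1
        px[xi] = px.get(xi, 0) + 1
        py[yi] = py.get(yi, 0) + 1
    return sum(c / n * np.log2(c * n / (px[xi] * py[yi]))
               for (xi, yi), c in joint.items())

mi = mutual_information(stim, resp)            # bits about the tone identity
acc = ((resp > 5).astype(int) == stim).mean()  # trivial threshold "decoder"
```

The two numbers tell the same story from different angles, mirroring the paper's observation that information-theoretic and decoding-based analyses yield quite similar results. (In practice, plug-in estimates are biased upward for small sample sizes, which is why the paper's direct estimation requires care.)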
Collapse
Affiliation(s)
- Stefan Klampfl
- Institute for Theoretical Computer Science, Graz Univ. of Technology, Austria.
| | | | | | | | | |
Collapse
|
30
|
Abstract
The task of an organism to extract information about the external environment from sensory signals is based entirely on the analysis of ongoing afferent spike activity provided by the sense organs. We investigate the processing of auditory stimuli by an acoustic interneuron of insects. In contrast to most previous work, we do this by using stimuli and neurophysiological recordings directly in the nocturnal tropical rainforest, where the insect communicates. Different from typical recordings in soundproof laboratories, strong environmental noise from multiple sound sources interferes with the perception of acoustic signals in these realistic scenarios. We apply a recently developed unsupervised machine learning algorithm based on probabilistic inference to find frequently occurring firing patterns in the response of the acoustic interneuron. We can thus ask how much information the central nervous system of the receiver can extract from bursts without ever being told which type and which variants of bursts are characteristic of particular stimuli. Our results show that the reliability of burst coding in the time domain is so high that identical stimuli lead to extremely similar spike pattern responses, even for different preparations on different dates, and even if one of the preparations is recorded outdoors and the other one in the soundproof lab. Simultaneous recordings in two preparations exposed to the same acoustic environment reveal that characteristics of burst patterns are largely preserved among individuals of the same species. Our study shows that burst coding can provide a reliable mechanism for acoustic insects to classify and discriminate signals under very noisy real-world conditions. This gives new insights into the neural mechanisms potentially used by bushcrickets to discriminate conspecific songs from sounds of predators in similar carrier frequency bands.
Collapse
Affiliation(s)
- Michael Pfeiffer
- Institute for Theoretical Computer Science, TU Graz, Graz, Austria.
| | | | | | | | | |
Collapse
|
31
|
Almeida JS, Grüneberg A, Maass W, Vinga S. Fractal MapReduce decomposition of sequence alignment. Algorithms Mol Biol 2012; 7:12. [PMID: 22551205 PMCID: PMC3394223 DOI: 10.1186/1748-7188-7-12] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2011] [Accepted: 05/02/2012] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND The dramatic fall in the cost of genomic sequencing, and the increasing convenience of distributed cloud computing resources, positions the MapReduce coding pattern as a cornerstone of scalable bioinformatics algorithm development. In some cases an algorithm will find a natural distribution via use of map functions to process vectorized components, followed by a reduce of aggregate intermediate results. However, for some data analysis procedures such as sequence analysis, a more fundamental reformulation may be required. RESULTS In this report we describe a solution to sequence comparison that can be thoroughly decomposed into multiple rounds of map and reduce operations. The route taken makes use of iterated maps, a fractal analysis technique that has been found to provide an "alignment-free" solution to sequence analysis and comparison. That is, a solution that does not require dynamic programming, relying instead on a numeric Chaos Game Representation (CGR) data structure. This claim is demonstrated in this report by calculating the length of the longest similar segment from the USM coordinates of two analogous units alone, with no resort to dynamic programming. CONCLUSIONS The procedure described is an attempt at extreme decomposition and parallelization of sequence alignment in anticipation of a volume of genomic sequence data that cannot be met by current algorithmic frameworks. The solution found is delivered with a browser-based application (webApp), highlighting the browser's emergence as an environment for high performance distributed computing. AVAILABILITY Public distribution of accompanying software library with open source and version control at http://usm.github.com. Also available as a webApp through Google Chrome's WebStore http://chrome.google.com/webstore: search with "usm".
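The CGR construction the report relies on is compact enough to sketch. Each sequence position maps to a point in the unit square, and positions sharing a suffix of length k land within 2^-k of each other, which is what lets similar-segment length be read off coordinates without dynamic programming. This is a minimal illustration of that property only, not the cited USM library:

```python
import math

CORNERS = {'A': (0.0, 0.0), 'C': (0.0, 1.0), 'G': (1.0, 1.0), 'T': (1.0, 0.0)}

def cgr(seq):
    """Chaos Game Representation: one unit-square point per position."""
    x, y, coords = 0.5, 0.5, []
    for ch in seq:
        cx, cy = CORNERS[ch]
        x, y = (x + cx) / 2.0, (y + cy) / 2.0   # halfway toward the corner
        coords.append((x, y))
    return coords

def suffix_estimate(p, q):
    """Estimate shared-suffix length from coordinates alone.

    Positions sharing a suffix of length k always land within 2**-k of
    each other, so this estimate never undercounts the true length.
    """
    d = max(abs(p[0] - q[0]), abs(p[1] - q[1]))
    return math.inf if d == 0.0 else math.floor(-math.log2(d))

a = cgr("ACGTACGT")[-1]
b = cgr("TTTTACGT")[-1]   # shares the 4-symbol suffix "ACGT"
```

Because the comparison needs only per-position coordinates, it decomposes naturally into map steps (compute coordinates) and reduce steps (aggregate pairwise distances), which is the report's route into MapReduce.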
Collapse
|
32
|
Probst D, Maass W, Markram H, Gewaltig MO. Liquid Computing in a Simplified Model of Cortical Layer IV: Learning to Balance a Ball. Artificial Neural Networks and Machine Learning – ICANN 2012 2012. [DOI: 10.1007/978-3-642-33269-2_27] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/05/2022]
|
33
|
Hauser H, Ijspeert AJ, Füchslin RM, Pfeifer R, Maass W. Towards a theoretical foundation for morphological computation with compliant bodies. Biol Cybern 2011; 105:355-370. [PMID: 22290137 DOI: 10.1007/s00422-012-0471-0] [Citation(s) in RCA: 113] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/24/2011] [Accepted: 01/08/2012] [Indexed: 05/27/2023]
Abstract
The control of compliant robots is, due to their often nonlinear and complex dynamics, inherently difficult. The vision of morphological computation proposes to view these aspects not only as problems, but also as parts of the solution. Non-rigid body parts are no longer seen as imperfect realizations of rigid body parts, but rather as potential computational resources. The applicability of this vision has already been demonstrated for a variety of complex robot control problems. Nevertheless, a theoretical basis for understanding the capabilities and limitations of morphological computation has been missing so far. We present a model for morphological computation with compliant bodies, where a precise mathematical characterization of the potential computational contribution of a complex physical body is feasible. The theory suggests that complexity and nonlinearity, typically unwanted properties of robots, are desired features for providing computational power. We demonstrate that simple generic models of physical bodies, based on mass-spring systems, can be used to implement complex nonlinear operators. By adding a simple readout (which is static and linear) to the morphology, such devices are able to emulate complex mappings of input to output streams in continuous time. Hence, by outsourcing parts of the computation to the physical body, the difficult problem of learning to control a complex body could be reduced to a simple and perspicuous learning task, which cannot get stuck in local minima of an error function.
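A crude numerical caricature of this claim: a handful of randomly parameterized nonlinear spring-mass units acts as the "body", and only a static linear readout is trained. All constants and the target operator are invented for illustration; the paper's model is a continuous-time mass-spring network, not this toy integrator.

```python
import numpy as np

rng = np.random.default_rng(5)

T, dt = 4000, 0.01
tau = dt * np.arange(T)
u = np.sin(1.1 * tau) + 0.5 * np.sin(0.3 * tau)   # input stream

# the "body": six independent damped nonlinear spring-mass units
k = rng.uniform(1.0, 5.0, 6)        # linear stiffness
d = rng.uniform(0.2, 0.8, 6)        # damping
w_in = rng.uniform(0.5, 2.0, 6)     # input coupling
x, v = np.zeros(6), np.zeros(6)
states = np.zeros((T, 6))
for i in range(T):
    # quadratic + cubic spring terms supply the nonlinearity
    acc = -k * x - 1.0 * x**2 - 0.3 * x**3 - d * v + w_in * u[i]
    v += dt * acc
    x += dt * v
    states[i] = x

# only this static linear readout is trained (ordinary least squares)
target = u**2                        # a nonlinear target operator
X = np.hstack([states, np.ones((T, 1))])
w, *_ = np.linalg.lstsq(X, target, rcond=None)
nmse = np.mean((X @ w - target) ** 2) / np.var(target)
```

The division of labour is the point of the theory: the body supplies nonlinearity and memory, so learning reduces to linear regression, which cannot get stuck in local minima.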
Affiliation(s)
- Helmut Hauser
- Artificial Intelligence Laboratory, Department of Informatics, University of Zurich, Andreasstrasse 15, 8050 Zurich, Switzerland.
|
34
|
Buesing L, Bill J, Nessler B, Maass W. Neural dynamics as sampling: a model for stochastic computation in recurrent networks of spiking neurons. PLoS Comput Biol 2011; 7:e1002211. [PMID: 22096452 PMCID: PMC3207943 DOI: 10.1371/journal.pcbi.1002211] [Citation(s) in RCA: 150] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2011] [Accepted: 08/10/2011] [Indexed: 11/19/2022] Open
Abstract
The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons.
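The sampling interpretation can be illustrated with a minimal stand-in: stochastic binary units updated by Glauber (Gibbs) dynamics converge to a Boltzmann distribution, whose marginals can be checked by brute-force enumeration. Note that this is a deliberate simplification: the paper argues that Gibbs sampling is inconsistent with spiking dynamics and instead constructs a non-reversible chain with refractory state variables.

```python
import numpy as np

# Simplified stand-in for "neural dynamics as sampling": stochastic binary
# units with Glauber updates sample from p(z) ∝ exp(z'Wz/2 + b'z).
rng = np.random.default_rng(1)
W = np.array([[0.0, 0.8, -0.4],
              [0.8, 0.0, 0.6],
              [-0.4, 0.6, 0.0]])   # symmetric weights, zero diagonal (made up)
b = np.array([-0.2, 0.1, 0.0])

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

z = np.zeros(3)
counts = np.zeros(3)
n_steps = 200_000
for t in range(n_steps):
    i = t % 3                                  # update one unit per step
    u = W[i] @ z + b[i]                        # local field ("membrane potential")
    z[i] = float(rng.random() < sigmoid(u))
    counts += z
marginals = counts / n_steps

# exact marginals by enumerating all 2^3 states
states = np.array([[int(c) for c in np.binary_repr(s, 3)] for s in range(8)])
logp = np.array([s @ W @ s / 2 + b @ s for s in states])
p = np.exp(logp - logp.max())
p /= p.sum()
exact = p @ states
```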
Affiliation(s)
- Lars Buesing
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria.
|
35
|
Deus HF, Correa MC, Stanislaus R, Miragaia M, Maass W, de Lencastre H, Fox R, Almeida JS. S3QL: a distributed domain specific language for controlled semantic integration of life sciences data. BMC Bioinformatics 2011; 12:285. [PMID: 21756325 PMCID: PMC3155508 DOI: 10.1186/1471-2105-12-285] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2011] [Accepted: 07/14/2011] [Indexed: 02/04/2023] Open
Abstract
Background: The value and usefulness of data increases when it is explicitly interlinked with related data. This is the core principle of Linked Data. For life sciences researchers, harnessing the power of Linked Data to improve biological discovery is still challenged by the need to keep pace with rapidly evolving domains and requirements for collaboration and control, as well as with the reference semantic web ontologies and standards. Knowledge organization systems (KOSs) can provide an abstraction for publishing biological discoveries as Linked Data without complicating transactions with contextual minutiae such as provenance and access control. We have previously described the Simple Sloppy Semantic Database (S3DB) as an efficient model for creating knowledge organization systems using Linked Data best practices, with explicit distinction between domain and instantiation and support for a permission control mechanism that automatically migrates between the two. In this report we present a domain specific language, the S3DB query language (S3QL), to operate on its underlying core model and facilitate management of Linked Data. Results: Reflecting the data-driven nature of our approach, S3QL has been implemented as an application programming interface for S3DB systems hosting biomedical data, and its syntax was subsequently generalized beyond the S3DB core model. This achievement is illustrated with the assembly of an S3QL query to manage entities from the Simple Knowledge Organization System. The illustrative use cases include gastrointestinal clinical trials, genomic characterization of cancer by The Cancer Genome Atlas (TCGA), and molecular epidemiology of infectious diseases. Conclusions: S3QL was found to provide a convenient mechanism to represent context for interoperation between public and private datasets hosted at biomedical research institutions and linked data formalisms.
Affiliation(s)
- Helena F Deus
- Digital Enterprise Research Institute, National University of Ireland at Galway, IDA Business Park, Lower Dangan, Galway, Ireland.
|
36
|
Hauser H, Neumann G, Ijspeert AJ, Maass W. Biologically inspired kinematic synergies enable linear balance control of a humanoid robot. Biol Cybern 2011; 104:235-249. [PMID: 21523489 DOI: 10.1007/s00422-011-0430-1] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/22/2010] [Accepted: 04/06/2011] [Indexed: 05/30/2023]
Abstract
Despite many efforts, balance control of humanoid robots in the presence of unforeseen external or internal forces has remained an unsolved problem. The difficulty of this problem is a consequence of the high dimensionality of the action space of a humanoid robot, due to its large number of degrees of freedom (joints), and of non-linearities in its kinematic chains. Biped biological organisms face similar difficulties, but have nevertheless solved this problem. Experimental data reveal that many biological organisms reduce the high dimensionality of their action space by generating movements through linear superposition of a rather small number of stereotypical combinations of simultaneous movements of many joints, to which we refer as kinematic synergies in this paper. We show that by constructing two suitable non-linear kinematic synergies for the lower part of the body of a humanoid robot, balance control can in fact be reduced to a linear control problem, at least in the case of relatively slow movements. We demonstrate for a variety of tasks that the humanoid robot HOAP-2 acquires through this approach the capability to balance dynamically against unforeseen disturbances that may arise from external forces or from manipulating unknown loads.
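The dimensionality-reduction idea behind kinematic synergies can be illustrated with synthetic data (made-up numbers, not HOAP-2 recordings): joint trajectories generated as linear superpositions of two synergies span a two-dimensional subspace, which a principal component analysis recovers.

```python
import numpy as np

# Illustrative sketch: many joint angles, but only two underlying synergies.
rng = np.random.default_rng(4)
n_joints, T = 12, 1000
S = rng.standard_normal((n_joints, 2))             # two synergies over 12 joints
t = np.linspace(0, 2 * np.pi, T)
a = np.stack([np.sin(t), np.cos(2 * t)], axis=1)   # synergy activations over time
angles = a @ S.T                                   # (T x n_joints) joint trajectories

# PCA: eigen-decomposition of the covariance of the centered trajectories
X = angles - angles.mean(axis=0)
evals = np.linalg.eigvalsh(X.T @ X / T)[::-1]      # descending eigenvalues
explained = evals[:2].sum() / evals.sum()          # variance in the top-2 subspace
```

Control in the two-dimensional synergy space, rather than the twelve-dimensional joint space, is what makes a linear balance controller conceivable in the first place.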
Affiliation(s)
- Helmut Hauser
- Institute for Theoretical Computer Science, Graz University of Technology, 8010 Graz, Austria.
|
37
|
Kowatsch T, Maass W, Fleisch E. The role of product reviews on mobile devices for in-store purchases: consumers' usage intentions, costs and store preferences. ACTA ACUST UNITED AC 2011. [DOI: 10.1504/ijima.2011.038237] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
|
38
|
Rasch MJ, Schuch K, Logothetis NK, Maass W. Statistical comparison of spike responses to natural stimuli in monkey area V1 with simulated responses of a detailed laminar network model for a patch of V1. J Neurophysiol 2010; 105:757-78. [PMID: 21106898 DOI: 10.1152/jn.00845.2009] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
A major goal of computational neuroscience is the creation of computer models for cortical areas whose response to sensory stimuli resembles that of cortical areas in vivo in important aspects. It is seldom considered whether the simulated spiking activity is realistic (in a statistical sense) in response to natural stimuli. Because certain statistical properties of spike responses were suggested to facilitate computations in the cortex, acquiring a realistic firing regimen in cortical network models might be a prerequisite for analyzing their computational functions. We present a characterization and comparison of the statistical response properties of the primary visual cortex (V1) in vivo and in silico in response to natural stimuli. We recorded from multiple electrodes in area V1 of 4 macaque monkeys and developed a large state-of-the-art network model for a 5 × 5-mm patch of V1 composed of 35,000 neurons and 3.9 million synapses that integrates previously published anatomical and physiological details. By quantitative comparison of the model response to the "statistical fingerprint" of responses in vivo, we find that our model for a patch of V1 responds to the same movie in a way that matches the statistical structure of the recorded data surprisingly well. The deviation between the firing regimen of the model and the in vivo data is on the same level as deviations among monkeys and sessions. This suggests that, despite strong simplifications and abstractions of cortical network models, they are nevertheless capable of generating realistic spiking activity. To reach a realistic firing state, it was not only necessary to include both N-methyl-d-aspartate and GABA(B) synaptic conductances in our model, but also to markedly increase the strength of excitatory synapses onto inhibitory neurons (>2-fold) in comparison to literature values, hinting at the importance of carefully adjusting the effect of inhibition to achieve realistic dynamics in current network models.
Affiliation(s)
- Malte J Rasch
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria.
|
39
|
Bill J, Schuch K, Brüderle D, Schemmel J, Maass W, Meier K. Compensating Inhomogeneities of Neuromorphic VLSI Devices Via Short-Term Synaptic Plasticity. Front Comput Neurosci 2010; 4:129. [PMID: 21031027 PMCID: PMC2965017 DOI: 10.3389/fncom.2010.00129] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2010] [Accepted: 08/11/2010] [Indexed: 11/17/2022] Open
Abstract
Recent developments in neuromorphic hardware engineering make mixed-signal VLSI neural network models promising candidates for neuroscientific research tools and massively parallel computing devices, especially for tasks which exhaust the computing power of software simulations. Still, like all analog hardware systems, neuromorphic models suffer from restricted configurability and production-related fluctuations of device characteristics. Since future systems with ever-smaller structures will inevitably exhibit such inhomogeneities at the unit level, self-regulation properties become a crucial requirement for their successful operation. By applying a cortically inspired self-adjusting network architecture, we show that the activity of generic spiking neural networks emulated on a neuromorphic hardware system can be kept within a biologically realistic firing regime and gain a remarkable robustness against transistor-level variations. As a first approach of this kind in engineering practice, the short-term synaptic depression and facilitation mechanisms implemented within an analog VLSI model of I&F neurons are functionally utilized for the purpose of network-level stabilization. We present experimental data acquired both from the hardware model and from comparative software simulations that demonstrate the applicability of the employed paradigm to neuromorphic VLSI devices.
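Why short-term depression can stabilize network activity is visible already at the single-synapse level. The sketch below uses a Tsodyks-Markram-style depressing synapse with illustrative parameters (not the hardware's), together with the standard closed-form steady state under periodic spiking: as the input rate grows, the transmitted drive saturates near 1/τ_rec instead of growing without bound.

```python
import numpy as np

# Tsodyks-Markram-style short-term depression, steady state for regular spiking.
# Parameters are illustrative, not taken from the VLSI system.
U, tau_rec = 0.5, 0.8          # utilization fraction, recovery time constant (s)

def steady_state_drive(rate):
    """Mean transmitted drive (rate x efficacy) for periodic spiking at `rate` Hz."""
    isi = 1.0 / rate
    decay = np.exp(-isi / tau_rec)
    # closed-form steady-state resource level for periodic input
    x = (1.0 - decay) / (1.0 - (1.0 - U) * decay)
    return rate * U * x

low = steady_state_drive(5.0)      # modest drive at low rates
high = steady_state_drive(100.0)   # saturates near 1/tau_rec at high rates
```

A 20-fold increase of the presynaptic rate thus yields well under a 2-fold increase of the drive, which is the self-limiting behavior exploited for stabilization.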
Affiliation(s)
- Johannes Bill
- Kirchhoff Institute for Physics, University of Heidelberg Heidelberg, Germany
|
40
|
Abstract
Neurons in the brain are able to detect and discriminate salient spatiotemporal patterns in the firing activity of presynaptic neurons. It is open how they can learn to achieve this, especially without the help of a supervisor. We show that a well-known unsupervised learning algorithm for linear neurons, slow feature analysis (SFA), is able to acquire the discrimination capability of one of the best algorithms for supervised linear discrimination learning, the Fisher linear discriminant (FLD), given suitable input statistics. We demonstrate the power of this principle by showing that it enables readout neurons from simulated cortical microcircuits to learn without any supervision to discriminate between spoken digits and to detect repeated firing patterns that are embedded into a stream of noise spike trains with the same firing statistics. Both these computer simulations and our theoretical analysis show that slow feature extraction enables neurons to extract and collect information that is spread out over a trajectory of firing states that lasts several hundred ms. In addition, it enables neurons to learn without supervision to keep track of time (relative to a stimulus onset, or the initiation of a motor response). Hence, these results elucidate how the brain could compute with trajectories of firing states rather than only with fixed point attractors. It also provides a theoretical basis for understanding recent experimental results on the emergence of view- and position-invariant classification of visual objects in inferior temporal cortex.
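Linear slow feature analysis itself is short enough to state directly. The toy below is our illustration, not the paper's cortical-microcircuit setup: it whitens a mixture of a slow sine and fast noise, then takes the direction with the smallest variance of the temporal derivative, which recovers the slow source.

```python
import numpy as np

# Toy linear SFA: whiten, then minimize temporal-derivative variance.
rng = np.random.default_rng(2)
T = 5000
t = np.arange(T)
slow = np.sin(2 * np.pi * t / T)               # slowly varying source
fast = rng.standard_normal(T)                  # quickly varying source
A = np.array([[1.0, 0.7], [0.3, 1.0]])         # made-up mixing matrix
x = np.stack([slow, fast], axis=1) @ A.T

# step 1: whiten the centered input
x = x - x.mean(axis=0)
evals, evecs = np.linalg.eigh(x.T @ x / T)
white = x @ evecs @ np.diag(evals ** -0.5)

# step 2: direction of minimal temporal variation
dw = np.diff(white, axis=0)
devals, devecs = np.linalg.eigh(dw.T @ dw / len(dw))
y = white @ devecs[:, 0]                       # slowest feature (sign-ambiguous)

corr = abs(np.corrcoef(y, slow)[0, 1])
```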
Affiliation(s)
- Stefan Klampfl
- Institute for Theoretical Computer Science, Graz University of Technology, A-8010 Graz, Austria.
|
41
|
Abstract
We introduce a framework for decision making in which the learning of decision making is reduced to its simplest and biologically most plausible form: Hebbian learning on a linear neuron. We cast our Bayesian-Hebb learning rule as reinforcement learning in which certain decisions are rewarded and prove that each synaptic weight will on average converge exponentially fast to the log-odd of receiving a reward when its pre- and postsynaptic neurons are active. In our simple architecture, a particular action is selected from the set of candidate actions by a winner-take-all operation. The global reward assigned to this action then modulates the update of each synapse. Apart from this global reward signal, our reward-modulated Bayesian Hebb rule is a pure Hebb update that depends only on the coactivation of the pre- and postsynaptic neurons, not on the weighted sum of all presynaptic inputs to the postsynaptic neuron as in the perceptron learning rule or the Rescorla-Wagner rule. This simple approach to action-selection learning requires that information about sensory inputs be presented to the Bayesian decision stage in a suitably preprocessed form resulting from other adaptive processes (acting on a larger timescale) that detect salient dependencies among input features. Hence our proposed framework for fast learning of decisions also provides interesting new hypotheses regarding neural nodes and computational goals of cortical areas that provide input to the final decision stage.
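The convergence target stated in the abstract, a weight equal to the log-odds of reward under coactivation, can be demonstrated with a small simulation. The specific update rule below is our illustrative reconstruction, not guaranteed to be the paper's exact rule: on each pre/post coactivation, the weight moves up by eta·(1 + e^(-w)) if the action was rewarded and down by eta·(1 + e^(+w)) otherwise, whose fixed point is exactly log(p/(1-p)).

```python
import numpy as np

# Reconstruction of a reward-modulated Hebbian update with the log-odds fixed
# point (illustrative; the paper's exact rule may differ in detail).
rng = np.random.default_rng(3)
p_reward = 0.8                 # true reward probability given coactivation
eta = 0.01
w = 0.0
tail = []                      # record w over the last 10k trials
for i in range(50_000):
    if rng.random() < p_reward:        # rewarded coactivation
        w += eta * (1.0 + np.exp(-w))
    else:                              # unrewarded coactivation
        w -= eta * (1.0 + np.exp(w))
    if i >= 40_000:
        tail.append(w)

w_avg = float(np.mean(tail))
log_odds = float(np.log(p_reward / (1 - p_reward)))
```

At the fixed point w* = log(p/(1-p)), the expected update p·(1+e^(-w*)) - (1-p)·(1+e^(w*)) vanishes, which is easy to verify by substitution.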
Affiliation(s)
- Michael Pfeiffer
- Institute for Theoretical Computer Science, Graz University of Technology, A-8010 Graz, Austria.
|
42
|
Abstract
Neurons receive thousands of presynaptic input spike trains while emitting a single output spike train. This drastic dimensionality reduction suggests considering a neuron as a bottleneck for information transmission. Extending recent results, we propose a simple learning rule for the weights of spiking neurons derived from the information bottleneck (IB) framework that minimizes the loss of relevant information transmitted in the output spike train. In the IB framework, relevance of information is defined with respect to contextual information, the latter entering the proposed learning rule as a “third” factor besides pre- and postsynaptic activities. This renders the theoretically motivated learning rule a plausible model for experimentally observed synaptic plasticity phenomena involving three factors. Furthermore, we show that the proposed IB learning rule allows spiking neurons to learn a predictive code, that is, to extract those parts of their input that are predictive for future input.
Affiliation(s)
- Lars Buesing
- Institute for Theoretical Computer Science, Graz University of Technology, A-8010 Graz, Austria
- Wolfgang Maass
- Institute for Theoretical Computer Science, Graz University of Technology, A-8010 Graz, Austria
|
43
|
Almeida JS, Deus HF, Maass W. S3DB core: a framework for RDF generation and management in bioinformatics infrastructures. BMC Bioinformatics 2010; 11:387. [PMID: 20646315 PMCID: PMC2918582 DOI: 10.1186/1471-2105-11-387] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2010] [Accepted: 07/20/2010] [Indexed: 11/16/2022] Open
Abstract
Background: Biomedical research is set to greatly benefit from the use of semantic web technologies in the design of computational infrastructure. However, beyond well-defined research initiatives, substantial issues of data heterogeneity, source distribution, and privacy currently stand in the way of the personalization of medicine. Results: A computational framework for bioinformatic infrastructure was designed to deal with the heterogeneous data sources and the sensitive mixture of public and private data that characterizes the biomedical domain. This framework consists of a logical model built with semantic web tools, coupled with a Markov process that propagates user operator states. An accompanying open source prototype was developed to meet a series of applications that range from collaborative multi-institution data acquisition efforts to data analysis applications that need to quickly traverse complex data structures. This report describes the two abstractions underlying the S3DB-based infrastructure, logical and numerical, and discusses its generality beyond the immediate confines of existing implementations. Conclusions: The emergence of the "web as a computer" requires a formal model for the different functionalities involved in reading and writing to it. The S3DB core model proposed was found to address the design criteria of biomedical computational infrastructure, such as those supporting large-scale multi-investigator research, clinical trials, and molecular epidemiology.
Affiliation(s)
- Jonas S Almeida
- Department of Bioinformatics and Computational Biology, The University of Texas M D Anderson Cancer Center, 1515 Holcombe Blvd Houston, TX 77030, USA.
|
44
|
Kowatsch T, Maass W. In-store consumer behavior: How mobile recommendation agents influence usage intentions, product purchases, and store preferences. Computers in Human Behavior 2010. [DOI: 10.1016/j.chb.2010.01.006] [Citation(s) in RCA: 55] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
45
|
Ventura J, Teixeira JM, Araujo JP, Sousa JB, Ferreira R, Freitas PP, Langer J, Ocker B, Maass W. Influence of pinholes on MgO-tunnel junction barrier parameters obtained from current-voltage characteristics. J Nanosci Nanotechnol 2010; 10:2731-2734. [PMID: 20355492 DOI: 10.1166/jnn.2010.1377] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/29/2023]
Abstract
Magnetic tunnel junctions (MTJs) with thin barriers are already used as read sensors in recording media. However, the presence of pinholes across such few-Å-thick barriers cannot be excluded, and one needs to investigate their effect on the MTJ transport properties. By applying large electrical currents we could change the electrical resistance of the studied MgO MTJs (due to pinhole-size variations) and study how pinholes influence the barrier parameters (thickness t and height φ) obtained by fitting current-voltage characteristics to Simmons' model. We found that, with decreasing resistance, the barrier thickness (height) decreases (increases). These results were well reproduced by a parallel-resistance model, allowing us to estimate pinhole-free barrier parameters.
Affiliation(s)
- J Ventura
- IN, IFIMUP Unit, Rua do Campo Alegre, 687, 4169-007, Porto, Portugal
|
46
|
Wrona J, Langer J, Ocker B, Maass W, Kanak J, Stobiecki T, Powroźnik W. Low resistance magnetic tunnel junctions with MgO wedge barrier. ACTA ACUST UNITED AC 2010. [DOI: 10.1088/1742-6596/200/5/052032] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
|
47
|
Abstract
From a theoretical point of view, statistical inference is an attractive model of brain operation. However, it is unclear how to implement these inferential processes in neuronal networks. We offer a solution to this problem by showing in detailed simulations how the belief propagation algorithm on a factor graph can be embedded in a network of spiking neurons. We use pools of spiking neurons as the function nodes of the factor graph. Each pool gathers “messages” in the form of population activities from its input nodes and combines them through its network dynamics. Each of the various output messages to be transmitted over the edges of the graph is computed by a group of readout neurons that feed in their respective destination pools. We use this approach to implement two examples of factor graphs. The first example, drawn from coding theory, models the transmission of signals through an unreliable channel and demonstrates the principles and generality of our network approach. The second, more applied example is of a psychophysical mechanism in which visual cues are used to resolve hypotheses about the interpretation of an object's shape and illumination. These two examples, and also a statistical analysis, demonstrate good agreement between the performance of our networks and the direct numerical evaluation of belief propagation.
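The sum-product (belief propagation) messages that the abstract embeds in spiking networks can be checked directly on a toy factor graph. For a three-variable chain of binary variables with made-up factor tables, the product of the two incoming messages gives the exact marginal of the middle variable, since the graph is a tree.

```python
import numpy as np

# Sum-product on a binary chain x1 - f12 - x2 - f23 - x3 (made-up factors),
# verified against a brute-force marginal.
f12 = np.array([[1.0, 0.5], [0.5, 2.0]])   # pairwise factor f12(x1, x2)
f23 = np.array([[0.3, 1.0], [1.0, 0.7]])   # pairwise factor f23(x2, x3)

m1_to_2 = f12.sum(axis=0)                  # message: sum over x1 of f12(x1, x2)
m3_to_2 = f23.sum(axis=1)                  # message: sum over x3 of f23(x2, x3)
belief2 = m1_to_2 * m3_to_2                # belief at x2 = product of messages
belief2 = belief2 / belief2.sum()

# brute-force marginal of x2 from the full joint
joint = np.einsum('ij,jk->ijk', f12, f23)  # joint(x1, x2, x3) up to normalization
marg2 = joint.sum(axis=(0, 2))
marg2 = marg2 / marg2.sum()
```

In the paper's networks, each such message is represented by the population activity of a pool of spiking neurons rather than by a vector of numbers.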
Affiliation(s)
- Andreas Steimer
- Institute of Neuroinformatics, University of Zürich, and ETH Zürich, Zürich, 8057 Switzerland
- Wolfgang Maass
- Institute for Theoretical Computer Science, Technische Universität Graz, A-8010 Graz, Austria
- Rodney Douglas
- Institute of Neuroinformatics, University of Zürich, and ETH Zürich, Zürich, 8057 Switzerland
|
48
|
Haeusler S, Schuch K, Maass W. Motif distribution, dynamical properties, and computational performance of two data-based cortical microcircuit templates. ACTA ACUST UNITED AC 2009; 103:73-87. [PMID: 19500669 DOI: 10.1016/j.jphysparis.2009.05.006] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
The neocortex is a continuous sheet composed of rather stereotypical local microcircuits that consist of neurons on several laminae with characteristic synaptic connectivity patterns. An understanding of the structure and computational function of these cortical microcircuits may hold the key for understanding the enormous computational power of the neocortex. Two templates for the structure of laminar cortical microcircuits have recently been published by Thomson et al. and Binzegger et al., both resulting from long-lasting experimental studies (but based on different methods). We analyze and compare in this article the structure of these two microcircuit templates. In particular, we examine the distribution of network motifs, i.e. of subcircuits consisting of a small number of neurons. The distribution of these building blocks has recently emerged as a method for characterizing similarities and differences among complex networks. We show that the two microcircuit templates have quite different distributions of network motifs, although they both have a characteristic small-world property. In order to understand the dynamical and computational properties of these two microcircuit templates, we have generated computer models of them, consisting of Hodgkin-Huxley point neurons with conductance-based synapses that have a biologically realistic short-term plasticity. The performance of these two cortical microcircuit models was studied for seven generic computational tasks that require accumulation and merging of information contained in two afferent spike inputs. Although the two models exhibit a different performance for some of these tasks, their average computational performance is very similar. When we changed the connectivity structure of these two microcircuit models in order to see which aspects of it are essential for computational performance, we found that the distribution of degrees of nodes is a common key factor for their computational performance. We also show that their computational performance is correlated with specific statistical properties of the circuit dynamics that are induced by a particular distribution of degrees of nodes.
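Motif counting of the kind used in this analysis is straightforward for two-neuron motifs. The sketch below tallies unconnected, one-way, and reciprocal pairs in an Erdős-Rényi control network (illustrative, not the Thomson or Binzegger templates); for independent edges the chance level for reciprocal pairs is p².

```python
import numpy as np
from itertools import combinations

# Two-neuron motif census on a random directed graph (illustrative control).
rng = np.random.default_rng(5)
n, p = 60, 0.1
A = (rng.random((n, n)) < p).astype(int)   # directed adjacency matrix
np.fill_diagonal(A, 0)                     # no self-connections

counts = {"none": 0, "one_way": 0, "reciprocal": 0}
for i, j in combinations(range(n), 2):
    k = A[i, j] + A[j, i]                  # 0, 1, or 2 edges between the pair
    counts[["none", "one_way", "reciprocal"][k]] += 1

pairs = n * (n - 1) // 2
recip_observed = counts["reciprocal"] / pairs
recip_expected = p * p                     # chance level for independent edges
```

Deviations of such observed frequencies from the chance level are exactly what distinguishes the two microcircuit templates in the paper's motif analysis.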
Affiliation(s)
- Stefan Haeusler
- Institute for Theoretical Computer Science, Graz University of Technology, Austria.
|
49
|
Abstract
Independent component analysis (or blind source separation) is assumed to be an essential component of sensory processing in the brain and could provide a less redundant representation about the external world. Another powerful processing strategy is the optimization of internal representations according to the information bottleneck method. This method would allow extracting preferentially those components from high-dimensional sensory input streams that are related to other information sources, such as internal predictions or proprioceptive feedback. However, there exists a lack of models that could explain how spiking neurons could learn to execute either of these two processing strategies. We show in this article how stochastically spiking neurons with refractoriness could in principle learn in an unsupervised manner to carry out both information bottleneck optimization and the extraction of independent components. We derive suitable learning rules, which extend the well-known BCM rule, from abstract information optimization principles. These rules will simultaneously keep the firing rate of the neuron within a biologically realistic range.
Affiliation(s)
- Stefan Klampfl
- Institute for Theoretical Computer Science, Graz University of Technology, A-8010 Graz, Austria.
|
50
|
Abstract
A conspicuous ability of the brain is to seamlessly assimilate and process spatial and temporal features of sensory stimuli. This ability is indispensable for the recognition of natural stimuli. Yet, a general computational framework for processing spatiotemporal stimuli remains elusive. Recent theoretical and experimental work suggests that spatiotemporal processing emerges from the interaction between incoming stimuli and the internal dynamic state of neural networks, including not only their ongoing spiking activity but also their 'hidden' neuronal states, such as short-term synaptic plasticity.
Affiliation(s)
- Dean V Buonomano
- Department of Neurobiology, Brain Research Institute, University of California, Los Angeles, California 90095, USA.
|