1. Gebicke-Haerter PJ. The computational power of the human brain. Front Cell Neurosci 2023; 17:1220030. PMID: 37608987; PMCID: PMC10441807; DOI: 10.3389/fncel.2023.1220030.
Abstract
At the end of the 20th century, analog systems in computing were widely replaced by digital systems because of their greater computing power. Nevertheless, the question remains intriguing: is the brain analog or digital? Initially, the digital view was favored, treating the brain as a Turing machine that works like a digital computer. More recently, however, digital and analog processes have been combined to implement human-like behavior in robots, endowing them with artificial intelligence (AI). We therefore think it is timely to compare mathematical models with the biology of computation in the brain. To this end, digital and analog processes clearly identified in cellular and molecular interactions in the central nervous system are highlighted. Beyond that, we try to pinpoint reasons that distinguish in silico computation from salient features of biological computation. First, genuinely analog information processing has been observed at electrical synapses and through gap junctions, the latter in both neurons and astrocytes. Seemingly opposed to that, neuronal action potentials (APs), or spikes, are clearly digital events, like the yes/no or 1/0 of a Turing machine. However, spikes are rarely uniform; they can vary in amplitude and width, which has significant, differential effects on transmitter release at the presynaptic terminal, where the quantal (vesicular) release itself is nevertheless digital. Conversely, at the dendritic site of the postsynaptic neuron, there are numerous analog computational events. Moreover, synaptic transmission of information is not only neuronal but heavily influenced by astrocytes, which tightly ensheathe the majority of synapses in the brain (the tripartite synapse). At this point, LTP and LTD, which modify synaptic plasticity and are believed to induce short- and long-term memory processes including consolidation (roughly equivalent to RAM and ROM in electronic devices), have to be discussed.
Present knowledge of how the brain stores and retrieves memories includes a variety of options (e.g., neuronal network oscillations, engram cells, the astrocytic syncytium). Epigenetic features also play crucial roles in memory formation and consolidation, which necessarily leads to molecular events such as gene transcription and translation. In conclusion, brain computation is not only digital or analog, or a combination of both, but encompasses these features in parallel, together with higher orders of complexity.
Affiliation(s)
- Peter J. Gebicke-Haerter
- Institute of Psychopharmacology, Central Institute of Mental Health, Faculty of Medicine, University of Heidelberg, Mannheim, Germany
2. Fernandez-Ruiz A, Sirota A, Lopes-Dos-Santos V, Dupret D. Over and above frequency: Gamma oscillations as units of neural circuit operations. Neuron 2023; 111:936-953. PMID: 37023717; PMCID: PMC7614431; DOI: 10.1016/j.neuron.2023.02.026.
Abstract
Gamma oscillations (∼30-150 Hz) are widespread correlates of neural circuit function. These network activity patterns have been described across multiple animal species, brain structures, and behaviors, and are usually identified by their spectral peak frequency. Yet, despite intensive investigation, it remains unclear whether gamma oscillations implement causal mechanisms of specific brain functions or represent a general dynamic mode of neural circuit operation. In this Perspective, we review recent advances in the study of gamma oscillations toward a deeper understanding of their cellular mechanisms, neural pathways, and functional roles. We argue that a given gamma rhythm does not per se implement any specific cognitive function but rather constitutes an activity motif reporting the cellular substrates, communication channels, and computational operations underlying information processing in its generating brain circuit. Accordingly, we propose shifting the attention from a frequency-based to a circuit-level definition of gamma oscillations.
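As a concrete illustration of the frequency-based identification the authors argue is insufficient on its own, the following minimal sketch (function name and synthetic signal are our own, not from the paper) picks the spectral peak in the 30-150 Hz band from a periodogram:

```python
import numpy as np

def gamma_peak_frequency(x, fs, band=(30.0, 150.0)):
    """Peak frequency of the periodogram restricted to the gamma band."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x - x.mean())) ** 2
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[in_band][np.argmax(psd[in_band])]

# 1 s of a synthetic LFP: an 80 Hz gamma component riding on a 10 Hz rhythm
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
lfp = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.2 * np.sin(2 * np.pi * 80 * t)
peak = gamma_peak_frequency(lfp, fs)  # -> 80.0
```

By the authors' argument, such a peak frequency alone says nothing about the generating circuit; it only motivates the circuit-level analysis the review advocates.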
Affiliation(s)
- Anton Sirota
- Bernstein Center for Computational Neuroscience, Faculty of Medicine, Ludwig-Maximilians Universität München, Planegg-Martinsried, Germany.
- Vítor Lopes-Dos-Santos
- Medical Research Council Brain Network Dynamics Unit, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK.
- David Dupret
- Medical Research Council Brain Network Dynamics Unit, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK.
3. George VK, Gupta A, Silva GA. Identifying Steady State in the Network Dynamics of Spiking Neural Networks. Heliyon 2023; 9:e13913. PMID: 36967881; PMCID: PMC10036509; DOI: 10.1016/j.heliyon.2023.e13913.
Abstract
Analysis of the dynamics of complex networks can provide valuable information. For example, the dynamics can be used to characterize and differentiate between different network inputs and configurations. However, without quantitatively delineating the network's dynamic regimes, analysis of the network's dynamics is based on heuristics and qualitative signatures of transient or steady-state regimes. This is not ideal because interesting phenomena can occur during the transient regime, steady-state regime, or at the transition between the two dynamic regimes. Moreover, for simulated and observed systems, precise knowledge of the network's dynamical regime is imperative when considering metrics on minimal mathematical descriptions of the dynamics, otherwise either too much or too little data is analyzed. Here, we develop quantitative methods to ascertain the starting point and period of steady-state network activity. Using the precise knowledge of the network's dynamic regimes, we build minimal representations of the network dynamics that form the basis for future work. We show applications of our techniques on idealized signals and on the dynamics of a biologically inspired spiking neural network.
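The delineation problem can be made concrete with a toy criterion (our own illustration, not the method of the cited paper): declare steady state from the last time the signal leaves a tolerance band around its final mean:

```python
import numpy as np

def steady_state_onset(x, tail_window, tol):
    """Index of the first sample after which |x - final_mean| stays within tol,
    where final_mean is estimated from the last `tail_window` samples.
    A toy delineation criterion, not the cited paper's method."""
    final_mean = x[-tail_window:].mean()
    outside = np.where(np.abs(x - final_mean) > tol)[0]
    return 0 if outside.size == 0 else int(outside[-1]) + 1

# An exponentially decaying transient settling to 1
t = np.arange(100)
signal = 1.0 + np.exp(-t / 5.0)
onset = steady_state_onset(signal, tail_window=20, tol=0.05)  # -> 15
```

Even this toy version shows why the choice matters: the reported onset moves with `tol`, so analyses that threshold by eye will disagree about which samples belong to the transient.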
4. Pircher T, Pircher B, Schlücker E, Feigenspan A. The structure dilemma in biological and artificial neural networks. Sci Rep 2021; 11:5621. PMID: 33692408; PMCID: PMC7970964; DOI: 10.1038/s41598-021-84813-6.
Abstract
Brain research to date has revealed that structure and function are highly related. Studies have repeatedly shown, for example, that the brains of patients suffering from schizophrenia or other diseases have a different connectome compared with healthy people. Apart from stochastic processes, however, an inherent logic describing how neurons connect to each other has not yet been identified. We revisited this structural dilemma by comparing and analyzing artificial and biologically based neural networks. Specifically, we used feed-forward and recurrent artificial neural networks, as well as networks based on the structure of the micro-connectome of C. elegans and of the human macro-connectome. We trained these diverse networks, which differ markedly in their architecture, initialization, and pruning technique, and found remarkable parallels between biologically based and artificial neural networks, as we were additionally able to show that the dilemma is also present in artificial neural networks. Our findings show that structure contains all the information, but that this structure is not exclusive: the same structure was able to solve completely different problems with only minimal adjustments. We focused in particular on the influence of the weights and the neuron offset value, as they show different adaptation behavior. Our findings open up new questions in the fields of artificial and biological information-processing research.
Affiliation(s)
- Thomas Pircher
- Institute of Process Machinery and Systems Engineering, Friedrich-Alexander University Erlangen-Nuremberg, Cauerstraße 4, 91058, Erlangen, Germany.
- Bianca Pircher
- Department Biology, Animal Physiology, Friedrich-Alexander University Erlangen-Nuremberg, Staudtstraße 5, 91058, Erlangen, Germany
- Eberhard Schlücker
- Institute of Process Machinery and Systems Engineering, Friedrich-Alexander University Erlangen-Nuremberg, Cauerstraße 4, 91058, Erlangen, Germany
- Andreas Feigenspan
- Department Biology, Animal Physiology, Friedrich-Alexander University Erlangen-Nuremberg, Staudtstraße 5, 91058, Erlangen, Germany
5. Tian Y, Sun P. Characteristics of the neural coding of causality. Phys Rev E 2021; 103:012406. PMID: 33601638; DOI: 10.1103/physreve.103.012406.
Abstract
While causality processing is an essential cognitive capacity of the neural system, a systematic understanding of the neural coding of causality remains elusive. We propose a physically grounded analysis of this issue and demonstrate that the neural dynamics encodes the original causality between external events in a nearly homomorphic way. The causality coding is robust to the amount of historical information retained in memory and features high precision but low recall. This coding process creates a sparser representation of the external causality. Finally, we propose a statistical characterization of the neural coding mapping from the original causality to the coded causality in neural dynamics.
Affiliation(s)
- Yang Tian
- Department of Psychology, Tsinghua University, Beijing 100084, China and Tsinghua Brain and Intelligence Lab, Beijing 100084, China
- Pei Sun
- Department of Psychology, Tsinghua University, Beijing 100084, China and Tsinghua Brain and Intelligence Lab, Beijing 100084, China
6
|
Cofré R, Maldonado C, Cessac B. Thermodynamic Formalism in Neuronal Dynamics and Spike Train Statistics. ENTROPY (BASEL, SWITZERLAND) 2020; 22:E1330. [PMID: 33266513 PMCID: PMC7712217 DOI: 10.3390/e22111330] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/09/2020] [Revised: 11/13/2020] [Accepted: 11/15/2020] [Indexed: 12/04/2022]
Abstract
The Thermodynamic Formalism provides a rigorous mathematical framework for studying quantitative and qualitative aspects of dynamical systems. At its core, there is a variational principle that corresponds, in its simplest form, to the Maximum Entropy principle. It is used as a statistical inference procedure to represent, by specific probability measures (Gibbs measures), the collective behaviour of complex systems. This framework has found applications in different domains of science. In particular, it has been fruitful and influential in neurosciences. In this article, we review how the Thermodynamic Formalism can be exploited in the field of theoretical neuroscience, as a conceptual and operational tool, in order to link the dynamics of interacting neurons and the statistics of action potentials from either experimental data or mathematical models. We comment on perspectives and open problems in theoretical neuroscience that could be addressed within this formalism.
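In its simplest form, the variational principle yields a Gibbs measure whose parameters match observed constraints. A minimal sketch of that special case (our own example: constraining only single-neuron firing probabilities, which makes the Gibbs measure factorize):

```python
import numpy as np

def maxent_fields(rates):
    """Fields h_i of the maximum-entropy (Gibbs) distribution over binary
    spike words constrained only by firing probabilities r_i:
    p(s) ∝ exp(Σ_i h_i s_i), with h_i = log(r_i / (1 - r_i))."""
    r = np.asarray(rates, dtype=float)
    return np.log(r / (1.0 - r))

def gibbs_probability(word, h):
    """Probability of a binary spike word under the factorized Gibbs measure."""
    log_z = np.sum(np.log1p(np.exp(h)))  # log partition function, independent units
    return float(np.exp(np.dot(h, word) - log_z))

h = maxent_fields([0.2, 0.5])
p11 = gibbs_probability(np.array([1, 1]), h)  # -> 0.2 * 0.5 = 0.1
```

Constraining pairwise correlations as well gives the Ising-type Gibbs measures commonly fitted to spike trains; those couplings no longer have a closed form and must be fitted iteratively, which is where the full thermodynamic-formalism machinery reviewed here comes in.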
Affiliation(s)
- Rodrigo Cofré
- CIMFAV-Ingemat, Facultad de Ingeniería, Universidad de Valparaíso, Valparaíso 2340000, Chile
- Cesar Maldonado
- IPICYT/División de Matemáticas Aplicadas, San Luis Potosí 78216, Mexico
- Bruno Cessac
- Inria Biovision team and Neuromod Institute, Université Côte d'Azur, France
7
|
Estarellas C, Masoliver M, Masoller C, Mirasso CR. Characterizing signal encoding and transmission in class I and class II neurons via ordinal time-series analysis. CHAOS (WOODBURY, N.Y.) 2020; 30:013123. [PMID: 32013495 DOI: 10.1063/1.5121257] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/24/2019] [Accepted: 12/30/2019] [Indexed: 06/10/2023]
Abstract
Neurons encode and transmit information in spike sequences. However, despite the effort devoted to understanding encoding and transmission, the mechanisms underlying neuronal encoding are not yet fully understood. Here, we use a nonlinear method of time-series analysis (known as ordinal analysis) to compare the statistics of spike sequences generated by applying an input signal to the Morris-Lecar neuronal model. In particular, we consider two different regimes for the neurons, which lead to two classes of excitability: class I, where the frequency-current curve is continuous, and class II, where the frequency-current curve is discontinuous. By applying ordinal analysis to sequences of inter-spike intervals (ISIs), our goals are (1) to investigate whether different neuron types can generate spike sequences that have similar symbolic properties; (2) to gain a deeper understanding of the effects of electrical (diffusive) and excitatory chemical (i.e., excitatory synaptic) couplings; and (3) to compare, when a small-amplitude periodic signal is applied to one of the neurons, how the signal features (amplitude and frequency) are encoded and transmitted in the generated ISI sequences for both class I and class II neurons and electrical or chemical couplings. We find that, depending on the frequency, specific combinations of neuron class and coupling type allow a more effective encoding, or a more effective transmission, of the signal.
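Ordinal analysis itself is easy to state: each window of the ISI sequence is mapped to the permutation that sorts it (the Bandt-Pompe symbolization), and the statistics of those symbols are compared. A minimal sketch with our own function names:

```python
from collections import Counter
from itertools import permutations

def ordinal_pattern(window):
    """Permutation of indices that sorts the window (ties broken by position)."""
    return tuple(sorted(range(len(window)), key=window.__getitem__))

def ordinal_distribution(isis, order=3):
    """Relative frequency of each order-D ordinal pattern in an ISI sequence."""
    n_windows = len(isis) - order + 1
    counts = Counter(ordinal_pattern(isis[i:i + order]) for i in range(n_windows))
    return {p: counts.get(p, 0) / n_windows for p in permutations(range(order))}

dist = ordinal_distribution([1.0, 2.0, 3.0, 2.0, 1.0])
# the three windows give patterns (0,1,2), (0,2,1), and (2,1,0), each with frequency 1/3
```

Comparing such pattern histograms across neuron classes and coupling types is the kind of symbolic comparison the abstract describes.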
Affiliation(s)
- C Estarellas
- Instituto de Física Interdisciplinar y Sistemas Complejos (IFISC, UIB-CSIC), Campus Universitat de les Illes Balears E-07122, Palma de Mallorca, Spain
- M Masoliver
- Departament de Física, Universitat Politècnica de Catalunya, Terrassa 08222, Spain
- C Masoller
- Departament de Física, Universitat Politècnica de Catalunya, Terrassa 08222, Spain
- Claudio R Mirasso
- Instituto de Física Interdisciplinar y Sistemas Complejos (IFISC, UIB-CSIC), Campus Universitat de les Illes Balears E-07122, Palma de Mallorca, Spain
8. Nishitani Y, Hosokawa C, Mizuno-Matsumoto Y, Miyoshi T, Tamura S. Learning process for identifying different types of communication via repetitive stimulation: feasibility study in a cultured neuronal network. AIMS Neurosci 2019; 6:240-249. PMID: 32341980; PMCID: PMC7179351; DOI: 10.3934/neuroscience.2019.4.240.
Abstract
It is well known that various types of information can be learned and memorized via repetitive training. In brain information science, it is very important to determine how neuronal networks comprising neurons with fluctuating characteristics reliably learn and memorize information. The aim of this study was to investigate the learning process in cultured neuronal networks and thereby address this question. Previously, we reported that the spikes resulting from stimulation of a specific neuron propagate as a cluster of excitation waves, termed spike wave propagation, in cultured neuronal networks. We also reported that these waves have individual spatiotemporal patterns that vary according to which neuron is stimulated. Therefore, different spike wave propagations can be identified via pattern analysis of spike trains at particular neurons. Here, we assessed repetitive stimulation using intervals of 0.5 and 1.5 ms and analyzed the relationship between the repetition of the stimulation and the identification of the different spike wave propagations. We show that the various spike wave propagations were identified more precisely after stimulation was repeated several times with an interval of 1.5 ms. These results suggest the existence of a learning process in neuronal networks that occurs via repetitive training at a suitable interval.
Affiliation(s)
- Yoshi Nishitani
- Department of Radiology, Graduate School of Medicine, Osaka University, Suita 565-0871, Japan
- Chie Hosokawa
- Graduate School of Science, Osaka City University, Osaka 558-8585, Japan
- Tomomitsu Miyoshi
- Department of Integrative Physiology, Graduate School of Medicine, Osaka University, Suita 565-0871, Japan
- Shinichi Tamura
- NBL Technovator Co., Ltd., 631 Shindachimakino, Sennan 590-0522, Japan
9.
Abstract
This work makes two contributions. First, we present a neural network model of associative memory that stores and retrieves sparse patterns of complex variables. This network can store analog information as fixed-point attractors in the complex domain; it is governed by an energy function and has increased memory capacity compared with earlier models. Second, we translate complex attractor networks into spiking networks, where the timing of a spike indicates the phase of a complex number. We show that complex fixed points correspond to stable periodic spike patterns, and we demonstrate that such networks can be constructed with resonate-and-fire or integrate-and-fire neurons using biologically plausible mechanisms and used for robust computations, such as image retrieval. Information coding by precise spike timing can be faster and more energy efficient than traditional rate coding. However, spike-timing codes are often brittle, which has limited their use in theoretical neuroscience and computing applications. Here, we propose a type of attractor neural network in complex state space and show how it can be leveraged to construct spiking neural networks with robust computational properties through a phase-to-timing mapping. Building on Hebbian neural associative memories, such as Hopfield networks, we first propose threshold phasor associative memory (TPAM) networks. Complex phasor patterns, whose components can assume continuous-valued phase angles and binary magnitudes, can be stored and retrieved as stable fixed points of the network dynamics. TPAM achieves high memory capacity when storing sparse phasor patterns, and we derive the energy function that governs its fixed-point attractor dynamics.
Second, we construct two spiking neural networks to approximate the complex algebraic computations of TPAM: a reductionist model with resonate-and-fire neurons, and a biologically plausible network of integrate-and-fire neurons with synaptic delays and recurrently connected inhibitory interneurons. The fixed points of TPAM correspond to stable periodic states of precisely timed spiking activity that are robust to perturbation. The link established between rhythmic firing patterns and complex attractor dynamics has implications for the interpretation of spike patterns seen in neuroscience and can serve as a framework for computation in emerging neuromorphic devices.
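The phase-coded attractor idea can be sketched in a few lines of linear algebra. The following toy model (our own illustration, not the paper's code or its thresholding rule) stores complex phasor patterns by a Hebbian outer product and cleans up a phase-noisy cue by repeatedly renormalizing active units to the unit circle:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, P = 200, 30, 2  # neurons, active units per pattern, stored patterns

# Sparse phasor patterns: K random units with random phases, the rest zero
patterns = np.zeros((P, N), dtype=complex)
for mu in range(P):
    idx = rng.choice(N, size=K, replace=False)
    patterns[mu, idx] = np.exp(1j * rng.uniform(0, 2 * np.pi, size=K))

# Hebbian outer-product weights with zero diagonal
W = patterns.T @ patterns.conj() / N
np.fill_diagonal(W, 0)

def retrieve(z, support, steps=10):
    """Iterate z <- W z, renormalizing phasors on the active set to magnitude 1."""
    for _ in range(steps):
        u = W @ z
        z = np.where(support, u / (np.abs(u) + 1e-12), 0)
    return z

target = patterns[0]
support = np.abs(target) > 0
cue = target * np.exp(1j * 0.8 * rng.standard_normal(N))  # heavy phase noise
overlap = lambda z: abs(np.vdot(target, z)) / K
recalled = retrieve(cue, support)
# overlap(recalled) should end up close to 1, well above overlap(cue)
```

In the paper's phase-to-timing mapping, each recovered phase would correspond to a spike time within a periodic cycle; this sketch only shows the attractor clean-up in the complex domain.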
10. Tamura S, Nishitani Y, Hosokawa C, Mizuno-Matsumoto Y. Asynchronous Multiplex Communication Channels in 2-D Neural Network With Fluctuating Characteristics. IEEE Trans Neural Netw Learn Syst 2019; 30:2336-2345. PMID: 30571647; DOI: 10.1109/tnnls.2018.2880565.
Abstract
Neurons behave like transistors but have fluctuating characteristics. In this paper, we show that several asynchronous multiplex communication channels can be established in a 2-D mesh neural network with randomly generated weights between eight neighbors. Neurons were simulated by integrate-and-fire neuron models without leakage and with fluctuating refractory period and output delay. If one of the transmitting neuron groups is stimulated, the signal propagates in the form of spike waves. The corresponding receiving neuron group is able to identify the signal after having learned to form an asynchronous multiplex communication channel. The channel is composed of many intermediate/interstitial neurons working as relays. Each neuron can work as an I/O element and as a relay, i.e., as a multiuse unit. Grouping and synchronous firing are often seen in natural neuronal networks and seem to be effective for stable, robust communication in conjunction with spatial multiplex communication. This communication pattern corresponds to our wet-lab experiments on cultured neuronal networks and is similar to sound identification by the ear and to mobile adaptive communication systems.
11. Grabowski F, Czyż P, Kochańczyk M, Lipniacki T. Limits to the rate of information transmission through the MAPK pathway. J R Soc Interface 2019; 16:20180792. PMID: 30836891; PMCID: PMC6451410; DOI: 10.1098/rsif.2018.0792.
Abstract
The two important signalling pathways NF-κB and ERK transmit merely about 1 bit of information about the level of extracellular stimulation. It is thus unclear how such systems can coordinate complex cell responses to external cues. We analyse information transmission in the MAPK/ERK pathway, which converts both constant and pulsatile EGF stimulation into pulses of ERK activity. Based on an experimentally verified computational model, we demonstrate that, when the input consists of sequences of EGF pulses, transmitted information increases nearly linearly with time. Thus, pulse-interval transcoding allows more information to be relayed than the amplitude-amplitude transcoding considered previously for the ERK and NF-κB pathways. Moreover, the information channel capacity C, or simply bitrate, is not limited by the bandwidth B = 1/τ, where τ ≈ 1 h is the relaxation time. Specifically, when the input is provided as sequences of short binary EGF pulses separated by intervals that are multiples of τ/n (but not shorter than τ), then for n = 2, C ≈ 1.39 bit h⁻¹, and for n = 4, C ≈ 1.86 bit h⁻¹. The capability to respond to random sequences of EGF pulses enables cells to propagate spontaneous ERK activity waves across tissue.
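The quoted capacities can be reproduced by counting admissible pulse sequences on a slot grid of width τ/n: a pulse must be followed by at least n−1 empty slots, so the number of admissible sequences obeys a(t) = a(t−1) + a(t−n), and the bitrate is the slot rate times log₂ of the largest root of xⁿ − xⁿ⁻¹ − 1 = 0. A short check (our own reconstruction of the arithmetic, not the authors' code):

```python
import numpy as np

def capacity_bits_per_hour(n, tau_hours=1.0):
    """Entropy rate of binary pulse trains on a grid of slots of width tau/n,
    with consecutive pulses at least tau apart (>= n-1 empty slots after each
    pulse).  Growth rate = largest root of x^n - x^(n-1) - 1 = 0."""
    coeffs = np.zeros(n + 1)
    coeffs[0], coeffs[1], coeffs[-1] = 1.0, -1.0, -1.0
    growth = max(abs(root) for root in np.roots(coeffs))
    return (n / tau_hours) * np.log2(growth)

print(round(capacity_bits_per_hour(2), 2))  # 1.39 (golden-ratio growth rate)
print(round(capacity_bits_per_hour(4), 2))  # 1.86
```

For n = 2 the recurrence is the Fibonacci one, so the growth rate is the golden ratio and C = 2·log₂(1.618…) ≈ 1.39 bit h⁻¹, matching the abstract's figure.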
Affiliation(s)
- Frederic Grabowski
- Faculty of Mathematics, Informatics and Mechanics, University of Warsaw, Warsaw, Poland
- Paweł Czyż
- Mathematical, Physical and Life Sciences Division, University of Oxford, Oxford, UK
- Marek Kochańczyk
- Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland
- Tomasz Lipniacki
- Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland
12. Energy expenditure computation of a single bursting neuron. Cogn Neurodyn 2018; 13:75-87. PMID: 30728872; PMCID: PMC6339863; DOI: 10.1007/s11571-018-9503-3.
Abstract
Brief bursts of high-frequency spikes are a common firing pattern of neurons. The cellular mechanisms of bursting and its biological significance remain a matter of debate. Focusing on the energy aspect, this paper proposes a neural energy calculation method based on the Chay model of bursting. The flow of ions across the membrane of the bursting neuron, with or without current stimulation, and the power with which it contributes to the change of the transmembrane electrical potential energy are analyzed in detail. We find that during the depolarization phase of spikes within a burst this power becomes negative, as was also found in previous research with another energy model. We also find that the neuron's energy consumption during bursting is minimal. In particular, in the spontaneous state without stimulation, the total energy consumption (2.152 × 10⁻⁷ J) during 30 s of bursting is very similar to the biological energy consumption (2.468 × 10⁻⁷ J) during the generation of a single action potential, as shown in Wang et al. (Neural Plast 2017). Our results suggest that this low energy consumption could simply be a consequence of the biophysics of burst generation, consistent with the principle of energy minimization. They also imply that neural energy plays a critical role in neural coding, opening a new avenue for research into a central challenge facing neuroscience today.
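One common bookkeeping for such calculations integrates each ionic current against its driving force over the voltage trace. A generic sketch (our own; the cited paper's Chay-model accounting differs in detail, so treat this only as the shape of the computation):

```python
import numpy as np

def ionic_energy(t, v, currents, reversal_potentials):
    """Energy dissipated per ionic species over a trace:
    E_x = ∫ I_x(t) · (V(t) − E_x) dt, via the trapezoidal rule.
    `currents` maps species name -> current array sampled at times t."""
    energies = {}
    for name, i in currents.items():
        integrand = i * (v - reversal_potentials[name])
        energies[name] = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t)))
    return energies

# Toy check: a constant unit current against a constant 100 (mV) driving
# force over 1 s integrates to 100 in the corresponding energy units.
t = np.linspace(0.0, 1.0, 1001)
v = np.zeros_like(t)
energy = ionic_energy(t, v, {"K": np.ones_like(t)}, {"K": -100.0})
```

With simulated Chay-model traces for V(t) and the individual ionic currents, the same integral applied per current yields the species-wise energy budget the abstract discusses.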
13. Zeng Y, Wang G, Xu B. A Basal Ganglia Network Centric Reinforcement Learning Model and Its Application in Unmanned Aerial Vehicle. IEEE Trans Cogn Dev Syst 2018. DOI: 10.1109/tcds.2017.2649564.
14. Nishitani Y, Hosokawa C, Mizuno-Matsumoto Y, Miyoshi T, Tamura S. Effect of correlating adjacent neurons for identifying communications: Feasibility experiment in a cultured neuronal network. AIMS Neurosci 2017; 5:18-31. PMID: 32341949; PMCID: PMC7181895; DOI: 10.3934/neuroscience.2018.1.18.
Abstract
Neuronal networks have fluctuating characteristics, unlike the stable characteristics seen in computers. The mechanisms that allow neuronal networks to communicate reliably and perform intelligible tasks remain unknown. Recently, in an attempt to resolve this issue, we showed that stimulated neurons communicate via spikes that propagate temporally in the form of spike trains, a phenomenon we named "spike wave propagation". In previous studies using neural networks cultured from rat hippocampal neurons, we found that correlating the spike trains of multiple neurons (e.g., three) helps identify the various spike wave propagations in a cultured neuronal network. Specifically, the number of classifiable neurons in the network increased when the spike trains of a given neuron were correlated with those of adjacent neurons. Although we previously obtained similar findings through stimulation, here we report these observations at a physiological level. Considering that each spike wave propagation corresponds to an individual communication, this suggests that correlating adjacent neurons improves the quality of communication classification in a neuronal network, much as a diversity antenna improves the quality of communication in artificial data communication systems.
Affiliation(s)
- Yoshi Nishitani
- Department of Radiology, Graduate School of Medicine, Osaka University, Suita 565-0871, Japan
- Chie Hosokawa
- Biomedical Research Institute and Advanced Photonics and Biosensing Open Innovation Laboratory, AIST, Ikeda, Osaka 563-8577, Japan
- Tomomitsu Miyoshi
- Department of Integrative Physiology, Graduate School of Medicine, Osaka University, Suita 565-0871, Japan
15. Yang Y, Mason AJ. Frequency Band Separability Feature Extraction Method With Weighted Haar Wavelet Implementation for Implantable Spike Sorting. IEEE Trans Neural Syst Rehabil Eng 2017; 25:530-538. DOI: 10.1109/tnsre.2016.2590560.
16. Yang Y, Mason AJ. Hardware Efficient Automatic Thresholding for NEO-Based Neural Spike Detection. IEEE Trans Biomed Eng 2017; 64:826-833. DOI: 10.1109/tbme.2016.2580319.
17. Carrillo-Medina JL, Latorre R. Implementing Signature Neural Networks with Spiking Neurons. Front Comput Neurosci 2016; 10:132. PMID: 28066221; PMCID: PMC5167754; DOI: 10.3389/fncom.2016.00132.
Abstract
Spiking Neural Networks constitute the most promising approach to developing realistic Artificial Neural Networks (ANNs). Unlike traditional firing-rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have uncovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings as novel elements of inspiration? This is an intriguing question for the research community, and the development of spiking ANNs that include novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work we adapt the core concepts of the recently proposed Signature Neural Network paradigm (i.e., neural signatures to identify each unit in the network, local information contextualization during processing, and multicoding strategies for information propagation regarding the origin and the content of the data) for use in a spiking neural network. To the best of our knowledge, none of these mechanisms has yet been used in the context of ANNs of spiking neurons. This paper provides a proof of concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the absence of inhibitory connections. These parameters also modulate the memory capabilities of the network. The dynamical modes observed in the different informational dimensions at a given moment are independent and depend only on the parameters shaping the information processing in that dimension. In view of these results, we argue that plasticity mechanisms inside individual cells and multicoding strategies can provide additional computational properties to spiking neural networks, which could enhance their capacity and performance in a wide variety of real-world tasks.
Affiliation(s)
- José Luis Carrillo-Medina
- Departamento de Eléctrica y Electrónica, Universidad de las Fuerzas Armadas - ESPE Sangolquí, Ecuador
- Roberto Latorre
- Grupo de Neurocomputación Biológica, Dpto. de Ingeniería Informática, Escuela Politécnica Superior, Universidad Autónoma de Madrid Madrid, Spain
18
Spike Code Flow in Cultured Neuronal Networks. Comput Intell Neurosci 2016; 2016:7267691. [PMID: 27217825 PMCID: PMC4863084 DOI: 10.1155/2016/7267691] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/01/2015] [Revised: 09/19/2015] [Accepted: 10/08/2015] [Indexed: 11/18/2022]
Abstract
We observed spike trains produced by one-shot electrical stimulation with 8 × 8 multielectrodes in cultured neuronal networks. Each electrode accepted spikes from several neurons. We extracted short codes from the spike trains and obtained a code spectrum with a nominal time accuracy of 1%. We then constructed code flow maps as movies of the electrode array to observe the flow of the codes “1101” and “1011,” typical pseudorandom sequences of the kind often encountered in the literature and in our experiments. The codes seemed to flow from one electrode to a neighboring one and maintained their shape to some extent. To quantify the flow, we calculated the “maximum cross-correlations” among neighboring electrodes to find the direction of maximum flow of the codes with lengths less than 8. Normalized maximum cross-correlations were almost constant irrespective of the code. Furthermore, when the spike trains were shuffled, either in interval order or across electrodes, these correlations became significantly smaller. Thus, the analysis suggested that local codes of approximately constant shape propagated and conveyed information across the network. Hence, the codes can serve as visible and trackable marks of propagating spike waves as well as a means of evaluating information flow in the neuronal network.
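The core of such a flow measurement can be sketched as follows (a simplified reconstruction, not the authors' exact pipeline: the bin width, the lag convention, and the unnormalized overlap score are our assumptions). Spike trains are binarized into code sequences, and the lag of the peak cross-correlation between two electrodes indicates which one leads:

```python
import numpy as np

def binarize(spike_times, t_max, bin_width):
    """Discretize a spike train into a 0/1 code sequence (one bin per symbol)."""
    code = np.zeros(int(np.ceil(t_max / bin_width)), dtype=int)
    for t in spike_times:
        code[min(int(t // bin_width), len(code) - 1)] = 1
    return code

def max_xcorr_lag(a, b, max_lag):
    """Lag at which the cross-correlation between two electrode codes peaks.
    A positive lag means the code on `a` leads the code on `b`."""
    best_score, best_lag = -1, 0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            score = int(np.dot(a[:len(a) - lag], b[lag:]))
        else:
            score = int(np.dot(a[-lag:], b[:len(b) + lag]))
        if score > best_score:
            best_score, best_lag = score, lag
    return best_lag, best_score

a = binarize([0.5, 1.5, 3.5, 6.5], t_max=10, bin_width=1.0)
b = np.roll(a, 2)                    # electrode B sees the same code 2 bins later
lag, score = max_xcorr_lag(a, b, max_lag=4)   # lag == 2: the code flows A -> B
```

Shuffling intervals or electrodes, as in the paper, would destroy the alignment and drive the peak score toward chance level.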
19
Simulation of Code Spectrum and Code Flow of Cultured Neuronal Networks. Comput Intell Neurosci 2016; 2016:7186092. [PMID: 27239189 PMCID: PMC4863095 DOI: 10.1155/2016/7186092] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/01/2015] [Revised: 10/01/2015] [Accepted: 10/25/2015] [Indexed: 11/18/2022]
Abstract
It has been shown that, in cultured neuronal networks on a multielectrode array, pseudorandom-like sequences (codes) are detected, and they flow with some spatial decay constant. Each cultured neuronal network is characterized by a specific spectrum curve. That is, we may consider the spectrum curve as a “signature” of its associated neuronal network, dependent on the characteristics of the neurons and the network configuration, including the weight distribution. In the present study, we used an integrate-and-fire model of neurons with intrinsic and instantaneous fluctuations of characteristics to simulate a code spectrum from multielectrodes on a 2D mesh neural network. We showed that it is possible to estimate characteristics of the neurons, such as the distribution of the number of neurons around each electrode and their refractory periods. Although this is an inverse problem whose solutions are not theoretically guaranteed, the estimated parameters seem consistent with those of real neurons. That is, the proposed neural network model may adequately reflect the behavior of a cultured neuronal network. Furthermore, we discuss the prospect that code analysis will provide a basis for understanding communication within a neural network and, in turn, a basis of natural intelligence.
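A minimal version of such a model neuron, with a per-step threshold fluctuation standing in for the paper's "intrinsic and instantaneous fluctuations of characteristics" (all constants illustrative, not fitted to any culture):

```python
import random

def simulate_lif(i_ext, steps, dt=1.0, tau=20.0, v_th=1.0,
                 refrac=3, jitter=0.05, seed=1):
    """Leaky integrate-and-fire neuron whose firing threshold fluctuates
    slightly at every step; returns the spike times (in steps)."""
    rng = random.Random(seed)
    v, ref_left, spikes = 0.0, 0, []
    for t in range(steps):
        if ref_left > 0:                 # absolute refractory period
            ref_left -= 1
            continue
        v += dt * (-v / tau + i_ext)     # leaky integration
        if v >= v_th + rng.gauss(0.0, jitter):
            spikes.append(t)
            v, ref_left = 0.0, refrac    # reset, enter refractoriness
    return spikes

spikes = simulate_lif(i_ext=0.2, steps=200)
```

Arranging such units on a 2D mesh and binning their output, as in the paper, yields the code sequences whose spectrum can then be compared against culture recordings.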
20
Tamura S, Nishitani Y, Hosokawa C. Feasibility of Multiplex Communication in a 2D Mesh Asynchronous Neural Network with Fluctuations. AIMS Neurosci 2016. [DOI: 10.3934/neuroscience.2016.4.385] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
21
Cessac B, Cofré R. Spike train statistics and Gibbs distributions. J Physiol Paris 2013; 107:360-8. [PMID: 23501168 DOI: 10.1016/j.jphysparis.2013.03.001] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2012] [Revised: 02/08/2013] [Accepted: 03/04/2013] [Indexed: 11/27/2022]
Abstract
This paper is based on a lecture given at the LACONEU summer school, Valparaiso, January 2012. We introduce Gibbs distributions in a general setting, including non-stationary dynamics, and then present three examples of such Gibbs distributions in the context of neural network spike train statistics: (i) a maximum entropy model with spatio-temporal constraints; (ii) generalized linear models; and (iii) a conductance-based integrate-and-fire model with chemical synapses and gap junctions.
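The simplest special case of (i) — rate constraints only, no interaction terms — can be written down exactly, since the maximum-entropy Gibbs distribution then factorizes (a textbook example, not one of the paper's spatio-temporal models; the target rates below are invented):

```python
import itertools
import math

# Target single-neuron firing probabilities (assumed "data").
rates = [0.2, 0.5, 0.8]
# Lagrange multipliers for rate-only constraints: h_i = log(p_i / (1 - p_i)).
h = [math.log(p / (1 - p)) for p in rates]

patterns = list(itertools.product([0, 1], repeat=len(rates)))
weight = lambda w: math.exp(sum(hi * wi for hi, wi in zip(h, w)))
Z = sum(weight(w) for w in patterns)          # partition function
P = {w: weight(w) / Z for w in patterns}      # Gibbs distribution

# The fitted distribution reproduces the constrained observables exactly.
fitted = [sum(P[w] * w[i] for w in patterns) for i in range(len(rates))]
```

Adding pairwise or spatio-temporal constraints keeps the same exponential form but makes Z intractable to enumerate, which is where the paper's general machinery comes in.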
Affiliation(s)
- B Cessac
- NeuroMathComp team (INRIA, UNSA LJAD) 2004 Route des Lucioles, 06902 Sophia-Antipolis, France.
22
Dopamine modulation of Ih improves temporal fidelity of spike propagation in an unmyelinated axon. J Neurosci 2012; 32:5106-19. [PMID: 22496556 DOI: 10.1523/jneurosci.6320-11.2012] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
We studied how conduction delays of action potentials in an unmyelinated axon depended on the history of activity and how this dependence was changed by the neuromodulator dopamine (DA). The pyloric dilator axons of the stomatogastric nervous system in the lobster, Homarus americanus, exhibited substantial activity-dependent hyperpolarization and changes in spike shape during repetitive activation. The conduction delays varied by several milliseconds per centimeter, and, during activation with realistic burst patterns or Poisson-like patterns, changes in delay occurred over multiple timescales. The mean delay increased, whereas the resting membrane potential hyperpolarized with a time constant of several minutes. Concomitantly with the mean delay, the variability of delay also increased. The variability of delay was not a linear or monotonic function of instantaneous spike frequency or spike shape parameters, and the relationship between these parameters changed with the increase in mean delay. Hyperpolarization was counteracted by a hyperpolarization-activated inward current (I(h)), and the magnitude of I(h) critically determined the temporal fidelity of spike propagation. Pharmacological block of I(h) increased the change in delay and the variability of delay, and increasing I(h) by application of DA diminished both. Consequently, the temporal fidelity of pattern propagation was substantially improved in DA. Standard measurements of changes in excitability or delay with paired stimuli or tonic stimulation failed to capture the dynamics of spike conduction. These results indicate that spike conduction can be extremely sensitive to the history of axonal activity and to the presence of neuromodulators, with potentially important consequences for temporal coding.
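The qualitative interplay described above can be captured in a toy model (our own construction with illustrative constants, not the authors' biophysical model): each spike adds to a slowly decaying activity trace that hyperpolarizes the axon and slows conduction, while a larger h-current conductance `g_h` counteracts the accumulated effect:

```python
import math

def conduction_delays(spike_times, g_h, base_delay=3.0, tau=5000.0, gain=0.02):
    """Per-spike conduction delays (ms) for a history-dependent axon:
    `trace` accumulates with each spike and decays slowly (tau in ms);
    the h-current conductance g_h shunts its effect on the delay."""
    trace, last_t, delays = 0.0, None, []
    for t in spike_times:
        if last_t is not None:
            trace *= math.exp(-(t - last_t) / tau)
        delays.append(base_delay + gain * trace / (1.0 + g_h))
        trace += 1.0
        last_t = t
    return delays

spikes = [i * 10.0 for i in range(200)]     # tonic 100 Hz train
spread_control = (max(conduction_delays(spikes, g_h=0.0))
                  - min(conduction_delays(spikes, g_h=0.0)))
spread_dopamine = (max(conduction_delays(spikes, g_h=5.0))
                   - min(conduction_delays(spikes, g_h=5.0)))
```

In this caricature, increasing `g_h` (as dopamine does in the paper) shrinks both the drift and the spread of delays, i.e., improves temporal fidelity.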
23
Ranhel J. Neural assembly computing. IEEE Trans Neural Netw Learn Syst 2012; 23:916-927. [PMID: 24806763 DOI: 10.1109/tnnls.2012.2190421] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
Spiking neurons can realize several computational operations when firing cooperatively. This is a prevalent notion, although the mechanisms are not yet understood. A way by which neural assemblies compute is proposed in this paper. It is shown how neural coalitions represent things (and world states), memorize them, and control their hierarchical relations in order to perform algorithms. It is described how neural groups perform statistic logic functions as they form assemblies. Neural coalitions can reverberate, becoming bistable loops. Such bistable neural assemblies become short- or long-term memories that represent the event that triggers them. In addition, assemblies can branch and dismantle other neural groups generating new events that trigger other coalitions. Hence, such capabilities and the interaction among assemblies allow neural networks to create and control hierarchical cascades of causal activities, giving rise to parallel algorithms. Computing and algorithms are used here as in a nonstandard computation approach. In this sense, neural assembly computing (NAC) can be seen as a new class of spiking neural network machines. NAC can explain the following points: 1) how neuron groups represent things and states; 2) how they retain binary states in memories that do not require any plasticity mechanism; and 3) how branching, disbanding, and interaction among assemblies may result in algorithms and behavioral responses. Simulations were carried out and the results are in agreement with the hypothesis presented. A MATLAB code is available as a supplementary material.
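The central memory mechanism, a reverberating loop that holds a bit without any synaptic plasticity, can be sketched with two threshold units wired in a cycle (our minimal abstraction of an assembly, not the paper's spiking simulation):

```python
def step(state, inp=(0.0, 0.0), w=1.0, theta=0.5):
    """One synchronous update of two threshold units wired A -> B -> A.
    Once activity enters the loop it reverberates indefinitely."""
    a, b = state
    return (int(w * b + inp[0] > theta), int(w * a + inp[1] > theta))

memory = (0, 0)
memory = step(memory, inp=(1.0, 0.0))    # brief trigger writes the bit
for _ in range(50):                      # stimulus gone: the loop sustains it
    memory = step(memory)

silent = (0, 0)
for _ in range(50):                      # never triggered: stays silent
    silent = step(silent)
```

The triggered loop keeps one unit active at every step while the untriggered one stays at rest — a bistable, plasticity-free 1-bit memory; dismantling it would amount to an inhibitory event that silences the loop.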
24
Yanchuk S, Perlikowski P, Popovych OV, Tass PA. Variability of spatio-temporal patterns in non-homogeneous rings of spiking neurons. Chaos 2011; 21:047511. [PMID: 22225385 DOI: 10.1063/1.3665200] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
We show that a ring of unidirectionally delay-coupled spiking neurons may possess a multitude of stable spiking patterns and provide a constructive algorithm for generating a desired spiking pattern. More specifically, for a given time-periodic pattern, in which each neuron fires once within the pattern period at a predefined time moment, we provide the coupling delays and/or coupling strengths leading to this particular pattern. The considered homogeneous networks demonstrate a great multistability of various travelling time- and space-periodic waves, which can propagate either along the direction of coupling or in the opposite direction. Such multistability significantly enhances the variability of possible spatio-temporal patterns and potentially increases the coding capability of oscillatory neuronal loops. We illustrate our results using FitzHugh-Nagumo neurons interacting via excitatory chemical synapses as well as limit-cycle oscillators.
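The constructive idea can be illustrated for an idealized ring in which each neuron fires exactly when the spike from its predecessor arrives: the coupling delay must then equal the desired firing-time difference modulo the pattern period (the period and target times below are made up; the paper's algorithm handles real neuron dynamics rather than this instantaneous relay):

```python
T = 100.0                                  # pattern period (assumed)
targets = [0.0, 15.0, 40.0, 55.0, 90.0]    # desired firing time of each neuron
N = len(targets)

# Choose each coupling delay so that the spike from neuron i arrives
# exactly when neuron i+1 is supposed to fire.
delays = [(targets[(i + 1) % N] - targets[i]) % T for i in range(N)]

# Verify by propagating a single spike around the ring.
t, fired = targets[0], [targets[0]]
for i in range(N - 1):
    t = (t + delays[i]) % T
    fired.append(t)
```

Note that the delays necessarily sum to a multiple of the period, which is the self-consistency condition for the pattern to recur every `T`.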
Affiliation(s)
- Serhiy Yanchuk
- Institute of Mathematics, Humboldt University of Berlin, 10099 Berlin, Germany
25
Popovych OV, Yanchuk S, Tass PA. Delay- and coupling-induced firing patterns in oscillatory neural loops. Phys Rev Lett 2011; 107:228102. [PMID: 22182043 DOI: 10.1103/physrevlett.107.228102] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/15/2011] [Indexed: 05/31/2023]
Abstract
For a feedforward loop of oscillatory Hodgkin-Huxley neurons interacting via excitatory chemical synapses, we show that a great variety of spatiotemporal periodic firing patterns can be encoded by properly chosen communication delays and synaptic weights, which contributes to the concept of temporal coding by spikes. These patterns can be obtained by a modulation of the multiple coexisting stable in-phase synchronized states or traveling waves propagating along or against the direction of coupling. We derive explicit conditions for the network parameters allowing us to achieve a desired pattern. Interestingly, whereas the delays directly affect the time differences between spikes of interacting neurons, the synaptic weights control the phase differences. Our results show that already such a simple neural circuit may unfold an impressive spike coding capability.
Affiliation(s)
- Oleksandr V Popovych
- Institute of Neuroscience and Medicine-Neuromodulation (INM-7), Research Center Jülich, 52425 Jülich, Germany
26
Taouali W, Viéville T, Rougier NP, Alexandre F. No clock to rule them all. J Physiol Paris 2011; 105:83-90. [PMID: 21945195 DOI: 10.1016/j.jphysparis.2011.08.005] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2011] [Revised: 08/01/2011] [Accepted: 08/31/2011] [Indexed: 11/19/2022]
Abstract
This article introduces general concepts and definitions related to the notion of asynchronous computation in the framework of artificial neural networks. Using dynamic field theory as an illustrative example, we explain why one may want to perform such asynchronous computation and how one can implement it, since this computational scheme has several consequences for both the trajectories and the stability of the whole system. After giving an unequivocal definition of asynchronous computation, we present a few practically usable methods and quantitative bounds that can guarantee most of the mesoscopic properties of the system.
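Asynchronous evaluation in this sense means updating one randomly chosen unit at a time, with no global clock, each update seeing the freshest state of the others. For a contracting system the trajectory still reaches the same fixed point as a synchronous sweep (a minimal sketch with an invented two-unit linear network, not the article's dynamic field model):

```python
import random

def async_relax(x0, w, b, iters=200, seed=0):
    """Asynchronous update scheme: one randomly chosen unit per step."""
    rng = random.Random(seed)
    x = list(x0)
    for _ in range(iters):
        i = rng.randrange(len(x))                       # no global clock
        x[i] = b[i] + sum(w[i][j] * x[j] for j in range(len(x)))
    return x

# Weakly coupled two-unit system x_i = 1 + 0.1 * x_other: the coupling
# matrix has spectral radius 0.1 < 1, so updates contract toward the
# unique fixed point x_i = 1 / 0.9 regardless of update order.
w = [[0.0, 0.1], [0.1, 0.0]]
b = [1.0, 1.0]
x = async_relax([0.0, 0.0], w, b)
```

Contraction is exactly the kind of quantitative condition the article formalizes: when it fails, asynchronous and synchronous schedules can diverge to different behaviors.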
Affiliation(s)
- Wahiba Taouali
- Lorraine Laboratory of IT Research and its Applications, Mixed Research Unit 7053, National Center for Scientific Research, Campus Scientifique, Vandoeuvre-lès-Nancy Cedex, France.
27
Bucher D, Goaillard JM. Beyond faithful conduction: short-term dynamics, neuromodulation, and long-term regulation of spike propagation in the axon. Prog Neurobiol 2011; 94:307-46. [PMID: 21708220 PMCID: PMC3156869 DOI: 10.1016/j.pneurobio.2011.06.001] [Citation(s) in RCA: 120] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2011] [Revised: 05/27/2011] [Accepted: 06/07/2011] [Indexed: 12/13/2022]
Abstract
Most spiking neurons are divided into functional compartments: a dendritic input region, a soma, a site of action potential initiation, an axon trunk and its collaterals for propagation of action potentials, and distal arborizations and terminals carrying the output synapses. The axon trunk and lower order branches are probably the most neglected and are often assumed to do nothing more than faithfully conducting action potentials. Nevertheless, there are numerous reports of complex membrane properties in non-synaptic axonal regions, owing to the presence of a multitude of different ion channels. Many different types of sodium and potassium channels have been described in axons, as well as calcium transients and hyperpolarization-activated inward currents. The complex time- and voltage-dependence resulting from the properties of ion channels can lead to activity-dependent changes in spike shape and resting potential, affecting the temporal fidelity of spike conduction. Neural coding can be altered by activity-dependent changes in conduction velocity, spike failures, and ectopic spike initiation. This is true under normal physiological conditions, and relevant for a number of neuropathies that lead to abnormal excitability. In addition, a growing number of studies show that the axon trunk can express receptors to glutamate, GABA, acetylcholine or biogenic amines, changing the relative contribution of some channels to axonal excitability and therefore rendering the contribution of this compartment to neural coding conditional on the presence of neuromodulators. Long-term regulatory processes, both during development and in the context of activity-dependent plasticity may also affect axonal properties to an underappreciated extent.
Affiliation(s)
- Dirk Bucher
- The Whitney Laboratory and Department of Neuroscience, University of Florida, St. Augustine, FL 32080, USA.
28
Li Z, Ouyang G, Li D, Li X. Characterization of the causality between spike trains with permutation conditional mutual information. Phys Rev E 2011; 84:021929. [PMID: 21929040 DOI: 10.1103/physreve.84.021929] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/17/2011] [Revised: 07/12/2011] [Indexed: 05/31/2023]
Abstract
Uncovering the causal relationship between spike train recordings from different neurons is a key issue for understanding neural coding. This paper presents a method, called permutation conditional mutual information (PCMI), for characterizing the causality between a pair of neurons. The performance of this method is demonstrated with spike trains generated by a Poisson point process model and the Izhikevich neuronal model, including estimation of the directionality index and detection of the temporal dynamics of the causal link. Simulations show that the PCMI method is superior to the transfer entropy and causal entropy methods at identifying the coupling direction between spike trains. The advantages of PCMI are twofold: it can estimate the directionality index under weak coupling, and it is robust to missing and extra spikes.
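The "permutation" part of the method symbolizes each series by the ordinal pattern of short windows, and information measures are then computed on the symbol streams. A sketch of that building block (the full PCMI additionally conditions on the past of the target train, which is omitted here; the example series is invented):

```python
import math
from collections import Counter

def ordinal_symbols(x, order=3):
    """Replace each length-`order` window by its permutation pattern."""
    return [tuple(sorted(range(order), key=lambda k: x[i + k]))
            for i in range(len(x) - order + 1)]

def mutual_information(sx, sy):
    """Plug-in mutual information (bits) between two symbol streams."""
    n = len(sx)
    px, py, pxy = Counter(sx), Counter(sy), Counter(zip(sx, sy))
    return sum((c / n) * math.log2(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

x = [3, 1, 2, 5, 4, 6, 0, 7]                 # e.g. inter-spike intervals
sx = ordinal_symbols(x)
mi_self = mutual_information(sx, sx)         # self-information: H(sx) > 0
mi_flat = mutual_information(sx, [sx[0]] * len(sx))   # constant stream: 0 bits
```

Working with ordinal patterns rather than raw values is what gives the method its robustness to amplitude noise and to a few missing or extra spikes.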
Affiliation(s)
- Zhaohui Li
- Institute of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, People's Republic of China