1.
Vidal-Saez MS, Vilarroya O, Garcia-Ojalvo J. Biological computation through recurrence. Biochem Biophys Res Commun 2024; 728:150301. PMID: 38971000. DOI: 10.1016/j.bbrc.2024.150301.
Abstract
One of the defining features of living systems is their adaptability to changing environmental conditions. This requires organisms to extract temporal and spatial features of their environment, and use that information to compute the appropriate response. In the last two decades, a growing body of work, mainly coming from the machine learning and computational neuroscience fields, has shown that such complex information processing can be performed by recurrent networks. Temporal computations arise in these networks through the interplay between the external stimuli and the network's internal state. In this article we review our current understanding of how recurrent networks can be used by biological systems, from cells to brains, for complex information processing. Rather than focusing on sophisticated, artificial recurrent architectures such as long short-term memory (LSTM) networks, here we concentrate on simpler network structures and learning algorithms that can be expected to have been found by evolution. We also review studies showing evidence of naturally occurring recurrent networks in living organisms. Lastly, we discuss some relevant evolutionary aspects concerning the emergence of this natural computation paradigm.
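As an illustrative aside (not taken from the paper): the interplay between external stimuli and internal state described in this abstract can be sketched with a minimal random recurrent network. All choices below (network size, spectral radius, input weights) are arbitrary illustration parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal "reservoir": N units with fixed random recurrent weights,
# rescaled so the dynamics are stable (spectral radius below 1).
N = 100
W = rng.normal(0, 1.0 / np.sqrt(N), (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.normal(0, 1.0, N)

def run(inputs):
    """Drive the network with a scalar input sequence; return the state history."""
    x = np.zeros(N)
    states = []
    for u in inputs:
        x = np.tanh(W @ x + w_in * u)  # present input mixed with internal history
        states.append(x.copy())
    return np.array(states)

# Two input sequences that differ only in their *past* leave the network in
# different present states: the state carries temporal context.
s1 = run([1.0, 0.0, 0.0, 0.0])
s2 = run([-1.0, 0.0, 0.0, 0.0])
print(np.linalg.norm(s1[-1] - s2[-1]))  # nonzero: the past is still encoded
```

This is the basic mechanism the review refers to: temporal computation emerges even with fixed, random recurrent weights, before any learning rule is applied.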
Affiliation(s)
- María Sol Vidal-Saez
- Department of Medicine and Life Sciences, Universitat Pompeu Fabra, Dr Aiguader 88, 08003 Barcelona, Spain
- Oscar Vilarroya
- Department of Psychiatry and Legal Medicine, Universitat Autònoma de Barcelona, 08193 Cerdanyola del Vallès, Spain; Hospital del Mar Medical Research Institute (IMIM), Dr Aiguader 88, 08003 Barcelona, Spain
- Jordi Garcia-Ojalvo
- Department of Medicine and Life Sciences, Universitat Pompeu Fabra, Dr Aiguader 88, 08003 Barcelona, Spain
2.
Chen Z, Liang Q, Wei Z, Chen X, Shi Q, Yu Z, Sun T. An Overview of In Vitro Biological Neural Networks for Robot Intelligence. Cyborg Bionic Syst 2023; 4:0001. PMID: 37040493. PMCID: PMC10076061. DOI: 10.34133/cbsystems.0001.
Abstract
In vitro biological neural networks (BNNs) interconnected with robots, so-called BNN-based neurorobotic systems, can interact with the external world and thereby exhibit preliminary intelligent behaviors, including learning, memory, and robot control. This work aims to provide a comprehensive overview of the intelligent behaviors presented by BNN-based neurorobotic systems, with a particular focus on those related to robot intelligence. We first introduce the biological background needed to understand the 2 key characteristics of BNNs: nonlinear computing capacity and network plasticity. Then, we describe the typical architecture of BNN-based neurorobotic systems and outline the mainstream techniques for realizing such an architecture from 2 aspects: from robots to BNNs and from BNNs to robots. Next, we divide the intelligent behaviors into 2 groups according to whether they rely solely on the computing capacity (computing capacity-dependent) or also depend on network plasticity (network plasticity-dependent), and discuss each in turn, focusing on those related to the realization of robot intelligence. Finally, the development trends and challenges of BNN-based neurorobotic systems are discussed.
Affiliation(s)
- Zhe Chen
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Key Laboratory of Biomimetic Robots and Systems (Beijing Institute of Technology), Ministry of Education, Beijing 100081, China
- Advanced Innovation Center for Intelligent Robots and Systems, Beijing Institute of Technology, Beijing 100081, China
- Qian Liang
- Key Laboratory of Biomimetic Robots and Systems (Beijing Institute of Technology), Ministry of Education, Beijing 100081, China
- Advanced Innovation Center for Intelligent Robots and Systems, Beijing Institute of Technology, Beijing 100081, China
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
- Zihou Wei
- Key Laboratory of Biomimetic Robots and Systems (Beijing Institute of Technology), Ministry of Education, Beijing 100081, China
- Advanced Innovation Center for Intelligent Robots and Systems, Beijing Institute of Technology, Beijing 100081, China
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
- Xie Chen
- Key Laboratory of Biomimetic Robots and Systems (Beijing Institute of Technology), Ministry of Education, Beijing 100081, China
- Advanced Innovation Center for Intelligent Robots and Systems, Beijing Institute of Technology, Beijing 100081, China
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
- Qing Shi
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Key Laboratory of Biomimetic Robots and Systems (Beijing Institute of Technology), Ministry of Education, Beijing 100081, China
- Advanced Innovation Center for Intelligent Robots and Systems, Beijing Institute of Technology, Beijing 100081, China
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
- Zhiqiang Yu
- Key Laboratory of Biomimetic Robots and Systems (Beijing Institute of Technology), Ministry of Education, Beijing 100081, China
- Advanced Innovation Center for Intelligent Robots and Systems, Beijing Institute of Technology, Beijing 100081, China
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
- Tao Sun
- Key Laboratory of Biomimetic Robots and Systems (Beijing Institute of Technology), Ministry of Education, Beijing 100081, China
- Advanced Innovation Center for Intelligent Robots and Systems, Beijing Institute of Technology, Beijing 100081, China
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
3.
Leveraging plant physiological dynamics using physical reservoir computing. Sci Rep 2022; 12:12594. PMID: 35869238. PMCID: PMC9307625. DOI: 10.1038/s41598-022-16874-0.
Abstract
Plants are complex organisms subject to variable environmental conditions, which influence their physiology and phenotype dynamically. We propose to interpret plants as reservoirs in physical reservoir computing. The physical reservoir computing paradigm originates from computer science; instead of relying on Boolean circuits to perform computations, any substrate that exhibits complex non-linear and temporal dynamics can serve as a computing element. Here, we present the first application of physical reservoir computing with plants. In addition to investigating classical benchmark tasks, we show that Fragaria × ananassa (strawberry) plants can solve environmental and eco-physiological tasks using only eight leaf thickness sensors. The results indicate that plants are not suitable for general-purpose computation but are well suited for eco-physiological tasks such as predicting photosynthetic and transpiration rates. Having the means to investigate information processing by plants improves the quantification and understanding of integrative plant responses to dynamic changes in their environment. This first demonstration of physical reservoir computing with plants is key for transitioning towards a holistic view of phenotyping and early stress detection in precision agriculture applications, since physical reservoir computing enables us to analyse plant responses in a general way: environmental changes are processed by plants to optimise their phenotype.
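The reservoir computing scheme this paper builds on trains only a linear readout on top of the substrate's dynamics. A self-contained sketch of that readout step, with synthetic signals standing in for the eight leaf-thickness channels (the input, channel dynamics, and target below are all invented for illustration, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for the plant: 8 channels, each a fixed nonlinear,
# history-dependent function of an environmental input u(t).
T = 500
u = np.sin(np.linspace(0, 20, T)) + 0.1 * rng.normal(size=T)
states = np.stack(
    [np.tanh(np.convolve(u, np.exp(-np.arange(30) / (3.0 * (k + 1))), mode="full")[:T])
     for k in range(8)],
    axis=1,
)

# Synthetic target: a quantity that depends on the recent input history.
y = np.convolve(u, np.ones(5) / 5, mode="full")[:T]

# Reservoir computing trains ONLY a linear readout over the sensor states.
X = np.hstack([states, np.ones((T, 1))])  # bias column
ridge = 1e-3
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ y)
nmse = np.mean((X @ W_out - y) ** 2) / np.var(y)
print(nmse)  # < 1 means the readout beats a constant predictor
```

The substrate itself is never trained; only `W_out` is fit, which is what makes a physical system such as a plant usable as the computing element.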
4.
Bayat FK, Alp Mİ, Bostan S, Gülçür HÖ, Öztürk G, Güveniş A. An improved platform for cultured neuronal network electrophysiology: multichannel optogenetics integrated with MEAs. Eur Biophys J 2022; 51:503-514. PMID: 35930029. DOI: 10.1007/s00249-022-01613-0.
Abstract
Cultured neuronal networks (CNNs) are powerful tools for studying how neuronal representation and adaptation emerge in networks of controlled populations of neurons. To ensure the interaction of a CNN and an artificial setting, reliable operation in both open and closed loops should be provided. In this study, we integrated optogenetic stimulation with microelectrode array (MEA) recordings using a digital micromirror device and developed an improved research tool with a 64-channel interface for neuronal network control and data acquisition. We determined the ideal stimulation parameters for our configuration, including light intensity, frequency, and duty cycle, resulting in robust and reproducible neuronal responses. We also demonstrated both open- and closed-loop configurations in the new platform involving multiple bidirectional channels. Unlike previous approaches that combined optogenetic stimulation and MEA recordings, we did not use binary grid patterns, but assigned an adjustable-size, non-binary optical spot to each electrode. This approach allowed simultaneous use of multiple input-output channels and facilitated adaptation of the stimulation parameters. The result is a 64-channel interface in which each channel can be controlled individually in both directions simultaneously, without interference or interruption. The presented setup meets the requirements of research in neuronal plasticity, network encoding and representation, and closed-loop control of firing rate and synchronization. Researchers who develop closed-loop control techniques and adaptive stimulation strategies for network activity will benefit greatly from this setup.
Affiliation(s)
- F Kemal Bayat
- Institute of Biomedical Engineering, Bogazici University, Istanbul, Turkey
- M İkbal Alp
- Regenerative and Restorative Medicine Research Center (REMER), Research Institute for Health Sciences and Technologies (SABITA), Istanbul Medipol University, Istanbul, Turkey
- Sevginur Bostan
- Regenerative and Restorative Medicine Research Center (REMER), Research Institute for Health Sciences and Technologies (SABITA), Istanbul Medipol University, Istanbul, Turkey
- H Özcan Gülçür
- Institute of Biomedical Engineering, Bogazici University, Istanbul, Turkey
- Gürkan Öztürk
- Regenerative and Restorative Medicine Research Center (REMER), Research Institute for Health Sciences and Technologies (SABITA), Istanbul Medipol University, Istanbul, Turkey
- Albert Güveniş
- Institute of Biomedical Engineering, Bogazici University, Istanbul, Turkey
5.
Colombi I, Nieus T, Massimini M, Chiappalone M. Spontaneous and Perturbational Complexity in Cortical Cultures. Brain Sci 2021; 11:1453. PMID: 34827452. PMCID: PMC8615728. DOI: 10.3390/brainsci11111453.
Abstract
Dissociated cortical neurons in vitro display spontaneously synchronized, low-frequency firing patterns, which can resemble the slow wave oscillations characterizing sleep in vivo. Experiments in humans, rodents, and cortical slices have shown that awakening or the administration of activating neuromodulators decrease slow waves, while increasing the spatio-temporal complexity of responses to perturbations. In this study, we attempted to replicate those findings using in vitro cortical cultures coupled with micro-electrode arrays and chemically treated with carbachol (CCh), to modulate sleep-like activity and suppress slow oscillations. We adapted metrics such as neural complexity (NC) and the perturbational complexity index (PCI), typically employed in animal and human brain studies, to quantify complexity in simplified, unstructured networks, both during resting state and in response to electrical stimulation. After CCh administration, we found a decrease in the amplitude of the initial response and a marked enhancement of the complexity during spontaneous activity. Crucially, unlike in cortical slices and intact brains, PCI in cortical cultures displayed only a moderate increase. This dissociation suggests that PCI, a measure of the complexity of causal interactions, requires more than activating neuromodulation and that additional factors, such as an appropriate circuit architecture, may be necessary. Exploring more structured in vitro networks, characterized by the presence of strong lateral connections, recurrent excitation, and feedback loops, may thus help to identify the features that are more relevant to support causal complexity.
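An illustrative aside: in the literature, the perturbational complexity index is derived from the Lempel-Ziv compressibility of binarized evoked activity. Below is a generic sketch of the LZ76 phrase-counting step on binary sequences (a standard implementation, not the authors' pipeline; the two example sequences are synthetic):

```python
import random

def lz76(s):
    """Number of phrases in the Lempel-Ziv (1976) parsing of a binary string."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # grow the current phrase while it already occurs earlier
        # (overlap with the phrase itself is allowed, as in LZ76)
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

random.seed(0)
regular = "01" * 50                                            # predictable, synchronized
irregular = "".join(random.choice("01") for _ in range(100))   # complex
print(lz76(regular), lz76(irregular))  # low vs. high phrase count
```

A highly regular (e.g. globally synchronized) response compresses to a handful of phrases, while spatio-temporally differentiated responses do not, which is the intuition behind complexity measures of this kind.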
Affiliation(s)
- Ilaria Colombi
- Brain Development and Disease Laboratory, Istituto Italiano di Tecnologia, 16163 Genova, Italy
- Thierry Nieus
- Department of Biomedical and Clinical Sciences “L. Sacco”, University of Milan, 20157 Milan, Italy
- Marcello Massimini
- Department of Biomedical and Clinical Sciences “L. Sacco”, University of Milan, 20157 Milan, Italy
- IRCCS, Fondazione Don Carlo Gnocchi, 20148 Milan, Italy
- Michela Chiappalone
- Department of Informatics, Bioengineering, Robotics and System Engineering, 16145 Genova, Italy
- Rehab Technologies Lab., Istituto Italiano di Tecnologia, 16163 Genova, Italy
6.
Ferdous ZI, Yu A, Zeng Y, Guo X, Yan Z, Berdichevsky Y. Efficient and Accurate Computational Model of Neuron with Spike Frequency Adaptation. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:6496-6499. PMID: 34892598. DOI: 10.1109/embc46164.2021.9629799.
Abstract
Simplified models of neurons are widely used in computational investigations of large networks. One of the most important performance metrics of simplified models is their accuracy in reproducing action potential (spike) timing. In this article, we developed a simple, computationally efficient neuron model by modifying the adaptive exponential integrate-and-fire (AdEx) model [1] with a sigmoid afterhyperpolarization current (sigmoid AHP). Our model can precisely match the spike times and spike frequency adaptation of cortical pyramidal neurons, with accuracy similar to that of a more complex, biophysically realistic two-compartment model of the same neurons. This work provides a simplified neuronal model with improved spike timing accuracy for use in modeling of large neural networks. Clinical relevance: an accurate and computationally efficient single-neuron model will enable large network modeling of brain regions involved in neurological and psychiatric disorders and may lead to a better understanding of disorder mechanisms.
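The base AdEx model that the paper modifies fits in a few lines. The sketch below uses textbook-style AdEx parameters and forward-Euler integration (the paper's sigmoid-AHP term is not reproduced; the parameter values and the step current are illustrative choices), and shows the spike-frequency adaptation such models are meant to capture:

```python
import numpy as np

# Textbook-style AdEx parameters (pF, nS, mV, ms, pA); illustrative only.
C, gL, EL, VT, DT = 281.0, 30.0, -70.6, -50.4, 2.0
a, tau_w, b, Vr = 4.0, 144.0, 80.5, -70.6
dt, I = 0.1, 800.0          # time step (ms) and step current (pA)
V_cut = -40.0               # numerical spike-detection cutoff (mV)

V, w, spikes = EL, 0.0, []
for step in range(int(2000 / dt)):          # 2 s of simulated time
    dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) - w + I) / C
    dw = (a * (V - EL) - w) / tau_w
    V += dt * dV
    w += dt * dw
    if V > V_cut:                           # spike: reset V, jump adaptation w
        spikes.append(step * dt)
        V, w = Vr, w + b

isis = np.diff(spikes)
print(len(spikes), isis[0], isis[-1])  # later intervals are longer: adaptation
```

Each spike increments the adaptation current `w`, which opposes depolarization, so inter-spike intervals lengthen toward a steady rate; this is the behavior the paper's sigmoid AHP term refines for cortical pyramidal neurons.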
7.
Morales GB, Mirasso CR, Soriano MC. Unveiling the role of plasticity rules in reservoir computing. Neurocomputing 2021. DOI: 10.1016/j.neucom.2020.05.127.
8.
Zeng Y, Ferdous ZI, Zhang W, Xu M, Yu A, Patel D, Post V, Guo X, Berdichevsky Y, Yan Z. Understanding the Impact of Neural Variations and Random Connections on Inference. Front Comput Neurosci 2021; 15:612937. PMID: 34163343. PMCID: PMC8215547. DOI: 10.3389/fncom.2021.612937.
Abstract
Recent research suggests that in vitro neural networks created from dissociated neurons may be used for computing and performing machine learning tasks. To develop a better artificial intelligence system, a hybrid bio-silicon computer is worth exploring, but its performance is still inferior to that of a silicon-based computer. One reason may be that a living neural network has many intrinsic properties, such as random network connectivity, high network sparsity, and large neural and synaptic variability. These properties may lead to new design considerations, and existing algorithms need to be adjusted for living neural network implementation. This work investigates the impact of neural variations and random connections on inference with learning algorithms. A two-layer hybrid bio-silicon platform is constructed, and a five-step design method is proposed for the fast development of living neural network algorithms. Neural variations and dynamics are verified by fitting model parameters to biological experimental results. Random connections are generated under different connection probabilities to vary network sparsity. A multi-layer perceptron algorithm is tested with biological constraints on the MNIST dataset. The results show that a reasonable inference accuracy can be achieved despite the presence of neural variations and random network connections. A new adaptive pre-processing technique is proposed to ensure good learning accuracy across different living neural network sparsities.
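The random-connectivity constraint described in this abstract can be emulated in a conventional learner by masking weights so that absent "synapses" never appear. A toy sketch (not the paper's five-step method or its MNIST setup; the task, layer sizes, and connection probability below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Each weight exists only with probability p, mimicking random, sparse
# biological connectivity; absent connections stay absent during learning.
n_in, n_hid, n_out, p = 20, 32, 2, 0.3
mask1 = rng.random((n_in, n_hid)) < p
mask2 = rng.random((n_hid, n_out)) < p
W1 = rng.normal(0, 0.5, (n_in, n_hid)) * mask1
W2 = rng.normal(0, 0.5, (n_hid, n_out)) * mask2

# Toy task (invented): classify points by the sign of their mean.
X = rng.normal(size=(200, n_in))
y = (X.mean(axis=1) > 0).astype(int)

lr = 0.1
for _ in range(500):
    h = np.maximum(0, X @ W1)                     # ReLU hidden layer
    logits = h @ W2
    logits -= logits.max(axis=1, keepdims=True)   # stable softmax
    prob = np.exp(logits)
    prob /= prob.sum(axis=1, keepdims=True)
    grad = prob.copy()
    grad[np.arange(len(y)), y] -= 1.0
    grad /= len(y)                                # d(cross-entropy)/d(logits)
    dW2 = (h.T @ grad) * mask2                    # gradients masked too
    dh = grad @ W2.T
    dh[h <= 0] = 0.0
    dW1 = (X.T @ dh) * mask1
    W2 -= lr * dW2
    W1 -= lr * dW1

acc = ((np.maximum(0, X @ W1) @ W2).argmax(axis=1) == y).mean()
print(acc)
```

Masking both the weights and their gradients keeps the connectivity pattern fixed throughout training, which is the constraint a living network imposes on any learning algorithm layered on top of it.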
Affiliation(s)
- Yuan Zeng
- Electrical and Computer Engineering Department, Lehigh University, Bethlehem, PA, United States
- Zubayer Ibne Ferdous
- Electrical and Computer Engineering Department, Lehigh University, Bethlehem, PA, United States
- Weixiang Zhang
- Electrical and Computer Engineering Department, Beihang University, Beijing, China
- Mufan Xu
- Electrical and Computer Engineering Department, Beihang University, Beijing, China
- Anlan Yu
- Electrical and Computer Engineering Department, Lehigh University, Bethlehem, PA, United States
- Drew Patel
- Electrical and Computer Engineering Department, Lehigh University, Bethlehem, PA, United States
- Valentin Post
- Electrical and Computer Engineering Department, Lehigh University, Bethlehem, PA, United States
- Xiaochen Guo
- Electrical and Computer Engineering Department, Lehigh University, Bethlehem, PA, United States
- Yevgeny Berdichevsky
- Electrical and Computer Engineering Department, Lehigh University, Bethlehem, PA, United States
- Bioengineering Department, Lehigh University, Bethlehem, PA, United States
- Zhiyuan Yan
- Electrical and Computer Engineering Department, Lehigh University, Bethlehem, PA, United States
9.
Dias I, Levers MR, Lamberti M, Hassink GC, van Wezel R, le Feber J. Consolidation of memory traces in cultured cortical networks requires low cholinergic tone, synchronized activity and high network excitability. J Neural Eng 2021; 18. PMID: 33892486. DOI: 10.1088/1741-2552/abfb3f.
Abstract
In systems consolidation, encoded memories are replayed by the hippocampus during slow-wave sleep (SWS) and permanently stored in the neocortex. Declarative memory consolidation is believed to benefit from the oscillatory rhythms and low cholinergic tone observed in this sleep stage, but the underlying mechanisms remain unclear. To clarify the role of cholinergic modulation and synchronized activity in memory consolidation, we applied repeated electrical stimulation in mature cultures of dissociated rat cortical neurons with high or low cholinergic tone, mimicking the cue replay observed during systems consolidation under distinct cholinergic concentrations. In the absence of cholinergic input, these cultures display activity patterns hallmarked by network bursts, synchronized events reminiscent of the low-frequency oscillations observed during SWS. They display stable activity and connectivity, which mutually interact and achieve an equilibrium. Electrical stimulation reforms the equilibrium to include the stimulus response, a phenomenon interpreted as memory trace formation. Without cholinergic input, activity was burst-dominated. The first application of a stimulus induced significant connectivity changes, while subsequent repetition no longer affected connectivity. Presenting a second stimulus at a different electrode had the same effect, whereas returning to the initial stimulus did not induce further connectivity alterations, indicating that the second stimulus did not erase the 'memory trace' of the first. Distinctively, cultures with high cholinergic tone displayed reduced network excitability and dispersed firing, and electrical stimulation did not induce significant connectivity changes. We conclude that low cholinergic tone facilitates memory formation and consolidation, possibly through enhanced network excitability. Network bursts or SWS oscillations may merely reflect high network excitability.
Affiliation(s)
- Inês Dias
- Department of Clinical Neurophysiology, University of Twente, Enschede, PO Box 217 7500AE, The Netherlands
- Marloes R Levers
- Department of Clinical Neurophysiology, University of Twente, Enschede, PO Box 217 7500AE, The Netherlands
- Martina Lamberti
- Department of Clinical Neurophysiology, University of Twente, Enschede, PO Box 217 7500AE, The Netherlands
- Gerco C Hassink
- Department of Clinical Neurophysiology, University of Twente, Enschede, PO Box 217 7500AE, The Netherlands
- Richard van Wezel
- Department of Biomedical Signals and Systems, University of Twente, Enschede, PO Box 217 7500AE, The Netherlands
- Department of Biophysics, Radboud University, Nijmegen, PO Box 9010 6525AJ, The Netherlands
- Joost le Feber
- Department of Clinical Neurophysiology, University of Twente, Enschede, PO Box 217 7500AE, The Netherlands
10.
Bale MR, Bitzidou M, Giusto E, Kinghorn P, Maravall M. Sequence Learning Induces Selectivity to Multiple Task Parameters in Mouse Somatosensory Cortex. Curr Biol 2021; 31:473-485.e5. PMID: 33186553. PMCID: PMC7883307. DOI: 10.1016/j.cub.2020.10.059.
Abstract
Sequential temporal ordering and patterning are key features of natural signals, used by the brain to decode stimuli and perceive them as sensory objects. To explore how cortical neuronal activity underpins sequence discrimination, we developed a task in which mice distinguished between tactile "word" sequences constructed from distinct vibrations delivered to the whiskers, assembled in different orders. Animals licked to report the presence of the target sequence. Mice could respond to the earliest possible cues allowing discrimination, effectively solving the task as a "detection of change" problem, but enhanced their performance when responding later. Optogenetic inactivation showed that the somatosensory cortex was necessary for sequence discrimination. Two-photon imaging in layer 2/3 of the primary somatosensory "barrel" cortex (S1bf) revealed that, in well-trained animals, neurons had heterogeneous selectivity to multiple task variables including not just sensory input but also the animal's action decision and the trial outcome (presence or absence of the predicted reward). Many neurons were activated preceding goal-directed licking, thus reflecting the animal's learned action in response to the target sequence; these neurons were found as soon as mice learned to associate the rewarded sequence with licking. In contrast, learning evoked smaller changes in sensory response tuning: neurons responding to stimulus features were found in naive mice, and training did not generate neurons with enhanced temporal integration or categorical responses. Therefore, in S1bf, sequence learning results in neurons whose activity reflects the learned association between target sequence and licking rather than a refined representation of sensory features.
Affiliation(s)
- Michael R Bale
- Sussex Neuroscience, School of Life Sciences, University of Sussex, Brighton BN1 9QG, UK
- Malamati Bitzidou
- Sussex Neuroscience, School of Life Sciences, University of Sussex, Brighton BN1 9QG, UK
- Elena Giusto
- Sussex Neuroscience, School of Life Sciences, University of Sussex, Brighton BN1 9QG, UK
- Paul Kinghorn
- Sussex Neuroscience, School of Life Sciences, University of Sussex, Brighton BN1 9QG, UK
- Miguel Maravall
- Sussex Neuroscience, School of Life Sciences, University of Sussex, Brighton BN1 9QG, UK
11.
Nguyen PTM, Hayashi Y, Baptista MDS, Kondo T. Collective almost synchronization-based model to extract and predict features of EEG signals. Sci Rep 2020; 10:16342. PMID: 33004963. PMCID: PMC7530765. DOI: 10.1038/s41598-020-73346-z.
Abstract
Understanding the brain is important in the fields of science, medicine, and engineering. A promising approach to better understand the brain is through computational models, which are adjusted to reproduce data collected from the brain. One of the most commonly used types of data in neuroscience comes from electroencephalography (EEG), which records the tiny voltages generated when neurons in the brain are activated. In this study, we propose a model based on complex networks of weakly connected dynamical systems (Hindmarsh-Rose neurons or Kuramoto oscillators), set to operate in a dynamic regime known as Collective Almost Synchronization (CAS). Our model not only successfully reproduces both healthy and epileptic EEG signals, but also predicts EEG features such as the Hurst exponent and the power spectrum. The proposed model is able to forecast EEG signals 5.76 s into the future, with an average forecasting error of 9.22%. The random Kuramoto model produced the best result for forecasting seizure EEG, with an error of 11.21%.
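Of the two oscillator types named in this abstract, the Kuramoto model is the simpler to sketch. Below is a generic mean-field Kuramoto simulation showing how coupling strength controls collective synchrony (an illustration of the model only; the CAS regime and the paper's network construction are not reproduced, and all parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(K, N=200, dt=0.01, steps=2000):
    """Mean-field Kuramoto model; returns the final order parameter r in [0, 1]."""
    omega = rng.normal(0, 0.5, N)            # natural frequencies
    theta = rng.uniform(0, 2 * np.pi, N)     # initial phases
    for _ in range(steps):
        z = np.exp(1j * theta).mean()        # complex order parameter
        r, psi = abs(z), np.angle(z)
        theta += dt * (omega + K * r * np.sin(psi - theta))
    return abs(np.exp(1j * theta).mean())

r_weak = simulate(K=0.1)    # below the synchronization transition
r_strong = simulate(K=4.0)  # well above it
print(r_weak, r_strong)
```

Weak coupling leaves the phases incoherent (r near 0), while strong coupling locks them (r near 1); CAS-type models operate in an intermediate, weakly coupled regime where oscillators are not fully synchronized yet share a common mean-field influence.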
Affiliation(s)
- Phuong Thi Mai Nguyen
- Department of Computer and Information Sciences, Tokyo University of Agriculture and Technology, Tokyo, 184-8588, Japan
- Yoshikatsu Hayashi
- Biomedical Science/Engineering, School of Biological Sciences, University of Reading, Reading, RG6 6UR, UK
- Murilo Da Silva Baptista
- Institute for Complex System and Mathematical Biology, University of Aberdeen, Aberdeen, AB24 3UE, UK
- Toshiyuki Kondo
- Department of Computer and Information Sciences, Tokyo University of Agriculture and Technology, Tokyo, 184-8588, Japan
12.
Tanaka T, Nakajima K, Aoyagi T. Effect of recurrent infomax on the information processing capability of input-driven recurrent neural networks. Neurosci Res 2020; 156:225-233. DOI: 10.1016/j.neures.2020.02.001.
13.
Familiarity Detection and Memory Consolidation in Cortical Assemblies. eNeuro 2020; 7:ENEURO.0006-19.2020. PMID: 32122957. PMCID: PMC7215585. DOI: 10.1523/eneuro.0006-19.2020.
Abstract
Humans have a large capacity for recognition memory (Dudai, 1997), a fundamental property of higher-order brain functions such as abstraction and generalization (Vogt and Magnussen, 2007). Familiarity is the first step towards recognition memory. We have previously demonstrated, using unsupervised neural network simulations, that familiarity detection of complex patterns emerges in generic cortical microcircuits with bidirectional synaptic plasticity. It is therefore meaningful to conduct similar experiments on biological neuronal networks to validate these results. Studies of learning and memory in dissociated rodent neuronal cultures remain inconclusive to date; synchronized network bursts (SNBs) that occur spontaneously and periodically have been speculated to be an intervening factor. By optogenetically stimulating cultured cortical networks with random dot movies (RDMs), we were able to reduce the occurrence of SNBs, after which an ability for familiarity detection emerged: previously seen patterns elicited higher firing rates than novel ones. Differences in firing rate were distributed over the entire network, suggesting that familiarity detection is a system-level property. We also studied the change in SNB patterns following familiarity encoding. Support vector machine (SVM) classification results indicate that SNBs may facilitate memory consolidation of the learned pattern. In addition, using a novel network connectivity probing method, we were able to trace the change in synaptic efficacy induced by familiarity encoding, providing insights into the long-term impact of having SNBs in the cultures.
14.
Sumi T, Yamamoto H, Hirano-Iwata A. Suppression of hypersynchronous network activity in cultured cortical neurons using an ultrasoft silicone scaffold. Soft Matter 2020; 16:3195-3202. PMID: 32096811. DOI: 10.1039/c9sm02432h.
Abstract
The spontaneous activity pattern of cortical neurons in dissociated culture is characterized by burst firing that is highly synchronized among a wide population of cells. The degree of synchrony, however, is far higher than that in cortical tissue. Here, we employed polydimethylsiloxane (PDMS) elastomers to establish a novel system for culturing neurons on a scaffold with an elastic modulus resembling brain tissue, and investigated the effect of the scaffold's elasticity on network activity patterns in cultured rat cortical neurons. Using whole-cell patch clamp to assess the scaffold effect on the development of synaptic connections, we found that the amplitude of excitatory postsynaptic currents, as well as the frequency of spontaneous transmissions, was reduced in neuronal networks grown on an ultrasoft PDMS with an elastic modulus of 0.5 kPa. Furthermore, the ultrasoft scaffold was found to suppress neural correlations in the spontaneous activity of the cultured neuronal network. The dose of GsMTx-4, an antagonist of stretch-activated cation channels (SACs), required to reduce event generation below 1.0 event per min was lower for neurons on PDMS substrates than for neurons on a glass substrate. This suggests that a difference in the baseline level of SAC activation is a molecular mechanism underlying the alteration in neuronal network activity depending on scaffold stiffness. Our results demonstrate the potential of PDMS with biomimetic elasticity as a cell-culture scaffold for bridging the in vivo-in vitro gap in neuronal systems.
Affiliation(s)
- Takuma Sumi: Research Institute of Electrical Communication, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577, Japan
- Hideaki Yamamoto: Research Institute of Electrical Communication, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577, Japan; WPI-Advanced Institute for Materials Research (WPI-AIMR), Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577, Japan
- Ayumi Hirano-Iwata: Research Institute of Electrical Communication, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577, Japan; WPI-Advanced Institute for Materials Research (WPI-AIMR), Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577, Japan

15
Roberts TP, Kern FB, Fernando C, Szathmáry E, Husbands P, Philippides AO, Staras K. Encoding Temporal Regularities and Information Copying in Hippocampal Circuits. Sci Rep 2019; 9:19036. [PMID: 31836825] [PMCID: PMC6910951] [DOI: 10.1038/s41598-019-55395-1] [Citation(s) in RCA: 1]
Abstract
Discriminating, extracting and encoding temporal regularities is a critical requirement in the brain, relevant to sensory-motor processing and learning. However, the cellular mechanisms responsible remain enigmatic; for example, whether such abilities require specific, elaborately organized neural networks or arise from more fundamental, inherent properties of neurons. Here, using multi-electrode array technology, and focusing on interval learning, we demonstrate that sparse reconstituted rat hippocampal neural circuits are intrinsically capable of encoding and storing sub-second time intervals over timescales exceeding an hour, represented in changes in the spatiotemporal architecture of firing relationships among populations of neurons. This learning is accompanied by increases in mutual information and transfer entropy, formal measures related to information storage and flow. Moreover, temporal relationships derived from previously trained circuits can act as templates for copying intervals into untrained networks, suggesting the possibility of circuit-to-circuit information transfer. Our findings illustrate that dynamic encoding and stable copying of temporal relationships are fundamental properties of simple in vitro networks, with general significance for understanding elemental principles of information processing, storage and replication.
Affiliation(s)
- Terri P Roberts: Sussex Neuroscience, University of Sussex, Brighton, BN1 9QG, UK
- Felix B Kern: Sussex Neuroscience, University of Sussex, Brighton, BN1 9QG, UK; Centre for Computational Neuroscience and Robotics, School of Engineering and Informatics, University of Sussex, Brighton, BN1 9QJ, UK
- Chrisantha Fernando: School of EECS, Queen Mary University of London, E1 4NS, London, UK; Google DeepMind, London, N1C 4AG, UK
- Eörs Szathmáry: Parmenides Center for the Conceptual Foundations of Science, 82049, Pullach, Munich, Germany; Institute of Evolution, Centre for Ecological Research, 3 Klebelsberg Kuno Street, 8237, Tihany, Hungary
- Phil Husbands: Sussex Neuroscience, University of Sussex, Brighton, BN1 9QG, UK; Centre for Computational Neuroscience and Robotics, School of Engineering and Informatics, University of Sussex, Brighton, BN1 9QJ, UK
- Andrew O Philippides: Sussex Neuroscience, University of Sussex, Brighton, BN1 9QG, UK; Centre for Computational Neuroscience and Robotics, School of Engineering and Informatics, University of Sussex, Brighton, BN1 9QJ, UK
- Kevin Staras: Sussex Neuroscience, University of Sussex, Brighton, BN1 9QG, UK

16
Seoane LF. Evolutionary aspects of reservoir computing. Philos Trans R Soc Lond B Biol Sci 2019; 374:20180377. [PMID: 31006369] [PMCID: PMC6553587] [DOI: 10.1098/rstb.2018.0377] [Citation(s) in RCA: 21]
Abstract
Reservoir computing (RC) is a powerful computational paradigm that allows high versatility with cheap learning. While other artificial intelligence approaches need exhaustive resources to specify their inner workings, RC is based on a reservoir with highly nonlinear dynamics that does not require a fine tuning of its parts. These dynamics project input signals into high-dimensional spaces, where training linear readouts to extract input features is vastly simplified. Thus, inexpensive learning provides very powerful tools for decision-making, controlling dynamical systems, classification, etc. RC also facilitates solving multiple tasks in parallel, resulting in a high throughput. Existing literature focuses on applications in artificial intelligence and neuroscience. We review this literature from an evolutionary perspective. RC's versatility makes it a great candidate to solve outstanding problems in biology, which raises relevant questions. Is RC as abundant in nature as its advantages should imply? Has it evolved? Once evolved, can it be easily sustained? Under what circumstances? (In other words, is RC an evolutionarily stable computing paradigm?) To tackle these issues, we introduce a conceptual morphospace that would map computational selective pressures that could select for or against RC and other computing paradigms. This guides a speculative discussion about the questions above and allows us to propose a solid research line that brings together computation and evolution, with RC as a test model of the proposed hypotheses. This article is part of the theme issue 'Liquid brains, solid brains: How distributed cognitive architectures process information'.
Affiliation(s)
- Luís F. Seoane: ICREA-Complex Systems Lab, Universitat Pompeu Fabra, Barcelona 08003, Spain; Institut de Biologia Evolutiva (CSIC-UPF), Barcelona 08003, Spain

17
Tanaka G, Yamane T, Héroux JB, Nakane R, Kanazawa N, Takeda S, Numata H, Nakano D, Hirose A. Recent advances in physical reservoir computing: A review. Neural Netw 2019; 115:100-123. [PMID: 30981085] [DOI: 10.1016/j.neunet.2019.03.005] [Citation(s) in RCA: 338]
Abstract
Reservoir computing is a computational framework suited for temporal/sequential data processing. It is derived from several recurrent neural network models, including echo state networks and liquid state machines. A reservoir computing system consists of a reservoir for mapping inputs into a high-dimensional space and a readout for pattern analysis from the high-dimensional states in the reservoir. The reservoir is fixed and only the readout is trained with a simple method such as linear regression or classification. Thus, the major advantage of reservoir computing compared to other recurrent neural networks is fast learning, resulting in low training cost. Another advantage is that the reservoir without adaptive updating is amenable to hardware implementation using a variety of physical systems, substrates, and devices. In fact, such physical reservoir computing has attracted increasing attention in diverse fields of research. The purpose of this review is to provide an overview of recent advances in physical reservoir computing by classifying them according to the type of reservoir. We discuss the current issues and perspectives related to physical reservoir computing, in order to further expand its practical applications and develop next-generation machine learning systems.
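The echo-state-network variant of reservoir computing summarized in this abstract (a fixed random recurrent reservoir plus a trained linear readout) can be sketched in a few lines of NumPy. The network size, spectral radius, ridge penalty, and the sine-wave next-step prediction task below are illustrative choices, not details taken from the review.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 200

# Fixed random input and reservoir weights (never trained).
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

def run_reservoir(u):
    """Map a 1-D input sequence into high-dimensional reservoir states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
        states.append(x.copy())
    return np.array(states)

# Toy task: predict u(t+1) from the reservoir state driven by u(t).
u = np.sin(0.2 * np.arange(400))
X, y = run_reservoir(u[:-1]), u[1:]

# Only the linear readout is trained, via ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
pred = X @ W_out
print(np.mean((pred - y) ** 2))  # small training error
```

Because only `W_out` is solved for, training reduces to a single linear least-squares problem, which is the "fast learning, low training cost" advantage the abstract highlights.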
Affiliation(s)
- Gouhei Tanaka: Institute for Innovation in International Engineering Education, Graduate School of Engineering, The University of Tokyo, Tokyo 113-8656, Japan; Department of Electrical Engineering and Information Systems, Graduate School of Engineering, The University of Tokyo, Tokyo 113-8656, Japan
- Ryosho Nakane: Institute for Innovation in International Engineering Education, Graduate School of Engineering, The University of Tokyo, Tokyo 113-8656, Japan; Department of Electrical Engineering and Information Systems, Graduate School of Engineering, The University of Tokyo, Tokyo 113-8656, Japan
- Akira Hirose: Institute for Innovation in International Engineering Education, Graduate School of Engineering, The University of Tokyo, Tokyo 113-8656, Japan; Department of Electrical Engineering and Information Systems, Graduate School of Engineering, The University of Tokyo, Tokyo 113-8656, Japan

18
Closed-Loop Systems and In Vitro Neuronal Cultures: Overview and Applications. Adv Neurobiol 2019; 22:351-387. [DOI: 10.1007/978-3-030-11135-9_15] [Citation(s) in RCA: 6]
19
Forró C, Thompson-Steckel G, Weaver S, Weydert S, Ihle S, Dermutz H, Aebersold MJ, Pilz R, Demkó L, Vörös J. Modular microstructure design to build neuronal networks of defined functional connectivity. Biosens Bioelectron 2018; 122:75-87. [DOI: 10.1016/j.bios.2018.08.075] [Citation(s) in RCA: 40]
20
Nieus T, D'Andrea V, Amin H, Di Marco S, Safaai H, Maccione A, Berdondini L, Panzeri S. State-dependent representation of stimulus-evoked activity in high-density recordings of neural cultures. Sci Rep 2018; 8:5578. [PMID: 29615719] [PMCID: PMC5882875] [DOI: 10.1038/s41598-018-23853-x] [Citation(s) in RCA: 10]
Abstract
Neuronal responses to external stimuli vary from trial to trial partly because they depend on continuous spontaneous variations of the state of neural circuits, reflected in variations of ongoing activity prior to stimulus presentation. Understanding how post-stimulus responses relate to the pre-stimulus spontaneous activity is thus important to understand how state dependence affects information processing and neural coding, and how state variations can be discounted to better decode single-trial neural responses. Here we exploited high-resolution CMOS electrode arrays to record simultaneously from thousands of electrodes in in-vitro cultures stimulated at specific sites. We used information-theoretic analyses to study how ongoing activity affects the information that neuronal responses carry about the location of the stimuli. We found that responses exhibited state dependence on the time between the last spontaneous burst and the stimulus presentation and that the dependence could be described with a linear model. Importantly, we found that a small number of selected neurons carry most of the stimulus information and contribute to the state-dependent information gain. This suggests that a major value of large-scale recording is that it identifies the small subset of neurons that carry most of the information and that benefit the most from knowledge of state dependence.
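The information-theoretic quantity at the core of this study, the mutual information between stimulation site and evoked response, can be estimated with a simple plug-in formula. The sketch below uses synthetic stimulus/response pairs; the discretization and sample sizes are illustrative assumptions, not the paper's analysis pipeline.

```python
import numpy as np
from collections import Counter

def mutual_information(stim, resp):
    """Plug-in estimate of I(stimulus; response) in bits from paired
    discrete observations (e.g. stimulation site vs. binned response)."""
    n = len(stim)
    p_s, p_r = Counter(stim), Counter(resp)
    p_sr = Counter(zip(stim, resp))
    mi = 0.0
    for (s, r), c in p_sr.items():
        # p(s,r) * log2( p(s,r) / (p(s) p(r)) ), using raw counts
        mi += (c / n) * np.log2(c * n / (p_s[s] * p_r[r]))
    return mi

rng = np.random.default_rng(0)
stim = rng.integers(0, 4, 2000).tolist()   # 4 hypothetical stimulation sites
resp = [10 * s for s in stim]              # responses fully determined by site
mi = mutual_information(stim, resp)        # close to log2(4) = 2 bits
mi_shuf = mutual_information(stim, rng.permutation(resp).tolist())
print(round(mi, 3), round(mi_shuf, 3))     # informative vs. near-zero
```

Shuffling the responses destroys the stimulus-response association, so the estimate drops to near zero (up to the small positive bias inherent to plug-in estimators).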
Affiliation(s)
- Thierry Nieus: NetS3 Laboratory, Neuroscience and Brain Technologies Department, Istituto Italiano di Tecnologia, Genova, Italy; Department of Biomedical and Clinical Sciences "Luigi Sacco", Università di Milano, Milano, Italy
- Valeria D'Andrea: Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems @UniTn, Istituto Italiano di Tecnologia, Rovereto, Italy
- Hayder Amin: NetS3 Laboratory, Neuroscience and Brain Technologies Department, Istituto Italiano di Tecnologia, Genova, Italy
- Stefano Di Marco: NetS3 Laboratory, Neuroscience and Brain Technologies Department, Istituto Italiano di Tecnologia, Genova, Italy; Scienze cliniche applicate e biotecnologiche, Università dell'Aquila, L'Aquila, Italy
- Houman Safaai: Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems @UniTn, Istituto Italiano di Tecnologia, Rovereto, Italy; Department of Neurobiology, Harvard Medical School, 02115, Boston, Massachusetts, USA
- Alessandro Maccione: NetS3 Laboratory, Neuroscience and Brain Technologies Department, Istituto Italiano di Tecnologia, Genova, Italy
- Luca Berdondini: NetS3 Laboratory, Neuroscience and Brain Technologies Department, Istituto Italiano di Tecnologia, Genova, Italy
- Stefano Panzeri: Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems @UniTn, Istituto Italiano di Tecnologia, Rovereto, Italy

21

22
McFarland DJ. How neuroscience can inform the study of individual differences in cognitive abilities. Rev Neurosci 2018; 28:343-362. [PMID: 28195556] [DOI: 10.1515/revneuro-2016-0073] [Citation(s) in RCA: 12]
Abstract
Theories of human mental abilities should be consistent with what is known in neuroscience. Currently, tests of human mental abilities are modeled by cognitive constructs such as attention, working memory, and speed of information processing. These constructs are in turn related to a single general ability. However, brains are very complex systems and whether most of the variability between the operations of different brains can be ascribed to a single factor is questionable. Research in neuroscience suggests that psychological processes such as perception, attention, decision, and executive control are emergent properties of interacting distributed networks. The modules that make up these networks use similar computational processes that involve multiple forms of neural plasticity, each having different time constants. Accordingly, these networks might best be characterized in terms of the information they process rather than in terms of abstract psychological processes such as working memory and executive control.
23
Zhang X, Foderaro G, Henriquez C, Ferrari S. A Scalable Weight-Free Learning Algorithm for Regulatory Control of Cell Activity in Spiking Neuronal Networks. Int J Neural Syst 2018; 28:1750015. [DOI: 10.1142/s0129065717500150] [Citation(s) in RCA: 24]
Abstract
Recent developments in neural stimulation and recording technologies are providing scientists with the ability to record and control the activity of individual neurons in vitro or in vivo, with very high spatial and temporal resolution. Tools such as optogenetics, for example, are having a significant impact in the neuroscience field by delivering optical firing control with the precision and spatiotemporal resolution required for investigating information processing and plasticity in biological brains. While a number of training algorithms have been developed to date for spiking neural network (SNN) models of biological neuronal circuits, existing methods rely on learning rules that adjust the synaptic strengths (or weights) directly, in order to obtain the desired network-level (or functional-level) performance. As such, they are not applicable to modifying plasticity in biological neuronal circuits, in which synaptic strengths only change as a result of pre- and post-synaptic neuron firings or biological mechanisms beyond our control. This paper presents a weight-free training algorithm that relies solely on adjusting the spatiotemporal delivery of neuron firings in order to optimize the network performance. The proposed weight-free algorithm does not require any knowledge of the SNN model or its plasticity mechanisms. As a result, this training approach is potentially realizable in vitro or in vivo via neural stimulation and recording technologies, such as optogenetics and multielectrode arrays, and could be utilized to control plasticity at multiple scales of biological neuronal circuits. The approach is demonstrated by training SNNs with hundreds of units to control a virtual insect navigating in an unknown environment.
Affiliation(s)
- Xu Zhang: Mechanical Engineering and Materials Science, Duke University, Box 90300 Hudson Hall, Durham, NC, USA
- Greg Foderaro: Mechanical Engineering and Materials Science, Duke University, Box 90300 Hudson Hall, Durham, NC, USA
- Craig Henriquez: Biomedical Engineering, Duke University, Box 90281 Hudson Hall, Durham, 27708, USA
- Silvia Ferrari: Sibley School of Mechanical and Aerospace Engineering, Cornell University, 105 Upson Hall, Ithaca, New York, 14853, USA

24
Familiarity Detection is an Intrinsic Property of Cortical Microcircuits with Bidirectional Synaptic Plasticity. eNeuro 2017; 4:eN-NWR-0361-16. [PMID: 28534043] [PMCID: PMC5439184] [DOI: 10.1523/eneuro.0361-16.2017] [Citation(s) in RCA: 1]
Abstract
Humans instantly recognize a previously seen face as “familiar.” To deepen our understanding of familiarity-novelty detection, we simulated biologically plausible neural network models of generic cortical microcircuits consisting of spiking neurons with random recurrent synaptic connections. NMDA receptor (NMDAR)-dependent synaptic plasticity was implemented to allow for unsupervised learning and bidirectional modifications. Network spiking activity evoked by sensory inputs consisting of face images altered synaptic efficacy, which resulted in the network responding more strongly to a previously seen face than a novel face. Network size determined how many faces could be accurately recognized as familiar. When the simulated model became sufficiently complex in structure, multiple familiarity traces could be retained in the same network by forming partially-overlapping subnetworks that differ slightly from each other, thereby resulting in a high storage capacity. Fisher’s discriminant analysis was applied to identify critical neurons whose spiking activity predicted familiar input patterns. Intriguingly, as sensory exposure was prolonged, the selected critical neurons tended to appear at deeper layers of the network model, suggesting recruitment of additional circuits in the network for incremental information storage. We conclude that generic cortical microcircuits with bidirectional synaptic plasticity have an intrinsic ability to detect familiar inputs. This ability does not require a specialized wiring diagram or supervision and can therefore be expected to emerge naturally in developing cortical circuits.
25
Bhinge A, Namboori SC, Zhang X, VanDongen AMJ, Stanton LW. Genetic Correction of SOD1 Mutant iPSCs Reveals ERK and JNK Activated AP1 as a Driver of Neurodegeneration in Amyotrophic Lateral Sclerosis. Stem Cell Reports 2017; 8:856-869. [PMID: 28366453] [PMCID: PMC5390134] [DOI: 10.1016/j.stemcr.2017.02.019] [Citation(s) in RCA: 91]
Abstract
Although mutations in several genes with diverse functions have been known to cause amyotrophic lateral sclerosis (ALS), it is unknown to what extent causal mutations impinge on common pathways that drive motor neuron (MN)-specific neurodegeneration. In this study, we combined induced pluripotent stem cells-based disease modeling with genome engineering and deep RNA sequencing to identify pathways dysregulated by mutant SOD1 in human MNs. Gene expression profiling and pathway analysis followed by pharmacological screening identified activated ERK and JNK signaling as key drivers of neurodegeneration in mutant SOD1 MNs. The AP1 complex member JUN, an ERK/JNK downstream target, was observed to be highly expressed in MNs compared with non-MNs, providing a mechanistic insight into the specific degeneration of MNs. Importantly, investigations of mutant FUS MNs identified activated p38 and ERK, indicating that network perturbations induced by ALS-causing mutations converge partly on a few specific pathways that are drug responsive and provide immense therapeutic potential.
Highlights:
- Genome correction of SOD1 E100G mutation corrects ALS phenotypes in MNs
- Activation of MAPK, AP1, WNT, cell-cycle, and p53 signaling in ALS MNs
- Pharmacological screening uncovers ERK and JNK signaling as therapeutic targets
- Susceptibility of MNs to degeneration may be due to heightened JUN activity in MNs
Affiliation(s)
- Akshay Bhinge: Stem Cell and Regenerative Biology, Genome Institute of Singapore, Singapore 138672, Singapore
- Seema C Namboori: Stem Cell and Regenerative Biology, Genome Institute of Singapore, Singapore 138672, Singapore
- Xiaoyu Zhang: Program for Neuroscience and Behavioral Disorders, Duke-NUS Medical School, Singapore 169857, Singapore; NUS Graduate School for Integrative Sciences and Engineering, National University of Singapore, Singapore 119077, Singapore
- Antonius M J VanDongen: Program for Neuroscience and Behavioral Disorders, Duke-NUS Medical School, Singapore 169857, Singapore
- Lawrence W Stanton: Stem Cell and Regenerative Biology, Genome Institute of Singapore, Singapore 138672, Singapore; Department of Biological Sciences, National University of Singapore, Singapore 117543, Singapore

26
Corticostriatal circuit mechanisms of value-based action selection: Implementation of reinforcement learning algorithms and beyond. Behav Brain Res 2016; 311:110-121. [DOI: 10.1016/j.bbr.2016.05.017] [Citation(s) in RCA: 10]
27
Goel A, Buonomano DV. Temporal Interval Learning in Cortical Cultures Is Encoded in Intrinsic Network Dynamics. Neuron 2016; 91:320-7. [PMID: 27346530] [DOI: 10.1016/j.neuron.2016.05.042] [Citation(s) in RCA: 21]
Abstract
Telling time and anticipating when external events will happen is among the most important tasks the brain performs. Yet the neural mechanisms underlying timing remain elusive. One theory proposes that timing is a general and intrinsic computation of cortical circuits. We tested this hypothesis using electrical and optogenetic stimulation to determine if brain slices could "learn" temporal intervals. Presentation of intervals between 100 and 500 ms altered the temporal profile of evoked network activity in an interval- and pathway-specific manner, suggesting that the network learned to anticipate an expected stimulus. Recordings performed during training revealed a progressive increase in evoked network activity, followed by subsequent refinement of temporal dynamics, which was related to a time-window-specific increase in the excitatory-inhibitory balance. These results support the hypothesis that subsecond timing is an intrinsic computation and that timing emerges from network-wide, yet pathway-specific, changes in evoked neural dynamics.
Affiliation(s)
- Anubhuti Goel: Department of Neurology, University of California, Los Angeles, Reed Neurological Research Ctr-A-145, 710 Westwood Plaza, Los Angeles, CA 90095, USA
- Dean V Buonomano: Departments of Neurobiology and Psychology, Integrative Center for Learning and Memory, University of California, Los Angeles, 695 Young Drive, Los Angeles, CA 90095, USA

28
Enel P, Procyk E, Quilodran R, Dominey PF. Reservoir Computing Properties of Neural Dynamics in Prefrontal Cortex. PLoS Comput Biol 2016; 12:e1004967. [PMID: 27286251] [PMCID: PMC4902312] [DOI: 10.1371/journal.pcbi.1004967] [Citation(s) in RCA: 70]
Abstract
Primates display a remarkable ability to adapt to novel situations. Determining what is most pertinent in these situations is not always possible based only on the current sensory inputs, and often also depends on recent inputs and behavioral outputs that contribute to internal states. Thus, one can ask how cortical dynamics generate representations of these complex situations. It has been observed that mixed selectivity in cortical neurons contributes to represent diverse situations defined by a combination of the current stimuli, and that mixed selectivity is readily obtained in randomly connected recurrent networks. In this context, these reservoir networks reproduce the highly recurrent nature of local cortical connectivity. Recombining present and past inputs, random recurrent networks from the reservoir computing framework generate mixed selectivity which provides pre-coded representations of an essentially universal set of contexts. These representations can then be selectively amplified through learning to solve the task at hand. We thus explored their representational power and dynamical properties after training a reservoir to perform a complex cognitive task initially developed for monkeys. The reservoir model inherently displayed a dynamic form of mixed selectivity, key to the representation of the behavioral context over time. The pre-coded representation of context was amplified by training a feedback neuron to explicitly represent this context, thereby reproducing the effect of learning and allowing the model to perform more robustly. This second version of the model demonstrates how a hybrid dynamical regime combining spatio-temporal processing of reservoirs, and input-driven attracting dynamics generated by the feedback neuron, can be used to solve a complex cognitive task. We compared reservoir activity to neural activity of dorsal anterior cingulate cortex of monkeys, which revealed similar network dynamics. We argue that reservoir computing is a pertinent framework to model local cortical dynamics and their contribution to higher cognitive function.

One of the most noteworthy properties of primate behavior is its diversity and adaptability. Human and non-human primates can learn an astonishing variety of novel behaviors that could not have been directly anticipated by evolution. How then can the nervous system be prewired to anticipate the ability to represent such an open class of behaviors? Recent developments in a branch of recurrent neural networks, referred to as reservoir computing, begin to shed light on this question. The novelty of reservoir computing is that the recurrent connections in the network are fixed, and only the connections from these neurons to the output neurons change with learning. The fixed recurrent connections provide the network with an inherent high-dimensional dynamics that creates essentially all possible spatial and temporal combinations of the inputs, which can then be selected, by learning, to perform the desired task. This high-dimensional mixture of activity inherent to reservoirs has begun to be found in the primate cortex. Here we make direct comparisons between dynamic coding in the cortex and in reservoirs performing the same task, and contribute to the emerging evidence that cortex has significant reservoir properties.
Affiliation(s)
- Pierre Enel: Univ Lyon, Université Lyon 1, Inserm, Stem Cell and Brain Research Institute U1208, Bron, France; Icahn School of Medicine at Mount Sinai, New York, New York, United States of America
- Emmanuel Procyk: Univ Lyon, Université Lyon 1, Inserm, Stem Cell and Brain Research Institute U1208, Bron, France
- René Quilodran: Univ Lyon, Université Lyon 1, Inserm, Stem Cell and Brain Research Institute U1208, Bron, France; Escuela de Medicina, Departamento de Pre-clínicas, Universidad de Valparaíso, Hontaneda, Valparaíso, Chile
- Peter Ford Dominey: Univ Lyon, Université Lyon 1, Inserm, Stem Cell and Brain Research Institute U1208, Bron, France

29
Yada Y, Kanzaki R, Takahashi H. State-Dependent Propagation of Neuronal Sub-Population in Spontaneous Synchronized Bursts. Front Syst Neurosci 2016; 10:28. [PMID: 27065820] [PMCID: PMC4815764] [DOI: 10.3389/fnsys.2016.00028] [Citation(s) in RCA: 24]
Abstract
Repeating stable spatiotemporal patterns emerge in synchronized spontaneous activity in neuronal networks. The repertoire of such patterns can serve as memory, or a reservoir of information, in a neuronal network; moreover, the variety of patterns may represent the network memory capacity. However, a neuronal substrate for producing a repertoire of patterns in synchronization remains elusive. We herein hypothesize that state-dependent propagation of a neuronal sub-population is the key mechanism. By combining high-resolution measurement with a 4096-channel complementary metal-oxide semiconductor (CMOS) microelectrode array (MEA) and dimensionality reduction with non-negative matrix factorization (NMF), we investigated synchronized bursts of dissociated rat cortical neurons at approximately 3 weeks in vitro. We found that bursts had a repertoire of repeating spatiotemporal patterns, and different patterns shared a partially similar sequence of sub-population, supporting the idea of sequential structure of neuronal sub-populations during synchronized activity. We additionally found that similar spatiotemporal patterns tended to appear successively and periodically, suggesting a state-dependent fluctuation of propagation, which has been overlooked in existing literature. Thus, such a state-dependent property within the sequential sub-population structure is a plausible neural substrate for performing a repertoire of stable patterns during synchronized activity.
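The dimensionality-reduction step described here, non-negative matrix factorization (NMF) applied to high-density MEA activity, can be sketched as follows. The channel counts, synthetic spike-count matrix, and number of modules below are illustrative stand-ins, not the study's data or exact procedure.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)

# Hypothetical spike-count matrix (channels x time bins) generated from
# three latent sub-population modules, mimicking burst activity patterns.
n_channels, n_bins, n_modules = 64, 300, 3
spatial = rng.random((n_channels, n_modules))   # which channels co-activate
temporal = rng.random((n_modules, n_bins))      # when each module is active
counts = rng.poisson(5.0 * spatial @ temporal).astype(float)

# NMF factorizes counts ~= W @ H: columns of W are spatial sub-population
# patterns, rows of H are their activation time courses during bursts.
model = NMF(n_components=n_modules, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(counts)
H = model.components_

rel_err = np.linalg.norm(counts - W @ H) / np.linalg.norm(counts)
print(W.shape, H.shape, round(rel_err, 2))  # low-rank structure recovered
```

The non-negativity constraint is what makes the factors interpretable as additive sub-populations and their activations, which is why NMF (rather than, say, PCA) suits sequences of co-active neuronal groups.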
Collapse
Affiliation(s)
- Yuichiro Yada
- Research Center for Advanced Science and Technology, The University of Tokyo, Tokyo, Japan; Department of Mechano-Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan; Japan Society for the Promotion of Science, Tokyo, Japan
- Ryohei Kanzaki
- Research Center for Advanced Science and Technology, The University of Tokyo, Tokyo, Japan; Department of Mechano-Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
- Hirokazu Takahashi
- Research Center for Advanced Science and Technology, The University of Tokyo, Tokyo, Japan; Department of Mechano-Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
30
Mendis GDC, Morrisroe E, Petrou S, Halgamuge SK. Use of adaptive network burst detection methods for multielectrode array data and the generation of artificial spike patterns for method evaluation. J Neural Eng 2016; 13:026009. [PMID: 26861133 DOI: 10.1088/1741-2560/13/2/026009] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
OBJECTIVE Multielectrode arrays are an informative extracellular recording technology that enables the analysis of cultured neuronal networks; network bursts (NBs) are a dominant feature observed in these recordings. This paper focuses on validating NB detection methods on different network activity patterns and on developing a detection method that performs robustly across a wide variety of activity patterns. APPROACH A firing-rate-based approach was used to generate artificial spike timestamps in which NBs were introduced as episodes of increased spiking probability. Variations in firing and bursting characteristics were also included. In addition, an improved method of detecting NBs is proposed, based on time-binned average firing rates and the time overlaps of single-channel bursts. The robustness of the proposed method was compared against three existing algorithms using simulated, publicly available and newly acquired data. MAIN RESULTS A range of activity patterns were generated by changing simulation variables that correspond to NB duration (40-2200 ms), intervals (0.3-16 s), firing rates (0.1-1 spikes s⁻¹), local burst percentage (0%-90%) and the number of channels in local bursts (20-40), as well as the number of tonic and frequently-bursting channels. By extracting simulation parameters directly from real data, we generated synthetic data that closely resemble the activity of mouse and rat cortical cultures in native and chemically perturbed states. In 50 simulated data sets with randomly selected parameter values, the improved NB detection method performed better (ascertained by the F-measure) than three existing methods (p < 0.005). The improved method was also able to detect clustered, long-tailed and short-frequent NBs in real data. SIGNIFICANCE This work presents an objective method of assessing the applicability of NB detection methods to different neuronal activity patterns. Furthermore, it proposes an improved NB detection method that can be used robustly across a range of data types.
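The core of a firing-rate-based NB detector of the kind this abstract discusses can be sketched in a few lines: pool spikes across channels, bin them, and flag contiguous runs of bins whose rate exceeds an adaptive threshold. This is a simplified illustration, not the paper's algorithm; the bin width, the mean-based threshold, and the toy data are assumptions.

```python
# Sketch: detect network bursts (NBs) as runs of time bins whose pooled
# spike count exceeds an adaptive, data-driven threshold.
import numpy as np

def detect_network_bursts(spike_times, duration, bin_ms=25.0, thresh_factor=3.0):
    """Return (start, end) times in seconds of detected network bursts.

    spike_times: 1-D array of pooled spike timestamps (s) from all channels.
    """
    n_bins = int(np.ceil(duration / (bin_ms / 1000.0)))
    counts, edges = np.histogram(spike_times, bins=n_bins, range=(0.0, duration))
    threshold = thresh_factor * counts.mean()  # adapts to overall firing level
    above = counts > threshold
    bursts, start = [], None
    for i, hot in enumerate(above):
        if hot and start is None:
            start = i                          # burst onset bin
        elif not hot and start is not None:
            bursts.append((edges[start], edges[i]))
            start = None
    if start is not None:                      # burst runs to end of recording
        bursts.append((edges[start], edges[-1]))
    return bursts

# Toy example: sparse background firing plus one dense burst at 1.0-1.2 s.
rng = np.random.default_rng(1)
background = rng.uniform(0.0, 2.0, size=40)
burst = rng.uniform(1.0, 1.2, size=400)
spikes = np.sort(np.concatenate([background, burst]))
print(detect_network_bursts(spikes, duration=2.0))
```

The paper's improved method additionally uses time overlaps of single-channel bursts; the mean-based threshold above is only a stand-in for its adaptive criterion.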
Affiliation(s)
- G D C Mendis
- Department of Mechanical Engineering, University of Melbourne, Parkville, VIC 3010, Australia
31
Tetzlaff C, Dasgupta S, Kulvicius T, Wörgötter F. The Use of Hebbian Cell Assemblies for Nonlinear Computation. Sci Rep 2015; 5:12866. [PMID: 26249242 PMCID: PMC4650703 DOI: 10.1038/srep12866] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2015] [Accepted: 07/10/2015] [Indexed: 11/25/2022] Open
Abstract
When learning a complex task, our nervous system self-organizes large groups of neurons into coherent dynamic activity patterns, creating a network with multiple, simultaneously active and computationally powerful cell assemblies. How such ordered structures form while preserving the rich diversity of neural dynamics needed for computation is still unknown. Here we show that the combination of synaptic plasticity with the slower process of synaptic scaling (i) drives the formation of cell assemblies and (ii) enhances the diversity of neural dynamics, facilitating the learning of complex calculations. Due to synaptic scaling, the dynamics of different cell assemblies do not interfere with each other. As a consequence, this type of self-organization allows a robot to execute a difficult six-degrees-of-freedom manipulation task in which assemblies must learn to compute complex nonlinear transforms and, for execution, must cooperate with each other without interference. This mechanism thus permits the self-organization of computationally powerful sub-structures in dynamic networks for behavior control.
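The interplay the abstract describes, fast Hebbian potentiation balanced by a slower homeostatic process, can be sketched as follows. This is a minimal toy model, not the paper's network: the learning rates, the multiplicative scaling rule, and the random drive are illustrative assumptions.

```python
# Sketch: fast Hebbian growth combined with slower multiplicative
# synaptic scaling that counteracts runaway potentiation, keeping each
# neuron's total input bounded. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n = 50
W = rng.random((n, n)) * 0.1          # synaptic weights, W[i, j]: j -> i
np.fill_diagonal(W, 0.0)
target_input = W.sum(axis=1).mean()   # homeostatic reference per neuron

eta_hebb, eta_scale = 0.01, 0.1       # fast Hebbian vs slower scaling rate

for step in range(200):
    rate = np.tanh(W @ rng.random(n))        # toy firing rates
    # Hebbian term: co-active pre/post pairs strengthen their synapse.
    W += eta_hebb * np.outer(rate, rate)
    np.fill_diagonal(W, 0.0)
    # Synaptic scaling: multiplicatively pull each neuron's summed input
    # back toward the reference, so strongly potentiated assemblies do
    # not swamp the rest of the network.
    row_sum = W.sum(axis=1, keepdims=True)
    W *= 1.0 + eta_scale * (target_input - row_sum) / np.maximum(row_sum, 1e-9)

# Without the scaling step the row sums would grow without bound;
# with it they settle at a finite level.
print(float(W.sum(axis=1).mean()))
```

Separating the two timescales is the point: plasticity alone destabilizes the weights, while scaling alone learns nothing; together they allow assemblies to form without mutual interference.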
Affiliation(s)
- Christian Tetzlaff
- Institute for Physics - Biophysics, Georg-August-University, Friedrich-Hund Platz 1, 37077 Göttingen, Germany; Bernstein Center for Computational Neuroscience, Georg-August-University, Friedrich-Hund Platz 1, 37077 Göttingen, Germany
- Sakyasingha Dasgupta
- Institute for Physics - Biophysics, Georg-August-University, Friedrich-Hund Platz 1, 37077 Göttingen, Germany; Bernstein Center for Computational Neuroscience, Georg-August-University, Friedrich-Hund Platz 1, 37077 Göttingen, Germany
- Tomas Kulvicius
- Institute for Physics - Biophysics, Georg-August-University, Friedrich-Hund Platz 1, 37077 Göttingen, Germany; Bernstein Center for Computational Neuroscience, Georg-August-University, Friedrich-Hund Platz 1, 37077 Göttingen, Germany
- Florentin Wörgötter
- Institute for Physics - Biophysics, Georg-August-University, Friedrich-Hund Platz 1, 37077 Göttingen, Germany; Bernstein Center for Computational Neuroscience, Georg-August-University, Friedrich-Hund Platz 1, 37077 Göttingen, Germany