51. Wang X, Lin X, Dang X. Supervised learning in spiking neural networks: A review of algorithms and evaluations. Neural Netw 2020; 125:258-280. [PMID: 32146356] [DOI: 10.1016/j.neunet.2020.02.011]
Abstract
As a new brain-inspired computational model of artificial neural networks, a spiking neural network encodes and processes neural information through precisely timed spike trains. Spiking neural networks are composed of biologically plausible spiking neurons, which have become suitable tools for processing complex temporal or spatiotemporal information. However, because of their intricately discontinuous and implicit nonlinear mechanisms, the formulation of efficient supervised learning algorithms for spiking neural networks is difficult and has become an important problem in this research field. This article presents a comprehensive review of supervised learning algorithms for spiking neural networks and evaluates them qualitatively and quantitatively. First, a comparison between spiking neural networks and traditional artificial neural networks is provided. The general framework and some related theories of supervised learning for spiking neural networks are then introduced. Furthermore, the state-of-the-art supervised learning algorithms of recent years are reviewed from the perspectives of applicability to spiking neural network architectures and the inherent mechanisms of supervised learning algorithms. A performance comparison of spike-train learning among some representative algorithms is also made. In addition, we provide five qualitative performance evaluation criteria for supervised learning algorithms for spiking neural networks and further present a new taxonomy of supervised learning algorithms based on these five criteria. Finally, some future research directions in this field are outlined.
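As an illustrative aside (not from the paper itself): supervised learning in spiking neural networks is commonly evaluated by how closely a learned output spike train matches a desired one, and a standard spike-train metric for this purpose is the van Rossum distance. A minimal sketch, with illustrative values for `tau`, `dt`, and `t_max`:

```python
import numpy as np

def van_rossum_distance(spikes_a, spikes_b, tau=10.0, dt=0.1, t_max=100.0):
    """Van Rossum distance between two spike trains (spike times in ms).

    Each train is convolved with a causal exponential kernel of time
    constant tau; the distance is the L2 norm of the difference of the
    two filtered traces.
    """
    t = np.arange(0.0, t_max, dt)

    def filtered(spikes):
        s = np.zeros_like(t)
        for ts in spikes:
            mask = t >= ts
            s[mask] += np.exp(-(t[mask] - ts) / tau)
        return s

    diff = filtered(spikes_a) - filtered(spikes_b)
    return np.sqrt(np.sum(diff ** 2) * dt / tau)

# identical trains have zero distance
print(van_rossum_distance([10, 30, 55], [10, 30, 55]))  # 0.0
```

Identical trains give a distance of exactly zero; the time constant `tau` sets how strongly small timing differences between spikes are penalized.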
Affiliation(s)
- Xiangwen Wang: College of Computer Science and Engineering, Northwest Normal University, Lanzhou, 730070, People's Republic of China
- Xianghong Lin: College of Computer Science and Engineering, Northwest Normal University, Lanzhou, 730070, People's Republic of China
- Xiaochao Dang: College of Computer Science and Engineering, Northwest Normal University, Lanzhou, 730070, People's Republic of China

52. Specht K. Current Challenges in Translational and Clinical fMRI and Future Directions. Front Psychiatry 2020; 10:924. [PMID: 31969840] [PMCID: PMC6960120] [DOI: 10.3389/fpsyt.2019.00924]
Abstract
Translational neuroscience is an important field that brings together clinical praxis and neuroscience methods. In this review article, the focus is on functional magnetic resonance imaging (fMRI) and its applicability in clinical fMRI studies. In light of the "replication crisis," three aspects are critically discussed: first, the fMRI signal itself; second, current fMRI praxis; and third, the next generation of analysis strategies. Current approaches such as resting-state fMRI, meta-analyses, and machine learning are discussed along with their advantages, potential pitfalls, and disadvantages. One major concern is that the fMRI signal shows substantial within- and between-subject variability, which affects the reliability of task-related and, in particular, resting-state fMRI studies. Furthermore, the lack of standardized acquisition and analysis methods hinders the further development of clinically relevant approaches. However, meta-analyses and machine-learning approaches may help to overcome current shortcomings in the methods by identifying new, as yet hidden relationships, and may help to build new models of disorder mechanisms. Furthermore, better control of parameters that influence the fMRI signal and that can easily be controlled for, such as blood pressure, heart rate, diet, and time of day, might improve reliability substantially.
Affiliation(s)
- Karsten Specht: Department of Biological and Medical Psychology, University of Bergen, Bergen, Norway; Mohn Medical Imaging and Visualization Centre, Haukeland University Hospital, Bergen, Norway; Department of Education, UiT/The Arctic University of Norway, Tromsø, Norway

53.

54. Nakajima M, Schmitt LI. Understanding the circuit basis of cognitive functions using mouse models. Neurosci Res 2019; 152:44-58. [PMID: 31857115] [DOI: 10.1016/j.neures.2019.12.009]
Abstract
Understanding how cognitive functions arise from computations occurring in the brain requires the ability to measure and perturb neural activity while the relevant circuits are engaged for specific cognitive processes. Rapid technical advances have led to the development of new approaches to transiently activate and suppress neuronal activity as well as to record simultaneously from hundreds to thousands of neurons across multiple brain regions during behavior. To realize the full potential of these approaches for understanding cognition, however, it is critical that behavioral conditions and stimuli are effectively designed to engage the relevant brain networks. Here, we highlight recent innovations that enable this combined approach. In particular, we focus on how to design behavioral experiments that leverage the ever-growing arsenal of technologies for controlling and measuring neural activity in order to understand cognitive functions.
Affiliation(s)
- Miho Nakajima: McGovern Institute for Brain Research and the Department of Brain and Cognitive Science, Massachusetts Institute of Technology, Cambridge, MA, United States
- L Ian Schmitt: McGovern Institute for Brain Research and the Department of Brain and Cognitive Science, Massachusetts Institute of Technology, Cambridge, MA, United States; Center for Brain Science, RIKEN, Wako, Saitama, Japan

55. Tandon N, Tandon R. Using machine learning to explain the heterogeneity of schizophrenia: realizing the promise and avoiding the hype. Schizophr Res 2019; 214:70-75. [PMID: 31500998] [DOI: 10.1016/j.schres.2019.08.032]
Abstract
Despite extensive research and prodigious advances in neuroscience, our comprehension of the nature of schizophrenia remains rudimentary. Our failure to make progress is attributed to the extreme heterogeneity of this condition, the enormous complexity of the human brain, the limitations of extant research paradigms, and the inadequacy of traditional statistical methods to integrate or interpret increasingly large amounts of multidimensional information relevant to unravelling brain function. Fortunately, the rapidly developing science of machine learning appears to provide tools capable of addressing each of these impediments. Enthusiasm about the potential of machine-learning methods to break the current impasse is reflected in the steep increase in the number of scientific publications on the application of machine learning to the study of schizophrenia. Machine-learning approaches are, however, poorly understood by schizophrenia researchers and clinicians alike. In this paper, we provide a simple description of the nature and techniques of machine learning and their application to the study of schizophrenia. We then summarize its potential and constraints, with illustrations from six studies of machine learning in schizophrenia, and address some common misconceptions about machine learning. We suggest some guidelines for researchers, readers, science editors, and reviewers of the burgeoning machine-learning literature in schizophrenia. To realize its enormous promise, we suggest the disciplined application of machine-learning methods to the study of schizophrenia, with clear recognition of their capabilities and challenges, accompanied by a concurrent effort to improve machine-learning literacy among neuroscientists and mental health professionals.
Affiliation(s)
- Neeraj Tandon: Department of Psychiatry, WMU Homer Stryker School of Medicine, Kalamazoo, MI, United States of America
- Rajiv Tandon: Department of Psychiatry, WMU Homer Stryker School of Medicine, Kalamazoo, MI, United States of America

56. Tanaka H, Nayebi A, Maheswaranathan N, McIntosh L, Baccus SA, Ganguli S. From deep learning to mechanistic understanding in neuroscience: the structure of retinal prediction. Adv Neural Inf Process Syst 2019; 32:8537-8547. [PMID: 35283616] [PMCID: PMC8916592]
Abstract
Recently, deep feedforward neural networks have achieved considerable success in modeling biological sensory processing, in terms of reproducing the input-output map of sensory neurons. However, such models raise profound questions about the very nature of explanation in neuroscience. Are we simply replacing one complex system (a biological circuit) with another (a deep network), without understanding either? Moreover, beyond neural representations, are the deep network's computational mechanisms for generating neural responses the same as those in the brain? Without a systematic approach to extracting and understanding computational mechanisms from deep neural network models, it can be difficult both to assess the degree of utility of deep learning approaches in neuroscience, and to extract experimentally testable hypotheses from deep networks. We develop such a systematic approach by combining dimensionality reduction and modern attribution methods for determining the relative importance of interneurons for specific visual computations. We apply this approach to deep network models of the retina, revealing a conceptual understanding of how the retina acts as a predictive feature extractor that signals deviations from expectations for diverse spatiotemporal stimuli. For each stimulus, our extracted computational mechanisms are consistent with prior scientific literature, and in one case yield a new mechanistic hypothesis. Thus, overall, this work not only yields insights into the computational mechanisms underlying the striking predictive capabilities of the retina, but also places the framework of deep networks as neuroscientific models on firmer theoretical foundations, by providing a new roadmap to go beyond comparing neural representations to extracting and understanding computational mechanisms.
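As an illustrative aside (not the authors' code): the core idea of attributing a model output to hidden interneurons can be sketched on a toy feedforward network, scoring each unit by its activation times its output weight (an activation-times-gradient score). All sizes and weights below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy feedforward "retina": 20-dim stimulus -> 8 ReLU interneurons -> 1 output.
W1 = 0.3 * rng.normal(size=(8, 20))   # stimulus to interneurons
W2 = 0.5 * rng.normal(size=(1, 8))    # interneurons to output cell

def forward(x):
    h = np.maximum(0.0, W1 @ x)       # interneuron activations (ReLU)
    return (W2 @ h).item(), h

# Attribution of each interneuron for one stimulus: activation times its
# output weight, since d(output)/dh_i = W2[0, i] for active units.
x = rng.normal(size=20)
y, h = forward(x)
scores = h * W2.ravel()
```

Because the read-out is linear in the interneuron activations, the scores decompose the output exactly (they sum to `y`), so ranking them identifies which units drive a given response; the paper's systematic approach then applies dimensionality reduction across many stimuli on top of such attributions.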
Affiliation(s)
- Hidenori Tanaka: Physics & Informatics Laboratories, NTT Research, Inc., East Palo Alto, CA, USA; Department of Applied Physics, Stanford University, Stanford, CA, USA
- Aran Nayebi: Neurosciences PhD Program, Stanford University, Stanford, CA, USA
- Niru Maheswaranathan: Neurosciences PhD Program, Stanford University, Stanford, CA, USA; Google Brain, Google, Inc., Mountain View, CA, USA
- Lane McIntosh: Neurosciences PhD Program, Stanford University, Stanford, CA, USA
- Surya Ganguli: Department of Applied Physics, Stanford University, Stanford, CA, USA; Google Brain, Google, Inc., Mountain View, CA, USA

57. Koren V, Andrei AR, Hu M, Dragoi V, Obermayer K. Reading-out task variables as a low-dimensional reconstruction of neural spike trains in single trials. PLoS One 2019; 14:e0222649. [PMID: 31622346] [PMCID: PMC6797168] [DOI: 10.1371/journal.pone.0222649]
Abstract
We propose a new model of the read-out of spike trains that exploits the multivariate structure of responses of neural ensembles. Adopting the point of view of a read-out neuron that receives synaptic inputs from a population of projecting neurons, synaptic inputs are weighted with a heterogeneous set of weights. We propose that synaptic weights reflect the role of each neuron within the population for the computational task that the network has to solve. In our case, the computational task is discrimination of binary classes of stimuli, and weights are chosen to maximize the discrimination capacity of the network. We compute synaptic weights as the feature weights of an optimal linear classifier. Once the weights have been learned, they weight the spike trains and allow the post-synaptic current that modulates the spiking probability of the read-out unit to be computed in real time. We apply the model to parallel spike trains from areas V1 and V4 of the behaving macaque monkey (Macaca mulatta) while the animal is engaged in a visual discrimination task with binary classes of stimuli. Reading out spike trains with our model allows the two classes of stimuli to be discriminated, whereas the population PSTH entirely fails to do so. Splitting neurons into two subpopulations according to the sign of the weight, we show that the population signals of the two functional subnetworks are negatively correlated. Disentangling the superficial, middle, and deep layers of the cortex, we show that in both V1 and V4, the superficial layers are the most important in discriminating binary classes of stimuli.
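As an illustrative aside (not the authors' implementation): the read-out scheme — learn the feature weights of a linear classifier on population responses, then reuse them as synaptic weights for a weighted population projection — can be sketched on synthetic data. For simplicity the sketch weights trial spike counts rather than kernel-filtered spike trains, and all sizes and rates are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: trial-by-neuron spike counts for two stimulus classes;
# the first half of the neurons fire more strongly for class 1.
n_trials, n_neurons = 200, 20
labels = rng.integers(0, 2, n_trials)
informative = np.r_[np.ones(10), np.zeros(10)]
rates = 5.0 + 2.0 * np.outer(labels, informative)
counts = rng.poisson(rates)

# Feature weights of an optimal linear classifier serve as synaptic weights.
clf = LogisticRegression(max_iter=1000).fit(counts, labels)
w = clf.coef_.ravel()

# Read-out: the weighted population sum stands in for the post-synaptic
# drive; its sign discriminates the two stimulus classes.
projection = counts @ w + clf.intercept_
accuracy = np.mean((projection > 0).astype(int) == labels)
```

The sign of each learned weight also splits the population into the two functional subpopulations the paper analyzes (positively and negatively contributing neurons).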
Affiliation(s)
- Veronika Koren: Neural Information Processing Group, Institute of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Germany
- Ariana R. Andrei: Department of Neurobiology and Anatomy, University of Texas Medical School, Houston, Texas, United States of America
- Ming Hu: Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Valentin Dragoi: Department of Neurobiology and Anatomy, University of Texas Medical School, Houston, Texas, United States of America
- Klaus Obermayer: Neural Information Processing Group, Institute of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Germany

58. Seo I, Lee H. Predicting transgenic markers of a neuron by electrophysiological properties using machine learning. Brain Res Bull 2019; 150:102-110. [PMID: 31125599] [DOI: 10.1016/j.brainresbull.2019.05.012]
Abstract
The task of classifying and identifying neurons, the essential components of the nervous system, has been undertaken in a variety of ways. The transcriptomic approach has become more accessible with the development of genetic engineering techniques. Considering the information-processing function of the brain, however, it is also necessary to consider the physiological characteristics of neurons. Recently, the Allen Institute for Brain Science published the electrophysiological characteristics of neurons that were tagged with a transgenic reporter. We used these electrophysiological features to predict the transgenic markers of neurons. Using linear regression, random forest, and an artificial neural network, we assessed the performance of supervised machine-learning models by comparing the prediction accuracy or the confusion matrix. In the binary problem of classifying excitatory and inhibitory neurons, the accuracy was 90% or more regardless of the model. The models performed better than merely distinguishing neurons by suprathreshold features such as the ratio of upstrokes and downstrokes of a single spike (ρ). However, when excitatory neurons were classified by transgenic marker, the accuracy was 28-47%, and the accuracy of classifying inhibitory neurons was 59-73%. The present study was based on the results of electrophysiological experiments to determine whether the transgenic markers of neurons can be predicted. Future research is needed to acquire electrophysiological and transcriptomic data simultaneously at the single-cell level to reveal the correlation between the gene expression and the physiological function of a neuron in building neural networks.
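As an illustrative aside (not the study's code or data): the excitatory-versus-inhibitory classification step can be sketched with a random forest on synthetic electrophysiological features evaluated via a confusion matrix; the feature names and distributions below are invented stand-ins:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic features: upstroke/downstroke ratio and spike width drawn from
# overlapping distributions for inhibitory (0) and excitatory (1) neurons.
n = 400
y = rng.integers(0, 2, n)
updown_ratio = rng.normal(1.5 + 0.8 * y, 0.3)
spike_width = rng.normal(0.4 + 0.5 * y, 0.1)
X = np.column_stack([updown_ratio, spike_width])

# Train on held-in cells, evaluate accuracy and confusion matrix on the rest.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = model.score(X_te, y_te)
cm = confusion_matrix(y_te, model.predict(X_te))
```

With well-separated feature distributions the binary task is easy, mirroring the paper's finding that excitatory/inhibitory classification exceeds 90% while finer marker-level classification is much harder.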
Affiliation(s)
- Incheol Seo: Department of Microbiology, Keimyung University School of Medicine, Daegu, Republic of Korea
- Hyunsu Lee: Department of Anatomy, Keimyung University School of Medicine, Daegu, Republic of Korea

59. Lucas A, Tomlinson T, Rohani N, Chowdhury R, Solla SA, Katsaggelos AK, Miller LE. Neural Networks for Modeling Neural Spiking in S1 Cortex. Front Syst Neurosci 2019; 13:13. [PMID: 30983978] [PMCID: PMC6449471] [DOI: 10.3389/fnsys.2019.00013]
Abstract
Somatosensation is composed of two distinct modalities: touch, arising from sensors in the skin, and proprioception, resulting primarily from sensors in the muscles, combined with these same cutaneous sensors. In contrast to the wealth of information about touch, we know far less about the nature of the signals giving rise to proprioception at the cortical level. Likewise, while there is considerable interest in developing encoding models of touch-related neurons for application to brain-machine interfaces, much less emphasis has been placed on an analogous proprioceptive interface. Here we investigate the use of artificial neural networks (ANNs) to model the relationship between the firing rates of single neurons in area 2, a largely proprioceptive region of somatosensory cortex (S1), and several types of kinematic variables related to arm movement. To better understand how these kinematic variables interact to create the proprioceptive responses recorded in our datasets, we train ANNs under different conditions, each involving a different set of input and output variables. We explore the kinematic variables that provide the best network performance and find that adding information about joint angles and/or muscle lengths significantly improves the prediction of neural firing rates. Our results thus provide new insight into the complex representations of limb motion in S1: the firing rates of neurons in area 2 may be more closely related to the activity of peripheral sensors than to extrinsic hand position. In addition, we conduct numerical experiments to determine the sensitivity of ANN models to various choices of training design and hyperparameters. Our results provide a baseline and new tools for future research that uses machine learning to better describe and understand the activity of neurons in S1.
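As an illustrative aside (not the authors' models or data): the comparison of ANN input sets can be sketched on synthetic data in which a "muscle length" signal carries information about firing rate that is not recoverable from the hand variables alone, so including it improves held-out prediction. All signals and parameter values below are invented:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Synthetic kinematics: 2-D hand position and velocity, plus a "muscle
# length" signal with an independent component that also drives the rate.
n = 1200
pos = rng.normal(size=(n, 2))
vel = rng.normal(size=(n, 2))
muscle_len = np.tanh(pos @ np.array([0.5, -0.3])) + 0.7 * rng.normal(size=n)

# Firing rate (Hz): rectified combination of velocity and the muscle signal.
rate = np.maximum(0.0, 5.0 + 3.0 * muscle_len + vel @ np.array([1.0, -0.5]))

X_hand = np.hstack([pos, vel])                       # hand variables only
X_full = np.hstack([pos, vel, muscle_len[:, None]])  # plus muscle length

def held_out_r2(X, y):
    """Fit a small ANN on the first half, score R^2 on the second half."""
    half = len(y) // 2
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
    net.fit(X[:half], y[:half])
    return net.score(X[half:], y[half:])

r2_hand = held_out_r2(X_hand, rate)
r2_full = held_out_r2(X_full, rate)
```

By construction the hand-only model cannot explain the variance carried by the independent component of `muscle_len`, so `r2_full` exceeds `r2_hand`, analogous to the paper's finding that joint-angle/muscle-length inputs improve firing-rate prediction.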
Affiliation(s)
- Alice Lucas: Department of Electrical Engineering and Computer Science, Northwestern University, Evanston, IL, United States
- Tucker Tomlinson: Department of Physiology, Northwestern University, Chicago, IL, United States
- Neda Rohani: Department of Electrical Engineering and Computer Science, Northwestern University, Evanston, IL, United States
- Raeed Chowdhury: Department of Physiology, Northwestern University, Chicago, IL, United States; Department of Biomedical Engineering, Northwestern University, Evanston, IL, United States
- Sara A. Solla: Department of Physiology, Northwestern University, Chicago, IL, United States; Department of Physics and Astronomy, Northwestern University, Evanston, IL, United States
- Aggelos K. Katsaggelos: Department of Electrical Engineering and Computer Science, Northwestern University, Evanston, IL, United States
- Lee E. Miller: Department of Physiology, Northwestern University, Chicago, IL, United States; Department of Biomedical Engineering, Northwestern University, Evanston, IL, United States; Department of Physical Medicine and Rehabilitation, Northwestern University and Rehabilitation Institute of Chicago, Chicago, IL, United States