1
Tu Y, Li X, Lu ZL, Wang Y. Adaptive smoothing of retinotopic maps based on Teichmüller parametrization. Med Image Anal 2024; 93:103074. [PMID: 38160658] [DOI: 10.1016/j.media.2023.103074]
Abstract
Retinotopic mapping, the mapping between visual inputs on the retina and neural responses on the cortical surface, is one of the fundamental topics in visual neuroscience. In human studies, retinotopic maps are conventionally constructed and processed by decoding blood oxygenation-level dependent (BOLD) functional magnetic resonance imaging (fMRI) responses to designed visual stimuli on the cortical surface. However, these methods frequently generate retinotopic maps that do not preserve topology, contradicting a fundamental property of retinotopic maps observed in neurophysiology. To address this problem, we propose an integrated approach to simultaneously refine the flattening from the 3D cortical surface to the 2D parametric space and adaptively smooth retinotopic perception centers in the visual space to make the retinotopic maps topological. One key element of the approach is the enhanced error tolerant Teichmüller mapping, which refines the parametrization by minimizing angle distortions and maximizing alignment to noisy landmarks. We validated our overall approach with synthetic and real retinotopic mapping datasets and applied it to compute cortical magnification factor (CMF). The results showed that the proposed approach was superior to other conventional retinotopic mapping methods in predicting BOLD fMRI time series and preserving topology.
Affiliation(s)
- Yanshuai Tu
- School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ, USA
- Xin Li
- School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ, USA
- Zhong-Lin Lu
- Division of Arts and Sciences, New York University Shanghai, Shanghai, China; Center for Neural Science and Department of Psychology, New York University, New York, NY, USA; NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai, China.
- Yalin Wang
- School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ, USA.
2
Wu Z, Zhang H, Lin Y, Li G, Wang M, Tang Y. LIAF-Net: Leaky Integrate and Analog Fire Network for Lightweight and Efficient Spatiotemporal Information Processing. IEEE Transactions on Neural Networks and Learning Systems 2022; 33:6249-6262. [PMID: 33979292] [DOI: 10.1109/tnnls.2021.3073016]
Abstract
Spiking neural networks (SNNs) based on the leaky integrate-and-fire (LIF) model have been applied to energy-efficient temporal and spatiotemporal processing tasks. Thanks to their bio-plausible neuronal dynamics and simplicity, LIF-SNNs benefit from event-driven processing, but they usually suffer from reduced performance, possibly because LIF neurons transmit information only via binary spikes. To address this issue, this work proposes a leaky integrate-and-analog-fire (LIAF) neuron model, which transmits analog values among neurons, and builds a deep network termed LIAF-Net on it for efficient spatiotemporal processing. In the temporal domain, LIAF follows the traditional LIF dynamics to maintain temporal processing capability. In the spatial domain, LIAF integrates spatial information through convolutional or fully connected integration. As a spatiotemporal layer, LIAF can also be used jointly with traditional artificial neural network (ANN) layers. In addition, the network can be trained directly with backpropagation through time (BPTT), which avoids the performance loss caused by ANN-to-SNN conversion. Experimental results indicate that LIAF-Net achieves performance comparable to the gated recurrent unit (GRU) and long short-term memory (LSTM) on bAbI question answering (QA) tasks, and state-of-the-art performance on spatiotemporal dynamic vision sensor (DVS) datasets, including MNIST-DVS, CIFAR10-DVS, and DVS128 Gesture, with far fewer synaptic weights and much lower computational overhead than networks built from LSTM, GRU, convolutional LSTM (ConvLSTM), or 3-D convolution (Conv3D) layers. Compared with a traditional LIF-SNN, LIAF-Net also shows a dramatic accuracy gain in all these experiments. In conclusion, LIAF-Net provides a framework that combines the advantages of ANNs and SNNs for lightweight and efficient spatiotemporal information processing.
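The LIF-to-LIAF change described above can be illustrated as a single-neuron update: both variants share the same leaky membrane dynamics, but the LIF neuron emits a binary spike while the LIAF neuron passes an analog value downstream. This is a minimal sketch with assumed parameters (decay from a hypothetical time constant, hard reset, ReLU-shaped analog output), not the authors' implementation:

```python
def simulate(inputs, mode="LIF", tau=2.0, v_th=1.0):
    """Leaky integration of a current sequence. 'LIF' emits binary spikes;
    'LIAF' emits an analog value (ReLU of the membrane potential) while
    keeping the same spike-triggered reset (illustrative choice)."""
    v = 0.0
    outputs = []
    for x in inputs:
        v = v * (1.0 - 1.0 / tau) + x  # leaky integration step
        spike = v >= v_th              # threshold comparison
        out = float(spike) if mode == "LIF" else max(v, 0.0)
        if spike:
            v = 0.0                    # hard reset after a spike
        outputs.append(out)
    return outputs

lif = simulate([0.6, 0.6, 0.6], mode="LIF")    # binary spike train
liaf = simulate([0.6, 0.6, 0.6], mode="LIAF")  # analog activations
```

With the same inputs, the LIF neuron is silent until the membrane crosses threshold, whereas the LIAF neuron passes a graded value at every step, which is what lets LIAF-Net carry richer information between layers.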
3
Ma C, Chen Q, Mitchell DC, Na M, Tucker KL, Gao X. Application of the deep learning algorithm in nutrition research - using serum pyridoxal 5'-phosphate as an example. Nutr J 2022; 21:38. [PMID: 35689265] [PMCID: PMC9185886] [DOI: 10.1186/s12937-022-00793-x]
Abstract
Background: Multivariable linear regression (MLR) models were previously used to predict serum pyridoxal 5′-phosphate (PLP) concentration, the active coenzyme form of vitamin B6, but with low predictability. We developed a deep learning algorithm (DLA) to predict serum PLP based on dietary intake, dietary supplements, and other potential predictors.
Methods: This cross-sectional analysis included 3778 participants aged ≥20 years in the National Health and Nutrition Examination Survey (NHANES) 2007-2010 with complete information on the studied variables. Dietary intake and supplement use were assessed with two 24-hour dietary recalls. Potential predictors of serum PLP concentration included dietary intake and supplement use, sociodemographic variables (age, sex, race-ethnicity, income, and education), lifestyle variables (smoking status and physical activity level), body mass index, medication use, blood pressure, blood lipids, glucose, and C-reactive protein. We used a four-hidden-layer deep neural network to predict PLP concentration, with 3401 (90%) participants for training and 377 (10%) for testing, selected by random sampling. We obtained outputs by forward-propagating the features of the training set, constructed a loss function based on the distances between outputs and labels, and optimized it to fit the training set. We also developed a prediction model using MLR.
Results: After training for 10^5 steps with the Adam optimization method, the highest R2 in the test dataset was 0.47 for the DLA and 0.18 for the MLR model. Similar results were observed in sensitivity analyses that excluded supplement users or included only variables identified by stepwise regression models.
Conclusions: The DLA achieved superior performance in predicting serum PLP concentration, relative to the traditional MLR model, using a nationally representative sample. As a preliminary data analysis, the current study sheds light on the use of DLAs to understand a modifiable lifestyle factor.
Supplementary Information: The online version contains supplementary material available at 10.1186/s12937-022-00793-x.
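The evaluation protocol in the study above, R2 on a held-out 10% test split, can be sketched generically as follows. This is an illustration of the split and metric only, not the study's model or data; the random data and seed are stand-ins:

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
n = 3778                                     # study sample size
idx = rng.permutation(n)                     # random sampling of participants
train_idx, test_idx = idx[:3401], idx[3401:] # 90% train / 10% test, as reported

y = rng.normal(size=n)                       # placeholder outcome values
r2_perfect = r_squared(y, y)                 # perfect predictor -> 1.0
r2_mean = r_squared(y, np.full(n, y.mean())) # constant predictor -> 0.0
```

A model beating the mean baseline lands between these extremes, which is the scale on which the reported 0.47 (DLA) versus 0.18 (MLR) should be read.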
Affiliation(s)
- Chaoran Ma
- Channing Division of Network Medicine, Brigham and Women's Hospital and Harvard Medical School, Boston, MA, USA
- Qipin Chen
- Department of Mathematics, The Pennsylvania State University, University Park, State College, PA, USA
- Diane C Mitchell
- Department of Nutritional Sciences, The Pennsylvania State University, University Park, State College, PA, USA
- Muzi Na
- Department of Nutritional Sciences, The Pennsylvania State University, University Park, State College, PA, USA
- Katherine L Tucker
- Department of Biomedical & Nutritional Sciences, The University of Massachusetts at Lowell, Lowell, MA, USA
- Xiang Gao
- Department of Nutrition and Food Hygiene, School of Public Health, Fudan University, 130 Dongan Rd, Shanghai, China.
4
Spiking Neural Networks for Computational Intelligence: An Overview. Big Data and Cognitive Computing 2021. [DOI: 10.3390/bdcc5040067]
Abstract
Deep neural networks with rate-based neurons have made tremendous progress in the last decade. The same level of progress has not been observed in research on spiking neural networks (SNNs), despite their ability to handle temporal data, their energy efficiency, and their low latency. This could be because benchmarking techniques for SNNs are based on methods used for evaluating deep neural networks, which do not give a clear picture of SNNs' capabilities. In particular, benchmarking SNN approaches with regard to energy efficiency and latency requires realization in suitable hardware, which imposes additional temporal and resource constraints on ongoing projects. This review provides an overview of current real-world applications of SNNs and identifies steps to accelerate future SNN research.
5
Auge D, Hille J, Mueller E, Knoll A. A Survey of Encoding Techniques for Signal Processing in Spiking Neural Networks. Neural Process Lett 2021. [DOI: 10.1007/s11063-021-10562-2]
Abstract
Biologically inspired spiking neural networks are increasingly popular in the field of artificial intelligence due to their ability to solve complex problems while being power efficient. They do so by leveraging the timing of discrete spikes as the main information carrier. However, industrial applications are still lacking, partially because the question of how to encode incoming data into discrete spike events cannot be uniformly answered. In this paper, we summarise the signal encoding schemes presented in the literature and propose a uniform nomenclature to prevent the vague use of ambiguous definitions. To that end, we survey both the theoretical foundations and the applications of the encoding schemes. This work provides a foundation in spiking signal encoding and gives an overview of different application-oriented implementations that utilise these schemes.
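Two of the most common schemes such a survey covers, rate coding and latency (time-to-first-spike) coding, can be illustrated in a few lines. This is a generic textbook sketch, not the survey's nomenclature; the step count and mapping from intensity to spike time are assumed for illustration:

```python
import numpy as np

def rate_encode(x, n_steps, rng):
    """Rate code: each time step emits a spike with probability x
    (x assumed normalised to [0, 1]); intensity maps to spike count."""
    return (rng.random(n_steps) < x).astype(int)

def latency_encode(x, n_steps):
    """Latency code: a single spike whose timing is earlier for stronger
    inputs (time-to-first-spike), under a linear intensity-to-time map."""
    train = np.zeros(n_steps, dtype=int)
    t = int(round((1.0 - x) * (n_steps - 1)))  # strong input -> early spike
    train[t] = 1
    return train

rng = np.random.default_rng(42)
rate = rate_encode(0.8, 10, rng)  # many spikes for a strong input
lat = latency_encode(0.8, 10)     # one early spike for a strong input
```

The trade-off the two schemes embody is visible even here: the rate code needs many time steps to convey one value reliably, while the latency code conveys it with a single spike at the cost of requiring precise timing.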
6
7
Wang T, Shi C, Zhou X, Lin Y, He J, Gan P, Li P, Wang Y, Liu L, Wu N, Luo G. CompSNN: A lightweight spiking neural network based on spatiotemporally compressive spike features. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.10.100]
8
Kirkland P, Di Caterina G, Soraghan J, Matich G. Perception Understanding Action: Adding Understanding to the Perception Action Cycle With Spiking Segmentation. Front Neurorobot 2020; 14:568319. [PMID: 33192434] [PMCID: PMC7604290] [DOI: 10.3389/fnbot.2020.568319]
Abstract
Traditionally, the Perception Action cycle is the first stage of building an autonomous robotic system and a practical way to implement a low-latency reactive system within a low Size, Weight and Power (SWaP) package. However, within complex scenarios this method can lack contextual understanding of the scene, such as object recognition-based tracking or system attention. Object detection, identification, and tracking, along with semantic segmentation and attention, are all modern computer vision tasks in which Convolutional Neural Networks (CNNs) have shown significant success, although such networks often carry large computational overhead and power requirements, which are not ideal for smaller robotics tasks. Furthermore, cloud computing and massively parallel processing, as in Graphics Processing Units (GPUs), fall outside the specification of many tasks due to their latency and SWaP constraints. In response, Spiking Convolutional Neural Networks (SCNNs) aim to provide the feature extraction benefits of CNNs while maintaining low latency and power overhead thanks to their asynchronous, spiking, event-based processing. A novel Neuromorphic Perception Understanding Action (PUA) system is presented that combines the feature extraction benefits of CNNs with the low-latency processing of SCNNs. The PUA uses a Neuromorphic Vision Sensor for Perception, which feeds asynchronous processing within a Spiking fully Convolutional Neural Network (SpikeCNN) to provide semantic segmentation and Understanding of the scene. The output is fed to a spiking control system providing Actions. The aim of this approach is to bring features of deep learning into the lower levels of autonomous robotics, while maintaining a biologically plausible STDP rule throughout the learned encoding part of the network. The network is shown to provide more robust and predictable management of spiking activity with an improved thresholding response. The reported experiments show that the system delivers robust results of over 96% accuracy and 81% Intersection over Union, so it can be used successfully for object recognition, classification, and tracking problems. This demonstrates that the attention of the system can be tracked accurately, while the asynchronous processing means the controller can give precise track updates with minimal latency.
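The STDP rule kept throughout the learned encoding stage above is, in its standard pair-based form, a weight update driven by the relative timing of pre- and postsynaptic spikes. The constants below are assumed textbook values, not the paper's parameters:

```python
import math

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP weight update. dt = t_post - t_pre (ms).
    Pre-before-post (dt > 0) potentiates (LTP); post-before-pre
    (dt < 0) depresses (LTD); both decay exponentially with |dt|."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)   # long-term potentiation
    if dt < 0:
        return -a_minus * math.exp(dt / tau)  # long-term depression
    return 0.0

dw_ltp = stdp_dw(10.0)   # causal pairing -> positive update
dw_ltd = stdp_dw(-10.0)  # anti-causal pairing -> negative update
```

Because the update depends only on locally available spike times, it suits the asynchronous, event-driven processing the PUA system is built around.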
Affiliation(s)
- Paul Kirkland
- Neuromorphic Sensor Signal Processing Lab, Centre for Image and Signal Processing, Electrical and Electronic Engineering, University of Strathclyde, Glasgow, United Kingdom
- Gaetano Di Caterina
- Neuromorphic Sensor Signal Processing Lab, Centre for Image and Signal Processing, Electrical and Electronic Engineering, University of Strathclyde, Glasgow, United Kingdom
- John Soraghan
- Neuromorphic Sensor Signal Processing Lab, Centre for Image and Signal Processing, Electrical and Electronic Engineering, University of Strathclyde, Glasgow, United Kingdom
9
Tu Y, Ta D, Lu ZL, Wang Y. Optimizing Visual Cortex Parameterization with Error-Tolerant Teichmüller Map in Retinotopic Mapping. Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2020; 12267:218-227. [PMID: 34291236] [PMCID: PMC8291100] [DOI: 10.1007/978-3-030-59728-3_22]
Abstract
The mapping from visual input on the retina to the cortical surface, i.e., retinotopic mapping, is an important topic in vision science and neuroscience. Human retinotopic mapping can be revealed by analyzing cortical functional magnetic resonance imaging (fMRI) signals while the subject views specific visual stimuli. Conventional methods process, smooth, and analyze the retinotopic mapping based on a parametrization of the (partial) cortical surface. However, the retinotopic maps generated by this approach frequently contradict neurophysiological results. To address this problem, we propose an integrated approach that parameterizes the cortical surface such that the parametric coordinates relate linearly to the visual coordinates. The proposed method aids the smoothing of noisy retinotopic maps and yields neurophysiological insights into the human visual system. One key element of the approach is the Error-Tolerant Teichmüller Map, which makes the angle distortion uniform and maximizes alignment to self-contradicting landmarks. We validated the overall approach with synthetic and real retinotopic mapping datasets. The experimental results show that the proposed approach is superior in accuracy and compatibility. Although we focus on retinotopic mapping, the proposed framework is general and can be applied to other human sensory maps.
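For reference, the object behind the parametrization in this and the related entry above can be stated compactly: a quasiconformal map satisfies the Beltrami equation, and a Teichmüller map is the special case in which the magnitude of the distortion is constant over the domain. These are the standard definitions, not the paper's specific optimization energy:

```latex
% Beltrami equation: \mu encodes the local angle distortion of f
\frac{\partial f}{\partial \bar{z}} = \mu(z)\,\frac{\partial f}{\partial z},
\qquad \|\mu\|_\infty < 1.
% Teichmüller map: uniform distortion magnitude
|\mu(z)| = k \quad \text{for almost every } z, \ \text{with constant } k \in [0, 1).
```

Making |μ| uniform is what the abstract means by "uniforms the angle distortion": the map spreads the unavoidable angular distortion evenly instead of concentrating it.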
Affiliation(s)
- Yanshuai Tu
- Arizona State University, Tempe AZ 85201, USA
- Duyan Ta
- Arizona State University, Tempe AZ 85201, USA
- Zhong-Lin Lu
- New York University, New York, NY
- NYU Shanghai, Shanghai, China
- Yalin Wang
- Arizona State University, Tempe AZ 85201, USA
10
Tan C, Šarlija M, Kasabov N. Spiking Neural Networks: Background, Recent Development and the NeuCube Architecture. Neural Process Lett 2020. [DOI: 10.1007/s11063-020-10322-8]
11
Kumarasinghe K, Kasabov N, Taylor D. Deep learning and deep knowledge representation in Spiking Neural Networks for Brain-Computer Interfaces. Neural Netw 2019; 121:169-185. [PMID: 31568895] [DOI: 10.1016/j.neunet.2019.08.029]
Abstract
OBJECTIVE: This paper argues that Brain-Inspired Spiking Neural Network (BI-SNN) architectures can learn and reveal deep-in-time-space functional and structural patterns from spatio-temporal data. These patterns can be represented as deep knowledge, in particular in the form of deep spatio-temporal rules. This is a promising direction for building a new type of Brain-Computer Interface, the Brain-Inspired Brain-Computer Interface (BI-BCI). A theoretical framework and its experimental validation on deep knowledge extraction and representation using SNNs are presented. RESULTS: The proposed methodology was applied in a case study to extract deep knowledge of the functional and structural organisation of the brain's neural network during the execution of a grasp-and-lift task. The BI-BCI successfully extracted the neural trajectories that represent the dorsal and ventral visual information processing streams, as well as their connection to the motor cortex. Deep spatio-temporal rules on the functional and structural interaction of distinct brain areas were then used for event prediction in the BI-BCI. SIGNIFICANCE: The computational framework can be used to unveil topological patterns of the brain, and such knowledge can be effectively used to advance the state of the art in BCI.
Affiliation(s)
- Kaushalya Kumarasinghe
- Knowledge Engineering and Discovery Research Institute, Auckland University of Technology, Auckland, New Zealand; Health and Rehabilitation Research Institute, Auckland University of Technology, Auckland, New Zealand.
- Nikola Kasabov
- Knowledge Engineering and Discovery Research Institute, Auckland University of Technology, Auckland, New Zealand.
- Denise Taylor
- Health and Rehabilitation Research Institute, Auckland University of Technology, Auckland, New Zealand.
12
Tavanaei A, Ghodrati M, Kheradpisheh SR, Masquelier T, Maida A. Deep learning in spiking neural networks. Neural Netw 2018; 111:47-63. [PMID: 30682710] [DOI: 10.1016/j.neunet.2018.12.002]
Abstract
In recent years, deep learning has revolutionized the field of machine learning, for computer vision in particular. In this approach, a deep (multilayer) artificial neural network (ANN) is trained, most often in a supervised manner using backpropagation. Vast amounts of labeled training examples are required, but the resulting classification accuracy is truly impressive, sometimes outperforming humans. Neurons in an ANN are characterized by a single, static, continuous-valued activation. Yet biological neurons use discrete spikes to compute and transmit information, and the spike times, in addition to the spike rates, matter. Spiking neural networks (SNNs) are thus more biologically realistic than ANNs, and are arguably the only viable option if one wants to understand how the brain computes at the neuronal description level. The spikes of biological neurons are sparse in time and space, and event-driven. Combined with bio-plausible local learning rules, this makes it easier to build low-power, neuromorphic hardware for SNNs. However, training deep SNNs remains a challenge. Spiking neurons' transfer function is usually non-differentiable, which prevents using backpropagation. Here we review recent supervised and unsupervised methods to train deep SNNs, and compare them in terms of accuracy and computational cost. The emerging picture is that SNNs still lag behind ANNs in terms of accuracy, but the gap is decreasing, and can even vanish on some tasks, while SNNs typically require many fewer operations and are the better candidates to process spatio-temporal data.
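One widely used workaround for the non-differentiable transfer function mentioned above is the surrogate gradient: keep the hard threshold in the forward pass, but substitute a smooth derivative in the backward pass so backpropagation can proceed. A minimal numpy sketch, with an assumed sigmoid-shaped surrogate and assumed sharpness parameter:

```python
import numpy as np

def spike_forward(v, v_th=1.0):
    """Forward pass: non-differentiable Heaviside step at the threshold."""
    return (v >= v_th).astype(float)

def spike_backward(v, v_th=1.0, beta=5.0):
    """Backward pass: derivative of a sigmoid centred on the threshold,
    used as a smooth stand-in for the step's zero-or-undefined gradient."""
    s = 1.0 / (1.0 + np.exp(-beta * (v - v_th)))
    return beta * s * (1.0 - s)

v = np.array([0.2, 0.99, 1.0, 1.7])  # membrane potentials
out = spike_forward(v)               # binary spikes used in the forward pass
grad = spike_backward(v)             # surrogate gradient, peaks at threshold
```

The surrogate concentrates gradient signal on neurons near threshold, which is exactly where a small weight change can flip a spike, while spikes far from threshold receive almost none.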
Affiliation(s)
- Amirhossein Tavanaei
- School of Computing and Informatics, University of Louisiana at Lafayette, Lafayette, LA 70504, USA.
- Masoud Ghodrati
- Department of Physiology, Monash University, Clayton, VIC, Australia
- Saeed Reza Kheradpisheh
- Department of Computer Science, Faculty of Mathematical Sciences and Computer, Kharazmi University, Tehran, Iran
- Anthony Maida
- School of Computing and Informatics, University of Louisiana at Lafayette, Lafayette, LA 70504, USA