1
Roshan SS, Sadeghnejad N, Sharifizadeh F, Ebrahimpour R. A neurocomputational model of decision and confidence in object recognition task. Neural Netw 2024; 175:106318. PMID: 38643618. DOI: 10.1016/j.neunet.2024.106318.
Abstract
How does the brain process natural visual stimuli to make a decision? Imagine driving through fog. An object looms ahead. What do you do? This decision requires not only identifying the object but also choosing an action based on your decision confidence; in this circumstance, confidence bridges seeing and believing. Our study unveils how the brain processes visual information to make such decisions with an assessment of confidence, using a model inspired by the visual cortex. To model the process computationally, this study uses a spiking neural network inspired by the hierarchy of the mammalian visual cortex to investigate the dynamics of feedforward object recognition and decision-making in the brain. The model consists of two modules: a temporal dynamic object representation module and an attractor neural network-based decision-making module. Unlike traditional models, ours captures the evolution of evidence within the visual cortex, mimicking how confidence forms in the brain and offering a more biologically plausible account of decision-making about real-world stimuli. We conducted experiments with natural stimuli and measured accuracy, reaction time, and confidence. The model's estimated confidence aligns remarkably well with human-reported confidence. Furthermore, the model can simulate the human change-of-mind phenomenon, reflecting the ongoing evaluation of evidence in the brain. Finally, this finding suggests that decision-making and confidence encoding share the same neural circuit.
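The abstract does not specify the attractor module's equations. As a generic illustration of how a single accumulation-to-threshold process can yield choice, reaction time, and a balance-of-evidence confidence signal at once, here is a minimal race-model sketch; the drift values, noise level, and threshold are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def race_decision(drift_a, drift_b, threshold=1.0, noise=0.01, seed=0):
    """Minimal race model: two accumulators integrate noisy evidence;
    the first to cross threshold fixes the choice and reaction time,
    and the gap between accumulators serves as a confidence proxy."""
    rng = np.random.default_rng(seed)
    acc = np.zeros(2)
    for t in range(1, 10_000):
        acc[0] += drift_a + noise * rng.standard_normal()
        acc[1] += drift_b + noise * rng.standard_normal()
        if acc.max() >= threshold:
            choice = int(acc.argmax())
            confidence = float(acc.max() - acc.min())  # balance of evidence
            return choice, t, confidence
    return -1, t, 0.0  # no decision within the deadline

choice, rt, conf = race_decision(0.02, 0.005)
```

Shrinking the drift difference slows decisions and lowers the confidence proxy, which is the qualitative pattern such models compare against human reports.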
Affiliation(s)
- Setareh Sadat Roshan
- Department of Computer Engineering, Shahid Rajaee Teacher Training University, Tehran, Iran; School of Cognitive Sciences, Institute for Research in Fundamental Sciences (IPM), Tehran 1956836484, Iran
- Naser Sadeghnejad
- School of Cognitive Sciences, Institute for Research in Fundamental Sciences (IPM), Tehran 1956836484, Iran
- Fatemeh Sharifizadeh
- School of Cognitive Sciences, Institute for Research in Fundamental Sciences (IPM), Tehran 1956836484, Iran
- Reza Ebrahimpour
- Center for Cognitive Science, Institute for Convergence Science & Technology, Sharif University of Technology, Tehran 14588-89694, Iran.
2
Feng L, Zhao D, Zeng Y. Spiking generative adversarial network with attention scoring decoding. Neural Netw 2024; 178:106423. PMID: 38906053. DOI: 10.1016/j.neunet.2024.106423.
Abstract
Generative models based on neural networks present a substantial challenge within deep learning. At present, such models are primarily limited to artificial neural networks. Spiking neural networks, as the third generation of neural networks, offer a closer approximation to brain-like processing thanks to their rich spatiotemporal dynamics. However, generative models based on spiking neural networks are not well studied; in particular, previous spiking generative adversarial networks were evaluated only on simple datasets and did not perform well. In this work, we pioneer a spiking generative adversarial network capable of handling complex images with higher performance. We first identify the problems of out-of-domain inconsistency and temporal inconsistency inherent in spiking generative adversarial networks. We address these issues by incorporating the Earth-Mover distance and an attention-based weighted decoding method, significantly enhancing the performance of our algorithm across several datasets. Experimental results reveal that our approach outperforms existing methods on the MNIST, FashionMNIST, CIFAR10, and CelebA datasets. Beyond static datasets, this study is our first investigation of event-based data, on which we achieve noteworthy results. Moreover, compared with hybrid spiking generative adversarial networks, where the discriminator is an analog artificial neural network, our method aligns more closely with the information-processing patterns observed in the mouse brain. Our code can be found at https://github.com/Brain-Cog-Lab/sgad.
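The attention scoring decoding is described only at a high level in the abstract. One plausible reading, sketched here with illustrative shapes and fixed rather than learned scores (the paper's actual scoring network may differ), is a softmax-weighted temporal readout that replaces the uniform spike-rate average:

```python
import numpy as np

def attention_decode(spike_train, scores):
    """spike_train: (T, N) binary spikes from an output layer.
    scores: (T,) per-timestep attention logits.  Instead of the uniform
    mean over timesteps, output a softmax-weighted sum over time so that
    informative timesteps dominate the decoded result."""
    w = np.exp(scores - scores.max())
    w /= w.sum()                               # softmax over time
    return (w[:, None] * spike_train).sum(axis=0)

spikes = np.array([[1, 0, 1],
                   [0, 0, 1],
                   [1, 1, 1],
                   [0, 0, 0]], dtype=float)    # T=4 timesteps, N=3 units
uniform = spikes.mean(axis=0)                  # plain rate decoding
weighted = attention_decode(spikes, np.zeros(4))  # equal scores = plain mean
```

With equal scores the decode reduces to the ordinary rate average; strongly peaked scores let one timestep's spike pattern dominate the output.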
Affiliation(s)
- Linghao Feng
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Future Technology, University of Chinese Academy of Sciences, China.
- Dongcheng Zhao
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing, China; Center for Long-term Artificial Intelligence, China.
- Yi Zeng
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing, China; Center for Long-term Artificial Intelligence, China; Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, CAS, China; School of Future Technology, University of Chinese Academy of Sciences, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, China.
3
Zhang Z, Xiao M, Ji T, Jiang Y, Lin T, Zhou X, Lin Z. Efficient and generalizable cross-patient epileptic seizure detection through a spiking neural network. Front Neurosci 2024; 17:1303564. PMID: 38268711. PMCID: PMC10805904. DOI: 10.3389/fnins.2023.1303564.
Abstract
Introduction: Epilepsy is a chronic disease that brings pain and inconvenience to patients worldwide, and the electroencephalogram (EEG) is the main analytical tool. For clinical aid applicable to any patient, an automatic cross-patient seizure detection algorithm is of great significance. Spiking neural networks (SNNs) are modeled on biological neurons and are energy-efficient on neuromorphic hardware, so they can be expected to handle brain signals well and benefit real-world, low-power applications. However, SNNs have rarely been considered for automatic seizure detection.
Methods: In this article, we explore SNNs for cross-patient seizure detection and find that they can match or even exceed the state-of-the-art performance of artificial neural networks (ANNs). We propose an EEG-based spiking neural network (EESNN) with a recurrent spiking convolution structure, which may better exploit the temporal and biological characteristics of EEG signals.
Results: We extensively evaluate different SNN structures, training methods, and time settings, building a solid basis for understanding and evaluating SNNs in seizure detection. Moreover, according to theoretical estimation, our EESNN model can reduce energy consumption by several orders of magnitude compared with ANNs.
Discussion: These results show the potential of high-performance, low-power neuromorphic systems for seizure detection and broaden the real-world application scenarios of SNNs.
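The EESNN architecture itself is not given in the abstract. As background for why spiking networks promise the reported energy savings, here is a minimal leaky integrate-and-fire (LIF) layer unrolled over time; the shapes, tau, and soft-reset rule are common illustrative choices, not necessarily the paper's:

```python
import numpy as np

def lif_forward(x_seq, w, v_th=1.0, tau=2.0):
    """Leaky integrate-and-fire layer unrolled over T timesteps.
    x_seq: (T, n_in) inputs; w: (n_in, n_out) weights.
    The membrane potential leaks by a factor (1 - 1/tau), emits a
    binary spike on crossing v_th, and is reset by subtraction."""
    T, n_out = x_seq.shape[0], w.shape[1]
    v = np.zeros(n_out)
    spikes = np.zeros((T, n_out))
    for t in range(T):
        v = v * (1 - 1 / tau) + x_seq[t] @ w
        spikes[t] = (v >= v_th).astype(float)
        v -= spikes[t] * v_th        # soft reset keeps the residual charge
    return spikes

rng = np.random.default_rng(0)
x = rng.random((8, 16))              # 8 timesteps, 16 input channels
w = rng.random((16, 4)) * 0.2
s = lif_forward(x, w)
sparsity = 1.0 - s.mean()            # fraction of silent neuron-timesteps
```

Because activations are sparse binary events, downstream synaptic work scales with the spike count (cheap accumulates) rather than with dense multiply-accumulates, which is the basis of theoretical energy estimates of this kind.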
Affiliation(s)
- Zongpeng Zhang
- Department of Biostatistics, School of Public Health, Peking University, Beijing, China
- Mingqing Xiao
- National Key Lab of General AI, School of Intelligence Science and Technology, Peking University, Beijing, China
- Taoyun Ji
- Department of Pediatrics, Peking University First Hospital, Beijing, China
- Yuwu Jiang
- Department of Pediatrics, Peking University First Hospital, Beijing, China
- Tong Lin
- National Key Lab of General AI, School of Intelligence Science and Technology, Peking University, Beijing, China
- Xiaohua Zhou
- Department of Biostatistics, School of Public Health, Peking University, Beijing, China
- Beijing International Center for Mathematical Research, Peking University, Beijing, China
- Peking University Chongqing Institute for Big Data, Chongqing, China
- Zhouchen Lin
- National Key Lab of General AI, School of Intelligence Science and Technology, Peking University, Beijing, China
- Institute for Artificial Intelligence, Peking University, Beijing, China
4
Castagnetti A, Pegatoquet A, Miramond B. SPIDEN: deep Spiking Neural Networks for efficient image denoising. Front Neurosci 2023; 17:1224457. PMID: 37638316. PMCID: PMC10450950. DOI: 10.3389/fnins.2023.1224457.
Abstract
In recent years, Deep Convolutional Neural Networks (DCNNs) have surpassed classical algorithms on image restoration tasks. However, most of these methods are not computationally efficient. In this work, we investigate Spiking Neural Networks (SNNs) for the specific and largely unexplored case of image denoising, with the goal of matching the performance of conventional DCNNs while reducing the computational cost. This task is challenging for two reasons. First, since denoising is a regression task, the network must predict a continuous value (i.e., the noise amplitude) for each pixel of the image with high precision. Second, state-of-the-art results have been obtained with deep networks that are notably difficult to train in the spiking domain. To overcome these issues, we present a formal analysis of the information conversion carried out by Integrate-and-Fire (IF) spiking neurons and formalize the trade-off between conversion error and activation sparsity in SNNs. We then propose, for the first time, an image denoising solution based on SNNs, trained directly in the spike domain using surrogate gradient learning and backpropagation through time. Experimental results show that the proposed SNN performs close to state-of-the-art CNN-based solutions: it achieves 30.18 dB of signal-to-noise ratio on the Set12 dataset, only 0.25 dB below the equivalent DCNN. Moreover, this performance is reached with low latency, i.e., few timesteps, and a significant level of sparsity. Finally, we analyze energy consumption for different network latencies and sizes. The energy consumption of SNNs grows with latency, so they are more energy efficient than CNNs only at very small inference latencies. However, we also show that by increasing the network size, SNNs can provide competitive denoising performance while reducing energy consumption by 20%.
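The conversion-error/sparsity trade-off can be illustrated with a rate-coded IF neuron: with a soft reset, the spike rate over T timesteps quantizes a constant input with error at most 1/T, so the precision a regression task needs trades directly against latency. A small self-contained sketch (the paper's formal analysis is more general than this toy case):

```python
def if_rate_code(x, T, v_th=1.0):
    """Integrate-and-fire neuron with soft reset driven by a constant
    input x in [0, 1).  The firing rate over T timesteps approximates x
    with quantization error bounded by 1/T."""
    v, n_spikes = 0.0, 0
    for _ in range(T):
        v += x
        if v >= v_th:
            n_spikes += 1
            v -= v_th  # soft reset preserves the sub-threshold remainder
    return n_spikes / T

x = 0.37
errors = [abs(if_rate_code(x, T) - x) for T in (4, 16, 64)]
```

Doubling the number of timesteps halves the worst-case conversion error but also roughly doubles the spike traffic, which is exactly the latency/energy tension the abstract describes.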
5
Qiu XR, Wang ZR, Luan Z, Zhu RJ, Wu X, Zhang ML, Deng LJ. VTSNN: a virtual temporal spiking neural network. Front Neurosci 2023; 17:1091097. PMID: 37287800. PMCID: PMC10242054. DOI: 10.3389/fnins.2023.1091097.
Abstract
Spiking neural networks (SNNs) have recently demonstrated outstanding performance on a variety of high-level tasks, such as image classification. However, advances on low-level tasks, such as image reconstruction, are rare. This may be due to the lack of promising image encoding techniques and of neuromorphic devices designed specifically for SNN-based low-level vision problems. This paper begins by proposing a simple yet effective undistorted weighted-encoding-decoding technique, which consists of an Undistorted Weighted-Encoding (UWE) and an Undistorted Weighted-Decoding (UWD). The former converts a gray image into spike sequences for effective SNN learning, while the latter converts spike sequences back into images. We then design a new SNN training strategy, Independent-Temporal Backpropagation (ITBP), which avoids complex loss propagation across the spatial and temporal dimensions; experiments show that ITBP is superior to Spatio-Temporal Backpropagation (STBP). Finally, a Virtual Temporal SNN (VTSNN) is formulated by incorporating the above approaches into a U-net architecture, fully exploiting its potent multiscale representation capability. Experimental results on several commonly used datasets, such as MNIST, F-MNIST, and CIFAR10, demonstrate that the proposed method produces noise-removal performance superior to existing work. Compared to an ANN with the same architecture, VTSNN is likely to achieve superior results while consuming roughly 1/274 of the energy. In particular, with the given encoding-decoding strategy, a simple neuromorphic circuit could easily be constructed to realize this low-carbon strategy.
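The abstract does not spell out the UWE/UWD construction. One simple scheme with the stated "undistorted" property, sketched here as an assumption rather than the paper's exact method, is a bit-plane decomposition: each of T binary spike frames carries one bit of the quantized pixel value, and a fixed weighted sum over the frames inverts the encoding exactly:

```python
import numpy as np

def weighted_encode(img, T=8):
    """Split a grayscale image (values k / (2**T - 1)) into T binary
    spike frames, where frame t carries bit 2**(T - 1 - t)."""
    q = np.rint(img * (2**T - 1)).astype(np.uint16)
    return np.stack([(q >> (T - 1 - t)) & 1 for t in range(T)]).astype(float)

def weighted_decode(spikes):
    """Weighted sum over the T frames recovers the quantized image."""
    T = spikes.shape[0]
    weights = 2.0 ** (T - 1 - np.arange(T))
    return np.tensordot(weights, spikes, axes=1) / (2**T - 1)

img = np.array([[0, 128], [64, 255]]) / 255.0  # already 8-bit quantized
rec = weighted_decode(weighted_encode(img))    # round trip is lossless
```

Because the decode is a fixed linear weighting of binary spike frames, it maps naturally onto the kind of simple neuromorphic circuit the authors mention.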
Affiliation(s)
- Xue-Rui Qiu
- School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Zhao-Rui Wang
- School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Zheng Luan
- School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Rui-Jie Zhu
- School of Public Affairs and Administration, University of Electronic Science and Technology of China, Chengdu, China
- Xiao Wu
- School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, China
- Ma-Lu Zhang
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Liang-Jian Deng
- School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, China