1
Ren Z, Li J, Xue X, Li X, Yang F, Jiao Z, Gao X. Reconstructing seen image from brain activity by visually-guided cognitive representation and adversarial learning. Neuroimage 2021; 228:117602. [PMID: 33395572] [DOI: 10.1016/j.neuroimage.2020.117602] [Received: 12/04/2019] [Revised: 11/13/2020] [Accepted: 11/23/2020]
Abstract
Reconstructing a perceived stimulus (image) solely from human brain activity measured with functional Magnetic Resonance Imaging (fMRI) is a significant task in brain decoding. However, the inconsistent distributions and representations of fMRI signals and visual images create a large 'domain gap'. Moreover, the limited fMRI data generally suffer from low signal-to-noise ratio (SNR), extremely high dimensionality, and limited spatial resolution. Existing methods are often hampered by these issues, so a satisfactory reconstruction remains an open problem. In this paper, we show that a promising solution can be obtained by learning visually-guided latent cognitive representations from the fMRI signals and inversely decoding them into the image stimuli. The resulting framework, called the Dual-Variational Autoencoder/Generative Adversarial Network (D-VAE/GAN), combines the advantages of adversarial representation learning with knowledge distillation. In addition, we introduce a novel three-stage learning strategy that enables the (cognitive) encoder to gradually distill useful knowledge from the paired (visual) encoder during training. Extensive experiments on both artificial and natural images demonstrate that our method achieves surprisingly good results and outperforms the available alternatives.
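The three-stage strategy sketched in this abstract — first train a visual encoder/decoder on images alone, then distill the cognitive (fMRI) encoder toward the visual encoder's latents, then decode brain activity through the shared decoder — can be illustrated with simple linear stand-ins. In the sketch below, PCA replaces the visual VAE/GAN, closed-form ridge regression replaces gradient-based distillation, and the adversarial fine-tuning stage is omitted entirely; all dimensions, synthetic data, and names (`ridge`, `enc_vis`, `enc_cog`) are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (illustrative assumptions, not the paper's data).
n, d_img, d_fmri, d_z = 300, 32, 64, 8

# Synthetic low-rank "stimuli"; fMRI as a noisy linear readout of them.
basis = rng.standard_normal((d_z, d_img))
codes = rng.standard_normal((n, d_z))
images = codes @ basis
fmri = images @ rng.standard_normal((d_img, d_fmri)) \
       + 0.05 * rng.standard_normal((n, d_fmri))

def ridge(X, Y, lam=1e-3):
    """Closed-form ridge regression mapping X -> Y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

# Stage 1: learn the visual encoder/decoder from images alone
# (PCA stands in for the visual VAE/GAN).
_, _, Vt = np.linalg.svd(images - images.mean(0), full_matrices=False)
enc_vis = Vt[:d_z].T      # image  -> latent
dec = Vt[:d_z]            # latent -> image
z_vis = images @ enc_vis  # visually-guided latent targets

# Stage 2: knowledge distillation -- fit the cognitive encoder so the
# fMRI signals map onto the visual latents.
enc_cog = ridge(fmri, z_vis)

# Stage 3 (adversarial fine-tuning omitted): reconstruct the stimuli
# from brain activity alone.
recon = (fmri @ enc_cog) @ dec
corr = float(np.corrcoef(recon.ravel(), images.ravel())[0, 1])
```

Because the toy stimuli are exactly low-rank, this two-step path recovers them almost perfectly; in the actual framework, each arrow above is a deep network trained with reconstruction and adversarial losses.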
2
Han K, Wen H, Shi J, Lu KH, Zhang Y, Fu D, Liu Z. Variational autoencoder: An unsupervised model for encoding and decoding fMRI activity in visual cortex. Neuroimage 2019; 198:125-136. [PMID: 31103784] [PMCID: PMC6592726] [DOI: 10.1016/j.neuroimage.2019.05.039] [Received: 01/13/2018] [Revised: 04/13/2019] [Accepted: 05/15/2019]
Abstract
Goal-driven, feedforward-only convolutional neural networks (CNNs) have been shown to predict and decode cortical responses to natural images or videos. Here, we explored an alternative deep neural network, the variational autoencoder (VAE), as a computational model of the visual cortex. We trained a VAE with a five-layer encoder and a five-layer decoder to learn visual representations from a diverse set of unlabeled images. Using the trained VAE, we predicted and decoded cortical activity observed with functional magnetic resonance imaging (fMRI) from three human subjects passively watching natural videos. Compared with the CNN, the VAE predicted video-evoked cortical responses with comparable accuracy in early visual areas but lower accuracy in higher-order visual areas. The difference in encoding performance between the CNN and the VAE was attributable primarily to their different learning objectives rather than to their model architectures or numbers of parameters. Despite its lower encoding accuracy, the VAE offered a more convenient strategy for decoding fMRI activity to reconstruct the video input: first converting the fMRI activity to the VAE's latent variables, and then converting the latent variables to the reconstructed video frames through the VAE's decoder. This strategy was more advantageous than alternative decoding methods, e.g. partial least squares regression, because it could reconstruct both the spatial structure and the color of the visual input. These findings highlight the VAE as an unsupervised model for learning visual representations, as well as its potential and limitations for explaining cortical responses and reconstructing naturalistic and diverse visual experiences.
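The two-step decoding strategy described here (fMRI activity → latent variables → decoder → frames) and the one-step regression baseline can be contrasted in a small numerical sketch. A fixed random `tanh` generator stands in for the trained VAE decoder, and ridge regression stands in both for the fMRI-to-latent map and for the partial-least-squares-style direct baseline; all shapes, the synthetic data, and the helper names are assumptions for illustration, not the study's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy shapes (the study used video frames and voxel responses).
n, d_frame, d_voxel, d_lat = 250, 48, 96, 6

# A fixed generator stands in for the trained VAE decoder:
# frames = tanh(latents @ decoder_W).
decoder_W = 0.3 * rng.standard_normal((d_lat, d_frame))
latents = rng.standard_normal((n, d_lat))
frames = np.tanh(latents @ decoder_W)

# Voxel responses: a noisy linear readout of the frames.
voxels = frames @ rng.standard_normal((d_frame, d_voxel)) \
         + 0.1 * rng.standard_normal((n, d_voxel))

def ridge(X, Y, lam=1e-2):
    """Closed-form ridge regression mapping X -> Y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

# Step 1: estimate the VAE's latent variables from fMRI activity.
to_latent = ridge(voxels, latents)

# Step 2: push the estimated latents through the decoder.
recon_two_step = np.tanh((voxels @ to_latent) @ decoder_W)

# Baseline: regress frames directly from voxels (a stand-in for the
# partial least squares baseline).
recon_direct = voxels @ ridge(voxels, frames)

err_two_step = float(np.mean((recon_two_step - frames) ** 2))
err_direct = float(np.mean((recon_direct - frames) ** 2))
```

Both routes beat a trivial predictor on this toy data; the paper's point is that the two-step route additionally constrains reconstructions to the decoder's learned image manifold, which linear one-step regression cannot do.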
Affiliation(s)
- Kuan Han
- School of Electrical and Computer Engineering, USA; Purdue Institute for Integrative Neuroscience, Purdue University, West Lafayette, IN, 47906, USA
- Haiguang Wen
- School of Electrical and Computer Engineering, USA; Purdue Institute for Integrative Neuroscience, Purdue University, West Lafayette, IN, 47906, USA
- Junxing Shi
- School of Electrical and Computer Engineering, USA; Purdue Institute for Integrative Neuroscience, Purdue University, West Lafayette, IN, 47906, USA
- Kun-Han Lu
- School of Electrical and Computer Engineering, USA; Purdue Institute for Integrative Neuroscience, Purdue University, West Lafayette, IN, 47906, USA
- Yizhen Zhang
- School of Electrical and Computer Engineering, USA; Purdue Institute for Integrative Neuroscience, Purdue University, West Lafayette, IN, 47906, USA
- Di Fu
- School of Electrical and Computer Engineering, USA; Purdue Institute for Integrative Neuroscience, Purdue University, West Lafayette, IN, 47906, USA
- Zhongming Liu
- Weldon School of Biomedical Engineering, USA; School of Electrical and Computer Engineering, USA; Purdue Institute for Integrative Neuroscience, Purdue University, West Lafayette, IN, 47906, USA.