1
Chang CH, Drobotenko N, Ruocco AC, Lee ACH, Nestor A. Perception and memory-based representations of facial emotions: Associations with personality functioning, affective states and recognition abilities. Cognition 2024;245:105724. PMID: 38266352. DOI: 10.1016/j.cognition.2024.105724.
Abstract
Personality traits and affective states are associated with biases in facial emotion perception. However, the precise personality impairments and affective states that underlie these biases remain largely unknown. To investigate how relevant factors influence facial emotion perception and recollection, Experiment 1 employed an image reconstruction approach in which community-dwelling adults (N = 89) rated the similarity of pairs of facial expressions, including those recalled from memory. Subsequently, perception- and memory-based expression representations derived from such ratings were assessed across participants and related to measures of personality impairment, state affect, and visual recognition abilities. Impairment in self-direction and level of positive affect accounted for the largest components of individual variability in perception and memory representations, respectively. Additionally, individual differences in these representations were impacted by face recognition ability. In Experiment 2, adult participants (N = 81) rated facial image reconstructions derived in Experiment 1, revealing that individual variability was associated with specific visual face properties, such as expressiveness, representation accuracy, and positivity/negativity. These findings highlight and clarify the influence of personality, affective state, and recognition abilities on individual differences in the perception and recollection of facial expressions.
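The image reconstruction approach summarized above starts by turning pairwise similarity ratings into coordinates in a low-dimensional "face space". As a minimal sketch of that first step (classical multidimensional scaling on a synthetic dissimilarity matrix; the function name and data are ours, not the study's code):

```python
import numpy as np

def classical_mds(dissim, n_dims=2):
    """Embed items in a low-dimensional space from a symmetric
    dissimilarity matrix via classical MDS (Torgerson scaling)."""
    n = dissim.shape[0]
    # Double-center the squared dissimilarities
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (dissim ** 2) @ J
    # Top eigenvectors of the centered Gram matrix give coordinates
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:n_dims]
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

# Synthetic example: 4 "faces" on a line; dissimilarity = distance
d = np.abs(np.subtract.outer(np.arange(4.0), np.arange(4.0)))
coords = classical_mds(d, n_dims=1)
# The recovered coordinates preserve the original pairwise distances
recovered = np.abs(np.subtract.outer(coords[:, 0], coords[:, 0]))
print(np.allclose(recovered, d))  # → True
```

In the actual studies the embedding dimensions are subsequently related to image features to synthesize reconstructed faces; this sketch covers only the rating-to-space step.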
Affiliation(s)
- Chi-Hsun Chang
- Department of Psychology at Scarborough, University of Toronto, 1265 Military Trail, Scarborough, Ontario M1C 1A4, Canada
- Natalia Drobotenko
- Department of Psychology at Scarborough, University of Toronto, 1265 Military Trail, Scarborough, Ontario M1C 1A4, Canada
- Anthony C Ruocco
- Department of Psychology at Scarborough, University of Toronto, 1265 Military Trail, Scarborough, Ontario M1C 1A4, Canada; Department of Psychological Clinical Science at Scarborough, University of Toronto, 1265 Military Trail, Scarborough, Ontario M1C 1A4, Canada
- Andy C H Lee
- Department of Psychology at Scarborough, University of Toronto, 1265 Military Trail, Scarborough, Ontario M1C 1A4, Canada; Rotman Research Institute, Baycrest Centre, 3560 Bathurst St, North York, Ontario M6A 2E1, Canada
- Adrian Nestor
- Department of Psychology at Scarborough, University of Toronto, 1265 Military Trail, Scarborough, Ontario M1C 1A4, Canada.
2
Koide-Majima N, Nishimoto S, Majima K. Mental image reconstruction from human brain activity: Neural decoding of mental imagery via deep neural network-based Bayesian estimation. Neural Netw 2024;170:349-363. PMID: 38016230. DOI: 10.1016/j.neunet.2023.11.024.
Abstract
Visual images observed by humans can be reconstructed from their brain activity. However, the visualization (externalization) of mental imagery is challenging. Only a few studies have reported successful visualization of mental imagery, and their visualizable images have been limited to specific domains such as human faces or alphabetical letters. Therefore, visualizing mental imagery for arbitrary natural images stands as a significant milestone. In this study, we achieved this by enhancing a previous method. Specifically, we demonstrated that the visual image reconstruction method proposed in the seminal study by Shen et al. (2019) heavily relied on low-level visual information decoded from the brain and could not efficiently utilize the semantic information that would be recruited during mental imagery. To address this limitation, we extended the previous method to a Bayesian estimation framework and introduced the assistance of semantic information into it. Our proposed framework successfully reconstructed both seen images (i.e., those observed by the human eye) and imagined images from brain activity. Quantitative evaluation showed that our framework could identify seen and imagined images highly accurately compared to the chance accuracy (seen: 90.7%, imagery: 75.6%, chance accuracy: 50.0%). In contrast, the previous method could only identify seen images (seen: 64.3%, imagery: 50.4%). These results suggest that our framework would provide a unique tool for directly investigating the subjective contents of the brain such as illusions, hallucinations, and dreams.
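The identification accuracies quoted above (e.g., 90.7% seen vs. 50.0% chance) come from two-alternative comparisons of each reconstruction against its target and a distractor. A minimal sketch of that style of evaluation, assuming a pixel-correlation metric and synthetic images (the study's own metric and stimuli differ):

```python
import numpy as np

rng = np.random.default_rng(0)

def identification_accuracy(recon, target, distractors):
    """Two-alternative identification: for each distractor, count a win
    when the reconstruction correlates more strongly with its target
    than with the distractor. Chance level is 0.5."""
    def corr(a, b):
        return np.corrcoef(a.ravel(), b.ravel())[0, 1]
    r_target = corr(recon, target)
    wins = [r_target > corr(recon, d) for d in distractors]
    return float(np.mean(wins))

# Synthetic check: a noisy copy of the target beats random distractors
target = rng.normal(size=(16, 16))
recon = target + 0.3 * rng.normal(size=(16, 16))
distractors = [rng.normal(size=(16, 16)) for _ in range(50)]
acc = identification_accuracy(recon, target, distractors)
print(acc)  # → 1.0
```

Reporting the win rate against many distractors is what makes accuracies like 75.6% for imagery directly comparable to the 50% chance baseline.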
Affiliation(s)
- Naoko Koide-Majima
- Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology, Osaka 565-0871, Japan; Graduate School of Frontier Biosciences, Osaka University, Osaka 565-0871, Japan
- Shinji Nishimoto
- Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology, Osaka 565-0871, Japan; Graduate School of Frontier Biosciences, Osaka University, Osaka 565-0871, Japan; Graduate School of Medicine, Osaka University, Osaka 565-0871, Japan
- Kei Majima
- Institute for Quantum Life Science, National Institutes for Quantum Science and Technology, Chiba 263-8555, Japan; JST PRESTO, Saitama 332-0012, Japan.
3
Chang CH, Zehra S, Nestor A, Lee ACH. Using image reconstruction to investigate face perception in amnesia. Neuropsychologia 2023;185:108573. PMID: 37119985. DOI: 10.1016/j.neuropsychologia.2023.108573.
Abstract
Damage to the medial temporal lobe (MTL), which is traditionally considered to subserve memory exclusively, has been reported to contribute to impaired face perception. However, it remains unknown how exactly such brain lesions may impact face representations and in particular facial shape and surface information, both of which are crucial for face perception. The present study employed a behavioral-based image reconstruction approach to reveal the pictorial representations of face perception in two amnesic patients: DA, who has an extensive bilateral MTL lesion that extends beyond the MTL in the right hemisphere, and BL, who has damage to the hippocampal dentate gyrus (DG). Both patients and their respective matched controls completed similarity judgments for pairs of faces, from which facial shape and surface features were subsequently derived and synthesized to create images of reconstructed facial appearance. Participants also completed a face oddity judgment task (FOJT) that has previously been shown to be sensitive to MTL cortical damage. While BL exhibited an impaired pattern of performance on the FOJT, DA demonstrated intact performance accuracy. Notably, the recovered pictorial content of faces was comparable between both patients and controls, although there was evidence for atypical face representations in BL particularly with regards to color. Our work provides novel insight into the face representations underlying face perception in two well-studied amnesic patients in the literature and demonstrates the applicability of the image reconstruction approach to individuals with brain damage.
Affiliation(s)
- Chi-Hsun Chang
- Department of Psychology (Scarborough), University of Toronto, Toronto, Ontario, Canada
- Sukhan Zehra
- Department of Psychology (Scarborough), University of Toronto, Toronto, Ontario, Canada
- Adrian Nestor
- Department of Psychology (Scarborough), University of Toronto, Toronto, Ontario, Canada
- Andy C H Lee
- Department of Psychology (Scarborough), University of Toronto, Toronto, Ontario, Canada; Rotman Research Institute, Baycrest Centre, Toronto, Ontario, Canada.
4
Face identity coding in the deep neural network and primate brain. Commun Biol 2022;5:611. PMID: 35725902. PMCID: PMC9209415. DOI: 10.1038/s42003-022-03557-9. Open access.
Abstract
A central challenge in face perception research is to understand how neurons encode face identities. This challenge has not been met largely due to the lack of simultaneous access to the entire face processing neural network and the lack of a comprehensive multifaceted model capable of characterizing a large number of facial features. Here, we addressed this challenge by conducting in silico experiments using a pre-trained face recognition deep neural network (DNN) with a diverse array of stimuli. We identified a subset of DNN units selective to face identities, and these identity-selective units demonstrated generalized discriminability to novel faces. Visualization and manipulation of the network revealed the importance of identity-selective units in face recognition. Importantly, using our monkey and human single-neuron recordings, we directly compared the response of artificial units with real primate neurons to the same stimuli and found that artificial units shared a similar representation of facial features as primate neurons. We also observed a region-based feature coding mechanism in DNN units as in human neurons. Together, by directly linking between artificial and primate neural systems, our results shed light on how the primate brain performs face recognition tasks.
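Identifying "identity-selective units" in a DNN can be illustrated with a simple between- versus within-identity variance ratio over unit activations. This is a stand-in criterion of our own, run on synthetic activations, and not necessarily the paper's actual selection statistic:

```python
import numpy as np

rng = np.random.default_rng(1)

def identity_selective_units(acts, identities, threshold=2.0):
    """Flag units whose between-identity variance exceeds their
    within-identity variance by `threshold` (a crude F-like ratio).
    acts: (n_images, n_units); identities: integer label per image."""
    labels = np.unique(identities)
    means = np.stack([acts[identities == k].mean(axis=0) for k in labels])
    between = means.var(axis=0)
    within = np.mean(
        [acts[identities == k].var(axis=0) for k in labels], axis=0)
    return between / (within + 1e-9) > threshold

# Synthetic activations: unit 0 carries identity signal, unit 1 is noise
ids = np.repeat([0, 1, 2], 20)
signal = ids[:, None] * np.array([3.0, 0.0])
acts = signal + rng.normal(size=(60, 2))
mask = identity_selective_units(acts, ids)
print(mask)  # → [ True False]
```

The same selection logic applies whether the "units" are artificial activations or recorded neuron firing rates, which is what makes the paper's direct DNN-to-primate comparison possible.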
5
Rakhimberdina Z, Jodelet Q, Liu X, Murata T. Natural Image Reconstruction From fMRI Using Deep Learning: A Survey. Front Neurosci 2021;15:795488. PMID: 34987359. PMCID: PMC8722107. DOI: 10.3389/fnins.2021.795488. Open access.
Abstract
With the advent of brain imaging techniques and machine learning tools, much effort has been devoted to building computational models to capture the encoding of visual information in the human brain. One of the most challenging brain decoding tasks is the accurate reconstruction of the perceived natural images from brain activities measured by functional magnetic resonance imaging (fMRI). In this work, we survey the most recent deep learning methods for natural image reconstruction from fMRI. We examine these methods in terms of architectural design, benchmark datasets, and evaluation metrics and present a fair performance evaluation across standardized evaluation metrics. Finally, we discuss the strengths and limitations of existing studies and present potential future directions.
Affiliation(s)
- Zarina Rakhimberdina
- Department of Computer Science, Tokyo Institute of Technology, Tokyo, Japan
- AIST-Tokyo Tech Real World Big-Data Computation Open Innovation Laboratory, Tokyo, Japan
- Quentin Jodelet
- Department of Computer Science, Tokyo Institute of Technology, Tokyo, Japan
- AIST-Tokyo Tech Real World Big-Data Computation Open Innovation Laboratory, Tokyo, Japan
- Xin Liu
- AIST-Tokyo Tech Real World Big-Data Computation Open Innovation Laboratory, Tokyo, Japan
- Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology, Tokyo, Japan
- Digital Architecture Research Center, National Institute of Advanced Industrial Science and Technology, Tokyo, Japan
- Tsuyoshi Murata
- Department of Computer Science, Tokyo Institute of Technology, Tokyo, Japan
- AIST-Tokyo Tech Real World Big-Data Computation Open Innovation Laboratory, Tokyo, Japan
6
Marvan T, Polák M, Bachmann T, Phillips WA. Apical amplification: a cellular mechanism of conscious perception? Neurosci Conscious 2021;2021:niab036. PMID: 34650815. PMCID: PMC8511476. DOI: 10.1093/nc/niab036. Open access.
Abstract
We present a theoretical view of the cellular foundations for network-level processes involved in producing our conscious experience. Inputs to apical synapses in layer 1 of a large subset of neocortical cells are summed at an integration zone near the top of their apical trunk. These inputs come from diverse sources and provide a context within which the transmission of information abstracted from sensory input to their basal and perisomatic synapses can be amplified when relevant. We argue that apical amplification enables conscious perceptual experience and makes it more flexible, and thus more adaptive, by being sensitive to context. Apical amplification provides a possible mechanism for recurrent processing theory that avoids strong loops. It makes the broadcasting hypothesized by global neuronal workspace theories feasible while preserving the distinct contributions of the individual cells receiving the broadcast. It also provides mechanisms that contribute to the holistic aspects of integrated information theory. As apical amplification is highly dependent on cholinergic, aminergic, and other neuromodulators, it relates the specific contents of conscious experience to global mental states and to fluctuations in arousal when awake. We conclude that apical dendrites provide a cellular mechanism for the context-sensitive selective amplification that is a cardinal prerequisite of conscious perception.
Affiliation(s)
- Tomáš Marvan
- Department of Analytic Philosophy, Institute of Philosophy, Czech Academy of Sciences, Jilská 1, Prague 110 00, Czech Republic
- Michal Polák
- Department of Philosophy, University of West Bohemia, Sedláčkova 19, Pilsen 306 14, Czech Republic
- Talis Bachmann
- School of Law and Cognitive Neuroscience Laboratory, University of Tartu (Tallinn branch), Kaarli pst 3, Tallinn 10119, Estonia
- William A Phillips
- Faculty of Natural Sciences, University of Stirling, Stirling FK9 4LA, UK
7
Daube C, Xu T, Zhan J, Webb A, Ince RA, Garrod OG, Schyns PG. Grounding deep neural network predictions of human categorization behavior in understandable functional features: The case of face identity. Patterns (N Y) 2021;2:100348. PMID: 34693374. PMCID: PMC8515012. DOI: 10.1016/j.patter.2021.100348.
Abstract
Deep neural networks (DNNs) can resolve real-world categorization tasks with apparent human-level performance. However, true equivalence of behavioral performance between humans and their DNN models requires that their internal mechanisms process equivalent features of the stimulus. To develop such feature equivalence, our methodology leveraged an interpretable and experimentally controlled generative model of the stimuli (realistic three-dimensional textured faces). Humans rated the similarity of randomly generated faces to four familiar identities. We predicted these similarity ratings from the activations of five DNNs trained with different optimization objectives. Using information theoretic redundancy, reverse correlation, and the testing of generalization gradients, we show that DNN predictions of human behavior improve because their shape and texture features overlap with those that subsume human behavior. Thus, we must equate the functional features that subsume the behavioral performances of the brain and its models before comparing where, when, and how these features are processed.
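Predicting human similarity ratings from DNN-layer activations is, at its core, a regularized linear readout. A minimal sketch using closed-form ridge regression on synthetic data (the study itself combined such predictions with information-theoretic redundancy analyses; all names and data here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression mapping feature activations X
    (n_trials, n_features) to behavioral ratings y (n_trials,)."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ y)

# Synthetic stand-in: "activations" X and ratings generated from a
# hidden linear readout plus noise
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.1 * rng.normal(size=200)

w = ridge_fit(X, y, alpha=0.1)
pred = X @ w
r = np.corrcoef(pred, y)[0, 1]
print(r > 0.95)  # → True
```

In practice such fits are cross-validated per DNN layer, and the paper's point is that prediction accuracy alone is not enough: the fitted readout must also rely on the same shape and texture features that drive the human ratings.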
Affiliation(s)
- Christoph Daube
- Institute of Neuroscience and Psychology, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Tian Xu
- Department of Computer Science and Technology, University of Cambridge, 15 JJ Thomson Avenue, Cambridge CB3 0FD, England, UK
- Jiayu Zhan
- Institute of Neuroscience and Psychology, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Andrew Webb
- Institute of Neuroscience and Psychology, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Robin A.A. Ince
- Institute of Neuroscience and Psychology, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Oliver G.B. Garrod
- Institute of Neuroscience and Psychology, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Philippe G. Schyns
- Institute of Neuroscience and Psychology, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK