1. Guo LL, Niemeier M. Phase-Dependent Visual and Sensorimotor Integration of Features for Grasp Computations before and after Effector Specification. J Neurosci 2024; 44:e2208232024. [PMID: 39019614; PMCID: PMC11326866; DOI: 10.1523/jneurosci.2208-23.2024]
Abstract
The simple act of viewing and grasping an object involves complex sensorimotor control mechanisms that have been shown to vary as a function of multiple object and other task features such as object size, shape, weight, and wrist orientation. However, these features have been mostly studied in isolation. In contrast, given the nonlinearity of motor control, its computations require multiple features to be incorporated concurrently. Therefore, the present study tested the hypothesis that grasp computations integrate multiple task features superadditively, in particular when these features are relevant for the same action phase. We asked male and female human participants to reach-to-grasp objects of different shapes and sizes with different wrist orientations. Also, we delayed the movement onset using auditory signals to specify which effector to use. Using electroencephalography and representational dissimilarity analysis to map the time course of cortical activity, we found that grasp computations formed superadditive integrated representations of grasp features during different planning phases of grasping. Shape-by-size representations and size-by-orientation representations occurred before and after effector specification, respectively, and could not be explained by single-feature models. These observations are consistent with the brain performing distinct phase-specific preparatory computations: visual object analysis to identify grasp points at abstract visual levels, and downstream sensorimotor preparatory computations for reach-to-grasp trajectories. Our results suggest that the brain adheres to the needs of nonlinear motor control for integration. Furthermore, they show that examining the superadditive influence of integrated representations can serve as a novel lens to map the computations underlying sensorimotor control.
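The superadditivity logic in this abstract can be illustrated with a small model-comparison sketch: if a neural representational dissimilarity matrix (RDM) contains a genuine feature conjunction, single-feature model RDMs should explain it worse than the same models plus an interaction term. This is a minimal Python sketch on simulated data; the 2-shape-by-3-size design, effect sizes, and noise level are hypothetical illustrations, not values from the study:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

# Hypothetical design: 2 shapes x 3 sizes (6 conditions).
stimuli = list(product([0, 1], [1.0, 2.0, 3.0]))   # (shape, size)
n = len(stimuli)
pairs = list(zip(*np.triu_indices(n, k=1)))        # unique stimulus pairs

shape_rdm = np.array([float(stimuli[i][0] != stimuli[j][0]) for i, j in pairs])
size_rdm = np.array([abs(stimuli[i][1] - stimuli[j][1]) for i, j in pairs])
# Interaction component: dissimilarity arising only when BOTH features differ,
# beyond what shape and size predict additively.
interaction_rdm = shape_rdm * size_rdm

# Simulated "neural" RDM carrying a genuine shape-by-size interaction.
neural = (shape_rdm + size_rdm + 0.8 * interaction_rdm
          + 0.1 * rng.standard_normal(len(pairs)))

def r_squared(predictors, y):
    # Ordinary least squares fit of y on an intercept plus the model RDMs.
    X = np.column_stack([np.ones_like(y)] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1.0 - np.var(y - X @ beta) / np.var(y)

r2_additive = r_squared([shape_rdm, size_rdm], neural)
r2_full = r_squared([shape_rdm, size_rdm, interaction_rdm], neural)
print(f"additive R^2: {r2_additive:.3f}, with interaction: {r2_full:.3f}")
```

In real data the regression would be run per timepoint, with the neural RDM built from EEG pattern dissimilarities; a reliable gain in R^2 from the interaction RDM would indicate a superadditive, integrated representation.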
Affiliation(s)
- Lin Lawrence Guo: Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
- Matthias Niemeier: Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada; Centre for Vision Research, York University, Toronto, Ontario M4N 3M6, Canada
2. Tautvydaitė D, Burra N. The Timing of Gaze Direction Perception: ERP Decoding and Task Modulation. Neuroimage 2024; 295:120659. [PMID: 38815675; DOI: 10.1016/j.neuroimage.2024.120659]
Abstract
Distinguishing the direction of another person's eye gaze is extremely important in everyday social interaction, as it provides critical information about people's attention and, therefore, intentions. The temporal dynamics of gaze processing have been investigated using event-related potentials (ERPs) recorded with electroencephalography (EEG). However, the moment at which our brain distinguishes gaze direction (GD), irrespective of other facial cues, remains unclear. To address this question, the present study investigated the time course of gaze direction processing using an ERP decoding approach based on the combination of a support vector machine and error-correcting output codes. We recorded EEG in young healthy subjects, 32 of whom performed a GD detection task and 34 a face orientation task. Both tasks presented 3D realistic faces with five different head and gaze orientations each: 30° and 15° to the left or right, and 0°. While the classical ERP analyses did not show clear GD effects, ERP decoding analyses revealed that discrimination of GD, irrespective of head orientation, started at 140 ms in the GD task and at 120 ms in the face orientation task. GD decoding accuracy was higher in the GD task than in the face orientation task and was highest for direct gaze in both tasks. These findings suggest that the decoding of brain patterns is modified by task relevance, which changes the latency and the accuracy of GD decoding.
Affiliation(s)
- Domilė Tautvydaitė: Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
- Nicolas Burra: Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
3. Tanaka H, Jiang P. P1, N170, and N250 Event-related Potential Components Reflect Temporal Perception Processing in Face and Body Personal Identification. J Cogn Neurosci 2024; 36:1265-1281. [PMID: 38652104; DOI: 10.1162/jocn_a_02167]
Abstract
Human faces and bodies represent various socially important signals. Although adults encounter numerous new people in daily life, they can recognize hundreds to thousands of different individuals. However, the neural mechanisms that differentiate one person from another are unclear. This study aimed to clarify the temporal dynamics of the cognitive processes of face and body personal identification using face-sensitive ERP components (P1, N170, and N250). The study comprised three blocks (face-face, face-body, and body-body) of an ERP adaptation paradigm. Within each block, the ERP components were compared across three conditions (same person, different person of the same sex, and different person of the opposite sex). The results showed that the P1 amplitude was significantly greater for the face-face block than for the body-body block; that the N170 amplitude was greater for a different person of the same sex than for the same person, in the right hemisphere only; and that the N250 amplitude increased gradually as the degree of face and body sex-social categorization grew closer (i.e., same person condition > different person of the same sex condition > different person of the opposite sex condition). These results suggest that early processing treats the face and body separately, whereas structural encoding and personal identification process the face and body collaboratively.
Affiliation(s)
- Peilun Jiang: Kanazawa University Graduate School, Kanazawa City, Japan
4. Sama MA, Nestor A, Cant JS. The Neural Dynamics of Face Ensemble and Central Face Processing. J Neurosci 2024; 44:e1027232023. [PMID: 38148151; PMCID: PMC10869155; DOI: 10.1523/jneurosci.1027-23.2023]
Abstract
Extensive work has investigated the neural processing of single faces, including the role of shape and surface properties. However, much less is known about the neural basis of face ensemble perception (e.g., simultaneously viewing several faces in a crowd). Importantly, the contributions of shape and surface properties have not been elucidated in face ensemble processing. Furthermore, how single central faces are processed within the context of an ensemble remains unclear. Here, we probe the neural dynamics of ensemble representation using pattern analyses as applied to electrophysiology data in healthy adults (seven males, nine females). Our investigation relies on a unique set of stimuli, depicting different facial identities, which vary parametrically and independently along their shape and surface properties. These stimuli were organized into ensemble displays consisting of six surround faces arranged in a circle around one central face. Overall, our results indicate that both shape and surface properties play a significant role in face ensemble encoding, with the latter demonstrating a more pronounced contribution. Importantly, we find that the neural processing of the center face precedes that of the surround faces in an ensemble. Further, the temporal profile of center face decoding is similar to that of single faces, while those of single faces and face ensembles diverge extensively from each other. Thus, our work capitalizes on a new center-surround paradigm to elucidate the neural dynamics of ensemble processing and the information that underpins it. Critically, our results serve to bridge the study of single and ensemble face perception.
Affiliation(s)
- Marco Agazio Sama: Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
- Adrian Nestor: Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
- Jonathan Samuel Cant: Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
5. Yao L, Fu Q, Liu CH. The roles of edge-based and surface-based information in the dynamic neural representation of objects. Neuroimage 2023; 283:120425. [PMID: 37890562; DOI: 10.1016/j.neuroimage.2023.120425]
Abstract
We combined multivariate pattern analysis (MVPA) and electroencephalography (EEG) to investigate the roles of edge, color, and other surface information in the neural representation of visual objects. Participants completed a one-back task in which they were presented with color photographs, grayscale images, and line drawings of animals, tools, and fruits. Our results provide the first neural evidence that line drawings elicit neural activity similar to that of color photographs and grayscale images during the 175-305 ms window after stimulus onset. Furthermore, we found that surface information other than color facilitates decoding accuracy in the early stages of object representation and affects the speed with which these representations emerge. These results provide new insights into the roles of edge-based and surface-based information in the dynamic process of neural representation of visual objects.
Affiliation(s)
- Liansheng Yao: State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Qiufang Fu: State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Chang Hong Liu: Department of Psychology, Bournemouth University, Fern Barrow, Poole, UK
6. Kovács G, Li C, Ambrus GG, Burton AM. The neural dynamics of familiarity-dependent face identity representation. Psychophysiology 2023; 60:e14304. [PMID: 37009756; DOI: 10.1111/psyp.14304]
Abstract
Recognizing a face as belonging to a given identity is essential in our everyday life. Clearly, the correct identification of a face is only possible for familiar people, but 'familiarity' covers a wide range, from people we see every day to those we barely know. Although several studies have shown that the processing of familiar and unfamiliar faces is substantially different, little is known about how the degree of familiarity affects the neural dynamics of face identity processing. Here, we report the results of a multivariate EEG analysis, examining the representational dynamics of face identity across several familiarity levels. Participants viewed highly variable face images of 20 identities, including the participants' own face, personally familiar (PF), celebrity, and unfamiliar faces. Linear discriminant classifiers were trained and tested on EEG patterns to discriminate pairs of identities of the same familiarity level. Time-resolved classification revealed that the neural representations of identity discrimination emerge around 100 ms post-stimulus onset, relatively independently of familiarity level. In contrast, identity decoding between 200 and 400 ms is determined to a large extent by familiarity: it can be recovered with higher accuracy and for a longer duration in the case of more familiar faces. In addition, we found no increased discriminability for faces of PF persons compared to those of highly familiar celebrities. One's own face benefits from processing advantages only in a relatively late time window. Our findings provide new insights into how the brain represents face identity with various degrees of familiarity and show that the degree of familiarity modulates the available identity-specific information at a relatively early time window.
Affiliation(s)
- Gyula Kovács: Department of Biological Psychology and Cognitive Neurosciences, Institute of Psychology, Friedrich-Schiller-Universität Jena, Jena, Germany
- Chenglin Li: Department of Biological Psychology and Cognitive Neurosciences, Institute of Psychology, Friedrich-Schiller-Universität Jena, Jena, Germany; School of Psychology, Zhejiang Normal University, Jinhua, China
- Géza Gergely Ambrus: Department of Psychology, Faculty of Science and Technology, Bournemouth University, Poole, UK
- A Mike Burton: Department of Psychology, University of York, York, UK; Faculty of Society and Design, Bond University, Gold Coast, Qld, Australia
7. Dalski A, Kovács G, Ambrus GG. No semantic information is necessary to evoke general neural signatures of face familiarity: evidence from cross-experiment classification. Brain Struct Funct 2023; 228:449-462. [PMID: 36244002; PMCID: PMC9944719; DOI: 10.1007/s00429-022-02583-x]
Abstract
Recent theories on the neural correlates of face identification have stressed the importance of the available identity-specific semantic and affective information. However, whether such information is essential for the emergence of neural signatures of familiarity has not yet been studied in detail. Here, we explored the shared representation of face familiarity between perceptually and personally familiarized identities. We applied a cross-experiment multivariate pattern classification analysis (MVPA) to test whether EEG patterns for passive viewing of personally familiar and unfamiliar faces are useful in decoding familiarity in a matching task where familiarity was attained through a short perceptual task. Importantly, no additional semantic, contextual, or affective information was provided for the familiarized identities during perceptual familiarization. Although the two datasets originate from different sets of participants who were engaged in two different tasks, familiarity was still decodable in the sorted, same-identity matching trials. This finding indicates that the visual processing of the faces of personally familiar and purely perceptually familiarized identities involves similar mechanisms, leading to cross-classifiable neural patterns.
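The cross-experiment classification idea (train a classifier on one experiment's EEG patterns, test it on the other's) can be sketched as follows, assuming a familiarity signal shared across experiments plus an experiment-specific offset. The feature counts, effect size, and offset are hypothetical simulation choices, not values from the study:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)

def simulate_experiment(n_trials, baseline):
    """EEG-like patterns (trials x features) with a shared familiarity signal
    plus an experiment-specific baseline (different task and participants)."""
    y = rng.integers(0, 2, n_trials)              # 0 = unfamiliar, 1 = familiar
    X = rng.standard_normal((n_trials, 20)) + baseline
    X[:, :4] += 1.5 * y[:, None]                  # familiarity code shared across experiments
    return X, y

# Experiment A: personally familiar faces; experiment B: perceptually familiarized.
X_a, y_a = simulate_experiment(300, baseline=0.0)
X_b, y_b = simulate_experiment(300, baseline=0.3)

# Train on one experiment, test on the other.
clf = LinearDiscriminantAnalysis().fit(X_a, y_a)
acc = clf.score(X_b, y_b)
print(f"cross-experiment decoding accuracy: {acc:.2f} (chance 0.50)")
```

Above-chance transfer despite the different baseline is the signature such analyses look for: a familiarity code that generalizes across participants and tasks.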
Affiliation(s)
- Alexia Dalski: Department of Psychology, Philipps-Universität Marburg, 35039 Marburg, Germany; Center for Mind, Brain and Behavior (CMBB), Philipps-Universität Marburg and Justus-Liebig-Universität Giessen, 35039 Marburg, Germany
- Gyula Kovács: Institute of Psychology, Friedrich Schiller University Jena, 07743 Jena, Germany
- Géza Gergely Ambrus: Institute of Psychology, Friedrich Schiller University Jena, 07743 Jena, Germany; Department of Psychology, Bournemouth University, Poole House, Talbot Campus, Fern Barrow, Poole BH12 5BB, Dorset, UK
8. Gladhill KA, Mioni G, Wiener M. Dissociable effects of emotional stimuli on electrophysiological indices of time and decision-making. PLoS One 2022; 17:e0276200. [PMCID: PMC9671475; DOI: 10.1371/journal.pone.0276200]
Abstract
Previous research has demonstrated that emotional faces affect time perception; however, the underlying mechanisms are not fully understood. Earlier accounts have focused on effects at the different stages of the pacemaker-accumulator model (clock, memory, and/or decision-making), including an increase in pacemaker rate or accumulation rate via arousal or attention, respectively, or a bias in decision-making. To further investigate these effects, a visual temporal bisection task with sub-second intervals was conducted in two groups; one group was strictly behavioral whereas the second included a 64-channel electroencephalogram (EEG). To separate the influence of face and timing responses, participants timed a visual stimulus temporally flanked (before and after) by two faces, either negative or neutral, creating three trial types: Neg→Neut, Neut→Neg, or Neut→Neut. We found a leftward shift in bisection point (BP) in Neg→Neut relative to Neut→Neut, suggesting an overestimation of the temporal stimulus when preceded by a negative face. Neurally, we found that the face-responsive N170 was larger for negative faces and that the N1 and contingent negative variation (CNV) were larger when the temporal stimulus was preceded by a negative face. Additionally, there was an interaction effect between condition and response for the late positive component of timing (LPCt), and a significant difference between responses (short/long) in the neutral condition. We conclude that a preceding negative face affects the clock stage, leading to more pulses being accumulated, either through attention or arousal, as indexed by larger N1, CNV, and N170 components, whereas viewing a negative face afterward impacted decision-making mechanisms, as evidenced by the LPCt.
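The bisection point (BP) measure used above can be computed from the psychometric function relating stimulus duration to the proportion of "long" responses; a leftward BP shift means shorter durations are already judged "long", i.e. overestimation. A minimal sketch with hypothetical response proportions (real analyses typically fit a sigmoid rather than interpolate linearly):

```python
import numpy as np

durations = np.array([400, 500, 600, 700, 800, 900])  # ms, hypothetical anchors

# Hypothetical proportions of "long" responses per duration.
p_neut_neut = np.array([0.05, 0.15, 0.40, 0.65, 0.85, 0.95])
# Negative face first: durations overestimated -> curve shifted leftward.
p_neg_neut = np.array([0.10, 0.30, 0.55, 0.80, 0.92, 0.98])

def bisection_point(durs, p_long):
    """Duration judged 'short' and 'long' equally often (50% crossing),
    found by linear interpolation between the bracketing anchors."""
    i = np.searchsorted(p_long, 0.5)          # first anchor with p >= .5
    x0, x1 = durs[i - 1], durs[i]
    y0, y1 = p_long[i - 1], p_long[i]
    return x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0)

bp_control = bisection_point(durations, p_neut_neut)
bp_negative = bisection_point(durations, p_neg_neut)
print(f"BP control: {bp_control:.0f} ms, negative-first: {bp_negative:.0f} ms")
```

With these hypothetical numbers the negative-first curve crosses 50% at 580 ms versus 640 ms for the control condition: a 60 ms leftward shift of the kind the abstract reports.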
Affiliation(s)
- Keri Anne Gladhill: Psychology Department, George Mason University, Fairfax, Virginia, United States of America
- Giovanna Mioni: Department of General Psychology, University of Padova, Padova, Italy
- Martin Wiener: Psychology Department, George Mason University, Fairfax, Virginia, United States of America
9. Li Y, Zhang M, Liu S, Luo W. EEG decoding of multidimensional information from emotional faces. Neuroimage 2022; 258:119374. [PMID: 35700944; DOI: 10.1016/j.neuroimage.2022.119374]
Abstract
Humans can detect and recognize faces quickly, but there has been little research on the temporal dynamics with which different dimensions of face information are extracted. The present study aimed to investigate the time course of neural responses to the representation of different dimensions of face information, such as age, gender, emotion, and identity. We used support vector machine decoding to obtain representational dissimilarity matrices of event-related potential responses to different faces for each subject over time. In addition, we performed representational similarity analysis with model representational dissimilarity matrices that contained different dimensions of face information. Three significant findings were observed. First, the extraction of facial emotion occurred before that of facial identity and was sustained over time, an effect specific to the right frontal region. Second, arousal was preferentially extracted before valence during the processing of facial emotional information. Third, different dimensions of face information exhibited representational stability during different periods. In conclusion, these findings reveal the precise temporal dynamics of multidimensional information processing in faces and provide powerful support for computational models of emotional face perception.
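The representational similarity analysis described here compares a neural RDM against model RDMs, one per face dimension. A minimal sketch with a simulated neural RDM; the 4-identity-by-2-emotion face set, weights, and noise level are hypothetical illustrations:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)

# Hypothetical face set: 4 identities x 2 emotions = 8 face images.
faces = [(emo, ident) for ident in range(4) for emo in ("neutral", "fearful")]
pairs = list(zip(*np.triu_indices(len(faces), k=1)))

# Binary model RDMs: 1 where the dimension differs between the two faces.
emotion_model = np.array([float(faces[i][0] != faces[j][0]) for i, j in pairs])
identity_model = np.array([float(faces[i][1] != faces[j][1]) for i, j in pairs])

# Simulated neural RDM at one timepoint, dominated by emotion information
# (in real data the entries could be pairwise decoding accuracies).
neural_rdm = (emotion_model + 0.2 * identity_model
              + 0.1 * rng.standard_normal(len(pairs)))

# Rank-correlate the neural RDM with each model RDM.
rho_emotion, _ = spearmanr(neural_rdm, emotion_model)
rho_identity, _ = spearmanr(neural_rdm, identity_model)
print(f"rho(emotion) = {rho_emotion:.2f}, rho(identity) = {rho_identity:.2f}")
```

Running this comparison at every timepoint and plotting each model's correlation over time yields the kind of extraction time course the study reports (e.g. emotion information rising before identity information).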
Affiliation(s)
- Yiwen Li: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- Mingming Zhang: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- Shuaicheng Liu: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- Wenbo Luo: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
10. Jin H, Hayward WG, Corballis PM. All-or-none neural mechanisms underlying face categorization: evidence from the N170. Cereb Cortex 2022; 33:777-793. [PMID: 35288746; PMCID: PMC9890453; DOI: 10.1093/cercor/bhac101]
Abstract
Categorization of visual stimuli is an intrinsic aspect of human perception. Whether the cortical mechanisms underlying categorization operate in an all-or-none or graded fashion remains unclear. In this study, we addressed this issue in the context of the face-specific N170. Specifically, we investigated whether N170 amplitudes grade with the amount of face information available in an image, or whether a full response is generated whenever a face is perceived. We employed linear mixed-effects modeling to inspect the dependency of N170 amplitudes on stimulus properties and duration, and their relationships to participants' subjective perception. Consistent with previous studies, we found a stronger N170 evoked by faces presented for longer durations. However, further analysis with equivalence tests revealed that this duration effect was eliminated when only faces perceived with high confidence were considered. Therefore, previous evidence supporting the graded hypothesis is more likely to be an artifact of mixing heterogeneous "all" and "none" trial types in signal averaging. These results support the hypothesis that the N170 is generated in an all-or-none manner and, by extension, suggest that categorization of faces may follow a similar pattern.
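The equivalence-test logic used here differs from a standard t-test: instead of asking whether two means differ, it asks whether their difference is reliably smaller than a chosen margin. A minimal two-one-sided-tests (TOST) sketch on simulated amplitudes; the means, equivalence margin, and trial counts are hypothetical and do not come from the study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical high-confidence N170 amplitudes (microvolts) for faces shown
# briefly vs. long; near-identical means mimic the "all-or-none" scenario.
short_dur = rng.normal(-4.0, 1.0, 80)
long_dur = rng.normal(-4.1, 1.0, 80)

def tost(a, b, bound):
    """Two one-sided t-tests: is |mean(a) - mean(b)| reliably below `bound`?"""
    diff = a.mean() - b.mean()
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    df = len(a) + len(b) - 2
    p_lower = stats.t.sf((diff + bound) / se, df)   # H0: diff <= -bound
    p_upper = stats.t.cdf((diff - bound) / se, df)  # H0: diff >= +bound
    return max(p_lower, p_upper)  # small p => means statistically equivalent

p_equiv = tost(short_dur, long_dur, bound=1.0)  # +/-1 uV equivalence margin
print(f"TOST equivalence p = {p_equiv:.4f}")
```

A significant TOST result supports equivalence of the two duration conditions, the pattern the all-or-none account predicts for trials perceived with high confidence.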
Affiliation(s)
- Haiyang Jin (corresponding author): Department of Psychology, New York University Abu Dhabi, Saadiyat Island, Abu Dhabi, United Arab Emirates
- William G Hayward: Department of Psychology, University of Hong Kong, Centennial Campus, Pokfulam Road, Hong Kong, China
- Paul M Corballis: School of Psychology, University of Auckland, 23 Symonds Street, Auckland Central, Auckland 1010, New Zealand; Centre for Brain Research, University of Auckland, 85 Park Road, Grafton, Auckland 1023, New Zealand
11. Csizmadia P, Czigler I, Nagy B, Gaál ZA. Does Creativity Influence Visual Perception? - An Event-Related Potential Study With Younger and Older Adults. Front Psychol 2021; 12:742116. [PMID: 34733213; PMCID: PMC8558308; DOI: 10.3389/fpsyg.2021.742116]
Abstract
We do not know enough about the cognitive background of creativity despite its significance. Using an active oddball paradigm with unambiguous and ambiguous portrait paintings as the standard stimuli, our aim was to examine whether creativity in the figural domain influences the perception of visual stimuli, which stages of visual processing it affects, and whether healthy aging has an effect on these processes. We investigated event-related potentials (ERPs) and applied ERP decoding analyses in four groups: younger less creative, younger creative, older less creative, and older creative adults. Early visual processing did not differ between the creativity groups. In the later ERP stages, the amplitude was larger for the creative than for the less creative groups between 300 and 500 ms. The stimulus types were clearly distinguishable: within the 300-500 ms range the amplitude was larger for ambiguous than for unambiguous paintings, but in the traditional ERP analysis this difference was observable only in the younger groups, not the older ones; the older groups showed the difference only with decoding analysis. Our results could not prove that visual creativity influences the early stage of perception, but they showed that creativity affected stimulus processing in the 300-500 ms range, indexing differences in top-down control and more flexible cognitive control in the younger creative group.
Affiliation(s)
- Petra Csizmadia: Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary; Doctoral School of Psychology (Cognitive Science), Budapest University of Technology and Economics, Budapest, Hungary
- István Czigler: Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary
- Boglárka Nagy: Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary; Doctoral School of Psychology (Cognitive Science), Budapest University of Technology and Economics, Budapest, Hungary
- Zsófia Anna Gaál: Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary
12. Guo LL, Oghli YS, Frost A, Niemeier M. Multivariate Analysis of Electrophysiological Signals Reveals the Time Course of Precision Grasps Programs: Evidence for Nonhierarchical Evolution of Grasp Control. J Neurosci 2021; 41:9210-9222. [PMID: 34551938; PMCID: PMC8570828; DOI: 10.1523/jneurosci.0992-21.2021]
Abstract
Current understanding of the neural processes underlying human grasping suggests that grasp computations involve gradients of higher- to lower-level representations and, relatedly, visual to motor processes. However, it is unclear whether these processes evolve in a strictly canonical manner from higher to intermediate and to lower levels, given that this knowledge importantly relies on functional imaging, which lacks temporal resolution. To examine grasping in fine temporal detail, here we used multivariate EEG analysis. We asked participants to grasp objects while controlling the time at which crucial elements of grasp programs were specified. We first specified the orientation with which participants should grasp objects, and only after a delay did we instruct participants about which effector to use to grasp, either the right or the left hand. We also asked participants to grasp with both hands because bimanual and left-hand grasping share intermediate-level grasp representations. We observed that grasp programs evolved in a canonical manner from visual representations, which were independent of effectors, to motor representations that distinguished between effectors. However, we found that intermediate representations of effectors that partially distinguished between effectors arose after representations that distinguished among all effector types. Our results show that grasp computations do not proceed in a strictly hierarchically canonical fashion, highlighting the importance of the fine temporal resolution of EEG for a comprehensive understanding of human grasp control.

SIGNIFICANCE STATEMENT: A long-standing assumption about grasp computations is that grasp representations progress from higher- to lower-level control in a regular, or canonical, fashion. Here, we combined EEG and multivariate pattern analysis to characterize the temporal dynamics of grasp representations while participants viewed objects and were subsequently cued to execute an unimanual or bimanual grasp. Interrogation of the temporal dynamics revealed that lower-level effector representations emerged before intermediate levels of grasp representations, thereby suggesting a partially noncanonical progression from higher to lower and then to intermediate-level grasp control.
Affiliation(s)
- Lin Lawrence Guo: Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
- Yazan Shamli Oghli: Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
- Adam Frost: Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
- Matthias Niemeier: Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada; Centre for Vision Research, York University, Toronto, Ontario M4N 3M6, Canada; Vision: Science to Applications, York University, Toronto, Ontario M3J 1P3, Canada
13. Is the n-back task a measure of unstructured working memory capacity? Towards understanding its connection to other working memory tasks. Acta Psychol (Amst) 2021; 219:103398. [PMID: 34419689; DOI: 10.1016/j.actpsy.2021.103398]
Abstract
Working memory is fundamental to human cognitive functioning, and it is often measured with the n-back task. However, it is not clear whether the n-back task is a valid measure of working memory. Importantly, previous studies have found poor correlations with measures of complex span, whereas a recent study (Frost et al., 2019) showed that n-back performance was correlated with a transsaccadic memory task but dissociated from performance on the change detection task, a well-accepted measure of working memory capacity. To test whether capacity is involved in the n-back task, we correlated a spatial version of the task with different versions of the change detection task. Experiment 1 introduced perceptual and cognitive disruptions to the change detection task. These disruptions impacted task performance; however, all versions of the change detection task remained highly correlated with one another, whereas none correlated significantly with the n-back task. Experiment 2 removed spatial and non-spatial context from the change detection task. This produced a correlation with n-back. Our results indicate that the n-back task is supported by faculties similar to those that support change detection, but that this commonality is hidden when contextual information is available to be exploited in a change detection task such that structured representations can form. We suggest that n-back might be a valid measure of working memory, and that the ability to exploit contextual information is an important faculty captured by some versions of the change detection task.
14. Quek GL, Rossion B, Liu-Shuang J. Critical information thresholds underlying generic and familiar face categorisation at the same face encounter. Neuroimage 2021; 243:118481. [PMID: 34416398 DOI: 10.1016/j.neuroimage.2021.118481] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2021] [Revised: 08/06/2021] [Accepted: 08/17/2021] [Indexed: 11/29/2022] Open
Abstract
Seeing a face in the real world provokes a host of automatic categorisations related to sex, emotion, identity, and more. Such individual facets of human face recognition have been extensively examined using overt categorisation judgements, yet their relative informational dependencies during the same face encounter are comparatively unknown. Here we used EEG to assess how increasing access to sensory input governs two ecologically relevant brain functions elicited by seeing a face: distinguishing faces from nonfaces, and recognising people we know. Observers viewed a large set of natural images that progressively increased in either image duration (experiment 1) or spatial frequency content (experiment 2). We show that in the absence of an explicit categorisation task, the human brain requires less sensory input to categorise a stimulus as a face than it does to recognise whether that face is familiar. Moreover, whereas sensory thresholds for distinguishing faces/nonfaces were remarkably consistent across observers, there was high inter-individual variability in the lower informational bound for familiar face recognition, underscoring the neurofunctional distinction between these categorisation functions. By i) indexing a form of face recognition that goes beyond simple low-level differences between categories, and ii) tapping multiple recognition functions elicited by the same face encounters, the information minima we report bear high relevance to real-world face encounters, where the same stimulus is categorised along multiple dimensions at once. Thus, our finding of lower informational requirements for generic vs. familiar face recognition constitutes some of the strongest evidence to date for the intuitive notion that sensory input demands should be lower for recognising face category than face identity.
Affiliation(s)
- Genevieve L Quek
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands; School of Psychology, The University of Sydney, Sydney, Australia.
- Bruno Rossion
- Institute of Research in Psychology (IPSY), University of Louvain, Louvain, Belgium; Université de Lorraine, CNRS, CRAN, F-54000 Nancy, France; Université de Lorraine, CHRU-Nancy, Service de Neurologie, Lorraine F-54000, France
- Joan Liu-Shuang
- Institute of Research in Psychology (IPSY), University of Louvain, Louvain, Belgium
15. Ambrus GG, Eick CM, Kaiser D, Kovács G. Getting to Know You: Emerging Neural Representations during Face Familiarization. J Neurosci 2021; 41:5687-5698. [PMID: 34031162 PMCID: PMC8244976 DOI: 10.1523/jneurosci.2466-20.2021] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2020] [Revised: 02/22/2021] [Accepted: 04/05/2021] [Indexed: 11/21/2022] Open
Abstract
The successful recognition of familiar persons is critical for social interactions. Despite extensive research on the neural representations of familiar faces, we know little about how such representations unfold as someone becomes familiar. In three EEG experiments on human participants of both sexes, we elucidated how representations of face familiarity and identity emerge from different qualities of familiarization: brief perceptual exposure (Experiment 1), extensive media familiarization (Experiment 2), and real-life personal familiarization (Experiment 3). Time-resolved representational similarity analysis revealed that familiarization quality has a profound impact on representations of face familiarity: they were strongly visible after personal familiarization, weaker after media familiarization, and absent after perceptual familiarization. Across all experiments, we found no enhancement of face identity representation, suggesting that familiarity and identity representations emerge independently during face familiarization. Our results emphasize the importance of extensive, real-life familiarization for the emergence of robust face familiarity representations, constraining models of face perception and recognition memory.
SIGNIFICANCE STATEMENT: Despite extensive research on the neural representations of familiar faces, we know little about how such representations unfold as someone becomes familiar. To elucidate how face representations change as we get familiar with someone, we conducted three EEG experiments where we used brief perceptual exposure, extensive media familiarization, or real-life personal familiarization. Using multivariate representational similarity analysis, we demonstrate that the method of familiarization has a profound impact on face representations, and emphasize the importance of real-life familiarization. Additionally, familiarization shapes representations of face familiarity and identity differently: as we get to know someone, familiarity signals seem to appear before the formation of identity representations.
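The time-resolved representational similarity analysis used in studies like this one correlates, at every timepoint, a neural dissimilarity matrix built from channel patterns with a model dissimilarity matrix. A minimal sketch on simulated data (condition structure, channel counts, and the familiarity model are all hypothetical stand-ins, not this study's design):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Hypothetical condition-averaged EEG: conditions x channels x timepoints.
rng = np.random.default_rng(1)
n_conditions, n_channels, n_times = 8, 32, 50
erp = rng.normal(size=(n_conditions, n_channels, n_times))

# Model RDM (assumed): conditions 0-3 unfamiliar, 4-7 familiar; condition
# pairs that differ in familiarity are maximally dissimilar.
familiarity = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)
model_rdm = pdist(familiarity[:, None], metric="cityblock")

# At each timepoint, build a neural RDM from the channel patterns and
# rank-correlate it with the model RDM.
rsa_timecourse = np.empty(n_times)
for t in range(n_times):
    neural_rdm = pdist(erp[:, :, t], metric="correlation")
    rho, _ = spearmanr(neural_rdm, model_rdm)
    rsa_timecourse[t] = rho
```

The resulting timecourse is then typically tested against zero across participants to find when the model's structure is present in the neural data.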
Affiliation(s)
- Géza Gergely Ambrus
- Institute of Psychology, Friedrich Schiller University Jena, Leutragraben 1, D-07743 Jena, Germany
- Charlotta Marina Eick
- Institute of Psychology, Friedrich Schiller University Jena, Leutragraben 1, D-07743 Jena, Germany
- Daniel Kaiser
- Department of Psychology, University of York, Heslington, York, YO10 5DD, United Kingdom
- Gyula Kovács
- Institute of Psychology, Friedrich Schiller University Jena, Leutragraben 1, D-07743 Jena, Germany
16. Matyjek M, Kroczek B, Senderecka M. Socially induced negative affective knowledge modulates early face perception but not gaze cueing of attention. Psychophysiology 2021; 58:e13876. [PMID: 34110019 PMCID: PMC8459251 DOI: 10.1111/psyp.13876] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2020] [Revised: 05/11/2021] [Accepted: 05/12/2021] [Indexed: 12/12/2022]
Abstract
Prior affective and social knowledge about other individuals has been shown to modulate perception of their faces and gaze-related attentional processes. However, it remains unclear whether emotionally charged knowledge acquired through interactive social learning also modulates face processing and attentional control. Thus, the aim of this study was to test whether affective knowledge induced through social interactions in a naturalistic exchange game can influence early stages of face processing and attentional shifts in a subsequent gaze-cueing task. As indicated by self-reported ratings, the game was successful in inducing valenced affective knowledge towards positive and negative players. In the subsequent task, in which the locations of future targets were cued by the gaze of the game players, we observed enhanced early neural activity (larger amplitude of the P1 component) in response to a photograph of the negative player. This indicates that negative affective knowledge about an individual indeed modulates very early stages of the processing of this individual's face. Our study contributes to the existing literature by providing further evidence for the saliency of interactive social exchange paradigms that are used to induce affective knowledge. Moreover, it extends the previous research by presenting a very early modulation of perception by socially learned affective knowledge. Importantly, it also offers increased ecological validity of the findings due to the use of naturalistic social exchange in the study design. This research complements previous evidence that experimentally induced socio-affective knowledge about other individuals modulates processing of their faces, and shows that negative (but not positive) affect enhances very early face processing (the P1). Importantly, we provide an effective affect-induction tool, an interactive social exchange game, which offers increased social ecological validity in experimental settings.
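The P1 effect reported here is a component-amplitude measure. A minimal sketch of how such a mean amplitude is typically extracted from an epoched ERP (the sampling rate, channel count, baseline, and 80-130 ms window are assumed, generic values, not this study's parameters):

```python
import numpy as np

# Hypothetical single-condition ERP: channels x timepoints, 500 Hz sampling,
# 100 ms pre-stimulus baseline (all values simulated for illustration).
srate = 500.0
times = np.arange(-0.1, 0.5, 1.0 / srate)
rng = np.random.default_rng(2)
erp = rng.normal(0.0, 1.0, (4, times.size))  # e.g., 4 occipital channels

# Baseline-correct each channel, then average over a typical P1 window.
baseline = times < 0.0
erp = erp - erp[:, baseline].mean(axis=1, keepdims=True)
p1_window = (times >= 0.08) & (times <= 0.13)
p1_amplitude = erp[:, p1_window].mean()
```

Condition differences (e.g., negative vs. positive player) are then tested on these per-participant mean amplitudes.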
Affiliation(s)
- Bartłomiej Kroczek
- Faculty of Mathematics and Computer Science, Jagiellonian University, Kraków, Poland
17. Bayer M, Berhe O, Dziobek I, Johnstone T. Rapid Neural Representations of Personally Relevant Faces. Cereb Cortex 2021; 31:4699-4708. [PMID: 33987643 DOI: 10.1093/cercor/bhab116] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2020] [Revised: 03/15/2021] [Accepted: 04/08/2021] [Indexed: 01/27/2023] Open
Abstract
The faces of those most personally relevant to us are our primary source of social information, making their timely perception a priority. Recent research indicates that gender, age and identity of faces can be decoded from EEG/MEG data within 100 ms. Yet, the time course and neural circuitry involved in representing the personal relevance of faces remain unknown. We applied simultaneous EEG-fMRI to examine neural responses to emotional faces of female participants' romantic partners, friends, and a stranger. Combining EEG and fMRI in cross-modal representational similarity analyses, we provide evidence that representations of personal relevance start prior to structural encoding at 100 ms, with correlated representations in visual cortex, but also in prefrontal and midline regions involved in value representation, and monitoring and recall of self-relevant information. Our results add to an emerging body of research that suggests that models of face perception need to be updated to account for rapid detection of personal relevance in cortical circuitry beyond the core face processing network.
Affiliation(s)
- Mareike Bayer
- Berlin School of Mind and Brain, Department of Psychology, Humboldt-Universität zu Berlin, 10999 Berlin, Germany
- Oksana Berhe
- Berlin School of Mind and Brain, Department of Psychology, Humboldt-Universität zu Berlin, 10999 Berlin, Germany; Department of Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim/Heidelberg University, 68159 Mannheim, Germany
- Isabel Dziobek
- Berlin School of Mind and Brain, Department of Psychology, Humboldt-Universität zu Berlin, 10999 Berlin, Germany
- Tom Johnstone
- Centre for Integrative Neuroscience and Neurodynamics, School of Psychology and Clinical Language Sciences, The University of Reading, RG6 6AH Reading, UK; School of Health Sciences, Swinburne University of Technology, 3184 Hawthorn, Australia
18. Word and Face Recognition Processing Based on Response Times and Ex-Gaussian Components. ENTROPY 2021; 23:e23050580. [PMID: 34066797 PMCID: PMC8151452 DOI: 10.3390/e23050580] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/04/2021] [Revised: 04/29/2021] [Accepted: 05/05/2021] [Indexed: 11/17/2022]
Abstract
The face is a fundamental feature of our identity. In humans, the existence of specialized processing modules for faces is now widely accepted. However, identifying the processes involved for proper names is more problematic. The aim of the present study was to examine which of the two processes occurs earlier and whether social abilities have an influence. We selected 100 university students divided into two groups: Spanish and USA students. They had to recognize famous faces or names using a masked priming task. An analysis of variance on the reaction times (RTs) was used to determine whether significant differences could be observed between word and face recognition and between the Spanish and USA groups. Additionally, to examine the role of outliers, the RT distributions were modeled with an exponentially modified (ex-)Gaussian. Famous faces were recognized faster than names, and differences were observed between Spanish and North American participants, but not for unknown distracting faces. The current results suggest that responses to faces might be faster than to names, which supports the idea that the two are processed differently.
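The ex-Gaussian model in the title describes each RT distribution as a Gaussian component (mu, sigma) convolved with an exponential tail (tau), with tau capturing the slow, outlier-prone tail. A sketch of fitting it with SciPy's `exponnorm`, which parameterizes the same distribution as (K = tau/sigma, loc = mu, scale = sigma); all parameter values below are invented for illustration:

```python
import numpy as np
from scipy.stats import exponnorm

# Simulated reaction times in ms: normal component plus exponential tail
# (hypothetical parameters, not values from the study).
rng = np.random.default_rng(3)
mu, sigma, tau = 500.0, 50.0, 100.0
rts = rng.normal(mu, sigma, 1000) + rng.exponential(tau, 1000)

# Maximum-likelihood fit; map SciPy's (K, loc, scale) back to (mu, sigma, tau).
K, loc, scale = exponnorm.fit(rts)
mu_hat, sigma_hat, tau_hat = loc, scale, K * scale
```

Comparing the fitted mu and tau across conditions separates shifts of the whole distribution from changes confined to the slow tail.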
19. van Driel J, Olivers CNL, Fahrenfort JJ. High-pass filtering artifacts in multivariate classification of neural time series data. J Neurosci Methods 2021; 352:109080. [PMID: 33508412 DOI: 10.1016/j.jneumeth.2021.109080] [Citation(s) in RCA: 34] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2019] [Revised: 01/13/2021] [Accepted: 01/15/2021] [Indexed: 12/11/2022]
Abstract
BACKGROUND: Traditionally, EEG/MEG data are high-pass filtered and baseline-corrected to remove slow drifts. Minor deleterious effects of high-pass filtering in traditional time-series analysis have been well documented, including temporal displacements. However, its effects on time-resolved multivariate pattern classification analyses (MVPA) are largely unknown.
NEW METHOD: To prevent potential displacement effects, we extend an alternative method of removing slow drift noise, robust detrending, with a procedure in which we mask out all cortical events from each trial. We refer to this method as trial-masked robust detrending.
RESULTS: In both real and simulated EEG data of a working memory experiment, we show that both high-pass filtering and standard robust detrending create artifacts that result in the displacement of multivariate patterns into activity-silent periods, particularly apparent in temporal generalization analyses, and especially in combination with baseline correction. We show that trial-masked robust detrending is free from such displacements.
COMPARISON WITH EXISTING METHOD(S): Temporal displacement may emerge even with modest filter cut-off settings such as 0.05 Hz, and even in regular robust detrending. However, trial-masked robust detrending results in artifact-free decoding without displacements. Baseline correction may unwittingly obfuscate spurious decoding effects and displace them to the rest of the trial.
CONCLUSIONS: Decoding analyses benefit from trial-masked robust detrending, without the unwanted side effects introduced by filtering or regular robust detrending. However, for sufficiently clean data sets and sufficiently strong signals, no filtering or detrending at all may work adequately. Implications for other types of data are discussed, followed by a number of recommendations.
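The masking idea can be illustrated in miniature: fit a slow trend only on samples outside the event window, then subtract that trend everywhere. This toy sketch uses a single weighted polynomial fit on one simulated channel, not the authors' full iterative robust-detrending procedure, and every signal parameter below is invented:

```python
import numpy as np

# One simulated channel: slow drift + a boxcar "cortical event" + noise.
rng = np.random.default_rng(4)
n_samples = 1000
t = np.linspace(0.0, 10.0, n_samples)
drift = 5.0 * t - 0.4 * t**2                     # slow drift to remove
evoked = np.where((t > 4.0) & (t < 5.0), 3.0, 0.0)
signal = drift + evoked + rng.normal(0.0, 0.2, n_samples)

# Mask the event: zero-weight those samples so they cannot bias the fit.
mask = (t > 3.5) & (t < 5.5)
weights = np.where(mask, 0.0, 1.0)

# Fit a polynomial to the unmasked samples only; subtract it from everything.
coefs = np.polynomial.polynomial.polyfit(t, signal, deg=3, w=weights)
trend = np.polynomial.polynomial.polyval(t, coefs)
detrended = signal - trend
```

Because the event never enters the fit, the evoked deflection survives detrending intact instead of being partially absorbed into (and smeared out of) the estimated trend, which is the displacement artifact the paper describes for unmasked detrending and high-pass filtering.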
Affiliation(s)
- Joram van Driel
- Institute for Brain and Behaviour Amsterdam, Vrije Universiteit Amsterdam, the Netherlands; Department of Experimental and Applied Psychology - Cognitive Psychology, Vrije Universiteit Amsterdam, the Netherlands; Faculty of Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, the Netherlands
- Christian N L Olivers
- Institute for Brain and Behaviour Amsterdam, Vrije Universiteit Amsterdam, the Netherlands; Department of Experimental and Applied Psychology - Cognitive Psychology, Vrije Universiteit Amsterdam, the Netherlands; Faculty of Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, the Netherlands
- Johannes J Fahrenfort
- Institute for Brain and Behaviour Amsterdam, Vrije Universiteit Amsterdam, the Netherlands; Department of Experimental and Applied Psychology - Cognitive Psychology, Vrije Universiteit Amsterdam, the Netherlands; Faculty of Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, the Netherlands; Department of Psychology, University of Amsterdam, Amsterdam 1001 NK, the Netherlands; Amsterdam Brain and Cognition (ABC), University of Amsterdam, Amsterdam 1001 NK, the Netherlands.
20. Bae GY. The Time Course of Face Representations during Perception and Working Memory Maintenance. Cereb Cortex Commun 2020; 2:tgaa093. [PMID: 34296148 PMCID: PMC8152903 DOI: 10.1093/texcom/tgaa093] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2020] [Revised: 11/06/2020] [Accepted: 12/09/2020] [Indexed: 12/16/2022] Open
Abstract
Successful social communication requires accurate perception and maintenance of invariant (face identity) and variant (facial expression) aspects of faces. While numerous studies have investigated how face identity and expression information is extracted from faces during perception, less is known about the temporal dynamics of this face information during perception and working memory (WM) maintenance. To investigate how face identity and expression information evolve over time, I recorded electroencephalography (EEG) while participants were performing a face WM task where they remembered a face image and reported either the identity or the expression of the face image after a short delay. Using multivariate event-related potential (ERP) decoding analyses, I found that the two types of information exhibited dissociable temporal dynamics: Although face identity was decoded better than facial expression during perception, facial expression was decoded better than face identity during WM maintenance. Follow-up analyses suggested that this temporal dissociation was driven by differential maintenance mechanisms: Face identity information was maintained in a more "activity-silent" manner compared to facial expression information, presumably because invariant face information does not need to be actively tracked in the task. Together, these results provide important insights into the temporal evolution of face information during perception and WM maintenance.
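Time-resolved ERP decoding of the kind described above trains and cross-validates a classifier separately at every timepoint of the epoch. A minimal sketch on simulated epochs (the data, the injected class effect, and the LDA classifier are all assumptions for illustration, not this study's pipeline):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Hypothetical epoched EEG: trials x channels x timepoints, with one binary
# label per trial (e.g., identity A vs. identity B).
rng = np.random.default_rng(5)
n_trials, n_channels, n_times = 100, 32, 30
X = rng.normal(size=(n_trials, n_channels, n_times))
y = np.repeat([0, 1], n_trials // 2)

# Inject a class difference from timepoint 15 onward so decoding can succeed.
X[y == 1, :, 15:] += 0.8

# Train and cross-validate a classifier independently at every timepoint.
accuracy = np.empty(n_times)
for tp in range(n_times):
    clf = LinearDiscriminantAnalysis()
    accuracy[tp] = cross_val_score(clf, X[:, :, tp], y, cv=5).mean()
```

Plotting `accuracy` against time shows when the decoded information becomes available; above-chance decoding before the effect onset would indicate leakage (e.g., from filtering artifacts of the kind discussed in entry 19).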
Affiliation(s)
- Gi-Yeul Bae
- Department of Psychology, Arizona State University, Tempe, AZ 85287, USA
21. Kovács G. Getting to Know Someone: Familiarity, Person Recognition, and Identification in the Human Brain. J Cogn Neurosci 2020; 32:2205-2225. [DOI: 10.1162/jocn_a_01627] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/29/2023]
Abstract
In our everyday life, we continuously get to know people, predominantly through their faces. Several neuroscientific experiments have shown that familiarization changes the behavioral processing and underlying neural representation of the faces of others. Here, we propose a model of the process of how we actually get to know someone. First, purely visual familiarization with unfamiliar faces occurs. Second, the accumulation of associated, nonsensory information refines the person representation, and finally, one reaches a stage of effortless identification of very well-known persons. We offer an overview of neuroimaging studies, first evaluating how and in what ways the processing of unfamiliar and familiar faces differs and, second, by analyzing fMRI adaptation and multivariate pattern analysis results, estimating where identity-specific representation is found in the brain. The available neuroimaging data suggest that different aspects of the information emerge gradually, within the same network, as one gets more and more familiar with a person. We propose a novel model of familiarity and identity processing, in which the differential activation of long-term memory and emotion-processing areas is essential for correct identification.
22. Rubianes M, Muñoz F, Casado P, Hernández-Gutiérrez D, Jiménez-Ortega L, Fondevila S, Sánchez J, Martínez-de-Quel O, Martín-Loeches M. Am I the same person across my life span? An event-related brain potentials study of the temporal perspective in self-identity. Psychophysiology 2020; 58:e13692. [PMID: 32996616 DOI: 10.1111/psyp.13692] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2020] [Revised: 07/24/2020] [Accepted: 08/28/2020] [Indexed: 11/28/2022]
Abstract
While self-identity recognition has been extensively explored, less is known about how self-identity changes as a function of time. The present work aims to explore the influence of temporal perspective on self-identity by studying event-related brain potentials (ERPs) associated with face processing. To this purpose, participants performed a recognition task in two blocks with different task demands: (i) identity recognition (self, close friend, unknown), and (ii) life-stage recognition (adulthood [current], adolescence, and childhood). The results showed that the N170 component was sensitive to changes in the global face configuration when comparing adulthood with other life stages. The N250 was the earliest neural marker discriminating the self from other identities and may be related to a preferential deployment of attentional resources to recognizing one's own face. The P3 was a robust index of self-specificity, reflecting stimulus categorization and presumably adding an emotional value. Results of particular interest emerged for the subsequent late positive complex (LPC). The larger LPC amplitude to the self-face was probably associated with greater personal significance. The LPC, therefore, was able to distinguish the continuity of the self over time (i.e., between the current self and past selves). Likewise, this component could also discriminate, at each life stage, the self-identity from other identities (e.g., between the past self and a past close friend). This confirms a remarkable role for the LPC in reflecting higher self-relevance processes. Taken together, the neural representation of oneself (i.e., "I am myself") seems to be stable yet also updated across time.
Affiliation(s)
- Miguel Rubianes
- Center UCM-ISCIII for Human Evolution and Behavior, Madrid, Spain; Psychobiology & Methods for the Behavioral Sciences Department, Complutense University of Madrid, Madrid, Spain
- Francisco Muñoz
- Center UCM-ISCIII for Human Evolution and Behavior, Madrid, Spain; Psychobiology & Methods for the Behavioral Sciences Department, Complutense University of Madrid, Madrid, Spain
- Pilar Casado
- Center UCM-ISCIII for Human Evolution and Behavior, Madrid, Spain; Psychobiology & Methods for the Behavioral Sciences Department, Complutense University of Madrid, Madrid, Spain
- Laura Jiménez-Ortega
- Center UCM-ISCIII for Human Evolution and Behavior, Madrid, Spain; Psychobiology & Methods for the Behavioral Sciences Department, Complutense University of Madrid, Madrid, Spain
- Sabela Fondevila
- Center UCM-ISCIII for Human Evolution and Behavior, Madrid, Spain; Psychobiology & Methods for the Behavioral Sciences Department, Complutense University of Madrid, Madrid, Spain
- José Sánchez
- Center UCM-ISCIII for Human Evolution and Behavior, Madrid, Spain
- Manuel Martín-Loeches
- Center UCM-ISCIII for Human Evolution and Behavior, Madrid, Spain; Psychobiology & Methods for the Behavioral Sciences Department, Complutense University of Madrid, Madrid, Spain
23. Nestor A, Lee ACH, Plaut DC, Behrmann M. The Face of Image Reconstruction: Progress, Pitfalls, Prospects. Trends Cogn Sci 2020; 24:747-759. [PMID: 32674958 PMCID: PMC7429291 DOI: 10.1016/j.tics.2020.06.006] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2020] [Revised: 05/27/2020] [Accepted: 06/15/2020] [Indexed: 10/23/2022]
Abstract
Recent research has demonstrated that neural and behavioral data acquired in response to viewing face images can be used to reconstruct the images themselves. However, the theoretical implications, promises, and challenges of this direction of research remain unclear. We evaluate the potential of this research for elucidating the visual representations underlying face recognition. Specifically, we outline complementary and converging accounts of the visual content, the representational structure, and the neural dynamics of face processing. We illustrate how this research addresses fundamental questions in the study of normal and impaired face recognition, and how image reconstruction provides a powerful framework for uncovering face representations, for unifying multiple types of empirical data, and for facilitating both theoretical and methodological progress.
Affiliation(s)
- Adrian Nestor
- Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada.
- Andy C H Lee
- Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada; Rotman Research Institute, Baycrest Centre, Toronto, Ontario, Canada
- David C Plaut
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA; Carnegie Mellon Neuroscience Institute, Pittsburgh, PA, USA
- Marlene Behrmann
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA; Carnegie Mellon Neuroscience Institute, Pittsburgh, PA, USA
24. De Pascalis V, Cirillo G, Vecchio A, Ciorciari J. Event-Related Potential to Conscious and Nonconscious Emotional Face Perception in Females with Autistic-Like Traits. J Clin Med 2020; 9:jcm9072306. [PMID: 32708073 PMCID: PMC7408869 DOI: 10.3390/jcm9072306] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2020] [Revised: 07/15/2020] [Accepted: 07/16/2020] [Indexed: 11/16/2022] Open
Abstract
This study explored the electrocortical correlates of conscious and nonconscious perception of emotionally laden faces in neurotypical adult women with varying levels of autistic-like traits (Autism Spectrum Quotient, AQ). Event-related potentials (ERPs) were recorded during the viewing of backward-masked images of happy, neutral, and sad faces presented either below (16 ms; subliminal) or above the level of visual conscious awareness (167 ms; supraliminal). Sad compared to happy faces elicited larger frontal-central N1, N2, and occipital P3 waves. We observed larger N1 amplitudes to sad faces than to happy and neutral faces in High-AQ (but not Low-AQ) scorers. Additionally, High-AQ scorers had a relatively larger P3 at the occipital region to sad faces. Regardless of the AQ score, subliminally perceived emotional faces elicited shorter N1, N2, and P3 latencies than supraliminal faces. Happy and sad faces had shorter N170 latency in the supraliminal than the subliminal condition. High-AQ participants had a longer N1 latency over the occipital region than Low-AQ ones. In Low-AQ individuals (but not in High-AQ ones), emotional recognition with female faces produced a longer N170 latency than with male faces. N4 latency was shorter to female faces than male faces. These findings are discussed in view of their clinical implications and extension to autism.
Affiliation(s)
- Vilfredo De Pascalis
- Department of Psychology, La Sapienza University of Rome, 00185 Rome, Italy
- Giuliana Cirillo
- Department of Psychology, La Sapienza University of Rome, 00185 Rome, Italy
- Arianna Vecchio
- Department of Psychology, La Sapienza University of Rome, 00185 Rome, Italy
- Joseph Ciorciari
- Centre for Mental Health, Department of Psychological Sciences, Swinburne University of Technology, Hawthorn, VIC 3122, Australia
25. Mares I, Ewing L, Farran EK, Smith FW, Smith ML. Developmental changes in the processing of faces as revealed by EEG decoding. Neuroimage 2020; 211:116660. [DOI: 10.1016/j.neuroimage.2020.116660] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2019] [Revised: 01/24/2020] [Accepted: 02/14/2020] [Indexed: 10/25/2022] Open
26. Bae GY, Leonard CJ, Hahn B, Gold JM, Luck SJ. Assessing the information content of ERP signals in schizophrenia using multivariate decoding methods. NEUROIMAGE-CLINICAL 2020; 25:102179. [PMID: 31954988 PMCID: PMC6965722 DOI: 10.1016/j.nicl.2020.102179] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/15/2019] [Revised: 12/22/2019] [Accepted: 01/13/2020] [Indexed: 02/01/2023]
Abstract
This study took multivariate decoding methods that are widely used to assess the nature of neural representations in neurotypical people and applied them to a comparison of people with schizophrenia and matched control subjects. Participants performed a visual working memory task that required remembering 1–5 items from one side of the display and ignoring an equal number of items on the other side of the display. We attempted to decode which side was being held in working memory from the scalp distribution of the ERP activity during the delay period of the working memory task, and we found greater decoding accuracy in people with schizophrenia than in control subjects when a single item was being held in memory. These results support the hyperfocusing hypothesis of cognitive dysfunction in schizophrenia, and they provide an important proof of concept for applying multivariate decoding methods to comparisons of neural representations in psychiatric and non-psychiatric populations.
Multivariate pattern classification (decoding) methods are commonly employed to study mechanisms of neurocognitive processing in typical individuals, where they can be used to quantify the information that is present in single-participant neural signals. These decoding methods are also potentially valuable in determining how the representation of information differs between psychiatric and non-psychiatric populations. Here, we examined ERPs from people with schizophrenia (PSZ) and healthy control subjects (HCS) in a working memory task that involved remembering 1, 3, or 5 items from one side of the display and ignoring the other side. We used the spatial pattern of ERPs to decode which side of the display was being held in working memory. One might expect that decoding accuracy would be inevitably lower in PSZ as a result of increased noise (i.e., greater trial-to-trial variability). However, we found that decoding accuracy was greater in PSZ than in HCS at memory load 1, consistent with previous research in which memory-related ERP signals were larger in PSZ than in HCS at memory load 1. We also observed that decoding accuracy was strongly related to the ratio of the memory-related ERP activity and the noise level. In addition, we found similar noise levels in PSZ and HCS, counter to the expectation that PSZ would exhibit greater trial-to-trial variability. Together, these results demonstrate that multivariate decoding methods can be validly applied at the individual-participant level to understand the nature of impaired cognitive function in a psychiatric population.
Collapse
Affiliation(s)
- Gi-Yeul Bae
- Department of Psychology, Arizona State University, 950 S. McAllister Ave, Tempe, AZ 85287, USA.
- Carly J Leonard
- Department of Psychology, University of Colorado - Denver, USA
- Britta Hahn
- Maryland Psychiatric Research Center and School of Medicine, University of Maryland, USA
- James M Gold
- Maryland Psychiatric Research Center and School of Medicine, University of Maryland, USA
- Steven J Luck
- Center for Mind & Brain and Department of Psychology, University of California - Davis, USA
27

Nemrodov D, Ling S, Nudnou I, Roberts T, Cant JS, Lee ACH, Nestor A. A multivariate investigation of visual word, face, and ensemble processing: Perspectives from EEG-based decoding and feature selection. Psychophysiology 2019; 57:e13511. [DOI: 10.1111/psyp.13511]
Affiliation(s)
- Dan Nemrodov
- Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada
- Shouyu Ling
- Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada
- Ilya Nudnou
- Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada
- Tyler Roberts
- Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada
- Jonathan S. Cant
- Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada
- Andy C. H. Lee
- Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada
- Adrian Nestor
- Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada
28

Multivariate Analysis of Electrophysiological Signals Reveals the Temporal Properties of Visuomotor Computations for Precision Grips. J Neurosci 2019; 39:9585-9597. [PMID: 31628180] [DOI: 10.1523/jneurosci.0914-19.2019]
Abstract
The frontoparietal networks underlying grasping movements have been extensively studied, especially using fMRI. Accordingly, whereas much is known about their cortical locus, much less is known about the temporal dynamics of visuomotor transformations. Here, we show that multivariate EEG analysis allows for detailed insights into the time course of visual and visuomotor computations of precision grasps. Male and female human participants first previewed one of several objects and, upon its reappearance, reached to grasp it with the thumb and index finger along one of its two symmetry axes. Object shape classifiers reached transient accuracies of 70% at ∼105 ms, especially based on scalp sites over visual cortex, dropping to lower levels thereafter. Grasp orientation classifiers relied on a system of occipital-to-frontal electrodes. Their accuracy rose concurrently with shape classification but ramped up more gradually, and the slope of the classification curve predicted individual reaction times. Further, cross-temporal generalization revealed that dynamic shape representation involved early and late neural generators that reactivated one another. In contrast, grasp computations involved a chain of generators attaining a sustained state about 100 ms before movement onset. Our results reveal the progression of visual and visuomotor representations over the course of planning and executing grasp movements.

SIGNIFICANCE STATEMENT: Grasping an object requires the brain to perform visual-to-motor transformations of the object's properties. Although much of the neuroanatomic basis of visuomotor transformations has been uncovered, little is known about its time course. Here, we orthogonally manipulated object visual characteristics and grasp orientation, and used multivariate EEG analysis to reveal that visual and visuomotor computations follow similar time courses but display different properties and dynamics.
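The cross-temporal generalization analysis this abstract relies on, training a classifier at one time point and testing it at every other, can be sketched on simulated data. The window positions, channel count, and effect size below are illustrative assumptions, not values from the study:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Simulated trials: two conditions, 32 channels, 40 time points. A shared
# spatial pattern is active in an early and a late window, mimicking a
# late neural generator reactivating an early one.
n_trials, n_channels, n_times = 120, 32, 40
X = rng.normal(0.0, 1.0, (n_trials, n_channels, n_times))
y = np.repeat([0, 1], n_trials // 2)
pattern = rng.normal(0.0, 1.0, n_channels)
for window in (slice(5, 12), slice(25, 32)):   # early and late windows
    X[y == 1, :, window] += 0.5 * pattern[:, None]

train = np.arange(0, n_trials, 2)   # even trials for training
test = np.arange(1, n_trials, 2)    # odd trials for testing

# Generalization matrix: fit at each training time, score at every test time.
gen = np.empty((n_times, n_times))
for t_train in range(n_times):
    clf = SVC(kernel="linear").fit(X[train, :, t_train], y[train])
    for t_test in range(n_times):
        gen[t_train, t_test] = clf.score(X[test, :, t_test], y[test])

# High off-diagonal accuracy between the two windows (e.g. gen[8, 28])
# indicates that the same pattern recurs, i.e. "reactivation".
print(gen[8, 28])
```

In published work this matrix is usually averaged over cross-validation folds and tested against chance with cluster-based permutation statistics; the sketch omits both for brevity.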
29

Elucidating the Neural Representation and the Processing Dynamics of Face Ensembles. J Neurosci 2019; 39:7737-7747. [PMID: 31413074] [DOI: 10.1523/jneurosci.0471-19.2019]
Abstract
Extensive behavioral work has documented the ability of the human visual system to extract summary representations from face ensembles (e.g., the average identity of a crowd of faces). Yet, the nature of such representations, their underlying neural mechanisms, and their temporal dynamics await elucidation. Here, we examine summary representations of facial identity in human adults (of both sexes) with the aid of pattern analyses, as applied to EEG data, along with behavioral testing. Our findings confirm the ability of the visual system to form such representations both explicitly and implicitly (i.e., with or without the use of specific instructions). We show that summary representations, rather than individual ensemble constituents, can be decoded from neural signals elicited by ensemble perception, we describe the properties of such representations by appeal to multidimensional face space constructs, and we visualize their content through neural-based image reconstruction. Further, we show that the temporal profile of ensemble processing diverges systematically from that of single faces, consistent with a slower, more gradual accumulation of perceptual information. Thus, our findings reveal the representational basis of ensemble processing, its fine-grained visual content, and its neural dynamics.

SIGNIFICANCE STATEMENT: Humans encounter groups of faces, or ensembles, in a variety of environments. Previous behavioral research has investigated how humans process face ensembles as well as the types of summary representations that can be derived from them, such as average emotion, gender, and identity. However, the neural mechanisms mediating these processes are unclear. Here, we demonstrate that ensemble representations, with different facial identity summaries, can be decoded and even visualized from neural data through multivariate analyses. These results provide, to our knowledge, the first detailed investigation into the status and the visual content of neural ensemble representations of faces. Further, the current findings shed light on the temporal dynamics of face ensembles and their relationship with single-face processing.
30

Shehzad Z, McCarthy G. Perceptual and Semantic Phases of Face Identification Processing: A Multivariate Electroencephalography Study. J Cogn Neurosci 2019; 31:1827-1839. [PMID: 31368824] [DOI: 10.1162/jocn_a_01453]
Abstract
Rapid identification of a familiar face requires an image-invariant representation of person identity. A varying sample of familiar faces is necessary to disentangle image-level from person-level processing. We investigated the time course of face identity processing using a multivariate electroencephalography analysis. Participants saw ambient exemplars of celebrity faces that differed in pose, lighting, hairstyle, and so forth. A name prime preceded a face on half of the trials to preactivate person-specific information, whereas a neutral prime was used on the remaining half. This manipulation helped dissociate perceptual- and semantic-based identification. Two time intervals within the post-face onset electroencephalography epoch were sensitive to person identity. The early perceptual phase spanned 110-228 msec and was not modulated by the name prime. The late semantic phase spanned 252-1000 msec and was sensitive to person knowledge activated by the name prime. Within this late phase, the identity response occurred earlier in time (300-600 msec) for the name prime with a scalp topography similar to the FN400 ERP. This may reflect a matching of the person primed in memory with the face on the screen. Following a neutral prime, the identity response occurred later in time (500-800 msec) with a scalp topography similar to the P600f ERP. This may reflect activation of semantic knowledge associated with the identity. Our results suggest that processing of identity begins early (110 msec), with some tolerance to image-level variations, and then progresses in stages sensitive to perceptual and then to semantic features.
31

Smith FW, Smith ML. Decoding the dynamic representation of facial expressions of emotion in explicit and incidental tasks. Neuroimage 2019; 195:261-271. [PMID: 30940611] [DOI: 10.1016/j.neuroimage.2019.03.065]
Abstract
Faces transmit a wealth of important social signals. While previous studies have elucidated the network of cortical regions important for perception of facial expression, and the associated temporal components such as the P100, N170 and EPN, it is still unclear how task constraints may shape the representation of facial expression (or other face categories) in these networks. In the present experiment, we used Multivariate Pattern Analysis (MVPA) with EEG to investigate the neural information available across time about two important face categories (expression and identity) when those categories are either perceived under explicit (e.g. decoding facial expression category from the EEG when the task is on expression) or incidental task contexts (e.g. decoding facial expression category from the EEG when the task is on identity). Decoding of both face categories, across both task contexts, peaked in time-windows spanning 91-170 ms (across posterior electrodes). Peak decoding of expression, however, was not affected by task context, whereas peak decoding of identity was significantly reduced under incidental processing conditions. In addition, errors in EEG decoding correlated with errors in behavioral categorization under explicit processing for both expression and identity; however, under incidental conditions only errors in EEG decoding of expression correlated with behavior. Furthermore, decoding time-courses and the spatial pattern of informative electrodes showed consistently better decoding of identity under explicit conditions at later time periods, with weak evidence for similar effects for decoding of expression at isolated time-windows. Taken together, these results reveal differences and commonalities in the processing of face categories under explicit vs. incidental task contexts and suggest that facial expressions are processed to a richer degree under incidental processing conditions, consistent with prior work indicating the relative automaticity by which emotion is processed. Our work further demonstrates the utility in applying multivariate decoding analyses to EEG for revealing the dynamics of face perception.
Affiliation(s)
- Fraser W Smith
- School of Psychology, University of East Anglia, Norwich, UK.
- Marie L Smith
- School of Psychological Sciences, Birkbeck College, University of London, London, UK
32

Dobs K, Isik L, Pantazis D, Kanwisher N. How face perception unfolds over time. Nat Commun 2019; 10:1258. [PMID: 30890707] [PMCID: PMC6425020] [DOI: 10.1038/s41467-019-09239-1]
Abstract
Within a fraction of a second of viewing a face, we have already determined its gender, age and identity. A full understanding of this remarkable feat will require a characterization of the computational steps it entails, along with the representations extracted at each. Here, we used magnetoencephalography (MEG) to measure the time course of neural responses to faces, thereby addressing two fundamental questions about how face processing unfolds over time. First, using representational similarity analysis, we found that facial gender and age information emerged before identity information, suggesting a coarse-to-fine processing of face dimensions. Second, identity and gender representations of familiar faces were enhanced very early on, suggesting that the behavioral benefit for familiar faces results from tuning of early feed-forward processing mechanisms. These findings start to reveal the time course of face processing in humans, and provide powerful new constraints on computational theories of face perception.
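Representational similarity analysis of the kind used in this study compares a neural dissimilarity matrix against model dissimilarity matrices for candidate dimensions (here, gender before identity). A minimal sketch on simulated data follows; the condition count, pattern size, and single gender model are illustrative assumptions:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(3)

# Simulated sensor patterns for 8 face conditions (two genders, four
# exemplars each); each gender contributes its own shared pattern.
gender = np.repeat([0, 1], 4)
patterns = rng.normal(0.0, 1.0, (8, 50))
for g in (0, 1):
    patterns[gender == g] += 1.5 * rng.normal(0.0, 1.0, 50)

# Neural RDM: correlation distance between all pairs of conditions.
neural_rdm = pdist(patterns, metric="correlation")

# Model RDM for gender: 1 when a pair differs in gender, else 0.
model_rdm = pdist(gender[:, None].astype(float), metric="cityblock")

# RSA score: rank correlation between the two dissimilarity structures.
rho, _ = spearmanr(neural_rdm, model_rdm)
print(rho)
```

In a time-resolved analysis this correlation would be computed at every time point, so that the latency at which each model RDM starts explaining the neural RDM indexes when that dimension emerges.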
Affiliation(s)
- Katharina Dobs
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA.
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA.
- The Center for Brains, Minds, and Machines, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA.
- Leyla Isik
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- The Center for Brains, Minds, and Machines, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Dimitrios Pantazis
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Nancy Kanwisher
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- The Center for Brains, Minds, and Machines, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
33

Ambrus GG, Kaiser D, Cichy RM, Kovács G. The Neural Dynamics of Familiar Face Recognition. Cereb Cortex 2019; 29:4775-4784. [DOI: 10.1093/cercor/bhz010]
Affiliation(s)
- Géza Gergely Ambrus
- Institute of Psychology, Friedrich Schiller University Jena, Leutragraben 1, Jena, Germany
- Daniel Kaiser
- Department of Education and Psychology, Freie Universität Berlin, Habelschwerdter Allee 45, Berlin, Germany
- Radoslaw Martin Cichy
- Department of Education and Psychology, Freie Universität Berlin, Habelschwerdter Allee 45, Berlin, Germany
- Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Luisenstraße 56, Haus 1, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Philippstraße 13/Haus 6, Berlin, Germany
- Gyula Kovács
- Institute of Psychology, Friedrich Schiller University Jena, Leutragraben 1, Jena, Germany
34

Nemrodov D, Behrmann M, Niemeier M, Drobotenko N, Nestor A. Multimodal evidence on shape and surface information in individual face processing. Neuroimage 2019; 184:813-825. [DOI: 10.1016/j.neuroimage.2018.09.083]
35

Dima DC, Perry G, Messaritaki E, Zhang J, Singh KD. Spatiotemporal dynamics in human visual cortex rapidly encode the emotional content of faces. Hum Brain Mapp 2018; 39:3993-4006. [PMID: 29885055] [PMCID: PMC6175429] [DOI: 10.1002/hbm.24226]
Abstract
Recognizing emotion in faces is important in human interaction and survival, yet existing studies do not paint a consistent picture of the neural representation supporting this task. To address this, we collected magnetoencephalography (MEG) data while participants passively viewed happy, angry and neutral faces. Using time-resolved decoding of sensor-level data, we show that responses to angry faces can be discriminated from happy and neutral faces as early as 90 ms after stimulus onset and only 10 ms later than faces can be discriminated from scrambled stimuli, even in the absence of differences in evoked responses. Time-resolved relevance patterns in source space track expression-related information from the visual cortex (100 ms) to higher-level temporal and frontal areas (200-500 ms). Together, our results point to a system optimised for rapid processing of emotional faces and preferentially tuned to threat, consistent with the important evolutionary role that such a system must have played in the development of human social interactions.
Affiliation(s)
- Diana C. Dima
- Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Cardiff CF24 4HQ, United Kingdom
- Gavin Perry
- Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Cardiff CF24 4HQ, United Kingdom
- Eirini Messaritaki
- BRAIN Unit, School of Medicine, Cardiff University, Cardiff CF24 4HQ, United Kingdom
- Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Cardiff CF24 4HQ, United Kingdom
- Jiaxiang Zhang
- Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Cardiff CF24 4HQ, United Kingdom
- Krish D. Singh
- Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Cardiff CF24 4HQ, United Kingdom
36

Bae GY, Luck SJ. Decoding motion direction using the topography of sustained ERPs and alpha oscillations. Neuroimage 2018; 184:242-255. [PMID: 30223063] [DOI: 10.1016/j.neuroimage.2018.09.029]
Abstract
The present study sought to determine whether scalp electroencephalogram (EEG) signals contain decodable information about the direction of motion in random dot kinematograms (RDKs), in which the motion information is spatially distributed and mixed with random noise. Any direction of motion from 0 to 360° was possible, and observers reported the precise direction of motion at the end of a 1500-ms stimulus display. We decoded the direction of motion separately during the motion period (during which motion information was being accumulated) and the report period (during which a shift of attention was necessary to make a fine-tuned direction report). Machine learning was used to decode the precise direction of motion (within ±11.25°) from the scalp distribution of either alpha-band EEG activity or sustained event-related potentials (ERPs). We found that ERP-based decoding was above chance (1/16) during both the stimulus and the report periods, whereas alpha-based decoding was above chance only during the report period. Thus, sustained ERPs contain information about spatially distributed direction-of-motion, providing a new method for observing the accumulation of sensory information with high temporal resolution. By contrast, the scalp topography of alpha-band EEG activity appeared to mainly reflect spatially focused attentional processes rather than sensory information.
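The alpha-band feature described in this abstract, the scalp topography of alpha power, is typically obtained by band-pass filtering each channel and taking the amplitude envelope. A single-channel sketch on simulated data follows; the sampling rate, filter order, and burst timing are illustrative assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250.0                         # assumed sampling rate in Hz
t = np.arange(0.0, 2.0, 1.0 / fs)

rng = np.random.default_rng(2)

# One simulated channel: a 10 Hz alpha burst starting at t = 1 s,
# superimposed on broadband noise.
eeg = rng.normal(0.0, 0.5, t.size)
eeg[t >= 1.0] += 2.0 * np.sin(2.0 * np.pi * 10.0 * t[t >= 1.0])

# Zero-phase band-pass 8-12 Hz, then the analytic amplitude gives the
# instantaneous alpha power used as a per-channel decoding feature.
b, a = butter(4, [8.0 / (fs / 2.0), 12.0 / (fs / 2.0)], btype="bandpass")
alpha = np.abs(hilbert(filtfilt(b, a, eeg)))

print(alpha[t < 0.9].mean())   # low before the burst
print(alpha[t > 1.1].mean())   # high during the burst
```

Repeating this per channel yields the alpha topography whose trial-by-trial pattern a classifier can then decode, in parallel with decoding the low-pass ERP topography.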
Affiliation(s)
- Gi-Yeul Bae
- Center for Mind & Brain and Department of Psychology, University of California - Davis, Davis, CA 95618, USA.
- Steven J Luck
- Center for Mind & Brain and Department of Psychology, University of California - Davis, Davis, CA 95618, USA
37

Quek GL, Liu-Shuang J, Goffaux V, Rossion B. Ultra-coarse, single-glance human face detection in a dynamic visual stream. Neuroimage 2018; 176:465-476. [DOI: 10.1016/j.neuroimage.2018.04.034]
38

The Neural Dynamics of Facial Identity Processing: Insights from EEG-Based Pattern Analysis and Image Reconstruction. eNeuro 2018; 5:eN-NWR-0358-17. [PMID: 29492452] [PMCID: PMC5829556] [DOI: 10.1523/eneuro.0358-17.2018]
Abstract
Uncovering the neural dynamics of facial identity processing along with its representational basis outlines a major endeavor in the study of visual processing. To this end, here, we record human electroencephalography (EEG) data associated with viewing face stimuli; then, we exploit spatiotemporal EEG information to determine the neural correlates of facial identity representations and to reconstruct the appearance of the corresponding stimuli. Our findings indicate that multiple temporal intervals support: facial identity classification, face space estimation, visual feature extraction and image reconstruction. In particular, we note that both classification and reconstruction accuracy peak in the proximity of the N170 component. Further, aggregate data from a larger interval (50–650 ms after stimulus onset) support robust reconstruction results, consistent with the availability of distinct visual information over time. Thus, theoretically, our findings shed light on the time course of face processing while, methodologically, they demonstrate the feasibility of EEG-based image reconstruction.
39

Baker DH. Decoding eye-of-origin outside of awareness. Neuroimage 2017; 147:89-96. [PMID: 27940075] [DOI: 10.1016/j.neuroimage.2016.12.008]
Abstract
In the primary visual cortex of many mammals, ocular dominance columns segregate information from the two eyes. Yet under controlled conditions, most human observers are unable to correctly report the eye to which a stimulus has been shown, indicating that this information is lost during subsequent processing. This study investigates whether eye-of-origin information is available in the pattern of electrophysiological activity evoked by visual stimuli, recorded using EEG and decoded using multivariate pattern analysis. Observers (N=24) viewed sine-wave grating and plaid stimuli of different orientations, shown to either the left or right eye (or both). Using a support vector machine, eye-of-origin could be decoded above chance at around 140 and 220 ms post stimulus onset, yet observers were at chance for reporting this information. Other stimulus features, such as binocularity, orientation, spatial pattern, and the presence of interocular conflict (i.e. rivalry), could also be decoded using the same techniques, though all of these were perceptually discriminable above chance. A control analysis found no evidence to support the possibility that eye dominance was responsible for the eye-of-origin effects. These results support a structural explanation for multivariate decoding of electrophysiological signals: information organised in cortical columns can be decoded, even when observers are unaware of this information.