1. Duan Y, Zhan J, Gross J, Ince RAA, Schyns PG. Pre-frontal cortex guides dimension-reducing transformations in the occipito-ventral pathway for categorization behaviors. Curr Biol 2024; 34:3392-3404.e5. PMID: 39029470. DOI: 10.1016/j.cub.2024.06.050.
Abstract
To interpret our surroundings, the brain uses a visual categorization process. Current theories and models suggest that this process comprises a hierarchy of different computations that transforms complex, high-dimensional inputs into lower-dimensional representations (i.e., manifolds) in support of multiple categorization behaviors. Here, we tested this hypothesis by analyzing these transformations reflected in dynamic MEG source activity while individual participants actively categorized the same stimuli according to different tasks: face expression, face gender, pedestrian gender, and vehicle type. Results reveal three transformation stages guided by the pre-frontal cortex. At stage 1 (high-dimensional, 50-120 ms), occipital sources represent both task-relevant and task-irrelevant stimulus features; task-relevant features advance into higher ventral/dorsal regions, whereas task-irrelevant features halt at the occipital-temporal junction. At stage 2 (121-150 ms), stimulus feature representations reduce to lower-dimensional manifolds, which then transform into the task-relevant features underlying categorization behavior over stage 3 (161-350 ms). Our findings shed light on how the brain's network mechanisms transform high-dimensional inputs into specific feature manifolds that support multiple categorization behaviors.
Affiliation(s)
- Yaocong Duan
- School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
- Jiayu Zhan
- School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
- Joachim Gross
- Institute for Biomagnetism and Biosignalanalysis, University of Münster, Malmedyweg 15, Münster 48149, Germany
- Robin A A Ince
- School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
- Philippe G Schyns
- School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
2. Yan Y, Zhan J, Ince RAA, Schyns PG. Network Communications Flexibly Predict Visual Contents That Enhance Representations for Faster Visual Categorization. J Neurosci 2023; 43:5391-5405. PMID: 37369588. PMCID: PMC10359031. DOI: 10.1523/jneurosci.0156-23.2023.
Abstract
Models of visual cognition generally assume that brain networks predict the contents of a stimulus to facilitate its subsequent categorization. However, understanding prediction and categorization at a network level has remained challenging, partly because we need to reverse engineer their information processing mechanisms from the dynamic neural signals. Here, we used connectivity measures that can isolate the communications of a specific content to reconstruct these network mechanisms in each individual participant (N = 11, both sexes). Each was cued to the spatial location (left vs right) and contents [low spatial frequency (LSF) vs high spatial frequency (HSF)] of a predicted Gabor stimulus that they then categorized. Using each participant's concurrently measured MEG, we reconstructed networks that predict and categorize LSF versus HSF contents for behavior. We found that predicted contents flexibly propagate top down from temporal to lateralized occipital cortex, depending on task demands, under supervisory control of prefrontal cortex. When they reach lateralized occipital cortex, predictions enhance the bottom-up LSF versus HSF representations of the stimulus, all the way from occipital-ventral-parietal to premotor cortex, in turn producing faster categorization behavior. Importantly, content communications are subsets (i.e., 55-75%) of the signal-to-signal communications typically measured between brain regions. Hence, our study isolates functional networks that process the information of cognitive functions.
SIGNIFICANCE STATEMENT: An enduring cognitive hypothesis states that our perception is influenced not only by the bottom-up sensory input but also by top-down expectations. However, cognitive explanations of the dynamic brain network mechanisms that flexibly predict and categorize the visual input according to task demands remain elusive. We addressed these questions in a predictive experimental design by isolating the network communications of cognitive contents from all other communications. Our methods revealed a Prediction Network that flexibly communicates contents from temporal to lateralized occipital cortex, with explicit frontal control, and an occipital-ventral-parietal-frontal Categorization Network that represents the predicted contents of the shown stimulus more sharply, leading to faster behavior. Our framework and results therefore shed new light on the cognitive information processing reflected in dynamic brain activity.
Affiliation(s)
- Yuening Yan
- School of Psychology and Neuroscience, University of Glasgow, G12 8QB Glasgow, United Kingdom
- Jiayu Zhan
- School of Psychological and Cognitive Sciences, Peking University, Beijing 100871, China
- Robin A A Ince
- School of Psychology and Neuroscience, University of Glasgow, G12 8QB Glasgow, United Kingdom
- Philippe G Schyns
- School of Psychology and Neuroscience, University of Glasgow, G12 8QB Glasgow, United Kingdom
3. Bruchmann M, Mertens L, Schindler S, Straube T. Potentiated early neural responses to fearful faces are not driven by specific face parts. Sci Rep 2023; 13:4613. PMID: 36944705. PMCID: PMC10030637. DOI: 10.1038/s41598-023-31752-z.
Abstract
Prioritized processing of fearful compared to neutral faces is reflected in increased amplitudes of components of the event-related potential (ERP). It is unknown whether specific face parts drive these modulations. Here, we investigated the contributions of face parts to ERPs elicited by task-irrelevant fearful and neutral faces, using an ERP-dependent facial decoding technique and a large sample of participants (N = 83). Classical ERP analyses showed typical and robust increases of N170 and EPN amplitudes by fearful relative to neutral faces. Facial decoding further showed that the absolute amplitude of these components, as well as the P1, was driven by the low-frequency contrast of specific face parts. However, the difference between fearful and neutral faces was not driven by any specific face part, as supported by Bayesian statistics. Furthermore, there were no correlations between trait anxiety and main effects or interactions. These results suggest that increased N170 and EPN amplitudes to task-irrelevant fearful compared to neutral faces are not driven by specific facial regions but represent a holistic face processing effect.
Affiliation(s)
- Maximilian Bruchmann
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, Von-Esmarch-Str. 52, 48149 Münster, Germany
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Münster, Germany
- Léa Mertens
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, Von-Esmarch-Str. 52, 48149 Münster, Germany
- Sebastian Schindler
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, Von-Esmarch-Str. 52, 48149 Münster, Germany
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Münster, Germany
- Thomas Straube
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, Von-Esmarch-Str. 52, 48149 Münster, Germany
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Münster, Germany
4. Schyns PG, Zhan J, Jack RE, Ince RAA. Revealing the information contents of memory within the stimulus information representation framework. Philos Trans R Soc Lond B Biol Sci 2020; 375:20190705. PMID: 32248774. PMCID: PMC7209912. DOI: 10.1098/rstb.2019.0705.
Abstract
The information contents of memory are the cornerstone of the most influential models in cognition. To illustrate, consider that in predictive coding, a prediction implies that specific information is propagated down from memory through the visual hierarchy. Likewise, recognizing the input implies that sequentially accrued sensory evidence is successfully matched with memorized information (categorical knowledge). Although the existing models of prediction, memory, sensory representation and categorical decision are all implicitly cast within an information processing framework, it remains a challenge to precisely specify what this information is, and therefore where, when and how the architecture of the brain dynamically processes it to produce behaviour. Here, we review a framework that addresses these challenges for studies of perception and categorization: stimulus information representation (SIR). We illustrate how SIR can reverse engineer the information contents of memory from behavioural and brain measures in the context of specific cognitive tasks that involve memory. We discuss two specific lessons from this approach that apply to memory studies generally: the importance of task, to constrain what the brain does, and of stimulus variations, to identify the specific information contents that are memorized, predicted, recalled and replayed. This article is part of the Theo Murphy meeting issue 'Memory reactivation: replaying events past, present and future'.
Affiliation(s)
- Philippe G Schyns
- Institute of Neuroscience and Psychology, University of Glasgow, Scotland G12 8QB, UK
- School of Psychology, University of Glasgow, Scotland G12 8QB, UK
- Jiayu Zhan
- Institute of Neuroscience and Psychology, University of Glasgow, Scotland G12 8QB, UK
- Rachael E Jack
- School of Psychology, University of Glasgow, Scotland G12 8QB, UK
- Robin A A Ince
- Institute of Neuroscience and Psychology, University of Glasgow, Scotland G12 8QB, UK
5. Jaworska K, Yi F, Ince RAA, van Rijsbergen NJ, Schyns PG, Rousselet GA. Healthy aging delays the neural processing of face features relevant for behavior by 40 ms. Hum Brain Mapp 2019; 41:1212-1225. PMID: 31782861. PMCID: PMC7268067. DOI: 10.1002/hbm.24869.
Abstract
Fast and accurate face processing is critical for everyday social interactions, but it declines and becomes delayed with age, as measured by both neural and behavioral responses. Here, we addressed the critical challenge of understanding how aging changes neural information processing mechanisms to delay behavior. Young (20-36 years) and older (60-86 years) adults performed the basic social interaction task of detecting a face versus noise while we recorded their electroencephalogram (EEG). In each participant, using a new information-theoretic framework, we reconstructed the features supporting face detection behavior, as well as where, when and how EEG activity represents them. We found that occipital-temporal pathway activity dynamically represents the eyes of the face images for behavior ~170 ms poststimulus, with a 40 ms delay in older adults that underlies their 200 ms slowing of reaction times. Our results therefore demonstrate how aging can change the neural information processing mechanisms that underlie behavioral slowdown.
Affiliation(s)
- Katarzyna Jaworska
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
- Fei Yi
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
- Robin A A Ince
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
- Philippe G Schyns
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
6. Smith FW, Smith ML. Decoding the dynamic representation of facial expressions of emotion in explicit and incidental tasks. Neuroimage 2019; 195:261-271. PMID: 30940611. DOI: 10.1016/j.neuroimage.2019.03.065.
Abstract
Faces transmit a wealth of important social signals. While previous studies have elucidated the network of cortical regions important for perception of facial expression, and the associated temporal components such as the P100, N170 and EPN, it is still unclear how task constraints may shape the representation of facial expression (or other face categories) in these networks. In the present experiment, we used Multivariate Pattern Analysis (MVPA) with EEG to investigate the neural information available across time about two important face categories (expression and identity) when those categories are perceived under either explicit (e.g., decoding facial expression category from the EEG when the task is on expression) or incidental task contexts (e.g., decoding facial expression category from the EEG when the task is on identity). Decoding of both face categories, across both task contexts, peaked in time windows spanning 91-170 ms (across posterior electrodes). Peak decoding of expression, however, was not affected by task context, whereas peak decoding of identity was significantly reduced under incidental processing conditions. In addition, errors in EEG decoding correlated with errors in behavioral categorization under explicit processing for both expression and identity; under incidental conditions, however, only errors in EEG decoding of expression correlated with behavior. Furthermore, decoding time courses and the spatial pattern of informative electrodes showed consistently better decoding of identity under explicit conditions at later time periods, with weak evidence for similar effects for decoding of expression at isolated time windows. Taken together, these results reveal differences and commonalities in the processing of face categories under explicit vs. incidental task contexts and suggest that facial expressions are processed to a richer degree under incidental processing conditions, consistent with prior work indicating the relative automaticity with which emotion is processed. Our work further demonstrates the utility of applying multivariate decoding analyses to EEG for revealing the dynamics of face perception.
Affiliation(s)
- Fraser W Smith
- School of Psychology, University of East Anglia, Norwich, UK
- Marie L Smith
- School of Psychological Sciences, Birkbeck College, University of London, London, UK
7. Schendan HE. Memory influences visual cognition across multiple functional states of interactive cortical dynamics. Psychology of Learning and Motivation 2019. DOI: 10.1016/bs.plm.2019.07.007.
8. Dupuis-Roy N, Faghel-Soubeyrand S, Gosselin F. Time course of the use of chromatic and achromatic facial information for sex categorization. Vision Res 2018; 157:36-43. PMID: 30201473. DOI: 10.1016/j.visres.2018.08.004.
Abstract
The most useful facial features for sex categorization are the eyes, the eyebrows, and the mouth. Dupuis-Roy et al. reported a large positive correlation between the use of the mouth region and rapid correct answers [Journal of Vision 9 (2009) 1-8]. Given the chromatic information in this region, they hypothesized that the extraction of chromatic and achromatic cues may have different time courses. Here, we tested this hypothesis directly: 110 participants categorized the sex of 300 face images whose chromatic and achromatic content was partially revealed through time (200 ms) and space using randomly located spatio-temporal Gaussian apertures (i.e., the Bubbles technique). This also allowed us to directly compare, for the first time, the relative importance of chromatic and achromatic facial cues for sex categorization. Results showed that face-sex categorization relies mostly on achromatic (luminance) information concentrated in the eye and eyebrow regions, especially the left eye and eyebrow. Additional analyses indicated that chromatic information located in the mouth/philtrum region was used earlier (peaking as early as 35 ms after stimulus onset) than achromatic information in the eye regions (peaking between 165 and 176 ms after stimulus onset), as Dupuis-Roy et al. had speculated. A non-linear analysis failed to support Yip and Sinha's proposal that processing of chromatic variations can improve subsequent processing of achromatic spatial cues, possibly via surface segmentation [Perception 31 (2002) 995-1003]. Instead, we argue that the brain prioritizes chromatic information to compensate for the sluggishness of chromatic processing in early visual areas, allowing chromatic and achromatic information to reach higher-level visual areas simultaneously.
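The Bubbles logic used here (reveal random parts of the stimulus through Gaussian apertures, then relate the revealed locations to performance) can be sketched in a toy 1-D simulation. Everything below is hypothetical for illustration: the diagnostic location, aperture width, and threshold response rule are not from the study.

```python
import numpy as np

# Toy 1-D "Bubbles" simulation (illustrative; all parameters hypothetical).
rng = np.random.default_rng(2)
size, n_trials, n_bubbles, sigma = 64, 4000, 3, 3.0
diagnostic = 20                      # hypothetical informative location
xs = np.arange(size)

# Each trial reveals the stimulus through randomly centered Gaussian apertures.
masks = np.zeros((n_trials, size))
for t in range(n_trials):
    for c in rng.integers(0, size, n_bubbles):
        masks[t] += np.exp(-0.5 * ((xs - c) / sigma) ** 2)
masks = np.clip(masks, 0.0, 1.0)

# Simulated observer: correct only when the diagnostic location is revealed.
correct = masks[:, diagnostic] > 0.5

# Classification image: mean aperture on correct minus incorrect trials;
# it peaks where the revealed information drives performance.
ci = masks[correct].mean(axis=0) - masks[~correct].mean(axis=0)
print(int(np.argmax(ci)))
```

The same choice-conditioned averaging, extended over 2-D space and time, is what yields the spatio-temporal diagnostic maps the abstract reports.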
Affiliation(s)
- N Dupuis-Roy
- Département de psychologie, Université de Montréal, Canada
- F Gosselin
- Département de psychologie, Université de Montréal, Canada
9. Okazawa G, Sha L, Purcell BA, Kiani R. Psychophysical reverse correlation reflects both sensory and decision-making processes. Nat Commun 2018; 9:3479. PMID: 30154467. PMCID: PMC6113286. DOI: 10.1038/s41467-018-05797-y.
Abstract
Goal-directed behavior depends on both sensory mechanisms that gather information from the outside world and decision-making mechanisms that select appropriate behavior based on that sensory information. Psychophysical reverse correlation is commonly used to quantify how fluctuations of sensory stimuli influence behavior and is generally believed to uncover the spatiotemporal weighting functions of sensory processes. Here we show that reverse correlations also reflect decision-making processes and can deviate significantly from the true sensory filters. Specifically, changes of decision bound and mechanisms of evidence integration systematically alter psychophysical reverse correlations. Similarly, trial-to-trial variability of sensory and motor delays and decision times causes systematic distortions in psychophysical kernels that should not be attributed to sensory mechanisms. We show that ignoring details of the decision-making process results in misinterpretation of reverse correlations, but proper use of these details turns reverse correlation into a powerful method for studying both sensory and decision-making mechanisms.
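The distortion described above can be reproduced in a minimal simulation: give a simulated observer a flat sensory filter but a decision bound, and the choice-conditioned noise average (the psychophysical kernel) still decays over time, because frames after the bound crossing cannot influence the choice. This is a hedged sketch of the general logic, not the paper's model:

```python
import numpy as np

# Psychophysical reverse correlation with a bounded-integration observer.
# The true sensory filter weights all frames equally; any temporal structure
# in the kernel below is produced by the decision process alone.
rng = np.random.default_rng(1)
n_trials, n_frames, bound = 20000, 10, 3.0
noise = rng.standard_normal((n_trials, n_frames))

choice = np.zeros(n_trials, dtype=bool)
for i in range(n_trials):
    ev = np.cumsum(noise[i])                    # perfect, equal-weight integration
    hit = np.nonzero(np.abs(ev) >= bound)[0]    # first bound crossing, if any
    t = hit[0] if hit.size else n_frames - 1
    choice[i] = ev[t] > 0                       # frames after t are ignored

# Kernel: mean noise on "positive" choices minus "negative" choices.
kernel = noise[choice].mean(axis=0) - noise[~choice].mean(axis=0)
print(kernel.round(2))  # weights decay over frames despite the flat filter
```

Attributing this decay to the sensory filter would be exactly the misinterpretation the abstract warns against.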
Affiliation(s)
- Gouki Okazawa
- Center for Neural Science, New York University, New York, NY 10003, USA
- Long Sha
- Center for Neural Science, New York University, New York, NY 10003, USA
- Braden A Purcell
- Center for Neural Science, New York University, New York, NY 10003, USA
- Roozbeh Kiani
- Center for Neural Science, New York University, New York, NY 10003, USA
- Department of Psychology, New York University, New York, NY 10003, USA
- Neuroscience Institute, NYU Langone Medical Center, New York, NY 10016, USA
10. Royer J, Blais C, Charbonneau I, Déry K, Tardif J, Duchaine B, Gosselin F, Fiset D. Greater reliance on the eye region predicts better face recognition ability. Cognition 2018; 181:12-20. PMID: 30103033. DOI: 10.1016/j.cognition.2018.08.004.
Abstract
Interest in using individual differences in face recognition ability to better understand the perceptual and cognitive mechanisms supporting face processing has grown substantially in recent years. The goal of this study was to determine how varying levels of face recognition ability are linked to changes in visual information extraction strategies in an identity recognition task. To address this question, fifty participants completed six tasks measuring face and object processing abilities. Using the Bubbles method (Gosselin & Schyns, 2001), we also measured each individual's use of visual information in face recognition. At the group level, our results replicate previous findings demonstrating the importance of the eye region for face identification. More importantly, we show that face processing ability is related to a systematic increase in the use of the eye area, especially the left eye from the observer's perspective. Indeed, our results suggest that the use of this region accounts for approximately 20% of the variance in face processing ability. These results support the idea that individual differences in face processing are at least partially related to the perceptual extraction strategy used during face identification.
Affiliation(s)
- Jessica Royer
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Canada
- Caroline Blais
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Canada
- Isabelle Charbonneau
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Canada
- Karine Déry
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Canada
- Jessica Tardif
- Département de Psychologie, Université de Montréal, Canada
- Brad Duchaine
- Department of Psychological and Brain Sciences, Dartmouth College, United States
- Daniel Fiset
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Canada
11. Peters JC, Goebel R, Goffaux V. From coarse to fine: Interactive feature processing precedes local feature analysis in human face perception. Biol Psychol 2018; 138:1-10. PMID: 30076873. DOI: 10.1016/j.biopsycho.2018.07.009.
Abstract
Face perception depends on a dynamic interplay of a "holistic" Interactive Feature Processing (IFP) and a Local Feature Processing (LFP) style. However, it is unclear whether features are processed locally before they are integrated into a holistic percept (Fine-to-Coarse strategy), or whether local feature processing occurs only after a holistic percept is established (Coarse-to-Fine strategy). The present event-related potential (ERP) study investigates whether IFP precedes LFP (Coarse-to-Fine) or vice versa (Fine-to-Coarse). Participants matched target features within face pairs (here, the eye region), in which distracter features (nose and mouth) called for the same or a different response (congruent and incongruent, respectively). Psychophysical results replicated previous findings. That is, dissimilar target features are locally processed (LFP), which minimizes interference from surrounding incongruent distracters. Conversely, an IFP mode is elicited when similar target features are embedded in dissimilar contexts. In IFP mode, incongruent distracters do interfere with the processing of similar target features, thereby deteriorating task performance. Face inversion, which preserves input properties but disrupts high-level face perception, abolished these incongruency effects. The psychophysical observations were reflected at the neural level: the IFP and LFP modes of face perception elicited distinct time courses in occipito-temporal cortex. IFP was affected by inversion as soon as 176 ms post-stimulus onset (coinciding with the N170 peak). In contrast, the first robust indications of LFP occurred 120 ms later, at 296 ms. Thus, the contribution of IFP to high-level face perception appears to temporally precede LFP. Moreover, the IFP and LFP modes operated not only in distinct time intervals but also in different brain areas: activity associated with the IFP mode was right-lateralized, whereas the LFP mode engaged the left hemisphere. In sum, interactive "holistic" encoding of facial features temporally precedes their local analysis. This agrees with models suggesting a Coarse-to-Fine strategy for face perception, in line with generic descriptions of visual perception in which global scene analysis precedes the examination of local details.
Affiliation(s)
- Judith C Peters
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Department of Vision and Cognition, Netherlands Institute for Neuroscience, an institute of the Royal Netherlands Academy of Arts and Sciences (KNAW), Amsterdam, The Netherlands
- Rainer Goebel
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Department of Vision and Cognition, Netherlands Institute for Neuroscience, an institute of the Royal Netherlands Academy of Arts and Sciences (KNAW), Amsterdam, The Netherlands
- Valerie Goffaux
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Psychological Sciences Research Institute (IPSY), Institute of Neuroscience (IONS), Université Catholique de Louvain, Louvain-la-Neuve, Belgium
12. Ince RA, Giordano BL, Kayser C, Rousselet GA, Gross J, Schyns PG. A statistical framework for neuroimaging data analysis based on mutual information estimated via a Gaussian copula. Hum Brain Mapp 2017; 38:1541-1573. PMID: 27860095. PMCID: PMC5324576. DOI: 10.1002/hbm.23471.
Abstract
We begin by reviewing the statistical framework of information theory as applicable to neuroimaging data analysis. A major factor hindering wider adoption of this framework in neuroimaging is the difficulty of estimating information theoretic quantities in practice. We present a novel estimation technique that combines the statistical theory of copulas with the closed form solution for the entropy of Gaussian variables. This results in a general, computationally efficient, flexible, and robust multivariate statistical framework that provides effect sizes on a common meaningful scale, allows for unified treatment of discrete, continuous, unidimensional and multidimensional variables, and enables direct comparisons of representations from behavioral and brain responses across any recording modality. We validate the use of this estimate as a statistical test within a neuroimaging context, considering both discrete stimulus classes and continuous stimulus features. We also present examples of analyses facilitated by these developments, including application of multivariate analyses to MEG planar magnetic field gradients, and pairwise temporal interactions in evoked EEG responses. We show the benefit of considering the instantaneous temporal derivative together with the raw values of M/EEG signals as a multivariate response, how we can separately quantify modulations of amplitude and direction for vector quantities, and how we can measure the emergence of novel information over time in evoked responses. Open-source Matlab and Python code implementing the new methods accompanies this article.
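The core of the estimator is simple enough to sketch: transform each variable to standard-normal marginals via its empirical CDF (the copula step), then apply the closed-form mutual information of Gaussian variables. A minimal bivariate sketch for two continuous variables (illustrative only; the authors' accompanying open-source Matlab and Python code covers the multivariate, discrete, and mixed cases as well as statistical testing):

```python
import numpy as np
from statistics import NormalDist

def copula_normalize(x):
    """Rank-transform x to standard-normal marginals, preserving only its
    dependence structure (the copula)."""
    ranks = np.argsort(np.argsort(x)) + 1       # ranks 1..n (continuous data: no ties)
    inv = NormalDist().inv_cdf
    return np.array([inv(r / (len(x) + 1)) for r in ranks])

def gcmi_cc(x, y):
    """Gaussian-copula mutual information (in bits) between two continuous
    1-D variables: after copula normalization, MI takes the closed Gaussian
    form -0.5 * log2(1 - rho**2)."""
    rho = np.corrcoef(copula_normalize(x), copula_normalize(y))[0, 1]
    return -0.5 * np.log2(1.0 - rho ** 2)

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
y = x + rng.standard_normal(5000)    # dependent on x
z = rng.standard_normal(5000)        # independent of x
print(gcmi_cc(x, y), gcmi_cc(x, z))  # first value clearly larger
```

Because the rank step discards the marginal distributions, the estimate is a lower bound on the true mutual information and is robust to outliers, which is part of what makes it usable as a general statistical test across recording modalities.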
Affiliation(s)
- Robin A.A. Ince
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Bruno L. Giordano
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Christoph Kayser
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Joachim Gross
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Philippe G. Schyns
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
13. Towler J, Fisher K, Eimer M. The Cognitive and Neural Basis of Developmental Prosopagnosia. Q J Exp Psychol (Hove) 2017; 70:316-344. DOI: 10.1080/17470218.2016.1165263.
Abstract
Developmental prosopagnosia (DP) is a severe impairment of visual face recognition in the absence of any apparent brain damage. The factors responsible for DP have not yet been fully identified. This article provides a selective review of recent studies investigating cognitive and neural processes that may contribute to the face recognition deficits in DP, focusing primarily on event-related brain potential (ERP) measures of face perception and recognition. Studies that measured the face-sensitive N170 component as a marker of perceptual face processing have shown that the perceptual discrimination between faces and non-face objects is intact in DP. Other N170 studies suggest that faces are not represented in the typical fashion in DP. Individuals with DP appear to have specific difficulties in processing spatial and contrast deviations from canonical upright visual–perceptual face templates. The rapid detection of emotional facial expressions appears to be unaffected in DP. ERP studies of the activation of visual memory for individual faces and of the explicit identification of particular individuals have revealed differences between DPs and controls in the timing of these processes and in the links between visual face memory and explicit face recognition. These observations suggest that the speed and efficiency of information propagation through the cortical face network is altered in DP. The nature of the perceptual impairments in DP suggests that atypical visual experience with the eye region of faces over development may be an important contributing factor to DP.
Affiliation(s)
- John Towler
- Department of Psychological Sciences, Birkbeck College, University of London, London, UK
- Katie Fisher
- Department of Psychological Sciences, Birkbeck College, University of London, London, UK
- Martin Eimer
- Department of Psychological Sciences, Birkbeck College, University of London, London, UK
14
Groen IIA, Silson EH, Baker CI. Contributions of low- and high-level properties to neural processing of visual scenes in the human brain. Philos Trans R Soc Lond B Biol Sci 2017; 372:20160102. PMID: 28044013. DOI: 10.1098/rstb.2016.0102.
Abstract
Visual scene analysis in humans has been characterized by the presence of regions in extrastriate cortex that are selectively responsive to scenes compared with objects or faces. While these regions have often been interpreted as representing high-level properties of scenes (e.g. category), they also exhibit substantial sensitivity to low-level (e.g. spatial frequency) and mid-level (e.g. spatial layout) properties, and it is unclear how these disparate findings can be united in a single framework. In this opinion piece, we suggest that this problem can be resolved by questioning the utility of the classical low- to high-level framework of visual perception for scene processing, and discuss why low- and mid-level properties may be particularly diagnostic for the behavioural goals specific to scene perception as compared to object recognition. In particular, we highlight the contributions of low-level vision to scene representation by reviewing (i) retinotopic biases and receptive field properties of scene-selective regions and (ii) the temporal dynamics of scene perception that demonstrate overlap of low- and mid-level feature representations with those of scene category. We discuss the relevance of these findings for scene perception and suggest a more expansive framework for visual scene analysis. This article is part of the themed issue 'Auditory and visual scene analysis'.
Affiliation(s)
- Iris I A Groen
- Laboratory of Brain and Cognition, National Institutes of Health, 10 Center Drive 10-3N228, Bethesda, MD, USA
- Edward H Silson
- Laboratory of Brain and Cognition, National Institutes of Health, 10 Center Drive 10-3N228, Bethesda, MD, USA
- Chris I Baker
- Laboratory of Brain and Cognition, National Institutes of Health, 10 Center Drive 10-3N228, Bethesda, MD, USA
15
Kwon M, Liu R, Chien L. Compensation for Blur Requires Increase in Field of View and Viewing Time. PLoS One 2016; 11:e0162711. PMID: 27622710. PMCID: PMC5021298. DOI: 10.1371/journal.pone.0162711.
Abstract
Spatial resolution is an important factor for human pattern recognition. In particular, low resolution (blur) is a defining characteristic of low vision. Here, we examined spatial (field of view) and temporal (stimulus duration) requirements for blurry object recognition. The spatial resolution of an image, such as a letter or a face, was manipulated with a low-pass filter. In experiment 1, studying the spatial requirement, observers viewed a fixed-size object through a window of varying sizes, which was repositioned until object identification (moving window paradigm). The field of view requirement, quantified as the number of "views" (window repositions) for correct recognition, was obtained for three blur levels, including no blur. In experiment 2, studying the temporal requirement, we determined threshold viewing time, the stimulus duration yielding criterion recognition accuracy, at six blur levels, including no blur. For letter and face recognition, we found that blur significantly increased the number of views, suggesting a larger field of view is required to recognize blurry objects. We also found that blur significantly increased threshold viewing time, suggesting longer temporal integration is necessary to recognize blurry objects. The temporal integration reflects the tradeoff between stimulus intensity and time. While humans excel at recognizing blurry objects, our findings suggest compensating for blur requires increased field of view and viewing time. The need for larger spatial and longer temporal integration for recognizing blurry objects may further challenge object recognition in low vision. Thus, interactions between blur and field of view should be considered when developing low vision rehabilitation or assistive aids.
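The low-pass manipulation described in this abstract can be sketched with a Fourier-domain filter. This is a minimal illustration only: the ideal circular filter, the 8 cycles/image cutoff, and the toy checkerboard "object" are all assumptions, not the authors' actual stimuli or filter parameters.

```python
import numpy as np

def low_pass(image, cutoff):
    """Zero out spatial frequencies above `cutoff` (cycles per image)."""
    h, w = image.shape
    fy = np.fft.fftfreq(h) * h            # vertical frequency, cycles/image
    fx = np.fft.fftfreq(w) * w            # horizontal frequency, cycles/image
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    spectrum = np.fft.fft2(image)
    # Keep only the disc of frequencies at or below the cutoff
    return np.real(np.fft.ifft2(spectrum * (radius <= cutoff)))

# Toy high-contrast stimulus: a 64x64 checkerboard
img = (np.indices((64, 64)).sum(axis=0) % 2).astype(float)
blurred = low_pass(img, cutoff=8)
```

Because the checkerboard's contrast lives at high spatial frequencies, the filtered image loses most of its variance while its mean luminance (the DC term) is preserved.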
Affiliation(s)
- MiYoung Kwon
- Department of Ophthalmology, School of Medicine, University of Alabama at Birmingham, Birmingham, Alabama, United States of America
- Rong Liu
- Department of Ophthalmology, School of Medicine, University of Alabama at Birmingham, Birmingham, Alabama, United States of America
- Lillian Chien
- Department of Ophthalmology, School of Medicine, University of Alabama at Birmingham, Birmingham, Alabama, United States of America
16
de Haas B, Schwarzkopf DS, Alvarez I, Lawson RP, Henriksson L, Kriegeskorte N, Rees G. Perception and Processing of Faces in the Human Brain Is Tuned to Typical Feature Locations. J Neurosci 2016; 36:9289-9302. PMID: 27605606. PMCID: PMC5013182. DOI: 10.1523/jneurosci.4131-14.2016.
Abstract
Faces are salient social stimuli whose features attract a stereotypical pattern of fixations. The implications of this gaze behavior for perception and brain activity are largely unknown. Here, we characterize and quantify a retinotopic bias implied by typical gaze behavior toward faces, which leads to eyes and mouth appearing most often in the upper and lower visual field, respectively. We found that the adult human visual system is tuned to these contingencies. In two recognition experiments, recognition performance for isolated face parts was better when they were presented at typical, rather than reversed, visual field locations. The recognition cost of reversed locations was equal to ∼60% of that for whole face inversion in the same sample. Similarly, an fMRI experiment showed that patterns of activity evoked by eye and mouth stimuli in the right inferior occipital gyrus could be separated with significantly higher accuracy when these features were presented at typical, rather than reversed, visual field locations. Our findings demonstrate that human face perception is determined not only by the local position of features within a face context, but by whether features appear at the typical retinotopic location given normal gaze behavior. Such location sensitivity may reflect fine-tuning of category-specific visual processing to retinal input statistics. Our findings further suggest that retinotopic heterogeneity might play a role for face inversion effects and for the understanding of conditions affecting gaze behavior toward faces, such as autism spectrum disorders and congenital prosopagnosia. SIGNIFICANCE STATEMENT Faces attract our attention and trigger stereotypical patterns of visual fixations, concentrating on inner features, like eyes and mouth. Here we show that the visual system represents face features better when they are shown at retinal positions where they typically fall during natural vision. When facial features were shown at typical (rather than reversed) visual field locations, they were discriminated better by humans and could be decoded with higher accuracy from brain activity patterns in the right occipital face area. This suggests that brain representations of face features do not cover the visual field uniformly. It may help us understand the well-known face-inversion effect and conditions affecting gaze behavior toward faces, such as prosopagnosia and autism spectrum disorders.
Affiliation(s)
- Benjamin de Haas
- Institute of Cognitive Neuroscience, Wellcome Trust Centre for Neuroimaging, Experimental Psychology, and
- Ivan Alvarez
- Institute of Child Health, University College London, London WC1H 0AP, United Kingdom, Oxford University Centre for Functional MRI of the Brain, Oxford OX3 9DU, United Kingdom
- Rebecca P Lawson
- Institute of Cognitive Neuroscience, Wellcome Trust Centre for Neuroimaging
- Linda Henriksson
- MRC Cognition and Brain Sciences Unit, Cambridge CB2 7EF, United Kingdom, and Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo FI-00076, Finland
- Geraint Rees
- Institute of Cognitive Neuroscience, Wellcome Trust Centre for Neuroimaging
17
Ince RAA, Jaworska K, Gross J, Panzeri S, van Rijsbergen NJ, Rousselet GA, Schyns PG. The Deceptively Simple N170 Reflects Network Information Processing Mechanisms Involving Visual Feature Coding and Transfer Across Hemispheres. Cereb Cortex 2016; 26:4123-4135. PMID: 27550865. PMCID: PMC5066825. DOI: 10.1093/cercor/bhw196.
Abstract
A key to understanding visual cognition is to determine “where”, “when”, and “how” brain responses reflect the processing of the specific visual features that modulate categorization behavior—the “what”. The N170 is the earliest Event-Related Potential (ERP) that preferentially responds to faces. Here, we demonstrate that a paradigmatic shift is necessary to interpret the N170 as the product of an information processing network that dynamically codes and transfers face features across hemispheres, rather than as a local stimulus-driven event. Reverse-correlation methods coupled with information-theoretic analyses revealed that visibility of the eyes influences face detection behavior. The N170 initially reflects coding of the behaviorally relevant eye contralateral to the sensor, followed by a causal communication of the other eye from the other hemisphere. These findings demonstrate that the deceptively simple N170 ERP hides a complex network information processing mechanism involving initial coding and subsequent cross-hemispheric transfer of visual features.
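The information-theoretic logic behind this kind of analysis, quantifying how much a neural response tells us about a stimulus feature such as eye visibility, can be sketched with a plug-in mutual information estimate on simulated data. This is a toy illustration with invented trial counts and a 90% feature-response coupling, not the paper's Directed Feature Information method.

```python
import math
import random
from collections import Counter

def mutual_info(xs, ys):
    """Plug-in estimate of discrete mutual information I(X;Y) in bits."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum(c / n * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

random.seed(0)
# Simulated trials: is the contralateral eye visible on this trial (0/1)?
eye = [random.randint(0, 1) for _ in range(5000)]
# A binarized "ERP response" that tracks eye visibility on ~90% of trials
erp = [e if random.random() < 0.9 else 1 - e for e in eye]

mi_observed = mutual_info(eye, erp)   # ~0.5 bits for a 90% coupling
random.shuffle(erp)                   # permutation destroys the coupling
mi_shuffled = mutual_info(eye, erp)   # near 0 bits: the permutation null
```

Comparing the observed value against a shuffled (permutation) baseline is the standard check that the estimated information is not a small-sample artifact.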
Affiliation(s)
- Robin A A Ince
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow G12 8QB, UK
- Katarzyna Jaworska
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow G12 8QB, UK
- Joachim Gross
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow G12 8QB, UK
- Stefano Panzeri
- Laboratory of Neural Computation, Istituto Italiano di Tecnologia, Rovereto 38068, Italy
- Guillaume A Rousselet
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow G12 8QB, UK
- Philippe G Schyns
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow G12 8QB, UK
18
Stimulus features coded by single neurons of a macaque body category selective patch. Proc Natl Acad Sci U S A 2016; 113:E2450-E2459. PMID: 27071095. DOI: 10.1073/pnas.1520371113.
Abstract
Body category-selective regions of the primate temporal cortex respond to images of bodies, but it is unclear which fragments of such images drive single neurons' responses in these regions. Here we applied the Bubbles technique to the responses of single macaque middle superior temporal sulcus (midSTS) body patch neurons to reveal the image fragments the neurons respond to. We found that local image fragments such as extremities (limbs), curved boundaries, and parts of the torso drove the large majority of neurons. Bubbles revealed the whole body in only a few neurons. Neurons coded the features in a manner that was tolerant to translation and scale changes. Most image fragments were excitatory but for a few neurons both inhibitory and excitatory fragments (opponent coding) were present in the same image. The fragments we reveal here in the body patch with Bubbles differ from those suggested in previous studies of face-selective neurons in face patches. Together, our data indicate that the majority of body patch neurons respond to local image fragments that occur frequently, but not exclusively, in bodies, with a coding that is tolerant to translation and scale. Overall, the data suggest that the body category selectivity of the midSTS body patch depends more on the feature statistics of bodies (e.g., extensions occur more frequently in bodies) than on semantics (bodies as an abstract category).
19
Tracing the Flow of Perceptual Features in an Algorithmic Brain Network. Sci Rep 2015; 5:17681. PMID: 26635299. PMCID: PMC4669501. DOI: 10.1038/srep17681.
Abstract
The model of the brain as an information processing machine is a profound hypothesis in which neuroscience, psychology and theory of computation are now deeply rooted. Modern neuroscience aims to model the brain as a network of densely interconnected functional nodes. However, to model the dynamic information processing mechanisms of perception and cognition, it is imperative to understand brain networks at an algorithmic level–i.e. as the information flow that network nodes code and communicate. Here, using innovative methods (Directed Feature Information), we reconstructed examples of possible algorithmic brain networks that code and communicate the specific features underlying two distinct perceptions of the same ambiguous picture. In each observer, we identified a network architecture comprising one occipito-temporal hub where the features underlying both perceptual decisions dynamically converge. Our focus on detailed information flow represents an important step towards a new brain algorithmics to model the mechanisms of perception and cognition.
20
Towler J, Parketny J, Eimer M. Perceptual face processing in developmental prosopagnosia is not sensitive to the canonical location of face parts. Cortex 2015; 74:53-66. PMID: 26649913. DOI: 10.1016/j.cortex.2015.10.018.
Abstract
Individuals with developmental prosopagnosia (DP) are strongly impaired in recognizing faces, but it is controversial whether this deficit is linked to atypical visual-perceptual face processing mechanisms. Previous behavioural studies have suggested that face perception in DP might be less sensitive to the canonical spatial configuration of face parts in upright faces. To test this prediction, we recorded event-related brain potentials (ERPs) to intact upright faces and to faces with spatially scrambled parts (eyes, nose, and mouth) in a group of ten participants with DP and a group of ten age-matched control participants with normal face recognition abilities. The face-sensitive N170 component and the vertex positive potential (VPP) were both enhanced and delayed for scrambled as compared to intact faces in the control group. In contrast, N170 and VPP amplitude enhancements to scrambled faces were absent in the DP group. For control participants, the N170 to scrambled faces was also sensitive to feature locations, with larger and delayed N170 components contralateral to the side where all features appeared in a non-canonical position. No such differences were present in the DP group. These findings suggest that spatial templates of the prototypical feature locations within an upright face are selectively impaired in DP.
Affiliation(s)
- John Towler
- Department of Psychological Sciences, Birkbeck College, University of London, UK
- Joanna Parketny
- Department of Psychological Sciences, Birkbeck College, University of London, UK
- Martin Eimer
- Department of Psychological Sciences, Birkbeck College, University of London, UK
21
Modulations of eye movement patterns by spatial filtering during the learning and testing phases of an old/new face recognition task. Atten Percept Psychophys 2015; 77:536-550. PMID: 25287618. DOI: 10.3758/s13414-014-0778-0.
Abstract
In two experiments, we examined the effects of varying the spatial frequency (SF) content of face images on eye movements during the learning and testing phases of an old/new recognition task. At both learning and testing, participants were presented with face stimuli band-pass filtered to 11 different SF bands, as well as an unfiltered baseline condition. We found that eye movements varied significantly as a function of SF. Specifically, the frequency of transitions between facial features showed a band-pass pattern, with more transitions for middle-band faces (≈5-20 cycles/face) than for low-band (≈<5 cpf) or high-band (≈>20 cpf) ones. These findings were similar for the learning and testing phases. The distributions of transitions across facial features were similar for the middle-band, high-band, and unfiltered faces, showing a concentration on the eyes and mouth; conversely, low-band faces elicited mostly transitions involving the nose and nasion. The eye movement patterns elicited by low, middle, and high bands are similar to those previous researchers have suggested reflect holistic, configural, and featural processing, respectively. More generally, our results are compatible with the hypotheses that eye movements are functional, and that the visual system makes flexible use of visuospatial information in face processing. Finally, our finding that only middle spatial frequencies yielded the same number and distribution of fixations as unfiltered faces adds more evidence to the idea that these frequencies are especially important for face recognition, and reveals a possible mediator for the superior performance that they elicit.
22
Towler J, Eimer M. Early stages of perceptual face processing are confined to the contralateral hemisphere: Evidence from the N170 component. Cortex 2015; 64:89-101. DOI: 10.1016/j.cortex.2014.09.013.
23
Meinhardt G, Meinhardt-Injac B, Persike M. The complete design in the composite face paradigm: role of response bias, target certainty, and feedback. Front Hum Neurosci 2014; 8:885. PMID: 25400573. PMCID: PMC4215786. DOI: 10.3389/fnhum.2014.00885.
Abstract
Some years ago an improved design (the “complete design”) was proposed to assess the composite face effect in terms of a congruency effect, defined as the performance difference for congruent and incongruent target to no-target relationships (Cheung et al., 2008). In a recent paper Rossion (2013) questioned whether the congruency effect was a valid hallmark of perceptual integration, because it may contain confounds with face-unspecific interference effects. Here we argue that the complete design is well-balanced and allows one to separate face-specific from face-unspecific effects. We used the complete design for a same/different composite stimulus matching task with face and non-face objects (watches). Subjects performed the task with and without trial-by-trial feedback, and with low and high certainty about the target half. Results showed large congruency effects for faces, particularly when subjects were informed late in the trial about which face halves had to be matched. Analysis of response bias revealed that subjects preferred the “different” response in incongruent trials, which is expected when upper and lower face halves are integrated perceptually at the encoding stage. The results pattern was observed in the absence of feedback, while providing feedback generally attenuated the congruency effect, and led to an avoidance of response bias. For watches no or marginal congruency effects and a moderate global “same” bias were observed. We conclude that the congruency effect, when complemented by an evaluation of response bias, is a valid hallmark of feature integration that allows one to separate faces from non-face objects.
Collapse
Affiliation(s)
- Günter Meinhardt
- Department of Psychology, Johannes Gutenberg University Mainz, Mainz, Germany
- Malte Persike
- Department of Psychology, Johannes Gutenberg University Mainz, Mainz, Germany
24
Németh K, Kovács P, Vakli P, Kovács G, Zimmer M. Phase noise reveals early category-specific modulation of the event-related potentials. Front Psychol 2014; 5:367. PMID: 24795689. PMCID: PMC4006031. DOI: 10.3389/fpsyg.2014.00367.
Abstract
Previous studies have found that the amplitude of the early event-related potential (ERP) components evoked by faces, such as N170 and P2, changes systematically as a function of noise added to the stimuli. This change has been linked to an increased perceptual processing demand and to enhanced difficulty in perceptual decision making about faces. However, to date it has not yet been tested whether noise manipulation affects the neural correlates of decisions about face and non-face stimuli similarly. To this end, we measured the ERPs for faces and cars at three different phase noise levels. Subjects performed the same two-alternative age-discrimination task on stimuli chosen from young–old morphing continua that were created from faces as well as cars and were calibrated to lead to similar performances at each noise-level. Adding phase noise to the stimuli reduced performance and enhanced response latency for the two categories to the same extent. Parallel to that, phase noise reduced the amplitude and prolonged the latency of the face-specific N170 component. The amplitude of the P1 showed category-specific noise dependence: it was enhanced over the right hemisphere for cars and over the left hemisphere for faces as a result of adding phase noise to the stimuli, but remained stable across noise levels for cars over the left and for faces over the right hemisphere. Moreover, noise modulation altered the category-selectivity of the N170, while the P2 ERP component, typically associated with task decision difficulty, was larger for the more noisy stimuli regardless of stimulus category. Our results suggest that the category-specificity of noise-induced modulations of ERP responses starts at around 100 ms post-stimulus.
Collapse
Affiliation(s)
- Kornél Németh
- Department of Cognitive Science, Budapest University of Technology and Economics, Budapest, Hungary
- Petra Kovács
- Department of Cognitive Science, Budapest University of Technology and Economics, Budapest, Hungary
- Pál Vakli
- Department of Cognitive Science, Budapest University of Technology and Economics, Budapest, Hungary
- Gyula Kovács
- Department of Cognitive Science, Budapest University of Technology and Economics, Budapest, Hungary; DFG Research Unit Person Perception, Friedrich Schiller University of Jena, Jena, Germany; Institute of Psychology, Friedrich Schiller University of Jena, Jena, Germany
- Márta Zimmer
- Department of Cognitive Science, Budapest University of Technology and Economics, Budapest, Hungary
25
Petro LS, Smith FW, Schyns PG, Muckli L. Decoding face categories in diagnostic subregions of primary visual cortex. Eur J Neurosci 2013; 37:1130-1139. PMID: 23373719. PMCID: PMC3816327. DOI: 10.1111/ejn.12129.
Abstract
Higher visual areas in the occipitotemporal cortex contain discrete regions for face processing, but it remains unclear if V1 is modulated by top-down influences during face discrimination, and if this is widespread throughout V1 or localized to retinotopic regions processing task-relevant facial features. Employing functional magnetic resonance imaging (fMRI), we mapped the cortical representation of two feature locations that modulate higher visual areas during categorical judgements – the eyes and mouth. Subjects were presented with happy and fearful faces, and we measured the fMRI signal of V1 regions processing the eyes and mouth whilst subjects engaged in gender and expression categorization tasks. In a univariate analysis, we used a region-of-interest-based general linear model approach to reveal changes in activation within these regions as a function of task. We then trained a linear pattern classifier to classify facial expression or gender on the basis of V1 data from ‘eye’ and ‘mouth’ regions, and from the remaining non-diagnostic V1 region. Using multivariate techniques, we show that V1 activity discriminates face categories both in local ‘diagnostic’ and widespread ‘non-diagnostic’ cortical subregions. This indicates that V1 might receive the processed outcome of complex facial feature analysis from other cortical (i.e. fusiform face area, occipital face area) or subcortical areas (amygdala).
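The multivariate decoding logic described above can be sketched with a leave-one-out nearest-centroid classifier on simulated voxel patterns. Everything here is invented for illustration (pattern sizes, effect size, labels), and nearest-centroid is a simpler stand-in for the linear pattern classifier the study actually trained on V1 data.

```python
import numpy as np

rng = np.random.default_rng(42)
n_per_class, n_voxels = 40, 50

# Simulated V1 voxel patterns for two face categories (hypothetical signal:
# the class means differ by 1.0 in every voxel)
X = np.vstack([
    rng.standard_normal((n_per_class, n_voxels)) + 0.5,   # e.g. "happy"
    rng.standard_normal((n_per_class, n_voxels)) - 0.5,   # e.g. "fearful"
])
y = np.repeat([0, 1], n_per_class)

# Leave-one-out cross-validated nearest-centroid decoding
correct = 0
for i in range(len(X)):
    train = np.arange(len(X)) != i                # hold out pattern i
    c0 = X[train & (y == 0)].mean(axis=0)         # class centroids from
    c1 = X[train & (y == 1)].mean(axis=0)         # the training folds only
    pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
    correct += (pred == y[i])

accuracy = correct / len(X)
```

Cross-validated accuracy reliably above the 50% chance level is the evidence that the patterns carry category information; with this large a simulated effect the decoder is near ceiling.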
Affiliation(s)
- Lucy S Petro
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, College of Medical, Veterinary and Life Sciences, University of Glasgow, 58 Hillhead Street, Glasgow, G12 8QB, UK
26
Abstract
Functional magnetic resonance imaging (fMRI) has revealed multiple subregions in monkey inferior temporal cortex (IT) that are selective for images of faces over other objects. The earliest of these subregions, the posterior lateral face patch (PL), has not been studied previously at the neurophysiological level. Perhaps not surprisingly, we found that PL contains a high concentration of "face-selective" cells when tested with standard image sets comparable to those used previously to define the region at the level of fMRI. However, we here report that several different image sets and analytical approaches converge to show that nearly all face-selective PL cells are driven by the presence of a single eye in the context of a face outline. Most strikingly, images containing only an eye, even when incorrectly positioned in an outline, drove neurons nearly as well as full-face images, and face images lacking only this feature led to longer latency responses. Thus, bottom-up face processing is relatively local and linearly integrates features-consistent with parts-based models-grounding investigation of how the presence of a face is first inferred in the IT face processing hierarchy.
27
Meinhardt-Injac B, Persike M, Meinhardt G. Holistic Face Processing is Induced by Shape and Texture. Perception 2013; 42:716-732. DOI: 10.1068/p7462.
Abstract
There is increasing evidence that shape and texture are integral parts of face identity. However, it is less clear whether face-specific processing mechanisms are triggered by face shape alone, or if texture might play an important role. We address this question by studying mechanisms involved in holistic face processing. Face stimuli were either full-color pictures of real faces (shape and texture) or line drawings of the same faces (shape without texture). In a change detection task subjects judged whether eyes and eyebrows in two otherwise identical, sequentially presented faces were different in size or not. Afterwards, subjects had to identify the just presented face among two distractor faces (forced-choice identification task). The results obtained from the two tasks give rise to the conclusion that face identification and change detection tasks engage different processing strategies, which capture different aspects of holistic processing. Real faces were processed holistically, irrespective of task requirements, whereas line drawings were processed holistically only if face identification was required. On the basis of the data we conclude that face shape is relevant for the initial processing stage and feature binding, whereas face texture seems to be involved in processing of face configuration more specifically. Moreover, results demonstrate considerable flexibility of the face processing systems allowing for goal-directed and task-specific recall of face information.
Affiliation(s)
- Bozana Meinhardt-Injac
- Department of Psychology, Johannes Gutenberg University, Binger Strasse 14-16, 55122 Mainz, Germany
- Malte Persike
- Department of Psychology, Johannes Gutenberg University, Binger Strasse 14-16, 55122 Mainz, Germany
- Günter Meinhardt
- Department of Psychology, Johannes Gutenberg University, Binger Strasse 14-16, 55122 Mainz, Germany
28
Willenbockel V, Lepore F, Nguyen DK, Bouthillier A, Gosselin F. Spatial Frequency Tuning during the Conscious and Non-Conscious Perception of Emotional Facial Expressions - An Intracranial ERP Study. Front Psychol 2012; 3:237. PMID: 23055988. PMCID: PMC3458489. DOI: 10.3389/fpsyg.2012.00237.
Abstract
Previous studies have shown that complex visual stimuli, such as emotional facial expressions, can influence brain activity independently of the observers’ awareness. Little is known yet, however, about the “informational correlates” of consciousness – i.e., which low-level information correlates with brain activation during conscious vs. non-conscious perception. Here, we investigated this question in the spatial frequency (SF) domain. We examined which SFs in disgusted and fearful faces modulate activation in the insula and amygdala over time and as a function of awareness, using a combination of intracranial event-related potentials (ERPs), SF Bubbles (Willenbockel et al., 2010a), and Continuous Flash Suppression (CFS; Tsuchiya and Koch, 2005). Patients implanted with electrodes for epilepsy monitoring viewed face photographs (13° × 7°) that were randomly SF filtered on a trial-by-trial basis. In the conscious condition, the faces were visible; in the non-conscious condition, they were rendered invisible using CFS. The data were analyzed by performing multiple linear regressions on the SF filters from each trial and the transformed ERP amplitudes across time. The resulting classification images suggest that many SFs are involved in the conscious and non-conscious perception of emotional expressions, with SFs between 6 and 10 cycles per face width being particularly important early on. The results also revealed qualitative differences between the awareness conditions for both regions. Non-conscious processing relied more on low SFs and was faster than conscious processing. Overall, our findings are consistent with the idea that different pathways are employed for the processing of emotional stimuli under different degrees of awareness. The present study represents a first step to mapping how SF information “flows” through the emotion-processing network with a high temporal resolution and to shedding light on the informational correlates of consciousness in general.
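The regression analysis described in this abstract can be sketched in a few lines: per-trial random SF filters are regressed against the (transformed) ERP amplitude, and the coefficient vector is the classification image over spatial frequencies. This is an illustrative sketch only, not the authors' pipeline; all names, sizes, and the simulated signal are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

n_trials, n_freqs = 500, 128                   # trials x SF bins (hypothetical)
sf_filters = rng.random((n_trials, n_freqs))   # random SF gain applied per trial

# Simulated ERP amplitude: driven by SFs around bins 40-60, plus noise.
weights_true = np.zeros(n_freqs)
weights_true[40:60] = 1.0
erp_amp = sf_filters @ weights_true + rng.normal(0, 1.0, n_trials)

# Multiple linear regression (least squares) of amplitude on the filters;
# the fitted coefficients form the classification image over SFs.
X = np.column_stack([np.ones(n_trials), sf_filters])  # add intercept
beta, *_ = np.linalg.lstsq(X, erp_amp, rcond=None)
classification_image = beta[1:]

# Diagnostic SF bins should carry larger weights than non-diagnostic ones.
print(classification_image[40:60].mean() > classification_image[80:100].mean())
```

In the study itself this regression would be repeated at each time point of the ERP, yielding a time-resolved map of diagnostic spatial frequencies.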
Affiliation(s)
- Verena Willenbockel
- Centre de Recherche en Neuropsychologie et Cognition, Département de Psychologie, Université de Montréal Montréal, QC, Canada
29
An examination of the processing capacity of features in the Thatcher illusion. Atten Percept Psychophys 2012; 74:1475-87. [DOI: 10.3758/s13414-012-0330-z] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
30
Ruiz-Soler M, Beltran FS. The Relative Salience of Facial Features When Differentiating Faces Based on an Interference Paradigm. JOURNAL OF NONVERBAL BEHAVIOR 2012. [DOI: 10.1007/s10919-012-0131-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
31
Effects of Spatial Frequencies on Recognition of Facial Identity and Facial Expression. ACTA PSYCHOLOGICA SINICA 2012. [DOI: 10.3724/sp.j.1041.2011.00373] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
32
Measuring internal representations from behavioral and brain data. Curr Biol 2012; 22:191-6. [PMID: 22264608 DOI: 10.1016/j.cub.2011.11.061] [Citation(s) in RCA: 52] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2011] [Revised: 10/28/2011] [Accepted: 11/28/2011] [Indexed: 11/22/2022]
Abstract
The study of internal knowledge representations is a cornerstone of the research agenda in the interdisciplinary study of cognition. An influential proposal assumes that the brain uses its internal knowledge of the external world to constrain, in a top-down manner, high-dimensional sensory data into a lower-dimensional representation that enables perceptual decisions and other higher-level cognitive functions [1-9]. This proposal relies on a precise formulation of the observer-specific internal knowledge (i.e., the internal representations, or models) that guides reduction of the high-dimensional retinal input onto a low-dimensional code. Here, we directly revealed the content of subjective internal representations by instructing five observers to detect a face in the presence of only white noise, to force a pure top-down, knowledge-based task. We used reverse correlation methods to visualize each observer's internal representation that supports detection of an illusory face. Using reverse correlation again, this time applied to observers' electroencephalogram activity, we established where and when in the brain specific internal knowledge conceptually interprets the input white noise as a face. We show that internal representations can be reconstructed experimentally from behavioral and brain data, and that their content drives neural activity first over frontal and then over occipitotemporal cortex.
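The reverse correlation logic this abstract relies on can be illustrated compactly: observers respond to pure white-noise images, and the classification image is the mean of the noise fields on "face present" trials minus the mean on "absent" trials. This is a toy sketch under stated assumptions (a simulated observer with a hypothetical two-eye template), not the study's code.

```python
import numpy as np

rng = np.random.default_rng(1)

h = w = 32
n_trials = 2000
template = np.zeros((h, w))
template[10:14, 8:12] = 1.0    # toy internal "left eye" region (hypothetical)
template[10:14, 20:24] = 1.0   # toy internal "right eye" region

noise = rng.normal(0, 1, (n_trials, h, w))

# Simulated observer: reports "face" when the noise correlates with the template.
scores = (noise * template).sum(axis=(1, 2))
yes = scores > 0

# Classification image: mean noise on "yes" trials minus mean on "no" trials.
ci = noise[yes].mean(axis=0) - noise[~yes].mean(axis=0)

# The recovered image should be brighter inside the template region.
inside = ci[template > 0].mean()
outside = ci[template == 0].mean()
print(inside > outside)
```

Applied to EEG amplitudes instead of binary responses, the same weighting-and-averaging logic localizes where and when the internal template drives brain activity.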
33
Rousselet GA, Pernet CR, Caldara R, Schyns PG. Visual Object Categorization in the Brain: What Can We Really Learn from ERP Peaks? Front Hum Neurosci 2011; 5:156. [PMID: 22144959 PMCID: PMC3228234 DOI: 10.3389/fnhum.2011.00156] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2011] [Accepted: 11/14/2011] [Indexed: 11/13/2022] Open
Affiliation(s)
- Guillaume A Rousselet
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow Glasgow, UK
34
Kourtzi Z, Connor CE. Neural representations for object perception: structure, category, and adaptive coding. Annu Rev Neurosci 2011; 34:45-67. [PMID: 21438683 DOI: 10.1146/annurev-neuro-060909-153218] [Citation(s) in RCA: 100] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Object perception is one of the most remarkable capacities of the primate brain. Owing to the large and indeterminate dimensionality of object space, the neural basis of object perception has been difficult to study and remains controversial. Recent work has provided a more precise picture of how 2D and 3D object structure is encoded in intermediate and higher-level visual cortices. Yet, other studies suggest that higher-level visual cortex represents categorical identity rather than structure. Furthermore, object responses are surprisingly adaptive to changes in environmental statistics, implying that learning through evolution, development, and also shorter-term experience during adulthood may optimize the object code. Future progress in reconciling these findings will depend on more effective sampling of the object domain and direct comparison of these competing hypotheses.
Affiliation(s)
- Zoe Kourtzi
- School of Psychology, University of Birmingham, Edgbaston, Birmingham, B15 2TT, United Kingdom.
35
Letourneau SM, Mitchell TV. Gaze patterns during identity and emotion judgments in hearing adults and deaf users of American Sign Language. Perception 2011; 40:563-75. [PMID: 21882720 DOI: 10.1068/p6858] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Abstract
Deaf individuals rely on facial expressions for emotional, social, and linguistic cues. In order to test the hypothesis that specialized experience with faces can alter typically observed gaze patterns, twelve hearing adults and twelve deaf, early-users of American Sign Language judged the emotion and identity of expressive faces (including whole faces, and isolated top and bottom halves), while accuracy and fixations were recorded. Both groups recognized individuals more accurately from top than bottom halves, and emotional expressions from bottom than top halves. Hearing adults directed the majority of fixations to the top halves of faces in both tasks, but fixated the bottom half slightly more often when judging emotion than identity. In contrast, deaf adults often split fixations evenly between the top and bottom halves regardless of task demands. These results suggest that deaf adults have habitual fixation patterns that may maximize their ability to gather information from expressive faces.
36
Mayhew SD, Li S, Storrar JK, Tsvetanov KA, Kourtzi Z. Learning Shapes the Representation of Visual Categories in the Aging Human Brain. J Cogn Neurosci 2010; 22:2899-912. [PMID: 20044888 DOI: 10.1162/jocn.2010.21415] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
The ability to make categorical decisions and interpret sensory experiences is critical for survival and interactions across the lifespan. However, little is known about the human brain mechanisms that mediate the learning and representation of visual categories in aging. Here we combine behavioral measurements and fMRI measurements to investigate the neural processes that mediate flexible category learning in the aging human brain. Our findings show that training changes the decision criterion (i.e., categorical boundary) that young and older observers use for making categorical judgments. Comparing the behavioral choices of human observers with those of a pattern classifier based upon multivoxel fMRI signals, we demonstrate learning-dependent changes in similar cortical areas for young and older adults. In particular, we show that neural signals in occipito-temporal and posterior parietal regions change through learning to reflect the perceived visual categories. Information in these areas about the perceived visual categories is preserved in aging, whereas information content is compromised in more anterior parietal and frontal circuits. Thus, these findings provide novel evidence for flexible category learning in aging that shapes the neural representations of visual categories to reflect the observers' behavioral judgments.
37
Gosselin F, Spezio ML, Tranel D, Adolphs R. Asymmetrical use of eye information from faces following unilateral amygdala damage. Soc Cogn Affect Neurosci 2010; 6:330-7. [PMID: 20478833 DOI: 10.1093/scan/nsq040] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
The human amygdalae are involved in processing visual information about the eyes within faces, and play an essential role in the use of information from the eye region of the face in order to judge emotional expressions, as well as in directing gaze to the eyes in conversations with real people. However, the roles played here by the left and right amygdala individually remain unknown. Here we investigated this question by applying the 'Bubbles' method, which asks viewers to discriminate facial emotions from randomly sampled small regions of a face, to 23 neurological participants with focal, unilateral amygdala damage (10 to the right amygdala). We found a statistically significant asymmetry in the use of eye information when comparing those with unilateral left lesions to those with unilateral right lesions, specifically during emotion judgments. The findings have implications for the amygdala's role in emotion recognition and gaze direction during face processing.
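The 'Bubbles' sampling procedure mentioned here can be sketched as follows: on each trial, a face is revealed only through a few randomly placed Gaussian apertures, and the locations that correlate with correct responses identify the diagnostic regions. This is an illustrative sketch of the stimulus-generation step only (parameters and names are hypothetical, and random pixels stand in for a face image).

```python
import numpy as np

rng = np.random.default_rng(2)

def bubbles_mask(h, w, n_bubbles=5, sigma=8.0):
    """Sum of randomly centred Gaussian apertures, clipped to [0, 1]."""
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros((h, w))
    for _ in range(n_bubbles):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma**2))
    return np.clip(mask, 0, 1)

face = rng.random((128, 128))      # stand-in for a face photograph
mask = bubbles_mask(128, 128)
stimulus = face * mask             # the observer sees only the apertured regions
```

Regressing the per-trial masks against accuracy (as in the reverse correlation sketch above for noise fields) then yields a map of the face regions each group uses for the judgment.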
Affiliation(s)
- Frédéric Gosselin
- Département de psychologie, Université de Montréal, succursale Centre-ville, Montréal, Québec, Canada.
38
Extracting the internal representation of faces from human brain activity: An analogue to reverse correlation. Neuroimage 2010; 51:373-90. [DOI: 10.1016/j.neuroimage.2010.02.021] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2009] [Revised: 02/06/2010] [Accepted: 02/09/2010] [Indexed: 11/22/2022] Open
39
van Rijsbergen NJ, Schyns PG. Dynamics of trimming the content of face representations for categorization in the brain. PLoS Comput Biol 2009; 5:e1000561. [PMID: 19911045 PMCID: PMC2768819 DOI: 10.1371/journal.pcbi.1000561] [Citation(s) in RCA: 36] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2009] [Accepted: 10/13/2009] [Indexed: 11/23/2022] Open
Abstract
To understand visual cognition, it is imperative to determine when, how and with what information the human brain categorizes the visual input. Visual categorization consistently involves at least an early and a late stage: the occipito-temporal N170 event-related potential related to stimulus encoding and the parietal P300 involved in perceptual decisions. Here we sought to understand how the brain globally transforms its representations of face categories from their early encoding to the later decision stage over the 400 ms time window encompassing the N170 and P300 brain events. We applied classification image techniques to the behavioral and electroencephalographic data of three observers who categorized seven facial expressions of emotion, and we report two main findings: (1) over the 400 ms time course, processing of facial features initially spreads bilaterally across the left and right occipito-temporal regions to dynamically converge onto the centro-parietal region; (2) concurrently, information processing gradually shifts from encoding common face features across all spatial scales (e.g., the eyes) to representing only the finer scales of the diagnostic features that are richer in useful information for behavior (e.g., the wide-open eyes in ‘fear’; the detailed mouth in ‘happy’). Our findings suggest that the brain refines its diagnostic representations of visual categories over the first 400 ms of processing by trimming a thorough encoding of features over the N170, to leave only the detailed information important for perceptual decisions over the P300. How the brain uses visual information to construct representations of categories is a central question of cognitive neuroscience. With our methods we visualize how the brain transforms its representations of facial expressions. Using electroencephalographic data, we analyze how representations change over the first 450 ms of processing both in feature content (e.g., which aspects of the face, such as the eyes or the mouth, are represented across time) and level of detail. We show that facial expressions are initially encoded with most of their features (i.e., mouth and eyes) across all levels of detail in the occipito-temporal regions. In a later phase, we show that a gradual reorganization of representations occurs, whereby only task-relevant face features are kept (e.g., the mouth in “happy”) at only the finest level of detail. We describe this elimination of irrelevant and redundant information as ‘trimming’. We suggest that this may be an example of the brain optimizing categorical representations.
40
Kourtzi Z. Visual learning for perceptual and categorical decisions in the human brain. Vision Res 2009; 50:433-40. [PMID: 19818361 DOI: 10.1016/j.visres.2009.09.025] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2008] [Revised: 09/30/2009] [Accepted: 09/30/2009] [Indexed: 10/20/2022]
Abstract
Successful actions and interactions in the complex environments we inhabit entail making fast and optimal perceptual decisions. Extracting the key features from our sensory experiences and deciding how to interpret them is a computationally challenging task that is far from understood. Accumulating evidence suggests that the brain may solve this challenge by combining sensory information and previous knowledge about the environment acquired through evolution, development, and everyday experience. Here, we review the role of visual learning and experience-dependent plasticity in shaping decisions. We propose that learning plays an important role in translating sensory experiences to decisions and actions by shaping neural representations across cortical circuits in a task-dependent manner.
Affiliation(s)
- Zoe Kourtzi
- School of Psychology, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK.
41
Saether L, Van Belle W, Laeng B, Brennen T, Øvervoll M. Anchoring gaze when categorizing faces' sex: evidence from eye-tracking data. Vision Res 2009; 49:2870-80. [PMID: 19733582 DOI: 10.1016/j.visres.2009.09.001] [Citation(s) in RCA: 45] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2008] [Revised: 08/13/2009] [Accepted: 09/01/2009] [Indexed: 11/25/2022]
Abstract
Previous research has shown that during recognition of frontal views of faces, the preferred landing positions of eye fixations are either on the nose or the eye region. Can these findings generalize to other facial views and a simpler perceptual task? An eye-tracking experiment investigated categorization of the sex of faces seen in four views. The results revealed a strategy, preferred in all views, which consisted of focusing gaze within an 'infraorbital region' of the face. This region was fixated more in the first than in subsequent fixations. Males anchored gaze lower and more centrally than females.
Affiliation(s)
- Line Saether
- University Library, Department of Psychology and Law, University of Tromsø, Norway.
42
Burton AM, Bindemann M. The role of view in human face detection. Vision Res 2009; 49:2026-36. [DOI: 10.1016/j.visres.2009.05.012] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2009] [Revised: 05/08/2009] [Accepted: 05/15/2009] [Indexed: 10/20/2022]
43
Susac A, Ilmoniemi RJ, Pihko E, Nurminen J, Supek S. Early dissociation of face and object processing: a magnetoencephalographic study. Hum Brain Mapp 2009; 30:917-27. [PMID: 18344191 DOI: 10.1002/hbm.20557] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022] Open
Abstract
The early dissociation in cortical responses to faces and objects was explored with magnetoencephalographic (MEG) recordings and source localization. To control for differences in the low-level stimulus features, which are known to modulate early brain responses, we created a novel set of stimuli so that their combinations did not have any differences in the visual-field location, spatial frequency, or luminance contrast. Differing responses to face and object (flower) stimuli were found at about 100 ms after stimulus onset in the occipital cortex. Our data also confirm that the brain response to a complex visual stimulus is not merely a sum of the responses to its constituent parts; the nonlinearity in the response was largest for meaningful stimuli.
Affiliation(s)
- Ana Susac
- Department of Physics, Faculty of Science, University of Zagreb, Zagreb, Croatia.
44
Heberlein AS, Atkinson AP. Neuroscientific Evidence for Simulation and Shared Substrates in Emotion Recognition: Beyond Faces. EMOTION REVIEW 2009. [DOI: 10.1177/1754073908100441] [Citation(s) in RCA: 78] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
According to simulation or shared-substrates models of emotion recognition, our ability to recognize the emotions expressed by other individuals relies, at least in part, on processes that internally simulate the same emotional state in ourselves. The term “emotional expressions” is nearly synonymous, in many people's minds, with facial expressions of emotion. However, vocal prosody and whole-body cues also convey emotional information. What is the relationship between these various channels of emotional communication? We first briefly review simulation models of emotion recognition, and then discuss neuroscientific evidence related to these models, including studies using facial expressions, whole-body cues, and vocal prosody. We conclude by discussing these data in the context of simulation and shared-substrates models of emotion recognition.
45
Keil MS. "I look in your eyes, honey": internal face features induce spatial frequency preference for human face processing. PLoS Comput Biol 2009; 5:e1000329. [PMID: 19325870 PMCID: PMC2653192 DOI: 10.1371/journal.pcbi.1000329] [Citation(s) in RCA: 35] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2008] [Accepted: 02/10/2009] [Indexed: 11/19/2022] Open
Abstract
Numerous psychophysical experiments found that humans preferably rely on a narrow band of spatial frequencies for recognition of face identity. A recently conducted theoretical study by the author suggests that this frequency preference reflects an adaptation of the brain's face processing machinery to this specific stimulus class (i.e., faces). The purpose of the present study is to examine this property in greater detail and to specifically elucidate the implication of internal face features (i.e., eyes, mouth, and nose). To this end, I parameterized Gabor filters to match the spatial receptive field of contrast sensitive neurons in the primary visual cortex (simple and complex cells). Filter responses to a large number of face images were computed, aligned for internal face features, and response-equalized ("whitened"). The results demonstrate that the frequency preference is caused by internal face features. Thus, the psychophysically observed human frequency bias for face processing seems to be specifically caused by the intrinsic spatial frequency content of internal face features.
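The filter model described here can be illustrated with a quadrature Gabor pair: an even and an odd Gabor (simple cells) whose squared responses sum to a phase-invariant "complex cell" energy, compared across spatial frequencies. This is a minimal sketch under illustrative assumptions (a pure grating stands in for the face stimulus; parameters are not from the paper).

```python
import numpy as np

def gabor_pair(size, cycles, sigma):
    """Even/odd Gabor filters with `cycles` periods across the patch."""
    x = np.linspace(-0.5, 0.5, size)
    xx, yy = np.meshgrid(x, x)
    envelope = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    even = envelope * np.cos(2 * np.pi * cycles * xx)
    odd = envelope * np.sin(2 * np.pi * cycles * xx)
    return even, odd

def complex_cell_energy(img, cycles, sigma=0.15):
    """Phase-invariant energy: squared even plus squared odd response."""
    even, odd = gabor_pair(img.shape[0], cycles, sigma)
    return (img * even).sum() ** 2 + (img * odd).sum() ** 2

# Toy stimulus dominated by ~8 cycles/patch horizontal structure.
size = 64
x = np.linspace(-0.5, 0.5, size)
img = np.cos(2 * np.pi * 8 * np.meshgrid(x, x)[0])

energies = {c: complex_cell_energy(img, c) for c in (2, 8, 20)}
print(max(energies, key=energies.get))
```

In the paper's analysis, responses of such filter banks to aligned face images are computed and whitened; the sketch only shows why the filter tuned to the stimulus's dominant band wins.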
Affiliation(s)
- Matthias S Keil
- Basic Psychology Department, Faculty for Psychology, University of Barcelona, Barcelona, Spain.
46
Unilateral left prosopometamorphopsia: A neuropsychological case study. Neuropsychologia 2009; 47:942-8. [DOI: 10.1016/j.neuropsychologia.2008.12.015] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2008] [Revised: 11/24/2008] [Accepted: 12/12/2008] [Indexed: 11/18/2022]
47
Smith ML, Fries P, Gosselin F, Goebel R, Schyns PG. Inverse Mapping the Neuronal Substrates of Face Categorizations. Cereb Cortex 2009; 19:2428-38. [DOI: 10.1093/cercor/bhn257] [Citation(s) in RCA: 29] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
48
Schyns PG, Gosselin F, Smith ML. Information processing algorithms in the brain. Trends Cogn Sci 2008; 13:20-6. [PMID: 19070533 DOI: 10.1016/j.tics.2008.09.008] [Citation(s) in RCA: 34] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2008] [Revised: 09/29/2008] [Accepted: 09/29/2008] [Indexed: 11/26/2022]
Abstract
If the brain is a machine that processes information, then its cognitive activity can be interpreted as a set of information processing states linking stimulus to response (i.e. as a mechanism or an algorithm). The cornerstone of this research agenda is the existence of a method to translate the measurable states of brain activity into the information processing states of a cognitive theory. Here, we contend that reverse correlation methods can provide this translation and we frame the transitions between information processing states in the context of automata theory. We illustrate, using examples from visual cognition, how this novel framework can be applied to understand the information processing algorithms of the brain in cognitive neuroscience.
Affiliation(s)
- Philippe G Schyns
- Centre for Cognitive Neuroimaging, Department of Psychology, University of Glasgow, 58 Hillhead Street, Glasgow G12 8QB, UK.
49
Hsiao JHW, Shieh DX, Cottrell GW. Convergence of the visual field split: hemispheric modeling of face and object recognition. J Cogn Neurosci 2008; 20:2298-307. [PMID: 18457514 PMCID: PMC7360338 DOI: 10.1162/jocn.2008.20162] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Anatomical evidence shows that our visual field is initially split along the vertical midline and contralaterally projected to different hemispheres. It remains unclear at which processing stage the split information converges. In the current study, we applied the Double Filtering by Frequency (DFF) theory (Ivry & Robertson, 1998) to modeling the visual field split; the theory assumes a right-hemisphere/low-frequency bias. We compared three cognitive architectures with different timings of convergence and examined their cognitive plausibility to account for the left-side bias effect in face perception observed in human data. We show that the early convergence model failed to show the left-side bias effect. The modeling, hence, suggests that the convergence may take place at an intermediate or late stage, at least after information has been extracted/encoded separately in the two hemispheres, a fact that is often overlooked in computational modeling of cognitive processes. Comparative anatomical data suggest that this separate encoding process that results in differential frequency biases in the two hemispheres may be engaged from V1 up to the level of area V3a and V4v, and converge at least after the lateral occipital region. The left-side bias effect in our model was also observed in Greeble recognition; the modeling, hence, also provides testable predictions about whether the left-side bias effect may also be observed in (expertise-level) object recognition.
Affiliation(s)
- Janet Hui-wen Hsiao
- Department of Computer Science and Engineering, University of California, San Diego, La Jolla, CA 92093-0404, USA.
50
Gaspar C, Sekuler AB, Bennett PJ. Spatial frequency tuning of upright and inverted face identification. Vision Res 2008; 48:2817-26. [PMID: 18835403 DOI: 10.1016/j.visres.2008.09.015] [Citation(s) in RCA: 48] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2008] [Revised: 09/09/2008] [Accepted: 09/12/2008] [Indexed: 11/26/2022]
Abstract
Previous research suggests that observers use information near the eyes and eyebrows to identify both upright and inverted faces [Sekuler, A. B., Gaspar, C. M., Gold, J. M., & Bennett, P. J. (2004). Inversion leads to quantitative, not qualitative, changes in face processing. Current Biology, 14(5), 391-396]. Here we ask whether more significant differences between upright and inverted face processing exist in the spatial frequency domain. Thresholds were measured in a 1-of-10 identification task with upright and inverted faces presented in no noise, white Gaussian noise, and in low-pass and high-pass filtered noises with various cutoff frequencies. In Experiment 1, all faces were presented in fronto-parallel view; in Experiment 2, viewpoint varied across trials. Thresholds were higher for inverted faces, but the magnitude of the inversion effect did not vary across conditions or experiments. Moreover, the shapes of the noise-masking functions obtained with low-pass and high-pass noise were the same for upright and inverted faces, did not vary between experiments, and revealed that identification was based on information carried by a 1.5 octave wide band of spatial frequencies centered on approximately 7 cycles per face width. Finally, individual differences in the magnitude of the inversion effect were not related to individual differences in the frequency selectivity of face identification. The results indicate that the face inversion effect for identification judgments is not due to subjects using different bands of spatial frequencies to identify upright and inverted faces.
Affiliation(s)
- Carl Gaspar
- Centre for Cognitive Neuroimaging, Department of Psychology, University of Glasgow, G12 8QB, UK.