51
Korolkova OA. The role of temporal inversion in the perception of realistic and morphed dynamic transitions between facial expressions. Vision Res 2017; 143:42-51. [PMID: 29274357] [DOI: 10.1016/j.visres.2017.10.007]
Abstract
Recent studies suggest that video recordings of human facial expressions are perceived differently than linear morphing between the first and last frames of these recordings. Also, observers can differentiate dynamic expressions presented in normal versus time-reversed frame orders. To date, the simultaneous influence of dynamics (natural or linear) and timeline (normal or reversed) has not yet been tested on a wide range of dynamic emotional expressions and the transitions between them. We compared the perception of dynamic transitions between basic emotions in realistic (human-posed) and artificial (linearly morphed) stimuli which were presented in reversed or non-reversed order. The nonlinearity of realistic stimuli was demonstrated by automated facial structure analysis. The results of the behavioral study revealed that the recognition of emotions in time-reversed stimuli significantly differed from recognition of the normally presented ones, and this difference was substantially higher for videos of a dynamic human face than for linear morphs. Emotions displayed at the end of the transitions were recognized better than the first-frame emotions in all types of stimuli except in the time-reversed videos, which showed a similar recognition rate for both the starting and ending emotions. Our findings suggest that nonlinearity, which is present in a realistic facial display but absent in linear morphing, is an important cue for emotion perception, and that unnatural perceptual conditions (inversion in time) make the recognition of emotions more difficult. These results confirm the ability of the human visual system to use subtle dynamic cues on an interlocutor's face, and reveal its sensitivity to the timeline organization of the displayed emotions.
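The core stimulus manipulation here, a linear morph between endpoint frames versus natural dynamics, plus time reversal, can be made concrete with a short sketch. The following Python/NumPy code is purely illustrative; the frame count, array shapes, and synthetic frames are assumptions, not the study's stimulus parameters.

```python
import numpy as np

def linear_morph(first, last, n_frames):
    """Linearly blend two frames into an artificial transition,
    in contrast to a natural video recording of the expression."""
    weights = np.linspace(0.0, 1.0, n_frames)
    return np.stack([(1 - w) * first + w * last for w in weights])

# Illustrative endpoint "frames" (grayscale, 64x64); real stimuli would be
# the first and last frames of an expression video.
rng = np.random.default_rng(0)
first_frame = rng.random((64, 64))
last_frame = rng.random((64, 64))

morph = linear_morph(first_frame, last_frame, 25)  # artificial dynamics
time_reversed = morph[::-1]                        # reversed frame order
```

Time reversal is simply a reversed frame order, which leaves the low-level frame content identical while disrupting the natural temporal trajectory.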
Affiliation(s)
- Olga A Korolkova
- Center for Experimental Psychology, Moscow State University of Psychology and Education, 2a Shelepikhinskaya Quay, 123290 Moscow, Russia.
52
Foley E, Rippon G, Senior C. Modulation of Neural Oscillatory Activity during Dynamic Face Processing. J Cogn Neurosci 2017; 30:338-352. [PMID: 29160744] [DOI: 10.1162/jocn_a_01209]
Abstract
Various neuroimaging and neurophysiological methods have been used to examine neural activation patterns in response to faces. However, much of the previous research has relied on static images of faces, which do not allow a complete description of the temporal structure of face-specific neural activities to be made. More recently, insights are emerging from fMRI studies about the neural substrates that underpin our perception of naturalistic dynamic face stimuli, but the temporal and spectral oscillatory activity associated with processing dynamic faces has yet to be fully characterized. Here, we used MEG and beamformer source localization to examine the spatiotemporal profile of neurophysiological oscillatory activity in response to dynamic faces. Source analysis revealed a number of regions showing enhanced activation in response to dynamic relative to static faces in the distributed face network, which were spatially coincident with regions that were previously identified with fMRI. Furthermore, our results demonstrate that perception of realistic dynamic facial stimuli activates a distributed neural network at varying time points facilitated by modulations in low-frequency power within alpha and beta frequency ranges (8-30 Hz). Naturalistic dynamic face stimuli may provide a better means of representing the complex nature of perceiving facial expressions in the real world, and neural oscillatory activity can provide additional insights into the associated neural processes.
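The alpha/beta (8-30 Hz) power modulations reported above can be illustrated with a generic band-power computation. This minimal sketch uses Welch's method on a synthetic signal; the sampling rate, band edges, and signal are assumptions standing in for the study's MEG/beamformer pipeline.

```python
import numpy as np
from scipy.signal import welch

fs = 600.0  # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)

# Synthetic "sensor" signal: alpha (10 Hz) and beta (20 Hz) components plus noise.
x = (np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)
     + 0.3 * rng.standard_normal(t.size))

f, psd = welch(x, fs=fs, nperseg=2048)

def band_power(f, psd, lo, hi):
    """Integrate the power spectral density between lo and hi Hz."""
    mask = (f >= lo) & (f <= hi)
    return np.trapz(psd[mask], f[mask])

print(f"alpha (8-13 Hz): {band_power(f, psd, 8, 13):.3f}")
print(f"beta (13-30 Hz): {band_power(f, psd, 13, 30):.3f}")
```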
53
Hirsch J, Zhang X, Noah JA, Ono Y. Frontal temporal and parietal systems synchronize within and across brains during live eye-to-eye contact. Neuroimage 2017; 157:314-330. [PMID: 28619652] [PMCID: PMC5863547] [DOI: 10.1016/j.neuroimage.2017.06.018]
Abstract
Human eye-to-eye contact is a primary source of social cues and communication. In spite of the biological significance of this interpersonal interaction, the underlying neural processes are not well-understood. This knowledge gap, in part, reflects limitations of conventional neuroimaging methods, including solitary confinement in the bore of a scanner and minimal tolerance of head movement that constrain investigations of natural, two-person interactions. However, these limitations are substantially resolved by recent technical developments in functional near-infrared spectroscopy (fNIRS), a non-invasive spectral absorbance technique that detects changes in blood oxygen levels in the brain by using surface-mounted optical sensors. Functional NIRS is tolerant of limited head motion and enables simultaneous acquisitions of neural signals from two interacting partners in natural conditions. We employ fNIRS to advance a data-driven theoretical framework for two-person neuroscience motivated by the Interactive Brain Hypothesis which proposes that interpersonal interaction between individuals evokes neural mechanisms not engaged during solo, non-interactive, behaviors. Within this context, two specific hypotheses related to eye-to-eye contact, functional specificity and functional synchrony, were tested. The functional specificity hypothesis proposes that eye-to-eye contact engages specialized, within-brain, neural systems; and the functional synchrony hypothesis proposes that eye-to-eye contact engages specialized, across-brain, neural processors that are synchronized between dyads. Signals acquired during eye-to-eye contact between partners (interactive condition) were compared to signals acquired during mutual gaze at the eyes of a picture-face (non-interactive condition). In accordance with the specificity hypothesis, responses during eye-to-eye contact were greater than eye-to-picture gaze for a left frontal cluster that included pars opercularis (associated with canonical language production functions known as Broca's region), pre- and supplementary motor cortices (associated with articulatory systems), as well as the subcentral area. This frontal cluster was also functionally connected to a cluster located in the left superior temporal gyrus (associated with canonical language receptive functions known as Wernicke's region), primary somatosensory cortex, and the subcentral area. In accordance with the functional synchrony hypothesis, cross-brain coherence during eye-to-eye contact relative to eye-to-picture gaze increased for signals originating within left superior temporal, middle temporal, and supramarginal gyri as well as the pre- and supplementary motor cortices of both interacting brains. These synchronous cross-brain regions are also associated with known language functions, and were partner-specific (i.e., disappeared with randomly assigned partners). Together, both within and across-brain neural correlates of eye-to-eye contact included components of previously established productive and receptive language systems. These findings reveal a left frontal, temporal, and parietal long-range network that mediates neural responses during eye-to-eye contact between dyads, and advance insight into elemental mechanisms of social and interpersonal interactions.
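Cross-brain synchrony of the kind tested here reduces, in its simplest form, to spectral coherence between two partners' time series. The study itself used wavelet-based coherence on fNIRS signals; the sketch below substitutes scipy's Welch-based magnitude-squared coherence on synthetic data, with the sampling rate and the shared component as illustrative assumptions.

```python
import numpy as np
from scipy.signal import coherence

fs = 10.0  # assumed fNIRS sampling rate in Hz
t = np.arange(0, 300, 1 / fs)
rng = np.random.default_rng(1)

# A shared slow component stands in for partner-coupled activity during
# eye-to-eye contact; independent noise stands in for everything else.
shared = np.sin(2 * np.pi * 0.1 * t)
partner_a = shared + 0.8 * rng.standard_normal(t.size)
partner_b = shared + 0.8 * rng.standard_normal(t.size)

f, cxy = coherence(partner_a, partner_b, fs=fs, nperseg=512)
print(f"peak cross-brain coherence {cxy.max():.2f} at {f[cxy.argmax()]:.2f} Hz")
```

The partner-specificity control in the study corresponds to recomputing such a measure across randomly re-paired partners, for whom the shared component should vanish.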
Affiliation(s)
- Joy Hirsch
- Department of Psychiatry, Yale School of Medicine, New Haven, CT 06511, USA; Department of Neuroscience, Yale School of Medicine, New Haven, CT 06511, USA; Department of Comparative Medicine, Yale School of Medicine, New Haven, CT 06511, USA; Department of Medical Physics and Biomedical Engineering, University College London, UK.
- Xian Zhang
- Department of Psychiatry, Yale School of Medicine, New Haven, CT 06511, USA
- J Adam Noah
- Department of Psychiatry, Yale School of Medicine, New Haven, CT 06511, USA
- Yumie Ono
- Department of Electronics and Bioinformatics, School of Science and Technology, Meiji University, Kawasaki, Kanagawa, Japan
54
Perdikis D, Volhard J, Müller V, Kaulard K, Brick TR, Wallraven C, Lindenberger U. Brain synchronization during perception of facial emotional expressions with natural and unnatural dynamics. PLoS One 2017; 12:e0181225. [PMID: 28723957] [PMCID: PMC5517022] [DOI: 10.1371/journal.pone.0181225]
Abstract
Research on the perception of facial emotional expressions (FEEs) often uses static images that do not capture the dynamic character of social coordination in natural settings. Recent behavioral and neuroimaging studies suggest that dynamic FEEs (videos or morphs) enhance emotion perception. To identify mechanisms associated with the perception of FEEs with natural dynamics, the present EEG (electroencephalography) study compared (i) ecologically valid stimuli of angry and happy FEEs with natural dynamics to (ii) FEEs with unnatural dynamics, and to (iii) static FEEs. FEEs with unnatural dynamics showed faces moving in a biologically possible but unpredictable and atypical manner, generally resulting in ambivalent emotional content. Participants were asked to explicitly recognize FEEs. Using whole power (WP) and phase synchrony (Phase Locking Index, PLI), we found that brain responses discriminated between natural and unnatural FEEs (both static and dynamic). Differences were primarily observed in the timing and brain topographies of delta and theta PLI and WP, and in alpha and beta WP. Our results support the view that biologically plausible, albeit atypical, FEEs are processed by the brain by different mechanisms than natural FEEs. We conclude that natural movement dynamics are essential for the perception of FEEs and the associated brain processes.
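The Phase Locking Index used here quantifies how consistently oscillatory phase at a given time point aligns across trials. Below is a minimal sketch assuming Hilbert-derived phases on synthetic data; a real pipeline would first band-pass filter each band of interest (e.g., theta).

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(2)
n_trials, fs = 40, 250
t = np.arange(0, 1.0, 1 / fs)

# Synthetic trials: a 6 Hz (theta) oscillation partially phase-locked to
# stimulus onset, plus trial-specific noise.
trials = np.array([
    np.sin(2 * np.pi * 6 * t + 0.3 * rng.standard_normal())
    + 0.5 * rng.standard_normal(t.size)
    for _ in range(n_trials)
])

phase = np.angle(hilbert(trials, axis=1))      # instantaneous phase per trial
pli = np.abs(np.exp(1j * phase).mean(axis=0))  # 1 = perfect locking, 0 = none
print(f"mean phase-locking index across the epoch: {pli.mean():.2f}")
```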
Affiliation(s)
- Dionysios Perdikis
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Berlin, Germany
- Jakob Volhard
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Berlin, Germany
- Viktor Müller
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Berlin, Germany
- Kathrin Kaulard
- Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Timothy R. Brick
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Berlin, Germany
- Christian Wallraven
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
- Ulman Lindenberger
- Center for Lifespan Psychology, Max Planck Institute for Human Development, Berlin, Germany
55
Furl N, Lohse M, Pizzorni-Ferrarese F. Low-frequency oscillations employ a general coding of the spatio-temporal similarity of dynamic faces. Neuroimage 2017; 157:486-499. [PMID: 28619657] [PMCID: PMC6390175] [DOI: 10.1016/j.neuroimage.2017.06.023]
Abstract
Brain networks use neural oscillations as information transfer mechanisms. Although the face perception network in occipitotemporal cortex is well-studied, contributions of oscillations to face representation remain an open question. We tested for links between oscillatory responses that encode facial dimensions and the theoretical proposal that faces are encoded in similarity-based "face spaces". We quantified similarity-based encoding of dynamic faces in magnetoencephalographic sensor-level oscillatory power for identity, expression, physical and perceptual similarity of facial form and motion. Our data show that evoked responses manifest physical and perceptual form similarity that distinguishes facial identities. Low-frequency induced oscillations (<20 Hz) manifested more general similarity structure, which was not limited to identity, and spanned physical and perceived form and motion. A supplementary fMRI-constrained source reconstruction implicated fusiform gyrus and V5 in this similarity-based representation. These findings introduce a potential link between "face space" encoding and oscillatory network communication, which generates new hypotheses about the potential oscillation-mediated mechanisms that might encode facial dimensions.
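Similarity-based ("face space") encoding of this kind is typically tested with representational similarity analysis: pairwise dissimilarities among faces computed from a model (physical or perceptual) are correlated with dissimilarities computed from neural responses. A minimal sketch with fabricated data follows; the dimensions and distance metrics are assumptions, not the paper's exact analysis.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_faces, n_features, n_sensors = 8, 5, 20

# Fabricated physical descriptors per face, and oscillatory power patterns
# that partially reflect them.
physical = rng.random((n_faces, n_features))
power = (physical @ rng.random((n_features, n_sensors))
         + 0.1 * rng.standard_normal((n_faces, n_sensors)))

model_rdm = pdist(physical, metric="euclidean")  # one value per face pair
neural_rdm = pdist(power, metric="correlation")

rho, p = spearmanr(model_rdm, neural_rdm)
print(f"model-neural RDM correlation: rho = {rho:.2f}, p = {p:.4f}")
```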
Affiliation(s)
- Nicholas Furl
- Department of Psychology, Royal Holloway, University of London, Surrey TW20 0EX, United Kingdom; Cognition and Brain Sciences Unit, Medical Research Council, Cambridge CB2 7EF, United Kingdom.
- Michael Lohse
- Cognition and Brain Sciences Unit, Medical Research Council, Cambridge CB2 7EF, United Kingdom; Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3QX, United Kingdom
56
Abstract
Behavioral studies have found a striking decline in the processing of low-level motion in healthy aging whereas the processing of more relevant and familiar biological motion is relatively preserved. This functional magnetic resonance imaging (fMRI) study investigated the neural correlates of low-level radial motion processing and biological motion processing in 19 healthy older adults (age range 62–78 years) and in 19 younger adults (age range 20–30 years). Brain regions related to both types of motion stimuli were evaluated and the magnitude and time courses of activation in those regions of interest were calculated. Whole-brain comparisons showed increased temporal and frontal activation in the older group for low-level motion but no differences for biological motion. Time-course analyses in regions of interest known to be involved in both types of motion processing likewise did not reveal any age differences for biological motion. Our results show that low-level motion processing in healthy aging requires the recruitment of additional resources, whereas areas related to the processing of biological motion seem to be relatively preserved.
Affiliation(s)
- Gordon D Waiter
- Aberdeen Biomedical Imaging Centre, The Institute of Medical Sciences, University of Aberdeen, Aberdeen, UK
- Karin S Pilz
- School of Psychology, University of Aberdeen, Aberdeen, UK
57
Liang Y, Liu B, Xu J, Zhang G, Li X, Wang P, Wang B. Decoding facial expressions based on face-selective and motion-sensitive areas. Hum Brain Mapp 2017; 38:3113-3125. [PMID: 28345150] [PMCID: PMC6866795] [DOI: 10.1002/hbm.23578]
Abstract
Humans can easily recognize others' facial expressions. Among the brain substrates that enable this ability, considerable attention has been paid to face-selective areas; in contrast, whether motion-sensitive areas, which clearly exhibit sensitivity to facial movements, are involved in facial expression recognition has remained unclear. The present functional magnetic resonance imaging (fMRI) study used multi-voxel pattern analysis (MVPA) to explore facial expression decoding in both face-selective and motion-sensitive areas. In a block design experiment, participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise) in images, videos, and eyes-obscured videos. Due to the use of multiple stimulus types, the impacts of facial motion and eye-related information on facial expression decoding were also examined. It was found that motion-sensitive areas showed significant responses to emotional expressions and that dynamic expressions could be successfully decoded in both face-selective and motion-sensitive areas. Compared with static stimuli, dynamic expressions elicited consistently higher neural responses and decoding performance in all regions. A significant decrease in both activation and decoding accuracy due to the absence of eye-related information was also observed. Overall, the findings showed that emotional expressions are represented in motion-sensitive areas in addition to conventional face-selective areas, suggesting that motion-sensitive regions may also effectively contribute to facial expression recognition. The results also suggested that facial motion and eye-related information played important roles by carrying considerable expression information that could facilitate facial expression recognition.
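MVPA decoding of this sort typically trains a linear classifier on voxel patterns and evaluates it with cross-validation. A minimal sketch with fabricated patterns for six emotion classes follows; the classifier choice, trial counts, and voxel counts are assumptions, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n_per_class, n_voxels, n_classes = 20, 100, 6  # six basic emotions

# Fabricated voxel patterns: each emotion gets a weak class-specific offset.
X = np.vstack([rng.standard_normal((n_per_class, n_voxels))
               + 0.3 * rng.standard_normal(n_voxels)
               for _ in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"6-way decoding accuracy: {scores.mean():.2f} (chance = {1/6:.2f})")
```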
Affiliation(s)
- Yin Liang
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin 300350, People's Republic of China
- Baolin Liu
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin 300350, People's Republic of China
- State Key Laboratory of Intelligent Technology and Systems, National Laboratory for Information Science and Technology, Tsinghua University, Beijing 100084, People's Republic of China
- Junhai Xu
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin 300350, People's Republic of China
- Gaoyan Zhang
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin 300350, People's Republic of China
- Xianglin Li
- Medical Imaging Research Institute, Binzhou Medical University, Yantai, Shandong 264003, People's Republic of China
- Peiyuan Wang
- Department of Radiology, Yantai Affiliated Hospital of Binzhou Medical University, Yantai, Shandong 264003, People's Republic of China
- Bin Wang
- Medical Imaging Research Institute, Binzhou Medical University, Yantai, Shandong 264003, People's Republic of China
58
Vukusic S, Ciorciari J, Crewther DP. Electrophysiological Correlates of Subliminal Perception of Facial Expressions in Individuals with Autistic Traits: A Backward Masking Study. Front Hum Neurosci 2017; 11:256. [PMID: 28588465] [PMCID: PMC5440466] [DOI: 10.3389/fnhum.2017.00256]
Abstract
People with Autism spectrum disorder (ASD) show difficulty in social communication, especially in the rapid assessment of emotion in faces. This study examined the processing of emotional faces in typically developing adults with high and low levels of autistic traits (measured using the Autism Spectrum Quotient—AQ). Event-related potentials (ERPs) were recorded during viewing of backward-masked neutral, fearful and happy faces presented under two conditions: subliminal (16 ms, below the level of visual conscious awareness) and supraliminal (166 ms, above the time required for visual conscious awareness). Individuals with low and high AQ differed in the processing of subliminal faces, with the low AQ group showing an enhanced N2 amplitude for subliminal happy faces. Some group differences were found in the condition effects, with the low AQ group showing shorter frontal P3b and N4 latencies for the subliminal vs. the supraliminal condition. Although results did not show any group differences on the face-specific N170 component, there were shorter N170 latencies for supraliminal vs. subliminal conditions across groups. The results observed on the N2, showing group differences in subliminal emotion processing, suggest that decreased sensitivity to the reward value of social stimuli is a common feature both of people with ASD and of people with high autistic traits from the normal population.
Affiliation(s)
- Svjetlana Vukusic
- Centre for Human Psychopharmacology, Swinburne University of Technology, Melbourne, VIC, Australia
- Joseph Ciorciari
- Centre for Human Psychopharmacology, Swinburne University of Technology, Melbourne, VIC, Australia
- David P Crewther
- Centre for Human Psychopharmacology, Swinburne University of Technology, Melbourne, VIC, Australia
59
Butcher N, Lander K, Jagger R. A search advantage for dynamic same-race and other-race faces. Visual Cognition 2016. [DOI: 10.1080/13506285.2016.1262487]
Affiliation(s)
- Natalie Butcher
- Social Futures Institute, Teesside University, Middlesbrough, UK
- Karen Lander
- School of Psychological Sciences, University of Manchester, Manchester, UK
- Rachel Jagger
- School of Psychological Sciences, University of Manchester, Manchester, UK
60
Sex Differences in Emotion Recognition and Emotional Inferencing Following Severe Traumatic Brain Injury. Brain Impair 2016. [DOI: 10.1017/brimp.2016.22]
Abstract
The primary objective of the current study was to determine if men and women with traumatic brain injury (TBI) differ in their emotion recognition and emotional inferencing abilities. In addition to overall accuracy, we explored whether differences were contingent upon the target emotion for each task, or upon high- and low-intensity facial and vocal emotion expressions. A total of 160 participants (116 men) with severe TBI completed three tasks: facial emotion recognition (DANVA-Faces), vocal emotion recognition (DANVA-Voices), and emotional inferencing (Emotional Inference from Stories Test; EIST). Results showed that women with TBI were significantly more accurate in their recognition of vocal emotion expressions and also in emotional inferencing. Further analyses of task performance showed that women were significantly better than men at recognising fearful facial expressions and also facial emotion expressions high in intensity. Women also displayed increased response accuracy for sad vocal expressions and low-intensity vocal emotion expressions. Analysis of the EIST task showed that women were more accurate than men at emotional inferencing in sad and fearful stories. A similar proportion of women and men with TBI were impaired (≥ 2 SDs below normative means) at facial emotion perception, χ² = 1.45, p = 0.228, but a larger proportion of men was impaired at vocal emotion recognition, χ² = 7.13, p = 0.008, and emotional inferencing, χ² = 7.51, p = 0.006.
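The impairment-proportion comparisons above (e.g., χ² = 7.13, p = 0.008) are standard chi-square tests on 2x2 contingency tables. The sketch below shows the computation with scipy; the cell counts are hypothetical, since the abstract reports only the test statistics.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts of impaired vs. unimpaired participants by sex;
# the abstract does not report the underlying cells.
#          impaired  unimpaired
table = [[40, 76],   # men   (n = 116)
         [4, 40]]    # women (n = 44)

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi-square({dof}) = {chi2:.2f}, p = {p:.3f}")
```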
61
Affiliation(s)
- Karin S. Pilz
- School of Psychology, University of Aberdeen, Aberdeen, Scotland, UK
- Ian M. Thornton
- Department of Cognitive Science, Faculty of Media & Knowledge Science, University of Malta, Msida, Malta
62
Zupan B, Neumann D. Exploring the Use of Isolated Expressions and Film Clips to Evaluate Emotion Recognition by People with Traumatic Brain Injury. J Vis Exp 2016. [PMID: 27213280] [DOI: 10.3791/53774]
Abstract
The current study presented 60 people with traumatic brain injury (TBI) and 60 controls with isolated facial emotion expressions, isolated vocal emotion expressions, and multimodal (i.e., film clips) stimuli that included contextual cues. All stimuli were presented via computer. Participants were required to indicate how the person in each stimulus was feeling using a forced-choice format. Additionally, for the film clips, participants had to indicate how they felt in response to the stimulus, and the level of intensity with which they experienced that emotion.
Affiliation(s)
- Barbra Zupan
- Department of Applied Linguistics, Brock University
- Dawn Neumann
- Department of Physical Medicine and Rehabilitation, Indiana University School of Medicine and Rehabilitation Hospital of Indiana
63
Yovel G, O’Toole AJ. Recognizing People in Motion. Trends Cogn Sci 2016; 20:383-395. [DOI: 10.1016/j.tics.2016.02.005]
64
Brown JA, Hux K, Knollman-Porter K, Wallace SE. Use of Visual Cues by Adults With Traumatic Brain Injuries to Interpret Explicit and Inferential Information. J Head Trauma Rehabil 2016; 31:E32-41. [PMID: 26098256] [DOI: 10.1097/htr.0000000000000148]
Abstract
Objective: Concomitant visual and cognitive impairments following traumatic brain injuries (TBIs) may be problematic when the visual modality serves as a primary source for receiving information. Further difficulties comprehending visual information may occur when interpretation requires processing inferential rather than explicit content. The purpose of this study was to compare the accuracy with which people with and without severe TBI interpreted information in contextually rich drawings. Participants: Fifteen adults with and 15 adults without severe TBI. Design: Repeated-measures between-groups design. Main measures: Participants were asked to match images to sentences that either conveyed explicit (i.e., main action or background) or inferential (i.e., physical or mental inference) information. The researchers compared accuracy between participant groups and among stimulus conditions. Results: Participants with TBI demonstrated significantly poorer accuracy than participants without TBI extracting information from images. In addition, participants with TBI demonstrated significantly higher response accuracy when interpreting explicit rather than inferential information; however, no significant difference emerged between sentences referencing main action versus background information or sentences providing physical versus mental inference information for this participant group. Conclusions: Difficulties gaining information from visual environmental cues may arise for people with TBI given their difficulties interpreting inferential content presented through the visual modality.
Affiliation(s)
- Jessica A Brown
- Department of Special Education and Communication Disorders, University of Nebraska-Lincoln, Lincoln, Nebraska (Dr Brown, Dr Hux); Department of Speech Pathology and Audiology, Miami University, Oxford, Ohio (Dr Knollman-Porter); and Department of Speech-Language Pathology, Duquesne University, Pittsburgh, Pennsylvania (Dr Wallace)
65
66
Korkmaz Hacialihafiz D, Bartels A. Motion responses in scene-selective regions. Neuroimage 2015; 118:438-44. [DOI: 10.1016/j.neuroimage.2015.06.031]
67
Reinl M, Bartels A. Perception of temporal asymmetries in dynamic facial expressions. Front Psychol 2015; 6:1107. [PMID: 26300807] [PMCID: PMC4523710] [DOI: 10.3389/fpsyg.2015.01107]
Abstract
In the current study we examined whether timeline-reversals and emotional direction of dynamic facial expressions affect subjective experience of human observers. We recorded natural movies of faces that increased or decreased their expressions of fear, and played them either in the natural frame order or reversed from last to first frame (reversed timeline). This led to four conditions of increasing or decreasing fear, either following the natural or reversed temporal trajectory of facial dynamics. This 2-by-2 factorial design controlled for visual low-level properties, static visual content, and motion energy across the different factors. It allowed us to examine perceptual consequences that would occur if the timeline trajectory of facial muscle movements during the increase of an emotion are not the exact mirror of the timeline during the decrease. It additionally allowed us to study perceptual differences between increasing and decreasing emotional expressions. Perception of these time-dependent asymmetries has not yet been quantified. We found that three emotional measures, emotional intensity, artificialness of facial movement, and convincingness or plausibility of emotion portrayal, were affected by timeline-reversals as well as by the emotional direction of the facial expressions. Our results imply that natural dynamic facial expressions contain temporal asymmetries, and show that deviations from the natural timeline lead to a reduction of perceived emotional intensity and convincingness, and to an increase of perceived artificialness of the dynamic facial expression. In addition, they show that decreasing facial expressions are judged as less plausible than increasing facial expressions. Our findings are of relevance for both behavioral and neuroimaging studies, as processing and perception are influenced by temporal asymmetries.
Affiliation(s)
- Andreas Bartels
- Vision and Cognition Lab, Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
68
Two neural pathways of face processing: A critical evaluation of current models. Neurosci Biobehav Rev 2015; 55:536-46. [DOI: 10.1016/j.neubiorev.2015.06.010]
69
Abstract
The dynamic facial expressions of emotion automatically elicit multifaceted psychological activities; however, the temporal profiles and dynamic interaction patterns of brain activities remain unknown. We investigated these issues using magnetoencephalography. Participants passively observed dynamic facial expressions of fear and happiness, or dynamic mosaics. Source-reconstruction analyses utilizing functional magnetic-resonance imaging data revealed higher activation in broad regions of the bilateral occipital and temporal cortices in response to dynamic facial expressions than in response to dynamic mosaics at 150–200 ms and some later time points. The right inferior frontal gyrus exhibited higher activity for dynamic faces versus mosaics at 300–350 ms. Dynamic causal-modeling analyses revealed that dynamic faces activated the dual visual routes and visual–motor route. Superior influences of feedforward and feedback connections were identified before and after 200 ms, respectively. These results indicate that hierarchical, bidirectional neural network dynamics within a few hundred milliseconds implement the processing of dynamic facial expressions.
70
Abstract
Several neuroimaging studies have revealed that the superior temporal sulcus (STS) is highly implicated in the processing of facial motion. A limitation of these investigations, however, is that many of them utilize unnatural stimuli (e.g., morphed videos) or those which contain many confounding spatial cues. As a result, the underlying mechanisms may not be fully engaged during such perception. The aim of the current study was to build upon the existing literature by implementing highly detailed and accurate models of facial movement. Accordingly, neurologically healthy participants viewed simultaneous sequences of rigid and nonrigid motion that was retargeted onto a standard computer generated imagery face model. Their task was to discriminate between different facial motion videos in a two-alternative forced choice paradigm. Presentations varied between upright and inverted orientations. In corroboration with previous data, the perception of natural facial motion strongly activated a portion of the posterior STS. The analysis also revealed engagement of the lingual gyrus, fusiform gyrus, precentral gyrus, and cerebellum. These findings therefore suggest that the processing of dynamic facial information is supported by a network of visuomotor substrates.
Affiliation(s)
- Christine Girges
- College of Health and Life Sciences, Department of Psychology, Brunel University, London, UK
- Justin O'Brien
- College of Health and Life Sciences, Department of Psychology, Brunel University, London, UK
- Janine Spencer
- College of Health and Life Sciences, Department of Psychology, Brunel University, London, UK
71
Hillmann TE, Kempkensteffen J, Lincoln TM. Visual Attention to Threat-Related Faces and Delusion-Proneness: An Eye Tracking Study Using Dynamic Stimuli. Cognitive Therapy and Research 2015. [DOI: 10.1007/s10608-015-9699-z]
72
Reconstructing dynamic mental models of facial expressions in prosopagnosia reveals distinct representations for identity and expression. Cortex 2015; 65:50-64. [PMID: 25638352] [DOI: 10.1016/j.cortex.2014.11.015]
73
Amaral CP, Simões MA, Castelo-Branco MS. Neural signals evoked by stimuli of increasing social scene complexity are detectable at the single-trial level and right lateralized. PLoS One 2015; 10:e0121970. [PMID: 25807525] [PMCID: PMC4373781] [DOI: 10.1371/journal.pone.0121970]
Abstract
Classification of neural signals at the single-trial level and the study of their relevance in affective and cognitive neuroscience are still in their infancy. Here we investigated the neurophysiological correlates of conditions of increasing social scene complexity using 3D human models as targets of attention, which may also be important in autism research. We attempted the challenging single-trial statistical classification of EEG neural signals for detection of oddball stimuli of increasing social scene complexity. Stimuli had an oddball structure and were as follows: 1) flashed schematic eyes, 2) simple 3D faces flashed between averted and non-averted gaze (only eye position changing), 3) simple 3D faces flashed between averted and non-averted gaze (head and eye position changing), 4) an animated avatar alternating its gaze direction to the left and to the right (head and eye position), 5) an environment with 4 animated avatars, all of which change gaze and one of which is the target of attention. We found a late (> 300 ms) neurophysiological oddball correlate for all conditions irrespective of their complexity, as assessed by repeated-measures ANOVA. We attempted single-trial detection of this signal with automatic classifiers and obtained a significant balanced classification accuracy of around 79%, which is noteworthy given the amount of scene complexity. Lateralization analysis showed a specific right lateralization only for more complex realistic social scenes. In sum, complex ecological animations with social content elicit neurophysiological events which can be characterized even at the single-trial level. These signals are right lateralized. These findings pave the way for neuroscientific studies in affective neuroscience based on complex social scenes and, given the detectability at the single-trial level, suggest the feasibility of brain-computer interfaces that can be applied to social cognition disorders such as autism.
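Balanced accuracy, the metric reported for the single-trial classifier, averages per-class recall and therefore corrects for the rarity of targets in an oddball design, where raw accuracy is inflated by the frequent standard class. A minimal sketch with invented labels:

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls."""
    classes = np.unique(y_true)
    return float(np.mean([np.mean(y_pred[y_true == c] == c) for c in classes]))

# Invented oddball labels: 1 = rare target, 0 = frequent standard.
y_true = np.array([0] * 80 + [1] * 20)
y_pred = np.array([0] * 70 + [1] * 10 + [1] * 15 + [0] * 5)

print(f"raw accuracy:      {np.mean(y_true == y_pred):.2f}")
print(f"balanced accuracy: {balanced_accuracy(y_true, y_pred):.2f}")
```

Here raw accuracy (0.85) overstates performance relative to balanced accuracy (0.81) because the standard class dominates the trial count.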
Affiliation(s)
- Carlos P Amaral
- IBILI-Institute for Biomedical Imaging in Life Sciences, Faculty of Medicine, University of Coimbra, Coimbra, Portugal
- Marco A Simões
- IBILI-Institute for Biomedical Imaging in Life Sciences, Faculty of Medicine, University of Coimbra, Coimbra, Portugal
- Miguel S Castelo-Branco
- IBILI-Institute for Biomedical Imaging in Life Sciences, Faculty of Medicine, University of Coimbra, Coimbra, Portugal; ICNAS, Brain Imaging Network of Portugal, Coimbra, Portugal
74
Lander K, Butcher N. Independence of face identity and expression processing: exploring the role of motion. Front Psychol 2015; 6:255. [PMID: 25821441] [PMCID: PMC4358059] [DOI: 10.3389/fpsyg.2015.00255]
Abstract
According to the classic Bruce and Young (1986) model of face recognition, identity and emotional expression information from the face are processed in parallel and independently. Since this functional model was published, a growing body of research has challenged this viewpoint and instead supports an interdependence view. In addition, neural models of face processing emphasize differences in terms of the processing of changeable and invariant aspects of faces. This article provides a critical appraisal of this literature and discusses the role of motion in both expression and identity recognition and the intertwined nature of identity, expression and motion processing. We conclude by discussing recent advancements in this area and research questions that still need to be addressed.
Affiliation(s)
- Karen Lander
- School of Psychological Sciences, University of Manchester, Manchester, UK
- Natalie Butcher
- School of Social Sciences, Business and Law, Teesside University, Middlesbrough, UK
75
Maguinness C, Newell FN. Non-rigid, but not rigid, motion interferes with the processing of structural face information in developmental prosopagnosia. Neuropsychologia 2015; 70:281-95. [PMID: 25737056] [DOI: 10.1016/j.neuropsychologia.2015.02.038]
Abstract
There is growing evidence to suggest that facial motion is an important cue for face recognition. However, it is poorly understood whether motion is integrated with facial form information or whether it provides an independent cue to identity. To provide further insight into this issue, we compared the effect of motion on face perception in two developmental prosopagnosics and age-matched controls. Participants first learned faces presented dynamically (video), or in a sequence of static images, in which rigid (viewpoint) or non-rigid (expression) changes occurred. Immediately following learning, participants were required to match a static face image to the learned face. Test face images varied by viewpoint (Experiment 1) or expression (Experiment 2) and were learned or novel face images. We found similar performance across prosopagnosics and controls in matching facial identity across changes in viewpoint when the learned face was shown moving in a rigid manner. However, non-rigid motion interfered with face matching across changes in expression for both individuals with prosopagnosia, relative to the performance of control participants. In contrast, non-rigid motion did not differentially affect the matching of facial expressions across changes in identity for either individual with prosopagnosia (Experiment 3). Our results suggest that whilst the processing of rigid motion information of a face may be preserved in developmental prosopagnosia, non-rigid motion can specifically interfere with the representation of structural face information. Taken together, these results suggest that both form and motion cues are important in face perception and that these cues are likely integrated in the representation of facial identity.
Affiliation(s)
- Corrina Maguinness
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland.
- Fiona N Newell
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland.
76
Abstract
Advances in marker-less motion capture technology now allow the accurate replication of facial motion and deformation in computer-generated imagery (CGI). A forced-choice discrimination paradigm using such CGI facial animations showed that human observers can categorize identity solely from facial motion cues. Animations were generated from motion captures acquired during natural speech, thus eliciting both rigid (head rotations and translations) and nonrigid (expressional changes) motion. To limit interferences from individual differences in facial form, all animations shared the same appearance. Observers were required to discriminate between different videos of facial motion and between the facial motions of different people. Performance was compared to the control condition of orientation-inverted facial motion. The results show that observers are able to make accurate discriminations of identity in the absence of all cues except facial motion. A clear inversion effect in both tasks provided consistency with previous studies, supporting the configural view of human face perception. The accuracy of this motion capture technology thus allowed stimuli to be generated that closely resembled real moving faces. Future studies may wish to implement such methodology when studying human face perception.
77
Cheetham M, Suter P, Jancke L. Perceptual discrimination difficulty and familiarity in the Uncanny Valley: more like a "Happy Valley". Front Psychol 2014; 5:1219. [PMID: 25477829] [PMCID: PMC4237038] [DOI: 10.3389/fpsyg.2014.01219]
Abstract
The Uncanny Valley Hypothesis (UVH) predicts that greater difficulty perceptually discriminating between categorically ambiguous human and humanlike characters (e.g., a highly realistic robot) evokes negatively valenced (i.e., uncanny) affect. An ABX perceptual discrimination task and signal detection analysis were used to examine the profile of perceptual discrimination (PD) difficulty along the UVH's dimension of human likeness (DHL). This was represented using avatar-to-human morph continua. Rejecting the implicitly assumed profile of PD difficulty underlying the UVH's prediction, Experiment 1 showed that PD difficulty was reduced for categorically ambiguous faces but, notably, enhanced for human faces. Rejecting the UVH's predicted relationship between PD difficulty and negative affect (assessed in terms of the UVH's familiarity dimension), Experiment 2 demonstrated that greater PD difficulty correlates with more positively valenced affect. Critically, this effect was strongest for the ambiguous faces, suggesting a correlative relationship between PD difficulty and feelings of familiarity more consistent with the metaphor of a happy valley. This relationship is also consistent with a fluency amplification account instead of the hitherto proposed hedonic fluency account of affect along the DHL. Experiment 3 found no evidence that the asymmetry in the profile of PD along the DHL is attributable to a differential processing bias (cf. the other-race effect), i.e., processing avatars at a category level but human faces at an individual level. In conclusion, the present data for static faces show clear effects that, however, strongly challenge the UVH's implicitly assumed profile of PD difficulty along the DHL and the predicted relationship between this difficulty and feelings of familiarity.
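Perceptual discrimination in an ABX task is conventionally summarized with the signal detection sensitivity index d′, the difference between z-transformed hit and false-alarm rates. A minimal sketch follows; the rates are invented for illustration only.

```python
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    """Signal detection sensitivity: z(hits) - z(false alarms)."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# Invented hit/false-alarm rates for two stimulus conditions; a lower d'
# means greater perceptual discrimination difficulty.
print(f"condition A: d' = {d_prime(0.70, 0.30):.2f}")
print(f"condition B: d' = {d_prime(0.90, 0.10):.2f}")
```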
Affiliation(s)
- Marcus Cheetham
- Department of Neuropsychology, University of Zurich, Zurich, Switzerland
- Department of Psychology, Nungin University, Seoul, South Korea
- Pascal Suter
- Department of Neuropsychology, University of Zurich, Zurich, Switzerland
- Lutz Jancke
- Department of Neuropsychology, University of Zurich, Zurich, Switzerland
78
Abstract
Visual categorization of complex, natural stimuli has been studied for some time in human and nonhuman primates. Recent interest in the rodent as a model for visual perception, including higher-level functional specialization, leads to the question of how rodents would perform on a categorization task using natural stimuli. To answer this question, rats were trained in a two-alternative forced choice task to discriminate movies containing rats from movies containing other objects and from scrambled movies (ordinate-level categorization). Subsequently, transfer to novel, previously unseen stimuli was tested, followed by a series of control probes. The results show that the animals are capable of acquiring a decision rule by abstracting common features from natural movies to generalize categorization to new stimuli. Control probes demonstrate that they did not use single low-level features, such as motion energy or (local) luminance. Significant generalization was even present with stationary snapshots from untrained movies. The variability within and between training and test stimuli, the complexity of natural movies, and the control experiments and analyses all suggest that a more high-level rule based on more complex stimulus features than local luminance-based cues was used to classify the novel stimuli. In conclusion, natural stimuli can be used to probe ordinate-level categorization in rats.
79
Reinl M, Bartels A. Face processing regions are sensitive to distinct aspects of temporal sequence in facial dynamics. Neuroimage 2014; 102 Pt 2:407-15. [DOI: 10.1016/j.neuroimage.2014.08.011]
Abstract
Facial movement conveys important information for social interactions, yet its neural processing is poorly understood. Computational models propose that shape- and temporal sequence sensitive mechanisms interact in processing dynamic faces. While face processing regions are known to respond to facial movement, their sensitivity to particular temporal sequences has barely been studied. Here we used fMRI to examine the sensitivity of human face-processing regions to two aspects of directionality in facial movement trajectories. We presented genuine movie recordings of increasing and decreasing fear expressions, each of which were played in natural or reversed frame order. This two-by-two factorial design matched low-level visual properties, static content and motion energy within each factor, emotion-direction (increasing or decreasing emotion) and timeline (natural versus artificial). The results showed sensitivity for emotion-direction in FFA, which was timeline-dependent as it only occurred within the natural frame order, and sensitivity to timeline in the STS, which was emotion-direction-dependent as it only occurred for decreased fear. The occipital face area (OFA) was sensitive to the factor timeline. These findings reveal interacting temporal sequence sensitive mechanisms that are responsive to both ecological meaning and to prototypical unfolding of facial dynamics. These mechanisms are temporally directional, provide socially relevant information regarding emotional state or naturalness of behavior, and agree with predictions from modeling and predictive coding theory.
Affiliation(s)
- Maren Reinl
- Vision and Cognition Lab, Centre for Integrative Neuroscience, University of Tübingen, and Max Planck Institute for Biological Cybernetics, Tübingen 72076, Germany
- Andreas Bartels
- Vision and Cognition Lab, Centre for Integrative Neuroscience, University of Tübingen, and Max Planck Institute for Biological Cybernetics, Tübingen 72076, Germany.
80
Miki K, Kakigi R. Magnetoencephalographic study on facial movements. Front Hum Neurosci 2014; 8:550. [PMID: 25120453] [PMCID: PMC4114328] [DOI: 10.3389/fnhum.2014.00550]
Abstract
In this review, we introduce three of our studies that focused on facial movements. In the first study, we examined the temporal characteristics of neural responses elicited by viewing mouth movements, and assessed differences between the responses to mouth opening and closing movements and an averting eyes condition. Our results showed that the occipitotemporal area, the human MT/V5 homologue, was active in the perception of both mouth and eye motions. Viewing mouth and eye movements did not elicit significantly different activity in the occipitotemporal area, which indicated that perception of the movement of facial parts may be processed in the same manner, and this is different from motion in general. In the second study, we investigated whether early activity in the occipitotemporal region evoked by eye movements was influenced by the facial contour and/or features such as the mouth. Our results revealed specific information processing for eye movements in the occipitotemporal region, and this activity was significantly influenced by whether movements appeared with the facial contour and/or features, in other words, whether the eyes moved, even if the movement itself was the same. In the third study, we examined the effects of inverting the facial contour (hair and chin) and features (eyes, nose, and mouth) on processing for static and dynamic face perception. Our results showed the following: (1) In static face perception, activity in the right fusiform area was affected more by the inversion of features while that in the left fusiform area was affected more by a disruption in the spatial relationship between the contour and features; and (2) In dynamic face perception, activity in the right occipitotemporal area was affected by the inversion of the facial contour.
Affiliation(s)
- Kensaku Miki
- Department of Integrative Physiology, National Institute for Physiological Sciences, Okazaki, Japan; Department of Physiological Sciences, School of Life Science, The Graduate University for Advanced Studies (SOKENDAI), Hayama, Kanagawa, Japan
- Ryusuke Kakigi
- Department of Integrative Physiology, National Institute for Physiological Sciences, Okazaki, Japan; Department of Physiological Sciences, School of Life Science, The Graduate University for Advanced Studies (SOKENDAI), Hayama, Kanagawa, Japan
81
Vermaercke B, Gerich FJ, Ytebrouck E, Arckens L, Op de Beeck HP, Van den Bergh G. Functional specialization in rat occipital and temporal visual cortex. J Neurophysiol 2014; 112:1963-83. [PMID: 24990566] [DOI: 10.1152/jn.00737.2013]
Abstract
Recent studies have revealed a surprising degree of functional specialization in rodent visual cortex. Anatomically, suggestions have been made about the existence of hierarchical pathways with similarities to the ventral and dorsal pathways in primates. Here we aimed to characterize some important functional properties in part of the supposed "ventral" pathway in rats. We investigated the functional properties along a progression of five visual areas in awake rats, from primary visual cortex (V1) over lateromedial (LM), latero-intermediate (LI), and laterolateral (LL) areas up to the newly found lateral occipito-temporal cortex (TO). Response latency increased >20 ms from areas V1/LM/LI to areas LL and TO. Orientation and direction selectivity for the used grating patterns increased gradually from V1 to TO. Overall responsiveness and selectivity to shape stimuli decreased from V1 to TO and was increasingly dependent upon shape motion. Neural similarity for shapes could be accounted for by a simple computational model in V1, but not in the other areas. Across areas, we find a gradual change in which stimulus pairs are most discriminable. Finally, tolerance to position changes increased toward TO. These findings provide unique information about possible commonalities and differences between rodents and primates in hierarchical cortical processing.
Affiliation(s)
- Ben Vermaercke
- Laboratory of Biological Psychology, KU Leuven, Leuven, Belgium
- Florian J Gerich
- Laboratory of Biological Psychology, KU Leuven, Leuven, Belgium
- Ellen Ytebrouck
- Laboratory of Neuroplasticity and Neuroproteomics, KU Leuven, Leuven, Belgium
- Lutgarde Arckens
- Laboratory of Neuroplasticity and Neuroproteomics, KU Leuven, Leuven, Belgium
82
83
Xiao NG, Perrotta S, Quinn PC, Wang Z, Sun YHP, Lee K. On the facilitative effects of face motion on face recognition and its development. Front Psychol 2014; 5:633. [PMID: 25009517] [PMCID: PMC4067594] [DOI: 10.3389/fpsyg.2014.00633]
Abstract
For the past century, researchers have extensively studied human face processing and its development. These studies have advanced our understanding of not only face processing, but also visual processing in general. However, most of what we know about face processing was investigated using static face images as stimuli. Therefore, an important question arises: to what extent does our understanding of static face processing generalize to face processing in real-life contexts in which faces are mostly moving? The present article addresses this question by examining recent studies on moving face processing to uncover the influence of facial movements on face processing and its development. First, we describe evidence on the facilitative effects of facial movements on face recognition and two related theoretical hypotheses: the supplementary information hypothesis and the representation enhancement hypothesis. We then highlight several recent studies suggesting that facial movements optimize face processing by activating specific face processing strategies that accommodate to task requirements. Lastly, we review the influence of facial movements on the development of face processing in the first year of life. We focus on infants' sensitivity to facial movements and explore the facilitative effects of facial movements on infants' face recognition performance. We conclude by outlining several future directions to investigate moving face processing and emphasize the importance of including dynamic aspects of facial information to further understand face processing in real-life contexts.
Affiliation(s)
- Naiqi G. Xiao
- Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China
- Applied Psychology and Human Development, University of Toronto, Toronto, ON, Canada
- Steve Perrotta
- Applied Psychology and Human Development, University of Toronto, Toronto, ON, Canada
- Paul C. Quinn
- Department of Psychology, University of Delaware, Newark, DE, USA
- Zhe Wang
- Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China
- Yu-Hao P. Sun
- Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China
- Kang Lee
- Applied Psychology and Human Development, University of Toronto, Toronto, ON, Canada
84
Collins JA, Olson IR. Beyond the FFA: The role of the ventral anterior temporal lobes in face processing. Neuropsychologia 2014; 61:65-79. [PMID: 24937188 DOI: 10.1016/j.neuropsychologia.2014.06.005] [Citation(s) in RCA: 137] [Impact Index Per Article: 13.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2013] [Revised: 05/19/2014] [Accepted: 06/08/2014] [Indexed: 11/17/2022]
Abstract
Extensive research has supported the existence of a specialized face-processing network that is distinct from the visual processing areas used for general object recognition. The majority of this work has aimed at characterizing the response properties of the fusiform face area (FFA) and the occipital face area (OFA), which together are thought to constitute the core network of brain areas responsible for facial identification. Although accumulating evidence has shown that face-selective patches in the ventral anterior temporal lobes (vATLs) are interconnected with the FFA and OFA, and that they play a role in facial identification, the relative contribution of these brain areas to the core face-processing network has remained unarticulated. Here we review recent research critically implicating the vATLs in face perception and memory. We propose that current models of face processing should be revised such that the ventral anterior temporal lobes serve a central role in the visual face-processing network. We speculate that a hierarchically organized system of face-processing areas extends bilaterally from the inferior occipital gyri to the vATLs, with facial representations becoming increasingly complex and abstracted from low-level perceptual features as they move forward along this network. The anterior temporal face areas may serve as the apex of this hierarchy, instantiating the final stages of face recognition. We further argue that the anterior temporal face areas are ideally suited to serve as an interface between face perception and face memory, linking perceptual representations of individual identity with person-specific semantic knowledge.
Affiliation(s)
- Jessica A Collins
- Department of Psychology, Temple University, 1701 North 13th Street, Philadelphia, PA 19122, USA
- Ingrid R Olson
- Department of Psychology, Temple University, 1701 North 13th Street, Philadelphia, PA 19122, USA
85
Riediger M, Studtmann M, Westphal A, Rauers A, Weber H. No smile like another: adult age differences in identifying emotions that accompany smiles. Front Psychol 2014; 5:480. [PMID: 24904493 PMCID: PMC4034151 DOI: 10.3389/fpsyg.2014.00480] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2013] [Accepted: 05/02/2014] [Indexed: 11/25/2022] Open
Abstract
People smile in various emotional contexts: for example, when they are amused, when they are angry, or simply to be polite. We investigated whether younger and older adults differ in how well they can identify the emotional experiences accompanying smile expressions, and whether the age of the smiling person plays a role in this respect. To this end, we produced 80 video episodes of three types of smile expressions: positive-affect smiles were displayed spontaneously by target persons as they watched amusing film clips and cartoons; negative-affect smiles were displayed spontaneously by target persons during an interaction in which they were being unfairly accused; and affectively neutral smiles were posed upon request. Differences in the accompanying emotional experiences were validated by target persons' self-reports. These smile videos served as experimental stimuli in two studies with younger and older adult participants. In Study 1, older participants were less likely to attribute positive emotions to smiles, and more likely to assume that a smile was posed. Furthermore, younger participants were more accurate than older participants at identifying the emotional experiences accompanying smiles. In Study 2, both younger and older participants attributed positive emotions more frequently to smiles shown by older as compared to younger target persons, but older participants did so less frequently than younger participants. Again, younger participants were more accurate than older participants in identifying the emotional experiences accompanying smiles, but this effect was attenuated for older target persons. Older participants could better identify the emotional state accompanying smiles shown by older than by younger target persons. Taken together, these findings indicate an age-related decline in the ability to decipher the emotional meaning of smiles presented without context, which, however, is attenuated when the smiling person is also an older adult.
Affiliation(s)
- Michaela Riediger
- Max Planck Research Group "Affect Across the Lifespan," Max Planck Institute for Human Development, Berlin, Germany
- Markus Studtmann
- Max Planck Research Group "Affect Across the Lifespan," Max Planck Institute for Human Development, Berlin, Germany
- Andrea Westphal
- Max Planck Research Group "Affect Across the Lifespan," Max Planck Institute for Human Development, Berlin, Germany
- Antje Rauers
- Max Planck Research Group "Affect Across the Lifespan," Max Planck Institute for Human Development, Berlin, Germany
- Hannelore Weber
- Institute for Psychology, University of Greifswald, Greifswald, Germany
86
Furl N, Henson RN, Friston KJ, Calder AJ. Network Interactions Explain Sensitivity to Dynamic Faces in the Superior Temporal Sulcus. Cereb Cortex 2014; 25:2876-82. [PMID: 24770707 PMCID: PMC4537434 DOI: 10.1093/cercor/bhu083] [Citation(s) in RCA: 38] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
The superior temporal sulcus (STS) in the human and monkey is sensitive to the motion of complex forms such as facial and bodily actions. We used functional magnetic resonance imaging (fMRI) to explore network-level explanations for how the form and motion information in dynamic facial expressions might be combined in the human STS. Ventral occipitotemporal areas selective for facial form were localized in occipital and fusiform face areas (OFA and FFA), and motion sensitivity was localized in the more dorsal temporal area V5. We then tested various connectivity models that modeled communication between the ventral form and dorsal motion pathways. We show that facial form information modulated transmission of motion information from V5 to the STS, and that this face-selective modulation likely originated in OFA. This finding shows that form-selective motion sensitivity in the STS can be explained in terms of modulation of gain control on information flow in the motion pathway, and provides a substantial constraint for theories of the perception of faces and biological motion.
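The connectivity analysis described here rests on the bilinear state equation of dynamic causal modelling (DCM), dx/dt = (A + u_mod * B) x + C u. As a rough illustration (not the study's SPM implementation), the sketch below simulates that equation with an illustrative region ordering (OFA, V5, STS) and hand-picked coupling values, so the "form modulates V5-to-STS transmission" hypothesis corresponds to a single nonzero entry of B.

# Minimal sketch of the bilinear DCM state equation; all coupling values,
# region order, and names are illustrative assumptions, not estimates.
import numpy as np

A = np.array([[-1.0,  0.0,  0.0],   # intrinsic coupling: self-decay plus
              [ 0.4, -1.0,  0.0],   # feedforward OFA->V5 and V5->STS
              [ 0.0,  0.5, -1.0]])  # (rows = targets, columns = sources)
B = np.zeros((3, 3))
B[2, 1] = 0.6                        # facial form gates the V5->STS connection
C = np.array([1.0, 0.0, 0.0])        # visual input drives OFA

def simulate(u_drive, u_mod, dt=0.01):
    """Euler-integrate the neural states given input time courses."""
    x = np.zeros(3)
    out = []
    for ud, um in zip(u_drive, u_mod):
        x = x + dt * ((A + um * B) @ x + C * ud)
        out.append(x.copy())
    return np.array(out)

# toy usage: faces shown in 1 s blocks; form information present throughout
t = np.arange(0, 10, 0.01)
faces_on = ((t % 2) < 1).astype(float)
states = simulate(faces_on, faces_on)   # STS trace depends on the B[2, 1] gate

Model comparison in this framework amounts to scoring alternative placements of the nonzero B entries against the measured data; the winning model here placed the modulation on the V5-to-STS connection.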
Affiliation(s)
- Nicholas Furl
- MRC Cognition and Brain Sciences Unit, Cambridge CB2 7EF, UK
- Karl J Friston
- Wellcome Centre for Imaging Neuroscience, University College London, 12 Queen Square, London WC1N 3BG, UK
- Andrew J Calder
- MRC Cognition and Brain Sciences Unit, Cambridge CB2 7EF, UK
87
Rossion B. Understanding individual face discrimination by means of fast periodic visual stimulation. Exp Brain Res 2014; 232:1599-621. [PMID: 24728131 DOI: 10.1007/s00221-014-3934-9] [Citation(s) in RCA: 88] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2013] [Accepted: 03/24/2014] [Indexed: 11/30/2022]
Abstract
This paper reviews a recently developed fast periodic visual stimulation (FPVS) approach that has enabled significant progress in understanding visual discrimination of individual faces. Displaying pictures of faces at a periodic rate leads to a high signal-to-noise ratio (SNR) response in the human electroencephalogram at the exact frequency of stimulation, a so-called steady-state visual evoked potential (SSVEP; Regan in Electroencephalogr Clin Neurophysiol 20:238-248, 1966). For fast periodic rates, i.e., between 3 and 9 Hz, this response is reduced if the exact same face identity is repeated compared to the presentation of different face identities, the largest difference being observed over the right occipito-temporal cortex. A 6-Hz stimulation rate (cycle duration of ~170 ms) provides the largest difference between different and repeated faces, as also evidenced in face-selective areas of the ventral occipito-temporal cortex in functional magnetic resonance imaging. This high-level discrimination response is reduced following inversion and contrast-reversal of the faces and can be isolated without subtraction thanks to a fast periodic oddball paradigm. Overall, FPVS provides a response that is objective (i.e., occurring at an experimentally defined frequency), implicit, high in SNR, and directly quantifiable in a short amount of time. Although the approach is particularly appealing for understanding face perception, it can be generalized to the study of visual discrimination of complex visual patterns such as objects and visual scenes. These advantages also make the approach particularly well-suited to investigating these functions in populations who cannot provide overt behavioral responses and can only be tested for short durations, such as infants, young children, and clinical populations.
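The quantification logic described above, a response at exactly the stimulation frequency with high SNR, can be illustrated with a minimal sketch: estimate SNR at 6 Hz as the FFT amplitude at the target bin divided by the mean amplitude of the neighbouring bins. The sampling rate, bin counts, and the name fpvs_snr are illustrative assumptions, not taken from the reviewed studies.

# Minimal sketch of FPVS quantification (illustrative parameters).
import numpy as np

def fpvs_snr(eeg, fs, f_stim=6.0, n_neighbours=10, skip=1):
    amp = np.abs(np.fft.rfft(eeg)) / len(eeg)           # amplitude spectrum
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    target = int(np.argmin(np.abs(freqs - f_stim)))     # bin closest to 6 Hz
    lo = list(range(target - skip - n_neighbours, target - skip))
    hi = list(range(target + skip + 1, target + skip + 1 + n_neighbours))
    return amp[target] / amp[lo + hi].mean()            # signal over noise

# toy usage: a 6 Hz component buried in noise yields SNR >> 1
fs = 512
t = np.arange(0, 20, 1.0 / fs)
eeg = 0.5 * np.sin(2 * np.pi * 6.0 * t) + np.random.randn(len(t))
print(fpvs_snr(eeg, fs))

Because the response is confined to a known frequency bin, no subtraction between conditions is needed to demonstrate it, which is what makes the measure objective in the sense used above.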
Affiliation(s)
- Bruno Rossion
- Psychological Sciences Research Institute (IPSY) and Institute of Neuroscience (IoNS), University of Louvain (UCL), Place du Cardinal Mercier 10, 1348 Louvain-la-Neuve, Belgium
88
Luo S, Shi Z, Yang X, Wang X, Han S. Reminders of mortality decrease midcingulate activity in response to others' suffering. Soc Cogn Affect Neurosci 2014; 9:477-86. [PMID: 23327932 PMCID: PMC3989130 DOI: 10.1093/scan/nst010] [Citation(s) in RCA: 36] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2012] [Accepted: 01/09/2013] [Indexed: 12/30/2022] Open
Abstract
Reminders of mortality influence human social cognition, but whether and how they affect the brain activity underlying social cognition remains unclear. To test whether increasing mortality salience modulates neural responses to others' suffering, we used functional magnetic resonance imaging to scan healthy adults while they viewed video clips showing others in pain. One group of participants was primed to increase mortality salience; another group was primed with negative affect (fear/anxiety). We found that perceiving painful vs. non-painful stimuli in the pre-priming session activated the midcingulate/dorsal medial prefrontal cortex (MCC/dMPFC), bilateral anterior insula/inferior frontal cortex, bilateral secondary somatosensory cortex, and left middle temporal gyrus. However, MCC/dMPFC activity in response to perceived pain in others was significantly decreased in the post-priming session by the mortality salience priming, but was not influenced by the negative affect priming. Moreover, the subjective fear of death induced by the priming procedures mediated the change in MCC/dMPFC activity across the priming procedures. Subjective fear of death also moderated the co-variation of MCC/dMPFC and left insular activity during the perception of others in pain. Our findings indicate that reminders of mortality decrease neural responses to others' suffering, and that this effect is mediated by the subjective fear of death.
Affiliation(s)
- Siyang Luo
- Department of Psychology, Peking University, 5 Yiheyuan Road, Beijing 100871, P. R. China
89
Gentile F, Rossion B. Temporal frequency tuning of cortical face-sensitive areas for individual face perception. Neuroimage 2014; 90:256-65. [PMID: 24321556 DOI: 10.1016/j.neuroimage.2013.11.053] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2013] [Revised: 11/21/2013] [Accepted: 11/25/2013] [Indexed: 11/16/2022] Open
Affiliation(s)
- Francesco Gentile
- Institute of Research in Psychology (IPSY), University of Louvain, Belgium; Institute of Neuroscience (IoNS), Brussels, Belgium; Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, The Netherlands; Maastricht Brain Imaging Center (M-BIC), Maastricht University, The Netherlands
- Bruno Rossion
- Institute of Research in Psychology (IPSY), University of Louvain, Belgium; Institute of Neuroscience (IoNS), Brussels, Belgium; Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, The Netherlands; Maastricht Brain Imaging Center (M-BIC), Maastricht University, The Netherlands
90
Kaufman J, Johnston PJ. Facial motion engages predictive visual mechanisms. PLoS One 2014; 9:e91038. [PMID: 24632821 PMCID: PMC3954613 DOI: 10.1371/journal.pone.0091038] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2013] [Accepted: 02/10/2014] [Indexed: 11/18/2022] Open
Abstract
We employed a novel cuing paradigm to assess whether dynamically versus statically presented facial expressions differentially engage predictive visual mechanisms. Participants were presented with a cueing stimulus that was either a static depiction of a low-intensity expressed emotion, or a dynamic sequence evolving from a neutral expression to the low-intensity expressed emotion. Following this cue and a backwards mask, participants were presented with a probe face that displayed either the same emotion (congruent) or a different emotion (incongruent) with respect to the cue, though expressed at high intensity. The probe face had either the same identity as the cued face or a different one. The participants' task was to indicate whether or not the probe face showed the same emotion as the cue. Dynamic cues and same-identity cues both led to a greater tendency towards congruent responding, although these factors did not interact. Facial motion also led to faster responding when the probe face was emotionally congruent with the cue. We interpret these results as indicating that dynamic facial displays preferentially invoke predictive visual mechanisms, and suggest that motoric simulation may provide an important basis for the generation of predictions in the visual system.
Affiliation(s)
- Jordy Kaufman
- Swinburne University of Technology, Hawthorn, Victoria, Australia
91
Stoesz BM, Jakobson LS. Developmental changes in attention to faces and bodies in static and dynamic scenes. Front Psychol 2014; 5:193. [PMID: 24639664 PMCID: PMC3944146 DOI: 10.3389/fpsyg.2014.00193] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2013] [Accepted: 02/18/2014] [Indexed: 11/13/2022] Open
Abstract
Typically developing individuals show a strong visual preference for faces and face-like stimuli; however, this may come at the expense of attending to bodies or to other aspects of a scene. The primary goal of the present study was to provide additional insight into the development of the attentional mechanisms that underlie the perception of real people in naturalistic scenes. We examined the looking behaviors of typical children, adolescents, and young adults as they viewed static and dynamic scenes depicting one or more people. Overall, participants showed a bias to attend to faces more than to other parts of the scenes. Adding motion cues led to a reduction in the number, but an increase in the average duration, of face fixations in single-character scenes. When multiple characters appeared in a scene, motion-related effects were attenuated and participants shifted their gaze from faces to bodies, or made off-screen glances. Children showed the largest effects related to the introduction of motion cues or additional characters, suggesting that they find dynamic faces difficult to process and are especially prone to look away from faces when viewing complex social scenes, a strategy that could reduce the cognitive and affective load imposed by having to divide one's attention between multiple faces. Our findings provide new insights into the typical development of social attention during natural scene viewing, and lay the foundation for future work examining gaze behaviors in typical and atypical development.
Affiliation(s)
- Brenda M. Stoesz
- Department of Psychology, University of Manitoba, Winnipeg, MB, Canada
- Lorna S. Jakobson
- Department of Psychology, University of Manitoba, Winnipeg, MB, Canada
92
Abstract
The visual cortex is sensitive to emotional stimuli. This sensitivity is typically assumed to arise when the amygdala modulates visual cortex via backward connections. Using human fMRI, we compared dynamic causal models of the connectivity underlying sensitivity to fearful faces. This model comparison tested whether the amygdala modulates distinct cortical areas depending on whether faces are presented dynamically or statically. The ventral temporal fusiform face area showed sensitivity to fearful expressions in static faces. For dynamic faces, however, we found fear sensitivity in dorsal motion-sensitive areas within hMT+/V5 and the superior temporal sulcus. The model with the greatest evidence included connections from the amygdala to dorsal and ventral temporal areas, modulated by dynamic and static fear, respectively. According to this functional architecture, the amygdala could enhance the encoding of fearful expression movements from video and of the form of fearful expressions from static images. The amygdala may therefore optimize the visual encoding of socially charged and salient information.
93
Leeds DD, Seibert DA, Pyles JA, Tarr MJ. Comparing visual representations across human fMRI and computational vision. J Vis 2013; 13:25. [PMID: 24273227 PMCID: PMC3839261 DOI: 10.1167/13.13.25] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2013] [Accepted: 09/16/2013] [Indexed: 11/24/2022] Open
Abstract
Feedforward visual object perception recruits a cortical network that is assumed to be hierarchical, progressing from basic visual features to complete object representations. However, the nature of the intermediate features related to this transformation remains poorly understood. Here, we explore how well different computer vision recognition models account for neural object encoding across the human cortical visual pathway as measured using fMRI. These neural data, collected during the viewing of 60 images of real-world objects, were analyzed with a searchlight procedure as in Kriegeskorte, Goebel, and Bandettini (2006): Within each searchlight sphere, the obtained patterns of neural activity for all 60 objects were compared to model responses for each computer recognition algorithm using representational dissimilarity analysis (Kriegeskorte et al., 2008). Although each of the computer vision methods significantly accounted for some of the neural data, among the different models, the scale invariant feature transform (Lowe, 2004), encoding local visual properties gathered from "interest points," was best able to accurately and consistently account for stimulus representations within the ventral pathway. More generally, when present, significance was observed in regions of the ventral-temporal cortex associated with intermediate-level object perception. Differences in model effectiveness and the neural location of significant matches may be attributable to the fact that each model implements a different featural basis for representing objects (e.g., more holistic or more parts-based). Overall, we conclude that well-known computer vision recognition systems may serve as viable proxies for theories of intermediate visual object representation.
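The core computation inside each searchlight sphere, comparing a neural dissimilarity structure with a model's dissimilarity structure, can be sketched in a few lines. The array shapes, variable names, and the choice of 1 - Pearson r as the dissimilarity metric are illustrative assumptions.

# Minimal sketch of the per-sphere representational dissimilarity comparison.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Condensed condition-by-condition dissimilarities (1 - Pearson r)."""
    return pdist(patterns, metric="correlation")

def model_fit(neural_patterns, model_features):
    """Spearman rank correlation between neural and model RDMs."""
    rho, _ = spearmanr(rdm(neural_patterns), rdm(model_features))
    return rho

# toy usage: 60 objects x 123 voxels (one sphere) vs. 60 x 500 model features
neural = np.random.randn(60, 123)
model = np.random.randn(60, 500)
print(model_fit(neural, model))

Rank correlation is the conventional choice here because neural and model dissimilarities need not be linearly related; the searchlight then maps where in cortex each computer vision model's fit is significant.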
Affiliation(s)
- Daniel D. Leeds
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA, USA
- Department of Computer and Information Science, Fordham University, Bronx, NY, USA
- Darren A. Seibert
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- John A. Pyles
- Center for the Neural Basis of Cognition and Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA
- Michael J. Tarr
- Center for the Neural Basis of Cognition and Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA
94
Kohls G, Perino MT, Taylor JM, Madva EN, Cayless SJ, Troiani V, Price E, Faja S, Herrington JD, Schultz RT. The nucleus accumbens is involved in both the pursuit of social reward and the avoidance of social punishment. Neuropsychologia 2013; 51:2062-9. [PMID: 23911778 DOI: 10.1016/j.neuropsychologia.2013.07.020] [Citation(s) in RCA: 105] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2013] [Revised: 07/16/2013] [Accepted: 07/24/2013] [Indexed: 10/26/2022]
Abstract
Human social motivation is characterized by the pursuit of social reward and the avoidance of social punishment. The ventral striatum/nucleus accumbens (VS/Nacc), in particular, has been implicated in the reward component of social motivation, i.e., the 'wanting' of social incentives such as approval. However, it is unclear to what extent the VS/Nacc is involved in avoiding social punishment such as disapproval, where successful avoidance is itself a pleasant outcome. We therefore conducted an event-related functional magnetic resonance imaging (fMRI) study using a social incentive delay task with dynamic video stimuli, rather than static pictures, as social incentives, in order to examine participants' motivation for social reward gain and social punishment avoidance. As predicted, the anticipation of avoidable social punishment (i.e., disapproval) recruited the VS/Nacc in a manner similar to the VS/Nacc activation observed during the anticipation of social reward gain (i.e., approval). Stronger VS/Nacc activity was accompanied by faster reaction times to obtain the desired outcomes. These data support the assumption that dynamic social incentives elicit robust VS/Nacc activity, which likely reflects motivation to obtain social reward and to avoid social punishment. Clinical implications regarding the involvement of the VS/Nacc in social motivation dysfunction in autism and social phobia are discussed.
Affiliation(s)
- Gregor Kohls
- Center for Autism Research, The Children's Hospital of Philadelphia, PA, USA; Child Neuropsychology Section, Department of Child and Adolescent Psychiatry and Psychotherapy, RWTH Aachen University, Aachen, Germany
95
Iidaka T. Role of the fusiform gyrus and superior temporal sulcus in face perception and recognition: An empirical review. Jpn Psychol Res 2013. [DOI: 10.1111/jpr.12018] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
96
Abstract
Facial motion carries essential information about other people's emotions and intentions. Most previous studies have suggested that facial motion is mainly processed in the superior temporal sulcus (STS), but several recent studies have also shown involvement of ventral temporal face-sensitive regions. Until now, however, it has not been known whether the increased response to facial motion is due to an increased amount of static information in the stimulus, to the deformation of the face over time, or to increased attentional demands. We presented nonrigidly moving faces and control stimuli to participants performing a demanding task unrelated to the face stimuli. We manipulated the amount of static information by using movies with different frame rates. The fluidity of the motion was manipulated by presenting movies with frames either in the order in which they were recorded or in scrambled order. Results confirm higher activation for moving compared with static faces in the STS and, under certain conditions, in ventral temporal face-sensitive regions. Activation was maximal at a frame rate of 12.5 Hz and smaller for scrambled movies. These results indicate that both the amount of static information and fluid facial motion per se are important factors in the processing of dynamic faces.
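The two stimulus manipulations described above, varying the amount of static information via frame rate and disrupting fluid motion via frame scrambling, reduce to simple array operations. A minimal sketch with illustrative frame rates and names follows; it is not the study's stimulus-generation code.

# Minimal sketch of the frame rate and scrambling manipulations.
import numpy as np

def subsample(frames, source_hz, target_hz):
    """Keep every k-th frame so the movie plays at roughly target_hz."""
    step = int(round(source_hz / target_hz))
    return frames[::step]

def scramble(frames, seed=0):
    """Same frames in random order: identical static content, no fluid motion."""
    rng = np.random.default_rng(seed)
    return frames[rng.permutation(len(frames))]

movie = np.random.rand(50, 128, 128)          # stand-in for a 25 Hz face movie
ordered = subsample(movie, source_hz=25, target_hz=12.5)
scrambled = scramble(ordered)

Because scrambling preserves every frame while destroying their temporal order, any response difference between ordered and scrambled movies isolates the contribution of fluid motion from that of static image content.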
Affiliation(s)
- Johannes Schultz
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
97
Pyles JA, Verstynen TD, Schneider W, Tarr MJ. Explicating the face perception network with white matter connectivity. PLoS One 2013; 8:e61611. [PMID: 23630602 PMCID: PMC3632522 DOI: 10.1371/journal.pone.0061611] [Citation(s) in RCA: 100] [Impact Index Per Article: 9.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2013] [Accepted: 03/11/2013] [Indexed: 11/29/2022] Open
Abstract
A network of multiple brain regions is recruited in face perception. Our understanding of the functional properties of this network can be facilitated by explicating the structural white matter connections that exist between its functional nodes. We accomplished this using functional MRI (fMRI) in combination with fiber tractography on high angular resolution diffusion weighted imaging data. We identified the three nodes of the core face network: the "occipital face area" (OFA), the "fusiform face area" (mid-fusiform gyrus or mFus), and the superior temporal sulcus (STS). Additionally, we identified a region of the anterior temporal lobe (aIT) implicated as important for face perception. Our data suggest that the OFA can be further divided into multiple anatomically distinct clusters, a partitioning consistent with several recent neuroimaging results. More generally, structural white matter connectivity within this network revealed: 1) connectivity between aIT and mFus, and between aIT and occipital regions, consistent with studies implicating this posterior-to-anterior pathway as critical to normal face processing; 2) strong connectivity between mFus and each of the occipital face-selective regions, suggesting that these three areas may subserve different functional roles; 3) almost no connectivity between STS and mFus, or between STS and the other face-selective regions. Overall, our findings suggest a re-evaluation of the "core" face network with respect to which functional areas are or are not included in it.
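A common way to relate tractography output to functionally defined nodes, and presumably the kind of computation underlying connectivity claims like those above, is to count streamlines passing through pairs of ROI masks. The sketch below is a generic illustration with synthetic inputs, not the study's pipeline; all names are assumptions.

# Generic sketch: count streamlines linking two boolean ROI masks.
# Streamlines are (N, 3) arrays of voxel coordinates; data are synthetic.
import numpy as np

def passes_through(streamline, mask):
    """True if any point of the streamline falls inside the 3D mask."""
    idx = np.round(streamline).astype(int)
    inside = (idx >= 0).all(axis=1) & (idx < mask.shape).all(axis=1)
    idx = idx[inside]
    return mask[idx[:, 0], idx[:, 1], idx[:, 2]].any()

def count_connecting(streamlines, roi_a, roi_b):
    """Streamline count linking two ROIs (e.g., OFA and mFus masks)."""
    return sum(passes_through(s, roi_a) and passes_through(s, roi_b)
               for s in streamlines)

# toy usage
rng = np.random.default_rng(0)
roi_a = np.zeros((64, 64, 64), bool); roi_a[10:14, 10:14, 10:14] = True
roi_b = np.zeros((64, 64, 64), bool); roi_b[50:54, 50:54, 50:54] = True
tracks = [np.linspace([12, 12, 12], [52, 52, 52], 40) + rng.normal(0, 0.5, (40, 3))
          for _ in range(100)]
print(count_connecting(tracks, roi_a, roi_b))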
Affiliation(s)
- John A Pyles
- Center for the Neural Basis of Cognition, Carnegie Mellon University and University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
98
Ceccarini F, Caudek C. Anger superiority effect: The importance of dynamic emotional facial expressions. Vis Cogn 2013. [DOI: 10.1080/13506285.2013.807901] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
99
Dynamic and static facial expressions decoded from motion-sensitive areas in the macaque monkey. J Neurosci 2012; 32:15952-62. [PMID: 23136433 DOI: 10.1523/jneurosci.1992-12.2012] [Citation(s) in RCA: 56] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Humans adeptly use visual motion to recognize socially relevant facial information. The macaque provides a model visual system for studying neural coding of expression movements, as its superior temporal sulcus (STS) possesses brain areas selective for faces and areas sensitive to visual motion. We used functional magnetic resonance imaging and facial stimuli to localize motion-sensitive areas [motion in faces (Mf) areas], which responded more to dynamic faces compared with static faces, and face-selective areas, which responded selectively to faces compared with objects and places. Using multivariate analysis, we found that information about both dynamic and static facial expressions could be robustly decoded from Mf areas. By contrast, face-selective areas exhibited relatively less facial expression information. Classifiers trained with expressions from one motion type (dynamic or static) showed poor generalization to the other motion type, suggesting that Mf areas employ separate and nonconfusable neural codes for dynamic and static presentations of the same expressions. We also show that some of the motion sensitivity elicited by facial stimuli was not specific to faces but could also be elicited by moving dots, particularly in fundus of the superior temporal and middle superior temporal polysensory/lower superior temporal areas, confirming their already well established low-level motion sensitivity. A different pattern was found in anterior STS, which responded more to dynamic than to static faces but was not sensitive to dot motion. Overall, we show that emotional expressions are mostly represented outside of face-selective cortex, in areas sensitive to motion. These regions may play a fundamental role in enhancing recognition of facial expression despite the complex stimulus changes associated with motion.
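The generalization test described above, training a classifier on expressions of one motion type and testing on the other, can be sketched with a standard linear classifier. Data shapes, labels, and the choice of a linear SVM are illustrative assumptions, not the study's analysis code.

# Minimal sketch of cross-motion-type decoding: poor transfer relative to
# within-type decoding suggests separate, nonconfusable neural codes.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

def decode(X_train, y_train, X_test, y_test):
    clf = make_pipeline(StandardScaler(), LinearSVC())
    clf.fit(X_train, y_train)
    return clf.score(X_test, y_test)   # classification accuracy

# toy data: 80 trials x 200 voxels per motion type, two expressions
rng = np.random.default_rng(0)
X_dyn = rng.standard_normal((80, 200))
X_sta = rng.standard_normal((80, 200))
y = np.repeat([0, 1], 40)

within = decode(X_dyn[::2], y[::2], X_dyn[1::2], y[1::2])
across = decode(X_dyn, y, X_sta, y)     # generalization across motion type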
100
Johnston P, Mayes A, Hughes M, Young AW. Brain networks subserving the evaluation of static and dynamic facial expressions. Cortex 2013; 49:2462-72. [PMID: 23410736 DOI: 10.1016/j.cortex.2013.01.002] [Citation(s) in RCA: 43] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2012] [Revised: 10/05/2012] [Accepted: 01/07/2013] [Indexed: 10/27/2022]
Abstract
Because moving depictions of face emotion have greater ecological validity than their static counterparts, it has been suggested that still photographs may not engage the 'authentic' mechanisms used to recognize facial expressions in everyday life. To date, however, no neuroimaging studies have adequately addressed the question of whether the processing of static and dynamic expressions relies upon different brain substrates. To address this, we performed a functional magnetic resonance imaging (fMRI) experiment in which participants made emotion discrimination and sex discrimination judgements about static and moving face images. Compared to sex discrimination, emotion discrimination was associated with widespread increased activation in regions of occipito-temporal, parietal, and frontal cortex. These regions were activated both by moving and by static emotional stimuli, indicating a general role in the interpretation of emotion. However, portions of the inferior frontal gyri and supplementary/pre-supplementary motor area showed a task-by-motion interaction. These regions were most active during emotion judgements about static faces. Our results demonstrate a common neural substrate for recognizing static and moving facial expressions, but suggest a role for the inferior frontal gyrus in supporting simulation processes that are invoked more strongly to disambiguate static emotional cues.
Affiliation(s)
- Patrick Johnston
- Department of Psychology and York Neuroimaging Centre, University of York, Heslington, UK