1. Er G, Sweeny TD. Similarity in motion binds and bends judgments of aspect ratio. Vision Res 2024;220:108400. PMID: 38603923. DOI: 10.1016/j.visres.2024.108400.
Abstract
It is well known that objects become grouped in perceptual organization when they share some visual feature, like a common direction of motion. Less well known is that grouping can change how people perceive a set of objects. For example, when a pair of shapes consistently share a common region of space, their aspect ratios tend to be perceived as more similar (they are attracted toward each other). Conversely, when shapes are assigned to different regions in space, their aspect ratios repel each other. Here we examine whether the visual system produces both attractive and repulsive distortions when the state of grouping between a pair of shapes changes on a moment-to-moment basis. Observers viewed a pair of ellipses that differed in how flat or tall they were and reported the aspect ratio of one ellipse from the pair. Each ellipse was defined by a cloud of coherently moving dots, and the dots within the two ellipses had either the same or different directions of motion, varying from trial to trial. We found that the cued ellipse's aspect ratio was reported as repelled from the aspect ratio of the uncued ellipse when the shapes had different directions of motion, compared to when they had the same direction of motion. These results suggest that the visual system can adaptively alter visual experience based on grouping, in particular repelling the appearance of objects when they do not appear to go together, and that it can do so quickly and flexibly.
Affiliation(s)
- Görkem Er, Department of Psychology, University of Denver, United States
2. Sama MA, Nestor A, Cant JS. The Neural Dynamics of Face Ensemble and Central Face Processing. J Neurosci 2024;44:e1027232023. PMID: 38148151. PMCID: PMC10869155. DOI: 10.1523/jneurosci.1027-23.2023.
Abstract
Extensive work has investigated the neural processing of single faces, including the role of shape and surface properties. However, much less is known about the neural basis of face ensemble perception (e.g., simultaneously viewing several faces in a crowd). Importantly, the contributions of shape and surface properties have not been elucidated in face ensemble processing. Furthermore, how single central faces are processed within the context of an ensemble remains unclear. Here, we probe the neural dynamics of ensemble representation using pattern analyses applied to electrophysiological data from healthy adults (seven males, nine females). Our investigation relies on a unique set of stimuli, depicting different facial identities, which vary parametrically and independently along their shape and surface properties. These stimuli were organized into ensemble displays consisting of six surround faces arranged in a circle around one central face. Overall, our results indicate that both shape and surface properties play a significant role in face ensemble encoding, with the latter demonstrating a more pronounced contribution. Importantly, we find that the neural processing of the center face precedes that of the surround faces in an ensemble. Further, the temporal profile of center-face decoding is similar to that of single faces, while those of single faces and face ensembles diverge extensively from each other. Thus, our work capitalizes on a new center-surround paradigm to elucidate the neural dynamics of ensemble processing and the information that underpins it. Critically, our results serve to bridge the study of single and ensemble face perception.
Affiliation(s)
- Marco Agazio Sama, Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
- Adrian Nestor, Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
- Jonathan Samuel Cant, Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
3. Li P, Zhu C, Geng P, He W, Luo W. Implicit induction of expressive suppression in regulation of happy crowd emotions. Soc Neurosci 2024;19:37-48. PMID: 38595063. DOI: 10.1080/17470919.2024.2340806.
Abstract
Implicit emotion regulation provides an effective means of controlling emotions triggered by a single face, without conscious awareness or effort. Crowd emotion has been proposed to be perceived as more intense than it actually is, but it is still unclear how it can be regulated implicitly. In this study, participants viewed sets of faces of varying emotionality (e.g., happy to angry) and estimated the mean emotion of each set after being primed with an expressive suppression goal, a cognitive reappraisal goal, or a neutral goal. Discrimination was faster for happy than for angry crowds. After induction of the expressive suppression goal, but not the cognitive reappraisal goal, augmented N170 and early posterior negativity (EPN) amplitudes, as well as attenuated late positive potential (LPP) amplitudes, were observed in response to happy crowds relative to the neutral goal. Differential processing of angry crowds was not observed after induction of either regulatory goal compared to the neutral goal. Our findings thus reveal a happy-superiority effect and show that implicit induction of expressive suppression improves happy crowd emotion recognition, promotes selective coding, and successfully downregulates the neural response to happy crowds.
Affiliation(s)
- Ping Li, Department of Investigation, Liaoning Police College, Dalian, China
- Chuanlin Zhu, School of Educational Science, Yangzhou University, Dalian, China
- Peiyao Geng, Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian, Liaoning, China
- Weiqi He, Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian, Liaoning, China
- Wenbo Luo, Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian, Liaoning, China
4. Effects of aging on face processing: An ERP study of the own-age bias with neutral and emotional faces. Cortex 2023;161:13-25. PMID: 36878097. DOI: 10.1016/j.cortex.2023.01.007.
Abstract
Older adults systematically show an enhanced N170 amplitude when viewing facial expressions of emotion. The present study aimed to replicate this finding and further investigate whether this effect is specific to facial stimuli, present in other neural correlates of face processing, and modulated by own-age faces. To this purpose, younger (n = 25; Mage = 28.36), middle-aged (n = 23; Mage = 48.74), and older adults (n = 25; Mage = 67.36) performed two face/emotion identification tasks during an EEG recording. The results showed that the groups did not differ in P100 amplitude, but older adults had increased N170 amplitude for both facial and non-facial stimuli. The event-related potentials analysed were not modulated by an own-age bias, but older faces elicited a larger N170 in the Emotion Identification Task in all groups. This increased amplitude may reflect the greater ambiguity of older faces due to age-related changes in their physical features, which may require more neural resources to decode. Regarding the P250, older faces elicited smaller amplitudes than younger faces, which may reflect reduced processing of the emotional content of older faces. This interpretation is consistent with the lower accuracy obtained for this category of stimuli across groups. These results have important social implications and suggest that aging may hamper the neural processing of facial expressions of emotion, especially for own-age peers.
5. Zhang T, Yang Y, Xu L, Tang X, Hu Y, Xiong X, Wei Y, Cui H, Tang Y, Liu H, Chen T, Liu Z, Hui L, Li C, Guo X, Wang J. Inefficient integration during multiple facial processing in pre-morbid and early phases of psychosis. World J Biol Psychiatry 2022;23:361-373. PMID: 34842500. DOI: 10.1080/15622975.2021.2011402.
Abstract
OBJECTIVES: We used eye-tracking to evaluate multiple facial context processing and event-related potentials (ERPs) to evaluate multiple facial recognition in individuals at clinical high risk (CHR) for psychosis. METHODS: In total, 173 subjects (83 CHRs and 90 healthy controls [HCs]) were included and their emotion perception performance was assessed. Forty CHRs and 40 well-matched HCs completed an eye-tracking task in which they viewed pictures depicting a person in the foreground, presented as context-free, context-compatible, or context-incompatible. During the two-year follow-up, 26 CHRs developed psychosis, including 17 who developed first-episode schizophrenia (FES). The FES patients and 18 well-matched HCs completed the face-number detection ERP task, with image stimuli containing one, two, or three faces. RESULTS: Compared to the HC group, the CHR group showed reduced visual attention to contextual information when viewing multiple faces. With increasing complexity of the contextual faces, the differences in eye-tracking characteristics also increased. In the ERP task, the N170 amplitude decreased with a higher face number in FES patients, whereas it increased with a higher face number in HCs. CONCLUSIONS: Individuals in the very early phase of psychosis showed facial processing deficits, with supporting evidence of different scan paths during context processing and disruption of the N170 during multiple facial recognition.
Affiliation(s)
- TianHong Zhang, Shanghai Mental Health Center, Shanghai Jiaotong University School of Medicine, Shanghai Key Laboratory of Psychotic Disorders (13dz2260500), Shanghai, China
- YingYu Yang, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- LiHua Xu, Shanghai Mental Health Center, Shanghai Jiaotong University School of Medicine, Shanghai Key Laboratory of Psychotic Disorders (13dz2260500), Shanghai, China
- XiaoChen Tang, Shanghai Mental Health Center, Shanghai Jiaotong University School of Medicine, Shanghai Key Laboratory of Psychotic Disorders (13dz2260500), Shanghai, China
- YeGang Hu, Shanghai Mental Health Center, Shanghai Jiaotong University School of Medicine, Shanghai Key Laboratory of Psychotic Disorders (13dz2260500), Shanghai, China
- Xin Xiong, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- YanYan Wei, Shanghai Mental Health Center, Shanghai Jiaotong University School of Medicine, Shanghai Key Laboratory of Psychotic Disorders (13dz2260500), Shanghai, China
- HuiRu Cui, Shanghai Mental Health Center, Shanghai Jiaotong University School of Medicine, Shanghai Key Laboratory of Psychotic Disorders (13dz2260500), Shanghai, China
- YingYing Tang, Shanghai Mental Health Center, Shanghai Jiaotong University School of Medicine, Shanghai Key Laboratory of Psychotic Disorders (13dz2260500), Shanghai, China
- HaiChun Liu, Department of Automation, Shanghai Jiao Tong University, Shanghai, China
- Tao Chen, Big Data Research Lab, University of Waterloo, Waterloo, Ontario, Canada; Senior Research Fellow, Labor and Worklife Program, Harvard University, Cambridge, Massachusetts, United States; Niacin (Shanghai) Technology Co., Ltd, Shanghai, China
- Zhi Liu, School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Li Hui, Institute of Mental Health, The Affiliated Guangji Hospital of Soochow University, Soochow University, Suzhou, Jiangsu, China
- ChunBo Li, Shanghai Mental Health Center, Shanghai Jiaotong University School of Medicine, Shanghai Key Laboratory of Psychotic Disorders (13dz2260500), Shanghai, China
- XiaoLi Guo, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- JiJun Wang, Shanghai Mental Health Center, Shanghai Jiaotong University School of Medicine, Shanghai Key Laboratory of Psychotic Disorders (13dz2260500), Shanghai, China; Center for Excellence in Brain Science and Intelligence Technology (CEBSIT), Chinese Academy of Science, Shanghai, China; Institute of Psychology and Behavioral Science, Shanghai Jiao Tong University, Shanghai, China
6. Treatment effects on event-related EEG potentials and oscillations in Alzheimer's disease. Int J Psychophysiol 2022;177:179-201. PMID: 35588964. DOI: 10.1016/j.ijpsycho.2022.05.008.
Abstract
Alzheimer's disease dementia (ADD) is the most common neurodegenerative disorder underlying mild cognitive impairment (MCI) and dementia in older persons. The disease is caused by an abnormal accumulation of amyloid-beta and tau proteins in the brain. Very recently, the first disease-modifying drug (Aducanumab) was licensed with reservations. There is therefore a need to identify and use biomarkers probing the neurophysiological underpinnings of human cognitive functions to test the clinical efficacy of that drug. In this regard, event-related electroencephalographic potentials (ERPs) and oscillations (EROs) are promising candidates. Here, an Expert Panel from the Electrophysiology Professional Interest Area of the Alzheimer's Association and Global Brain Consortium reviewed the literature on the effects of the most widely used symptomatic drugs against ADD (acetylcholinesterase inhibitors) on ERPs and EROs in ADD patients with MCI and dementia at the group level. The most convincing results were found in ADD patients: acetylcholinesterase inhibitors partially normalized ERP P300 peak latency and amplitude in oddball paradigms using visual stimuli. In these same paradigms, those drugs also partially normalized ERO phase-locking in the theta band (4-7 Hz) and spectral coherence between electrode pairs in the gamma band (around 40 Hz). These results are of great interest and may motivate multicentric, double-blind, randomized, placebo-controlled clinical trials in MCI and ADD patients for final cross-validation.
7. Developmental Differences in Neuromagnetic Cortical Activation and Phase Synchrony Elicited by Scenes with Faces during Movie Watching. eNeuro 2022;9:ENEURO.0494-21.2022. PMID: 35443990. PMCID: PMC9087730. DOI: 10.1523/eneuro.0494-21.2022.
Abstract
The neural underpinnings of humans’ ability to process faces and how it changes over typical development have been extensively studied using paradigms where face stimuli are oversimplified, isolated, and decontextualized. The prevalence of this approach, however, has resulted in limited knowledge of face processing in ecologically valid situations, in which faces are accompanied by contextual information at multiple time scales. In the present study, we use a naturalistic movie paradigm to investigate how neuromagnetic activation and phase synchronization elicited by faces from movie scenes in humans differ between children and adults. We used MEG data from 22 adults (6 females, 3 left handed; mean age, 27.7 ± 5.28 years) and 20 children (7 females, 1 left handed; mean age, 9.5 ± 1.52 years) collected during movie viewing. We investigated neuromagnetic time-locked activation and phase synchronization elicited by movie scenes containing faces in contrast to other movie scenes. Statistical differences between groups were tested using a multivariate data-driven approach. Our results revealed lower face-elicited activation and theta/alpha phase synchrony between 120 and 330 ms in children compared with adults. Reduced connectivity in children was observed between the primary visual areas as well as their connections with higher-order frontal and parietal cortical areas. This is the first study to map neuromagnetic developmental changes in face processing in a time-locked manner using a naturalistic movie paradigm. It supports and extends the existing evidence of core face-processing network maturation accompanied by the development of an extended system of higher-order cortical areas engaged in face processing.
8. Kawai N, Guo Z, Nakata R. A human voice, but not human visual image makes people perceive food to taste better and to eat more: "Social" facilitation of eating in a digital media. Appetite 2021;167:105644. PMID: 34416287. DOI: 10.1016/j.appet.2021.105644.
Abstract
Food tastes better and people eat more when eating with others than when eating alone. Although previous research has shown that watching television facilitates eating, the influence of video content is unclear. In Experiment 1 we compared videos of a person speaking with videos showing only objects (food and a cell phone); in Experiment 2 we used videos of groups of four people talking. Half of these videos presented human voices (including the objects-only video), while the other half had no audio. In Experiment 1, participants rated the popcorn as tasting better and consumed more when eating alone while listening to someone talking, irrespective of whether the speaker was visible in the video. A similar result was found in Experiment 2, irrespective of the increased number of people talking in the video. In Experiment 3, we assessed the extent to which human voices contributed to the increase in food intake and perceived taste by substituting sine-wave speech (SWS) for the human voices used in Experiment 1, and found that perceived taste and food intake were not facilitated when participants watched videos with SWS. The present study indicates that the human voice plays a crucial role in the perceived taste of food and the amount consumed when people eat alone while watching television. Suggestions for improving food enjoyment when dining alone are discussed.
Affiliation(s)
- Nobuyuki Kawai, Department of Cognitive and Psychological Sciences, Nagoya University, Japan; Academy of Emerging Science, Chubu University, Japan
- Zhuogen Guo, Department of Cognitive and Psychological Sciences, Nagoya University, Japan
- Ryuzaburo Nakata, Department of Cognitive and Psychological Sciences, Nagoya University, Japan
9. Nunes AS, Mamashli F, Kozhemiako N, Khan S, McGuiggan NM, Losh A, Joseph RM, Ahveninen J, Doesburg SM, Hämäläinen MS, Kenet T. Classification of evoked responses to inverted faces reveals both spatial and temporal cortical response abnormalities in Autism spectrum disorder. Neuroimage Clin 2020;29:102501. PMID: 33310630. PMCID: PMC7734307. DOI: 10.1016/j.nicl.2020.102501.
Abstract
The neurophysiology of face processing has been studied extensively in the context of the social impairments associated with autism spectrum disorder (ASD), but existing studies have concentrated mainly on univariate analyses of responses to upright faces and, less frequently, inverted faces. The few existing studies on neurophysiological responses to inverted faces in ASD have used univariate approaches, with divergent results. Here, we used a data-driven, classification-based, multivariate machine learning decoding approach to investigate the temporal and spatial properties of the neurophysiological evoked response to upright and inverted faces, relative to the neurophysiological evoked response to houses, a neutral stimulus. Twenty-one participants with ASD (2 females) and 29 typically developing (TD) participants (4 females), aged 7 to 19, took part in this study. Group-level classification accuracies were obtained for each condition, using first the temporal domain of the evoked responses and then, separately, the spatial distribution of the evoked responses on the cortical surface. We found that classification of responses to inverted neutral faces vs. houses was less accurate in ASD than in TD, in both the temporal and spatial domains. In contrast, there were no group differences in the classification of evoked responses to upright neutral faces relative to houses. Using the classification in the temporal domain, lower decoding accuracies in ASD were found around 120 ms and 170 ms, corresponding to the known components of the evoked responses to faces. Using the classification in the spatial domain, lower decoding accuracies in ASD were found in the right supramarginal gyrus (SMG), intraparietal sulcus (IPS), and posterior superior temporal sulcus (pSTS), but not in core face-processing areas. Importantly, individual classification accuracies from both the temporal and spatial classifiers correlated with ASD severity, confirming the relevance of the results to the ASD phenotype.
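The multivariate decoding logic described in this abstract (classifying condition, e.g. face vs. house, from the pattern of evoked responses, separately at each time point) can be sketched as follows. This is a generic, minimal illustration using a nearest-class-mean classifier on synthetic data; the function name, cross-validation scheme, and toy dimensions are assumptions, not the authors' pipeline.

```python
import numpy as np

def timewise_decoding(X, y, n_folds=5, rng=None):
    """Time-resolved two-class decoding of evoked responses.

    X : (n_trials, n_sensors, n_times) single-trial sensor data
    y : (n_trials,) binary condition labels (0 or 1)
    Returns cross-validated accuracy at each time point, using a
    nearest-class-mean classifier trained on the sensor pattern.
    """
    rng = rng or np.random.default_rng()
    n_trials, _, n_times = X.shape
    order = rng.permutation(n_trials)
    folds = np.array_split(order, n_folds)
    acc = np.zeros(n_times)
    for test_idx in folds:
        train_idx = np.setdiff1d(order, test_idx)
        for t in range(n_times):
            # Class-mean sensor patterns estimated on the training trials
            m0 = X[train_idx][y[train_idx] == 0, :, t].mean(axis=0)
            m1 = X[train_idx][y[train_idx] == 1, :, t].mean(axis=0)
            # Assign each test trial to the nearer class mean
            d0 = np.linalg.norm(X[test_idx, :, t] - m0, axis=1)
            d1 = np.linalg.norm(X[test_idx, :, t] - m1, axis=1)
            pred = (d1 < d0).astype(int)
            acc[t] += np.mean(pred == y[test_idx])
    return acc / n_folds
```

Group differences of the kind reported here would then correspond to comparing such accuracy time courses (or their spatial analogues) between ASD and TD participants.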
Affiliation(s)
- Adonay S Nunes, Department of Neurology, MGH, Harvard Medical School, Boston, MA, USA; Department of Biomedical Physiology and Kinesiology, Simon Fraser University, Vancouver, British Columbia, Canada
- Fahimeh Mamashli, Department of Radiology, MGH, Harvard Medical School, Boston, MA, USA; Athinoula A. Martinos Center for Biomedical Imaging, MGH/HST, Charlestown, MA, USA
- Nataliia Kozhemiako, Department of Neurology, MGH, Harvard Medical School, Boston, MA, USA; Department of Biomedical Physiology and Kinesiology, Simon Fraser University, Vancouver, British Columbia, Canada
- Sheraz Khan, Department of Radiology, MGH, Harvard Medical School, Boston, MA, USA; Athinoula A. Martinos Center for Biomedical Imaging, MGH/HST, Charlestown, MA, USA
- Nicole M McGuiggan, Department of Neurology, MGH, Harvard Medical School, Boston, MA, USA; Athinoula A. Martinos Center for Biomedical Imaging, MGH/HST, Charlestown, MA, USA
- Ainsley Losh, Department of Neurology, MGH, Harvard Medical School, Boston, MA, USA
- Jyrki Ahveninen, Department of Radiology, MGH, Harvard Medical School, Boston, MA, USA; Athinoula A. Martinos Center for Biomedical Imaging, MGH/HST, Charlestown, MA, USA
- Sam M Doesburg, Department of Biomedical Physiology and Kinesiology, Simon Fraser University, Vancouver, British Columbia, Canada; Behavioural and Cognitive Neuroscience Institute, Simon Fraser University, Vancouver, British Columbia, Canada
- Matti S Hämäläinen, Department of Radiology, MGH, Harvard Medical School, Boston, MA, USA; Athinoula A. Martinos Center for Biomedical Imaging, MGH/HST, Charlestown, MA, USA; Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Espoo, Finland
- Tal Kenet, Department of Neurology, MGH, Harvard Medical School, Boston, MA, USA; Athinoula A. Martinos Center for Biomedical Imaging, MGH/HST, Charlestown, MA, USA
10. Yang YF, Brunet-Gouet E, Burca M, Kalunga EK, Amorim MA. Brain Processes While Struggling With Evidence Accumulation During Facial Emotion Recognition: An ERP Study. Front Hum Neurosci 2020;14:340. PMID: 33100986. PMCID: PMC7497730. DOI: 10.3389/fnhum.2020.00340.
Abstract
The human brain is tuned to recognize emotional facial expressions in faces with a natural upright orientation. The relative contributions of featural, configural, and holistic processing to decision-making are as yet poorly understood. This study used a diffusion decision model (DDM) of decision-making to investigate the contribution of early face-sensitive processes to emotion recognition from physiognomic features (the eyes, nose, and mouth), by determining how experimental conditions tapping those processes affect early face-sensitive neuroelectric components (P100, N170, and P250) that reflect the processes determining evidence accumulation at the behavioral level. We first examined the effects of both stimulus orientation (upright vs. inverted) and stimulus type (photographs vs. sketches) on behavior and on neuroelectric component amplitudes and latencies. Then, we explored the sources of variance common to the experimental effects on event-related potentials (ERPs) and the DDM parameters. Several results suggest that the N170 indexes core visual processing for emotion recognition decision-making: (a) the additive effect of stimulus inversion and impoverishment on N170 latency; and (b) multivariate analysis suggesting that N170 neuroelectric activity must increase to counteract the detrimental effects of face inversion on drift rate and of stimulus impoverishment on the stimulus-encoding component of non-decision times. Overall, our results show that emotion recognition is still possible even with degraded stimulation, but at a neurocognitive cost, reflecting the extent to which our brain struggles to accumulate sensory evidence of a given emotion.
Accordingly, we theorize that: (a) the P100 neural generator would provide a holistic frame of reference for the face percept through categorical encoding; (b) the N170 neural generator would maintain the structural cohesiveness of the subtle configural variations in facial expressions across our experimental manipulations through coordinate encoding of the facial features; and (c) building on the preceding configural processing, the neurons generating the P250 would be responsible for a normalization process that adapts to the facial features to match the stimulus to internal representations of emotional expressions.
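In a diffusion decision model, a choice arises from noisy evidence accumulating toward a decision boundary, and the reaction time is the accumulation time plus a non-decision time (stimulus encoding and motor output). A minimal simulation, with illustrative parameter values rather than those fitted in the study, might look like:

```python
import random

def simulate_ddm(drift, boundary=1.0, ndt=0.3, dt=0.001, noise=1.0, rng=None):
    """Simulate one diffusion-decision trial.

    Evidence x starts at 0 and drifts toward +boundary (correct) or
    -boundary (error); ndt is the non-decision time (encoding + motor).
    Returns (correct, reaction_time_in_seconds).
    """
    rng = rng or random.Random()
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        # Euler step: deterministic drift plus Gaussian diffusion noise
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return x > 0, ndt + t
```

In this framework, a lower drift rate (as posited for face inversion) predicts slower and less accurate emotion judgments, while slower stimulus encoding (as posited for impoverished sketches) lengthens the non-decision time, which is the pattern the authors relate to N170 activity.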
Affiliation(s)
- Yu-Fang Yang, CIAMS, Université Paris-Saclay, Orsay, France; CIAMS, Université d'Orléans, Orléans, France
- Eric Brunet-Gouet, Centre Hospitalier de Versailles, Hôpital Mignot, Le Chesnay, France; CESP, DevPsy, Université Paris-Saclay, UVSQ, Inserm, Villejuif, France
- Mariana Burca, Centre Hospitalier de Versailles, Hôpital Mignot, Le Chesnay, France; CESP, DevPsy, Université Paris-Saclay, UVSQ, Inserm, Villejuif, France
- Michel-Ange Amorim, CIAMS, Université Paris-Saclay, Orsay, France; CIAMS, Université d'Orléans, Orléans, France
11. Nunes AS, Kozhemiako N, Moiseev A, Seymour RA, Cheung TPL, Ribary U, Doesburg SM. Neuromagnetic activation and oscillatory dynamics of stimulus-locked processing during naturalistic viewing. Neuroimage 2019;216:116414. PMID: 31794854. DOI: 10.1016/j.neuroimage.2019.116414.
Abstract
Naturalistic stimuli, such as watching a movie in the scanner, provide an ecologically valid paradigm with the potential to extract valuable information about how the brain processes complex stimuli in realistic visual and auditory contexts. Naturalistic viewing is also easier to conduct with challenging participant groups, including patients and children. Given the high temporal resolution of MEG, in the present study we demonstrate how a short movie clip can be used to map distinguishable activation and connectivity dynamics underlying the processing of specific classes of visual stimuli, such as faces and hand manipulations, as well as contrasting activation dynamics for auditory words and non-words. MEG data were collected from 22 healthy volunteers (6 females, 3 left-handed; mean age 27.7 ± 5.28 years) during the presentation of naturalistic audiovisual stimuli. The MEG data were split into trials with onsets of the stimulus classes of interest (words, non-words, faces, hand manipulations). Based on the components of the averaged sensor ERFs time-locked to the visual and auditory stimulus onsets, four and three time windows, respectively, were defined to explore brain activation dynamics. Pseudo-Z, defined as the ratio of the source-projected time-locked power to the projected noise power for each vertex, was computed and used as a proxy for time-locked brain activation. Statistical testing using mean-centered Partial Least Squares analysis indicated periods in which a given visual or auditory stimulus class elicited higher activation. Based on peak pseudo-Z differences between the visual conditions, time-frequency resolved analyses were performed to assess beta-band desynchronization in motor-related areas and inter-trial phase synchronization between face-processing areas.
Our results provide the first evidence that activation and connectivity dynamics in canonical brain regions associated with the processing of particular classes of visual and auditory stimuli can be reliably mapped using MEG during the presentation of naturalistic stimuli. Given the strength of MEG for brain mapping in the temporal and frequency domains, the use of naturalistic stimuli may open new avenues for analyzing brain dynamics during ecologically valid sensation and perception.
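The pseudo-Z activation proxy described above, the ratio of source-projected time-locked power to projected noise power at a vertex, can be sketched for a single vertex as follows. The toy data and the simple noise estimate are illustrative assumptions; the authors' beamformer-based computation is not reproduced here.

```python
import numpy as np

def pseudo_z(source_trials, noise_trials):
    """Pseudo-Z for one source vertex.

    source_trials : (n_trials, n_times) source-projected stimulus epochs
    noise_trials  : (n_trials, n_times) source-projected baseline/noise
    Returns the power of the trial-averaged (time-locked) response
    divided by the mean projected noise power.
    """
    evoked = source_trials.mean(axis=0)   # time-locked average
    evoked_power = np.mean(evoked ** 2)
    noise_power = np.mean(noise_trials ** 2)
    return evoked_power / noise_power
```

A vertex with a consistent stimulus-locked response yields a pseudo-Z well above what the same computation gives for baseline data, which is what makes the quantity usable as an activation map across vertices.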
Affiliation(s)
- Adonay S Nunes
- Biomedical Physiology and Kinesiology, Simon Fraser University, Burnaby, BC, Canada.
| | - Nataliia Kozhemiako
- Biomedical Physiology and Kinesiology, Simon Fraser University, Burnaby, BC, Canada
| | - Alexander Moiseev
- Behavioral & Cognitive Neuroscience Institute, Simon Fraser University, Burnaby, BC, Canada
| | - Robert A Seymour
- Aston Brain Centre, School of Life and Health Sciences, Aston University, Birmingham, UK; Department of Cognitive Science, Macquarie University, Sydney, Australia
- Teresa P L Cheung
- School of Engineering Science, Simon Fraser University, Burnaby, BC, Canada
- Urs Ribary
- Behavioral & Cognitive Neuroscience Institute, Simon Fraser University, Burnaby, BC, Canada; Department Pediatrics and Psychiatry, University of British Columbia, Vancouver, BC, Canada; B.C. Children's Hospital Research Institute, Vancouver, BC, Canada; Department of Psychology, Simon Fraser University, Burnaby, BC, Canada
- Sam M Doesburg
- Biomedical Physiology and Kinesiology, Simon Fraser University, Burnaby, BC, Canada; Behavioral & Cognitive Neuroscience Institute, Simon Fraser University, Burnaby, BC, Canada
12
Monciunskaite R, Malden L, Lukstaite I, Ruksenas O, Griksiene R. Do oral contraceptives modulate an ERP response to affective pictures? Biol Psychol 2019; 148:107767. [PMID: 31509765 DOI: 10.1016/j.biopsycho.2019.107767] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2019] [Revised: 08/23/2019] [Accepted: 09/06/2019] [Indexed: 12/22/2022]
Abstract
Indications exist that the use of oral contraceptives affects women's socio-emotional behaviour, brain function, and cognitive abilities, but the evidence is still scarce and ambiguous. We aimed to compare the affective processing of visual stimuli between oral contraceptive users (OC, n = 33) and naturally cycling women (NC, n = 37) using the event-related potential (ERP) method. The main findings are: (i) emotionally arousing stimuli elicited significantly enlarged late positive potential (LPP) amplitudes compared to neutral stimuli; (ii) anti-androgenic OC users demonstrated diminished brain reactivity to visual stimuli; and (iii) these users also showed a significantly blunted reaction to highly unpleasant images. In addition, a positive relationship between progesterone and the global field power (GFP) evoked by highly unpleasant and highly pleasant visual emotional stimuli was observed in NC women, while OC users demonstrated a trend toward a negative relationship between GFP and progesterone level. These findings suggest possible modulations of the affective processing of visual stimuli when hormonal contraceptives are used.
Affiliation(s)
- R Monciunskaite
- Institute of Biosciences, Life Sciences Center, Vilnius University, Vilnius, Lithuania.
- L Malden
- Institute of Biosciences, Life Sciences Center, Vilnius University, Vilnius, Lithuania
- I Lukstaite
- Institute of Biosciences, Life Sciences Center, Vilnius University, Vilnius, Lithuania
- O Ruksenas
- Institute of Biosciences, Life Sciences Center, Vilnius University, Vilnius, Lithuania
- R Griksiene
- Institute of Biosciences, Life Sciences Center, Vilnius University, Vilnius, Lithuania
13
Elucidating the Neural Representation and the Processing Dynamics of Face Ensembles. J Neurosci 2019; 39:7737-7747. [PMID: 31413074 DOI: 10.1523/jneurosci.0471-19.2019] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2019] [Revised: 08/02/2019] [Accepted: 08/06/2019] [Indexed: 11/21/2022] Open
Abstract
Extensive behavioral work has documented the ability of the human visual system to extract summary representations from face ensembles (e.g., the average identity of a crowd of faces). Yet, the nature of such representations, their underlying neural mechanisms, and their temporal dynamics await elucidation. Here, we examine summary representations of facial identity in human adults (of both sexes) with the aid of pattern analyses, as applied to EEG data, along with behavioral testing. Our findings confirm the ability of the visual system to form such representations both explicitly and implicitly (i.e., with or without the use of specific instructions). We show that summary representations, rather than individual ensemble constituents, can be decoded from neural signals elicited by ensemble perception, we describe the properties of such representations by appeal to multidimensional face space constructs, and we visualize their content through neural-based image reconstruction. Further, we show that the temporal profile of ensemble processing diverges systematically from that of single faces, consistent with a slower, more gradual accumulation of perceptual information. Thus, our findings reveal the representational basis of ensemble processing, its fine-grained visual content, and its neural dynamics.
SIGNIFICANCE STATEMENT: Humans encounter groups of faces, or ensembles, in a variety of environments. Previous behavioral research has investigated how humans process face ensembles as well as the types of summary representations that can be derived from them, such as average emotion, gender, and identity. However, the neural mechanisms mediating these processes are unclear. Here, we demonstrate that ensemble representations, with different facial identity summaries, can be decoded and even visualized from neural data through multivariate analyses. These results provide, to our knowledge, the first detailed investigation into the status and the visual content of neural ensemble representations of faces. Further, the current findings shed light on the temporal dynamics of face ensembles and their relationship with single-face processing.
14
Jeantet C, Laprevote V, Schwan R, Schwitzer T, Maillard L, Lighezzolo-Alnot J, Caharel S. Time course of spatial frequency integration in face perception: An ERP study. Int J Psychophysiol 2019; 143:105-115. [PMID: 31276696 DOI: 10.1016/j.ijpsycho.2019.07.001] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2018] [Revised: 06/21/2019] [Accepted: 07/01/2019] [Indexed: 10/26/2022]
Abstract
Face perception is based on the processing and integration of multiple spatial frequency (SF) ranges. However, the temporal dynamics of SF integration in forming an early face representation in the human brain are still a matter of debate. To address this issue, we recorded event-related potentials (ERPs) during the presentation of spatial frequency-manipulated facial images. Twenty-six participants performed a gender discrimination task on non-filtered, low-, high-, and band-pass filtered face images, corresponding, respectively, to the full range, spatial frequencies up to 8 cycles/image, above 32 cycles/image, and from 8 to 16 cycles/image. Behaviorally, task-related performance was more accurate and faster for non-filtered (NF) and mid-range SF (MSF) stimuli than for low SF (LSF) and high SF (HSF) stimuli. At both the behavioral and electrophysiological levels, responses to the MSF content of faces did not differ from responses to full-spectrum NF facial images. In the ERPs, LSF facial images evoked the largest P1 amplitude, while HSF facial images evoked the largest N170 amplitude, compared with the other conditions. Since LSFs and HSFs are thought to convey global and local information, respectively, our observations lend further support to the "coarse-to-fine" theory of face processing. Furthermore, they offer original evidence of the effectiveness and adequacy of the mid-range spatial frequencies in face perception. Possible theoretical interpretations of our findings are discussed.
Affiliation(s)
- Coline Jeantet
- Université de Lorraine, Laboratoire Lorrain de Psychologie et Neurosciences (2LPN - EA 7489), Nancy F-54000, France; Université de Lorraine, Laboratoire InterPsy (EA 4432), Nancy F-54000, France; Centre Psychothérapique de Nancy, Pôle Hospitalo-universitaire de Psychiatrie d'Adultes du Grand Nancy, Laxou F-54520, France
- Vincent Laprevote
- Centre Psychothérapique de Nancy, Pôle Hospitalo-universitaire de Psychiatrie d'Adultes du Grand Nancy, Laxou F-54520, France; Institut National de la Santé et de la Recherche Médicale U1114, Pôle de Psychiatrie, Fédération de Médecine Translationnelle de Strasbourg, Centre Hospitalier Régional Universitaire de Strasbourg, Université de Strasbourg, Strasbourg, France; Université de Lorraine, Faculté de Médecine, Vandoeuvre-lès-Nancy, F-54500 France
- Raymund Schwan
- Centre Psychothérapique de Nancy, Pôle Hospitalo-universitaire de Psychiatrie d'Adultes du Grand Nancy, Laxou F-54520, France; Institut National de la Santé et de la Recherche Médicale U1114, Pôle de Psychiatrie, Fédération de Médecine Translationnelle de Strasbourg, Centre Hospitalier Régional Universitaire de Strasbourg, Université de Strasbourg, Strasbourg, France; CHRU Nancy, Maison des Addictions, Nancy F-54000, France; Université de Lorraine, Faculté de Médecine, Vandoeuvre-lès-Nancy, F-54500 France
- Thomas Schwitzer
- Centre Psychothérapique de Nancy, Pôle Hospitalo-universitaire de Psychiatrie d'Adultes du Grand Nancy, Laxou F-54520, France; Institut National de la Santé et de la Recherche Médicale U1114, Pôle de Psychiatrie, Fédération de Médecine Translationnelle de Strasbourg, Centre Hospitalier Régional Universitaire de Strasbourg, Université de Strasbourg, Strasbourg, France; Université de Lorraine, Faculté de Médecine, Vandoeuvre-lès-Nancy, F-54500 France
- Louis Maillard
- Université de Lorraine, CNRS, CRAN - UMR 7039, Nancy F-54000, France; CHRU Nancy, Service de Neurologie, Nancy F-54000, France
- Stéphanie Caharel
- Université de Lorraine, Laboratoire Lorrain de Psychologie et Neurosciences (2LPN - EA 7489), Nancy F-54000, France; Institut Universitaire de France, Paris F-75000, France.
15
ERP evidence on how gaze convergence affects social attention. Sci Rep 2019; 9:7586. [PMID: 31110239 PMCID: PMC6527578 DOI: 10.1038/s41598-019-44058-w] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2018] [Accepted: 04/29/2019] [Indexed: 11/30/2022] Open
Abstract
How people process gaze cues from multiple others is an important but rarely studied question. Our study investigated it using an adapted gaze cueing paradigm to examine the cueing effect of multiple gazes and its neural correlates. We manipulated the gaze directions of two human avatars to be either convergent, created by the two avatars simultaneously averting their gazes in the same direction, or non-convergent, when only one of the two avatars shifted its gaze. Our results showed faster reaction times and larger target-congruency effects following convergent gazes shared by the avatars, compared with the non-convergent gaze condition. These findings extend previous research by demonstrating that observing shared gazes from as few as two persons is sufficient to enhance gaze cueing. Additionally, ERP analyses revealed that (1) convergent gazes evoked the N170 in both the left and right hemispheres, while non-convergent gazes evoked the N170 mainly in the hemisphere contralateral to the cueing face; and (2) effects of target congruency on the target-locked N1 and P3 were modulated by gaze convergence. These findings shed light on the temporal features of the processing of multi-gaze cues.
16
Bublatzky F, Pittig A, Schupp HT, Alpers GW. Face-to-face: Perceived personal relevance amplifies face processing. Soc Cogn Affect Neurosci 2018; 12:811-822. [PMID: 28158672 PMCID: PMC5460051 DOI: 10.1093/scan/nsx001] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2016] [Accepted: 01/16/2017] [Indexed: 11/13/2022] Open
Abstract
The human face conveys emotional and social information, but it is not well understood how these two aspects influence face perception. To model a group situation, two faces displaying happy, neutral, or angry expressions were presented. Importantly, the faces were either facing the observer, or they were presented in profile view directed towards or looking away from each other. In Experiment 1 (n = 64), face pairs were rated for perceived relevance, wish to interact, and displayed interactivity, as well as valence and arousal. All variables revealed main effects of facial expression (emotional > neutral) and face orientation (facing observer > towards > away), and interactions showed that the evaluation of emotional faces varies strongly with their orientation. Experiment 2 (n = 33) examined the temporal dynamics of perceptual-attentional processing of these face constellations with event-related potentials. Processing of emotional and neutral faces differed significantly in N170 amplitudes, the early posterior negativity (EPN), and sustained positive potentials. Importantly, selective emotional face processing varied as a function of face orientation, indicating early emotion-specific (N170, EPN) and late threat-specific effects (LPP, sustained positivity). Taken together, perceived personal relevance to the observer, conveyed by facial expression and face direction, amplifies emotional face processing within triadic group situations.
Affiliation(s)
- Florian Bublatzky
- Department of Psychology, Clinical Psychology and Biological Psychology and Psychotherapy, School of Social Sciences, University of Mannheim, Mannheim, Germany
- Andre Pittig
- Department of Psychology, Clinical Psychology and Biological Psychology and Psychotherapy, School of Social Sciences, University of Mannheim, Mannheim, Germany
- Harald T Schupp
- Department of Psychology, University of Konstanz, Konstanz, Germany
- Georg W Alpers
- Department of Psychology, Clinical Psychology and Biological Psychology and Psychotherapy, School of Social Sciences, University of Mannheim, Mannheim, Germany
17
Ji L, Rossi V, Pourtois G. Mean emotion from multiple facial expressions can be extracted with limited attention: Evidence from visual ERPs. Neuropsychologia 2018; 111:92-102. [PMID: 29371095 DOI: 10.1016/j.neuropsychologia.2018.01.022] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2017] [Revised: 12/04/2017] [Accepted: 01/16/2018] [Indexed: 11/30/2022]
Abstract
Human observers can readily extract the mean emotion from multiple faces shown briefly. However, it is currently debated whether this ability depends on attention. To address this question, we recorded lateralized event-related brain potentials (i.e., N2pc and SPCN) to track covert shifts of spatial attention while healthy adult participants discriminated the mean emotion of four faces shown in the periphery at an attended or unattended spatial location, using a cueing technique. As a control condition, they were asked to discriminate the emotional expression of a single face shown in the periphery. Analyses of saccade-free data showed that mean emotion discrimination was above chance level but statistically indistinguishable between the attended and unattended locations, suggesting that attention was not a prerequisite for averaging. Interestingly, at the ERP level, covert shifts of spatial attention were captured by the N2pc and SPCN components. Altogether, these novel findings suggest that averaging multiple facial expressions shown in the periphery can operate with limited attention.
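The lateralized components tracked in this study (N2pc, SPCN) are conventionally isolated as contralateral-minus-ipsilateral difference waves over posterior electrodes. A small illustrative sketch with simulated data (all signal values and the window here are invented for illustration, not taken from the study):

```python
import numpy as np

def lateralized_wave(contra, ipsi):
    """Contralateral-minus-ipsilateral difference wave, the standard way to
    isolate lateralized components such as the N2pc and SPCN."""
    return contra - ipsi

t = np.arange(300) * 0.002                  # 0-598 ms in 2 ms steps
# Simulated negativity peaking ~250 ms at the contralateral site (e.g., PO7/PO8)
contra = -1.5 * np.exp(-((t - 0.25) ** 2) / (2 * 0.03 ** 2))
ipsi = np.zeros_like(t)
n2pc = lateralized_wave(contra, ipsi)
window = (t >= 0.2) & (t <= 0.3)            # typical N2pc analysis window
mean_amp = n2pc[window].mean()              # negative if an N2pc is present
```

A negative mean amplitude in the analysis window indicates a lateralized shift of attention toward the cued hemifield.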
Affiliation(s)
- Luyan Ji
- Department of Experimental-Clinical and Health Psychology, Ghent University, Ghent, Belgium.
- Valentina Rossi
- Department of Experimental-Clinical and Health Psychology, Ghent University, Ghent, Belgium
- Gilles Pourtois
- Department of Experimental-Clinical and Health Psychology, Ghent University, Ghent, Belgium
18
Turano MT, Lao J, Richoz AR, de Lissa P, Degosciu SBA, Viggiano MP, Caldara R. Fear boosts the early neural coding of faces. Soc Cogn Affect Neurosci 2017; 12:1959-1971. [PMID: 29040780 PMCID: PMC5716185 DOI: 10.1093/scan/nsx110] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2016] [Revised: 09/18/2017] [Accepted: 10/02/2017] [Indexed: 11/14/2022] Open
Abstract
The rapid extraction of facial identity and emotional expressions is critical for adapted social interactions. These biologically relevant abilities have been associated with early neural responses on the face-sensitive N170 component. However, whether all facial expressions uniformly modulate the N170, and whether this effect occurs only when emotion categorization is task-relevant, is still unclear. To clarify this issue, we recorded high-resolution electrophysiological signals while 22 observers perceived the six basic expressions plus neutral. We used a repetition suppression paradigm, with an adaptor followed by a target face displaying the same identity and expression (trials of interest). We also included catch trials, to which participants had to react, created by varying the identity (identity task), expression (expression task), or both (dual task) on the target face. We extracted single-trial repetition suppression (stRS) responses using a data-driven spatiotemporal approach with a robust hierarchical linear model to isolate adaptation effects on the trials of interest. Regardless of the task, fear was the only expression modulating the N170, eliciting the strongest stRS responses. This observation was corroborated by distinct behavioral performance during the catch trials for this facial expression. Altogether, our data reinforce the view that fear elicits distinct neural processes in the brain, enhancing attention and facilitating the early coding of faces.
Affiliation(s)
- Maria Teresa Turano
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Department of Neuroscience, Psychology, Drug Research & Child's Health, University of Florence, Florence, Italy
- Junpeng Lao
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Anne-Raphaëlle Richoz
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Peter de Lissa
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Sarah B A Degosciu
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Maria Pia Viggiano
- Department of Neuroscience, Psychology, Drug Research & Child's Health, University of Florence, Florence, Italy
- Roberto Caldara
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
19
Parada FJ, Rossi A. Commentary: Brain-to-Brain Synchrony Tracks Real-World Dynamic Group Interactions in the Classroom and Cognitive Neuroscience: Synchronizing Brains in the Classroom. Front Hum Neurosci 2017; 11:554. [PMID: 29209185 PMCID: PMC5702329 DOI: 10.3389/fnhum.2017.00554] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2017] [Accepted: 11/01/2017] [Indexed: 11/13/2022] Open
Affiliation(s)
- Francisco J Parada
- Laboratorio de Neurociencia Cognitiva y Social, Facultad de Psicología, Diego Portales University, Santiago, Chile
- Alejandra Rossi
- Laboratorio de Neurociencia Cognitiva y Social, Facultad de Psicología, Diego Portales University, Santiago, Chile
20
Bublatzky F, Alpers GW. Facing two faces: Defense activation varies as a function of personal relevance. Biol Psychol 2017; 125:64-69. [PMID: 28267568 DOI: 10.1016/j.biopsycho.2017.03.001] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2016] [Revised: 03/01/2017] [Accepted: 03/02/2017] [Indexed: 11/13/2022]
Abstract
It can be unsettling to be watched by a group of people, and when they express anger or hostility, this can prime defensive behavior. In contrast, when others smile at us, this may be comforting. This study tested to what degree the impact of facial expressions (happy, neutral, and angry) varies with the personal relevance of a social situation. Modelling a triadic situation, two faces looked either directly at the participant, faced each other, or were back to back. Results confirmed that this variation constitutes a gradient of personal relevance (directed frontally > towards > away), as reflected by corresponding defensive startle modulation and autonomic nervous system activity. This gradient was particularly pronounced for angry faces, and it was steeper in participants with higher levels of social anxiety. Thus, sender-recipient constellations modulate the processing of facial emotions in favor of adequate behavioral responding (e.g., avoidance) in group settings.
Affiliation(s)
- Florian Bublatzky
- Clinical Psychology, Biological Psychology and Psychotherapy, Department of Psychology, School of Social Sciences, University of Mannheim, Germany.
- Georg W Alpers
- Clinical Psychology, Biological Psychology and Psychotherapy, Department of Psychology, School of Social Sciences, University of Mannheim, Germany
21
Liu T, Pinheiro AP, Zhao Z, Nestor PG, McCarley RW, Niznikiewicz M. Simultaneous face and voice processing in schizophrenia. Behav Brain Res 2016; 305:76-86. [PMID: 26804362 DOI: 10.1016/j.bbr.2016.01.039] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2015] [Revised: 01/06/2016] [Accepted: 01/17/2016] [Indexed: 12/19/2022]
Abstract
While several studies have consistently demonstrated abnormalities in the unisensory processing of faces and voices in schizophrenia (SZ), the extent of abnormalities in the simultaneous processing of both types of information remains unclear. To address this issue, we used event-related potential (ERP) methodology to probe the multisensory integration of faces and non-semantic sounds in schizophrenia. EEG was recorded from 18 schizophrenia patients and 19 healthy control (HC) subjects in three conditions: neutral faces (visual condition, VIS); neutral non-semantic sounds (auditory condition, AUD); and neutral faces presented simultaneously with neutral non-semantic sounds (audiovisual condition, AUDVIS). Compared with HC, the schizophrenia group showed a less negative N170 to both face and face-voice stimuli; a later P270 peak latency in the multimodal face-voice condition relative to the unimodal face condition (the reverse was true in HC); and a reduced P400 amplitude and earlier P400 peak latency in the face but not the face-voice condition. Thus, the analysis of ERP components suggests that deficits in the encoding of facial information extend to multimodal face-voice stimuli and that delays exist in feature extraction from multimodal face-voice stimuli in schizophrenia. In contrast, categorization processes seem to benefit from the presentation of simultaneous face-voice information. Timepoint-by-timepoint tests of multimodal integration did not suggest impairment in the initial stages of processing in schizophrenia.
Affiliation(s)
- Taosheng Liu
- Department of Psychology, Second Military Medical University (SMMU), Shanghai, China; Department of Neurology, Changzheng Hospital, SMMU, Shanghai, China
- Ana P Pinheiro
- Clinical Neuroscience Division, Laboratory of Neuroscience, Department of Psychiatry, Boston VA Healthcare System, Brockton Division and Harvard Medical School Boston, MA, United States; Neuropsychophysiology Laboratory, CIPsi, School of Psychology, University of Minho, Braga, Portugal
- Zhongxin Zhao
- Department of Neurology, Changzheng Hospital, SMMU, Shanghai, China
- Paul G Nestor
- Clinical Neuroscience Division, Laboratory of Neuroscience, Department of Psychiatry, Boston VA Healthcare System, Brockton Division and Harvard Medical School Boston, MA, United States; University of Massachusetts, Boston, MA, United States
- Robert W McCarley
- Clinical Neuroscience Division, Laboratory of Neuroscience, Department of Psychiatry, Boston VA Healthcare System, Brockton Division and Harvard Medical School Boston, MA, United States
- Margaret Niznikiewicz
- Clinical Neuroscience Division, Laboratory of Neuroscience, Department of Psychiatry, Boston VA Healthcare System, Brockton Division and Harvard Medical School Boston, MA, United States.
22
Toepel U, Bielser ML, Forde C, Martin N, Voirin A, le Coutre J, Murray MM, Hudry J. Brain dynamics of meal size selection in humans. Neuroimage 2015; 113:133-42. [PMID: 25812716 DOI: 10.1016/j.neuroimage.2015.03.041] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2014] [Revised: 02/18/2015] [Accepted: 03/16/2015] [Indexed: 01/09/2023] Open
Abstract
Although neuroimaging research has documented specific responses to visual food stimuli based on their nutritional quality (e.g., energy density, fat content), the brain processes underlying portion size selection remain largely unexplored. We identified spatio-temporal brain dynamics in response to meal images varying in portion size during a task of ideal portion selection for prospective lunch intake and expected satiety. Brain responses to meal portions judged by the participants as 'too small', 'ideal', and 'too big' were measured by means of electroencephalographic (EEG) recordings in 21 normal-weight women. During an early stage of meal viewing (105-145 ms), the data showed an incremental increase in head-surface global electric field strength (quantified via global field power; GFP) as portion judgments ranged from 'too small' to 'too big'. Estimations of neural source activity revealed that the brain regions underlying this effect were located in the insula, middle frontal gyrus, and middle temporal gyrus, and are similar to those reported in previous studies investigating responses to changes in food nutritional content. In contrast, during a later stage (230-270 ms), GFP was maximal for the 'ideal' relative to the 'non-ideal' portion sizes. Greater neural source activity to 'ideal' vs. 'non-ideal' portion sizes was observed in the inferior parietal lobule, superior temporal gyrus, and mid-posterior cingulate gyrus. Collectively, our results provide evidence that several brain regions involved in attention and adaptive behavior track 'ideal' meal portion sizes as early as 230 ms into the visual encounter. That is, responses do not show an increase paralleling the amount of food viewed (and, by extension, the amount of reward), but are shaped by regulatory mechanisms.
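Global field power, the dependent measure in this study, is conventionally computed as the standard deviation of the potential across all electrodes at each time point. A minimal sketch with simulated data (array sizes are arbitrary; it assumes an average-referenced channels × samples EEG array):

```python
import numpy as np

def global_field_power(eeg):
    """GFP: spatial standard deviation across electrodes at each sample.

    eeg: array of shape (n_channels, n_samples), average-referenced.
    Returns an array of shape (n_samples,).
    """
    return eeg.std(axis=0)

rng = np.random.default_rng(1)
eeg = rng.normal(size=(64, 500))   # 64 channels, 500 samples (simulated)
eeg -= eeg.mean(axis=0)            # re-reference to the common average
gfp = global_field_power(eeg)
print(gfp.shape)  # (500,)
```

Because GFP is reference-independent after average-referencing, it gives a single time course of overall field strength, which is why studies like this one can compare it across conditions without choosing specific electrodes.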
Affiliation(s)
- Ulrike Toepel
- The Laboratory for Investigative Neurophysiology (The LINE), The Department of Radiology and Department of Clinical Neurosciences, Vaudois University Hospital Center, University of Lausanne, 1011 Lausanne, Switzerland
- Marie-Laure Bielser
- The Laboratory for Investigative Neurophysiology (The LINE), The Department of Radiology and Department of Clinical Neurosciences, Vaudois University Hospital Center, University of Lausanne, 1011 Lausanne, Switzerland
- Ciaran Forde
- Nestlé Research Center, Vers-chez-les-Blanc, 1000 Lausanne 26, Switzerland
- Nathalie Martin
- Nestlé Research Center, Vers-chez-les-Blanc, 1000 Lausanne 26, Switzerland
- Alexandre Voirin
- Nestlé Research Center, Vers-chez-les-Blanc, 1000 Lausanne 26, Switzerland
- Johannes le Coutre
- Nestlé Research Center, Vers-chez-les-Blanc, 1000 Lausanne 26, Switzerland; The University of Tokyo, Organization for Interdisciplinary Research Projects, 1-1-1 Yayoi, Bunkyo-ku, Tokyo 113-8657, Japan
- Micah M Murray
- The Laboratory for Investigative Neurophysiology (The LINE), The Department of Radiology and Department of Clinical Neurosciences, Vaudois University Hospital Center, University of Lausanne, 1011 Lausanne, Switzerland; Electroencephalography Brain Mapping Core, Center for Biomedical Imaging (CIBM) of Lausanne and Geneva, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN 37232, USA
- Julie Hudry
- Nestlé Research Center, Vers-chez-les-Blanc, 1000 Lausanne 26, Switzerland.