1. Pasqualette L, Kulke L. Effects of emotional content on social inhibition of gaze in live social and non-social situations. Sci Rep 2023;13:14151. PMID: 37644088; PMCID: PMC10465544; DOI: 10.1038/s41598-023-41154-w
Abstract
In real-life interactions, it is crucial that humans respond adequately to others' emotional expressions. So far, emotion perception has mainly been studied in highly controlled laboratory tasks. However, recent research suggests that attention and gaze behaviour differ significantly between watching a person on a laboratory screen and encountering them in real-world interactions. The current study therefore investigated effects of emotional expression on participants' gaze in social and non-social situations. We compared looking behaviour towards a confederate showing positive, neutral or negative facial expressions between live social and non-social waiting-room situations. Participants looked at the confederate more often and for longer on the screen than when the confederate was physically present in the room. Neither the expressions displayed by the confederate nor participants' individual traits (social anxiety and autistic traits) reliably related to gaze behaviour. Indications of covert attention also occurred more often and for longer during the non-social than during the social condition. The findings indicate that social norms are a strong factor modulating gaze behaviour in social contexts. PROTOCOL REGISTRATION: The stage 1 protocol for this Registered Report was accepted in principle on September 13, 2021. The protocol, as accepted by the journal, can be found at: https://doi.org/10.6084/m9.figshare.16628290 .
Affiliation(s)
- Laura Pasqualette: Department of Neurocognitive Developmental Psychology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany; Developmental Psychology with Educational Psychology, University of Bremen, Bremen, Germany
- Louisa Kulke: Department of Neurocognitive Developmental Psychology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany; Developmental Psychology with Educational Psychology, University of Bremen, Bremen, Germany
2. Stimulus-response congruency effects depend on quality of perceptual evidence: A diffusion model account. Atten Percept Psychophys 2023;85:1335-1354. PMID: 36725783; DOI: 10.3758/s13414-022-02642-9
Abstract
Individuals often need to make quick decisions based on incomplete or "noisy" information, which requires the coordination of attentional, perceptual, cognitive, and behavioral mechanisms. This poses a challenge for isolating the unique contribution of each subprocess from behavioral data, which reflect all subprocesses combined. Sequential sampling models offer a more detailed examination of behavioral data, enabling decisional and non-decisional processes at play in a task to be separated. Participants were required to identify briefly presented shapes while perceptual (duration, size, location) and response features (location-congruent/-incongruent/-neutral) of the task were manipulated. The diffusion model (Ratcliff, 1978) was used to dissociate decisional and executive processes in the task. In Experiment 1, stimuli were presented for either 20 or 80 ms to the left or right of a central fixation while response keys were positioned horizontally. In Experiment 2, stimulus size was manipulated rather than duration. In Experiment 3, response keys were positioned vertically. Results showed a duration × response-mapping interaction: participants displayed stimulus-response (S-R) congruency biases only on short-duration trials. This effect was observed for both horizontal and vertical response-key mappings. Stimulus size affected response speed but did not elicit S-R congruency biases. The present findings show that when the perceptual quality of evidence is poor, individuals rely more heavily on spatial-motor mechanisms when making speeded choice decisions. Furthermore, positioning response keys vertically is insufficient to eliminate S-R congruency effects. Diffusion model parameters are presented and implications of the model are discussed.
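For readers unfamiliar with the model class this abstract invokes, here is a minimal sketch (not the authors' code) of a two-boundary diffusion process: noisy evidence accumulates from a neutral starting point until it hits an upper or lower boundary, jointly producing a choice and a response time. The parameter names (`drift`, `boundary`, `ndt`) are illustrative, corresponding to drift rate, boundary separation, and non-decision time.

```python
import numpy as np

def simulate_ddm(drift, boundary, ndt, n_trials=500, dt=0.001, noise=1.0, seed=0):
    """Simulate a symmetric two-boundary diffusion model.
    Returns (choices, rts): choice 1 = upper boundary, 0 = lower boundary;
    rt = decision time plus non-decision time `ndt`, in seconds."""
    rng = np.random.default_rng(seed)
    choices = np.empty(n_trials, dtype=int)
    rts = np.empty(n_trials)
    for i in range(n_trials):
        x, t = 0.0, 0.0
        # Accumulate noisy evidence until either boundary is crossed.
        while abs(x) < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        choices[i] = int(x > 0)
        rts[i] = t + ndt

    return choices, rts

# With a clearly positive drift, most trials terminate at the upper boundary.
choices, rts = simulate_ddm(drift=1.5, boundary=1.0, ndt=0.3)
```

Fitting such a model to observed choice/RT pairs (rather than simulating it) is what lets the cited study separate decisional parameters from non-decision time.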
3. Menéndez Granda M, Iannotti GR, Darqué A, Ptak R. Does mental rotation emulate motor processes? An electrophysiological study of objects and body parts. Front Hum Neurosci 2022;16:983137. PMID: 36304589; PMCID: PMC9592819; DOI: 10.3389/fnhum.2022.983137
Abstract
Several arguments suggest that motor planning may share embodied neural mechanisms with mental rotation (MR). However, it is not well established whether this overlap occurs regardless of the type of stimulus that is manipulated, in particular manipulable or non-manipulable objects and body parts. Here we used high-density electroencephalography (EEG) to examine the cognitive similarity between MR of objects that do not afford specific hand actions (chairs) and bodily stimuli (hands). Participants had identical response options for both types of stimuli and gave responses orally in order to prevent possible interference with motor imagery. MR of hands and chairs generated very similar behavioral responses, time courses, and neural sources of event-related potentials (ERPs). ERP segmentation analysis revealed distinct time windows during which differential effects of stimulus type and angular disparity were observed. An early period (90-160 ms) differentiated only between stimulus types and was associated with occipito-temporal activity. A later period (290-330 ms) revealed strong effects of angular disparity, associated with electrical sources in the right angular gyrus and primary motor/somatosensory cortex. These data suggest that spatial transformation processes and motor planning are recruited simultaneously, supporting the involvement of motor emulation processes in MR.
Affiliation(s)
- Marta Menéndez Granda: Laboratory of Cognitive Neurorehabilitation, Department of Clinical Neurosciences, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Giannina Rita Iannotti: Laboratory of Cognitive Neurorehabilitation, Department of Clinical Neurosciences, Faculty of Medicine, University of Geneva, Geneva, Switzerland; Swiss Foundation for Innovation and Training in Surgery, University Hospitals of Geneva, Geneva, Switzerland
- Alexandra Darqué: Laboratory of Cognitive Neurorehabilitation, Department of Clinical Neurosciences, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Radek Ptak: Laboratory of Cognitive Neurorehabilitation, Department of Clinical Neurosciences, Faculty of Medicine, University of Geneva, Geneva, Switzerland; Division of Neurorehabilitation, University Hospitals of Geneva, Geneva, Switzerland
4. Can faces affect object-based attention? Evidence from online experiments. Atten Percept Psychophys 2022;84:1220-1233. PMID: 35396617; PMCID: PMC8992784; DOI: 10.3758/s13414-022-02473-8
Abstract
This study tested how human faces affect object-based attention (OBA) in two online experiments using a modified double-rectangle paradigm. The results of Experiment 1 revealed that faces did not elicit the OBA effect the way non-face objects did, owing to a longer response time (RT) when attention was focused on faces rather than on non-face objects. In addition, by observing faster RTs when attention was engaged horizontally rather than vertically, we found a significant horizontal attention bias, which might override the OBA effect if vertical rectangles were the only items presented. These results were replicated in Experiment 2 (using only vertical rectangles) after directly measuring the horizontal bias and excluding its influence on the OBA effect. This study suggests that faces cannot elicit the same-object advantage in the double-rectangle paradigm and provides a method to measure the OBA effect free from horizontal bias.
5. Clarifying the effect of facial emotional expression on inattentional blindness. Conscious Cogn 2022;100:103304. DOI: 10.1016/j.concog.2022.103304
6. Word and Face Recognition Processing Based on Response Times and Ex-Gaussian Components. Entropy 2021;23:e23050580. PMID: 34066797; PMCID: PMC8151452; DOI: 10.3390/e23050580
Abstract
The face is a fundamental feature of our identity. In humans, the existence of specialized processing modules for faces is now widely accepted. However, identifying the processes involved for proper names is more problematic. The aim of the present study was to examine which of the two kinds of processing occurs earlier and whether social abilities have an influence. We selected 100 university students divided into two groups, Spanish and US students, who had to recognize famous faces or names in a masked priming task. An analysis of variance on the reaction times (RTs) was used to determine whether significant differences could be observed between word and face recognition and between the Spanish and US groups. Additionally, to account for the role of outliers, RT distributions were modelled with an exponentially modified Gaussian (ex-Gaussian). Famous faces were recognized faster than names, and differences were observed between Spanish and North American participants, but not for unknown distractor faces. The current results suggest that face processing might be faster than name recognition, which supports the idea of a difference in processing nature.
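The ex-Gaussian decomposition named in this entry's title models an RT distribution as a Gaussian component (mu, sigma) plus an exponential tail (tau). As a hedged sketch (not the authors' analysis code), the three parameters can be recovered from the first three sample moments: tau = sd · (skewness/2)^(1/3), sigma² = var − tau², mu = mean − tau.

```python
import numpy as np

def exgauss_moments(rts):
    """Moment-based estimates of ex-Gaussian parameters (mu, sigma, tau)
    from a sample of response times. tau captures the exponential tail that
    skews RT distributions; mu and sigma describe the Gaussian component."""
    rts = np.asarray(rts, dtype=float)
    m, sd = rts.mean(), rts.std()
    skew = np.mean(((rts - m) / sd) ** 3)
    # Negative sample skew has no ex-Gaussian solution; clamp tau at zero.
    tau = sd * (max(skew, 0.0) / 2.0) ** (1.0 / 3.0)
    sigma = np.sqrt(max(sd**2 - tau**2, 0.0))
    mu = m - tau
    return mu, sigma, tau

# Synthetic RTs: Gaussian part (mu=0.45 s, sigma=0.05 s) plus an
# exponential tail (tau=0.15 s), mimicking a typical skewed RT distribution.
rng = np.random.default_rng(42)
rts = rng.normal(0.45, 0.05, 20000) + rng.exponential(0.15, 20000)
mu, sigma, tau = exgauss_moments(rts)
```

Maximum-likelihood fitting is more common in practice, but the moment method makes the roles of mu, sigma, and tau transparent.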
7. Sollfrank T, Kohnen O, Hilfiker P, Kegel LC, Jokeit H, Brugger P, Loertscher ML, Rey A, Mersch D, Sternagel J, Weber M, Grunwald T. The Effects of Dynamic and Static Emotional Facial Expressions of Humans and Their Avatars on the EEG: An ERP and ERD/ERS Study. Front Neurosci 2021;15:651044. PMID: 33967681; PMCID: PMC8100234; DOI: 10.3389/fnins.2021.651044
Abstract
This study examined whether the cortical processing of emotional faces is modulated by the computerization of face stimuli ("avatars") in a group of 25 healthy participants. Subjects passively viewed 128 static and dynamic facial expressions of female and male actors and their respective avatars in neutral or fearful conditions. Event-related potentials (ERPs), as well as alpha and theta event-related synchronization and desynchronization (ERD/ERS), were derived from the EEG recorded during the task. All ERP features, except for the very early N100, differed in their response to avatar and actor faces. Whereas the N170 showed differences only for the neutral avatar condition, later potentials (N300 and LPP) differed across both emotional conditions (neutral and fear) and the presented agents (actor and avatar). In addition, we found that avatar faces elicited significantly stronger theta and alpha oscillatory responses than actor faces. Theta frequencies in particular responded specifically to visual emotional stimulation and were sensitive to the emotional content of the face, whereas alpha frequency was modulated by all stimulus types. We conclude that computerized avatar faces affect both ERP components and ERD/ERS, evoking neural effects that differ from those elicited by real faces, even though the avatars were replicas of the human faces and contained similar characteristics in their expression.
Affiliation(s)
- Lorena C. Kegel: Swiss Epilepsy Center, Zurich, Switzerland; Department of Psychology, University of Zurich, Zurich, Switzerland
- Hennric Jokeit: Swiss Epilepsy Center, Zurich, Switzerland; Department of Psychology, University of Zurich, Zurich, Switzerland
- Peter Brugger: Valens Rehabilitation Centre, Valens, Switzerland; Psychiatric University Hospital Zurich, Zurich, Switzerland
- Miriam L. Loertscher: Institute for the Performing Arts and Film, Zurich University of the Arts, Zurich, Switzerland
- Anton Rey: Institute for the Performing Arts and Film, Zurich University of the Arts, Zurich, Switzerland
- Dieter Mersch: Institute for Critical Theory, Zurich University of the Arts, Zurich, Switzerland
- Joerg Sternagel: Institute for Critical Theory, Zurich University of the Arts, Zurich, Switzerland
- Michel Weber: Institute for the Performing Arts and Film, Zurich University of the Arts, Zurich, Switzerland
8. Tomkins B. Right visual field advantage for lexical decision dependent on stimulus size and visibility: Evidence for an early processing account of hemispheric asymmetry. Laterality 2020;26:539-563. PMID: 33297840; DOI: 10.1080/1357650x.2020.1856126
Abstract
Previous research suggests that the right visual field advantage on the lexical decision task occurs independent of the visual quality of stimuli [Chiarello, C., Senehi, J., & Soulier, M. (1986). Viewing conditions and hemisphere asymmetry for the lexical decision. Neuropsychologia, 24(4), 521-529]. However, previous studies examining these effects have had methodological limitations that were addressed and controlled for in the present study. Participants performed a divided visual field, lexical decision task for words that varied in size (Experiment 1) and visibility (Experiment 2). Results showed a quality by visual field interaction effect. In both experiments, response times were faster for targets presented to the right visual field in the high quality (i.e., large font, high visibility) conditions; however, visual quality resulted in no differences for targets presented to the left visual field. Furthermore, this quality by visual field interaction effect was only observed when the target was a word. These results suggest that the left hemisphere advantage for lexical decision depends on the perceptual quality of targets, consistent with an early stage of processing account of hemispheric asymmetry during lexical decision. Findings are discussed within the context of word recognition and decision-based models.
Affiliation(s)
- Blaine Tomkins: Department of Psychology, DePaul University, Chicago, IL, USA
9. Barzegaran E, Norcia AM. Neural sources of letter and Vernier acuity. Sci Rep 2020;10:15449. PMID: 32963270; PMCID: PMC7509830; DOI: 10.1038/s41598-020-72370-3
Abstract
Visual acuity can be measured in many different ways, including with letters and Vernier offsets. Prior psychophysical work has suggested that the two acuities are strongly linked, given that both depend strongly on retinal eccentricity and both are similarly affected in amblyopia. Here we used high-density EEG recordings to ask whether the underlying neural sources are common, as suggested by the psychophysics, or distinct. To measure visual acuity for letters, we recorded evoked potentials to 3 Hz alternations between intact and scrambled text composed of letters of varying size. To measure visual acuity for Vernier offsets, we recorded evoked potentials to 3 Hz alternations between bar gratings with and without a set of Vernier offsets. Both alternation types elicited robust activity at the 3 Hz stimulus frequency that scaled in amplitude with both letter and offset size, starting near threshold. Letter and Vernier offset responses differed in both their scalp topography and temporal dynamics. The earliest evoked responses to letters occurred over lateral occipital visual areas, predominantly in the left hemisphere. Later responses were measured at electrodes over early visual cortex, suggesting that letter structure is first extracted in second-tier extra-striate areas and that responses over early visual areas are due to feedback. Responses to Vernier offsets, by contrast, occurred first at medial occipital electrodes, with responses at later time points being more broadly distributed, consistent with feedforward pathway mediation. The previously observed commonalities between letter and Vernier acuity may thus reflect common bottlenecks in early visual cortex rather than a common network of visual areas subserving the two tasks.
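The frequency-tagging approach used here ultimately reduces to reading out the amplitude of the EEG spectrum at the 3 Hz stimulation frequency. A minimal sketch on simulated data (illustrative values only, not the authors' pipeline): an epoch length that is an integer number of stimulus cycles puts the tag frequency exactly on a DFT bin.

```python
import numpy as np

fs = 500.0                    # sampling rate in Hz (illustrative)
f_stim = 3.0                  # stimulus alternation frequency in Hz
t = np.arange(0, 10, 1 / fs)  # 10 s epoch -> 0.1 Hz frequency resolution

rng = np.random.default_rng(7)
# Simulated channel: a 3 Hz evoked response buried in broadband noise.
eeg = 2.0 * np.sin(2 * np.pi * f_stim * t) + rng.normal(0.0, 1.0, t.size)

# Single-sided amplitude spectrum; with a 10 s window, 3 Hz falls exactly
# on a DFT bin, so no windowing or leakage correction is needed.
spectrum = np.abs(np.fft.rfft(eeg)) * 2 / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
amp_3hz = spectrum[np.argmin(np.abs(freqs - f_stim))]

# Signal-to-noise ratio relative to the background spectrum.
snr = amp_3hz / np.median(spectrum)
```

The recovered `amp_3hz` approximates the injected 2.0 amplitude, and `snr` shows how strongly the tagged response stands out from the noise floor, which is the basis for the near-threshold scaling the abstract describes.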
Affiliation(s)
- Elham Barzegaran: Wu Tsai Neurosciences Institute, 290 Jane Stanford Way, Stanford, CA 94305, USA
- Anthony M Norcia: Wu Tsai Neurosciences Institute, 290 Jane Stanford Way, Stanford, CA 94305, USA
10. Gheorghiu E, Dering BR. Shape facilitates number: brain potentials and microstates reveal the interplay between shape and numerosity in human vision. Sci Rep 2020;10:12413. PMID: 32709892; PMCID: PMC7381628; DOI: 10.1038/s41598-020-68788-4
Abstract
Recognition of simple shapes and numerosity estimation for small quantities are often studied independently of each other, but we know that these processes are both rapid and accurate, suggesting that they may be mediated by common neural mechanisms. Here we address this issue by examining how spatial configuration, shape complexity, and luminance polarity of elements affect numerosity estimation. We directly compared the Event Related Potential (ERP) time-course for numerosity estimation under shape and random configurations and found a larger N2 component for shape over lateral-occipital electrodes (250–400 ms), which also increased with higher numbers. We identified a Left Mid Frontal (LMF; 400–650 ms) component over left-lateralised medial frontal sites that specifically separated low and high numbers of elements, irrespective of their spatial configuration. Different luminance-polarities increased N2 amplitude only, suggesting that shape but not numerosity is selective to polarity. Functional microstates confined numerosity to a strict topographic distribution occurring within the LMF time-window, while a microstate responding only to shape-configuration was evidenced earlier, in the N2 time-window. We conclude that shape-coding precedes numerosity estimation, which can be improved when the number of elements and shape vertices are matched. Thus, numerosity estimation around the subitizing range is facilitated by a shape-template matching process.
Affiliation(s)
- Elena Gheorghiu: Department of Psychology, University of Stirling, Stirling FK9 4LA, Scotland, UK
- Benjamin R Dering: Department of Psychology, University of Stirling, Stirling FK9 4LA, Scotland, UK
11. Usée F, Jacobs AM, Lüdtke J. From Abstract Symbols to Emotional (In-)Sights: An Eye Tracking Study on the Effects of Emotional Vignettes and Pictures. Front Psychol 2020;11:905. PMID: 32528357; PMCID: PMC7264705; DOI: 10.3389/fpsyg.2020.00905
Abstract
Reading is known to be a highly complex, emotion-inducing process, usually involving connected and cohesive sequences of sentences and paragraphs. However, most empirical results, especially from studies using eye tracking, are either restricted to simple linguistic materials (e.g., isolated words, single sentences) or disregard valence-driven effects. The present study addressed the need for ecologically valid stimuli by examining the emotion potential of, and reading behavior in, emotional vignettes, often used in applied psychological contexts and discourse comprehension. To allow for a cross-domain comparison of emotion induction, negatively and positively valenced vignettes were constructed based on pre-selected emotional pictures from the Nencki Affective Picture System (NAPS; Marchewka et al., 2014). We collected ratings of perceived valence and arousal for both material groups and recorded eye movements of 42 participants during reading and picture viewing. Linear mixed-effects models were used to analyze effects of valence (i.e., valence category, valence rating) and stimulus domain (i.e., textual, pictorial) on ratings of perceived valence and arousal, eye movements in reading, and eye movements in picture viewing. Results supported the success of our experimental manipulation: emotionally positive stimuli (i.e., vignettes, pictures) were perceived as more positive and less arousing than emotionally negative ones. The cross-domain comparison indicated that vignettes induce stronger valence effects than their pictorial counterparts; no differences between vignettes and pictures were found for effects on perceived arousal. Analyses of eye movements in reading replicated results from experiments using isolated words and sentences: perceived positive text valence attracted shorter reading times than perceived negative valence at both the supralexical and lexical level. In line with previous findings, no emotion effects on eye movements in picture viewing were found. This is the first eye tracking study reporting superior valence effects for vignettes compared to pictures and valence-specific effects on eye movements in reading at the supralexical level.
Affiliation(s)
- Franziska Usée: Department of Experimental and Neurocognitive Psychology, Freie Universität Berlin, Berlin, Germany
- Arthur M Jacobs: Department of Experimental and Neurocognitive Psychology, Freie Universität Berlin, Berlin, Germany; Center for Cognitive Neuroscience Berlin, Freie Universität Berlin, Berlin, Germany
- Jana Lüdtke: Department of Experimental and Neurocognitive Psychology, Freie Universität Berlin, Berlin, Germany
12. Morrisey MN, Hofrichter R, Rutherford MD. Human faces capture attention and attract first saccades without longer fixation. Visual Cognition 2019. DOI: 10.1080/13506285.2019.1631925
Affiliation(s)
- Marcus N. Morrisey: Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Canada
- Ruth Hofrichter: Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Canada
- M. D. Rutherford: Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Canada
13. Márquez C, Nicolini H, Crowley MJ, Solís-Vivanco R. Early processing (N170) of infant faces in mothers of children with autism spectrum disorder and its association with maternal sensitivity. Autism Res 2019;12:744-758. PMID: 30973210; DOI: 10.1002/aur.2102
Abstract
Individuals with autism spectrum disorder (ASD) exhibit impaired processing of adult faces, as shown by the N170 event-related potential. However, few studies have explored such processing in mothers of children with ASD, and none has assessed the early processing of infant faces in these women. Moreover, whether processing of infant facial expressions in mothers of children with ASD is related to their response to their child's needs (maternal sensitivity [MS]) remains unknown. This study explored the N170 related to infant faces in a group of mothers of children with ASD (MA) and a reference group of mothers of children without ASD. For both emotional (crying, smiling) and neutral expressions, the MA group exhibited larger N170 amplitudes in the right hemisphere, while the reference group showed similar interhemispheric amplitudes. This lateralization effect within the MA group was not present for nonfaces and was stronger in mothers with higher MS. We propose that mothers of children with ASD use specialized perceptual resources to process infant faces, and that this specialization is mediated by MS. Our findings suggest that having a child with ASD modulates mothers' early neurophysiological responsiveness to infant cues. Whether this modulation represents a biological marker or a response shaped by experience remains to be explored. LAY SUMMARY: When mothers of children with autism spectrum disorder (ASD) see baby faces expressing emotions, they show a right-sided electrical response in the brain. This lateralization was stronger in mothers who were more sensitive to their children's needs. We conclude that having a child with ASD and being more attuned to their behavior generates a specialized pattern of brain activity when processing infant faces. Whether this pattern is biological or shaped by experience remains to be explored.
Affiliation(s)
- Carla Márquez: School of Psychology, Universidad Nacional Autónoma de México, Mexico City, Mexico; Neuropsychology Department, Instituto Nacional de Neurología y Neurocirugía Manuel Velasco Suárez, Mexico City, Mexico; Neuropsychiatric and Neurodegenerative Diseases Laboratory, Instituto Nacional de Medicina Genómica, Mexico City, Mexico
- Humberto Nicolini: Neuropsychiatric and Neurodegenerative Diseases Laboratory, Instituto Nacional de Medicina Genómica, Mexico City, Mexico
- Rodolfo Solís-Vivanco: School of Psychology, Universidad Nacional Autónoma de México, Mexico City, Mexico; Neuropsychology Department, Instituto Nacional de Neurología y Neurocirugía Manuel Velasco Suárez, Mexico City, Mexico
14. Barik K, Daimi SN, Jones R, Bhattacharya J, Saha G. A machine learning approach to predict perceptual decisions: an insight into face pareidolia. Brain Inform 2019;6:2. PMID: 30721365; PMCID: PMC6363645; DOI: 10.1186/s40708-019-0094-5
Abstract
The perception of an external stimulus not only depends upon the characteristics of the stimulus but is also influenced by the ongoing brain activity prior to its presentation. In this work, we directly tested whether spontaneous electrical brain activities in prestimulus period could predict perceptual outcome in face pareidolia (visualizing face in noise images) on a trial-by-trial basis. Participants were presented with only noise images but with the prior information that some faces would be hidden in these images, while their electrical brain activities were recorded; participants reported their perceptual decision, face or no-face, on each trial. Using differential hemispheric asymmetry features based on large-scale neural oscillations in a machine learning classifier, we demonstrated that prestimulus brain activities could achieve a classification accuracy, discriminating face from no-face perception, of 75% across trials. The time–frequency features representing hemispheric asymmetry yielded the best classification performance, and prestimulus alpha oscillations were found to be mostly involved in predicting perceptual decision. These findings suggest a mechanism of how prior expectations in the prestimulus period may affect post-stimulus decision making.
Affiliation(s)
- Kasturi Barik: Department of Electronics and Electrical Communication Engineering, Indian Institute of Technology, Kharagpur, India
- Syed Naser Daimi: Department of Electronics and Electrical Communication Engineering, Indian Institute of Technology, Kharagpur, India
- Rhiannon Jones: Department of Psychology, University of Winchester, Winchester, UK
- Goutam Saha: Department of Electronics and Electrical Communication Engineering, Indian Institute of Technology, Kharagpur, India
15.
Abstract
In this study, we explore the automaticity of encoding for different facial characteristics and ask whether it is influenced by face familiarity. We used a matching task in which participants had to report whether the gender, identity, race, or expression of two briefly presented faces was the same or different. The task was made challenging by allowing nonrelevant dimensions to vary across trials. To test for automaticity, we compared performance on trials in which the task instruction was given at the beginning of the trial, with trials in which the task instruction was given at the end of the trial. As a strong criterion for automatic processing, we reasoned that if perception of a given characteristic (gender, race, identity, or emotion) is fully automatic, the timing of the instruction should not influence performance. We compared automaticity for the perception of familiar and unfamiliar faces. Performance with unfamiliar faces was higher for all tasks when the instruction was given at the beginning of the trial. However, we found a significant interaction between instruction and task with familiar faces. Accuracy of gender and identity judgments to familiar faces was the same regardless of whether the instruction was given before or after the trial, suggesting automatic processing of these properties. In contrast, there was an effect of instruction for judgments of expression and race to familiar faces. These results show that familiarity enhances the automatic processing of some types of facial information more than others.
16
Early and late cortical responses to directly gazing faces are task dependent. Cogn Affect Behav Neurosci 2018; 18:796-809. [PMID: 29736681 DOI: 10.3758/s13415-018-0605-5] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Gender categorisation of human faces is facilitated when gaze is directed toward the observer (i.e., a direct gaze), compared with situations where gaze is averted or the eyes are closed (Macrae, Hood, Milne, Rowe, & Mason, Psychological Science, 13(5), 460-464, 2002). However, the temporal dynamics underlying this phenomenon remain to some extent unknown. Here, we used electroencephalography (EEG) to assess the neural correlates of this effect, focusing on the event-related potential (ERP) components known to be sensitive to gaze perception (i.e., P1, N170, and P3b). We first replicated the seminal findings of Macrae et al. (2002, Experiment 1) regarding facilitated gender discrimination, and subsequently measured the underlying neural responses. Our data revealed an early preferential processing of direct gaze as compared with averted gaze and closed eyes at the P1, which reverberated at the P3b (Experiment 2). Critically, using the same material, we failed to reproduce these effects when gender categorisation was not required (Experiment 3). Taken together, our data confirm that direct gaze enhances the early P1, as well as later cortical responses to face processing, although the effect appears to be task dependent.
17
Jorge L, Canário N, Castelhano J, Castelo-Branco M. Processing of performance-matched visual object categories: faces and places are related to lower processing load in the frontoparietal executive network than other objects. Eur J Neurosci 2018; 47:938-946. [DOI: 10.1111/ejn.13892] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2017] [Revised: 02/22/2018] [Accepted: 02/23/2018] [Indexed: 11/27/2022]
Affiliation(s)
- Lília Jorge
- CIBIT, CNC.IBILI - Center for Biomedical Imaging and Translational Research; Faculty of Medicine; University of Coimbra; Coimbra Portugal
- ICNAS - Institute for Nuclear Sciences Applied to Health; Brain Imaging Network of Portugal; Coimbra Portugal
- Nádia Canário
- CIBIT, CNC.IBILI - Center for Biomedical Imaging and Translational Research; Faculty of Medicine; University of Coimbra; Coimbra Portugal
- ICNAS - Institute for Nuclear Sciences Applied to Health; Brain Imaging Network of Portugal; Coimbra Portugal
- João Castelhano
- CIBIT, CNC.IBILI - Center for Biomedical Imaging and Translational Research; Faculty of Medicine; University of Coimbra; Coimbra Portugal
- ICNAS - Institute for Nuclear Sciences Applied to Health; Brain Imaging Network of Portugal; Coimbra Portugal
- Miguel Castelo-Branco
- CIBIT, CNC.IBILI - Center for Biomedical Imaging and Translational Research; Faculty of Medicine; University of Coimbra; Coimbra Portugal
- ICNAS - Institute for Nuclear Sciences Applied to Health; Brain Imaging Network of Portugal; Coimbra Portugal
- Laboratório de Neurociências da Visão - IBILI; FMUC; Azinhaga Santa Comba; Celas Coimbra 3000-548 Portugal
18
Wright D, Mitchell C, Dering BR, Gheorghiu E. Luminance-polarity distribution across the symmetry axis affects the electrophysiological response to symmetry. Neuroimage 2018; 173:484-497. [PMID: 29427849 PMCID: PMC5929902 DOI: 10.1016/j.neuroimage.2018.02.008] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2017] [Revised: 01/25/2018] [Accepted: 02/05/2018] [Indexed: 11/03/2022] Open
Abstract
Electrophysiological studies of symmetry have found a difference wave termed the Sustained Posterior Negativity (SPN) related to the presence of symmetry. Yet the extent to which the SPN is modulated by luminance-polarity and colour content is unknown. Here we examine how luminance-polarity distribution across the symmetry axis, grouping by luminance polarity, and the number of colours in the stimuli modulate the SPN. Stimuli were dot patterns arranged either symmetrically or quasi-randomly. There were several arrangements: 'segregated'-symmetric dots were of one polarity and randomly-positioned dots were of the other; 'unsegregated'-symmetric dots were of both polarities in equal proportions; 'anti-symmetric'-dots were of opposite polarity across the symmetry axis; 'polarity-grouped anti-symmetric'-the same as anti-symmetric but with half the pattern of one polarity and the other half of opposite polarity; and multi-colour symmetric patterns made of two, three, or four colours. We found that the SPN is: (i) reduced by the amount of position-symmetry, (ii) sensitive to luminance-polarity mismatch across the symmetry axis, and (iii) not modulated by the number of colours in the stimuli. Our results show that the sustained nature of the SPN coincides with the late onset of a topographic microstate sensitive to symmetry. These findings emphasise the importance of not only position symmetry, but also luminance-polarity matching across the symmetry axis.
Affiliation(s)
- Damien Wright
- University of Stirling, Department of Psychology, Stirling, FK9 4LA, Scotland, United Kingdom.
- Claire Mitchell
- University of Stirling, Department of Psychology, Stirling, FK9 4LA, Scotland, United Kingdom
- Benjamin R Dering
- University of Stirling, Department of Psychology, Stirling, FK9 4LA, Scotland, United Kingdom
- Elena Gheorghiu
- University of Stirling, Department of Psychology, Stirling, FK9 4LA, Scotland, United Kingdom
19
Król ME, Król M. “Economies of Experience”-Disambiguation of Degraded Stimuli Leads to a Decreased Dispersion of Eye-Movement Patterns. Cogn Sci 2017; 42 Suppl 3:728-756. [DOI: 10.1111/cogs.12566] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2017] [Revised: 08/03/2017] [Accepted: 10/09/2017] [Indexed: 10/18/2022]
Affiliation(s)
- Magdalena Ewa Król
- Faculty of Psychology II; SWPS University of Social Sciences and Humanities; Wrocław
- Michał Król
- Department of Economics; School of Social Sciences; University of Manchester
20
Flechsenhar AF, Gamer M. Top-down influence on gaze patterns in the presence of social features. PLoS One 2017; 12:e0183799. [PMID: 28837673 PMCID: PMC5570331 DOI: 10.1371/journal.pone.0183799] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2017] [Accepted: 08/13/2017] [Indexed: 11/29/2022] Open
Abstract
Visual saliency maps reflecting locations that stand out from the background in terms of their low-level physical features have proven to be very useful for empirical research on attentional exploration and reliably predict gaze behavior. In the present study we tested these predictions for socially relevant stimuli occurring in naturalistic scenes using eye tracking. We hypothesized that social features (i.e. human faces or bodies) would be processed preferentially over non-social features (i.e. objects, animals) regardless of their low-level saliency. To challenge this notion, we included three tasks that deliberately addressed non-social attributes. In agreement with our hypothesis, social information, especially heads, was preferentially attended compared to highly salient image regions across all tasks. Social information was never required to solve a task but was regarded nevertheless. More so, after completing the task requirements, viewing behavior reverted back to that of free-viewing with heavy prioritization of social features. Additionally, initial eye movements reflecting potentially automatic shifts of attention, were predominantly directed towards heads irrespective of top-down task demands. On these grounds, we suggest that social stimuli may provide exclusive access to the priority map, enabling social attention to override reflexive and controlled attentional processes. Furthermore, our results challenge the generalizability of saliency-based attention models.
Affiliation(s)
- Matthias Gamer
- Department of Psychology, Julius Maximilian University of Wuerzburg, Wuerzburg, Germany
21
The impact of facial abnormalities and their spatial position on perception of cuteness and attractiveness of infant faces. PLoS One 2017; 12:e0180499. [PMID: 28749958 PMCID: PMC5531456 DOI: 10.1371/journal.pone.0180499] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2016] [Accepted: 06/16/2017] [Indexed: 12/03/2022] Open
Abstract
Research has demonstrated that how “cute” an infant is perceived to be has consequences for caregiving. Infants with facial abnormalities receive lower ratings of cuteness, but relatively little is known about how different abnormalities and their location affect these aesthetic judgements. The objective of the current study was to compare the impact of different abnormalities on the perception of infant faces, while controlling for infant identity. In two experiments, adult participants gave ratings of cuteness and attractiveness in response to face images that had been edited to introduce common facial abnormalities. Stimulus faces displayed either a haemangioma (a small, benign birth mark), strabismus (an abnormal alignment of the eyes) or a cleft lip (an abnormal opening in the upper lip). In Experiment 1, haemangioma had less of a detrimental effect on ratings than the more severe abnormalities. In Experiment 2, we manipulated the position of a haemangioma on the face. We found small but robust effects of this position, with abnormalities in the top and on the left of the face receiving lower cuteness ratings. This is consistent with previous research showing that people attend more to the top of the face (particularly the eyes) and to the left hemifield.
22
Restricted attention to social cues in schizophrenia patients. Eur Arch Psychiatry Clin Neurosci 2016; 266:649-61. [PMID: 27305925 DOI: 10.1007/s00406-016-0705-6] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/19/2015] [Accepted: 06/02/2016] [Indexed: 10/21/2022]
Abstract
Deficits of psychosocial functioning are a robust finding in schizophrenia. Research on social cognition may open a new avenue for the development of effective interventions. As a correlate of social perceptive information processing deficits, schizophrenia patients (SZP) show deviant gaze behavior (GB) while viewing emotional faces. As understanding of a social environment requires gathering complex social information, our study aimed at investigating the gaze behavior of SZP related to social interactions and its impact on the level of social and role functioning. GB of 32 SZP and 37 healthy control individuals (HCI) was investigated with a high-resolution eye tracker during an unguided viewing of 12 complex pictures of social interaction scenes. Regarding whole pictures, SZP showed a shorter scanpath length, fewer fixations and a shorter mean distance between fixations. Furthermore, SZP exhibited fewer and shorter fixations on faces, but not on the socially informative bodies nor on the background, suggesting a cue-specific abnormality. Logistic regression with bootstrapping yielded a model including two GB parameters; a subsequent ROC curve analysis indicated an excellent ability of group discrimination (AUC .85). Face-related GB aberrations correlated with lower social and role functioning and with delusional thinking, but not with negative symptoms. Training of spontaneous integration of face-related social information seems promising to enable a holistic perception of social information, which may in turn improve social and role functioning. The observed ability to discriminate SZP from HCI warrants further research on the predictive validity of GB in psychosis risk prediction.
23
Dissociating Attention Effects from Categorical Perception with ERP Functional Microstates. PLoS One 2016; 11:e0163336. [PMID: 27657921 PMCID: PMC5033484 DOI: 10.1371/journal.pone.0163336] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2016] [Accepted: 09/07/2016] [Indexed: 11/19/2022] Open
Abstract
When faces appear in our visual environment we naturally attend to them, possibly to the detriment of other visual information. Evidence from behavioural studies suggests that faces capture attention because they are more salient than other types of visual stimuli, reflecting a category-dependent modulation of attention. By contrast, neuroimaging data has led to a domain-specific account of face perception that rules out the direct contribution of attention, suggesting a dedicated neural network for face perception. Here we sought to dissociate effects of attention from categorical perception using Event Related Potentials. Participants viewed physically matched face and butterfly images, with each category acting as a target stimulus during different blocks in an oddball paradigm. Using a data-driven approach based on functional microstates, we show that the locus of endogenous attention effects with ERPs occurs in the N1 time range. Earlier categorical effects were also found around the level of the P1, reflecting either an exogenous increase in attention towards face stimuli, or a putative face-selective measure. Both category and attention effects were dissociable from one another hinting at the role that faces may play in early capturing of attention before top-down control of attention is observed. Our data support the conclusion that certain object categories, in this experiment, faces, may capture attention before top-down voluntary control of attention is initiated.
24
Abstract
Sensitivity to temporal change places fundamental limits on object processing in the visual system. An emerging consensus from the behavioral and neuroimaging literature suggests that temporal resolution differs substantially for stimuli of different complexity and for brain areas at different levels of the cortical hierarchy. Here, we used steady-state visually evoked potentials to directly measure three fundamental parameters that characterize the underlying neural response to text and face images: temporal resolution, peak temporal frequency, and response latency. We presented full-screen images of text or a human face, alternated with a scrambled image, at temporal frequencies between 1 and 12 Hz. These images elicited a robust response at the first harmonic that showed differential tuning, scalp topography, and delay for the text and face images. Face-selective responses were maximal at 4 Hz, but text-selective responses, by contrast, were maximal at 1 Hz. The topography of the text image response was strongly left-lateralized at higher stimulation rates, whereas the response to the face image was slightly right-lateralized but nearly bilateral at all frequencies. Both text and face images elicited steady-state activity at more than one apparent latency; we observed early (141-160 msec) and late (>250 msec) text- and face-selective responses. These differences in temporal tuning profiles are likely to reflect differences in the nature of the computations performed by word- and face-selective cortex. Despite the close proximity of word- and face-selective regions on the cortical surface, our measurements demonstrate substantial differences in the temporal dynamics of word- versus face-selective responses.
25
Silvert L, Funes MJ. When do fearful faces override inhibition of return? Acta Psychol (Amst) 2016; 163:124-34. [PMID: 26642227 DOI: 10.1016/j.actpsy.2015.11.002] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2014] [Revised: 11/03/2015] [Accepted: 11/15/2015] [Indexed: 11/15/2022] Open
Abstract
Inhibition of return (IOR) occurs when more than about 300 ms elapses between the cue and the target in a typical peripheral cueing task: reaction times (RTs) become longer when the cue and target locations are the same versus different. IOR could serve the adaptive role of optimizing visual search by discouraging the re-inspection of previously attended locations. As such, IOR should not reduce our chances of noticing relevant events, emotional stimuli in particular. However, previous studies have led to inconsistent results. The present study offers a systematic investigation of the conditions under which target fearful faces can modulate either the magnitude or the time course of the IOR effect. Notably, we manipulated the depth of facial processing required to perform the task and/or the task relevance of the facial expressions. When participants localized target faces (Experiment 1) or discriminated them from non-face stimuli (Experiment 2), their emotional expression had no impact on IOR whatsoever. However, IOR occurred later for fearful versus neutral faces when the participants performed emotion (Experiment 3) or gender (Experiment 4) discrimination tasks. These findings are discussed with regard to the mechanisms responsible for IOR and to the processing of emotional facial expressions.
Affiliation(s)
- Laetitia Silvert
- Clermont Université, Université Blaise Pascal, Laboratoire de Psychologie Sociale et Cognitive, BP 10448, F-63000 Clermont-Ferrand, France; CNRS, UMR 6024, LAPSCO, F-63037 Clermont-Ferrand, France.
- María J Funes
- Mind Brain and Behaviour Research Center (CIMCYC), University of Granada, Spain; Department of Experimental Psychology, University of Granada, Spain
26
Sweegers CCG, Coleman GA, van Poppel EAM, Cox R, Talamini LM. Mental Schemas Hamper Memory Storage of Goal-Irrelevant Information. Front Hum Neurosci 2015; 9:629. [PMID: 26635582 PMCID: PMC4659923 DOI: 10.3389/fnhum.2015.00629] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2015] [Accepted: 11/03/2015] [Indexed: 11/16/2022] Open
Abstract
Mental schemas exert top-down control on information processing, for instance by facilitating the storage of schema-related information. However, given capacity-limits and competition in neural network processing, schemas may additionally exert their effects by suppressing information with low momentary relevance. In particular, when existing schemas suffice to guide goal-directed behavior, this may actually reduce encoding of the redundant sensory input, in favor of gaining efficiency in task performance. The present experiment set out to test this schema-induced shallow encoding hypothesis. Our approach involved a memory task in which faces had to be coupled to homes. For half of the faces the responses could be guided by a pre-learned schema, for the other half of the faces such a schema was not available. Memory storage was compared between schema-congruent and schema-incongruent items. To characterize putative schema effects, memory was assessed both with regard to visual details and contextual aspects of each item. The depth of encoding was also assessed through an objective neural measure: the parietal old/new ERP effect. This ERP effect, observed between 500–800 ms post-stimulus onset, is thought to reflect the extent of recollection: the retrieval of a vivid memory, including various contextual details from the learning episode. We found that schema-congruency induced substantial impairments in item memory and even larger ones in context memory. Furthermore, the parietal old/new ERP effect indicated higher recollection for the schema-incongruent than the schema-congruent memories. The combined findings indicate that, when goals can be achieved using existing schemas, this can hinder the in-depth processing of novel input, impairing the formation of perceptually detailed and contextually rich memory traces. Taking into account both current and previous findings, we suggest that schemas can both positively and negatively bias the processing of sensory input. 
An important determinant in this matter is likely related to momentary goals, such that mental schemas facilitate memory processing of goal-relevant input, but suppress processing of goal-irrelevant information.
Affiliation(s)
- C C G Sweegers
- Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
- G A Coleman
- Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
- E A M van Poppel
- Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
- R Cox
- Department of Psychology, University of Amsterdam, Amsterdam, Netherlands; Department of Psychiatry, Beth Israel Deaconess Medical Center, Boston, MA, USA; Department of Psychiatry, Harvard Medical School, Boston, MA, USA
- L M Talamini
- Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
27
Pegna AJ, Gehring E, Meyer G, Del Zotto M. Direction of Biological Motion Affects Early Brain Activation: A Link with Social Cognition. PLoS One 2015; 10:e0131551. [PMID: 26121591 PMCID: PMC4487996 DOI: 10.1371/journal.pone.0131551] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2014] [Accepted: 06/03/2015] [Indexed: 11/21/2022] Open
Abstract
A number of EEG studies have investigated the time course of brain activation for biological movement over the last decade; however, the temporal dynamics of processing are still debated. Moreover, the role of direction of movement has not received much attention, even though it is an essential component allowing us to determine the intentions of the moving agent, and thus permitting the anticipation of potential social interactions. In this study, we examined event-related potentials (ERPs) in 15 healthy human participants to point-light walkers and their scrambled counterparts, whose movements occurred either in the radial or in the lateral plane. Compared to scrambled motion (SM), biological motion (BM) showed an enhanced negativity between 210 and 360 ms. A source localization algorithm (sLORETA) revealed that this was due to an increase in superior and middle temporal lobe activity. Regarding direction, we found that radial BM produced an enhanced P1 compared to lateral BM, lateral SM and radial SM. This heightened P1 was due to an increase in activity in extrastriate regions, as well as in superior temporal, medial parietal and medial prefrontal areas. This network is known to be involved in decoding the underlying intentionality of the movement and in the attribution of mental states. The social meaning signaled by the direction of biological motion therefore appears to trigger an early response in brain activity.
Affiliation(s)
- Alan John Pegna
- University of Geneva, Faculty of Psychology and Educational Science, Geneva, Switzerland
- Laboratory of Experimental Neuropsychology, Neuropsychology Unit / Neurology Clinic, Geneva University Hospital, Geneva, Switzerland
- Elise Gehring
- Laboratory of Experimental Neuropsychology, Neuropsychology Unit / Neurology Clinic, Geneva University Hospital, Geneva, Switzerland
- Georg Meyer
- University of Liverpool, Dept of Psychological Sciences, Eleanor Rathbone Building, Liverpool, United Kingdom
- Marzia Del Zotto
- University of Geneva, Faculty of Psychology and Educational Science, Geneva, Switzerland
- Laboratory of Experimental Neuropsychology, Neuropsychology Unit / Neurology Clinic, Geneva University Hospital, Geneva, Switzerland
28
Bortolon C, Capdevielle D, Raffard S. Face recognition in schizophrenia disorder: A comprehensive review of behavioral, neuroimaging and neurophysiological studies. Neurosci Biobehav Rev 2015; 53:79-107. [PMID: 25800172 DOI: 10.1016/j.neubiorev.2015.03.006] [Citation(s) in RCA: 76] [Impact Index Per Article: 8.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2014] [Revised: 02/11/2015] [Accepted: 03/12/2015] [Indexed: 12/20/2022]
Abstract
Facial emotion processing has been extensively studied in schizophrenia patients, whereas general face processing has received less attention. Previously published reviews do not address the current scientific literature in a complete manner. Here, therefore, we tried to answer some questions that remain to be clarified, in particular: are the non-emotional aspects of face processing in fact impaired in schizophrenia patients? At the behavioral level, our key conclusions are that the visual perception deficit in schizophrenia patients is not specific to faces, is most often present when the cognitive (e.g. attention) and perceptual demands of the tasks are high, and seems to worsen as the illness becomes chronic. Although current evidence suggests impaired second-order configural processing, more studies are necessary to determine whether or not holistic processing is impaired in schizophrenia patients. Neural and neurophysiological evidence suggests impairment at earlier levels of visual processing, which might involve deficits in the interaction of the magnocellular and parvocellular pathways that impact further processing. These deficits seem to be present even before the onset of the disorder. Although evidence suggests that this deficit may not be specific to faces, further evidence on this question is necessary, in particular from more ecological studies including context and body processing.
Affiliation(s)
- Catherine Bortolon
- Epsylon Laboratory, EA 4556 Montpellier, France; University Department of Adult Psychiatry, CHU Montpellier, Montpellier, France.
- Delphine Capdevielle
- University Department of Adult Psychiatry, CHU Montpellier, Montpellier, France; French National Institute of Health and Medical Research (INSERM), U1061 Pathologies of the Nervous System: Epidemiological and Clinical Research, La Colombiere Hospital, 34093 Montpellier Cedex 5, France
- Stéphane Raffard
- Epsylon Laboratory, EA 4556 Montpellier, France; University Department of Adult Psychiatry, CHU Montpellier, Montpellier, France
29
Kreegipuu K, Kuldkepp N, Sibolt O, Toom M, Allik J, Näätänen R. vMMN for schematic faces: automatic detection of change in emotional expression. Front Hum Neurosci 2013; 7:714. [PMID: 24191149 PMCID: PMC3808786 DOI: 10.3389/fnhum.2013.00714] [Citation(s) in RCA: 47] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2013] [Accepted: 10/08/2013] [Indexed: 11/17/2022] Open
Abstract
Our brain is able to automatically detect changes in sensory stimulation, including in vision. A large variety of feature changes in stimulation elicit a deviance-reflecting event-related potential (ERP) component known as the mismatch negativity (MMN). The present study had three main goals: (1) to register the visual MMN (vMMN) using a rapidly presented stream of schematic faces (neutral, happy, and angry; adapted from Öhman et al., 2001); (2) to compare the vMMNs elicited by angry and happy schematic faces in two different paradigms, a traditional oddball design with frequent standard and rare target and deviant stimuli (12.5% each) and a version of the optimal multi-feature paradigm with several deviant stimuli (altogether 37.5%) in the stimulus block; and (3) to compare vMMNs to subjective ratings of valence, arousal and attention capture for happy and angry schematic faces, i.e., to estimate the effect of the affective value of stimuli on their automatic detection. Eleven observers (19–32 years, six women) took part in both experiments, an oddball and an optimum paradigm. Stimuli were rapidly presented schematic faces and an object with face features that served as the target stimulus to be detected by a button press. Results show that a vMMN-type response at posterior sites was equally elicited in both experiments. Post-experimental reports confirmed that the angry face attracted more automatic attention than the happy face, but the difference did not emerge directly at the ERP level. Thus, when interested in studying change detection in facial expressions, we encourage the use of the optimum (multi-feature) design in order to save time and other experimental resources.
Affiliation(s)
- Kairi Kreegipuu
- Department of Experimental Psychology, Institute of Psychology, University of Tartu, Tartu, Estonia
30
Yee M, Jones SS, Smith LB. Changes in visual object recognition precede the shape bias in early noun learning. Front Psychol 2012; 3:533. [PMID: 23227015 PMCID: PMC3512352 DOI: 10.3389/fpsyg.2012.00533] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2012] [Accepted: 11/12/2012] [Indexed: 11/13/2022] Open
Abstract
Two of the most formidable skills that characterize human beings are language and our prowess in visual object recognition. They may also be developmentally intertwined. Two experiments, a large-sample cross-sectional study and a smaller-sample 6-month longitudinal study of 18- to 24-month-olds, tested a hypothesized developmental link between changes in visual object representation and noun learning. Previous findings in visual object recognition indicate that children's ability to recognize common basic-level categories from sparse structural representations of object shape emerges between the ages of 18 and 24 months, is related to noun vocabulary size, and is lacking in children with language delay. Other research shows, in artificial noun learning tasks, that during this same developmental period young children systematically generalize object names by shape, that this shape bias predicts future noun learning, and that it is lacking in children with language delay. The two experiments examine the developmental relation between visual object recognition and the shape bias for the first time. The results show that developmental changes in visual object recognition systematically precede the emergence of the shape bias. The results suggest a developmental pathway in which early changes in visual object recognition, themselves linked to category learning, enable the discovery of higher-order regularities in category structure and thus the shape bias in novel noun learning tasks. The proposed developmental pathway has implications for understanding the role of specific experience in the development of both visual object recognition and the shape bias in early noun learning.
Affiliation(s)
- Meagan Yee
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
- Susan S. Jones
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
- Linda B. Smith
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
31
Interactions between facial emotion and identity in face processing: Evidence based on redundancy gains. Atten Percept Psychophys 2012; 74:1692-711. [DOI: 10.3758/s13414-012-0345-5] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
32
The influence of social comparison on visual representation of one's face. PLoS One 2012; 7:e36742. [PMID: 22662124 PMCID: PMC3360758 DOI: 10.1371/journal.pone.0036742] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2011] [Accepted: 04/10/2012] [Indexed: 11/24/2022] Open
Abstract
Can the effects of social comparison extend beyond explicit evaluation to visual self-representation—a perceptual stimulus that is objectively verifiable, unambiguous, and frequently updated? We morphed images of participants' faces with attractive and unattractive references. With access to a mirror, participants selected the morphed image they perceived as depicting their face. Participants who engaged in upward comparison with relevant attractive targets selected a less attractive morph compared to participants exposed to control images (Study 1). After downward comparison with relevant unattractive targets compared to control images, participants selected a more attractive morph (Study 2). Biased representations were not the products of cognitive accessibility of beauty constructs; comparisons did not influence representations of strangers' faces (Study 3). We discuss implications for vision, social comparison, and body image.
33
Graewe B, De Weerd P, Farivar R, Castelo-Branco M. Stimulus dependency of object-evoked responses in human visual cortex: an inverse problem for category specificity. PLoS One 2012; 7:e30727. [PMID: 22363479 PMCID: PMC3281870 DOI: 10.1371/journal.pone.0030727] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2011] [Accepted: 12/27/2011] [Indexed: 11/19/2022] Open
Abstract
Many studies have linked the processing of different object categories to specific event-related potentials (ERPs) such as the face-specific N170. Despite reports showing that object-related ERPs are influenced by visual stimulus features, there is consensus that these components primarily reflect categorical aspects of the stimuli. Here, we re-investigated this idea by systematically measuring the effects of visual feature manipulations on ERP responses elicited by both structure-from-motion (SFM)-defined and luminance-defined object stimuli. SFM objects elicited a novel component at 200-250 ms (N250) over parietal and posterior temporal sites. We found, however, that the N250 amplitude was unaffected by restructuring SFM stimuli into meaningless objects based on identical visual cues. This suggests that this N250 peak was not uniquely linked to categorical aspects of the objects, but is strongly determined by visual stimulus features. We provide strong support for this hypothesis by parametrically manipulating the depth range of both SFM- and luminance-defined object stimuli and showing that the N250 evoked by SFM stimuli as well as the well-known N170 to static faces were sensitive to this manipulation. Importantly, this effect could not be attributed to compromised object categorization in low depth stimuli, confirming a strong impact of visual stimulus features on object-related ERP signals. As ERP components linked with visual categorical object perception are likely determined by multiple stimulus features, this creates an interesting inverse problem when deriving specific perceptual processes from variations in ERP components.
Affiliation(s)
- Britta Graewe
- Department of Cognitive Neuroscience, Faculty of Psychology & Neuroscience, Maastricht University, Maastricht, The Netherlands.
34
Pitts MA, Martínez A, Hillyard SA. Visual Processing of Contour Patterns under Conditions of Inattentional Blindness. J Cogn Neurosci 2012; 24:287-303. [PMID: 21812561 DOI: 10.1162/jocn_a_00111] [Citation(s) in RCA: 104] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
An inattentional blindness paradigm was adapted to measure ERPs elicited by visual contour patterns that were or were not consciously perceived. In the first phase of the experiment, subjects performed an attentionally demanding task while task-irrelevant line segments formed square-shaped patterns or random configurations. After the square patterns had been presented 240 times, subjects' awareness of these patterns was assessed. More than half of all subjects, when queried, failed to notice the square patterns and were thus considered inattentionally blind during this first phase. In the second phase of the experiment, the task and stimuli were the same, but following this phase, all of the subjects reported having seen the patterns. ERPs recorded over the occipital pole differed in amplitude from 220 to 260 msec for the pattern stimuli compared with the random arrays regardless of whether subjects were aware of the patterns. At subsequent latencies (300–340 msec) however, ERPs over bilateral occipital-parietal areas differed between patterns and random arrays only when subjects were aware of the patterns. Finally, in a third phase of the experiment, subjects viewed the same stimuli, but the task was altered so that the patterns became task relevant. Here, the same two difference components were evident but were followed by a series of additional components that were absent in the first two phases of the experiment. We hypothesize that the ERP difference at 220–260 msec reflects neural activity associated with automatic contour integration whereas the difference at 300–340 msec reflects visual awareness, both of which are dissociable from task-related postperceptual processing.
Affiliation(s)
- Antígona Martínez
- University of California, San Diego, CA, USA
- Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, USA
35
Dering B, Martin CD, Moro S, Pegna AJ, Thierry G. Face-sensitive processes one hundred milliseconds after picture onset. Front Hum Neurosci 2011; 5:93. [PMID: 21954382 PMCID: PMC3173839 DOI: 10.3389/fnhum.2011.00093] [Citation(s) in RCA: 64] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2011] [Accepted: 08/13/2011] [Indexed: 11/13/2022] Open
Abstract
The human face is the most studied object category in visual neuroscience. In a quest for markers of face processing, event-related potential (ERP) studies have debated whether two peaks of activity – P1 and N170 – are category-selective. Whilst most studies have used unaltered photographs of faces, others have used cropped faces in an attempt to reduce the influence of features surrounding the “face–object” sensu stricto. However, results from studies comparing cropped faces with unaltered objects from other categories are inconsistent with results from studies comparing whole faces and objects. Here, we recorded ERPs elicited by full front views of faces and cars, either unaltered or cropped. We found that cropping artificially enhanced the N170 whereas it did not significantly modulate P1. In a second experiment, we compared faces and butterflies, either unaltered or cropped, matched for size and luminance across conditions, and within a narrow contrast bracket. Results of Experiment 2 replicated the main findings of Experiment 1. We then used face–car morphs in a third experiment to manipulate the perceived face-likeness of stimuli (100% face, 70% face and 30% car, 30% face and 70% car, or 100% car), and the N170 failed to differentiate between faces and cars. Critically, in all three experiments, P1 amplitude was modulated in a face-sensitive fashion independent of cropping or morphing. Therefore, P1 is a reliable event sensitive to face processing as early as 100 ms after picture onset.
36
Dickstein DP, Castellanos FX. Face processing in attention deficit/hyperactivity disorder. Curr Top Behav Neurosci 2011; 9:219-37. [PMID: 21956612 DOI: 10.1007/7854_2011_157] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/03/2022]
Abstract
ADHD is one of the most common and impairing psychiatric conditions affecting children today. Thus far, much of the phenomenological and neurobiological research has emphasized the core symptoms of inattention, hyperactivity, and impulsivity which are thought to be mediated by frontostriatal alterations. However, increasing evidence suggests that ADHD involves emotional problems in addition to cognitive impairments. Here, we review the neurobiology of face processing and suggest that face-processing alterations offer a window into the emotional dysfunction often accompanying ADHD.
Affiliation(s)
- Daniel P Dickstein
- PediMIND Program, E.P. Bradley Hospital, Alpert Medical School of Brown University, East Providence, RI, USA
37
Ortigue S, Sinigaglia C, Rizzolatti G, Grafton ST. Understanding actions of others: the electrodynamics of the left and right hemispheres. A high-density EEG neuroimaging study. PLoS One 2010; 5:e12160. [PMID: 20730095 PMCID: PMC2921336 DOI: 10.1371/journal.pone.0012160] [Citation(s) in RCA: 81] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2010] [Accepted: 07/21/2010] [Indexed: 11/18/2022] Open
Abstract
BACKGROUND: When we observe an individual performing a motor act (e.g. grasping a cup) we get two types of information on the basis of how the motor act is done and the context: what the agent is doing (i.e. grasping) and the intention underlying it (i.e. grasping for drinking). Here we examined the temporal dynamics of the brain activations that follow the observation of a motor act and underlie the observer's capacity to understand what the agent is doing and why.
METHODOLOGY/PRINCIPAL FINDINGS: Volunteers were presented with two-frame video-clips. The first frame (T0) showed an object with or without context; the second frame (T1) showed a hand interacting with the object. The volunteers were instructed to understand the intention of the observed actions while their brain activity was recorded with a high-density 128-channel EEG system. Visual event-related potentials (VEPs) were recorded time-locked with the frame showing the hand-object interaction (T1). The data were analyzed by using electrical neuroimaging, which combines a cluster analysis performed on the group-averaged VEPs with the localization of the cortical sources that give rise to different spatio-temporal states of the global electrical field. Electrical neuroimaging results revealed four major steps: (1) bilateral posterior cortical activations; (2) a strong activation of the left posterior temporal and inferior parietal cortices with almost complete disappearance of activations in the right hemisphere; (3) a significant increase of the activations of the right temporo-parietal region with simultaneously co-active left hemispheric sources; and (4) a significant global decrease of cortical activity accompanied by the appearance of activation of the orbito-frontal cortex.
CONCLUSIONS/SIGNIFICANCE: We conclude that the early striking left hemisphere involvement is due to the activation of a lateralized action-observation/action-execution network. The activation of this lateralized network mediates the understanding of the goal of object-directed motor acts (mirror mechanism). The successive right hemisphere activation indicates that this hemisphere plays an important role in understanding the intention of others.
Affiliation(s)
- Stephanie Ortigue
- 4D Brain Electrodynamics Laboratory, Department of Psychology, UCSB Brain Imaging Center, Institute for Collaborative Biotechnologies, University of California Santa Barbara, Santa Barbara, California, United States of America
- Laboratory for Advanced Translational Neuroscience, Department of Psychology, Central New York Medical Center, Syracuse University, Syracuse, New York, United States of America
- Giacomo Rizzolatti
- Department of Neuroscience, University of Parma, Parma, Italy
- Istituto Italiano di Tecnologia, Unità di Parma, Parma, Italy
| | - Scott T. Grafton
- 4D Brain Electrodynamics Laboratory, Department of Psychology, UCSB Brain Imaging Center, Institute for Collaborative Biotechnologies, University of California Santa Barbara, Santa Barbara, California, United States of America
38
Lippé S, Bulteau C, Dorfmuller G, Audren F, Delalande O, Jambaqué I. Cognitive outcome of parietooccipital resection in children with epilepsy. Epilepsia 2010; 51:2047-57. [DOI: 10.1111/j.1528-1167.2010.02651.x] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
39
Deng XH, Zhang DX, Huang SX, Yuan W, Zhou XL. Effects of Supra- and Sub-liminal Emotional Cues on Inhibition of Return. Acta Psychologica Sinica 2010. [DOI: 10.3724/sp.j.1041.2010.00325] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
40
Susac A, Ilmoniemi RJ, Pihko E, Nurminen J, Supek S. Early dissociation of face and object processing: a magnetoencephalographic study. Hum Brain Mapp 2009; 30:917-27. [PMID: 18344191 DOI: 10.1002/hbm.20557] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022] Open
Abstract
The early dissociation in cortical responses to faces and objects was explored with magnetoencephalographic (MEG) recordings and source localization. To control for differences in the low-level stimulus features, which are known to modulate early brain responses, we created a novel set of stimuli so that their combinations did not have any differences in the visual-field location, spatial frequency, or luminance contrast. Differing responses to face and object (flower) stimuli were found at about 100 ms after stimulus onset in the occipital cortex. Our data also confirm that the brain response to a complex visual stimulus is not merely a sum of the responses to its constituent parts; the nonlinearity in the response was largest for meaningful stimuli.
Affiliation(s)
- Ana Susac
- Department of Physics, Faculty of Science, University of Zagreb, Zagreb, Croatia.
41
Pereira AF, Smith LB. Developmental changes in visual object recognition between 18 and 24 months of age. Dev Sci 2009; 12:67-80. [PMID: 19120414 DOI: 10.1111/j.1467-7687.2008.00747.x] [Citation(s) in RCA: 46] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
Abstract
Two experiments examined developmental changes in children's visual recognition of common objects during the period of 18 to 24 months. Experiment 1 examined children's ability to recognize common category instances that presented three different kinds of information: (1) richly detailed and prototypical instances that presented local and global shape information together with color, texture, and surface features; (2) the same rich and prototypical shapes but no color, texture, or surface features; or (3) only abstract and global representations of object shape in terms of geometric volumes. Significant developmental differences were observed only for the abstract shape representations in terms of geometric volumes, the kind of shape representation that has been hypothesized to underlie mature object recognition. Further, these differences were strongly linked in individual children to the number of object names in their productive vocabulary. Experiment 2 replicated these results and showed further that the less advanced children's object recognition was based on the piecemeal use of individual features and parts, rather than overall shape. The results provide further evidence for significant and rapid developmental changes in object recognition during the same period in which children first learn object names. The implications of the results for theories of visual object recognition, the relation of object recognition to category learning, and underlying developmental processes are discussed.
Affiliation(s)
- Alfredo F Pereira
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405, USA.
42
Abstract
The human body, like the human face, is a rich source of socially relevant information about other individuals. Evidence from studies of both humans and non-human primates points to focal regions of the higher-level visual cortex that are specialized for the visual perception of the body. These body-selective regions, which can be dissociated from regions involved in face perception, have been implicated in the perception of the self and the 'body schema', the perception of others' emotions and the understanding of actions.
Affiliation(s)
- Marius V Peelen
- Centre for Cognitive Neuroscience, School of Psychology, Brigantia Building, University of Wales, Bangor, Gwynedd, LL57 2AS, UK
43
Thierry G, Martin CD, Downing P, Pegna AJ. Controlling for interstimulus perceptual variance abolishes N170 face selectivity. Nat Neurosci 2007; 10:505-11. [PMID: 17334361 DOI: 10.1038/nn1864] [Citation(s) in RCA: 152] [Impact Index Per Article: 8.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2006] [Accepted: 02/06/2007] [Indexed: 11/08/2022]
Abstract
Establishing when and how the human brain differentiates between object categories is key to understanding visual cognition. Event-related potential (ERP) investigations have led to the consensus that faces selectively elicit a negative wave peaking 170 ms after presentation, the 'N170'. In such experiments, however, faces are nearly always presented from a full front view, whereas other stimuli are more perceptually variable, leading to uncontrolled interstimulus perceptual variance (ISPV). Here, we compared ERPs elicited by faces, cars and butterflies while, for the first time, controlling ISPV (low or high). Surprisingly, the N170 was sensitive not to object category but to ISPV. In addition, we found category effects independent of ISPV 70 ms earlier than has been generally reported. These results demonstrate early ERP category effects in the visual domain, call into question the face selectivity of the N170 and establish ISPV as a critical factor to control in experiments relying on multitrial averaging.
Affiliation(s)
- Guillaume Thierry
- School of Psychology, Brigantia Building, Penrallt Road, University of Wales, Bangor, LL57 2AS, UK.
44
Khateb A, Pegna AJ, Landis T, Michel CM, Brunet D, Seghier ML, Annoni JM. Rhyme processing in the brain: an ERP mapping study. Int J Psychophysiol 2007; 63:240-50. [PMID: 17222476 DOI: 10.1016/j.ijpsycho.2006.11.001] [Citation(s) in RCA: 26] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2006] [Revised: 07/17/2006] [Accepted: 11/08/2006] [Indexed: 11/26/2022]
Abstract
The event-related potential (ERP) N450 component has been described in rhyme detection tasks as a negative response elicited by non-rhyming words in comparison to rhyming ones. This response, which peaks around 450 ms over midline and right-hemisphere recording sites, has subsequently been suggested to start as early as approximately 300 ms. Moreover, although the phonological N450 was first linked to the semantic N400 component, its cognitive nature and cerebral origin remain debated. In this study, we re-investigated the time course of the electrophysiological responses to rhyming and non-rhyming words and estimated their cerebral generators using source localization methods. Waveform analysis showed that, prior to the N450 response to non-rhyming words, a slightly earlier negativity characterized the rhyming condition over left fronto-temporal electrodes and peaked at approximately 350 ms. The analysis of the ERP map series in terms of functional microstates revealed a specific map segment in the rhyming condition and another in the non-rhyming condition. Source localization indicated that the rhyming-elicited microstate engaged predominantly left frontal and temporal areas, while the non-rhyming-specific response recruited temporal and parietal regions bilaterally. Our results suggest that, similar to the N400 component, which is also induced by mismatch contexts, the N450 might rely on temporal generators.
Affiliation(s)
- Asaid Khateb
- Laboratory of Experimental Neuropsychology, Department of Neurology, University Hospital, Geneva, Switzerland.
45
Vuilleumier P, Pourtois G. Distributed and interactive brain mechanisms during emotion face perception: Evidence from functional neuroimaging. Neuropsychologia 2007; 45:174-94. [PMID: 16854439 DOI: 10.1016/j.neuropsychologia.2006.06.003] [Citation(s) in RCA: 762] [Impact Index Per Article: 44.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
Brain imaging studies in humans have shown that face processing in several areas is modulated by the affective significance of faces, particularly with fearful expressions, but also with other social signals such as gaze direction. Here we review haemodynamic and electrical neuroimaging results indicating that activity in the face-selective fusiform cortex may be enhanced by emotional (fearful) expressions, without explicit voluntary control, and presumably through direct feedback connections from the amygdala. fMRI studies show that these increased responses in fusiform cortex to fearful faces are abolished by amygdala damage in the ipsilateral hemisphere, despite preserved effects of voluntary attention on fusiform cortex; whereas emotional increases can still arise despite deficits in attention or awareness following parietal damage, and appear relatively unaffected by pharmacological increases in cholinergic stimulation. Fear-related modulations of face processing driven by amygdala signals may implicate not only fusiform cortex, but also earlier visual areas in occipital cortex (e.g., V1) and other distant regions involved in social, cognitive, or somatic responses (e.g., superior temporal sulcus, cingulate, or parietal areas). In the temporal domain, evoked potentials show a widespread time-course of emotional face perception, with some increases in the amplitude of responses recorded over both occipital and frontal regions for fearful relative to neutral faces (as well as in the amygdala and orbitofrontal cortex, when using intracranial recordings), but with different latencies post-stimulus onset. Early emotional responses may arise around 120 ms, prior to a full visual categorization stage indexed by the face-selective N170 component, possibly reflecting rapid emotion processing based on crude visual cues in faces. Other electrical components arise at later latencies and involve more sustained activities, probably generated in associative or supramodal brain areas, and resulting in part from the modulatory signals received from the amygdala. Altogether, these fMRI and ERP results demonstrate that emotional face perception is a complex process that cannot be related to a single neural event taking place in a single brain region, but rather implicates an interactive network with distributed activity in time and space. Moreover, although traditional models in cognitive neuropsychology have often considered that facial expression and facial identity are processed along two separate pathways, evidence from fMRI and ERPs suggests instead that emotional processing can strongly affect brain systems responsible for face recognition and memory. The functional implications of these interactions remain to be fully explored, but might play an important role in the normal development of face processing skills and in some neuropsychiatric disorders.
Affiliation(s)
- Patrik Vuilleumier
- Laboratory for Behavioral Neurology & Imaging of Cognition, Clinic of Neurology, University Hospital of Geneva, Geneva, Switzerland.
46
Gruber T, Giabbiconi CM, Trujillo-Barreto NJ, Müller MM. Repetition suppression of induced gamma band responses is eliminated by task switching. Eur J Neurosci 2006; 24:2654-60. [PMID: 17100853 DOI: 10.1111/j.1460-9568.2006.05130.x] [Citation(s) in RCA: 26] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Abstract
The formation of cortical object representations requires the activation of cell assemblies, correlated by induced oscillatory bursts of activity > 20 Hz (induced gamma band responses; iGBRs). One marker of the functional dynamics within such cell assemblies is the suppression of iGBRs elicited by repeated stimuli. This effect is commonly interpreted as a signature of 'sharpening' processes within cell-assemblies, which are behaviourally mirrored in repetition priming effects. The present study investigates whether the sharpening of primed objects is an automatic consequence of repeated stimulus processing, or whether it depends on task demands. Participants performed either a 'living/non-living' or a 'bigger/smaller than a shoebox' classification on repeated pictures of everyday objects. We contrasted repetition-related iGBR effects after the same task was used for initial and repeated presentations (no-switch condition) with repetitions after a task-switch occurred (switch condition). Furthermore, we complemented iGBR analysis by examining other brain responses known to be modulated by repetition-related memory processes (evoked gamma oscillations and event-related potentials; ERPs). The results obtained for the 'no-switch' condition replicated previous findings of repetition suppression of iGBRs at 200-300 ms after stimulus onset. Source modelling showed that this effect was distributed over widespread cortical areas. By contrast, after a task-switch no iGBR suppression was found. We concluded that iGBRs reflect the sharpening of a cell assembly only within the same task. After a task switch the complete object representation is reactivated. The ERP (220-380 ms) revealed suppression effects independent of task demands in bilateral posterior areas and might indicate correlates of repetition priming in perceptual structures.
Affiliation(s)
- Thomas Gruber
- Universität Leipzig, Institut für Psychologie I, Seeburgstrasse 14-20, 04103 Leipzig, Germany.
47
Thierry G, Pegna AJ, Dodds C, Roberts M, Basan S, Downing P. An event-related potential component sensitive to images of the human body. Neuroimage 2006; 32:871-9. [PMID: 16750639 DOI: 10.1016/j.neuroimage.2006.03.060] [Citation(s) in RCA: 145] [Impact Index Per Article: 8.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2005] [Revised: 03/23/2006] [Accepted: 03/24/2006] [Indexed: 11/21/2022] Open
Abstract
One of the critical functions of vision is to provide information about other individuals. Neuroimaging experiments examining the cortical regions that analyze the appearance of other people have found partially overlapping networks that respond selectively to human faces and bodies. In event-related potential (ERP) studies, faces systematically elicit a negative component peaking 170 ms after presentation - the N170. To characterize the electrophysiological response to human bodies, we compared the ERPs elicited by faces, bodies and various control stimuli. In Experiment 1, a comparison of ERPs elicited by faces, bodies, objects and places showed that pictures of the human body (without the head) elicit a negative component peaking at 190 ms (an N190). While broadly similar to the N170, the N190 differs in both spatial distribution and amplitude from the N1 components elicited by faces, objects and scenes and peaks significantly later than the N170. The difference between N190 and N170 was further supported using topographic analyses of ERPs and source localization techniques. A unique, stable map topography was found to characterize human bodies between 130 and 230 ms. In Experiment 2, we tested the four conditions from Experiment 1, as well as intact and scrambled silhouettes and stick figures of the human body. We found that intact silhouettes and stick figures elicited significantly greater N190 amplitudes than their scrambled counterparts. Thus, the N190 generalizes to some degree to schematic depictions of the human form. Overall, our findings are consistent with intertwined, but functionally distinct, neural representations of the human face and body.
Affiliation(s)
- Guillaume Thierry
- School of Psychology, University of Wales, Bangor, Gwynedd LL57 2AS, UK.
48
Abstract
In this review we examine how attention is involved in detecting faces, recognizing facial identity and registering and discriminating between facial expressions of emotion. The first section examines whether these aspects of face perception are "automatic", in that they are especially rapid, non-conscious, mandatory and capacity-free. The second section discusses whether limited-capacity selective attention mechanisms are preferentially recruited by faces and facial expressions. Evidence from behavioral, neuropsychological, neuroimaging and psychophysiological studies from humans and single-unit recordings from primates is examined and the neural systems involved in processing faces, emotion and attention are highlighted. Avenues for further research are identified.
Affiliation(s)
- Romina Palermo
- Macquarie Centre for Cognitive Science (MACCS), Macquarie University, NSW 2109, Sydney, Australia.
49
Proverbio AM, Brignone V, Matarazzo S, Del Zotto M, Zani A. Gender differences in hemispheric asymmetry for face processing. BMC Neurosci 2006; 7:44. [PMID: 16762056 PMCID: PMC1523199 DOI: 10.1186/1471-2202-7-44] [Citation(s) in RCA: 104] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2006] [Accepted: 06/08/2006] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Current cognitive neuroscience models predict a right-hemispheric dominance for face processing in humans. However, neuroimaging and electromagnetic data in the literature provide conflicting evidence of a right-sided brain asymmetry for decoding the structural properties of faces. The purpose of this study was to investigate whether this inconsistency might be due to gender differences in hemispheric asymmetry. RESULTS In this study, event-related brain potentials (ERPs) were recorded in 40 healthy, strictly right-handed individuals (20 women and 20 men) while they observed infants' faces expressing a variety of emotions. Early face-sensitive P1 and N1 responses to neutral vs. affective expressions were measured over the occipital/temporal cortices, and the responses were analyzed according to viewer gender. Along with a strong right hemispheric dominance for men, the results showed a lack of asymmetry for face processing in the amplitude of the occipito-temporal N1 response in women to both neutral and affective faces. CONCLUSION Men showed an asymmetric functioning of visual cortex while decoding faces and expressions, whereas women showed a more bilateral functioning. These results indicate the importance of gender effects in the lateralization of the occipito-temporal response during the processing of face identity, structure, familiarity, or affective content.
Affiliation(s)
- Alice M Proverbio
- Department of Psychology, University of Milano-Bicocca, Viale dell'Innovazione 10, 20126, Milan, Italy
- Institute of Bioimaging and Molecular Physiology (CNR), Via Fratelli Cervi 93, 20090, Milano-Segrate, Italy
- Valentina Brignone
- Department of Psychology, University of Milano-Bicocca, Viale dell'Innovazione 10, 20126, Milan, Italy
- Silvia Matarazzo
- Department of Psychology, University of Milano-Bicocca, Viale dell'Innovazione 10, 20126, Milan, Italy
- Marzia Del Zotto
- Department of Psychology, University of Milano-Bicocca, Viale dell'Innovazione 10, 20126, Milan, Italy
- Institute of Bioimaging and Molecular Physiology (CNR), Via Fratelli Cervi 93, 20090, Milano-Segrate, Italy
- Alberto Zani
- Institute of Bioimaging and Molecular Physiology (CNR), Via Fratelli Cervi 93, 20090, Milano-Segrate, Italy
50
Proverbio AM, Brignone V, Matarazzo S, Del Zotto M, Zani A. Gender and parental status affect the visual cortical response to infant facial expression. Neuropsychologia 2006; 44:2987-99. [PMID: 16879841 DOI: 10.1016/j.neuropsychologia.2006.06.015] [Citation(s) in RCA: 124] [Impact Index Per Article: 6.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2006] [Revised: 06/08/2006] [Accepted: 06/18/2006] [Indexed: 10/24/2022]
Abstract
This study sought to determine the influence of gender and parental status on the brain potentials elicited by viewing infant facial expressions. We used ERP recording during a judgement task of infant happy/distressed expression to investigate if viewer gender or parental status affects the visual cortical response at various stages of perceptual processing. ERPs were recorded in 38 adults (male/female, parents/non-parents) during processing of infant facial expressions that varied in valence and intensity. All infants were unfamiliar to viewers. The lateral occipital P110 response was much larger in women than in men, regardless of facial expression, thus indicating a gender difference in early visual processing. The occipitotemporal N160 response provided the first evidence of discrimination of expressions of discomfort and distress and demonstrated a significant gender difference within the parent group, thus suggesting a strong interactive influence of genetic predisposition and parental status on the responsivity of visual brain areas. The N245 component exhibited complete coding of the intensity of facial expression, including positive expressions. At this processing stage the cerebral responses of female and male non-parents were significantly smaller than those of parents and insensitive to differences in the intensity of infant suffering. Smaller P300 amplitudes were elicited in mothers versus fathers, especially with infant expressions of suffering. No major group differences were observed in cerebral responses to happy or comfortable expressions. These findings suggest that mere familiarity with infant faces does not explain group differences.
Affiliation(s)
- Alice Mado Proverbio
- Department of Psychology, University of Milano-Bicocca, Viale dell'Innovazione 10, 20126 Milan, Italy.