1. Franchak JM, Kadooka K. Age differences in orienting to faces in dynamic scenes depend on face centering, not visual saliency. Infancy 2022;27:1032-1051. [PMID: 35932474] [DOI: 10.1111/infa.12492]
Abstract
The current study investigated how infants (6-24 months), children (2-12 years), and adults differ in how visual cues (visual saliency and centering) guide their attention to faces in videos. We report a secondary analysis of Kadooka and Franchak (2020), in which observers' eye movements were recorded during viewing of television clips containing a variety of faces. For every face on every video frame, we calculated its visual saliency (based on both static and dynamic image features) and calculated how close the face was to the center of the image. Results revealed that participants of every age looked more often at each face when it was more salient compared with less salient. In contrast, centering did not increase the likelihood that infants looked at a given face, but in later childhood and adulthood, centering became a stronger cue for face looking. A control analysis determined that the age-related change in centering was specific to face looking; participants of all ages were more likely to look at the center of the image, and this center bias did not change with age. The implications for using videos in educational and diagnostic contexts are discussed.
2. Romeo Z, Fusina F, Semenzato L, Bonato M, Angrilli A, Spironelli C. Comparison of Slides and Video Clips as Different Methods for Inducing Emotions: An Electroencephalographic Alpha Modulation Study. Front Hum Neurosci 2022;16:901422. [PMID: 35734350] [PMCID: PMC9207173] [DOI: 10.3389/fnhum.2022.901422]
Abstract
Films, compared with emotional static pictures, represent true-to-life dynamic stimuli that are both ecological and effective in inducing an emotional response, given the involvement of multimodal stimulation (i.e., the visual and auditory systems). We hypothesized that a direct comparison between the two methods would show greater efficacy of movies, compared with standardized slides, in eliciting emotions at both the subjective and neurophysiological levels. To this end, we compared these two methods of emotional stimulation in a group of 40 young adults (20 females). The electroencephalographic (EEG) Alpha rhythm (8–12 Hz) was recorded from 64 scalp sites while participants watched (in counterbalanced order across participants) two separate blocks of 45 slides and 45 clips. Each block included three groups of 15 validated stimuli classified as Erotic, Neutral and Fear content. Greater self-perceived arousal was found after the presentation of Fear and Erotic video clips compared with the same slide categories. sLORETA analysis showed a different lateralization pattern: slides induced decreased Alpha power (greater activation) in the left secondary visual area (Brodmann Area, BA, 18) to Erotic and Fear compared with the Neutral stimuli. Instead, video clips elicited reduced Alpha in the homologous right secondary visual area (BA 18), again to both Erotic and Fear contents compared with Neutral ones. Comparison of emotional stimuli showed smaller Alpha power to Erotic than to Fear stimuli in the left precuneus/posterior cingulate cortex (BA 7/31) for the slide condition, and in the left superior parietal lobule (BA 7) for the clip condition. This result matched the parallel analysis of the overlapping Mu rhythm (corresponding to the upper Alpha band) and can be interpreted as Mu/Alpha EEG suppression elicited by a greater motor action tendency to Erotic (approach motivation) compared with Fear (withdrawal motivation) stimuli.
Correlation analysis found lower Alpha in the left middle temporal gyrus (BA 21) associated with greater pleasantness to Erotic slides (r(38) = –0.62, p = 0.009), whereas lower Alpha in the right supramarginal/angular gyrus (BA 40/39) was associated with greater pleasantness to Neutral clips (r(38) = –0.69, p = 0.012). Results point to stronger emotion elicitation by movies vs. slides, but also to a specific involvement of the two hemispheres during emotional processing of slides vs. video clips, with a shift from the left to the right associative visual areas.
Affiliation(s)
- Zaira Romeo
- Department of General Psychology, University of Padova, Padua, Italy
- Francesca Fusina
- Department of General Psychology, University of Padova, Padua, Italy
- Padova Neuroscience Center, University of Padova, Padua, Italy
- Luca Semenzato
- Department of General Psychology, University of Padova, Padua, Italy
- Mario Bonato
- Department of General Psychology, University of Padova, Padua, Italy
- Padova Neuroscience Center, University of Padova, Padua, Italy
- Alessandro Angrilli
- Department of General Psychology, University of Padova, Padua, Italy
- Padova Neuroscience Center, University of Padova, Padua, Italy
- Chiara Spironelli
- Department of General Psychology, University of Padova, Padua, Italy
- Padova Neuroscience Center, University of Padova, Padua, Italy
- Correspondence: Chiara Spironelli
3. Inhibiting saccades to a social stimulus: a developmental study. Sci Rep 2020;10:4615. [PMID: 32165671] [PMCID: PMC7067843] [DOI: 10.1038/s41598-020-61188-8]
Abstract
Faces are an important source of social signals throughout the lifespan. In adults, they have prioritized access to the orienting system. Here we investigate when this effect emerges during development. We tested 139 children, early adolescents, adolescents and adults in a mixed pro- and anti-saccade task with faces, cars or noise patterns as visual targets. We observed an improvement in performance until about 15 years of age, replicating studies that used only meaningless stimuli as targets. Also, as previously reported, we observed that adults made more direction errors to faces than to abstract patterns and cars. The children showed this effect too with regard to noise patterns, but it was not specific, since performance for cars and faces did not differ. The adolescents, in contrast, made more errors for faces than for cars, but as many errors for noise patterns as for faces. In all groups, latencies for pro-saccades were faster towards faces. We discuss these findings with regard to the development of executive control in childhood and adolescence and the influence of social stimuli at different ages.
4. Pollux PM, Craddock M, Guo K. Gaze patterns in viewing static and dynamic body expressions. Acta Psychol (Amst) 2019;198:102862. [PMID: 31226535] [DOI: 10.1016/j.actpsy.2019.05.014]
Abstract
Evidence for the importance of bodily cues for emotion recognition has grown over the last two decades. Despite this growing literature, it remains underspecified how observers view whole bodies for body expression recognition. Here we investigate to what extent body viewing is face- and context-specific when participants categorize whole-body expressions in static (Experiment 1) and dynamic displays (Experiment 2). Eye-movement recordings showed that observers viewed the face exclusively when it was visible in dynamic displays, whereas viewing was distributed over head, torso and arms in static displays and in dynamic displays with faces not visible. The strong face bias in dynamic face-visible expressions suggests that viewing of the body responds flexibly to the informativeness of facial cues for emotion categorisation. However, when facial expressions are static or not visible, observers adopt a viewing strategy that includes all upper body regions. This viewing strategy is further influenced by subtle viewing biases directed towards emotion-specific body postures and movements to optimise recruitment of diagnostic information for emotion categorisation.
5. Children’s visual attention to emotional expressions varies with stimulus movement. J Exp Child Psychol 2018;172:13-24. [DOI: 10.1016/j.jecp.2018.03.001]
6. Koch FS, Sundqvist A, Herbert J, Tjus T, Heimann M. Changes in infant visual attention when observing repeated actions. Infant Behav Dev 2018;50:189-197. [PMID: 29407428] [DOI: 10.1016/j.infbeh.2018.01.003]
Abstract
Infants' early visual preferences for faces, and their observational learning abilities, are well established in the literature. The current study examines how infants' attention changes as they become increasingly familiar with a person and the actions that person is demonstrating. The looking patterns of 12- (n = 61) and 16-month-old infants (n = 29) were tracked while they watched videos of an adult presenting novel actions with four different objects three times. A face-to-action ratio in visual attention was calculated for each repetition and summarized as a mean across all videos. The face-to-action ratio increased with each action repetition, indicating that attention to the face relative to the action increased each additional time the action was demonstrated. Infants' prior familiarity with the object used was related to the face-to-action ratio in 12-month-olds, and initial looking behavior was related to the face-to-action ratio in the whole sample. Prior familiarity with the presenter, and infant gender and age, were not related to the face-to-action ratio. This study has theoretical implications for face preference and action observation in dynamic contexts.
Affiliation(s)
- Felix-Sebastian Koch
- Infant and Child Lab, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Anett Sundqvist
- Infant and Child Lab, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Jane Herbert
- School of Psychology, University of Wollongong, Australia
- Tomas Tjus
- Department of Psychology, University of Gothenburg, Sweden
- Mikael Heimann
- Infant and Child Lab, Department of Behavioural Sciences and Learning, Linköping University, Sweden
7.
Abstract
The human body is a highly familiar and socially very important object. Does this mean that the human body has a special status with respect to visual attention? In the current paper we tested whether people in natural scenes attract attention and “pop out” or, alternatively, are at least searched for more efficiently than targets of another category (machines). Observers in our study searched a visual array for dynamic or static scenes containing humans amidst scenes containing machines and vice versa. The arrays consisted of 2, 4, 6 or 8 scenes arranged in a circular array, with targets being present or absent. Search times increased with set size for dynamic and static human and machine targets, arguing against pop out. However, search for human targets was more efficient than for machine targets as indicated by shallower search slopes for human targets. Eye tracking further revealed that observers made more first fixations to human than to machine targets and that their on-target fixation durations were shorter for human compared to machine targets. In summary, our results suggest that searching for people in natural scenes is more efficient than searching for other categories even though people do not pop out.
Affiliation(s)
- Katja M. Mayer
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, United Kingdom
- Neural Mechanisms of Human Communication, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Quoc C. Vuong
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, United Kingdom
- Ian M. Thornton
- Department of Cognitive Science, Faculty of Media & Knowledge Sciences, University of Malta, Msida, Malta
8. Robbins RA, Coltheart M. The relative importance of heads, bodies, and movement to person recognition across development. J Exp Child Psychol 2015;138:1-14. [DOI: 10.1016/j.jecp.2015.04.006]
9. Gregory NJ, López B, Graham G, Marshman P, Bate S, Kargas N. Reduced gaze following and attention to heads when viewing a "live" social scene. PLoS One 2015;10:e0121792. [PMID: 25853239] [PMCID: PMC4390321] [DOI: 10.1371/journal.pone.0121792]
Abstract
Social stimuli are known to both attract and direct our attention, but most research on social attention has been conducted in highly controlled laboratory settings lacking in social context. This study examined the role of social context on viewing behaviour of participants whilst they watched a dynamic social scene, under three different conditions. In two social groups, participants believed they were watching a live webcam of other participants. The socially-engaged group believed they would later complete a group task with the people in the video, whilst the non-engaged group believed they would not meet the people in the scene. In a third condition, participants simply free-viewed the same video with the knowledge that it was pre-recorded, with no suggestion of a later interaction. Results demonstrated that the social context in which the stimulus was viewed significantly influenced viewing behaviour. Specifically, participants in the social conditions allocated less visual attention towards the heads of the actors in the scene and followed their gaze less than those in the free-viewing group. These findings suggest that by underestimating the impact of social context in social attention, researchers risk coming to inaccurate conclusions about how we attend to others in the real world.
Affiliation(s)
- Nicola Jean Gregory
- Psychology Research Centre, Faculty of Science and Technology, Bournemouth University, Poole, Dorset, United Kingdom
- Department of Psychology, University of Portsmouth, Portsmouth, Hampshire, United Kingdom
- Beatriz López
- Department of Psychology, University of Portsmouth, Portsmouth, Hampshire, United Kingdom
- Gemma Graham
- Department of Psychology, University of Portsmouth, Portsmouth, Hampshire, United Kingdom
- Paul Marshman
- Department of Psychology, University of Portsmouth, Portsmouth, Hampshire, United Kingdom
- Sarah Bate
- Psychology Research Centre, Faculty of Science and Technology, Bournemouth University, Poole, Dorset, United Kingdom
- Niko Kargas
- Department of Psychology, University of Portsmouth, Portsmouth, Hampshire, United Kingdom
10. Amaral CP, Simões MA, Castelo-Branco MS. Neural signals evoked by stimuli of increasing social scene complexity are detectable at the single-trial level and right lateralized. PLoS One 2015;10:e0121970. [PMID: 25807525] [PMCID: PMC4373781] [DOI: 10.1371/journal.pone.0121970]
Abstract
Classification of neural signals at the single-trial level, and the study of their relevance in affective and cognitive neuroscience, are still in their infancy. Here we investigated the neurophysiological correlates of conditions of increasing social scene complexity using 3D human models as targets of attention, which may also be important in autism research. Challenging single-trial statistical classification of EEG neural signals was attempted for detection of oddball stimuli with increasing social scene complexity. Stimuli had an oddball structure and were as follows: 1) flashed schematic eyes; 2) simple 3D faces flashed between averted and non-averted gaze (only eye position changing); 3) simple 3D faces flashed between averted and non-averted gaze (head and eye position changing); 4) an animated avatar alternating its gaze direction to the left and to the right (head and eye position); 5) an environment with 4 animated avatars, all of which change gaze and one of which is the target of attention. We found a late (> 300 ms) neurophysiological oddball correlate for all conditions irrespective of their complexity, as assessed by repeated-measures ANOVA. We attempted single-trial detection of this signal with automatic classifiers and obtained a significant balanced classification accuracy of around 79%, which is noteworthy given the amount of scene complexity. Lateralization analysis showed a specific right lateralization only for the more complex, realistic social scenes. In sum, complex ecological animations with social content elicit neurophysiological events which can be characterized even at the single-trial level, and these signals are right lateralized. These findings pave the way for studies in affective neuroscience based on complex social scenes, and the detectability at the single-trial level suggests the feasibility of brain-computer interfaces that can be applied to social cognition disorders such as autism.
Affiliation(s)
- Carlos P Amaral
- IBILI-Institute for Biomedical Imaging in Life Sciences, Faculty of Medicine, University of Coimbra, Coimbra, Portugal
- Marco A Simões
- IBILI-Institute for Biomedical Imaging in Life Sciences, Faculty of Medicine, University of Coimbra, Coimbra, Portugal
- Miguel S Castelo-Branco
- IBILI-Institute for Biomedical Imaging in Life Sciences, Faculty of Medicine, University of Coimbra, Coimbra, Portugal; ICNAS, Brain Imaging Network of Portugal, Coimbra, Portugal