1. Cracco E, Papeo L, Wiersema JR. Evidence for a role of synchrony but not common fate in the perception of biological group movements. Eur J Neurosci 2024; 60:3557-3571. PMID: 38706370; DOI: 10.1111/ejn.16356.
Abstract
Extensive research has shown that observers can efficiently extract summary information from groups of people. However, little is known about the cues that determine whether multiple people are represented as a social group or as independent individuals. Initial research on this topic focused primarily on static cues; here, we instead investigate dynamic cues. In two experiments with male and female human participants, we used EEG frequency tagging to investigate the influence of two fundamental Gestalt principles - synchrony and common fate - on the grouping of biological movements. In Experiment 1, brain responses coupled to four point-light figures walking together were enhanced when the figures moved in sync vs. out of sync, but only when they were presented upright. In contrast, we found no effect of movement direction (i.e., common fate). In Experiment 2, we ruled out that synchrony takes precedence over common fate by replicating the null effect of movement direction while keeping synchrony constant. These results suggest that synchrony plays an important role in the processing of biological group movements; the role of common fate is less clear and will require further research.
Affiliation(s)
- Emiel Cracco
- Department of Experimental Clinical and Health Psychology, Ghent University, Ghent, Belgium
- Liuba Papeo
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, Bron, France
- Jan R Wiersema
- Department of Experimental Clinical and Health Psychology, Ghent University, Ghent, Belgium
2. Williams EH, Chakrabarti B. The integration of head and body cues during the perception of social interactions. Q J Exp Psychol (Hove) 2024; 77:776-788. PMID: 37232389; PMCID: PMC10960325; DOI: 10.1177/17470218231181001.
Abstract
Humans spend a large proportion of time participating in social interactions. The ability to accurately detect and respond to human interactions is vital for social functioning, from early childhood through to older adulthood. This detection ability arguably relies on integrating sensory information from the interactants. Within the visual modality, directional information from a person's eyes, head, and body is integrated to inform where another person is looking and who they are interacting with. To date, social cue integration research has focused largely on the perception of isolated individuals. Across two experiments, we investigated whether observers integrate body information with head information when determining whether two people are interacting, and manipulated frame of reference (one of the interactants facing towards vs. away from the observer) and the eye-region visibility of the interactant. Results demonstrate that individuals integrate information from the body with head information when perceiving dyadic interactions, and that integration is influenced by the frame of reference and visibility of the eye-region. Interestingly, self-reported autistic traits were associated with a stronger influence of body information on interaction perception, but only when the eye-region was visible. This study investigated the recognition of dyadic interactions using whole-body stimuli while manipulating eye visibility and frame of reference, and provides crucial insights into social cue integration, as well as how autistic traits affect cue integration, during perception of social interactions.
Affiliation(s)
- Elin H Williams
- Centre for Autism, School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Bhismadev Chakrabarti
- Centre for Autism, School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- India Autism Centre, Kolkata, India
- Department of Psychology, Ashoka University, Sonipat, India
3. Charbonneau M, Curioni A, McEllin L, Strachan JWA. Flexible Cultural Learning Through Action Coordination. Perspect Psychol Sci 2024; 19:201-222. PMID: 37458767; DOI: 10.1177/17456916231182923.
Abstract
The cultural transmission of technical know-how has proven vital to the success of our species. The broad diversity of learning contexts and social configurations, and the various kinds of coordinated interactions they involve, speaks to our capacity to adapt flexibly and still succeed in transmitting vital knowledge. Although often recognized by ethnographers, the flexibility of cultural learning has so far received little attention in terms of cognitive mechanisms. We argue that a key feature of this flexibility is that both models and learners recruit cognitive mechanisms of action coordination to modulate their behavior contingently on the behavior of their partner, generating a process of mutual adaptation that supports the successful transmission of technical skills in diverse and fluctuating learning environments. We propose that the study of cultural learning would benefit from the experimental methods, results, and insights of joint-action research and, complementarily, that joint-action research could expand its scope by integrating a learning and cultural dimension. Bringing these two fields together promises to enrich our understanding of cultural learning, its contextual flexibility, and joint-action coordination.
Affiliation(s)
- Mathieu Charbonneau
- Africa Institute for Research in Economics and Social Sciences, Université Mohammed VI Polytechnique
- Luke McEllin
- Department of Cognitive Science, Central European University
4. Skripkauskaite S, Mihai I, Koldewyn K. Attentional bias towards social interactions during viewing of naturalistic scenes. Q J Exp Psychol (Hove) 2023; 76:2303-2311. PMID: 36377819; PMCID: PMC10503253; DOI: 10.1177/17470218221140879.
Abstract
Human visual attention is readily captured by the social information in scenes. Multiple studies have shown that social areas of interest (AOIs), such as faces and bodies, attract more attention than non-social AOIs (e.g., objects or background). However, whether this attentional bias is moderated by the presence (or absence) of a social interaction remains unclear. Here, the gaze of 70 young adults was tracked during free viewing of 60 naturalistic scenes. All photographs depicted two people, who were either interacting or not. Analyses of dwell time revealed that more attention was spent on human than on background AOIs in the interactive pictures; in non-interactive pictures, however, dwell time did not differ between AOI types. In the time-to-first-fixation analysis, humans always captured attention before other elements of the scene, although this difference was slightly larger in interactive than in non-interactive scenes. These findings confirm a bias towards social information in attentional capture and suggest that our attention prioritises social interactions beyond the mere presence of two people.
Affiliation(s)
- Simona Skripkauskaite
- School of Psychology, Bangor University, Bangor, UK
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Ioana Mihai
- School of Psychology, Bangor University, Bangor, UK
5. Barzy M, Morgan R, Cook R, Gray KLH. Are social interactions preferentially attended in real-world scenes? Evidence from change blindness. Q J Exp Psychol (Hove) 2023; 76:2293-2302. PMID: 36847458; PMCID: PMC10503233; DOI: 10.1177/17470218231161044.
Abstract
In change detection paradigms, changes to social or animate aspects of a scene are detected better and faster than changes to non-social or inanimate aspects. While previous studies have focused on how changes to individual faces/bodies are detected, individuals presented within a social interaction may be further prioritised, as the accurate interpretation of social interactions may convey a competitive advantage. Over three experiments, we explored change detection in complex real-world scenes, in which changes occurred by the removal of (a) an individual on their own, (b) an individual who was interacting with others, or (c) an object. In Experiment 1 (N = 50), we measured change detection for non-interacting individuals versus objects. In Experiment 2 (N = 49), we measured change detection for interacting individuals versus objects. Finally, in Experiment 3 (N = 85), we measured change detection for non-interacting versus interacting individuals. We also ran an inverted version of each task to determine whether differences were driven by low-level visual features. In Experiments 1 and 2, we found that changes to non-interacting and interacting individuals were detected better and more quickly than changes to objects. We also found inversion effects for both non-interaction and interaction changes, whereby they were detected more quickly when upright compared with inverted. No such inversion effect was seen for objects. This suggests that the high-level, social content of the images was driving the faster change detection for social versus object targets. Finally, we found that changes to individuals in non-interactions were detected faster than changes to individuals presented within an interaction. Our results replicate the social advantage often found in change detection paradigms. However, changes to individuals presented within social interaction configurations do not appear to be more quickly and easily detected than those in non-interacting configurations.
Affiliation(s)
- Mahsa Barzy
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Rachel Morgan
- School of Mathematics and Statistics, University of Reading, Reading, UK
- Richard Cook
- Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Department of Psychology, University of York, York, UK
- Katie LH Gray
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
6. Xu Z, Chen H, Wang Y. Invisible social grouping facilitates the recognition of individual faces. Conscious Cogn 2023; 113:103556. PMID: 37541010; DOI: 10.1016/j.concog.2023.103556.
Abstract
Emerging evidence suggests a specialized mechanism supporting perceptual grouping of social entities. However, the stage at which social grouping is processed remains unclear. Across four experiments, we showed that participants' recognition of a visible face was facilitated by the presence of a second, facing face (thus forming a social group), relative to a nonfacing face, even when the second face was invisible. Using a monocular/dichoptic paradigm, we further found that the social grouping facilitation effect occurred when the two faces were presented dichoptically to different eyes, but not monocularly to the same eye, suggesting that social grouping relies on binocular rather than monocular neural channels. These effects were not found for inverted face dyads, thereby ruling out the contribution of nonsocial factors. Taken together, these findings support an unconscious influence of social grouping on visual perception and suggest an early origin of social grouping processing in the visual pathway.
Affiliation(s)
- Zhenjie Xu
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou 310028, Zhejiang, China
- Hui Chen
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou 310028, Zhejiang, China
- Yingying Wang
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou 310028, Zhejiang, China
7. Goupil N, Hochmann JR, Papeo L. Intermodulation responses show integration of interacting bodies in a new whole. Cortex 2023; 165:129-140. PMID: 37279640; DOI: 10.1016/j.cortex.2023.04.013.
Abstract
People are often seen among other people, relating to and interacting with one another. Recent studies suggest that socially relevant spatial relations between bodies, such as the face-to-face positioning, or facingness, change the visual representation of those bodies, relative to when the same items appear unrelated (e.g., back-to-back) or in isolation. The current study addresses the hypothesis that face-to-face bodies give rise to a new whole, an integrated representation of individual bodies in a new perceptual unit. Using frequency-tagging EEG, we targeted, as a measure of integration, an EEG correlate of the non-linear combination of the neural responses to each of two individual bodies presented either face-to-face as if interacting, or back-to-back. During EEG recording, participants (N = 32) viewed two bodies, either face-to-face or back-to-back, flickering at two different frequencies (F1 and F2), yielding two distinctive responses in the EEG signal. Spectral analysis examined the responses at the intermodulation frequencies (nF1±mF2), signaling integration of individual responses. An anterior intermodulation response was observed for face-to-face bodies, but not for back-to-back bodies, nor for face-to-face chairs and machines. These results show that interacting bodies are integrated into a representation that is more than the sum of its parts. This effect, specific to body dyads, may mark an early step in the transformation towards an integrated representation of a social event, from the visual representation of individual participants in that event.
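The intermodulation logic described in this abstract can be made concrete with a small sketch. This is a hypothetical illustration, not the authors' analysis code: the tagging frequencies below are made-up example values, and a real analysis would also exclude intermodulation frequencies that coincide with harmonics of F1 or F2.

```python
# Hypothetical sketch: enumerate intermodulation (IM) frequencies n*F1 +/- m*F2.
# In frequency-tagging EEG, a response at an IM frequency signals non-linear
# integration of the two tagged inputs, beyond the sum of the individual responses.

def intermodulation_freqs(f1, f2, max_n=2, max_m=2):
    """Return the sorted positive IM frequencies n*f1 +/- m*f2 (n, m >= 1)."""
    freqs = set()
    for n in range(1, max_n + 1):
        for m in range(1, max_m + 1):
            for sign in (1, -1):
                f = n * f1 + sign * m * f2
                if f != 0:
                    # a negative difference frequency folds back to its magnitude
                    freqs.add(round(abs(f), 6))
    return sorted(freqs)

# With illustrative tagging rates F1 = 2.5 Hz and F2 = 3.0 Hz, the set includes
# the sum F1+F2 = 5.5 Hz and the difference F2-F1 = 0.5 Hz.
print(intermodulation_freqs(2.5, 3.0))
```

In this framework, spectral power concentrated at these derived frequencies, rather than only at F1 and F2 themselves, is taken as the signature of an integrated representation.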
Affiliation(s)
- Nicolas Goupil
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS), Université Claude Bernard Lyon 1, Bron, France
- Jean-Rémy Hochmann
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS), Université Claude Bernard Lyon 1, Bron, France
- Liuba Papeo
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS), Université Claude Bernard Lyon 1, Bron, France
8. Parovel G. Perceiving animacy from kinematics: visual specification of life-likeness in simple geometric patterns. Front Psychol 2023; 14:1167809. PMID: 37333577; PMCID: PMC10273680; DOI: 10.3389/fpsyg.2023.1167809.
Abstract
Since the seminal work of Heider and Simmel, and Michotte's research, many studies have shown that, under appropriate conditions, displays of simple geometric shapes elicit rich and vivid impressions of animacy and intentionality. The main purpose of this review is to emphasize the close relationship between kinematics and perceived animacy by showing which specific motion cues and spatiotemporal patterns automatically trigger visual perceptions of animacy and intentionality. The animacy phenomenon has been demonstrated to be fast, automatic, irresistible, and highly stimulus-driven. Moreover, there is growing evidence that animacy attributions, although usually associated with higher-level cognition and long-term memory, may reflect highly specialized visual processes that have evolved to support adaptive behaviors critical for survival. The hypothesis of a life-detector hardwired in the perceptual system is also supported by recent studies in early development and animal cognition, as well as by the "irresistibility" criterion, i.e., the persistence of animacy perception in adulthood even in the face of conflicting background knowledge. Finally, further support for the hypothesis that animacy is processed in the earliest stages of vision comes from recent experimental evidence on the interaction of animacy with other visual processes, such as visuomotor performance, visual memory, and speed estimation. In summary, the ability to detect animacy in all its nuances may be related to the visual system's sensitivity to those changes in kinematics - considered as a multifactorial relational system - that are associated with the presence of living beings, as opposed to the natural, inert behavior of physically constrained, form-invariant objects, or even mutually independent moving agents. This broad predisposition would allow the observer not only to identify the presence of animates and to distinguish them from inanimate objects, but also to quickly grasp their psychological, emotional, and social characteristics.
Affiliation(s)
- Giulia Parovel
- Department of Social, Political and Cognitive Sciences, University of Siena, Siena, Italy
9. Lu X, Dai A, Guo Y, Shen M, Gao Z. Is the social chunking of agent actions in working memory resource-demanding? Cognition 2022; 229:105249. PMID: 35961161; DOI: 10.1016/j.cognition.2022.105249.
Abstract
Retaining social interactions in working memory (WM) for further social activities is vital for a successful social life. Researchers have noted a social chunking phenomenon in WM: WM involuntarily uses the social interaction cues embedded in individual actions and chunks them as one unit. Our study is the first to examine whether social chunking in WM is an automatic process, by asking whether it is resource-demanding (automatic processes are held to require minimal resources). We did so by probing whether retaining agent interactions in WM as a chunk required more attention than retaining actions without interaction. We employed a WM change-detection task with actions containing social interaction cues as memory stimuli, and required participants only to memorize the individual actions. Because domain-general attention and object-based attention have been suggested to play a key role in retaining chunks in WM, a secondary task was inserted during the WM maintenance phase to consume these two types of attention. We replicated the finding that social chunking in WM requires no voluntary control (Experiments 1 and 2). Critically, we found substantial evidence that social chunking in WM did not require extra domain-general attention (Experiment 1) or object-based attention (Experiment 2). These findings imply that the social chunking of agent actions in WM is not resource-demanding, supporting an automatic view of social chunking in WM.
Affiliation(s)
- Xiqian Lu
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, China
- Alessandro Dai
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, China
- Yang Guo
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, China
- Mowei Shen
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, China
- Zaifeng Gao
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, China
10.
Affiliation(s)
- Ilenia Paparella
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, Lyon, France
- Liuba Papeo
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, Lyon, France
11. Gehdu BK, Gray KLH, Cook R. Impaired grouping of ambient facial images in autism. Sci Rep 2022; 12:6665. PMID: 35461345; PMCID: PMC9035147; DOI: 10.1038/s41598-022-10630-0.
Abstract
Ambient facial images depict individuals from a variety of viewing angles, with a range of poses and expressions, under different lighting conditions. Exposure to ambient images is thought to help observers form robust representations of the individuals depicted. Previous results suggest that autistic people may derive less benefit from exposure to this exemplar variation than non-autistic people. To date, however, it remains unclear why. One possibility is that autistic individuals possess atypical perceptual learning mechanisms. Alternatively, however, the learning mechanisms may be intact, but receive low-quality perceptual input from face encoding processes. To examine this second possibility, we investigated whether autistic people are less able to group ambient images of unfamiliar individuals based on their identity. Participants were asked to identify which of four ambient images depicted an oddball identity. Each trial assessed the grouping of different facial identities, thereby preventing face learning across trials. As such, the task assessed participants’ ability to group ambient images of unfamiliar people. In two experiments we found that matched non-autistic controls correctly identified the oddball identities more often than our autistic participants. These results imply that poor face learning from variation by autistic individuals may well be attributable to low-quality perceptual input, not aberrant learning mechanisms.
Affiliation(s)
- Bayparvah Kaur Gehdu
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, UK
- Katie L H Gray
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Richard Cook
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, UK
- Department of Psychology, University of York, York, UK
12. Oomen D, Cracco E, Brass M, Wiersema JR. EEG frequency tagging evidence of social interaction recognition. Soc Cogn Affect Neurosci 2022; 17:1044-1053. PMID: 35452523; PMCID: PMC9629471; DOI: 10.1093/scan/nsac032.
Abstract
Previous neuroscience studies have provided important insights into the neural processing of third-party social interaction recognition. Unfortunately, however, the methods they used are limited by a high susceptibility to noise. Electroencephalogram (EEG) frequency tagging is a promising technique to overcome this limitation, as it is known for its high signal-to-noise ratio. So far, EEG frequency tagging has mainly been used with simple stimuli (e.g., faces), but more complex stimuli are needed to study social interaction recognition. It therefore remained unknown whether this technique could be exploited to study third-party social interaction recognition. To address this question, we first created and validated a wide variety of stimuli depicting social scenes with and without social interaction, and then used these stimuli in an EEG frequency tagging experiment. As hypothesized, we found enhanced neural responses to social scenes with social interaction compared to social scenes without social interaction. This effect appeared over lateral occipitoparietal electrodes and was strongest over the right hemisphere. Hence, EEG frequency tagging can measure the process of inferring social interaction from varying contextual information. The technique is particularly valuable for research on populations that require a high signal-to-noise ratio, such as infants, young children, and clinical populations.
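The signal-to-noise advantage described in this abstract is commonly quantified by comparing the spectral amplitude at the tagged frequency with the amplitude of the surrounding frequency bins. The sketch below is a hypothetical illustration of that computation, not the authors' pipeline; the function name, parameter choices, and the simulated signal are our own.

```python
import numpy as np

def snr_at_frequency(signal, fs, f_target, n_neighbors=10, skip=1):
    """Amplitude at the tagged frequency divided by the mean amplitude of
    nearby bins, excluding `skip` bins on each side of the target bin."""
    n = len(signal)
    amp = np.abs(np.fft.rfft(signal)) / n              # single-sided amplitude spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    target = int(np.argmin(np.abs(freqs - f_target)))  # bin closest to f_target
    left = amp[max(target - skip - n_neighbors, 0):target - skip]
    right = amp[target + skip + 1:target + skip + 1 + n_neighbors]
    return amp[target] / np.concatenate([left, right]).mean()

# Simulated example: a 6 Hz "tagged" response buried in broadband noise.
fs, dur = 256, 4.0
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(0)
sig = np.sin(2 * np.pi * 6.0 * t) + 0.5 * rng.standard_normal(t.size)
print(snr_at_frequency(sig, fs, 6.0))  # large SNR at the tagged frequency
```

Because the periodic response concentrates in a single narrow bin while noise spreads across the spectrum, even a response invisible in the raw trace yields a clear peak in this ratio, which is the property the abstract highlights for infant and clinical research.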
Affiliation(s)
- Danna Oomen
- Department of Experimental Clinical and Health Psychology, Ghent University, Henri Dunantlaan 2, Ghent B-9000, Belgium
- Emiel Cracco
- Department of Experimental Clinical and Health Psychology, Ghent University, Ghent B-9000, Belgium
- EXPLORA, Ghent University, Ghent B-9000, Belgium
- Marcel Brass
- Department of Experimental Psychology, Ghent University, Ghent B-9000, Belgium
- School of Mind and Brain/Department of Psychology, Humboldt Universität zu Berlin, Berlin 10099, Germany
- EXPLORA, Ghent University, Ghent B-9000, Belgium
- Jan R Wiersema
- Department of Experimental Clinical and Health Psychology, Ghent University, Ghent B-9000, Belgium
- EXPLORA, Ghent University, Ghent B-9000, Belgium
13. Sensitivity to orientation is not unique to social attention cueing. Sci Rep 2022; 12:5059. PMID: 35322128; PMCID: PMC8943057; DOI: 10.1038/s41598-022-09011-4.
Abstract
It is well-established that faces and bodies cue observers' visuospatial attention; for example, target items are found faster when their location is cued by the directionality of a task-irrelevant face or body. Previous results suggest that these cueing effects are greatly reduced when the orientation of the task-irrelevant stimulus is inverted. It remains unclear, however, whether sensitivity to orientation is a unique hallmark of "social" attention cueing or a more general phenomenon. In the present study, we sought to determine whether the cueing effects produced by common objects (power drills, desk lamps, desk fans, cameras, bicycles, and cars) are also attenuated by inversion. When cueing stimuli were shown upright, all six object classes produced highly significant cueing effects. When shown upside-down, however, the results were mixed. Some of the cueing effects (e.g., those induced by bicycles and cameras) behaved like faces and bodies: they were greatly reduced by orientation inversion. However, other cueing effects (e.g., those induced by cars and power drills) were insensitive to orientation: upright and inverted exemplars produced significant cueing effects of comparable strength. We speculate that (i) cueing effects depend on the rapid identification of stimulus directionality, and (ii) some cueing effects are sensitive to orientation because upright exemplars of those categories afford faster processing of directionality than inverted exemplars. Contrary to the view that attenuation-by-inversion is a unique hallmark of social attention, our findings indicate that some non-social cueing effects also exhibit sensitivity to orientation.
14. Tsantani M, Podgajecka V, Gray KLH, Cook R. How does the presence of a surgical face mask impair the perceived intensity of facial emotions? PLoS One 2022; 17:e0262344. PMID: 35025948; PMCID: PMC8758043; DOI: 10.1371/journal.pone.0262344.
Abstract
The use of surgical-type face masks has become increasingly common during the COVID-19 pandemic. Recent findings suggest that it is harder to categorise the facial expressions of masked faces than of unmasked faces. To date, studies of the effects of mask-wearing on emotion recognition have used categorisation paradigms: authors have presented facial expression stimuli and examined participants' ability to attach the correct label (e.g., happiness, disgust). While the ability to categorise particular expressions is important, this approach overlooks the fact that expression intensity is also informative during social interaction. For example, when predicting an interactant's future behaviour, it is useful to know whether they are slightly fearful or terrified, contented or very happy, slightly annoyed or angry. Moreover, because categorisation paradigms force observers to pick a single label to describe their percept, any additional dimensionality within observers' interpretation is lost. In the present study, we adopted a complementary emotion-intensity rating paradigm to study the effects of mask-wearing on expression interpretation. In an online experiment with 120 participants (82 female), we investigated how the presence of face masks affects the perceived emotional profile of prototypical expressions of happiness, sadness, anger, fear, disgust, and surprise. For each of these facial expressions, we measured the perceived intensity of all six emotions. We found that the perceived intensity of intended emotions (i.e., the emotion that the actor intended to convey) was reduced by the presence of a mask for all expressions except for anger. Additionally, when viewing all expressions except surprise, masks increased the perceived intensity of non-intended emotions (i.e., emotions that the actor did not intend to convey). Intensity ratings were unaffected by presentation duration (500 ms vs. 3000 ms) or attitudes towards mask wearing. These findings shed light on the ambiguity that arises when interpreting the facial expressions of masked faces.
Affiliation(s)
- Maria Tsantani
- Department of Psychological Sciences, Birkbeck, University of London, London, United Kingdom
- Vita Podgajecka
- Department of Psychological Sciences, Birkbeck, University of London, London, United Kingdom
- Katie L. H. Gray
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, United Kingdom
- Richard Cook
- Department of Psychological Sciences, Birkbeck, University of London, London, United Kingdom
- Department of Psychology, University of York, York, United Kingdom
|
15
|
Goupil N, Papeo L, Hochmann J. Visual perception grounding of social cognition in preverbal infants. INFANCY 2022; 27:210-231. [DOI: 10.1111/infa.12453] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2021] [Revised: 11/22/2021] [Accepted: 01/02/2022] [Indexed: 11/28/2022]
Affiliation(s)
- Nicolas Goupil, Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, Bron, France
- Liuba Papeo, Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, Bron, France
- Jean-Rémy Hochmann, Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, Bron, France

16
Flavell JC, Over H, Vestner T, Cook R, Tipper SP. Rapid detection of social interactions is the result of domain general attentional processes. PLoS One 2022; 17:e0258832. [PMID: 35030168 PMCID: PMC8759659 DOI: 10.1371/journal.pone.0258832] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2021] [Accepted: 10/06/2021] [Indexed: 11/19/2022] Open
Abstract
Visual search displays of interacting and non-interacting pairs have demonstrated that the detection of social interactions is facilitated. For example, two people facing each other are found faster than two people with their backs turned: an effect that may reflect social binding. However, recent work has shown the same effects with non-social arrow stimuli, where toward-facing arrows are detected faster than away-facing arrows. This latter work suggests that the primary mechanism is an attention-orienting process driven by basic low-level direction cues. However, evidence for lower-level attentional processes does not preclude a potential additional role of higher-level social processes. Therefore, in this series of experiments we test this idea further by directly comparing basic visual features that orient attention with representations of socially interacting individuals. Results confirm the potency of orienting of attention via low-level visual features in the detection of interacting objects. In contrast, there is little evidence for the representation of social interactions influencing initial search performance.
Affiliation(s)
- Jonathan C. Flavell, Department of Psychology, University of York, York, North Yorkshire, United Kingdom
- Harriet Over, Department of Psychology, University of York, York, North Yorkshire, United Kingdom
- Tim Vestner, Department of Psychology, Birkbeck, University of London, London, Greater London, United Kingdom
- Richard Cook, Department of Psychology, Birkbeck, University of London, London, Greater London, United Kingdom
- Steven P. Tipper, Department of Psychology, University of York, York, North Yorkshire, United Kingdom

17
The spatial distance compression effect is due to social interaction and not mere configuration. Psychon Bull Rev 2021; 29:828-836. [PMID: 34918281 DOI: 10.3758/s13423-021-02045-1] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/29/2021] [Indexed: 11/08/2022]
Abstract
In recent years, there has been a surge of interest in perception, evaluation, and memory for social interactions from a third-person perspective. One intriguing finding is a spatial distance compression effect when target dyads are facing each other. Specifically, face-to-face dyads are remembered as being spatially closer than back-to-back dyads. There is a vibrant debate about the mechanism behind this effect, and two hypotheses have been proposed. According to the social interaction hypothesis, face-to-face dyads engage a binding process that represents them as a social unit, which compresses the perceived distance between them. In contrast, the configuration hypothesis holds that the effect is produced by the front-to-front configuration of the two visual targets. In the present research we sought to test these accounts. In Experiment 1 we successfully replicated the distance compression effect with two upright faces that were facing each other, but not with inverted faces. In contrast, we found no distance compression effect with three types of nonsocial stimuli: arrows (Experiment 2a), fans (Experiment 2b), and cars (Experiment 3). In Experiment 4, we replicated this effect with another social stimulus: upright bodies. Taken together, these results provide strong support for the social interaction hypothesis.
18
The neural coding of face and body orientation in occipitotemporal cortex. Neuroimage 2021; 246:118783. [PMID: 34879251 DOI: 10.1016/j.neuroimage.2021.118783] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2021] [Revised: 11/09/2021] [Accepted: 12/04/2021] [Indexed: 11/20/2022] Open
Abstract
Face and body orientation convey important information for us to understand other people's actions, intentions and social interactions. It has been shown that several occipitotemporal areas respond differently to faces or bodies of different orientations. However, whether face and body orientation are processed by partially overlapping or completely separate brain networks remains unclear, as the neural coding of face and body orientation is often investigated separately. Here, we recorded participants' brain activity using fMRI while they viewed faces and bodies shown from three different orientations, while attending to either orientation or identity information. Using multivoxel pattern analysis we investigated which brain regions process face and body orientation respectively, and which regions encode both face and body orientation in a stimulus-independent manner. We found that patterns of neural responses evoked by different stimulus orientations in the occipital face area, extrastriate body area, lateral occipital complex and right early visual cortex could generalise across faces and bodies, suggesting a stimulus-independent encoding of person orientation in occipitotemporal cortex. This finding was consistent across functionally defined regions of interest and a whole-brain searchlight approach. The fusiform face area responded to face but not body orientation, suggesting that orientation responses in this area are face-specific. Moreover, neural responses to orientation were remarkably consistent regardless of whether participants attended to the orientation of faces and bodies or not. Together, these results demonstrate that face and body orientation are processed in a partially overlapping brain network, with a stimulus-independent neural code for face and body orientation in occipitotemporal cortex.
19
Cracco E, Lee H, van Belle G, Quenon L, Haggard P, Rossion B, Orgs G. EEG Frequency Tagging Reveals the Integration of Form and Motion Cues into the Perception of Group Movement. Cereb Cortex 2021; 32:2843-2857. [PMID: 34734972 PMCID: PMC9247417 DOI: 10.1093/cercor/bhab385] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2021] [Revised: 09/06/2021] [Accepted: 09/07/2021] [Indexed: 11/14/2022] Open
Abstract
The human brain has dedicated mechanisms for processing other people’s movements. Previous research has revealed how these mechanisms contribute to perceiving the movements of individuals but has left open how we perceive groups of people moving together. Across three experiments, we test whether movement perception depends on the spatiotemporal relationships among the movements of multiple agents. In Experiment 1, we combine EEG frequency tagging with apparent human motion and show that posture and movement perception can be dissociated at harmonically related frequencies of stimulus presentation. We then show that movement but not posture processing is enhanced when observing multiple agents move in synchrony. Movement processing was strongest for fluently moving synchronous groups (Experiment 2) and was perturbed by inversion (Experiment 3). Our findings suggest that processing group movement relies on binding body postures into movements and individual movements into groups. Enhanced perceptual processing of movement synchrony may form the basis for higher order social phenomena such as group alignment and its social consequences.
Affiliation(s)
- Emiel Cracco, Department of Experimental Psychology, Ghent University, 9000 Ghent, Belgium
- Haeeun Lee, Department of Psychology, Goldsmiths, University of London, SE14 6NW London, UK
- Goedele van Belle, Psychological Sciences Research Institute, Université Catholique de Louvain, 1340 Ottignies-Louvain-la-Neuve, Belgium
- Lisa Quenon, Institute of Neuroscience, Université Catholique de Louvain, 1000 Brussels, Belgium
- Patrick Haggard, Institute of Cognitive Neuroscience, University College London, WC1N 3AZ London, UK
- Bruno Rossion, Université de Lorraine, CNRS, CRAN, F-54000 Nancy, France; CHRU-Nancy, Service de Neurologie, F-54000 Nancy, France
- Guido Orgs, Department of Psychology, Goldsmiths, University of London, SE14 6NW London, UK

20
Tsantani M, Vestner T, Cook R. The Twenty Item Prosopagnosia Index (PI20) provides meaningful evidence of face recognition impairment. ROYAL SOCIETY OPEN SCIENCE 2021; 8:202062. [PMID: 34737872 PMCID: PMC8564608 DOI: 10.1098/rsos.202062] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/12/2020] [Accepted: 10/06/2021] [Indexed: 05/04/2023]
Abstract
The Twenty Item Prosopagnosia Index (PI20) is a self-report questionnaire used for quantifying prosopagnosic traits. This scale is intended to help researchers identify cases of developmental prosopagnosia by providing standardized self-report evidence to complement diagnostic evidence obtained from objective computer-based tasks. In order to respond appropriately to items, prosopagnosics must have some insight that their face recognition is well below average, while non-prosopagnosics need to understand that their relative face recognition ability falls within the typical range. There has been considerable debate about whether participants have the necessary insight into their face recognition abilities to respond appropriately. In the present study, we sought to determine whether the PI20 provides meaningful evidence of face recognition impairment. In keeping with the intended use of the instrument, we used PI20 scores to identify two groups: high-PI20 scorers (those with self-reported face recognition difficulties) and low-PI20 scorers (those with no self-reported face recognition difficulties). We found that participant groups distinguished on the basis of PI20 scores clearly differed in terms of their mean performance on objective measures of face recognition ability. We also found that high-PI20 scorers were more likely to achieve levels of face recognition accuracy associated with developmental prosopagnosia.
Affiliation(s)
- Maria Tsantani, Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Tim Vestner, Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Richard Cook, Department of Psychological Sciences, Birkbeck, University of London, London, UK

21
Faust KM, Carouso-Peck S, Elson MR, Goldstein MH. The Origins of Social Knowledge in Altricial Species. ANNUAL REVIEW OF DEVELOPMENTAL PSYCHOLOGY 2021; 2:225-246. [PMID: 34553142 DOI: 10.1146/annurev-devpsych-051820-121446] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Human infants are altricial, born relatively helpless and dependent on parental care for an extended period of time. This protracted time to maturity is typically regarded as a necessary epiphenomenon of evolving and developing large brains. We argue that extended altriciality is itself adaptive, as a prolonged necessity for parental care allows extensive social learning to take place. Human adults possess a suite of complex social skills, such as language, empathy, morality, and theory of mind. Rather than requiring hardwired, innate knowledge of social abilities, evolution has outsourced the necessary information to parents. Critical information for species-typical development, such as species recognition, may originate from adults rather than from genes, aided by underlying perceptual biases for attending to social stimuli and capacities for statistical learning of social actions. We draw on extensive comparative findings to illustrate that, across species, altriciality functions as an adaptation for social learning from caregivers.
Affiliation(s)
- Katerina M Faust, Department of Psychology, Cornell University, Ithaca, New York 14853, USA
- Mary R Elson, Department of Psychology, Cornell University, Ithaca, New York 14853, USA

22
Vestner T, Over H, Gray KLH, Tipper SP, Cook R. Searching for people: Non-facing distractor pairs hinder the visual search of social scenes more than facing distractor pairs. Cognition 2021; 214:104737. [PMID: 33901835 PMCID: PMC8346951 DOI: 10.1016/j.cognition.2021.104737] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2020] [Revised: 03/05/2021] [Accepted: 04/12/2021] [Indexed: 11/24/2022]
Abstract
There is growing interest in the visual and attentional processes recruited when human observers view social scenes containing multiple people. Findings from visual search paradigms have helped shape this emerging literature. Previous research has established that, when hidden amongst pairs of individuals facing in the same direction (leftwards or rightwards), pairs of individuals arranged front-to-front are found faster than pairs of individuals arranged back-to-back. Here, we describe a second, closely-related effect with important theoretical implications. When searching for a pair of individuals facing in the same direction (leftwards or rightwards), target dyads are found faster when hidden amongst distractor pairs arranged front-to-front, than when hidden amongst distractor pairs arranged back-to-back. This distractor arrangement effect was also obtained with target and distractor pairs constructed from arrows and types of common objects that cue visuospatial attention. These findings argue against the view that pairs of people arranged front-to-front capture exogenous attention due to a domain-specific orienting mechanism. Rather, it appears that salient direction cues (e.g., gaze direction, body orientation, arrows) hamper systematic search and impede efficient interpretation, when distractor pairs are arranged back-to-back.
Affiliation(s)
- Tim Vestner, Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Harriet Over, Department of Psychology, University of York, York, UK
- Katie L H Gray, School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Richard Cook, Department of Psychological Sciences, Birkbeck, University of London, London, UK; Department of Psychology, University of York, York, UK

23
Ramamoorthy N, Jamieson O, Imaan N, Plaisted-Grant K, Davis G. Enhanced detection of gaze toward an object: Sociocognitive influences on visual search. Psychon Bull Rev 2021; 28:494-502. [PMID: 33174087 PMCID: PMC8062376 DOI: 10.3758/s13423-020-01841-5] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 10/26/2020] [Indexed: 11/17/2022]
Abstract
Another person's gaze direction is a rich source of social information, especially eyes gazing toward prominent or relevant objects. To guide attention to these important stimuli, visual search mechanisms may incorporate sophisticated coding of eye-gaze and its spatial relationship to other objects. Alternatively, any guidance might reflect the action of simple perceptual 'templates' tuned to visual features of socially relevant objects, or intrinsic salience of direct-gazing eyes for human vision. Previous findings that direct gaze (toward oneself) is prioritised over averted gaze do not distinguish between these accounts. To resolve this issue, we compared search for eyes gazing toward a prominent object versus gazing away, finding more efficient search for eyes 'gazing toward' the object. This effect was most clearly seen in target-present trials when gaze was task-relevant. Visual search mechanisms appear to specify gazer-object relations, a computational building-block of theory of mind.
Affiliation(s)
- Oliver Jamieson, Department of Psychology, University of Cambridge, Cambridge, UK
- Nahiyan Imaan, Department of Psychology, University of Cambridge, Cambridge, UK
- Greg Davis, Department of Psychology, University of Cambridge, Cambridge, UK

24
Bellot E, Abassi E, Papeo L. Moving Toward versus Away from Another: How Body Motion Direction Changes the Representation of Bodies and Actions in the Visual Cortex. Cereb Cortex 2021; 31:2670-2685. [PMID: 33401307 DOI: 10.1093/cercor/bhaa382] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2020] [Revised: 11/05/2020] [Accepted: 11/25/2020] [Indexed: 11/12/2022] Open
Abstract
Representing multiple agents and their mutual relations is a prerequisite to understand social events such as interactions. Using functional magnetic resonance imaging on human adults, we show that visual areas dedicated to body form and body motion perception contribute to processing social events, by holding the representation of multiple moving bodies and encoding the spatial relations between them. In particular, seeing animations of human bodies facing and moving toward (vs. away from) each other increased neural activity in the body-selective cortex [extrastriate body area (EBA)] and posterior superior temporal sulcus (pSTS) for biological motion perception. In those areas, representation of body postures and movements, as well as of the overall scene, was more accurate for facing body (vs. nonfacing body) stimuli. Effective connectivity analysis with dynamic causal modeling revealed increased coupling between EBA and pSTS during perception of facing body stimuli. The perceptual enhancement of multiple-body scenes featuring cues of interaction (i.e., face-to-face positioning, spatial proximity, and approaching signals) was supported by the participants' better performance in a recognition task with facing body versus nonfacing body stimuli. Thus, visuospatial cues of interaction in multiple-person scenarios affect the perceptual representation of body and body motion and, by promoting functional integration, streamline the process from body perception to action representation.
Affiliation(s)
- Emmanuelle Bellot, Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, 69675 Bron, France
- Etienne Abassi, Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, 69675 Bron, France
- Liuba Papeo, Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, 69675 Bron, France

25
Hafri A, Firestone C. The Perception of Relations. Trends Cogn Sci 2021; 25:475-492. [PMID: 33812770 DOI: 10.1016/j.tics.2021.01.006] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2020] [Revised: 01/05/2021] [Accepted: 01/18/2021] [Indexed: 11/16/2022]
Abstract
The world contains not only objects and features (red apples, glass bowls, wooden tables), but also relations holding between them (apples contained in bowls, bowls supported by tables). Representations of these relations are often developmentally precocious and linguistically privileged; but how does the mind extract them in the first place? Although relations themselves cast no light onto our eyes, a growing body of work suggests that even very sophisticated relations display key signatures of automatic visual processing. Across physical, eventive, and social domains, relations such as support, fit, cause, chase, and even socially interact are extracted rapidly, are impossible to ignore, and influence other perceptual processes. Sophisticated and structured relations are not only judged and understood, but also seen - revealing surprisingly rich content in visual perception itself.
Affiliation(s)
- Alon Hafri, Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Cognitive Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Chaz Firestone, Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Cognitive Science, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Philosophy, Johns Hopkins University, Baltimore, MD 21218, USA

26
Bunce C, Gray KLH, Cook R. The perception of interpersonal distance is distorted by the Müller-Lyer illusion. Sci Rep 2021; 11:494. [PMID: 33436801 PMCID: PMC7803751 DOI: 10.1038/s41598-020-80073-y] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2020] [Accepted: 12/14/2020] [Indexed: 11/10/2022] Open
Abstract
There is growing interest in how human observers perceive social scenes containing multiple people. Interpersonal distance is a critical feature when appraising these scenes; proxemic cues are used by observers to infer whether two people are interacting, the nature of their relationship, and the valence of their current interaction. Presently, however, remarkably little is known about how interpersonal distance is encoded within the human visual system. Here we show that the perception of interpersonal distance is distorted by the Müller-Lyer illusion. Participants perceived the distance between two target points to be compressed or expanded depending on whether face pairs were positioned inside or outside the to-be-judged interval. This illusory bias was found to be unaffected by manipulations of face direction. These findings aid our understanding of how human observers perceive interpersonal distance and may inform theoretical accounts of the Müller-Lyer illusion.
Affiliation(s)
- Carl Bunce, Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, UK
- Katie L H Gray, School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Richard Cook, Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, UK

27
Vestner T, Gray KLH, Cook R. Visual search for facing and non-facing people: The effect of actor inversion. Cognition 2020; 208:104550. [PMID: 33360076 DOI: 10.1016/j.cognition.2020.104550] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2020] [Revised: 12/08/2020] [Accepted: 12/11/2020] [Indexed: 10/22/2022]
Abstract
In recent years, there has been growing interest in how human observers perceive, attend to, and recall, social interactions viewed from third-person perspectives. One of the interesting findings to emerge from this new literature is the search advantage for facing dyads. When hidden amongst pairs of individuals facing in the same direction, pairs of individuals arranged front-to-front are found faster in visual search tasks than pairs of individuals arranged back-to-back. Interestingly, the search advantage for facing dyads appears to be sensitive to the orientation of the people depicted. While front-to-front target pairs are found faster than back-to-back targets when target and distractor pairings are shown upright, front-to-front and back-to-back targets are found equally quickly when pairings are shown upside-down. In the present study, we sought to better understand why the search advantage for facing dyads is sensitive to the orientation of the people depicted. To begin, we show that the orientation sensitivity of the search advantage is seen with dyads constructed from faces only, and from bodies with the head and face occluded. We replicate these effects using two different visual search paradigms. We go on to show that individual faces and bodies, viewed in profile, produce strong attentional cueing effects when shown upright, but not when presented upside-down. Together with recent evidence that arrows arranged front-to-front also produce the search advantage for facing dyads, these findings support the view that the search advantage is a by-product of the ability of constituent elements to direct observers' visuo-spatial attention.
Affiliation(s)
- Tim Vestner, Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Katie L H Gray, School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Richard Cook, Department of Psychological Sciences, Birkbeck, University of London, London, UK; Department of Psychology, University of York, York, UK

28