1. Cracco E, Papeo L, Wiersema JR. Evidence for a role of synchrony but not common fate in the perception of biological group movements. Eur J Neurosci 2024; 60:3557-3571. PMID: 38706370. DOI: 10.1111/ejn.16356.
Abstract
Extensive research has shown that observers can efficiently extract summary information from groups of people. However, little is known about the cues that determine whether multiple people are represented as a social group or as independent individuals. Initial research on this topic has primarily focused on the role of static cues. Here, we instead investigate the role of dynamic cues. In two experiments with male and female human participants, we use EEG frequency tagging to investigate the influence of two fundamental Gestalt principles, synchrony and common fate, on the grouping of biological movements. In Experiment 1, we find that brain responses coupled to four point-light figures walking together are enhanced when they move in sync vs. out of sync, but only when they are presented upright. In contrast, we find no effect of movement direction (i.e., common fate). In Experiment 2, we rule out that synchrony takes precedence over common fate by replicating the null effect of movement direction while keeping synchrony constant. These results suggest that synchrony plays an important role in the processing of biological group movements. In contrast, the role of common fate is less clear and will require further research.
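The frequency-tagging logic used here can be sketched numerically: a brain response "coupled" to a stimulus flickering at a known rate appears as a peak at that frequency in the EEG amplitude spectrum, commonly quantified as a signal-to-noise ratio against neighboring frequency bins. The sampling rate, tagging frequency, and SNR computation below are illustrative assumptions, not the authors' actual pipeline:

```python
import numpy as np

def snr_at_frequency(signal, fs, f_target, n_neighbors=10):
    """SNR at a tagged frequency: spectral amplitude at f_target divided
    by the mean amplitude of the surrounding frequency bins."""
    n = len(signal)
    amps = np.abs(np.fft.rfft(signal)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    idx = int(np.argmin(np.abs(freqs - f_target)))
    lo, hi = max(idx - n_neighbors, 0), min(idx + n_neighbors + 1, len(amps))
    neighbors = np.r_[amps[lo:idx], amps[idx + 1:hi]]  # exclude target bin
    return amps[idx] / neighbors.mean()

# Simulated "tagged" response: a 1.2 Hz oscillation embedded in noise
fs = 250                        # sampling rate in Hz (assumed)
t = np.arange(0, 20, 1 / fs)    # 20 s of data
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 1.2 * t) + 0.5 * rng.standard_normal(t.size)
print(snr_at_frequency(eeg, fs, 1.2) > 5)  # the tagged frequency stands out
```

In practice such SNR values are computed per condition (in sync vs. out of sync) and compared; the comparison itself, not the raw spectrum, carries the grouping effect.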
Affiliation(s)
- Emiel Cracco: Department of Experimental Clinical and Health Psychology, Ghent University, Ghent, Belgium
- Liuba Papeo: Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, Bron, France
- Jan R Wiersema: Department of Experimental Clinical and Health Psychology, Ghent University, Ghent, Belgium
2. Goupil N, Rayson H, Serraille É, Massera A, Ferrari PF, Hochmann JR, Papeo L. Visual Preference for Socially Relevant Spatial Relations in Humans and Monkeys. Psychol Sci 2024; 35:681-693. PMID: 38683657. DOI: 10.1177/09567976241242995.
Abstract
As a powerful social signal, a body, face, or gaze facing toward oneself holds an individual's attention. We asked whether, going beyond an egocentric stance, facingness between others has a similar effect and why. In a preferential-looking time paradigm, human adults showed a spontaneous preference to look at two bodies facing toward (vs. away from) each other (Experiment 1a, N = 24). Moreover, facing dyads were rated higher on social semantic dimensions, showing that facingness adds social value to stimuli (Experiment 1b, N = 138). The same visual preference was found in juvenile macaque monkeys (Experiment 2, N = 21). Finally, on the human developmental timescale, this preference emerged by 5 years of age, although infants as young as 7 months already discriminate visual scenes on the basis of body positioning (Experiment 3, N = 120). We discuss how the preference for facing dyads, shared by human adults, young children, and macaques, can signal a new milestone in social cognition development, supporting processing and learning from third-party social interactions.
Affiliation(s)
- Nicolas Goupil, Holly Rayson, Émilie Serraille, Alice Massera, Pier Francesco Ferrari, Jean-Rémy Hochmann, and Liuba Papeo: Institut des Sciences Cognitives Marc Jeannerod, Bron, France; Centre National de la Recherche Scientifique (CNRS), Paris, France; and Université Claude Bernard Lyon 1
3. Lee Masson H, Chen J, Isik L. A shared neural code for perceiving and remembering social interactions in the human superior temporal sulcus. Neuropsychologia 2024; 196:108823. PMID: 38346576. DOI: 10.1016/j.neuropsychologia.2024.108823.
Abstract
Recognizing and remembering social information is a crucial cognitive skill. Neural patterns in the superior temporal sulcus (STS) support our ability to perceive others' social interactions. However, despite the prominence of social interactions in memory, the neural basis of remembering social interactions remains unknown. To fill this gap, we investigated the brain mechanisms underlying memory of others' social interactions during free spoken recall of a naturalistic movie. By applying machine learning-based fMRI encoding analyses to densely labeled movie and recall data, we found that a subset of the STS activity evoked by viewing social interactions predicted neural responses not only in held-out movie data, but also during memory recall. These results provide the first evidence that activity in the STS is reinstated in response to specific social content and that its reactivation underlies our ability to remember others' interactions. These findings further suggest that the STS contains representations of social interactions that are not only perceptually driven, but also more abstract or conceptual in nature.
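An encoding analysis of the kind described (fitting weights that map densely labeled stimulus features to a region's time course, then testing prediction on held-out data) can be sketched as a ridge regression. The feature count, noise level, and correlation-based scoring below are hypothetical stand-ins, not the study's actual model:

```python
import numpy as np

def fit_encoding_model(X, y, alpha=1.0):
    """Ridge-regularized linear encoding model: weights mapping labeled
    stimulus features to a voxel/region time course."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(k), X.T @ y)

def score(X_test, y_test, w):
    """Prediction accuracy on held-out data, as a Pearson correlation."""
    return np.corrcoef(X_test @ w, y_test)[0, 1]

# Synthetic stand-in for labeled movie features and a brain time course
rng = np.random.default_rng(1)
X = rng.standard_normal((400, 5))               # 400 time points, 5 features
y = X @ np.array([1.0, -0.5, 0.0, 2.0, 0.3]) + 0.5 * rng.standard_normal(400)
w = fit_encoding_model(X[:300], y[:300])        # train on the first 300 points
print(score(X[300:], y[300:], w) > 0.8)         # generalizes to held-out data
```

The key move in the study is that the same fitted weights are also scored against recall data, testing whether the perceptual code is reinstated during memory.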
Affiliation(s)
- Haemy Lee Masson: Department of Psychology, Durham University, Durham, DH1 3LE, United Kingdom; Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, 21218, United States
- Janice Chen: Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, 21218, United States
- Leyla Isik: Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, 21218, United States
4. Thorsson M, Galazka MA, Åsberg Johnels J, Hadjikhani N. Influence of autistic traits and communication role on eye contact behavior during face-to-face interaction. Sci Rep 2024; 14:8162. PMID: 38589489. PMCID: PMC11001951. DOI: 10.1038/s41598-024-58701-8.
Abstract
Eye contact is a central component of face-to-face interactions. It is important in structuring communicative exchanges and offers critical insights into others' interests and intentions. To better understand eye contact in face-to-face interactions, we applied a novel, non-intrusive deep-learning-based dual-camera system and investigated associations between eye contact and autistic traits, as well as self-reported eye contact discomfort, during a referential communication task in which participants and the experimenter had to guess, in turn, a word known by the other individual. Corroborating previous research, we found that participants' eye gaze and mutual eye contact were inversely related to autistic traits. In addition, our findings revealed different behaviors depending on the role in the dyad: listening and guessing were associated with increased eye contact compared with describing words. In the listening and guessing condition, only a subgroup who reported eye contact discomfort showed reduced eye gaze and eye contact. When describing words, higher autistic traits were associated with reduced eye gaze and eye contact. Our data indicate that eye contact is inversely associated with autistic traits when describing words, and that eye gaze is modulated by the communicative role in a conversation.
Affiliation(s)
- Max Thorsson: Gillberg Neuropsychiatry Centre, Institute of Neuroscience and Physiology, University of Gothenburg, Gothenburg, Sweden
- Martyna A Galazka: Gillberg Neuropsychiatry Centre, Institute of Neuroscience and Physiology, University of Gothenburg, Gothenburg, Sweden; Division of Cognition and Communication, Department of Applied Information Technology, University of Gothenburg, Gothenburg, Sweden
- Jakob Åsberg Johnels: Gillberg Neuropsychiatry Centre, Institute of Neuroscience and Physiology, University of Gothenburg, Gothenburg, Sweden; Section of Speech and Language Pathology, Institute of Neuroscience and Physiology, University of Gothenburg, Gothenburg, Sweden
- Nouchine Hadjikhani: Gillberg Neuropsychiatry Centre, Institute of Neuroscience and Physiology, University of Gothenburg, Gothenburg, Sweden; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
5. Williams EH, Chakrabarti B. The integration of head and body cues during the perception of social interactions. Q J Exp Psychol (Hove) 2024; 77:776-788. PMID: 37232389. PMCID: PMC10960325. DOI: 10.1177/17470218231181001.
Abstract
Humans spend a large proportion of their time participating in social interactions. The ability to accurately detect and respond to human interactions is vital for social functioning, from early childhood through to older adulthood. This detection ability arguably relies on integrating sensory information from the interactants. Within the visual modality, directional information from a person's eyes, head, and body is integrated to inform where another person is looking and who they are interacting with. To date, social cue integration research has focused largely on the perception of isolated individuals. Across two experiments, we investigated whether observers integrate body information with head information when determining whether two people are interacting, and we manipulated the frame of reference (one of the interactants facing toward vs. away from the observer) and the eye-region visibility of the interactant. Results demonstrate that individuals integrate information from the body with head information when perceiving dyadic interactions, and that integration is influenced by the frame of reference and the visibility of the eye region. Interestingly, self-reported autistic traits were associated with a stronger influence of body information on interaction perception, but only when the eye region was visible. This study investigated the recognition of dyadic interactions using whole-body stimuli while manipulating eye visibility and frame of reference, and provides crucial insights into social cue integration, as well as how autistic traits affect cue integration, during the perception of social interactions.
Affiliation(s)
- Elin H Williams: Centre for Autism, School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Bhismadev Chakrabarti: Centre for Autism, School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK; India Autism Centre, Kolkata, India; Department of Psychology, Ashoka University, Sonipat, India
6. Liu H, Tang E, Guan C, Li J, Zheng J, Zhou D, Shen M, Chen H. Not socially blind: Unimpaired perception of social interaction in schizophrenia. Schizophr Res 2024; 264:448-450. PMID: 38262311. DOI: 10.1016/j.schres.2023.12.027.
Affiliation(s)
- Huiying Liu, Enze Tang, Chenxiao Guan, Jian Li, Jiewei Zheng, Mowei Shen, and Hui Chen: Department of Psychology and Behavioral Sciences, Zhejiang University, China
7. Charbonneau M, Curioni A, McEllin L, Strachan JWA. Flexible Cultural Learning Through Action Coordination. Perspect Psychol Sci 2024; 19:201-222. PMID: 37458767. DOI: 10.1177/17456916231182923.
Abstract
The cultural transmission of technical know-how has proven vital to the success of our species. The broad diversity of learning contexts and social configurations, as well as the various kinds of coordinated interactions they involve, speaks to our capacity to adapt flexibly and to transmit vital knowledge across settings. Although often recognized by ethnographers, the flexibility of cultural learning has so far received little attention in terms of cognitive mechanisms. We argue that a key feature of the flexibility of cultural learning is that both models and learners recruit cognitive mechanisms of action coordination to modulate their behavior contingently on the behavior of their partner, generating a process of mutual adaptation that supports the successful transmission of technical skills in diverse and fluctuating learning environments. We propose that the study of cultural learning would benefit from the experimental methods, results, and insights of joint-action research and, complementarily, that the field of joint-action research could expand its scope by integrating a learning and cultural dimension. Bringing these two fields together promises to enrich our understanding of cultural learning, its contextual flexibility, and joint-action coordination.
Affiliation(s)
- Mathieu Charbonneau: Africa Institute for Research in Economics and Social Sciences, Université Mohammed VI Polytechnique
- Luke McEllin: Department of Cognitive Science, Central European University
8. Kristjánsson Á, Kristjánsson T. Attentional priming in Go No-Go search tasks. Vision Res 2023; 213:108313. PMID: 37689007. DOI: 10.1016/j.visres.2023.108313.
Abstract
Go/No-Go responses in visual search yield different estimates of the operation of visual attention than more standard present/absent tasks. Such minor methodological tweaks have a surprisingly large effect on measures that have, for the last half-century or so, formed the backbone of prominent theories of visual attention. In addition, priming effects have a dominating influence on visual search, accounting for effects that have been attributed to top-down guidance in standard theories. Priming effects have, however, never been investigated for searches involving Go/No-Go present/absent decisions. Here, Go/No-Go tasks were used to assess visual search for an odd-one-out face, defined either by color or by facial expression. The Go/No-Go responses for the color-based task were very fast for both present and absent trials and, notably, yielded negative slopes of RT against set size. Interestingly, "Go" responses were even faster in the target-absent case. For expression, in contrast, "Go" responses were much slower and RTs increased with set size, particularly for target-absent responses. Priming effects were considerable for the feature search; for expression, target-absent priming was strong but did not occur on target-present trials, arguing that repetition priming for this search mainly reflects priming of context rather than of target features. Overall, the results reinforce the point that Go/No-Go tasks are highly informative for theoretical accounts of visual attention and are shown here to cast new light on attentional priming.
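The search slope discussed here (the change in RT per added display item) is simply the least-squares slope of mean RT against set size. The set sizes and RT values below are made up solely to illustrate a negative slope of the kind reported for the color task:

```python
import numpy as np

def search_slope(set_sizes, mean_rts):
    """Search slope (ms per item): least-squares slope of mean RT on set size."""
    slope, intercept = np.polyfit(set_sizes, mean_rts, 1)
    return slope, intercept

# Hypothetical mean RTs decreasing with set size (negative search slope)
slope, intercept = search_slope([4, 6, 8], [520.0, 505.0, 492.0])
print(round(slope, 1))  # → -7.0
```

A negative slope means responses get faster as items are added, the opposite of the classic item-by-item search signature.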
9. McMahon E, Isik L. Seeing social interactions. Trends Cogn Sci 2023; 27:1165-1179. PMID: 37805385. PMCID: PMC10841760. DOI: 10.1016/j.tics.2023.09.001.
Abstract
Seeing the interactions between other people is a critical part of our everyday visual experience, but recognizing the social interactions of others is often considered outside the scope of vision and grouped with higher-level social cognition like theory of mind. Recent work, however, has revealed that recognition of social interactions is efficient and automatic, is well modeled by bottom-up computational algorithms, and occurs in visually selective regions of the brain. We review recent evidence from these three methodologies (behavioral, computational, and neural) that converges to suggest that the core of social interaction perception is visual. We propose a computational framework for how this process is carried out in the brain and offer directions for future interdisciplinary investigations of social perception.
Affiliation(s)
- Emalie McMahon: Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA
- Leyla Isik: Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
10. Skripkauskaite S, Mihai I, Koldewyn K. Attentional bias towards social interactions during viewing of naturalistic scenes. Q J Exp Psychol (Hove) 2023; 76:2303-2311. PMID: 36377819. PMCID: PMC10503253. DOI: 10.1177/17470218221140879.
Abstract
Human visual attention is readily captured by the social information in scenes. Multiple studies have shown that social areas of interest (AOIs), such as faces and bodies, attract more attention than non-social AOIs (e.g., objects or background). However, whether this attentional bias is moderated by the presence (or absence) of a social interaction remains unclear. Here, the gaze of 70 young adults was tracked during free viewing of 60 naturalistic scenes. All photographs depicted two people, who were either interacting or not. Analyses of dwell time revealed that more attention was spent on human than on background AOIs in the interactive pictures. In non-interactive pictures, however, dwell time did not differ between AOI types. In the time-to-first-fixation analysis, humans always captured attention before other elements of the scene, although this difference was slightly larger in interactive than in non-interactive scenes. These findings confirm the existence of a bias towards social information in attentional capture and suggest that our attention values social interactions beyond the mere presence of two people.
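The two eye-tracking measures used above reduce to simple bookkeeping over fixations assigned to AOIs: dwell time sums fixation durations per AOI, and time to first fixation records each AOI's earliest fixation onset. The fixation tuples and AOI layout below are hypothetical, for illustration only:

```python
def dwell_times(fixations, aoi_of):
    """Total dwell time (ms) and first-fixation onset (ms) per AOI.
    fixations: (x, y, onset_ms, duration_ms) tuples; aoi_of: maps a gaze
    position to an AOI label such as 'human' or 'background'."""
    dwell, first = {}, {}
    for x, y, onset, dur in fixations:
        label = aoi_of(x, y)
        dwell[label] = dwell.get(label, 0) + dur   # accumulate duration
        first.setdefault(label, onset)             # keep earliest onset only
    return dwell, first

# Hypothetical layout: the two people occupy the left half of the screen
aoi_of = lambda x, y: "human" if x < 512 else "background"
fixes = [(200, 300, 0, 250), (600, 300, 250, 180), (180, 310, 430, 400)]
dwell, first = dwell_times(fixes, aoi_of)
print(dwell)   # → {'human': 650, 'background': 180}
print(first)   # → {'human': 0, 'background': 250}
```

Group analyses then compare these per-trial quantities across interactive and non-interactive scenes.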
Affiliation(s)
- Simona Skripkauskaite: School of Psychology, Bangor University, Bangor, UK; Department of Experimental Psychology, University of Oxford, Oxford, UK
- Ioana Mihai: School of Psychology, Bangor University, Bangor, UK
11. Barzy M, Morgan R, Cook R, Gray KLH. Are social interactions preferentially attended in real-world scenes? Evidence from change blindness. Q J Exp Psychol (Hove) 2023; 76:2293-2302. PMID: 36847458. PMCID: PMC10503233. DOI: 10.1177/17470218231161044.
Abstract
In change detection paradigms, changes to social or animate aspects of a scene are detected better and faster than changes to non-social or inanimate aspects. While previous studies have focused on how changes to individual faces/bodies are detected, individuals presented within a social interaction may be further prioritised, as the accurate interpretation of social interactions may convey a competitive advantage. Over three experiments, we explored change detection in complex real-world scenes, in which changes occurred through the removal of (a) an individual on their own, (b) an individual who was interacting with others, or (c) an object. In Experiment 1 (N = 50), we measured change detection for non-interacting individuals versus objects. In Experiment 2 (N = 49), we measured change detection for interacting individuals versus objects. Finally, in Experiment 3 (N = 85), we measured change detection for non-interacting versus interacting individuals. We also ran an inverted version of each task to determine whether differences were driven by low-level visual features. In Experiments 1 and 2, changes to both non-interacting and interacting individuals were detected better and more quickly than changes to objects. We also found inversion effects for both change types, which were detected more quickly when upright than when inverted; no such inversion effect was seen for objects, suggesting that the high-level, social content of the images drove the faster change detection for social versus object targets. Finally, changes to non-interacting individuals were detected faster than changes to individuals presented within an interaction. Our results replicate the social advantage often found in change detection paradigms, but provide no evidence that individuals within social interactions are prioritised over non-interacting individuals.
Affiliation(s)
- Mahsa Barzy: School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Rachel Morgan: School of Mathematics and Statistics, University of Reading, Reading, UK
- Richard Cook: Department of Psychological Sciences, Birkbeck, University of London, London, UK; Department of Psychology, University of York, York, UK
- Katie LH Gray: School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
12. Xu Z, Chen H, Wang Y. Invisible social grouping facilitates the recognition of individual faces. Conscious Cogn 2023; 113:103556. PMID: 37541010. DOI: 10.1016/j.concog.2023.103556.
Abstract
Emerging evidence suggests a specialized mechanism supporting the perceptual grouping of social entities. However, the stage at which social grouping is processed remains unclear. Across four experiments, we showed that participants' recognition of a visible face was facilitated by the presence of a second, facing face (thus forming a social grouping), relative to a nonfacing face, even when the second face was invisible. Using a monocular/dichoptic paradigm, we further found that the social grouping facilitation effect occurred when the two faces were presented dichoptically to different eyes rather than monocularly to the same eye, suggesting that social grouping relies on binocular rather than monocular neural channels. The above effects were not found for inverted face dyads, thereby ruling out the contribution of nonsocial factors. Taken together, these findings support an unconscious influence of social grouping on visual perception and suggest an early origin of social grouping processing in the visual pathway.
Affiliation(s)
- Zhenjie Xu, Hui Chen, and Yingying Wang: Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou 310028, Zhejiang, China
13. Landsiedel J, Koldewyn K. Auditory dyadic interactions through the "eye" of the social brain: How visual is the posterior STS interaction region? Imaging Neuroscience 2023; 1:1-20. PMID: 37719835. PMCID: PMC10503480. DOI: 10.1162/imag_a_00003.
Abstract
Human interactions contain potent social cues that meet not only the eye but also the ear. Although research has identified a region in the posterior superior temporal sulcus that is particularly sensitive to visually presented social interactions (SI-pSTS), its response to auditory interactions has not been tested. Here, we used fMRI to explore brain responses to auditory interactions, with a focus on temporal regions known to be important in auditory processing and social interaction perception. In Experiment 1, monolingual participants listened to two-speaker conversations (intact or sentence-scrambled) and one-speaker narrations in both a known and an unknown language. Speaker number and conversational coherence were explored in separately localised regions of interest (ROIs). In Experiment 2, bilingual participants were scanned to explore the role of language comprehension. Combining univariate and multivariate analyses, we found initial evidence for a heteromodal response to social interactions in SI-pSTS. Specifically, right SI-pSTS preferred auditory interactions over control stimuli and represented information about both speaker number and interactive coherence. Bilateral temporal voice areas (TVA) showed a similar, but less specific, profile. Exploratory analyses identified another auditory-interaction-sensitive area in the anterior STS (aSTS). Indeed, direct comparison suggests modality-specific tuning, with SI-pSTS preferring visual information while aSTS prefers auditory information. Altogether, these results suggest that right SI-pSTS is a heteromodal region that represents information about social interactions in both visual and auditory domains. Future work is needed to clarify the roles of TVA and aSTS in auditory interaction perception and to further probe right SI-pSTS interaction selectivity using non-semantic prosodic cues.
Affiliation(s)
- Julia Landsiedel and Kami Koldewyn: Department of Psychology, School of Human and Behavioural Sciences, Bangor University, Bangor, United Kingdom
14. Goupil N, Hochmann JR, Papeo L. Intermodulation responses show integration of interacting bodies in a new whole. Cortex 2023; 165:129-140. PMID: 37279640. DOI: 10.1016/j.cortex.2023.04.013.
Abstract
People are often seen among other people, relating to and interacting with one another. Recent studies suggest that socially relevant spatial relations between bodies, such as face-to-face positioning, or facingness, change the visual representation of those bodies relative to when the same items appear unrelated (e.g., back-to-back) or in isolation. The current study addresses the hypothesis that face-to-face bodies give rise to a new whole: an integrated representation of the individual bodies in a new perceptual unit. Using frequency-tagging EEG, we targeted, as a measure of integration, an EEG correlate of the non-linear combination of the neural responses to each of two individual bodies presented either face-to-face, as if interacting, or back-to-back. During EEG recording, participants (N = 32) viewed two bodies, either face-to-face or back-to-back, flickering at two different frequencies (F1 and F2), yielding two distinct responses in the EEG signal. Spectral analysis examined the responses at the intermodulation frequencies (nF1 ± mF2), signaling integration of the individual responses. An anterior intermodulation response was observed for face-to-face bodies, but not for back-to-back bodies, nor for face-to-face chairs and machines. These results show that interacting bodies are integrated into a representation that is more than the sum of its parts. This effect, specific to body dyads, may mark an early step in the transformation towards an integrated representation of a social event, from the visual representation of the individual participants in that event.
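The intermodulation frequencies (nF1 ± mF2) at which integration is measured can be enumerated directly from the two tagging rates. The base frequencies below are illustrative only, not the study's actual tagging rates:

```python
def intermodulation_frequencies(f1, f2, max_order=3):
    """Enumerate n*f1 + m*f2 and |n*f1 - m*f2| for n, m = 1..max_order.
    These are the frequencies where a non-linear combination of the two
    tagged responses would appear in the EEG spectrum."""
    ims = set()
    for n in range(1, max_order + 1):
        for m in range(1, max_order + 1):
            for f in (n * f1 + m * f2, abs(n * f1 - m * f2)):
                if f > 0:
                    ims.add(round(f, 3))
    return sorted(ims)

# Illustrative base frequencies only
print(intermodulation_frequencies(2.5, 3.0, max_order=2))
# → [0.5, 1.0, 2.0, 3.5, 5.5, 8.0, 8.5, 11.0]
```

Amplitude at these frequencies, unlike at F1 and F2 or their harmonics, can only arise if the two responses interact, which is why it serves as a marker of integration.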
Affiliation(s)
- Nicolas Goupil, Jean-Rémy Hochmann, and Liuba Papeo: Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS), Université Claude Bernard Lyon 1, Bron, France
15. Dapor C, Sperandio I, Meconi F. Fading boundaries between the physical and the social world: Insights and novel techniques from the intersection of these two fields. Front Psychol 2023; 13:1028150. PMID: 36861005. PMCID: PMC9969107. DOI: 10.3389/fpsyg.2022.1028150.
Abstract
This review focuses on the subtle interactions between sensory input and social cognition in visual perception. We suggest that body indices, such as gait and posture, can mediate such interactions. Recent trends in cognitive research are trying to overcome approaches that define perception as stimulus-centered and are pointing toward a more embodied agent-dependent perspective. According to this view, perception is a constructive process in which sensory inputs and motivational systems contribute to building an image of the external world. A key notion emerging from new theories on perception is that the body plays a critical role in shaping our perception. Depending on our arm's length, height and capacity of movement, we create our own image of the world based on a continuous compromise between sensory inputs and expected behavior. We use our bodies as natural "rulers" to measure both the physical and the social world around us. We point out the necessity of an integrative approach in cognitive research that takes into account the interplay between social and perceptual dimensions. To this end, we review long-established and novel techniques aimed at measuring bodily states and movements, and their perception, with the assumption that only by combining the study of visual perception and social cognition can we deepen our understanding of both fields.
Affiliation(s)
- Cecilia Dapor, Department of Psychology and Cognitive Science, University of Trento, Rovereto, Italy
16
Yin J, Csibra G, Tatone D. Structural asymmetries in the representation of giving and taking events. Cognition 2022; 229:105248. [PMID: 35961163] [DOI: 10.1016/j.cognition.2022.105248]
Abstract
Across languages, GIVE and TAKE verbs have different syntactic requirements: GIVE mandates a patient argument to be made explicit in the clause structure, whereas TAKE does not. Experimental evidence suggests that this asymmetry is rooted in prelinguistic assumptions about the minimal number of event participants that each action entails. The present study provides corroborating evidence for this proposal by investigating whether the observation of giving and taking actions modulates the inclusion of patients in the represented event. Participants were shown events featuring an agent (A) transferring an object to, or collecting it from, an animate target (B) or an inanimate target (a rock), and their sensitivity to changes in pair composition (AB vs. AC) and action role (AB vs. BA) was measured. Change sensitivity was affected by the type of target approached when the agent transferred the object (Experiment 1), but not when she collected it (Experiment 2), or when an outside force carried out the transfer (Experiment 3). Although these object-displacing actions could be equally interpreted as interactive (i.e., directed towards B), this construal was adopted only when B could be perceived as putative patient of a giving action. This evidence buttresses the proposal that structural asymmetries in giving and taking, as reflected in their syntactic requirements, may originate from prelinguistic assumptions about the minimal event participants required for each action to be teleologically well-formed.
Affiliation(s)
- Jun Yin, Department of Psychology, Ningbo University, Ningbo, PR China
- Gergely Csibra, Department of Cognitive Science, Central European University, Vienna, Austria; Department of Psychological Sciences, Birkbeck, University of London, UK
- Denis Tatone, Department of Cognitive Science, Central European University, Vienna, Austria
17
Affiliation(s)
- Ilenia Paparella, Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, Lyon, France
- Liuba Papeo, Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, Lyon, France
18
Basyouni R, Parkinson C. Mapping the social landscape: tracking patterns of interpersonal relationships. Trends Cogn Sci 2022; 26:204-221. [DOI: 10.1016/j.tics.2021.12.006]
19
Goupil N, Papeo L, Hochmann J. Visual perception grounding of social cognition in preverbal infants. Infancy 2022; 27:210-231. [DOI: 10.1111/infa.12453]
Affiliation(s)
- Nicolas Goupil, Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, Bron, France
- Liuba Papeo, Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, Bron, France
- Jean-Rémy Hochmann, Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, Bron, France
20
Flavell JC, Over H, Vestner T, Cook R, Tipper SP. Rapid detection of social interactions is the result of domain general attentional processes. PLoS One 2022; 17:e0258832. [PMID: 35030168] [PMCID: PMC8759659] [DOI: 10.1371/journal.pone.0258832]
Abstract
Using visual search displays of interacting and non-interacting pairs, it has been demonstrated that detection of social interactions is facilitated. For example, two people facing each other are found faster than two people with their backs turned: an effect that may reflect social binding. However, recent work has shown the same effects with non-social arrow stimuli, where towards facing arrows are detected faster than away facing arrows. This latter work suggests a primary mechanism is an attention orienting process driven by basic low-level direction cues. However, evidence for lower level attentional processes does not preclude a potential additional role of higher-level social processes. Therefore, in this series of experiments we test this idea further by directly comparing basic visual features that orient attention with representations of socially interacting individuals. Results confirm the potency of orienting of attention via low-level visual features in the detection of interacting objects. In contrast, there is little evidence for the representation of social interactions influencing initial search performance.
Affiliation(s)
- Jonathan C. Flavell, Department of Psychology, University of York, York, North Yorkshire, United Kingdom
- Harriet Over, Department of Psychology, University of York, York, North Yorkshire, United Kingdom
- Tim Vestner, Department of Psychology, Birkbeck, University of London, London, Greater London, United Kingdom
- Richard Cook, Department of Psychology, Birkbeck, University of London, London, Greater London, United Kingdom
- Steven P. Tipper, Department of Psychology, University of York, York, North Yorkshire, United Kingdom
21
The spatial distance compression effect is due to social interaction and not mere configuration. Psychon Bull Rev 2021; 29:828-836. [PMID: 34918281] [DOI: 10.3758/s13423-021-02045-1]
Abstract
In recent years, there has been a surge of interest in perception, evaluation, and memory for social interactions from a third-person perspective. One intriguing finding is a spatial distance compression effect when target dyads are facing each other. Specifically, face-to-face dyads are remembered as being spatially closer than back-to-back dyads. There is a vibrant debate about the mechanism behind this effect, and two hypotheses have been proposed. According to the social interaction hypothesis, face-to-face dyads engage a binding process that represents them as a social unit, which compresses the perceived distance between them. In contrast, the configuration hypothesis holds that the effect is produced by the front-to-front configuration of the two visual targets. In the present research we sought to test these accounts. In Experiment 1 we successfully replicated the distance compression effect with two upright faces that were facing each other, but not with inverted faces. In contrast, we found no distance compression effect with three types of nonsocial stimuli: arrows (Experiment 2a), fans (Experiment 2b), and cars (Experiment 3). In Experiment 4, we replicated this effect with another social stimulus: upright bodies. Taken together, these results provide strong support for the social interaction hypothesis.
22
Faust KM, Carouso-Peck S, Elson MR, Goldstein MH. The Origins of Social Knowledge in Altricial Species. Annu Rev Dev Psychol 2021; 2:225-246. [PMID: 34553142] [DOI: 10.1146/annurev-devpsych-051820-121446]
Abstract
Human infants are altricial, born relatively helpless and dependent on parental care for an extended period of time. This protracted time to maturity is typically regarded as a necessary epiphenomenon of evolving and developing large brains. We argue that extended altriciality is itself adaptive, as a prolonged necessity for parental care allows extensive social learning to take place. Human adults possess a suite of complex social skills, such as language, empathy, morality, and theory of mind. Rather than requiring hardwired, innate knowledge of social abilities, evolution has outsourced the necessary information to parents. Critical information for species-typical development, such as species recognition, may originate from adults rather than from genes, aided by underlying perceptual biases for attending to social stimuli and capacities for statistical learning of social actions. We draw on extensive comparative findings to illustrate that, across species, altriciality functions as an adaptation for social learning from caregivers.
Affiliation(s)
- Katerina M Faust, Department of Psychology, Cornell University, Ithaca, New York 14853, USA
- Mary R Elson, Department of Psychology, Cornell University, Ithaca, New York 14853, USA
23
Vestner T, Over H, Gray KLH, Tipper SP, Cook R. Searching for people: Non-facing distractor pairs hinder the visual search of social scenes more than facing distractor pairs. Cognition 2021; 214:104737. [PMID: 33901835] [PMCID: PMC8346951] [DOI: 10.1016/j.cognition.2021.104737]
Abstract
There is growing interest in the visual and attentional processes recruited when human observers view social scenes containing multiple people. Findings from visual search paradigms have helped shape this emerging literature. Previous research has established that, when hidden amongst pairs of individuals facing in the same direction (leftwards or rightwards), pairs of individuals arranged front-to-front are found faster than pairs of individuals arranged back-to-back. Here, we describe a second, closely-related effect with important theoretical implications. When searching for a pair of individuals facing in the same direction (leftwards or rightwards), target dyads are found faster when hidden amongst distractor pairs arranged front-to-front, than when hidden amongst distractor pairs arranged back-to-back. This distractor arrangement effect was also obtained with target and distractor pairs constructed from arrows and types of common objects that cue visuospatial attention. These findings argue against the view that pairs of people arranged front-to-front capture exogenous attention due to a domain-specific orienting mechanism. Rather, it appears that salient direction cues (e.g., gaze direction, body orientation, arrows) hamper systematic search and impede efficient interpretation, when distractor pairs are arranged back-to-back.
Affiliation(s)
- Tim Vestner, Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Harriet Over, Department of Psychology, University of York, York, UK
- Katie L H Gray, School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Richard Cook, Department of Psychological Sciences, Birkbeck, University of London, London, UK; Department of Psychology, University of York, York, UK
24
The Eyes Have It: Perception of Social Interaction Unfolds Through Pupil Dilation. Neurosci Bull 2021; 37:1595-1598. [PMID: 34212296] [DOI: 10.1007/s12264-021-00739-z]
25
Bellot E, Abassi E, Papeo L. Moving Toward versus Away from Another: How Body Motion Direction Changes the Representation of Bodies and Actions in the Visual Cortex. Cereb Cortex 2021; 31:2670-2685. [PMID: 33401307] [DOI: 10.1093/cercor/bhaa382]
Abstract
Representing multiple agents and their mutual relations is a prerequisite to understand social events such as interactions. Using functional magnetic resonance imaging on human adults, we show that visual areas dedicated to body form and body motion perception contribute to processing social events, by holding the representation of multiple moving bodies and encoding the spatial relations between them. In particular, seeing animations of human bodies facing and moving toward (vs. away from) each other increased neural activity in the body-selective cortex [extrastriate body area (EBA)] and posterior superior temporal sulcus (pSTS) for biological motion perception. In those areas, representation of body postures and movements, as well as of the overall scene, was more accurate for facing body (vs. nonfacing body) stimuli. Effective connectivity analysis with dynamic causal modeling revealed increased coupling between EBA and pSTS during perception of facing body stimuli. The perceptual enhancement of multiple-body scenes featuring cues of interaction (i.e., face-to-face positioning, spatial proximity, and approaching signals) was supported by the participants' better performance in a recognition task with facing body versus nonfacing body stimuli. Thus, visuospatial cues of interaction in multiple-person scenarios affect the perceptual representation of body and body motion and, by promoting functional integration, streamline the process from body perception to action representation.
Affiliation(s)
- Emmanuelle Bellot, Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, 69675 Bron, France
- Etienne Abassi, Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, 69675 Bron, France
- Liuba Papeo, Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, 69675 Bron, France
26
Hafri A, Firestone C. The Perception of Relations. Trends Cogn Sci 2021; 25:475-492. [PMID: 33812770] [DOI: 10.1016/j.tics.2021.01.006]
Abstract
The world contains not only objects and features (red apples, glass bowls, wooden tables), but also relations holding between them (apples contained in bowls, bowls supported by tables). Representations of these relations are often developmentally precocious and linguistically privileged; but how does the mind extract them in the first place? Although relations themselves cast no light onto our eyes, a growing body of work suggests that even very sophisticated relations display key signatures of automatic visual processing. Across physical, eventive, and social domains, relations such as support, fit, cause, chase, and even socially interact are extracted rapidly, are impossible to ignore, and influence other perceptual processes. Sophisticated and structured relations are not only judged and understood, but also seen - revealing surprisingly rich content in visual perception itself.
Affiliation(s)
- Alon Hafri, Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Cognitive Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Chaz Firestone, Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Cognitive Science, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Philosophy, Johns Hopkins University, Baltimore, MD 21218, USA
27
Han Q, Wang Y, Jiang Y, Bao M. The relevance to social interaction modulates bistable biological-motion perception. Cognition 2021; 209:104584. [PMID: 33450439] [DOI: 10.1016/j.cognition.2021.104584]
Abstract
Social interaction, the process through which individuals act and react toward each other, is arguably the building block of society. As the very first step for successful social interaction, we need to derive the orientation and immediate social relevance of other people: a person facing toward us is much more likely to initiate communications than a person who is back to us. Reversely, however, it remains elusive whether the relevance to social interaction modulates how we perceive the other's orientation. Here, we adopted the bistable point-light walker (PLW) which is ambiguous in its in-depth orientation. Participants were asked to report the orientation (facing the viewer or facing away from the viewer) of the PLWs. Three factors that are task-irrelevant but critically pertinent to social interaction, the distance, the speed, and the size of the PLW, were systematically manipulated. The nearer a person is, the more likely it initiates interactions with us. The larger a person is, the larger influence it may exert. The faster a person is, the shorter time is left for us to respond. Results revealed that participants tended to perceive the PLW as facing them more frequently than facing away when the PLW was nearer, faster, or larger. These same factors produced different patterns of effects on a non-biological rotating cylinder. These findings demonstrate that the relevance to social interaction modulates the visual perception of biological motion and highlight that bistable biological motion perception not only reflects competitions of low-level features but is also strongly linked to high-level social cognition.
Affiliation(s)
- Qiu Han, CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- Ying Wang, Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China; State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai, China
- Yi Jiang, Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China; State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai, China; Chinese Institute for Brain Research, Beijing, China; Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, China
- Min Bao, CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China; State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China
28
Bunce C, Gray KLH, Cook R. The perception of interpersonal distance is distorted by the Müller-Lyer illusion. Sci Rep 2021; 11:494. [PMID: 33436801] [PMCID: PMC7803751] [DOI: 10.1038/s41598-020-80073-y]
Abstract
There is growing interest in how human observers perceive social scenes containing multiple people. Interpersonal distance is a critical feature when appraising these scenes; proxemic cues are used by observers to infer whether two people are interacting, the nature of their relationship, and the valence of their current interaction. Presently, however, remarkably little is known about how interpersonal distance is encoded within the human visual system. Here we show that the perception of interpersonal distance is distorted by the Müller-Lyer illusion. Participants perceived the distance between two target points to be compressed or expanded depending on whether face pairs were positioned inside or outside the to-be-judged interval. This illusory bias was found to be unaffected by manipulations of face direction. These findings aid our understanding of how human observers perceive interpersonal distance and may inform theoretical accounts of the Müller-Lyer illusion.
Affiliation(s)
- Carl Bunce, Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, UK
- Katie L H Gray, School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Richard Cook, Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, UK
29
Vestner T, Gray KLH, Cook R. Visual search for facing and non-facing people: The effect of actor inversion. Cognition 2020; 208:104550. [PMID: 33360076] [DOI: 10.1016/j.cognition.2020.104550]
Abstract
In recent years, there has been growing interest in how human observers perceive, attend to, and recall, social interactions viewed from third-person perspectives. One of the interesting findings to emerge from this new literature is the search advantage for facing dyads. When hidden amongst pairs of individuals facing in the same direction, pairs of individuals arranged front-to-front are found faster in visual search tasks than pairs of individuals arranged back-to-back. Interestingly, the search advantage for facing dyads appears to be sensitive to the orientation of the people depicted. While front-to-front target pairs are found faster than back-to-back targets when target and distractor pairings are shown upright, front-to-front and back-to-back targets are found equally quickly when pairings are shown upside-down. In the present study, we sought to better understand why the search advantage for facing dyads is sensitive to the orientation of the people depicted. To begin, we show that the orientation sensitivity of the search advantage is seen with dyads constructed from faces only, and from bodies with the head and face occluded. We replicate these effects using two different visual search paradigms. We go on to show that individual faces and bodies, viewed in profile, produce strong attentional cueing effects when shown upright, but not when presented upside-down. Together with recent evidence that arrows arranged front-to-front also produce the search advantage for facing dyads, these findings support the view that the search advantage is a by-product of the ability of constituent elements to direct observers' visuo-spatial attention.
Affiliation(s)
- Tim Vestner, Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Katie L H Gray, School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Richard Cook, Department of Psychological Sciences, Birkbeck, University of London, London, UK; Department of Psychology, University of York, York, UK
31
Gandolfo M, Downing PE. Asymmetric visual representation of sex from human body shape. Cognition 2020; 205:104436. [PMID: 32919115] [DOI: 10.1016/j.cognition.2020.104436]
Abstract
We efficiently infer others' states and traits from their appearance, and these inferences powerfully shape our social behaviour. One key trait is sex, which is strongly cued by the appearance of the body. What are the visual representations that link body shape to sex? Previous studies of visual sex judgment tasks find observers have a bias to report "male", particularly for ambiguous stimuli. This finding implies a representational asymmetry - that for the processes that generate a sex percept, the default output is "male", and "female" is determined by the presence of additional perceptual evidence. That is, female body shapes are positively coded by reference to a male default shape. This perspective makes a novel prediction in line with Treisman's studies of visual search asymmetries: female body targets should be more readily detected amongst male distractors than vice versa. Across 10 experiments (N = 32 each) we confirmed this prediction and ruled out alternative low-level explanations. The asymmetry was found with profile and frontal body silhouettes, frontal photographs, and schematised icons. Low-level confounds were controlled by balancing silhouette images for size and homogeneity, and by matching physical properties of photographs. The female advantage was nulled for inverted icons, but intact for inverted photographs, suggesting reliance on distinct cues to sex for different body depictions. Together, these findings demonstrate a principle of the perceptual coding that links bodily appearance with a significant social trait: the female body shape is coded as an extension of a male default. We conclude by offering a visual experience account of how these asymmetric representations arise in the first place.
32
Why are social interactions found quickly in visual search tasks? Cognition 2020; 200:104270. [PMID: 32220782] [PMCID: PMC7315127] [DOI: 10.1016/j.cognition.2020.104270]
Abstract
When asked to find a target dyad amongst non-interacting individuals, participants respond faster when the individuals in the target dyad are shown face-to-face (suggestive of a social interaction), than when they are presented back-to-back. Face-to-face dyads may be found faster because social interactions recruit specialized processing. However, human faces and bodies are salient directional cues that exert a strong influence on how observers distribute their attention. Here we report that a similar search advantage exists for ‘point-to-point’ and ‘point-to-face’ target arrangements constructed using arrows – a non-social directional cue. These findings indicate that the search advantage seen for face-to-face dyads is a product of the directional cues present within arrangements, not the fact that they are processed as social interactions, per se. One possibility is that, when arranged in the face-to-face or point-to-point configuration, pairs of directional cues (faces, bodies, arrows) create an attentional ‘hot-spot’ – a region of space in between the elements to which attention is directed by multiple cues. Due to the presence of this hot-spot, observers' attention may be drawn to the target location earlier in a serial visual search.
33
Walbrin J, Mihai I, Landsiedel J, Koldewyn K. Developmental changes in visual responses to social interactions. Dev Cogn Neurosci 2020; 42:100774. [PMID: 32452460] [PMCID: PMC7075793] [DOI: 10.1016/j.dcn.2020.100774]
Abstract
- Children show less interaction selectivity in the pSTS than adults.
- Adults show bilateral pSTS selectivity, while children are more right-lateralized.
- Exploratory findings suggest interaction selectivity in pSTS is more focally tuned in adults.
Recent evidence demonstrates that a region of the posterior superior temporal sulcus (pSTS) is selective to visually observed social interactions in adults. In contrast, little is known about neural responses to social interactions in children. Here, we used fMRI to ask whether the pSTS is ‘tuned’ to social interactions in children at all, and if so, how selectivity might differ from adults. This was investigated in the pSTS, along with several other socially-tuned regions in neighbouring temporal cortex: extrastriate body area, face selective STS, fusiform face area, and mentalizing selective temporo-parietal junction. Both children and adults showed selectivity to social interaction within right pSTS, while only adults showed selectivity on the left. Adults also showed both more focal and greater selectivity than children (6–12 years) bilaterally. Exploratory sub-group analyses showed that younger children (6–8), but not older children (9–12), are less selective than adults on the right, while there was a continuous developmental trend (adults > older > younger) in left pSTS. These results suggest that, over development, the neural response to social interactions is characterized by increasingly more selective, focal, and bilateral pSTS responses, a process that likely continues into adolescence.
Affiliation(s)
- Jon Walbrin, School of Psychology, Bangor University, Wales, United Kingdom
- Ioana Mihai, School of Psychology, Bangor University, Wales, United Kingdom
- Kami Koldewyn, School of Psychology, Bangor University, Wales, United Kingdom
34
Yin J, Tatone D, Csibra G. Giving, but not taking, actions are spontaneously represented as social interactions: Evidence from modulation of lower alpha oscillations. Neuropsychologia 2020; 139:107363. [PMID: 32007510] [DOI: 10.1016/j.neuropsychologia.2020.107363]
Abstract
Unlike taking, which can be redescribed in non-social and object-directed terms, acts of giving are invariably expressed across languages in a three-argument structure relating agent, patient, and object. Developmental evidence suggests this difference in the syntactic entailment of the patient role to be rooted in a prelinguistic understanding of giving as a patient-directed, hence obligatorily social, action. We hypothesized that minimal cues of possession transfer, known to induce this interpretation in preverbal infants, should similarly encourage adults to perceive the patient of giving, but not taking, actions as an integral participant of the observed event, even without cues of overt involvement in the transfer. To test this hypothesis, we measured a known electrophysiological correlate of action understanding (the suppression of alpha-band oscillations) during the observation of giving and taking events, under the assumption that the functional grouping of agent and patient should induce greater suppression than the representation of individual object-directed actions. As predicted, the observation of giving produced stronger lower alpha suppression than superficially similar acts of object disposal, whereas no difference emerged between taking from an animate patient and taking from an inanimate target. These results suggest that the participants spontaneously represented giving, but not kinematically identical taking actions, as social interactions, and crucially restricted this interpretation to transfer events featuring animate patients. This evidence gives empirical traction to the idea that such asymmetry, rather than being an interpretive propensity circumscribed to the first year of life, is attributable to an ontogenetically stable system dedicated to the efficient identification of interactions based on active transfer.
Affiliation(s)
- Jun Yin, Department of Psychology, Ningbo University, Ningbo, PR China; Cognitive Development Center, Department of Cognitive Science, Central European University, Budapest, Hungary
- Denis Tatone, Cognitive Development Center, Department of Cognitive Science, Central European University, Budapest, Hungary
- Gergely Csibra, Cognitive Development Center, Department of Cognitive Science, Central European University, Budapest, Hungary; Department of Psychological Sciences, Birkbeck, University of London, United Kingdom
35
The Representation of Two-Body Shapes in the Human Visual Cortex. J Neurosci 2019; 40:852-863. [PMID: 31801812] [DOI: 10.1523/jneurosci.1378-19.2019]
Abstract
Human social nature has shaped visual perception. A signature of the relationship between vision and sociality is a particular visual sensitivity to social entities such as faces and bodies. We asked whether human vision also exhibits a special sensitivity to spatial relations that reliably correlate with social relations. In general, interacting people are more often situated face-to-face than back-to-back. Using functional MRI and behavioral measures in female and male human participants, we show that visual sensitivity to social stimuli extends to images including two bodies facing toward (vs away from) each other. In particular, the inferior lateral occipital cortex, which is involved in visual-object perception, is organized such that the inferior portion encodes the number of bodies (one vs two) and the superior portion is selectively sensitive to the spatial relation between bodies (facing vs nonfacing). Moreover, functionally localized, body-selective visual cortex responded to facing bodies more strongly than identical, but nonfacing, bodies. In this area, multivariate pattern analysis revealed an accurate representation of body dyads with sharpening of the representation of single-body postures in facing dyads, which demonstrates an effect of visual context on the perceptual analysis of a body. Finally, the cost of body inversion (upside-down rotation) on body recognition, a behavioral signature of a specialized mechanism for body perception, was larger for facing versus nonfacing dyads. Thus, spatial relations between multiple bodies are encoded in regions for body perception and affect the way in which bodies are processed.
SIGNIFICANCE STATEMENT: Human social nature has shaped visual perception. Here, we show that human vision is not only attuned to socially relevant entities, such as bodies, but also to socially relevant spatial relations between those entities. Body-selective regions of visual cortex respond more strongly to multiple bodies that appear to be interacting (i.e., face-to-face), relative to unrelated bodies, and more accurately represent single body postures in interacting scenarios. Moreover, recognition of facing bodies is particularly susceptible to perturbation by upside-down rotation, indicative of a particular visual sensitivity to the canonical appearance of facing bodies. This encoding of relations between multiple bodies in areas for body-shape recognition suggests that the visual context in which a body is encountered deeply affects its perceptual analysis.
36
Kvasova D, Garcia-Vernet L, Soto-Faraco S. Characteristic Sounds Facilitate Object Search in Real-Life Scenes. Front Psychol 2019; 10:2511. [PMID: 31749751] [PMCID: PMC6848886] [DOI: 10.3389/fpsyg.2019.02511]
Abstract
Real-world events provide not only temporally and spatially correlated information across the senses, but also semantic correspondences about object identity. Prior research has shown that object sounds can enhance detection, identification, and search performance for semantically consistent visual targets. However, these effects have so far been demonstrated only in simple and stereotyped displays that lack ecological validity. In order to address identity-based cross-modal relationships in real-world scenarios, we designed a visual search task using complex, dynamic scenes. Participants searched for objects in video clips recorded from real-life scenes. Auditory cues, embedded in the background sounds, could be target-consistent, distracter-consistent, neutral, or absent. We found that, in these naturalistic scenes, characteristic sounds improve visual search for task-relevant objects but fail to increase the salience of irrelevant distracters. Our findings generalize previous results on object-based cross-modal interactions with simple stimuli and shed light on how audio-visual semantically congruent relationships play out in real-life contexts.
Affiliation(s)
- Daria Kvasova, Center for Brain and Cognition, Universitat Pompeu Fabra, Barcelona, Spain
- Laia Garcia-Vernet, Center for Brain and Cognition, Universitat Pompeu Fabra, Barcelona, Spain
- Salvador Soto-Faraco, Center for Brain and Cognition, Universitat Pompeu Fabra, Barcelona, Spain; ICREA – Catalan Institution for Research and Advanced Studies, Barcelona, Spain