1. Hafri A, Bonner MF, Landau B, Firestone C. A Phone in a Basket Looks Like a Knife in a Cup: Role-Filler Independence in Visual Processing. Open Mind (Camb) 2024; 8:766-794. PMID: 38957507; PMCID: PMC11219067; DOI: 10.1162/opmi_a_00146.
Abstract
When a piece of fruit is in a bowl, and the bowl is on a table, we appreciate not only the individual objects and their features, but also the relations containment and support, which abstract away from the particular objects involved. Independent representation of roles (e.g., containers vs. supporters) and "fillers" of those roles (e.g., bowls vs. cups, tables vs. chairs) is a core principle of language and higher-level reasoning. But does such role-filler independence also arise in automatic visual processing? Here, we show that it does, by exploring a surprising error that such independence can produce. In four experiments, participants saw a stream of images containing different objects arranged in force-dynamic relations (e.g., a phone contained in a basket, a marker resting on a garbage can, or a knife sitting in a cup). Participants had to respond to a single target image (e.g., a phone in a basket) within a stream of distractors presented under time constraints. Surprisingly, even though participants completed this task quickly and accurately, they false-alarmed more often to images matching the target's relational category than to those that did not, even when those images involved completely different objects. In other words, participants searching for a phone in a basket were more likely to mistakenly respond to a knife in a cup than to a marker on a garbage can. Follow-up experiments ruled out strategic responses and also controlled for various confounding image features. We suggest that visual processing represents relations abstractly, in ways that separate roles from fillers.
Affiliation(s)
- Alon Hafri
- Department of Linguistics and Cognitive Science, University of Delaware
- Department of Cognitive Science, Johns Hopkins University
- Department of Psychological and Brain Sciences, Johns Hopkins University
- Barbara Landau
- Department of Cognitive Science, Johns Hopkins University
- Chaz Firestone
- Department of Cognitive Science, Johns Hopkins University
- Department of Psychological and Brain Sciences, Johns Hopkins University
2. Goupil N, Rayson H, Serraille É, Massera A, Ferrari PF, Hochmann JR, Papeo L. Visual Preference for Socially Relevant Spatial Relations in Humans and Monkeys. Psychol Sci 2024; 35:681-693. PMID: 38683657; DOI: 10.1177/09567976241242995.
Abstract
As a powerful social signal, a body, face, or gaze facing toward oneself holds an individual's attention. We asked whether, going beyond an egocentric stance, facingness between others has a similar effect and why. In a preferential-looking time paradigm, human adults showed a spontaneous preference to look at two bodies facing toward (vs. away from) each other (Experiment 1a, N = 24). Moreover, facing dyads were rated higher on social semantic dimensions, showing that facingness adds social value to stimuli (Experiment 1b, N = 138). The same visual preference was found in juvenile macaque monkeys (Experiment 2, N = 21). Finally, on the human developmental timescale, this preference emerged by 5 years, although young infants by 7 months of age already discriminate visual scenes on the basis of body positioning (Experiment 3, N = 120). We discuss how the preference for facing dyads, shared by human adults, young children, and macaques, can signal a new milestone in social cognition development, supporting processing and learning from third-party social interactions.
Affiliation(s)
- Nicolas Goupil, Holly Rayson, Émilie Serraille, Alice Massera, Pier Francesco Ferrari, Jean-Rémy Hochmann, Liuba Papeo
- Institut des Sciences Cognitives Marc Jeannerod, Bron, France; Centre National de la Recherche Scientifique (CNRS), Paris, France; and Université Claude Bernard Lyon 1
3. Zanon M, Lemaire BS, Papeo L, Vallortigara G. Innate sensitivity to face-to-face biological motion. iScience 2024; 27:108793. PMID: 38299110; PMCID: PMC10828802; DOI: 10.1016/j.isci.2024.108793.
Abstract
Sensitivity to face-to-face stimulus configurations, which likely indicate interaction, seems to appear early in infants' development, and recently a preference for face-to-face (vs. other spatial configurations) has been shown to occur in macaque monkeys. It is unknown, however, whether such a preference is acquired through experience or is an evolutionarily given biological predisposition. Here, we exploited a precocial social animal, the domestic chick, as a model system to address this question. Visually naive chicks were tested for their spontaneous preferences for face-to-face vs. back-to-back hen dyads of point-light displays depicting biological motion. We found that female chicks have a spontaneous preference for the facing, interactive configuration. Males showed no preference, as expected given the well-known low social motivation of males in this highly polygynous species. These findings support the idea of an innate and sex-dependent predisposition toward social and interacting stimuli in a vertebrate brain such as that of chicks.
Affiliation(s)
- Mirko Zanon
- Center for Mind/Brain Sciences, University of Trento, Rovereto, Italy
- Liuba Papeo
- Institut des Sciences Cognitives - Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, France
4. McMahon E, Isik L. Seeing social interactions. Trends Cogn Sci 2023; 27:1165-1179. PMID: 37805385; PMCID: PMC10841760; DOI: 10.1016/j.tics.2023.09.001.
Abstract
Seeing the interactions between other people is a critical part of our everyday visual experience, but recognizing the social interactions of others is often considered outside the scope of vision and grouped with higher-level social cognition like theory of mind. Recent work, however, has revealed that recognition of social interactions is efficient and automatic, is well modeled by bottom-up computational algorithms, and occurs in visually selective regions of the brain. We review recent evidence from these three methodologies (behavioral, computational, and neural) that converges to suggest that the core of social interaction perception is visual. We propose a computational framework for how this process is carried out in the brain and offer directions for future interdisciplinary investigations of social perception.
Affiliation(s)
- Emalie McMahon
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA
- Leyla Isik
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA.
5. Thiele M, Kalinke S, Michel C, Haun DBM. Direct and Observed Joint Attention Modulate 9-Month-Old Infants' Object Encoding. Open Mind (Camb) 2023; 7:917-946. PMID: 38053630; PMCID: PMC10695677; DOI: 10.1162/opmi_a_00114.
Abstract
Sharing joint visual attention to an object with another person biases infants to encode qualitatively different object properties compared to a parallel attention situation lacking interpersonal sharedness. This study investigated whether merely observing joint attention amongst others shows the same effect. In Experiment 1 (first-party replication experiment), N = 36 9-month-old German infants were presented with a violation-of-expectation task during which they saw an adult looking either in the direction of the infant (eye contact) or to the side (no eye contact) before and after looking at an object. Following an occlusion phase, infants saw one of three different outcomes: the same object reappeared at the same screen position (no change), the same object reappeared at a novel position (location change), or a novel object appeared at the same position (identity change). We found that infants looked longer at identity change outcomes (vs. no changes) in the "eye contact" condition compared to the "no eye contact" condition. In contrast, infants' response to location changes was not influenced by the presence of eye contact. In Experiment 2, we found the same result pattern in a matched third-party design, in which another sample of N = 36 9-month-old German infants saw two adults establishing eye contact (or no eye contact) before alternating their gaze between an object and their partner without ever looking at the infant. These findings indicate that infants learn similarly from interacting with others and observing others interact, suggesting that infant cultural learning extends beyond infant-directed interactions.
Affiliation(s)
- Maleen Thiele
- Department of Comparative Cultural Psychology, Max Planck Institute for Evolutionary Anthropology, Leipzig, Germany
- Steven Kalinke
- Department of Comparative Cultural Psychology, Max Planck Institute for Evolutionary Anthropology, Leipzig, Germany
- Christine Michel
- Department of Early Child Development and Culture, Leipzig University, Leipzig, Germany
- SRH University of Applied Health Sciences, Gera, Germany
- Daniel B. M. Haun
- Department of Comparative Cultural Psychology, Max Planck Institute for Evolutionary Anthropology, Leipzig, Germany
6. Skripkauskaite S, Mihai I, Koldewyn K. Attentional bias towards social interactions during viewing of naturalistic scenes. Q J Exp Psychol (Hove) 2023; 76:2303-2311. PMID: 36377819; PMCID: PMC10503253; DOI: 10.1177/17470218221140879.
Abstract
Human visual attention is readily captured by the social information in scenes. Multiple studies have shown that social areas of interest (AOIs) such as faces and bodies attract more attention than non-social AOIs (e.g., objects or background). However, whether this attentional bias is moderated by the presence (or absence) of a social interaction remains unclear. Here, the gaze of 70 young adults was tracked during the free viewing of 60 naturalistic scenes. All photographs depicted two people, who were either interacting or not. Analyses of dwell time revealed that more attention was spent on human than background AOIs in the interactive pictures. In non-interactive pictures, however, dwell time did not differ between AOI types. In the time-to-first-fixation analysis, humans always captured attention before other elements of the scene, although this difference was slightly larger in interactive than non-interactive scenes. These findings confirm the existence of a bias towards social information in attentional capture and suggest that our attention values social interactions beyond the mere presence of two people.
Affiliation(s)
- Simona Skripkauskaite
- School of Psychology, Bangor University, Bangor, UK
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Ioana Mihai
- School of Psychology, Bangor University, Bangor, UK
7. Hochmann JR. Incomplete language-of-thought in infancy. Behav Brain Sci 2023; 46:e278. PMID: 37766647; DOI: 10.1017/s0140525x23001826.
Abstract
The view that infants possess a full-fledged propositional language-of-thought (LoT) is appealing, providing a unifying account for infants' precocious reasoning skills in many domains. However, careful appraisal of empirical evidence suggests that there is still no convincing evidence that infants possess discrete representations of abstract relations, suggesting that infants' LoT remains incomplete. Parallel arguments hold for perception.
Affiliation(s)
- Jean-Rémy Hochmann
- CNRS UMR5229 - Institut des Sciences Cognitives Marc Jeannerod, Bron, France. https://sites.google.com/site/jrhochmann/
- Université Lyon 1 Claude Bernard, Lyon, France
8. Xu Z, Chen H, Wang Y. Invisible social grouping facilitates the recognition of individual faces. Conscious Cogn 2023; 113:103556. PMID: 37541010; DOI: 10.1016/j.concog.2023.103556.
Abstract
Emerging evidence suggests a specialized mechanism supporting perceptual grouping of social entities. However, the stage at which social grouping is processed remains unclear. Across four experiments, we showed that participants' recognition of a visible face was facilitated by the presence of a second, facing face (thus forming a social grouping) relative to a nonfacing face, even when the second face was invisible. Using a monocular/dichoptic paradigm, we further found that the social grouping facilitation effect occurred when the two faces were presented dichoptically to different eyes rather than monocularly to the same eye, suggesting that social grouping relies on binocular rather than monocular neural channels. The above effects were not found for inverted face dyads, thereby ruling out the contribution of nonsocial factors. Taken together, these findings support the unconscious influence of social grouping on visual perception and suggest an early origin of social grouping processing in the visual pathway.
Affiliation(s)
- Zhenjie Xu
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou 310028, Zhejiang, China
- Hui Chen
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou 310028, Zhejiang, China.
- Yingying Wang
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou 310028, Zhejiang, China.
9. Processing third-party social interactions in the human infant brain. Infant Behav Dev 2022; 68:101727. PMID: 35667276; DOI: 10.1016/j.infbeh.2022.101727.
Abstract
The understanding of developing social brain functions during infancy relies on research that has focused on studying how infants engage in first-person social interactions or view individual agents and their actions. Behavioral research suggests that observing and learning from third-party social interactions plays a foundational role in early social and moral development. However, the brain systems involved in observing third-party social interactions during infancy are unknown. The current study tested the hypothesis that brain systems in prefrontal and temporal cortex, previously identified in adults and children, begin to specialize in third-party social interaction processing during infancy. Infants (N = 62), ranging from 6 to 13 months in age, had their brain responses measured using functional near-infrared spectroscopy (fNIRS) while viewing third-party social interactions and two control conditions: two individual actions, and inverted social interactions. The results show that infants preferentially engage brain regions localized within the dorsomedial prefrontal cortex when viewing third-party social interactions. These findings suggest that brain systems processing third-party social interaction begin to develop early in human ontogeny and may thus play a foundational role in supporting the interpretation of and learning from social interactions.