1. Zhao M, Wang J. Consistent social information perceived in animated backgrounds improves ensemble perception of facial expressions. Perception 2024; 53:563-578. PMID: 38725355. DOI: 10.1177/03010066241253073.
Abstract
Observers can rapidly extract the mean emotion from a set of faces with remarkable precision, a phenomenon known as ensemble coding. Previous studies have demonstrated that matched physical backgrounds improve the precision of ongoing ensemble tasks. However, it remains unknown whether this facilitation effect still occurs when matched social information is perceived from the backgrounds. In two experiments, participants decided whether the test face in the retrieving phase appeared more disgusted or more neutral than the mean emotion of the face set in the encoding phase. Both phases were paired with task-irrelevant animated backgrounds, which included either a forward movement trajectory carrying the "cooperatively chasing" information, or a backward movement trajectory conveying no such chasing information. The backgrounds in the encoding and retrieving phases were either mismatched (i.e., forward and backward replays of the same trajectory) or matched (i.e., two identical forward movement trajectories in Experiment 1, or two different forward movement trajectories in Experiment 2). Participants in both experiments showed higher ensemble precision and better discrimination sensitivity when the backgrounds matched. The findings suggest that consistent social information perceived from memory-related context exerts a context-matching facilitation effect on ensemble coding and, more importantly, that this effect is independent of consistent physical information.
Affiliation(s)
- Mengfei Zhao
- School of Psychology, Zhejiang Normal University, Jinhua, PR China
- Jun Wang
- School of Psychology, Zhejiang Normal University, Jinhua, PR China
- Zhejiang Philosophy and Social Science Laboratory for the Mental Health and Crisis Intervention of Children and Adolescents, Zhejiang Normal University, Jinhua, PR China
2. Peng Y, Burling JM, Todorova GK, Neary C, Pollick FE, Lu H. Patterns of saliency and semantic features distinguish gaze of expert and novice viewers of surveillance footage. Psychon Bull Rev 2024; 31:1745-1758. PMID: 38273144. PMCID: PMC11358171. DOI: 10.3758/s13423-024-02454-y.
Abstract
When viewing the actions of others, we not only see patterns of body movements, but we also "see" the intentions and social relations of people. Experienced forensic examiners - Closed Circuit Television (CCTV) operators - have been shown to outperform novices in identifying and predicting hostile intentions from surveillance footage. However, it remains largely unknown what visual content CCTV operators actively attend to, and whether they develop different strategies for active information seeking than novices do. Here, we conducted a computational analysis of gaze-centered stimuli derived from the eye movements of experienced CCTV operators and novices viewing the same surveillance footage. Low-level image features were extracted from gaze-centered regions by a visual saliency model, whereas object-level semantic features were extracted by a deep convolutional neural network (DCNN), AlexNet. We found that the looking behavior of CCTV operators differs from that of novices: operators actively attend to visual content with different patterns of saliency and semantic features. Expertise in selectively utilizing informative features at different levels of the visual hierarchy may play an important role in facilitating the efficient detection of social relationships between agents and the prediction of harmful intentions.
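A hypothetical sketch of the gaze-centered cropping step described above. The function names, patch size, and the standard-deviation saliency proxy are illustrative assumptions, not the authors' pipeline, which used a full saliency model and AlexNet features:

```python
import numpy as np

def gaze_centered_patch(frame, gaze_xy, size=64):
    """Crop a square patch centered on the gaze point, padding at borders.

    frame:   2-D grayscale array (H, W)
    gaze_xy: (x, y) gaze coordinates in pixels
    size:    side length of the square patch
    """
    half = size // 2
    x, y = int(gaze_xy[0]), int(gaze_xy[1])
    # Pad by half the patch size so patches near the border keep the requested size.
    padded = np.pad(frame, half, mode="edge")
    return padded[y:y + size, x:x + size]

def contrast_saliency(patch):
    """A crude low-level saliency proxy: local standard deviation of intensity."""
    return float(patch.std())

frame = np.zeros((120, 160))
frame[40:80, 60:100] = 1.0  # a bright region an observer might fixate
patch = gaze_centered_patch(frame, gaze_xy=(80, 60), size=64)
print(patch.shape, contrast_saliency(patch))
```

In the study, features like this (plus DCNN activations) were then compared between expert and novice gaze to characterize what each group attends to.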
Affiliation(s)
- Yujia Peng
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing 100871, China
- Institute for Artificial Intelligence, Peking University, Beijing, China
- National Key Laboratory of General Artificial Intelligence, Beijing Institute for General Artificial Intelligence, Beijing, China
- Department of Psychology, University of California, Los Angeles, CA, USA
- Joseph M Burling
- Department of Psychology, University of California, Los Angeles, CA, USA
- Greta K Todorova
- School of Psychology and Neuroscience, University of Glasgow, Glasgow, UK
- Catherine Neary
- School of Health and Social Wellbeing, The University of the West of England, Bristol, UK
- Frank E Pollick
- School of Psychology and Neuroscience, University of Glasgow, Glasgow, UK
- Hongjing Lu
- Department of Psychology, University of California, Los Angeles, CA, USA
- Department of Statistics, University of California, Los Angeles, CA, USA
3. Goldman EJ, Poulin-Dubois D. Children's anthropomorphism of inanimate agents. Wiley Interdiscip Rev Cogn Sci 2024; 15:e1676. PMID: 38659105. DOI: 10.1002/wcs.1676.
Abstract
This review article examines the extant literature on animism and anthropomorphism in infants and young children. A substantial body of work indicates that both infants and young children have a broad concept of what constitutes a sentient agent and react to inanimate objects as they do to people in the same context. The literature also reveals a developmental pattern in which anthropomorphism decreases with age; social robots, however, appear to be an exception to this pattern. Children attribute psychological properties to social robots less than to people, yet still anthropomorphize them, and the extent to which they do so depends on their exposure to robots and on the robots' morphology and human-like behaviors. Based on the existing literature, we conclude that in infancy a large range of inanimate objects (e.g., boxes, geometric figures) that display animate motion patterns trigger the same behaviors observed in child-adult interactions, suggesting some implicit form of anthropomorphism. The review concludes that additional research is needed to understand what infants and children judge as social agents and how the perception of inanimate agents changes over the lifespan. As exposure to robots and virtual assistants increases, future research must focus on better understanding the full impact that regular interactions with such partners will have on children's anthropomorphizing. This article is categorized under: Psychology > Learning; Cognitive Biology > Cognitive Development; Computer Science and Robotics > Robotics.
4. Scholl BJ. Perceptual (roots of) core knowledge. Behav Brain Sci 2024; 47:e140. PMID: 38934457. DOI: 10.1017/s0140525x23003023.
Abstract
Some core knowledge may be rooted in - or even identical to - well-characterized mechanisms of mid-level visual perception and attention. In the decades since it was first proposed, this possibility has inspired (and has been supported by) several discoveries in both infant cognition and adult perception, but it also faces several challenges. To what degree does What Babies Know reflect how babies see and attend?
Affiliation(s)
- Brian J Scholl
- Department of Psychology, Yale University, New Haven, CT, USA. http://perception.yale.edu/
5. Vicovaro M, Squadrelli Saraceno F, Dalmaso M. Exploring the influence of self-identification on perceptual judgments of physical and social causality. PeerJ 2024; 12:e17449. PMID: 38799071. PMCID: PMC11122051. DOI: 10.7717/peerj.17449.
Abstract
People tend to overestimate the causal contribution of the self to observed outcomes in various situations, a cognitive bias known as the 'illusion of control.' This study examines whether this bias affects causality judgments in animations depicting physical and social causal interactions. In two experiments, participants were instructed to associate themselves and a hypothetical stranger with two geometrical shapes (a circle and a square). Subsequently, they viewed animations portraying these shapes in the roles of agent and patient in causal interactions. Within one block, the shape related to the self served as the agent, while the shape associated with the stranger played the role of the patient; in the other block, the identity-role association was reversed. We posited that the perception of the self as a causal agent might influence explicit judgments of physical and social causality. Experiment 1 demonstrated that physical causality ratings were shaped solely by kinematic cues. Experiment 2, which emphasised social causality, confirmed the dominance of kinematic parameters. Therefore, contrary to the hypothesis that specific identity-role associations would diminish causality ratings, the results indicated that our manipulation had a negligible impact. The study contributes to understanding the interplay between kinematic and non-kinematic cues in human causal reasoning. It suggests that explicit judgments of causality in simple animations rely primarily on low-level kinematic cues, with the bias of overestimating the self's contribution playing a negligible role.
6. McMahon E, Bonner MF, Isik L. Hierarchical organization of social action features along the lateral visual pathway. Curr Biol 2023; 33:5035-5047.e8. PMID: 37918399. PMCID: PMC10841461. DOI: 10.1016/j.cub.2023.10.015.
Abstract
Recent theoretical work has argued that in addition to the classical ventral (what) and dorsal (where/how) visual streams, there is a third visual stream on the lateral surface of the brain specialized for processing social information. Like visual representations in the ventral and dorsal streams, representations in the lateral stream are thought to be hierarchically organized. However, no prior studies have comprehensively investigated the organization of naturalistic, social visual content in the lateral stream. To address this question, we curated a naturalistic stimulus set of 250 3-s videos of two people engaged in everyday actions. Each clip was richly annotated for its low-level visual features, mid-level scene and object properties, visual social primitives (including the distance between people and the extent to which they were facing), and high-level information about social interactions and affective content. Using a condition-rich fMRI experiment and a within-subject encoding model approach, we found that low-level visual features are represented in early visual cortex (EVC) and middle temporal (MT) area, mid-level visual social features in extrastriate body area (EBA) and lateral occipital complex (LOC), and high-level social interaction information along the superior temporal sulcus (STS). Communicative interactions, in particular, explained unique variance in regions of the STS after accounting for variance explained by all other labeled features. Taken together, these results provide support for representation of increasingly abstract social visual content-consistent with hierarchical organization-along the lateral visual stream and suggest that recognizing communicative actions may be a key computational goal of the lateral visual pathway.
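The variance-partitioning logic above (a feature set "explains unique variance" when the full model beats a model that omits it) can be illustrated on toy data. This is a simplification with invented feature names and plain least squares; the study itself fit encoding models to fMRI responses:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a response driven by both low-level and social features.
n_samples = 200
low_level = rng.normal(size=(n_samples, 5))  # e.g., motion-energy features
social = rng.normal(size=(n_samples, 3))     # e.g., communicative-interaction labels
response = (low_level @ rng.normal(size=5)
            + 2.0 * social @ rng.normal(size=3)
            + rng.normal(scale=0.5, size=n_samples))

def r_squared(X, y):
    """Proportion of variance explained by an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

full = np.hstack([low_level, social])
# Unique variance of the social features: full model minus the reduced model.
unique_social = r_squared(full, response) - r_squared(low_level, response)
print(round(unique_social, 3))
```

A positive difference is the toy analogue of the paper's finding that communicative-interaction features explain variance in the STS beyond all other labeled features.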
Affiliation(s)
- Emalie McMahon
- Department of Cognitive Science, Zanvyl Krieger School of Arts & Sciences, Johns Hopkins University, 237 Krieger Hall, 3400 N. Charles Street, Baltimore, MD 21218, USA
- Michael F Bonner
- Department of Cognitive Science, Zanvyl Krieger School of Arts & Sciences, Johns Hopkins University, 237 Krieger Hall, 3400 N. Charles Street, Baltimore, MD 21218, USA
- Leyla Isik
- Department of Cognitive Science, Zanvyl Krieger School of Arts & Sciences, Johns Hopkins University, 237 Krieger Hall, 3400 N. Charles Street, Baltimore, MD 21218, USA; Department of Biomedical Engineering, Whiting School of Engineering, Johns Hopkins University, Suite 400 West, Wyman Park Building, 3400 N. Charles Street, Baltimore, MD 21218, USA
7. McMahon E, Isik L. Seeing social interactions. Trends Cogn Sci 2023; 27:1165-1179. PMID: 37805385. PMCID: PMC10841760. DOI: 10.1016/j.tics.2023.09.001.
Abstract
Seeing the interactions between other people is a critical part of our everyday visual experience, but recognizing the social interactions of others is often considered outside the scope of vision and grouped with higher-level social cognition like theory of mind. Recent work, however, has revealed that recognition of social interactions is efficient and automatic, is well modeled by bottom-up computational algorithms, and occurs in visually-selective regions of the brain. We review recent evidence from these three methodologies (behavioral, computational, and neural) that converge to suggest the core of social interaction perception is visual. We propose a computational framework for how this process is carried out in the brain and offer directions for future interdisciplinary investigations of social perception.
Affiliation(s)
- Emalie McMahon
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA
- Leyla Isik
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
8. Vicovaro M, Brunello L, Parovel G. The psychophysics of bouncing: Perceptual constraints, physical constraints, animacy, and phenomenal causality. PLoS One 2023; 18:e0285448. PMID: 37594993. PMCID: PMC10437946. DOI: 10.1371/journal.pone.0285448.
Abstract
In the present study we explored the perception of physical and animated motion in bouncing-like scenarios through four experiments. In the first experiment, participants were asked to categorize bouncing-like displays as physical bounce, animated motion, or other. Several parameters of the animations were manipulated: the simulated coefficient of restitution, the value of simulated gravitational acceleration, the motion pattern (uniform acceleration/deceleration or constant speed), and the number of bouncing cycles. In the second experiment, a variable delay was introduced at the moment of the collision between the bouncing object and the bouncing surface. The main results show that, although observers appear to have realistic representations of physical constraints like energy conservation and gravitational acceleration/deceleration, the amount of visual information available in the scene strongly modulates the extent to which they rely on these representations. A coefficient of restitution >1 was a crucial cue to animacy in displays showing three bouncing cycles, but not in displays showing one bouncing cycle. Additionally, bouncing impressions appear to be driven by perceptual constraints that are unrelated to the physical realism of the scene, like a preference for simulated gravitational attraction smaller than g and perceived temporal contiguity between the different phases of bouncing. In the third experiment, the visible opaque bouncing surface was removed from the scene; this had no substantial effect on the resulting impressions of physical bounce or animated motion, suggesting that the visual system can fill in the missing element. The fourth experiment explored visual impressions of causality in bouncing scenarios. At odds with current theories of causal perception, the results indicate that a passive object can be perceived as the direct cause of the motion behavior of an active object.
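The role of the simulated coefficient of restitution can be made concrete: rebound speed scales by e at each impact, so kinetic energy scales by e squared and successive apex heights follow h(n+1) = e^2 * h(n). A minimal sketch (illustrative only, not the study's animation code):

```python
def apex_heights(h0, restitution, n_bounces):
    """Apex height after each bounce.

    For coefficient of restitution e, rebound speed is e times impact speed,
    so each apex height is e**2 times the previous one.
    """
    heights = [h0]
    for _ in range(n_bounces):
        heights.append(heights[-1] * restitution ** 2)
    return heights

# e < 1: physically plausible, each bounce lower (reads as inert motion).
print(apex_heights(1.0, 0.8, 3))
# e > 1: energy gained at each impact, the cue the study links to animacy
# (but, per the results, only when several bouncing cycles are visible).
print(apex_heights(1.0, 1.2, 3))
```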
Affiliation(s)
- Michele Vicovaro
- Department of General Psychology, University of Padova, Padova, Italy
- Loris Brunello
- Department of General Psychology, University of Padova, Padova, Italy
- Giulia Parovel
- Department of Social, Political and Cognitive Sciences, University of Siena, Siena, Italy
9. Parovel G. Perceiving animacy from kinematics: visual specification of life-likeness in simple geometric patterns. Front Psychol 2023; 14:1167809. PMID: 37333577. PMCID: PMC10273680. DOI: 10.3389/fpsyg.2023.1167809.
Abstract
Since the seminal work of Heider and Simmel and Michotte's research, many studies have shown that, under appropriate conditions, displays of simple geometric shapes elicit rich and vivid impressions of animacy and intentionality. The main purpose of this review is to emphasize the close relationship between kinematics and perceived animacy by showing which specific motion cues and spatiotemporal patterns automatically trigger visual perceptions of animacy and intentionality. The animacy phenomenon has been demonstrated to be fast, automatic, irresistible, and highly stimulus-driven. Moreover, there is growing evidence that animacy attributions, although usually associated with higher-level cognition and long-term memory, may reflect highly specialized visual processes that have evolved to support adaptive behaviors critical for survival. The hypothesis of a life-detector hardwired in the perceptual system is also supported by recent studies in early development and animal cognition, as well as by the "irresistibility" criterion, i.e., the persistence of animacy perception in adulthood even in the face of conflicting background knowledge. Finally, further support for the hypothesis that animacy is processed in the earliest stages of vision comes from recent experimental evidence on the interaction of animacy with other visual processes, such as visuomotor performance, visual memory, and speed estimation. In summary, the ability to detect animacy in all its nuances may be related to the visual system's sensitivity to those changes in kinematics - considered as a multifactorial relational system - that are associated with the presence of living beings, as opposed to the natural, inert behavior of physically constrained, form-invariant objects, or even mutually independent moving agents. This broad predisposition would allow the observer not only to detect the presence of animate beings and distinguish them from inanimate objects, but also to quickly grasp their psychological, emotional, and social characteristics.
Affiliation(s)
- Giulia Parovel
- Department of Social, Political and Cognitive Sciences, University of Siena, Siena, Italy
10. Peng M, Liang M, Huang H, Fan J, Yu L, Liao J. The effect of different animated brand logos on consumer response: an event-related potential and self-response study. Comput Human Behav 2023. DOI: 10.1016/j.chb.2023.107701.
11. Socially evaluative contexts facilitate mentalizing. Trends Cogn Sci 2023; 27:17-29. PMID: 36357300. DOI: 10.1016/j.tics.2022.10.003.
Abstract
Our ability to understand others' minds stands at the foundation of human learning, communication, cooperation, and social life more broadly. Although humans' ability to mentalize has been well-studied throughout the cognitive sciences, little attention has been paid to whether and how mentalizing differs across contexts. Classic developmental studies have examined mentalizing within minimally social contexts, in which a single agent seeks a neutral inanimate object. Such object-directed acts may be common, but they are typically consequential only to the object-seeking agent themselves. Here, we review a host of indirect evidence suggesting that contexts providing the opportunity to evaluate prospective social partners may facilitate mentalizing across development. Our article calls on cognitive scientists to study mentalizing in contexts where it counts.
12. Lemaire BS, Vallortigara G. Life is in motion (through a chick's eye). Anim Cogn 2023; 26:129-140. PMID: 36222937. PMCID: PMC9877072. DOI: 10.1007/s10071-022-01703-8.
Abstract
Cognitive scientists, social psychologists, computer scientists, neuroscientists, ethologists and many others have all wondered how brains detect and interpret the motion of living organisms. It appears that specific cues, incorporated into our brains by natural selection, serve to signal the presence of living organisms. A simple geometric figure such as a triangle put in motion with specific kinematic rules can look alive, and it can even seem to have intentions and goals. In this article, we survey decades of parallel investigations on the motion cues that drive animacy perception-the sensation that something is alive-in non-human animals, especially in precocial species, such as the domestic chick, to identify inborn biological predispositions. At the same time, we highlight the relevance of these studies for an understanding of human typical and atypical cognitive development.
Affiliation(s)
- Bastien S Lemaire
- Center for Mind and Brain Sciences, University of Trento, Trento, Italy
13. Lemaire BS, Rosa-Salva O, Fraja M, Lorenzi E, Vallortigara G. Spontaneous preference for unpredictability in the temporal contingencies between agents' motion in naive domestic chicks. Proc Biol Sci 2022; 289:20221622. PMID: 36350221. PMCID: PMC9653227. DOI: 10.1098/rspb.2022.1622.
Abstract
The ability to recognize animate agents based on their motion has been investigated in humans and animals alike. When the movements of multiple objects are interdependent, humans perceive the presence of social interactions and goal-directed behaviours. Here, we investigated how visually naive domestic chicks respond to agents whose motion was reciprocally contingent in space and time (i.e. the time and direction of motion of one object can be predicted from the time and direction of motion of another object). We presented a 'social aggregation' stimulus, in which three smaller discs repeatedly converged towards a bigger disc, moving in a manner resembling a mother hen and chicks (versus a control stimulus lacking such interactions). Remarkably, chicks preferred stimuli in which the timing of the motion of one object could not be predicted by that of other objects. This is the first demonstration of a sensitivity to the temporal relationships between the motion of different objects in naive animals, a trait that could be at the basis of the development of the perception of social interaction and goal-directed behaviours.
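The contrast between contingent and non-contingent motion boils down to whether a follower's onset time is a deterministic function of the leader's. A toy sketch of the two conditions (the lag value and time window are hypothetical, not the stimulus parameters used in the study):

```python
import random

def contingent_onsets(leader_onsets, lag=0.5):
    """Temporally contingent condition: each follower movement starts a
    fixed lag after the leader's, so its timing is fully predictable."""
    return [t + lag for t in leader_onsets]

def independent_onsets(n, t_max=10.0, rng=None):
    """Unpredictable condition: follower onsets are drawn at random and
    carry no information about the leader's timing."""
    rng = rng or random.Random(42)
    return sorted(rng.uniform(0, t_max) for _ in range(n))

leader = [1.0, 4.0, 7.0]
print(contingent_onsets(leader))   # [1.5, 4.5, 7.5]
print(independent_onsets(3))
```

The chicks' preference in the study was for the second kind of display, where one object's timing cannot be predicted from another's.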
Affiliation(s)
- Bastien S. Lemaire
- Center for Mind/Brain Sciences, University of Trento, Piazza Manifattura 1, 38068 Rovereto, TN, Italy
- Orsola Rosa-Salva
- Center for Mind/Brain Sciences, University of Trento, Piazza Manifattura 1, 38068 Rovereto, TN, Italy
- Margherita Fraja
- Center for Mind/Brain Sciences, University of Trento, Piazza Manifattura 1, 38068 Rovereto, TN, Italy
- Elena Lorenzi
- Center for Mind/Brain Sciences, University of Trento, Piazza Manifattura 1, 38068 Rovereto, TN, Italy
- Giorgio Vallortigara
- Center for Mind/Brain Sciences, University of Trento, Piazza Manifattura 1, 38068 Rovereto, TN, Italy
14. Abdai J, Uccheddu S, Gácsi M, Miklósi Á. Exploring the advantages of using artificial agents to investigate animacy perception in cats and dogs. Bioinspir Biomim 2022; 17:065009. PMID: 36130608. DOI: 10.1088/1748-3190/ac93d9.
Abstract
Self-propelled motion cues elicit the perception of inanimate objects as animate. Studies usually rely on the looking behaviour of subjects towards stimuli displayed on a screen, but utilizing artificial unidentified moving objects (UMOs) provides a more natural, interactive context. Here, we investigated whether cats and dogs discriminate between UMOs showing animate vs. inanimate motion, and how they react to the UMOs' interactive behaviour. Subjects first observed, in turn, the motion of an animate and an inanimate UMO, and then they could move freely for 2 min while both UMOs were present (two-way choice phase). In the following specific-motion phase, the animate UMO showed one of three interactive behaviours: pushing a ball, a luring motion, or moving towards the subject (between-subjects design). Then, subjects could move freely for 2 min again while the UMO was motionless. At the end, subjects were free to move in the room while the UMO moved semi-randomly. We found that dogs approached and touched the UMO(s) sooner and more frequently than cats, regardless of the context. In the two-way choice phase, dogs looked at the animate UMO more often, and both species touched the animate UMO more frequently. However, whether the UMO showed playing, luring or assertive behaviour did not influence subjects' behaviour. In summary, both species displayed distinctive behaviour towards the animate UMO, but in dogs this was reflected in looking behaviour as well as physical contact. Overall, dogs were keener to explore and interact with the UMO than cats, which might be due to cats' generally higher stress in novel environments. The findings indicate the importance of measuring multiple behaviours when assessing responses to animacy. The live demonstration using artificial agents provides a unique opportunity to study social perception in nonhuman species.
Affiliation(s)
- Judit Abdai
- MTA-ELTE Comparative Ethology Research Group, Budapest, Hungary
- Márta Gácsi
- MTA-ELTE Comparative Ethology Research Group, Budapest, Hungary
- Department of Ethology, Eötvös Loránd University, Budapest, Hungary
- Ádám Miklósi
- MTA-ELTE Comparative Ethology Research Group, Budapest, Hungary
- Department of Ethology, Eötvös Loránd University, Budapest, Hungary
15. Lisøy RS, Biegler R, Haghish EF, Veckenstedt R, Moritz S, Pfuhl G. Seeing minds - a signal detection study of agency attribution along the autism-psychosis continuum. Cogn Neuropsychiatry 2022; 27:356-372. PMID: 35579601. DOI: 10.1080/13546805.2022.2075721.
Abstract
INTRODUCTION: Diametrically aberrant mentalising biases - hypermentalising in psychosis and hypomentalising in autism - are postulated by some theoretical models. To test this hypothesis, we measured psychotic-like experiences, autistic traits, and mentalising biases in a visual chasing paradigm.
METHODS: Participants from the general population (N = 300) and patients with psychosis (N = 26) judged the presence or absence of a chase in five-second displays of seemingly randomly moving dots. Hypermentalising is seeing a chase where there is none, whereas hypomentalising is failing to see a chase that is present.
RESULTS: Psychotic-like experiences were associated with hypermentalising. Autistic traits were not associated with hypomentalising, but with a reduced ability to discriminate chasing from non-chasing trials. Given the high correlation (τ = .41) between autistic traits and psychotic-like experiences, we controlled for concomitant symptom severity in agency detection. We found that all but those with many autistic and psychotic traits showed hypomentalising, suggesting an additive effect of traits on mentalising. In the second study, we found no hypermentalising in patients with psychosis, who also performed similarly to a matched control group.
CONCLUSIONS: The results suggest that hypermentalising is a cognitive bias restricted to subclinical psychotic-like experiences. There was no support for a diametrically opposite mentalising bias along the autism-psychosis continuum.
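The signal detection framing maps naturally onto standard measures: hypermentalising corresponds to false alarms (a liberal criterion, c < 0), hypomentalising to misses, and discrimination ability to d'. A minimal illustration with hypothetical hit and false-alarm rates (not values from the study):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

def criterion(hit_rate, false_alarm_rate):
    """Response bias c: negative values indicate a liberal 'chase present'
    bias, i.e. the hypermentalising direction in this paradigm."""
    z = NormalDist().inv_cdf
    return -0.5 * (z(hit_rate) + z(false_alarm_rate))

# Hypothetical observer: good discrimination, liberal bias toward reporting a chase.
print(round(d_prime(0.85, 0.30), 2), round(criterion(0.85, 0.30), 2))
```

Note that d' and c are independent by construction, which is why the study can report reduced discrimination with autistic traits without any accompanying shift in mentalising bias.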
Affiliation(s)
- Rebekka Solvik Lisøy
- Department of Psychology, Norwegian University of Science and Technology, Trondheim, Norway
- Robert Biegler
- Department of Psychology, Norwegian University of Science and Technology, Trondheim, Norway
- Ruth Veckenstedt
- Department of Psychiatry and Psychotherapy, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Steffen Moritz
- Department of Psychiatry and Psychotherapy, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Gerit Pfuhl
- Department of Psychology, UiT - The Arctic University of Norway, Tromsø, Norway
16
|
Kominsky JF, Lucca K, Thomas AJ, Frank MC, Hamlin JK. Simplicity and validity in infant research. COGNITIVE DEVELOPMENT 2022. [DOI: 10.1016/j.cogdev.2022.101213] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
17
Schultz J, Frith CD. Animacy and the prediction of behaviour. Neurosci Biobehav Rev 2022; 140:104766. [DOI: 10.1016/j.neubiorev.2022.104766] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2022] [Revised: 06/24/2022] [Accepted: 07/01/2022] [Indexed: 10/17/2022]
18
Patton CE, Wickens CD, Clegg BA, Noble KM, Smith CAP. How history trails and set size influence detection of hostile intentions. Cogn Res Princ Implic 2022; 7:41. [PMID: 35556185 PMCID: PMC9098711 DOI: 10.1186/s41235-022-00395-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2021] [Accepted: 04/28/2022] [Indexed: 11/17/2022] Open
Abstract
Previous research suggests people struggle to detect a series of movements that might imply hostile intentions of a vessel, yet this ability is crucial in many real-world naval scenarios. To investigate possible mechanisms for improving performance, participants engaged in a simple, simulated ship movement task. One of two hostile behaviors was present in one of the vessels: Shadowing (mirroring the participant's vessel's movements) and Hunting (closing in on the participant's vessel). In the first experiment, history trails, showing the previous nine positions of each ship connected by a line, were introduced as a potential diagnostic aid. In a second experiment, the number of computer-controlled ships on the screen also varied. Smaller set sizes improved detection performance. History trails also consistently improved detection performance for both behaviors, although performance still fell well short of optimal, even with the smaller set size. These findings suggest that working memory plays a critical role in performance on this dynamic decision-making task, and that the constraints of working memory capacity can be mitigated through a simple visual aid and an overall reduction in the number of objects being tracked. The implications for the detection of hostile intentions are discussed.
Affiliation(s)
- C A P Smith
- Colorado State University, Fort Collins, USA
19
Liu J, Hu J, Li Q, Zhao X, Liu Y, Liu S. Atypical processing pattern of gaze cues in dynamic situations in autism spectrum disorders. Sci Rep 2022; 12:4120. [PMID: 35260744 PMCID: PMC8904572 DOI: 10.1038/s41598-022-08080-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2021] [Accepted: 03/02/2022] [Indexed: 11/22/2022] Open
Abstract
Psychological studies using static or abstract images have generally shown that individuals with Autism Spectrum Disorder (ASD) process social information atypically. Yet a recent study showed that there was no difference in their use of social or non-social cues in dynamic interactive situations. To establish the cause of these inconsistent results, we added gaze cues in different directions to the chase detection paradigm to explore whether they would affect the performance of participants with ASD. Meanwhile, eye-tracking methodology was used to investigate whether the processing patterns of gaze cues differed between individuals with ASD and typically developing (TD) individuals. In this study, unlike typical controls, participants with ASD showed no detection advantage when the direction of gaze was consistent with the direction of movement (oriented condition). The results suggest that individuals with ASD may use an atypical processing pattern, which makes it difficult for them to exploit the social information contained in oriented gaze cues in dynamic interactive situations.
Affiliation(s)
- Jia Liu
- College of Psychology, Liaoning Normal University, Dalian, 116029, China
- Jinsheng Hu
- College of Psychology, Liaoning Normal University, Dalian, 116029, China
- Qi Li
- College of Psychology, Liaoning Normal University, Dalian, 116029, China
- Xiaoning Zhao
- College of Psychology, Liaoning Normal University, Dalian, 116029, China
- Ying Liu
- College of Psychology, Liaoning Normal University, Dalian, 116029, China
- Shuqing Liu
- College of Basic Medical Sciences, Dalian Medical University, Dalian, 116044, China
20
Abdai J, Miklósi Á. Selection for specific behavioural traits does not influence preference of chasing motion and visual strategy in dogs. Sci Rep 2022; 12:2370. [PMID: 35149772 PMCID: PMC8837786 DOI: 10.1038/s41598-022-06382-6] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2021] [Accepted: 01/13/2022] [Indexed: 11/22/2022] Open
Abstract
Perception of inanimate objects as animate on the basis of motion cues alone seems to be present in phylogenetically distant species, and from birth in humans and chicks. However, we do not know whether a species' social and ecological environment influences this phenomenon. Dogs are a unique species in which to investigate whether selection for specific behavioural traits influences animacy perception. We tested purebred companion dogs and assigned them to two groups based on the type of work they were originally selected for: (1) Chasers, selected to track and chase prey; (2) Retrievers, selected to mark and remember downed game. We displayed isosceles triangles presenting a chasing pattern vs. moving independently, in parallel on a screen. We hypothesised that Chasers would prefer to look at the chasing motion and that Retrievers would eventually focus their visual attention on the independent motion. Overall, we did not find a significant difference between groups in dogs' looking duration or in the frequency of their gaze alternation between the chasing and independent motions. Thus, it seems that selection for specific traits does not influence the perception of animate entities within the species.
Affiliation(s)
- Judit Abdai
- MTA-ELTE Comparative Ethology Research Group, Budapest, Hungary
- Ádám Miklósi
- MTA-ELTE Comparative Ethology Research Group, Budapest, Hungary; Department of Ethology, Eötvös Loránd University, Budapest, Hungary
21
Liu R, Xu F. Learning about others and learning from others: Bayesian probabilistic models of intuitive psychology and social learning. ADVANCES IN CHILD DEVELOPMENT AND BEHAVIOR 2022; 63:309-343. [DOI: 10.1016/bs.acdb.2022.04.007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
22
Jessop A, Chang F. Thematic role tracking difficulties across multiple visual events influences role use in language production. VISUAL COGNITION 2021. [DOI: 10.1080/13506285.2021.2013374] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Affiliation(s)
- Andrew Jessop
- School of Psychology, The University of Liverpool, Liverpool, UK
- Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Franklin Chang
- Department of English Studies, Kobe City University for Foreign Studies, Kobe, Japan
23
Muthesius A, Grothey F, Cunningham C, Hölzer S, Vogeley K, Schultz J. Preserved metacognition despite impaired perception of intentionality cues in schizophrenia. SCHIZOPHRENIA RESEARCH-COGNITION 2021; 27:100215. [PMID: 34692428 PMCID: PMC8517602 DOI: 10.1016/j.scog.2021.100215] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/03/2021] [Revised: 08/31/2021] [Accepted: 08/31/2021] [Indexed: 11/03/2022]
Abstract
Social cognition and metacognition are frequently impaired in schizophrenia, and these impairments complicate recovery. Recent work suggests that different aspects of metacognition may not be impaired to the same degree. Furthermore, metacognition and the cognitive capacity being monitored need not be similarly impaired. Here, we assessed performance in detecting cues of intentional behaviour, as well as metacognition about detecting those cues, in schizophrenia. Thirty patients and controls categorized animations of moving dots into those displaying a dyadic interaction demonstrating a chase or no chase and indicated their confidence in these judgments. Perception and metacognition were assessed using signal detection theoretic measures, which were analysed using frequentist and Bayesian statistics. Patients showed a deficit compared to controls in detecting intentionality cues, but showed preserved metacognitive performance in this task. Our study reveals a selective deficit in the perception of intentionality cues, but preserved metacognitive insight into the validity of this perception. It thus appears that impairment of metacognition in schizophrenia varies across cognitive domains: metacognition should not be considered a monolithic capacity that is either impaired or unimpaired.
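The signal detection theoretic measures this abstract relies on (perceptual sensitivity and response criterion) are standard computations. A minimal sketch, assuming the usual equal-variance Gaussian model; the function name and the hit/false-alarm rates are illustrative, not taken from the study:

```python
from statistics import NormalDist

def sdt_measures(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Equal-variance signal detection measures.

    d' (sensitivity) = z(H) - z(FA); c (criterion/bias) = -(z(H) + z(FA)) / 2.
    Rates must be strictly between 0 and 1 (apply a correction such as the
    log-linear rule beforehand if either rate is exactly 0 or 1).
    """
    z = NormalDist().inv_cdf  # standard normal quantile function
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2
    return d_prime, criterion

# Illustrative rates (not from the study): 84% hits, 16% false alarms.
d_prime, criterion = sdt_measures(hit_rate=0.84, fa_rate=0.16)
```

With these symmetric rates d' comes out near 2 and the criterion near 0 (no bias); the same two quantities separate "can the observer tell chase from no-chase" from "how willing is the observer to say chase", a distinction several entries in this list depend on.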
Affiliation(s)
- Ana Muthesius
- Department of Psychiatry and Psychotherapy, University of Cologne, Cologne, Germany
- Farina Grothey
- Department of Psychiatry and Psychotherapy, University of Cologne, Cologne, Germany
- Carter Cunningham
- Masters in Neuroscience Program, Medical Faculty, University of Bonn, Bonn, Germany
- Susanne Hölzer
- Department of Psychiatry and Psychotherapy, University of Cologne, Cologne, Germany
- Kai Vogeley
- Department of Psychiatry and Psychotherapy, University of Cologne, Cologne, Germany; Institute of Neuroscience and Medicine, Cognitive Neuroscience (INM-3), Research Centre Jülich, Jülich, Germany
- Johannes Schultz
- Center for Economics and Neuroscience, University of Bonn, Bonn, Germany; Institute of Experimental Epileptology and Cognition Research, Medical Faculty, University of Bonn, Bonn, Germany
24
Sosa FA, Ullman T, Tenenbaum JB, Gershman SJ, Gerstenberg T. Moral dynamics: Grounding moral judgment in intuitive physics and intuitive psychology. Cognition 2021; 217:104890. [PMID: 34487974 DOI: 10.1016/j.cognition.2021.104890] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2019] [Revised: 08/17/2021] [Accepted: 08/19/2021] [Indexed: 11/19/2022]
Abstract
When holding others morally responsible, we care about what they did, and what they thought. Traditionally, research in moral psychology has relied on vignette studies, in which a protagonist's actions and thoughts are explicitly communicated. While this research has revealed what variables are important for moral judgment, such as actions and intentions, it is limited in providing a more detailed understanding of exactly how these variables affect moral judgment. Using dynamic visual stimuli that allow for a more fine-grained experimental control, recent studies have proposed a direct mapping from visual features to moral judgments. We embrace the use of visual stimuli in moral psychology, but question the plausibility of a feature-based theory of moral judgment. We propose that the connection from visual features to moral judgments is mediated by an inference about what the observed action reveals about the agent's mental states, and what causal role the agent's action played in bringing about the outcome. We present a computational model that formalizes moral judgments of agents in visual scenes as computations over an intuitive theory of physics combined with an intuitive theory of mind. We test the model's quantitative predictions in three experiments across a wide variety of dynamic interactions.
Affiliation(s)
- Felix A Sosa
- Department of Psychology, Harvard University, United States; Center for Brains, Minds, and Machines, MIT, United States
- Tomer Ullman
- Department of Psychology, Harvard University, United States; Center for Brains, Minds, and Machines, MIT, United States
- Joshua B Tenenbaum
- Department of Brain and Cognitive Sciences, MIT, United States; Center for Brains, Minds, and Machines, MIT, United States
- Samuel J Gershman
- Department of Psychology, Harvard University, United States; Center for Brain Science, Harvard University, United States; Center for Brains, Minds, and Machines, MIT, United States
25
Shu T, Peng Y, Zhu SC, Lu H. A unified psychological space for human perception of physical and social events. Cogn Psychol 2021; 128:101398. [PMID: 34217107 DOI: 10.1016/j.cogpsych.2021.101398] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2020] [Revised: 06/10/2021] [Accepted: 06/14/2021] [Indexed: 11/27/2022]
Abstract
One of the great feats of human perception is the generation of quick impressions of both physical and social events based on sparse displays of motion trajectories. Here we aim to provide a unified theory that captures the interconnections between perception of physical and social events. A simulation-based approach is used to generate a variety of animations depicting rich behavioral patterns. Human experiments used these animations to reveal that perception of dynamic stimuli undergoes a gradual transition from physical to social events. A learning-based computational framework is proposed to account for human judgments. The model learns to identify latent forces by inferring a family of potential functions capturing physical laws, and value functions describing the goals of agents. The model projects new animations into a sociophysical space with two psychological dimensions: an intuitive sense of whether physical laws are violated, and an impression of whether an agent possesses intentions to perform goal-directed actions. This derived sociophysical space predicts a meaningful partition between physical and social events, as well as a gradual transition from physical to social perception. The space also predicts human judgments of whether individual objects are lifeless objects in motion, or human agents performing goal-directed actions. These results demonstrate that a theoretical unification based on physical potential functions and goal-related values can account for the human ability to form an immediate impression of physical and social events. This ability provides an important pathway from perception to higher cognition.
Affiliation(s)
- Tianmin Shu
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, USA
- Yujia Peng
- School of Psychological and Cognitive Sciences, Peking University, China
- Song-Chun Zhu
- Beijing Institute for General Artificial Intelligence, China; Department of Automation, Tsinghua University, China; Institute for Artificial Intelligence, Peking University, China
- Hongjing Lu
- Department of Psychology, University of California, Los Angeles, USA
26
Salatiello A, Hovaidi-Ardestani M, Giese MA. A Dynamical Generative Model of Social Interactions. Front Neurorobot 2021; 15:648527. [PMID: 34177508 PMCID: PMC8220068 DOI: 10.3389/fnbot.2021.648527] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2020] [Accepted: 04/23/2021] [Indexed: 11/24/2022] Open
Abstract
The ability to make accurate social inferences makes humans able to navigate and act in their social environment effortlessly. Converging evidence shows that motion is one of the most informative cues in shaping the perception of social interactions. However, the scarcity of parameterized generative models for the generation of highly-controlled stimuli has slowed down both the identification of the most critical motion features and the understanding of the computational mechanisms underlying their extraction and processing from rich visual inputs. In this work, we introduce a novel generative model for the automatic generation of an arbitrarily large number of videos of socially interacting agents for comprehensive studies of social perception. The proposed framework, validated with three psychophysical experiments, can generate as many as 15 distinct interaction classes. The model builds on classical dynamical system models of biological navigation and is able to generate visual stimuli that are parametrically controlled and representative of a heterogeneous set of social interaction classes. The proposed method thus represents an important tool for experiments aimed at unveiling the computational mechanisms mediating the perception of social interactions. The ability to generate highly-controlled stimuli makes the model valuable not only for conducting behavioral and neuroimaging studies, but also for developing and validating neural models of social inference, and machine vision systems for the automatic recognition of social interactions. In fact, contrasting human and model responses to a heterogeneous set of highly-controlled stimuli can help to identify critical computational steps in the processing of social interaction stimuli.
Affiliation(s)
- Alessandro Salatiello
- Section for Computational Sensomotorics, Department of Cognitive Neurology, Centre for Integrative Neuroscience, Hertie Institute for Clinical Brain Research, University Clinic Tübingen, Tübingen, Germany
- Mohammad Hovaidi-Ardestani
- Section for Computational Sensomotorics, Department of Cognitive Neurology, Centre for Integrative Neuroscience, Hertie Institute for Clinical Brain Research, University Clinic Tübingen, Tübingen, Germany
- Martin A Giese
- Section for Computational Sensomotorics, Department of Cognitive Neurology, Centre for Integrative Neuroscience, Hertie Institute for Clinical Brain Research, University Clinic Tübingen, Tübingen, Germany
27
Hafri A, Firestone C. The Perception of Relations. Trends Cogn Sci 2021; 25:475-492. [PMID: 33812770 DOI: 10.1016/j.tics.2021.01.006] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2020] [Revised: 01/05/2021] [Accepted: 01/18/2021] [Indexed: 11/16/2022]
Abstract
The world contains not only objects and features (red apples, glass bowls, wooden tables), but also relations holding between them (apples contained in bowls, bowls supported by tables). Representations of these relations are often developmentally precocious and linguistically privileged; but how does the mind extract them in the first place? Although relations themselves cast no light onto our eyes, a growing body of work suggests that even very sophisticated relations display key signatures of automatic visual processing. Across physical, eventive, and social domains, relations such as support, fit, cause, chase, and even socially interact are extracted rapidly, are impossible to ignore, and influence other perceptual processes. Sophisticated and structured relations are not only judged and understood, but also seen - revealing surprisingly rich content in visual perception itself.
Affiliation(s)
- Alon Hafri
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Cognitive Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Chaz Firestone
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Cognitive Science, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Philosophy, Johns Hopkins University, Baltimore, MD 21218, USA
28
Westra E, Terrizzi BF, van Baal ST, Beier JS, Michael J. Beyond avatars and arrows: Testing the mentalising and submentalising hypotheses with a novel entity paradigm. Q J Exp Psychol (Hove) 2021; 74:1709-1723. [PMID: 33752520 PMCID: PMC8392802 DOI: 10.1177/17470218211007388] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
In recent years, there has been a heated debate about how to interpret findings that seem to show that humans rapidly and automatically calculate the visual perspectives of others. In this study, we investigated the question of whether automatic interference effects found in the dot-perspective task are the product of domain-specific perspective-taking processes or of domain-general “submentalising” processes. Previous attempts to address this question have done so by implementing inanimate controls, such as arrows, as stimuli. The rationale for this is that submentalising processes that respond to directionality should be engaged by such stimuli, whereas domain-specific perspective-taking mechanisms, if they exist, should not. These previous attempts have been limited, however, by the implied intentionality of the stimuli they have used (e.g., arrows), which may have invited participants to imbue them with perspectival agency. Drawing inspiration from “novel entity” paradigms from infant gaze–following research, we designed a version of the dot-perspective task that allowed us to precisely control whether a central stimulus was viewed as animate or inanimate. Across four experiments, we found no evidence that automatic “perspective-taking” effects in the dot-perspective task are modulated by beliefs about the animacy of the central stimulus. Our results also suggest that these effects may be due to the task-switching elements of the dot-perspective paradigm, rather than automatic directional orienting. Together, these results indicate that neither the perspective-taking nor the standard submentalising interpretations of the dot-perspective task are fully correct.
Affiliation(s)
- Evan Westra
- Department of Philosophy, York University, Toronto, Ontario, Canada
- Brandon F Terrizzi
- Division of General and Community Pediatrics, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Simon T van Baal
- Department of Philosophy, Monash University, Clayton, Victoria, Australia
- John Michael
- Department of Cognitive Science, Central European University, Budapest, Hungary
29
Di Giorgio E, Lunghi M, Vallortigara G, Simion F. Newborns' sensitivity to speed changes as a building block for animacy perception. Sci Rep 2021; 11:542. [PMID: 33436701 PMCID: PMC7803759 DOI: 10.1038/s41598-020-79451-3] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2020] [Accepted: 12/03/2020] [Indexed: 02/06/2023] Open
Abstract
The human visual system can discriminate between animate beings vs. inanimate objects on the basis of kinematic cues, such as starting from rest and speed changes by self-propulsion. The ontogenetic origin of this capability is still under debate. Here we investigate for the first time whether newborns manifest an attentional bias toward objects that abruptly change their speed along a trajectory, as contrasted with objects that move at a constant speed. To this end, we systematically manipulated the motion speed of two objects. An object that moves at a constant speed was contrasted with an object that suddenly increases (Experiment 1) or suddenly decreases its speed (Experiment 2). When presented with a single speed change, newborns did not show any visual preference. However, newborns preferred an object that abruptly increases and then decreases its speed (Experiment 3), but they did not show any visual preference for the reverse sequence pattern (Experiment 4). Overall, the results are discussed in line with the hypothesis that attentional biases exist in newborns that direct their attention towards visual cues of motion that characterize animacy perception in adults.
Affiliation(s)
- Elisa Di Giorgio
- Dipartimento Di Psicologia Dello Sviluppo E Della Socializzazione, Università Degli Studi Di Padova, Via Venezia 8, 35131, Padua, PD, Italy
- Marco Lunghi
- Dipartimento Di Psicologia Dello Sviluppo E Della Socializzazione, Università Degli Studi Di Padova, Via Venezia 8, 35131, Padua, PD, Italy
- Giorgio Vallortigara
- CIMeC, Center for Mind/Brain Sciences, Università Degli Studi Di Trento, Trento, Italy
- Francesca Simion
- Dipartimento Di Psicologia Dello Sviluppo E Della Socializzazione, Università Degli Studi Di Padova, Via Venezia 8, 35131, Padua, PD, Italy
30
Parovel G, Guidi S. Speed Overestimation of the Moving Away Object in the Intentional Reaction Causal Effect. Iperception 2020; 11:2041669520980019. [PMID: 33489073 PMCID: PMC7768325 DOI: 10.1177/2041669520980019] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2020] [Accepted: 11/20/2020] [Indexed: 11/15/2022] Open
Abstract
We describe a new illusory speed effect arising in visual events developed by Michotte (1946/1963) in studies of causal perception and, more specifically, within the so-called intentional reaction effect: when an Object B is seen intentionally escaping from another Object A, its perceived speed is overestimated. In Experiment 1, we used two-alternative forced choice comparisons to estimate perceived speed scale values for a small square moving either alone or in different contexts known to elicit different impressions of animacy (Parovel et al., 2018). The results showed that B's speed was overestimated only in the condition in which it moved away from another approaching square moving in a nonrigid way, like a caterpillar. In Experiment 2, we psychophysically measured the magnitude of speed overestimation in that condition and tested whether it could be affected by further animacy cues related to the escaping object (the actual velocity of the square) and to the approaching square (its type of motion: caterpillar or linear). Results confirmed that B's speed was overestimated by up to 10% and that the degree of overestimation was affected by both experimental factors, being greater at higher speeds and when the chasing object moved in an animate fashion. This speed bias might be related to a higher sensitivity of visual processes to threat-related events such as fighting and chasing, leading to evolutionarily adaptive behaviours such as rapid flight from predators, but also to empathy and emotion understanding.
Affiliation(s)
- Giulia Parovel
- Department of Social, Political and Cognitive Sciences, University of Siena, Siena, Italy
- Stefano Guidi
- Department of Social, Political and Cognitive Sciences, University of Siena, Siena, Italy
31
Abstract
Introduction: People with schizophrenia perform poorly on theory-of-mind (ToM) tasks. They also generate less mental-state language to describe test stimuli depicting intentionality. Some of these individuals also show excessive mentalising when objective cues of intentionality are absent. We tested the perception and attribution of intentionality to resolve this paradox. Methods: 23 patients with schizophrenia and 20 healthy controls completed the chasing detection task to assess perceptual sensitivity to cues of intentionality. Other tasks assessed spontaneous attributions of intentionality (irrespective of accuracy) and accurate ToM inferences. Results: Perceptual sensitivity to cues of intentionality did not differ between groups. Patients were less likely to spontaneously attribute intentionality (irrespective of accuracy) or to perform ToM tasks accurately. Chasing-detection response bias, but not perceptual sensitivity, correlated with attributions of intentionality. Referential and, to a lesser extent, persecutory ideation were associated with excessive mentalising when cues of intentionality were absent. Conclusions: Intentionality can be directly perceived, independent of attributions or inferences, in people with schizophrenia. We conclude that the flow of information from intact perceptual detection to spontaneous attributions of intentionality is disrupted in schizophrenia, with flow-on detrimental effects on accurate ToM reasoning. Referential/persecutory ideation motivates inappropriate mentalising when objective cues of intentionality are absent.
Affiliation(s)
- Robyn Langdon
- Department of Cognitive Science, Macquarie University, Sydney, NSW, Australia
- Kelsie Boulton
- Department of Cognitive Science, Macquarie University, Sydney, NSW, Australia
- Emily Connaughton
- Department of Cognitive Science, Macquarie University, Sydney, NSW, Australia
- Tao Gao
- Departments of Communication and Statistics, UCLA, Los Angeles, CA, USA
32
He X, Yang Y, Lin J, Wu X, Yin J. Attributions of Social Interaction Depend on the Integration of the Actor's Simple Goal and the Influence on Recipients. SOCIAL COGNITION 2020. [DOI: 10.1521/soco.2020.38.3.266] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022] Open
33
Gao T, Baker CL, Tang N, Xu H, Tenenbaum JB. The Cognitive Architecture of Perceived Animacy: Intention, Attention, and Memory. Cogn Sci 2019; 43:e12775. [PMID: 31446655 DOI: 10.1111/cogs.12775] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2016] [Revised: 04/30/2019] [Accepted: 05/28/2019] [Indexed: 11/30/2022]
Abstract
Human vision supports social perception by efficiently detecting agents and extracting rich information about their actions, goals, and intentions. Here, we explore the cognitive architecture of perceived animacy by constructing Bayesian models that integrate domain-specific hypotheses of social agency with domain-general cognitive constraints on sensory, memory, and attentional processing. Our model posits that perceived animacy combines a bottom-up, feature-based, parallel search for goal-directed movements with a top-down selection process for intent inference. The interaction of these architecturally distinct processes makes perceived animacy fast, flexible, and yet cognitively efficient. In the context of chasing, in which a predator (the "wolf") pursues a prey (the "sheep"), our model addresses the computational challenge of identifying target agents among varying numbers of distractor objects, despite a quadratic increase in the number of possible interactions as more objects appear in a scene. By comparing modeling results with human psychophysics in several studies, we show that the effectiveness and efficiency of human perceived animacy can be explained by a Bayesian ideal observer model with realistic cognitive constraints. These results provide an understanding of perceived animacy at the algorithmic level-how it is achieved by cognitive mechanisms such as attention and working memory, and how it can be integrated with higher-level reasoning about social agency.
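The wolf/sheep chasing displays described in this abstract are typically generated by having a predator steer toward the prey, with its heading perturbed within a "chasing subtlety" window, and chase-likeness can be quantified from how consistently the predator's velocity points at the prey. A minimal, hypothetical sketch of both steps; the function names and the cosine-based score are illustrative simplifications, not the paper's model:

```python
import math
import random

def pursuit_step(wolf, sheep, speed=1.0, subtlety_deg=0.0):
    """Move the wolf one step toward the sheep, with its heading perturbed
    uniformly within +/- subtlety_deg ('chasing subtlety'); 0 = heat-seeking."""
    heading = math.atan2(sheep[1] - wolf[1], sheep[0] - wolf[0])
    heading += math.radians(random.uniform(-subtlety_deg, subtlety_deg))
    return (wolf[0] + speed * math.cos(heading),
            wolf[1] + speed * math.sin(heading))

def chase_score(wolf_traj, sheep_traj):
    """Mean cosine between the wolf's step-to-step velocity and the direction
    to the sheep; 1.0 = always heading straight at the sheep, ~0 = unrelated
    motion, -1.0 = heading directly away."""
    cosines = []
    for w0, w1, s in zip(wolf_traj, wolf_traj[1:], sheep_traj):
        vx, vy = w1[0] - w0[0], w1[1] - w0[1]
        dx, dy = s[0] - w0[0], s[1] - w0[1]
        norm = math.hypot(vx, vy) * math.hypot(dx, dy)
        if norm > 0:
            cosines.append((vx * dx + vy * dy) / norm)
    return sum(cosines) / len(cosines) if cosines else 0.0
```

Increasing `subtlety_deg` degrades the pursuit toward random motion, which is exactly the dimension along which such displays make detection harder as distractor count grows.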
Affiliation(s)
- Tao Gao
- Departments of Statistics and Communication, University of California, Los Angeles
- Ning Tang
- Departments of Statistics and Communication, University of California, Los Angeles
- Haokui Xu
- Departments of Statistics and Communication, University of California, Los Angeles
- Joshua B Tenenbaum
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology

34
The Ventral Visual Pathway Represents Animal Appearance over Animacy, Unlike Human Behavior and Deep Neural Networks. J Neurosci 2019; 39:6513-6525. [PMID: 31196934] [DOI: 10.1523/jneurosci.1714-18.2019]
Abstract
Recent studies showed agreement between how the human brain and neural networks represent objects, suggesting that we might start to understand the underlying computations. However, we know that the human brain is prone to biases at many perceptual and cognitive levels, often shaped by learning history and evolutionary constraints. Here, we explore one such perceptual phenomenon, perceiving animacy, and use the performance of neural networks as a benchmark. We performed an fMRI study that dissociated object appearance (what an object looks like) from object category (animate or inanimate) by constructing a stimulus set that includes animate objects (e.g., a cow), typical inanimate objects (e.g., a mug), and, crucially, inanimate objects that look like the animate objects (e.g., a cow mug). Behavioral judgments and deep neural networks categorized images mainly by animacy, setting all objects (lookalike and inanimate) apart from the animate ones. In contrast, activity patterns in ventral occipitotemporal cortex (VTC) were better explained by object appearance: animals and lookalikes were similarly represented and separated from the inanimate objects. Furthermore, the appearance of an object interfered with proper object identification, such as failing to signal that a cow mug is a mug. The preference in VTC to represent a lookalike as animate was even present when participants performed a task requiring them to report the lookalikes as inanimate. In conclusion, VTC representations, in contrast to neural networks, fail to represent objects when visual appearance is dissociated from animacy, probably due to a preferred processing of visual features typical of animate objects.
SIGNIFICANCE STATEMENT: How does the brain represent objects that we perceive around us? Recent advances in artificial intelligence have suggested that object categorization and its neural correlates have now been approximated by neural networks. Here, we show that neural networks can predict animacy according to human behavior but do not explain visual cortex representations. In ventral occipitotemporal cortex, neural activity patterns were strongly biased toward object appearance, to the extent that objects with visual features resembling animals were represented closely to real animals and separated from other objects from the same category. This organization that privileges animals and their features over objects might be the result of learning history and evolutionary constraints.
35
Walbrin J, Koldewyn K. Dyadic interaction processing in the posterior temporal cortex. Neuroimage 2019; 198:296-302. [PMID: 31100434] [PMCID: PMC6610332] [DOI: 10.1016/j.neuroimage.2019.05.027]
Abstract
Recent behavioural evidence shows that visual displays of two individuals interacting are not simply encoded as separate individuals, but as an interactive unit that is 'more than the sum of its parts'. Recent functional magnetic resonance imaging (fMRI) evidence shows the importance of the posterior superior temporal sulcus (pSTS) in processing human social interactions, and suggests that it may represent human-object interactions as qualitatively 'greater' than the average of their constituent parts. The current study aimed to investigate whether the pSTS or other posterior temporal lobe region(s): 1) demonstrated evidence of a dyadic information effect, that is, qualitatively different responses to an interacting dyad than to averaged responses of the same two interactors presented in isolation; and 2) significantly differentiated between different types of social interactions. Multivoxel pattern analysis was performed in which a classifier was trained to differentiate between qualitatively different types of dyadic interactions. Above-chance classification of interactions was observed in 'interaction selective' pSTS-I and extrastriate body area (EBA), but not in other regions of interest (i.e., face-selective STS and mentalizing-selective temporo-parietal junction). A dyadic information effect was not observed in the pSTS-I, but instead was shown in the EBA; that is, classification of dyadic interactions did not fully generalise to averaged responses to the isolated interactors, indicating that dyadic representations in the EBA contain unique information that cannot be recovered from the interactors presented in isolation. These findings complement previous observations for congruent grouping of human bodies and objects in the broader lateral occipital temporal cortex area. In sum, pSTS and EBA classify between different dynamic interactions, the EBA is sensitive to (uniquely) dyadic interaction information, and these findings support previous evidence for grouping of interacting people/objects in LOTC.
Affiliation(s)
- Jon Walbrin
- School of Psychology, Bangor University, Wales, UK

36
Rasmussen CE, Jiang YV. Judging social interaction in the Heider and Simmel movie. Q J Exp Psychol (Hove) 2019; 72:2350-2361. [PMID: 30827187] [DOI: 10.1177/1747021819838764]
Abstract
Simple displays of moving shapes can give rise to percepts of animacy. Such displays elicit impoverished narratives in some individuals, such as those with an autism spectrum disorder. However, the verbal demand of producing a narrative limits the utility of this task, and non-verbal tasks have so far focused on detecting animate objects. Lacking from previous research is a task that relies less on verbal description yet captures more than animacy perception. Here, we present data from a new social interaction judgement task. Healthy young adults viewed the Heider and Simmel movie and pressed one button whenever they perceived social interaction and another button when no social interaction was perceived. We measured the time points at which social judgement began, the fluctuation of the judgement in relation to stimulus kinematic properties, and the overall mean of social judgement. Participants with higher autism traits reported lower levels of social interaction. Reversing the film in time produced lower social interaction judgements, though the temporal profile was preserved. Our study suggests that both low-level motion characteristics and high-level understanding contribute to social interaction judgement. The finding may facilitate future research on other populations and stimulate computational vision work on factors that drive social judgements.
Affiliation(s)
- Carly E Rasmussen
- Department of Psychology, University of Minnesota, Minneapolis, MN, USA
- Yuhong V Jiang
- Department of Psychology, University of Minnesota, Minneapolis, MN, USA

37
Hofrichter R, Rutherford MD. Early Attentional Capture of Animate Motion: 4-Year-Olds Show a Pop-Out Effect for Chasing Stimuli. Perception 2019; 48:228-236. [DOI: 10.1177/0301006619828256]
Abstract
Preferential attention to animate motion develops early in life, and adults and infants are particularly attuned to chasing motion. Adults can detect chasing objects among up to 10 distractors and are better at detecting a chase among nonchasing distractors than a nonchase among chasing distractors. We tested whether an attentional preference for chasing has developed by the age of 4, and whether 4-year-olds can explicitly point out chasing objects. On a touch screen, participants were shown a chasing pair of circles among a varying number of distractors (2, 4, 6, 8, or 10). Participants had to touch the chaser. Reaction time, for both adults and 4-year-olds, was independent of the number of distractors, consistent with a pop-out effect for chasing stimuli. As early as 4 years of age, children show a pop-out effect for chasing objects and can identify them via touch.
Affiliation(s)
- Ruth Hofrichter
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, Canada
- M. D. Rutherford
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, Canada

38
Wick FA, Alaoui Soce A, Garg S, Grace RC, Wolfe JM. Perception in dynamic scenes: What is your Heider capacity? J Exp Psychol Gen 2019; 148:252-271. [PMID: 30667269] [PMCID: PMC6396302] [DOI: 10.1037/xge0000557]
Abstract
The classic animation experiment by Heider and Simmel (1944) revealed that humans have a strong tendency to impose narrative even on displays showing interactions between simple geometric shapes. In their most famous animation with three simple shapes, observers almost inevitably interpreted them as rational agents with intentions, desires, and beliefs ("That nasty big triangle!"). Much work on dynamic scenes has identified basic visual properties that can make shapes seem animate. Here, we investigate the limits on the ability to use narrative to share information about animated scenes. We created 30-second Heider-style cartoons with 3-9 items. Item trajectories were generated automatically by a simple set of rules, but without a script. In Experiments 1 and 2, 10 observers wrote short narratives for each cartoon. Next, new observers were shown a cartoon and then presented with a narrative generated for that specific cartoon or one generated for a different cartoon having the same items. Observers rated the fit of the narrative to the cartoon on a scale from 1 (clearly does not fit) to 5 (clearly fits). Performance declined markedly when the number of items was larger than 3. Experiment 3 had observers determine if a short clip of a cartoon came from a longer clip. Experiment 4 had observers determine which of two narratives fit a cartoon. Finally, in Experiment 5, narratives always mentioned every item in a display. In all cases of matching narrative to cartoon, performance drops most dramatically between 3 and 4 items.
Affiliation(s)
- Farahnaz A Wick
- Visual Attention Lab, Harvard Medical School/Brigham & Women's Hospital
- Sahaj Garg
- Department of Computer Science, Stanford University
- River C Grace
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
- Jeremy M Wolfe
- Visual Attention Lab, Harvard Medical School/Brigham & Women's Hospital

39
Fleeing or not: Responsivity of a chased target influences the cognitive representation of the chasing action. Atten Percept Psychophys 2018; 80:1205-1213. [PMID: 29557036] [DOI: 10.3758/s13414-018-1508-9]
Abstract
The chasing action, in which an actor chases a target, is a fundamental activity for the evolutionary shaping of social abilities. Whereas previous research has emphasized the chaser's role, the current study explored whether the fleeing responsivity of a chased target influences the cognitive representation of the chasing action. We investigated this with a change-detection task, in which a set of chasing actions, either exhibiting or not exhibiting fleeing behavior, were memorized in sequence, and it was tested whether a memorized action reappeared after altering an object's appearance. The results suggest that the target's fleeing responsivity influenced the detection of representation-related changes in the appearance of the involved agents, especially when the appearance of one target was replaced with another (i.e., a new pair, but with the same functional role), showing impaired sensitivity to change for the chasing action (Experiment 1). This effect disappeared, however, when the perceived chasing from presented movements was impaired while largely preserving the low-level differences present in earlier trials, through the use of mirrored chasing (Experiment 2) and setting the faced direction opposite to the moving direction (Experiment 3). These findings suggest that the fleeing responsivity of the chased target can influence the stored representation of the action. This differentiation may be attributed to the stronger construction of social interaction structure for chasing action with fleeing than without, since the fleeing behavior can be deemed a contingency cue for social interaction interpretation.
40
Kenward B, Berggren M, Kitazaki M, Itakura S, Kanakogi Y. Implicit social associations for geometric-shape agents more strongly influenced by visual form than by explicitly identified social actions. Psychologia 2018. [DOI: 10.2117/psysoc.2019-a005]
41
Shu T, Peng Y, Fan L, Lu H, Zhu S. Perception of Human Interaction Based on Motion Trajectories: From Aerial Videos to Decontextualized Animations. Top Cogn Sci 2017; 10:225-241. [DOI: 10.1111/tops.12313]
Affiliation(s)
- Tianmin Shu
- Department of Statistics, University of California, Los Angeles
- Yujia Peng
- Department of Psychology, University of California, Los Angeles
- Lifeng Fan
- Department of Statistics, University of California, Los Angeles
- Hongjing Lu
- Department of Statistics, University of California, Los Angeles
- Department of Psychology, University of California, Los Angeles
- Song-Chun Zhu
- Department of Statistics, University of California, Los Angeles
- Department of Computer Science, University of California, Los Angeles
42

43
Abstract
The aim of this research was to explore the effect of different spatiotemporal contexts on the perceptual saliency of animacy, and the extent of the relationship between animacy and other related properties such as emotions and intentionality. Paired-comparisons and ratings were used to compare the impressions of animacy elicited by a small square moving on the screen, either alone or in the context of a second square. The context element was either static or moving showing an animate-like or a physical-like trajectory, and the target object moved either toward it or away from it. The movement of the target could also include animacy cues (caterpillar-like expanding/contracting phases). To determine the effect of different contexts on the emergence of emotions and intentions, we also recorded and analysed the phenomenological reports of participants. The results show that the context significantly influences the perception of animacy, which is stronger in dynamic contexts than in static ones, and also when the target is moving away from the context element than when it is approaching it. The free reports reveal different proportions in emotional or intentional attributions in the different conditions: in particular, the "moving away" condition is related to negative emotions, while the "approaching" condition evokes positive emotions. Overall, the results suggest that animacy is a graded concept that can be articulated in more general characteristics, like simple aliveness, and more specific ones, like intentions or emotions, and that the spatiotemporal contingencies of the context play a crucial role in making them evident.
44
Abdai J, Ferdinandy B, Terencio CB, Pogány Á, Miklósi Á. Perception of animacy in dogs and humans. Biol Lett 2017; 13:rsbl.2017.0156. [PMID: 28659418] [DOI: 10.1098/rsbl.2017.0156]
Abstract
Humans have a tendency to perceive inanimate objects as animate based on simple motion cues. Although animacy is considered a complex cognitive property, this recognition seems to be spontaneous. Researchers have found that young human infants discriminate between dependent and independent movement patterns. However, quick visual perception of animate entities may be crucial to non-human species as well. Based on general mammalian homology, dogs may possess skills similar to humans'. Here, we investigated whether dogs and humans discriminate similarly between dependent and independent motion patterns performed by geometric shapes. We projected a side-by-side video display of the two patterns and measured looking times towards each side, in two trials. We found that in Trial 1, both dogs and humans were equally interested in the two patterns, but in Trial 2, in both species, looking times towards the dependent pattern decreased, whereas looking times towards the independent pattern increased. We argue that dogs and humans spontaneously recognized the specific pattern and habituated to it rapidly, but continued to show interest in the 'puzzling' pattern. This suggests that both species tend to recognize inanimate agents as animate relying solely on their motions.
Affiliation(s)
- Judit Abdai
- Department of Ethology, Eötvös Loránd University, Pázmány Péter prom 1/c, H-1117 Budapest, Hungary
- Bence Ferdinandy
- Department of Biological Physics, Eötvös Loránd University, Pázmány Péter prom 1/a, H-1117 Budapest, Hungary
- MTA-ELTE Comparative Ethology Research Group, Pázmány Péter prom 1/c, H-1117 Budapest, Hungary
- Cristina Baño Terencio
- Department of Ethology, Eötvös Loránd University, Pázmány Péter prom 1/c, H-1117 Budapest, Hungary
- University of Valencia, Dr. Moliner street 50, ES-46100 Burjassot, Valencia, Spain
- Ákos Pogány
- Department of Ethology, Eötvös Loránd University, Pázmány Péter prom 1/c, H-1117 Budapest, Hungary
- Ádám Miklósi
- Department of Ethology, Eötvös Loránd University, Pázmány Péter prom 1/c, H-1117 Budapest, Hungary
- MTA-ELTE Comparative Ethology Research Group, Pázmány Péter prom 1/c, H-1117 Budapest, Hungary

45
Meyerhoff HS, Schwan S, Huff M. Oculomotion mediates attentional guidance toward temporally close objects. Visual Cognition 2017. [DOI: 10.1080/13506285.2017.1399950]
Affiliation(s)
- Markus Huff
- Department of Psychology, University of Tübingen, Tübingen, Germany
- Department of Research Infrastructures, German Research Institute for Adult Education, Bonn, Germany

46
Duan J, Yang Z, He X, Shao M, Yin J. Automatic attribution of social coordination information to chasing scenes: evidence from mu suppression. Exp Brain Res 2017; 236:117-127. [PMID: 29058052] [DOI: 10.1007/s00221-017-5111-4]
Abstract
This study explored whether social coordination information that extends beyond individual goals is attributed to impoverished movements produced by simple geometric shapes. We manipulated coordination information by presenting two chasers and one common target performing coordinated or individual (i.e., uncoordinated) chases, and measured mu rhythms (electroencephalogram oscillations within the 8-13 Hz range at sensorimotor regions) related to understanding social interactions. We found that although the participants' task was completely unrelated to processing chasing motion, mu rhythms were more suppressed for coordinated chasing than in the control condition (backward replay for chasing motion), and this effect disappeared for uncoordinated chasing. Moreover, mu suppression increased with higher post-test ratings of social coordination but did not correlate with uncoordinated information. Such effects cannot be explained by general attentional involvement, as there was no difference in attention-related occipital alpha suppression across conditions. These findings are consistent with interpretations of processing coordinated actions, suggesting that our visual system can automatically attribute social coordination information to motion, at least in chasing scenes.
Affiliation(s)
- Jipeng Duan
- Department of Psychology, Ningbo University, No. 616 Fenghua Rd, Ningbo, 315211, China
- Zhangxiang Yang
- Department of Psychology, Ningbo University, No. 616 Fenghua Rd, Ningbo, 315211, China
- Xiaoyan He
- Department of Psychology, Ningbo University, No. 616 Fenghua Rd, Ningbo, 315211, China
- Meixuan Shao
- Department of Psychology, Ningbo University, No. 616 Fenghua Rd, Ningbo, 315211, China
- Jun Yin
- Department of Psychology, Ningbo University, No. 616 Fenghua Rd, Ningbo, 315211, China

47
Vanmarcke S, van de Cruys S, Moors P, Wagemans J. Intact animacy perception during chase detection in ASD. Sci Rep 2017; 7:11851. [PMID: 28928448] [PMCID: PMC5605503] [DOI: 10.1038/s41598-017-12204-x]
Abstract
We explored the strength of implicit social inferences in adolescents with and without Autism Spectrum Disorder (ASD) using a chasing paradigm in which participants judged the absence/presence of a chase within a display of four seemingly randomly moving dots. While two of these dots always moved randomly, the two others could fulfill the role of being either the chasing (wolf) or chased (sheep) dot. In the chase-present (but not the chase-absent) trials the wolf displayed chasing behavior defined by the degree to which the dot reliably moved towards the sheep (chasing subtlety). Previous research indicated that chasing subtlety strongly influenced chase detection in typically developing (TD) adults. We intended to replicate and extend this finding to adolescents with and without ASD, while also adding either a social or a non-social cue to the displays. Our results confirmed the importance of chasing subtlety and indicated that adding social, but not non-social, information further improved chase detection performance. Interestingly, the performance of adolescents with ASD was less dependent on chasing subtlety than that of their TD counterparts. Nonetheless, adolescents with and without ASD did not differ in their use of the added social (or non-social) cue.
Affiliation(s)
- Steven Vanmarcke
- Brain and Cognition, KU Leuven, Leuven, 3000, Belgium
- Leuven Autism Research (LAuRes), KU Leuven, Leuven, 3000, Belgium
- Sander van de Cruys
- Brain and Cognition, KU Leuven, Leuven, 3000, Belgium
- Leuven Autism Research (LAuRes), KU Leuven, Leuven, 3000, Belgium
- Pieter Moors
- Brain and Cognition, KU Leuven, Leuven, 3000, Belgium
- Johan Wagemans
- Brain and Cognition, KU Leuven, Leuven, 3000, Belgium
- Leuven Autism Research (LAuRes), KU Leuven, Leuven, 3000, Belgium

48
van Buren B, Scholl BJ. Minds in motion in memory: Enhanced spatial memory driven by the perceived animacy of simple shapes. Cognition 2017; 163:87-92. [DOI: 10.1016/j.cognition.2017.02.006]
49
Abdai J, Baño Terencio C, Miklósi Á. Novel approach to study the perception of animacy in dogs. PLoS One 2017; 12:e0177010. [PMID: 28472117] [PMCID: PMC5417633] [DOI: 10.1371/journal.pone.0177010]
Abstract
Humans tend to perceive inanimate objects as animate based on simple motion cues. So far this perceptual bias has been studied mostly in humans by utilizing two-dimensional video and interactive displays. Considering its importance for survival, the perception of animacy is probably also widespread among animals, however two-dimensional displays are not necessarily the best approach to study the phenomenon in non-human species. Here we applied a novel method to study whether dogs recognize a dependent (chasing-like) movement pattern performed by inanimate agents in live demonstration. We found that dogs showed more interest toward the agents that demonstrated the chasing-like motion, compared to those that were involved in the independent movement. We suggest that dogs spontaneously recognized the chasing-like pattern and thus they may have considered the interacting partners as animate agents. This methodological approach may be useful to test perceptual animacy in other non-human species.
Affiliation(s)
- Judit Abdai
- Department of Ethology, Eötvös Loránd University, Budapest, Hungary
- Cristina Baño Terencio
- Department of Ethology, Eötvös Loránd University, Budapest, Hungary
- University of Valencia, Valencia, Spain
- Ádám Miklósi
- Department of Ethology, Eötvös Loránd University, Budapest, Hungary
- MTA-ELTE Comparative Ethology Research Group, Budapest, Hungary

50
Horowitz TS, Saiki J. Editorial: Search: A New Perspective to Understand Cognitive Dynamics. Japanese Psychological Research 2017. [DOI: 10.1111/jpr.12156]