1. Samuel S, Erle TM, Kirsch LP, Surtees A, Apperly I, Bukowski H, Auvray M, Catmur C, Kessler K, Quesque F. Three key questions to move towards a theoretical framework of visuospatial perspective taking. Cognition 2024; 247:105787. [PMID: 38583320 DOI: 10.1016/j.cognition.2024.105787]
Abstract
What would a theory of visuospatial perspective taking (VSPT) look like? Here, ten researchers in the field, many with different theoretical viewpoints and empirical approaches, present their consensus on the three big questions we need to answer in order to bring this theory (or these theories) closer.
Affiliation(s)
- Steven Samuel
- Department of Psychology, School of Health and Psychological Sciences, City, University of London, UK
- Thorsten M Erle
- Department of Social Psychology, Tilburg School of Social and Behavioral Sciences, Tilburg University, Tilburg, the Netherlands
- Louise P Kirsch
- Université Paris Cité, INCC UMR 8002, CNRS, F-75006 Paris, France
- Andrew Surtees
- Centre for Developmental Science, School of Psychology, University of Birmingham, Edgbaston, Birmingham, UK; Birmingham Women's and Children's NHS Foundation Trust, Steelhouse Lane, Birmingham, UK
- Ian Apperly
- Centre for Developmental Science, School of Psychology, University of Birmingham, Edgbaston, Birmingham, UK
- Henryk Bukowski
- Institute of Psychological Sciences, Université catholique de Louvain, Louvain-La-Neuve, Belgium
- Malika Auvray
- Sorbonne Université, CNRS, Institut des Systèmes Intelligents et de Robotique, Paris, France
- Caroline Catmur
- Department of Psychology, Institute of Psychiatry, Psychology and Neuroscience, King's College London, UK
- Klaus Kessler
- School of Psychology, University College Dublin, Dublin, Ireland
- Francois Quesque
- Centre de Recherche en Neurosciences de Lyon (CRNL), U1028, UMR5292, Trajectoires, F-69500 Bron, France; Centre Ressource de Réhabilitation Psychosociale, CH Le Vinatier, Lyon, France
2. Samuel S, Cole GG, Eacott MJ. It's Not You, It's Me: A Review of Individual Differences in Visuospatial Perspective Taking. Perspectives on Psychological Science 2023; 18:293-308. [PMID: 35994772 PMCID: PMC10018059 DOI: 10.1177/17456916221094545]
Abstract
Visuospatial perspective taking (VSPT) concerns the ability to understand something about the visual relationship between an agent or observation point on the one hand and a target or scene on the other. Despite its importance to a wide variety of other abilities, from communication to navigation, and despite decades of research, there is as yet no theory of VSPT. Indeed, the heterogeneity of results from different (and sometimes the same) VSPT tasks points to a complex picture suggestive of multiple VSPT strategies, individual differences in performance, and context-specific factors that together have a bearing on both the efficiency and accuracy of outcomes. In this article, we review the evidence in search of patterns in the data. We found a number of predictors of VSPT performance but also a number of gaps in understanding that suggest useful pathways for future research and, possibly, a theory (or theories) of VSPT. Overall, this review makes the case for understanding VSPT by better understanding the perspective taker rather than the target agents or their perception.
Affiliation(s)
- Steven Samuel
- Department of Psychology, University of Plymouth
- Department of Psychology, University of Essex
3. Cole GG, Samuel S, Eacott MJ. A return of mental imagery: The pictorial theory of visual perspective-taking. Conscious Cogn 2022; 102:103352. [DOI: 10.1016/j.concog.2022.103352]
4. Vanbeneden A, Woltin KA, Yzerbyt V. Influence of membership in outgroups varying in competence and warmth on observers' Level-2 visual perspective taking. Br J Psychol 2022; 113:938-959. [PMID: 35704512 DOI: 10.1111/bjop.12579]
Abstract
Visual perspective taking (VPT), the ability to adopt another person's viewpoint, entails two distinct processes, Level-1 (L1-VPT) and Level-2 (L2-VPT), referring to the ability to perceive whether and how a target sees an object, respectively. Whereas previous efforts investigated the impact of targets' social characteristics on L1-VPT, the present work is the first to do so regarding L2-VPT. Specifically, we investigate the impact of targets' membership in outgroups varying in perceived competence and warmth, the two fundamental dimensions of social perception. Participants in four experiments engaged in an L2-VPT task. Avatars belonged to a low competence low warmth group (LCLW; e.g. the homeless) or to a high competence low warmth group (HCLW; e.g. bankers) in Experiments 1-3, and to a LCLW or high competence high warmth group (HCHW; e.g. female students) in Experiment 4. Participants answered as quickly as possible whether a cued number matched a number present in a scene from either their own or the avatar's perspective. We consistently found support for the presence of both egocentric and altercentric interference, but this was not modulated by group competence and warmth, suggesting that membership in outgroups varying in competence and warmth does not influence L2-VPT. We discuss the findings' implications in the light of recent views on VPT.
Affiliation(s)
- Antoine Vanbeneden
- Institut de Recherche en Sciences Psychologiques, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
- Karl-Andrew Woltin
- Institut de Recherche en Sciences Psychologiques, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
- Vincent Yzerbyt
- Institut de Recherche en Sciences Psychologiques, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
5. Abraham A. How We Tell Apart Fiction from Reality. American Journal of Psychology 2022. [DOI: 10.5406/19398298.135.1.01]
Abstract
The human ability to tell apart reality from fiction is intriguing. Through a range of media, such as novels and movies, we are able to readily engage in fictional worlds and experience alternative realities. Yet even when we are completely immersed and emotionally engaged within these worlds, we have little difficulty in leaving the fictional landscapes and getting back to the day-to-day of our own world. How are we able to do this? How do we acquire our understanding of our real world? How is this similar to and different from the development of our knowledge of fictional worlds? In exploring these questions, this article makes the case for a novel multilevel explanation (called BLINCS) of our implicit understanding of the reality–fiction distinction, namely that it is derived from the fact that the worlds of fiction, relative to reality, are bounded, inference-light, curated, and sparse.
6. Samuel S, Hagspiel K, Cole GG, Eacott MJ. 'Seeing' proximal representations: Testing attitudes to the relationship between vision and images. PLoS One 2021; 16:e0256658. [PMID: 34415982 PMCID: PMC8378678 DOI: 10.1371/journal.pone.0256658]
Abstract
Corrections applied by the visual system, like size constancy, provide us with a coherent and stable perspective from ever-changing retinal images. In the present experiment we investigated how willing adults are to examine their own vision as if it were an uncorrected 2D image, much like a photograph. We showed adult participants two lines on a wall, both of which were the same length but one was closer to the participant and hence appeared visually longer. Despite the instruction to base their judgements on appearance specifically, approximately half of the participants judged the lines to appear the same. When they took a photo of the lines and were asked how long they appeared in the image their responses shifted; now the closer line appeared longer. However, when they were asked again about their own view they reverted to their original response. These results suggest that many adults are resistant to imagining their own vision as if it were a flat image. We also place these results within the context of recent views on visual perspective-taking.
Affiliation(s)
- Steven Samuel
- Department of Psychology, University of Essex, Colchester, United Kingdom
- Department of Psychology, University of Plymouth, Plymouth, United Kingdom
- Klara Hagspiel
- Department of Psychology, University of Essex, Colchester, United Kingdom
- Geoff G. Cole
- Department of Psychology, University of Essex, Colchester, United Kingdom
- Madeline J. Eacott
- Department of Psychology, University of Essex, Colchester, United Kingdom
7. People Do Not Automatically Take the Level-1 Visual Perspective of Humanoid Robot Avatars. Int J Soc Robot 2021. [DOI: 10.1007/s12369-021-00773-x]