1
Christensen JF, Fernández A, Smith RA, Michalareas G, Yazdi SHN, Farahi F, Schmidt EM, Bahmanian N, Roig G. EMOKINE: A software package and computational framework for scaling up the creation of highly controlled emotional full-body movement datasets. Behav Res Methods 2024; 56:7498-7542. PMID: 38918315. DOI: 10.3758/s13428-024-02433-0.
Abstract
EMOKINE is a software package and dataset-creation suite for emotional full-body movement research in experimental psychology, affective neuroscience, and computer vision. A computational framework, comprehensive instructions, a pilot dataset, observer ratings, and kinematic feature extraction code are provided to facilitate future dataset creation at scale. In addition, the EMOKINE framework outlines how complex sequences of movements may advance emotion research. Such research has traditionally used emotional-'action'-based stimuli, like hand-waving or walking motions. Here, instead, a pilot dataset is provided with short dance choreographies, repeated several times by a dancer who expressed a different emotional intention at each repetition: anger, contentment, fear, joy, neutrality, and sadness. The dataset was simultaneously filmed professionally and recorded using XSENS® motion-capture technology (17 sensors, 240 frames/second). Thirty-two statistics from 12 kinematic features were extracted offline, for the first time in one single dataset: speed, acceleration, angular speed, angular acceleration, limb contraction, distance to center of mass, quantity of motion, dimensionless jerk (integral), head angle (with regard to the vertical axis and to the back), and space (convex hull 2D and 3D). Average, median absolute deviation (MAD), and maximum value were computed as applicable. The EMOKINE software is applicable to other motion-capture systems and is openly available on the Zenodo Repository. Releases on GitHub include: (i) the code to extract the 32 statistics, (ii) a rigging plugin for Python for MVNX file conversion to Blender format (MVNX is the output file format of the XSENS® system), and (iii) a Python-script-powered custom software to assist with blurring faces; the latter two are under GPLv3 licenses.
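The kinematic statistics listed above are standard functions of tracked joint trajectories. As a rough illustration only (the official EMOKINE extraction code lives on Zenodo and GitHub, and is not reproduced here), a minimal Python sketch of a few of these features, assuming a NumPy array of 3-D joint positions sampled at 240 Hz and a crude joint-mean proxy for the centre of mass:

```python
# Illustrative sketch only; the official EMOKINE feature-extraction code is on
# Zenodo/GitHub. Assumes `pos` holds 3-D joint positions from motion capture.
import numpy as np
from scipy.spatial import ConvexHull

FS = 240.0  # XSENS sampling rate, frames/second

def kinematic_stats(pos):
    """pos: array (frames, joints, 3) of joint positions in metres."""
    dt = 1.0 / FS
    vel = np.gradient(pos, dt, axis=0)            # m/s per joint
    acc = np.gradient(vel, dt, axis=0)            # m/s^2 per joint
    speed = np.linalg.norm(vel, axis=2)           # (frames, joints)
    accel = np.linalg.norm(acc, axis=2)

    com = pos.mean(axis=1, keepdims=True)         # crude centre-of-mass proxy
    dist_to_com = np.linalg.norm(pos - com, axis=2)

    # Space use: volume of the 3-D convex hull of all points over the trial
    hull3d = ConvexHull(pos.reshape(-1, 3)).volume

    stats = {}
    for name, x in [("speed", speed), ("accel", accel), ("dist_com", dist_to_com)]:
        stats[f"{name}_mean"] = x.mean()
        stats[f"{name}_mad"] = np.median(np.abs(x - np.median(x)))  # MAD
        stats[f"{name}_max"] = x.max()
    stats["convex_hull_3d"] = hull3d
    return stats

# Example: 2 s of synthetic data for a 17-sensor recording
print(kinematic_stats(np.random.rand(480, 17, 3)))
```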
Affiliation(s)
- Julia F Christensen
- Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany.
- Andrés Fernández
- Methods of Machine Learning, University of Tübingen, Tübingen, Germany
- International Max Planck Research School for Intelligent Systems, Tübingen, Germany
- Rebecca A Smith
- Department of Psychology, University of Glasgow, Glasgow, Scotland
- Georgios Michalareas
- Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany
- Eva-Madeleine Schmidt
- Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany
- Max Planck School of Cognition, Leipzig, Germany
- Center for Humans and Machines, Max Planck Institute for Human Development, Berlin, Germany
- Nasimeh Bahmanian
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany
- Department of Modern Languages, Goethe University, Frankfurt/M, Germany
- Gemma Roig
- Computer Science Department, Goethe University, Frankfurt/M, Germany
- The Hessian Center for Artificial Intelligence (hessian.AI), Darmstadt, Germany
2
Bidet-Ildei C, BenAhmed O, Bouidaine D, Francisco V, Decatoire A, Blandin Y, Pylouster J, Fernandez-Maloigne C. SmartDetector: Automatic and vision-based approach to point-light display generation for human action perception. Behav Res Methods 2024. PMID: 39138735. DOI: 10.3758/s13428-024-02478-1.
Abstract
Over the past four decades, point-light displays (PLDs) have been integrated into psychology and psychophysics, providing a valuable means to probe human perceptual skills. Leveraging the inherent kinematic information and controllable display parameters, researchers have utilized this technique to examine the mechanisms involved in learning and rehabilitation. However, classical PLD generation methods (e.g., motion capture) are difficult to apply for behavior analysis in real-world situations, such as patient care or sports activities. There is therefore a demand for automated and affordable tools that enable efficient and real-world-compatible generation of PLDs for psychological research. In this paper, we propose SmartDetector, a new artificial intelligence (AI)-based tool for automatic PLD creation from RGB videos. To evaluate human perceptual skills for processing PLDs built with SmartDetector, 126 participants were randomly assigned to recognition, discrimination, or detection tasks. Results demonstrated that, irrespective of the task, PLDs generated by SmartDetector yielded accuracy and response times comparable to those reported in the literature. Moreover, to enhance usability and broaden accessibility, we developed an intuitive web interface for our method, making it available to a wider audience. The resulting application is available at https://plavimop.prd.fr/index.php/en/automatic-creation-pld. SmartDetector offers interesting possibilities for using PLDs in research and makes the technique more accessible for nonacademic applications.
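SmartDetector's trained detector itself is available through the web application above. To make the video-to-PLD idea concrete, here is a hypothetical sketch built on the MediaPipe pose estimator; MediaPipe is our assumption for illustration, not necessarily the model the authors used:

```python
# Hypothetical sketch of video -> point-light display conversion; SmartDetector's
# actual detector is not reproduced here. Requires: opencv-python, mediapipe.
import cv2
import mediapipe as mp
import numpy as np

def video_to_pld(src_path, dst_path, dot_radius=4):
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

    with mp.solutions.pose.Pose(static_image_mode=False) as pose:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            canvas = np.zeros((h, w, 3), dtype=np.uint8)  # black background
            if result.pose_landmarks:
                for lm in result.pose_landmarks.landmark:
                    # Landmarks are normalised to [0, 1]; draw a white dot each
                    cv2.circle(canvas, (int(lm.x * w), int(lm.y * h)),
                               dot_radius, (255, 255, 255), -1)
            out.write(canvas)
    cap.release()
    out.release()

video_to_pld("action.mp4", "action_pld.mp4")
```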
Affiliation(s)
- Christel Bidet-Ildei
- CNRS, Centre de Recherches sur la Cognition et l'Apprentissage CeRCA/MSHS, Université de Poitiers, Université de Tours, Bâtiment A5, 5, rue Théodore Lefebvre, TSA 21103, 86073, Poitiers Cedex 9, France.
- Institut Universitaire de France (IUF), Paris, France.
- Olfa BenAhmed
- XLIM Research Institute, UMR CNRS 7252, University of Poitiers, Poitiers, France
- Diaddin Bouidaine
- XLIM Research Institute, UMR CNRS 7252, University of Poitiers, Poitiers, France
- Victor Francisco
- CNRS, Centre de Recherches sur la Cognition et l'Apprentissage CeRCA/MSHS, Université de Poitiers, Université de Tours, Bâtiment A5, 5, rue Théodore Lefebvre, TSA 21103, 86073, Poitiers Cedex 9, France
- ISAE-ENSMA, CNRS, PPRIME, Université de Poitiers, Poitiers, France
- Melioris, Centre de Médecine Physique et de Réadaptation Fonctionnelle Le Grand Feu, Niort, France
- Arnaud Decatoire
- ISAE-ENSMA, CNRS, PPRIME, Université de Poitiers, Poitiers, France
- Yannick Blandin
- CNRS, Centre de Recherches sur la Cognition et l'Apprentissage CeRCA/MSHS, Université de Poitiers, Université de Tours, Bâtiment A5, 5, rue Théodore Lefebvre, TSA 21103, 86073, Poitiers Cedex 9, France
- Jean Pylouster
- CNRS, Centre de Recherches sur la Cognition et l'Apprentissage CeRCA/MSHS, Université de Poitiers, Université de Tours, Bâtiment A5, 5, rue Théodore Lefebvre, TSA 21103, 86073, Poitiers Cedex 9, France
3
Christensen JF, Bruhn L, Schmidt EM, Bahmanian N, Yazdi SHN, Farahi F, Sancho-Escanero L, Menninghaus W. A 5-emotions stimuli set for emotion perception research with full-body dance movements. Sci Rep 2023; 13:8757. PMID: 37253770. DOI: 10.1038/s41598-023-33656-4.
Abstract
Ekman famously contended that there are different channels of emotional expression (face, voice, body), and that emotion recognition ability confers an adaptive advantage to the individual. Yet, still today, much emotion perception research is focussed on emotion recognition from the face, and few validated emotionally expressive full-body stimulus sets are available. Based on research on emotional speech perception, we created a new, highly controlled full-body stimulus set. We used the same-sequence approach rather than emotional actions (e.g., jumping for joy, recoiling in fear): one professional dancer danced 30 sequences of (dance) movements five times each, expressing joy, anger, fear, sadness, or a neutral state, one at each repetition. We outline the creation of a total of 150 such video stimuli, each 6 s long, that show the dancer as a white silhouette on a black background. Ratings from 90 participants (emotion recognition, aesthetic judgment) showed that the intended emotion was recognized above chance (chance: 20%; joy: 45%, anger: 48%, fear: 37%, sadness: 50%, neutral state: 51%), and that aesthetic judgment was sensitive to the intended emotion (beauty ratings: joy > anger > fear > neutral state, and sad > fear > neutral state). The stimulus set, normative values, and code are available for download.
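Above-chance recognition of this kind is conventionally tested against the guessing rate, here 1/5. A short sketch of such a check with SciPy; the trial counts below are illustrative placeholders, since the abstract reports percentages only:

```python
# Sketch of an above-chance recognition test; counts below are illustrative,
# not the study's raw data (the abstract reports percentages only).
from scipy.stats import binomtest

n_trials = 90          # hypothetical number of judgements per emotion
observed_rate = 0.45   # e.g., joy recognised at 45%
k_correct = round(observed_rate * n_trials)

result = binomtest(k_correct, n_trials, p=0.20, alternative="greater")
print(f"hits={k_correct}/{n_trials}, p={result.pvalue:.2g} vs chance of 20%")
```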
Affiliation(s)
- Julia F Christensen
- Department of Language and Literature, Max-Planck-Institute for Empirical Aesthetics, Frankfurt/M, Germany.
- Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany.
- Laura Bruhn
- Department of Language and Literature, Max-Planck-Institute for Empirical Aesthetics, Frankfurt/M, Germany
- Eva-Madeleine Schmidt
- Department of Language and Literature, Max-Planck-Institute for Empirical Aesthetics, Frankfurt/M, Germany
- Max Planck School of Cognition, Max Planck Institute, Leipzig, Germany
- Nasimeh Bahmanian
- Department of Language and Literature, Max-Planck-Institute for Empirical Aesthetics, Frankfurt/M, Germany
- Department of Modern Languages, Goethe University, Frankfurt, Germany
- Winfried Menninghaus
- Department of Language and Literature, Max-Planck-Institute for Empirical Aesthetics, Frankfurt/M, Germany
4
Smith RA, Cross ES. The McNorm library: creating and validating a new library of emotionally expressive whole body dance movements. Psychol Res 2023; 87:484-508. PMID: 35385989. PMCID: PMC8985749. DOI: 10.1007/s00426-022-01669-9.
Abstract
The ability to exchange affective cues with others plays a key role in our ability to create and maintain meaningful social relationships. We express our emotions through a variety of socially salient cues, including facial expressions, the voice, and body movement. While significant advances have been made in our understanding of verbal and facial communication, to date, understanding of the role played by human body movement in our social interactions remains incomplete. To this end, here we describe the creation and validation of a new set of emotionally expressive whole-body dance movement stimuli, named the Motion Capture Norming (McNorm) Library, which was designed to reconcile a number of limitations associated with previous movement stimuli. This library comprises a series of point-light representations of a dancer's movements, which were performed to communicate to observers neutrality, happiness, sadness, anger, and fear. Based on results from two validation experiments, participants could reliably discriminate the intended emotion expressed in the clips in this stimulus set, with accuracy rates up to 60% (chance = 20%). We further explored the impact of dance experience and trait empathy on emotion recognition and found that neither significantly impacted emotion discrimination. As all materials for presenting and analysing this movement library are openly available, we hope this resource will aid other researchers in further exploration of affective communication expressed by human bodily movement.
Affiliation(s)
- Rebecca A. Smith
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland
- Emily S. Cross
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland
- Department of Cognitive Science, Macquarie University, Sydney, Australia
5
EmBody/EmFace as a new open tool to assess emotion recognition from body and face expressions. Sci Rep 2022; 12:14165. PMID: 35986068. PMCID: PMC9391359. DOI: 10.1038/s41598-022-17866-w.
Abstract
Nonverbal expressions contribute substantially to social interaction by providing information on another person's intentions and feelings. While emotion recognition from dynamic facial expressions has been widely studied, dynamic body expressions and the interplay of emotion recognition from facial and body expressions have attracted less attention, as suitable diagnostic tools are scarce. Here, we provide validation data on a new open-source paradigm enabling the assessment of emotion recognition from both 3D-animated emotional body expressions (Task 1: EmBody) and emotionally corresponding dynamic faces (Task 2: EmFace). Both tasks use visually standardized items depicting three emotional states (angry, happy, neutral), and can be used alone or together. We demonstrate successful psychometric matching of the EmBody/EmFace items in a sample of 217 healthy subjects, with excellent retest reliability and validity (correlations with the Reading the Mind in the Eyes Test and the Autism-Spectrum Quotient, no correlations with intelligence, and confirmed factorial validity). Taken together, the EmBody/EmFace is a novel, effective (<5 min per task), highly standardized, and reliably precise tool to sensitively assess and compare emotion recognition from body and face stimuli. The EmBody/EmFace has a wide range of potential applications in affective, cognitive, and social neuroscience, and in clinical research studying face- and body-specific emotion recognition in patient populations suffering from social interaction deficits such as autism, schizophrenia, or social anxiety.
6
Takarae Y, McBeath MK, Krynen RC. Perception of Dynamic Point Light Facial Expression. Am J Psychol 2021; 134(4):373. DOI: 10.5406/amerjpsyc.134.4.0373.
Abstract
This study uses point light displays both to investigate the roles of global and local motion analyses in the perception of dynamic facial expressions and to measure the information threshold for reliable recognition of emotions. We videotaped the faces of actors wearing black makeup with white dots while they dynamically produced each of six basic Darwin/Ekman emotional expressions. The number of point lights was varied to systematically manipulate the amount of information available. For all but one of the expressions, discriminability (d′) increased approximately linearly with the number of point lights, and most expressions remained largely discriminable with as few as six point lights. This finding supports reliance on global motion patterns produced by facial muscles. However, discriminability for the happy expression was notably higher and largely unaffected by the number of point lights; it thus appears to rely on characteristic local motion, probably the unique upward curvature of the mouth. The findings indicate that recognition of facial expression is not a unitary process and that different expressions may be conveyed by different perceptual information, but in general, basic facial emotional expressions typically remain largely discriminable with as few as six dynamic point lights.
Affiliation(s)
- Michael K. McBeath
- Arizona State University and Max Planck Institute for Empirical Aesthetics
7
Hagen S, Vuong QC, Chin MD, Scott LS, Curran T, Tanaka JW. Bird expertise does not increase motion sensitivity to bird flight motion. J Vis 2021; 21(5):5. PMID: 33951142. PMCID: PMC8107655. DOI: 10.1167/jov.21.5.5.
Abstract
While motion information is important for the early stages of vision, it also contributes to later stages of object recognition. For example, human observers can detect the presence of a human, judge its actions, and judge its gender and identity simply based on motion cues conveyed in a point-light display. Here we examined whether object expertise enhances the observer's sensitivity to an object's characteristic movement. Bird experts and novices were shown point-light displays of upright and inverted birds in flight, or upright and inverted human walkers, and asked to discriminate them from spatially scrambled point-light displays of the same stimuli. While the spatially scrambled stimuli retained the local motion of each dot of the moving objects, they disrupted the global percept of the object in motion. To estimate a detection threshold in each object domain, we systematically varied the number of noise dots in which the stimuli were embedded, using an adaptive staircase approach. Contrary to our predictions, the experts did not show disproportionately higher sensitivity to bird motion, and both groups showed no inversion cost. However, consistent with previous work showing a robust inversion effect for human motion, both groups were more sensitive to upright human walkers than to their inverted counterparts. Thus, the results suggest that real-world experience in the bird domain has little to no influence on sensitivity to bird motion, and that birds do not show the typical inversion effect seen with humans and other terrestrial movement.
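The adaptive staircase mentioned here raises or lowers the number of noise dots trial by trial to converge on a detection threshold. A generic sketch with a 2-down/1-up rule and a simulated observer; the paper's exact rule and parameters are not specified here, so these are assumptions:

```python
# Generic 2-down/1-up staircase over the number of noise dots; illustrative only.
# A 2-down/1-up rule converges on roughly 70.7% correct performance.
import random

def run_staircase(subject_threshold, n_trials=60, start_dots=10, step=2):
    dots, correct_streak, reversals = start_dots, 0, []
    going_up = None
    for _ in range(n_trials):
        # Simulated observer: more noise dots -> lower chance of detection
        p_correct = max(0.5, 1.0 - 0.5 * dots / subject_threshold)
        correct = random.random() < p_correct
        if correct:
            correct_streak += 1
            if correct_streak == 2:            # two correct -> make it harder
                correct_streak = 0
                if going_up is True:
                    reversals.append(dots)     # direction change: record level
                going_up = False
                dots += step
        else:                                   # one error -> make it easier
            correct_streak = 0
            if going_up is False:
                reversals.append(dots)
            going_up = True
            dots = max(0, dots - step)
    # Threshold estimate: mean dot count at the last few reversals
    tail = reversals[-6:]
    return sum(tail) / max(1, len(tail))

print("estimated threshold:", run_staircase(subject_threshold=40))
```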
Affiliation(s)
- Simen Hagen
- Department of Psychology, University of Victoria, Victoria, BC, Canada
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Quoc C Vuong
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, United Kingdom
- Michael D Chin
- Department of Psychology, University of Victoria, Victoria, BC, Canada
- Lisa S Scott
- Department of Psychology, University of Florida, Gainesville, FL, USA
- Tim Curran
- Department of Psychology and Neuroscience, University of Colorado Boulder, Boulder, CO, USA
- James W Tanaka
- Department of Psychology, University of Victoria, Victoria, BC, Canada
8
Okruszek Ł, Chrustowicz M. Social Perception and Interaction Database - A Novel Tool to Study Social Cognitive Processes With Point-Light Displays. Front Psychiatry 2020; 11:123. PMID: 32218745. PMCID: PMC7078367. DOI: 10.3389/fpsyt.2020.00123.
Abstract
Introduction: The ability to detect and interpret social interactions (SI) is one of the crucial skills enabling people to operate in the social world. Multiple lines of evidence converge to indicate the preferential processing of SI when compared to the individual actions of multiple agents, even if the actions are visually degraded to minimalistic point-light displays (PLDs). Here, we present a novel PLD dataset (Social Perception and Interaction Database; SoPID) that may be used for studying multiple levels of social information processing. Methods: During a motion-capture session, two pairs of actors were asked to perform a wide range of 3-second actions, including: (1) neutral, gesture-based communicative interactions (COM); (2) emotional exchanges (Happy/Angry); (3) synchronous interactive physical activity of actors (SYNC); and (4) independent actions of agents, either object-related (ORA) or non-object-related (NORA). An interface that allows single/dyadic PLD stimuli to be presented from either the second-person (action aimed toward the viewer) or third-person (observation of actions directed toward other agents) perspective was implemented on the basis of the recorded actions. Two validation studies (each with 20 healthy individuals) were then performed to establish the recognizability of the SoPID vignettes. Results: The first study showed ceiling-level accuracy for discrimination of communicative vs. individual actions (93% ± 5%) and high accuracy for interpreting specific types of actions (85% ± 4%). In the second study, a robust effect of scrambling on the recognizability of SoPID stimuli was observed in an independent sample of healthy individuals. Discussion: These results suggest that the SoPID may be effectively used to examine processes associated with the processing of communicative interactions and intentions. The database can be accessed via the Open Science Framework (https://osf.io/dcht8/).
Affiliation(s)
- Łukasz Okruszek
- Social Neuroscience Lab, Institute of Psychology, Polish Academy of Sciences, Warsaw, Poland
9
Ida H, Fukuhara K, Ishii M, Inoue T. Anticipatory judgements associated with vision of an opponent's end-effector: An approach by motion perturbation and spatial occlusion. Q J Exp Psychol (Hove) 2018; 72:1131-1140. DOI: 10.1177/1747021818782419.
Abstract
This study aimed to determine how the visual information of an end-effector (racket) and the intermediate extremity (arm) of a tennis server contributes to the receiver's anticipatory judgement of ball direction. In all, 15 experienced tennis players and 15 novice counterparts viewed a spatially occluded computer-graphics animation of a tennis serve (no-occlusion, racket-occlusion, and body-occlusion) and made anticipatory judgements of ball direction on a visual analogue scale (VAS). The serve motions were generated by a simulation technique that computationally perturbs the rotation speed of a selected racket-arm joint (forearm pronation or elbow extension) in a captured serve motion. The results suggested that anticipatory judgements were monotonically attuned to the perturbation rate of the forearm pronation speed, except under the racket-occlusion conditions. Although such attunement was not observed in the elbow perturbation conditions, correlation analysis indicated that, within the individual experienced participants, the residual information in the spatially occluded models had an effect similar to the no-occlusion model. The findings support the notion that the end-effector (racket) provides deterministic cues for anticipation, and imply that players can benefit from the relative motion of an intermediate extremity (elbow extension).
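The perturbation technique, scaling the rotation speed of a single joint and rebuilding the motion, can be sketched generically: differentiate the joint angle, scale the angular velocity, and re-integrate. This is our own illustration, not the authors' simulation code:

```python
# Sketch of perturbing one joint's rotation speed in a captured motion:
# differentiate the joint angle, scale the angular velocity, re-integrate.
import numpy as np
from scipy.integrate import cumulative_trapezoid

def perturb_joint_speed(theta, fs, gain):
    """theta: joint angle time series (rad); fs: capture rate (Hz);
    gain: e.g. 0.8 = slower, 1.2 = faster rotation of this joint only."""
    t = np.arange(len(theta)) / fs
    omega = np.gradient(theta, t)                       # angular velocity
    return theta[0] + cumulative_trapezoid(gain * omega, t, initial=0.0)

fs = 250.0
t = np.arange(0, 1, 1 / fs)
pronation = 0.9 * np.sin(2 * np.pi * 1.5 * t)           # toy forearm angle
faster = perturb_joint_speed(pronation, fs, gain=1.2)
# Peak angular velocity scales by the gain factor (~1.2 here):
print(np.abs(np.gradient(faster)).max() / np.abs(np.gradient(pronation)).max())
```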
Affiliation(s)
- Hirofumi Ida
- Department of Sports and Health Management, Jobu University, Isesaki, Japan
- Kazunobu Fukuhara
- Department of Health Promotion Science, Tokyo Metropolitan University, Hachioji, Japan
- Motonobu Ishii
- Department of Human System Science, Tokyo Institute of Technology, Tokyo, Japan
- Tetsuri Inoue
- Department of Network and Communication, Kanagawa Institute of Technology, Atsugi, Japan
10
Potential for social involvement modulates activity within the mirror and the mentalizing systems. Sci Rep 2017; 7:14967. PMID: 29097704. PMCID: PMC5668415. DOI: 10.1038/s41598-017-14476-9.
Abstract
Processing biological motion is fundamental for everyday life activities, such as social interaction, motor learning and nonverbal communication. The ability to detect the nature of a motor pattern has been investigated by means of point-light displays (PLD), sets of moving light points reproducing human kinematics, easily recognizable as meaningful once in motion. Although PLD are rudimentary, the human brain can decipher their content, including social intentions. Neuroimaging studies suggest that inferring the social meaning conveyed by PLD could rely on both the Mirror Neuron System (MNS) and the Mentalizing System (MS), but their specific roles in this endeavor remain uncertain. We describe a functional magnetic resonance imaging experiment in which participants had to judge whether visually presented PLD and videoclips of human-like walkers (HL) were facing towards or away from them. Results show that coding for stimulus direction specifically engages the MNS when considering PLD moving away from the observer, while the nature of the stimulus reveals a dissociation between the MNS (mainly involved in coding for PLD) and the MS, recruited by HL moving away. These results suggest that the contribution of the two systems can be modulated by the nature of the observed stimulus and its potential for social involvement.
11
von der Lühe T, Manera V, Barisic I, Becchio C, Vogeley K, Schilbach L. Interpersonal predictive coding, not action perception, is impaired in autism. Philos Trans R Soc Lond B Biol Sci 2016; 371:20150373. PMID: 27069050. PMCID: PMC4843611. DOI: 10.1098/rstb.2015.0373.
Abstract
This study was conducted to examine interpersonal predictive coding in individuals with high-functioning autism (HFA). Healthy and HFA participants observed point-light displays of two agents (A and B) performing separate actions. In the ‘communicative’ condition, the action performed by agent B responded to a communicative gesture performed by agent A. In the ‘individual’ condition, agent A's communicative action was substituted by a non-communicative action. Using a simultaneous masking-detection task, we demonstrate that observing agent A's communicative gesture enhanced visual discrimination of agent B for healthy controls, but not for participants with HFA. These results were not explained by differences in attentional factors as measured via eye-tracking, or by differences in the recognition of the point-light actions employed. Our findings, therefore, suggest that individuals with HFA are impaired in the use of social information to predict others' actions and provide behavioural evidence that such deficits could be closely related to impairments of predictive coding.
Affiliation(s)
- T von der Lühe
- Department of Psychiatry, University Hospital Cologne, 50937 Cologne, Germany
- V Manera
- CoBtek Laboratory, University of Nice Sophia Antipolis, 06103 Nice, France
- I Barisic
- Cognitive Science Department, ETH Zürich, 8092 Zürich, Switzerland
- C Becchio
- C'MON Cognition Motion and Neuroscience Unit, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
- Department of Psychology, University of Turin, Turin, Italy
- K Vogeley
- Department of Psychiatry, University Hospital Cologne, 50937 Cologne, Germany
- Research Centre Juelich, Institute of Neuroscience and Medicine (INM-3), 52428 Juelich, Germany
- L Schilbach
- Department of Psychiatry, University Hospital Cologne, 50937 Cologne, Germany
- Max Planck Institute of Psychiatry, 80804 Munich, Germany
12
Human biological and nonbiological point-light movements: Creation and validation of the dataset. Behav Res Methods 2016; 49:2083-2092. DOI: 10.3758/s13428-016-0843-9.
13
Piwek L, Petrini K, Pollick F. A dyadic stimulus set of audiovisual affective displays for the study of multisensory, emotional, social interactions. Behav Res Methods 2016; 48:1285-1295. PMID: 26542970. PMCID: PMC5101291. DOI: 10.3758/s13428-015-0654-4.
Abstract
We describe the creation of the first multisensory stimulus set that consists of dyadic, emotional, point-light interactions combined with voice dialogues. Our set includes 238 unique clips, which present happy, angry and neutral emotional interactions at low, medium and high levels of emotional intensity between nine different actor dyads. The set was evaluated in an experiment with a between-subjects design and was found to be suitable for broad application in the cognitive and neuroscientific study of biological motion and voice, the perception of social interactions, and multisensory integration. We also detail in this paper a number of supplementary materials, comprising AVI movie files for each interaction, along with text files specifying the three-dimensional coordinates of each point-light in each frame of the movie, as well as unprocessed AIFF audio files for each dialogue captured. The full set of stimuli is available to download from: http://motioninsocial.com/stimuli_set/.
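Supplementary text files of this kind, listing 3-D point-light coordinates per frame, are straightforward to load and replay. A hypothetical loader/viewer sketch; the actual column layout of the files, the file name, and the point count are assumptions, so check the set's documentation before use:

```python
# Hypothetical loader for per-frame point-light coordinates; the real files'
# column layout may differ from the (..., x, y, z) guess used here.
import numpy as np
import matplotlib.pyplot as plt

def load_pld(path, n_points):
    data = np.loadtxt(path)                        # one row per point per frame
    return data[:, -3:].reshape(-1, n_points, 3)   # (frames, points, xyz)

def show_frame(frames, i):
    x, y = frames[i, :, 0], frames[i, :, 1]        # orthographic front view
    plt.scatter(x, y, c="white", s=20)
    plt.gca().set_facecolor("black")
    plt.gca().set_aspect("equal")
    plt.show()

frames = load_pld("dyad_happy_high_01.txt", n_points=22)  # hypothetical name
show_frame(frames, i=0)
```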
Affiliation(s)
- Lukasz Piwek
- Centre for the Study of Behaviour Change and Influence, University of the West of England, 4D17, Coldharbour Lane, BS16 1QY Bristol, UK
- Karin Petrini
- Department of Psychology, University of Bath, Claverton Down, BA2 7AY Bath, UK
- Frank Pollick
- School of Psychology, University of Glasgow, 58 Hillhead Street, G12 8QB Glasgow, UK
14
Manera V, von der Lühe T, Schilbach L, Verfaillie K, Becchio C. Communicative interactions in point-light displays: Choosing among multiple response alternatives. Behav Res Methods 2016; 48:1580-1590. PMID: 26487054. PMCID: PMC5101265. DOI: 10.3758/s13428-015-0669-x.
Abstract
Vision scientists are increasingly relying on the point-light technique as a way to investigate the perception of human motion. Unfortunately, the lack of standardized stimulus sets has so far limited the use of this technique for studying social interaction. Here, we describe a new tool to study the interaction between two agents starting from point-light displays: the Communicative Interaction Database - 5AFC format (CID-5). The CID-5 consists of 14 communicative and 7 non-communicative individual actions performed by two agents. Stimuli were constructed by combining motion-capture techniques and 3-D animation software to provide precise control over the computer-generated actions. For each action stimulus, we provide coordinate files and movie files depicting the action as seen from four different perspectives. Furthermore, the archive contains a text file with a list of five alternative action descriptions to construct forced-choice paradigms. To validate the CID-5 format, we provide normative data collected to assess action identification within a 5AFC task. The CID-5 archive is freely downloadable from http://bsb-lab.org/research/ and from the supplementary materials of this article.
Affiliation(s)
- Valeria Manera
- CoBTek Laboratory, University of Nice Sophia Antipolis, Nice, France
- Tabea von der Lühe
- Department of Psychiatry and Psychotherapy, Heinrich-Heine-University of Düsseldorf, Rhineland State Clinics Düsseldorf, Düsseldorf, Germany
- Leonhard Schilbach
- Max Planck Institute of Psychiatry, Munich, Germany
- Department of Psychiatry, University Hospital Cologne, Cologne, Germany
- Karl Verfaillie
- Laboratory of Experimental Psychology, KU Leuven, Leuven, Belgium
- Cristina Becchio
- Department of Robotics, Brain and Cognitive Sciences, Fondazione Istituto Italiano di Tecnologia, Genova, Italy.
- Department of Psychology, University of Turin, Via Po 14, 10123, Turin, Italy.
15
Vanrie J, Dekeyser M, Verfaillie K. Bistability and biasing effects in the perception of ambiguous point-light walkers. Perception 2004; 33:547-560. PMID: 15250660. DOI: 10.1068/p5004.
Abstract
The perceptually bistable character of point-light walkers has been examined in three experiments. A point-light figure without explicit depth cues constitutes a perfectly ambiguous stimulus: from all viewpoints, multiple interpretations are possible concerning the depth orientation of the figure. In the first experiment, it is shown that non-lateral views of the walker are indeed interpreted in two orientations, either as facing towards the viewer or as facing away from the viewer, but that the interpretation in which the walker is oriented towards the viewer is reported more frequently. In the second experiment the point-light figure was walking backwards, making the global orientation of the point-light figure opposite to the direction of global motion. The interpretation in which the walker was facing the viewer was again reported more frequently. The robustness of these findings was examined in the final experiment, in which the effects of disambiguating the stimulus by introducing a local depth cue (occlusion) or a more global depth cue (applying perspective projection) were explored.
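The ambiguity exploited here follows directly from projection geometry: without depth cues, a point-light figure is effectively viewed under orthographic projection, and mirroring the figure in depth (swapping 'towards' for 'away') leaves the 2-D image unchanged. A few lines verify this:

```python
# Depth ambiguity of a point-light figure under orthographic projection:
# negating every z coordinate (towards <-> away) leaves the 2-D image intact.
import numpy as np

rng = np.random.default_rng(0)
points = rng.normal(size=(13, 3))       # toy 13-point "walker", columns x, y, z

def orthographic(p):                    # drop depth: keep (x, y) only
    return p[:, :2]

flipped = points * np.array([1.0, 1.0, -1.0])   # mirror the figure in depth
print(np.allclose(orthographic(points), orthographic(flipped)))  # True
```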
Affiliation(s)
- Jan Vanrie
- Laboratory of Experimental Psychology, K.U.Leuven, Tiensestraat 102, B-3000 Leuven, Belgium.
16
17
Lindner I, Schain C, Echterhoff G. Other-self confusions in action memory: The role of motor processes. Cognition 2016; 149:67-76. PMID: 26803394. DOI: 10.1016/j.cognition.2016.01.003.
Abstract
People can come to falsely remember performing actions that they have not actually performed. Common accounts of such false action memories have invoked source confusion from the overlap of sensory features but largely ignored the role of motor processes. We addressed this lacuna with a paradigm in which participants first perform (vs. do not perform) actions and then observe another person performing some of the non-performed actions. In this paradigm, observation of videos showing another's actions can later induce false self-attributions of these actions, the observation-inflation effect. Contrary to a sensory-feature account but consistent with a motor-simulation account, we found the effect even with perceptually impoverished action videos in which the majority of sensory features is absent, but motion cues are preserved (Experiment 1). We then created conditions during action observation that should (vs. should not) impede motor simulation. As predicted we found that the effect of observation was reduced when participants executed movements that were incongruent (vs. congruent) with the observed actions (Experiment 2). We discuss the processes that can produce associations of self with observed others' actions and later affect observers' action memory.
Affiliation(s)
- Isabel Lindner
- Department of Psychology, University of Kassel, Holländische Str. 36-38, 34127 Kassel, Germany.
- Cécile Schain
- Department of Psychology, University of Münster, Fliednerstr. 21, 48149 Münster, Germany.
- Gerald Echterhoff
- Department of Psychology, University of Münster, Fliednerstr. 21, 48149 Münster, Germany.
18
Manera V, Ianì F, Bourgeois J, Haman M, Okruszek ŁP, Rivera SM, Robert P, Schilbach L, Sievers E, Verfaillie K, Vogeley K, von der Lühe T, Willems S, Becchio C. The Multilingual CID-5: A New Tool to Study the Perception of Communicative Interactions in Different Languages. Front Psychol 2015; 6:1724. PMID: 26635651. PMCID: PMC4648072. DOI: 10.3389/fpsyg.2015.01724.
Abstract
The investigation of the ability to perceive, recognize, and judge social intentions, such as communicative intentions, on the basis of body motion is a growing research area. Cross-cultural differences in the ability to perceive and interpret biological motion, however, have been poorly investigated so far. Progress in this domain strongly depends on the availability of suitable stimulus material. In the present method paper, we describe the multilingual CID-5, an extension of the CID-5 database, allowing for the investigation of how non-conventional communicative gestures are classified and identified by speakers of different languages. The CID-5 database contains 14 communicative interactions and 7 non-communicative actions performed by pairs of agents and presented as point-light displays. For each action, the database provides movie files with the point-light animation, text files with the 3-D spatial coordinates of the point-lights, and five different response alternatives. In the multilingual CID-5 the alternatives were translated into seven languages (Chinese, Dutch, English, French, German, Italian, and Polish). Preliminary data collected to assess the recognizability of the actions in the different languages suggest that, for most of the action stimuli, the information presented in point-light displays is sufficient for the distinctive classification of the action as communicative vs. individual, as well as for identification of the specific communicative gesture performed by the actor, in all the available languages.
Affiliation(s)
- Valeria Manera
- CoBTeK Laboratory, Faculty of Medicine, University of Nice Sophia Antipolis, Nice, France
- Francesco Ianì
- Department of Psychology, University of Turin, Turin, Italy
- Jérémy Bourgeois
- CoBTeK Laboratory, Faculty of Medicine, University of Nice Sophia Antipolis, Nice, France
- Maciej Haman
- Faculty of Psychology, University of Warsaw, Warsaw, Poland
- Susan M Rivera
- Department of Psychology, Center for Mind and Brain & The MIND Institute, University of California, Davis, Davis, CA, USA
- Philippe Robert
- CoBTeK Laboratory, Faculty of Medicine, University of Nice Sophia Antipolis, Nice, France
- Centre Mémoire de Ressources et de Recherche, CHU de Nice, Nice, France
- Leonhard Schilbach
- Department of Psychiatry, University Hospital Cologne, Cologne, Germany
- Max Planck Institute of Psychiatry, Munich, Germany
- Emily Sievers
- Department of Psychology, Center for Mind and Brain & The MIND Institute, University of California, Davis, Davis, CA, USA
- Karl Verfaillie
- Laboratory of Experimental Psychology, KU Leuven, Leuven, Belgium
- Kai Vogeley
- Department of Psychiatry, University Hospital Cologne, Cologne, Germany
- Cognitive Neuroscience, Institute of Neuroscience and Medicine (INM3), Research Center Jülich, Jülich, Germany
- Tabea von der Lühe
- Department of Psychiatry and Psychotherapy, Heinrich-Heine-University of Düsseldorf, Rhineland State Clinics Düsseldorf, Düsseldorf, Germany
- Sam Willems
- Laboratory of Experimental Psychology, KU Leuven, Leuven, Belgium
- Cristina Becchio
- Department of Psychology, University of Turin, Turin, Italy
- Department of Robotics, Brain and Cognitive Sciences, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
19
Piwek L, Pollick F, Petrini K. Audiovisual integration of emotional signals from others' social interactions. Front Psychol 2015; 6:611. PMID: 26005430. PMCID: PMC4424808. DOI: 10.3389/fpsyg.2015.00611.
Abstract
Audiovisual perception of emotions has typically been examined using displays of a solitary character (e.g., the face-voice and/or body-sound of one actor). However, in real life humans often face more complex multisensory social situations, involving more than one person. Here we ask if the audiovisual facilitation in emotion recognition previously found in simpler social situations extends to more complex and ecological situations. Stimuli consisting of the biological motion and voice of two interacting agents were used in two experiments. In Experiment 1, participants were presented with visual, auditory, auditory filtered/noisy, and audiovisual congruent and incongruent clips. We asked participants to judge whether the two agents were interacting happily or angrily. In Experiment 2, another group of participants repeated the same task as in Experiment 1 while trying to ignore either the visual or the auditory information. The findings from both experiments indicate that when the reliability of the auditory cue was decreased, participants weighted the visual cue more heavily in their emotional judgments. This in turn translated into increased emotion recognition accuracy for the multisensory condition. Our findings thus point to a common mechanism of multisensory integration of emotional signals irrespective of social stimulus complexity.
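This reweighting pattern is consistent with textbook reliability-weighted (maximum-likelihood) cue combination, in which each cue is weighted by its inverse variance. A small sketch of that standard model, as our own illustration rather than the authors' analysis:

```python
# Textbook maximum-likelihood cue combination: each cue's weight is its
# reliability (inverse variance) normalised over cues. Illustrative only.
def combine(est_visual, var_visual, est_audio, var_audio):
    r_v, r_a = 1.0 / var_visual, 1.0 / var_audio
    w_v = r_v / (r_v + r_a)              # visual weight grows as audio degrades
    return w_v * est_visual + (1 - w_v) * est_audio, w_v

# "Happy vs angry" evidence on an arbitrary axis; degrading audio (larger
# variance) shifts weight toward vision, as in the behavioural result.
for var_audio in (1.0, 4.0, 16.0):
    est, w_v = combine(est_visual=0.8, var_visual=1.0,
                       est_audio=0.2, var_audio=var_audio)
    print(f"audio var={var_audio:>4}: visual weight={w_v:.2f}, combined={est:.2f}")
```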
Affiliation(s)
- Lukasz Piwek
- Behaviour Research Lab, Bristol Business School, University of the West of England, Bristol, UK
- Frank Pollick
- School of Psychology, College of Science and Engineering, University of Glasgow, Glasgow, UK
- Karin Petrini
- Department of Psychology, Faculty of Humanities & Social Sciences, University of Bath, Bath, UK
20
Davila A, Schouten B, Verfaillie K. Perceiving the direction of articulatory motion in point-light actions. PLoS One 2014; 9:e115117. PMID: 25526397. PMCID: PMC4272303. DOI: 10.1371/journal.pone.0115117.
Abstract
Human observers are able to perceive the motion direction of actions (either forward or backward) on the basis of the articulatory, relative motion of the limbs, even when the actions are shown under point-light conditions. However, most studies have focused on the action of walking. The primary purpose of the present study is to further investigate the perception of articulatory motion in different point-light actions (walking, crawling, hand walking, and rowing). On each trial, participants were presented with a forward or backward moving person and they had to decide on the direction of articulatory motion of the person. We analyzed sensitivity (d') as well as response bias (c). In addition to the type of action, the diagnosticity of the available information was manipulated by varying the visibility of the body parts (full body, only upper limbs, or only lower limbs) and the viewpoint from which the action was seen (from frontal view to sagittal view). We observe that, depending on the specific action, perception of direction of motion is driven by different body parts. Implications for the possible existence of a life detector, i.e., an evolutionarily old and innate visual filter that is tuned to quickly and automatically detect the presence of a moving living organism and direct attention to it, are discussed.
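Sensitivity d′ and response bias c follow from hit and false-alarm rates in the usual equal-variance signal-detection way. A short sketch with the generic formulae and illustrative counts, not the paper's analysis script:

```python
# Standard equal-variance signal-detection indices from hit/false-alarm rates:
# d' = z(H) - z(F); criterion c = -(z(H) + z(F)) / 2. Illustrative values.
from scipy.stats import norm

def dprime_and_c(hits, misses, fas, crs):
    # Log-linear correction guards against proportions of exactly 0 or 1
    h = (hits + 0.5) / (hits + misses + 1)
    f = (fas + 0.5) / (fas + crs + 1)
    zh, zf = norm.ppf(h), norm.ppf(f)
    return zh - zf, -(zh + zf) / 2

d, c = dprime_and_c(hits=42, misses=8, fas=12, crs=38)
print(f"d' = {d:.2f}, c = {c:.2f}")
```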
Affiliation(s)
- Alex Davila
- Laboratory of Experimental Psychology, University of Leuven, Leuven, Belgium
- Ben Schouten
- Laboratory of Experimental Psychology, University of Leuven, Leuven, Belgium
- Karl Verfaillie
- Laboratory of Experimental Psychology, University of Leuven, Leuven, Belgium
21
Abstract
We present a database of high-definition (HD) videos for the study of traits inferred from whole-body actions. Twenty-nine actors (19 female) were filmed performing different actions (walking, picking up a box, putting down a box, jumping, sitting down, and standing and acting) while conveying different traits, including four emotions (anger, fear, happiness, sadness), untrustworthiness, and neutral, where no specific trait was conveyed. For the actions conveying the four emotions and untrustworthiness, the actions were filmed multiple times, with the actor conveying the traits at different levels of intensity. In total, we made 2,783 action videos (in both two-dimensional and three-dimensional format), each lasting 7 s with a frame rate of 50 fps. All videos were filmed in a green-screen studio in order to isolate the action information from all contextual detail and to provide a flexible stimulus set for future use. To validate the traits conveyed by each action, we asked participants to rate each action in the two-dimensional videos on the trait that the actor portrayed. To provide a useful database of stimuli of multiple actions conveying multiple traits, each video name contains information on the gender of the actor, the action executed, the trait conveyed, and the rating of its perceived intensity. All videos can be downloaded free at the following address: http://www-users.york.ac.uk/~neb506/databases.html. We discuss potential uses for the database in the analysis of the perception of whole-body actions.
22
The relative influences of movement kinematics and extrinsic object characteristics on the perception of lifted weight. Atten Percept Psychophys 2013; 75:1906-1913. PMID: 24027032. DOI: 10.3758/s13414-013-0539-5.
Abstract
Humans are able to perceive unique types of biological motion presented as point-light displays (PLDs). Thirty years ago, Runeson and Frykholm (Journal of Experimental Psychology: Human Perception and Performance, 7(4), 733, 1981; Journal of Experimental Psychology: General, 112(4), 585, 1983) studied observers' perceptions of weights lifted by actors and identified that the kinematic information in a PLD is sufficient for an observer to form an accurate perception of the object weight. However, research has also shown that extrinsic object size characteristics influence the perception of object weight (Gordon, Forssberg, Johansson, & Westling, Experimental Brain Research, 83(3), 477-482, 1991). This study addresses the relative contributions of these two types of visual information to observers' perceptions of lifted weight, through an experiment in which participants viewed an actor lifting boxes of various sizes (small, medium, or large) and weights (25, 50, or 75 lb) under four PLD conditions (box-at-rest, moving-box, actor-only, and actor-and-box) and one full-vision video condition, and then provided a weight estimate for each box lifted. The results indicated that lift kinematics and box size contributed independently to weight perception. Interestingly, the most robust weight differentiations were elicited in the conditions in which both types of information were presented concurrently, despite their converse natures. Furthermore, the full-vision video presentation, which contained visual information beyond kinematics and object information, elicited the best estimates.
23
Communicative and noncommunicative point-light actions featuring high-resolution representation of the hands and fingers. Behav Res Methods 2013; 45:319-328. PMID: 23073730. DOI: 10.3758/s13428-012-0273-2.
Abstract
We describe the creation of a set of point-light movies depicting 43 communicative gestures and 43 noncommunicative, pantomimed actions. These actions were recorded using a motion capture system that is worn on the body and provides accurate capture of the positions and movements of individual fingers. The movies created thus include point-lights on the fingers, allowing for representation of actions and gestures that would not be possible with a conventional, line-of-sight-based motion capture system. These videos would be suitable for use in cognitive and cognitive neuroscientific studies of biological motion and gesture perception. Each video is described, along with an H statistic indicating the consistency of the descriptive labels that 20 observers gave to the actions. We also produced a scrambled version of each movie, in which the starting position of each point was randomized but its local motion vector was preserved. These scrambled movies would be suitable for use as control stimuli in experimental studies. As supplementary materials, we provide QuickTime movie files of each action, along with text files specifying the three-dimensional coordinates of each point-light in each frame of each movie.
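The scrambling procedure described, random starting positions with preserved local motion vectors, reduces to a single array operation once the coordinates are loaded. A sketch assuming coordinates shaped (frames, points, 3):

```python
# Spatial scrambling of a point-light action: give each point a random start
# position while keeping its frame-to-frame motion vector intact. Sketch only.
import numpy as np

def scramble(coords, extent=1.0, seed=0):
    """coords: (frames, points, 3). Returns a scrambled copy."""
    rng = np.random.default_rng(seed)
    offsets = rng.uniform(-extent, extent, size=(1, coords.shape[1], 3))
    # Subtract each point's own start, then add a random start instead
    return coords - coords[0:1] + offsets

coords = np.cumsum(np.random.randn(180, 15, 3) * 0.01, axis=0)  # toy action
scr = scramble(coords)
# Local motion (frame-to-frame displacement) is unchanged:
print(np.allclose(np.diff(coords, axis=0), np.diff(scr, axis=0)))  # True
```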
24
Meyer GF, Harrison NR, Wuerger SM. The time course of auditory-visual processing of speech and body actions: Evidence for the simultaneous activation of an extended neural network for semantic processing. Neuropsychologia 2013; 51:1716-1725. PMID: 23727570. DOI: 10.1016/j.neuropsychologia.2013.05.014.
Abstract
An extensive network of cortical areas is involved in multisensory object and action recognition. This network draws on inferior frontal, posterior temporal, and parietal areas; activity is modulated by familiarity and by the semantic congruency of auditory and visual component signals, even if semantic incongruences are created by combining visual and auditory signals representing very different signal categories, such as speech and whole-body actions. Here we present results from a high-density ERP study designed to examine the time course and source location of responses to semantically congruent and incongruent audiovisual speech and body actions, to explore whether the network involved in action recognition consists of a hierarchy of sequentially activated processing modules or a network of simultaneously active processing sites. We report two main results: (1) There are no significant early differences in the processing of congruent and incongruent audiovisual action sequences. The earliest difference between congruent and incongruent audiovisual stimuli occurs between 240 and 280 ms after stimulus onset in the left temporal region. Between 340 and 420 ms, semantic congruence modulates responses in central and right frontal areas. Late differences (after 460 ms) occur bilaterally in frontal areas. (2) Source localisation (dipole modelling and LORETA) reveals that an extended network encompassing inferior frontal, temporal, parasagittal, and superior parietal sites is simultaneously active between 180 and 420 ms to process auditory-visual action sequences. Early activation (before 120 ms) can be explained by activity in mainly sensory cortices. The simultaneous activation of an extended network between 180 and 420 ms is consistent with models that posit parallel processing of complex action sequences in frontal, temporal and parietal areas, rather than models that postulate hierarchical processing in a sequence of brain regions.
Affiliation(s)
- Georg F Meyer
- Department of Psychological Sciences, University of Liverpool, Liverpool L697ZA, UK.
25
Manera V, Schouten B, Verfaillie K, Becchio C. Time will show: real time predictions during interpersonal action perception. PLoS One 2013; 8:e54949. PMID: 23349992. PMCID: PMC3551817. DOI: 10.1371/journal.pone.0054949.
Abstract
Predictive processes are crucial not only for interpreting the actions of individual agents, but also to predict how, in the context of a social interaction between two agents, the actions of one agent relate to the actions of a second agent. In the present study we investigated whether, in the context of a communicative interaction between two agents, observers can use the actions of one agent to predict when the action of a second agent will take place. Participants observed point-light displays of two agents (A and B) performing separate actions. In the communicative condition, the action performed by agent B responded to a communicative gesture performed by agent A. In the individual condition, agent A's communicative action was substituted with a non-communicative action. For each condition, we manipulated the temporal coupling of the actions of the two agents, by varying the onset of agent A's action. Using a simultaneous masking detection task, we demonstrated that the timing manipulation had a critical effect on the communicative condition, with the visual discrimination of agent B increasing linearly while approaching the original interaction timing. No effect of the timing manipulation was found for the individual condition. Our finding complements and extends previous evidence for interpersonal predictive coding, suggesting that the communicative gestures of one agent can serve not only to predict what the second agent will do, but also when his/her action will take place.
Affiliation(s)
- Valeria Manera
- Center for Cognitive Science, Department of Psychology, University of Turin, Turin, Italy
- Ben Schouten
- Laboratory of Experimental Psychology, K.U. Leuven, Leuven, Belgium
- Karl Verfaillie
- Laboratory of Experimental Psychology, K.U. Leuven, Leuven, Belgium
- Cristina Becchio
- Center for Cognitive Science, Department of Psychology, University of Turin, Turin, Italy
26
Thoresen JC, Vuong QC, Atkinson AP. First impressions: gait cues drive reliable trait judgements. Cognition 2012; 124:261-271. PMID: 22717166. DOI: 10.1016/j.cognition.2012.05.018.
Abstract
Personality trait attribution can underpin important social decisions and yet requires little effort; even a brief exposure to a photograph can generate lasting impressions. Body movement is a channel readily available to observers and allows judgements to be made when facial and body appearances are less visible, e.g., from great distances. Across three studies, we assessed the reliability of trait judgements of point-light walkers and identified the motion-related visual cues driving observers' judgements. The findings confirm that observers make reliable, albeit inaccurate, trait judgements, and these were linked to a small number of motion components derived from a Principal Component Analysis of the motion data. Parametric manipulation of the motion components linearly affected trait ratings, providing strong evidence that the visual cues captured by these components drive observers' trait judgements. Subsequent analyses suggest that the reliability of trait ratings was driven by impressions of emotion, attractiveness and masculinity.
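The component analysis described, a PCA over walker motion followed by parametric manipulation of single components, can be sketched generically; synthetic trajectories stand in for the real gait recordings, and this is not the study's pipeline:

```python
# Sketch of PCA over walker motion and parametric manipulation of one
# component; synthetic data stand in for real gait recordings.
import numpy as np
from sklearn.decomposition import PCA

n_walkers, n_frames, n_markers = 40, 100, 15
rng = np.random.default_rng(1)
# Each walker = flattened (frames x markers x 2D) trajectory vector
walkers = rng.normal(size=(n_walkers, n_frames * n_markers * 2))

pca = PCA(n_components=5).fit(walkers)
scores = pca.transform(walkers)

# Exaggerate component 0 for one walker by a gain factor, then reconstruct
gain = 2.0
s = scores[0].copy()
s[0] *= gain
manipulated = pca.inverse_transform(s[None, :])[0]
print(manipulated.shape)  # back to a full trajectory vector, ready to animate
```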
|
27
|
Manera V, Cavallo A, Chiavarino C, Schouten B, Verfaillie K, Becchio C. Are you approaching me? Motor execution influences perceived action orientation. PLoS One 2012; 7:e37514. [PMID: 22624042 PMCID: PMC3356325 DOI: 10.1371/journal.pone.0037514] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2011] [Accepted: 04/22/2012] [Indexed: 11/18/2022] Open
Abstract
Human observers are especially sensitive to the actions of conspecifics that match their own actions. This has been proposed to be critical for social interaction, providing the basis for empathy and joint action. However, the precise relation between observed and executed actions is still poorly understood. Do ongoing actions change the way observers perceive others' actions? To pursue this question, we exploited the bistability of depth-ambiguous point-light walkers, which can be perceived as facing towards the viewer or as facing away from the viewer. We demonstrate that point-light walkers are perceived more often as facing the viewer when the observer is walking on a treadmill compared to when the observer is performing an action that does not match the observed behavior (e.g., cycling). These findings suggest that motor processes influence the perceived orientation of observed actions: Acting observers tend to perceive similar actions by conspecifics as oriented towards themselves. We discuss these results in light of the possible mechanisms subtending action-induced modulation of perception.
Affiliation(s)
- Valeria Manera: Department of Psychology, Center for Cognitive Science, University of Turin, Turin, Italy; Laboratory of Experimental Psychology, K.U. Leuven, Leuven, Belgium
- Andrea Cavallo: Department of Psychology, Center for Cognitive Science, University of Turin, Turin, Italy
- Claudia Chiavarino: Department of Psychology, Center for Cognitive Science, University of Turin, Turin, Italy
- Ben Schouten: Laboratory of Experimental Psychology, K.U. Leuven, Leuven, Belgium
- Karl Verfaillie: Laboratory of Experimental Psychology, K.U. Leuven, Leuven, Belgium
- Cristina Becchio: Department of Psychology, Center for Cognitive Science, University of Turin, Turin, Italy
|
28
|
Ida H, Fukuhara K, Ishii M. Recognition of tennis serve performed by a digital player: comparison among polygon, shadow, and stick-figure models. PLoS One 2012; 7:e33879. [PMID: 22439009 PMCID: PMC3306305 DOI: 10.1371/journal.pone.0033879] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2011] [Accepted: 02/20/2012] [Indexed: 11/19/2022] Open
Abstract
The objective of this study was to assess the cognitive effect of human character models on the observer's ability to extract relevant information from computer-graphics animations of tennis serve motions. Three digital human models (polygon, shadow, and stick-figure) were used to display the computationally simulated serve motions, which were perturbed at the racket arm by modulating the speed (slower or faster) of one of the joint rotations (wrist, elbow, or shoulder). Twenty-one experienced tennis players and 21 novices made discrimination responses about the modulated joint and also rated the perceived swing speed on a visual analogue scale. The results showed that the discrimination accuracies of the experienced players were both above and below chance level, depending on the modulated joint, whereas those of the novices mostly remained at chance (guessing) level. For the experienced players, the polygon model decreased discrimination accuracy compared with the stick-figure model, suggesting that complicated pictorial information may have a distracting effect on the recognition of the observed action. On the other hand, the perceived swing speed of the perturbed motion relative to the control was lower for the stick-figure model than for the polygon model, regardless of skill level, suggesting that simplified visual information can bias the perception of motion speed toward slower values. It was also shown that increasing the joint rotation speed increased the perceived swing speed, although the resulting racket velocity had little correlation with this speed sensation. Collectively, the observer's recognition of the motion pattern and perception of the motion speed can be affected by the pictorial information of the human model as well as by the perturbation applied to the observed motion.
Affiliation(s)
- Hirofumi Ida: Department of Human System Science, Tokyo Institute of Technology, Tokyo, Japan
|
29
|
Poljac E, de-Wit L, Wagemans J. Perceptual wholes can reduce the conscious accessibility of their parts. Cognition 2012; 123:308-12. [PMID: 22306190 DOI: 10.1016/j.cognition.2012.01.001] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2011] [Revised: 01/11/2012] [Accepted: 01/11/2012] [Indexed: 10/14/2022]
Abstract
Humans can rapidly extract object and category information from an image despite surprising limitations in detecting changes to the individual parts of that image. In this article we provide evidence that the construction of a perceptual whole, or Gestalt, reduces awareness of changes to the parts of this object. This result suggests that the rapid extraction of a perceptual Gestalt, and the inaccessibility of the parts that make up that Gestalt, may in fact reflect two sides of the same coin whereby human vision provides only the most useful level of abstraction to conscious awareness.
Affiliation(s)
- Ervin Poljac: Laboratory of Experimental Psychology, University of Leuven (K.U. Leuven), Belgium
|
30
|
A study of kinematic cues and anticipatory performance in tennis using computational manipulation and computer graphics. Behav Res Methods 2012; 43:781-90. [PMID: 21487901 DOI: 10.3758/s13428-011-0084-x] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Computer graphics of digital human models can be used to display human motions as visual stimuli. This study presents our technique for manipulating human motion with a forward-kinematics calculation without violating anatomical constraints. A motion modulation of the upper extremity was conducted by proportionally modulating the anatomical joint angular velocity calculated by motion analysis. The effect of this manipulation was examined in a tennis setting: the receiver's performance in predicting ball direction when viewing a digital model of the server's motion, derived by modulating the angular velocity of the forearm or of the elbow during the forward swing. The results showed that the faster the server's forearm pronated, the more the receiver's anticipation of the ball direction tended toward the left side of the service box. In contrast, the faster the server's elbow extended, the more the receiver's anticipation of the ball direction tended toward the right. This suggests that tennis players are sensitive to motion modulation of their opponent's racket arm.
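As a rough illustration of this kind of forward-kinematics perturbation (a minimal sketch, not the paper's implementation; segment lengths, frame rate, and velocity profiles are invented), one joint's angular velocity can be scaled proportionally, re-integrated to joint angles, and passed through a planar two-segment arm model:

# Illustrative sketch: scale one joint's angular velocity, integrate to
# angles, then compute hand positions by planar forward kinematics.
import numpy as np

dt = 1.0 / 120.0                          # assumed frame interval (120 fps)
t = np.arange(0.0, 0.5, dt)

# Placeholder angular-velocity profiles (rad/s) for elbow and forearm.
elbow_vel = 8.0 * np.sin(2.0 * np.pi * 2.0 * t)
forearm_vel = 12.0 * np.sin(2.0 * np.pi * 3.0 * t)

def forward_kinematics(elbow_ang, forearm_ang, l_upper=0.30, l_fore=0.25):
    # Planar forward kinematics with the shoulder at the origin.
    wrist_x = l_upper * np.cos(elbow_ang)
    wrist_y = l_upper * np.sin(elbow_ang)
    hand_x = wrist_x + l_fore * np.cos(elbow_ang + forearm_ang)
    hand_y = wrist_y + l_fore * np.sin(elbow_ang + forearm_ang)
    return np.stack([hand_x, hand_y], axis=-1)

for scale in (0.8, 1.0, 1.2):             # slower / original / faster joint
    elbow_ang = np.cumsum(elbow_vel) * dt              # integrate velocities
    forearm_ang = np.cumsum(scale * forearm_vel) * dt  # perturb one joint only
    hand_path = forward_kinematics(elbow_ang, forearm_ang)
    peak_speed = np.linalg.norm(np.diff(hand_path, axis=0), axis=1).max() / dt
    print(scale, float(peak_speed))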
|
31
|
Mouta S, Santos JDA. Percepção de velocidade do movimento biológico: mais resistente ao fenômeno de interferência? [Speed perception of biological motion: more resistant to the interference phenomenon?]. ESTUDOS DE PSICOLOGIA (CAMPINAS) 2011. [DOI: 10.1590/s0103-166x2011000400008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
The human visual system is often described as highly tuned to extract relevant information from biological motion patterns. In this vein, the present study examines the contrast effect on speed perception. Participants made speed judgments in a situation in which two simultaneous point-light walkers were presented with different contrasts relative to the background and with different translation speeds. In Experiment 1, canonical biological translation was compared with rigid translation, while in Experiment 2 it was compared with inverted biological translation. The canonical biological pattern showed a higher error rate, longer reaction times, and greater vulnerability to the contrast effect on perceived speed than the rigid pattern. However, no significant differences were found between the canonical and inverted stimuli. Experiment 3 was implemented to control for the role of positional cues in the speed judgment task. The start and end points of the trajectory were combined so that the faster and slower point-light walkers could end the trial in a relatively more advanced or more delayed position. Despite this variation, the pattern of results was consistent with the observations of Experiments 1 and 2. Apparently, participants made factual speed judgments rather than using spatial cues as a kind of positional reference or comparison. Given that the perception of biological patterns was more vulnerable to contrast effects but was not affected by familiarity, this study suggests that the perception of biological and rigid motion may obey the same computational rules, at least in tasks involving translating patterns and speed judgments.
Affiliation(s)
- Sandra Mouta: Universitat de Barcelona, Spain; Universidade do Porto, Portugal
|
32
|
Poljac E, Verfaillie K, Wagemans J. Integrating biological motion: the role of grouping in the perception of point-light actions. PLoS One 2011; 6:e25867. [PMID: 21991376 PMCID: PMC3185055 DOI: 10.1371/journal.pone.0025867] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2011] [Accepted: 09/13/2011] [Indexed: 11/29/2022] Open
Abstract
The human visual system is highly sensitive to biological motion and manages to organize even a highly reduced point-light stimulus into a vivid percept of human action. The current study investigated to what extent the saliency of point-light displays originates in their intrinsic Gestalt qualities. In particular, we studied whether biological motion perception is facilitated when the elements can be grouped according to the Gestalt principles of good continuation and similarity. We found that both grouping principles enhanced biological motion perception, but their effects differed when stimuli were inverted. These results provide evidence that the Gestalt principles of good continuation and similarity also apply to more complex, dynamic, and meaningful stimuli.
Affiliation(s)
- Ervin Poljac: Laboratory of Experimental Psychology, University of Leuven (K.U. Leuven), Leuven, Belgium
|
33
|
Vanrie J, Verfaillie K. On the depth reversibility of point-light actions. VISUAL COGNITION 2011. [DOI: 10.1080/13506285.2011.614381] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/15/2022]
|
34
|
The second-agent effect: communicative gestures increase the likelihood of perceiving a second agent. PLoS One 2011; 6:e22650. [PMID: 21829472 PMCID: PMC3145660 DOI: 10.1371/journal.pone.0022650] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2011] [Accepted: 06/27/2011] [Indexed: 11/20/2022] Open
Abstract
BACKGROUND: Beyond providing cues about an agent's intention, communicative actions convey information about the presence of a second agent towards whom the action is directed (second-agent information). In two psychophysical studies we investigated whether the perceptual system makes use of this information to infer the presence of a second agent when dealing with impoverished and/or noisy sensory input.
METHODOLOGY/PRINCIPAL FINDINGS: Participants observed point-light displays of two agents (A and B) performing separate actions. In the Communicative condition, agent B's action was performed in response to a communicative gesture by agent A. In the Individual condition, agent A's communicative action was replaced with a non-communicative action. Participants performed a simultaneous masking yes-no task, in which they were asked to detect the presence of agent B. In Experiment 1, we investigated whether the criterion c was lowered in the Communicative condition compared to the Individual condition, reflecting a variation in perceptual expectations. In Experiment 2, we manipulated the congruence between A's communicative gesture and B's response, to ascertain whether the lowering of c in the Communicative condition reflected a truly perceptual effect. Results demonstrate that information extracted from communicative gestures influences the concurrent processing of biological motion by prompting perception of a second agent (the second-agent effect).
CONCLUSIONS/SIGNIFICANCE: We propose that this finding is best explained within a Bayesian framework, which gives a powerful rationale for the pervasive role of prior expectations in visual perception.
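For readers unfamiliar with the yes-no analysis, the detection-theoretic quantities involved are standard: with hit rate H and false-alarm rate F, sensitivity is d' = z(H) - z(F) and the criterion is c = -(z(H) + z(F))/2. A minimal sketch with made-up counts (not the study's data) shows how a lowered c can coexist with a largely unchanged d':

# Sketch: sensitivity d' and criterion c from raw yes-no response counts.
from scipy.stats import norm

def dprime_and_criterion(hits, misses, fas, crs):
    # Hit and false-alarm rates from raw counts.
    h = hits / (hits + misses)
    f = fas / (fas + crs)
    d = norm.ppf(h) - norm.ppf(f)            # sensitivity d'
    c = -0.5 * (norm.ppf(h) + norm.ppf(f))   # response criterion c
    return d, c

# Hypothetical counts: the "communicative" row answers yes more often,
# lowering c (a more liberal criterion) with little change in d'.
print(dprime_and_criterion(70, 30, 30, 70))  # individual condition
print(dprime_and_criterion(80, 20, 45, 55))  # communicative condition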
|
35
|
Woo KL, Rieucau G. From dummies to animations: a review of computer-animated stimuli used in animal behavior studies. Behav Ecol Sociobiol 2011. [DOI: 10.1007/s00265-011-1226-y] [Citation(s) in RCA: 47] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
|
36
|
Manera V, Becchio C, Schouten B, Bara BG, Verfaillie K. Communicative interactions improve visual detection of biological motion. PLoS One 2011; 6:e14594. [PMID: 21297865 PMCID: PMC3027618 DOI: 10.1371/journal.pone.0014594] [Citation(s) in RCA: 63] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2010] [Accepted: 01/03/2011] [Indexed: 11/18/2022] Open
Abstract
BACKGROUND: In the context of interactive activities requiring close body contact, such as fighting or dancing, the actions of one agent can be used to predict the actions of the second agent. In the present study, we investigated whether interpersonal predictive coding extends to interactive activities, such as communicative interactions, in which no physical contingency is implied between the movements of the interacting individuals.
METHODOLOGY/PRINCIPAL FINDINGS: Participants observed point-light displays of two agents (A and B) performing separate actions. In the communicative condition, the action performed by agent B responded to a communicative gesture performed by agent A. In the individual condition, agent A's communicative action was substituted with a non-communicative action. Using a simultaneous masking detection task, we demonstrate that observing the communicative gesture performed by agent A enhanced visual discrimination of agent B.
CONCLUSIONS/SIGNIFICANCE: Our finding complements and extends previous evidence for interpersonal predictive coding, suggesting that the communicative gestures of one agent can serve as a predictor of the expected actions of the respondent, even if no physical contact between agents is implied.
Affiliation(s)
- Valeria Manera: Department of Psychology, Center for Cognitive Science, University of Turin, Turin, Italy
|
37
|
Ida H, Fukuhara K, Sawada M, Ishii M. Quantitative Relation between Server Motion and Receiver Anticipation in Tennis: Implications of Responses to Computer-Simulated Motions. Perception 2011; 40:1221-36. [DOI: 10.1068/p7041] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/15/2022]
Abstract
The purpose of this study was to determine the quantitative relationships between the server's motion and the receiver's anticipation, using a computer-graphics animation of tennis serves. The test motions were generated by capturing the motion of a model player and computing perturbations produced by modulating the rotation of the player's elbow and forearm joints. Eight experienced and eight novice players rated their anticipation of the speed, direction, and spin of the ball on a visual analogue scale. The experienced players significantly altered some of their anticipatory judgements depending on the percentage of both the forearm and elbow modulations, while the novice players showed no significant changes. Multiple regression analyses, with the racket's kinematic parameters immediately before racket-ball impact as independent variables, showed that the experienced players had a higher coefficient of determination than the novice players for their anticipatory judgement of ball direction. The results have implications for understanding the functional relation between a player's motion and the opponent's anticipatory judgement during real play.
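The regression comparison amounts to fitting each group's anticipatory ratings on the racket's pre-impact kinematics and comparing coefficients of determination. A minimal sketch with fabricated data (all variable names and numbers are placeholders, not the study's variables):

# Sketch: compare R^2 of anticipation ratings regressed on racket kinematics.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
# Placeholder pre-impact racket kinematics (speed, azimuth, tilt) per trial.
X = rng.normal(size=(40, 3))

# Fabricated ratings: the "experienced" group tracks the kinematics closely,
# the "novice" group barely at all.
experienced = X @ np.array([0.8, 0.5, 0.3]) + 0.2 * rng.normal(size=40)
novice = X @ np.array([0.1, 0.05, 0.0]) + 1.0 * rng.normal(size=40)

for label, y in (("experienced", experienced), ("novice", novice)):
    r2 = LinearRegression().fit(X, y).score(X, y)  # coefficient of determination
    print(label, round(r2, 2))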
Affiliation(s)
- Hirofumi Ida: Human Media Research Center, Kanagawa Institute of Technology, 1030 Shimo-ogino, Atsugi, Kanagawa 243-0292, Japan
- Misako Sawada: Department of Child Studies, Japan Women's University, 2-8-1 Mejirodai, Bunkyo, Tokyo 112-8681, Japan
|
38
|
Determining the point of subjective ambiguity of ambiguous biological-motion figures with perspective cues. Behav Res Methods 2010; 42:161-7. [PMID: 20160296 DOI: 10.3758/brm.42.1.161] [Citation(s) in RCA: 13] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Orthographic frontal/back projections of biological-motion figures are bistable: the point-light figure can in principle be perceived either as facing toward the viewer or as facing away from the viewer. Some point-light actions--for example, walking--elicit a strong "facing bias": despite the absence of objective cues to depth, observers tend to interpret the figure as facing toward the viewer in most cases. In this article, we present and experimentally validate a technique that affords full experimental control over the perceived in-depth orientation of point-light figures. We demonstrate that, by parametrically manipulating the amount of perspective information in the stimulus, it is possible to obtain any desired level of subjective ambiguity. Directions for future research in which this technique can be fruitfully implemented are suggested. Program code for a demo is provided that can easily be modified for new experiments. The demo and QuickTime movie files illustrating our perspective manipulation technique may be downloaded from http://brm.psychonomic-journals.org/content/supplemental.
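The perspective-titration idea can be sketched independently of the published demo (whose code lives at the archive URL above). In the toy projection below everything is an assumption: a single parameter blends an orthographic projection (no depth cue, so the figure stays ambiguous apart from the facing bias) with a perspective one:

# Sketch: parametrically mix orthographic and perspective projection of
# 3D point-light coordinates (all data and parameters are placeholders).
import numpy as np

def project(points_xyz, perspective=1.0, viewer_dist=5.0):
    # perspective = 0 gives a pure orthographic projection; 1 gives full
    # perspective foreshortening; intermediate values titrate the depth cue.
    x, y, z = points_xyz.T
    foreshorten = viewer_dist / (viewer_dist + z)
    mix = (1.0 - perspective) + perspective * foreshorten
    return np.stack([x * mix, y * mix], axis=-1)

# One placeholder frame of a 13-marker point-light figure (metres).
rng = np.random.default_rng(1)
frame = rng.normal(scale=0.5, size=(13, 3))
for p in (0.0, 0.25, 0.5, 1.0):
    print(p, project(frame, perspective=p)[0])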
|
39
|
Inferring intentions from biological motion: A stimulus set of point-light communicative interactions. Behav Res Methods 2010; 42:168-78. [DOI: 10.3758/brm.42.1.168] [Citation(s) in RCA: 72] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
40
|
Vangeneugden J, Pollick F, Vogels R. Functional Differentiation of Macaque Visual Temporal Cortical Neurons Using a Parametric Action Space. Cereb Cortex 2008; 19:593-611. [DOI: 10.1093/cercor/bhn109] [Citation(s) in RCA: 69] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
|
41
|
Jackson S, Brady N, Cummins F, Monaghan K. Interaction effects in simultaneous motor control and movement perception tasks. Artif Intell Rev 2007. [DOI: 10.1007/s10462-007-9035-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
|
42
|
Abstract
The present study investigates how observers assign depth in point-light figures, by manipulating spatiotemporal characteristics of the stimuli. Previous research on the perception of point-light walkers revealed bistability (i.e., that a point-light walker is perceived as either facing the viewer or facing away from the viewer) and the presence of a perceptual bias (i.e., a tendency to perceive the figure as facing the viewer). Here, we study the generality of these phenomena by having observers indicate the global depth orientation of different ambiguous point-light actions. Results demonstrate bistability for all actions, but the presence of a preferred interpretation depends strongly on the performed action, showing that the process of depth assignment takes into account the movements the point-light figure performs. Two additional experiments, using unfamiliar movement patterns without strong semantic correlates, show that purely kinematic aspects of an action also strongly affect depth assignment. Together, the results reveal the perception of depth in point-light figures to be a flexible process involving both bottom-up and top-down components.
Affiliation(s)
- Jan Vanrie: Katholieke Universiteit Leuven, Leuven, Belgium
|
43
|
Ma Y, Paterson HM, Pollick FE. A motion capture library for the study of identity, gender, and emotion perception from biological motion. Behav Res Methods 2006; 38:134-41. [PMID: 16817522 DOI: 10.3758/bf03192758] [Citation(s) in RCA: 114] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/22/2023]
Abstract
We present the methods that were used in capturing a library of human movements for use in computer-animated displays of human movement. The library is an attempt to systematically tap into and represent the wide range of personal properties, such as identity, gender, and emotion, that are available in a person's movements. The movements of a total of 30 nonprofessional actors (15 of them female) were captured while they performed walking, knocking, lifting, and throwing actions, as well as their combination, in angry, happy, neutral, and sad affective styles. From the raw motion capture data, a library of 4,080 movements was obtained using techniques based on Character Studio (plug-ins for 3D Studio MAX, AutoDesk, Inc.), MATLAB (The MathWorks, Inc.), or a combination of the two. For the knocking, lifting, and throwing actions, 10 repetitions of the simple action unit were obtained for each affect; for the other actions, two longer movement recordings were obtained for each affect. We discuss the potential use of the library for computational and behavioral analyses of movement variability, for human character animation, and for studying how gender, emotion, and identity are encoded and decoded from human movement.
|
44
|
Perception and Synthesis of Biologically Plausible Motion: From Human Physiology to Virtual Reality. LECTURE NOTES IN COMPUTER SCIENCE 2006. [DOI: 10.1007/11678816_1] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/11/2023]
|
45
|
Abstract
Using functional magnetic resonance imaging and point light displays portraying six different human actions, we were able to show that several visual cortical regions, including human MT/V5 complex, posterior inferior temporal gyrus and superior temporal sulcus, are differentially active in the subtraction comparing biological motion to scrambled motion. Comparison of biological motion to three-dimensional rotation (of a human figure), articulated motion and translation suggests that human superior temporal sulcus activity reflects the action portrayed in the biological motion stimuli, whereas posterior inferior temporal gyrus responds to the figure and hMT/V5+ to the complex motion pattern present in biological motion stimuli. These results were confirmed with implied action stimuli.
Affiliation(s)
- H Peuskens: Laboratorium voor Neuro- en Psychofysiologie, K.U. Leuven, Campus Gasthuisberg O&N, Herestraat 49, B-3000 Leuven, Belgium
|
46
|
Vanrie J, Verfaillie K. Perception of biological motion: A stimulus set of human point-light actions. ACTA ACUST UNITED AC 2004; 36:625-9. [PMID: 15641407 DOI: 10.3758/bf03206542] [Citation(s) in RCA: 141] [Impact Index Per Article: 7.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
We present a set of stimuli representing human actions under point-light conditions, as seen from different viewpoints. The set contains 22 fairly short, well-delineated, and visually "loopable" actions. For each action, we provide movie files from five different viewpoints as well as a text file with the three spatial coordinates of the point lights, allowing researchers to construct customized versions. The full set of stimuli may be downloaded from www.psychonomic.org/archive/.
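Because the set ships the raw 3-D coordinates as text, customized displays are straightforward to rebuild. The loader below is only a sketch under an assumed layout (whitespace-separated x y z with one point light per row, a fixed 13 markers per frame, and a hypothetical file name); the actual archive format may differ:

# Sketch: rebuild a point-light display from a coordinate text file
# (file layout and marker count are assumptions, not the archive spec).
import numpy as np
import matplotlib.pyplot as plt

def load_point_lights(path, n_points=13):
    # Assumed layout: one point light per row, frames stored consecutively.
    data = np.loadtxt(path)
    n_frames = data.shape[0] // n_points
    return data[: n_frames * n_points].reshape(n_frames, n_points, -1)

def show_frame(frames, i=0):
    # Orthographic front view of a single frame.
    x, y = frames[i, :, 0], frames[i, :, 1]
    ax = plt.gca()
    ax.scatter(x, y, c="white")
    ax.set_facecolor("black")
    ax.set_aspect("equal")
    plt.show()

# frames = load_point_lights("point_light_action.txt")  # hypothetical file
# show_frame(frames, 0)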
Affiliation(s)
- Jan Vanrie: Katholieke Universiteit Leuven, Leuven, Belgium
|
47
|
Abstract
Ambiguity has long been used as a probe into visual processing. Here, we describe a new dynamic ambiguous figure--the chimeric point-light walker--which we hope will prove to be a useful tool for exploring biological motion. We begin by describing the construction of the stimulus and discussing the compelling finding that, when the figure is presented in a mask, observers consistently fail to notice anything odd about the walker, reporting instead that they are watching an unambiguous figure moving either to the left or to the right. Some observers report that the initial percept fluctuates, moving first to the left, then to the right, or vice versa; others always perceive a constant direction. All observers, when briefly shown the unmasked ambiguous figure, have no difficulty in perceiving the novel motion pattern once the mask is returned. These two findings--the initial report of unambiguous motion and the subsequent 'primed' perception of the ambiguity--are both consistent with an important role for top-down processing in biological motion. We conclude by suggesting several domains within the realm of biological-motion processing where this simple stimulus may prove useful.
Affiliation(s)
- Ian M Thornton: Max Planck Institute for Biological Cybernetics, Tübingen, Germany
|