1
Christensen JF, Fernández A, Smith RA, Michalareas G, Yazdi SHN, Farahi F, Schmidt EM, Bahmanian N, Roig G. EMOKINE: A software package and computational framework for scaling up the creation of highly controlled emotional full-body movement datasets. Behav Res Methods 2024; 56:7498-7542. [PMID: 38918315] [DOI: 10.3758/s13428-024-02433-0]
Abstract
EMOKINE is a software package and dataset-creation suite for emotional full-body movement research in experimental psychology, affective neuroscience, and computer vision. A computational framework, comprehensive instructions, a pilot dataset, observer ratings, and kinematic feature extraction code are provided to facilitate future dataset creation at scale. In addition, the EMOKINE framework outlines how complex sequences of movements may advance emotion research. Traditionally, such research has relied on emotional 'action'-based stimuli, like hand-waving or walking motions. Here, instead, a pilot dataset is provided with short dance choreographies, repeated several times by a dancer who expressed a different emotional intention at each repetition: anger, contentment, fear, joy, neutrality, and sadness. The dataset was simultaneously filmed professionally and recorded using XSENS® motion capture technology (17 sensors, 240 frames/second). Thirty-two statistics from 12 kinematic features were extracted offline, for the first time in one single dataset: speed, acceleration, angular speed, angular acceleration, limb contraction, distance to center of mass, quantity of motion, dimensionless jerk (integral), head angle (with regard to the vertical axis and to the back), and space (convex hull 2D and 3D). Average, median absolute deviation (MAD), and maximum value were computed as applicable. The EMOKINE software is applicable to other motion-capture systems and is openly available on the Zenodo Repository. Releases on GitHub include: (i) the code to extract the 32 statistics, (ii) a rigging plugin for Python for MVNX file conversion to Blender format (MVNX = output file of the XSENS® system), and (iii) a Python-script-powered custom software to assist with blurring faces; the latter two are under GPLv3 licenses.
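The summary statistics named in this abstract (average, MAD, and maximum over per-frame kinematic features) can be sketched for a generic marker trajectory; the array layout, sampling-rate handling, and function name below are illustrative assumptions, not the EMOKINE API:

```python
import numpy as np

def kinematic_stats(positions, fps=240.0):
    """Summarize one marker trajectory (n_frames x 3 array of x, y, z
    coordinates) with the three statistics named in the abstract:
    average, median absolute deviation (MAD), and maximum."""
    dt = 1.0 / fps
    velocity = np.gradient(positions, dt, axis=0)          # per-axis m/s
    speed = np.linalg.norm(velocity, axis=1)               # scalar speed
    accel = np.linalg.norm(np.gradient(velocity, dt, axis=0), axis=1)

    def summarize(x):
        return {"average": float(np.mean(x)),
                "MAD": float(np.median(np.abs(x - np.median(x)))),
                "max": float(np.max(x))}

    return {"speed": summarize(speed), "acceleration": summarize(accel)}
```

MAD is a robust alternative to the standard deviation, which makes it less sensitive to occasional noisy mocap frames.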
Affiliation(s)
- Julia F Christensen
- Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany
- Andrés Fernández
- Methods of Machine Learning, University of Tübingen, Tübingen, Germany
- International Max Planck Research School for Intelligent Systems, Tübingen, Germany
- Rebecca A Smith
- Department of Psychology, University of Glasgow, Glasgow, Scotland
- Georgios Michalareas
- Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany
- Eva-Madeleine Schmidt
- Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany
- Max Planck School of Cognition, Leipzig, Germany
- Center for Humans and Machines, Max Planck Institute for Human Development, Berlin, Germany
- Nasimeh Bahmanian
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany
- Department of Modern Languages, Goethe University, Frankfurt/M, Germany
- Gemma Roig
- Computer Science Department, Goethe University, Frankfurt/M, Germany
- The Hessian Center for Artificial Intelligence (hessian.AI), Darmstadt, Germany
2
Bidet-Ildei C, BenAhmed O, Bouidaine D, Francisco V, Decatoire A, Blandin Y, Pylouster J, Fernandez-Maloigne C. SmartDetector: Automatic and vision-based approach to point-light display generation for human action perception. Behav Res Methods 2024. [PMID: 39138735] [DOI: 10.3758/s13428-024-02478-1]
Abstract
Over the past four decades, point-light displays (PLDs) have been integrated into psychology and psychophysics, providing a valuable means to probe human perceptual skills. Leveraging the inherent kinematic information and controllable display parameters, researchers have used this technique to examine the mechanisms involved in learning and rehabilitation. However, classical PLD generation methods (e.g., motion capture) are difficult to apply for behavior analysis in real-world situations, such as patient care or sports activities. There is therefore a demand for automated and affordable tools that enable efficient, real-world-compatible generation of PLDs for psychological research. In this paper, we propose SmartDetector, a new artificial intelligence (AI)-based tool for automatic PLD creation from RGB videos. To evaluate humans' perceptual skills for processing PLDs built with SmartDetector, 126 participants were randomly assigned to recognition, discrimination, or detection tasks. Results demonstrated that, irrespective of the task, PLDs generated by SmartDetector yielded perceptual performance, in terms of accuracy and response times, comparable to literature findings. Moreover, to enhance usability and broaden accessibility, we developed an intuitive web interface for our method, making it available to a wider audience. The resulting application is available at https://plavimop.prd.fr/index.php/en/automatic-creation-pld . SmartDetector offers interesting possibilities for using PLDs in research and makes the technique more accessible for nonacademic applications.
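The core rendering step of any pose-estimation-based PLD pipeline can be sketched as follows; the pose estimator itself is omitted, and the function name and frame geometry are assumptions rather than SmartDetector's actual implementation:

```python
import numpy as np

def render_pld_frame(keypoints, height=480, width=640, radius=3):
    """Render one point-light frame: white dots on a black background
    at each joint location. `keypoints` is an (n_joints, 2) array of
    (x, y) pixel coordinates, e.g. taken from an off-the-shelf 2D pose
    estimator (not shown). x indexes columns, y indexes rows."""
    frame = np.zeros((height, width), dtype=np.uint8)
    ys, xs = np.mgrid[0:height, 0:width]
    for x, y in keypoints:
        # Paint a filled disc of the given radius around the joint.
        frame[(xs - x) ** 2 + (ys - y) ** 2 <= radius ** 2] = 255
    return frame
```

Applying this per video frame to the estimated joint positions yields the familiar dots-on-black stimulus, with no markers or mocap suit required.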
Affiliation(s)
- Christel Bidet-Ildei
- CNRS, Centre de Recherches sur la Cognition et l'Apprentissage CeRCA/MSHS, Université de Poitiers, Université de Tours, Bâtiment A5, 5, rue Théodore Lefebvre, TSA 21103, 86073, Poitiers Cedex 9, France
- Institut Universitaire de France (IUF), Paris, France
- Olfa BenAhmed
- XLIM Research Institute, UMR CNRS 7252, University of Poitiers, Poitiers, France
- Diaddin Bouidaine
- XLIM Research Institute, UMR CNRS 7252, University of Poitiers, Poitiers, France
- Victor Francisco
- CNRS, Centre de Recherches sur la Cognition et l'Apprentissage CeRCA/MSHS, Université de Poitiers, Université de Tours, Bâtiment A5, 5, rue Théodore Lefebvre, TSA 21103, 86073, Poitiers Cedex 9, France
- ISAE-ENSMA, CNRS, PPRIME, Université de Poitiers, Poitiers, France
- Melioris, Centre de Médecine Physique et de Réadaptation Fonctionnelle Le Grand Feu, Niort, France
- Arnaud Decatoire
- ISAE-ENSMA, CNRS, PPRIME, Université de Poitiers, Poitiers, France
- Yannick Blandin
- CNRS, Centre de Recherches sur la Cognition et l'Apprentissage CeRCA/MSHS, Université de Poitiers, Université de Tours, Bâtiment A5, 5, rue Théodore Lefebvre, TSA 21103, 86073, Poitiers Cedex 9, France
- Jean Pylouster
- CNRS, Centre de Recherches sur la Cognition et l'Apprentissage CeRCA/MSHS, Université de Poitiers, Université de Tours, Bâtiment A5, 5, rue Théodore Lefebvre, TSA 21103, 86073, Poitiers Cedex 9, France
3
Tully LM, Blendermann M, Fine JR, Zakskorn LN, Fritz M, Hamlett GE, Lamb ST, Moody AK, Ng J, Parakul N, Ritter BM, Rahim R, Yu G, Taylor SL. The SocialVidStim: a video database of positive and negative social evaluation stimuli for use in social cognitive neuroscience paradigms. Soc Cogn Affect Neurosci 2024; 19:nsae024. [PMID: 38597895] [PMCID: PMC11015894] [DOI: 10.1093/scan/nsae024]
Abstract
This paper describes the SocialVidStim, a database of video stimuli available to the scientific community depicting positive and negative social evaluative and neutral statements. The SocialVidStim comprises 53 diverse individuals reflecting the demographic makeup of the USA, ranging from 9 to 41 years old, saying 20-60 positive and 20-60 negative social evaluative statements (e.g. 'You are a very trustworthy/annoying person') and 20-60 neutral statements (e.g. 'The sky is blue'), totaling 5793 videos post-production. The SocialVidStim is designed for use in behavioral and functional magnetic resonance imaging paradigms, across developmental stages, in diverse populations. This study describes stimuli development and reports initial validity and reliability data on a subset of videos (N = 1890) depicting individuals aged 18-41 years. Raters perceive videos as expected: positive videos elicit positively valenced ratings, negative videos elicit negatively valenced ratings, and neutral videos are rated as neutral. Test-retest reliability data demonstrate intraclass correlations in the good-to-excellent range for negative and positive videos and in the moderate range for neutral videos. We also report small effects on valence and arousal that should be considered during stimuli selection, including the match between rater and actor sex and actor believability. The SocialVidStim is a resource for researchers, and we offer suggestions for using it in future research.
Affiliation(s)
- Laura M Tully
- Department of Psychiatry and Behavioral Sciences, University of California, Davis, Sacramento, CA 95817, USA
- Mary Blendermann
- Department of Psychiatry and Behavioral Sciences, University of California, Davis, Sacramento, CA 95817, USA
- Jeffrey R Fine
- Division of Biostatistics, Department of Public Health Sciences, University of California, Davis, Sacramento, CA 95817, USA
- Lauren N Zakskorn
- Department of Psychiatry and Behavioral Sciences, University of California, Davis, Sacramento, CA 95817, USA
- Matilda Fritz
- Department of Psychiatry and Behavioral Sciences, University of California, Davis, Sacramento, CA 95817, USA
- Gabriella E Hamlett
- Department of Psychiatry and Behavioral Sciences, University of California, Davis, Sacramento, CA 95817, USA
- Shannon T Lamb
- Department of Psychiatry and Behavioral Sciences, University of California, Davis, Sacramento, CA 95817, USA
- Anna K Moody
- Department of Psychiatry and Behavioral Sciences, University of California, Davis, Sacramento, CA 95817, USA
- Julenne Ng
- Department of Psychiatry and Behavioral Sciences, University of California, Davis, Sacramento, CA 95817, USA
- Narimes Parakul
- Department of Psychiatry and Behavioral Sciences, University of California, Davis, Sacramento, CA 95817, USA
- Bryn M Ritter
- Department of Psychiatry and Behavioral Sciences, University of California, Davis, Sacramento, CA 95817, USA
- Raisa Rahim
- Department of Psychiatry and Behavioral Sciences, University of California, Davis, Sacramento, CA 95817, USA
- Grace Yu
- Department of Psychiatry and Behavioral Sciences, University of California, Davis, Sacramento, CA 95817, USA
- Sandra L Taylor
- Division of Biostatistics, Department of Public Health Sciences, University of California, Davis, Sacramento, CA 95817, USA
4
Cortês AB, Duarte JV, Castelo-Branco M. Hysteresis reveals a happiness bias effect in dynamic emotion recognition from ambiguous biological motion. J Vis 2023; 23:5. [PMID: 37962533] [PMCID: PMC10653266] [DOI: 10.1167/jov.23.13.5]
Abstract
Given the nonlinear, dynamic nature of emotion recognition, it is believed to depend strongly on temporal context. This can be investigated through the phenomenon of hysteresis, a form of serial dependence entailed by continuous temporal stimulus trajectories. Under positive hysteresis, the percept remains stable in visual memory (persistence), while under negative hysteresis, it shifts earlier (adaptation) to the opposite interpretation. Here, we asked whether positive or negative hysteresis occurs in emotion recognition of inherently ambiguous biological motion, while also addressing the controversial debate over a negative versus positive emotional bias. Participants (n = 22) performed a psychophysical experiment in which they judged stimulus transitions between two emotions, happiness and sadness, from an actor database, and reported the perceived emotion across time, from one emotion to the opposite, as physical cues continuously changed. Our results reveal perceptual hysteresis in ambiguous emotion recognition, with positive hysteresis (visual persistence) predominating. However, negative hysteresis (adaptation/fatigue) was also observed, in particular in the direction from sadness to happiness. This demonstrates a positive (happiness) bias in emotion recognition from ambiguous biological motion. Finally, the interplay between positive and negative hysteresis suggests an underlying competition between visual persistence and adaptation mechanisms during ambiguous emotion recognition.
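A simplified way to quantify hysteresis from such transition trials is to compare, in each direction, where along the stimulus trajectory the report switches relative to the physical midpoint; this operationalization is an illustration, not the authors' exact analysis:

```python
def switch_index(responses):
    """First trial at which the reported percept departs from the
    starting interpretation, along a continuous stimulus trajectory."""
    first = responses[0]
    for i, r in enumerate(responses):
        if r != first:
            return i
    return len(responses)

def hysteresis(ascending, descending):
    """Compare switch points against the physical midpoint of the
    trajectory in both directions (e.g. happy-to-sad and sad-to-happy).
    Positive: the percept persisted past the midpoint in both runs
    (positive hysteresis / persistence). Negative: it switched early
    (negative hysteresis / adaptation). Both response lists must cover
    the same number of stimulus steps."""
    n = len(ascending)
    midpoint = n / 2
    return ((switch_index(ascending) - midpoint)
            + (switch_index(descending) - midpoint)) / n
```

A systematic asymmetry between the two directions, as reported for sadness-to-happiness, would show up as one direction contributing a negative term while the other remains positive.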
Affiliation(s)
- Ana Borges Cortês
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute of Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Coimbra, Portugal
- João Valente Duarte
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute of Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Coimbra, Portugal
- Faculty of Medicine, University of Coimbra, Coimbra, Portugal
- Miguel Castelo-Branco
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute of Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Coimbra, Portugal
- Faculty of Medicine, University of Coimbra, Coimbra, Portugal
5
Chen YC, Pollick F, Lu H. Aesthetic preferences for prototypical movements in human actions. Cogn Res Princ Implic 2023; 8:55. [PMID: 37589891] [PMCID: PMC10435434] [DOI: 10.1186/s41235-023-00510-0]
Abstract
A commonplace sight is seeing other people walk. Our visual system specializes in processing such actions. Notably, we are not only quick to recognize actions, but also quick to judge how elegantly (or not) people walk. What movements appear appealing, and why do we have such aesthetic experiences? Do aesthetic preferences for body movements arise simply from perceiving others' positive emotions? To answer these questions, we showed observers different point-light walkers who expressed neutral, happy, angry, or sad emotions through their movements and measured the observers' impressions of aesthetic appeal, emotion positivity, and naturalness of these movements. Three experiments were conducted. People showed consensus in aesthetic impressions even after controlling for emotion positivity, finding prototypical walks more aesthetically pleasing than atypical walks. This aesthetic prototype effect could be accounted for by a computational model in which walking actions are treated as a single category (as opposed to multiple emotion categories). The aesthetic impressions were affected both directly by the objective prototypicality of the movements, and indirectly through the mediation of perceived naturalness. These findings extend the boundary of category learning, and hint at possible functions for action aesthetics.
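The single-category prototype model described above can be sketched as distance-to-grand-mean scoring; the movement-feature representation is an illustrative assumption, not the authors' computational model:

```python
import numpy as np

def prototypicality(walker, all_walkers):
    """Single-category prototype score: negative Euclidean distance
    of one walker's movement-feature vector from the grand mean of
    all walkers (one shared 'walking' category, as opposed to
    separate per-emotion category means)."""
    prototype = all_walkers.mean(axis=0)
    return -float(np.linalg.norm(walker - prototype))
```

Under this model, the walker closest to the grand mean scores highest, matching the finding that prototypical walks are judged more aesthetically pleasing than atypical ones.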
Affiliation(s)
- Yi-Chia Chen
- Department of Psychology, University of California, Los Angeles, USA
- Frank Pollick
- School of Psychology and Neuroscience, University of Glasgow, Glasgow, UK
- Hongjing Lu
- Department of Psychology, University of California, Los Angeles, USA
- Department of Statistics, University of California, Los Angeles, USA
6
Zhang M, Yu L, Zhang K, Du B, Zhan B, Jia S, Chen S, Han F, Li Y, Liu S, Yi X, Liu S, Luo W. Construction and validation of the Dalian emotional movement open-source set (DEMOS). Behav Res Methods 2023; 55:2353-2366. [PMID: 35931937] [DOI: 10.3758/s13428-022-01887-4]
Abstract
Human body movements are important for emotion recognition and social communication and have received extensive attention from researchers. In this field, emotional biological motion stimuli, as depicted by point-light displays, are widely used. However, existing material libraries contain few stimuli and lack standardized indicators, which limits experimental design and conduct. Therefore, based on our prior kinematic dataset, we constructed the Dalian Emotional Movement Open-source Set (DEMOS) using computational modeling. The DEMOS has three views (i.e., frontal 0°, left 45°, and left 90°) and comprises a total of 2664 high-quality videos of emotional biological motion, each displaying happiness, sadness, anger, fear, disgust, or a neutral state. All stimuli were validated in terms of recognition accuracy, emotional intensity, and subjective movement. The objective movement for each expression was also calculated. The DEMOS can be downloaded for free from https://osf.io/83fst/ . To our knowledge, this is the largest multi-view emotional biological motion set based on the whole body. The DEMOS can be applied in many fields, including affective computing, social cognition, and psychiatry.
Affiliation(s)
- Mingming Zhang
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Lu Yu
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Keye Zhang
- School of Social and Behavioral Sciences, Nanjing University, Nanjing, 210023, China
- Bixuan Du
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Bin Zhan
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China
- Shuxin Jia
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Shaohua Chen
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Fengxu Han
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Yiwen Li
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Shuaicheng Liu
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Xi Yi
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
- Shenglan Liu
- School of Innovation and Entrepreneurship, Dalian University of Technology, Dalian, 116024, China
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024, China
- Wenbo Luo
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, China
7
Christensen JF, Bruhn L, Schmidt EM, Bahmanian N, Yazdi SHN, Farahi F, Sancho-Escanero L, Menninghaus W. A 5-emotions stimuli set for emotion perception research with full-body dance movements. Sci Rep 2023; 13:8757. [PMID: 37253770] [DOI: 10.1038/s41598-023-33656-4]
Abstract
Ekman famously contended that there are different channels of emotional expression (face, voice, body) and that emotion recognition ability confers an adaptive advantage to the individual. Yet, still today, much emotion perception research focuses on emotion recognition from the face, and few validated emotionally expressive full-body stimulus sets are available. Based on research on emotional speech perception, we created a new, highly controlled full-body stimulus set. We used the same-sequence approach rather than emotional actions (e.g., jumping for joy, recoiling in fear): one professional dancer danced 30 sequences of (dance) movements five times each, expressing joy, anger, fear, sadness, or a neutral state, one at each repetition. We outline the creation of a total of 150 such 6-s-long video stimuli, which show the dancer as a white silhouette on a black background. Ratings from 90 participants (emotion recognition, aesthetic judgment) showed that the intended emotion was recognized above chance (chance: 20%; joy: 45%, anger: 48%, fear: 37%, sadness: 50%, neutral state: 51%), and that aesthetic judgment was sensitive to the intended emotion (beauty ratings: joy > anger > fear > neutral state, and sad > fear > neutral state). The stimulus set, normative values, and code are available for download.
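Whether a recognition rate such as 45% exceeds the 20% chance level can be checked with an exact one-sided binomial test; the trial counts below are illustrative, not the study's:

```python
from math import comb

def binom_tail(k, n, p):
    """Exact one-sided tail P(X >= k) for X ~ Binomial(n, p):
    the probability of at least k correct responses out of n under
    chance-level guessing with success probability p."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(k, n + 1))
```

With five response options, p = 0.2; for instance, 45 correct out of a hypothetical 100 trials yields a vanishingly small tail probability, so 45% recognition would be far above chance.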
Affiliation(s)
- Julia F Christensen
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany
- Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany
- Laura Bruhn
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany
- Eva-Madeleine Schmidt
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany
- Max Planck School of Cognition, Leipzig, Germany
- Nasimeh Bahmanian
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany
- Department of Modern Languages, Goethe University, Frankfurt, Germany
- Winfried Menninghaus
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany
8
Bidet-Ildei C, Francisco V, Decatoire A, Pylouster J, Blandin Y. PLAViMoP database: A new continuously assessed and collaborative 3D point-light display dataset. Behav Res Methods 2023; 55:694-715. [PMID: 35441360] [DOI: 10.3758/s13428-022-01850-3]
Abstract
It was more than 45 years ago that Gunnar Johansson invented the point-light display technique, showing for the first time that kinematics is crucial for action recognition and that humans are very sensitive to their conspecifics' movements. As a result, many of today's researchers use point-light displays to better understand the mechanisms behind this recognition ability. In this paper, we propose PLAViMoP, a new database of 3D point-light displays representing everyday human actions (global and fine-motor control movements), sports movements, facial expressions, interactions, and robotic movements. Access to the database is free, at https://plavimop.prd.fr/en/motions , and it incorporates a search engine to facilitate action retrieval. We describe the construction, functioning, and assessment of the PLAViMoP database. Each sequence was analyzed according to four parameters: type of movement, movement label, sex of the actor, and age of the actor. We provide both the mean scores for each assessment of each point-light display and the comparisons between the different categories of sequences. Our results are discussed in light of the literature and the suitability of our stimuli for research and applications.
Affiliation(s)
- Christel Bidet-Ildei
- Centre de Recherches sur la Cognition et l'Apprentissage (UMR CNRS 7295), Université de Poitiers, Université de Tours, Centre National de la Recherche Scientifique, Poitiers, France
- MSHS, Bâtiment A5, 5 rue Théodore Lefebvre TSA 21103, 86073, Poitiers, Cedex 9, France
- Victor Francisco
- Centre de Recherches sur la Cognition et l'Apprentissage (UMR CNRS 7295), Université de Poitiers, Université de Tours, Centre National de la Recherche Scientifique, Poitiers, France
- Arnaud Decatoire
- Institut PPRIME (UPR CNRS 3346), Université de Poitiers, Centre National de la Recherche Scientifique, Poitiers, France
- Jean Pylouster
- Centre de Recherches sur la Cognition et l'Apprentissage (UMR CNRS 7295), Université de Poitiers, Université de Tours, Centre National de la Recherche Scientifique, Poitiers, France
- Yannick Blandin
- Centre de Recherches sur la Cognition et l'Apprentissage (UMR CNRS 7295), Université de Poitiers, Université de Tours, Centre National de la Recherche Scientifique, Poitiers, France
9
Smith RA, Cross ES. The McNorm library: creating and validating a new library of emotionally expressive whole body dance movements. Psychol Res 2023; 87:484-508. [PMID: 35385989] [PMCID: PMC8985749] [DOI: 10.1007/s00426-022-01669-9]
Abstract
The ability to exchange affective cues with others plays a key role in our ability to create and maintain meaningful social relationships. We express our emotions through a variety of socially salient cues, including facial expressions, the voice, and body movement. While significant advances have been made in our understanding of verbal and facial communication, to date, understanding of the role played by human body movement in our social interactions remains incomplete. To this end, here we describe the creation and validation of a new set of emotionally expressive whole-body dance movement stimuli, named the Motion Capture Norming (McNorm) Library, which was designed to reconcile a number of limitations associated with previous movement stimuli. This library comprises a series of point-light representations of a dancer's movements, which were performed to communicate to observers neutrality, happiness, sadness, anger, and fear. Based on results from two validation experiments, participants could reliably discriminate the intended emotion expressed in the clips in this stimulus set, with accuracy rates up to 60% (chance = 20%). We further explored the impact of dance experience and trait empathy on emotion recognition and found that neither significantly impacted emotion discrimination. As all materials for presenting and analysing this movement library are openly available, we hope this resource will aid other researchers in further exploration of affective communication expressed by human bodily movement.
Affiliation(s)
- Rebecca A. Smith
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland
- Emily S. Cross
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland
- Department of Cognitive Science, Macquarie University, Sydney, Australia
10
Pavlova MA, Romagnano V, Kubon J, Isernia S, Fallgatter AJ, Sokolov AN. Ties between reading faces, bodies, eyes, and autistic traits. Front Neurosci 2022; 16:997263. [PMID: 36248653] [PMCID: PMC9554539] [DOI: 10.3389/fnins.2022.997263]
Abstract
While reading faces covered with masks during the COVID-19 pandemic, efficient social interaction requires combining information from different sources, such as the eyes (visible even when the rest of the face is hidden by a mask) and bodies. This may be challenging for individuals with neuropsychiatric conditions, in particular autism spectrum disorders. Here we examined whether the reading of dynamic faces, bodies, and eyes is tied in a gender-specific way, and how these capabilities relate to the expression of autistic traits. Females and males completed a task with point-light faces along with a task with point-light body locomotion portraying different emotional expressions, and had to infer the emotional content of the displays. In addition, participants were administered a modified Reading the Mind in the Eyes Test and the Autism Spectrum Quotient questionnaire. The findings show that only in females is inferring emotions from dynamic bodies and faces firmly linked, whereas in males, reading in the eyes is knotted with face reading. Strikingly, in neurotypical males only, accuracy of face, body, and eyes reading was negatively tied with autistic traits. The outcome points to gender-specific modes in social cognition: females rely on dynamic cues when reading faces and bodies, whereas males most likely trust configural information. The findings are of value for the examination of face and body language reading in neuropsychiatric conditions, in particular autism, most of which are gender/sex-specific. This work suggests that if male individuals with autistic traits experience difficulties in reading masked faces, these deficits are unlikely to be compensated by reading (even dynamic) bodies and faces. By contrast, in females, reading covered faces as well as reading the language of dynamic bodies and faces is not necessarily connected to autistic traits, preventing them from paying high costs for maladaptive social interaction.
Collapse
Affiliation(s)
- Marina A. Pavlova
- Department of Psychiatry and Psychotherapy, Tübingen Center for Mental Health (TüCMH), Medical School and University Hospital, Eberhard Karls University of Tübingen, Tübingen, Germany
- *Correspondence: Marina A. Pavlova
- Valentina Romagnano
- Department of Psychiatry and Psychotherapy, Tübingen Center for Mental Health (TüCMH), Medical School and University Hospital, Eberhard Karls University of Tübingen, Tübingen, Germany
- Julian Kubon
- Department of Psychiatry and Psychotherapy, Tübingen Center for Mental Health (TüCMH), Medical School and University Hospital, Eberhard Karls University of Tübingen, Tübingen, Germany
- Sara Isernia
- IRCCS Fondazione Don Carlo Gnocchi ONLUS, Milan, Italy
- Andreas J. Fallgatter
- Department of Psychiatry and Psychotherapy, Tübingen Center for Mental Health (TüCMH), Medical School and University Hospital, Eberhard Karls University of Tübingen, Tübingen, Germany
- Alexander N. Sokolov
- Department of Psychiatry and Psychotherapy, Tübingen Center for Mental Health (TüCMH), Medical School and University Hospital, Eberhard Karls University of Tübingen, Tübingen, Germany
11
Wang C, Zhou Y, Li C, Tian W, He Y, Fang P, Li Y, Yuan H, Li X, Li B, Luo X, Zhang Y, Liu X, Wu S. Working Memory Capacity of Biological Motion's Basic Unit: Decomposing Biological Motion From the Perspective of Systematic Anatomy. Front Psychol 2022; 13:830555. [PMID: 35391972 PMCID: PMC8980279 DOI: 10.3389/fpsyg.2022.830555] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2021] [Accepted: 02/24/2022] [Indexed: 11/13/2022] Open
Abstract
Many studies have shown that about three biological motions (BMs) can be maintained in working memory. However, no study has yet analyzed the difficulty of the experimental materials used, which partially affects the ecological validity of the experimental results. We decompose BM from the perspective of systematic anatomy and thoroughly explore the factors influencing the difficulty of BMs, including presentation duration, the joints and limbs executing the motions, the type of articulation interference task, and the number of joints and planes involved in the BM. We apply the change detection paradigm, supplemented by an articulation interference task, to measure participants' BM working memory capacity (WMC). Findings show the following: the shorter the presentation duration, the less participants remembered; the more the wrist moved, the less accurate their memory was; repeating verbs suppressed verbal encoding better than repeating numerals; the more complex the BM, the less participants remembered; and whether the action was executed by the dominant (handed) limbs did not affect the WMC. These results indicate that many factors can be used to adjust BM memory load. These factors can help sports psychology professionals better evaluate the difficulty of BMs, and can also partially explain the differences in estimates of BM WMC across previous studies.
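Change detection paradigms like the one above usually convert hit and false-alarm rates into a capacity estimate; the abstract does not state which estimator the authors used, so the following is an illustrative sketch of one common choice, Cowan's K, not their exact method.

```python
def cowan_k(n_items, hit_rate, fa_rate):
    """Cowan's K: working memory capacity estimate for single-probe
    change detection with n_items to remember. K = N * (H - FA)."""
    return n_items * (hit_rate - fa_rate)

# e.g., 4 biological motions shown, 90% hits, 10% false alarms
capacity = cowan_k(4, 0.90, 0.10)   # roughly 3.2 items
```

An estimate near 3 is consistent with the "about three BMs" figure the abstract cites.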
Affiliation(s)
- Chaoxian Wang
- Department of Military Medical Psychology, Air Force Medical University, Xi'an, China
- Yue Zhou
- Department of Military Medical Psychology, Air Force Medical University, Xi'an, China
- Congchong Li
- Department of Military Medical Psychology, Air Force Medical University, Xi'an, China
- Wenqing Tian
- Department of Military Medical Psychology, Air Force Medical University, Xi'an, China
- Yang He
- Department of Military Medical Psychology, Air Force Medical University, Xi'an, China
- Peng Fang
- Department of Military Medical Psychology, Air Force Medical University, Xi'an, China
- Yijun Li
- Department of Military Medical Psychology, Air Force Medical University, Xi'an, China
- Huiling Yuan
- Department of Military Medical Psychology, Air Force Medical University, Xi'an, China
- Xiuxiu Li
- School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, China
- Bin Li
- School of Information Technology, Northwest University, Xi'an, China
- Xuelin Luo
- School of Martial Arts, Xi'an Physical Education University, Xi'an, China
- Yun Zhang
- School of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an, China
- Xufeng Liu
- Department of Military Medical Psychology, Air Force Medical University, Xi'an, China
- Shengjun Wu
- Department of Military Medical Psychology, Air Force Medical University, Xi'an, China
12
Anderson KA. Moral distress in The Last of Us: Moral agency, character realism, and navigating fixed gaming narratives. COMPUTERS IN HUMAN BEHAVIOR REPORTS 2022. [DOI: 10.1016/j.chbr.2021.100163] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022] Open
13
Diconne K, Kountouriotis GK, Paltoglou AE, Parker A, Hostler TJ. Presenting KAPODI – The Searchable Database of Emotional Stimuli Sets. EMOTION REVIEW 2022. [DOI: 10.1177/17540739211072803] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/01/2023]
Abstract
Emotional stimuli such as images, words, or video clips are often used in studies researching emotion. New sets are continuously being published, creating an immense number of available sets and complicating the task for researchers looking for suitable stimuli. This paper presents KAPODI, a database of emotional stimuli sets that are freely available or available upon request. Over 45 aspects, including over 25 key set characteristics, have been extracted and listed for each set. The database facilitates finding and comparing individual sets. It currently contains sets published between 1963 and 2020. A searchable online version ( https://airtable.com/shrnVoUZrwu6riP9b ) allows users to select specific set characteristics and find matching sets accordingly, as well as to add newly published sets.
Affiliation(s)
- Kathrin Diconne
- Department of Psychology, Manchester Metropolitan University
- Andrew Parker
- Department of Psychology, Manchester Metropolitan University
14
Randhavane T, Bera A, Kubin E, Gray K, Manocha D. Modeling Data-Driven Dominance Traits for Virtual Characters Using Gait Analysis. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2021; 27:2967-2979. [PMID: 31751243 DOI: 10.1109/tvcg.2019.2953063] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
We present a data-driven algorithm for generating gaits of virtual characters with varying dominance traits. Our formulation utilizes a user study to establish a data-driven dominance mapping between gaits and dominance labels. We use our dominance mapping to generate walking gaits for virtual characters that exhibit a variety of dominance traits while interacting with the user. Furthermore, we extract gait features based on known criteria in the visual perception and psychology literature that can be used to identify the dominance level of any walking gait. We validate our mapping and the perceived dominance traits with a second user study in an immersive virtual environment. Our gait dominance classification algorithm can classify the dominance traits of gaits with ~73 percent accuracy. We also present an application of our approach that simulates interpersonal relationships between virtual characters. To the best of our knowledge, ours is the first practical approach to classifying gait dominance and generating dominance traits in virtual characters.
15
Abstract
In this paper, we propose a new data-driven framework for 3D hand and full-body motion emotion transfer. Specifically, we formulate the motion synthesis task as an image-to-image translation problem. By presenting a motion sequence as an image representation, the emotion can be transferred by our framework using StarGAN. To evaluate our proposed method’s effectiveness, we first conducted a user study to validate the perceived emotion from the captured and synthesized hand motions. We further evaluate the synthesized hand and full body motions qualitatively and quantitatively. Experimental results show that our synthesized motions are comparable to the captured motions and those created by an existing method in terms of naturalness and visual quality.
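The abstract's "image representation" of a motion sequence is not specified in detail here; one common encoding, sketched below purely as an assumption, maps frames to image columns, joints to rows, and the three coordinate axes to RGB channels, so that a standard image-to-image network such as StarGAN can operate on it.

```python
import numpy as np

def motion_to_image(motion):
    """Encode a motion clip of shape (T, J, 3) -- T frames, J joints,
    xyz coordinates -- as an 8-bit 'image' of shape (J, T, 3): joints
    as rows, frames as columns, coordinate axes as color channels."""
    lo = motion.min(axis=(0, 1), keepdims=True)          # per-axis minimum
    hi = motion.max(axis=(0, 1), keepdims=True)          # per-axis maximum
    norm = (motion - lo) / np.maximum(hi - lo, 1e-8)     # scale to [0, 1]
    img = np.round(norm * 255).astype(np.uint8)          # quantize to 8 bits
    return np.transpose(img, (1, 0, 2))                  # (T, J, 3) -> (J, T, 3)
```

The per-axis minima and maxima would need to be stored alongside the image to invert the encoding back into joint coordinates after translation.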
16
Zhang M, Yu L, Zhang K, Du B, Zhan B, Chen S, Jiang X, Guo S, Zhao J, Wang Y, Wang B, Liu S, Luo W. Kinematic dataset of actors expressing emotions. Sci Data 2020; 7:292. [PMID: 32901035 PMCID: PMC7478954 DOI: 10.1038/s41597-020-00635-7] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2020] [Accepted: 08/07/2020] [Indexed: 11/09/2022] Open
Abstract
Human body movements can convey a variety of emotions and even create advantages in some special life situations. However, how emotion is encoded in body movements has remained unclear. One reason is the lack of a public human body kinematic dataset covering the expression of various emotions. We therefore aimed to produce a comprehensive dataset to assist in recognizing cues from all parts of the body that indicate six basic emotions (happiness, sadness, anger, fear, disgust, surprise) and neutral expression. The present dataset was created using a portable wireless motion capture system. Twenty-two semi-professional actors (half male) completed performances according to standardized guidance and preferred daily events. A total of 1402 recordings at 125 Hz were collected, consisting of the position and rotation data of 72 anatomical nodes. To our knowledge, this is currently the largest emotional kinematic dataset of the human body. We hope this dataset will contribute to multiple fields of research and practice, including social neuroscience, psychiatry, computer vision, and biometric and information forensics.
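Kinematic features such as speed and acceleration can be derived from position recordings like these by finite differences. The sketch below assumes a (frames × 3) position array for a single anatomical node at the dataset's 125 Hz sampling rate; it is an illustrative computation, not the authors' extraction code.

```python
import numpy as np

def speed_and_acceleration(positions, fs=125.0):
    """Finite-difference speed and acceleration magnitudes from sampled
    3D positions of one node: `positions` has shape (T, 3), sampled at
    fs Hz. Returns two length-T arrays."""
    dt = 1.0 / fs
    vel = np.gradient(positions, dt, axis=0)    # per-axis velocity
    acc = np.gradient(vel, dt, axis=0)          # per-axis acceleration
    return np.linalg.norm(vel, axis=1), np.linalg.norm(acc, axis=1)
```

Real recordings would typically be low-pass filtered first, since differentiation amplifies marker noise.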
Affiliation(s)
- Mingming Zhang
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, Liaoning, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, Liaoning, China
- Lu Yu
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, Liaoning, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, Liaoning, China
- Keye Zhang
- School of Social and Behavioral Sciences, Nanjing University, Nanjing, 210023, Jiangsu, China
- Bixuan Du
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, Liaoning, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, Liaoning, China
- Bin Zhan
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, Liaoning, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, Liaoning, China
- Shaohua Chen
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, Liaoning, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, Liaoning, China
- Xiuhao Jiang
- School of Innovation and Entrepreneurship, Dalian University of Technology, Dalian, 116024, Liaoning, China
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024, Liaoning, China
- Shuai Guo
- School of Innovation and Entrepreneurship, Dalian University of Technology, Dalian, 116024, Liaoning, China
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024, Liaoning, China
- Jiafeng Zhao
- School of Innovation and Entrepreneurship, Dalian University of Technology, Dalian, 116024, Liaoning, China
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024, Liaoning, China
- Yang Wang
- School of Innovation and Entrepreneurship, Dalian University of Technology, Dalian, 116024, Liaoning, China
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024, Liaoning, China
- Bin Wang
- School of Innovation and Entrepreneurship, Dalian University of Technology, Dalian, 116024, Liaoning, China
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024, Liaoning, China
- Shenglan Liu
- School of Innovation and Entrepreneurship, Dalian University of Technology, Dalian, 116024, Liaoning, China
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024, Liaoning, China
- Wenbo Luo
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, Liaoning, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, 116029, Liaoning, China
17
Golestani N, Moghaddam M. Human activity recognition using magnetic induction-based motion signals and deep recurrent neural networks. Nat Commun 2020; 11:1551. [PMID: 32214095 PMCID: PMC7096402 DOI: 10.1038/s41467-020-15086-2] [Citation(s) in RCA: 30] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2019] [Accepted: 02/17/2020] [Indexed: 12/02/2022] Open
Abstract
Recognizing human physical activities using wireless sensor networks has attracted significant research interest due to its broad range of applications, such as healthcare, rehabilitation, athletics, and senior monitoring. There are critical challenges inherent in designing a sensor-based activity recognition system operating in and around a lossy medium such as the human body: achieving a trade-off among power consumption, cost, computational complexity, and accuracy. We introduce an innovative wireless system based on magnetic induction for human activity recognition to tackle these challenges and constraints. The magnetic induction system is integrated with machine learning techniques to detect a wide range of human motions. This approach is successfully evaluated using synthesized datasets, laboratory measurements, and deep recurrent neural networks.
Affiliation(s)
- Negar Golestani
- Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, 90089, USA
- Mahta Moghaddam
- Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, 90089, USA
18
Deligianni F, Guo Y, Yang GZ. From Emotions to Mood Disorders: A Survey on Gait Analysis Methodology. IEEE J Biomed Health Inform 2019; 23:2302-2316. [PMID: 31502995 DOI: 10.1109/jbhi.2019.2938111] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Mood disorders affect more than 300 million people worldwide and can cause devastating consequences. Elderly people and patients with neurological conditions are particularly susceptible to depression. Gait and body movements can be affected by mood disorders, and thus they can be used as a surrogate sign, as well as an objective index, for pervasive monitoring of emotion and mood disorders in daily life. Here we review evidence demonstrating the relationship between gait, emotions, and mood disorders, highlighting the potential of a multimodal approach that couples gait data with physiological signals and home-based monitoring for early detection and management of mood disorders. This could enhance self-awareness, enable the development of objective biomarkers that identify high-risk subjects, and promote subject-specific treatment.
19
The Relationship between Biological Motion-Based Visual Consciousness and Attention: An Electroencephalograph Study. Neuroscience 2019; 415:230-240. [PMID: 31301367 DOI: 10.1016/j.neuroscience.2019.06.040] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2019] [Revised: 06/22/2019] [Accepted: 06/27/2019] [Indexed: 11/24/2022]
Abstract
Understanding and predicting the intentions of others through limb movements is vital to social interaction. The processing of biological motion is distinct from the processing of motion of inanimate objects. At present, there is controversy over whether visual consciousness of biological motion is regulated by visual attention, and the neural mechanisms involved in biological motion-related visual awareness are not known. In the current study, we explored the relationship between visual awareness (aware vs. unaware) of a point-light walker and biological motion-based attention, manipulated via the congruence (congruent vs. incongruent) between the direction of a pre-cue and that of the biological motion. The neural mechanisms involved in processing the stimuli were explored through electroencephalography. Both early (50-150 ms, 100-200 ms, and 174-226 ms after target presentation) and late (350-550 ms after target presentation) awareness-related neural processing was observed during the biological motion-based congruency task. Early processing was localized to occipital-parietal regions, such as the left postcentral gyrus, the left middle occipital gyrus, and the right precentral gyrus. In the 174-226 ms window, activity in the occipital region was gradually replaced by activity in the parietal and frontal regions. Late processing was localized to frontal-parietal regions, such as the right dorsal superior frontal gyrus, the left medial superior frontal gyrus, and the occipito-temporal regions. Congruency-related processing occurred in the 246-260 ms window and was localized to the right superior occipital gyrus. In summary, due to its complexity, biological motion awareness has a unique neural basis.
20
Bekemeier HHH, Maycock JW, Ritter HJJ. What Does a Hand-Over Tell? Individuality of Short Motion Sequences. Biomimetics (Basel) 2019; 4:biomimetics4030055. [PMID: 31394826 PMCID: PMC6784304 DOI: 10.3390/biomimetics4030055] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2019] [Revised: 08/02/2019] [Accepted: 08/05/2019] [Indexed: 11/16/2022] Open
Abstract
How much information about identity and other individual participant characteristics is revealed by relatively short spatio-temporal motion trajectories of a person? We study this question by selecting a set of individual participant characteristics and analysing motion-captured trajectories of an exemplary class of familiar movements, namely the handover of an object to another person. The experiment is performed with different participants under different, predefined conditions. A selection of participant characteristics, such as the Big Five personality traits, gender, weight, or sportiness, is assessed, and we analyse the impact of the three factor groups "participant identity", "participant characteristics", and "experimental conditions" on the observed hand trajectories. The participants' movements are recorded via optical marker-based hand motion capture: one participant, the giver, hands over an object to the receiver, and the resulting time courses of three-dimensional marker positions are analysed. Multidimensional scaling is used to project trajectories to points in a dimension-reduced feature space, and supervised learning is also applied. We find that "participant identity" has the highest correlation with the trajectories, with the factor group "experimental conditions" ranking second. On the other hand, no correlation is found between the "participant characteristics" and the hand trajectory features.
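Multidimensional scaling of trajectories, as used above, can be sketched with classical MDS applied to a precomputed matrix of pairwise trajectory distances; the distance measure and target dimensionality below are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed n items with pairwise distance
    matrix D (n x n) into k dimensions, preserving distances as well as
    a Euclidean embedding allows."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)             # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:k]           # take the top-k eigenpairs
    scale = np.sqrt(np.maximum(vals[idx], 0.0))
    return vecs[:, idx] * scale                # (n, k) coordinates
```

For trajectory data, D might hold, e.g., mean point-wise Euclidean distances between time-aligned marker tracks; the resulting low-dimensional points can then be fed to a supervised classifier.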
Affiliation(s)
- Helge J J Ritter
- Neuroinformatics Group, Bielefeld University, 33615 Bielefeld, Germany
21
Sievers B, Lee C, Haslett W, Wheatley T. A multi-sensory code for emotional arousal. Proc Biol Sci 2019; 286:20190513. [PMID: 31288695 DOI: 10.1098/rspb.2019.0513] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
People express emotion using their voice, face and movement, as well as through abstract forms as in art, architecture and music. The structure of these expressions often seems intuitively linked to their meaning: romantic poetry is written in flowery curlicues, while the logos of death metal bands use spiky script. Here, we show that these associations are universally understood because they are signalled using a multi-sensory code for emotional arousal. Specifically, variation in the central tendency of the frequency spectrum of a stimulus, its spectral centroid, is used by signal senders to express emotional arousal, and by signal receivers to make emotional arousal judgements. We show that this code is used across sounds, shapes, speech and human body movements, providing a strong multi-sensory signal that can be used to efficiently estimate an agent's level of emotional arousal.
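The spectral centroid named above is simply the amplitude-weighted mean frequency of a stimulus's magnitude spectrum; a minimal sketch for a sampled signal:

```python
import numpy as np

def spectral_centroid(signal, sample_rate):
    """Amplitude-weighted mean frequency (Hz) of the magnitude spectrum."""
    mags = np.abs(np.fft.rfft(signal))                    # magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return float(np.sum(freqs * mags) / np.sum(mags))     # weighted mean
```

A pure 440 Hz tone yields a centroid near 440 Hz; energy shifted toward higher frequencies (the "spikier" signals the abstract associates with high arousal) raises the value.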
Affiliation(s)
- Beau Sievers
- Department of Psychology, Harvard University, Cambridge, MA 02138, USA
- Caitlyn Lee
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA
- William Haslett
- Department of Biomedical Data Science, Geisel School of Medicine at Dartmouth, Hanover, NH 03755, USA
- Thalia Wheatley
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA
22
Lindor ER, van Boxtel JJ, Rinehart NJ, Fielding J. Motor difficulties are associated with impaired perception of interactive human movement in autism spectrum disorder: A pilot study. J Clin Exp Neuropsychol 2019; 41:856-874. [DOI: 10.1080/13803395.2019.1634181] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Affiliation(s)
- Ebony R. Lindor
- School of Psychological Sciences and Monash Institute of Cognitive and Clinical Neurosciences, Monash University, Victoria, Australia
- Deakin Child Study Centre, School of Psychology, Faculty of Health, Deakin University Geelong, Victoria, Australia
- Jeroen J.A. van Boxtel
- School of Psychological Sciences and Monash Institute of Cognitive and Clinical Neurosciences, Monash University, Victoria, Australia
- School of Psychology, Faculty of Health, University of Canberra, Canberra, Australia
- Nicole J. Rinehart
- School of Psychological Sciences and Monash Institute of Cognitive and Clinical Neurosciences, Monash University, Victoria, Australia
- Deakin Child Study Centre, School of Psychology, Faculty of Health, Deakin University Geelong, Victoria, Australia
- Joanne Fielding
- School of Psychological Sciences and Monash Institute of Cognitive and Clinical Neurosciences, Monash University, Victoria, Australia
- Department of Neuroscience, Central Clinical School, Monash University, Melbourne, Australia
23
Ida H, Fukuhara K, Ishii M, Inoue T. Anticipatory judgements associated with vision of an opponent’s end-effector: An approach by motion perturbation and spatial occlusion. Q J Exp Psychol (Hove) 2018; 72:1131-1140. [DOI: 10.1177/1747021818782419] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
This study aimed to determine how visual information from an end-effector (racket) and an intermediate extremity (arm) of a tennis server contributes to the receiver's anticipatory judgement of ball direction. In all, 15 experienced tennis players and 15 novice counterparts viewed spatially occluded computer graphics animations of a tennis serve (no-occlusion, racket-occlusion, and body-occlusion) and made anticipatory judgements of ball direction on a visual analogue scale (VAS). The serve motions were generated by a simulation technique that computationally perturbs the rotation speed of a selected racket-arm joint (forearm pronation or elbow extension) in a captured serve motion. The results suggested that the anticipatory judgements were monotonically attuned to the perturbation rate of the forearm pronation speed, except under the racket-occlusion conditions. Although such attunement was not observed in the elbow perturbation conditions, the correlation analysis indicated that the residual information in the spatially occluded models had an effect similar to that of the no-occlusion model within the individual experienced participants. The findings support the notion that the end-effector (racket) provides deterministic cues for anticipation, and imply that players are able to benefit from the relative motion of an intermediate extremity (elbow extension).
Affiliation(s)
- Hirofumi Ida
- Department of Sports and Health Management, Jobu University, Isesaki, Japan
- Kazunobu Fukuhara
- Department of Health Promotion Science, Tokyo Metropolitan University, Hachioji, Japan
- Motonobu Ishii
- Department of Human System Science, Tokyo Institute of Technology, Tokyo, Japan
- Tetsuri Inoue
- Department of Network and Communication, Kanagawa Institute of Technology, Atsugi, Japan
24
Kawai Y, Nagai Y, Asada M. Prediction Error in the PMd As a Criterion for Biological Motion Discrimination: A Computational Account. IEEE Trans Cogn Dev Syst 2018. [DOI: 10.1109/tcds.2017.2668446] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
25
Christensen JF, Cela-Conde CJ, Gomila A. Not all about sex: neural and biobehavioral functions of human dance. Ann N Y Acad Sci 2017; 1400:8-32. [PMID: 28787539 DOI: 10.1111/nyas.13420] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2017] [Revised: 04/23/2017] [Accepted: 05/31/2017] [Indexed: 12/15/2022]
Abstract
This paper provides an integrative review of neuroscientific and biobehavioral evidence about the effects of dance on the individual across cultural differences. Dance moves us, and many derive aesthetic pleasure from it. However, in addition, and beyond aesthetics, we propose that dance has noteworthy, deeper neurobiological effects. We first summarize evidence that indirectly illustrates the centrality of dance to human life, drawn from archaeology, comparative psychology, developmental psychology, and cross-cultural psychology. Second, we review empirical evidence for six neural and biobehavioral functions of dance: (1) attentional focus/flow, (2) basic emotional experiences, (3) imagery, (4) communication, (5) self-intimation, and (6) social cohesion. We discuss the reviewed evidence in relation to current debates in the field of empirical enquiry into the functions of human dance, questioning the positions that dance is (1) just for pleasure, (2) all about sex, (3) just for mood management and well-being, and (4) for experts only. As this is a young field, the evidence is still piecemeal and inconclusive. This review aims to take a step toward the systematization of an emerging avenue of research: a neuro- and biobehavioral science of dance.
Affiliation(s)
- Julia F Christensen
- Cognitive Neuroscience Research Unit, Department of Psychology, School of Arts and Social Sciences, City, University of London, London, United Kingdom
- Autism Research Group, Department of Psychology, City, University of London, London, United Kingdom
- Camilo José Cela-Conde
- Department of Ecology and Evolutionary Biology, University of California Irvine, Irvine, California
- Antoni Gomila
- Department of Psychology, University of the Balearic Islands, Palma, Spain
26
Abstract
Biological motion (BM) is the movement of animate entities, which conveys rich social information. To obtain pure BM, researchers nowadays predominantly use point-light displays (PLDs), which depict BM through a set of light points (e.g., 12 points) placed at distinct joints of a moving human body. Most prevalent BM stimuli are created by state-of-the-art motion capture systems. Although these stimuli are highly precise, the motion capture system is expensive and bulky, and its process of constructing a PLD-based BM is time-consuming and complex. These factors impede the investigation of BM mechanisms. In this study, we propose a free Kinect-based biological motion capture (KBC) toolbox based on the Kinect Sensor 2.0 in C++. The KBC toolbox aims to help researchers acquire PLD-based BM in an easy, low-cost, and user-friendly way. We conducted three experiments to examine whether KBC-generated BM can genuinely reflect the processing characteristics of BM: (1) Is BM from this source processed globally in vision? (2) Does its BM (e.g., from the feet) retain detailed local information? and (3) Does the BM convey emotional information? We obtained positive results in response to all three questions. Therefore, we think that the KBC toolbox can be useful in generating BM for future research.
27
Human biological and nonbiological point-light movements: Creation and validation of the dataset. Behav Res Methods 2016; 49:2083-2092. [DOI: 10.3758/s13428-016-0843-9] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
28
Lee H, Kim J. Facilitating Effects of Emotion on the Perception of Biological Motion: Evidence for a Happiness Superiority Effect. Perception 2016; 46:679-697. [PMID: 27903922 DOI: 10.1177/0301006616681809] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
It has been reported that visual perception can be influenced not only by the physical features of a stimulus but also by its emotional valence, even without explicit emotion recognition. Some previous studies reported an anger superiority effect, while others found a happiness superiority effect during visual perception. It thus remains unclear which emotion is more influential. In the present study, we conducted two experiments using biological motion (BM) stimuli to examine whether the emotional valence of the stimuli would affect BM perception, and if so, whether a specific type of emotion is associated with a superiority effect. Point-light walkers with three emotion types (anger, happiness, and neutral) were used, and the threshold to detect BM within noise was measured in Experiment 1. Participants showed higher performance in detecting happy walkers compared with the angry and neutral walkers. Follow-up motion velocity analysis revealed that physical differences among the stimuli were not the main factor causing the effect. The results of the emotion recognition task in Experiment 2 also showed a happiness superiority effect, as in Experiment 1. These results show that the emotional valence (happiness) of the stimuli can facilitate the processing of BM.
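The abstract does not state how the detection threshold in noise was estimated; one standard psychophysical procedure is a 1-up/2-down adaptive staircase, sketched below with a hypothetical respond() callback and arbitrary noise-level parameters, purely for illustration.

```python
def staircase_threshold(respond, start=60, step=4, n_reversals=8):
    """1-up/2-down adaptive staircase: converges near the ~70.7%-correct
    noise level. `respond(level)` returns True for a correct detection;
    higher `level` means more noise dots, i.e., a harder trial."""
    level, streak, direction, reversals = start, 0, 0, []
    while len(reversals) < n_reversals:
        if respond(level):
            streak += 1
            if streak == 2:                    # two correct in a row: harder
                streak = 0
                if direction == -1:            # turning point going up
                    reversals.append(level)
                direction = 1
                level += step
        else:                                  # one error: easier
            streak = 0
            if direction == 1:                 # turning point going down
                reversals.append(level)
            direction = -1
            level = max(0, level - step)
    return sum(reversals) / len(reversals)     # mean of reversal levels
```

With a deterministic simulated observer who detects the walker whenever the noise level is at or below some true threshold, the staircase's reversal average settles around that threshold.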
Affiliation(s)
- Hannah Lee
- Department of Psychology, Duksung Women's University, Republic of Korea
- Jejoong Kim
- Department of Psychology, Duksung Women's University, Republic of Korea
|
29
|
Piwek L, Petrini K, Pollick F. A dyadic stimulus set of audiovisual affective displays for the study of multisensory, emotional, social interactions. Behav Res Methods 2016; 48:1285-1295. [PMID: 26542970 PMCID: PMC5101291 DOI: 10.3758/s13428-015-0654-4] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
We describe the creation of the first multisensory stimulus set that consists of dyadic, emotional, point-light interactions combined with voice dialogues. Our set includes 238 unique clips, which present happy, angry and neutral emotional interactions at low, medium and high levels of emotional intensity between nine different actor dyads. The set was evaluated in a between-design experiment, and was found to be suitable for a broad potential application in the cognitive and neuroscientific study of biological motion and voice, perception of social interactions and multisensory integration. We also detail in this paper a number of supplementary materials, comprising AVI movie files for each interaction, along with text files specifying the three dimensional coordinates of each point-light in each frame of the movie, as well as unprocessed AIFF audio files for each dialogue captured. The full set of stimuli is available to download from: http://motioninsocial.com/stimuli_set/ .
Affiliation(s)
- Lukasz Piwek
- Centre for the Study of Behaviour Change and Influence, University of the West of England, 4D17, Coldharbour Lane, BS16 1QY Bristol, UK
- Karin Petrini
- Department of Psychology, University of Bath, Claverton Down, BA2 7AY Bath, UK
- Frank Pollick
- School of Psychology, University of Glasgow, 58 Hillhead Street, G12 8QB Glasgow, UK
|
30
|
Manera V, von der Lühe T, Schilbach L, Verfaillie K, Becchio C. Communicative interactions in point-light displays: Choosing among multiple response alternatives. Behav Res Methods 2016; 48:1580-1590. [PMID: 26487054 PMCID: PMC5101265 DOI: 10.3758/s13428-015-0669-x] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
Abstract
Vision scientists are increasingly relying on the point-light technique as a way to investigate the perception of human motion. Unfortunately, the lack of standardized stimulus sets has so far limited the use of this technique for studying social interaction. Here, we describe a new tool to study the interaction between two agents starting from point-light displays: the Communicative Interaction Database - 5AFC format (CID-5). The CID-5 consists of 14 communicative and seven non-communicative individual actions performed by two agents. Stimuli were constructed by combining motion-capture techniques and 3-D animation software to provide precise control over the computer-generated actions. For each action stimulus, we provide coordinate files and movie files depicting the action as seen from four different perspectives. Furthermore, the archive contains a text file with a list of five alternative action descriptions to construct forced-choice paradigms. To validate the CID-5, we provide normative data collected to assess action identification within a 5AFC task. The CID-5 archive is freely downloadable from http://bsb-lab.org/research/ and from the supplementary materials of this article.
Affiliation(s)
- Valeria Manera
- CoBTek Laboratory, University of Nice Sophia Antipolis, Nice, France
- Tabea von der Lühe
- Department of Psychiatry and Psychotherapy, Heinrich-Heine-University of Düsseldorf, Rhineland State Clinics Düsseldorf, Düsseldorf, Germany
- Leonhard Schilbach
- Max Planck Institute of Psychiatry, Munich, Germany
- Department of Psychiatry, University Hospital Cologne, Cologne, Germany
- Karl Verfaillie
- Laboratory of Experimental Psychology, KU Leuven, Leuven, Belgium
- Cristina Becchio
- Department of Robotics, Brain and Cognitive Sciences, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
- Department of Psychology, University of Turin, Via Po 14, 10123, Turin, Italy
|
31
|
Aung MSH, Kaltwang S, Romera-Paredes B, Martinez B, Singh A, Cella M, Valstar M, Meng H, Kemp A, Shafizadeh M, Elkins AC, Kanakam N, de Rothschild A, Tyler N, Watson PJ, de C Williams AC, Pantic M, Bianchi-Berthouze N. The Automatic Detection of Chronic Pain-Related Expression: Requirements, Challenges and the Multimodal EmoPain Dataset. IEEE TRANSACTIONS ON AFFECTIVE COMPUTING 2016; 7:435-451. [PMID: 30906508 PMCID: PMC6430129 DOI: 10.1109/taffc.2015.2462830] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Pain-related emotions are a major barrier to effective self-rehabilitation in chronic pain. Automated coaching systems capable of detecting these emotions are a potential solution. This paper lays the foundation for the development of such systems by making three contributions. First, through literature reviews, an overview of how pain is expressed in chronic pain and the motivation for detecting it in physical rehabilitation is provided. Second, a fully labelled multimodal dataset (named 'EmoPain') containing high-resolution multiple-view face videos, head-mounted and room audio signals, full-body 3D motion capture, and electromyographic signals from back muscles is supplied. Natural, unconstrained pain-related facial expressions and body movement behaviours were elicited from people with chronic pain carrying out physical exercises. Both instructed and non-instructed exercises were considered to reflect traditional scenarios of physiotherapist-directed therapy and home-based self-directed therapy. Two sets of labels were assigned: level of pain from facial expressions, annotated by eight raters, and the occurrence of six pain-related body behaviours, segmented by four experts. Third, through exploratory experiments grounded in the data, the factors and challenges in the automated recognition of such expressions and behaviour are described. The paper concludes by discussing potential avenues in the context of these findings, also highlighting differences between the two exercise scenarios addressed.
Affiliation(s)
- Min S H Aung
- UCL Interaction Centre, University College London, London WC1E 6BT, United Kingdom
- Sebastian Kaltwang
- Department of Computing, Imperial College London, London SW7 2AZ, United Kingdom
- Brais Martinez
- Department of Computing, Imperial College London, London SW7 2AZ, United Kingdom
- Aneesha Singh
- UCL Interaction Centre, University College London, London WC1E 6BT, United Kingdom
- Matteo Cella
- Department of Clinical, Educational & Health Psychology, University College London, London WC1E 6BT, United Kingdom
- Michel Valstar
- Department of Computing, Imperial College London, London SW7 2AZ, United Kingdom
- Hongying Meng
- UCL Interaction Centre, University College London, London WC1E 6BT, United Kingdom
- Andrew Kemp
- Physiotherapy Department, Maidstone & Tunbridge Wells NHS Trust, TN2 4QJ
- Moshen Shafizadeh
- UCL Interaction Centre, University College London, London WC1E 6BT, United Kingdom
- Aaron C Elkins
- Department of Computing, Imperial College London, London SW7 2AZ, United Kingdom
- Natalie Kanakam
- Department of Clinical, Educational & Health Psychology, University College London, London WC1E 6BT, United Kingdom
- Amschel de Rothschild
- Department of Clinical, Educational & Health Psychology, University College London, London WC1E 6BT, United Kingdom
- Nick Tyler
- Department of Civil, Environmental & Geomatic Engineering, University College London, London WC1E 6BT, United Kingdom
- Paul J Watson
- Department of Health Sciences, University of Leicester, Leicester LE5 7PW, United Kingdom
- Amanda C de C Williams
- Department of Clinical, Educational & Health Psychology, University College London, London WC1E 6BT, United Kingdom
- Maja Pantic
- Department of Computing, Imperial College London, London SW7 2AZ, United Kingdom
|
32
|
Mandery C, Terlemez O, Do M, Vahrenkamp N, Asfour T. Unifying Representations and Large-Scale Whole-Body Motion Databases for Studying Human Motion. IEEE T ROBOT 2016. [DOI: 10.1109/tro.2016.2572685] [Citation(s) in RCA: 63] [Impact Index Per Article: 7.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
|
33
|
Piana S, Staglianò A, Odone F, Camurri A. Adaptive Body Gesture Representation for Automatic Emotion Recognition. ACM T INTERACT INTEL 2016. [DOI: 10.1145/2818740] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
We present a computational model and a system for the automated recognition of emotions starting from full-body movement. Three-dimensional motion data of full-body movements are obtained either from professional optical motion-capture systems (Qualisys) or from low-cost RGB-D sensors (Kinect and Kinect2). A number of features are then automatically extracted at different levels, from kinematics of a single joint to more global expressive features inspired by psychology and humanistic theories (e.g., contraction index, fluidity, and impulsiveness). An abstraction layer based on dictionary learning further processes these movement features to increase the model generality and to deal with intraclass variability, noise, and incomplete information characterizing emotion expression in human movement. The resulting feature vector is the input for a classifier performing real-time automatic emotion recognition based on linear support vector machines. The recognition performance of the proposed model is presented and discussed, including the tradeoff between precision of the tracking measures (we compare the Kinect RGB-D sensor and the Qualisys motion-capture system) versus dimension of the training dataset. The resulting model and system have been successfully applied in the development of serious games for helping autistic children learn to recognize and express emotions by means of their full-body movement.
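The pipeline this abstract describes — expressive movement features such as a contraction index computed from 3-D joint data, fed to a linear classifier — can be sketched in a few lines. The sketch below is illustrative only: the feature is a simplified proxy for the paper's contraction index, the data are synthetic, and a nearest-centroid rule stands in for the paper's linear SVM.

```python
import numpy as np

def contraction_index(joints):
    """Mean distance of joints from the body centroid for one frame
    (a simplified proxy for the contraction index; lower = more contracted).
    joints: (n_joints, 3) array of 3-D positions."""
    centroid = joints.mean(axis=0)
    return float(np.linalg.norm(joints - centroid, axis=1).mean())

def extract_features(sequence):
    """Per-sequence feature vector: mean and std of the contraction index
    over frames. sequence: (n_frames, n_joints, 3)."""
    ci = np.array([contraction_index(f) for f in sequence])
    return np.array([ci.mean(), ci.std()])

class NearestCentroid:
    """Tiny stand-in for the linear SVM used in the paper."""
    def fit(self, X, y):
        self.labels_ = sorted(set(y))
        self.centroids_ = {c: X[np.array(y) == c].mean(axis=0)
                           for c in self.labels_}
        return self
    def predict(self, X):
        return [min(self.labels_,
                    key=lambda c: np.linalg.norm(x - self.centroids_[c]))
                for x in X]

# Toy data: "expanded" (joy-like) vs "contracted" (fear-like) sequences.
rng = np.random.default_rng(0)
def make_seq(scale):
    return rng.normal(0.0, scale, size=(30, 15, 3))  # 30 frames, 15 joints

X = np.array([extract_features(make_seq(s)) for s in [1.0, 1.1, 0.2, 0.25]])
y = ["expanded", "expanded", "contracted", "contracted"]
clf = NearestCentroid().fit(X, y)
print(clf.predict([extract_features(make_seq(0.22))]))  # a contracted-like test sequence
```

In the paper's system the feature vector additionally includes kinematics and dictionary-learned representations, and classification is done by a linear SVM in real time; the sketch only mirrors the overall feature-extraction-then-classification structure.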
Affiliation(s)
- Stefano Piana
- DIBRIS—Università degli Studi di Genova, Genova - Italy
|
34
|
|
35
|
A Survey of Autonomous Human Affect Detection Methods for Social Robots Engaged in Natural HRI. J INTELL ROBOT SYST 2015. [DOI: 10.1007/s10846-015-0259-2] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
|
36
|
Khair NM, Hariharan M, Yaacob S, Basah SN. Locality sensitivity discriminant analysis-based feature ranking of human emotion actions recognition. J Phys Ther Sci 2015; 27:2649-53. [PMID: 26357453 PMCID: PMC4563335 DOI: 10.1589/jpts.27.2649] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2015] [Accepted: 05/18/2015] [Indexed: 11/24/2022] Open
Abstract
[Purpose] Computational intelligence tasks such as pattern recognition are frequently confronted with high-dimensional data; reducing the dimensionality is therefore critical to make the manifold of features manageable. Procedures that are analytically or computationally tractable on smaller amounts of data in a low-dimensional space can be important for producing better classification performance. [Methods] We therefore proposed two-stage reduction techniques. Feature-selection-based ranking using information gain (IG) and chi-square (Chisq) is used to identify the best ranking of the features selected for emotion classification in different actions, including knocking, throwing, and lifting. Then, feature reduction based on locality sensitive discriminant analysis (LSDA) and principal component analysis (PCA) is used to transform the selected features to a low-dimensional space. Two-stage feature selection-reduction methods, namely IG-PCA, IG-LSDA, Chisq-PCA, and Chisq-LSDA, are proposed. [Results] The results confirm that applying feature ranking combined with a dimensionality-reduction method increases the performance of the classifiers. [Conclusion] The dimension reduction was performed using LSDA on the features of highest importance determined using IG and Chisq, to not only improve effectiveness but also reduce computational time.
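The two-stage selection-then-reduction scheme described above can be illustrated with a minimal sketch. This is not the paper's implementation: a simple between-class separation score stands in for the IG/Chi-square ranking, plain PCA (via SVD) stands in for both reduction variants, and the data are synthetic.

```python
import numpy as np

def rank_features(X, y):
    """Stage 1 - rank features by a simple between-class separation score
    (a stand-in for the paper's information-gain / chi-square ranking)."""
    y = np.asarray(y)
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    score = means.max(axis=0) - means.min(axis=0)   # spread of class means
    return np.argsort(score)[::-1]                  # best features first

def pca(X, n_components):
    """Stage 2 - project the selected features to a low-dimensional space."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(1)
# Toy data: 40 samples x 10 "kinematic" features; only features 0 and 3
# carry class signal, the rest are noise.
y = np.repeat([0, 1], 20)
X = rng.normal(size=(40, 10))
X[:, 0] += 3 * y
X[:, 3] -= 3 * y

order = rank_features(X, y)
top = order[:4]                       # keep the 4 best-ranked features
Z = pca(X[:, top], n_components=2)    # reduce them to 2 dimensions
print(sorted(order[:2].tolist()), Z.shape)
```

The two informative features come out on top of the ranking, and the reduced representation `Z` would then be handed to a classifier, mirroring the IG-PCA / Chisq-PCA pipelines evaluated in the paper.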
Affiliation(s)
- Nurnadia M Khair
- School of Mechatronic Engineering, Universiti Malaysia Perlis (UniMAP), Malaysia
- M Hariharan
- School of Mechatronic Engineering, Universiti Malaysia Perlis (UniMAP), Malaysia
- S Yaacob
- Universiti Kuala Lumpur Malaysian Spanish Institute, Kulim Hi-Tech Park, Malaysia
- Shafriza Nisha Basah
- School of Mechatronic Engineering, Universiti Malaysia Perlis (UniMAP), Malaysia
|
37
|
Destephe M, Maruyama T, Zecca M, Hashimoto K, Takanishi A. The influences of emotional intensity for happiness and sadness on walking. Annu Int Conf IEEE Eng Med Biol Soc 2013; 2013:7452-5. [PMID: 24111468 DOI: 10.1109/embc.2013.6611281] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
Walking is one of the most common activities that we perform every day. Even if the main goal of walking is to move from one place to another, walking can also convey emotional clues in a social context. Those clues can be used to improve interactions or any message we want to express. However, there are few studies on the effects of the intensity of emotions on walking. In this paper, the authors propose to assess the differences in the expression of emotion across expressed intensities (low, middle, high, and exaggerated). We observed two professional actors perform emotive walking at different intensities and analyzed the recorded data. For each emotion, we analyzed characteristic features which can be used in the future to model gait patterns and to recognize emotions from gait parameters. Additionally, we found characteristics which can be used to create new emotion expressions for our biped robot Kobian, improving human-robot interaction.
|
38
|
Piwek L, Pollick F, Petrini K. Audiovisual integration of emotional signals from others' social interactions. Front Psychol 2015; 6:611. [PMID: 26005430 PMCID: PMC4424808 DOI: 10.3389/fpsyg.2015.00611] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2015] [Accepted: 04/23/2015] [Indexed: 11/13/2022] Open
Abstract
Audiovisual perception of emotions has typically been examined using displays of a solitary character (e.g., the face-voice and/or body-sound of one actor). However, in real life humans often face more complex multisensory social situations, involving more than one person. Here we ask whether the audiovisual facilitation in emotion recognition previously found in simpler social situations extends to more complex and ecological situations. Stimuli consisting of the biological motion and voice of two interacting agents were used in two experiments. In Experiment 1, participants were presented with visual, auditory, auditory filtered/noisy, and audiovisual congruent and incongruent clips. We asked participants to judge whether the two agents were interacting happily or angrily. In Experiment 2, another group of participants repeated the same task, as in Experiment 1, while trying to ignore either the visual or the auditory information. The findings from both experiments indicate that when the reliability of the auditory cue was decreased, participants gave more weight to the visual cue in their emotional judgments. This in turn translated into increased emotion recognition accuracy for the multisensory condition. Our findings thus point to a common mechanism of multisensory integration of emotional signals irrespective of social stimulus complexity.
Affiliation(s)
- Lukasz Piwek
- Behaviour Research Lab, Bristol Business School, University of the West of England Bristol, UK
- Frank Pollick
- School of Psychology, College of Science and Engineering, University of Glasgow, Glasgow, UK
- Karin Petrini
- Department of Psychology, Faculty of Humanities & Social Sciences, University of Bath, Bath, UK
|
39
|
Ghanouni P, Memari AH, Shayestehfar M, Moshayedi P, Gharibzadeh S, Ziaee V. Biological motion perception is affected by age and cognitive style in children aged 8-15. Neurol Res Int 2015; 2015:594042. [PMID: 25861473 PMCID: PMC4378609 DOI: 10.1155/2015/594042] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2014] [Revised: 02/19/2015] [Accepted: 02/20/2015] [Indexed: 11/25/2022] Open
Abstract
The current paper aims to address the question of how biological motion perception in different social contexts is influenced by age and by cognitive style. We examined developmental changes in biological motion perception among 141 school children aged 8-15 using point-light displays in monadic and dyadic social contexts. Furthermore, the cognitive styles of participants were investigated using empathizing-systemizing questionnaires. Results showed that age and empathizing ability strongly predicted improvement in action perception in both contexts. However, systemizing ability was an independent predictor of performance only in monadic contexts. Furthermore, the accuracy of action perception increased significantly from 46.4% (SD = 16.1) in monadic to 62.5% (SD = 11.5) in dyadic social contexts. This study can help to identify the role of social context in biological motion perception and shows that children with different cognitive styles may differ in biological motion perception.
Affiliation(s)
- Parisa Ghanouni
- Occupational Science and Occupational Therapy, Faculty of Medicine, University of British Columbia, Vancouver, Canada
- Amir Hossein Memari
- Neuroscience Institute, Sports Medicine Research Center, Tehran University of Medical Sciences, Tehran, Iran
- Monir Shayestehfar
- Neuroscience Institute, Sports Medicine Research Center, Tehran University of Medical Sciences, Tehran, Iran
- Pouria Moshayedi
- Department of Neurology, University of California, Los Angeles, CA, USA
- Shahriar Gharibzadeh
- Department of Biomedical Engineering, Amirkabir University of Technology, Tehran, Iran
- Vahid Ziaee
- Growth and Development Research Center, Tehran University of Medical Sciences, Tehran, Iran
|
40
|
Yiltiz H, Chen L. Tactile input and empathy modulate the perception of ambiguous biological motion. Front Psychol 2015; 6:161. [PMID: 25750631 PMCID: PMC4335391 DOI: 10.3389/fpsyg.2015.00161] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2014] [Accepted: 02/01/2015] [Indexed: 11/25/2022] Open
Abstract
Evidence has shown that task-irrelevant auditory cues can bias perceptual decisions regarding directional information associated with biological motion, as indicated in perceptual tasks using point-light walkers (PLWs) (Brooks et al., 2007). In the current study, we extended the investigation of cross-modal influences to the tactile domain by asking how tactile input resolves perceptual ambiguity in visual apparent motion, and how empathy plays a role in this cross-modal interaction. In Experiment 1, we simulated tactile feedback on the observers' fingertips while (upright or inverted) PLWs, comprised of either all red or all green dots, walked leftwards or rightwards. The temporal relation between tactile events and critical visual events (the PLW's feet hitting the ground) was manipulated so that the tap could lead, synchronize with, or lag the visual foot-hitting-ground event. We found that the temporal structure between tactile (feedback) and visual (hitting) events systematically biased the perceived direction of upright PLWs, making either the leftward or the rightward direction more dominant. However, this effect was absent for inverted PLWs. In Experiment 2, we examined how empathy modulates cross-modal capture. Instead of giving tactile feedback on participants' fingertips, we gave taps on their ankles and presented PLWs approaching (facing toward the observer) or receding (facing away from the observer) to resemble normal walking postures. With the same temporal structure, we found that individuals with higher empathy were more subject to perceptual bias in the presence of tactile feedback. Taken together, our findings show that task-irrelevant tactile input can resolve the otherwise ambiguous perception of the direction of biological motion, and that this cross-modal bias is mediated by higher-level social-cognitive factors, including empathy.
Affiliation(s)
- Lihan Chen
- Department of Psychology, Peking University, Beijing, China; Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing, China
|
41
|
Volkova E, de la Rosa S, Bülthoff HH, Mohler B. The MPI emotional body expressions database for narrative scenarios. PLoS One 2014; 9:e113647. [PMID: 25461382 PMCID: PMC4252031 DOI: 10.1371/journal.pone.0113647] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2014] [Accepted: 10/22/2014] [Indexed: 12/03/2022] Open
Abstract
Emotion expression in human-human interaction takes place via various types of information, including body motion. Research on the perceptual-cognitive mechanisms underlying the processing of natural emotional body language can benefit greatly from datasets of natural emotional body expressions that facilitate stimulus manipulation and analysis. Existing databases have so far focused on a few emotion categories that display predominantly prototypical, exaggerated emotion expressions. Moreover, many of these databases consist of video recordings, which limits the ability to manipulate and analyse the physical properties of the stimuli. We present a new database consisting of a large set (over 1400) of natural emotional body expressions typical of monologues. To achieve close-to-natural emotional body expressions, amateur actors narrated coherent stories while their body movements were recorded with motion capture technology. The resulting 3-dimensional motion data, recorded at a high frame rate (120 frames per second), provide fine-grained information about body movements and allow the manipulation of movement on a body-joint basis. For each expression, the database gives the positions and orientations in space of 23 body joints for every frame. We report the results of an analysis of physical motion properties and of an emotion categorisation study. The reactions of observers from the emotion categorisation study are included in the database. Moreover, we recorded the intended emotion expression for each motion sequence from the actor, to allow for investigations of the link between intended and perceived emotions. The motion sequences, along with the accompanying information, are made available in a searchable MPI Emotional Body Expression Database. We hope that this database will enable researchers to study the expression and perception of naturally occurring emotional body expressions in greater depth.
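A database organized this way — per-frame positions (and orientations) of a fixed set of body joints at a known frame rate — supports simple time-indexed lookups. The sketch below uses a hypothetical array layout, not the database's actual file format; only the joint count (23) and frame rate (120 fps) follow the abstract.

```python
import numpy as np

N_JOINTS, FPS = 23, 120  # as described for the MPI database

def frame_at(positions, t_seconds):
    """Return the joint positions of the frame nearest to a time point,
    clamped to the recording length. Layout is illustrative.
    positions: (n_frames, N_JOINTS, 3) array of 3-D joint positions."""
    idx = min(int(round(t_seconds * FPS)), len(positions) - 1)
    return positions[idx]

# Toy recording: 2 seconds of motion in which the root joint rises from 0 to 1.
positions = np.zeros((2 * FPS, N_JOINTS, 3))
positions[:, 0, 1] = np.linspace(0, 1, 2 * FPS)

print(frame_at(positions, 1.0)[0, 1])  # root height roughly halfway at t = 1 s
```

The same indexing scheme extends naturally to per-joint orientations (e.g., a parallel `(n_frames, N_JOINTS, 4)` quaternion array), which is the granularity the database exposes.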
Affiliation(s)
- Ekaterina Volkova
- Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Graduate School of Neural & Behavioural Sciences, Tübingen, Germany
- Stephan de la Rosa
- Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Heinrich H. Bülthoff
- Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Betty Mohler
- Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
|
42
|
Christensen JF, Nadal M, Cela-Conde CJ. A norming study and library of 203 dance movements. Perception 2014; 43:178-206. [PMID: 24919352 DOI: 10.1068/p7581] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/08/2023]
Abstract
Dance stimuli have been used in experimental studies of (i) how movement is processed in the brain; (ii) how affect is perceived from bodily movement; and (iii) how dance can be a source of aesthetic experience. However, stimulus materials across, and even within, these three domains of research have varied considerably. Thus, integrative conclusions remain elusive. Moreover, concerns have been raised that the movements selected for such stimuli are qualitatively too different from the actual art form of dance, potentially introducing noise into the data. We propose a library of dance stimuli which responds to the stimulus requirements and design criteria of these three areas of research while respecting a dance art-historical perspective, offering greater ecological validity compared with previous dance stimulus sets. The stimuli are 5-6 s long video clips selected from genuine ballet performances. Following a number of coding experiments, the resulting stimulus library comprises 203 ballet dance stimuli coded in (i) 25 qualitative and quantitative movement variables; (ii) affective valence and arousal; and (iii) the aesthetic qualities beauty, liking, and interest. An Excel spreadsheet with these data points accompanies this manuscript, and the stimuli can be obtained from the authors upon request.
|
43
|
Abstract
We present a database of high-definition (HD) videos for the study of traits inferred from whole-body actions. Twenty-nine actors (19 female) were filmed performing different actions (walking, picking up a box, putting down a box, jumping, sitting down, and standing and acting) while conveying different traits, including four emotions (anger, fear, happiness, sadness), untrustworthiness, and neutral, where no specific trait was conveyed. For the actions conveying the four emotions and untrustworthiness, the actions were filmed multiple times, with the actor conveying the traits at different levels of intensity. In total, we made 2,783 action videos (in both two-dimensional and three-dimensional format), each lasting 7 s with a frame rate of 50 fps. All videos were filmed in a green-screen studio in order to isolate the action information from all contextual detail and to provide a flexible stimulus set for future use. In order to validate the traits conveyed by each action, we asked participants to rate each of the actions according to the trait that the actor portrayed in the two-dimensional videos. To provide a useful database of stimuli of multiple actions conveying multiple traits, each video name contains information on the gender of the actor, the action executed, the trait conveyed, and the rating of its perceived intensity. All videos can be downloaded free at the following address: http://www-users.york.ac.uk/~neb506/databases.html. We discuss potential uses for the database in the analysis of the perception of whole-body actions.
|
44
|
Piwek L, McKay LS, Pollick FE. Empirical evaluation of the uncanny valley hypothesis fails to confirm the predicted effect of motion. Cognition 2013; 130:271-7. [PMID: 24374019 DOI: 10.1016/j.cognition.2013.11.001] [Citation(s) in RCA: 53] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2012] [Revised: 09/24/2013] [Accepted: 11/01/2013] [Indexed: 10/25/2022]
Abstract
The uncanny valley hypothesis states that the acceptability of an artificial character will not increase linearly in relation to its likeness to human form. Instead, after an initial rise in acceptability there will be a pronounced decrease when the character is similar, but not identical to human form (Mori, 1970/2012). Moreover, it has been claimed but never directly tested that movement would accentuate this dip and make moving characters less acceptable. We used a number of full-body animated computer characters along with a parametrically defined motion set to examine the effect of motion quality on the uncanny valley. We found that improving the motion quality systematically improved the acceptability of the characters. In particular, the character classified in the deepest location of the uncanny valley became more acceptable when it was animated. Our results showed that although an uncanny valley was found for static characters, the deepening of the valley with motion, originally predicted by Mori (1970/2012), was not obtained.
Affiliation(s)
- Lukasz Piwek
- University of Glasgow, School of Psychology, Glasgow, UK
- Lawrie S McKay
- Netherlands Institute for Neuroscience, Amsterdam, The Netherlands
|
45
|
Krüger S, Sokolov AN, Enck P, Krägeloh-Mann I, Pavlova MA. Emotion through locomotion: gender impact. PLoS One 2013; 8:e81716. [PMID: 24278456 PMCID: PMC3838416 DOI: 10.1371/journal.pone.0081716] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2013] [Accepted: 10/18/2013] [Indexed: 01/29/2023] Open
Abstract
Body language reading is significant for everyday social cognition and successful social interaction, and constitutes a core component of social competence. Yet it is unclear whether our ability for body language reading is gender specific. In the present work, female and male observers had to visually recognize emotions through point-light human locomotion performed by female and male actors with different emotional expressions. For subtle emotional expressions only, males surpassed females in recognition accuracy and readiness to respond to happy walking portrayed by female actors, whereas females exhibited a tendency to be better at recognizing hostile angry locomotion expressed by male actors. In contrast to widespread beliefs about female superiority in social cognition, the findings suggest that gender effects in the recognition of emotions from human locomotion are modulated by the emotional content of actions and by opposite actor gender. In a nutshell, the study takes a further step in elucidating the impact of gender on body language reading and on neurodevelopmental and psychiatric deficits in visual social cognition.
Affiliation(s)
- Samuel Krüger
- Department of Pediatric Neurology and Developmental Medicine, Children's Hospital, Medical School, Eberhard Karls University of Tübingen, Tübingen, Germany
- Alexander N. Sokolov
- Department of Psychosomatic Medicine and Psychotherapy, Medical School, Eberhard Karls University of Tübingen, Tübingen, Germany
- Center for Pediatric Clinical Studies, Children's Hospital, Medical School, Eberhard Karls University of Tübingen, Tübingen, Germany
- Paul Enck
- Department of Psychosomatic Medicine and Psychotherapy, Medical School, Eberhard Karls University of Tübingen, Tübingen, Germany
- Ingeborg Krägeloh-Mann
- Department of Pediatric Neurology and Developmental Medicine, Children's Hospital, Medical School, Eberhard Karls University of Tübingen, Tübingen, Germany
- Werner Reichardt Center for Integrative Neuroscience, Eberhard Karls University of Tübingen, Tübingen, Germany
- Marina A. Pavlova
- Department of Pediatric Neurology and Developmental Medicine, Children's Hospital, Medical School, Eberhard Karls University of Tübingen, Tübingen, Germany
- Institute for Women's Health Baden-Württemberg, Eberhard Karls University of Tübingen, Tübingen, Germany
46
Communicative and noncommunicative point-light actions featuring high-resolution representation of the hands and fingers. Behav Res Methods 2013; 45:319-28. [PMID: 23073730 DOI: 10.3758/s13428-012-0273-2] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
We describe the creation of a set of point-light movies depicting 43 communicative gestures and 43 noncommunicative, pantomimed actions. These actions were recorded using a motion capture system that is worn on the body and provides accurate capture of the positions and movements of individual fingers. The movies created thus include point-lights on the fingers, allowing for representation of actions and gestures that would not be possible with a conventional, line-of-sight-based motion capture system. These videos would be suitable for use in cognitive and cognitive neuroscientific studies of biological motion and gesture perception. Each video is described, along with an H statistic indicating the consistency of the descriptive labels that 20 observers gave to the actions. We also produced a scrambled version of each movie, in which the starting position of each point was randomized but its local motion vector was preserved. These scrambled movies would be suitable for use as control stimuli in experimental studies. As supplementary materials, we provide QuickTime movie files of each action, along with text files specifying the three-dimensional coordinates of each point-light in each frame of each movie.
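The scrambling procedure described in this abstract (each point is given a randomized starting position while its local motion vector is preserved) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `scramble_point_lights`, the `(frames, points, 3)` array layout, and the `jitter` parameter are all assumptions made for the example.

```python
import numpy as np

def scramble_point_lights(coords, rng=None, jitter=0.5):
    """Scramble a point-light sequence: assign each point a random
    starting position while preserving its frame-to-frame motion
    vectors, as in typical scrambled-motion control stimuli.

    coords : array of shape (n_frames, n_points, 3)
    jitter : half-width of the uniform offset applied to start positions
    """
    rng = np.random.default_rng(rng)
    coords = np.asarray(coords, dtype=float)
    # Frame-to-frame displacement of every point (the local motion vectors).
    velocity = np.diff(coords, axis=0)
    # Random new starting position for each point.
    start = coords[0] + rng.uniform(-jitter, jitter, size=coords[0].shape)
    # Rebuild trajectories: new start plus the cumulative original motion.
    scrambled = np.concatenate(
        [start[None], start[None] + np.cumsum(velocity, axis=0)], axis=0
    )
    return scrambled
```

Because only the starting positions change, the frame-to-frame displacements of the scrambled output are identical to those of the input, which is the defining property of this class of control stimulus.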
47
Rodrigues ST, Castello VM, Jardim JG, Aguiar SA. Aprendizagem motora baseada em demonstrações de movimento biológico [Motor learning based on biological motion demonstrations]. MOTRIZ: REVISTA DE EDUCACAO FISICA 2012. [DOI: 10.1590/s1980-65742012000400002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The aim of this study was to evaluate the motor learning process of a complex Artistic Gymnastics skill through observation of demonstrations from point-light and video models. Sixteen participants, divided into groups for the respective models, performed a pre-test followed by 100 trials of a handstand, equally distributed in blocks of 10 trials over two days, alternating periods of demonstration and practice, with a retention test one day later. Kinematics of the participants' arm, trunk, and leg allowed analysis of the similarity between each participant's coordination and the model's, as well as of movement time; participants' performance was also evaluated by two Artistic Gymnastics experts. Both analyses indicated that the groups did not differ. The results are discussed in terms of the information-sufficiency hypothesis for biological motion models, particularly as applied to the learning of complex motor skills.
48
van Boxtel JJA, Lu H. Signature movements lead to efficient search for threatening actions. PLoS One 2012; 7:e37085. [PMID: 22649510 PMCID: PMC3359369 DOI: 10.1371/journal.pone.0037085] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2011] [Accepted: 04/18/2012] [Indexed: 11/19/2022] Open
Abstract
The ability to find and evade fighting persons in a crowd is potentially life-saving. To investigate how the visual system processes threatening actions, we employed a visual search paradigm with threatening boxer targets among emotionally-neutral walker distractors, and vice versa. We found that a boxer popped out for both intact and scrambled actions, whereas walkers did not. A reverse correlation analysis revealed that observers' responses clustered around the time of the “punch”, a signature movement of boxing actions, but not around specific movements of the walker. These findings support the existence of a detector for signature movements in action perception. This detector helps in rapidly detecting aggressive behavior in a crowd, potentially through an expedited (sub)cortical threat-detection mechanism.
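The core of the reverse-correlation logic in this abstract (responses clustering in time around a signature movement such as the punch) can be sketched as follows. The function names, the single fixed event time, and the `window` parameter are illustrative assumptions for the example, not the authors' analysis pipeline:

```python
import numpy as np

def offsets_from_event(response_times, event_time):
    # Offset of each detection response from the time of the
    # signature movement (e.g., the punch), in seconds.
    return np.asarray(response_times, dtype=float) - event_time

def clustering_score(offsets, window=0.2):
    # Fraction of responses falling within +/- window seconds of the
    # event; values near 1.0 indicate responses cluster around the
    # signature movement, values near the window's share of the trial
    # duration indicate no temporal clustering.
    offsets = np.asarray(offsets, dtype=float)
    return float(np.mean(np.abs(offsets) <= window))
```

A walker condition, by contrast, would yield offsets spread across the trial and a correspondingly low score.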
Affiliation(s)
- Jeroen J. A. van Boxtel
- Department of Psychology, University of California Los Angeles, Los Angeles, California, United States of America
- Hongjing Lu
- Department of Psychology, University of California Los Angeles, Los Angeles, California, United States of America
- Department of Statistics, University of California Los Angeles, Los Angeles, California, United States of America
49
Hwang BW, Kim S, Lee SW. A full-body gesture database for human gesture analysis. INT J PATTERN RECOGN 2011. [DOI: 10.1142/s0218001407005806] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
This paper presents a full-body gesture database which contains 2D video data and 3D motion data of 14 normal gestures, 10 abnormal gestures, and 30 command gestures for 20 subjects. We call this database the Korea University Gesture (KUG) database. Using 3D motion cameras and 3 sets of stereo cameras, we captured 3D motion data and 3 pairs of stereo-video data in 3 different directions for normal and abnormal gestures. In the case of command gestures, 2 pairs of stereo-video data were obtained by 2 sets of stereo cameras with different focal lengths in order to capture views of the whole body and the upper body simultaneously. The 2D silhouette data was synthesized by separating subject and background in the 2D stereo-video data. In this paper, we describe the gesture capture system, the organization of the database, the potential usages of the database, and the contact point for the KUG database. We expect that this database will be very useful for the study of 2D/3D human gesture and its applications.
Affiliation(s)
- Bon-Woo Hwang
- Division of Computer and Communications Engineering, Korea University, Anam-dong, Seongbuk-ku, Seoul 136-713, Korea
- Sungmin Kim
- Division of Computer and Communications Engineering, Korea University, Anam-dong, Seongbuk-ku, Seoul 136-713, Korea
- Seong-Whan Lee
- Division of Computer and Communications Engineering, Korea University, Anam-dong, Seongbuk-ku, Seoul 136-713, Korea
50
McKay LS, Simmons DR, McAleer P, Marjoram D, Piggot J, Pollick FE. Do distinct atypical cortical networks process biological motion information in adults with Autism Spectrum Disorders? Neuroimage 2011; 59:1524-33. [PMID: 21888982 DOI: 10.1016/j.neuroimage.2011.08.033] [Citation(s) in RCA: 50] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2011] [Revised: 08/09/2011] [Accepted: 08/11/2011] [Indexed: 10/17/2022] Open
Abstract
Whether people with Autism Spectrum Disorders (ASDs) have a specific deficit when processing biological motion has been a topic of much debate. We used psychophysical methods to determine individual behavioural thresholds in a point-light direction discrimination paradigm for small but carefully matched groups of adults (N=10 per group) with and without ASDs. These thresholds were used to derive individual stimulus levels in an identical fMRI task, with the purpose of equalising task performance across all participants whilst inside the scanner. The results of this investigation show that, despite comparable behavioural performance both inside and outside the scanner, the group with ASDs shows a different pattern of BOLD activation from the TD group in response to the same stimulus levels. Furthermore, connectivity analysis suggests that the main difference between the groups is that the TD group utilises a unitary network with information passing from temporal to parietal regions, whilst the ASD group utilises two distinct networks: one involving motion-sensitive areas and another involving form-selective areas. In addition, a temporal-parietal link that is present in the TD group is missing in the ASD group. We tentatively propose that these differences may arise from early dysfunctional connectivity in the brains of people with ASDs, which is to some extent compensated for by rewiring in high-functioning adults.
Affiliation(s)
- Lawrie S McKay
- School of Psychology, University of Glasgow, Glasgow G12 8QB, UK.