1. Ito M, Suzuki A. Discrepancies in perceived humanness between spatially filtered and unfiltered faces and their associations with uncanny feelings. Perception 2024; 53:529-543. [PMID: 38752230] [DOI: 10.1177/03010066241252355]
Abstract
Human and artificial features that coexist in certain types of human-like robots create a discrepancy in perceived humanness and evoke uncanny feelings in human observers. However, it is unknown whether this perceptual mismatch in humanness occurs for all faces and whether it is related to the uncanny feelings they evoke. We investigated this by examining perceived humanness for a variety of natural images of robot and human faces carrying different spatial frequency (SF) information: faces with only low SF (LSF), middle SF (MSF), or high SF (HSF) information, and intact (spatially unfiltered) faces. Uncanny feelings elicited by these faces were also measured. The results revealed a perceptual mismatch: LSF, MSF, and HSF faces were perceived as more human than intact faces. This was particularly true for intact robot faces that looked slightly human, which tended to evoke strong uncanny feelings. Importantly, the mismatch in perceived humanness between the intact and spatially filtered faces was positively correlated with uncanny feelings toward the intact faces. Given that the human visual system performs SF analysis when processing faces, the perceptual mismatches observed in this study likely occur in real life for all faces and, as such, might be a ubiquitous source of uncanny feelings in real-life situations.
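The spatial frequency filtering described above is typically implemented in the Fourier domain. The following is a minimal numpy sketch under stated assumptions: the ideal (hard-cutoff) band-pass mask and the cutoff values of 8 and 24 cycles/image are illustrative choices, not the filter parameters used in the study.

```python
import numpy as np

def sf_filter(image, low=None, high=None):
    """Keep only spatial frequencies between `low` and `high`
    (in cycles per image) using an ideal band-pass mask applied
    in the 2D Fourier domain."""
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2)  # radial frequency, cycles/image
    mask = np.ones_like(r, dtype=bool)
    if low is not None:
        mask &= r >= low   # high-pass edge
    if high is not None:
        mask &= r <= high  # low-pass edge
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

# Illustrative "face" image (random texture stands in for a photo)
rng = np.random.default_rng(1)
face = rng.standard_normal((64, 64))
lsf = sf_filter(face, high=8)   # LSF version: keep <= 8 cycles/image
hsf = sf_filter(face, low=24)   # HSF version: keep >= 24 cycles/image
```

With no cutoffs the filter is an identity, which is a quick sanity check that the forward/inverse transform round-trip is correct.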
2. Wu J, Du X, Liu Y, Tang W, Xue C. How the Degree of Anthropomorphism of Human-like Robots Affects Users' Perceptual and Emotional Processing: Evidence from an EEG Study. Sensors (Basel, Switzerland) 2024; 24:4809. [PMID: 39123856] [PMCID: PMC11314648] [DOI: 10.3390/s24154809]
Abstract
Anthropomorphized robots are increasingly integrated into human social life, playing vital roles across various fields. This study aimed to elucidate the neural dynamics underlying users' perceptual and emotional responses to robots with varying levels of anthropomorphism. We investigated event-related potentials (ERPs) and event-related spectral perturbations (ERSPs) elicited while participants viewed and rated their affective responses to robots with low (L-AR), medium (M-AR), and high (H-AR) levels of anthropomorphism. EEG data were recorded from 42 participants. Results revealed that, in early time windows, H-AR induced a more negative N1 and increased frontal theta power but a smaller P2 than M-AR and L-AR. In later time windows, M-AR generated a greater late positive potential (LPP) and enhanced parietal-occipital theta oscillations relative to H-AR and L-AR. These findings suggest distinct neural processing phases: early feature detection and selective attention allocation, followed by later affective appraisal. Early detection of facial form and animacy, with P2 reflecting higher-order visual processing, appeared to correlate with anthropomorphism levels. This research advances the understanding of emotional processing in anthropomorphic robot design and provides valuable insights for robot designers and manufacturers regarding emotional and feature design, evaluation, and promotion of anthropomorphic robots.
Affiliation(s)
- Chengqi Xue
- School of Mechanical Engineering, Southeast University, Suyuan Avenue 79, Nanjing 211189, China; (J.W.); (X.D.); (Y.L.); (W.T.)
3. Jastrzab LE, Chaudhury B, Ashley SA, Koldewyn K, Cross ES. Beyond human-likeness: Socialness is more influential when attributing mental states to robots. iScience 2024; 27:110070. [PMID: 38947497] [PMCID: PMC11214418] [DOI: 10.1016/j.isci.2024.110070]
Abstract
We sought to replicate and expand previous work showing that the more human-like a robot appears, the more willing people are to attribute mind-like capabilities to it and to engage with it socially. Forty-two participants played games against a human, a humanoid robot, a mechanoid robot, and a computer algorithm while undergoing functional neuroimaging. We confirmed that the more human-like the agent, the more participants attributed a mind to it. However, exploratory analyses revealed that the perceived socialness of an agent appeared to be as, if not more, important for mind attribution. Our findings suggest that top-down knowledge cues may be equally or even more influential than bottom-up stimulus cues in mind attribution to non-human agents. While further work is required to test this hypothesis directly, these preliminary findings hold important implications for robot design and for understanding and testing the flexibility of human social cognition when people engage with artificial agents.
Affiliation(s)
- Laura E. Jastrzab
- Institute for Cognitive Neuroscience, School of Human and Behavioural Science, Bangor University, Wales, UK
- Institute for Neuroscience and Psychology, School of Psychology, University of Glasgow, Glasgow, UK
- Bishakha Chaudhury
- Institute for Neuroscience and Psychology, School of Psychology, University of Glasgow, Glasgow, UK
- Sarah A. Ashley
- Institute for Cognitive Neuroscience, School of Human and Behavioural Science, Bangor University, Wales, UK
- Division of Psychiatry, Institute of Mental Health, University College London, London, UK
- Kami Koldewyn
- Institute for Cognitive Neuroscience, School of Human and Behavioural Science, Bangor University, Wales, UK
- Emily S. Cross
- Institute for Neuroscience and Psychology, School of Psychology, University of Glasgow, Glasgow, UK
- Chair for Social Brain Sciences, Department of Humanities, Social and Political Sciences, ETHZ, Zürich, Switzerland
4. Oudah M, Makovi K, Gray K, Battu B, Rahwan T. Perception of experience influences altruism and perception of agency influences trust in human-machine interactions. Sci Rep 2024; 14:12410. [PMID: 38811749] [PMCID: PMC11136977] [DOI: 10.1038/s41598-024-63360-w]
Abstract
As robots become increasingly integrated into social and economic interactions, it becomes crucial to understand how people perceive a robot's mind. It has been argued that minds are perceived along two dimensions: experience, i.e., the ability to feel, and agency, i.e., the ability to act and take responsibility for one's actions. However, the influence of these perceived dimensions on human-machine interactions, particularly those involving altruism and trust, remains unknown. We hypothesize that the perception of experience influences altruism, while the perception of agency influences trust. To test these hypotheses, we paired participants with bot partners in a dictator game (to measure altruism) and a trust game (to measure trust) while varying the bots' perceived experience and agency, either by manipulating the degree to which the bot resembled humans or by manipulating the description of the bots' ability to feel and exercise self-control. The results demonstrate that the money transferred in the dictator game was influenced by perceived experience, while the money transferred in the trust game was influenced by perceived agency, thereby confirming our hypotheses. More broadly, our findings support the specificity of the mind hypothesis: perceptions of different dimensions of the mind lead to different kinds of social behavior.
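For readers unfamiliar with the two economic games, their standard payoff structures can be sketched as follows. The endowment and the tripling multiplier are conventional defaults from the experimental-economics literature, not values taken from this paper.

```python
def dictator_game(endowment, amount_given):
    """Dictator game: the dictator unilaterally splits an endowment.
    The amount given away is a common behavioral proxy for altruism."""
    assert 0 <= amount_given <= endowment
    return endowment - amount_given, amount_given  # (dictator, recipient)

def trust_game(endowment, amount_sent, amount_returned, multiplier=3.0):
    """Trust game: the amount sent (a proxy for trust) is multiplied
    before the trustee decides how much to return."""
    assert 0 <= amount_sent <= endowment
    received = amount_sent * multiplier
    assert 0 <= amount_returned <= received
    investor = endowment - amount_sent + amount_returned
    trustee = received - amount_returned
    return investor, trustee  # (investor, trustee) payoffs
```

In the study's design, the bot occupies the recipient/trustee role, so the first player's transfer is the measured behavioral variable.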
Affiliation(s)
- Mayada Oudah
- Social Science Division, New York University Abu Dhabi, Abu Dhabi, UAE
- Kinga Makovi
- Social Science Division, New York University Abu Dhabi, Abu Dhabi, UAE
- Kurt Gray
- Department of Psychology and Neuroscience, University of North Carolina, Chapel Hill, USA
- Balaraju Battu
- Computer Science, Science Division, New York University Abu Dhabi, Abu Dhabi, UAE
- Talal Rahwan
- Computer Science, Science Division, New York University Abu Dhabi, Abu Dhabi, UAE
5. Yam J, Gong T, Xu H. A stimulus exposure of 50 ms elicits the uncanny valley effect. Heliyon 2024; 10:e27977. [PMID: 38533075] [PMCID: PMC10963319] [DOI: 10.1016/j.heliyon.2024.e27977]
Abstract
The uncanny valley (UV) effect captures the observation that artificial entities with near-human appearances tend to create feelings of eeriness. Researchers have proposed many hypotheses to explain the UV effect, but the visual processing mechanisms of the UV have yet to be fully understood. In the present study, we examined whether the UV effect is as accessible under brief stimulus exposures as under long ones (Experiment 1). Forty-one participants, aged 21-31, rated each human-robot face, presented for either a brief (50 ms) or long (3 s) duration, on attractiveness, eeriness, and humanness (UV indices) using a 7-point Likert scale. We found that brief and long exposures generated a similar UV effect, suggesting that the UV effect is accessible in early visual processing. We then examined the effect of exposure duration on the categorisation of visual stimuli in Experiment 2. Thirty-three participants, aged 21-31, categorised faces as either human or robot in a two-alternative forced-choice task, and their response accuracy and variance were recorded. Brief stimulus exposures generated significantly higher response variation and more errors than the long exposure condition, indicating that participants were more uncertain in categorising faces under brief exposure due to insufficient viewing time. Further comparisons between Experiments 1 and 2 revealed that the eeriest faces were not the hardest to categorise. Overall, these findings indicate (1) that both the UV effect and categorical uncertainty can be elicited by brief stimulus exposure, but (2) that categorical uncertainty is unlikely to cause the UV effect. These findings provide insights into the perception of robotic faces and implications for the design of robots, androids, avatars, and artificial intelligence agents.
Affiliation(s)
- Jodie Yam
- Psychology, School of Social Sciences, Nanyang Technological University, Singapore
- Tingchen Gong
- Department of Neuroscience, Physiology and Pharmacology, University College London, UK
- Hong Xu
- Psychology, School of Social Sciences, Nanyang Technological University, Singapore
6. Chen Y, Stephani T, Bagdasarian MT, Hilsmann A, Eisert P, Villringer A, Bosse S, Gaebler M, Nikulin VV. Realness of face images can be decoded from non-linear modulation of EEG responses. Sci Rep 2024; 14:5683. [PMID: 38454099] [PMCID: PMC10920746] [DOI: 10.1038/s41598-024-56130-1]
Abstract
Artificially created human faces play an increasingly important role in our digital world. However, the so-called uncanny valley effect may cause people to perceive highly, yet not perfectly, human-like faces as eerie, bringing challenges to interaction with virtual agents. At the same time, the neurocognitive underpinnings of the uncanny valley effect remain elusive. Here, we utilized an electroencephalography (EEG) dataset of steady-state visual evoked potentials (SSVEPs) in which participants were presented with human face images at different stylization levels, ranging from simplistic cartoons to actual photographs. Assessing neuronal responses in both the frequency and time domains, we found a non-linear relationship between SSVEP amplitudes and stylization level: the most stylized cartoon images and the real photographs evoked stronger responses than images with medium stylization. Moreover, the realness of even highly similar stylization levels could be decoded from the EEG data with task-related component analysis (TRCA). Importantly, we also account for confounding factors, such as the size of the stimulus face's eyes, which previously have not been adequately addressed. Together, this study provides a basis for future research and neuronal benchmarking of real-time detection of face realness regarding three aspects: SSVEP-based neural markers, efficient classification methods, and low-level stimulus confounders.
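The frequency-domain analysis mentioned above rests on estimating the SSVEP amplitude at the stimulation frequency from an epoch's Fourier spectrum. A minimal numpy sketch on synthetic data follows; the sampling rate, stimulation frequency, and epoch length are illustrative assumptions, not the study's recording parameters.

```python
import numpy as np

def ssvep_amplitude(signal, fs, stim_freq):
    """Single-sided amplitude of the spectral component nearest the
    stimulation frequency, from the DFT of one epoch."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) * 2.0 / n  # amplitude scaling
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    idx = np.argmin(np.abs(freqs - stim_freq))
    return spectrum[idx]

# Synthetic epoch: a 12 Hz oscillation of amplitude 3 in low-level noise
fs = 250.0                     # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
epoch = 3.0 * np.sin(2 * np.pi * 12.0 * t) + 0.1 * rng.standard_normal(t.size)
amp = ssvep_amplitude(epoch, fs, 12.0)
```

Here the 2-second epoch gives a 0.5 Hz frequency resolution, so the 12 Hz component falls exactly on a DFT bin and the recovered amplitude is close to the true value of 3.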
Affiliation(s)
- Yonghao Chen
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Tilman Stephani
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Anna Hilsmann
- Department of Vision and Imaging Technologies, Fraunhofer HHI, Berlin, Germany
- Visual Computing Group, Humboldt-Universität zu Berlin, Berlin, Germany
- Peter Eisert
- Department of Vision and Imaging Technologies, Fraunhofer HHI, Berlin, Germany
- Visual Computing Group, Humboldt-Universität zu Berlin, Berlin, Germany
- Arno Villringer
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Clinic of Cognitive Neurology, University Hospital Leipzig, Leipzig, Germany
- MindBrainBody Institute at the Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Sebastian Bosse
- Department of Vision and Imaging Technologies, Fraunhofer HHI, Berlin, Germany
- Michael Gaebler
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- MindBrainBody Institute at the Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Vadim V Nikulin
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
7. Niewrzol DB, Ostermann T. Development and Validation of the Attitudes towards Social Robots Scale. Healthcare (Basel) 2024; 12:286. [PMID: 38338172] [PMCID: PMC10855967] [DOI: 10.3390/healthcare12030286]
Abstract
The idea of artificially created social robots has a long tradition, and attitudes towards robots today play a central role in healthcare. Our research aimed to develop a scale to measure attitudes towards social robots. The survey consisted of nine questions on attitudes towards robots, sociodemographic questions, the SWOP-K9 (measuring self-efficacy, optimism, and pessimism), and the BFI-10 (measuring personality dimensions). Structural relations between the items were detected using principal components analysis (PCA) with Varimax rotation; correlations and analysis of variance were used for external validation. In total, 214 participants (56.1% female, mean age: 30.8 ± 14.4 years) completed the survey. The PCA found two main components, "Robot as a helper and assistant" (RoHeA) and "Robot as an equal partner" (RoEqP), with four items each, explaining 53.2% and 17.5% of the variance with Cronbach's α of 0.915 and 0.768, respectively. Among the personality traits, "Conscientiousness" correlated weakly with both subscales and "Extraversion" correlated with RoHeA, while none of the subscales of the SWOP-K9 significantly correlated with RoEqP or RoHeA. Male participants scored significantly higher than female participants. Our survey yielded a stable and convergent two-factor instrument that exhibited convincing validity and complements other findings in the field. The ASRS can easily be used to describe attitudes towards social robots in human society. Further research, however, should investigate the discriminant and convergent validity of the ASRS.
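PCA followed by Varimax rotation, the structure-detection method named above, can be sketched with plain numpy. The synthetic two-component item responses below are illustrative, not the study's data; the varimax routine is the standard SVD-based iteration.

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Orthogonal Varimax rotation of a p x k loading matrix
    (classic SVD-based algorithm)."""
    p, k = loadings.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L ** 3 - (gamma / p) * L @ np.diag((L ** 2).sum(axis=0)))
        )
        R = u @ vt
        d_new = s.sum()
        if d_new < d * (1 + tol):  # criterion stopped improving
            break
        d = d_new
    return loadings @ R

def pca_loadings(X, n_components):
    """Component loadings from the correlation matrix of item responses."""
    eigvals, eigvecs = np.linalg.eigh(np.corrcoef(X, rowvar=False))
    order = np.argsort(eigvals)[::-1][:n_components]
    return eigvecs[:, order] * np.sqrt(eigvals[order])

# Synthetic responses with a two-component structure (8 items, 214 cases)
rng = np.random.default_rng(42)
factors = rng.standard_normal((214, 2))
W = np.array([[0.8, 0.1]] * 4 + [[0.1, 0.8]] * 4)  # hypothetical structure
X = factors @ W.T + 0.3 * rng.standard_normal((214, 8))
rotated = varimax(pca_loadings(X, 2))
```

Because Varimax is an orthogonal rotation, the total communality (sum of squared loadings) is unchanged; only its distribution across components is simplified.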
Affiliation(s)
- Thomas Ostermann
- Department of Psychology and Psychotherapy, Witten/Herdecke University, 58452 Witten, Germany
8. Pierce JE. AI-Generated Images for Speech Pathology: An Exploratory Application to Aphasia Assessment and Intervention Materials. American Journal of Speech-Language Pathology 2024; 33:443-451. [PMID: 37856083] [DOI: 10.1044/2023_ajslp-23-00142]
Abstract
PURPOSE: Images are a core component of aphasia assessment and intervention that require significant resources to produce or source. Text-to-image generation is an Artificial Intelligence (AI) technology that has recently made significant advances and could be a source of low-cost, highly customizable images. The aim of this study was to explore the potential of AI image generation for use in aphasia by examining its efficiency and cost during generation of typical images.
METHOD: Two hundred targets (80 nouns, 80 verbs, and 40 sentences) were selected at random from existing aphasia assessments and treatment software. A widely known image generator, DALL-E 2, was given text prompts for each target. The success rate, number of prompts required, and costs were summarized across target categories (noun/verb/sentence) and compared to frequency and imageability.
RESULTS: Of 200 targets, 189 (94.5%) successfully conveyed the key concept. The process took a mean of 2.3 min per target at a cost of $0.31 in U.S. dollars each. However, there were aesthetic flaws in many successful images that could impact their utility. Noun images were generated with the highest efficiency and accuracy, followed by verbs, while sentences were more challenging, particularly those with unusual scenes. Patterns of flaws and errors in image generation are discussed.
CONCLUSION: The ability to rapidly generate low-cost, high-quality images using AI is likely to be a major contribution to aphasia assessment and treatment going forward, particularly as advances in this technology continue.
Affiliation(s)
- John E Pierce
- Centre of Research Excellence in Aphasia Rehabilitation and Recovery, School of Allied Health Sciences and Sport, La Trobe University, Melbourne, Victoria, Australia
9. Zhang Y, Cao Y, Proctor RW, Liu Y. Emotional experiences of service robots' anthropomorphic appearance: a multimodal measurement method. Ergonomics 2023; 66:2039-2057. [PMID: 36803343] [DOI: 10.1080/00140139.2023.2182751]
Abstract
Anthropomorphic appearance is a key factor affecting users' attitudes and emotions. This research aimed to measure the emotional experience evoked by robots' anthropomorphic appearance at three levels (high, moderate, and low) using multimodal measurement. Fifty participants' physiological and eye-tracker data were recorded synchronously while they observed robot images displayed in random order. Afterward, the participants reported subjective emotional experiences and attitudes towards those robots. The results showed that images of moderately anthropomorphic service robots induced higher pleasure and arousal ratings, and yielded significantly larger pupil diameter and faster saccade velocity, than did the low- or high-anthropomorphism robots. Moreover, participants' facial electromyography, skin conductance, and heart-rate responses were higher when observing moderately anthropomorphic service robots. An implication of the research is that service robots' appearance should be designed to be moderately anthropomorphic; too many human-like or machine-like features may disturb users' positive emotions and attitudes.
Practitioner summary: This research aimed to measure the emotional experience caused by three types of anthropomorphic service robots in a multimodal measurement experiment. The results showed that moderately anthropomorphic service robots evoked more positive emotion than highly or minimally anthropomorphic robots. Too many human-like or machine-like features may disturb users' positive emotions.
Affiliation(s)
- Yun Zhang
- School of Economics and Management, Anhui Polytechnic University, Wuhu, P. R. China
- Yaqin Cao
- School of Economics and Management, Anhui Polytechnic University, Wuhu, P. R. China
- Robert W Proctor
- Department of Psychological Sciences, Purdue University, West Lafayette, USA
- Yu Liu
- School of Economics and Management, Anhui Polytechnic University, Wuhu, P. R. China
10. Diel A, Sato W, Hsu CT, Minato T. The inversion effect on the cubic humanness-uncanniness relation in humanlike agents. Front Psychol 2023; 14:1222279. [PMID: 37705949] [PMCID: PMC10497116] [DOI: 10.3389/fpsyg.2023.1222279]
Abstract
The uncanny valley describes the typically nonlinear relation between the esthetic appeal of artificial entities and their human likeness. The effect has been attributed to specialized (configural) processing that increases sensitivity to deviations from human norms. We investigate this effect in computer-generated, humanlike android and human faces using dynamic facial expressions. Angry and happy expressions with varying degrees of synchrony were presented upright and inverted and rated on their eeriness, strangeness, and human likeness. A sigmoidal function of human likeness and uncanniness ("uncanny slope") was found for upright expressions and a linear relation for inverted faces. While the function is not indicative of an uncanny valley, the results support the view that configural processing moderates the effect of human likeness on uncanniness and extend its role to dynamic facial expressions.
Affiliation(s)
- Alexander Diel
- Guardian Robot Project, RIKEN, Kyoto, Japan
- Cardiff University School of Psychology, Cardiff University, Cardiff, United Kingdom
11. Soldavini AM, Diaz H, Ennis JM, Simons CT. Understanding the Effects of Smart-Speaker-Based Surveys on Panelist Experience in Immersive Consumer Testing. Foods 2023; 12:2537. [PMID: 37444274] [DOI: 10.3390/foods12132537]
Abstract
Utilizing immersive technologies to reintroduce environmental context (i.e., visual, auditory, and olfactory cues) into sensory testing has been one area of research for improving panelist engagement. The current study sought to understand whether pairing smart-speaker questionnaires with immersive spaces could positively affect the panelist experience through enhanced ecological validity. To this end, subjects performed an immersive consumer test in which responses were collected using a traditional computer-based survey, a smart-speaker approach incorporating a direct translation of the computer questionnaire into a verbal survey requiring numeric responses, and an optimized smart-speaker survey with alternative question formatting requiring spoken word-based responses. After testing, participants answered the Engagement Questionnaire (EQ) to assess engagement during the test and the System Usability Scale (SUS) survey to gauge the ease, and potential adoption, of the various survey technologies used in the study. Results indicated that the traditional computer-based survey was the most engaging (p < 0.001) and usable (p < 0.001), with no differences found between the two smart-speaker surveys (p = 0.803 and p = 0.577, respectively). This suggests that the proposed optimizations for the smart-speaker surveys were not robust enough to influence engagement and usability, and further research is needed to enhance their conversational capabilities.
Affiliation(s)
- Ashley M Soldavini
- Department of Food Science & Technology, The Ohio State University, 2015 Fyffe Rd., Columbus, OH 43210, USA
- Hamza Diaz
- Aigora LLC, 2515 Whispering Oaks Ct., Midlothian, VA 23112, USA
- John M Ennis
- Aigora LLC, 2515 Whispering Oaks Ct., Midlothian, VA 23112, USA
- Christopher T Simons
- Department of Food Science & Technology, The Ohio State University, 2015 Fyffe Rd., Columbus, OH 43210, USA
12. Montag C, Klugah-Brown B, Zhou X, Wernicke J, Liu C, Kou J, Chen Y, Haas BW, Becker B. Trust toward humans and trust toward artificial intelligence are not associated: Initial insights from self-report and neurostructural brain imaging. Personality Neuroscience 2023; 6:e3. [PMID: 38107776] [PMCID: PMC10725778] [DOI: 10.1017/pen.2022.5]
Abstract
The present study examines whether self-reported trust in humans and self-reported trust in [(different) products with built-in] artificial intelligence (AI) are associated with one another and with brain structure. We sampled 90 healthy participants who provided self-reported trust in humans and AI and underwent brain structural magnetic resonance imaging assessment. We found that trust in humans, as measured by the trust facet of the personality inventory NEO-PI-R, and trust in AI products, as measured by items assessing attitudes toward AI and by a composite score based on items assessing trust toward products with in-built AI, were not significantly correlated. We also used a concomitant dimensional neuroimaging approach employing a data-driven source-based morphometry (SBM) analysis of gray-matter-density to investigate neurostructural associations with each trust domain. We found that trust in humans was negatively (and significantly) correlated with an SBM component encompassing striato-thalamic and prefrontal regions. We did not observe significant brain structural association with trust in AI. The present findings provide evidence that trust in humans and trust in AI seem to be dissociable constructs. While the personal disposition to trust in humans might be "hardwired" to the brain's neurostructural architecture (at least from an individual differences perspective), a corresponding significant link for the disposition to trust AI was not observed. These findings represent an initial step toward elucidating how different forms of trust might be processed on the behavioral and brain level.
Affiliation(s)
- Christian Montag
- Department of Molecular Psychology, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Clinical Hospital of Chengdu Brain Science Institute, MOE Key Laboratory for Neuroinformation, School of Life Science and Technology, University of Electronic Science and Technology, Chengdu, China
- Benjamin Klugah-Brown
- Clinical Hospital of Chengdu Brain Science Institute, MOE Key Laboratory for Neuroinformation, School of Life Science and Technology, University of Electronic Science and Technology, Chengdu, China
- Xinqi Zhou
- Clinical Hospital of Chengdu Brain Science Institute, MOE Key Laboratory for Neuroinformation, School of Life Science and Technology, University of Electronic Science and Technology, Chengdu, China
- Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu, China
- Jennifer Wernicke
- Department of Molecular Psychology, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Congcong Liu
- Clinical Hospital of Chengdu Brain Science Institute, MOE Key Laboratory for Neuroinformation, School of Life Science and Technology, University of Electronic Science and Technology, Chengdu, China
- Department of Psychology, Xinxiang Medical University, Henan, China
- Juan Kou
- Clinical Hospital of Chengdu Brain Science Institute, MOE Key Laboratory for Neuroinformation, School of Life Science and Technology, University of Electronic Science and Technology, Chengdu, China
- Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu, China
- Yuanshu Chen
- Clinical Hospital of Chengdu Brain Science Institute, MOE Key Laboratory for Neuroinformation, School of Life Science and Technology, University of Electronic Science and Technology, Chengdu, China
- Brian W. Haas
- Department of Psychology, University of Georgia, Athens, GA, USA
- Benjamin Becker
- Clinical Hospital of Chengdu Brain Science Institute, MOE Key Laboratory for Neuroinformation, School of Life Science and Technology, University of Electronic Science and Technology, Chengdu, China
13. Esposito A, Amorese T, Cuciniello M, Esposito AM, Cordasco G. Do you like me? Behavioral and physical features for socially and emotionally engaging interactive systems. Frontiers in Computer Science 2023. [DOI: 10.3389/fcomp.2023.1138501]
Abstract
With the aim of giving an overview of the most recent discoveries in the field of socially engaging interactive systems, the present paper discusses features affecting users' acceptance of virtual agents, robots, and chatbots. In addition, the questionnaires used in several investigations to assess the acceptance of virtual agents, robots, and chatbots (voice only) are discussed and reported in the Supplementary material to make them available to the scientific community. These questionnaires were developed by the authors as a scientific contribution to the H2020 projects EMPATHIC (http://www.empathic-project.eu/) and Menhir (https://menhir-project.eu/) and to the Italian-funded projects SIROBOTICS (https://www.exprivia.it/it-tile-6009-si-robotics/) and ANDROIDS (https://www.psicologia.unicampania.it/android-project), in order to guide the design and implementation of the assistive interactive dialog systems these projects promised. They aim to quantitatively evaluate Virtual Agents Acceptance (VAAQ), Robot Acceptance (RAQ), and Synthetic Virtual Agent Voice Acceptance (VAVAQ).
|
14
|
Benjamin R, Heine SJ. From Freud to Android: Constructing a Scale of Uncanny Feelings. J Pers Assess 2023; 105:121-133. [PMID: 35353019 DOI: 10.1080/00223891.2022.2048842] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
The uncanny valley is a topic for engineers, animators, and psychologists, yet uncanny emotions are without a clear definition. Across three studies, we developed an 8-item measure of unnerved feelings, finding that it was discriminable from other affective experiences. In Study 1, we conducted an exploratory factor analysis that yielded two factors: an unnerved factor, which connects to emotional reactions to the uncanny, and a disoriented factor, which connects to mental state changes more distally following uncanny experiences. Focusing on the unnerved measure, Study 2 tests the scale's convergent and discriminant validity, concluding that participants who watched an uncanny video were more unnerved than those who watched a disgusting, fearful, or neutral video. In Study 3, we determined that our scale detects unnerved feelings created during early 2020 by the coronavirus pandemic, a distinct source of uncanniness. These studies contribute to the psychological and interdisciplinary understanding of this strange, eerie phenomenon of being confronted with what looms just beyond our understanding.
Affiliation(s)
- Rachele Benjamin
- Department of Psychology, University of British Columbia, Vancouver, Canada
- Steven J Heine
- Department of Psychology, University of British Columbia, Vancouver, Canada
|
15
|
Vaitonytė J, Alimardani M, Louwerse MM. Scoping review of the neural evidence on the uncanny valley. COMPUTERS IN HUMAN BEHAVIOR REPORTS 2022. [DOI: 10.1016/j.chbr.2022.100263] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022] Open
|
16
|
Social perception of embodied digital technologies—a closer look at bionics and social robotics. GIO-GRUPPE-INTERAKTION-ORGANISATION-ZEITSCHRIFT FUER ANGEWANDTE ORGANISATIONSPSYCHOLOGIE 2022. [DOI: 10.1007/s11612-022-00644-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Abstract
This contribution of the journal Gruppe. Interaktion. Organisation. (GIO) presents a study on the social perception of Embodied Digital Technologies (EDTs) and provides initial insights into social perception processes concerning technicality and anthropomorphism of robots and users of prostheses. EDTs such as bionic technologies and robots are becoming increasingly common in workspaces and private lives, raising questions surrounding their perception and their acceptance. According to the Stereotype Content Model (SCM), social perception and stereotyping are based on two fundamental dimensions: Warmth (recently distinguished into Morality and Sociability) and Competence. We investigate how human actors, namely able-bodied individuals, users of low-tech prostheses and users of bionic prostheses, as well as artificial actors, such as industrial robots, social robots, and android robots, are perceived in terms of Competence, Sociability, and Morality. Results show that individuals with low-tech prostheses were perceived to be as competent as users of bionic prostheses, but only users of low-tech prostheses were perceived as less competent than able-bodied individuals. Sociability did not differ between users of low-tech or bionic prostheses or able-bodied individuals. Perceived morality was higher for users of low-tech prostheses than for users of bionic prostheses or able-bodied individuals. For robots, attributions of competence showed that industrial robots were perceived as more competent than more anthropomorphized robots. Sociability was attributed to robots to a lesser extent. Morality was not attributed to robots, regardless of their level of anthropomorphism.
|
17
|
Cross-Cultural Differences in Comfort with Humanlike Robots. Int J Soc Robot 2022; 14:1865-1873. [PMID: 36120116 PMCID: PMC9466302 DOI: 10.1007/s12369-022-00920-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/26/2022] [Indexed: 11/10/2022]
Abstract
The uncanny valley hypothesis describes how people are often less comfortable with highly humanlike robots. However, this discomfort may vary cross-culturally. This research tests how increasing robots’ physical and mental human likeness affects people’s comfort with robots in the United States and Japan, countries whose cultural and religious contexts differ in ways that are relevant to the evaluation of humanlike robots. We find that increasing physical and mental human likeness decreases comfort among Americans but not among Japanese participants. One potential explanation for these differences is that Japanese participants perceived robots to be more animate, having more of a mind, a soul, and consciousness, relative to American participants.
|
18
|
Diel A, Lewis M. The deviation-from-familiarity effect: Expertise increases uncanniness of deviating exemplars. PLoS One 2022; 17:e0273861. [PMID: 36048801 PMCID: PMC9436138 DOI: 10.1371/journal.pone.0273861] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2021] [Accepted: 08/16/2022] [Indexed: 11/19/2022] Open
Abstract
Humanlike entities deviating from the norm of human appearance are perceived as strange or uncanny. Explanations for the eeriness of deviating humanlike entities include ideas specific to human or animal stimuli like mate selection, avoidance of threat or disease, or dehumanization; however, deviation from highly familiar categories may provide a better explanation. Here it is tested whether experts and novices in a novel (greeble) category show different patterns of abnormality, attractiveness, and uncanniness responses to distorted and averaged greebles. Greeble-trained participants assessed the abnormality, attractiveness, and uncanniness of normal, averaged, and distorted greebles, and their responses were compared to those of participants who had not previously seen greebles. The data show that distorted greebles were more uncanny than normal greebles only in the training condition, and distorted greebles were more uncanny in the training compared to the control condition. In addition, averaged greebles were not more attractive than normal greebles regardless of condition. The results suggest uncanniness is elicited by deviations from stimulus categories of expertise rather than being a purely biological human- or animal-specific response.
Affiliation(s)
- Alexander Diel
- School of Psychology, Cardiff University, Cardiff, United Kingdom
- Michael Lewis
- School of Psychology, Cardiff University, Cardiff, United Kingdom
|
19
|
Shape of the Uncanny Valley and Emotional Attitudes Toward Robots Assessed by an Analysis of YouTube Comments. Int J Soc Robot 2022. [DOI: 10.1007/s12369-022-00905-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/15/2022]
Abstract
The uncanny valley hypothesis (UVH) suggests that almost, but not fully, humanlike artificial characters elicit a feeling of eeriness or discomfort in observers. This study used Natural Language Processing of YouTube comments to provide ecologically-valid, non-laboratory results about people’s emotional reactions toward robots. It contains analyses of 224,544 comments from 1515 videos showing robots from a wide humanlikeness spectrum. The humanlikeness scores were acquired from the Anthropomorphic roBOT database. The analysis showed that people use words related to eeriness to describe very humanlike robots. Humanlikeness was linearly related to both general sentiment and perceptions of eeriness: more humanlike robots elicit more negative emotions. One of the subscales of humanlikeness, Facial Features, showed a UVH-like relationship with both sentiment and eeriness. The exploratory analysis demonstrated that the most suitable words for measuring the self-reported uncanny valley effect are ‘scary’ and ‘creepy’. In contrast to theoretical expectations, the results showed that humanlikeness was not related to either pleasantness or attractiveness. Finally, it was also found that the size of robots influences sentiment toward the robots. According to the analysis, the reason behind this is the perception of smaller robots as more playable (as toys), although the prediction that bigger robots would be perceived as more threatening was not supported.
|
20
|
Diel A, Lewis M. The uncanniness of written text is explained by configural deviation and not by processing disfluency. Perception 2022; 51:3010066221114436. [PMID: 35912496 DOI: 10.1177/03010066221114436] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Deviating from human norms in human-looking artificial entities can elicit uncanny sensations, described as the uncanny valley. This study investigates in three tasks whether configural deviation in written text also increases uncanniness. It further evaluates whether the uncanniness of text is better explained by perceptual disfluency and especially deviations from specialized categories, or conceptual disfluency caused by ambiguity. In the first task, lower sentence readability predicted uncanniness, but deviating sentences were more uncanny than typical sentences despite being just as readable. Furthermore, familiarity with a language increased the effect of configural deviation on uncanniness but not the effect of non-configural deviation (blur). In the second and third tasks, semantically ambiguous words and sentences were not uncannier than typical sentences, but deviating, non-ambiguous sentences were. Deviations from categories with specialized processing mechanisms thus better fit the observed results as an explanation of the uncanny valley than ambiguity-based explanations.
|
21
|
Two uncanny valleys: Re-evaluating the uncanny valley across the full spectrum of real-world human-like robots. COMPUTERS IN HUMAN BEHAVIOR 2022. [DOI: 10.1016/j.chb.2022.107340] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
22
|
Dubé S, Santaguida M, Anctil D, Zhu CY, Thomasse L, Giaccari L, Oassey R, Vachon D, Johnson A. Perceived stigma and erotic technology: From sex toys to erobots. PSYCHOLOGY & SEXUALITY 2022. [DOI: 10.1080/19419899.2022.2067783] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Affiliation(s)
- S. Dubé
- Department of Psychology, Concordia University, Montreal, Québec, Canada
- M. Santaguida
- Department of Psychology, Concordia University, Montreal, Québec, Canada
- D. Anctil
- Department of Philosophy, Jean-de-Brébeuf College, Montreal, Québec, Canada
- International Observatory on the Societal Impacts of Artificial Intelligence and Digital Technology, Laval University, Montreal, Québec, Canada
- C. Y. Zhu
- Department of Psychology, Concordia University, Montreal, Québec, Canada
- L. Thomasse
- Department of Psychology, Concordia University, Montreal, Québec, Canada
- L. Giaccari
- Department of Psychology, Concordia University, Montreal, Québec, Canada
- R. Oassey
- Department of Psychology, Concordia University, Montreal, Québec, Canada
- D. Vachon
- Department of Psychology, McGill University, Montreal, Québec, Canada
- A. Johnson
- Department of Psychology, Concordia University, Montreal, Québec, Canada
|
23
|
Kumar S, Miller EG, Mende M, Scott ML. Language matters: humanizing service robots through the use of language during the COVID-19 pandemic. MARKETING LETTERS 2022; 33:607-623. [PMID: 35469318 PMCID: PMC9020763 DOI: 10.1007/s11002-022-09630-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Accepted: 04/05/2022] [Indexed: 06/14/2023]
Abstract
Service robots are emerging quickly in the marketplace (e.g., in hotels, restaurants, and healthcare), especially as COVID-19-related health concerns and social distancing guidelines have affected people's desire and ability to interact with other humans. However, while robots can increase efficiency and enable service offerings with reduced human contact, prior research shows a systematic consumer aversion toward service robots relative to human service providers. This potential dilemma raises the managerial question of how firms can overcome consumer aversion and better employ service robots. Drawing on prior research that supports the use of language for building interpersonal relationships, this research examines whether the type of language (social-oriented vs. task-oriented language) a service robot uses can improve consumer responses to and evaluations of the focal service robot, particularly in light of consumers' COVID-19-related stress. The results show that consumers respond more favorably to a service robot that uses a social-oriented (vs. task-oriented) language style, particularly when these consumers experience relatively higher levels of COVID-19-related stress. These findings contribute to initial empirical evidence in marketing for the efficacy of leveraging robots' language style to improve customer evaluations of service robots, especially under stressful circumstances. Overall, the results from two experimental studies not only point to actionable managerial implications but also to a new avenue of research on service robots that examines customer-robot interactions through the lens of language and in contexts that can be stressful for consumers (e.g., healthcare or some financial service settings). Supplementary information: The online version contains supplementary material available at 10.1007/s11002-022-09630-x.
Affiliation(s)
- Smriti Kumar
- Marketing Department, Isenberg School of Management, University of Massachusetts Amherst, Amherst, MA USA
- Elizabeth G. Miller
- Marketing Department, Isenberg School of Management, University of Massachusetts Amherst, Amherst, MA, USA
- Martin Mende
- Marketing Department, College of Business, Florida State University, Tallahassee, FL, USA
- Maura L. Scott
- Marketing Department, College of Business, Florida State University, Tallahassee, FL, USA
|
24
|
Sharma M, Vemuri K. Accepting Human-like Avatars in Social and Professional Roles. ACM TRANSACTIONS ON HUMAN-ROBOT INTERACTION 2022. [DOI: 10.1145/3526026] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Abstract
Humans report perceptions of unease or eeriness as humanoid/android robots and digital avatars approach human-like physical resemblance, a phenomenon alluded to by the Uncanny Valley theory. This study extends the discussions on interactions and acceptance of digital avatars with findings from three experiments. In the first, perceptive evaluation of actors in clips from computer-generated animation and a live-action version of the same movie was examined. In the second experiment, we considered short clips with highly realistic digital avatars to measure recognition ability, the extent of eeriness, and specific physical features identified as unreal. The fixation area and pupil size variation recorded using an eye tracker were analyzed to infer attention to the body, face, and emotional response, respectively. Building on these findings, the third experiment looked at acceptance in roles requiring human skill, empathy, and cognitive ability. The results show that based on perceptions from physical attributes, the eeriness scores diverge from the uncanny valley theory as human-likeness increases. The realistic CGI and mocap technology could have helped cross the valley. Visual attention inferred from gaze behavior was similar for live-action and CGI. At the same time, we observe pupil size changes reflecting emotions like eeriness when the avatars either talked or smiled. Proficiency and acceptance scores were lower for roles requiring complex social cognition processes, such as friends and judicial decision-making. Interestingly, real-life stereotypes of gender roles were transferred to digital avatars too. The findings suggest an ambiguity in accepting human-like avatars in social and professional interactions, emphasizing the need for a multi-dimensional approach when applying the uncanny valley theory. A detailed and contextual examination is imperative as technological advancements have placed humans closer to co-existing with digital or physical android/humanoid robots.
Affiliation(s)
- Medha Sharma
- Cognitive Science Lab, International Institute of Information Technology, Hyderabad
- Kavita Vemuri
- Cognitive Science Lab, International Institute of Information Technology, Hyderabad
|
25
|
Onnasch L, Hildebrandt CL. Impact of Anthropomorphic Robot Design on Trust and Attention in Industrial Human-Robot Interaction. ACM TRANSACTIONS ON HUMAN-ROBOT INTERACTION 2022. [DOI: 10.1145/3472224] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
The application of anthropomorphic features to robots is generally considered beneficial for human-robot interaction (HRI). Although previous research has mainly focused on social robots, the phenomenon gains increasing attention in industrial human-robot interaction as well. In this study, the impact of anthropomorphic design of a collaborative industrial robot on the dynamics of trust and visual attention allocation was examined. Participants interacted with a robot, which was either anthropomorphically or non-anthropomorphically designed. Unexpectedly, attribute-based trust measures revealed no beneficial effect of anthropomorphism but even a negative impact on the perceived reliability of the robot. Trust behavior was not significantly affected by an anthropomorphic robot design during faultless interactions, but showed a relatively steeper decrease after participants experienced a failure of the robot. With regard to attention allocation, the study clearly reveals a distracting effect of anthropomorphic robot design. The results emphasize that anthropomorphism might not be an appropriate feature in industrial HRI as it not only failed to reveal positive effects on trust, but distracted participants from relevant task areas, which might be a significant drawback with regard to occupational safety in HRI.
Affiliation(s)
- Linda Onnasch
- Engineering Psychology, Humboldt-Universität zu Berlin, Berlin, Germany
|
26
|
Diel A, Weigelt S, MacDorman KF. A Meta-analysis of the Uncanny Valley's Independent and Dependent Variables. ACM TRANSACTIONS ON HUMAN-ROBOT INTERACTION 2022. [DOI: 10.1145/3470742] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
The uncanny valley (UV) effect is a negative affective reaction to human-looking artificial entities. It hinders comfortable, trust-based interactions with android robots and virtual characters. Despite extensive research, a consensus has not formed on its theoretical basis or methodologies. We conducted a meta-analysis to assess operationalizations of human likeness (independent variable) and the UV effect (dependent variable). Of 468 studies, 72 met the inclusion criteria. These studies employed 10 different stimulus creation techniques, 39 affect measures, and 14 indirect measures. Based on 247 effect sizes, a three-level meta-analysis model revealed the UV effect had a large effect size, Hedges’ g = 1.01 [0.80, 1.22]. A mixed-effects meta-regression model with creation technique as the moderator variable revealed face distortion produced the largest effect size, g = 1.46 [0.69, 2.24], followed by distinct entities, g = 1.20 [1.02, 1.38], realism render, g = 0.99 [0.62, 1.36], and morphing, g = 0.94 [0.64, 1.24]. Affective indices producing the largest effects were threatening, likable, aesthetics, familiarity, and eeriness, and indirect measures were dislike frequency, categorization reaction time, like frequency, avoidance, and viewing duration. This meta-analysis, the first on the UV effect, provides a methodological foundation and design principles for future research.
Affiliation(s)
- Alexander Diel
- School of Psychology, Cardiff University, Cardiff, United Kingdom
- Sarah Weigelt
- Department of Vision, Visual Impairments & Blindness, Faculty of Rehabilitation Sciences, Technical University of Dortmund, Dortmund, Germany
- Karl F. MacDorman
- School of Informatics and Computing, Indiana University, Indianapolis, IN, USA
|
27
|
Spontaneous perspective taking toward robots: The unique impact of humanlike appearance. Cognition 2022; 224:105076. [PMID: 35364401 DOI: 10.1016/j.cognition.2022.105076] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2021] [Revised: 02/25/2022] [Accepted: 02/28/2022] [Indexed: 11/20/2022]
Abstract
As robots rapidly enter society, how does human social cognition respond to their novel presence? Focusing on one foundational social-cognitive capacity, visual perspective taking, seven studies reveal that people spontaneously adopt a robot's unique perspective and do so with patterns of variation that mirror perspective taking toward humans. As they do with humans, people take a robot's visual perspective when it displays goal-directed actions. Moreover, perspective taking is absent when the agent lacks human appearance, increases when the agent looks highly humanlike, and persists even when the humanlike agent is perceived as eerie or as obviously lacking a mind. These results suggest that visual perspective taking toward robots is consistent with a "mere appearance hypothesis" (a form of stimulus generalization based on humanlike appearance) rather than following an "uncanny valley" pattern or arising from mind perception. Robots' superficial human resemblance may trigger and modulate social-cognitive responses in human observers originally developed for human interaction.
|
28
|
Lv L, Huang M, Huang R. Anthropomorphize service robots: the role of human nature traits. SERVICE INDUSTRIES JOURNAL 2022. [DOI: 10.1080/02642069.2022.2048821] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Affiliation(s)
- Linxiang Lv
- Economics and Management School, Wuhan University, Wuhan, People’s Republic of China
- Minxue Huang
- Economics and Management School, Wuhan University, Wuhan, People’s Republic of China
- Ruyao Huang
- Economics and Management School, Wuhan University, Wuhan, People’s Republic of China
|
29
|
Diel A, Lewis M. Familiarity, orientation, and realism increase face uncanniness by sensitizing to facial distortions. J Vis 2022; 22:14. [PMID: 35344022 PMCID: PMC8982630 DOI: 10.1167/jov.22.4.14] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
The uncanny valley predicts aversive reactions toward near-humanlike entities. Greater uncanniness is elicited by distortions in realistic than unrealistic faces, possibly due to familiarity. Experiment 1 investigated how familiarity and inversion affect uncanniness of facial distortions and the ability to detect differences between the distorted variants of the same face (distortion sensitivity). Familiar or unfamiliar celebrity faces were incrementally distorted and presented either upright or inverted. Uncanniness ratings increased across the distortion levels, and were stronger for familiar and upright faces. Distortion sensitivity increased with increasing distortion difference levels, again stronger for familiar and upright faces. Experiment 2 investigated how face realism, familiarity, and face orientation interacted for the increase of uncanniness across distortions. Realism increased the increase of uncanniness across the distortion levels, further enhanced by upright orientation and familiarity. The findings show that familiarity, upright orientation, and high face realism increase the sensitivity of uncanniness, likely by increasing distortion sensitivity. Finally, a moderated linear function of face realism and deviation level could explain the uncanniness of stimuli better than a quadratic function. A re-interpretation of the uncanny valley as sensitivity toward deviations from familiarized patterns is discussed.
Affiliation(s)
- Michael Lewis
- School of Psychology, Cardiff University, Cardiff, UK
|
30
|
Kathleen B, Víctor FC, Amandine M, Aurélie C, Elisabeth P, Michèle G, Rachid A, Hélène C. Addressing joint action challenges in HRI: Insights from psychology and philosophy. Acta Psychol (Amst) 2022; 222:103476. [PMID: 34974283 DOI: 10.1016/j.actpsy.2021.103476] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2021] [Revised: 11/19/2021] [Accepted: 12/15/2021] [Indexed: 11/24/2022] Open
Abstract
The vast expansion of research in human-robot interactions (HRI) these last decades has been accompanied by the design of increasingly skilled robots for engaging in joint actions with humans. However, these advances have encountered significant challenges to ensure fluent interactions and sustain human motivation through the different steps of joint action. After exploring current literature on joint action in HRI, leading to a more precise definition of these challenges, the present article proposes some perspectives borrowed from psychology and philosophy showing the key role of communication in human interactions. From mutual recognition between individuals to the expression of commitment and social expectations, we argue that communicative cues can facilitate coordination, prediction, and motivation in the context of joint action. The description of several notions thus suggests that some communicative capacities can be implemented in the context of joint action for HRI, leading to an integrated perspective of robotic communication.
Affiliation(s)
- Belhassein Kathleen
- CLLE, UMR5263, Toulouse University, CNRS, UT2J, France; LAAS-CNRS, UPR8001, Toulouse University, CNRS, France
- Alami Rachid
- LAAS-CNRS, UPR8001, Toulouse University, CNRS, France
- Cochet Hélène
- CLLE, UMR5263, Toulouse University, CNRS, UT2J, France
|
31
|
Kumazaki H, Muramatsu T, Yoshikawa Y, Matsumoto Y, Kuwata M, Takata K, Ishiguro H, Mimura M. Differences in the Optimal Motion of Android Robots for the Ease of Communications Among Individuals With Autism Spectrum Disorders. Front Psychiatry 2022; 13:883371. [PMID: 35722543 PMCID: PMC9203835 DOI: 10.3389/fpsyt.2022.883371] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/25/2022] [Accepted: 05/13/2022] [Indexed: 12/01/2022] Open
Abstract
Android robots are employed in various fields. Many individuals with autism spectrum disorders (ASD) have the motivation and aptitude for using such robots. Interactions with these robots are structured to resemble social situations in which certain social behaviors can occur and to simulate daily life. Considering that individuals with ASD have strong likes and dislikes, ensuring not only the optimal appearance but also the optimal motion of robots is important to achieve smooth interaction and to draw out the potential of robotic interventions. We investigated whether individuals with ASD found it easier to talk to an android robot with little motion (i.e., only opening and closing its mouth during speech) or an android robot with much motion (i.e., in addition to opening and closing its mouth during speech, moving its eyes from side to side and up and down, blinking, deeply breathing, and turning or moving its head or body at random). This was a crossover study in which a total of 25 participants with ASD experienced mock interviews conducted by an android robot with much spontaneous facial and bodily motion and an android robot with little motion. We compared demographic data between participants who answered that the android robot with much motion was easier to talk to than android robot with little motion and those who answered the opposite. In addition, we investigated how each type of demographic data was related to participants' feeling of comfort in an interview setting with an android robot. Fourteen participants indicated that the android robot with little motion was easier to talk to than the robot with much motion, whereas 11 participants answered the opposite. There were significant differences between these two groups in the sensory sensitivity score, which reflects the tendency to show a low neurological threshold. In addition, we found correlations between the sensation seeking score, which reflects the tendency to show a high neurological threshold, and self-report ratings of comfort in each condition. These results provide preliminary support for the importance of setting the motion of an android robot considering the sensory traits of ASD.
Affiliation(s)
- Hirokazu Kumazaki
- Department of Future Psychiatric Medicine, Graduate School of Biomedical Sciences, Nagasaki University, Nagasaki, Japan; National Center of Neurology and Psychiatry, Department of Preventive Intervention for Psychiatric Disorders, National Institute of Mental Health, Tokyo, Japan; College of Science and Engineering, Kanazawa University, Kanazawa, Japan; Department of Neuropsychiatry, Keio University School of Medicine, Tokyo, Japan; Human Augmentation Research Center, National Institute of Advanced Industrial Science and Technology, Chiba, Japan
- Taro Muramatsu
- Department of Neuropsychiatry, Keio University School of Medicine, Tokyo, Japan
- Yuichiro Yoshikawa
- Department of Systems Innovation, Graduate School of Engineering Science, Osaka University, Osaka, Japan
- Yoshio Matsumoto
- National Center of Neurology and Psychiatry, Department of Preventive Intervention for Psychiatric Disorders, National Institute of Mental Health, Tokyo, Japan; Human Augmentation Research Center, National Institute of Advanced Industrial Science and Technology, Chiba, Japan; Department of Clinical Research on Social Recognition and Memory, Research Center for Child Mental Development, Kanazawa University, Kanazawa, Japan
- Masaki Kuwata
- Human Augmentation Research Center, National Institute of Advanced Industrial Science and Technology, Chiba, Japan
- Keiji Takata
- National Center of Neurology and Psychiatry, Department of Preventive Intervention for Psychiatric Disorders, National Institute of Mental Health, Tokyo, Japan
- Hiroshi Ishiguro
- Department of Systems Innovation, Graduate School of Engineering Science, Osaka University, Osaka, Japan
- Masaru Mimura
- Department of Neuropsychiatry, Keio University School of Medicine, Tokyo, Japan
|
32
|
Mara M, Appel M, Gnambs T. Human-Like Robots and the Uncanny Valley. ZEITSCHRIFT FUR PSYCHOLOGIE-JOURNAL OF PSYCHOLOGY 2022. [DOI: 10.1027/2151-2604/a000486] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/13/2023]
Abstract
In the field of human-robot interaction, the well-known uncanny valley hypothesis proposes a curvilinear relationship between a robot’s degree of human likeness and the observers’ responses to the robot. While low to medium human likeness should be associated with increased positive responses, a shift to negative responses is expected for highly anthropomorphic robots. As empirical findings on the uncanny valley hypothesis are inconclusive, we conducted a random-effects meta-analysis of 49 studies (total N = 3,556) that reported 131 evaluations of robots based on the Godspeed scales for anthropomorphism (i.e., human likeness) and likeability. Our results confirm more positive responses for more human-like robots at low to medium anthropomorphism, with moving robots rated as more human-like but not necessarily more likable than static ones. However, because highly anthropomorphic robots were sparsely utilized in previous studies, no conclusions regarding proposed adverse effects at higher levels of human likeness can be made at this stage.
Collapse
Affiliation(s)
- Martina Mara
- LIT Robopsychology Lab, Johannes Kepler University Linz, Austria
| | - Markus Appel
- Psychology of Communication and New Media, University of Würzburg, Germany
| | - Timo Gnambs
- Leibniz Institute for Educational Trajectories (LIfBi), University of Bamberg, Germany
| |
Collapse
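The random-effects pooling reported in the abstract above (49 studies, 131 evaluations) can be illustrated with the standard DerSimonian-Laird estimator. The function below is a generic sketch of that estimator, not the authors' analysis code, and the effect sizes in the usage example are made up.

```python
def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooled estimate.

    effects: per-study effect sizes; variances: their sampling variances.
    Returns (pooled_effect, tau2), where tau2 is the estimated
    between-study variance.
    """
    w = [1.0 / v for v in variances]                             # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sum(w)    # fixed-effect mean
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)                # DL tau^2, floored at 0
    w_star = [1.0 / (v + tau2) for v in variances]               # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_star, effects)) / sum(w_star)
    return pooled, tau2

pooled, tau2 = random_effects_pool([0.2, 0.5, 0.8], [0.04, 0.05, 0.06])
```

When the studies are heterogeneous, tau2 grows and the weights even out, so large studies dominate the pooled estimate less than under a fixed-effect model.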
|
33
|
|
34
|
Schreuter D, van der Putten P, Lamers MH. Trust Me on This One: Conforming to Conversational Assistants. Minds Mach (Dordr) 2021. [DOI: 10.1007/s11023-021-09581-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
35
|
Fortunati L, Manganelli AM, Höflich J, Ferrin G. Exploring the Perceptions of Cognitive and Affective Capabilities of Four, Real, Physical Robots with a Decreasing Degree of Morphological Human Likeness. Int J Soc Robot 2021. [DOI: 10.1007/s12369-021-00827-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
This paper describes an investigation of student perceptions of the cognitive and affective capabilities of four robots with a decreasing degree of morphological human likeness. We showed the robots (i.e., InMoov, Padbot, Joy Robot, and Turtlebot) to 62 students. After showing the students each of these robots and explaining their main features and capabilities, we administered a fill-in questionnaire. Our main hypothesis was that the perception of a robot’s cognitive and affective capabilities varies with its appearance, and in particular with its degree of human likeness. The main results of this study indicate that the scores attributed to the cognitive and emotional capabilities of these robots are not modulated in correspondence with their morphological similarity to humans. Furthermore, overall, the scores given to all of these robots for their ability to carry out mental functions are low, and even lower scores are given to their ability to feel emotions. There is a split between InMoov, the robot with the highest degree of human likeness, and all of the others. Our results also indicate that: (1) a robot’s morphological similarity to humans is not automatically perceived as such by observers and is not considered a value in itself for the robot; and (2) even at lower levels of robot–human likeness an uncanny valley effect arises, but it is considerably mitigated by curiosity.
Collapse
|
36
|
|
37
|
More than appearance: the uncanny valley effect changes with a robot’s mental capacity. CURRENT PSYCHOLOGY 2021. [DOI: 10.1007/s12144-021-02298-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
|
38
|
Stock-Homburg R. Survey of Emotions in Human–Robot Interactions: Perspectives from Robotic Psychology on 20 Years of Research. Int J Soc Robot 2021. [DOI: 10.1007/s12369-021-00778-6] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Abstract
Knowledge production within the interdisciplinary field of human–robot interaction (HRI) with social robots has accelerated, despite the continued fragmentation of the research domain. Together, these features make it hard to remain at the forefront of research or to assess the collective evidence pertaining to specific areas, such as the role of emotions in HRI. This systematic review of state-of-the-art research into humans’ recognition of and responses to artificial emotions of social robots during HRI encompasses the years 2000–2020. In accordance with a stimulus–organism–response framework, the review advances robotic psychology by revealing current knowledge about (1) the generation of artificial robotic emotions (stimulus), (2) human recognition of robotic artificial emotions (organism), and (3) human responses to robotic emotions (response), as well as (4) other contingencies that affect emotions as moderators.
Collapse
|
39
|
Lin C, Šabanović S, Dombrowski L, Miller AD, Brady E, MacDorman KF. Parental Acceptance of Children's Storytelling Robots: A Projection of the Uncanny Valley of AI. Front Robot AI 2021; 8:579993. [PMID: 34095237 PMCID: PMC8172185 DOI: 10.3389/frobt.2021.579993] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2020] [Accepted: 02/09/2021] [Indexed: 11/13/2022] Open
Abstract
Parent-child story time is an important ritual of contemporary parenting. Recently, robots with artificial intelligence (AI) have become common. Parental acceptance of children's storytelling robots, however, has received scant attention. To address this, we conducted a qualitative study with 18 parents using the research technique design fiction. Overall, parents held mixed, though generally positive, attitudes toward children's storytelling robots. In their estimation, these robots would outperform screen-based technologies for children's story time. However, the robots' potential to adapt and to express emotion caused some parents to feel ambivalent about the robots, which might hinder their adoption. We found three predictors of parental acceptance of these robots: context of use, perceived agency, and perceived intelligence. Parents' speculation revealed an uncanny valley of AI: a nonlinear relation between the human likeness of the artificial agent's mind and affinity for the agent. Finally, we consider the implications of children's storytelling robots, including how they could enhance equity in children's access to education, and propose directions for research on their design to benefit family well-being.
Collapse
Affiliation(s)
- Chaolan Lin
- Department of Cognitive Science, University of California, San Diego, CA, United States
| | - Selma Šabanović
- The Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, United States
| | - Lynn Dombrowski
- The School of Informatics and Computing, Indiana University, Indianapolis, IN, United States
| | - Andrew D Miller
- The School of Informatics and Computing, Indiana University, Indianapolis, IN, United States
| | - Erin Brady
- The School of Informatics and Computing, Indiana University, Indianapolis, IN, United States
| | - Karl F MacDorman
- The School of Informatics and Computing, Indiana University, Indianapolis, IN, United States
| |
Collapse
|
40
|
Oleksy T, Wnuk A. Do women perceive sex robots as threatening? The role of political views and presenting the robot as a female-vs male-friendly product. COMPUTERS IN HUMAN BEHAVIOR 2021. [DOI: 10.1016/j.chb.2020.106664] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
|
41
|
Diel A, MacDorman KF. Creepy cats and strange high houses: Support for configural processing in testing predictions of nine uncanny valley theories. J Vis 2021; 21:1. [PMID: 33792617 PMCID: PMC8024776 DOI: 10.1167/jov.21.4.1] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
In 1970, Masahiro Mori proposed the uncanny valley (UV), a region in a human-likeness continuum where an entity risks eliciting a cold, eerie, repellent feeling. Recent studies have shown that this feeling can be elicited by entities modeled not only on humans but also nonhuman animals. The perceptual and cognitive mechanisms underlying the UV effect are not well understood, although many theories have been proposed to explain them. To test the predictions of nine classes of theories, a within-subjects experiment was conducted with 136 participants. The theories' predictions were compared with ratings of 10 classes of stimuli on eeriness and coldness indices. One type of theory, configural processing, predicted eight out of nine significant effects. Atypicality, in its extended form, in which the uncanny valley effect is amplified by the stimulus appearing more human, also predicted eight. Threat avoidance predicted seven; atypicality, perceptual mismatch, and mismatch+ predicted six; category+, novelty avoidance, mate selection, and psychopathy avoidance predicted five; and category uncertainty predicted three. Empathy's main prediction was not supported. Given that the number of significant effects predicted depends partly on our choice of hypotheses, a detailed consideration of each result is advised. We do, however, note the methodological value of examining many competing theories in the same experiment.
Collapse
Affiliation(s)
- Alexander Diel
- School of Psychology, Cardiff University, Cardiff, United Kingdom; Indiana University School of Informatics and Computing, Indianapolis, IN, USA
| | - Karl F MacDorman
- Indiana University School of Informatics and Computing, Indianapolis, IN, USA
| |
Collapse
|
42
|
A Novel Speech to Mouth Articulation System for Realistic Humanoid Robots. J INTELL ROBOT SYST 2021. [DOI: 10.1007/s10846-021-01332-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
43
|
The creepy, the bad and the ugly: exploring perceptions of moral character and social desirability in uncanny faces. CURRENT PSYCHOLOGY 2021. [DOI: 10.1007/s12144-021-01452-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
|
44
|
Abel: Integrating Humanoid Body, Emotions, and Time Perception to Investigate Social Interaction and Human Cognition. APPLIED SCIENCES-BASEL 2021. [DOI: 10.3390/app11031070] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Humanoids have been created to assist or replace humans in many applications, with encouraging results in contexts where social and emotional interaction is required, such as healthcare, education, and therapy. Bioinspiration, which has often guided the design of their bodies and minds, has also made them excellent research tools: probably the best platforms by which we can model, test, and understand the human mind and behavior. Driven by the aim of creating a believable robot for interactive applications, as well as a research platform for investigating human cognition and emotion, we are constructing a new humanoid social robot: Abel. In this paper, we discuss three of the fundamental principles that motivated the design of Abel and its cognitive and emotional system: hyper-realistic humanoid aesthetics, human-inspired emotion processing, and human-like perception of time. After a brief review of the state of the art on these topics, we present the robot at its current stage of development, the prospects for its application, and how it could meet expectations as a tool to investigate the human mind, behavior, and consciousness.
Collapse
|
45
|
The Importance of Realism, Character, and Genre: How Theatre Can Support the Creation of Likeable Sociable Robots. Int J Soc Robot 2020. [DOI: 10.1007/s12369-020-00637-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
Stage plays, theories of theatre, narrative studies, and robotics research can serve to identify, explore, and interrogate theatrical elements that support the effective performance of sociable humanoid robots. Theatre, including its parts of performance, aesthetics, character, and genre, can also reveal features of human–robot interaction key to creating humanoid robots that are likeable rather than uncanny. In particular, this can be achieved by relating Mori's (1970/2012) concept of total appearance to realism. Realism is broader and more subtle in its workings than is generally recognised in its operationalization in studies that focus solely on appearance. For example, it is complicated by genre. A realistic character cast in a detective drama will convey different qualities and expectations than the same character in a dystopian drama or romantic comedy. The implications of realism and genre carry over into real life. As stage performances and robotics studies reveal, likeability depends on creating aesthetically coherent representations of character, where all the parts coalesce to produce a socially identifiable figure demonstrating predictable behaviour.
Collapse
|
46
|
de la Rosa S, Meilinger T, Streuber S, Saulton A, Fademrecht L, Quiros-Ramirez MA, Bülthoff H, Bülthoff I, Cañal-Bruland R. Visual appearance modulates motor control in social interactions. Acta Psychol (Amst) 2020; 210:103168. [PMID: 32919093 DOI: 10.1016/j.actpsy.2020.103168] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2020] [Revised: 06/24/2020] [Accepted: 08/14/2020] [Indexed: 11/28/2022] Open
Abstract
The goal of new adaptive technologies is to allow humans to interact with technical devices, such as robots, in natural ways akin to human interaction. Essential for achieving this goal is an understanding of the factors that support natural interaction. Here, we examined whether human motor control is linked to the visual appearance of the interaction partner. Motor control theories consider kinematic information, but not visual appearance, as important for the control of motor movements (Flash & Hogan, 1985; Harris & Wolpert, 1998; Viviani & Terzuolo, 1982). We investigated the sensitivity of motor control to visual appearance during the execution of a social interaction, i.e., a high-five. In a novel mixed-reality setup, participants executed a high-five with a three-dimensional, life-size human-looking or robot-looking avatar. Our results demonstrate that movement trajectories and adjustments to perturbations depended on the visual appearance of the avatar, despite both avatars carrying out identical movements. Moreover, two well-known motor theories (minimum jerk, two-thirds power law) better predict robot-directed than human-directed interaction trajectories. The dependence of motor control on the human likeness of the interaction partner suggests that different motor control principles might be at work in object-directed and human-directed interactions.
Collapse
Affiliation(s)
- Stephan de la Rosa
- Department for Perception Cognition and Action, Max Planck Institute for Biological Cybernetics, Germany; Department of Psychology, FOM University, Augsburg, Germany.
| | - Tobias Meilinger
- Department for Perception Cognition and Action, Max Planck Institute for Biological Cybernetics, Germany
| | - Stephan Streuber
- Department of Computer and Information Science, Universität Konstanz, Germany
| | - Aurelie Saulton
- Department for Perception Cognition and Action, Max Planck Institute for Biological Cybernetics, Germany; IMPRS for Cognitive and Systems Neuroscience, Eberhard Karls Universität Tübingen, Germany
| | - Laura Fademrecht
- Department for Perception Cognition and Action, Max Planck Institute for Biological Cybernetics, Germany; IMPRS for Cognitive and Systems Neuroscience, Eberhard Karls Universität Tübingen, Germany
| | | | - Heinrich Bülthoff
- Department for Perception Cognition and Action, Max Planck Institute for Biological Cybernetics, Germany
| | - Isabelle Bülthoff
- Department for Perception Cognition and Action, Max Planck Institute for Biological Cybernetics, Germany
| | - Rouwen Cañal-Bruland
- Department for the Psychology of Human Movement and Sport, Institute of Sport Science, Friedrich Schiller University Jena, Germany
| |
Collapse
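The two motor theories named in the abstract above, Flash and Hogan's minimum-jerk model and the two-thirds power law, have simple closed forms. The sketch below states both; the function names and parameters are illustrative and do not come from the paper's methods.

```python
def minimum_jerk(x0, xf, t, duration):
    """Minimum-jerk position at time t of a point-to-point movement
    from x0 to xf (Flash & Hogan, 1985): the trajectory is smooth,
    with zero velocity and acceleration at both endpoints."""
    tau = t / duration  # normalized time in [0, 1]
    return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

def two_thirds_speed(curvature, gain=1.0):
    """Tangential speed predicted by the two-thirds power law:
    v = gain * curvature**(-1/3) (equivalently, angular speed
    scales with curvature**(2/3)), so movement slows in tight curves."""
    return gain * curvature ** (-1.0 / 3.0)
```

Comparing observed trajectories (e.g., of a high-five) against such predictions is one way to quantify how closely a movement follows a given motor-control model.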
|
47
|
Wang S, Cheong YF, Dilks DD, Rochat P. The Uncanny Valley Phenomenon and the Temporal Dynamics of Face Animacy Perception. Perception 2020; 49:1069-1089. [PMID: 32903162 DOI: 10.1177/0301006620952611] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Human replicas highly resembling people tend to elicit eerie sensations, a phenomenon known as the uncanny valley. To test whether this effect is attributable to people's ascription of mind to (i.e., mind perception hypothesis) or subtraction of mind from androids (i.e., dehumanization hypothesis), in Study 1, we examined the effect of face exposure time on the perceived animacy of human, android, and mechanical-looking robot faces. In Study 2, in addition to exposure time, we also manipulated the spatial frequency of faces, by preserving either their fine (high spatial frequency) or coarse (low spatial frequency) information, to examine its effect on faces' perceived animacy and uncanniness. We found that perceived animacy decreased as a function of exposure time only in android but not in human or mechanical-looking robot faces (Study 1). In addition, the manipulation of spatial frequency eliminated the decrease in android faces' perceived animacy and reduced their perceived uncanniness (Study 2). These findings link perceived uncanniness in androids to the temporal dynamics of face animacy perception. We discuss these findings in relation to the dehumanization hypothesis and alternative hypotheses of the uncanny valley phenomenon.
Collapse
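The spatial-frequency manipulation in Study 2 amounts to splitting an image into a low-SF (coarse) and a high-SF (fine) component. As a rough, dependency-free sketch, the box blur below stands in for the Fourier-domain filtering such studies typically use; it is illustrative only and not the authors' procedure.

```python
def low_pass(image, radius=1):
    """Coarse (low-SF) version of a 2D grayscale image (list of rows):
    box-blur each pixel over its (2*radius+1)^2 neighborhood,
    clipping the window at the image borders."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            patch = [image[ii][jj]
                     for ii in range(max(0, i - radius), min(h, i + radius + 1))
                     for jj in range(max(0, j - radius), min(w, j + radius + 1))]
            out[i][j] = sum(patch) / len(patch)
    return out

def high_pass(image, radius=1):
    """Fine (high-SF) residual: the original minus its low-pass version."""
    lp = low_pass(image, radius)
    return [[image[i][j] - lp[i][j] for j in range(len(image[0]))]
            for i in range(len(image))]
```

A uniform image carries no fine detail, so its high-pass residual is zero everywhere, while its low-pass version is unchanged.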
|
48
|
Henkel AP, Čaić M, Blaurock M, Okan M. Robotic transformative service research: deploying social robots for consumer well-being during COVID-19 and beyond. JOURNAL OF SERVICE MANAGEMENT 2020. [DOI: 10.1108/josm-05-2020-0145] [Citation(s) in RCA: 67] [Impact Index Per Article: 16.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Purpose: Besides its direct physical health consequences, COVID-19 affects, through social isolation, a considerably larger share of consumers, with deleterious effects on their psychological well-being. Two vulnerable consumer groups are particularly affected: older adults and children. The purpose of this paper is to take a transformative research perspective on how social robots can be deployed to advance the well-being of these vulnerable consumers, and to spur robotic transformative service research (RTSR).
Design/methodology/approach: This paper follows a conceptual approach that integrates findings from various domains: service research, social robotics, social psychology, and medicine.
Findings: Two key findings advanced in this paper are (1) a typology of robotic transformative service (i.e., entertainer, social enabler, mentor, and friend) as a function of consumers' state of social isolation, well-being focus, and robot capabilities, and (2) a future research agenda for RTSR.
Practical implications: This paper guides service consumers and providers and robot developers in identifying and developing the most appropriate type of social robot for advancing the well-being of vulnerable consumers in social isolation.
Originality/value: This study is the first to integrate social robotics and transformative service research by developing a typology of social robots as a guiding framework for assessing the status quo of transformative robotic service, on the basis of which it advances a future research agenda for RTSR. It further complements the underdeveloped body of service research with a focus on eudaimonic consumer well-being.
Collapse
|
49
|
Kim W, Kim N, Lyons JB, Nam CS. Factors affecting trust in high-vulnerability human-robot interaction contexts: A structural equation modelling approach. APPLIED ERGONOMICS 2020; 85:103056. [PMID: 32174344 DOI: 10.1016/j.apergo.2020.103056] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/22/2018] [Revised: 11/28/2019] [Accepted: 01/13/2020] [Indexed: 06/10/2023]
Abstract
The current research proposed and tested a structural equation model (SEM) describing hypothesized relationships among factors affecting trust in human-robot interaction (HRI), such as trustworthiness, human-likeness, intelligence, perfect automation schema (PAS), and affect. A video stimulus depicting an autonomous guard robot interacting with humans was employed, and 233 participants were recruited via Amazon's Mechanical Turk. Human-related and robot-related metrics were found to affect trustworthiness, which subsequently affected trust. In particular, ability (a trustworthiness facet) was a dominant factor affecting trust in HRI. Integrity was found to mediate the relationships between robot- and human-related metrics and trustworthiness. This study also showed a correlation between intelligence and trustworthiness, as well as between PAS and trustworthiness. The findings of the present study have significant implications for both theory and practice regarding the factors and levels that affect trust in HRI.
Collapse
Affiliation(s)
- Wonjoon Kim
- Department of Industrial & Management Engineering, Sungkyul University, South Korea
| | - Nayoung Kim
- Edward P. Fitts Department of Industrial & Systems Engineering, North Carolina State University, USA
| | | | - Chang S Nam
- Edward P. Fitts Department of Industrial & Systems Engineering, North Carolina State University, USA.
| |
Collapse
|
50
|
Mathur MB, Reichling DB, Lunardini F, Geminiani A, Antonietti A, Ruijten PA, Levitan CA, Nave G, Manfredi D, Bessette-Symons B, Szuts A, Aczel B. Uncanny but not confusing: Multisite study of perceptual category confusion in the Uncanny Valley. COMPUTERS IN HUMAN BEHAVIOR 2020. [DOI: 10.1016/j.chb.2019.08.029] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
|