1. Igwe K, Durrheim K. Using artificial agents to nudge outgroup altruism and reduce ingroup favoritism in human-agent interaction. Sci Rep 2024;14:15850. PMID: 38982070; PMCID: PMC11233637; DOI: 10.1038/s41598-024-64682-5.
Abstract
Ingroup favoritism and intergroup discrimination can be mutually reinforcing during social interaction, threatening intergroup cooperation and the sustainability of societies. In two studies (N = 880), we investigated whether promoting prosocial outgroup altruism would weaken the ingroup favoritism cycle of influence. Using novel methods of human-agent interaction via a computer-mediated experimental platform, we introduced outgroup altruism by (i) nonadaptive artificial agents with preprogrammed outgroup altruistic behavior (Study 1; N = 400) and (ii) adaptive artificial agents whose altruistic behavior was informed by the prediction of a machine learning algorithm (Study 2; N = 480). A rating task ensured that the observed behavior did not result from the participant's awareness of the artificial agents. In Study 1, nonadaptive agents prompted ingroup members to withhold cooperation from ingroup agents and reinforced ingroup favoritism among humans. In Study 2, adaptive agents were able to weaken ingroup favoritism over time by maintaining a good reputation with both the ingroup and outgroup members, who perceived agents as being fairer than humans and rated agents as more human than humans. We conclude that a good reputation of the individual exhibiting outgroup altruism is necessary to weaken ingroup favoritism and improve intergroup cooperation. Thus, reputation is important for designing nudge agents.
Affiliation(s)
- Kevin Igwe
- Department of Psychology, Faculty of Humanities, University of Johannesburg, Bunting Road, Auckland Park, Johannesburg, 2092, South Africa
- Kevin Durrheim
- Department of Psychology, Faculty of Humanities, University of Johannesburg, Bunting Road, Auckland Park, Johannesburg, 2092, South Africa
2. Rossignoli D, Manzi F, Gaggioli A, Marchetti A, Massaro D, Riva G, Maggioni MA. The Importance of Being Consistent: Attribution of Mental States in Strategic Human-Robot Interactions. Cyberpsychology, Behavior, and Social Networking 2024;27:498-506. PMID: 38770627; DOI: 10.1089/cyber.2023.0353.
Abstract
This article investigates the attribution of mental state (AMS) to an anthropomorphic robot by humans in a strategic interaction. We conducted an experiment in which human subjects are paired with either a human or an anthropomorphic robot to play an iterated Prisoner's Dilemma game, and we tested whether AMS is dependent on the robot "consistency," that is, the correspondence between the robot's verbal reaction and its behavior after a nonoptimal social outcome of the game is obtained. We find that human partners are attributed a higher mental state level than robotic partners, regardless of the partner's consistency between words and actions. Conversely, the level of AMS assigned to the robot is significantly higher when the robot is consistent in its words and actions. This finding is robust to the inclusion of psychological factors such as risk attitude and trust, and it holds regardless of subjects' initial beliefs about the adaptability of the robot. Finally, we find that when the robot apologizes for its behavior and defects in the following stage, the epistemic component of the AMS significantly increases.
Affiliation(s)
- Domenico Rossignoli
- DISEIS, Department of International Economics, Institutions and Development, Università Cattolica del Sacro Cuore, Milano, Italy
- CSCC, Cognitive Science and Communication Research Center, Università Cattolica del Sacro Cuore, Milano, Italy
- HuRoLab, Università Cattolica del Sacro Cuore, Milano, Italy
- Federico Manzi
- HuRoLab, Università Cattolica del Sacro Cuore, Milano, Italy
- UniToM, Università Cattolica del Sacro Cuore, Milano, Italy
- Department of Psychology, Università Cattolica del Sacro Cuore, Milano, Italy
- Andrea Gaggioli
- Research Center of Communication Psychology, Università Cattolica del Sacro Cuore, Milano, Italy
- Humane Technology Lab, Università Cattolica del Sacro Cuore, Milano, Italy
- ATN-P Lab, IRCCS Istituto Auxologico Italiano, Milano, Italy
- Antonella Marchetti
- HuRoLab, Università Cattolica del Sacro Cuore, Milano, Italy
- UniToM, Università Cattolica del Sacro Cuore, Milano, Italy
- Humane Technology Lab, Università Cattolica del Sacro Cuore, Milano, Italy
- Department of Psychology, Università Cattolica del Sacro Cuore, Milano, Italy
- Davide Massaro
- HuRoLab, Università Cattolica del Sacro Cuore, Milano, Italy
- Humane Technology Lab, Università Cattolica del Sacro Cuore, Milano, Italy
- Department of Psychology, Università Cattolica del Sacro Cuore, Milano, Italy
- Giuseppe Riva
- Research Center of Communication Psychology, Università Cattolica del Sacro Cuore, Milano, Italy
- Humane Technology Lab, Università Cattolica del Sacro Cuore, Milano, Italy
- Department of Psychology, Università Cattolica del Sacro Cuore, Milano, Italy
- Mario A Maggioni
- DISEIS, Department of International Economics, Institutions and Development, Università Cattolica del Sacro Cuore, Milano, Italy
- CSCC, Cognitive Science and Communication Research Center, Università Cattolica del Sacro Cuore, Milano, Italy
- HuRoLab, Università Cattolica del Sacro Cuore, Milano, Italy
- Humane Technology Lab, Università Cattolica del Sacro Cuore, Milano, Italy
3. Maalouly E, Yamazaki R, Nishio S, Nørskov M, Kamaga K, Komai S, Chiba K, Atsumi K, Akao KI. The effect of conversation on altruism: A comparative study with different media and generations. PLoS One 2024;19:e0301769. PMID: 38875175; PMCID: PMC11178171; DOI: 10.1371/journal.pone.0301769.
Abstract
Despite the overwhelming evidence of climate change and its effects on future generations, most individuals are still hesitant to make environmental changes that would especially benefit future generations. In this study, we investigate whether dialogue can influence people's altruistic behavior toward future generations of humans, and how it may be affected by participant age and the appearance of the conversation partner. We used a human, an android robot called Telenoid, and a speaker as representatives of future generations. Participants were split among an old age group and a young age group and were randomly assigned to converse with one of the aforementioned representatives. We asked the participants to play a round of the Dictator Game with the representative they were assigned, followed by an interactive conversation and another round of the Dictator Game in order to gauge their level of altruism. The results show that, on average, participants gave more money after having an interactive conversation, and that older adults tend to give more money than young adults. There were no significant differences between the three representatives. The results show that empathy might have been the most important factor in the increase in altruistic behavior for all participants.
Affiliation(s)
- Elie Maalouly
- Graduate School of Engineering Science, Osaka University, Osaka, Japan
- Ryuji Yamazaki
- Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Osaka, Japan
- Shuichi Nishio
- Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Osaka, Japan
- Marco Nørskov
- Department for Philosophy and the History of Ideas, Aarhus University, Aarhus, Denmark
- Kohei Kamaga
- Faculty of Economics, Sophia University, Tokyo, Japan
- Shoji Komai
- Faculty of Engineering, International Professional University of Technology in Tokyo, Tokyo, Japan
- Kiyoshi Chiba
- School of Social Sciences, Waseda University, Tokyo, Japan
- Ken-Ichi Akao
- School of Social Sciences, Waseda University, Tokyo, Japan
4. Sauer J, Sonderegger A, Semmer NK. The role of social support in human-automation interaction. Ergonomics 2024;67:732-743. PMID: 38414262; DOI: 10.1080/00140139.2024.2314580.
Abstract
This theoretical article examines the concept of social support in the context of human-automation interaction, outlining several critical issues. We identified several factors that we expect to influence the consequences of social support and to what extent it is perceived as appropriate (e.g. provider possibilities, recipient expectations), notably regarding potential threats to self-esteem. We emphasise the importance of performance (including extra-role performance) as a potential outcome, whereas previous research has primarily concentrated on health and well-being. We discuss to what extent automation may provide different types of social support (e.g. emotional, instrumental), and how it differs from human support. Finally, we propose a taxonomy of automated support, arguing that source of support is not a binary concept. We conclude that more empirical work is needed to examine the multiple effects of social support for core performance indicators and extra-role performance and emphasise that there are ethical questions involved.
Affiliation(s)
- Juergen Sauer
- Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Andreas Sonderegger
- Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Business School, Institute for New Work, Bern University of Applied Sciences, Bern, Switzerland
5. Terrucha I, Fernández Domingos E, Santos FC, Simoens P, Lenaerts T. The art of compensation: How hybrid teams solve collective-risk dilemmas. PLoS One 2024;19:e0297213. PMID: 38335192; PMCID: PMC10857581; DOI: 10.1371/journal.pone.0297213.
Abstract
It is widely known how the human ability to cooperate has influenced the thriving of our species. However, as we move towards a hybrid human-machine future, it is still unclear how the introduction of artificial agents in our social interactions affects this cooperative capacity. In a one-shot collective risk dilemma, where enough members of a group must cooperate in order to avoid a collective disaster, we study the evolutionary dynamics of cooperation in a hybrid population. In our model, we consider a hybrid population composed of both adaptive and fixed behavior agents. The latter serve as proxies for the machine-like behavior of artificially intelligent agents who implement stochastic strategies previously learned offline. We observe that the adaptive individuals adjust their behavior as a function of the presence of artificial agents in their groups to compensate for their cooperative efforts (or lack thereof). We also find that risk plays a determinant role when assessing whether or not we should form hybrid teams to tackle a collective risk dilemma. When the risk of collective disaster is high, cooperation in the adaptive population falls dramatically in the presence of cooperative artificial agents. It is a story of compensation, rather than cooperation: adaptive agents have to secure group success when the artificial agents are not cooperative enough, but will rather not cooperate if the others do so. On the contrary, when the risk of collective disaster is low, success is highly improved while cooperation levels within the adaptive population remain the same. Artificial agents can improve the collective success of hybrid teams. However, their application requires a true risk assessment of the situation in order to actually benefit the adaptive population (i.e. the humans) in the long-term.
Affiliation(s)
- Inês Terrucha
- IDLab, Ghent University-IMEC, Gent, Belgium
- AILab, Vrije Universiteit Brussel, Brussels, Belgium
- Elias Fernández Domingos
- Machine Learning Group, Université Libre de Bruxelles, Brussels, Belgium
- FARI Institute, Université Libre de Bruxelles-Vrije Universiteit Brussel, Brussels, Belgium
- Francisco C. Santos
- INESC-ID & Instituto Superior Técnico, Universidade de Lisboa, Porto Salvo, Portugal
- ATP-group, Porto Salvo, Portugal
- Tom Lenaerts
- AILab, Vrije Universiteit Brussel, Brussels, Belgium
- Machine Learning Group, Université Libre de Bruxelles, Brussels, Belgium
- FARI Institute, Université Libre de Bruxelles-Vrije Universiteit Brussel, Brussels, Belgium
- Center for Human-Compatible AI, UC Berkeley, Berkeley, California, United States of America
6. Lebrun B, Temtsin S, Vonasch A, Bartneck C. Detecting the corruption of online questionnaires by artificial intelligence. Front Robot AI 2024;10:1277635. PMID: 38371744; PMCID: PMC10869497; DOI: 10.3389/frobt.2023.1277635.
Abstract
Online questionnaires that use crowdsourcing platforms to recruit participants have become commonplace, due to their ease of use and low costs. Artificial intelligence (AI)-based large language models (LLMs) have made it easy for bad actors to automatically fill in online forms, including generating meaningful text for open-ended tasks. These technological advances threaten the data quality for studies that use online questionnaires. This study tested whether text generated by an AI for the purpose of an online study can be detected by both humans and automatic AI detection systems. While humans were able to correctly identify the authorship of such text above chance level (76% accuracy), their performance was still below what would be required to ensure satisfactory data quality. Researchers currently have to rely on a lack of interest among bad actors to successfully use open-ended responses as a useful tool for ensuring data quality. Automatic AI detection systems are currently completely unusable. If AI submissions of responses become too prevalent, then the costs associated with detecting fraudulent submissions will outweigh the benefits of online questionnaires. Individual attention checks will no longer be a sufficient tool to ensure good data quality. This problem can only be systematically addressed by crowdsourcing platforms. They cannot rely on automatic AI detection systems and it is unclear how they can ensure data quality for their paying clients.
Affiliation(s)
- Benjamin Lebrun
- School of Psychology, Speech, and Hearing, University of Canterbury, Christchurch, New Zealand
- Sharon Temtsin
- Department of Computer Science and Software Engineering, University of Canterbury, Christchurch, New Zealand
- Andrew Vonasch
- School of Psychology, Speech, and Hearing, University of Canterbury, Christchurch, New Zealand
- Christoph Bartneck
- Department of Computer Science and Software Engineering, University of Canterbury, Christchurch, New Zealand
7. von Schenk A, Klockmann V, Köbis N. Social Preferences Toward Humans and Machines: A Systematic Experiment on the Role of Machine Payoffs. Perspectives on Psychological Science 2023:17456916231194949. PMID: 37751604; DOI: 10.1177/17456916231194949.
Abstract
There is growing interest in the field of cooperative artificial intelligence (AI), that is, settings in which humans and machines cooperate. By now, more than 160 studies from various disciplines have reported on how people cooperate with machines in behavioral experiments. Our systematic review of the experimental instructions reveals that the implementation of the machine payoffs and the information participants receive about them differ drastically across these studies. In an online experiment (N = 1,198), we compare how these different payoff implementations shape people's revealed social preferences toward machines. When matched with machine partners, people reveal substantially stronger social preferences and reciprocity when they know that a human beneficiary receives the machine payoffs than when they know that no such "human behind the machine" exists. When participants are not informed about machine payoffs, we found weak social preferences toward machines. Comparing survey answers with those from a follow-up study (N = 150), we conclude that people form their beliefs about machine payoffs in a self-serving way. Thus, our results suggest that the extent to which humans cooperate with machines depends on the implementation and information about the machine's earnings.
Affiliation(s)
- Alicia von Schenk
- Center for Humans and Machines, Max Planck Institute for Human Development
- Department of Economics, University of Würzburg
- Victor Klockmann
- Center for Humans and Machines, Max Planck Institute for Human Development
- Department of Economics, University of Würzburg
- Nils Köbis
- Center for Humans and Machines, Max Planck Institute for Human Development
8. Santos FP. On consensus and cooperation: Comment on "Reputation and reciprocity" by Xia et al. Phys Life Rev 2023;46:187-189. PMID: 37480728; DOI: 10.1016/j.plrev.2023.07.005.
Affiliation(s)
- Fernando P Santos
- Informatics Institute, University of Amsterdam, Amsterdam, the Netherlands
9. Kamino W, Hsu LJ, Joshi S, Randall N, Agnihotri A, Tsui KM, Šabanović S. Making Meaning Together: Co-designing a Social Robot for Older Adults with Ikigai Experts. Int J Soc Robot 2023;15:1-16. PMID: 37359428; PMCID: PMC10200010; DOI: 10.1007/s12369-023-01006-z.
Abstract
A sense of meaning and purpose in life-known in Japan as one's ikigai-can lead to better health outcomes, an improved sense of well-being, and longer life as people age. The design of socially assistive robots, however, has so far focused largely on the more hedonic aims of supporting positive affect and happiness through interactions with robots. To explore how social robots might be able to support people's ikigai, we performed (1) in-depth interviews with 12 'ikigai experts' who formally support and/or study older adults (OAs)' ikigai and (2) 5 co-design workshop sessions with 10 such experts. Our interview findings show that expert practitioners define ikigai in a holistic way in their everyday experience and practice, incorporating physical, social, and mental activities that relate not only to the individual and their behaviors, but also to their relationships with other people and to their connection with the broader community (3 levels of ikigai). Our co-design workshops showed that ikigai experts were overall positive towards the use of social robots to support OAs' ikigai, particularly in the roles of an information-provider and social enabler that connects OAs to other people and activities in their communities. They also point out areas of potential risk, including the need to maintain OAs' independence, relationships with others, and privacy, which should be considered in design. This research is the first to explore the co-design of social robots that can support people's sense of ikigai-meaning and purpose-as they age.
Affiliation(s)
- Waki Kamino
- Department of Informatics, Indiana University Bloomington, Bloomington, IN, USA
- Long-Jing Hsu
- Department of Informatics, Indiana University Bloomington, Bloomington, IN, USA
- Swapna Joshi
- Department of Informatics, Indiana University Bloomington, Bloomington, IN, USA
- Natasha Randall
- Department of Informatics, Indiana University Bloomington, Bloomington, IN, USA
- Selma Šabanović
- Department of Informatics, Indiana University Bloomington, Bloomington, IN, USA
10. Maalouly E, Yamazaki R, Nishio S, Nørskov M, Kamaga K, Komai S, Chiba K, Atsumi K, Akao KI. Assessing the effect of dialogue on altruism toward future generations: A preliminary study. Frontiers in Computer Science 2023. DOI: 10.3389/fcomp.2023.1129340.
Abstract
Introduction: Despite the abundance of evidence on climate change and its consequences on future generations, people, in general, are still reluctant to change their actions and behaviors toward the environment that would particularly benefit posterity. In this study, we took a preliminary step in a new research direction to explore humans' altruistic behavior toward future generations of people and whether it can be affected by dialogue.
Methods: We used an android robot called Telenoid as a representative of future generations by explaining that the robot is controlled by an Artificial Intelligence (AI) living in a simulation of our world in the future. To measure people's altruistic behavior toward it, we asked the participants to play a round of the Dictator Game with the Telenoid before having an interactive conversation with the Telenoid and then playing another round.
Results: On average, participants gave more money to the Telenoid in the second round (after having an interactive conversation). The average amount of money increased from 20% in the first to about 30% in the second round.
Discussion: The results indicate that the conversation with the robot might have been responsible for the change in altruistic behavior toward the Telenoid. Contrary to our expectations, the personality of the participants did not appear to have an influence on their change of behavior, but other factors might have contributed. We finally discuss the influence of other possible factors such as empathy and the appearance of the robot. However, the preliminary nature of this study should deter us from making any definitive conclusions, but the results are promising for establishing the ground for future experiments.
11. Zonca J, Folsø A, Sciutti A. Social Influence Under Uncertainty in Interaction with Peers, Robots and Computers. Int J Soc Robot 2023. DOI: 10.1007/s12369-022-00959-x.
Abstract
Taking advice from others requires confidence in their competence. This is important for interaction with peers, but also for collaboration with social robots and artificial agents. Nonetheless, we do not always have access to information about others’ competence or performance. In these uncertain environments, do our prior beliefs about the nature and the competence of our interacting partners modulate our willingness to rely on their judgments? In a joint perceptual decision making task, participants made perceptual judgments and observed the simulated estimates of either a human participant, a social humanoid robot or a computer. Then they could modify their estimates based on this feedback. Results show participants’ belief about the nature of their partner biased their compliance with its judgments: participants were more influenced by the social robot than human and computer partners. This difference emerged strongly at the very beginning of the task and decreased with repeated exposure to empirical feedback on the partner’s responses, disclosing the role of prior beliefs in social influence under uncertainty. Furthermore, the results of our functional task suggest an important difference between human–human and human–robot interaction in the absence of overt socially relevant signal from the partner: the former is modulated by social normative mechanisms, whereas the latter is guided by purely informational mechanisms linked to the perceived competence of the partner.
12. Mahmoudi Asl A, Molinari Ulate M, Franco Martin M, van der Roest H. Methodologies Used to Study the Feasibility, Usability, Efficacy, and Effectiveness of Social Robots for Elderly Adults: Scoping Review. J Med Internet Res 2022;24:e37434. PMID: 35916695; PMCID: PMC9379790; DOI: 10.2196/37434.
Abstract
Background New research fields to design social robots for older people are emerging. By providing support with communication and social interaction, these robots aim to increase quality of life. Because of the decline in functioning due to cognitive impairment in older people, social robots are regarded as promising, especially for people with dementia. Although study outcomes are hopeful, the quality of studies on the effectiveness of social robots for the elderly is still low due to many methodological limitations. Objective We aimed to review the methodologies used thus far in studies evaluating the feasibility, usability, efficacy, and effectiveness of social robots in clinical and social settings for elderly people, including persons with dementia. Methods Dedicated search strings were developed. Searches in MEDLINE (PubMed), Web of Science, PsycInfo, and CINAHL were performed on August 13, 2020. Results In the 33 included papers, 23 different social robots were investigated for their feasibility, usability, efficacy, and effectiveness. A total of 8 (24.2%) studies included elderly persons in the community, 9 (27.3%) included long-term care facility residents, and 16 (48.5%) included people with dementia. Most of the studies had a single aim, of which 7 (21.2%) focused on efficacy and 7 (21.2%) focused on effectiveness. Moreover, forms of randomized controlled trials were the most applied designs. Feasibility and usability were often studied together in mixed methods or experimental designs and were most often studied in individual interventions. Feasibility was often assessed with the Unified Theory of the Acceptance and Use of Technology model. Efficacy and effectiveness studies used a range of psychosocial and cognitive outcome measures. However, the included studies failed to find significant improvements in quality of life, depression, and cognition. 
Conclusions This study identified several shortcomings in methodologies used to evaluate social robots, resulting in ambivalent study findings. To improve the quality of these types of studies, efficacy/effectiveness studies will benefit from appropriate randomized controlled trial designs with large sample sizes and individual intervention sessions. Experimental designs might work best for feasibility and usability studies. For each of the 3 goals (efficacy/effectiveness, feasibility, and usability) we also recommend a mixed method of data collection. Multiple interaction sessions running for at least 1 month might aid researchers in drawing significant results and prove the real long-term impact of social robots.
Affiliation(s)
- Aysan Mahmoudi Asl
- Department of Research and Development, Iberian Institute of Research in Psycho-Sciences, INTRAS Foundation, Zamora, Spain
- Psycho-Sciences Research Group, Salamanca Biomedical Research Institute, Salamanca University, Salamanca, Spain
- Mauricio Molinari Ulate
- Department of Research and Development, Iberian Institute of Research in Psycho-Sciences, INTRAS Foundation, Zamora, Spain
- Psycho-Sciences Research Group, Salamanca Biomedical Research Institute, Salamanca University, Salamanca, Spain
- Manuel Franco Martin
- Psycho-Sciences Research Group, Salamanca Biomedical Research Institute, Salamanca University, Salamanca, Spain
- Psychiatry and Mental Health Service, Assistance Complex of Zamora, Zamora, Spain
- Henriëtte van der Roest
- Department on Aging, Netherlands Institute of Mental Health and Addiction, Trimbos Institute, Utrecht, Netherlands
13. Mara M, Appel M, Gnambs T. Human-Like Robots and the Uncanny Valley. Zeitschrift für Psychologie 2022. DOI: 10.1027/2151-2604/a000486.
Abstract
In the field of human-robot interaction, the well-known uncanny valley hypothesis proposes a curvilinear relationship between a robot’s degree of human likeness and the observers’ responses to the robot. While low to medium human likeness should be associated with increased positive responses, a shift to negative responses is expected for highly anthropomorphic robots. As empirical findings on the uncanny valley hypothesis are inconclusive, we conducted a random-effects meta-analysis of 49 studies (total N = 3,556) that reported 131 evaluations of robots based on the Godspeed scales for anthropomorphism (i.e., human likeness) and likeability. Our results confirm more positive responses for more human-like robots at low to medium anthropomorphism, with moving robots rated as more human-like but not necessarily more likable than static ones. However, because highly anthropomorphic robots were sparsely utilized in previous studies, no conclusions regarding proposed adverse effects at higher levels of human likeness can be made at this stage.
Affiliation(s)
- Martina Mara
- LIT Robopsychology Lab, Johannes Kepler University Linz, Austria
- Markus Appel
- Psychology of Communication and New Media, University of Würzburg, Germany
- Timo Gnambs
- Leibniz Institute for Educational Trajectories (LIfBi), University of Bamberg, Germany
14. Zonca J, Folsø A, Sciutti A. The role of reciprocity in human-robot social influence. iScience 2021;24:103424. PMID: 34877490; PMCID: PMC8633024; DOI: 10.1016/j.isci.2021.103424.
Abstract
Humans are constantly influenced by others’ behavior and opinions. Importantly, social influence among humans is shaped by reciprocity: we are more likely to follow the advice of someone who has taken our own opinions into consideration. In the current work, we investigate whether reciprocal social influence can emerge while interacting with a social humanoid robot. In a joint task, a human participant and a humanoid robot made perceptual estimates and could then overtly modify them after observing the partner’s judgment. Results show that endowing the robot with the ability to express and modulate its own level of susceptibility to the human’s judgments was a double-edged sword. On the one hand, participants lost confidence in the robot’s competence when the robot followed their advice; on the other hand, participants were unwilling to disclose their lack of confidence to the susceptible robot, suggesting the emergence of reciprocal mechanisms of social influence supporting human-robot collaboration.
Highlights:
- If a social robot is susceptible to our advice, we lose confidence in it
- However, the robot’s susceptibility does not deteriorate social influence
- These effects do not appear during interaction with a computer
- Susceptible robots can promote reciprocity but also hinder social learning
Affiliation(s)
- Joshua Zonca
- Cognitive Architecture for Collaborative Technologies (CONTACT) Unit, Italian Institute of Technology, Via Enrico Melen, 83, 16152 Genoa, GE, Italy
- Corresponding author
- Anna Folsø
- Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genoa, 16145 Genoa, Italy
- Alessandra Sciutti
- Cognitive Architecture for Collaborative Technologies (CONTACT) Unit, Italian Institute of Technology, Via Enrico Melen, 83, 16152 Genoa, GE, Italy
15
Human-Robot Interaction in Groups: Methodological and Research Practices. Multimodal Technologies and Interaction 2021. [DOI: 10.3390/mti5100059] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Understanding the behavioral dynamics that underlie human-robot interactions in groups remains one of the core challenges in social robotics research. However, despite growing interest in this topic, there is still a lack of established and validated measures that allow researchers to analyze human-robot interactions in group scenarios, and very few have been developed and tested specifically for research conducted in the wild. This is a problem because it hinders the development of general models of human-robot interaction and makes the comprehension of the inner workings of the relational dynamics between humans and robots in group contexts significantly more difficult. In this paper, we aim to provide a reflection on the current state of research on human-robot interaction in small groups and to outline directions for future research, with an emphasis on methodological and transversal issues.
16
Greitemeyer T. Prosocial modeling: person role models and the media. Curr Opin Psychol 2021; 44:135-139. [PMID: 34628366 DOI: 10.1016/j.copsyc.2021.08.024] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2021] [Revised: 08/16/2021] [Accepted: 08/19/2021] [Indexed: 11/19/2022]
Abstract
Much of human learning comes from observing others. In this article, I review empirical work on the extent to which prosocial behavior is inspired by exposure to prosocial models. Witnessing a prosocial model in person leads to an increase in the observer's future prosocial behavior. Other research has shown that exposure to media (TV, music, video games) depicting prosocial behavior can likewise increase prosocial behavior. Theoretical explanations and underlying mechanisms of the prosocial modeling effect are discussed. As prosocial behavior appears to be contagious, exposure to prosocial models is an effective way to encourage positive social encounters.
Affiliation(s)
- Tobias Greitemeyer
- Institut für Psychologie, Universität Innsbruck, Innrain 52, 6020 Innsbruck, Austria.
17
Jin SV, Youn S. Why do consumers with social phobia prefer anthropomorphic customer service chatbots? Evolutionary explanations of the moderating roles of social phobia. Telematics and Informatics 2021. [DOI: 10.1016/j.tele.2021.101644] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
18
Prosocial behavior toward machines. Curr Opin Psychol 2021; 43:260-265. [PMID: 34481333 DOI: 10.1016/j.copsyc.2021.08.004] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2021] [Revised: 08/02/2021] [Accepted: 08/05/2021] [Indexed: 01/19/2023]
Abstract
Building on the "computers are social actors" framework, we provide an overview of research demonstrating that humans behave prosocially toward machines. In doing so, we outline that similar motivational and cognitive processes play a role when people act in prosocial ways toward humans and machines. These include perceiving the machine as somewhat human, applying social categories to the machine, being socially influenced by the machine, and experiencing social emotions toward the machine. We conclude that studying prosocial behavior toward machines is important to facilitate proper functioning of human-machine interactions. We further argue that machines provide an interesting yet underutilized resource in the study of prosocial behavior because they are both highly controllable and humanlike.
19
Peter J, Kühne R, Barco A. Can social robots affect children's prosocial behavior? An experimental study on prosocial robot models. Computers in Human Behavior 2021. [DOI: 10.1016/j.chb.2021.106712] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
20
van Straten CL, Peter J, Kühne R, Barco A. The wizard and I: How transparent teleoperation and self-description (do not) affect children’s robot perceptions and child-robot relationship formation. AI & Society 2021. [DOI: 10.1007/s00146-021-01202-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
It has been well documented that children perceive robots as social, mental, and moral others. Studies on child-robot interaction may encourage this perception of robots, first, by using a Wizard of Oz (i.e., teleoperation) set-up and, second, by having robots engage in self-description. However, much remains unknown about the effects of transparent teleoperation and self-description on children’s perception of, and relationship formation with, a robot. As an initial step to address this research gap, we conducted an experimental study with a 2 × 2 (teleoperation: overt/covert; self-description: yes/no) between-subjects design in which 168 children aged 7–10 interacted once with a Nao robot. Transparency about the teleoperation procedure decreased children’s perceptions of the robot’s autonomy and anthropomorphism. Self-description reduced the degree to which children perceived the robot as being similar to themselves. Transparent teleoperation and self-description affected neither children’s perceptions of the robot’s animacy and social presence nor their closeness to and trust in the robot.