1. Chen Y, Zou X, Wang Y, He H, Zhang X. The enhancement of temporal binding effect after negative social feedback. Cogn Emot 2024; 38:691-708. [PMID: 38381089] [DOI: 10.1080/02699931.2024.2314985]
Abstract
The present study investigated the effect of social feedback on the experiences of our actions and the outcomes (e.g. temporal binding between an action and its outcome, reflecting individuals' causal beliefs modulated by their agency judgments). In Experiment 1a, participants freely decided (voluntary action) their action timing to cause an outcome, which was followed by social feedback. A larger temporal binding (TB) following negative vs. positive events was found. This effect appeared neither in the random context where the causal belief between the action and outcome was absent (Experiment 1b) nor in the involuntary action context where participants' action timing was instructed (Experiment 1c). Experiments 2a and 2b examined the effect when the action-outcome was occluded, including reversing the order of outcome and feedback in Experiment 2b. Experiments 3a and 3b investigated the effect with only social feedback or only action-outcome presented. Results revealed that the effect found in Experiment 1 was driven by social feedback and independent of the availability of the action-outcome and the position of social feedback. Our findings demonstrate a stronger temporal integration of the action and its outcome following negative social feedback, reflecting fluctuations in sense of agency when faced with social feedback.
Affiliation(s)
- Yunyun Chen
- Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education (Beijing Normal University), Faculty of Psychology, Beijing Normal University, Beijing, People's Republic of China
- Xintong Zou
- Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education (Beijing Normal University), Faculty of Psychology, Beijing Normal University, Beijing, People's Republic of China
- Yuying Wang
- Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education (Beijing Normal University), Faculty of Psychology, Beijing Normal University, Beijing, People's Republic of China
- Hong He
- Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu, People's Republic of China
- Xuemin Zhang
- Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education (Beijing Normal University), Faculty of Psychology, Beijing Normal University, Beijing, People's Republic of China
2. Mosca O, Manunza A, Manca S, Vivanet G, Fornara F. Digital technologies for behavioral change in sustainability domains: a systematic mapping review. Front Psychol 2024; 14:1234349. [PMID: 38239482] [PMCID: PMC10795171] [DOI: 10.3389/fpsyg.2023.1234349]
Abstract
Sustainability research has emerged as an interdisciplinary area of knowledge about how to achieve sustainable development, while political actions toward the goal are still in their infancy. A sustainable world is mirrored by a healthy environment in which humans can live without jeopardizing the survival of future generations. The main aim of this contribution was to carry out a systematic mapping (SM) of the applications of digital technologies in promoting environmental sustainability. From a rigorous search of different databases, a set of more than 1000 studies was initially retrieved; then, following screening criteria based on the ROSES (RepOrting standards for Systematic Evidence Syntheses) procedure, a total of N = 37 studies that met the eligibility criteria were selected. The studies were coded according to different descriptive variables, such as the digital technology used for the intervention, the type of sustainable behavior promoted, the research design, and the population to whom the intervention was applied. Results showed the emergence of three main clusters of digital technologies (i.e., virtual/immersive/augmented reality, gamification, and power-metering systems) and two main Sustainable Behaviors (SBs) (i.e., energy and water saving, and pollution reduction). A clear picture of which digital interventions work, and of the reasons why they work (or do not work), does not emerge from the outcomes of this set of studies. Future studies on digital interventions should better detail intervention design characteristics, alongside the reasons underlying design choices, both behaviourally and technologically. This should increase the likelihood of the successful adoption of digital interventions promoting behavioral changes in a more sustainable direction.
Affiliation(s)
- Oriana Mosca
- Department of Pedagogy, Psychology and Philosophy, University of Cagliari, Cagliari, Italy
3. Parenti L, Belkaid M, Wykowska A. Differences in Social Expectations About Robot Signals and Human Signals. Cogn Sci 2023; 47:e13393. [PMID: 38133602] [DOI: 10.1111/cogs.13393]
Abstract
In our daily lives, we are continually involved in decision-making situations, many of which take place in the context of social interaction. Despite the ubiquity of such situations, there remains a gap in our understanding of how decision-making unfolds in social contexts, and how communicative signals, such as social cues and feedback, impact the choices we make. Interestingly, humans are now increasingly exposed to a new social context: interaction not only with other humans but also with artificial agents, such as robots or avatars. Given these technological developments, it is of great interest to address the question of whether, and in what way, social signals exhibited by non-human agents influence decision-making. The present study aimed to examine whether a robot's non-verbal communicative behavior has an effect on human decision-making. To this end, we implemented a two-alternative choice task in which participants had to guess which of two presented cups was covering a ball. This game was an adaptation of a "Shell Game." A robot avatar acted as a game partner, producing social cues and feedback. We manipulated the robot's cues (pointing toward one of the cups) before the participant's decision and the robot's feedback ("thumb up" or no feedback) after the decision. We found that participants were slower (compared to other conditions) when cues were mostly invalid and the robot reacted positively to wins. We argue that this was due to the incongruence of the signals (cue vs. feedback), and thus a violation of expectations. In sum, our findings show that incongruence in pre- and post-decision social signals from a robot significantly influences task performance, highlighting the importance of understanding expectations toward social robots for effective human-robot interactions.
Affiliation(s)
- Lorenzo Parenti
- Social Cognition in Human-Robot Interaction, Istituto Italiano di Tecnologia (IIT)
- Department of Psychology, University of Turin
- Marwen Belkaid
- Social Cognition in Human-Robot Interaction, Istituto Italiano di Tecnologia (IIT)
- ETIS UMR 8051, CY Cergy Paris Université, ENSEA, CNRS
- Agnieszka Wykowska
- Social Cognition in Human-Robot Interaction, Istituto Italiano di Tecnologia (IIT)
4. Huang G, Moore RK. Using social robots for language learning: are we there yet? Journal of China Computer-Assisted Language Learning 2023; 3:208-230. [PMID: 38013743] [PMCID: PMC10464067] [DOI: 10.1515/jccall-2023-0013]
Abstract
Along with the development of speech and language technologies and growing market interest, social robots have attracted more academic and commercial attention in recent decades. Their multimodal embodiment offers a broad range of possibilities, which have gained importance in the education sector. It has also led to a new technology-based field of language education: robot-assisted language learning (RALL). RALL has developed rapidly in second language learning, especially driven by the need to compensate for the shortage of first-language tutors. There are many implementation cases and studies of social robots, from early government-led attempts in Japan and South Korea to increasing research interest in Europe and worldwide. Compared with RALL for English as a foreign language (EFL), however, there are fewer studies on applying RALL to teaching Chinese as a foreign language (CFL). One potential reason is that RALL is not well known in the CFL field. This scoping review attempts to fill this gap by addressing the balance between classroom implementation and research frontiers of social robots. The review first introduces the technical tool used in RALL, namely the social robot, at a high level. It then presents a historical overview of the real-life implementation of social robots in language classrooms in East Asia and Europe, and summarizes the evaluation of RALL from the perspectives of L2 learners, teachers, and technology developers. The overall goal of this paper is to gain insight into RALL's potential and challenges and to identify a rich set of open research questions for applying RALL to CFL. It is hoped that the review may inform interdisciplinary analysis and practice for scientific research and front-line teaching in the future.
5. Naito M, Rea DJ, Kanda T. Hey Robot, Tell It to Me Straight: How Different Service Strategies Affect Human and Robot Service Outcomes. Int J Soc Robot 2023; 15:1-14. [PMID: 37359426] [PMCID: PMC10189699] [DOI: 10.1007/s12369-023-01013-0]
Abstract
With robots already entering simple service tasks in shops, it is important to understand how robots should perform customer service to increase customer satisfaction. We investigate two methods of customer service that we theorize are better suited for robots than for human shopkeepers: straight communication and data-driven communication. Along with an additional, more traditional customer service style, we compare these methods of customer service performed by a robot to a human performing the same service styles in three online studies with over 1300 people. We find that while traditional customer service styles are best suited for human shopkeepers, robot shopkeepers using straight or data-driven customer service styles increase customer satisfaction, make customers feel more informed, and seem more natural than when a human uses them. Our work highlights the need to investigate robot-specific best practices for customer service, and for social interaction at large, as simply duplicating typical human-human interaction may not produce the best results for a robot.
6. Higashino K, Kimoto M, Iio T, Shimohara K, Shiomi M. Is Politeness Better than Impoliteness? Comparisons of Robot's Encouragement Effects Toward Performance, Moods, and Propagation. Int J Soc Robot 2023. [DOI: 10.1007/s12369-023-00971-9]
Abstract
This study experimentally compared the effects of encouragement with polite/impolite attitudes from a robot in a monotonous task from three viewpoints: performance, mood, and propagation. Experiment I investigated encouragement effects on performance and mood. The participants did a monotonous task during which a robot continuously provided polite, neutral, or impolite encouragement. Our experiment results showed that polite and impolite encouragement significantly improved performance more than neutral comments, although there was no significant difference between polite and impolite encouragement. In addition, impolite encouragement caused significantly more negative moods than polite encouragement. Experiment II determined whether the robot's encouragement influenced the participants' encouragement styles. The participants behaved similarly to the robot in Experiment I, i.e., they selected polite, neutral, and impolite encouragements by observing the progress of a monotonous task by a dummy participant. The experiment results, which showed that the robot's encouragement significantly influenced the participants' encouragement styles, suggest that polite encouragement is more advantageous than impolite encouragement.
7. Edirisinghe S, Satake S, Kanda T. Field Trial of a Shopworker Robot with Friendly Guidance and Appropriate Admonishments. ACM Transactions on Human-Robot Interaction 2023. [DOI: 10.1145/3575805]
Abstract
We developed an admonishing service for a shopworker robot and conducted a field trial to investigate the impressions of a shop’s staff and customers. Applying the admonishing service in a real-world robot is difficult due to the high risk of rejection by society. We wanted to achieve an acceptable admonishing service while simultaneously avoiding the impression of a forceful request from a machine. We proposed a harmonized design that provided friendly and admonishing services. First, we interviewed a shop’s staff to learn their strategies for both friendly and admonishing services. From our evaluation of the interview results, we derived three design principles: friendly impressions, zero erroneous admonishments, and polite requests. Based on the design principles, we implemented our harmonized design on a social robot that guides customers to product locations and admonishes those who are not wearing face masks. We conducted a 13-day field trial in a retail shop and interviewed the customers and shopworkers to learn their impressions of our robot. The results of the field trial imply that our harmonized design approach is successful. Both the customers and the shop staff had overall positive impressions of the robot, its admonishing and friendly services, and expressed an intention to use it in the future. Furthermore, we studied the robot’s autonomous service-providing capability in the field and conducted an evaluation with hired participants to deepen our study of the robot’s mask recognition capability.
Affiliation(s)
- Sachi Edirisinghe
- ATR Deep Interaction Laboratories, Japan and Kyoto University Graduate School of Informatics, Japan
- Takayuki Kanda
- ATR Deep Interaction Laboratories, Japan and Kyoto University Graduate School of Informatics, Japan
8. Parenti L, Lukomski AW, De Tommaso D, Belkaid M, Wykowska A. Human-Likeness of Feedback Gestures Affects Decision Processes and Subjective Trust. Int J Soc Robot 2022. [DOI: 10.1007/s12369-022-00927-5]
Abstract
Trust is fundamental in building meaningful social interactions. With the advance of social robotics in collaborative settings, trust in Human–Robot Interaction (HRI) is gaining more and more scientific attention. Indeed, understanding how different factors may affect users' trust toward robots is of utmost importance. In this study, we focused on two factors related to the robot's behavior that could modulate trust. In a two-alternative forced-choice task where a virtual robot reacted to participants' performance, we manipulated the human-likeness of the robot's motion and the valence of the feedback it provided. To measure participants' subjective level of trust, we used subjective ratings throughout the task as well as a post-task questionnaire, which distinguishes capacity and moral dimensions of trust. We expected the presence of feedback to improve trust toward the robot and human-likeness to strengthen this effect. Interestingly, we observed that humans equally trust the robot in most conditions but distrust it when it shows neither social feedback nor human-like behavior. In addition, we only observed a positive correlation between subjective trust ratings and the moral and capacity dimensions of trust when the robot was providing feedback during the task. These findings suggest that the presence and human-likeness of feedback behaviors positively modulate trust in HRI and thereby provide important insights for the development of non-verbal communicative behaviors in social robots.
9. Galatolo A, Melsión GI, Leite I, Winkle K. The Right (Wo)Man for the Job? Exploring the Role of Gender when Challenging Gender Stereotypes with a Social Robot. Int J Soc Robot 2022. [DOI: 10.1007/s12369-022-00938-2]
Abstract
Recent works have identified both risks and opportunities afforded by robot gendering. Specifically, robot gendering risks the propagation of harmful gender stereotypes, but may positively influence robot acceptance/impact, and/or actually offer a vehicle with which to educate about and challenge traditional gender stereotypes. Our work sits at the intersection of these ideas, exploring whether robot gendering might impact robot credibility and persuasiveness specifically when that robot is being used to try to dispel gender stereotypes and change interactant attitudes. Whilst we demonstrate no universal impact of robot gendering on first impressions of the robot, we demonstrate complex interactions between robot gendering, interactant gender and observer gender which emerge when the robot engages in challenging gender stereotypes. Combined with previous work, our results paint a mixed picture regarding how best to utilise robot gendering when challenging gender stereotypes this way. Specifically, whilst we find some potential evidence in favour of utilising male-presenting robots for maximum impact in this context, we question whether this actually reflects the kind of gender biases we set out to challenge with this work.
10. A study on the influence of service robots' level of anthropomorphism on the willingness of users to follow their recommendations. Sci Rep 2022; 12:15266. [PMID: 36088470] [PMCID: PMC9463504] [DOI: 10.1038/s41598-022-19501-0]
Abstract
Service robots are increasingly deployed in various industries, including tourism. In spite of extensive research on users' experience in interaction with these robots, questions remain about the factors that influence users' compliance. Through three online studies, we investigate the effect of robot anthropomorphism and language style on customers' willingness to follow the robot's recommendations. The mediating role of perceived mind and persuasiveness in this relationship is also investigated. Study 1 (n = 89) shows that a service robot with a higher level of anthropomorphic features positively influences the willingness of users to follow its recommendations, while language style does not affect compliance. Study 2a (n = 168) further confirms this finding when we presented participants with a tablet vs. a service robot with an anthropomorphic appearance, while communication style again does not affect compliance. Finally, Study 2b (n = 122) supports an indirect effect of anthropomorphism level on the willingness to follow recommendations through perceived mind followed by persuasiveness. The findings provide valuable insight for enhancing human–robot interaction in service settings.
11. The Effectiveness of Robot-Enacted Messages to Reduce the Consumption of High-Sugar Energy Drinks. Informatics 2022. [DOI: 10.3390/informatics9020049]
Abstract
This exploratory study examines the effectiveness of social robots' ability to deliver advertising messages using different "appeals" in a business environment. Specifically, it explores the use of three types of message appeals in a human-robot interaction scenario: guilt, humour and non-emotional. The study extends past research in advertising by exploring whether messages communicated by social robots can impact consumers' behaviour. Using an experimental research design, the emotional-themed messages focus on the health-related properties of two fictitious energy drink brands. The findings show mixed results for humour and guilt messages. When the robot delivered a promotion message using humour, participants perceived it as less manipulative. Participants who were exposed to humorous messages also demonstrated significantly greater intent for future purchase decisions. However, guilt messages were more likely to persuade consumers to change their brand selection. This study contributes to the literature by providing empirical evidence on social robots' ability to deliver different advertising messages. It has practical implications for businesses as a growing number seek to employ humanoids to promote their services.
12. Liu B, Tetteroo D, Markopoulos P. A Systematic Review of Experimental Work on Persuasive Social Robots. Int J Soc Robot 2022. [DOI: 10.1007/s12369-022-00870-5]
Abstract
There is a growing body of work reporting on experimental use of social robots (SR) for persuasive purposes. We report a comprehensive review of persuasive social robotics research with the aim of better informing their design, by summarizing the literature on factors impacting social robots' persuasiveness. From 54 papers, we extracted the SR design features evaluated in the studies and the evidence of their efficacy. We identified five main categories of factors that were evaluated: modality, interaction, social character, context and persuasive strategies. Our literature review finds generally consistent effects for factors in modality, interaction and context, whereas more mixed results were shown for social character and persuasive strategies. This review further summarizes findings on interaction effects of multiple factors on the persuasiveness of social robots. Finally, based on the analysis of the papers reviewed, we discuss suggestions for factor expression design and evaluation, and the potential for using qualitative methods and longer-term studies.
13. Saunderson S, Nejat G. Hybrid Hierarchical Learning for Adaptive Persuasion in Human-Robot Interaction. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3140813]
14. Step Aside! VR-Based Evaluation of Adaptive Robot Conflict Resolution Strategies for Domestic Service Robots. Int J Soc Robot 2022. [DOI: 10.1007/s12369-021-00858-7]
Abstract
As domestic service robots become more prevalent and act autonomously, conflicts of interest between humans and robots become more likely. In such situations, the robot should be able to negotiate with humans effectively and appropriately to fulfill its tasks. One promising approach could be the imitation of human conflict resolution behaviour and the use of persuasive requests. The presented study complements previous work by investigating combinations of assertive and polite request elements (appeal, showing benefit, command), which have been found to be effective in HRI. The conflict resolution strategies each contained two types of requests, the order of which was varied to either mimic or contradict human conflict resolution behaviour. The strategies were also adapted to the users' compliance behaviour: if the participant complied after the first request, no second request was issued. In a virtual reality experiment (N = 57) with two trials, six different strategies were evaluated regarding user compliance, robot acceptance, trust, and fear, and compared to a control condition featuring no request elements. The experiment featured a human-robot goal conflict scenario concerning household tasks at home. The results show that in trial 1, strategies reflecting human politeness and conflict resolution norms were more accepted, more polite, and more trustworthy than strategies entailing a command. No differences were found for trial 2. Overall, compliance rates were comparable to human-human requests and did not differ between strategies. The contribution is twofold: presenting an experimental paradigm to investigate a human-robot conflict scenario and providing a first step toward developing acceptable robot conflict resolution strategies based on human behaviour.
15. "I Have to Praise You Like I Should?" The Effects of Implicit Self-Theories and Robot-Delivered Praise on Evaluations of a Social Robot. Int J Soc Robot 2022. [DOI: 10.1007/s12369-021-00848-9]
16. Saunderson S, Nejat G. Investigating Strategies for Robot Persuasion in Social Human-Robot Interaction. IEEE Transactions on Cybernetics 2022; 52:641-653. [PMID: 32452790] [DOI: 10.1109/tcyb.2020.2987463]
Abstract
Persuasion is a fundamental aspect of how people interact with each other. As robots become integrated into our daily lives and take on increasingly social roles, their ability to persuade will be critical to their success during human-robot interaction (HRI). In this article, we present a novel HRI study that investigates how a robot's persuasive behavior influences people's decision making. The study consisted of two small social robots trying to influence a person's answer during a jelly bean guessing game. One robot used either an emotional or logical persuasive strategy during the game, while the other robot displayed a neutral control behavior. The results showed that the Emotion strategy had significantly higher persuasive influence compared to both the Logic and Control conditions. With respect to participant demographics, no significant differences in influence were observed between age or gender groups; however, significant differences were observed when considering participant occupation/field of study (FOS). Namely, participants in business, engineering, and physical sciences fields were more influenced by the robots and aligned their answers closer to the robot's suggestion than did those in the life sciences and humanities professions. The discussions provide insight into the potential use of robot persuasion in social HRI task scenarios; in particular, considering the influence that a robot displaying emotional behaviors has when persuading people.
17. Social cues and implications for designing expert and competent artificial agents: A systematic review. Telematics and Informatics 2021. [DOI: 10.1016/j.tele.2021.101721]
18. Ajibo CA, Ishi CT, Ishiguro H. Advocating Attitudinal Change Through Android Robot's Intention-Based Expressive Behaviors: Toward WHO COVID-19 Guidelines Adherence. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3094783]
19. Saunderson SP, Nejat G. Persuasive robots should avoid authority: The effects of formal and real authority on persuasion in human-robot interaction. Sci Robot 2021; 6:eabd5186. [PMID: 34550717] [DOI: 10.1126/scirobotics.abd5186]
Abstract
Social robots must take on many roles when interacting with people in everyday settings, some of which may be authoritative, such as a nurse, teacher, or guard. It is important to investigate whether and how authoritative robots can influence people in applications ranging from health care and education to security and in the home. Here, we present a human-robot interaction study that directly investigates the effect of a robot’s peer or authority role (formal authority) and control of monetary rewards and penalties (real authority) on its persuasive influence. The study consisted of a social robot attempting to persuade people to change their answers to the robot’s suggestion in a series of challenging attention and memory tasks. Our results show that the robot in a peer role was more persuasive than when in an authority role, contrary to expectations from human-human interactions. The robot was also more persuasive when it offered rewards over penalties, suggesting that participants perceived the robot’s suggestions as a less risky option than their own estimates, in line with prospect theory. In general, the results show an aversion to the persuasive influence of authoritative robots, potentially due to the robot’s legitimacy as an authority figure, its behavior being perceived as dominant, or participant feelings of threatened autonomy. This paper explores the importance of persuasion for robots in different social roles while providing critical insight into the perception of robots in these roles, people’s behavior around these robots, and the development of human-robot relationships.
Affiliation(s)
- Shane P Saunderson
- Autonomous Systems and Biomechatronics Lab, Department of Mechanical and Industrial Engineering, University of Toronto, 5 King's College Road, Toronto, ON M5S 3G8, Canada
- Goldie Nejat
- Autonomous Systems and Biomechatronics Lab, Department of Mechanical and Industrial Engineering, University of Toronto, 5 King's College Road, Toronto, ON M5S 3G8, Canada
20
Haring KS, Satterfield KM, Tossell CC, de Visser EJ, Lyons JR, Mancuso VF, Finomore VS, Funke GJ. Robot Authority in Human-Robot Teaming: Effects of Human-Likeness and Physical Embodiment on Compliance. Front Psychol 2021; 12:625713. [PMID: 34135804 PMCID: PMC8202405 DOI: 10.3389/fpsyg.2021.625713]
Abstract
The anticipated social capabilities of robots may allow them to serve in authority roles as part of human-machine teams. To date, it is unclear if, and to what extent, human team members will comply with requests from their robotic teammates, and how such compliance compares to requests from human teammates. This research examined how the human-likeness and physical embodiment of a robot affect compliance with a robot's request to persevere at a task, using a novel task paradigm. Across two studies, participants performed a visual search task while receiving ambiguous performance feedback. Compliance was evaluated when the participant asked to stop the task and the coach repeatedly urged the participant to keep practicing. In the first study, the coach was either physically co-located with the participant or present remotely via live video. Coach type varied in human-likeness and was either a real human (confederate), a Nao robot, or a modified Roomba robot. The second study expanded on the first by including a Baxter robot as a coach and replicated the findings in a different sample population with a strict chain-of-command culture. Results from both studies showed that participants complied with a robot's requests for up to 11 min, although compliance was lower than with a human coach; embodiment and human-likeness had only weak effects on compliance.
Affiliation(s)
- Kerstin S Haring
- Humane Robot Technology Laboratory, Ritchie School of Engineering and Computer Science, Department of Computer Science, University of Denver, Denver, CO, United States
- Chad C Tossell
- Department of Behavioral Sciences and Leadership, Warfighter Effectiveness Research Center, United States Air Force Academy, Colorado Springs, CO, United States
- Ewart J de Visser
- Department of Behavioral Sciences and Leadership, Warfighter Effectiveness Research Center, United States Air Force Academy, Colorado Springs, CO, United States
- Joseph R Lyons
- Air Force Research Laboratory, Wright-Patterson AFB, Dayton, OH, United States
- Vincent F Mancuso
- MIT Lincoln Laboratory, Massachusetts Institute of Technology, Boston, MA, United States
- Victor S Finomore
- Rockefeller Neuroscience Institute, University of West Virginia, Morgantown, WV, United States
- Gregory J Funke
- Air Force Research Laboratory, Wright-Patterson AFB, Dayton, OH, United States
21
Babel F, Kraus JM, Baumann M. Development and Testing of Psychological Conflict Resolution Strategies for Assertive Robots to Resolve Human-Robot Goal Conflict. Front Robot AI 2021; 7:591448. [PMID: 33718437 PMCID: PMC7945950 DOI: 10.3389/frobt.2020.591448]
Abstract
As service robots become increasingly autonomous and follow their own task-related goals, human-robot conflicts seem inevitable, especially in shared spaces. Goal conflicts can arise from simple trajectory planning to complex task prioritization. For successful human-robot goal-conflict resolution, humans and robots need to negotiate their goals and priorities. To this end, the robot might be equipped with conflict resolution strategies that are assertive and effective yet still accepted by the user. In this paper, conflict resolution strategies for service robots (a public cleaning robot and a home assistant robot) are developed by transferring psychological concepts (e.g., negotiation, cooperation) to HRI. Altogether, fifteen strategies were grouped by the expected affective outcome (positive, neutral, negative). In two online experiments, the acceptability of and compliance with these conflict resolution strategies were tested with humanoid and mechanical robots in two application contexts (public: n1 = 61; private: n2 = 93). To obtain a comparative value, the strategies were also applied by a human. As additional outcomes, trust, fear, arousal, and valence, as well as perceived politeness of the agent, were assessed. The positive/neutral strategies were found to be more acceptable and effective than negative strategies. Some negative strategies (i.e., threat, command) even led to reactance and fear. Some strategies were positively evaluated and effective only for certain agents (human or robot) or acceptable only in one of the two application contexts (i.e., approach, empathy). Influences on strategy acceptance and compliance in the public context were identified: acceptance was predicted by politeness and trust, and compliance by interpersonal power. Taken together, psychological conflict resolution strategies can be applied in HRI to enhance robot task effectiveness; if applied robot-specifically and context-sensitively, they are accepted by the user.
The contribution of this paper is twofold: conflict resolution strategies based on Human Factors and Social Psychology are introduced and empirically evaluated in two online studies for two application contexts, and influencing factors and requirements for the acceptance and effectiveness of robot assertiveness are discussed.
Affiliation(s)
- Franziska Babel
- Department of Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Johannes M Kraus
- Department of Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Martin Baumann
- Department of Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
22

23
Ghazali AS, Ham J, Barakova E, Markopoulos P. Persuasive Robots Acceptance Model (PRAM): Roles of Social Responses Within the Acceptance Model of Persuasive Robots. Int J Soc Robot 2020. [DOI: 10.1007/s12369-019-00611-1]
Abstract
In recent years, there have been rapid developments in social robotics, which bring about the prospect of applying social robots as persuasive agents to support behavior change. In order to guide related developments and pave the way for their adoption, it is important to understand the factors that influence the acceptance of social robots as persuasive agents. This study extends the technology acceptance model by including measures of social responses: trusting beliefs, compliance, liking, and psychological reactance. Using the Wizard of Oz method, a laboratory experiment was conducted to evaluate user acceptance of and social responses towards a social robot called SociBot, used as a persuasive agent in decisions about donating to charities. Using the partial least squares method, results showed that trusting beliefs and liking towards the robot significantly add to the predictive power of the acceptance model of persuasive robots. However, due to the limitations of the study design, psychological reactance and compliance were not found to contribute to the prediction of persuasive robots’ acceptance. Implications for the development of persuasive robots are discussed.
24
Tan SM, Liew TW, Gan CL. Motivational virtual agent in e-learning: the roles of regulatory focus and message framing. Information and Learning Sciences 2020. [DOI: 10.1108/ils-09-2019-0088]
Abstract
Purpose
The aim of this paper is to examine the effects of a learner’s regulatory focus orientation and the message frame of a motivational virtual agent in an e-learning environment.
Design/methodology/approach
On the basis of a quasi-experimental design, university sophomores (n = 210) categorized as chronic promotion-focus, chronic prevention-focus, or neutral regulatory focus interacted with either an agent that conveyed a gain-frame message or an agent that conveyed a loss-frame message to persuade learners to engage with the e-learning content. Statistical analyses assessed the effects of regulatory focus and message frame on agent perception, motivation, and cognitive load.
Findings
The results of this paper did not support the hypotheses that chronic promotion-focus learners would benefit more from a gain-frame agent than a loss-frame agent, or that chronic prevention-focus learners would benefit more from a loss-frame agent than a gain-frame agent. There were main effects of message frame (albeit small): the loss-frame agent was perceived as more engaging, induced higher motivation, and prompted higher germane load than the gain-frame agent. With the gain-frame agent, chronic promotion-focus learners had higher motivation toward the e-learning task than other learners.
Originality/value
Prior studies have examined regulatory focus and message frame with agents simulating virtual health advocates. This paper extends that work by examining these roles with a persuasive agent simulating a virtual tutor in an e-learning environment.
25
Nakagawa Y, Park K, Ueda H, Ono H, Miyake H. Being watched over by a conversation robot may enhance safety in simulated driving. J Safety Res 2019; 71:207-218. [PMID: 31862032 DOI: 10.1016/j.jsr.2019.09.010]
Abstract
INTRODUCTION In an aging and increasingly information-oriented society, replacing human passengers' protective effects on vehicle drivers with those of social robots is both essential and promising. However, the effects of a social robot's presence on drivers have not yet been fully explored. Thus, using a driving simulator and a conversation robot, this experimental study had two main goals: (a) to find out whether social robots' anthropomorphic qualities (i.e., not the practical information the robot provides drivers) have protective effects by promoting attentive driving and alleviating crash risks; and (b) by what psychological processes such effects emerge. METHOD Participants were recruited from the young (n = 38), middle-aged (n = 39), and elderly (n = 49) age groups. They were assigned to either the treatment group (simulated driving in a conversation robot's presence) or the control group (simulated driving alone), and their driving performance was measured. Mental states (peaceful, concentrating, and reflective) were also assessed in a post-driving questionnaire using our original scales. RESULTS Although the older participants did not experience protective effects (perhaps due to motion sickness), the young participants drove attentively, with the robot enhancing their peace of mind. The protective effect was also observed among the middle-aged participants; the verbal data analysis ascribed this to the robot's role of expressing sympathy, especially when middle-aged drivers were stressed by near not-at-fault crashes. In conclusion, we discuss the practical implications of the results.
Affiliation(s)
- Yoshinori Nakagawa
- Department of Management, Kochi University of Technology, 185 Miyanokuchi, Kami City, Kochi Prefecture, Japan.
- Kaechang Park
- Research Organization for Regional Alliance, Kochi University of Technology, 185 Miyanokuchi, Kami City, Kochi Prefecture, Japan
- Hirotada Ueda
- Research Organization for Regional Alliance, Kochi University of Technology, 185 Miyanokuchi, Kami City, Kochi Prefecture, Japan
- Hiroshi Ono
- Honda Motor Co., Ltd., 1-10-1 Shin Sayama, Sayama City, Saitama Prefecture, Japan
- Hiroki Miyake
- Nissho Electronics Corporation, 3-5, Nibancho, Chiyoda-ku, Tokyo, Japan
26
Fountoukidou S, Ham J, Matzat U, Midden C. Effects of an artificial agent as a behavioral model on motivational and learning outcomes. Comput Human Behav 2019. [DOI: 10.1016/j.chb.2019.03.013]
27
Saunderson S, Nejat G. It Would Make Me Happy if You Used My Guess: Comparing Robot Persuasive Strategies in Social Human–Robot Interaction. IEEE Robot Autom Lett 2019. [DOI: 10.1109/lra.2019.2897143]
28
Ghazali AS, Ham J, Barakova E, Markopoulos P. Assessing the effect of persuasive robots interactive social cues on users’ psychological reactance, liking, trusting beliefs and compliance. Adv Robot 2019. [DOI: 10.1080/01691864.2019.1589570]
Affiliation(s)
- Aimi Shazwani Ghazali
- Department of Industrial Design, Eindhoven University of Technology, Eindhoven, AZ, Netherlands
- Department of Mechatronics Engineering, International Islamic University Malaysia, Kuala Lumpur, Malaysia
- Jaap Ham
- Department of Industrial Engineering & Innovation Sciences, Eindhoven University of Technology, Eindhoven, AZ, Netherlands
- Emilia Barakova
- Department of Industrial Design, Eindhoven University of Technology, Eindhoven, AZ, Netherlands
- Panos Markopoulos
- Department of Industrial Design, Eindhoven University of Technology, Eindhoven, AZ, Netherlands
29
Lee SA, Liang Y(J). Robotic foot-in-the-door: Using sequential-request persuasive strategies in human-robot interaction. Comput Human Behav 2019. [DOI: 10.1016/j.chb.2018.08.026]
30

31
How do performance feedback characteristics influence recipients’ reactions? A state-of-the-art review on feedback source, timing, and valence effects. Manag Rev Q 2018. [DOI: 10.1007/s11301-018-0136-8]
32
Borenstein J, Arkin RC. Nudging for good: robots and the ethical appropriateness of nurturing empathy and charitable behavior. AI & Society 2017. [DOI: 10.1007/s00146-016-0684-1]
33
Lee SA, Liang Y(J). The Role of Reciprocity in Verbally Persuasive Robots. Cyberpsychol Behav Soc Netw 2016; 19:524-7. [DOI: 10.1089/cyber.2016.0124]
Affiliation(s)
- Seungcheol Austin Lee
- Department of Communication, Northern Kentucky University, Highland Heights, Kentucky
34
Sexton CA. The overlooked potential for social factors to improve effectiveness of brain-computer interfaces. Front Syst Neurosci 2015; 9:70. [PMID: 25999824 PMCID: PMC4422002 DOI: 10.3389/fnsys.2015.00070]
Affiliation(s)
- Cheryl Ann Sexton
- Department of Physiology and Pharmacology, Wake Forest University School of Medicine, Winston-Salem, NC, USA
35
Ahn SJ(G), Bailenson JN, Park D. Short- and long-term effects of embodied experiences in immersive virtual environments on environmental locus of control and behavior. Comput Human Behav 2014. [DOI: 10.1016/j.chb.2014.07.025]