1
Wasic C, Erzgräber R, Unger-Büttner M, Donath C, Böhme HJ, Graessel E. What helps, what hinders? Focus group findings on barriers and facilitators for mobile service robot use in a psychosocial group therapy for people with dementia. Front Robot AI 2024; 11:1258847. [PMID: 38973971] [PMCID: PMC11224299] [DOI: 10.3389/frobt.2024.1258847]
Abstract
Introduction: Many countries are facing a shortage of healthcare workers. Furthermore, healthcare workers are experiencing many stressors, resulting in psychological issues, impaired health, and increased intentions to leave the workplace. In recent years, different technologies, such as electronic patient files, have been introduced to lighten the workload of healthcare workers. Robotic solutions are still rather uncommon. To foster acceptance and actual use of robots, their functionalities should correspond to the users' needs.
Method: In the pilot study Care4All-Initial, we developed and field-tested applications for a mobile service robot in a psychosocial, multimodal group therapy for people with dementia. To guide the process and assess possible facilitators and barriers, we conducted a recurring focus group including people with dementia, therapists, professional caregivers, and researchers from different disciplines, following a user-centered design approach. The focus group suggested and reviewed applications and discussed ethical implications. We recorded the focus group discussions in writing and analyzed them with content analysis.
Results: The focus group discussed 15 topics concerning ethical issues that we used as a framework for the research project. Ethical facilitators were respect for the autonomy of the people with dementia and their proxies regarding participation and data sharing; furthermore, the robot had to be useful for the therapists and attendees. Ethical barriers were deception and possible harm to the people with dementia or therapists. The focus group suggested 32 applications. We implemented 13 applications that centered on the robot interacting with the people with dementia and lightening the workload of the therapists. Implementation was facilitated by using existing hardware and software and building on existing applications. Barriers to implementation arose when hardware, software, or applications did not fit the scope of the project.
Discussion: To prevent barriers to robot use in a group therapy for people with dementia, the robot's applications have to be developed sufficiently for flawless and safe use; the robot should not cause irritation or agitation, but rather be meaningful and useful to its users. To facilitate development, sufficient time, money, expertise, and planning are essential.
Affiliation(s)
- Catharina Wasic
- Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), University Hospital Erlangen, Department of Psychiatry and Psychotherapy, Center for Health Services Research in Medicine, Erlangen, Germany
- Robert Erzgräber
- Department of Artificial Intelligence/Cognitive Robotics, Faculty of Informatics/Mathematics, University of Applied Science Dresden (HTW Dresden), Dresden, Germany
- Carolin Donath
- Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), University Hospital Erlangen, Department of Psychiatry and Psychotherapy, Center for Health Services Research in Medicine, Erlangen, Germany
- Hans-Joachim Böhme
- Department of Artificial Intelligence/Cognitive Robotics, Faculty of Informatics/Mathematics, University of Applied Science Dresden (HTW Dresden), Dresden, Germany
- Elmar Graessel
- Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), University Hospital Erlangen, Department of Psychiatry and Psychotherapy, Center for Health Services Research in Medicine, Erlangen, Germany
2
Blindheim K, Solberg M, Hameed IA, Alnes RE. Promoting activity in long-term care facilities with the social robot Pepper: a pilot study. Inform Health Soc Care 2022; 48:181-195. [PMID: 35702818] [DOI: 10.1080/17538157.2022.2086465]
Abstract
About 40,000 individuals depend on assisted living in long-term care facilities in Norway, and around 80% of these have a cognitive impairment or suffer from dementia. This underscores the need for activities that are tailored to individual needs. For some users, technology-assisted participation in communal activities can be an alternative approach to increasing their quality of life. The aim of this qualitative pilot study was to gain insight into the experiences of residents and healthcare professionals in long-term care facilities when interacting with the social robot Pepper. After a series of interventions with the robot in a long-term care facility, data were collected through individual interviews with healthcare professionals and residents and analyzed through qualitative content analysis. A thematic analysis identified three major themes: 1) activity, joy, and ambivalence; 2) challenges when introducing social robots in contexts of care; and 3) thoughts about the future. Although employees and residents reported that they enjoyed interactions with the social robot, highlighting opportunities for novel types of activities and action that differed from the daily routine, they also articulated several concerns and challenges. Despite much hype, the development of intelligent social robots is still in its infancy.
Affiliation(s)
- Kari Blindheim
- Department of Health Sciences Aalesund, Norwegian University of Science and Technology (NTNU), Aalesund, Norway and Centre of Care Research, Steinkjer, Norway
- Mads Solberg
- Department of Health Sciences Aalesund, Norwegian University of Science and Technology (NTNU), Aalesund, Norway
- Ibrahim A Hameed
- Department of ICT and Natural Sciences, Norwegian University of Science and Technology (NTNU), Aalesund, Norway
- Rigmor Einang Alnes
- Department of Health Sciences Aalesund, Norwegian University of Science and Technology (NTNU), Aalesund, Norway
3
Epstein R, Bordyug M, Chen YH, Chen Y, Ginther A, Kirkish G, Stead H. Toward the search for the perfect blade runner: a large-scale, international assessment of a test that screens for “humanness sensitivity”. AI & Society 2022. [DOI: 10.1007/s00146-022-01398-y]
4
Perceptions of AI engaging in human expression. Sci Rep 2021; 11:21181. [PMID: 34707148] [PMCID: PMC8551225] [DOI: 10.1038/s41598-021-00426-z]
Abstract
Though humans should defer to the superior judgement of AI in an increasing number of domains, certain biases prevent us from doing so. Understanding when and why these biases occur is a central challenge for human-computer interaction. One proposed source of such bias is task subjectivity. We test this hypothesis by having both real and purported AI engage in one of the most subjective expressions possible: humor. Across two experiments, we address the following: will people rate jokes as less funny if they believe an AI created them? When asked to rate jokes and guess their likeliest source, participants evaluate jokes that they attribute to humans as the funniest and those they attribute to AI as the least funny. However, when these same jokes are explicitly framed as either human- or AI-created, there is no such difference in ratings. Our findings demonstrate that user attitudes toward AI are more malleable than once thought, even when AI (seemingly) attempts the most fundamental of human expressions.
5
Pitardi V, Wirtz J, Paluch S, Kunz WH. Service robots, agency and embarrassing service encounters. Journal of Service Management 2021. [DOI: 10.1108/josm-12-2020-0435]
Abstract
Purpose: Extant research has mainly focused on potentially negative customer responses to service robots. In contrast, this study is one of the first to explore a service context where service robots are likely to be the preferred service delivery mechanism over human frontline employees. Specifically, the authors examine how customers respond to service robots in the context of embarrassing service encounters.
Design/methodology/approach: This study employs a mixed-method approach, whereby an in-depth qualitative study (study 1) is followed by two lab experiments (studies 2 and 3).
Findings: Results show that interactions with service robots attenuated customers' anticipated embarrassment. Study 1 identifies a number of factors that can reduce embarrassment. These include the perception that service robots have reduced agency (e.g. are not able to make moral or social judgements) and emotions (e.g. are not able to have feelings). Study 2 tests the base model and shows that people feel less embarrassed during a potentially embarrassing encounter when interacting with service robots compared to frontline employees. Finally, study 3 confirms that perceived agency, but not emotion, fully mediates frontline counterparty (employee vs robot) effects on anticipated embarrassment.
Practical implications: Service robots can add value by reducing potential customer embarrassment because they are perceived to have less agency than service employees. This makes service robots the preferred service delivery mechanism for at least some customers in potentially embarrassing service encounters (e.g. in certain medical contexts).
Originality/value: This study is one of the first to examine a context where service robots are the preferred service delivery mechanism over human employees.
6
Babel F, Kraus JM, Baumann M. Development and Testing of Psychological Conflict Resolution Strategies for Assertive Robots to Resolve Human-Robot Goal Conflict. Front Robot AI 2021; 7:591448. [PMID: 33718437] [PMCID: PMC7945950] [DOI: 10.3389/frobt.2020.591448]
Abstract
As service robots become increasingly autonomous and follow their own task-related goals, human-robot conflicts seem inevitable, especially in shared spaces. Goal conflicts can arise from simple trajectory planning to complex task prioritization. For successful human-robot goal-conflict resolution, humans and robots need to negotiate their goals and priorities. For this, the robot might be equipped with conflict resolution strategies that are assertive and effective yet still accepted by the user. In this paper, conflict resolution strategies for service robots (a public cleaning robot and a home assistant robot) are developed by transferring psychological concepts (e.g., negotiation, cooperation) to HRI. Altogether, fifteen strategies were grouped by the expected affective outcome (positive, neutral, negative). In two online experiments, the acceptability of and compliance with these conflict resolution strategies were tested with humanoid and mechanical robots in two application contexts (public: n1 = 61; private: n2 = 93). To obtain a comparative value, the strategies were also applied by a human. As additional outcomes, trust, fear, arousal, and valence, as well as the perceived politeness of the agent, were assessed. The positive/neutral strategies were found to be more acceptable and effective than the negative strategies; some negative strategies (i.e., threat, command) even led to reactance and fear. Some strategies were positively evaluated and effective only for certain agents (human or robot) or acceptable only in one of the two application contexts (i.e., approach, empathy). In the public context, acceptance was predicted by politeness and trust, and compliance was predicted by interpersonal power. Taken together, psychological conflict resolution strategies can be applied in HRI to enhance robot task effectiveness; if applied robot-specifically and context-sensitively, they are accepted by the user. The contribution of this paper is twofold: conflict resolution strategies based on Human Factors and Social Psychology are introduced and empirically evaluated in two online studies for two application contexts, and influencing factors and requirements for the acceptance and effectiveness of robot assertiveness are discussed.
Affiliation(s)
- Franziska Babel
- Department of Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Johannes M Kraus
- Department of Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Martin Baumann
- Department of Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
7
González-González CS, Gil-Iranzo RM, Paderewski-Rodríguez P. Human-Robot Interaction and Sexbots: A Systematic Literature Review. Sensors 2020; 21:216. [PMID: 33396356] [PMCID: PMC7795467] [DOI: 10.3390/s21010216]
Abstract
At present, sexual robots have become a new paradigm of social robots. In this paper, we conducted a systematic literature review on sexual robots (sexbots). To do this, we used the Scopus and WoS databases to answer research questions regarding design, interaction, and gender and ethical approaches from 1980 until 2020. In our review, we found a male bias in this discipline and observed that, in recent years, user opinion has become more relevant in published articles. We also offer insights and recommendations on gender and ethics in the design of sexual robots.
Affiliation(s)
- Carina Soledad González-González
- Departamento de Ingeniería Informática y de Sistemas, Escuela de Ingeniería y Tecnología, Universidad de La Laguna, 38204 La Laguna, Spain
- Rosa María Gil-Iranzo
- Departamento de Informática e Ingeniería Industrial, Escuela Politécnica Superior, Universitat de Lleida, 25001 Lleida, Spain
- Patricia Paderewski-Rodríguez
- Departamento de Lenguajes y Sistemas Informáticos, Escuela Técnica Superior de Ingenierías Informática y de Telecomunicación, Universidad de Granada, 18071 Granada, Spain
9
Improving Interactions with Healthcare Robots: A Review of Communication Behaviours in Social and Healthcare Contexts. Int J Soc Robot 2020. [DOI: 10.1007/s12369-020-00719-9]
10
De Keyser A, Köcher S, Alkire (née Nasr) L, Verbeeck C, Kandampully J. Frontline Service Technology infusion: conceptual archetypes and future research directions. Journal of Service Management 2019. [DOI: 10.1108/josm-03-2018-0082]
Abstract
Purpose: Smart technologies and connected objects are rapidly changing the organizational frontline. Yet, our understanding of how these technologies infuse service encounters remains limited. Therefore, the purpose of this paper is to update existing classifications of Frontline Service Technology (FST) infusion. Moreover, the authors discuss three promising smart and connected technologies, namely conversational agents, extended reality (XR) and blockchain technology, and their respective implications for customers, frontline employees and service organizations.
Design/methodology/approach: This paper uses a conceptual approach integrating existing work on FST infusion with artificial intelligence, robotics, XR and blockchain literature, while also building on insights gathered through expert interviews and focus group conversations with members of two service research centers.
Findings: The authors define FST and propose a set of FST infusion archetypes at the organizational frontline. Additionally, the authors develop future research directions focused on understanding how conversational agents, XR and blockchain technology will impact service.
Originality/value: This paper updates and extends existing classifications of FST, while paving the road for further work on FST infusion.
12
Abstract
Existing robots try only to give factually reasonable answers to users, without considering whether an answer will make the user happy. This paper proposes a new kind of robot: the lie robot. Unlike existing robots, which always select answers according to the truth, the lie robot selects answers mainly according to the user's preferences, in order to make the user happier. The main idea of the lie robot is to predict the user's feedback on candidate answers from large amounts of data on the user's past feedback collected by the robot.
Affiliation(s)
- Dingju Zhu
- School of Computer Science, South China Normal University, Guangzhou, P. R. China