1
Saeki W, Ueda Y. Sequential model based on human cognitive processing to robot acceptance. Front Robot AI 2024; 11:1362044. [PMID: 38560097] [PMCID: PMC10978770] [DOI: 10.3389/frobt.2024.1362044]
Abstract
Robots have tremendous potential and have recently been introduced not only for simple operations in factories but also in workplaces where customer-service communication is required. However, communication robots have not always been accepted. This study proposes a three-stage (first contact, interaction, and decision) model of robot acceptance based on the flow of human cognitive processing, in order to support the design of preferred robots and to clarify the elements of the robot and the processes that affect acceptance decision-making. Unlike previous robot acceptance models, the current model focuses on a sequential account of how people decide to accept a robot, considering the interaction (or carry-over) effect between impressions established at each stage. Following the model, this study conducted a scenario-based experiment examining the effects of the first-contact impression (the robot's appearance) and the impression formed during interaction with the robot (the politeness of its conversation and behavior) on robot acceptance, in both successful and slightly failed situations. The better the robot's appearance and the more polite its behavior, the greater the acceptance rate; importantly, there was no interaction between these two factors. The results, indicating that first-contact and interaction impressions are processed additively, suggest that improving a robot's appearance and making its communication behavior more human-like in politeness will each contribute to more acceptable robot designs.
Affiliation(s)
- Waka Saeki
- Graduate School of Education, Kyoto University, Kyoto, Japan
- Yoshiyuki Ueda
- Institute for the Future of Human Society, Kyoto University, Kyoto, Japan
2
Ismatullaev UVU, Kim SH. Review of the Factors Affecting Acceptance of AI-Infused Systems. Hum Factors 2024; 66:126-144. [PMID: 35344676] [DOI: 10.1177/00187208211064707]
Abstract
OBJECTIVE The study aimed to provide a comprehensive overview of the factors impacting technology adoption, to predict the acceptance of artificial intelligence (AI)-based technologies. BACKGROUND Although the acceptance of AI devices is usually defined by behavioural factors in theories of user acceptance, the effects of technical and human factors are often overlooked. However, research shows that user behaviour can vary depending on a system's technical characteristics and differences in users. METHOD A systematic review was conducted. A total of 85 peer-reviewed journal articles that met the inclusion criteria and provided information on the factors influencing the adoption of AI devices were selected for the analysis. RESULTS Research on the adoption of AI devices shows that users' attitudes, trust and perceptions about the technology can be improved by increasing transparency, compatibility, and reliability, and simplifying tasks. Moreover, technological factors are also important for reducing issues related to human factors (e.g. distrust, scepticism, inexperience) and supporting users with lower intention to use and lower trust in AI-infused systems. CONCLUSION As prior research has confirmed the interrelationship among factors with and without behaviour theories, this review suggests extending the technology acceptance model that integrates the factors studied in this review to define the acceptance of AI devices across different application areas. However, further research is needed to collect more data and validate the study's findings. APPLICATION A comprehensive overview of factors influencing the acceptance of AI devices could help researchers and practitioners evaluate user behaviour when adopting new technologies.
Affiliation(s)
- Sang-Ho Kim
- Department of Industrial Engineering, Kumoh National Institute of Technology, South Korea
3
Ide H, Suwa S, Akuta Y, Kodate N, Tsujimura M, Ishimaru M, Shimamura A, Kitinoja H, Donnelly S, Hallila J, Toivonen M, Bergman-Kärpijoki C, Takahashi E, Yu W. Developing a model to explain users' ethical perceptions regarding the use of care robots in home care: A cross-sectional study in Ireland, Finland, and Japan. Arch Gerontol Geriatr 2024; 116:105137. [PMID: 37541051] [DOI: 10.1016/j.archger.2023.105137]
Abstract
To date, research on ethical issues regarding care robots for older adults, family caregivers, and care workers has not progressed sufficiently. This study aimed to build a model that universally explains the relationship between the use of care robots and ethical awareness, such as regarding personal information and privacy protection in home care. We examined data obtained from cross-sectional surveys conducted in Japan (n=528), Ireland (n=296), and Finland (n=180). We performed a confirmatory factor analysis using responses to 11 items related to the ethical use of care robots. We evaluated the model based on the chi-square to degrees of freedom ratio, the comparative fit index, and the root mean square error of approximation. Subsequently, we compared candidate models using Akaike's information criterion. Ten items were adopted in the final model, which comprised 4 factors: 'acquisition of personal information', 'use of personal information for medical and long-term care', 'secondary use of personal information', and 'participation in research and development'. All factor loadings of the final model ranged between 0.63 and 0.92, i.e., greater than 0.6, showing that the factors had a high influence on the model. The final model was applied to each country; the fit was relatively good in Finland and poor in Ireland. Although the three countries have different geographies, cultures, demographics, and systems, this study showed that the impact of ethical issues regarding the use of care robots in home care can be universally explained by the same model.
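The fit indices named in this abstract (the chi-square to degrees of freedom ratio, CFI, and RMSEA) have standard closed forms. As a minimal sketch, they can be computed from a fitted model's chi-square statistics; all numeric inputs below are hypothetical illustrations, not values reported by the study:

```python
import math

def fit_indices(chi2, df, n, chi2_null, df_null):
    """Standard SEM fit indices computed from chi-square statistics.

    chi2, df:           tested model's chi-square and degrees of freedom
    n:                  sample size
    chi2_null, df_null: independence (null) model's chi-square and df
    """
    # Chi-square to degrees of freedom ratio (values below ~3 are often taken as acceptable)
    ratio = chi2 / df
    # Root mean square error of approximation
    rmsea = math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
    # Comparative fit index, relative to the independence model
    cfi = 1.0 - max(chi2 - df, 0.0) / max(chi2_null - df_null, chi2 - df, 1e-12)
    return ratio, rmsea, cfi

# Hypothetical values: model chi2=120 on df=40, N=528 (the Japanese sample size),
# independence model chi2=1500 on df=55
ratio, rmsea, cfi = fit_indices(120, 40, 528, 1500, 55)
```

With these illustrative inputs the ratio is 3.0, RMSEA is about 0.06, and CFI is about 0.94, which is how cutoffs such as "RMSEA < 0.06" and "CFI > 0.95" are checked in practice.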
Affiliation(s)
- Hiroo Ide
- Institute for Future Initiatives, The University of Tokyo
- Sayuri Suwa
- Department of Community Health Nursing, Division of Innovative Nursing for Life Course, Graduate School of Nursing, Chiba University.
- Yumi Akuta
- Division of Nursing, Faculty of Healthcare, Tokyo Healthcare University
- Naonori Kodate
- UCD School of Social Policy, Social Work and Social Justice, University College Dublin
- Mayuko Tsujimura
- Division of Visiting Nursing, School of Nursing, Shiga University of Medical Science
- Mina Ishimaru
- Department of Community Health Nursing, Division of Innovative Nursing for Life Course, Graduate School of Nursing, Chiba University
- Atsuko Shimamura
- Division of Community Health Nursing, Department of Nursing, Faculty of Health Science, Toho University
- Sarah Donnelly
- UCD School of Social Policy, Social Work and Social Justice, University College Dublin
- Wenwei Yu
- Center for Frontier Medical Engineering, Chiba University
4
Choudhury A, Shamszare H. Investigating the Impact of User Trust on the Adoption and Use of ChatGPT: Survey Analysis. J Med Internet Res 2023; 25:e47184. [PMID: 37314848] [DOI: 10.2196/47184]
Abstract
BACKGROUND ChatGPT (Chat Generative Pre-trained Transformer) has gained popularity for its ability to generate human-like responses. It is essential to note that overreliance or blind trust in ChatGPT, especially in high-stakes decision-making contexts, can have severe consequences. Similarly, lacking trust in the technology can lead to underuse, resulting in missed opportunities. OBJECTIVE This study investigated the impact of users' trust in ChatGPT on their intent and actual use of the technology. Four hypotheses were tested: (1) users' intent to use ChatGPT increases with their trust in the technology; (2) the actual use of ChatGPT increases with users' intent to use the technology; (3) the actual use of ChatGPT increases with users' trust in the technology; and (4) users' intent to use ChatGPT can partially mediate the effect of trust in the technology on its actual use. METHODS This study distributed a web-based survey to adults in the United States who actively used ChatGPT (version 3.5) at least once a month, between February and March 2023. The survey responses were used to develop 2 latent constructs: Trust and Intent to Use, with Actual Use being the outcome variable. The study used partial least squares structural equation modeling to evaluate and test the structural model and hypotheses. RESULTS In the study, 607 respondents completed the survey. The primary uses of ChatGPT were information gathering (n=219, 36.1%), entertainment (n=203, 33.4%), and problem-solving (n=135, 22.2%), with a smaller number using it for health-related queries (n=44, 7.2%) and other activities (n=6, 1%). Our model explained 50.5% and 9.8% of the variance in Intent to Use and Actual Use, respectively, with path coefficients of 0.711 and 0.221 for Trust on Intent to Use and Actual Use, respectively.
The bootstrapped results rejected all 4 null hypotheses, with Trust having a significant direct effect on both Intent to Use (β=0.711, 95% CI 0.656-0.764) and Actual Use (β=0.302, 95% CI 0.229-0.374). The indirect effect of Trust on Actual Use, partially mediated by Intent to Use, was also significant (β=0.113, 95% CI 0.001-0.227). CONCLUSIONS Our results suggest that trust is critical to users' adoption of ChatGPT. It remains crucial to highlight that ChatGPT was not initially designed for health care applications. Therefore, overreliance on it for health-related advice could potentially lead to misinformation and subsequent health risks. Efforts must be focused on improving ChatGPT's ability to distinguish between queries that it can safely handle and those that should be redirected to human experts (health care professionals). Although risks are associated with excessive trust in artificial intelligence-driven chatbots such as ChatGPT, these risks can be reduced by advocating for shared accountability and fostering collaboration between developers, subject matter experts, and human factors researchers.
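The mediation structure described in this abstract (a direct Trust→Use path plus an indirect path through Intent, with bootstrapped confidence intervals) can be sketched on simulated data. The variable names, effect sizes, and the plain OLS estimator below are illustrative assumptions, not the study's PLS-SEM procedure or its data:

```python
import numpy as np

def paths(trust, intent, use):
    """OLS estimates of the mediation paths trust -> intent -> use."""
    # Path a: trust -> intent (simple regression slope)
    a = np.polyfit(trust, intent, 1)[0]
    # Path b (intent -> use) and direct path c' (trust -> use), jointly estimated
    X = np.column_stack([np.ones_like(trust), trust, intent])
    _, c_direct, b = np.linalg.lstsq(X, use, rcond=None)[0]
    return a, b, c_direct

# Simulated data where trust raises intent, and both raise use
rng = np.random.default_rng(42)
n = 600
trust = rng.normal(size=n)
intent = 0.7 * trust + rng.normal(scale=0.7, size=n)               # true a  = 0.7
use = 0.3 * trust + 0.2 * intent + rng.normal(scale=0.9, size=n)   # true c' = 0.3, b = 0.2

a, b, c_direct = paths(trust, intent, use)
indirect = a * b  # indirect effect of trust on use, via intent

# Percentile bootstrap CI for the indirect effect
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    ba, bb, _ = paths(trust[idx], intent[idx], use[idx])
    boot.append(ba * bb)
lo, hi = np.percentile(boot, [2.5, 97.5])
```

Partial mediation corresponds to both `c_direct` and `indirect` being non-zero, which is the pattern the abstract reports; the bootstrap percentile interval plays the role of the reported 95% CIs.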
Affiliation(s)
- Avishek Choudhury
- Industrial and Management Systems Engineering, Benjamin M. Statler College of Engineering and Mineral Resources, West Virginia University, Morgantown, WV, United States
- Hamid Shamszare
- Industrial and Management Systems Engineering, Benjamin M. Statler College of Engineering and Mineral Resources, West Virginia University, Morgantown, WV, United States
5
Östlund B, Malvezzi M, Frennert S, Funk M, Gonzalez-Vargas J, Baur K, Alimisis D, Thorsteinsson F, Alonso-Cepeda A, Fau G, Haufe F, Di Pardo M, Moreno JC. Interactive robots for health in Europe: Technology readiness and adoption potential. Front Public Health 2023; 11:979225. [PMID: 36992891] [PMCID: PMC10042286] [DOI: 10.3389/fpubh.2023.979225]
Abstract
INTRODUCTION Social robots are accompanied by high expectations of what they can bring to society and to the healthcare sector. So far, promising assumptions have been presented about how and where social robots are most relevant. We know that industry has used robots for a long time, but what about social uptake outside industry, specifically in the healthcare sector? This study discusses what trends are discernible, to better understand the gap between technology readiness and adoption of interactive robots in the welfare and health sectors in Europe. METHODS An assessment of interactive robot applications at the upper levels of the Technology Readiness Level scale is combined with an assessment of adoption potential based on Rogers' theory of diffusion of innovation. Most robot solutions are dedicated to individual rehabilitation or frailty and stress. Fewer solutions are developed for managing welfare services or public healthcare. RESULTS The results show that while robots are ready from the technological point of view, most of the applications had a low score for demand according to the stakeholders. DISCUSSION To enhance social uptake, a more informed discussion and more studies on the connections between technology readiness and adoption and use are suggested. Applications being available to users does not mean they have an advantage over previous solutions. Acceptance of robots is also heavily dependent on the impact of regulations in the welfare and healthcare sectors in Europe.
Affiliation(s)
- Britt Östlund
- Department of Biomedical Engineering and Health Systems, Royal Institute of Technology (KTH), Stockholm, Sweden
- Monica Malvezzi
- Department of Information Engineering and Mathematics, University of Siena, Siena, Italy
- Correspondence: Monica Malvezzi
- Susanne Frennert
- Internet of Things and People Research Center, Malmö University, Malmö, Sweden
- Michael Funk
- Cooperative Systems, Faculty of Computer Science, University of Vienna, Vienna, Austria
- Florian Haufe
- Sensory-Motor Systems Lab, Institute of Robotics and Intelligent Systems, ETH Zürich, Zürich, Switzerland
- Massimo Di Pardo
- SPW, Research and Innovation Department, Centro Ricerche Fiat (CRF), Orbassano, Italy
- Juan C. Moreno
- Neural Rehabilitation Group, Cajal Institute, Spanish National Research Council (CSIC), Madrid, Spain
6
On the Role of Beliefs and Trust for the Intention to Use Service Robots: An Integrated Trustworthiness Beliefs Model for Robot Acceptance. Int J Soc Robot 2023. [DOI: 10.1007/s12369-022-00952-4]
Abstract
With the increasing abilities of robots, the prediction of user decisions needs to go beyond the usability perspective, for example, by integrating distinctive beliefs and trust. In an online study (N = 400), the relationship between general trust in service robots and trust in a specific robot was first investigated, supporting the role of general trust as a starting point for trust formation. On this basis, it was explored, both for general acceptance of service robots and acceptance of a specific robot, whether technology acceptance models can be meaningfully complemented by specific beliefs from the theory of planned behavior (TPB) and the trust literature to enhance understanding of robot adoption. First, models integrating all belief groups were fitted, providing substantial variance predictions at both levels (general and specific) and a mediation of beliefs via trust to the intention to use. The omission of the performance expectancy and reliability beliefs was compensated for by more distinctive beliefs. In the final model (TB-RAM), effort expectancy and competence predicted trust at the general level. For a specific robot, competence and social influence predicted trust. Moreover, the effect of social influence on trust was moderated by the robot's application area (public > private), supporting situation-specific belief relevance in robot adoption. Taken together, in line with the TPB, these findings support a mediation cascade from beliefs via trust to the intention to use. Furthermore, incorporating distinctive instead of broad beliefs is promising for increasing the explanatory and practical value of acceptance modeling.
7
Tojib D, Abdi E, Tian L, Rigby L, Meads J, Prasad T. What’s Best for Customers: Empathetic Versus Solution-Oriented Service Robots. Int J Soc Robot 2023. [DOI: 10.1007/s12369-023-00970-w]
Abstract
A promising application of social robots highlighted by the ongoing labor shortage is to deploy them as service robots at organizational frontlines. As the face of the firm, service robots are expected to provide cognitive and affective support in response to customer inquiries. However, one question remains unanswered: would having a robot with a high level of affective support be helpful when such a robot cannot provide a satisfactory level of cognitive support to users? In this study, we aim to address this question by showing that empathetic service robots can be beneficial, although the extent of such benefits depends on the quality of the services they provide. Our in-person human–robot interaction study (n = 55) shows that when a service robot can only provide a partial solution, it is preferable for it to express more empathetic behaviors, as users will perceive it to be more useful and will have a better customer experience. However, when a service robot is able to provide a full solution, the level of empathy it displays does not result in significant differences in perceived usefulness and customer experience. These findings are further validated in an online experimental study performed in another country (n = 395).
8
Luciani B, Braghin F, Pedrocchi ALG, Gandolla M. Technology Acceptance Model for Exoskeletons for Rehabilitation of the Upper Limbs from Therapists' Perspectives. Sensors (Basel) 2023; 23:1721. [PMID: 36772758] [PMCID: PMC9919869] [DOI: 10.3390/s23031721]
Abstract
Over the last few years, exoskeletons have been demonstrated to be useful tools for supporting the execution of neuromotor rehabilitation sessions. However, they are still not very present in hospitals. Therapists tend to be wary of this type of technology, which reduces its acceptability and, therefore, its everyday use in clinical practice. The work presented in this paper investigates a novel point of view, different from that of the patients normally considered in similar analyses. Through the realization of a technology acceptance model, we investigate the factors that influence the acceptability of exoskeletons for rehabilitation of the upper limbs from therapists' perspectives. We analyzed data collected from a pool of 55 physiotherapists and physiatrists through the distribution of a questionnaire. Pearson's correlation and multiple linear regression were used for the analysis. The relations between the variables of interest were also investigated depending on participants' age and experience with technology. The model built from these data demonstrated that the perceived usefulness of a robotic system, in terms of time and effort savings, was the primary factor influencing therapists' willingness to use it. Physiotherapists' perception of the importance of interacting with an exoskeleton when carrying out an enhanced therapy session increased if survey participants already had experience with this type of rehabilitation technology, while their distrust and their consideration of others' opinions decreased. The conclusions drawn from our analyses show that we need to invest in making this technology better known to the public, in terms of education and training, if we aim to make exoskeletons genuinely accepted and usable by therapists. In addition, integrating exoskeletons with multi-sensor feedback systems would help provide comprehensive information about patients' condition and progress. This can help overcome the gap that a robot creates between a therapist and the patient's body, reducing the fear that specialists have of this technology, and can demonstrate exoskeletons' utility, thus increasing their perceived usefulness.
Affiliation(s)
- Beatrice Luciani
- Department of Mechanical Engineering, Politecnico di Milano, Via La Masa 1, 20156 Milano, Italy
- NeuroEngineering And Medical Robotics Laboratory (NEARLab), Department of Electronics, Information and Bioengineering, Politecnico di Milano, Piazza Leonardo da Vinci, 32, 20133 Milano, Italy
- Francesco Braghin
- Department of Mechanical Engineering, Politecnico di Milano, Via La Masa 1, 20156 Milano, Italy
- Alessandra Laura Giulia Pedrocchi
- NeuroEngineering And Medical Robotics Laboratory (NEARLab), Department of Electronics, Information and Bioengineering, Politecnico di Milano, Piazza Leonardo da Vinci, 32, 20133 Milano, Italy
- WE-COBOT Lab, Politecnico di Milano, Polo Territoriale di Lecco, Via G. Previati, 1/c, 23900 Lecco, Italy
- Marta Gandolla
- Department of Mechanical Engineering, Politecnico di Milano, Via La Masa 1, 20156 Milano, Italy
- NeuroEngineering And Medical Robotics Laboratory (NEARLab), Department of Electronics, Information and Bioengineering, Politecnico di Milano, Piazza Leonardo da Vinci, 32, 20133 Milano, Italy
- WE-COBOT Lab, Politecnico di Milano, Polo Territoriale di Lecco, Via G. Previati, 1/c, 23900 Lecco, Italy
9
Forgas-Coll S, Huertas-Garcia R, Andriella A, Alenyà G. Social robot-delivered customer-facing services: an assessment of the experience. Service Industries Journal 2023. [DOI: 10.1080/02642069.2022.2163995]
Affiliation(s)
- Guillem Alenyà
- Institut de Robòtica i Informàtica Industrial CSIC-UPC, Barcelona, Spain
10
The effect of required warmth on consumer acceptance of artificial intelligence in service: The moderating role of AI-human collaboration. International Journal of Information Management 2022. [DOI: 10.1016/j.ijinfomgt.2022.102533]
11
Choudhury A. Factors influencing clinicians' willingness to use an AI-based clinical decision support system. Front Digit Health 2022; 4:920662. [PMID: 36339516] [PMCID: PMC9628998] [DOI: 10.3389/fdgth.2022.920662]
Abstract
Background Given the opportunities created by artificial intelligence (AI) based decision support systems in healthcare, the vital question is whether clinicians are willing to use this technology as an integral part of the clinical workflow. Purpose This study leverages validated questions to formulate an online survey and consequently explore cognitive human factors influencing clinicians' intention to use an AI-based Blood Utilization Calculator (BUC), an AI system embedded in the electronic health record that delivers data-driven personalized recommendations for the number of packed red blood cells to transfuse for a given patient. Method A purposeful sampling strategy was used to exclusively include BUC users who are clinicians in a university hospital in Wisconsin. We recruited 119 BUC users who completed the entire survey. We leveraged structural equation modeling to capture the direct and indirect effects of “AI Perception” and “Expectancy” on clinicians' intention to use the technology when mediated by “Perceived Risk”. Results The findings indicate a significant negative direct effect of AI Perception on BUC Risk (β = −0.23, p < 0.001). Similarly, Expectancy had a significant negative effect on Risk (β = −0.49, p < 0.001). We also noted a significant negative impact of Risk on the Intent to Use BUC (β = −0.34, p < 0.001). Regarding the indirect effect of Expectancy on the Intent to Use BUC, the findings show a significant positive impact mediated by Risk (β = 0.17, p = 0.004). The study noted a significant positive and indirect effect of AI Perception on the Intent to Use BUC when mediated by Risk (β = 0.08, p = 0.027). Overall, this study demonstrated the influences of expectancy, perceived risk, and perception of AI on clinicians' intent to use BUC (an AI system). AI developers need to emphasize the benefits of AI technology, ensure ease of use (effort expectancy), clarify the system's potential (performance expectancy), and minimize risk perceptions by improving the overall design. Conclusion Identifying the factors that determine clinicians' intent to use AI-based decision support systems can help improve technology adoption and use in the healthcare domain. Enhanced and safe adoption of AI can uplift the overall care process and help standardize clinical decisions and procedures. Improved AI adoption in healthcare will help clinicians share their everyday clinical workload and make critical decisions.
12
Kraus J, Babel F, Hock P, Hauber K, Baumann M. The trustworthy and acceptable HRI checklist (TA-HRI): questions and design recommendations to support a trustworthy and acceptable design of human-robot interaction. Gruppe. Interaktion. Organisation. Zeitschrift für Angewandte Organisationspsychologie (GIO) 2022. [DOI: 10.1007/s11612-022-00643-8]
Abstract
This contribution to the journal Gruppe. Interaktion. Organisation. (GIO) presents a checklist of questions and design recommendations for designing acceptable and trustworthy human-robot interaction (HRI). In order to extend the application scope of robots towards more complex contexts in the public domain and in private households, robots have to fulfill requirements regarding social interaction between humans and robots in addition to safety and efficiency. In particular, this results in recommendations for the design of the appearance, behavior, and interaction strategies of robots that can contribute to acceptance and appropriate trust. The presented checklist was derived from existing guidelines of associated fields of application, the current state of research on HRI, and the results of the BMBF-funded project RobotKoop. The trustworthy and acceptable HRI checklist (TA-HRI) contains 60 design topics with questions and design recommendations for the development and design of acceptable and trustworthy robots. The TA-HRI checklist provides a basis for discussion of the design of service robots for use in public and private environments and will be continuously refined based on feedback from the community.
13
Fan H, Gao W, Han B. How does (im)balanced acceptance of robots between customers and frontline employees affect hotels’ service quality? Computers in Human Behavior 2022. [DOI: 10.1016/j.chb.2022.107287]
14
Lei M, Clemente IM, Liu H, Bell J. The Acceptance of Telepresence Robots in Higher Education. Int J Soc Robot 2022; 14:1025-1042. [PMID: 35103081] [PMCID: PMC8791687] [DOI: 10.1007/s12369-021-00837-y]
Abstract
While telepresence robots have increasingly become accepted in diverse settings, the research on their acceptance in educational contexts has been underdeveloped. This study analyzed how the use intention of telepresence robots can be influenced by perceived usefulness, perceived ease of use, subjective norm, and perceived risk for students, faculty, and staff in higher education. Survey data were collected from 60 participants with direct operator experience with a variety of telepresence robots deployed in a large research university in the Midwest region of the United States. Path analysis results indicated that perceived usefulness was the only significant direct predictor of use intention of telepresence robots. Both perceived ease of use and subjective norm had a significant positive effect on perceived usefulness. Subjective norm also had a significant positive indirect effect on use intention, mediated by perceived usefulness. Perceived risk had a negative effect on perceived ease of use. These findings indicated that the usefulness of robots was central to operators’ decisions to use telepresence robots. Therefore, design choice for telepresence robots should prioritize usefulness. Secondly, the design of telepresence robots should minimize complexity for the end user and minimize cognitive demand. Having nominal difficulty of use would also facilitate multiple embodiments by combining telepresence robots with other technologies to support more rich social interactions.
Affiliation(s)
- Ming Lei
- Department of Counseling, Educational Psychology and Special Education, Michigan State University, 620 Farm Lane, 513 Erickson Hall, East Lansing, MI 48824 USA
- Ian M. Clemente
- Department of Counseling, Educational Psychology and Special Education, Michigan State University, 620 Farm Lane, 513 Erickson Hall, East Lansing, MI 48824 USA
- Haixia Liu
- Integrative, Religious, and Intercultural Studies Department, Grand Valley State University, One Campus Dr, Allendale, MI 49401 USA
- John Bell
- Department of Counseling, Educational Psychology and Special Education, Michigan State University, 620 Farm Lane, 513 Erickson Hall, East Lansing, MI 48824 USA
15
Investigating e-Retailers’ Intentions to Adopt Cryptocurrency Considering the Mediation of Technostress and Technology Involvement. Sustainability 2022. [DOI: 10.3390/su14020641]
Abstract
Cryptocurrencies have transgressed ever-changing economic trends in the global economy, owing to their conveyance, security, trust, and the ability to make transactions without the aid of formal institutions and governing bodies. However, the adoption of cryptocurrency remains low among stakeholders, including e-retailers. Thus, the current work explores the intentions of e-retailers in the Asia and Pacific region to adopt cryptocurrencies. This study considers the TAM-based SOR, with a combination of non-cognitive attributes (compatibility and convenience) proposed as stimuli for e-retailers to adopt the examined cryptocurrencies. The findings indicate that the proposed non-cognitive attributes are critical in determining e-retailers’ technostress (emotional state). Moreover, it was found that technostress among e-retailers profoundly impacts their intentions to adopt cryptocurrency in business settings. Meanwhile, regulatory support communication can be used to help regulatory bodies and governing institutions control the future economy worldwide. The proposed study offers significant theoretical and practical contributions through its investigation of e-retailers’ intentions to adopt cryptocurrency for the first time in the particular context of technostress and regulatory support.
16
Forgas-Coll S, Huertas-Garcia R, Andriella A, Alenyà G. The effects of gender and personality of robot assistants on customers’ acceptance of their service. Service Business 2022; 16:359-389. [PMCID: PMC9039270] [DOI: 10.1007/s11628-022-00492-x] [Received: 12/05/2021] [Accepted: 04/11/2022]
Abstract
The Covid-19 pandemic has stimulated the use of social robots in front-office services. However, some initial applications yielded disappointing results, as managers were unaware of the level of development of the robots’ artificial intelligence systems. This study proposes to adapt the Almere model to estimate the technological acceptance of service robots, which express their gender and personality, whilst assisting consumers. A 2 × 2 (two genders vs. two personalities) between-subjects experiment was conducted with 219 participants. Model estimation with Structural Equation Modelling confirmed seven out of eight hypotheses, and all four scenarios were estimated with Ordinary Least Squares, showing that robot gender and personality affected their technological acceptance.
Affiliation(s)
- Santiago Forgas-Coll
- Business Department, University of Barcelona, Avda. Diagonal, 690, 08034 Barcelona, Spain
- Ruben Huertas-Garcia
- Business Department, University of Barcelona, Avda. Diagonal, 690, 08034 Barcelona, Spain
- Antonio Andriella
- Institut de Robòtica i Informàtica Industrial CSIC-UPC, C/ Llorens i Artigas 4-6, 08028 Barcelona, Spain
- Guillem Alenyà
- Institut de Robòtica i Informàtica Industrial CSIC-UPC, C/ Llorens i Artigas 4-6, 08028 Barcelona, Spain
17
Song X, Xu B, Zhao Z. Can People Experience Romantic Love for Artificial Intelligence? An Empirical Study of Intelligent Assistants. Information & Management 2022. [DOI: 10.1016/j.im.2022.103595]
18
Canal G, Torras C, Alenyà G. Are Preferences Useful for Better Assistance? ACM Transactions on Human-Robot Interaction 2021. [DOI: 10.1145/3472208]
Abstract
Assistive robots have an inherent need to adapt to the user they are assisting. This is crucial for the correct execution of the task, user safety, and comfort. However, adaptation can be performed in several ways, and we believe user preferences are key to it. In this article, we evaluate the use of preferences for physically assistive robotics tasks in a human-robot interaction user evaluation. Three assistive tasks were implemented, consisting of assisted feeding, shoe fitting, and jacket dressing, with the robot performing each task in a different manner based on user preferences. We assessed the ability of the users to determine which execution of the task used their chosen preferences (if any). The results show that most users were able to successfully identify the cases where their preferences were used, even when they had not seen the task before. We also observed that their satisfaction with the task increased when the chosen preferences were employed. Finally, we analyzed users’ opinions regarding assistive tasks and preferences, which show promising expectations as to the benefits of adapting robot behavior to the user through preferences.
Affiliation(s)
- Gerard Canal
- Institut de Robòtica i Informàtica Industrial, CSIC-UPC and Department of Informatics, King’s College London, Aldwych, London, United Kingdom
- Carme Torras
- Institut de Robòtica i Informàtica Industrial, CSIC-UPC, Barcelona, Spain
- Guillem Alenyà
- Institut de Robòtica i Informàtica Industrial, CSIC-UPC, Barcelona, Spain
19
Forgas-Coll S, Huertas-Garcia R, Andriella A, Alenyà G. How do Consumers’ Gender and Rational Thinking Affect the Acceptance of Entertainment Social Robots? Int J Soc Robot 2021. [DOI: 10.1007/s12369-021-00845-y]
Abstract
In recent years, the rapid ageing of the population, longer life expectancy, and elderly people’s desire to live independently are social changes that put pressure on healthcare systems. This context is boosting market demand for companion and entertainment social robots and, consequently, producers and distributors are interested in knowing how these social robots are accepted by consumers. Based on technology acceptance models, a parsimonious model is proposed to estimate the intention to use this new advanced social robot technology, and an analysis is performed to determine how consumers’ gender and rational thinking condition the antecedents of the intention to use. The results show that gender differences are more important than the literature suggests. Women regarded social influence and perceived enjoyment as the main motives for using a social robot, whereas men considered perceived usefulness to be the principal reason and, as a differential argument, ease of use. Regarding the reasoning system, the most significant differences occurred between heuristic individuals, who stated social influence as the main reason for using a robot, and more rational consumers, who gave ease of use as a differential argument.
20
Turja T, Taipale S, Niemelä M, Oinas T. Positive Turn in Elder-Care Workers' Views Toward Telecare Robots. Int J Soc Robot 2021; 14:931-944. [PMID: 34873425] [PMCID: PMC8636069] [DOI: 10.1007/s12369-021-00841-2] [Accepted: 11/02/2021]
Abstract
Robots have been slowly but steadily introduced into welfare sectors. Our previous observations, based on a large-scale survey of Finnish elder-care workers in 2016, showed that while robots were perceived to be useful in certain telecare tasks, using robots may also prove incompatible with care workers' personal values. The current study presents the second wave of the survey data from 2020, with the same respondents (N = 190), and shows how these views have changed for the better, including higher expectations of telecare robotization and decreased concern over care robots' compatibility with personal values. In a longitudinal analysis (Phase 1), the positive change in views toward telecare robots was found to be influenced by the care robots' higher value compatibility. In an additional cross-sectional analysis (Phase 2), focusing on the factors underlying personal values, care robots' value compatibility was associated with social norms toward care robots, the threat of technological unemployment, and COVID-19 stress. The significance of social norms in robot acceptance came down to the more universal ethical standards of care work rather than shared norms in the workplace. COVID-19 stress did not explain the temporal changes in views about robot use in care but played a role in assessments of the compatibility between personal values and care robot use. In conclusion, for care workers to see potential in care robots, the new technology must support the ethical standards of care work, such as respectfulness, compassion, and trustworthiness in the nurse-patient interaction. In robotizing care work, personal values are significant predictors of the task values.
Affiliation(s)
- Tuuli Turja
- Faculty of Social Sciences, Tampere University, Kalevantie 5, 33014 Tampere, Finland
- Tomi Oinas
- University of Jyväskylä, Jyväskylä, Finland
21
Esmaeilzadeh P, Mirzaei T, Dharanikota S. Patients' Perceptions Toward Human-Artificial Intelligence Interaction in Health Care: Experimental Study. J Med Internet Res 2021; 23:e25856. [PMID: 34842535] [PMCID: PMC8663518] [DOI: 10.2196/25856] [Received: 11/18/2020] [Revised: 05/04/2021] [Accepted: 10/26/2021]
Abstract
Background It is believed that artificial intelligence (AI) will be an integral part of health care services in the near future and will be incorporated into several aspects of clinical care, such as prognosis, diagnostics, and care planning. Thus, many technology companies have invested in producing AI clinical applications. Patients are among the most important beneficiaries who potentially interact with these technologies and applications; thus, patients’ perceptions may affect the widespread use of clinical AI. Patients need assurance that AI clinical applications will not harm them and that they will instead benefit from using AI technology for health care purposes. Although human-AI interaction can enhance health care outcomes, possible dimensions of concern and risk should be addressed before its integration with routine clinical care. Objective The main objective of this study was to examine how potential users (patients) perceive the benefits, risks, and use of AI clinical applications for their health care purposes, and how their perceptions may differ when faced with three health care service encounter scenarios. Methods We designed a 2×3 experiment that crossed type of health condition (ie, acute or chronic) with three types of clinical encounters between patients and physicians (ie, AI clinical applications as substituting technology, AI clinical applications as augmenting technology, and no AI as a traditional in-person visit). We used an online survey to collect data from 634 individuals in the United States. Results The interactions between the types of health care service encounters and health conditions significantly influenced individuals’ perceptions of privacy concerns, trust issues, communication barriers, concerns about transparency in regulatory standards, liability risks, benefits, and intention to use across the six scenarios. We found no significant differences among scenarios regarding perceptions of performance risk and social biases. Conclusions The results imply that incompatibility with instrumental, technical, ethical, or regulatory values can be a reason for rejecting AI applications in health care. Thus, various risks remain associated with implementing AI applications in diagnostics and treatment recommendations for patients with both acute and chronic illnesses. The concerns are also evident when AI applications are used as a recommendation system under physician experience, wisdom, and control. Prior to the widespread rollout of AI, more studies are needed to identify the challenges that may raise concerns about implementing and using AI applications. This study provides researchers and managers with critical insights into the determinants of individuals’ intention to use AI clinical applications. Regulatory agencies should establish normative standards and evaluation guidelines for implementing AI in health care in cooperation with health care institutions. Regular audits and ongoing monitoring and reporting systems can be used to continuously evaluate the safety, quality, transparency, and ethics of AI clinical applications.
Affiliation(s)
- Pouyan Esmaeilzadeh
- Department of Information Systems and Business Analytics, College of Business, Florida International University, Miami, FL, United States
- Tala Mirzaei
- Department of Information Systems and Business Analytics, College of Business, Florida International University, Miami, FL, United States
- Spurthy Dharanikota
- Department of Information Systems and Business Analytics, College of Business, Florida International University, Miami, FL, United States
22
Abstract
Understanding the factors affecting the use of healthcare technologies is a crucial topic that has been extensively studied, particularly during the last decade. These factors have been examined using different technology acceptance models and theories. However, a systematic review that offers extensive insight into what affects the acceptance of healthcare technologies and services and covers distinctive trends in large-scale research remains lacking. Therefore, this review aims to systematically review the articles published on technology acceptance in healthcare. From a yield of 1768 studies collected, 142 empirical studies met the eligibility criteria and were extensively analyzed. The key findings confirmed that TAM and UTAUT are the most prevalent models for explaining what affects the acceptance of various healthcare technologies across different user groups, settings, and countries. Apart from the core constructs of TAM and UTAUT, the results showed that anxiety, computer self-efficacy, innovativeness, and trust are the most influential factors affecting various healthcare technologies. The results also revealed that Taiwan and the USA are leading the research on technology acceptance in healthcare, with a remarkable increase in studies focusing on telemedicine and electronic medical record solutions. This review is believed to enhance our understanding through a number of theoretical contributions and practical implications by unveiling the full potential of technology acceptance in healthcare and opening the door for further research opportunities.
23
Prakash AV, Das S. Medical practitioner's adoption of intelligent clinical diagnostic decision support systems: A mixed-methods study. Information & Management 2021. [DOI: 10.1016/j.im.2021.103524]
24
Esmaeilzadeh P. Use of AI-based tools for healthcare purposes: a survey study from consumers' perspectives. BMC Med Inform Decis Mak 2020; 20:170. [PMID: 32698869] [PMCID: PMC7376886] [DOI: 10.1186/s12911-020-01191-1] [Received: 05/07/2020] [Accepted: 07/16/2020]
Abstract
BACKGROUND Several studies highlight the effects of artificial intelligence (AI) systems on healthcare delivery. AI-based tools may improve prognosis, diagnostics, and care planning. It is believed that AI will be an integral part of healthcare services in the near future and will be incorporated into several aspects of clinical care. Thus, many technology companies and governmental projects have invested in producing AI-based clinical tools and medical applications. Patients can be among the most important beneficiaries and users of AI-based applications, and their perceptions may affect the widespread use of AI-based tools. Patients need assurance that they will not be harmed by AI-based devices and that they will instead benefit from using AI technology for healthcare purposes. Although AI can enhance healthcare outcomes, possible dimensions of concern and risk should be addressed before its integration with routine clinical care. METHODS We developed a model based mainly on value perceptions, owing to the specificity of the healthcare field. This study examines the perceived benefits and risks of AI medical devices with clinical decision support (CDS) features from consumers' perspectives. We used an online survey to collect data from 307 individuals in the United States. RESULTS The proposed model identifies the sources of motivation and pressure for patients in the development of AI-based devices. The results show that technological, ethical (trust factors), and regulatory concerns significantly contribute to the perceived risks of using AI applications in healthcare. Of the three categories, technological concerns (i.e., performance and communication features) are the most significant predictors of risk beliefs. CONCLUSIONS This study sheds more light on the factors affecting perceived risks and proposes recommendations on how to practically reduce these concerns. The findings provide implications for research and practice in the area of AI-based CDS. Regulatory agencies, in cooperation with healthcare institutions, should establish normative standards and evaluation guidelines for the implementation and use of AI in healthcare. Regular audits and ongoing monitoring and reporting systems can be used to continuously evaluate the safety, quality, transparency, and ethics of AI-based services.
Affiliation(s)
- Pouyan Esmaeilzadeh
- Department of Information Systems and Business Analytics, College of Business, Florida International University, Miami, FL, 33199, USA.