1
Pilacinski A, Pinto A, Oliveira S, Araújo E, Carvalho C, Silva PA, Matias R, Menezes P, Sousa S. The robot eyes don't have it. The presence of eyes on collaborative robots yields marginally higher user trust but lower performance. Heliyon 2023; 9:e18164. [PMID: 37520993; PMCID: PMC10382291; DOI: 10.1016/j.heliyon.2023.e18164]
Abstract
Eye gaze is a prominent feature of human social life, but little is known about whether fitting eyes on machines makes humans trust them more. In this study we compared subjective and objective markers of human trust when collaborating with eyed and non-eyed robots of the same type. We used virtual reality scenes in which we manipulated distance and the presence of eyes on a robot's display during simple collaboration tasks. We found that while collaboration with eyed cobots resulted in slightly higher subjective trust ratings, objective markers such as pupil size and task completion time indicated that collaborating with eyed robots was in fact less comfortable. These findings are in line with recent suggestions that anthropomorphism may actually be a detrimental feature of collaborative robots, and they illustrate the complex relationship between objective and subjective markers of trust when humans collaborate with artificial agents.
Affiliation(s)
- Artur Pilacinski
- Medical Faculty, Ruhr University Bochum, Bochum, Germany
- CINEICC - Center for Research in Neuropsychology and Cognitive Behavioral Intervention, University of Coimbra, Coimbra, Portugal
- Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- Ana Pinto
- Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- Faculty of Sciences and Technology, University of Coimbra, Coimbra, Portugal
- CeBER - Centre for Business and Economics Research, University of Coimbra, Coimbra, Portugal
- Soraia Oliveira
- Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- Eduardo Araújo
- Faculty of Sciences and Technology, University of Coimbra, Coimbra, Portugal
- Department of Informatics Engineering, University of Coimbra, Coimbra, Portugal
- Carla Carvalho
- CINEICC - Center for Research in Neuropsychology and Cognitive Behavioral Intervention, University of Coimbra, Coimbra, Portugal
- Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- Paula Alexandra Silva
- Faculty of Sciences and Technology, University of Coimbra, Coimbra, Portugal
- Department of Informatics Engineering, University of Coimbra, Coimbra, Portugal
- CISUC - Centre for Informatics and Systems of the University of Coimbra, Coimbra, Portugal
- Ricardo Matias
- Faculty of Sciences and Technology, University of Coimbra, Coimbra, Portugal
- Electrical and Computer Engineering Department, University of Coimbra, Coimbra, Portugal
- Paulo Menezes
- Faculty of Sciences and Technology, University of Coimbra, Coimbra, Portugal
- Electrical and Computer Engineering Department, University of Coimbra, Coimbra, Portugal
- Sonia Sousa
- University of Trás-os-Montes e Alto Douro, Vila Real, Portugal
- School of Digital Technologies, Tallinn University, Tallinn, Estonia
2
Algorithmic Management. Business & Information Systems Engineering 2022. [DOI: 10.1007/s12599-022-00764-w]
3
Jain R, Garg N, Khera SN. Adoption of AI-Enabled Tools in Social Development Organizations in India: An Extension of UTAUT Model. Front Psychol 2022; 13:893691. [PMID: 35795409; PMCID: PMC9251489; DOI: 10.3389/fpsyg.2022.893691]
Abstract
Social development organizations increasingly employ artificial intelligence (AI)-enabled tools to help team members collaborate effectively and efficiently. These tools are used in various team management tasks and activities. Based on the unified theory of acceptance and use of technology (UTAUT), this study explores the factors influencing employees' use of AI-enabled tools. It extends the model in two ways: (a) by evaluating the impact of these tools on employees' collaboration and (b) by exploring the moderating role of AI aversion. Data were collected through an online survey of employees working with AI-enabled tools. The research model was analyzed using partial least squares (PLS) in two steps: assessment of the measurement model followed by the structural model. The results revealed that the antecedent variables effort expectancy, performance expectancy, social influence, and facilitating conditions are positively associated with the use of AI-enabled tools, which in turn has a positive relationship with collaboration. The study also found a significant moderating effect of AI aversion on the relationship between performance expectancy and use of technology. These findings imply that organizations should focus on building an environment conducive to adopting AI-enabled tools while also addressing employees' concerns about AI.
Affiliation(s)
- Naval Garg
- University School of Management and Entrepreneurship, Delhi Technological University, Rohini, India
4
Unlocking the value of artificial intelligence in human resource management through AI capability framework. Human Resource Management Review 2022. [DOI: 10.1016/j.hrmr.2022.100899]
5
Jussupow E, Spohrer K, Heinzl A, Gawlitza J. Augmenting Medical Diagnosis Decisions? An Investigation into Physicians' Decision-Making Process with Artificial Intelligence. Information Systems Research 2021. [DOI: 10.1287/isre.2020.0980]
Abstract
Systems based on artificial intelligence (AI) increasingly support physicians in diagnostic decisions, but they are not without errors and biases. Failure to detect those may result in wrong diagnoses and medical errors. Compared with rule-based systems, however, these systems are less transparent and their errors less predictable. Thus, it is difficult, yet critical, for physicians to carefully evaluate AI advice. This study uncovers the cognitive challenges that medical decision makers face when they receive potentially incorrect advice from AI-based diagnosis systems and must decide whether to follow or reject it. In experiments with 68 novice and 12 experienced physicians, both novices (with and without clinical experience) and experienced radiologists made more inaccurate diagnosis decisions when provided with incorrect AI advice than with no advice at all. We elicit five decision-making patterns and show that wrong diagnostic decisions often result from shortcomings in utilizing metacognitions related to the decision makers' own reasoning (self-monitoring) and to the AI-based system (system monitoring). As a result, physicians make decisions based on beliefs rather than actual data, or engage in unsuitably superficial evaluation of the AI advice. Our study has implications for the training of physicians and spotlights the crucial role of human actors in compensating for AI errors.
Affiliation(s)
- Ekaterina Jussupow
- Business School, Area Information Systems, Chair of General Management and Information Systems, University of Mannheim, 68161 Mannheim, Germany
- Kai Spohrer
- Business School, Area Information Systems, Chair of General Management and Information Systems, University of Mannheim, 68161 Mannheim, Germany
- Armin Heinzl
- Business School, Area Information Systems, Chair of General Management and Information Systems, University of Mannheim, 68161 Mannheim, Germany
- Joshua Gawlitza
- Institute of Diagnostic and Interventional Radiology, Thoracic Imaging, University Hospital Rechts der Isar, Technical University Munich, 81675 Munich, Germany
6
Canhoto AI. Leveraging machine learning in the global fight against money laundering and terrorism financing: An affordances perspective. Journal of Business Research 2021; 131:441-452. [PMID: 33100427; PMCID: PMC7568127; DOI: 10.1016/j.jbusres.2020.10.012]
Abstract
Financial services organisations facilitate the movement of money worldwide and keep records of their clients' identity and financial behaviour. As such, they have been enlisted by governments worldwide to assist with the detection and prevention of money laundering, a key tool in the fight to reduce crime and create sustainable economic development, corresponding to Goal 16 of the United Nations Sustainable Development Goals. In this paper, we investigate how the technical and contextual affordances of machine learning algorithms may enable these organisations to accomplish that task. We find that, owing to the unavailability of high-quality, large training datasets on money laundering methods, there is limited scope for supervised machine learning. Conversely, it is possible to use reinforcement learning and, to an extent, unsupervised learning, although only to model unusual financial behaviour, not actual money laundering.
Affiliation(s)
- Ana Isabel Canhoto
- Brunel Business School, Brunel University London, Kingston Lane, Uxbridge, Middlesex UB8 3PH, United Kingdom
7
Chen Y, Zahedi FM, Abbasi A, Dobolyi D. Trust calibration of automated security IT artifacts: A multi-domain study of phishing-website detection tools. Information & Management 2021. [DOI: 10.1016/j.im.2020.103394]
8
Berger B, Adam M, Rühr A, Benlian A. Watch Me Improve—Algorithm Aversion and Demonstrating the Ability to Learn. Business & Information Systems Engineering 2020. [DOI: 10.1007/s12599-020-00678-5]
Abstract
Owing to advancements in artificial intelligence (AI), and specifically in machine learning, information technology (IT) systems can support humans in an increasing number of tasks. Yet previous research indicates that people often prefer human support to support by an IT system, even if the latter provides superior performance, a phenomenon called algorithm aversion. One possible cause of algorithm aversion put forward in the literature is that users lose trust in IT systems they become familiar with and perceive to err, for example by making forecasts that turn out to deviate from the actual value. This paper therefore evaluates, in an incentive-compatible online experiment, the effectiveness of demonstrating an AI-based system's ability to learn as a potential countermeasure against algorithm aversion. The experiment reveals how the nature of an erring advisor (human vs. algorithmic), its familiarity to the user (unfamiliar vs. familiar), and its ability to learn (non-learning vs. learning) influence a decision maker's reliance on the advisor's judgement in an objective, non-personal decision task. The results reveal no difference in reliance on unfamiliar human and algorithmic advisors, but differences in reliance on familiar human and algorithmic advisors that err. Demonstrating an advisor's ability to learn, however, offsets the effect of familiarity. This study thus contributes to an enhanced understanding of algorithm aversion and is one of the first to examine how users perceive whether an IT system is able to learn. The findings provide theoretical and practical implications for the employment and design of AI-based systems.
9
Twyman NW, Proudfoot JG, Cameron AF, Case E, Burgoon JK, Twitchell DP. Too Busy to Be Manipulated: How Multitasking with Technology Improves Deception Detection in Collaborative Teamwork. J Manage Inform Syst 2020. [DOI: 10.1080/07421222.2020.1759938]
Affiliation(s)
- Nathan W. Twyman
- Department of Information Systems, Marriott School of Business, Brigham Young University, Provo, UT, USA
- Jeffrey G. Proudfoot
- Information and Process Management Department, Bentley University, Waltham, MA, USA
- Eric Case
- Ira A. Fulton Schools of Engineering, Arizona State University, Tucson, AZ, USA
- Information Security, TuSimple, Inc., Tucson, AZ, USA
- Judee K. Burgoon
- Center for the Management of Information, Eller College of Management, University of Arizona, Tucson, AZ, USA
- Douglas P. Twitchell
- Information Technology Management, College of Business and Economics, Boise State University, Boise, ID, USA
10
Seeber I, Bittner E, Briggs RO, de Vreede T, de Vreede GJ, Elkins A, Maier R, Merz AB, Oeste-Reiß S, Randrup N, Schwabe G, Söllner M. Machines as teammates: A research agenda on AI in team collaboration. Information & Management 2020. [DOI: 10.1016/j.im.2019.103174]
11
Dunbar NE, Miller CH, Lee YH, Jensen ML, Anderson C, Adams AS, Elizondo J, Thompson W, Massey Z, Nicholls SB, Ralston R, Donovan J, Mathews E, Roper B, Wilson SN. Reliable deception cues training in an interactive video game. Computers in Human Behavior 2018. [DOI: 10.1016/j.chb.2018.03.027]
12
Ho SM, Hancock JT, Booth C, Liu X. Computer-Mediated Deception: Strategies Revealed by Language-Action Cues in Spontaneous Communication. J Manage Inform Syst 2016. [DOI: 10.1080/07421222.2016.1205924]
13
Proudfoot JG, Jenkins JL, Burgoon JK, Nunamaker JF. More Than Meets the Eye: How Oculometric Behaviors Evolve Over the Course of Automated Deception Detection Interactions. J Manage Inform Syst 2016. [DOI: 10.1080/07421222.2016.1205929]
14
Ludwig S, van Laer T, de Ruyter K, Friedman M. Untangling a Web of Lies: Exploring Automated Detection of Deception in Computer-Mediated Communication. J Manage Inform Syst 2016. [DOI: 10.1080/07421222.2016.1205927]
15
Adame BJ. Training in the mitigation of anchoring bias: A test of the consider-the-opposite strategy. Learning and Motivation 2016. [DOI: 10.1016/j.lmot.2015.11.002]
16
17
Twyman NW, Lowry PB, Burgoon JK, Nunamaker JF. Autonomous Scientifically Controlled Screening Systems for Detecting Information Purposely Concealed by Individuals. J Manage Inform Syst 2015. [DOI: 10.1080/07421222.2014.995535]