1. Schelble BG, Lopez J, Textor C, Zhang R, McNeese NJ, Pak R, Freeman G. Towards Ethical AI: Empirically Investigating Dimensions of AI Ethics, Trust Repair, and Performance in Human-AI Teaming. Hum Factors 2024; 66:1037-1055. [PMID: 35938319] [DOI: 10.1177/00187208221116952] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0]
Abstract
OBJECTIVE: To determine the efficacy of two trust repair strategies (apology and denial) for trust violations of an ethical nature by an autonomous teammate.
BACKGROUND: While ethics in human-AI interaction is extensively studied, little research has investigated how decisions with ethical implications affect trust and performance within human-AI teams, or how trust can subsequently be repaired.
METHOD: Forty teams of two human participants and one autonomous teammate completed three team missions within a synthetic task environment. The autonomous teammate made an ethical or unethical action during each mission, followed by an apology or denial. Measures of trust in the team, trust in the autonomous teammate, trust in the human teammate, perceived autonomous teammate ethicality, and team performance were taken.
RESULTS: Teams with unethical autonomous teammates reported significantly lower trust in the team and in the autonomous teammate. Unethical autonomous teammates were also perceived as substantially more unethical. Neither trust repair strategy effectively restored trust after an ethical violation. Autonomous teammate ethicality was not related to team score, although teams with unethical autonomous teammates did have shorter mission times.
CONCLUSION: Ethical violations significantly harm trust in the overall team and in the autonomous teammate but do not negatively impact team score. However, current trust repair strategies like apologies and denials appear ineffective in restoring trust after this type of violation.
APPLICATION: This research highlights the need to develop trust repair strategies specific to human-AI teams and to trust violations of an ethical nature.
Affiliation(s)
- Beau G Schelble
- Human-Centered Computing, Clemson University, Clemson, SC, USA
- Jeremy Lopez
- Department of Psychology, Clemson University, Clemson, SC, USA
- Claire Textor
- Department of Psychology, Clemson University, Clemson, SC, USA
- Rui Zhang
- Human-Centered Computing, Clemson University, Clemson, SC, USA
- Richard Pak
- Department of Psychology, Clemson University, Clemson, SC, USA
- Guo Freeman
- Human-Centered Computing, Clemson University, Clemson, SC, USA
2. Lopez J, Watkins H, Pak R. Enhancing component-specific trust with consumer automated systems through humanness design. Ergonomics 2023; 66:291-302. [PMID: 35583421] [DOI: 10.1080/00140139.2022.2079728] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
Consumer automation is a suitable venue for studying the efficacy of untested humanness design methods for promoting component-specific trust in multi-component systems. Subjective (trust, self-confidence) and behavioural (use, manual override) measures were recorded as 82 participants interacted with a four-component automation-bearing system in a simulated smart-home task over two experimental blocks. During the first block, all components were perfectly reliable (100%). During the second block, one component became unreliable (60%). Participants interacted with a system containing either one or four simulated voice assistants. In the single-assistant condition, the unreliable component produced trust changes for every component. In the four-assistant condition, trust decreased only for the unreliable component. Across agent-number conditions, use decreased between blocks only for the unreliable component. Self-confidence and overrides exhibited ceiling and floor effects, respectively. Our findings provide the first evidence that humanness design can effectively enhance component-specific trust in consumer systems.
Practitioner summary: Participants interacted with simulated smart-home multi-component systems that contained one or four voice assistants. In the single-voice condition, one component's decreasing reliability coincided with trust changes for all components. In the four-voice condition, trust decreased only for the decreasingly reliable component. The number of voices did not influence use strategies.
Abbreviations: ACC: adaptive cruise control; CST: component-specific trust; SWT: system-wide trust; UAV: unmanned aerial vehicle; CPRS: complacency potential rating scale; MANOVA: multivariate analysis of variance.
Affiliation(s)
- Jeremy Lopez
- Department of Psychology, Clemson University, Clemson, SC, USA
- Heather Watkins
- Department of Psychology, Clemson University, Clemson, SC, USA
- Richard Pak
- Department of Psychology, Clemson University, Clemson, SC, USA
3. Exploring Age and Gender Differences in ICT Cybersecurity Behaviour. Hum Behav Emerg Technol 2022. [DOI: 10.1155/2022/2693080] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
Known age differences exist in information and communication technology (ICT) use, attitudes, access, and literacy. Less is known about age differences in cybersecurity risks and associated cybersecurity behaviours. Using an online survey, this study analyses data from 579 participants to investigate age differences across four key cybersecurity behaviours: device securement, password generation, proactive checking, and software updating. Significant age differences were found, but the relationship is not straightforward. Older users appear less likely than younger users to secure their devices; the reverse was found for the other behaviours, with older users appearing more likely to generate secure passwords, show proactive risk awareness, and regularly install updates. Gender was not a significant predictor of security behaviour (although males scored higher for self-reported computer self-efficacy and general resilience). Self-efficacy was identified as a mediator between age and three of the cybersecurity behaviours (password generation, proactive checking, and updating). General resilience was also a significant mediator for device securement, password generation, and updating; however, resilience acted as a moderator for proactive checking. The implications of these findings are twofold: firstly, to guide the development of training and interventions tailored to different cybersecurity behaviours, and secondly, to inform cybersecurity policy development.
4. Lacroux A, Martin-Lacroux C. Should I Trust the Artificial Intelligence to Recruit? Recruiters' Perceptions and Behavior When Faced With Algorithm-Based Recommendation Systems During Resume Screening. Front Psychol 2022; 13:895997. [PMID: 35874355] [PMCID: PMC9298741] [DOI: 10.3389/fpsyg.2022.895997] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5]
Abstract
Resume screening assisted by decision support systems that incorporate artificial intelligence is developing rapidly in many organizations, raising technical, managerial, legal, and ethical issues. The purpose of the present paper is to better understand the reactions of recruiters when they are offered algorithm-based recommendations during resume screening. Two polarized attitudes have been identified in the literature on users' reactions to algorithm-based recommendations: algorithm aversion, which reflects a general distrust of algorithms and a preference for human recommendations; and automation bias, which corresponds to overconfidence in the decisions or recommendations made by algorithmic decision support systems (ADSS). Drawing on results obtained in the field of automated decision support, we make the general hypothesis that recruiters trust human experts more than ADSS because they distrust algorithms for subjective decisions such as recruitment. An experiment on resume screening was conducted on a sample of professionals (N = 694) involved in the screening of job applications. They were asked to study a job offer and then evaluate two fictitious resumes in a factorial design that manipulated the type of recommendation (no recommendation/algorithmic recommendation/human expert recommendation) and the consistency of the recommendations (consistent vs. inconsistent). Our results support the general hypothesis of a preference for human recommendations: recruiters exhibited a higher level of trust toward human expert recommendations than toward algorithmic recommendations. However, we also found that recommendation consistency had a differential and unexpected impact on decisions: in the presence of an inconsistent algorithmic recommendation, recruiters favored the unsuitable over the suitable resume. Our results also show that specific personality traits (extraversion, neuroticism, and self-confidence) are associated with a differential use of algorithmic recommendations. Implications for research and HR policies are discussed.
Affiliation(s)
- Alain Lacroux
- Univ. Polytechnique Hauts de France, IDH, CRISS, Valenciennes, France
5. Reiter AMF, Diaconescu AO, Eppinger B, Li SC. Human aging alters social inference about others' changing intentions. Neurobiol Aging 2021; 103:98-108. [PMID: 33845400] [DOI: 10.1016/j.neurobiolaging.2021.01.034] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7]
Abstract
Decoding others' intentions accurately in order to adapt one's own behavior is pivotal throughout life. In this study, we asked how younger and older adults deal with uncertainty in dynamic social environments. We used an advice-taking paradigm together with Bayesian modeling to characterize effects of aging on learning about others' time-varying intentions. We observed age differences when comparing learning on two levels of social uncertainty: the fidelity of the adviser and the volatility of intentions. Older adults expected the adviser to change his/her intentions more frequently (i.e., a higher volatility of the adviser). They also showed higher confidence (i.e., precision) in their volatility beliefs and were less willing to change their beliefs about volatility over the course of the experiment. This led them to update their predictions about the fidelity of the adviser more quickly. Potentially indicative of stereotype effects, we observed that older advisers were perceived as more volatile, but also more faithful than younger advisers. This offers new insights into adult age differences in response to social uncertainty.
Affiliation(s)
- Andrea M F Reiter
- Lifespan Developmental Neuroscience, Faculty of Psychology, Technische Universität Dresden, Germany; Department of Neurology, Max-Planck-Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, London, UK; Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, Center of Mental Health, University of Würzburg, Würzburg, Germany.
- Andreea O Diaconescu
- Translational Neuromodeling Unit, University of Zurich & ETH Zurich, Switzerland; Department of Psychiatry, University of Basel, Switzerland; Krembil Centre for Neuroinformatics, Centre for Addiction and Mental Health (CAMH), University of Toronto, Canada
- Ben Eppinger
- Lifespan Developmental Neuroscience, Faculty of Psychology, Technische Universität Dresden, Germany; Department of Psychology, Concordia University, Canada; PERFORM Centre, Concordia University, Canada
- Shu-Chen Li
- Lifespan Developmental Neuroscience, Faculty of Psychology, Technische Universität Dresden, Germany; CeTI - Centre for Tactile Internet With Human-in-the-Loop, Technische Universität Dresden, Germany
6. Wei J, Bolton ML, Humphrey L. The level of measurement of trust in automation. Theor Issues Ergon Sci 2020. [DOI: 10.1080/1463922x.2020.1766596] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8]
Affiliation(s)
- Jiajun Wei
- Department of Industrial and Systems Engineering, University at Buffalo, The State University of New York, Buffalo, New York, USA
- Matthew L. Bolton
- Department of Industrial and Systems Engineering, University at Buffalo, The State University of New York, Buffalo, New York, USA
- Laura Humphrey
- Autonomous Controls Branch, Aerospace Systems Directorate, Air Force Research Laboratory, Wright-Patterson AFB, Ohio, USA
7. Pak R, Crumley-Branyon JJ, de Visser EJ, Rovira E. Factors that affect younger and older adults' causal attributions of robot behaviour. Ergonomics 2020; 63:421-439. [PMID: 32096445] [DOI: 10.1080/00140139.2020.1734242] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3]
Abstract
Stereotypes are cognitive shortcuts that facilitate efficient social judgments about others. Just as causal attributions affect perceptions of people, they may similarly affect perceptions of technology, particularly anthropomorphic technology such as robots. In a scenario-based study, younger and older adults judged the performance and capability of an anthropomorphised robot that appeared young or old. In some cases the robot successfully performed a task, while at other times it failed. Results showed that older adult participants were more susceptible to aging stereotypes, as indicated by their trust ratings. In addition, both younger and older adult participants succumbed to aging stereotypes in their ratings of the robots' perceived capability. Finally, a summary of causal reasoning results showed that our participants may have applied aging stereotypes to older-appearing robots: they were most likely to give credit to a properly functioning robot when it appeared young and performed a cognitive task. Our results tentatively suggest that human theories of social cognition do not wholly translate to technology-based contexts, and future work may elaborate on these findings.
Practitioner summary: Perception and expectations of the capabilities of robots may influence whether users accept and use them, especially older users. The current results suggest that care must be taken in the design of these robots, as users may stereotype them.
Affiliation(s)
- Richard Pak
- Department of Psychology, Clemson University, Clemson, SC, USA
- Ewart J de Visser
- Department of Behavioral Sciences and Leadership, Warfighter Effectiveness Research Center, U. S. Air Force Academy, Colorado Springs, CO, USA
- Ericka Rovira
- Department of Behavioral Sciences and Leadership, U.S. Military Academy, West Point, NY, USA
8. Rovira E, McLaughlin AC, Pak R, High L. Looking for Age Differences in Self-Driving Vehicles: Examining the Effects of Automation Reliability, Driving Risk, and Physical Impairment on Trust. Front Psychol 2019; 10:800. [PMID: 31105610] [PMCID: PMC6498898] [DOI: 10.3389/fpsyg.2019.00800] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2]
Abstract
PURPOSE: Self-driving cars represent an extremely high level of autonomous technology and a promising technology that may help older adults safely maintain independence. However, human behavior with automation is complex and not straightforward (Parasuraman and Riley, 1997; Parasuraman, 2000; Rovira et al., 2007; Parasuraman and Wickens, 2008; Parasuraman and Manzey, 2010; Parasuraman et al., 2012). In addition, because no fully self-driving vehicles are yet available to the public, most research has been limited to subjective survey-based assessments that depend on respondents' limited, second-hand knowledge and do not reflect the complex situational and dispositional factors known to affect trust and technology adoption.
METHODS: To address these issues, the current study examined the specific factors that affect younger and older adults' trust in self-driving vehicles.
RESULTS: The results showed that trust in self-driving vehicles depended on multiple interacting variables, such as the age of the respondent, risk during travel, the impairment level of the hypothetical driver, and whether the self-driving car was reliable.
CONCLUSION: The primary contribution of this work is that, contrary to existing opinion surveys suggesting broad distrust of self-driving cars, ratings of trust in self-driving cars varied with situational characteristics (reliability, driver impairment, risk level). Specifically, individuals reported less trust in the self-driving car when there was a failure with the car technology, and more trust in the technology in a low-risk driving situation with an unimpaired driver when the automation was unreliable.
Affiliation(s)
- Ericka Rovira
- Department of Behavioral Sciences and Leadership, US Military Academy, West Point, NY, United States
- Richard Pak
- Department of Psychology, Clemson University, Clemson, SC, United States
- Luke High
- Department of Behavioral Sciences and Leadership, US Military Academy, West Point, NY, United States
9. Lyons JB, Guznov SY. Individual differences in human–machine trust: A multi-study look at the perfect automation schema. Theor Issues Ergon Sci 2018. [DOI: 10.1080/1463922x.2018.1491071] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7]
Affiliation(s)
- Joseph B. Lyons
- Airman Systems Directorate, Air Force Research Laboratory, Dayton, OH, USA
10. de Visser EJ, Pak R, Shaw TH. From 'automation' to 'autonomy': the importance of trust repair in human-machine interaction. Ergonomics 2018; 61:1409-1427. [PMID: 29578376] [DOI: 10.1080/00140139.2018.1457725] [Citation(s) in RCA: 55] [Impact Index Per Article: 9.2]
Abstract
Modern interactions with technology are increasingly moving away from simple human use of computers as tools toward human relationships with autonomous entities that carry out actions on our behalf. In a recent commentary, Peter Hancock issued a stark warning to the field of human factors that attention must be focused on the appropriate design of a new class of technology: highly autonomous systems. In this article, we heed the warning and propose a human-centred approach aimed directly at ensuring that future human-autonomy interactions remain focused on the user's needs and preferences. Adapting literature from industrial psychology, we propose a framework to infuse a uniquely human-like ability, building and actively repairing trust, into autonomous systems. We conclude by proposing a model to guide the design of future autonomy and a research agenda to explore current challenges in repairing trust between humans and autonomous systems.
Practitioner Summary: This paper is a call to practitioners to re-cast our connection to technology as akin to a relationship between two humans rather than between a human and their tools. To that end, designing autonomy with trust repair abilities will ensure that future technology maintains and repairs relationships with its human partners.
Affiliation(s)
- Ewart J de Visser
- Human Factors and Applied Cognition, Department of Psychology, George Mason University, Fairfax, VA, USA
- Warfighter Effectiveness Research Center, Department of Behavioral Sciences and Leadership, United States Air Force Academy, Colorado Springs, CO, USA
- Richard Pak
- Department of Psychology, Clemson University, Clemson, SC, USA
- Tyler H Shaw
- Human Factors and Applied Cognition, Department of Psychology, George Mason University, Fairfax, VA, USA
12. Watson JM, Salmon PM, Lacey D, Kerr D. Continuance in online participation following the compromise of older adults' identity information: a literature review. Theor Issues Ergon Sci 2018. [DOI: 10.1080/1463922x.2018.1432714] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5]
Affiliation(s)
- Judy M. Watson
- Centre for Human Factors and Sociotechnical Systems, University of the Sunshine Coast, Maroochydore DC, Qld, Australia
- Paul M. Salmon
- Centre for Human Factors and Sociotechnical Systems, University of the Sunshine Coast, Maroochydore DC, Qld, Australia
- David Lacey
- University of the Sunshine Coast and IDCARE, Maroochydore DC, Qld, Australia
- Don Kerr
- University of the Sunshine Coast, Maroochydore, Qld, Australia
13. Pak R, Rovira E, McLaughlin AC, Leidheiser W. Evaluating Attitudes and Experience With Emerging Technology in Cadets and Civilian Undergraduates. Mil Psychol 2017. [DOI: 10.1037/mil0000175] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4]
Affiliation(s)
- Richard Pak
- Department of Psychology, Clemson University
- Ericka Rovira
- Department of Behavioral Sciences and Leadership, U.S. Military Academy
14. McMurray J, Strudwick G, Forchuk C, Morse A, Lachance J, Baskaran A, Allison L, Booth R. The Importance of Trust in the Adoption and Use of Intelligent Assistive Technology by Older Adults to Support Aging in Place: Scoping Review Protocol. JMIR Res Protoc 2017; 6:e218. [PMID: 29097354] [PMCID: PMC5691240] [DOI: 10.2196/resprot.8772] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1]
Abstract
Background: Intelligent assistive technologies that complement and extend human abilities have proliferated in recent years. Service robots, home automation equipment, and other digital assistant devices possessing artificial intelligence are forms of assistive technology that have become popular in society. Older adults (>55 years of age) have been identified by industry, government, and researchers as a demographic that can benefit significantly from the use of intelligent assistive technology to support various activities of daily living.
Objective: The purpose of this scoping review is to summarize the literature on the importance of the concept of "trust" in the adoption of intelligent assistive technologies by older adults to support aging in place.
Methods: Using a scoping review methodology, our search strategy will examine the following databases: ACM Digital Library, Allied and Complementary Medicine Database (AMED), Cumulative Index to Nursing and Allied Health Literature (CINAHL), Medline, PsycINFO, Scopus, and Web of Science. Two reviewers will independently screen the initial titles obtained from the search, and these results will be further inspected by other members of the research team for inclusion in the review.
Results: This review will provide insights into how the concept of trust is actualized in the adoption of intelligent assistive technology by older adults. Preliminary sensitization to the literature suggests that the concept of trust is fluid, unstable, and intimately tied to the type of intelligent assistive technology being examined. Furthermore, a wide range of theoretical lenses that include elements of trust have been used to examine this concept.
Conclusions: This review will describe the concept of trust in the adoption of intelligent assistive technology by older adults, and will provide insights for practitioners, policy makers, and technology vendors for future practice.
Affiliation(s)
- Cheryl Forchuk
- Western University, Arthur Labatt Family School of Nursing, London, ON, Canada
- Adam Morse
- Western University, Arthur Labatt Family School of Nursing, London, ON, Canada
- Jessica Lachance
- Western University, Arthur Labatt Family School of Nursing, London, ON, Canada
- Arani Baskaran
- Western University, Arthur Labatt Family School of Nursing, London, ON, Canada
- Lauren Allison
- Western University, Arthur Labatt Family School of Nursing, London, ON, Canada
- Richard Booth
- Western University, Arthur Labatt Family School of Nursing, London, ON, Canada