1
Volosin M, Kálnay M, Bánffi Á, Nyeső N, Molnár GV, Palatinus Z, Martos T. The leading role of personality in concerns about autonomous vehicles. PLoS One 2024; 19:e0301895. [PMID: 38837940] [DOI: 10.1371/journal.pone.0301895] [Citation(s) in RCA: 0]
Abstract
Development of autonomous vehicles (AVs) is advancing at a rapid rate; however, the most dominant barriers to their adoption appear to be psychological rather than technical. The present online survey study aimed to investigate which demographic and personality dimensions predict attitudes towards AVs in a Hungarian sample (N = 328). Data were collected by convenience and snowball sampling. Three-level hierarchical regression models were applied: demographic variables were entered in the first step, general personality traits in the second, and attitude-like personality factors in the third. We demonstrated that the predictive effect of age, gender and education disappeared when personality dimensions were included in the models. Importantly, more positive general attitudes towards technology and higher optimism regarding innovations predicted eagerness to adopt AVs. On the other hand, individuals with more negative attitudes and higher dependence on technology, as well as those with a lower level of Sensory Sensation Seeking and a higher level of Conscientiousness, were more concerned about AVs. Our results suggest that AV acceptance cannot be regarded as a one-dimensional construct and that certain personality traits might be stronger predictors of AV acceptance than demographic factors.
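As an illustration only (not the authors' code), a three-step hierarchical regression of the kind described above might be sketched in Python as follows; the file name and all column names are hypothetical placeholders.

```python
# Minimal sketch of a three-step hierarchical regression: demographics first,
# then general personality traits, then attitude-like factors.
# Hypothetical data file and column names; not the cited study's code.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("av_survey.csv")  # hypothetical survey data

blocks = [
    "age + gender + education",                              # Step 1: demographics
    "extraversion + conscientiousness + sensation_seeking",  # Step 2: general traits
    "tech_attitude + innovation_optimism",                    # Step 3: attitude-like factors
]

formula = "av_concern ~ 1"
prev_r2 = 0.0
for step, block in enumerate(blocks, start=1):
    formula += " + " + block
    fit = smf.ols(formula, data=df).fit()
    print(f"Step {step}: R2 = {fit.rsquared:.3f}, delta R2 = {fit.rsquared - prev_r2:.3f}")
    prev_r2 = fit.rsquared
```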
Affiliation(s)
- Márta Volosin
- Institute of Psychology, University of Szeged, Szeged, Hungary
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary
- Martin Kálnay
- Institute of Psychology, University of Szeged, Szeged, Hungary
- Department of Ergonomics and Psychology, Budapest University of Technology and Economics, Budapest, Hungary
- Ádám Bánffi
- Institute of Psychology, University of Szeged, Szeged, Hungary
- Natália Nyeső
- Institute of Psychology, University of Szeged, Szeged, Hungary
- Zsolt Palatinus
- Institute of Psychology, University of Szeged, Szeged, Hungary
- Tamás Martos
- Institute of Psychology, University of Szeged, Szeged, Hungary
2
Liao S, Lin L, Pei H, Chen Q. How does the status of errant robot affect our desire for contact? - The moderating effect of team interdependence. Ergonomics 2024:1-19. [PMID: 38781044] [DOI: 10.1080/00140139.2024.2348672] [Citation(s) in RCA: 0]
Abstract
Technological breakthroughs such as artificial intelligence and sensors make human-robot collaboration a reality. Robots with highly reliable, specialised skills gain informal status in collaborative teams, but factors such as unstructured work environments and task requirements make robot errors inevitable. How, then, do the status differences of errant robots affect the desire for contact, and do team characteristics also have an impact? This paper describes an intergroup experiment using the Experimental Vignette Method (EVM). Based on Expectation Violation Theory, 214 subjects were invited to test the following hypotheses: (1) Errant robot status influences employees' desire for contact and support for robotics research through negative emotions; (2) Team interdependence is a boundary condition for the effect of errant robot status on negative emotions. This paper contributes to the literature on employee reactions to robot errors in human-robot collaboration and provides suggestions for robot status design.
Affiliation(s)
- Shilong Liao
- School of Economics and Management, Lanzhou University of Technology, Lanzhou, China
- Long Lin
- School of Economics and Management, Lanzhou University of Technology, Lanzhou, China
- Hairun Pei
- School of Economics and Management, Lanzhou University of Technology, Lanzhou, China
- Qin Chen
- School of Economics and Management, Lanzhou Institute of Technology, Lanzhou, China
3
Zhou Y, Guo H, Shi H, Jiang S, Liao Y. Key factors capturing the willingness to use automated vehicles for travel in China. PLoS One 2024; 19:e0298348. [PMID: 38363740] [PMCID: PMC10871520] [DOI: 10.1371/journal.pone.0298348] [Citation(s) in RCA: 0]
Abstract
With the continuous advancement of technology, automated vehicle technology is progressively maturing, and it is crucial to understand the factors influencing individuals' intention to use automated vehicles. This study examined user willingness to adopt automated vehicles. An ordered probit model with random parameters, in which age and educational background were treated as random, was constructed to analyze the factors influencing respondents' adoption of automated vehicles. We devised and conducted an online questionnaire survey, yielding 2105 valid questionnaires. The findings reveal significant positive associations of positive social trust, perceived ease of use, perceived usefulness, and low levels of perceived risk with the acceptance of automated vehicles. Additionally, our study identifies extraversion and openness as strong mediators in shaping individuals' intentions to use automated vehicles. Furthermore, prior experience with assisted driving negatively impacts people's inclination toward embracing automated vehicles. Our research also provides insights for promoting the adoption of automated vehicles: favorable media coverage and a reasonable division of responsibilities can enhance individuals' intentions to adopt this technology.
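For illustration, a plain ordered probit (without the random parameters used in the study, which require a mixed-model estimator) might be sketched as follows; the data file and variable names are hypothetical.

```python
# Sketch of an ordered probit for a Likert-type willingness rating.
# Hypothetical column names; the cited study additionally treats age and
# education as random parameters, which this simple model does not do.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("av_willingness_survey.csv")  # hypothetical data

predictors = df[["social_trust", "perceived_ease_of_use", "perceived_usefulness",
                 "perceived_risk", "assisted_driving_experience"]]
model = OrderedModel(df["willingness"], predictors, distr="probit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```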
Affiliation(s)
- Yongjiang Zhou
- School of Automobile and Transportation, Xihua University, Chengdu, Sichuan, China
- Hanying Guo
- School of Automobile and Transportation, Xihua University, Chengdu, Sichuan, China
- Hongguo Shi
- School of Transportation and Logistics, Southwest Jiaotong University, Chengdu, Sichuan, China
- Siyi Jiang
- School of Automobile and Transportation, Xihua University, Chengdu, Sichuan, China
- Yang Liao
- School of Automobile and Transportation, Xihua University, Chengdu, Sichuan, China
4
Qu J, Zhou R, Zhang Y, Ma Q. Understanding trust calibration in automated driving: the effect of time, personality, and system warning design. Ergonomics 2023; 66:2165-2181. [PMID: 36920361] [DOI: 10.1080/00140139.2023.2191907] [Citation(s) in RCA: 0]
Abstract
In a future of human-automation co-driving, dynamic trust should be considered. This paper explored how trust changes over time and how multiple factors (time, trust propensity, neuroticism, and takeover warning design) calibrate trust together. We conducted two driving simulator experiments to measure drivers' trust before, during, and after the experiment under takeover scenarios. The results showed that trust in automation increased during short-term interactions and dropped after four months, though it remained higher than pre-experiment trust. Initial trust and trust propensity had a stable impact on trust. Drivers trusted the system more with the two-stage (MR + TOR) warning design than with the one-stage (TOR) design. Neuroticism had a significant effect for the countdown warning compared with the content warning. Practitioner summary: The results provide new data and knowledge for trust calibration in the takeover scenario. The findings can help design a more reasonable automated driving system in long-term human-automation interactions.
Affiliation(s)
- Jianhong Qu
- School of Economics and Management, Beihang University, Beijing, P. R. China
- Ronggang Zhou
- School of Economics and Management, Beihang University, Beijing, P. R. China
- Yaping Zhang
- School of Economics and Management, Beihang University, Beijing, P. R. China
- Qianli Ma
- School of Economics and Management, Beihang University, Beijing, P. R. China
5
Walker F, Forster Y, Hergeth S, Kraus J, Payre W, Wintersberger P, Martens M. Trust in automated vehicles: constructs, psychological processes, and assessment. Front Psychol 2023; 14:1279271. [PMID: 38078237] [PMCID: PMC10701515] [DOI: 10.3389/fpsyg.2023.1279271] [Citation(s) in RCA: 0]
Abstract
There is a growing body of research on trust in driving automation systems. In this paper, we seek to clarify the way trust is conceptualized, calibrated and measured, taking into account issues related to specific levels of driving automation. We find that: (1) experience plays a vital role in trust calibration; (2) experience should be measured not just in terms of distance traveled, but in terms of the range of situations encountered; (3) system malfunctions and recovery from such malfunctions are a fundamental part of this experience. We summarize our findings in a framework describing the dynamics of trust calibration. We observe that methods used to quantify trust often lack objectivity, reliability, and validity, and propose a set of recommendations for researchers seeking to select suitable trust measures for their studies. In conclusion, we argue that the safe deployment of current and future automated vehicles depends on drivers developing appropriate levels of trust. Given the potentially severe consequences of miscalibrated trust, it is essential that drivers incorporate the possibility of new and unexpected driving situations in their mental models of system capabilities. It is vitally important that we develop methods that contribute to this goal.
Affiliation(s)
- Philipp Wintersberger
- TU Wien, Vienna, Austria
- University of Applied Sciences Upper Austria, Hagenberg, Austria
- Marieke Martens
- Industrial Design, Eindhoven University of Technology, Eindhoven, Netherlands
6
Momen A, de Visser EJ, Fraune MR, Madison A, Rueben M, Cooley K, Tossell CC. Group trust dynamics during a risky driving experience in a Tesla Model X. Front Psychol 2023; 14:1129369. [PMID: 37408965] [PMCID: PMC10319128] [DOI: 10.3389/fpsyg.2023.1129369] [Citation(s) in RCA: 0]
Abstract
The growing concern about the risk and safety of autonomous vehicles (AVs) has made it vital to understand driver trust and behavior when operating AVs. While research has uncovered human factors and design issues based on individual driver performance, there remains a lack of insight into how trust in automation evolves in groups of people who face risk and uncertainty while traveling in AVs. To this end, we conducted a naturalistic experiment with groups of participants who were encouraged to engage in conversation while riding a Tesla Model X on campus roads. Our methodology was uniquely suited to uncover these issues through naturalistic interaction by groups in the face of a risky driving context. Conversations were analyzed, revealing several themes pertaining to trust in automation: (1) collective risk perception, (2) experimenting with automation, (3) group sense-making, (4) human-automation interaction issues, and (5) benefits of automation. Our findings highlight the untested and experimental nature of AVs and confirm serious concerns about the safety and readiness of this technology for on-road use. The process of determining appropriate trust and reliance in AVs will therefore be essential for drivers and passengers to ensure the safe use of this experimental and continuously changing technology. Revealing insights into social group-vehicle interaction, our results speak to the potential dangers and ethical challenges with AVs as well as provide theoretical insights on group trust processes with advanced technology.
Affiliation(s)
- Ali Momen
- United States Air Force Academy, Colorado Springs, CO, United States
- Marlena R. Fraune
- Department of Psychology, New Mexico State University, Las Cruces, NM, United States
- Anna Madison
- United States Air Force Academy, Colorado Springs, CO, United States
- United States Army Research Laboratory, Aberdeen Proving Ground, Aberdeen, MD, United States
- Matthew Rueben
- Department of Psychology, New Mexico State University, Las Cruces, NM, United States
- Katrina Cooley
- United States Air Force Academy, Colorado Springs, CO, United States
- Chad C. Tossell
- United States Air Force Academy, Colorado Springs, CO, United States
7
On the Role of Beliefs and Trust for the Intention to Use Service Robots: An Integrated Trustworthiness Beliefs Model for Robot Acceptance. Int J Soc Robot 2023. [DOI: 10.1007/s12369-022-00952-4] [Citation(s) in RCA: 0]
Abstract
With the increasing abilities of robots, the prediction of user decisions needs to go beyond the usability perspective, for example, by integrating distinctive beliefs and trust. In an online study (N = 400), the relationship between general trust in service robots and trust in a specific robot was first investigated, supporting the role of general trust as a starting point for trust formation. On this basis, it was explored, both for general acceptance of service robots and acceptance of a specific robot, whether technology acceptance models can be meaningfully complemented by specific beliefs from the theory of planned behavior (TPB) and the trust literature to enhance understanding of robot adoption. Models integrating all belief groups were fitted first, providing essential variance predictions at both levels (general and specific) and a mediation of beliefs via trust to the intention to use. The omission of the performance expectancy and reliability beliefs was compensated for by more distinctive beliefs. In the final model (TB-RAM), effort expectancy and competence predicted trust at the general level. For a specific robot, competence and social influence predicted trust. Moreover, the effect of social influence on trust was moderated by the robot's application area (public > private), supporting situation-specific belief relevance in robot adoption. Taken together, in line with the TPB, these findings support a mediation cascade from beliefs via trust to the intention to use. Furthermore, incorporating distinctive instead of broad beliefs is promising for increasing the explanatory and practical value of acceptance modeling.
8
Nordhoff S, Stapel J, He X, Gentner A, Happee R. Do driver's characteristics, system performance, perceived safety, and trust influence how drivers use partial automation? A structural equation modelling analysis. Front Psychol 2023; 14:1125031. [PMID: 37139004] [PMCID: PMC10150639] [DOI: 10.3389/fpsyg.2023.1125031] [Citation(s) in RCA: 1]
Abstract
The present study surveyed actual users with extensive experience of SAE Level 2 partially automated cars to investigate how driver’s characteristics (i.e., socio-demographics, driving experience, personality), system performance, perceived safety, and trust in partial automation influence the use of partial automation. 81% of respondents stated that they use their automated car with speed (ACC) and steering assist (LKA) at least 1–2 times a week, and 84 and 92% activate LKA and ACC at least occasionally. Respondents positively rated the performance of Adaptive Cruise Control (ACC) and Lane Keeping Assistance (LKA). ACC was rated higher than LKA, and detection of lead vehicles and lane markings was rated higher than smooth control for ACC and LKA, respectively. Respondents reported primarily disengaging (i.e., turning off) partial automation due to a lack of trust in the system and when driving is fun. They rarely disengaged the system when they noticed they became bored or sleepy. Structural equation modelling revealed that trust had a positive effect on driver’s propensity for secondary task engagement during partially automated driving, while the effect of perceived safety was not significant. Regarding driver’s characteristics, we did not find a significant effect of age on perceived safety and trust in partial automation. Neuroticism negatively correlated with perceived safety and trust, while extraversion did not impact perceived safety and trust. The remaining three personality dimensions ‘openness’, ‘conscientiousness’, and ‘agreeableness’ did not form valid and reliable scales in the confirmatory factor analysis, and could thus not be subjected to the structural equation modelling analysis. Future research should re-assess the suitability of the short 10-item scale as a measure of the Big-Five personality traits, and investigate the impact on perceived safety, trust, and use of automation.
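As a rough illustration of such a structural equation modelling analysis, a lavaan-style specification could be fitted with the semopy package; the latent variables, indicators, and column names below are hypothetical and do not reproduce the authors' model.

```python
# Sketch of a structural equation model relating personality, perceived
# safety, trust, and secondary task engagement. Indicator and column names
# are hypothetical; this is not the model specification from the cited study.
import pandas as pd
from semopy import Model, calc_stats

spec = """
# Measurement model (hypothetical indicators)
Trust           =~ trust1 + trust2 + trust3
PerceivedSafety =~ safety1 + safety2 + safety3

# Structural model
Trust           ~ neuroticism + extraversion
PerceivedSafety ~ neuroticism + extraversion
secondary_task_engagement ~ Trust + PerceivedSafety
"""

df = pd.read_csv("l2_users_survey.csv")  # hypothetical data
model = Model(spec)
model.fit(df)
print(model.inspect())    # parameter estimates
print(calc_stats(model))  # fit indices (CFI, RMSEA, ...)
```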
Affiliation(s)
- Sina Nordhoff
- Department Transport and Planning, Delft University of Technology, Delft, Netherlands
- *Correspondence: Sina Nordhoff,
- Jork Stapel
- Department Cognitive Robotics, Delft University of Technology, Delft, Netherlands
- Xiaolin He
- Department Cognitive Robotics, Delft University of Technology, Delft, Netherlands
- Riender Happee
- Department Cognitive Robotics, Delft University of Technology, Delft, Netherlands
9
Kraus J, Babel F, Hock P, Hauber K, Baumann M. The trustworthy and acceptable HRI checklist (TA-HRI): questions and design recommendations to support a trustworthy and acceptable design of human-robot interaction. Gruppe. Interaktion. Organisation. Zeitschrift für Angewandte Organisationspsychologie (GIO) 2022. [DOI: 10.1007/s11612-022-00643-8] [Citation(s) in RCA: 0]
Abstract
This contribution to the journal Gruppe. Interaktion. Organisation. (GIO) presents a checklist of questions and design recommendations for designing acceptable and trustworthy human-robot interaction (HRI). In order to extend the application scope of robots towards more complex contexts in the public domain and in private households, robots have to fulfill requirements regarding social interaction between humans and robots in addition to safety and efficiency. In particular, this results in recommendations for the design of the appearance, behavior, and interaction strategies of robots that can contribute to acceptance and appropriate trust. The presented checklist was derived from existing guidelines of associated fields of application, the current state of research on HRI, and the results of the BMBF-funded project RobotKoop. The trustworthy and acceptable HRI checklist (TA-HRI) contains 60 design topics with questions and design recommendations for the development and design of acceptable and trustworthy robots. The TA-HRI Checklist provides a basis for discussion of the design of service robots for use in public and private environments and will be continuously refined based on feedback from the community.
10
Grinschgl S, Neubauer AC. Supporting Cognition With Modern Technology: Distributed Cognition Today and in an AI-Enhanced Future. Front Artif Intell 2022; 5:908261. [PMID: 35910191] [PMCID: PMC9329671] [DOI: 10.3389/frai.2022.908261] [Citation(s) in RCA: 0]
Abstract
In the present article, we explore prospects for using artificial intelligence (AI) to distribute cognition via cognitive offloading (i.e., to delegate thinking tasks to AI-technologies). Modern technologies for cognitive support are rapidly developing and increasingly popular. Today, many individuals heavily rely on their smartphones or other technical gadgets to support their daily life but also their learning and work. For instance, smartphones are used to track and analyze changes in the environment, and to store and continually update relevant information. Thus, individuals can offload (i.e., externalize) information to their smartphones and refresh their knowledge by accessing it. This implies that using modern technologies such as AI empowers users via offloading and enables them to function as always-updated knowledge professionals, so that they can deploy their insights strategically instead of relying on outdated and memorized facts. This AI-supported offloading of cognitive processes also saves individuals' internal cognitive resources by distributing the task demands into their environment. In this article, we provide (1) an overview of empirical findings on cognitive offloading and (2) an outlook on how individuals' offloading behavior might change in an AI-enhanced future. More specifically, we first discuss determinants of offloading such as the design of technical tools and links to metacognition. Furthermore, we discuss benefits and risks of cognitive offloading. While offloading improves immediate task performance, it might also be a threat to users' cognitive abilities. Following this, we provide a perspective on whether individuals will make heavier use of AI-technologies for offloading in the future and how this might affect their cognition. On one hand, individuals might heavily rely on easily accessible AI-technologies, which in turn might diminish their internal cognition/learning. On the other hand, individuals might aim at enhancing their cognition so that they can keep up with AI-technologies and will not be replaced by them. Finally, we present our own data and findings from the literature on the assumption that individuals' personality is a predictor of trust in AI. Trust in modern AI-technologies might be a strong determinant for wider appropriation and dependence on these technologies to distribute cognition and should thus be considered in an AI-enhanced future.
11
Miller L, Kraus J, Babel F, Baumann M. More Than a Feeling-Interrelation of Trust Layers in Human-Robot Interaction and the Role of User Dispositions and State Anxiety. Front Psychol 2021; 12:592711. [PMID: 33912098] [PMCID: PMC8074795] [DOI: 10.3389/fpsyg.2021.592711] [Citation(s) in RCA: 10]
Abstract
With service robots becoming more ubiquitous in social life, interaction design needs to adapt to novice users and the associated uncertainty in the first encounter with this technology in new emerging environments. Trust in robots is an essential psychological prerequisite to achieve safe and convenient cooperation between users and robots. This research focuses on psychological processes in which user dispositions and states affect trust in robots, which in turn is expected to impact the behavior and reactions in the interaction with robotic systems. In a laboratory experiment, the influence of the propensity to trust in automation and negative attitudes toward robots on state anxiety, trust, and comfort distance toward a robot was explored. Participants were approached by a humanoid domestic robot two times and indicated their comfort distance and trust. The results favor the differentiation and interdependence of dispositional, initial, and dynamic learned trust layers. A mediation from the propensity to trust to initial learned trust by state anxiety provides an insight into the psychological processes through which personality traits might affect interindividual outcomes in human-robot interaction (HRI). The findings underline the meaningfulness of user characteristics as predictors for the initial approach to robots and the importance of considering users’ individual learning history regarding technology and robots in particular.
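The reported mediation path (propensity to trust via state anxiety to initial learned trust) can be illustrated with a generic bootstrap mediation analysis, for example using pingouin; the data file and column names are hypothetical, not the cited study's analysis pipeline.

```python
# Sketch of a simple bootstrap mediation analysis:
# propensity to trust -> state anxiety -> initial learned trust.
# Hypothetical data file and column names.
import pandas as pd
import pingouin as pg

df = pd.read_csv("hri_lab_study.csv")  # hypothetical data

results = pg.mediation_analysis(
    data=df,
    x="propensity_to_trust",    # predictor (disposition)
    m="state_anxiety",          # mediator (state)
    y="initial_learned_trust",  # outcome
    n_boot=5000,
    seed=42,
)
print(results)
```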
Affiliation(s)
- Linda Miller
- Department Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Johannes Kraus
- Department Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Franziska Babel
- Department Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
- Martin Baumann
- Department Human Factors, Institute of Psychology and Education, Ulm University, Ulm, Germany
12
Small Talk with a Robot? The Impact of Dialog Content, Talk Initiative, and Gaze Behavior of a Social Robot on Trust, Acceptance, and Proximity. Int J Soc Robot 2021. [DOI: 10.1007/s12369-020-00730-0] [Citation(s) in RCA: 5]