1
de Winter JCF, Petermeijer SM, Abbink DA. Shared control versus traded control in driving: a debate around automation pitfalls. Ergonomics 2023;66:1494-1520. PMID: 36476120. DOI: 10.1080/00140139.2022.2153175.
Abstract
A major question in human-automation interaction is whether tasks should be traded or shared between human and automation. This work presents reflections, which have evolved through classroom debates between the authors over the past 10 years, on these two forms of human-automation interaction, with a focus on the automated driving domain. As in the lectures, we start with a historically informed survey of six pitfalls of automation: (1) loss of situation and mode awareness, (2) deskilling, (3) unbalanced mental workload, (4) behavioural adaptation, (5) misuse, and (6) disuse. One of the authors then explains why he believes that haptic shared control may remedy these pitfalls. Another author rebuts these arguments, contending that traded control is the most promising way to improve road safety. The article ends with a common ground: shared control outperforms traded control at medium environmental complexity, while traded control outperforms shared control at low environmental complexity.

Practitioner summary: Designers of automation systems will have to consider whether humans and automation should perform tasks alternately or simultaneously. The present article provides an in-depth reflection on this dilemma, which may prove insightful and help guide design.

Abbreviations: ACC: Adaptive Cruise Control, a system that can automatically maintain a safe distance from the vehicle in front; AEB: Advanced Emergency Braking (also known as Autonomous Emergency Braking), a system that automatically brakes to a full stop in an emergency situation; AES: Automated Evasive Steering, a system that automatically steers the car back to safety in an emergency situation; ISA: Intelligent Speed Adaptation, a system that can limit engine power automatically so that the driving speed does not exceed a safe or allowed speed.
Affiliation(s)
- J C F de Winter
- Department of Cognitive Robotics, Delft University of Technology, Delft, The Netherlands
- D A Abbink
- Department of Cognitive Robotics, Delft University of Technology, Delft, The Netherlands
2
Hancock PA. Reacting and responding to rare, uncertain and unprecedented events. Ergonomics 2023;66:454-478. PMID: 35758330. DOI: 10.1080/00140139.2022.2095443.
Abstract
This work examines how we may be able to anticipate, respond to, and train for the occurrence of rare, uncertain, and unexpected events in human-machine systems operations. In particular, it uses a foundational matrix, which describes the combinations of the state-of-the-world and the state-of-the-respondent, to formulate preferred response strategies contingent upon what is knowable and actionable in each circumstance. It employs the dichotomy of System I and System II forms of cognitive response and augments these perspectives with a further form of decision-making, namely System III. The latter is predicated upon reactions to novel, unprecedented, and even 'unthinkable' events. The degree to which any human operator, the associated automation and/or the autonomy of a system, or each of these acting in concert, can best deal with these 'blue swan' events is explored. Potential forms of remediation, especially featuring training, are discussed and evaluated in light of the skills needed to respond to even prohibitive degrees of situational uncertainty.

Practitioner summary: Practitioners are liable to witness a growing spectrum of unusual and, on occasion, even unprecedented events in the operation of systems for which they are responsible. They will be required to account for their response to these circumstances to a spectrum of involved constituencies to whom they answer. This work aids them in bringing clarity to such difficult and challenging processes.

Abbreviations: K: Known; Unk: Unknown; AI: Artificial Intelligence; ML: Machine Learning; CHARM: Cockpit Human-Automation Resource Management; SDT: Signal Detection Theory; ASRS: Aviation Safety Reporting System.
Affiliation(s)
- P A Hancock
- Department of Psychology, and the Institute for Simulation and Training, University of Central Florida, Orlando, FL, USA
3
Hancock PA, Kessler TT, Kaplan AD, Stowers K, Brill JC, Billings DR, Schaefer KE, Szalma JL. How and why humans trust: a meta-analysis and elaborated model. Front Psychol 2023;14:1081086. PMID: 37051611. PMCID: PMC10083508. DOI: 10.3389/fpsyg.2023.1081086. Open access.
Abstract
Trust exerts an impact on essentially all forms of social relationships. It affects individuals in deciding whether and how they will or will not interact with other people. Equally, trust influences the stance of entire nations in their mutual dealings. In consequence, understanding the factors that influence the decision to trust, or not to trust, is crucial to the full spectrum of social dealings. Here, we report the most comprehensive extant meta-analysis of experimental findings relating to such human-to-human trust. Our analysis provides a quantitative evaluation of the factors that influence interpersonal trust and the initial propensity to trust, as well as an assessment of the general trusting of others. Over 2,000 relevant studies were initially identified for potential inclusion in the meta-analysis. Of these, 338 passed all screening criteria, yielding a total of 2,185 effect sizes for analysis. The identified dependent variables were trustworthiness, propensity to trust, general trust, and the trust that supervisors and subordinates express in each other. Correlational results demonstrated that a large range of trustor, trustee, and shared contextual factors impact each of trustworthiness, the propensity to trust, and trust within working relationships. The treatment of contextual factors as one of several distinct trust dimensions originates in the present work. Experimental results established that the reputation of the trustee and the shared closeness of trustor and trustee were the factors most predictive of trustworthiness outcomes. From these collective findings, we propose an elaborated, overarching descriptive theory of trust, with special note taken of the theory's application to the growing human need to trust in non-human entities. The latter include diverse forms of automation, robots, and artificially intelligent entities, as well as specific implementations such as driverless vehicles, to name but a few. Future directions concerning the momentary dynamics of trust development, its sustenance, and its dissipation are also evaluated.
Affiliation(s)
- P. A. Hancock
- Department of Psychology and Institute for Simulation and Training, University of Central Florida, Orlando, FL, United States
- Correspondence: P. A. Hancock
- Theresa T. Kessler
- Department of Psychology, University of Central Florida, Orlando, FL, United States
- Alexandra D. Kaplan
- Department of Psychology, University of Central Florida, Orlando, FL, United States
- Kimberly Stowers
- Department of Management, University of Alabama, Tuscaloosa, AL, United States
- J. Christopher Brill
- United States Air Force Research Laboratory, Wright-Patterson Air Force Base, Dayton, OH, United States
- Kristin E. Schaefer
- DEVCOM Army Research Laboratory, Aberdeen Proving Ground, Adelphi, MD, United States
- James L. Szalma
- Department of Psychology, University of Central Florida, Orlando, FL, United States
4
de Sio FS, Mecacci G, Calvert S, Heikoop D, Hagenzieker M, van Arem B. Realising Meaningful Human Control Over Automated Driving Systems: A Multidisciplinary Approach. Minds Mach (Dordr) 2022:1-25. PMID: 35915817. PMCID: PMC9330947. DOI: 10.1007/s11023-022-09608-8.
Abstract
The paper presents a framework to realise "meaningful human control" over Automated Driving Systems. The framework is based on an original synthesis of the results of the multidisciplinary research project "Meaningful Human Control over Automated Driving Systems", led by a team of engineers, philosophers, and psychologists at Delft University of Technology from 2017 to 2021. Meaningful human control aims at protecting safety and reducing responsibility gaps. The framework rests on the core assumption that human persons and institutions, not hardware and software and their algorithms, should remain ultimately, though not necessarily directly, in control of, and thus morally responsible for, the potentially dangerous operation of driving in mixed traffic. We propose that an Automated Driving System is under meaningful human control if it behaves according to the relevant reasons of the relevant human actors (tracking), and if any potentially dangerous event can be related to a human actor (tracing). We operationalise the requirements for meaningful human control through multidisciplinary work in philosophy, behavioural psychology, and traffic engineering. The tracking condition is operationalised via a proximal scale of reasons, and the tracing condition via an evaluation cascade table. We review the implications and requirements for the behaviour and skills of human actors, in particular those related to supervisory control and driver education. We show how the evaluation cascade table can be applied in concrete engineering use cases, in combination with the definition of core components, to expose deficiencies in traceability, thereby avoiding so-called responsibility gaps. Future research directions are proposed to expand the philosophical framework and use cases, supervisory control and driver education, real-world pilots, and institutional embedding.
Affiliation(s)
- Giulio Mecacci
- Delft University of Technology, Delft, The Netherlands
- Donders Institute, Radboud University, Nijmegen, The Netherlands
- Bart van Arem
- Delft University of Technology, Delft, The Netherlands
5
Rieth M, Hagemann V. Changed competence requirements for employees as a result of increasing automation: a field-of-work analysis [Veränderte Kompetenzanforderungen an Mitarbeitende infolge zunehmender Automatisierung – Eine Arbeitsfeldbetrachtung]. Gruppe. Interaktion. Organisation. Zeitschrift für Angewandte Organisationspsychologie (GIO) 2021. DOI: 10.1007/s11612-021-00561-1.
Abstract
Based on a field-of-work analysis in air traffic control in Austria and Switzerland, this article for the journal Gruppe. Interaktion. Organisation. (GIO) provides an overview of automation-induced changes and the resulting new competence requirements for employees in high-responsibility domains. Existing task structures and work roles are changing fundamentally as a result of increasing automation, so that organisations face new challenges and new competence requirements for employees emerge. Drawing on 9 problem-centred interviews with air traffic controllers and 4 problem-centred interviews with pilots, the changes resulting from increasing automation and the resulting new competence requirements for employees in a High Reliability Organization are presented. This organisational context has so far been largely neglected in the scientific debate on new competencies resulting from automation. The results suggest that, while humans in High Reliability Organizations can be relieved and supported by technology, they cannot be replaced by it. The human role becomes more passive, shifting toward system monitoring, which creates a risk of skill loss and reduces employees' own influence. Furthermore, the demands employees face as a result of increasing automation appear to grow, which seems to stand in tension with their more passive role. The findings are discussed, and practical implications are derived for competence management and work design aimed at minimising the identified restrictive working conditions.