1
Lynch KM, Banks VA, Roberts APJ, Radcliffe S, Plant KL. Maritime autonomous surface ships: can we learn from unmanned aerial vehicle incidents using the perceptual cycle model? Ergonomics 2023; 66:772-790. [PMID: 36136049] [DOI: 10.1080/00140139.2022.2126896]
Abstract
Interest in Maritime Autonomous Surface Ships (MASS) is increasing, as it is predicted that they can bring improved safety, performance and operational capabilities. However, their introduction poses a number of enduring Human Factors challenges (e.g. difficulties monitoring automated systems) for human operators, with their 'remoteness' in shore-side control centres exacerbating these issues. This paper investigates the underlying decision-making processes of operators of uncrewed vehicles using the theoretical foundation of the Perceptual Cycle Model (PCM). A case study of an Unmanned Aerial Vehicle (UAV) accident was chosen because it bears similarities to the operation of MASS through a ground-based control centre. Two PCMs were developed: one to demonstrate what actually happened and one to demonstrate what should have happened. Comparing the models demonstrates the importance of operator situational awareness, clearly defined operator roles, training and interface design in making decisions when operating from remote control centres. Practitioner Summary: This paper investigates the underlying decision-making processes of operators of uncrewed vehicles using the Perceptual Cycle Model, applied to a UAV accident case study. The findings show the importance of operator situational awareness, clearly defined operator roles, training and interface design in making decisions when monitoring uncrewed systems from remote control centres.
Affiliation(s)
- Kirsty M Lynch
- Human Factors Engineering, Transportation Research Group, Faculty of Engineering and Physical Science, Boldrewood Innovation Campus, University of Southampton, Southampton, UK
- Victoria A Banks
- Human Factors Engineering, Transportation Research Group, Faculty of Engineering and Physical Science, Boldrewood Innovation Campus, University of Southampton, Southampton, UK
- Thales UK Limited, Berkshire, UK
- Aaron P J Roberts
- Human Factors Engineering, Transportation Research Group, Faculty of Engineering and Physical Science, Boldrewood Innovation Campus, University of Southampton, Southampton, UK
- Thales UK Limited, Berkshire, UK
- Katherine L Plant
- Human Factors Engineering, Transportation Research Group, Faculty of Engineering and Physical Science, Boldrewood Innovation Campus, University of Southampton, Southampton, UK
2
Barresi G, Pacchierotti C, Laffranchi M, De Michieli L. Beyond Digital Twins: Phygital Twins for Neuroergonomics in Human-Robot Interaction. Front Neurorobot 2022; 16:913605. [PMID: 35845760] [PMCID: PMC9277562] [DOI: 10.3389/fnbot.2022.913605]
Affiliation(s)
- Giacinto Barresi
- Rehab Technologies, Istituto Italiano di Tecnologia, Genoa, Italy
3
Holzinger A, Saranti A, Angerschmid A, Retzlaff CO, Gronauer A, Pejakovic V, Medel-Jimenez F, Krexner T, Gollob C, Stampfer K. Digital Transformation in Smart Farm and Forest Operations Needs Human-Centered AI: Challenges and Future Directions. Sensors (Basel, Switzerland) 2022; 22:3043. [PMID: 35459028] [PMCID: PMC9029836] [DOI: 10.3390/s22083043]
Abstract
The main impetus for the global efforts toward the current digital transformation in almost all areas of our daily lives is the great success of artificial intelligence (AI) and, in particular, of its workhorse, statistical machine learning (ML). The intelligent analysis, modeling, and management of agricultural and forest ecosystems, and of the use and protection of soils, already play important roles in securing our planet for future generations and will become irreplaceable in the future. Technical solutions must encompass the entire agricultural and forestry value chain. The process of digital transformation is supported by cyber-physical systems enabled by advances in ML, the availability of big data and increasing computing power. For certain tasks, algorithms today achieve performances that exceed human levels. The challenge is to use multimodal information fusion, i.e., to integrate data from different sources (sensor data, images, *omics), and to explain to an expert why a certain result was achieved. However, ML models are often sensitive to even small changes, and disturbances can have dramatic effects on their results. Therefore, the use of AI in areas that matter to human life (agriculture, forestry, climate, health, etc.) has led to an increased need for trustworthy AI with two main components: explainability and robustness. One step toward making AI more robust is to leverage expert knowledge. For example, a farmer/forester in the loop can often bring experience and conceptual understanding into the AI pipeline; no AI can do this. Consequently, human-centered AI (HCAI) combines "artificial intelligence" and "natural intelligence" to empower, amplify, and augment human performance, rather than replace people.
To achieve practical success of HCAI in agriculture and forestry, this article identifies three important frontier research areas: (1) intelligent information fusion; (2) robotics and embodied intelligence; and (3) augmentation, explanation, and verification for trusted decision support. This goal will also require an agile, human-centered design approach for three generations (G). G1: Enabling easily realizable applications through immediate deployment of existing technology. G2: Medium-term modification of existing technology. G3: Advanced adaptation and evolution beyond state-of-the-art.
Affiliation(s)
- Andreas Holzinger
- Human-Centered AI Lab, Institute of Forest Engineering, Department of Forest and Soil Sciences, University of Natural Resources and Life Sciences Vienna, 1190 Wien, Austria
- xAI Lab, Alberta Machine Intelligence Institute, University of Alberta, Edmonton, AB T5J 3B1, Canada
- Anna Saranti
- Human-Centered AI Lab, Institute of Forest Engineering, Department of Forest and Soil Sciences, University of Natural Resources and Life Sciences Vienna, 1190 Wien, Austria
- Alessa Angerschmid
- Human-Centered AI Lab, Institute of Forest Engineering, Department of Forest and Soil Sciences, University of Natural Resources and Life Sciences Vienna, 1190 Wien, Austria
- Carl Orge Retzlaff
- Human-Centered AI Lab, Institute of Forest Engineering, Department of Forest and Soil Sciences, University of Natural Resources and Life Sciences Vienna, 1190 Wien, Austria
- DAI Lab, Technical University Berlin, 10623 Berlin, Germany
- Andreas Gronauer
- Institute of Agricultural Engineering, Department of Sustainable Agricultural Systems, University of Natural Resources and Life Sciences Vienna, 1180 Wien, Austria
- Vladimir Pejakovic
- Institute of Agricultural Engineering, Department of Sustainable Agricultural Systems, University of Natural Resources and Life Sciences Vienna, 1180 Wien, Austria
- Francisco Medel-Jimenez
- Institute of Agricultural Engineering, Department of Sustainable Agricultural Systems, University of Natural Resources and Life Sciences Vienna, 1180 Wien, Austria
- Theresa Krexner
- Institute of Agricultural Engineering, Department of Sustainable Agricultural Systems, University of Natural Resources and Life Sciences Vienna, 1180 Wien, Austria
- Christoph Gollob
- Institute of Forest Growth, Department of Forest and Soil Sciences, University of Natural Resources and Life Sciences Vienna, 1180 Wien, Austria
- Karl Stampfer
- Institute of Forest Engineering, Department of Forest and Soil Sciences, University of Natural Resources and Life Sciences Vienna, 1180 Wien, Austria
4
Online Multimodal Inference of Mental Workload for Cognitive Human Machine Systems. Computers 2021. [DOI: 10.3390/computers10060081]
Abstract
With increasingly higher levels of automation in aerospace decision support systems, it is imperative that the human operator maintains a high level of situational awareness in different operational conditions and a central role in the decision-making process. While current aerospace systems and interfaces are limited in their adaptability, a Cognitive Human Machine System (CHMS) aims to perform dynamic, real-time system adaptation by estimating the cognitive states of the human operator. Nevertheless, to reliably drive system adaptation of current and emerging aerospace systems, cognitive states, particularly Mental Workload (MWL), must be estimated accurately and repeatably in real time. In this study, two sessions were performed during a Multi-Attribute Task Battery (MATB) scenario: one for offline calibration and validation, and one for online validation of eleven multimodal inference models of MWL. The multimodal inference models used an Adaptive Neuro Fuzzy Inference System (ANFIS) in different configurations to fuse data from an Electroencephalogram (EEG) model's output, four eye activity features and a control input feature. In the online validation, five of the ANFIS models (containing different combinations of eye activity and control input features) performed well; the best performing model (containing all four eye activity features and the control input feature) achieved an average Mean Absolute Error (MAE) of 0.67 ± 0.18 and a Correlation Coefficient (CC) of 0.71 ± 0.15. The remaining six ANFIS models included data from the EEG model's output, which had an offset discrepancy that carried over into the online multimodal fusion. Nonetheless, the efficacy of these models could be seen in their pairwise correlation with the task level, where one model demonstrated a CC of 0.77 ± 0.06, the highest among all the ANFIS models tested. Hence, this study demonstrates that online multimodal fusion of features extracted from EEG signals, eye activity and control inputs can produce an accurate and repeatable inference of MWL.
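The MAE and CC metrics reported in this abstract can be computed directly from paired sequences of inferred and reference workload scores. A minimal sketch in Python; the workload values below are invented placeholders for illustration, not data from the study:

```python
import math

def mean_absolute_error(predicted, reference):
    # Average absolute difference between inferred and reference workload.
    return sum(abs(p - r) for p, r in zip(predicted, reference)) / len(predicted)

def correlation_coefficient(predicted, reference):
    # Pearson correlation between inferred and reference workload.
    n = len(predicted)
    mp = sum(predicted) / n
    mr = sum(reference) / n
    cov = sum((p - mp) * (r - mr) for p, r in zip(predicted, reference))
    sp = math.sqrt(sum((p - mp) ** 2 for p in predicted))
    sr = math.sqrt(sum((r - mr) ** 2 for r in reference))
    return cov / (sp * sr)

# Hypothetical example: inferred MWL vs. a task-level reference on a 0-5 scale.
inferred = [1.2, 2.1, 3.0, 3.9, 4.8]
reference = [1.0, 2.0, 3.0, 4.0, 5.0]
print(round(mean_absolute_error(inferred, reference), 3))
print(round(correlation_coefficient(inferred, reference), 3))
```

Note that MAE and CC capture different failure modes: a constant offset (like the EEG discrepancy described above) inflates MAE while leaving the correlation with task level largely intact, which is why both metrics are reported.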
5
Abstract
There is a need for semi-autonomous systems capable of performing both automated tasks and supervised maneuvers. When dealing with multiple robots, or with robots of high complexity (such as humanoids), we face the issue of effectively coordinating operators across robots. We build on our previous work to present a methodology for designing trajectories and policies for robots such that a few operators can supervise multiple robots. Specifically, we: (1) analyze the complexity of the problem; (2) design a procedure for generating policies that allow operators to oversee many robots; (3) present a method for designing policies and robot trajectories that allow operators to oversee multiple robots; and (4) demonstrate our methodologies in both simulation and hardware experiments.
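One way to see the core coordination constraint in this abstract is that the number of operators needed at any instant equals the number of robots simultaneously requiring supervision. A toy sketch under that assumption (the window data and function name are invented for illustration, not taken from the paper):

```python
def operators_needed(windows):
    # Minimum number of simultaneous operators, given (start, end)
    # supervision windows during which a robot cannot run autonomously.
    events = []
    for start, end in windows:
        events.append((start, 1))   # an operator takes over
        events.append((end, -1))    # the robot returns to autonomy
    # Sort by time; at equal times, process releases before takeovers
    # so a back-to-back handoff does not count as an overlap.
    events.sort(key=lambda e: (e[0], e[1]))
    concurrent = peak = 0
    for _, delta in events:
        concurrent += delta
        peak = max(peak, concurrent)
    return peak

# Three robots whose supervised maneuvers partially overlap in time:
windows = [(0, 4), (2, 6), (5, 9)]
print(operators_needed(windows))  # 2
```

This peak-overlap count is only a lower bound on staffing; the paper's methodology goes further by shaping the trajectories themselves so that supervision windows are staggered and the peak stays low.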
6
Müezzinoğlu T, Karaköse M. An Intelligent Human-Unmanned Aerial Vehicle Interaction Approach in Real Time Based on Machine Learning Using Wearable Gloves. Sensors (Basel, Switzerland) 2021; 21:1766. [PMID: 33806388] [PMCID: PMC7961434] [DOI: 10.3390/s21051766]
Abstract
The interactions between humans and unmanned aerial vehicles (UAVs), whose applications are increasingly civilian rather than military, are a popular area of future research. Human–UAV interaction is a challenging problem because UAVs move in a three-dimensional space. In this paper, we present a real-time, machine-learning-based approach to intelligent human–UAV interaction using wearable gloves. The proposed approach offers scientific contributions such as a multi-mode command structure, machine-learning-based recognition, task scheduling algorithms, real-time usage, robust and effective use, and high accuracy rates. For this purpose, two wearable smart gloves working in real time were designed. The signal data obtained from the gloves were processed with machine-learning-based methods, and the classified multi-mode commands were included in the human–UAV interaction process via the interface according to the task scheduling algorithm, facilitating sequential and fast operation. The performance of the proposed approach was verified on a data set of 25 different hand gestures collected from 20 different people. In a test of the proposed approach on 49,000 data points, processing times of a few milliseconds were achieved with approximately 98% accuracy.
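The recognition step this abstract describes, mapping glove sensor vectors to gesture commands, can be sketched with a simple nearest-centroid classifier. Everything here is a hedged illustration: the gesture names, feature values, and classifier choice are invented for the example and are not the paper's actual method or data:

```python
def train_centroids(samples):
    # Average the feature vectors per gesture label (nearest-centroid training).
    sums, counts = {}, {}
    for label, features in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    # Assign the gesture whose centroid is nearest in squared Euclidean distance.
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=dist)

# Hypothetical normalized flex-sensor readings from a glove, one value per sensor:
training = [
    ("ascend",  [0.9, 0.1, 0.1]), ("ascend",  [0.8, 0.2, 0.1]),
    ("descend", [0.1, 0.9, 0.2]), ("descend", [0.2, 0.8, 0.1]),
    ("hover",   [0.1, 0.1, 0.9]), ("hover",   [0.2, 0.2, 0.8]),
]
model = train_centroids(training)
print(classify(model, [0.85, 0.15, 0.1]))  # ascend
```

In the pipeline the abstract outlines, such a classifier would sit between the glove's signal acquisition and the task scheduling algorithm, which sequences the recognized multi-mode commands before they reach the UAV.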