1
Naughton M, Salmon PM, Compton HR, McLean S. Challenges and opportunities of artificial intelligence implementation within sports science and sports medicine teams. Front Sports Act Living 2024; 6:1332427. PMID: 38832311; PMCID: PMC11144926; DOI: 10.3389/fspor.2024.1332427. Open access.
Abstract
The rapid progress in the development of automation and artificial intelligence (AI) technologies, such as ChatGPT, represents a step-wise change in humans' interactions with technology as part of a broader, complex sociotechnical system. Based on historical parallels to the present moment, such changes are likely to bring structural shifts to the nature of work, in which near-term and future technologies will occupy key roles as workers or assistants in sports science and sports medicine multidisciplinary teams (MDTs). This envisioned future may bring enormous benefits, as well as a raft of potential challenges. These challenges include the potential to remove many human roles and allocate them to semi- or fully autonomous AI. Removing such roles and tasks from humans will make many current jobs and careers untenable, leaving a set of difficult and unrewarding tasks for the humans who remain. Paradoxically, replacing humans with technology increases system complexity and makes systems more prone to failure. The automation and AI boom also brings substantial opportunities. Among them are automated sentiment analysis and Digital Twin technologies, which may reveal novel insights into athlete health and wellbeing and team tactical patterns, respectively. However, without due consideration of the interactions between humans and technology in the broader system of sport, adverse impacts are likely to be felt. Human-AI teamwork may require new ways of thinking.
Affiliation(s)
- Mitchell Naughton
- School of Biomedical Science and Pharmacy, University of Newcastle, Callaghan, NSW, Australia
- Applied Sports Science and Exercise Testing Laboratory, University of Newcastle, Ourimbah, NSW, Australia
- Paul M. Salmon
- Centre for Human Factors and Sociotechnical Systems, University of the Sunshine Coast, Sippy Downs, QLD, Australia
- Heidi R. Compton
- School of Biomedical Science and Pharmacy, University of Newcastle, Callaghan, NSW, Australia
- Applied Sports Science and Exercise Testing Laboratory, University of Newcastle, Ourimbah, NSW, Australia
- Scott McLean
- Centre for Human Factors and Sociotechnical Systems, University of the Sunshine Coast, Sippy Downs, QLD, Australia
2
Schraagen JM. Responsible use of AI in military systems: prospects and challenges. Ergonomics 2023; 66:1719-1729. PMID: 37905780; DOI: 10.1080/00140139.2023.2278394.
Abstract
Artificial Intelligence (AI) holds great potential for the military domain but is also seen as prone to data bias and lacking transparency and explainability. In order to advance the trustworthiness of AI-enabled systems, a dynamic approach to the development, deployment and use of AI systems is required. This approach, when incorporating ethical principles such as lawfulness, traceability, reliability and bias mitigation, is called 'Responsible AI'. This article describes the challenges of using AI responsibly in the military domain from a human factors and ergonomics perspective. Many of the ironies of automation originally described by Bainbridge still apply in the field of AI, but there are also some unique challenges and requirements that need to be considered, such as a larger emphasis on ethical risk analyses and validation and verification up-front, as well as moral situation awareness during deployment and use of AI in military systems.
3
Contucci P, Kertész J, Osabutey G. Human-AI ecosystem with abrupt changes as a function of the composition. PLoS One 2022; 17:e0267310. PMID: 35622778; PMCID: PMC9140255; DOI: 10.1371/journal.pone.0267310. Open access.
Abstract
The progressive advent of artificial intelligence machines may represent both an opportunity and a threat. To anticipate what is coming, we propose a model that simulates a Human-AI ecosystem. In particular, we consider systems in which agents exhibit biases, peer-to-peer interactions, and three-body interactions; the latter are crucial and describe two humans interacting with an artificial agent, or two artificial intelligence agents interacting with a human. We focus our analysis on how the relative fraction of artificial intelligence agents affects the ecosystem. We find evidence that, for suitable values of the interaction parameters, arbitrarily small changes in that fraction may trigger dramatic changes in the system, which can be in either of two polarised states or in an undecided state.
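The abstract describes a mixed human/AI population with pairwise and three-body interactions whose collective state can shift abruptly with the AI fraction. The authors' exact model is not reproduced here; the following is a minimal, hypothetical sketch under assumed mechanics: binary opinions evolving under Glauber dynamics, where the local field combines a type-dependent bias, a mean-field pairwise term (proportional to the mean opinion m), and a mean-field three-body term (proportional to m squared). All function names, parameter names, and values are illustrative assumptions, not the authors' specification.

```python
import math
import random

def simulate(n_agents=500, frac_ai=0.3, J=1.0, K=0.8,
             bias_human=0.1, bias_ai=-0.1, beta=2.0,
             steps=20000, seed=0):
    """Glauber-dynamics sketch of a binary-opinion Human-AI ecosystem.

    Each agent holds an opinion s in {-1, +1}. The field an agent feels
    combines its type-dependent bias, a two-body term J*m, and a
    three-body term K*m**2 (the mean-field trace of triplet couplings).
    Returns the final mean opinion (magnetisation) in [-1, 1].
    """
    rng = random.Random(seed)
    n_ai = int(round(n_agents * frac_ai))
    types = ['ai'] * n_ai + ['human'] * (n_agents - n_ai)
    spins = [rng.choice((-1, 1)) for _ in range(n_agents)]

    for _ in range(steps):
        i = rng.randrange(n_agents)
        m = sum(spins) / n_agents                # current mean opinion
        bias = bias_ai if types[i] == 'ai' else bias_human
        field = bias + J * m + K * m * m         # local mean field
        # heat-bath (Glauber) flip probability toward +1
        p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * field))
        spins[i] = 1 if rng.random() < p_up else -1

    return sum(spins) / n_agents
```

Sweeping `frac_ai` from 0 to 1 and plotting the returned magnetisation is one way to look for the abrupt, composition-driven transitions the abstract reports; whether and where a jump appears depends entirely on the assumed coupling values.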
Affiliation(s)
- János Kertész
- Department of Network and Data Science, Central European University, Vienna, Austria
- Godwin Osabutey
- Department of Mathematics, University of Bologna, Bologna, Italy
4
Thompson J, Read GJM, Wijnands JS, Salmon PM. The perils of perfect performance; considering the effects of introducing autonomous vehicles on rates of car vs cyclist conflict. Ergonomics 2020; 63:981-996. PMID: 32138601; DOI: 10.1080/00140139.2020.1739326.
Abstract
How humans will adapt and respond to the introduction of autonomous vehicles (AVs) is uncertain. This study used an agent-based model to explore how AVs, human-operated vehicles, and cyclists might interact following the introduction of flawlessly performing AVs. In experiment 1, despite no conflicts occurring between cyclists and AVs, modelled conflicts between human-operated cars and cyclists increased with the introduction of AVs because cyclists adjusted their expectations of the behaviour and capability of both human-operated and autonomous cars. Similarly, when human-operated cars were replaced with AVs over time in experiment 2, cyclist conflict rates did not decline linearly with the replacement rate but decreased more slowly in the early stages of replacement, before 50% substitution. It is concluded that, although flawlessly performing AVs might reduce total conflicts, introducing AVs into a transport system in which humans adjust to the behaviour and risk presented by AVs could create new sources of error that offset some of AVs' assumed safety benefits. Practitioner summary: Ergonomics is an applied science that studies interactions between humans and other elements of a system, including non-human agents. Agent-based modelling (ABM) provides an approach for exploring dynamic and emergent interactions between agents. In this article, we demonstrate ABM through an analysis of how cyclists and pedestrians might interact with autonomous vehicles (AVs) in future road transport systems. Abbreviations: ABM: agent-based model; AV: autonomous vehicle; ODD: overview, design concepts and details; RW: Rescorla-Wagner.
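The abstract reports an agent-based model in which cyclists' learned expectations (the RW abbreviation suggests a Rescorla-Wagner component) offset part of the safety benefit of flawless AVs. The paper's ODD-specified model is not available here; the following is a toy sketch of the general mechanism only, in which a cyclist's expectation that vehicles will yield is updated by a Rescorla-Wagner rule and drives risk-taking around the remaining human drivers. Every name and parameter value is an assumption for illustration.

```python
import random

def simulate_conflicts(av_share=0.5, p_human_no_yield=0.10,
                       alpha=0.1, trials=5000, seed=0):
    """Toy sketch: per-encounter cyclist conflict rate in a traffic mix.

    AVs always yield (perfect performance); human drivers fail to yield
    with probability p_human_no_yield. The cyclist's learned expectation
    of being yielded to rises as AVs become more common, which raises
    risk-taking and thus conflicts with the non-yielding human drivers.
    """
    rng = random.Random(seed)
    expectation = 0.0          # learned association: "vehicles yield to me"
    conflicts = 0
    for _ in range(trials):
        took_risk = rng.random() < expectation
        is_av = rng.random() < av_share
        yielded = True if is_av else rng.random() > p_human_no_yield
        if took_risk and not yielded:      # only human drivers fail to yield
            conflicts += 1
        # Rescorla-Wagner update toward the observed outcome (1 = yielded)
        expectation += alpha * ((1.0 if yielded else 0.0) - expectation)
    return conflicts / trials
```

Comparing the returned rate across values of `av_share` shows conflicts falling as AVs replace human drivers while the rising learned expectation partially offsets the benefit, which is qualitatively in the spirit of the abstract's non-linear decline; the sketch makes no claim to reproduce the paper's quantitative results.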
Affiliation(s)
- Jason Thompson
- Transport, Health and Urban Design Research Hub, University of Melbourne, Melbourne, Australia
- Centre for Human Factors and Sociotechnical Systems, University of the Sunshine Coast, Sunshine Coast, Australia
- Gemma J M Read
- Centre for Human Factors and Sociotechnical Systems, University of the Sunshine Coast, Sunshine Coast, Australia
- Jasper S Wijnands
- Transport, Health and Urban Design Research Hub, University of Melbourne, Melbourne, Australia
- Paul M Salmon
- Centre for Human Factors and Sociotechnical Systems, University of the Sunshine Coast, Sunshine Coast, Australia
5
Pöllänen E, Read GJM, Lane BR, Thompson J, Salmon PM. Who is to blame for crashes involving autonomous vehicles? Exploring blame attribution across the road transport system. Ergonomics 2020; 63:525-537. PMID: 32180531; DOI: 10.1080/00140139.2020.1744064.
Abstract
The introduction of fully autonomous vehicles is approaching. This warrants a reconsideration of road crash liability, given that drivers will have diminished control. This study, underpinned by attribution theory, investigated blame attribution to different road transport system actors following crashes involving manually driven, semi-autonomous, and fully autonomous vehicles. It also examined whether outcome severity alters blame ratings. A total of 396 participants attributed blame to five actors (vehicle driver/user, pedestrian, vehicle, manufacturer, government) in vehicle-pedestrian crash scenarios. Different and unique patterns of blame were found across actors for the three vehicle types. In crashes involving fully autonomous vehicles, vehicle users received little blame, while vehicle manufacturers and the government were blamed heavily. The level of blame attributed did not differ between high- and low-severity crashes for any vehicle type; however, the government received more blame in high-severity crashes. The findings have implications for policy and legislation surrounding crash liability. Practitioner summary: Public views on blame and liability in transport accidents are a vital consideration for the introduction of new technologies such as autonomous vehicles. This study demonstrates how a systems ergonomics framework can help identify the implications of changing public opinion on blame for future road transport systems. Abbreviations: ANOVA: analysis of variance; DAT: defensive attribution theory; IV: independent variable.
Affiliation(s)
- Elin Pöllänen
- School of Social Sciences, University of the Sunshine Coast, Maroochydore, Australia
- Gemma J M Read
- Centre for Human Factors and Sociotechnical Systems, University of the Sunshine Coast, Maroochydore, Australia
- Ben R Lane
- Centre for Human Factors and Sociotechnical Systems, University of the Sunshine Coast, Maroochydore, Australia
- Jason Thompson
- Faculty of Architecture, Building and Planning, Melbourne School of Design, Transport, Health and Urban Design (THUD) Research Hub, University of Melbourne, Melbourne, Australia
- Paul M Salmon
- Centre for Human Factors and Sociotechnical Systems, University of the Sunshine Coast, Maroochydore, Australia
6
Long J, Edwin M, Albolino S, Toccafondi G. Ergonomics in the Future World: Perspectives from Australia and New Zealand. Work 2019; 64:859-868. DOI: 10.3233/wor-193026. Open access.
Affiliation(s)
- Jennifer Long
- Jennifer Long Visual Ergonomics, Katoomba, NSW, Australia
- School of Optometry and Vision Science, UNSW Sydney, NSW, Australia
- Human Factors and Ergonomics Society of Australia (HFESA), Baulkham Hills, NSW, Australia
- Marion Edwin
- Optimise Limited, Motueka, New Zealand
- Human Factors and Ergonomics Society of New Zealand (HFESNZ), Richmond, Nelson, New Zealand
- Sara Albolino
- Center for Risk Management and Patient Safety - Tuscany Region, Florence, Italy
- International Ergonomics Association (IEA), Geneva, Switzerland
- Giulio Toccafondi
- Center for Risk Management and Patient Safety - Tuscany Region, Florence, Italy
7
Hancock PA. Some promises in the pitfalls of automated and autonomous vehicles: A response to commentators. Ergonomics 2019; 62:514-520. PMID: 30794098; DOI: 10.1080/00140139.2019.1586103.
Abstract
My interlocutors have offered numerous and important responses to my target article. Here, I endeavour to respond to the issues raised. Despite some contention over specifics, the overall tenor of these commentaries is one of general agreement. One particular challenge, as noted, is how to disseminate our discipline's knowledge beyond the pages of our journals to effect the impact and change in the world to which we aspire. This is a challenge that transcends efforts solely associated with automated vehicles, but it may be in this specific realm that our science can offer its most widespread impact in the immediate future.
Affiliation(s)
- P A Hancock
- Department of Psychology, Institute for Simulation and Training, University of Central Florida, Orlando, FL, USA