1
Hatherley J. Are clinicians ethically obligated to disclose their use of medical machine learning systems to patients? Journal of Medical Ethics 2024:jme-2024-109905. PMID: 39117396. DOI: 10.1136/jme-2024-109905.
Abstract
It is commonly accepted that clinicians are ethically obligated to disclose their use of medical machine learning systems to patients, and that failure to do so would amount to a moral fault for which clinicians ought to be held accountable. Call this 'the disclosure thesis.' Four main arguments have been, or could be, given to support the disclosure thesis in the ethics literature: the risk-based argument, the rights-based argument, the materiality argument and the autonomy argument. In this article, I argue that each of these four arguments is unconvincing and that the disclosure thesis therefore ought to be rejected. I suggest that mandating disclosure may even risk harming patients by providing stakeholders with a way to avoid accountability for harm that results from improper applications or uses of these systems.
Affiliation(s)
- Joshua Hatherley
- Department of Philosophy and History of Ideas, Aarhus University, Aarhus, Denmark
2
Jessup SA, Alarcon GM, Willis SM, Lee MA. A closer look at how experience, task domain, and self-confidence influence reliance towards algorithms. Applied Ergonomics 2024; 121:104363. PMID: 39096745. DOI: 10.1016/j.apergo.2024.104363.
Abstract
Prior research has demonstrated that experience with a forecasting algorithm decreases reliance behaviors (i.e., the action of relying on the algorithm). However, the influence of model experience on reliance intentions (i.e., an intention or willingness to rely on the algorithm) has not been explored. Additionally, other factors such as self-confidence and domain knowledge are posited to influence algorithm reliance. The objective of this research was to examine how experience with a statistical model, task domain (used car sales, college grade point average (GPA), GitHub pull requests), and self-confidence influence reliance intentions, reliance behaviors, and perceived accuracy of one's own estimates and the model's estimates. Participants (N = 347) were recruited online and completed a forecasting task. Results indicate a statistically significant effect of self-confidence and task domain on reliance intentions, reliance behaviors, and perceived accuracy. However, unlike previous findings, model experience did not significantly influence reliance behavior, nor did it lead to significant changes in reliance intentions or in the perceived accuracy of oneself or the model. Our data suggest that factors such as task domain and self-confidence influence algorithm use more than model experience does. Individual differences and situational factors should be considered important influences on forecasters' decisions to rely on a model's predictions or to instead use their own estimates, a choice that can lead to sub-optimal performance.
Affiliation(s)
- Sarah A Jessup
- Consortium of Universities, Wright-Patterson AFB, OH, United States.
- Gene M Alarcon
- Air Force Research Laboratory, Wright-Patterson AFB, OH, United States
- Sasha M Willis
- General Dynamics Information Technology, Dayton, OH, United States
- Michael A Lee
- General Dynamics Information Technology, Dayton, OH, United States
3
Trail M. Child welfare predictive risk models and legal decision making. Child Abuse & Neglect 2024; 154:106943. PMID: 39018749. DOI: 10.1016/j.chiabu.2024.106943.
Abstract
BACKGROUND Child welfare agencies around the world have experimented with algorithmic predictive modeling to assist decision making regarding foster child risk, removal, and placement. OBJECTIVE Thus far, predictive risk models have been confined to the employees of child welfare agencies at the early removal stages; none have been used by attorneys in legal arguments or by judges in making child welfare legal decisions. This study examines the effects of a predictive model on legal decision making within a child welfare context. PARTICIPANTS AND SETTING Lawyers, judges, and law students with experience in child welfare or juvenile law were recruited to take an online randomized vignette survey. METHODS The survey consisted of two vignettes describing complex foster child removal and placement legal decisions in which participants were exposed to one of three randomized predictive risk model scores. They were then asked follow-up questions regarding their decisions to see if the risk models changed their answers. RESULTS Using structural equation modeling, high predictive model risk scores showed a consistent ability to change legal decisions about removal and placement across both vignettes. Medium and low scores, though less consistent, also significantly influenced legal decision making. CONCLUSIONS Child welfare legal decision making can be affected by the use of a predictive risk model, which has implications for the development and use of these models as well as for the legal education of attorneys and judges in the field.
4
Zeiser J. Owning Decisions: AI Decision-Support and the Attributability-Gap. Science and Engineering Ethics 2024; 30:27. PMID: 38888795. PMCID: PMC11189344. DOI: 10.1007/s11948-024-00485-1.
Abstract
Artificial intelligence (AI) has long been recognised as a challenge to responsibility. Much of this discourse has been framed around robots, such as autonomous weapons or self-driving cars, where we arguably lack control over a machine's behaviour and therefore struggle to identify an agent that can be held accountable. However, most of today's AI is based on machine-learning technology that does not act on its own, but rather serves as a decision-support tool, automatically analysing data to help human agents make better decisions. I argue that decision-support tools pose a challenge to responsibility that goes beyond the familiar problem of finding someone to blame or punish for the behaviour of agent-like systems. Namely, they pose a problem for what we might call "decision ownership": they make it difficult to identify human agents to whom we can attribute value-judgements that are reflected in decisions. Drawing on recent philosophical literature on responsibility and its various facets, I argue that this is primarily a problem of attributability rather than of accountability. This particular responsibility problem comes in different forms and degrees, most obviously when an AI provides direct recommendations for actions, but also, less obviously, when it provides mere descriptive information on the basis of which a decision is made.
Affiliation(s)
- Jannik Zeiser
- Leibniz Universität Hannover, Institut für Philosophie, Im Moore 21, 30167, Hannover, Germany.
5
Proksch S, Schühle J, Streeb E, Weymann F, Luther T, Kimmerle J. The impact of text topic and assumed human vs. AI authorship on competence and quality assessment. Front Artif Intell 2024; 7:1412710. PMID: 38881953. PMCID: PMC11176609. DOI: 10.3389/frai.2024.1412710.
Abstract
Background While Large Language Models (LLMs) are viewed positively with respect to technological progress and their abilities, people are rather opposed to machines making moral decisions. But the circumstances under which algorithm aversion or algorithm appreciation are more likely to occur with respect to LLMs have not yet been sufficiently investigated. The aim of this study was therefore to investigate how texts with moral or technological topics, allegedly written either by a human author or by ChatGPT, are perceived. Methods In a randomized controlled experiment, n = 164 participants read six texts, three with a moral and three with a technological topic (predictor: text topic). The alleged author of each text was randomly labeled either "ChatGPT" or "human author" (predictor: authorship). We captured three dependent variables: assessment of author competence, assessment of content quality, and participants' intention to submit the text in a hypothetical university course (sharing intention). We hypothesized interaction effects; that is, we expected ChatGPT to score lower than alleged human authors for moral topics and higher for technological topics. Results We found only a small interaction effect for perceived author competence, p = 0.004, d = 0.40, but none for the other dependent variables. However, ChatGPT was consistently devalued compared to alleged human authors across all dependent variables: there were main effects of authorship for assessment of author competence, p < 0.001, d = 0.95; for assessment of content quality, p < 0.001, d = 0.39; and for sharing intention, p < 0.001, d = 0.57. There was also a small main effect of text topic on the assessment of text quality, p = 0.002, d = 0.35. Conclusion These results are more in line with previous findings on algorithm aversion than with algorithm appreciation. We discuss the implications of these findings for the acceptance of the use of LLMs for text composition.
Affiliation(s)
- Sebastian Proksch
- Department of Psychology, Eberhard Karls University Tuebingen, Tuebingen, Germany
- Julia Schühle
- Department of Psychology, Eberhard Karls University Tuebingen, Tuebingen, Germany
- Elisabeth Streeb
- Department of Psychology, Eberhard Karls University Tuebingen, Tuebingen, Germany
- Finn Weymann
- Department of Psychology, Eberhard Karls University Tuebingen, Tuebingen, Germany
- Teresa Luther
- Knowledge Construction Lab, Leibniz-Institut fuer Wissensmedien, Tuebingen, Germany
- Joachim Kimmerle
- Department of Psychology, Eberhard Karls University Tuebingen, Tuebingen, Germany
- Knowledge Construction Lab, Leibniz-Institut fuer Wissensmedien, Tuebingen, Germany
6
Hasan E, Duhaime E, Trueblood JS. Boosting wisdom of the crowd for medical image annotation using training performance and task features. Cogn Res Princ Implic 2024; 9:31. PMID: 38763994. PMCID: PMC11102897. DOI: 10.1186/s41235-024-00558-6.
Abstract
A crucial bottleneck in medical artificial intelligence (AI) is the availability of high-quality labeled medical datasets. In this paper, we test a large variety of wisdom of the crowd algorithms to label medical images that were initially classified by individuals recruited through an app-based platform. Individuals classified skin lesions from the International Skin Lesion Challenge 2018 into 7 different categories. There was a large dispersion in the geographical location, experience, training, and performance of the recruited individuals. We tested several wisdom of the crowd algorithms of varying complexity, from a simple unweighted average to more complex Bayesian models that account for individual patterns of errors. Using a switchboard analysis, we observe that the best-performing algorithms rely on selecting top performers, weighting decisions by training accuracy, and taking the task environment into account. These algorithms far exceed expert performance. We conclude by discussing the implications of these approaches for the development of medical AI.
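As an illustration of the kind of aggregation rule the abstract describes, the sketch below combines two of the named ingredients, selecting top performers and weighting votes by training accuracy, for a single image; the function, names, and data are hypothetical and not taken from the paper.

```python
from collections import defaultdict

def aggregate_labels(votes, training_accuracy, top_k=10):
    """Label one image by an accuracy-weighted plurality vote.

    votes: dict mapping rater_id -> chosen category for this image.
    training_accuracy: dict mapping rater_id -> accuracy on training items.
    Raters are first screened by training performance (top_k kept),
    then each surviving vote is weighted by that same accuracy.
    """
    # Keep only the k raters with the best training accuracy.
    ranked = sorted(votes, key=lambda r: training_accuracy[r], reverse=True)
    top_raters = ranked[:top_k]

    # Accumulate accuracy-weighted votes per category.
    weighted = defaultdict(float)
    for rater in top_raters:
        weighted[votes[rater]] += training_accuracy[rater]

    # Return the category with the highest total weight.
    return max(weighted, key=weighted.get)

# Example: three raters classify one skin-lesion image.
votes = {"r1": "melanoma", "r2": "nevus", "r3": "melanoma"}
accuracy = {"r1": 0.82, "r2": 0.91, "r3": 0.66}
print(aggregate_labels(votes, accuracy, top_k=3))  # -> melanoma
```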
Affiliation(s)
- Eeshan Hasan
- Department of Psychological and Brain Sciences, Indiana University, 1101 E. 10th St., Bloomington, IN, 47405-7007, USA.
- Cognitive Science Program, Indiana University, Bloomington, USA.
- Jennifer S Trueblood
- Department of Psychological and Brain Sciences, Indiana University, 1101 E. 10th St., Bloomington, IN, 47405-7007, USA.
- Cognitive Science Program, Indiana University, Bloomington, USA.
7
Li Y, Wu B, Huang Y, Luan S. Developing trustworthy artificial intelligence: insights from research on interpersonal, human-automation, and human-AI trust. Front Psychol 2024; 15:1382693. PMID: 38694439. PMCID: PMC11061529. DOI: 10.3389/fpsyg.2024.1382693.
Abstract
The rapid advancement of artificial intelligence (AI) has impacted society in many aspects. Alongside this progress, concerns such as privacy violation, discriminatory bias, and safety risks have surfaced, highlighting the need for the development of ethical, responsible, and socially beneficial AI. In response, the concept of trustworthy AI has gained prominence, and several guidelines for developing trustworthy AI have been proposed. Against this background, we demonstrate the significance of psychological research in identifying factors that contribute to the formation of trust in AI. Specifically, we review research findings on interpersonal, human-automation, and human-AI trust from the perspective of a three-dimension framework (i.e., the trustor, the trustee, and their interactive context). The framework synthesizes common factors related to trust formation and maintenance across different trust types. These factors point to the foundational requirements for building trustworthy AI and provide pivotal guidance for its development, which also involves communication, education, and training for users. We conclude by discussing how insights from trust research can help enhance AI's trustworthiness and foster its adoption and application.
Affiliation(s)
- Yugang Li
- CAS Key Laboratory for Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of the Chinese Academy of Sciences, Beijing, China
- Baizhou Wu
- CAS Key Laboratory for Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of the Chinese Academy of Sciences, Beijing, China
- Yuqi Huang
- CAS Key Laboratory for Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of the Chinese Academy of Sciences, Beijing, China
- Shenghua Luan
- CAS Key Laboratory for Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of the Chinese Academy of Sciences, Beijing, China
8
Yin Y, Jia N, Wakslak CJ. AI can help people feel heard, but an AI label diminishes this impact. Proc Natl Acad Sci U S A 2024; 121:e2319112121. PMID: 38551835. PMCID: PMC10998586. DOI: 10.1073/pnas.2319112121.
Abstract
People want to "feel heard": to perceive that they are understood, validated, and valued. Can AI serve the deeply human function of making others feel heard? Our research addresses two fundamental issues: can AI generate responses that make human recipients feel heard, and how do human recipients react when they believe a response comes from AI? We conducted an experiment and a follow-up study to disentangle the effects of the actual source of a message from those of its presumed source. We found that AI-generated messages made recipients feel more heard than human-generated messages and that AI was better at detecting emotions. However, recipients felt less heard when they realized that a message came from AI (vs. a human). Finally, in a follow-up study in which the responses were rated by third-party raters, we found that, compared with humans, AI demonstrated superior discipline in offering emotional support, a crucial element in making individuals feel heard, while avoiding excessive practical suggestions, which may be less effective in achieving this goal. Our research underscores the potential and limitations of AI in meeting human psychological needs. These findings suggest that while AI demonstrates enhanced capabilities to provide emotional support, the devaluation of AI responses poses a key challenge to effectively leveraging those capabilities.
Affiliation(s)
- Yidan Yin
- Lloyd Greif Center for Entrepreneurial Studies, Marshall School of Business, University of Southern California, Los Angeles, CA 90089
- Nan Jia
- Department of Management and Organization, Marshall School of Business, University of Southern California, Los Angeles, CA 90089
- Cheryl J. Wakslak
- Department of Management and Organization, Marshall School of Business, University of Southern California, Los Angeles, CA 90089
9
Campion JR, O'Connor DB, Lahiff C. Human-artificial intelligence interaction in gastrointestinal endoscopy. World J Gastrointest Endosc 2024; 16:126-135. PMID: 38577646. PMCID: PMC10989254. DOI: 10.4253/wjge.v16.i3.126.
Abstract
The number and variety of applications of artificial intelligence (AI) in gastrointestinal (GI) endoscopy are growing rapidly. New technologies based on machine learning (ML) and convolutional neural networks (CNNs) are at various stages of development and deployment to assist patients and endoscopists in preparing for endoscopic procedures, in the detection, diagnosis, and classification of pathology during endoscopy, and in the confirmation of key performance indicators. Platforms based on ML and CNNs require regulatory approval as medical devices. Interactions between humans and the technologies we use are complex and are influenced by design, behavioural, and psychological elements. Because AI differs substantially from prior technologies, important differences may be expected in how we interact with advice from AI technologies. Human–AI interaction (HAII) may be optimised by developing AI algorithms to minimise false positives and by designing platform interfaces to maximise usability. Human factors influencing HAII may include automation bias, alarm fatigue, algorithm aversion, learning effects, and deskilling. Each of these areas merits further study in the specific setting of AI applications in GI endoscopy, and professional societies should engage to ensure that sufficient emphasis is placed on human-centred design in the development of new AI technologies.
Affiliation(s)
- John R Campion
- Department of Gastroenterology, Mater Misericordiae University Hospital, Dublin D07 AX57, Ireland
- School of Medicine, University College Dublin, Dublin D04 C7X2, Ireland
- Donal B O'Connor
- Department of Surgery, Trinity College Dublin, Dublin D02 R590, Ireland
- Conor Lahiff
- Department of Gastroenterology, Mater Misericordiae University Hospital, Dublin D07 AX57, Ireland
- School of Medicine, University College Dublin, Dublin D04 C7X2, Ireland
10
Meng J. AI emerges as the frontier in behavioral science. Proc Natl Acad Sci U S A 2024; 121:e2401336121. PMID: 38408258. DOI: 10.1073/pnas.2401336121.
Affiliation(s)
- Juanjuan Meng
- Guanghua School of Management, Peking University, Beijing 100871, China
11
Rolison JJ, Gooding PLT, Russo R, Buchanan KE. Who should decide how limited healthcare resources are prioritized? Autonomous technology as a compelling alternative to humans. PLoS One 2024; 19:e0292944. PMID: 38422082. PMCID: PMC10903831. DOI: 10.1371/journal.pone.0292944.
Abstract
Who should decide how limited resources are prioritized? We ask this question in a healthcare context, where patients must be prioritized according to their need and where advances in autonomous artificial intelligence-based technology offer a compelling alternative to decisions by humans. Qualitative (Study 1a; N = 50) and quantitative (Study 1b; N = 800) analyses identified agency, emotional experience, bias-free, and error-free as four main qualities describing people's perceptions of autonomous computer programs (ACPs) and human staff members (HSMs). Yet these qualities were not perceived to be possessed equally by HSMs and ACPs. HSMs were credited with the human qualities of agency and emotional experience, whereas ACPs were perceived as more capable than HSMs of bias-free and error-free decision-making. Consequently, better-than-average (Study 2; N = 371) or relatively better (Study 3, N = 181; Study 4, N = 378) ACP performance, especially on qualities characteristic of ACPs, was sufficient to reverse preferences so that ACPs were favored over HSMs as the decision makers for how limited healthcare resources should be prioritized. Our findings serve a practical purpose regarding potential barriers to public acceptance of technology and have theoretical value for our understanding of perceptions of autonomous technologies.
Affiliation(s)
- Riccardo Russo
- Department of Psychology, University of Essex, Colchester, United Kingdom
- Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
12
Sele D, Chugunova M. Putting a human in the loop: Increasing uptake, but decreasing accuracy of automated decision-making. PLoS One 2024; 19:e0298037. PMID: 38335162. PMCID: PMC10857587. DOI: 10.1371/journal.pone.0298037.
Abstract
As automated decision-making gains traction, it prompts discussions on regulation, with calls for human oversight. Understanding how human involvement affects the acceptance of algorithmic recommendations and the accuracy of resulting decisions is vital. In an online experiment (N = 292), participants in a prediction task chose a recommendation stemming either from an algorithm or from another participant. In a between-subject design, we varied whether the prediction was delegated completely or whether the recommendation could be adjusted. 66% of the time, participants preferred to delegate the decision to an algorithm over an equally accurate human. The preference for an algorithm increased by 7 percentage points if participants could monitor and adjust the recommendations. Participants followed algorithmic recommendations more closely. Importantly, they were less likely to intervene with the least accurate recommendations. Hence, in our experiment, the human-in-the-loop design increases the uptake but decreases the accuracy of the decisions.
Affiliation(s)
- Daniela Sele
- Center for Law & Economics, ETH Zurich, Zurich, Switzerland
- Marina Chugunova
- Max Planck Institute for Innovation and Competition, Munich, Germany
13
Semujanga B, Parent-Rocheleau X. Time-Based Stress and Procedural Justice: Can Transparency Mitigate the Effects of Algorithmic Compensation in Gig Work? International Journal of Environmental Research and Public Health 2024; 21:86. PMID: 38248549. PMCID: PMC10815495. DOI: 10.3390/ijerph21010086.
Abstract
The gig economy has led to a new management style that uses algorithms to automate managerial decisions. Algorithmic management has aroused the interest of researchers, particularly regarding the prevalence of precarious working conditions and the health issues related to gig work. Although algorithmically driven remuneration mechanisms influence working conditions, few studies have focused on the compensation dimension of algorithmic management. We investigate the effects of algorithmic compensation on gig workers in relation to perceptions of procedural justice and time-based stress, two important predictors of work-related health problems. This study also examines the moderating effect of algorithmic transparency in these relationships. Survey data were collected from 962 gig workers via a research panel. The results of hierarchical multiple regression analyses show that the degree of exposure to algorithmic compensation is positively related to time-based stress. However, contrary to our expectations, algorithmic compensation is also positively associated with procedural justice perceptions, and this relation is enhanced at higher levels of perceived algorithmic transparency. Furthermore, transparency plays no role in the relationship between algorithmic compensation and time-based stress. These findings suggest that perceived algorithmic transparency makes algorithmic compensation appear even fairer but does not appear to make it less stressful.
Affiliation(s)
- Benjamin Semujanga
- Department of Human Resources Management, HEC Montréal, 3000 Côte Ste-Catherine, Montréal, QC H3T 2A7, Canada
14
Kenny R, Fischhoff B, Davis A, Canfield C. Improving Social Bot Detection Through Aid and Training. Human Factors 2023:187208231210145. PMID: 37963198. DOI: 10.1177/00187208231210145.
Abstract
OBJECTIVE We test the effects of three aids on individuals' ability to detect social bots among Twitter personas: a bot indicator score, a training video, and a warning. BACKGROUND Detecting social bots can prevent online deception. We use a simulated social media task to evaluate the three aids. METHOD Lay participants judged whether each of 60 Twitter personas was a human or a social bot in a simulated online environment, with agreement between three machine learning algorithms used to estimate the probability of each persona being a bot. Experiment 1 compared a control group and two intervention groups, one provided with a bot indicator score for each tweet and the other with a warning about social bots. Experiment 2 compared a control group and two intervention groups, one receiving the bot indicator scores and the other a training video focused on heuristics for identifying social bots. RESULTS The bot indicator score intervention improved predictive performance and reduced overconfidence in both experiments. The training video was also effective, although somewhat less so. The warning had no effect. Participants rarely reported willingness to share content from a persona that they labeled as a bot, even when they agreed with it. CONCLUSIONS Informative interventions improved social bot detection; a warning alone did not. APPLICATION We offer an experimental testbed and methodology that can be used to evaluate and refine interventions designed to reduce vulnerability to social bots. We show the value of two interventions that could be applied in many settings.
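The abstract states that the bot indicator score was derived from agreement between three machine learning algorithms but does not give the combination rule; a minimal sketch, assuming a simple average of the three classifiers' probability estimates:

```python
def bot_indicator(probabilities):
    """Combine several classifiers' bot probabilities into one score.

    probabilities: per-classifier estimates that a persona is a bot,
    e.g. [0.9, 0.7, 0.8]. Averaging is an assumed combination rule;
    the study's exact method is not specified in the abstract.
    """
    return sum(probabilities) / len(probabilities)

# Example: three classifiers broadly agree that a persona is a bot.
score = bot_indicator([0.9, 0.7, 0.8])
print(f"Bot indicator: {score:.0%}")  # Bot indicator: 80%
```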
Affiliation(s)
- Ryan Kenny
- United States Army, Fayetteville, NC, USA
- Alex Davis
- Carnegie Mellon University, Pittsburgh, PA, USA
- Casey Canfield
- Missouri University of Science and Technology, Rolla, MO, USA
15
Neumann M, Niessen ASM, Meijer RR. Predicting decision-makers' algorithm use. Computers in Human Behavior 2023. DOI: 10.1016/j.chb.2023.107759.
16
Jiang L, Qin X, Yam KC, Dong X, Liao W, Chen C. Who should be first? How and when AI-human order influences procedural justice in a multistage decision-making process. PLoS One 2023; 18:e0284840. PMID: 37459307. PMCID: PMC10351705. DOI: 10.1371/journal.pone.0284840.
Abstract
Artificial intelligence (AI) has fundamentally changed the way people live and has largely reshaped organizational decision-making processes. In particular, AI decision making has become involved in almost every aspect of human resource management, including recruiting, selecting, motivating, and retaining employees. However, existing research considers only single-stage decision-making processes and overlooks the more common multistage processes. Drawing upon person-environment fit theory and the algorithm reductionism perspective, we explore how and when the order of decision makers (i.e., AI-human order vs. human-AI order) affects procedural justice in a multistage decision-making process involving AI and humans. We propose and found that individuals perceived a decision-making process arranged in human-AI order as having less AI ability-power fit (i.e., fit between the abilities of AI and the power it is granted) than one arranged in AI-human order, which led to less procedural justice. Furthermore, perceived AI ability buffered the indirect effect of the order of decision makers on procedural justice via AI ability-power fit. Together, our findings suggest that the position of AI in collaborations with humans has profound impacts on individuals' justice perceptions regarding their decision making.
Affiliation(s)
- Luyuan Jiang
- Sun Yat-sen Business School, Sun Yat-sen University, Guangzhou, Guangdong, China
- Xin Qin
- Sun Yat-sen Business School, Sun Yat-sen University, Guangzhou, Guangdong, China
- Kai Chi Yam
- Business School, National University of Singapore, Singapore, Singapore
- Xiaowei Dong
- Sun Yat-sen Business School, Sun Yat-sen University, Guangzhou, Guangdong, China
- Wanqi Liao
- Sun Yat-sen Business School, Sun Yat-sen University, Guangzhou, Guangdong, China
- Chen Chen
- Sun Yat-sen Business School, Sun Yat-sen University, Guangzhou, Guangdong, China
17
Kizilcec RF. To Advance AI Use in Education, Focus on Understanding Educators. International Journal of Artificial Intelligence in Education 2023:1-8. PMID: 37359103. PMCID: PMC10255939. DOI: 10.1007/s40593-023-00351-4.
Abstract
A better understanding of educators' perspectives of emerging education technology, specifically tools that incorporate AI, is essential to unlock the full potential benefits of these innovations. While prior research has primarily emphasized technological advancements, it has overlooked the profound influence of social, psychological, and cultural factors in shaping educators' perceptions, trust, and adoption of educational technology. As increasingly powerful AI tools emerge, their design must be rooted in a deep understanding of educators' needs and perspectives. It is only with the acceptance and trust of educators that these innovative solutions can elevate learning outcomes, academic achievements, and educational equity.
Affiliation(s)
- René F. Kizilcec
- Department of Information Science, Cornell University, Ithaca, NY, USA
18
Kupfer C, Prassl R, Fleiß J, Malin C, Thalmann S, Kubicek B. Check the box! How to deal with automation bias in AI-based personnel selection. Front Psychol 2023; 14:1118723. PMID: 37089740. PMCID: PMC10113449. DOI: 10.3389/fpsyg.2023.1118723.
Abstract
Artificial Intelligence (AI) as decision support for personnel preselection, e.g., in the form of a dashboard, promises a more effective and fairer selection process. However, AI-based decision support systems might prompt decision makers to thoughtlessly accept the system's recommendation. As this so-called automation bias contradicts ethical and legal requirements of human oversight for the use of AI-based recommendations in personnel preselection, the present study investigates strategies to reduce automation bias and increase decision quality. Based on the Elaboration Likelihood Model, we assume that instructing decision makers about the possibility of system errors and about their responsibility for the decision, as well as providing an appropriate level of data aggregation, should encourage decision makers to process information systematically instead of heuristically. We conducted a 3 (general information, information about system errors, information about responsibility) × 2 (low vs. high data aggregation) experiment to investigate which strategy can reduce automation bias and enhance decision quality. We found that less automation bias, in terms of higher scores on verification intensity indicators, correlated with higher objective decision quality, i.e., more suitable applicants selected. Decision makers who received information about system errors scored higher on verification intensity indicators and rated subjective decision quality higher; decision makers who were informed about their responsibility, unexpectedly, did not. Regarding the aggregation level of data, decision makers in the highly aggregated data group spent less time on the level of the dashboard where highly aggregated data were presented. Our results show that it is important to inform decision makers who interact with AI-based decision-support systems about potential system errors and to provide them with less aggregated data in order to reduce automation bias and enhance decision quality.
Affiliation(s)
- Cordula Kupfer
- Work and Organizational Psychology, Institute of Psychology, University of Graz, Graz, Austria
- Rita Prassl
- Work and Organizational Psychology, Institute of Psychology, University of Graz, Graz, Austria
- Jürgen Fleiß
- Business Analytics and Data Science-Center, University of Graz, Graz, Austria
- Christine Malin
- Business Analytics and Data Science-Center, University of Graz, Graz, Austria
- Stefan Thalmann
- Business Analytics and Data Science-Center, University of Graz, Graz, Austria
- Bettina Kubicek
- Work and Organizational Psychology, Institute of Psychology, University of Graz, Graz, Austria
19
Benjamin DM, Morstatter F, Abbas AE, Abeliuk A, Atanasov P, Bennett S, Beger A, Birari S, Budescu DV, Catasta M, Ferrara E, Haravitch L, Himmelstein M, Hossain KSMT, Huang Y, Jin W, Joseph R, Leskovec J, Matsui A, Mirtaheri M, Ren X, Satyukov G, Sethi R, Singh A, Sosic R, Steyvers M, Szekely PA, Ward MD, Galstyan A. Hybrid forecasting of geopolitical events. AI Mag 2023. DOI: 10.1002/aaai.12085.
20
Bezrukova K, Griffith TL, Spell C, Rice V, Yang HE. Artificial Intelligence and Groups: Effects of Attitudes and Discretion on Collaboration. Group & Organization Management 2023. DOI: 10.1177/10596011231160574.
Abstract
We theorize about human-team collaboration with AI by drawing upon research in groups and teams, social psychology, information systems, engineering, and beyond. Based on our review, we focus on two main issues in the teams and AI arena. The first is whether the team generally views AI positively or negatively. The second is whether the decision to use AI is left up to the team members (voluntary use of AI) or mandated by top management or other policy-setters in the organization. These two aspects guide our creation of a team-level conceptual framework modeling how AI introduced as a mandated addition to the team can have asymmetric effects on collaboration depending on the team's attitudes toward AI. When the team views AI positively, mandatory use suppresses collaboration in the team; but when the team has negative attitudes toward AI, mandatory use elevates team collaboration. Our model emphasizes the need to manage team attitudes and discretion strategies, and it promotes new research directions regarding AI's implications for teamwork.
Affiliation(s)
- Chester Spell
- Rutgers University School of Business, Camden NJ, USA
21
Feldkamp T, Langer M, Wies L, König CJ. Justice, trust, and moral judgements when personnel selection is supported by algorithms. European Journal of Work and Organizational Psychology 2023. DOI: 10.1080/1359432x.2023.2169140.
Affiliation(s)
- Tina Feldkamp
- Fachrichtung Psychologie, Universität des Saarlandes, Saarbrücken, Germany
- Markus Langer
- Fachrichtung Psychologie, Universität des Saarlandes, Saarbrücken, Germany
- Leo Wies
- Fachrichtung Psychologie, Universität des Saarlandes, Saarbrücken, Germany
- Cornelius J. König
- Fachrichtung Psychologie, Universität des Saarlandes, Saarbrücken, Germany
22
Westphal M, Vössing M, Satzger G, Yom-Tov GB, Rafaeli A. Decision control and explanations in human-AI collaboration: Improving user perceptions and compliance. Computers in Human Behavior 2023. DOI: 10.1016/j.chb.2023.107714.
23
Aysolmaz B, Müller R, Meacham D. The public perceptions of algorithmic decision-making systems: results from a large-scale survey. Telematics and Informatics 2023. DOI: 10.1016/j.tele.2023.101954.
24
Does human–AI collaboration lead to more creative art? Aesthetic evaluation of human-made and AI-generated haiku poetry. Computers in Human Behavior 2023. DOI: 10.1016/j.chb.2022.107502.
25
Neumann M, Niessen ASM, Hurks PPM, Meijer RR. Holistic and mechanical combination in psychological assessment: Why algorithms are underutilized and what is needed to increase their use. International Journal of Selection and Assessment 2023. DOI: 10.1111/ijsa.12416.
Affiliation(s)
- Marvin Neumann
- Department of Psychometrics and Statistics, Faculty of Behavioral and Social Sciences, University of Groningen, Groningen, The Netherlands
- A. Susan M. Niessen
- Department of Psychometrics and Statistics, Faculty of Behavioral and Social Sciences, University of Groningen, Groningen, The Netherlands
- Petra P. M. Hurks
- Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, University of Maastricht, Maastricht, The Netherlands
- Rob R. Meijer
- Department of Psychometrics and Statistics, Faculty of Behavioral and Social Sciences, University of Groningen, Groningen, The Netherlands
26
Hatherley J, Sparrow R, Howard M. The Virtues of Interpretable Medical Artificial Intelligence. Camb Q Healthc Ethics. DOI: 10.1017/s0963180122000305.
Abstract
Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are "black boxes." The initial response in the literature was a demand for "explainable AI." However, recently, several authors have suggested that making AI more explainable or "interpretable" is likely to come at the cost of the accuracy of these systems and that prioritizing interpretability in medical AI may constitute a "lethal prejudice." In this paper, we defend the value of interpretability in the context of the use of AI in medicine. Clinicians may prefer interpretable systems over more accurate black boxes, which in turn is sufficient to give designers of AI reason to prefer more interpretable systems in order to ensure that AI is adopted and its benefits realized. Moreover, clinicians may be justified in this preference. Achieving the downstream benefits of AI is critically dependent on how the outputs of these systems are interpreted by physicians and patients. A preference for the use of highly accurate black box AI systems over less accurate but more interpretable systems may itself constitute a form of lethal prejudice that may diminish the benefits of AI to, and perhaps even harm, patients.
Affiliation(s)
- Joshua Hatherley
- School of Philosophical, Historical, and International Studies, Monash University, Clayton, Victoria, Australia
- Robert Sparrow
- School of Philosophical, Historical, and International Studies, Monash University, Clayton, Victoria, Australia
- Mark Howard
- School of Philosophical, Historical, and International Studies, Monash University, Clayton, Victoria, Australia
27
Algorithms as partners in crime: A lesson in ethics by design. Computers in Human Behavior 2023. DOI: 10.1016/j.chb.2022.107483.
28
What type of algorithm is perceived as fairer and more acceptable? A comparative analysis of rule-driven versus data-driven algorithmic decision-making in public affairs. Government Information Quarterly 2023. DOI: 10.1016/j.giq.2023.101803.
29
Filiz I, Judek JR, Lorenz M, Spiwoks M. The extent of algorithm aversion in decision-making situations with varying gravity. PLoS One 2023; 18:e0278751. PMID: 36809526. PMCID: PMC9942970. DOI: 10.1371/journal.pone.0278751.
Abstract
Algorithms already carry out many tasks more reliably than human experts. Nevertheless, some subjects have an aversion towards algorithms. In some decision-making situations an error can have serious consequences; in others it does not. In the context of a framing experiment, we examine the connection between the consequences of a decision-making situation and the frequency of algorithm aversion. We show that the more serious the consequences of a decision are, the more frequently algorithm aversion occurs. Particularly for very important decisions, algorithm aversion thus reduces the probability of success. This can be described as the tragedy of algorithm aversion.
Affiliation(s)
- Ibrahim Filiz
- Faculty of Business, Ostfalia University of Applied Sciences, Wolfsburg, Germany
- Jan René Judek
- Faculty of Business, Ostfalia University of Applied Sciences, Wolfsburg, Germany
- Marco Lorenz
- Faculty of Economic Sciences, Georg August University Göttingen, Göttingen, Germany
- Markus Spiwoks
- Faculty of Business, Ostfalia University of Applied Sciences, Wolfsburg, Germany
30
Hatherley J, Sparrow R, Howard M. The Virtues of Interpretable Medical Artificial Intelligence. Camb Q Healthc Ethics 2022:1-10. PMID: 36524245. DOI: 10.1017/s0963180122000305.
Abstract
Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are "black boxes." The initial response in the literature was a demand for "explainable AI." However, recently, several authors have suggested that making AI more explainable or "interpretable" is likely to come at the cost of the accuracy of these systems and that prioritizing interpretability in medical AI may constitute a "lethal prejudice." In this article, we defend the value of interpretability in the context of the use of AI in medicine. Clinicians may prefer interpretable systems over more accurate black boxes, which in turn is sufficient to give designers of AI reason to prefer more interpretable systems in order to ensure that AI is adopted and its benefits realized. Moreover, clinicians may be justified in this preference. Achieving the downstream benefits of AI is critically dependent on how the outputs of these systems are interpreted by physicians and patients. A preference for the use of highly accurate black box AI systems over less accurate but more interpretable systems may itself constitute a form of lethal prejudice that may diminish the benefits of AI to, and perhaps even harm, patients.
Affiliation(s)
- Joshua Hatherley
- School of Philosophical, Historical, and International Studies, Monash University, Clayton, Victoria 3168, Australia
- Robert Sparrow
- School of Philosophical, Historical, and International Studies, Monash University, Clayton, Victoria 3168, Australia
- Mark Howard
- School of Philosophical, Historical, and International Studies, Monash University, Clayton, Victoria 3168, Australia
31
Adam M, Roethke K, Benlian A. Human Versus Automated Sales Agents: How and Why Customer Responses Shift Across Sales Stages. Information Systems Research 2022. DOI: 10.1287/isre.2022.1171.
Abstract
Customers in sales processes increasingly encounter automated sales agents that complement or replace human sales agents. Yet, little is known about whether, how, and why customers respond to automated agents in contrast to human agents across successive decision stages of the same sales process. Even less is known about customer responses to combinations where both agents assume distinct roles and focus on complementary tasks that are traditionally performed by only one single agent. Against this backdrop, this paper explores the influence of increasingly common sales representative types on customer decisions across sales stages. Our findings demonstrate that customer responses to automated (versus human) sales agents are not stable in sales processes and instead, shift as customers move across sales stages. What is more, the paper shows that combinations of sales agents versus single sales agents do matter, yet their differential effects depend on contextual features of the sales setting. These insights are important because vendors may assume that a certain type of sales agent is always more appreciated by customers, whereas in fact, different sales agent types bring distinct attributes to the table, and customers’ appreciation of these attributes shifts across sales stages.
Affiliation(s)
- Martin Adam
- Information Systems and E-Services, Technical University of Darmstadt, 64293 Darmstadt, Germany
- Konstantin Roethke
- Information Systems and E-Services, Technical University of Darmstadt, 64293 Darmstadt, Germany
- Alexander Benlian
- Information Systems and E-Services, Technical University of Darmstadt, 64293 Darmstadt, Germany
32
Fan Y, Liu X. Exploring the role of AI algorithmic agents: The impact of algorithmic decision autonomy on consumer purchase decisions. Front Psychol 2022; 13:1009173. PMID: 36337487. PMCID: PMC9630471. DOI: 10.3389/fpsyg.2022.1009173.
Abstract
Although related studies have examined how different images of artificial intelligence products affect consumer evaluations, the impact of algorithmic decision autonomy on consumer purchase decisions remains under-explored. Based on self-determination theory, this research discusses how the agent decision-making roles played by different levels of AI algorithmic decision autonomy influence consumer purchase decisions. The results of three studies indicate that algorithmic decision autonomy has an inverted U-shaped effect on consumers' purchase decisions, that consumers' self-efficacy mediates the relationship between algorithmic decision autonomy and purchase decisions, and that consumers' power distance moderates the relationships between algorithmic decision autonomy, self-efficacy, and purchase decisions. These results can provide references for marketers, retailers, algorithm designers, and other parties formulating algorithmic marketing strategies, helping AI algorithmic decisions better serve consumers and achieve value co-creation with them.
33
Bogert E, Lauharatanahirun N, Schecter A. Human preferences toward algorithmic advice in a word association task. Sci Rep 2022; 12:14501. PMID: 36008508. PMCID: PMC9411628. DOI: 10.1038/s41598-022-18638-2.
Abstract
Algorithms provide recommendations to human decision makers across a variety of task domains. For many problems, humans will rely on algorithmic advice to make their choices and at times will even show complacency. In other cases, humans are mistrustful of algorithmic advice, or will hold algorithms to higher standards of performance. Given the increasing use of algorithms to support creative work such as text generation and brainstorming, it is important to understand how humans will respond to algorithms in those scenarios—will they show appreciation or aversion? This study tests the effects of algorithmic advice for a word association task, the remote associates test (RAT). The RAT task is an established instrument for testing critical and creative thinking with respect to multiple word association. We conducted a preregistered online experiment (154 participants, 2772 observations) to investigate whether humans had stronger reactions to algorithmic or crowd advice when completing multiple instances of the RAT. We used an experimental format in which subjects see a question, answer the question, then receive advice and answer the question a second time. Advice was provided in multiple formats, with advice varying in quality and questions varying in difficulty. We found that individuals receiving algorithmic advice changed their responses 13% more frequently (χ² = 59.06, p < 0.001) and reported greater confidence in their final solutions. However, individuals receiving algorithmic advice were also 13% less likely to identify the correct solution (χ² = 58.79, p < 0.001). This study highlights both the promises and pitfalls of leveraging algorithms to support creative work.
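To make the reported statistics concrete, here is a minimal sketch of the kind of chi-squared comparison behind these values (advice source vs. whether the response changed); the cell counts are invented for illustration, with only the total of 2,772 observations matching the study.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows are advice source, columns are whether
# the participant changed their answer after seeing the advice.
# These counts are invented; only the test structure mirrors the paper.
table = [[820, 566],   # algorithmic advice: changed, kept
         [640, 746]]   # crowd advice:       changed, kept

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
```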
Affiliation(s)
- Eric Bogert
- Department of Supply Chain and Information Management, Northeastern University, Boston, MA, 02115, USA
- Nina Lauharatanahirun
- Departments of Biomedical Engineering and Biobehavioral Health, Pennsylvania State University, University Park, PA, 16802, USA
- Aaron Schecter
- Department of Management Information Systems, University of Georgia, Athens, GA, 30602, USA.
34
Lacroux A, Martin-Lacroux C. Should I Trust the Artificial Intelligence to Recruit? Recruiters' Perceptions and Behavior When Faced With Algorithm-Based Recommendation Systems During Resume Screening. Front Psychol 2022; 13:895997. PMID: 35874355. PMCID: PMC9298741. DOI: 10.3389/fpsyg.2022.895997.
Abstract
Resume screening assisted by decision support systems that incorporate artificial intelligence is currently undergoing strong development in many organizations, raising technical, managerial, legal, and ethical issues. The purpose of the present paper is to better understand the reactions of recruiters when they are offered algorithm-based recommendations during resume screening. Two polarized attitudes have been identified in the literature on users' reactions to algorithm-based recommendations: algorithm aversion, which reflects a general distrust of algorithms and a preference for human recommendations; and automation bias, which corresponds to overconfidence in the decisions or recommendations made by algorithmic decision support systems (ADSS). Drawing on results obtained in the field of automated decision support, we make the general hypothesis that recruiters trust human experts more than ADSS because they distrust algorithms for subjective decisions such as recruitment. An experiment on resume screening was conducted on a sample of professionals (N = 694) involved in the screening of job applications. They were asked to study a job offer, then evaluate two fictitious resumes in a 2 × 2 factorial design with manipulation of the type of recommendation (no recommendation/algorithmic recommendation/human expert recommendation) and of the consistency of the recommendations (consistent vs. inconsistent recommendation). Our results support the general hypothesis of a preference for human recommendations: recruiters exhibit a higher level of trust toward human expert recommendations than toward algorithmic recommendations. However, we also found that the recommendations' consistency has a differential and unexpected impact on decisions: in the presence of an inconsistent algorithmic recommendation, recruiters favored the unsuitable over the suitable resume. Our results also show that specific personality traits (extraversion, neuroticism, and self-confidence) are associated with differential use of algorithmic recommendations. Finally, implications for research and HR policies are discussed.
Collapse
Affiliation(s)
- Alain Lacroux
- Univ. Polytechnique Hauts de France, IDH, CRISS, Valenciennes, France
| | | |
Collapse
|
35
|
Shaikh M, Vaast E. Algorithmic Interactions in Open Source Work. INFORMATION SYSTEMS RESEARCH 2022. [DOI: 10.1287/isre.2022.1153] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
Abstract
This study focuses on algorithmic interactions in open source work. Algorithms are essential in open source because they remedy concerns that parallel development and modularity address only incompletely. Following algorithmic interactions in open source allows us to map the operational performance of algorithms and to understand how multiple algorithms work together to accomplish work. Studying algorithms working together shows how residual interdependencies of modularity, and problems left unresolved by reliance on parallel development, are worked around in open source work. We examine the case of the Linux Kernel, which reveals how algorithmic interactions facilitate open source work through three processes: managing, organizing, and supervising development work. Our qualitative study theorizes how algorithmic interactions intensify through these processes, which work together to facilitate development. We make a theoretical contribution to open source scholarship by explaining how algorithmic interactions navigate module rigidity and enhance parallel development. Our work also reveals how, in open source, developers work to automate most tasks, and that augmentation is a bidirectional relationship: algorithms augment the work of developers, and developers augment the work of algorithms.
Collapse
Affiliation(s)
- Maha Shaikh
- King’s College London, London WC2R 2LS, United Kingdom
| | - Emmanuelle Vaast
- Desautels Faculty of Management, McGill University, Montreal, Quebec H3A 1G5, Canada
| |
Collapse
|
36
|
Park EH, Werder K, Cao L, Ramesh B. Why do Family Members Reject AI in Health Care? Competing Effects of Emotions. J MANAGE INFORM SYST 2022. [DOI: 10.1080/07421222.2022.2096550] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/15/2022]
Affiliation(s)
- Eun Hee Park
- Information Technology & Decision Sciences, Strome College of Business, Old Dominion University, Norfolk, VA 23529, USA
| | - Karl Werder
- Cologne Institute for Information Systems, Faculty of Management, Economics and Social Sciences, University of Cologne, Pohligstr. 1, 50969 Cologne, Germany
| | - Lan Cao
- Information Technology & Decision Sciences, Strome College of Business, Old Dominion University, Norfolk, VA 23529, USA
| | - Balasubramaniam Ramesh
- Computer Information Systems, J. Mack Robinson College of Business, Georgia State University, Atlanta, GA 30303, USA
| |
Collapse
|
37
|
When Self-Humanization Leads to Algorithm Aversion. BUSINESS & INFORMATION SYSTEMS ENGINEERING 2022. [DOI: 10.1007/s12599-022-00754-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
Abstract
Decision support systems are increasingly being adopted by various digital platforms. However, prior research has shown that certain contexts can induce algorithm aversion, leading people to reject their decision support. This paper investigates how and why the context in which users are making decisions (for-profit versus prosocial microlending decisions) affects their degree of algorithm aversion and ultimately their preference for more human-like (versus computer-like) decision support systems. The study proposes that contexts vary in their affordances for self-humanization. Specifically, people perceive prosocial decisions as more relevant to self-humanization than for-profit contexts, and, in consequence, they ascribe more importance to empathy and autonomy while making decisions in prosocial contexts. This increased importance of empathy and autonomy leads to a higher degree of algorithm aversion. At the same time, it also leads to a stronger preference for human-like decision support, which could therefore serve as a remedy for an algorithm aversion induced by the need for self-humanization. The results from an online experiment support the theorizing. The paper discusses both theoretical and design implications, especially for the potential of anthropomorphized conversational agents on platforms for prosocial decision-making.
Collapse
|
38
|
Jain R, Garg N, Khera SN. Adoption of AI-Enabled Tools in Social Development Organizations in India: An Extension of UTAUT Model. Front Psychol 2022; 13:893691. [PMID: 35795409 PMCID: PMC9251489 DOI: 10.3389/fpsyg.2022.893691] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2022] [Accepted: 05/16/2022] [Indexed: 11/16/2022] Open
Abstract
Social development organizations increasingly employ artificial intelligence (AI)-enabled tools to help team members collaborate effectively and efficiently. These tools are used in various team management tasks and activities. Based on the unified theory of acceptance and use of technology (UTAUT), this study explores various factors influencing employees' use of AI-enabled tools. The study extends the model in two ways: (a) by evaluating the impact of these tools on employees' collaboration and (b) by exploring the moderating role of AI aversion. Data were collected through an online survey of employees working with AI-enabled tools. The research model was analyzed using partial least squares (PLS) in a two-step approach: assessment of the measurement model followed by assessment of the structural model. The results revealed that the antecedent variables (effort expectancy, performance expectancy, social influence, and facilitating conditions) are positively associated with the use of AI-enabled tools, which in turn has a positive relationship with collaboration. The results also show a significant moderating effect of AI aversion on the relationship between performance expectancy and use of technology. These findings imply that organizations should focus on building an environment conducive to adopting AI-enabled tools while also addressing employees' concerns about AI.
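A minimal sketch under stated assumptions: full PLS-SEM (measurement plus structural model) is typically run in dedicated tools such as SmartPLS or R's seminr, and the abstract does not specify the software used. The fragment below only approximates the structural component with scikit-learn's PLS regression on synthetic composite scores; all variable names and data are hypothetical.

```python
# Rough approximation of the structural relations only, NOT a full PLS-SEM.
# X columns stand in for the UTAUT antecedents, y for reported use of AI
# tools; all data here are synthetic placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n = 300
# Hypothetical composite scores: effort expectancy, performance expectancy,
# social influence, facilitating conditions.
X = rng.normal(size=(n, 4))
y = X @ np.array([0.3, 0.4, 0.2, 0.25]) + rng.normal(scale=0.5, size=n)

pls = PLSRegression(n_components=2)
pls.fit(X, y)
print("R^2 on training data:", pls.score(X, y))
print("coefficient estimates:", pls.coef_.ravel())
```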
Collapse
Affiliation(s)
| | - Naval Garg
- University School of Management and Entrepreneurship, Delhi Technological University, Rohini, India
| | | |
Collapse
|
39
|
Maasland C, Weißmüller KS. Blame the Machine? Insights From an Experiment on Algorithm Aversion and Blame Avoidance in Computer-Aided Human Resource Management. Front Psychol 2022; 13:779028. [PMID: 35693517 PMCID: PMC9177159 DOI: 10.3389/fpsyg.2022.779028] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2021] [Accepted: 04/11/2022] [Indexed: 11/25/2022] Open
Abstract
Algorithms have become increasingly relevant in supporting human resource (HR) management, but their application may entail psychological biases and unintended side effects on employee behavior. This study examines the effect of the type of HR decision (i.e., promoting or dismissing staff) on the likelihood of delegating these HR decisions to an algorithm-based decision support system. Based on prior research on algorithm aversion and blame avoidance, we conducted a quantitative online experiment using a 2×2 randomized controlled design with a sample of N = 288 highly educated young professionals and graduate students in Germany. This study partly replicates and substantially extends the methods and theoretical insights of a 2015 study by Dietvorst and colleagues. While we find that respondents exhibit a tendency to delegate presumably unpleasant HR tasks (i.e., dismissals), rather than promotions, to the algorithm, this effect is highly conditional on the opportunity to pretest the algorithm, as well as on individuals' level of trust in machine-based and human forecasts. Respondents' aversion to algorithms dominates blame avoidance by delegation. This study is the first to provide empirical evidence that the type of HR decision affects algorithm aversion only to a limited extent. Instead, it reveals the counterintuitive effect of algorithm pretesting and the relevance of confidence in forecast models in the context of algorithm-aided HRM, providing theoretical and practical insights.
Collapse
Affiliation(s)
| | - Kristina S. Weißmüller
- KPM Center for Public Management, University of Bern, Bern, Switzerland
- *Correspondence: Kristina S. Weißmüller,
| |
Collapse
|
40
|
Trust and ethics in AI. AI & SOCIETY 2022. [DOI: 10.1007/s00146-022-01473-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
|
41
|
Himmelstein M, Budescu DV. Preference for human or algorithmic forecasting advice does not predict if and how it is used. JOURNAL OF BEHAVIORAL DECISION MAKING 2022. [DOI: 10.1002/bdm.2285] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
42
|
Wesche JS, Langer M, Sonderegger A, Landers R. Editorial to the virtual Special Issue: Human-automation interaction in the workplace: A broadened scope of paradigms. COMPUTERS IN HUMAN BEHAVIOR 2022. [DOI: 10.1016/j.chb.2022.107335] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
|
43
|
Expectations of how machines use individuating information and base-rates. JUDGMENT AND DECISION MAKING 2022. [DOI: 10.1017/s1930297500003600] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/04/2023]
Abstract
Machines are increasingly used to make decisions. We investigated people's beliefs about how they do so. In six experiments, participants (total N = 2664) predicted how computer and human judges would decide legal cases on the basis of limited evidence: either individuating information from witness testimony or base-rate information. In Experiments 1 to 4, participants predicted that computer judges would be more likely than human ones to reach a guilty verdict, regardless of which kind of evidence was available. Besides asking about punishment, Experiment 5 also included conditions where the judge had to decide whether to reward suspected helpful behavior. Participants again predicted that computer judges would be more likely than human judges to decide based on the available evidence, but also predicted that computer judges would be relatively more punitive than human ones. Also, whereas participants predicted the human judge would give more weight to individuating than base-rate evidence, they expected the computer judge to be insensitive to the distinction between these kinds of evidence. Finally, Experiment 6 replicated the finding that people expect greater sensitivity to the distinction between individuating and base-rate information from humans than computers, but found that the use of cartoon images, as in the first four studies, prevented this effect. Overall, the findings suggest people expect machines to differ from humans in how they weigh different kinds of information when deciding.
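To make the base-rate versus individuating-evidence contrast concrete, here is a small worked example in the spirit of the classic taxicab problem; the numbers are illustrative assumptions, not from the study.

```python
# Illustrative only: how base-rate and individuating evidence combine under
# Bayes' rule. A judge who ignores the base rate would answer 0.80 here.
base_rate_guilty = 0.15          # prior: 15% of suspects guilty (made up)
p_testimony_if_guilty = 0.80     # witness reliability (made up)
p_testimony_if_innocent = 0.20

p_testimony = (p_testimony_if_guilty * base_rate_guilty
               + p_testimony_if_innocent * (1 - base_rate_guilty))
posterior = p_testimony_if_guilty * base_rate_guilty / p_testimony
print(f"P(guilty | testimony) = {posterior:.3f}")  # ~0.414, far below 0.80
```

The gap between 0.80 and 0.414 is exactly the sensitivity to base rates that participants expected from human judges but not from computer judges.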
Collapse
|
44
|
Jussupow E, Spohrer K, Heinzl A. Radiologists’ Usage of Diagnostic AI Systems. BUSINESS & INFORMATION SYSTEMS ENGINEERING 2022. [DOI: 10.1007/s12599-022-00750-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
While diagnostic AI systems are implemented in medical practice, it is still unclear how physicians embed them in diagnostic decision making. This study examines how radiologists come to use diagnostic AI systems in different ways and what role AI assessments play in this process if they confirm or disconfirm radiologists' own judgment. The study draws on rich qualitative data from a revelatory case study of an AI system for stroke diagnosis at a University Hospital to elaborate how three sensemaking processes revolve around confirming and disconfirming AI assessments. Through context-specific sensedemanding, sensegiving, and sensebreaking, radiologists develop distinct usage patterns of AI systems. The study reveals that diagnostic self-efficacy influences which of the three sensemaking processes radiologists engage in. In deriving six propositions, the account of sensemaking and usage of diagnostic AI systems in medical practice paves the way for future research.
Collapse
|
45
|
Lünich M, Kieslich K. Exploring the roles of trust and social group preference on the legitimacy of algorithmic decision-making vs. human decision-making for allocating COVID-19 vaccinations. AI & SOCIETY 2022; 39:1-19. [PMID: 35469142 PMCID: PMC9022054 DOI: 10.1007/s00146-022-01412-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2021] [Accepted: 02/14/2022] [Indexed: 11/24/2022]
Abstract
In combating the ongoing global health threat of the COVID-19 pandemic, decision-makers have to take actions based on a multitude of relevant health data, with severe potential consequences for the affected patients. Because of their presumed advantages in handling and analyzing vast amounts of data, computer systems for algorithmic decision-making (ADM) are being implemented to substitute for humans in decision-making processes. In this study, we focus on a specific application of ADM in contrast to human decision-making (HDM), namely the allocation of COVID-19 vaccines to the public. In particular, we elaborate on the roles of trust and social group preference in the legitimacy of vaccine allocation. We conducted a survey with a 2 × 2 randomized factorial design among n = 1602 German respondents, in which we varied the decision-making agent (HDM vs. ADM) and the prioritized social group (teachers vs. prisoners) as design factors. Our findings show that general trust in ADM systems and preference for vaccination of a specific social group influence the legitimacy of vaccine allocation. However, contrary to our expectations, trust in the agent making the decision did not moderate the link between social group preference and legitimacy. Moreover, the effect was also not moderated by the type of decision-maker (human vs. algorithm). We conclude that trustworthy ADM systems do not necessarily produce legitimate ADM decisions.
Collapse
Affiliation(s)
- Marco Lünich
- Department of Social Sciences, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
| | - Kimon Kieslich
- Department of Social Sciences, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
| |
Collapse
|
46
|
Predicting the future impact of Computer Science researchers: Is there a gender bias? Scientometrics 2022. [DOI: 10.1007/s11192-022-04337-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Abstract
The advent of large-scale bibliographic databases and powerful prediction algorithms has led to calls for data-driven approaches to targeting scarce funds at researchers with high predicted future scientific impact. The potential side effects and fairness implications of such approaches are unknown, however. Using a large-scale bibliographic data set of N = 111,156 Computer Science researchers active from 1993 to 2016, I build and evaluate a realistic scientific impact prediction model. Given the persistent under-representation of women in Computer Science, the model is audited for disparate impact based on gender. Random forests and Gradient Boosting Machines are used to predict researchers' h-index in 2010 from their bibliographic profiles in 2005. Based on model predictions, it is determined whether a researcher will become a high-performer with an h-index in the top 25% of the discipline-specific h-index distribution. The models predict the future h-index with an accuracy of R² = 0.875 and correctly classify 91.0% of researchers as high-performers or low-performers. Overall accuracy does not vary strongly across researcher gender. Nevertheless, there is indication of disparate impact against women. The models under-estimate the true h-index of female researchers more strongly than that of male researchers. Further, women are 8.6% less likely than men to be predicted to become high-performers. In practice, hiring, tenure, and funding decisions based on model predictions risk perpetuating the under-representation of women in Computer Science.
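A minimal sketch of the prediction-and-audit setup described above, assuming synthetic data in place of the bibliographic features: the model family (random forest regression) matches the abstract, but every feature, coefficient, and threshold below is an invented placeholder.

```python
# Sketch of the impact-prediction pipeline: regress future h-index on
# early-career features, then classify the top-25% as "high-performers".
# All data are synthetic; only the overall procedure follows the abstract.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000
# Hypothetical 2005 features: papers, citations, coauthors, career age.
X = rng.normal(size=(n, 4))
h_2010 = 10 + X @ np.array([3.0, 4.0, 1.5, 1.0]) + rng.normal(scale=2.0, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, h_2010, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("R^2:", r2_score(y_te, pred))

# Binary classification derived from the predicted h-index distribution.
threshold = np.quantile(pred, 0.75)
high_performer = pred >= threshold
print("share predicted high-performers:", high_performer.mean())
```

A fairness audit like the one described would then compare prediction errors and high-performer rates across gender groups rather than stopping at overall accuracy.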
Collapse
|
47
|
Xie NG, Wang M, Dai YY, Ye Y, Lai JW, Wang L, Cheong KH. Decision-making psychology and method under zero-knowledge context. Sci Rep 2022; 12:3187. [PMID: 35210448 PMCID: PMC8873423 DOI: 10.1038/s41598-022-06753-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2021] [Accepted: 02/02/2022] [Indexed: 11/09/2022] Open
Abstract
For a certain kind of decision event, the decision maker knows neither the internal mechanism nor the underlying knowledge of the event. When such a decision event offers multiple selection branches, decision makers show a psychological tendency to look for the most common features by comparing the branches. Based on this, a zero-knowledge decision making (ZKDM) method is proposed. By defining the feature points and feature sets of the selection branches of the decision event, the characteristic moments of the system are constructed and the branch with the most common characteristics is identified. Survey findings indicate that the probability of arriving at the correct choice with the ZKDM method is high. The effectiveness of the ZKDM method may be related to the fact that designers of decision events usually determine the correct selection branch first, before modifying it to design the other branches. A questionnaire survey of 279 respondents reveals that more than half of them actually adopt such a design approach. Furthermore, a separate questionnaire survey of 465 decision-makers reveals that 19.14% of respondents clearly adopt ZKDM.
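The abstract does not spell out the formal construction of feature sets and characteristic moments, so the following is only a guessed sketch of the core idea: score each answer option by how many features it shares with the other options and pick the most "common" one. The word-set feature extraction is a naive assumption for illustration, not the authors' method.

```python
# Guessed sketch of the ZKDM idea: with zero knowledge of the question,
# choose the option that shares the most features with the other options.
# Feature extraction (lowercased word sets) is an assumed simplification.
def zkdm_choice(options: list[str]) -> str:
    features = [set(opt.lower().split()) for opt in options]

    def commonality(i: int) -> int:
        # Total feature overlap between option i and every other option.
        return sum(len(features[i] & features[j])
                   for j in range(len(features)) if j != i)

    # Ties resolve to the earliest option with the maximal score.
    best = max(range(len(options)), key=commonality)
    return options[best]

# The three pressure options share most words; the outlier is avoided.
print(zkdm_choice([
    "increase the red valve pressure",
    "decrease the red valve pressure",
    "keep the red valve pressure",
    "paint the wall blue",
]))
```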
Collapse
Affiliation(s)
- Neng-Gang Xie
- School of Management Science and Engineering, Anhui University of Technology, Ma'anshan, 243002, Anhui, China.
| | - Meng Wang
- School of Business, Anhui University of Technology, Ma'anshan, 243002, Anhui, China
| | - Ya-Yun Dai
- School of Management Science and Engineering, Anhui University of Technology, Ma'anshan, 243002, Anhui, China
| | - Ye Ye
- School of Mechanical Engineering, Anhui University of Technology, Ma'anshan, 243002, Anhui, China
| | - Joel Weijia Lai
- Science, Mathematics and Technology Cluster, Singapore University of Technology and Design (SUTD), 8 Somapah Road, Singapore, 487372, Singapore
| | - Lu Wang
- School of Mechanical Engineering, Anhui University of Technology, Ma'anshan, 243002, Anhui, China
| | - Kang Hao Cheong
- Science, Mathematics and Technology Cluster, Singapore University of Technology and Design (SUTD), 8 Somapah Road, Singapore, 487372, Singapore.
| |
Collapse
|
48
|
Chacon A, Kausel EE, Reyes T. A longitudinal approach for understanding algorithm use. JOURNAL OF BEHAVIORAL DECISION MAKING 2022. [DOI: 10.1002/bdm.2275] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Affiliation(s)
- Alvaro Chacon
- School of Engineering Pontificia Universidad Católica de Chile Santiago Chile
| | - Edgar E. Kausel
- School of Management Pontificia Universidad Católica de Chile Santiago Chile
| | - Tomas Reyes
- School of Engineering Pontificia Universidad Católica de Chile Santiago Chile
| |
Collapse
|
49
|
Neumann M, Niessen ASM, Tendeiro JN, Meijer RR. The autonomy‐validity dilemma in mechanical prediction procedures: The quest for a compromise. JOURNAL OF BEHAVIORAL DECISION MAKING 2021. [DOI: 10.1002/bdm.2270] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Affiliation(s)
- Marvin Neumann
- Department of Psychometrics and Statistics, Faculty of Behavioral and Social Sciences University of Groningen Groningen The Netherlands
| | - A. Susan M. Niessen
- Department of Psychometrics and Statistics, Faculty of Behavioral and Social Sciences University of Groningen Groningen The Netherlands
| | - Jorge N. Tendeiro
- Office of Research and Academia‐Government‐Community Collaboration, Education, Research Center for Artificial Intelligence and Data Innovation Hiroshima University Hiroshima Japan
| | - Rob R. Meijer
- Department of Psychometrics and Statistics, Faculty of Behavioral and Social Sciences University of Groningen Groningen The Netherlands
| |
Collapse
|
50
|
Threlkeld R, Ashiku L, Canfield C, Shank DB, Schnitzler MA, Lentine KL, Axelrod DA, Battineni ACR, Randall H, Dagli C. Reducing Kidney Discard With Artificial Intelligence Decision Support: the Need for a Transdisciplinary Systems Approach. CURRENT TRANSPLANTATION REPORTS 2021; 8:263-271. [PMID: 35059280 PMCID: PMC8727423 DOI: 10.1007/s40472-021-00351-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/24/2021] [Indexed: 11/20/2022]
Abstract
PURPOSE OF REVIEW: A transdisciplinary systems approach to the design of an artificial intelligence (AI) decision support system can more effectively address the limitations of AI systems. By incorporating stakeholder input early in the process, the final product is more likely to improve decision-making and effectively reduce kidney discard.
RECENT FINDINGS: Kidney discard is a complex problem that will require increased coordination between transplant stakeholders. An AI decision support system has significant potential, but there are challenges associated with overfitting, poor explainability, and inadequate trust. A transdisciplinary approach provides a holistic perspective that incorporates expertise from engineering, social science, and transplant healthcare. A systems approach leverages techniques for visualizing the system architecture to support solution design from multiple perspectives.
SUMMARY: Developing a systems-based approach to AI decision support involves engaging in a cycle of documenting the system architecture, identifying pain points, developing prototypes, and validating the system. Early efforts have focused on describing process issues to prioritize tasks that would benefit from AI support.
Collapse
Affiliation(s)
- Richard Threlkeld
- Engineering Management & Systems Engineering, Missouri University of Science & Technology, 223 Engineering Management 600 W 14th St, MO 65409 Rolla, USA
| | - Lirim Ashiku
- Engineering Management & Systems Engineering, Missouri University of Science & Technology, 223 Engineering Management 600 W 14th St, MO 65409 Rolla, USA
| | - Casey Canfield
- Engineering Management & Systems Engineering, Missouri University of Science & Technology, 223 Engineering Management 600 W 14th St, MO 65409 Rolla, USA
| | - Daniel B. Shank
- Psychological Science, Missouri University of Science & Technology, Rolla, MO USA
| | | | | | | | | | - Henry Randall
- Saint Louis University Transplant Center, St. Louis, MO USA
| | - Cihan Dagli
- Engineering Management & Systems Engineering, Missouri University of Science & Technology, 223 Engineering Management 600 W 14th St, MO 65409 Rolla, USA
| |
Collapse
|