1. Liao G, Wang F, Zhu W, Zhang Q. Negative performance feedback from algorithms or humans? Effect of medical researchers' algorithm aversion on scientific misconduct. BMC Med Ethics 2024; 25:118. PMID: 39443942; PMCID: PMC11515751; DOI: 10.1186/s12910-024-01121-0.
Abstract
Institutions increasingly employ algorithms, rather than traditional human managers, to provide performance feedback by tracking productivity, conducting performance appraisals, and developing improvement plans. This shift has provoked considerable debate over the effectiveness and fairness of algorithmic feedback. This study investigates the effects of negative performance feedback (NPF) on the attitudes, cognition, and behavior of medical researchers, comparing NPF from algorithms with NPF from humans. Two scenario-based experimental studies were conducted with a total sample of 660 medical researchers (algorithm group: N1 = 411; human group: N2 = 249). Study 1 analyzes differences in scientific misconduct, moral disengagement, and algorithmic attitudes between the two sources of NPF. The findings reveal that NPF from algorithms produces higher levels of moral disengagement, scientific misconduct, and negative attitudes towards algorithms than NPF from humans. Study 2, grounded in trait activation theory, investigates how NPF from algorithms triggers individuals' egoism and algorithm aversion, potentially leading to moral disengagement and scientific misconduct. Results indicate that algorithm aversion triggers individuals' egoism, and their interaction enhances moral disengagement, which in turn increases scientific misconduct among researchers. This relationship is also moderated by algorithmic transparency. The study concludes that while algorithms can streamline performance evaluations, they pose significant risks of researcher misconduct if not properly designed. These findings extend our understanding of NPF by highlighting the emotional and cognitive challenges algorithms face in decision-making processes, while underscoring the importance of balancing technological efficiency with moral considerations to promote a healthy research environment. Managerial implications include integrating human oversight into algorithmic NPF processes and enhancing transparency and fairness to mitigate negative impacts on medical researchers' attitudes and behaviors.
Affiliation(s)
- Ganli Liao, Business School, Beijing Information Science and Technology University, Beijing, China
- Feiwen Wang, Business School, Beijing Information Science and Technology University, Beijing, China
- Wenhui Zhu, Zhongguancun Smart City Co., Ltd, Beijing, China
- Qichao Zhang, Business School, Beijing Information Science and Technology University, Beijing, China
2. Dima J, Gilbert MH, Dextras-Gauthier J, Giraud L. The effects of artificial intelligence on human resource activities and the roles of the human resource triad: opportunities and challenges. Front Psychol 2024; 15:1360401. PMID: 38903456; PMCID: PMC11188403; DOI: 10.3389/fpsyg.2024.1360401.
Abstract
Introduction: This study analyzes the existing academic literature to identify the effects of artificial intelligence (AI) on human resource (HR) activities, highlighting both opportunities and associated challenges, and on the roles of employees, line managers, and HR professionals, collectively referred to as the HR triad.
Methods: We employed the scoping review method to capture and synthesize relevant academic literature in the AI-human resource management (HRM) field, examining 27 years of research (43 peer-reviewed articles are included).
Results: Based on the results, we propose an integrative framework that outlines the five primary effects of AI on HR activities: task automation, optimized HR data use, augmentation of human capabilities, work context redesign, and transformation of the social and relational aspects of work. We also detail the opportunities and challenges associated with each of these effects and the changes in the roles of the HR triad.
Discussion: This research contributes to the ongoing debate on AI-augmented HRM by discussing the theoretical contributions and managerial implications of our findings, along with avenues for future research. By considering the most recent studies on the topic, this scoping review sheds light on the effects of AI on the roles of the HR triad, enabling these key stakeholders to better prepare for this technological change. The findings can inform future academic research, organizations using or considering the application of AI in HRM, and policymakers. This is particularly timely, given the growing adoption of AI in HRM activities.
Affiliation(s)
- Justine Dima, School of Engineering and Management Vaud, HES-SO, Yverdon-les-Bains, Switzerland
- Marie-Hélène Gilbert, Department of Management, Faculty of Business Administration, Laval University, Quebec, QC, Canada
- Julie Dextras-Gauthier, Department of Management, Faculty of Business Administration, Laval University, Quebec, QC, Canada
- Laurent Giraud, IREGE, IAE Savoie Mont Blanc, Savoie Mont Blanc University, Annecy, France
3. Kupfer C, Prassl R, Fleiß J, Malin C, Thalmann S, Kubicek B. Check the box! How to deal with automation bias in AI-based personnel selection. Front Psychol 2023; 14:1118723. PMID: 37089740; PMCID: PMC10113449; DOI: 10.3389/fpsyg.2023.1118723.
Abstract
Artificial Intelligence (AI) as decision support for personnel preselection, e.g., in the form of a dashboard, promises a more effective and fairer selection process. However, AI-based decision support systems might prompt decision makers to thoughtlessly accept the system’s recommendation. As this so-called automation bias contradicts ethical and legal requirements of human oversight for the use of AI-based recommendations in personnel preselection, the present study investigates strategies to reduce automation bias and increase decision quality. Based on the Elaboration Likelihood Model, we assume that instructing decision makers about the possibility of system errors and their responsibility for the decision, as well as providing an appropriate level of data aggregation should encourage decision makers to process information systematically instead of heuristically. We conducted a 3 (general information, information about system errors, information about responsibility) x 2 (low vs. high aggregated data) experiment to investigate which strategy can reduce automation bias and enhance decision quality. We found that less automation bias in terms of higher scores on verification intensity indicators correlated with higher objective decision quality, i.e., more suitable applicants selected. Decision makers who received information about system errors scored higher on verification intensity indicators and rated subjective decision quality higher, but decision makers who were informed about their responsibility, unexpectedly, did not. Regarding aggregation level of data, decision makers of the highly aggregated data group spent less time on the level of the dashboard where highly aggregated data were presented. Our results show that it is important to inform decision makers who interact with AI-based decision-support systems about potential system errors and provide them with less aggregated data to reduce automation bias and enhance decision quality.
Affiliation(s)
- Cordula Kupfer, Work and Organizational Psychology, Institute of Psychology, University of Graz, Graz, Austria (corresponding author)
- Rita Prassl, Work and Organizational Psychology, Institute of Psychology, University of Graz, Graz, Austria
- Jürgen Fleiß, Business Analytics and Data Science-Center, University of Graz, Graz, Austria
- Christine Malin, Business Analytics and Data Science-Center, University of Graz, Graz, Austria
- Stefan Thalmann, Business Analytics and Data Science-Center, University of Graz, Graz, Austria
- Bettina Kubicek, Work and Organizational Psychology, Institute of Psychology, University of Graz, Graz, Austria
4. Jacob Fernandes França T, São Mamede H, Pereira Barroso JM, Pereira Duarte dos Santos VM. Artificial intelligence applied to potential assessment and talent identification in an organisational context. Heliyon 2023; 9:e14694. PMID: 37035361; PMCID: PMC10073758; DOI: 10.1016/j.heliyon.2023.e14694.
Abstract
Our study provides valuable insights into the relationship between artificial intelligence (AI) and Human Resource Management (HRM). We have minimised bias and ensured reliable findings by employing a systematic literature review and the PRISMA statement. Our comprehensive synthesis of the studies included in this research, along with a bibliometric analysis of articles, journals, indexes, authors' affiliations, citations, keyword co-occurrences, and co-authorship analysis, has produced robust results. The discussion of our findings focuses on critical areas of interest, such as AI and Talent, AI Bias, Ethics and Law, and their impact on Human Resource (HR) management. Our research highlights the recognition by organisations of the importance of talent management in achieving a competitive advantage as higher-level skills become increasingly necessary. Although some HR managers have adopted AI technology for talent acquisition, our study reveals that there is still room for improvement. Our study is in line with previous research that acknowledges the potential for AI to revolutionise HR management and the future of work. Our findings emphasise the need for HR managers to be proactive in embracing technology and bridging the technological, human, societal, and governmental gaps. Our study contributes to the growing body of AI and HR management knowledge, providing essential insights and recommendations for future research. The importance of our study lies in its focus on the role of HR in promoting the benefits of AI-based applications, thereby creating a larger body of knowledge from an organisational perspective.
5. Feldkamp T, Langer M, Wies L, König CJ. Justice, trust, and moral judgements when personnel selection is supported by algorithms. European Journal of Work and Organizational Psychology 2023. DOI: 10.1080/1359432x.2023.2169140.
Affiliation(s)
- Tina Feldkamp, Fachrichtung Psychologie, Universität des Saarlandes, Saarbrücken, Germany
- Markus Langer, Fachrichtung Psychologie, Universität des Saarlandes, Saarbrücken, Germany
- Leo Wies, Fachrichtung Psychologie, Universität des Saarlandes, Saarbrücken, Germany
- Cornelius J. König, Fachrichtung Psychologie, Universität des Saarlandes, Saarbrücken, Germany
6. Koch‐Bayram IF, Kaibel C, Biemann T, Triana MDC. <Click to begin your digital interview>: Applicants' experiences with discrimination explain their reactions to algorithms in personnel selection. International Journal of Selection and Assessment 2023. DOI: 10.1111/ijsa.12417.
Affiliation(s)
- Chris Kaibel, Department of Management, University of Mannheim, Mannheim, Germany
- Torsten Biemann, Department of Management, University of Mannheim, Mannheim, Germany
- María del Carmen Triana, Owen Graduate School of Management, Organization Studies Area, Vanderbilt University, Nashville, Tennessee, USA
7. Building trust in automatic video interviews using various AI interfaces: Tangibility, immediacy, and transparency. Computers in Human Behavior 2023. DOI: 10.1016/j.chb.2023.107713.
8. What type of algorithm is perceived as fairer and more acceptable? A comparative analysis of rule-driven versus data-driven algorithmic decision-making in public affairs. Government Information Quarterly 2023. DOI: 10.1016/j.giq.2023.101803.
9. What shapes our attitudes towards algorithms in urban governance? The role of perceived friendliness and controllability of the city, and human-algorithm cooperation. Computers in Human Behavior 2023. DOI: 10.1016/j.chb.2023.107653.
10. Köchling A, Wehner MC. Better explaining the benefits why AI? Analyzing the impact of explaining the benefits of AI‐supported selection on applicant responses. International Journal of Selection and Assessment 2022. DOI: 10.1111/ijsa.12412.
11. Wesche JS, Hennig F, Kollhed CS, Quade J, Kluge S, Sonderegger A. People's reactions to decisions by human vs. algorithmic decision-makers: the role of explanations and type of selection tests. European Journal of Work and Organizational Psychology 2022. DOI: 10.1080/1359432x.2022.2132940.
Affiliation(s)
- Jenny S. Wesche, Sozial-, Organisations- und Wirtschaftspsychologie, Freie Universität Berlin, Germany
- Frederike Hennig, Sozial-, Organisations- und Wirtschaftspsychologie, Freie Universität Berlin, Germany
- Jessica Quade, Sozial-, Organisations- und Wirtschaftspsychologie, Freie Universität Berlin, Germany
- Sören Kluge, Sozial-, Organisations- und Wirtschaftspsychologie, Freie Universität Berlin, Germany
12. Hellwig P, Buchholz V, Kopp S, Maier GW. Let the user have a say - voice in automated decision-making. Computers in Human Behavior 2022. DOI: 10.1016/j.chb.2022.107446.
13. Hunkenschroer AL, Kriebitz A. Is AI recruiting (un)ethical? A human rights perspective on the use of AI for hiring. AI and Ethics 2022; 3:199-213. PMID: 35909984; PMCID: PMC9309597; DOI: 10.1007/s43681-022-00166-4.
Abstract
The use of artificial intelligence (AI) technologies in organizations' recruiting and selection procedures has become commonplace in business practice; accordingly, research on AI recruiting has increased substantially in recent years. But, though various articles have highlighted the potential opportunities and ethical risks of AI recruiting, the topic has not been normatively assessed yet. We aim to fill this gap by providing an ethical analysis of AI recruiting from a human rights perspective. In doing so, we elaborate on human rights' theoretical implications for corporate use of AI-driven hiring solutions. Therefore, we analyze whether AI hiring practices inherently conflict with the concepts of validity, autonomy, nondiscrimination, privacy, and transparency, which represent the main human rights relevant in this context. Concluding that these concepts are not at odds, we then use existing legal and ethical implications to determine organizations' responsibility to enforce and realize human rights standards in the context of AI recruiting.
Affiliation(s)
- Anna Lena Hunkenschroer, Chair of Business Ethics, Technical University of Munich, Arcisstr. 21, 80333 Munich, Germany
- Alexander Kriebitz, Chair of Business Ethics, Technical University of Munich, Arcisstr. 21, 80333 Munich, Germany
14. Will P, Krpan D, Lordan G. People versus machines: introducing the HIRE framework. Artif Intell Rev 2022. DOI: 10.1007/s10462-022-10193-6.
Abstract
The use of Artificial Intelligence (AI) in the recruitment process is becoming a more common method for organisations to hire new employees. Despite this, there is little consensus on whether AI should have widespread use in the hiring process, and in which contexts. In order to bring more clarity to research findings, we propose the HIRE (Human, (Artificial) Intelligence, Recruitment, Evaluation) framework with the primary aim of evaluating studies which investigate how Artificial Intelligence can be integrated into the recruitment process with respect to gauging whether AI is an adequate, better, or worse substitute for human recruiters. We illustrate the simplicity of this framework by conducting a systematic literature review on the empirical studies assessing AI in the recruitment process, with 22 final papers included. The review shows that AI is equal to or better than human recruiters when it comes to efficiency and performance. We also find that AI is mostly better than humans in improving diversity. Finally, we demonstrate that there is a perception among candidates and recruiters that AI is worse than humans. Overall, we conclude based on the evidence that AI is equal or superior to humans when utilised in the hiring process; however, humans hold a belief in their own superiority. Our aim is that future authors adopt the HIRE framework when conducting research in this area to allow for easier comparability, and ideally state the HIRE framework outcome of AI being better, equal, worse, or unclear in the abstract.
15. Gonzalez MF, Liu W, Shirase L, Tomczak DL, Lobbe CE, Justenhoven R, Martin NR. Allying with AI? Reactions toward human-based, AI/ML-based, and augmented hiring processes. Computers in Human Behavior 2022. DOI: 10.1016/j.chb.2022.107179.
16. Yang D, Zhao WG (Will), Du J, Yang Y. Approaching Artificial Intelligence in business and economics research: a bibliometric panorama (1966–2020). Technology Analysis & Strategic Management 2022. DOI: 10.1080/09537325.2022.2043268.
Affiliation(s)
- Dong Yang, School of Management Science and Engineering, Dongbei University of Finance and Economics, Dalian, China; School of Business Administration, Anhui University of Finance and Economics, Bengbu, China
- W. G. (Will) Zhao, Faculty of Business Administration, Lakehead University, Thunder Bay, Canada; Centre for Research in the Behavioural Sciences, Nottingham University Business School, Nottingham, UK; Stratford School of Interaction Design and Business, University of Waterloo, Stratford, Canada
- Jingjing Du, School of Business Administration, Anhui University of Finance and Economics, Bengbu, China
- Yimin Yang, Department of Computer Science, Lakehead University, Thunder Bay, Canada; Vector Institute for Artificial Intelligence, Toronto, Canada
17. Rieger T, Roesler E, Manzey D. Challenging presumed technological superiority when working with (artificial) colleagues. Sci Rep 2022; 12:3768. PMID: 35260683; PMCID: PMC8904495; DOI: 10.1038/s41598-022-07808-x.
Abstract
Technological advancements are ubiquitously supporting or even replacing humans in all areas of life, bringing the potential for human-technology symbiosis but also novel challenges. To address these challenges, we conducted three experiments in different task contexts ranging from loan assignment over X-Ray evaluation to process industry. Specifically, we investigated the impact of support agent (artificial intelligence, decision support system, or human) and failure experience (one vs. none) on trust-related aspects of human-agent interaction. This included not only the subjective evaluation of the respective agent in terms of trust, reliability, and responsibility, when working together, but also a change in perspective to the willingness to be assessed oneself by the agent. In contrast to a presumed technological superiority, we show a general advantage with regard to trust and responsibility of human support over both technical support systems (i.e., artificial intelligence and decision support system), regardless of task context from the collaborative perspective. This effect reversed to a preference for technical systems when switching the perspective to being assessed. These findings illustrate an imperfect automation schema from the perspective of the advice-taker and demonstrate the importance of perspective when working with or being assessed by machine intelligence.
Affiliation(s)
- Tobias Rieger, Department of Psychology and Ergonomics, Technische Universität Berlin, Marchstr. 12, F7, 10587 Berlin, Germany
- Eileen Roesler, Department of Psychology and Ergonomics, Technische Universität Berlin, Marchstr. 12, F7, 10587 Berlin, Germany
- Dietrich Manzey, Department of Psychology and Ergonomics, Technische Universität Berlin, Marchstr. 12, F7, 10587 Berlin, Germany
18. Köchling A, Wehner MC, Warkocz J. Can I show my skills? Affective responses to artificial intelligence in the recruitment process. Review of Managerial Science 2022. DOI: 10.1007/s11846-021-00514-4.
Abstract
Companies increasingly use artificial intelligence (AI) and algorithmic decision-making (ADM) for their recruitment and selection process for cost and efficiency reasons. However, there are concerns about the applicant’s affective response to AI systems in recruitment, and knowledge about the affective responses to the selection process is still limited, especially when AI supports different selection process stages (i.e., preselection, telephone interview, and video interview). Drawing on the affective response model, we propose that affective responses (i.e., opportunity to perform, emotional creepiness) mediate the relationships between an increasing AI-based selection process and organizational attractiveness. In particular, by using a scenario-based between-subject design with German employees (N = 160), we investigate whether and how AI-support during a complete recruitment process diminishes the opportunity to perform and increases emotional creepiness during the process. Moreover, we examine the influence of opportunity to perform and emotional creepiness on organizational attractiveness. We found that AI-support at later stages of the selection process (i.e., telephone and video interview) decreased the opportunity to perform and increased emotional creepiness. In turn, the opportunity to perform and emotional creepiness mediated the association of AI-support in telephone/video interviews with organizational attractiveness. However, we did not find negative affective responses to AI-support at an earlier stage of the selection process (i.e., during preselection). As we offer evidence for possible adverse reactions to the usage of AI in selection processes, this study provides important practical and theoretical implications.
19. Repelled at first sight? Expectations and intentions of job-seekers reading about AI selection in job advertisements. Computers in Human Behavior 2021. DOI: 10.1016/j.chb.2021.106931.
20. Langer M, Landers RN. The future of artificial intelligence at work: A review on effects of decision automation and augmentation on workers targeted by algorithms and third-party observers. Computers in Human Behavior 2021. DOI: 10.1016/j.chb.2021.106878.
21. Zhao S, Liu M, Xi M. Guest editorial. Chinese Management Studies 2021. DOI: 10.1108/cms-09-2021-797.
22. Schlicker N, Langer M, Ötting SK, Baum K, König CJ, Wallach D. What to expect from opening up ‘black boxes’? Comparing perceptions of justice between human and automated agents. Computers in Human Behavior 2021. DOI: 10.1016/j.chb.2021.106837.
23. “I” feel(s) left out: The importance of information and communication technology in personnel selection research. Industrial and Organizational Psychology: Perspectives on Science and Practice 2021. DOI: 10.1017/iop.2021.79.
24. Langer M, Baum K, König CJ, Hähne V, Oster D, Speith T. Spare me the details: How the type of information about automated interviews influences applicant reactions. International Journal of Selection and Assessment 2021. DOI: 10.1111/ijsa.12325.
Affiliation(s)
- Markus Langer, Fachrichtung Psychologie, Universität des Saarlandes, Saarbrücken, Germany
- Kevin Baum, Fachrichtung Philosophie und Fachrichtung Informatik, Universität des Saarlandes, Saarbrücken, Germany
- Viviane Hähne, Fachrichtung Psychologie, Universität des Saarlandes, Saarbrücken, Germany
- Daniel Oster, Fachrichtung Philosophie und Fachrichtung Informatik, Universität des Saarlandes, Saarbrücken, Germany
- Timo Speith, Fachrichtung Philosophie und Fachrichtung Informatik, Universität des Saarlandes, Saarbrücken, Germany
25. Noble SM, Foster LL, Craig SB. The procedural and interpersonal justice of automated application and resume screening. International Journal of Selection and Assessment 2021. DOI: 10.1111/ijsa.12320.
Affiliation(s)
- Sean M. Noble, Department of Psychology, North Carolina State University, Raleigh, NC, USA
- Lori L. Foster, Department of Psychology, North Carolina State University, Raleigh, NC, USA