1
Monaco F, Andretta V, Bellocchio U, Cerrone V, Cascella M, Piazza O. Bibliometric Analysis (2000-2024) of Research on Artificial Intelligence in Nursing. ANS Adv Nurs Sci 2024:00012272-990000000-00099. [PMID: 39356114 DOI: 10.1097/ans.0000000000000542]
Abstract
We conducted a bibliometric analysis utilizing the Web of Science database, selecting 1925 articles concerning artificial intelligence (AI) in nursing. The analysis utilized the network visualization tool VOSviewer to explore global collaborations, highlighting prominent roles played by the United States, China, and Japan, as well as institutional partnerships involving Columbia University and Harvard Medical School. Keyword analysis identified prevalent themes, and co-citation analysis highlighted influential journals. A notable increase in AI-related publications in nursing was observed over time, reflecting growing interest in the field. However, high-quality clinical research and increased scientific collaboration are still needed.
Affiliation(s)
- Federica Monaco
- Author Affiliations: Department of Critical Care, Anesthesia and Pain Medicine, ASL NA1, Napoli, Italy (Dr Monaco); Department of Medicine, A.O.U. San Giovanni di Dio e Ruggi D'Aragona, U.O.C. Hospital Hygiene and Epidemiology, Salerno, Italy (Prof Andretta); Department of Urology, Istituto Nazionale Tumori-IRCCS, Fondazione Pascale, Naples, Italy (Dr Bellocchio); Department of Medicine, A.O.U. San Giovanni di Dio e Ruggi D'Aragona, U.O.C. Oncology, Salerno, Italy (Dr Cerrone); and Anesthesia and Pain Medicine, Department of Medicine, Surgery and Dentistry "Scuola Medica Salernitana", University of Salerno, Baronissi, Italy (Profs Cascella and Piazza)
2
Federico CA, Trotsyuk AA. Biomedical Data Science, Artificial Intelligence, and Ethics: Navigating Challenges in the Face of Explosive Growth. Annu Rev Biomed Data Sci 2024; 7:1-14. [PMID: 38598860 DOI: 10.1146/annurev-biodatasci-102623-104553]
Abstract
Advances in biomedical data science and artificial intelligence (AI) are profoundly changing the landscape of healthcare. This article reviews the ethical issues that arise with the development of AI technologies, including threats to privacy, data security, consent, and justice, as they relate to donors of tissue and data. It also considers broader societal obligations, including the importance of assessing the unintended consequences of AI research in biomedicine. In addition, this article highlights the challenge of rapid AI development against the backdrop of disparate regulatory frameworks, calling for a global approach to address concerns around data misuse, unintended surveillance, and the equitable distribution of AI's benefits and burdens. Finally, a number of potential solutions to these ethical quandaries are offered; namely, the article discusses the merits of a collaborative, informed, and flexible regulatory approach that balances innovation with individual rights and public welfare, fostering a trustworthy AI-driven healthcare ecosystem.
Affiliation(s)
- Carole A Federico
- Center for Biomedical Ethics, Stanford University School of Medicine, Stanford, California, USA
- Artem A Trotsyuk
- Center for Biomedical Ethics, Stanford University School of Medicine, Stanford, California, USA
3
Ying Y, Jin S. Artificial intelligence and green product innovation: Moderating effect of organizational capital. Heliyon 2024; 10:e28572. [PMID: 38590843 PMCID: PMC10999920 DOI: 10.1016/j.heliyon.2024.e28572]
Abstract
Green product innovation (GPDI) is crucial for addressing ecological issues and essential for enterprises' green operations and long-term growth. Digitization offers new possibilities for enhancing corporate green practices. Nevertheless, previous studies have predominantly addressed the association between overall digitalization and corporate green innovation, and research on the effects of specific categories of digital technology on green innovation is lacking. Within this framework, this study broadens the investigation into the connection between distinct categories of digital technologies and corporate green innovation. The period 2013-2022 was selected as the sample observation period, with companies listed on China's A-share market as the study objects. A fixed-effects model was applied to investigate the impact of artificial intelligence (AI) on firms' GPDI while exploring the interaction effect of firms' organizational capital. The findings indicate that AI is beneficial to firms' GPDI. This effect is enhanced by employee and board human capital but diminished by board social capital. These results remained valid after two-stage least squares regression. This study broadens the application of the resource-based view and dynamic capability theory in business practice, extends outcome research on AI, and provides a digital enhancement pathway for corporate GPDI, with significant theoretical and practical implications.
Affiliation(s)
- Ying Ying
- College of Business, Gachon University, Seongnam, 13120, Republic of Korea
- Shanyue Jin
- College of Business, Gachon University, Seongnam, 13120, Republic of Korea
4
Liang X, Liang Y, Kang W, Wei H. Research on military-civilian collaborative innovation of science and technology based on a stochastic differential game model. PLoS One 2024; 19:e0292635. [PMID: 38180981 PMCID: PMC10769100 DOI: 10.1371/journal.pone.0292635]
Abstract
The construction of an integrated national strategic system and capability is an essential goal of implementing the strategy of military-civilian integration in the contemporary era, and the collaborative innovation of military-civilian S&T is an inevitable choice for achieving this goal. Due to the dynamic, complex, and stochastic characteristics of military-civilian S&T collaborative innovation, the level of S&T innovation is highly volatile. Taking the internal and external stochastic disturbances of military-civilian S&T collaborative innovation as its perspective, this paper studies the strategy selection problem of military-civilian S&T collaborative innovation under military domination, constructing a stochastic differential game model to explore innovation strategies under the non-cooperative model without military subsidies, the non-cooperative model with military subsidies, and the collaborative model. Finally, we use numerical experiments to verify the validity of the conclusions. The study shows that: (1) Within a reasonable range of values of the benefit distribution coefficient, the system can achieve the Pareto optimum, and the collaborative model is conducive to improving the S&T innovation level and the optimal benefit level of the system. (2) Military subsidies can increase the benefits of the system and the parties involved, achieving a Pareto improvement. (3) The level of S&T innovation under the collaborative model has dynamic evolutionary characteristics in its expectation and variance; as the intensity of disturbance increases, the stability of the system may be destroyed, and civil enterprises' preference for the cooperative or the non-cooperative model varies with their degree of risk aversion.
Affiliation(s)
- Xin Liang
- Department of Management Engineering and Equipment Economics, Naval University of Engineering, Wuhan, China
- Yunjuan Liang
- Department of Management Engineering and Equipment Economics, Naval University of Engineering, Wuhan, China
- Weijia Kang
- Department of Management Engineering and Equipment Economics, Naval University of Engineering, Wuhan, China
- Hua Wei
- Department of Management Engineering and Equipment Economics, Naval University of Engineering, Wuhan, China
5
Ivanova S, Kuznetsov A, Zverev R, Rada A. Artificial Intelligence Methods for the Construction and Management of Buildings. Sensors (Basel) 2023; 23:8740. [PMID: 37960440 PMCID: PMC10650802 DOI: 10.3390/s23218740]
Abstract
Artificial intelligence covers a variety of methods and disciplines, including vision, perception, speech and dialogue, decision making and planning, problem solving, robotics and other applications in which self-learning is possible. The aim of this work was to study the possibilities of using AI algorithms at various stages of construction to ensure the safety of the process. The objects of this research were scientific publications about the use of artificial intelligence in construction and ways to optimize this process. To search for information, the Scopus and Web of Science databases were used for the period from the early 1990s (the appearance of the first publication on the topic) until the end of 2022; generalization was the main method. It has been established that artificial intelligence is a set of technologies and methods used to complement traditional human capabilities, such as intelligence and analytical abilities. The use of 3D modeling for the design of buildings, machine learning for the conceptualization of design in 3D, computer vision, planning for the effective use of construction equipment, artificial intelligence and artificial superintelligence have been studied. The review also covers automatic programming for natural language processing, knowledge-based systems, robots, building maintenance, adaptive strategies, adaptive programming, genetic algorithms and unmanned aircraft systems as applications of artificial intelligence in construction. The prospects of using AI in construction are shown.
Affiliation(s)
- Svetlana Ivanova
- Natural Nutraceutical Biotesting Laboratory, Kemerovo State University, Krasnaya Street 6, 650043 Kemerovo, Russia
- Department of TNSMD Theory and Methods, Kemerovo State University, Krasnaya Street 6, 650043 Kemerovo, Russia
- Aleksandr Kuznetsov
- Computer Engineering Center, Digital Institute, Kemerovo State University, Krasnaya Street 6, 650043 Kemerovo, Russia
- Roman Zverev
- Digital Institute, Kemerovo State University, Krasnaya Street 6, 650043 Kemerovo, Russia
- Artem Rada
- Digital Institute, Kemerovo State University, Krasnaya Street 6, 650043 Kemerovo, Russia
6
Governance framework for autonomous and cognitive digital twins in agile supply chains. Comput Ind 2023. [DOI: 10.1016/j.compind.2023.103857]
7
De Gagne JC. The State of Artificial Intelligence in Nursing Education: Past, Present, and Future Directions. Int J Environ Res Public Health 2023; 20:4884. [PMID: 36981790 PMCID: PMC10049425 DOI: 10.3390/ijerph20064884]
Abstract
As health care continues to evolve and become increasingly complex, nursing education must also evolve to keep pace with the changing landscape [...].
Affiliation(s)
- Jennie C De Gagne
- School of Nursing, Duke University, 307 Trent Drive, DUMC 3322, Durham, NC 27710, USA
8
Filgueiras F, Junquilho TA. The Brazilian (Non)perspective on national strategy for artificial intelligence. Discov Artif Intell 2023. [DOI: 10.1007/s44163-023-00052-w]
Abstract
This article examines the design dynamics and process of the Brazilian National Strategy for Artificial Intelligence (EBIA). We argue that Brazil has a long history of policies for digital development, covering a range of policies that encourage research and development and deploy AI-based digital technologies in industry and governments. Specifically for AI policy, we analyze how these policies are fragmented into different initiatives without an approach to integrating the instruments and their mixes, and how and why the design of EBIA fails to create an integrated perspective of these policies. This fragmentation results in failures to involve actors in the formulation and implementation process. The article concludes that the Brazilian perspective on artificial intelligence, emerging with EBIA, reproduces a situation of path dependence without promoting significant policy changes for the development and application of this technology in society.
9
Hadjiiski L, Cha K, Chan HP, Drukker K, Morra L, Näppi JJ, Sahiner B, Yoshida H, Chen Q, Deserno TM, Greenspan H, Huisman H, Huo Z, Mazurchuk R, Petrick N, Regge D, Samala R, Summers RM, Suzuki K, Tourassi G, Vergara D, Armato SG. AAPM task group report 273: Recommendations on best practices for AI and machine learning for computer-aided diagnosis in medical imaging. Med Phys 2023; 50:e1-e24. [PMID: 36565447 DOI: 10.1002/mp.16188]
Abstract
Rapid advances in artificial intelligence (AI) and machine learning, and specifically in deep learning (DL) techniques, have enabled broad application of these methods in health care. The promise of the DL approach has spurred further interest in computer-aided diagnosis (CAD) development and applications using both "traditional" machine learning methods and newer DL-based methods. We use the term CAD-AI to refer to this expanded clinical decision support environment that uses traditional and DL-based AI methods. Numerous studies have been published to date on the development of machine learning tools for computer-aided, or AI-assisted, clinical tasks. However, most of these machine learning models are not ready for clinical deployment. It is of paramount importance to ensure that a clinical decision support tool undergoes proper training and rigorous validation of its generalizability and robustness before adoption for patient care in the clinic. To address these important issues, the American Association of Physicists in Medicine (AAPM) Computer-Aided Image Analysis Subcommittee (CADSC) is charged, in part, to develop recommendations on practices and standards for the development and performance assessment of computer-aided decision support systems. The committee has previously published two opinion papers on the evaluation of CAD systems and issues associated with user training and quality assurance of these systems in the clinic. With machine learning techniques continuing to evolve and CAD applications expanding to new stages of the patient care process, the current task group report considers the broader issues common to the development of most, if not all, CAD-AI applications and their translation from the bench to the clinic. The goal is to bring attention to the proper training and validation of machine learning algorithms that may improve their generalizability and reliability and accelerate the adoption of CAD-AI systems for clinical decision support.
Affiliation(s)
- Lubomir Hadjiiski
- Department of Radiology, University of Michigan, Ann Arbor, Michigan, USA
- Kenny Cha
- U.S. Food and Drug Administration, Silver Spring, Maryland, USA
- Heang-Ping Chan
- Department of Radiology, University of Michigan, Ann Arbor, Michigan, USA
- Karen Drukker
- Department of Radiology, University of Chicago, Chicago, Illinois, USA
- Lia Morra
- Department of Control and Computer Engineering, Politecnico di Torino, Torino, Italy
- Janne J Näppi
- 3D Imaging Research, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts, USA
- Berkman Sahiner
- U.S. Food and Drug Administration, Silver Spring, Maryland, USA
- Hiroyuki Yoshida
- 3D Imaging Research, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts, USA
- Quan Chen
- Department of Radiation Medicine, University of Kentucky, Lexington, Kentucky, USA
- Thomas M Deserno
- Peter L. Reichertz Institute for Medical Informatics of TU Braunschweig and Hannover Medical School, Braunschweig, Germany
- Hayit Greenspan
- Department of Biomedical Engineering, Faculty of Engineering, Tel Aviv University, Tel Aviv, Israel & Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Henkjan Huisman
- Radboud Institute for Health Sciences, Radboud University Medical Center, Nijmegen, The Netherlands
- Zhimin Huo
- Tencent America, Palo Alto, California, USA
- Richard Mazurchuk
- Division of Cancer Prevention, National Cancer Institute, National Institutes of Health, Bethesda, Maryland, USA
- Daniele Regge
- Radiology Unit, Candiolo Cancer Institute, FPO-IRCCS, Candiolo, Italy; Department of Surgical Sciences, University of Turin, Turin, Italy
- Ravi Samala
- U.S. Food and Drug Administration, Silver Spring, Maryland, USA
- Ronald M Summers
- Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, Maryland, USA
- Kenji Suzuki
- Institute of Innovative Research, Tokyo Institute of Technology, Tokyo, Japan
- Daniel Vergara
- Department of Radiology, Yale New Haven Hospital, New Haven, Connecticut, USA
- Samuel G Armato
- Department of Radiology, University of Chicago, Chicago, Illinois, USA
10
Mashar M, Chawla S, Chen F, Lubwama B, Patel K, Kelshiker MA, Bachtiger P, Peters NS. Artificial Intelligence Algorithms in Health Care: Is the Current Food and Drug Administration Regulation Sufficient? JMIR AI 2023; 2:e42940. [PMID: 38875544 PMCID: PMC11041443 DOI: 10.2196/42940]
Abstract
Given the growing use of machine learning (ML) technologies in health care, regulatory bodies face unique challenges in governing their clinical use. Under the regulatory framework of the Food and Drug Administration, approved ML algorithms are practically locked, preventing their adaptation in the ever-changing clinical environment and defeating the unique adaptive trait of ML technology in learning from real-world feedback. At the same time, regulations must enforce a strict level of patient safety to mitigate risk at a systemic level. Given that ML algorithms often support, or at times replace, medical professionals, we propose a novel regulatory pathway analogous to the regulation of medical professionals, encompassing the life cycle of an algorithm from inception and development to clinical implementation and continual clinical adaptation. We then discuss in-depth technical and nontechnical challenges to its implementation and offer potential solutions to unleash the full potential of ML technology in health care while ensuring quality, equity, and safety. References for this article were identified through searches of PubMed with the search terms "Artificial intelligence," "Machine learning," and "regulation" from June 25, 2017, until June 25, 2022. Articles were also identified through searches of the reference lists of those articles. Only papers published in English were reviewed. The final reference list was generated based on originality and relevance to the broad scope of this paper.
Affiliation(s)
- Meghavi Mashar
- University College London NHS Foundation Trust, London, United Kingdom
- Shreya Chawla
- Faculty of Life Sciences and Medicine, King's College London, London, United Kingdom
- Fangyue Chen
- School of Public Health, Faculty of Medicine, Imperial College London, London, United Kingdom
- Baker Lubwama
- School of Clinical Medicine, University of Cambridge, Cambridge, United Kingdom
- Kyle Patel
- Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, United States
- Mihir A Kelshiker
- National Heart and Lung Institute, Faculty of Medicine, Imperial College London, London, United Kingdom
- Patrik Bachtiger
- National Heart and Lung Institute, Faculty of Medicine, Imperial College London, London, United Kingdom
- Nicholas S Peters
- National Heart and Lung Institute, Faculty of Medicine, Imperial College London, London, United Kingdom
11
Mao Y, Shi-Kupfer K. Online public discourse on artificial intelligence and ethics in China: context, content, and implications. AI & Society 2023; 38:373-389. [PMID: 34803237 PMCID: PMC8594647 DOI: 10.1007/s00146-021-01309-7]
Abstract
The societal and ethical implications of artificial intelligence (AI) have sparked discussions among academics, policymakers and the public around the world. What has gone unnoticed so far are the likewise vibrant discussions in China. We analyzed a large sample of discussions about AI ethics on two Chinese social media platforms. Findings suggest that participants were diverse, and included scholars, IT industry actors, journalists, and members of the general public. They addressed a broad range of concerns associated with the application of AI in various fields. Some even gave recommendations on how to tackle these issues. We argue that these discussions are a valuable source for understanding the future trajectory of AI development in China as well as implications for global dialogue on AI governance.
Affiliation(s)
- Yishu Mao
- Lise Meitner Research Group “China in the Global System of Science”, Max Planck Institute for the History of Science, Berlin, Germany
12
Blanchard A, Taddeo M. The Ethics of Artificial Intelligence for Intelligence Analysis: a Review of the Key Challenges with Recommendations. Digital Society 2023; 2:12. [PMID: 37034181 PMCID: PMC10073779 DOI: 10.1007/s44206-023-00036-4]
Abstract
Intelligence agencies have identified artificial intelligence (AI) as a key technology for maintaining an edge over adversaries. As a result, efforts to develop, acquire, and employ AI capabilities for purposes of national security are growing. This article reviews the ethical challenges presented by the use of AI for augmented intelligence analysis. These challenges have been identified through a qualitative systematic review of the relevant literature. The article identifies five sets of ethical challenges relating to intrusion, explainability and accountability, bias, authoritarianism and political security, and collaboration and classification, and offers a series of recommendations targeted at intelligence agencies to address and mitigate these challenges.
Affiliation(s)
- Mariarosaria Taddeo
- The Alan Turing Institute, London, UK
- Oxford Internet Institute, University of Oxford, Oxford, UK
13
Bosses without a heart: socio-demographic and cross-cultural determinants of attitude toward Emotional AI in the workplace. AI & Society 2023; 38:97-119. [PMID: 34776651 PMCID: PMC8571983 DOI: 10.1007/s00146-021-01290-1]
Abstract
Biometric technologies are becoming more pervasive in the workplace, augmenting managerial processes such as hiring, monitoring and terminating employees. Until recently, these devices consisted mainly of GPS tools that track location, software that scrutinizes browser activity and keyboard strokes, and heat/motion sensors that monitor workstation presence. Today, however, a new generation of biometric devices has emerged that can sense, read, monitor and evaluate the affective state of a worker. More popularly known by its commercial moniker, Emotional AI, the technology stems from advancements in affective computing. Whereas previous generations of biometric monitoring targeted the exterior physical body of the worker, we argue, drawing on the writings of Foucault and Hardt, that emotion-recognition tools signal a far more invasive disciplinary gaze that exposes and makes vulnerable the inner regions of the worker-self. Our paper explores attitudes toward empathic surveillance by analyzing, with Bayesian statistics, a survey of 1015 responses from future job-seekers in 48 countries. Our findings reveal that affect tools, left unregulated in the workplace, may lead to heightened stress and anxiety among disadvantaged ethnic, gender and income groups. We also discuss a stark cross-cultural discrepancy whereby East Asian subjects, compared to Western subjects, are more likely to profess a trusting attitude toward EAI-enabled automated management. While this emerging technology is driven by neoliberal incentives to optimize the worksite and increase productivity, empathic surveillance may ultimately create more problems in terms of algorithmic bias, opaque decisionism, and the erosion of employment relations. Thus, this paper nuances and extends the emerging literature on emotion-sensing technologies in the workplace, particularly through its highly original cross-cultural study.
Supplementary Information: The online version contains supplementary material available at 10.1007/s00146-021-01290-1.
14
Su Z, Cheshmehzangi A, McDonnell D, Bentley BL, da Veiga CP, Xiang YT. Facial recognition law in China. J Med Ethics 2022; 48:1058-1059. [PMID: 35383129 DOI: 10.1136/medethics-2022-108130]
Abstract
Despite the prevalence of facial recognition-based COVID-19 surveillance tools and techniques, China does not have a facial recognition law to protect its residents' facial data. Oftentimes, neither the public nor the government knows where people's facial images are stored, how they have been used, who might use or misuse them, and to what extent. This reality is alarming, particularly factoring in the wide range of unintended consequences already caused by well-intentioned measures and mandates amid the pandemic. Biometric data are matters of personal rights and national security. In light of worrisome technologies such as deep-fake pornography, the protection of biometric data is also central to the protection of the dignity of the citizens and the government, if not the industry as well. This paper discusses the urgent need for the Chinese government to establish rigorous and timely facial recognition laws to protect the public's privacy, security, and dignity amid COVID-19 and beyond.
Affiliation(s)
- Zhaohui Su
- School of Public Health, Southeast University, Nanjing, China
- Ali Cheshmehzangi
- Department of Architecture and Built Environment, University of Nottingham - Ningbo China, Ningbo, Zhejiang, China
- Network for Education and Research on Peace and Sustainability, Hiroshima University, Hiroshima, Japan
- Dean McDonnell
- Department of Humanities, Institute of Technology Carlow, Carlow, Ireland
- Barry L Bentley
- Cardiff School of Technologies, Cardiff Metropolitan University, Cardiff, UK
- Yu-Tao Xiang
- Department of Public Health and Medicinal Administration, University of Macau, Taipa, Macau, China
- Institute of Translational Medicine, University of Macau, Macau, Macau, China
- Centre for Cognitive and Brain Sciences, University of Macau, Macau, Macau, China
15
Zhu J. AI ethics with Chinese characteristics? Concerns and preferred solutions in Chinese academia. AI & Society 2022:1-14. [PMID: 36276898 PMCID: PMC9574803 DOI: 10.1007/s00146-022-01578-w]
Abstract
Since Chinese scholars are playing an increasingly important role in shaping the national landscape of discussion on AI ethics, understanding their ethical concerns and preferred solutions is essential for global cooperation on the governance of AI. This article therefore provides the first elaborated analysis of the discourse on AI ethics in Chinese academia, via a systematic literature review. It has three main objectives: (1) to identify the most discussed ethical issues of AI in Chinese academia and those being left out (the question of "what"); (2) to analyze the solutions proposed and preferred by Chinese scholars (the question of "how"); and (3) to map out whose voices are dominant and whose are marginal (the question of "who"). Findings suggest that, in terms of short-term implications, Chinese scholars' concerns over AI predominantly resemble the content of international ethical guidelines, yet in terms of long-term implications there are some significant differences that need to be further addressed in a cultural context. Further, among a wide range of solution proposals, Chinese scholars seem to prefer strong, binding regulations over weak ethical guidelines. This article also found that the Chinese academic discourse was dominated by male scholars and those from elite universities, which arguably is not a phenomenon unique to China. Supplementary Information: The online version contains supplementary material available at 10.1007/s00146-022-01578-w.
Affiliation(s)
- Junhua Zhu
- Centre for East Asian Studies, University of Turku, Turku, Finland
16
Liebig L, Güttel L, Jobin A, Katzenbach C. Subnational AI policy: shaping AI in a multi-level governance system. AI & SOCIETY 2022. [DOI: 10.1007/s00146-022-01561-5]
Abstract
The promises and risks of Artificial Intelligence permeate current policy statements and have attracted much attention from AI governance research. However, most analyses focus exclusively on AI policy at the national and international level, overlooking existing federal governance structures. This is surprising because AI is connected to many policy areas where competences are already distributed between the national and subnational level, such as research or economic policy. Addressing this gap, this paper argues that more attention should be dedicated to subnational efforts to shape AI and asks which themes are discussed in subnational AI policy documents, using a case study of Germany's 16 states. Our qualitative analysis of 34 AI policy documents issued at the subnational level demonstrates that subnational efforts focus on knowledge transfer between research and industry actors, the commercialization of AI, the different economic identities of the German states, and the incorporation of ethical principles. Because federal states play an active role in AI policy, analysing AI as a policy issue on different levels of government is necessary and will contribute to a better understanding of the development and implementation of AI strategies in different national contexts.
17
Peter M, Ho MT. Why we need to be weary of emotional AI. AI & SOCIETY 2022. [DOI: 10.1007/s00146-022-01576-y]
18
Wäschle M, Thaler F, Berres A, Pölzlbauer F, Albers A. A review on AI Safety in highly automated driving. Front Artif Intell 2022; 5:952773. [PMID: 36262462 PMCID: PMC9574258 DOI: 10.3389/frai.2022.952773] Open
Abstract
Remarkable progress in the fields of machine learning (ML) and artificial intelligence (AI) has led to an increased number of applications of (data-driven) AI systems for the partial or complete control of safety-critical systems. Recently, ML solutions have been particularly popular. Such approaches are often met with concerns regarding their correct and safe execution, often caused by missing knowledge about, or the opacity of, their exact functionality. The investigation and derivation of methods for the safety assessment of AI systems are thus of great importance. Among others, these issues are addressed in the field of AI Safety. The aim of this work is to provide an overview of this field by means of a systematic literature review, with special focus on the area of highly automated driving, and to present a selection of approaches and methods for the safety assessment of AI systems. In particular, validation, verification, and testing are considered in this context. The review process identified two distinct classes of approaches: on the one hand, established methods that refer either to already published standards or to well-established concepts from research areas outside ML and AI; on the other hand, newly developed approaches, including methods tailored to ML and AI that have gained importance only in recent years.
Affiliation(s)
- Moritz Wäschle
- IPEK—Institute of Product Engineering, ASE—Advanced Systems Engineering, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
- Albert Albers
- IPEK—Institute of Product Engineering, ASE—Advanced Systems Engineering, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
19
Taddeo M, Blanchard A. A Comparative Analysis of the Definitions of Autonomous Weapons Systems. SCIENCE AND ENGINEERING ETHICS 2022; 28:37. [PMID: 35997901 PMCID: PMC9399191 DOI: 10.1007/s11948-022-00392-3]
Abstract
In this report we focus on the definition of autonomous weapons systems (AWS). We provide a comparative analysis of existing official definitions of AWS as provided by states and international organisations such as the ICRC and NATO. The analysis highlights that the definitions focus on different aspects of AWS and hence lead to different approaches to addressing the ethical and legal problems of these weapons systems. This fragmentation is detrimental both to fostering an understanding of AWS and to facilitating agreement on the conditions of their deployment, the regulation of their use and, indeed, whether AWS are to be used at all. We draw on the comparative analysis to identify essential aspects of AWS and then offer a definition that provides a value-neutral ground for addressing the relevant ethical and legal problems. In particular, we identify four key aspects as the essential factors for defining AWS: autonomy; the adapting capabilities of AWS; human control; and purpose of use. These aspects are also key when considering the related ethical and legal implications.
Affiliation(s)
- Mariarosaria Taddeo
- Oxford Internet Institute, University of Oxford, Oxford, UK.
- Alan Turing Institute, London, UK.
20
Zhao B, Li Y, Tan J, Wen C. Whether intelligentization promotes regional industrial competitiveness: Evidence from China. PLoS One 2022; 17:e0271186. [PMID: 35895671 PMCID: PMC9328515 DOI: 10.1371/journal.pone.0271186] Open
Abstract
Intelligentization-oriented development is a fast-growing trend of the technological revolution. It promotes the reconstruction of a region's industrial system and affects its overall industrial competitiveness. This paper sets up a variety of models featuring intelligentization level and multi-dimensional industrial competitiveness, and collects data from 28 provinces and cities in China from 2003 to 2017 to test the influence of industrial intelligentization on the industrial competitiveness of a region. The results reveal that: 1) In China's provincial jurisdictions, the higher the level of intelligentization, the lower the overall level of industrial competitiveness and the lower the proportion of industry in the economic system. In regions with highly intelligentized facilities, production sectors tend to move to less developed regions, and the growth effect of technological dividends becomes the focus. 2) Compared with the central and western regions of China, the more developed eastern region, with its higher level of intelligentization, has a stronger capacity for the research and development (R&D) of technologies, and its industrial economic structure tends to be stable, showing strong growth potential.
Affiliation(s)
- Bingjian Zhao
- Research Center for Economy of Upper Reaches of the Yangtze River, Chongqing Technology and Business University, Chongqing, China
- School of Economics, Xihua University, Chengdu, China
- Yi Li
- Research Center for Economy of Upper Reaches of the Yangtze River, Chongqing Technology and Business University, Chongqing, China
- Junyin Tan
- Research Center for Economy of Upper Reaches of the Yangtze River, Chongqing Technology and Business University, Chongqing, China
- Chuanhao Wen
- School of Economics, Yunnan University, Kunming, China
21
Bradley F. Representation of Libraries in Artificial Intelligence Regulations and Implications for Ethics and Practice. JOURNAL OF THE AUSTRALIAN LIBRARY AND INFORMATION ASSOCIATION 2022. [DOI: 10.1080/24750158.2022.2101911]
Affiliation(s)
- Fiona Bradley
- School of Social Sciences – Discipline of Political Science & International Relations, University of Western Australia, Perth, Australia; University Library, UNSW Sydney, Sydney, Australia
22
Deranty JP, Corbin T. Artificial intelligence and work: a critical review of recent research from the social sciences. AI & SOCIETY 2022. [DOI: 10.1007/s00146-022-01496-x]
Abstract
This review seeks to present a comprehensive picture of recent discussions in the social sciences of the anticipated impact of AI on the world of work. Issues covered include: technological unemployment, algorithmic management, platform work and the politics of AI work. The review identifies the major disciplinary and methodological perspectives on AI’s impact on work, and the obstacles they face in making predictions. Two parameters influencing the development and deployment of AI in the economy are highlighted: the capitalist imperative and nationalistic pressures.
23
Calzati S. Federated data as a commons: a third way to subject-centric and collective-centric approaches to data epistemology and politics. JOURNAL OF INFORMATION COMMUNICATION & ETHICS IN SOCIETY 2022. [DOI: 10.1108/jices-09-2021-0097]
Abstract
Purpose
This study advances a reconceptualization of data and information that overcomes the normative understandings often contained in data policies at national and international levels, and proposes a conceptual framework that moves beyond subject-centric and collective-centric normative understandings.
Design/methodology/approach
To do so, this study discusses the European Union (EU) and China’s approaches to data-driven technologies highlighting their similarities and differences when it comes to the vision underpinning how tech innovation is shaped.
Findings
Regardless of their different attention to the subject (the EU) and the collective (China), the normative understandings of technology by both actors remain trapped in a positivist approach that overlooks all that is not, and cannot be, turned into data, thus hindering the elaboration of a more holistic, ecological thinking that merges humans and technologies.
Originality/value
Reviewing the philosophical and political debate on data and data-driven technologies, a third way is elaborated: federated data as a commons. This third way places at the centre of the discussion the subject as, by default, part of a collective. This framing can serve as the basis for elaborating sociotechnical alternatives when it comes to defining and regulating the mash-up of humans and technology.
24
Schmid S, Riebe T, Reuter C. Dual-Use and Trustworthy? A Mixed Methods Analysis of AI Diffusion Between Civilian and Defense R&D. SCIENCE AND ENGINEERING ETHICS 2022; 28:12. [PMID: 35258776 PMCID: PMC8904348 DOI: 10.1007/s11948-022-00364-7]
Abstract
Artificial Intelligence (AI) seems to be impacting all industry sectors, while becoming a motor for innovation. The diffusion of AI from the civilian sector to the defense sector, and AI's dual-use potential has drawn attention from security and ethics scholars. With the publication of the ethical guideline Trustworthy AI by the European Union (EU), normative questions on the application of AI have been further evaluated. In order to draw conclusions on Trustworthy AI as a point of reference for responsible research and development (R&D), we approach the diffusion of AI across both civilian and military spheres in the EU. We capture the extent of technological diffusion and derive European and German patent citation networks. Both networks indicate a low degree of diffusion of AI between civilian and defense sectors. A qualitative investigation of project descriptions of a research institute's work in both civilian and military fields shows that military AI applications stress accuracy or robustness, while civilian AI reflects a focus on human-centric values. Our work represents a first approach by linking processes of technology diffusion with normative evaluations of R&D.
Affiliation(s)
- Stefka Schmid
- Science and Technology for Peace and Security (PEASEC), Technische Universität Darmstadt, Pankratiusstraße 2, 64289 Darmstadt, Germany
- Thea Riebe
- Science and Technology for Peace and Security (PEASEC), Technische Universität Darmstadt, Pankratiusstraße 2, 64289 Darmstadt, Germany
- Christian Reuter
- Science and Technology for Peace and Security (PEASEC), Technische Universität Darmstadt, Pankratiusstraße 2, 64289 Darmstadt, Germany
25
Toward accountable human-centered AI: rationale and promising directions. JOURNAL OF INFORMATION COMMUNICATION & ETHICS IN SOCIETY 2022. [DOI: 10.1108/jices-06-2021-0059]
Abstract
Purpose
Along with the various beneficial uses of artificial intelligence (AI), there are various unsavory concomitants including the inscrutability of AI tools (and the opaqueness of their mechanisms), the fragility of AI models under adversarial settings, the vulnerability of AI models to bias throughout their pipeline, the high planetary cost of running large AI models and the emergence of exploitative surveillance capitalism-based economic logic built on AI technology. This study aims to document these harms of AI technology and study how these technologies and their developers and users can be made more accountable.
Design/methodology/approach
Due to the nature of the problem, a holistic, multi-pronged approach is required to understand and counter these potential harms. This paper identifies the rationale for urgently focusing on human-centered AI and provides an outlook of promising directions, including technical proposals.
Findings
AI has the potential to benefit the entire society, but there remains an increased risk for vulnerable segments of society. This paper provides a general survey of the various approaches proposed in the literature to make AI technology more accountable. This paper reports that the development of ethical accountable AI design requires the confluence and collaboration of many fields (ethical, philosophical, legal, political and technical) and that lack of diversity is a problem plaguing the state of the art in AI.
Originality/value
This paper provides a timely synthesis of the various technosocial proposals in the literature spanning technical areas such as interpretable and explainable AI; algorithmic auditability; as well as policy-making challenges and efforts that can operationalize ethical AI and help in making AI accountable. This paper also identifies and shares promising future directions of research.
26
Ethical framework for Artificial Intelligence and Digital technologies. INTERNATIONAL JOURNAL OF INFORMATION MANAGEMENT 2022. [DOI: 10.1016/j.ijinfomgt.2021.102433]
27
Morley J, Murphy L, Mishra A, Joshi I, Karpathakis K. Governing Data and Artificial Intelligence for Health Care: Developing an International Understanding. JMIR Form Res 2022; 6:e31623. [PMID: 35099403 PMCID: PMC8844981 DOI: 10.2196/31623] Open
Abstract
Background
Although advanced analytical techniques falling under the umbrella heading of artificial intelligence (AI) may improve health care, the use of AI in health raises safety and ethical concerns. There are currently no internationally recognized governance mechanisms (policies, ethical standards, evaluation, and regulation) for developing and using AI technologies in health care. A lack of international consensus creates technical and social barriers to the use of health AI while potentially hampering market competition.
Objective
The aim of this study is to review current health data and AI governance mechanisms being developed or used by Global Digital Health Partnership (GDHP) member countries that commissioned this research, identify commonalities and gaps in approaches, identify examples of best practices, and understand the rationale for policies.
Methods
Data were collected through a scoping review of academic literature and a thematic analysis of policy documents published by selected GDHP member countries. The findings from this data collection and the literature were used to inform semistructured interviews with key senior policy makers from GDHP member countries, exploring their countries' experience of AI-driven technologies in health care and associated governance, and to inform a focus group with professionals working in international health and technology to discuss the themes and proposed policy recommendations. Policy recommendations were developed based on the aggregated research findings.
Results
As this is an empirical research paper, we primarily focused on reporting the results of the interviews and the focus group. Semistructured interviews (n=10) and a focus group (n=6) revealed 4 core areas for international collaboration: leadership and oversight; a whole-systems approach covering the entire AI pipeline from data collection to model deployment and use; standards and regulatory processes; and engagement with stakeholders and the public. There was a broad range of maturity in health AI activity among the participants, with varying data infrastructure, application of standards across the AI life cycle, and strategic approaches to both development and deployment. A demand for further consistency at the international level was identified, along with policies to support a robust innovation pipeline. In total, 13 policy recommendations were developed to support GDHP member countries in overcoming core AI governance barriers and establishing common ground for international collaboration.
Conclusions
AI-driven technology research and development for health care outpaces the creation of supporting AI governance globally. International collaboration and coordination on AI governance for health care is needed to ensure coherent solutions and to allow countries to support and benefit from each other's work. International bodies and initiatives have a leading role to play in this conversation, including producing tools and sharing practical approaches to the use of AI-driven technologies for health care.
Affiliation(s)
- Jessica Morley
- Oxford Internet Institute, University of Oxford, Oxford, United Kingdom
- Abhishek Mishra
- Uehiro Centre for Practical Ethics, University of Oxford, Oxford, United Kingdom
- Kassandra Karpathakis
- Harvard T.H. Chan School of Public Health, Harvard University, Boston, MA, United States
28
Borsci S, Lehtola VV, Nex F, Yang MY, Augustijn EW, Bagheriye L, Brune C, Kounadi O, Li J, Moreira J, Van Der Nagel J, Veldkamp B, Le DV, Wang M, Wijnhoven F, Wolterink JM, Zurita-Milla R. Embedding artificial intelligence in society: looking beyond the EU AI master plan using the culture cycle. AI & SOCIETY 2022. [DOI: 10.1007/s00146-021-01383-x]
Abstract
The European Union (EU) Commission's whitepaper on Artificial Intelligence (AI) proposes shaping the emerging AI market so that it better reflects common European values. It is a master plan that builds upon the EU AI High-Level Expert Group guidelines. This article reviews the master plan from a culture cycle perspective, reflecting on its potential clashes with current societal, technical, and methodological constraints. We identify two main obstacles to the implementation of this plan: (i) the lack of a coherent EU vision to drive future decision-making processes at state and local levels, and (ii) the lack of methods to support a sustainable diffusion of AI in our society. The lack of a coherent vision stems from not considering societal differences across the EU member states. We suggest that these differences may lead to a fractured market and an AI crisis in which different members of the EU adopt nation-centric strategies to exploit AI, preventing the development of the frictionless market envisaged by the EU. Moreover, the Commission aims to change the AI development culture by proposing a human-centred, safety-first perspective that is not supported by methodological advancements, thus risking unforeseen social and societal impacts of AI. We discuss potential societal, technical, and methodological gaps that should be filled to avoid the risk of developing AI systems at the expense of society. Our analysis results in the recommendation that EU regulators and policymakers consider how to complement the EC programme with rules and compensatory mechanisms to avoid market fragmentation due to local and global ambitions. Moreover, regulators should go beyond the human-centred approach, establishing a research agenda that seeks answers to the open technical and methodological questions regarding the development and assessment of human-AI co-action, aiming for a sustainable diffusion of AI in society.
29
Roberts H, Cowls J, Hine E, Mazzi F, Tsamados A, Taddeo M, Floridi L. Achieving a 'Good AI Society': Comparing the Aims and Progress of the EU and the US. SCIENCE AND ENGINEERING ETHICS 2021; 27:68. [PMID: 34767085 PMCID: PMC8587491 DOI: 10.1007/s11948-021-00340-7]
Abstract
Over the past few years, there has been a proliferation of artificial intelligence (AI) strategies, released by governments around the world, that seek to maximise the benefits of AI and minimise potential harms. This article provides a comparative analysis of the European Union's (EU) and the United States' (US) AI strategies and considers (i) the visions of a 'Good AI Society' that are forwarded in key policy documents and their opportunity costs, (ii) the extent to which the implementation of each vision is living up to stated aims and (iii) the consequences that these differing visions of a 'Good AI Society' have for transatlantic cooperation. The article concludes by comparing the ethical desirability of each vision and identifies areas where the EU, and especially the US, need to improve in order to achieve ethical outcomes and deepen cooperation.
Affiliation(s)
- Huw Roberts
- Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford, OX1 3JS, UK
- Josh Cowls
- Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford, OX1 3JS, UK
- Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB, UK
- Emmie Hine
- Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford, OX1 3JS, UK
- Francesca Mazzi
- Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford, OX1 3JS, UK
- Saïd Business School, University of Oxford, Park End St, Oxford, OX1 1HP, UK
- Andreas Tsamados
- Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford, OX1 3JS, UK
- Mariarosaria Taddeo
- Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford, OX1 3JS, UK
- Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB, UK
- Luciano Floridi
- Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford, OX1 3JS, UK.
- Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB, UK.
30
Liu N, Shapira P, Yue X. Tracking developments in artificial intelligence research: constructing and applying a new search strategy. Scientometrics 2021; 126:3153-3192. [PMID: 34720254 PMCID: PMC8550099 DOI: 10.1007/s11192-021-03868-4]
Abstract
Artificial intelligence, as an emerging and multidisciplinary domain of research and innovation, has attracted growing attention in recent years. Delineating the composition of the artificial intelligence domain is central to profiling and tracking its development and trajectories. This paper puts forward a bibliometric definition of artificial intelligence that can be readily applied, including by researchers, managers, and policy analysts. Our approach starts with benchmark records of artificial intelligence captured by a core-keyword and specialized-journal search. We then extract candidate terms from the high-frequency keywords of the benchmark records, refine the keywords, and complement them with the subject category “artificial intelligence”. We assess our search approach by comparing it with three other recent artificial intelligence search strategies, using a common source of articles from the Web of Science. Using this source, we then profile patterns of growth and international diffusion of scientific research in artificial intelligence in recent years, identify the top research sponsors funding artificial intelligence, and demonstrate how diverse disciplines contribute to the multidisciplinary development of artificial intelligence. We conclude with implications for search strategy development and suggestions for further research.
Affiliation(s)
- Na Liu
- School of Management, Shandong Technology and Business University, Yantai, 264005 China
- Philip Shapira
- Manchester Institute of Innovation Research, Alliance Manchester Business School, University of Manchester, Manchester, M13 9PL UK
- School of Public Policy, Georgia Institute of Technology, Atlanta, GA 30332-0345 USA
- Xiaoxu Yue
- School of Public Policy and Management, Tsinghua University, Beijing, 100084 China
31
Sapienza S, Vedder A. Principle-based recommendations for big data and machine learning in food safety: the P-SAFETY model. AI & SOCIETY 2021. [DOI: 10.1007/s00146-021-01282-1]
Abstract
Big data and machine learning techniques are reshaping the way food safety risk assessment is conducted. The ongoing ‘datafication’ of food safety risk assessment activities and the progressive deployment of probabilistic models in their practices require a discussion of the advantages and disadvantages of these advances. In particular, the low level of trust in the EU food safety risk assessment framework, highlighted in 2019 by an EU-funded survey, could be exacerbated by novel methods of analysis. The variety of processed data raises unique questions regarding the interplay of multiple regulatory systems alongside food safety legislation. Provisions aiming to preserve the confidentiality of data and protect personal information are juxtaposed with norms prescribing the public disclosure of scientific information. This research is intended to provide guidance on the data governance and data ownership issues that unfold from the ongoing transformation of the technical and legal domains of food safety risk assessment. Following a reconstruction of technological advances in data collection and analysis and a description of recent amendments to food safety legislation, emerging concerns are discussed in light of the individual, collective and social implications of deploying cutting-edge big data collection and analysis techniques. A set of principle-based recommendations is then proposed by adapting high-level principles enshrined in institutional documents about Artificial Intelligence to the realm of food safety risk assessment. The proposed set of recommendations adopts Safety, Accountability, Fairness, Explainability and Transparency as core principles (SAFETY), with privacy and data protection as a meta-principle.
32
Hongladarom S. The Thailand national AI ethics guideline: an analysis. JOURNAL OF INFORMATION COMMUNICATION & ETHICS IN SOCIETY 2021. [DOI: 10.1108/jices-01-2021-0005]
Abstract
Purpose
The paper aims to analyze the content of the newly published National AI Ethics Guideline in Thailand. Thailand’s ongoing political struggles and transformation have made it a good case for seeing how a policy document such as an AI ethics guideline becomes part of those transformations. Looking at how the two are interrelated helps illuminate the political and cultural dynamics of Thailand as well as how the governance of ethics itself is conceptualized.
Design/methodology/approach
The author looks at the history of how the National AI Ethics Guideline came to be and interprets its content, situating the Guideline within the contemporary history of the country and comparing it with some of the leading existing guidelines.
Findings
It is found that the Guideline reflects the ambivalent and paradoxical nature of Thailand’s attempt at modernization. On the one hand, there is a desire to join the ranks of the more advanced economies; on the other, there is a strong desire to maintain the country’s traditional values. Thailand has not yet succeeded in resolving this tension, and this is reflected in the way the content of the AI Ethics Guideline is presented.
Practical implications
The findings of the paper could be useful for further attempts in drafting and revising AI ethics guidelines in the future.
Originality/value
The paper represents the first attempt, so far as the author is aware, to analyze the content of the Thai AI Ethics Guideline critically.
Collapse
|
33
|
Floridi L. The European Legislation on AI: a Brief Analysis of its Philosophical Approach. PHILOSOPHY & TECHNOLOGY 2021; 34:215-222. [PMID: 34104628 PMCID: PMC8174763 DOI: 10.1007/s13347-021-00460-9] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 05/23/2021] [Accepted: 05/23/2021] [Indexed: 12/01/2022]
Affiliation(s)
- Luciano Floridi
- Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford, OX1 3JS UK
- The Alan Turing Institute, 96 Euston Road, London, NW1 2DB UK
| |
Collapse
|
34
|
Abstract
The adverse effects of rapid urbanization are of global concern. Careful planning for, and accommodation of, accelerating urbanization and citizenization (i.e., migrants gaining official urban residency) may be the best approach to limiting some of the worst impacts. However, we find that another trajectory may be possible: one linked to the rural development plan adopted in the latest Chinese national development strategy. This plan aims to make rural areas attractive for settlement by 2050 rather than to urbanize further by concentrating more people in cities. We assess the political motivations and challenges behind this choice to develop rural areas, based on a literature review and empirical case analysis. After assessing the rural and urban policy subsystems, we identify five socio-political drivers behind China's rural development strategy: ensuring food security, promoting culture and heritage, addressing overcapacity, emphasizing environmental protection and eradicating poverty. To develop rural areas, China needs to resolve three dilemmas effectively: (1) implementing decentralized policies under central supervision; (2) deploying limited resources efficiently to achieve targets; and (3) addressing competing narratives in current policies. Involving more rural community voices, adopting multiple forms of local governance, and identifying and mitigating negative project impacts can be starting points for managing these dilemmas.
Collapse
|
35
|
Tsamados A, Aggarwal N, Cowls J, Morley J, Roberts H, Taddeo M, Floridi L. The ethics of algorithms: key problems and solutions. AI & SOCIETY 2021. [DOI: 10.1007/s00146-021-01154-8] [Citation(s) in RCA: 55] [Impact Index Per Article: 18.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
Research on the ethics of algorithms has grown substantially over the past decade. Alongside the exponential development and application of machine learning algorithms, new ethical problems and solutions relating to their ubiquitous use in society have been proposed. This article builds on a review of the ethics of algorithms published in 2016 (Mittelstadt et al. Big Data Soc 3(2), 2016). The goals are to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated analysis of epistemic and normative concerns, and to offer actionable guidance for the governance of the design, development and deployment of algorithms.
Collapse
|