26. Burr C, Taddeo M, Floridi L. The Ethics of Digital Well-Being: A Thematic Review. Science and Engineering Ethics 2020; 26:2313-2343. [PMID: 31933119] [PMCID: PMC7417400] [DOI: 10.1007/s11948-020-00175-8]
Abstract
This article presents the first thematic review of the literature on the ethical issues concerning digital well-being. The term 'digital well-being' refers to the impact of digital technologies on what it means to live a life that is good for a human being. The review explores the existing literature on the ethics of digital well-being, with the goal of mapping the current debate and identifying open questions for future research. The review identifies major issues related to several key social domains: healthcare, education, governance and social development, and media and entertainment. It also highlights three broader themes: positive computing, personalised human-computer interaction, and autonomy and self-determination. The review argues that these three themes will be central to ongoing discussions and research, by showing how they can be used to identify open questions related to the ethics of digital well-being.
27. Morley J, Floridi L, Kinsey L, Elhalal A. From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices. Science and Engineering Ethics 2020; 26:2141-2168. [PMID: 31828533] [PMCID: PMC7417387] [DOI: 10.1007/s11948-019-00165-5]
Abstract
The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Samuel in Science, 132(3429):741-742, 1960, https://doi.org/10.1126/science.132.3429.741; Wiener in Cybernetics: or control and communication in the animal and the machine, MIT Press, New York, 1961). However, in recent years symbolic AI has been complemented and sometimes replaced by (Deep) Neural Networks and Machine Learning (ML) techniques. This has vastly increased its potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such a debate has primarily focused on principles, the 'what' of AI ethics (beneficence, non-maleficence, autonomy, justice and explicability), rather than on practices, the 'how'. Awareness of the potential issues is increasing at a fast rate, but the AI community's ability to take action to mitigate the associated risks is still in its infancy. Our intention in presenting this research is to contribute to closing the gap between principles and practices by constructing a typology that may help practically-minded developers apply ethics at each stage of the Machine Learning development pipeline, and to signal to researchers where further work is needed. The focus is exclusively on Machine Learning, but it is hoped that the results of this research may be easily applicable to other branches of AI. The article outlines the research method for creating this typology, presents the initial findings, and provides a summary of future research needs.
28. Morley J, Machado CCV, Burr C, Cowls J, Joshi I, Taddeo M, Floridi L. The ethics of AI in health care: A mapping review. Social Science & Medicine 2020; 260:113172. [PMID: 32702587] [DOI: 10.1016/j.socscimed.2020.113172]
Abstract
This article presents a mapping review of the literature concerning the ethics of artificial intelligence (AI) in health care. The goal of this review is to summarise current debates and identify open questions for future research. Five literature databases were searched to support the following research question: how can the primary ethical risks presented by AI-health be categorised, and what issues must policymakers, regulators and developers consider in order to be 'ethically mindful'? A series of screening stages was carried out, for example removing articles that focused on digital health in general (e.g. data sharing, data access, data privacy, surveillance/nudging, consent, ownership of health data, evidence of efficacy), yielding a total of 156 papers that were included in the review. We find that ethical issues can be (a) epistemic, related to misguided, inconclusive or inscrutable evidence; (b) normative, related to unfair outcomes and transformative effects; or (c) related to traceability. We further find that these ethical issues arise at six levels of abstraction: individual, interpersonal, group, institutional, sectoral, and societal. Finally, we outline a number of considerations for policymakers and regulators, mapping these to existing literature, and categorising each as epistemic, normative or traceability-related and at the relevant level of abstraction. Our goal is to inform policymakers, regulators and developers of what they must consider if they are to enable health and care systems to capitalise on the dual advantage of ethical AI: maximising the opportunities to cut costs, improve care, and improve the efficiency of health and care systems, whilst proactively avoiding the potential harms. We argue that if action is not swiftly taken in this regard, a new 'AI winter' could occur due to chilling effects related to a loss of public trust in the benefits of AI for health care.
29. Roberts H, Cowls J, Morley J, Taddeo M, Wang V, Floridi L. The Chinese approach to artificial intelligence: an analysis of policy, ethics, and regulation. AI & Society 2020. [DOI: 10.1007/s00146-020-00992-2]
Abstract
In July 2017, China's State Council released the country's strategy for developing artificial intelligence (AI), entitled 'New Generation Artificial Intelligence Development Plan' (新一代人工智能发展规划). This strategy outlined China's aims to become the world leader in AI by 2030, to monetise AI into a trillion-yuan (ca. 150 billion dollars) industry, and to emerge as the driving force in defining ethical norms and standards for AI. Several reports have analysed specific aspects of China's AI policies or have assessed the country's technical capabilities. Instead, in this article, we focus on the socio-political background and policy debates that are shaping China's AI strategy. In particular, we analyse the main strategic areas in which China is investing in AI and the concurrent ethical debates that are delimiting its use. By focusing on the policy backdrop, we seek to provide a more comprehensive and critical understanding of China's AI policy by bringing together debates and analyses of a wide array of policy documents.
30. Lee MSA, Floridi L. Algorithmic Fairness in Mortgage Lending: from Absolute Conditions to Relational Trade-offs. Minds and Machines 2020. [DOI: 10.1007/s11023-020-09529-4]
Abstract
To address the rising concern that algorithmic decision-making may reinforce discriminatory biases, researchers have proposed many notions of fairness and corresponding mathematical formalizations. Each of these notions is often presented as a one-size-fits-all, absolute condition; in reality, however, the practical and ethical trade-offs are unavoidable and more complex. We introduce a new approach that considers fairness not as a binary, absolute mathematical condition but rather as a relational notion, assessed in comparison to alternative decision-making processes. Using US mortgage lending as an example use case, we discuss the ethical foundations of each definition of fairness and demonstrate that our proposed methodology more closely captures the ethical trade-offs of the decision-maker, as well as forcing a more explicit representation of which values and objectives are prioritised.
31. Morley J, Floridi L. The Limits of Empowerment: How to Reframe the Role of mHealth Tools in the Healthcare Ecosystem. Science and Engineering Ethics 2020; 26:1159-1183. [PMID: 31172424] [PMCID: PMC7286867] [DOI: 10.1007/s11948-019-00115-1]
Abstract
This article highlights the limitations of the tendency to frame health- and wellbeing-related digital tools (mHealth technologies) as empowering devices, especially as they play an increasingly important role in the National Health Service (NHS) in the UK. It argues that mHealth technologies should instead be framed as digital companions. This shift from empowerment to companionship is advocated by showing the conceptual, ethical, and methodological issues challenging the narrative of empowerment, and by arguing that such challenges, as well as the risk of medical paternalism, can be overcome by focusing on the potential for mHealth tools to mediate the relationship between recipients of clinical advice and givers of clinical advice, in ways that allow for contextual flexibility in the balance between patiency and agency. The article concludes by stressing that reframing the narrative cannot be the only means for avoiding harm caused to the NHS as a healthcare system by the introduction of mHealth tools. Future discussion will be needed on the overarching role of responsible design.
32.
Abstract
Contact tracing is a central public health response to infectious disease outbreaks, especially in the early stages of an outbreak when specific treatments are limited. Importation of novel Coronavirus (COVID-19) from China and elsewhere into the United Kingdom highlights the need to understand the impact of contact tracing as a control measure. Using detailed survey information on social encounters coupled with predictive models, we investigate the likely efficacy of the current UK definition of a close contact (within 2 metres for 15 minutes or more) and the distribution of secondary cases that may go untraced. Taking recent estimates for COVID-19 transmission, we show that fewer than 1 in 5 cases will generate any subsequent untraced cases, although this comes at a high logistical burden, with an average of 36.1 individuals (95th percentiles 0-182) traced per case. Changes to the definition of a close contact can reduce this burden, but with increased risk of untraced cases; we estimate that any definition where close contact requires more than 4 hours of contact is likely to lead to uncontrolled spread.
33. Floridi L, Cowls J, King TC, Taddeo M. How to Design AI for Social Good: Seven Essential Factors. Science and Engineering Ethics 2020; 26:1771-1796. [PMID: 32246245] [PMCID: PMC7286860] [DOI: 10.1007/s11948-020-00213-5]
Abstract
The idea of artificial intelligence for social good (henceforth AI4SG) is gaining traction within information societies in general and the AI community in particular. It has the potential to tackle social problems through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies. This article addresses this gap by identifying seven ethical factors that are essential for future AI4SG initiatives. The analysis is supported by 27 case examples of AI4SG projects. Some of these factors are almost entirely novel to AI, while the significance of other factors is heightened by the use of AI. From each of these factors, corresponding best practices are formulated which, subject to context and balance, may serve as preliminary guidelines to ensure that well-designed AI is more likely to serve the social good.
34. Aggarwal N, Floridi L. Towards the Ethical Publication of Country of Origin Information (COI) in the Asylum Process. Minds and Machines 2020. [DOI: 10.1007/s11023-020-09523-w]
Abstract
This article addresses the question of how 'Country of Origin Information' (COI) reports, that is, research developed and used to support decision-making in the asylum process, can be published in an ethical manner. The article focuses on the risk that published COI reports could be misused and thereby harm the subjects of the reports and/or those involved in their development. It supports a situational approach to assessing data ethics when publishing COI reports, whereby COI service providers must weigh up the benefits and harms of publication based, inter alia, on the foreseeability and probability of harm due to potential misuse of the research, the public good nature of the research, and the need to balance the rights and duties of the various actors in the asylum process, including asylum seekers themselves. Although this article focuses on the specific question of how to publish COI reports in an ethical manner, it also intends to promote further research on data ethics in the asylum process, particularly in relation to refugees, where more foundational issues should be considered.
35.
Abstract
An increasing number of technology firms are implementing processes to identify and evaluate the ethical risks of their systems and products. A key part of these review processes is to foresee potential impacts of these technologies on different groups of users. In this article, we use the expression Ethical Foresight Analysis (EFA) to refer to a variety of analytical strategies for anticipating or predicting the ethical issues that new technological artefacts, services, and applications may raise. This article examines several existing EFA methodologies currently in use. It identifies the purposes of ethical foresight, the kinds of methods that current methodologies employ, and the strengths and weaknesses of each of these approaches. The conclusion is that a new kind of foresight analysis on the ethics of emerging technologies is both feasible and urgently needed.
36. Burr C, Morley J, Taddeo M, Floridi L. Digital Psychiatry: Risks and Opportunities for Public Health and Wellbeing. IEEE Transactions on Technology and Society 2020. [DOI: 10.1109/tts.2020.2977059]
37.
38.
Abstract
This article presents the first systematic analysis of the ethical challenges posed by recommender systems, conducted through a literature review. The article identifies six areas of concern, and maps them onto a proposed taxonomy of different kinds of ethical impact. The analysis uncovers a gap in the literature: current user-centred approaches do not consider the interests of a variety of other stakeholders, as opposed to just the receivers of a recommendation, in assessing the ethical impacts of a recommender system.
39.
40. King TC, Aggarwal N, Taddeo M, Floridi L. Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions. Science and Engineering Ethics 2020; 26:89-120. [PMID: 30767109] [PMCID: PMC6978427] [DOI: 10.1007/s11948-018-00081-0]
Abstract
Artificial intelligence (AI) research and regulation seek to balance the benefits of innovation against any potential harms and disruption. However, one unintended consequence of the recent surge in AI research is the potential re-orientation of AI technologies to facilitate criminal acts, termed in this article AI-Crime (AIC). AIC is theoretically feasible thanks to published experiments in automating fraud targeted at social media users, as well as demonstrations of AI-driven manipulation of simulated markets. Yet, because AIC is still a relatively young and inherently interdisciplinary area, spanning socio-legal studies to formal science, there is little certainty of what an AIC future might look like. This article offers the first systematic, interdisciplinary literature analysis of the foreseeable threats of AIC, providing ethicists, policy-makers, and law enforcement organisations with a synthesis of the current problems, and a possible solution space.
41. Morley J, Floridi L. An ethically mindful approach to AI for health care. Lancet 2020; 395:254-255. [PMID: 31982053] [DOI: 10.1016/s0140-6736(19)32975-7]
42. Floridi L. The Fight for Digital Sovereignty: What It Is, and Why It Matters, Especially for the EU. Philosophy & Technology 2020; 33:369-378. [PMID: 35194548] [PMCID: PMC8848320] [DOI: 10.1007/s13347-020-00423-6]
43. Rich AS, Rudin C, Jacoby DMP, Freeman R, Wearn OR, Shevlin H, Dihal K, ÓhÉigeartaigh SS, Butcher J, Lippi M, Palka P, Torroni P, Wongvibulsin S, Begoli E, Schneider G, Cave S, Sloane M, Moss E, Rahwan I, Goldberg K, Howard D, Floridi L, Stilgoe J. AI reflections in 2019. Nature Machine Intelligence 2020. [DOI: 10.1038/s42256-019-0141-1]
44. Morley J, Taddeo M, Floridi L. Google Health and the NHS: overcoming the trust deficit. The Lancet Digital Health 2019; 1:e389. [DOI: 10.1016/s2589-7500(19)30193-1]
45. Taddeo M, McCutcheon T, Floridi L. Trusting artificial intelligence in cybersecurity is a double-edged sword. Nature Machine Intelligence 2019. [DOI: 10.1038/s42256-019-0109-1]
46. Krutzinna J, Taddeo M, Floridi L. Enabling Posthumous Medical Data Donation: An Appeal for the Ethical Utilisation of Personal Health Data. Science and Engineering Ethics 2019; 25:1357-1387. [PMID: 30357557] [DOI: 10.1007/s11948-018-0067-8]
Abstract
This article argues that personal medical data should be made available for scientific research, by enabling and encouraging individuals to donate their medical records once deceased, similar to the way in which they can already donate organs or bodies. This research is part of a project on posthumous medical data donation developed by the Digital Ethics Lab at the Oxford Internet Institute at the University of Oxford. Ten arguments are provided to support the need to foster posthumous medical data donation. Two major risks are also identified, namely harm to others and lack of control over the use of data, which could follow from unregulated donation of medical data. The argument that record-based medical research should proceed without the need to secure informed consent is rejected, and instead a voluntary and participatory approach to using personal medical data should be followed. The analysis concludes by stressing the need to develop an ethical code for data donation to minimise the risks, and offers five foundational principles for ethical medical data donation, suggested as a draft code.
47. Öhman C, Gorwa R, Floridi L. Prayer-Bots and Religious Worship on Twitter: A Call for a Wider Research Agenda. Minds and Machines 2019. [DOI: 10.1007/s11023-019-09498-3]
48. Watson DS, Krutzinna J, Bruce IN, Griffiths CE, McInnes IB, Barnes MR, Floridi L. Clinical applications of machine learning algorithms: beyond the black box. BMJ 2019; 364:l886. [PMID: 30862612] [DOI: 10.1136/bmj.l886]
49. Krutzinna J, Taddeo M, Floridi L. An Ethical Code for Posthumous Medical Data Donation. Philosophical Studies Series 2019. [DOI: 10.1007/978-3-030-04363-6_12]
50. Floridi L, Cowls J, Beltrametti M, Chatila R, Chazerand P, Dignum V, Luetge C, Madelin R, Pagallo U, Rossi F, Schafer B, Valcke P, Vayena E. AI4People-An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds and Machines 2018; 28:689-707. [PMID: 30930541] [PMCID: PMC6404626] [DOI: 10.1007/s11023-018-9482-5]
Abstract
This article reports the findings of AI4People, an Atomium-EISMD initiative designed to lay the foundations for a "Good AI Society". We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations (to assess, to develop, to incentivise, and to support good AI) which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other stakeholders. If adopted, these recommendations would serve as a firm foundation for the establishment of a Good AI Society.