1
Affiliation(s)
- Jessica Morley
- Digital Ethics Center, Yale University, New Haven, CT, USA
- Luciano Floridi
- Digital Ethics Center, Yale University, New Haven, CT, USA
- Department of Legal Studies, University of Bologna, Bologna, Italy
2
Hine E, Floridi L. The Blueprint for an AI Bill of Rights: In Search of Enaction, at Risk of Inaction. Minds Mach (Dordr) 2023. DOI: 10.1007/s11023-023-09625-1
Abstract
The US is promoting a new vision of a “Good AI Society” through its recent AI Bill of Rights. This offers a promising vision of community-oriented equity that is unique amongst peer countries. However, it leaves the door open to potential rights violations. Furthermore, while it may have some federal impact, it is non-binding, and without concrete legislation the private sector is likely to ignore it.
3
Ghioni R, Taddeo M, Floridi L. Open source intelligence and AI: a systematic review of the GELSI literature. AI Soc 2023:1-16. PMID: 36741972; PMCID: PMC9883130; DOI: 10.1007/s00146-023-01628-x
Abstract
Today, open source intelligence (OSINT), i.e., information derived from publicly available sources, makes up between 80 and 90 percent of all intelligence activities carried out by Law Enforcement Agencies (LEAs) and intelligence services in the West. Developments in data mining, machine learning, visual forensics and, most importantly, the growing computing power available for commercial use, have enabled OSINT practitioners to speed up, and sometimes even automate, intelligence collection and analysis, obtaining more accurate results more quickly. As the infosphere expands to accommodate ever-increasing online presence, so does the pool of actionable OSINT. These developments raise important concerns in terms of governance, ethical, legal, and social implications (GELSI). New and crucial oversight concerns emerge alongside standard privacy concerns, as some of the more advanced data analysis tools require little to no supervision. This article offers a systematic review of the relevant literature. It analyzes 571 publications to assess the current state of the literature on the use of AI-powered OSINT (and the development of OSINT software) as it relates to the GELSI framework, highlighting potential gaps and suggesting new research directions.
Affiliation(s)
- Riccardo Ghioni
- Department of Legal Studies, University of Bologna, Via Zamboni, 27, 40126 Bologna, Italy
- Mariarosaria Taddeo
- Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS UK
- The Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB UK
- Luciano Floridi
- Department of Legal Studies, University of Bologna, Via Zamboni, 27, 40126 Bologna, Italy
- The Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB UK
4
Mökander J, Sheth M, Watson DS, Floridi L. The Switch, the Ladder, and the Matrix: Models for Classifying AI Systems. Minds Mach (Dordr) 2023. DOI: 10.1007/s11023-022-09620-y
Abstract
Organisations that design and deploy artificial intelligence (AI) systems increasingly commit themselves to high-level, ethical principles. However, there still exists a gap between principles and practices in AI ethics. One major obstacle organisations face when attempting to operationalise AI ethics is the lack of a well-defined material scope. Put differently, the question of which systems and processes AI ethics principles ought to apply to remains unanswered. Of course, there exists no universally accepted definition of AI, and different systems pose different ethical challenges. Nevertheless, pragmatic problem-solving demands that things be sorted so that their grouping promotes successful action for some specific end. In this article, we review and compare previous attempts to classify AI systems for the purpose of implementing AI governance in practice. We find that the classification attempts proposed in the previous literature use one of three mental models: the Switch, i.e., a binary approach according to which systems either are or are not considered AI systems, depending on their characteristics; the Ladder, i.e., a risk-based approach that classifies systems according to the ethical risks they pose; and the Matrix, i.e., a multi-dimensional classification that takes various aspects into account, such as context, input data, and decision model. Each of these models comes with its own set of strengths and weaknesses. By conceptualising different ways of classifying AI systems into simple mental models, we hope to provide organisations that design, deploy, or regulate AI systems with the vocabulary needed to demarcate the material scope of their AI governance frameworks.
5
Cowls J, Tsamados A, Taddeo M, Floridi L. The AI gambit: leveraging artificial intelligence to combat climate change-opportunities, challenges, and recommendations. AI Soc 2023; 38:283-307. PMID: 34690449; PMCID: PMC8522259; DOI: 10.1007/s00146-021-01294-x
Abstract
In this article, we analyse the role that artificial intelligence (AI) could play, and is playing, to combat global climate change. We identify two crucial opportunities that AI offers in this domain: it can help improve and expand current understanding of climate change, and it can contribute to combatting the climate crisis effectively. However, the development of AI also raises two sets of problems when considering climate change: the possible exacerbation of social and ethical challenges already associated with AI, and the contribution to climate change of the greenhouse gases emitted by training data- and computation-intensive AI systems. We assess the carbon footprint of AI research, and the factors that influence AI's greenhouse gas (GHG) emissions in this domain. We find that the carbon footprint of AI research may be significant and highlight the need for more evidence concerning the trade-off between the GHG emissions generated by AI research and the energy and resource efficiency gains that AI can offer. In light of our analysis, we argue that leveraging the opportunities offered by AI for global climate change whilst limiting its risks is a gambit which requires responsive, evidence-based, and effective governance to become a winning strategy. We conclude by identifying the European Union as being especially well-placed to play a leading role in this policy response and provide 13 recommendations that are designed to identify and harness the opportunities of AI for combatting climate change, while reducing its impact on the environment.
Affiliation(s)
- Josh Cowls
- Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS UK
- Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB UK
- Andreas Tsamados
- Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS UK
- Mariarosaria Taddeo
- Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS UK
- Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB UK
- Luciano Floridi
- Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS UK
- Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB UK
6
Roberts H, Zhang J, Bariach B, Cowls J, Gilburt B, Juneja P, Tsamados A, Ziosi M, Taddeo M, Floridi L. Artificial intelligence in support of the circular economy: ethical considerations and a path forward. AI Soc 2022. DOI: 10.1007/s00146-022-01596-8
Abstract
The world’s current model for economic development is unsustainable. It encourages high levels of resource extraction, consumption, and waste that undermine positive environmental outcomes. Transitioning to a circular economy (CE) model of development has been proposed as a sustainable alternative. Artificial intelligence (AI) is a crucial enabler for CE. It can aid in designing robust and sustainable products, facilitate new circular business models, and support the broader infrastructures needed to scale circularity. However, to date, consideration of the ethical implications of using AI to achieve a transition to CE has been limited. This article addresses this gap. It outlines how AI is and can be used to transition towards CE, analyses the ethical risks associated with using AI for this purpose, and offers recommendations to policymakers and industry on how to minimise these risks.
7
Mökander J, Sheth M, Gersbro-Sundler M, Blomgren P, Floridi L. Challenges and best practices in corporate AI governance: Lessons from the biopharmaceutical industry. Front Comput Sci 2022. DOI: 10.3389/fcomp.2022.1068361 (open access)
Abstract
While the use of artificial intelligence (AI) systems promises to bring significant economic and social benefits, it is also coupled with ethical, legal, and technical challenges. Business leaders thus face the question of how to best reap the benefits of automation whilst managing the associated risks. As a first step, many companies have committed themselves to various sets of ethics principles aimed at guiding the design and use of AI systems. So far so good. But how can well-intentioned ethical principles be translated into effective practice? And what challenges await companies that attempt to operationalize AI governance? In this article, we address these questions by drawing on our first-hand experience of shaping and driving the roll-out of AI governance within AstraZeneca, a biopharmaceutical company. The examples we discuss highlight challenges that any organization attempting to operationalize AI governance will have to face. These include questions concerning how to define the material scope of AI governance, how to harmonize standards across decentralized organizations, and how to measure the impact of specific AI governance initiatives. By showcasing how AstraZeneca managed these operational questions, we hope to provide project managers, CIOs, AI practitioners, and data privacy officers responsible for designing and implementing AI governance frameworks within other organizations with generalizable best practices. In essence, companies seeking to operationalize AI governance are encouraged to build on existing policies and governance structures, use pragmatic and action-oriented terminology, focus on risk management in development and procurement, and empower employees through continuous education and change management.
8
Ziosi M, Hewitt B, Juneja P, Taddeo M, Floridi L. Smart cities: reviewing the debate about their ethical implications. AI Soc 2022:1-16. PMID: 36212227; PMCID: PMC9524726; DOI: 10.1007/s00146-022-01558-0
Abstract
This paper considers a host of definitions and labels attached to the concept of smart cities to identify four dimensions that ground a review of ethical concerns emerging from the current debate. These are: (1) network infrastructure, with the corresponding concerns of control, surveillance, and data privacy and ownership; (2) post-political governance, embodied in the tensions between public and private decision-making and cities as post-political entities; (3) social inclusion, expressed in the aspects of citizen participation and inclusion, and inequality and discrimination; and (4) sustainability, with a specific focus on the environment as an element to protect but also as a strategic element for the future. Given the persisting disagreements around the definition of a smart city, the article identifies in these four dimensions a more stable reference framework within which ethical concerns can be clustered and discussed. Identifying these dimensions makes possible a review of the ethical implications of smart cities that is transversal to their different types and resilient towards the unsettled debate over their definition.
Affiliation(s)
- Marta Ziosi
- Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS UK
- Benjamin Hewitt
- Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS UK
- Prathm Juneja
- Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS UK
- Mariarosaria Taddeo
- Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS UK
- Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB UK
- Luciano Floridi
- Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS UK
- Department of Legal Studies, University of Bologna, Via Zamboni, 27, 40126 Bologna, Italy
9
Mökander J, Juneja P, Watson DS, Floridi L. The US Algorithmic Accountability Act of 2022 vs. the EU Artificial Intelligence Act: what can they learn from each other? Minds Mach (Dordr) 2022. DOI: 10.1007/s11023-022-09612-y
Abstract
On the whole, the US Algorithmic Accountability Act of 2022 (US AAA) is a pragmatic approach to balancing the benefits and risks of automated decision systems. Yet there is still room for improvement. This commentary highlights how the US AAA can both inform and learn from the European Artificial Intelligence Act (EU AIA).
10
Mökander J, Floridi L. Operationalising AI governance through ethics-based auditing: an industry case study. AI Ethics 2022; 3:451-468. PMID: 35669570; PMCID: PMC9152664; DOI: 10.1007/s43681-022-00171-7
Abstract
Ethics-based auditing (EBA) is a structured process whereby an entity's past or present behaviour is assessed for consistency with moral principles or norms. Recently, EBA has attracted much attention as a governance mechanism that may help to bridge the gap between principles and practice in AI ethics. However, important aspects of EBA-such as the feasibility and effectiveness of different auditing procedures-have yet to be substantiated by empirical research. In this article, we address this knowledge gap by providing insights from a longitudinal industry case study. Over 12 months, we observed and analysed the internal activities of AstraZeneca, a biopharmaceutical company, as it prepared for and underwent an ethics-based AI audit. While previous literature concerning EBA has focussed on proposing or analysing evaluation metrics or visualisation techniques, our findings suggest that the main difficulties large multinational organisations face when conducting EBA mirror classical governance challenges. These include ensuring harmonised standards across decentralised organisations, demarcating the scope of the audit, driving internal communication and change management, and measuring actual outcomes. The case study presented in this article contributes to the existing literature by providing a detailed description of the organisational context in which EBA procedures must be integrated to be feasible and effective.
Affiliation(s)
- Jakob Mökander
- Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS UK
- Luciano Floridi
- Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS UK
- Department of Legal Studies, University of Bologna, Via Zamboni 33, 40126 Bologna, Italy
11
Abstract
Necessity and sufficiency are the building blocks of all successful explanations. Yet despite their importance, these notions have been conceptually underdeveloped and inconsistently applied in explainable artificial intelligence (XAI), a fast-growing research area that is so far lacking in firm theoretical foundations. In this article, an expanded version of a paper originally presented at the 37th Conference on Uncertainty in Artificial Intelligence (Watson et al., 2021), we attempt to fill this gap. Building on work in logic, probability, and causality, we establish the central role of necessity and sufficiency in XAI, unifying seemingly disparate methods in a single formal framework. We propose a novel formulation of these concepts, and demonstrate its advantages over leading alternatives. We present a sound and complete algorithm for computing explanatory factors with respect to a given context and set of agentive preferences, allowing users to identify necessary and sufficient conditions for desired outcomes at minimal cost. Experiments on real and simulated data confirm our method’s competitive performance against state-of-the-art XAI tools on a diverse array of tasks.
12
Abstract
By mid-2019 there were more than 80 AI ethics guides available in the public domain. Despite this, 2020 saw numerous news stories break related to ethically questionable uses of AI. In part, this is because AI ethics theory remains highly abstract, and of limited practical applicability to those actually responsible for designing algorithms and AI systems. Our previous research sought to start closing this gap between the ‘what’ and the ‘how’ of AI ethics through the creation of a searchable typology of tools and methods designed to translate between the five most common AI ethics principles and implementable design practices. Whilst a useful starting point, that research rested on the assumption that all AI practitioners are aware of the ethical implications of AI, understand their importance, and are actively seeking to respond to them. In reality, it is unclear whether this is the case. It is this limitation that we seek to overcome here by conducting a mixed-methods qualitative analysis to answer the following four questions: what do AI practitioners understand about the need to translate ethical principles into practice? What motivates AI practitioners to embed ethical principles into design practices? What barriers do AI practitioners face when attempting to translate ethical principles into practice? And finally, what assistance do AI practitioners want and need when translating ethical principles into practice?
13
Roberts H, Cowls J, Hine E, Mazzi F, Tsamados A, Taddeo M, Floridi L. Achieving a 'Good AI Society': Comparing the Aims and Progress of the EU and the US. Sci Eng Ethics 2021; 27:68. PMID: 34767085; PMCID: PMC8587491; DOI: 10.1007/s11948-021-00340-7
Abstract
Over the past few years, there has been a proliferation of artificial intelligence (AI) strategies, released by governments around the world, that seek to maximise the benefits of AI and minimise potential harms. This article provides a comparative analysis of the European Union (EU) and the United States' (US) AI strategies and considers (i) the visions of a 'Good AI Society' that are forwarded in key policy documents and their opportunity costs, (ii) the extent to which the implementation of each vision is living up to stated aims and (iii) the consequences that these differing visions of a 'Good AI Society' have for transatlantic cooperation. The article concludes by comparing the ethical desirability of each vision and identifies areas where the EU, and especially the US, need to improve in order to achieve ethical outcomes and deepen cooperation.
Affiliation(s)
- Huw Roberts
- Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford, OX1 3JS, UK
- Josh Cowls
- Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford, OX1 3JS, UK
- Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB, UK
- Emmie Hine
- Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford, OX1 3JS, UK
- Francesca Mazzi
- Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford, OX1 3JS, UK
- Saïd Business School, University of Oxford, Park End St, Oxford, OX1 1HP, UK
- Andreas Tsamados
- Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford, OX1 3JS, UK
- Mariarosaria Taddeo
- Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford, OX1 3JS, UK
- Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB, UK
- Luciano Floridi
- Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford, OX1 3JS, UK
- Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB, UK
14
Mökander J, Axente M, Casolari F, Floridi L. Conformity Assessments and Post-market Monitoring: A Guide to the Role of Auditing in the Proposed European AI Regulation. Minds Mach (Dordr) 2021; 32:241-268. PMID: 34754142; PMCID: PMC8569069; DOI: 10.1007/s11023-021-09577-4
Abstract
The proposed European Artificial Intelligence Act (AIA) is the first attempt to elaborate a general legal framework for AI carried out by any major global economy. As such, the AIA is likely to become a point of reference in the larger discourse on how AI systems can (and should) be regulated. In this article, we describe and discuss the two primary enforcement mechanisms proposed in the AIA: the conformity assessments that providers of high-risk AI systems are expected to conduct, and the post-market monitoring plans that providers must establish to document the performance of high-risk AI systems throughout their lifetimes. We argue that the AIA can be interpreted as a proposal to establish a Europe-wide ecosystem for conducting AI auditing, albeit expressed in different terms. Our analysis offers two main contributions. First, by describing the enforcement mechanisms included in the AIA in terminology borrowed from existing literature on AI auditing, we help providers of AI systems understand how they can prove adherence to the requirements set out in the AIA in practice. Second, by examining the AIA from an auditing perspective, we seek to provide transferable lessons from previous research about how to refine further the regulatory approach outlined in the AIA. We conclude by highlighting seven aspects of the AIA where amendments (or simply clarifications) would be helpful. These include, above all, the need to translate vague concepts into verifiable criteria and to strengthen the institutional safeguards concerning conformity assessments based on internal checks.
Affiliation(s)
- Jakob Mökander
- Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford, OX1 3JS UK
- Maria Axente
- UK All Party Parliamentary Group on AI (APPG AI), London, UK
- Federico Casolari
- Department of Legal Studies, University of Bologna, via Zamboni 27/29, 40126 Bologna, Italy
- Luciano Floridi
- Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford, OX1 3JS UK
- The Alan Turing Institute, The British Library, 96 Euston Rd, London, NW1 2DB UK
15
Mökander J, Morley J, Taddeo M, Floridi L. Ethics-Based Auditing of Automated Decision-Making Systems: Nature, Scope, and Limitations. Sci Eng Ethics 2021; 27:44. PMID: 34231029; PMCID: PMC8260507; DOI: 10.1007/s11948-021-00319-4
Abstract
Important decisions that impact humans’ lives, livelihoods, and the natural environment are increasingly being automated. Delegating tasks to so-called automated decision-making systems (ADMS) can improve efficiency and enable new solutions. However, these benefits are coupled with ethical challenges. For example, ADMS may produce discriminatory outcomes, violate individual privacy, and undermine human self-determination. New governance mechanisms are thus needed that help organisations design and deploy ADMS in ways that are ethical, while enabling society to reap the full economic and social benefits of automation. In this article, we consider the feasibility and efficacy of ethics-based auditing (EBA) as a governance mechanism that allows organisations to validate claims made about their ADMS. Building on previous work, we define EBA as a structured process whereby an entity's present or past behaviour is assessed for consistency with relevant principles or norms. We then offer three contributions to the existing literature. First, we provide a theoretical explanation of how EBA can contribute to good governance by promoting procedural regularity and transparency. Second, we propose seven criteria for how to design and implement EBA procedures successfully. Third, we identify and discuss the conceptual, technical, social, economic, organisational, and institutional constraints associated with EBA. We conclude that EBA should be considered an integral component of multifaceted approaches to managing the ethical risks posed by ADMS.
Affiliation(s)
- Jakob Mökander
- Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS UK
- Jessica Morley
- Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS UK
- Mariarosaria Taddeo
- Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS UK
- Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB UK
- Luciano Floridi
- Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS UK
- Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB UK
16
Floridi L. The European Legislation on AI: a Brief Analysis of its Philosophical Approach. Philos Technol 2021; 34:215-222. PMID: 34104628; PMCID: PMC8174763; DOI: 10.1007/s13347-021-00460-9
Affiliation(s)
- Luciano Floridi
- Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford, OX1 3JS UK
- The Alan Turing Institute, 96 Euston Road, London, NW1 2DB UK
17
Affiliation(s)
- Luciano Floridi
- Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford, OX1 3JS UK
- The Alan Turing Institute, 96 Euston Road, London, NW1 2DB UK
18
Abstract
Research on the ethics of algorithms has grown substantially over the past decade. Alongside the exponential development and application of machine learning algorithms, new ethical problems and solutions relating to their ubiquitous use in society have been proposed. This article builds on a review of the ethics of algorithms published in 2016 (Mittelstadt et al. Big Data Soc 3(2), 2016). The goals are to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated analysis of epistemic and normative concerns, and to offer actionable guidance for the governance of the design, development and deployment of algorithms.
19
Abstract
A series of recent developments points towards auditing as a promising mechanism to bridge the gap between principles and practice in AI ethics. Building on ongoing discussions concerning ethics-based auditing, we offer three contributions. First, we argue that ethics-based auditing can improve the quality of decision making, increase user satisfaction, unlock growth potential, enable law-making, and relieve human suffering. Second, we highlight current best practices to support the design and implementation of ethics-based auditing: To be feasible and effective, ethics-based auditing should take the form of a continuous and constructive process, approach ethical alignment from a system perspective, and be aligned with public policies and incentives for ethically desirable behaviour. Third, we identify and discuss the constraints associated with ethics-based auditing. Only by understanding and accounting for these constraints can ethics-based auditing facilitate ethical alignment of AI, while enabling society to reap the full economic and social benefits of automation.
20
21
22
Yang GZ, Bellingham J, Dupont PE, Fischer P, Floridi L, Full R, Jacobstein N, Kumar V, McNutt M, Merrifield R, Nelson BJ, Scassellati B, Taddeo M, Taylor R, Veloso M, Wang ZL, Wood R. The grand challenges of Science Robotics. Sci Robot 2018; 3(14):eaar7650. PMID: 33141701; DOI: 10.1126/scirobotics.aar7650
Abstract
One of the ambitions of Science Robotics is to deeply root robotics research in science while developing novel robotic platforms that will enable new scientific discoveries. Of our 10 grand challenges, the first 7 represent underpinning technologies that have a wider impact on all application areas of robotics. For the next two challenges, we have included social robotics and medical robotics as application-specific areas of development to highlight the substantial societal and health impacts that they will bring. Finally, the last challenge is related to responsible innovation and how ethics and security should be carefully considered as we develop the technology further.
Collapse
Affiliation(s)
- Guang-Zhong Yang
- Hamlyn Centre for Robotic Surgery, Imperial College London, London, UK.
| | - Jim Bellingham
- Center for Marine Robotics, Woods Hole Oceanographic Institution, Woods Hole, MA 02543, USA
| | - Pierre E Dupont
- Department of Cardiovascular Surgery, Boston Children's Hospital, Harvard Medical School, Boston, MA 02115, USA
| | - Peer Fischer
- Institute of Physical Chemistry, University of Stuttgart, Stuttgart, Germany; Micro, Nano, and Molecular Systems Laboratory, Max Planck Institute for Intelligent Systems, Stuttgart, Germany
| | - Luciano Floridi
- Centre for Practical Ethics, Faculty of Philosophy, University of Oxford, Oxford, UK; Digital Ethics Lab, Oxford Internet Institute, University of Oxford, Oxford, UK; Department of Computer Science, University of Oxford, Oxford, UK; Data Ethics Group, Alan Turing Institute, London, UK; Department of Economics, American University, Washington, DC 20016, USA
| | - Robert Full
- Department of Integrative Biology, University of California, Berkeley, Berkeley, CA 94720, USA
| | - Neil Jacobstein
- Singularity University, NASA Research Park, Moffett Field, CA 94035, USA; MediaX, Stanford University, Stanford, CA 94305, USA
| | - Vijay Kumar
- Department of Mechanical Engineering and Applied Mechanics, University of Pennsylvania, Philadelphia, PA 19104, USA
| | - Marcia McNutt
- National Academy of Sciences, Washington, DC 20418, USA
| | - Robert Merrifield
- Hamlyn Centre for Robotic Surgery, Imperial College London, London, UK
| | - Bradley J Nelson
- Institute of Robotics and Intelligent Systems, Department of Mechanical and Process Engineering, ETH Zürich, Zurich, Switzerland
| | - Brian Scassellati
- Department of Computer Science, Yale University, New Haven, CT 06520, USA; Department of Mechanical Engineering and Materials Science, Yale University, New Haven, CT 06520, USA
| | - Mariarosaria Taddeo
- Digital Ethics Lab, Oxford Internet Institute, University of Oxford, Oxford, UK; Department of Computer Science, University of Oxford, Oxford, UK; Data Ethics Group, Alan Turing Institute, London, UK
| | - Russell Taylor
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
| | - Manuela Veloso
- Machine Learning Department, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA
| | - Zhong Lin Wang
- School of Materials Science and Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
| | - Robert Wood
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA; Wyss Institute for Biologically Inspired Engineering, Harvard University, Cambridge, MA 02138, USA
| |
Collapse
|
23
|
Floridi L. Digital Ethics Online and Off. Am Sci 2021. [DOI: 10.1511/2021.109.4.218] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
|
24
|
Ghezzi P, Bannister PG, Casino G, Catalani A, Goldman M, Morley J, Neunez M, Prados-Bo A, Smeesters PR, Taddeo M, Vanzolini T, Floridi L. Online Information of Vaccines: Information Quality, Not Only Privacy, Is an Ethical Responsibility of Search Engines. Front Med (Lausanne) 2020; 7:400. [PMID: 32850905 PMCID: PMC7431660 DOI: 10.3389/fmed.2020.00400] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2020] [Accepted: 06/26/2020] [Indexed: 11/13/2022] Open
Abstract
The fact that Internet companies may record our personal data and track our online behavior for commercial or political purposes has heightened concerns about online privacy. This has also led to the development of search engines that promise no tracking and enhanced privacy. Search engines also have a major role in spreading low-quality health information such as that of anti-vaccine websites. This study investigates the relationship between search engines' approach to privacy and the scientific quality of the information they return. We analyzed the first 30 webpages returned searching "vaccines autism" in English, Spanish, Italian, and French. The results show that not only "alternative" search engines (Duckduckgo, Ecosia, Qwant, Swisscows, and Mojeek) but also other commercial engines (Bing, Yahoo) often return more anti-vaccine pages (10-53%) than Google.com (0%). Some localized versions of Google, however, returned more anti-vaccine webpages (up to 10%) than Google.com. Health information returned by search engines has an impact on public health and, specifically, on the acceptance of vaccines. The issue of information quality when seeking information for making health-related decisions also impacts the ethical aspect represented by the right to informed consent. Our study suggests that designing a search engine that is privacy-savvy and avoids issues with filter bubbles that can result from user-tracking is necessary but insufficient; instead, mechanisms should be developed to test search engines from the perspective of information quality (particularly for health-related webpages) before they can be deemed trustworthy providers of public health information.
Collapse
Affiliation(s)
- Pietro Ghezzi
- Brighton & Sussex Medical School, Brighton, United Kingdom
| | | | - Gonzalo Casino
- Communication Department, Pompeu Fabra University, Barcelona, Spain; Iberoamerican Cochrane Center, Barcelona, Spain
| | - Alessia Catalani
- Department of Biomolecular Sciences, University of Urbino Carlo Bo, Urbino, Italy
| | - Michel Goldman
- Institute for Interdisciplinary Innovation in Healthcare (I3h), Université Libre de Bruxelles, Brussels, Belgium
| | - Jessica Morley
- Oxford Internet Institute, University of Oxford, Oxford, United Kingdom
| | - Marie Neunez
- Institute for Interdisciplinary Innovation in Healthcare (I3h), Université Libre de Bruxelles, Brussels, Belgium
| | - Andreu Prados-Bo
- Communication Department, Pompeu Fabra University, Barcelona, Spain; Blanquerna School of Health Sciences, Ramon Llull University, Barcelona, Spain
| | - Pierre R Smeesters
- Molecular Bacteriology Laboratory, Université Libre de Bruxelles, Brussels, Belgium; Academic Children Hospital Queen Fabiola, Université Libre de Bruxelles, Brussels, Belgium
| | - Mariarosaria Taddeo
- Oxford Internet Institute, University of Oxford, Oxford, United Kingdom; The Alan Turing Institute, London, United Kingdom
| | - Tania Vanzolini
- Department of Biomolecular Sciences, University of Urbino Carlo Bo, Urbino, Italy
| | - Luciano Floridi
- Oxford Internet Institute, University of Oxford, Oxford, United Kingdom; The Alan Turing Institute, London, United Kingdom
| |
Collapse
|
25
|
Morley J, Cowls J, Taddeo M, Floridi L. Public Health in the Information Age: Recognizing the Infosphere as a Social Determinant of Health. J Med Internet Res 2020; 22:e19311. [PMID: 32648850 PMCID: PMC7402642 DOI: 10.2196/19311] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2020] [Revised: 06/11/2020] [Accepted: 07/08/2020] [Indexed: 02/07/2023] Open
Abstract
Since 2016, social media companies and news providers have come under pressure to tackle the spread of political mis- and disinformation (MDI) online. However, despite evidence that online health MDI (on the web, on social media, and within mobile apps) also has negative real-world effects, there has been a lack of comparable action by either online service providers or state-sponsored public health bodies. We argue that this is problematic and seek to answer three questions: why has so little been done to control the flow of, and exposure to, health MDI online; how might more robust action be justified; and what specific, newly justified actions are needed to curb the flow of, and exposure to, online health MDI? In answering these questions, we show that four ethical concerns—related to paternalism, autonomy, freedom of speech, and pluralism—are partly responsible for the lack of intervention. We then suggest that these concerns can be overcome by relying on four arguments: (1) education is necessary but insufficient to curb the circulation of health MDI, (2) there is precedent for state control of internet content in other domains, (3) network dynamics adversely affect the spread of accurate health information, and (4) justice is best served by protecting those susceptible to inaccurate health information. These arguments provide a strong case for classifying the quality of the infosphere as a social determinant of health, thus making its protection a public health responsibility. In addition, they offer a strong justification for working to overcome the ethical concerns associated with state-led intervention in the infosphere to protect public health.
Collapse
Affiliation(s)
- Jessica Morley
- Oxford Internet Institute, University of Oxford, Oxford, United Kingdom
| | - Josh Cowls
- Oxford Internet Institute, University of Oxford, Oxford, United Kingdom; Alan Turing Institute, London, United Kingdom
| | - Mariarosaria Taddeo
- Oxford Internet Institute, University of Oxford, Oxford, United Kingdom; Alan Turing Institute, London, United Kingdom
| | - Luciano Floridi
- Oxford Internet Institute, University of Oxford, Oxford, United Kingdom; Alan Turing Institute, London, United Kingdom
| |
Collapse
|
26
|
Abstract
This article presents the first thematic review of the literature on the ethical issues concerning digital well-being. The term 'digital well-being' is used to refer to the impact of digital technologies on what it means to live a life that is good for a human being. The review explores the existing literature on the ethics of digital well-being, with the goal of mapping the current debate and identifying open questions for future research. The review identifies major issues related to several key social domains: healthcare, education, governance and social development, and media and entertainment. It also highlights three broader themes: positive computing, personalised human-computer interaction, and autonomy and self-determination. The review argues that these three themes will be central to ongoing discussions and research by showing how they can be used to identify open questions related to the ethics of digital well-being.
Collapse
Affiliation(s)
- Christopher Burr
- Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford, OX1 3JS, UK.
| | - Mariarosaria Taddeo
- Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford, OX1 3JS, UK
- The Alan Turing Institute, 96 Euston Road, London, NW1 2DB, UK
| | - Luciano Floridi
- Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford, OX1 3JS, UK
- The Alan Turing Institute, 96 Euston Road, London, NW1 2DB, UK
| |
Collapse
|
27
|
Morley J, Floridi L, Kinsey L, Elhalal A. From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices. Sci Eng Ethics 2020; 26:2141-2168. [PMID: 31828533 PMCID: PMC7417387 DOI: 10.1007/s11948-019-00165-5] [Citation(s) in RCA: 89] [Impact Index Per Article: 22.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/16/2019] [Accepted: 11/29/2019] [Indexed: 05/24/2023]
Abstract
The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Samuel in Science, 132(3429):741-742, 1960. https://doi.org/10.1126/science.132.3429.741; Wiener in Cybernetics: or control and communication in the animal and the machine, MIT Press, New York, 1961). However, in recent years symbolic AI has been complemented and sometimes replaced by (Deep) Neural Networks and Machine Learning (ML) techniques. This has vastly increased its potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such a debate has primarily focused on principles-the 'what' of AI ethics (beneficence, non-maleficence, autonomy, justice and explicability)-rather than on practices, the 'how.' Awareness of the potential issues is increasing at a fast rate, but the AI community's ability to take action to mitigate the associated risks is still in its infancy. Our intention in presenting this research is to contribute to closing the gap between principles and practices by constructing a typology that may help practically-minded developers apply ethics at each stage of the Machine Learning development pipeline, and to signal to researchers where further work is needed. The focus is exclusively on Machine Learning, but it is hoped that the results of this research may be easily applicable to other branches of AI. The article outlines the research method for creating this typology, the initial findings, and provides a summary of future research needs.
Collapse
Affiliation(s)
- Jessica Morley
- Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS UK
| | - Luciano Floridi
- Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS UK
- Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB UK
| | - Libby Kinsey
- Digital Catapult, 101 Euston Road, Kings Cross, London, NW1 2RA UK
| | - Anat Elhalal
- Digital Catapult, 101 Euston Road, Kings Cross, London, NW1 2RA UK
| |
Collapse
|
28
|
Morley J, Machado CCV, Burr C, Cowls J, Joshi I, Taddeo M, Floridi L. The ethics of AI in health care: A mapping review. Soc Sci Med 2020; 260:113172. [PMID: 32702587 DOI: 10.1016/j.socscimed.2020.113172] [Citation(s) in RCA: 122] [Impact Index Per Article: 30.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Revised: 06/22/2020] [Accepted: 06/23/2020] [Indexed: 02/06/2023]
Abstract
This article presents a mapping review of the literature concerning the ethics of artificial intelligence (AI) in health care. The goal of this review is to summarise current debates and identify open questions for future research. Five literature databases were searched to support the following research question: how can the primary ethical risks presented by AI-health be categorised, and what issues must policymakers, regulators and developers consider in order to be 'ethically mindful'? A series of screening stages were carried out (for example, removing articles that focused on digital health in general, such as data sharing, data access, data privacy, surveillance/nudging, consent, ownership of health data, and evidence of efficacy), yielding a total of 156 papers that were included in the review. We find that ethical issues can be (a) epistemic, related to misguided, inconclusive or inscrutable evidence; (b) normative, related to unfair outcomes and transformative effects; or (c) related to traceability. We further find that these ethical issues arise at six levels of abstraction: individual, interpersonal, group, institutional, and societal or sectoral. Finally, we outline a number of considerations for policymakers and regulators, mapping these to existing literature, and categorising each as epistemic, normative or traceability-related and at the relevant level of abstraction. Our goal is to inform policymakers, regulators and developers of what they must consider if they are to enable health and care systems to capitalise on the dual advantage of ethical AI: maximising the opportunities to cut costs, improve care, and improve the efficiency of health and care systems, whilst proactively avoiding the potential harms. We argue that if action is not swiftly taken in this regard, a new 'AI winter' could occur due to chilling effects related to a loss of public trust in the benefits of AI for health care.
Collapse
Affiliation(s)
- Jessica Morley
- Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford, OX1 3JS, UK.
| | - Caio C V Machado
- Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford, OX1 3JS, UK
| | - Christopher Burr
- Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford, OX1 3JS, UK
| | - Josh Cowls
- Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford, OX1 3JS, UK; Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB, UK
| | - Indra Joshi
- NHSX, Skipton House, 80 London Road, SE1 6LH, UK
| | - Mariarosaria Taddeo
- Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford, OX1 3JS, UK; Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB, UK; Department of Computer Science, University of Oxford, 15 Parks Rd, Oxford, OX1 3QD, UK
| | - Luciano Floridi
- Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford, OX1 3JS, UK; Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB, UK; Department of Computer Science, University of Oxford, 15 Parks Rd, Oxford, OX1 3QD, UK
| |
Collapse
|
29
|
Roberts H, Cowls J, Morley J, Taddeo M, Wang V, Floridi L. The Chinese approach to artificial intelligence: an analysis of policy, ethics, and regulation. AI & Soc 2020. [DOI: 10.1007/s00146-020-00992-2] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
In July 2017, China’s State Council released the country’s strategy for developing artificial intelligence (AI), entitled ‘New Generation Artificial Intelligence Development Plan’ (新一代人工智能发展规划). This strategy outlined China’s aims to become the world leader in AI by 2030, to monetise AI into a trillion-yuan (ca. 150 billion dollars) industry, and to emerge as the driving force in defining ethical norms and standards for AI. Several reports have analysed specific aspects of China’s AI policies or have assessed the country’s technical capabilities. Instead, in this article, we focus on the socio-political background and policy debates that are shaping China’s AI strategy. In particular, we analyse the main strategic areas in which China is investing in AI and the concurrent ethical debates that are delimiting its use. By focusing on the policy backdrop, we seek to provide a more comprehensive and critical understanding of China’s AI policy by bringing together debates and analyses of a wide array of policy documents.
Collapse
|
30
|
Abstract
To address the rising concern that algorithmic decision-making may reinforce discriminatory biases, researchers have proposed many notions of fairness and corresponding mathematical formalizations. Each of these notions is often presented as a one-size-fits-all, absolute condition; however, in reality, the practical and ethical trade-offs are unavoidable and more complex. We introduce a new approach that considers fairness not as a binary, absolute mathematical condition but rather as a relational notion in comparison to alternative decision-making processes. Using US mortgage lending as an example use case, we discuss the ethical foundations of each definition of fairness and demonstrate that our proposed methodology more closely captures the ethical trade-offs of the decision-maker, as well as forcing a more explicit representation of which values and objectives are prioritised.
Collapse
|
31
|
Morley J, Floridi L. The Limits of Empowerment: How to Reframe the Role of mHealth Tools in the Healthcare Ecosystem. Sci Eng Ethics 2020; 26:1159-1183. [PMID: 31172424 PMCID: PMC7286867 DOI: 10.1007/s11948-019-00115-1] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/15/2019] [Accepted: 05/28/2019] [Indexed: 05/03/2023]
Abstract
This article highlights the limitations of the tendency to frame health- and wellbeing-related digital tools (mHealth technologies) as empowering devices, especially as they play an increasingly important role in the National Health Service (NHS) in the UK. It argues that mHealth technologies should instead be framed as digital companions. This shift from empowerment to companionship is advocated by showing the conceptual, ethical, and methodological issues challenging the narrative of empowerment, and by arguing that such challenges, as well as the risk of medical paternalism, can be overcome by focusing on the potential for mHealth tools to mediate the relationship between recipients of clinical advice and givers of clinical advice, in ways that allow for contextual flexibility in the balance between patiency and agency. The article concludes by stressing that reframing the narrative cannot be the only means for avoiding harm caused to the NHS as a healthcare system by the introduction of mHealth tools. Future discussion will be needed on the overarching role of responsible design.
Collapse
Affiliation(s)
- Jessica Morley
- Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford, OX1 3JS, UK.
| | - Luciano Floridi
- Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford, OX1 3JS, UK
- The Alan Turing Institute, 96 Euston Road, London, NW1 2DB, UK
| |
Collapse
|
32
|
Abstract
Contact tracing is a central public health response to infectious disease outbreaks, especially in the early stages of an outbreak when specific treatments are limited. Importation of novel Coronavirus (COVID-19) from China and elsewhere into the United Kingdom highlights the need to understand the impact of contact tracing as a control measure. Using detailed survey information on social encounters coupled to predictive models, we investigate the likely efficacy of the current UK definition of a close contact (within 2 meters for 15 minutes or more) and the distribution of secondary cases that may go untraced. Taking recent estimates for COVID-19 transmission, we show that less than 1 in 5 cases will generate any subsequent untraced cases, although this comes at a high logistical burden with an average of 36.1 individuals (95th percentiles 0-182) traced per case. Changes to the definition of a close contact can reduce this burden, but with increased risk of untraced cases; we estimate that any definition where close contact requires more than 4 hours of contact is likely to lead to uncontrolled spread.
Collapse
|
33
|
Floridi L, Cowls J, King TC, Taddeo M. How to Design AI for Social Good: Seven Essential Factors. Sci Eng Ethics 2020; 26:1771-1796. [PMID: 32246245 PMCID: PMC7286860 DOI: 10.1007/s11948-020-00213-5] [Citation(s) in RCA: 38] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/14/2019] [Accepted: 03/25/2020] [Indexed: 05/21/2023]
Abstract
The idea of artificial intelligence for social good (henceforth AI4SG) is gaining traction within information societies in general and the AI community in particular. It has the potential to tackle social problems through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies. This article addresses this gap by identifying seven ethical factors that are essential for future AI4SG initiatives. The analysis is supported by 27 case examples of AI4SG projects. Some of these factors are almost entirely novel to AI, while the significance of other factors is heightened by the use of AI. From each of these factors, corresponding best practices are formulated which, subject to context and balance, may serve as preliminary guidelines to ensure that well-designed AI is more likely to serve the social good.
Collapse
Affiliation(s)
- Luciano Floridi
- Digital Ethics Lab, Oxford Internet Institute, University of Oxford, Oxford, UK
- The Alan Turing Institute, London, UK
| | - Josh Cowls
- Digital Ethics Lab, Oxford Internet Institute, University of Oxford, Oxford, UK
- The Alan Turing Institute, London, UK
| | - Thomas C. King
- Digital Ethics Lab, Oxford Internet Institute, University of Oxford, Oxford, UK
| | - Mariarosaria Taddeo
- Digital Ethics Lab, Oxford Internet Institute, University of Oxford, Oxford, UK
- The Alan Turing Institute, London, UK
| |
Collapse
|
34
|
Aggarwal N, Floridi L. Towards the Ethical Publication of Country of Origin Information (COI) in the Asylum Process. Minds Mach (Dordr) 2020. [DOI: 10.1007/s11023-020-09523-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
This article addresses the question of how ‘Country of Origin Information’ (COI) reports—that is, research developed and used to support decision-making in the asylum process—can be published in an ethical manner. The article focuses on the risk that published COI reports could be misused and thereby harm the subjects of the reports and/or those involved in their development. It supports a situational approach to assessing data ethics when publishing COI reports, whereby COI service providers must weigh up the benefits and harms of publication based, inter alia, on the foreseeability and probability of harm due to potential misuse of the research, the public good nature of the research, and the need to balance the rights and duties of the various actors in the asylum process, including asylum seekers themselves. Although this article focuses on the specific question of ‘how to publish COI reports in an ethical manner’, it also intends to promote further research on data ethics in the asylum process, particularly in relation to refugees, where more foundational issues should be considered.
Collapse
|
35
|
Abstract
An increasing number of technology firms are implementing processes to identify and evaluate the ethical risks of their systems and products. A key part of these review processes is to foresee potential impacts of these technologies on different groups of users. In this article, we use the expression Ethical Foresight Analysis (EFA) to refer to a variety of analytical strategies for anticipating or predicting the ethical issues that new technological artefacts, services, and applications may raise. This article examines several existing EFA methodologies currently in use. It identifies the purposes of ethical foresight, the kinds of methods that current methodologies employ, and the strengths and weaknesses of each of these current approaches. The conclusion is that a new kind of foresight analysis on the ethics of emerging technologies is both feasible and urgently needed.
Collapse
|
36
|
|
37
|
|
38
|
Abstract
This article presents the first, systematic analysis of the ethical challenges posed by recommender systems through a literature review. The article identifies six areas of concern, and maps them onto a proposed taxonomy of different kinds of ethical impact. The analysis uncovers a gap in the literature: currently, user-centred approaches do not consider the interests of a variety of other stakeholders—as opposed to just the receivers of a recommendation—in assessing the ethical impacts of a recommender system.
Collapse
|
39
|
Affiliation(s)
- Jessica Morley
- Nuffield Department of Primary Care, University of Oxford, Oxford OX2 6GG, UK
| | | | - Ben Goldacre
- Nuffield Department of Primary Care, University of Oxford, Oxford OX2 6GG, UK
| |
Collapse
|
40
|
King TC, Aggarwal N, Taddeo M, Floridi L. Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions. Sci Eng Ethics 2020; 26:89-120. [PMID: 30767109 PMCID: PMC6978427 DOI: 10.1007/s11948-018-00081-0] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/10/2018] [Accepted: 12/16/2018] [Indexed: 05/03/2023]
Abstract
Artificial intelligence (AI) research and regulation seek to balance the benefits of innovation against any potential harms and disruption. However, one unintended consequence of the recent surge in AI research is the potential re-orientation of AI technologies to facilitate criminal acts, termed in this article AI-Crime (AIC). AIC is theoretically feasible thanks to published experiments in automating fraud targeted at social media users, as well as demonstrations of AI-driven manipulation of simulated markets. However, because AIC is still a relatively young and inherently interdisciplinary area, spanning socio-legal studies to formal science, there is little certainty of what an AIC future might look like. This article offers the first systematic, interdisciplinary literature analysis of the foreseeable threats of AIC, providing ethicists, policy-makers, and law enforcement organisations with a synthesis of the current problems, and a possible solution space.
Collapse
Affiliation(s)
- Thomas C King
- Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford, OX1 3JS, UK
| | - Nikita Aggarwal
- Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford, OX1 3JS, UK
- Faculty of Law, University of Oxford, St Cross Building St. Cross Rd, Oxford, OX1 3UL, UK
| | - Mariarosaria Taddeo
- Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford, OX1 3JS, UK
- The Alan Turing Institute, 96 Euston Road, London, NW1 2DB, UK
| | - Luciano Floridi
- Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford, OX1 3JS, UK.
- The Alan Turing Institute, 96 Euston Road, London, NW1 2DB, UK.
| |
Collapse
|
41
|
Affiliation(s)
- Jessica Morley
- Oxford Internet Institute, University of Oxford, Oxford OX1 3JS, UK.
| | - Luciano Floridi
- Oxford Internet Institute, University of Oxford, Oxford OX1 3JS, UK; Alan Turing Institute, London, UK
| |
Collapse
|
42
|
Affiliation(s)
- Luciano Floridi
- Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford, OX1 3JS UK; The Alan Turing Institute, 96 Euston Road, London, NW1 2DB UK
| |
Collapse
|
43
|
Rich AS, Rudin C, Jacoby DMP, Freeman R, Wearn OR, Shevlin H, Dihal K, ÓhÉigeartaigh SS, Butcher J, Lippi M, Palka P, Torroni P, Wongvibulsin S, Begoli E, Schneider G, Cave S, Sloane M, Moss E, Rahwan I, Goldberg K, Howard D, Floridi L, Stilgoe J. AI reflections in 2019. NAT MACH INTELL 2020. [DOI: 10.1038/s42256-019-0141-1] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
44
|
Morley J, Taddeo M, Floridi L. Google Health and the NHS: overcoming the trust deficit. The Lancet Digital Health 2019; 1:e389. [DOI: 10.1016/s2589-7500(19)30193-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/10/2019] [Accepted: 10/17/2019] [Indexed: 10/25/2022]
|
45
|
|
46
|
Krutzinna J, Taddeo M, Floridi L. Enabling Posthumous Medical Data Donation: An Appeal for the Ethical Utilisation of Personal Health Data. Sci Eng Ethics 2019; 25:1357-1387. [PMID: 30357557 DOI: 10.1007/s11948-018-0067-8] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/02/2018] [Accepted: 09/17/2018] [Indexed: 06/08/2023]
Abstract
This article argues that personal medical data should be made available for scientific research by enabling and encouraging individuals to donate their medical records once deceased, similar to the way in which they can already donate organs or bodies. This research is part of a project on posthumous medical data donation developed by the Digital Ethics Lab at the Oxford Internet Institute at the University of Oxford. Ten arguments are provided to support the need to foster posthumous medical data donation. Two major risks are also identified, harm to others and lack of control over the use of data, which could follow from unregulated donation of medical data. The argument that record-based medical research should proceed without the need to secure informed consent is rejected; instead, a voluntary and participatory approach to using personal medical data should be followed. The analysis concludes by stressing the need to develop an ethical code for data donation to minimise the risks, and offers five foundational principles, suggested as a draft code, for ethical medical data donation.
Affiliation(s)
- Jenny Krutzinna
- Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford, OX1 3JS, UK.
- Mariarosaria Taddeo
- Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford, OX1 3JS, UK
- The Alan Turing Institute, 96 Euston Road, London, NW1 2DB, UK
- Luciano Floridi
- Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford, OX1 3JS, UK
- The Alan Turing Institute, 96 Euston Road, London, NW1 2DB, UK
47
48
Affiliation(s)
- David S Watson
- Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford OX1 3JS, UK
- Centre for Translational Bioinformatics, William Harvey Research Institute, Queen Mary University of London, London, UK
- The Alan Turing Institute, London, UK
- Jenny Krutzinna
- Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford OX1 3JS, UK
- Ian N Bruce
- Arthritis Research UK Centre for Epidemiology, Centre for Musculoskeletal Research, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester, UK
- NIHR Manchester Biomedical Research Centre, Manchester University Hospitals NHS Foundation Trust, Manchester M13 9WL, UK
- Christopher EM Griffiths
- NIHR Manchester Biomedical Research Centre, Manchester University Hospitals NHS Foundation Trust, Manchester M13 9WL, UK
- The Dermatology Centre, Salford Royal NHS Foundation Trust, The University of Manchester, Salford, UK
- Iain B McInnes
- Institute of Infection, Immunity and Inflammation, University of Glasgow, Glasgow, UK
- Michael R Barnes
- Centre for Translational Bioinformatics, William Harvey Research Institute, Queen Mary University of London, London, UK
- The Alan Turing Institute, London, UK
- Luciano Floridi
- Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford OX1 3JS, UK
- The Alan Turing Institute, London, UK
49
50
Floridi L, Cowls J, Beltrametti M, Chatila R, Chazerand P, Dignum V, Luetge C, Madelin R, Pagallo U, Rossi F, Schafer B, Valcke P, Vayena E. AI4People - An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds Mach (Dordr) 2018; 28:689-707. [PMID: 30930541] [PMCID: PMC6404626] [DOI: 10.1007/s11023-018-9482-5] [Received: 10/28/2018] [Accepted: 11/02/2018]
Abstract
This article reports the findings of AI4People, an Atomium-EISMD initiative designed to lay the foundations for a "Good AI Society". We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations (to assess, to develop, to incentivise, and to support good AI) which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other stakeholders. If adopted, these recommendations would serve as a firm foundation for the establishment of a Good AI Society.
Affiliation(s)
- Luciano Floridi
- Oxford Internet Institute, University of Oxford, Oxford, UK
- The Alan Turing Institute, London, UK
- Josh Cowls
- Oxford Internet Institute, University of Oxford, Oxford, UK
- The Alan Turing Institute, London, UK
- Raja Chatila
- French National Center of Scientific Research, Paris, France
- Institute of Intelligent Systems and Robotics, Pierre and Marie Curie University, Paris, France
- Virginia Dignum
- University of Umeå, Umeå, Sweden
- Delft Design for Values Institute, Delft University of Technology, Delft, The Netherlands
- Christoph Luetge
- TUM School of Governance, Technical University of Munich, Munich, Germany
- Robert Madelin
- Centre for Technology and Global Affairs, University of Oxford, Oxford, UK
- Ugo Pagallo
- Department of Law, University of Turin, Turin, Italy
- Francesca Rossi
- IBM Research, New York, USA
- University of Padova, Padua, Italy
- Peggy Valcke
- Centre for IT & IP Law, Catholic University of Leuven, Flanders, Belgium
- Bocconi University, Milan, Italy
- Effy Vayena
- Bioethics, Health Ethics and Policy Lab, ETH Zurich, Zurich, Switzerland