1
Germani F, Spitale G, Machiri SV, Ho CWL, Ballalai I, Biller-Andorno N, Reis AA. Ethical Considerations in Infodemic Management: Systematic Scoping Review. JMIR Infodemiology 2024; 4:e56307. [PMID: 39208420] [PMCID: PMC11393515] [DOI: 10.2196/56307]
Abstract
BACKGROUND: During health emergencies, effective infodemic management has become a paramount challenge. A new era marked by a rapidly changing information ecosystem, combined with the widespread dissemination of misinformation and disinformation, has magnified the complexity of the issue. For infodemic management measures to be effective, acceptable, and trustworthy, a robust framework of ethical considerations is needed.
OBJECTIVE: This systematic scoping review aims to identify and analyze ethical considerations and procedural principles relevant to infodemic management, ultimately enhancing the effectiveness of these practices and increasing trust in stakeholders performing infodemic management practices with the goal of safeguarding public health.
METHODS: The review involved a comprehensive examination of the literature related to ethical considerations in infodemic management from 2002 to 2022, drawing from publications in PubMed, Scopus, and Web of Science. Policy documents and relevant material were included in the search strategy. Papers were screened against inclusion and exclusion criteria, and core thematic areas were systematically identified and categorized following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. We analyzed the literature to identify substantive ethical principles that were crucial for guiding actions in the realms of infodemic management and social listening, as well as related procedural ethical principles. In this review, we consider ethical principles that are extensively deliberated upon in the literature, such as equity, justice, or respect for autonomy. However, we acknowledge the existence and relevance of procedural practices, which we also consider as ethical principles or practices that, when implemented, enhance the efficacy of infodemic management while ensuring the respect of substantive ethical principles.
RESULTS: Drawing from 103 publications, the review yielded several key findings related to ethical principles, approaches, and guidelines for practice in the context of infodemic management. Community engagement, empowerment through education, and inclusivity emerged as procedural principles and practices that enhance the quality and effectiveness of communication and social listening efforts, fostering trust, a key emerging theme and crucial ethical principle. The review also emphasized the significance of transparency, privacy, and cybersecurity in data collection.
CONCLUSIONS: This review underscores the pivotal role of ethics in bolstering the efficacy of infodemic management. From the analyzed body of literature, it becomes evident that ethical considerations serve as essential instruments for cultivating trust and credibility while also facilitating the medium-term and long-term viability of infodemic management approaches.
Affiliation(s)
- Federico Germani
- Institute of Biomedical Ethics and History of Medicine, University of Zurich, Zurich, Switzerland
- Giovanni Spitale
- Institute of Biomedical Ethics and History of Medicine, University of Zurich, Zurich, Switzerland
- Sandra Varaidzo Machiri
- Unit for High Impact Events Preparedness, Department of Epidemic and Pandemic Preparedness and Prevention, World Health Organization, Genève, Switzerland
- Nikola Biller-Andorno
- Institute of Biomedical Ethics and History of Medicine, University of Zurich, Zurich, Switzerland
- Andreas Alois Reis
- Health Ethics and Governance Unit, Department of Research for Health, World Health Organization, Genève, Switzerland
2
Hine E, Yousefi Y, Osivand P, Brand D, Kugler K, Chiara PG. The AI Act Grand Challenge shows how autonomous robots will be regulated. Sci Robot 2023; 8:eadk5632. [PMID: 37992193] [DOI: 10.1126/scirobotics.adk5632]
Abstract
One of the winning teams of the EU AI Act Grand Challenge analyzes how the AI Act will regulate robots.
Affiliation(s)
- Emmie Hine
- Department of Legal Studies, University of Bologna, Via Zamboni 27/29, Bologna 40126, Italy
- Yasaman Yousefi
- Department of Legal Studies, University of Bologna, Via Zamboni 27/29, Bologna 40126, Italy
- Parisa Osivand
- Dalla Lana School of Public Health, University of Toronto, 155 College St., Toronto, ON M5T 3M7, Canada
- Dirk Brand
- School of Public Leadership, Stellenbosch University, Carl Cronje Dr., Cape Town 7530, South Africa
- Kholofelo Kugler
- Faculty of Law, University of Lucerne, Frohburgstrasse 3, Postfach 4466, Lucerne 6002, Switzerland
- Pier Giorgio Chiara
- Department of Legal Studies, University of Bologna, Via Zamboni 27/29, Bologna 40126, Italy
3
Curto G, Comim F. SAF: Stakeholders' Agreement on Fairness in the Practice of Machine Learning Development. Science and Engineering Ethics 2023; 29:29. [PMID: 37486434] [PMCID: PMC10366323] [DOI: 10.1007/s11948-023-00448-y]
Abstract
This paper clarifies why bias cannot be completely mitigated in Machine Learning (ML) and proposes an end-to-end methodology to translate the ethical principle of justice and fairness into the practice of ML development as an ongoing agreement with stakeholders. The pro-ethical iterative process presented in the paper aims to challenge asymmetric power dynamics in the fairness decision making within ML design and support ML development teams to identify, mitigate and monitor bias at each step of ML systems development. The process also provides guidance on how to explain the always imperfect trade-offs in terms of bias to users.
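The abstract's point about identifying and monitoring bias at each step of ML development can be made concrete with a small example. The sketch below is not the authors' SAF methodology; it only illustrates, under assumed data and an assumed tolerance threshold, how a single group fairness metric (demographic parity difference) might be computed and flagged during development.
```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive prediction rates between two groups (coded 0 and 1)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical predictions and a binary protected attribute (illustrative data only).
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])

THRESHOLD = 0.2  # tolerance assumed to be agreed with stakeholders
gap = demographic_parity_difference(y_pred, group)
print(f"demographic parity gap = {gap:.2f}",
      "-> flag for review" if gap > THRESHOLD else "-> within agreed tolerance")
```
In an iterative process such as the one the paper describes, a check like this would be re-run whenever the data or the model changes, with the threshold itself treated as a stakeholder agreement rather than a fixed technical constant.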
Affiliation(s)
- Flavio Comim
- IQS School of Management, Universitat Ramon Llull, Barcelona, Spain
4
Cowls J, Tsamados A, Taddeo M, Floridi L. The AI gambit: leveraging artificial intelligence to combat climate change-opportunities, challenges, and recommendations. AI & Society 2023; 38:283-307. [PMID: 34690449] [PMCID: PMC8522259] [DOI: 10.1007/s00146-021-01294-x]
Abstract
In this article, we analyse the role that artificial intelligence (AI) could play, and is playing, to combat global climate change. We identify two crucial opportunities that AI offers in this domain: it can help improve and expand current understanding of climate change, and it can contribute to combatting the climate crisis effectively. However, the development of AI also raises two sets of problems when considering climate change: the possible exacerbation of social and ethical challenges already associated with AI, and the contribution to climate change of the greenhouse gases emitted by training data and computation-intensive AI systems. We assess the carbon footprint of AI research, and the factors that influence AI's greenhouse gas (GHG) emissions in this domain. We find that the carbon footprint of AI research may be significant and highlight the need for more evidence concerning the trade-off between the GHG emissions generated by AI research and the energy and resource efficiency gains that AI can offer. In light of our analysis, we argue that leveraging the opportunities offered by AI for global climate change whilst limiting its risks is a gambit which requires responsive, evidence-based, and effective governance to become a winning strategy. We conclude by identifying the European Union as being especially well-placed to play a leading role in this policy response and provide 13 recommendations that are designed to identify and harness the opportunities of AI for combatting climate change, while reducing its impact on the environment.
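As a rough illustration of the kind of carbon-footprint reasoning the abstract refers to, the sketch below applies a standard back-of-envelope estimate (energy drawn times grid carbon intensity). It is not the paper's own assessment method, and all numeric values are assumptions.
```python
def training_co2e_kg(gpu_count, gpu_power_kw, hours, pue, grid_kg_per_kwh):
    """kg CO2e = facility energy (kWh) x grid carbon intensity (kg CO2e per kWh)."""
    energy_kwh = gpu_count * gpu_power_kw * hours * pue  # PUE scales IT load to facility load
    return energy_kwh * grid_kg_per_kwh

# Hypothetical run: 8 GPUs drawing 0.3 kW each for 72 h, PUE 1.5,
# grid intensity 0.4 kg CO2e per kWh (all assumed values).
print(f"{training_co2e_kg(8, 0.3, 72, 1.5, 0.4):.0f} kg CO2e")
```
Estimates of this kind depend heavily on the assumed hardware power draw, data-centre efficiency, and grid mix, which is one reason the article calls for more evidence on the trade-off between emissions generated and efficiency gains offered by AI.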
Affiliation(s)
- Josh Cowls
- Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS UK
- Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB UK
- Andreas Tsamados
- Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS UK
- Mariarosaria Taddeo
- Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS UK
- Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB UK
- Luciano Floridi
- Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS UK
- Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB UK
5
Social influence for societal interest: a pro-ethical framework for improving human decision making through multi-stakeholder recommender systems. AI & Society 2022. [DOI: 10.1007/s00146-022-01467-2]
Abstract
In the contemporary digital age, recommender systems (RSs) play a fundamental role in managing information on online platforms: from social media to e-commerce, from travels to cultural consumptions, automated recommendations influence the everyday choices of users at an unprecedented scale. RSs are trained on users’ data to make targeted suggestions to individuals according to their expected preference, but their ultimate impact concerns all the multiple stakeholders involved in the recommendation process. Therefore, whilst RSs are useful to reduce information overload, their deployment comes with significant ethical challenges, which are still largely unaddressed because of proprietary constraints and regulatory gaps that limit the effects of standard approaches to explainability and transparency. In this context, I address the ethical and social implications of automated recommendations by proposing a pro-ethical design framework aimed at reorienting the influence of RSs towards societal interest. In particular, after highlighting the problem of explanation for RSs, I discuss the application of beneficent informational nudging to the case of conversational recommender systems (CRSs), which rely on user-system dialogic interactions. Subsequently, through a comparison with standard recommendations, I outline the incentives for platforms and providers in adopting this approach and its benefits for both individual users and society.
6
Hermann E. Leveraging Artificial Intelligence in Marketing for Social Good-An Ethical Perspective. Journal of Business Ethics 2022; 179:43-61. [PMID: 34054170] [PMCID: PMC8150633] [DOI: 10.1007/s10551-021-04843-y]
Abstract
Artificial intelligence (AI) is (re)shaping strategy, activities, interactions, and relationships in business and specifically in marketing. The substantial opportunities that AI systems and applications (will) provide in marketing come with the drawback of ethical controversies. Building on the literature on AI ethics, the authors systematically scrutinize the ethical challenges of deploying AI in marketing from a multi-stakeholder perspective. By revealing interdependencies and tensions between ethical principles, the authors shed light on the applicability of a purely principled, deontological approach to AI ethics in marketing. To reconcile some of these tensions and account for the AI-for-social-good perspective, the authors make suggestions for how AI in marketing can be leveraged to promote societal and environmental well-being.
Affiliation(s)
- Erik Hermann
- Wireless Systems, IHP - Leibniz-Institut für innovative Mikroelektronik, Frankfurt (Oder), Germany
7
Ethics-based auditing of automated decision-making systems: intervention points and policy implications. AI & Society 2021. [DOI: 10.1007/s00146-021-01286-x]
Abstract
Organisations increasingly use automated decision-making systems (ADMS) to inform decisions that affect humans and their environment. While the use of ADMS can improve the accuracy and efficiency of decision-making processes, it is also coupled with ethical challenges. Unfortunately, the governance mechanisms currently used to oversee human decision-making often fail when applied to ADMS. In previous work, we proposed that ethics-based auditing (EBA), that is, a structured process by which ADMS are assessed for consistency with relevant principles or norms, can (a) help organisations verify claims about their ADMS and (b) provide decision-subjects with justifications for the outputs produced by ADMS. In this article, we outline the conditions under which EBA procedures can be feasible and effective in practice. First, we argue that EBA is best understood as a ‘soft’ yet ‘formal’ governance mechanism. This implies that the main responsibility of auditors should be to spark ethical deliberation at key intervention points throughout the software development process and ensure that there is sufficient documentation to respond to potential inquiries. Second, we frame ADMS as parts of larger sociotechnical systems to demonstrate that to be feasible and effective, EBA procedures must link to intervention points that span all levels of organisational governance and all phases of the software lifecycle. The main function of EBA should, therefore, be to inform, formalise, assess, and interlink existing governance structures. Finally, we discuss the policy implications of our findings. To support the emergence of feasible and effective EBA procedures, policymakers and regulators could provide standardised reporting formats, facilitate knowledge exchange, provide guidance on how to resolve normative tensions, and create an independent body to oversee EBA of ADMS.
8
Mökander J, Morley J, Taddeo M, Floridi L. Ethics-Based Auditing of Automated Decision-Making Systems: Nature, Scope, and Limitations. Science and Engineering Ethics 2021; 27:44. [PMID: 34231029] [PMCID: PMC8260507] [DOI: 10.1007/s11948-021-00319-4]
Abstract
Important decisions that impact humans' lives, livelihoods, and the natural environment are increasingly being automated. Delegating tasks to so-called automated decision-making systems (ADMS) can improve efficiency and enable new solutions. However, these benefits are coupled with ethical challenges. For example, ADMS may produce discriminatory outcomes, violate individual privacy, and undermine human self-determination. New governance mechanisms are thus needed that help organisations design and deploy ADMS in ways that are ethical, while enabling society to reap the full economic and social benefits of automation. In this article, we consider the feasibility and efficacy of ethics-based auditing (EBA) as a governance mechanism that allows organisations to validate claims made about their ADMS. Building on previous work, we define EBA as a structured process whereby an entity's present or past behaviour is assessed for consistency with relevant principles or norms. We then offer three contributions to the existing literature. First, we provide a theoretical explanation of how EBA can contribute to good governance by promoting procedural regularity and transparency. Second, we propose seven criteria for how to design and implement EBA procedures successfully. Third, we identify and discuss the conceptual, technical, social, economic, organisational, and institutional constraints associated with EBA. We conclude that EBA should be considered an integral component of multifaceted approaches to managing the ethical risks posed by ADMS.
Affiliation(s)
- Jakob Mökander
- Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS UK
- Jessica Morley
- Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS UK
- Mariarosaria Taddeo
- Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS UK
- Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB UK
- Luciano Floridi
- Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS UK
- Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB UK
9
Morley J, Cowls J, Taddeo M, Floridi L. Public Health in the Information Age: Recognizing the Infosphere as a Social Determinant of Health. J Med Internet Res 2020; 22:e19311. [PMID: 32648850] [PMCID: PMC7402642] [DOI: 10.2196/19311]
Abstract
Since 2016, social media companies and news providers have come under pressure to tackle the spread of political mis- and disinformation (MDI) online. However, despite evidence that online health MDI (on the web, on social media, and within mobile apps) also has negative real-world effects, there has been a lack of comparable action by either online service providers or state-sponsored public health bodies. We argue that this is problematic and seek to answer three questions: why has so little been done to control the flow of, and exposure to, health MDI online; how might more robust action be justified; and what specific, newly justified actions are needed to curb the flow of, and exposure to, online health MDI? In answering these questions, we show that four ethical concerns (related to paternalism, autonomy, freedom of speech, and pluralism) are partly responsible for the lack of intervention. We then suggest that these concerns can be overcome by relying on four arguments: (1) education is necessary but insufficient to curb the circulation of health MDI, (2) there is precedent for state control of internet content in other domains, (3) network dynamics adversely affect the spread of accurate health information, and (4) justice is best served by protecting those susceptible to inaccurate health information. These arguments provide a strong case for classifying the quality of the infosphere as a social determinant of health, thus making its protection a public health responsibility. In addition, they offer a strong justification for working to overcome the ethical concerns associated with state-led intervention in the infosphere to protect public health.
Affiliation(s)
- Jessica Morley
- Oxford Internet Institute, University of Oxford, Oxford, United Kingdom
- Josh Cowls
- Oxford Internet Institute, University of Oxford, Oxford, United Kingdom
- Alan Turing Institute, London, United Kingdom
- Mariarosaria Taddeo
- Oxford Internet Institute, University of Oxford, Oxford, United Kingdom
- Alan Turing Institute, London, United Kingdom
- Luciano Floridi
- Oxford Internet Institute, University of Oxford, Oxford, United Kingdom
- Alan Turing Institute, London, United Kingdom
10
Burr C, Taddeo M, Floridi L. The Ethics of Digital Well-Being: A Thematic Review. Science and Engineering Ethics 2020; 26:2313-2343. [PMID: 31933119] [PMCID: PMC7417400] [DOI: 10.1007/s11948-020-00175-8]
Abstract
This article presents the first thematic review of the literature on the ethical issues concerning digital well-being. The term 'digital well-being' is used to refer to the impact of digital technologies on what it means to live a life that is good for a human being. The review explores the existing literature on the ethics of digital well-being, with the goal of mapping the current debate and identifying open questions for future research. The review identifies major issues related to several key social domains: healthcare, education, governance and social development, and media and entertainment. It also highlights three broader themes: positive computing, personalised human-computer interaction, and autonomy and self-determination. The review argues that three themes will be central to ongoing discussions and research by showing how they can be used to identify open questions related to the ethics of digital well-being.
Affiliation(s)
- Christopher Burr
- Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford, OX1 3JS, UK.
- Mariarosaria Taddeo
- Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford, OX1 3JS, UK
- The Alan Turing Institute, 96 Euston Road, London, NW1 2DB, UK
- Luciano Floridi
- Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford, OX1 3JS, UK
- The Alan Turing Institute, 96 Euston Road, London, NW1 2DB, UK
11
Morley J, Floridi L, Kinsey L, Elhalal A. From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices. Science and Engineering Ethics 2020; 26:2141-2168. [PMID: 31828533] [PMCID: PMC7417387] [DOI: 10.1007/s11948-019-00165-5]
Abstract
The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Samuel in Science, 132(3429):741-742, 1960. https://doi.org/10.1126/science.132.3429.741 ; Wiener in Cybernetics: or control and communication in the animal and the machine, MIT Press, New York, 1961). However, in recent years symbolic AI has been complemented and sometimes replaced by (Deep) Neural Networks and Machine Learning (ML) techniques. This has vastly increased its potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such a debate has primarily focused on principles, the 'what' of AI ethics (beneficence, non-maleficence, autonomy, justice and explicability), rather than on practices, the 'how.' Awareness of the potential issues is increasing at a fast rate, but the AI community's ability to take action to mitigate the associated risks is still in its infancy. Our intention in presenting this research is to contribute to closing the gap between principles and practices by constructing a typology that may help practically-minded developers apply ethics at each stage of the Machine Learning development pipeline, and to signal to researchers where further work is needed. The focus is exclusively on Machine Learning, but it is hoped that the results of this research may be easily applicable to other branches of AI. The article outlines the research method for creating this typology and the initial findings, and provides a summary of future research needs.
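To make the idea of applying ethics at each stage of the Machine Learning development pipeline more tangible, the sketch below shows one possible data structure mapping pipeline stages to principles and example checks. The stage names, principles, and checks are illustrative assumptions, not the authors' published typology.
```python
from typing import Dict, List

# Stage names, principles, and checks below are illustrative assumptions.
PIPELINE_ETHICS_MAP: Dict[str, Dict[str, List[str]]] = {
    "data collection": {"justice": ["check sampling bias"], "autonomy": ["verify consent basis"]},
    "model training": {"non-maleficence": ["track per-group error rates"]},
    "evaluation": {"explicability": ["produce model documentation and explanations"]},
    "deployment": {"beneficence": ["monitor real-world impact"], "justice": ["audit outcomes by group"]},
}

def checks_for(stage: str) -> List[str]:
    """Flatten all checks registered for one pipeline stage."""
    return [check for checks in PIPELINE_ETHICS_MAP.get(stage, {}).values() for check in checks]

print(checks_for("deployment"))  # ['monitor real-world impact', 'audit outcomes by group']
```
A structure like this is only the scaffolding; the substantive work lies in choosing the checks and tools that populate it, which is what the reviewed typology aims to support.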
Affiliation(s)
- Jessica Morley
- Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS UK
- Luciano Floridi
- Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford, OX1 3JS UK
- Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB UK
- Libby Kinsey
- Digital Catapult, 101 Euston Road, Kings Cross, London, NW1 2RA UK
- Anat Elhalal
- Digital Catapult, 101 Euston Road, Kings Cross, London, NW1 2RA UK
12
Morley J, Floridi L. The Limits of Empowerment: How to Reframe the Role of mHealth Tools in the Healthcare Ecosystem. Science and Engineering Ethics 2020; 26:1159-1183. [PMID: 31172424] [PMCID: PMC7286867] [DOI: 10.1007/s11948-019-00115-1]
Abstract
This article highlights the limitations of the tendency to frame health- and wellbeing-related digital tools (mHealth technologies) as empowering devices, especially as they play an increasingly important role in the National Health Service (NHS) in the UK. It argues that mHealth technologies should instead be framed as digital companions. This shift from empowerment to companionship is advocated by showing the conceptual, ethical, and methodological issues challenging the narrative of empowerment, and by arguing that such challenges, as well as the risk of medical paternalism, can be overcome by focusing on the potential for mHealth tools to mediate the relationship between recipients of clinical advice and givers of clinical advice, in ways that allow for contextual flexibility in the balance between patiency and agency. The article concludes by stressing that reframing the narrative cannot be the only means for avoiding harm caused to the NHS as a healthcare system by the introduction of mHealth tools. Future discussion will be needed on the overarching role of responsible design.
Affiliation(s)
- Jessica Morley
- Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford, OX1 3JS, UK.
- Luciano Floridi
- Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford, OX1 3JS, UK
- The Alan Turing Institute, 96 Euston Road, London, NW1 2DB, UK
13
Empowerment or Engagement? Digital Health Technologies for Mental Healthcare. The 2019 Yearbook of the Digital Ethics Lab 2020. [DOI: 10.1007/978-3-030-29145-7_5]
14
Reijers W, Wright D, Brey P, Weber K, Rodrigues R, O'Sullivan D, Gordijn B. Methods for Practising Ethics in Research and Innovation: A Literature Review, Critical Analysis and Recommendations. Science and Engineering Ethics 2018; 24:1437-1481. [PMID: 28900898] [DOI: 10.1007/s11948-017-9961-8]
Abstract
This paper provides a systematic literature review, analysis and discussion of methods that are proposed to practise ethics in research and innovation (R&I). Ethical considerations concerning the impacts of R&I are increasingly important, due to the quickening pace of technological innovation and the ubiquitous use of the outcomes of R&I processes in society. For this reason, several methods for practising ethics have been developed in different fields of R&I. The paper first of all presents a systematic search of academic sources that present and discuss such methods. Secondly, it provides a categorisation of these methods according to three main kinds: (1) ex ante methods, dealing with emerging technologies, (2) intra methods, dealing with technology design, and (3) ex post methods, dealing with ethical analysis of existing technologies. Thirdly, it discusses the methods by considering problems in the way they deal with the uncertainty of technological change, ethical technology design, the identification, analysis and resolving of ethical impacts of technologies and stakeholder participation. The results and discussion of our literature review are valuable for gaining an overview of the state of the art and serve as an outline of a future research agenda of methods for practising ethics in R&I.
Affiliation(s)
- Wessel Reijers
- ADAPT Centre, Dublin City University, Glasnevin, Dublin 9, Ireland.
- David Wright
- Trilateral Research and Consulting, 72 Hammersmith Rd, London, W14, UK
- Philip Brey
- Department of Philosophy, University of Twente, Drienerlolaan 5, 7522NB, Enschede, The Netherlands
- Karsten Weber
- Institute for Social Research and Technology Assessment (IST), OTH Regensburg, Galgenbergstraße 24, 93053, Regensburg, Germany
- Rowena Rodrigues
- Trilateral Research and Consulting, 72 Hammersmith Rd, London, W14, UK
- Declan O'Sullivan
- ADAPT Centre, Department of Computer Science, Trinity College Dublin, O'Reilly Institute, Dublin 2, Ireland
- Bert Gordijn
- Institute of Ethics, Dublin City University, Glasnevin, Dublin 9, Ireland
15