1. Guttinger S. Surveillance in the lab? How datafication is changing the research landscape. EMBO Rep 2024; 25:2525-2528. PMID: 38730208; PMCID: PMC11169247; DOI: 10.1038/s44319-024-00153-2.
Affiliation(s)
- Stephan Guttinger: Department of Social and Political Sciences, Philosophy and Anthropology (SPSPA), Egenis Centre for the Study of the Life Sciences, University of Exeter, Exeter, UK.
2. Hu B, Mao Y, Kim KJ. How social anxiety leads to problematic use of conversational AI: The roles of loneliness, rumination, and mind perception. Computers in Human Behavior 2023. DOI: 10.1016/j.chb.2023.107760.
3. Ojea Quintana I, Reimann R, Cheong M, Alfano M, Klein C. Polarization and trust in the evolution of vaccine discourse on Twitter during COVID-19. PLoS One 2022; 17:e0277292. PMID: 36516117; PMCID: PMC9749990; DOI: 10.1371/journal.pone.0277292.
Abstract
Trust in vaccination is eroding, and attitudes about vaccination have become more polarized. This observational study of Twitter analyzes the impact that COVID-19 had on vaccine discourse. We identify the actors, the language they use, how their language changed, and what can explain this change. First, we find that authors cluster into several large, interpretable groups and that the discourse was strongly shaped by American partisan politics. Over the course of our study, both Republicans and Democrats entered the vaccine conversation in large numbers, forming coalitions with Antivaxxers and public health organizations, respectively. After the pandemic was officially declared, the interactions between these groups increased. Second, we show that the moral and non-moral language used by the various communities converged in interesting and informative ways. Finally, vector autoregression analysis indicates that differential responses to public health measures are likely part of what drove this convergence. Taken together, our results suggest that polarization around vaccination discourse in the context of COVID-19 was ultimately driven by a trust-first dynamic of political engagement.
Affiliation(s)
- Ritsaart Reimann: Department of Philosophy, Macquarie University, Sydney, NSW, Australia
- Marc Cheong: Centre for AI and Digital Ethics, Faculty of Engineering and IT, University of Melbourne, Melbourne, VIC, Australia
- Mark Alfano: Department of Philosophy, Macquarie University, Sydney, NSW, Australia
- Colin Klein: School of Philosophy, The Australian National University, Canberra, ACT, Australia
4. Klein C, Reimann R, Quintana IO, Cheong M, Ferreira M, Alfano M. Attention and counter-framing in the Black Lives Matter movement on Twitter. Humanities & Social Sciences Communications 2022; 9:367. PMID: 36254165; PMCID: PMC9555697; DOI: 10.1057/s41599-022-01384-1.
Abstract
The social media platform Twitter has played a crucial role in the Black Lives Matter (BLM) movement. The immediate, flexible nature of tweets is central both to spreading information about the movement's aims and to organizing individual protests. Twitter has also played an important role in the right-wing reaction to BLM, providing a means to reframe and recontextualize activists' claims in a more sinister light. The ability to bring about social change depends on the balance of these two forces, and in particular on which side can capture and maintain sustained attention. The present study examines two years' worth of tweets about BLM (about 118 million in total). Time-series analysis reveals that activists are better at mobilizing rapid attention, whereas right-wing accounts show a pattern of moderate but more sustained activity driven by reaction to political opponents. Topic modeling reveals differences in how different political groups talk about BLM. Most notably, the murder of George Floyd appears to have solidified a right-wing counter-framing of protests as arising from dangerous "terrorist" actors. The study thus sheds light on the complex network and rhetorical effects that drive the struggle for online attention to the BLM movement.
Affiliation(s)
- Colin Klein: The Australian National University, Canberra, ACT, Australia
- Marc Cheong: University of Melbourne, Melbourne, VIC, Australia
5. Kühl N, Goutier M, Baier L, Wolff C, Martin D. Human vs. supervised machine learning: Who learns patterns faster? Cognitive Systems Research 2022. DOI: 10.1016/j.cogsys.2022.09.002.
6. Hutmacher F, Appel M. The Psychology of Personalization in Digital Environments: From Motivation to Well-Being – A Theoretical Integration. Review of General Psychology 2022. DOI: 10.1177/10892680221105663.
Abstract
The personalization of digital environments is becoming ubiquitous due to the rise of AI-based algorithms and recommender systems. Arguably, this technological development has far-reaching consequences for individuals and societies alike. In this article, we propose a psychological model of the effects of personalization in digital environments, which connects personalization with motivational tendencies, psychological needs, and well-being. Based on the model, we review studies from three areas of application—news feeds and websites, music streaming, and online dating—to explain both the positive and the negative effects of personalization on individuals. We conclude that personalization can lead to desirable outcomes such as reducing choice overload. However, personalized digital environments without transparency, and without the option for users to play an active role in the personalization process, potentially pose a danger to human well-being. Design recommendations as well as avenues for future research that follow from these conclusions are discussed.
Affiliation(s)
- Fabian Hutmacher: Human-Computer-Media Institute, University of Würzburg, Würzburg, Germany
- Markus Appel: Human-Computer-Media Institute, University of Würzburg, Würzburg, Germany
7. Greene T, Martens D, Shmueli G. Barriers to academic data science research in the new realm of algorithmic behaviour modification by digital platforms. Nature Machine Intelligence 2022. DOI: 10.1038/s42256-022-00475-7.
8. Solberg E, Kaarstad M, Eitrheim MHR, Bisio R, Reegård K, Bloch M. A Conceptual Model of Trust, Perceived Risk, and Reliance on AI Decision Aids. Group & Organization Management 2022. DOI: 10.1177/10596011221081238.
Abstract
There is increasing interest in the use of artificial intelligence (AI) to improve organizational decision-making. However, research indicates that people’s trust in and choice to rely on “AI decision aids” can be tenuous. In the present paper, we connect research on trust in AI with Mayer, Davis, and Schoorman’s (1995) model of organizational trust to elaborate a conceptual model of trust, perceived risk, and reliance on AI decision aids at work. Drawing from the trust in technology, trust in automation, and decision support systems literatures, we redefine central concepts in Mayer et al.’s (1995) model, expand the model to include new, relevant constructs (like perceived control over an AI decision aid), and refine propositions about the relationships expected in this context. The conceptual model put forward presents a framework that can help researchers studying trust in and reliance on AI decision aids develop their research models, build systematically on each other’s research, and contribute to a more cohesive understanding of the phenomenon. Our paper concludes with five next steps to take research on the topic forward.
Affiliation(s)
- Elizabeth Solberg: Department of Human-Centred Digitalization, Institute for Energy Technology, Halden, Norway
- Magnhild Kaarstad: Department of Human-Centred Digitalization, Institute for Energy Technology, Halden, Norway
- Maren H. Rø Eitrheim: Department of Human-Centred Digitalization, Institute for Energy Technology, Halden, Norway
- Rossella Bisio: Department of Humans and Automation, Institute for Energy Technology, Halden, Norway
- Kine Reegård: Department of Human-Centred Digitalization, Institute for Energy Technology, Halden, Norway
- Marten Bloch: Department of Humans and Automation, Institute for Energy Technology, Halden, Norway
9. Capasso M, Umbrello S. Responsible nudging for social good: new healthcare skills for AI-driven digital personal assistants. Medicine, Health Care and Philosophy 2022; 25:11-22. PMID: 34822096; PMCID: PMC8613457; DOI: 10.1007/s11019-021-10062-z.
Abstract
Traditional medical practices and relationships are changing given the widespread adoption of AI-driven technologies across the various domains of health and healthcare. In many cases, these new technologies are not specific to the field of healthcare; rather, they are existing, ubiquitous, and commercially available systems that have been upskilled to support these novel care practices. Given this widespread adoption, coupled with the dramatic changes in practices, new ethical and social issues emerge from how these systems nudge users into making decisions and changing behaviours. This article discusses how these AI-driven systems pose particular ethical challenges with regard to nudging. To confront these issues, the value sensitive design (VSD) approach is adopted as a principled methodology that designers can use to design these systems to avoid harm and contribute to the social good. The AI for Social Good (AI4SG) factors are adopted as the norms constraining maleficence, while higher-order values specific to AI, such as those from the EU High-Level Expert Group on AI and the United Nations Sustainable Development Goals, are adopted as the values to be promoted as much as possible in design. The use case of Amazon Alexa's Healthcare Skills illustrates this design approach and provides an exemplar of how designers and engineers can begin to orient their design programs for these technologies towards the social good.
Affiliation(s)
- Marianna Capasso: Scuola Superiore Sant’Anna, Piazza Martiri della Libertà 33, 56127 Pisa, Italia
- Steven Umbrello: Department of Values, Technology, & Innovation, School of Technology, Policy & Management, Delft University of Technology, Jaffalaan 5, 2628 BX Delft, The Netherlands
10. Figà Talamanca G, Arfini S. Through the Newsfeed Glass: Rethinking Filter Bubbles and Echo Chambers. Philosophy & Technology 2022; 35:20. PMID: 35308101; PMCID: PMC8923337; DOI: 10.1007/s13347-021-00494-z.
Abstract
In this paper, we re-elaborate the notions of the filter bubble and the echo chamber by considering the limitations of human cognitive systems in everyday interactions and how people experience digital technologies. Researchers who have applied the concepts of filter bubbles and echo chambers in empirical investigations treat them as algorithmically caused systems that seclude the users of digital technologies from viewpoints and opinions that oppose theirs. However, a significant majority of empirical research has shown that users do find and interact with opposing views. Furthermore, we argue that the notion of the filter bubble overestimates the social impact of digital technologies in explaining social and political developments without considering the not-only-technological circumstances of online behavior and interaction. This motivates us to reconsider the notion's validity and re-elaborate it in light of existing epistemological theories that deal with the discomfort people experience when dealing with what they do not know. We therefore survey a series of philosophical reflections on the epistemic limitations of human cognitive systems. In particular, we discuss how knowledge and mere belief are phenomenologically indistinguishable and how the experience of having one's beliefs challenged is a cause of epistemic discomfort. We then argue, in contrast with Pariser's assumptions, that digital media users may tend to conform to their held viewpoints because of the "immediate" way they experience opposing viewpoints. Since people online experience others and their viewpoints as material features of digital environments, we maintain that this way of confronting contrasting opinions prompts users to reinforce their preexisting beliefs and attitudes.
Affiliation(s)
- Giacomo Figà Talamanca: Department of Ethics and Political Philosophy, Radboud University, Nijmegen, The Netherlands
- Selene Arfini: Department of Humanities - Philosophy Section, University of Pavia, Pavia, Italy; Computational Philosophy Laboratory, University of Pavia, Pavia, Italy
11. Multiple-Valued Logic Modelling for Agents Controlled via Optical Networks. Applied Sciences (Basel) 2022. DOI: 10.3390/app12031263.
Abstract
Methods of data verification are discussed that are intended for the remote control of autonomous mobile robotic agents via networks combining optical data links. The problem of trust servers is considered for position verification and position-based cryptography tasks. To obtain flexible quantum and classical verification procedures, one should use the collective interaction of agents and network nodes, including some elements of the blockchain. Multiple-valued logic functions defined within the discrete k-valued Allen–Givone algebra are proposed for the logically linked list of entries and the distributed ledger, which can be used for remote data verification and breakdown restoration in mobile agents with the help of partner network nodes. A distributed ledger scheme involves distant partners assigning random hash values, which can further be used as keys for access to a set of distributed data storages containing verification and restoration data. Multiple-valued logic procedures are simple and clear enough for high-dimensional logic modelling and for the design of combined quantum and classical protocols.
12. Hermann E. Leveraging Artificial Intelligence in Marketing for Social Good – An Ethical Perspective. Journal of Business Ethics 2022; 179:43-61. PMID: 34054170; PMCID: PMC8150633; DOI: 10.1007/s10551-021-04843-y.
Abstract
Artificial intelligence (AI) is (re)shaping strategy, activities, interactions, and relationships in business and specifically in marketing. The drawback of the substantial opportunities that AI systems and applications (will) provide in marketing is ethical controversy. Building on the literature on AI ethics, the authors systematically scrutinize the ethical challenges of deploying AI in marketing from a multi-stakeholder perspective. By revealing interdependencies and tensions between ethical principles, the authors shed light on the applicability of a purely principled, deontological approach to AI ethics in marketing. To reconcile some of these tensions and account for the AI-for-social-good perspective, the authors suggest how AI in marketing can be leveraged to promote societal and environmental well-being.
Affiliation(s)
- Erik Hermann: Wireless Systems, IHP - Leibniz-Institut für innovative Mikroelektronik, Frankfurt (Oder), Germany
13. Schreuter D, van der Putten P, Lamers MH. Trust Me on This One: Conforming to Conversational Assistants. Minds and Machines 2021. DOI: 10.1007/s11023-021-09581-8.
14. Tollon F, Naidoo K. On and beyond artifacts in moral relations: accounting for power and violence in Coeckelbergh’s social relationism. AI & Society 2021. DOI: 10.1007/s00146-021-01303-z.
Abstract
The ubiquity of technology in our lives and its culmination in artificial intelligence raise questions about its role in our moral considerations. In this paper, we address a moral concern in relation to technological systems, given their deep integration in our lives. Coeckelbergh develops a social-relational account, suggesting that it can point us toward a dynamic, historicised evaluation of moral concern. While agreeing with Coeckelbergh’s move away from grounding moral concern in the ontological properties of entities, we suggest that his account problematically upholds moral relativism. We suggest that the role of power, as described by Arendt and Foucault, is significant both in social relations and in curating moral possibilities. This produces a clearer picture of the relations at hand and opens up the possibility that relations may be deemed violent. Violence as such gives us some way of evaluating the morality of a social relation, moving away from Coeckelbergh’s seeming relativism while retaining his emphasis on social-historical moral precedent.
15.
Abstract
Social machines are systems formed by material and human elements interacting in a structured way. The use of digital platforms as mediators allows large numbers of humans to participate in such machines, which have interconnected AI and human components operating as a single system capable of highly sophisticated behaviour. Under certain conditions, such systems can be understood as autonomous goal-driven agents. Many popular online platforms can be regarded as instances of this class of agent. We argue that autonomous social machines provide a new paradigm for the design of intelligent systems, marking a new phase in AI. After describing the characteristics of goal-driven social machines, we discuss the consequences of their adoption, for the practice of artificial intelligence as well as for its regulation.
16. Heersmink R. Varieties of Artifacts: Embodied, Perceptual, Cognitive, and Affective. Topics in Cognitive Science 2021; 13:573-596. PMID: 34081417; DOI: 10.1111/tops.12549.
Abstract
The primary goal of this essay is to provide a comprehensive overview and analysis of the various relations between material artifacts and the embodied mind. A secondary goal is to identify some of the trends in the design and use of artifacts. First, based on their functional properties, I identify four categories of artifacts co-opted by the embodied mind, namely (a) embodied artifacts, (b) perceptual artifacts, (c) cognitive artifacts, and (d) affective artifacts. These categories can overlap, and so some artifacts are members of more than one category. I also identify some of the techniques (or skills) we use when interacting with artifacts. Identifying these categories of artifacts and techniques allows us to map the landscape of relations between embodied minds and the artifactual world. Second, having identified categories of artifacts and techniques, this essay outlines some of the trends in the design and use of artifacts, focusing on neuroprosthetics, brain-computer interfaces, and personalization algorithms nudging their users toward particular epistemic paths of information consumption.
Affiliation(s)
- Richard Heersmink: Department of Politics, Media & Philosophy, La Trobe University
17. Dennis MJ. Towards a Theory of Digital Well-Being: Reimagining Online Life After Lockdown. Science and Engineering Ethics 2021; 27:32. PMID: 34013496; PMCID: PMC8132735; DOI: 10.1007/s11948-021-00307-8.
Abstract
Global lockdowns during the COVID-19 pandemic have offered many people first-hand experience of how their daily online activities threaten their digital well-being. This article begins by critically evaluating the current approaches to digital well-being offered by ethicists of technology, NGOs, and social media corporations. My aim is to explain why digital well-being needs to be reimagined within a new conceptual paradigm. After this, I lay the foundations for such an alternative approach, one that shows how current digital well-being initiatives can be designed in more insightful ways. This new conceptual framework aims to transform how philosophers of technology think about this topic, as well as offering social media corporations practical ways to design their technologies in ways that will improve the digital well-being of users.
Affiliation(s)
- Matthew J Dennis: Philosophy & Ethics Capacity Group, Eindhoven University of Technology, Eindhoven, The Netherlands
18.
Abstract
In this paper I critically evaluate the value neutrality thesis regarding technology, and find it wanting. I then introduce the various ways in which artifacts can come to influence moral value, and our evaluation of moral situations and actions. Here, following van de Poel and Kroes, I introduce the idea of value sensitive design. Specifically, I show how, by virtue of their designed properties, artifacts may come to embody values. Such accounts, however, have several shortcomings. In agreement with Michael Klenk, I raise epistemic and metaphysical issues with respect to designed properties embodying value. The concept of an affordance, borrowed from ecological psychology, provides a more philosophically fruitful grounding for the potential way(s) in which artifacts might embody values. This is due to the way in which it incorporates key insights from perception more generally, and from how we determine possibilities for action in our environment specifically. The affordance account as it is presented by Klenk, however, is insufficient. I therefore argue, first, that we should understand affordances based on whether they are meaningful and, second, that we should grade them based on their force.
19. Dennis MJ. Digital well-being under pandemic conditions: catalysing a theory of online flourishing. Ethics and Information Technology 2021; 23:435-445. PMID: 33679213; PMCID: PMC7919629; DOI: 10.1007/s10676-021-09584-0.
Abstract
The COVID-19 pandemic has catalysed what may soon become a permanent digital transition in the domains of work, education, medicine, and leisure. This transition has also precipitated a spike in concern regarding our digital well-being. Prominent lobbying groups, such as the Center for Humane Technology (CHT), have responded to this concern. In April 2020, the CHT offered a set of 'Digital Well-Being Guidelines during the COVID-19 Pandemic.' These guidelines offer a rule-based approach to digital well-being, one which aims to mitigate the effects of moving much of our lives online. They follow a decade of growing interest in digital well-being: ethicists of technology have recently argued that character-based strategies and the redesign of online architecture have the potential to promote the digital well-being of online technology users. In this article, I evaluate (1) the CHT's rule-based approach, comparing it with (2) character-based strategies and (3) approaches to redesigning online architecture. I argue that all these approaches have some merit, but that each needs to contribute to an integrated approach to digital well-being in order to surmount the challenges of a post-COVID world in which we may well spend much of our lives online.
20.
Abstract
Autonomous mechanisms have been proposed to regulate certain aspects of society and are already being used to regulate business organisations. We take seriously recent proposals for algorithmic regulation of society, and we identify the existing technologies that can be used to implement them, most of them originally introduced in business contexts. We build on the notion of ‘social machine’ and we connect it to various ongoing trends and ideas, including crowdsourced task-work, social compilers, mechanism design, reputation management systems, and social scoring. After showing how all the building blocks of algorithmic regulation are already well in place, we discuss the possible implications for human autonomy and social order. The main contribution of this paper is to identify convergent social and technical trends that are leading towards social regulation by algorithms, and to discuss the possible social, political, and ethical consequences of taking this path.
21. Burr C, Taddeo M, Floridi L. The Ethics of Digital Well-Being: A Thematic Review. Science and Engineering Ethics 2020; 26:2313-2343. PMID: 31933119; PMCID: PMC7417400; DOI: 10.1007/s11948-020-00175-8.
Abstract
This article presents the first thematic review of the literature on the ethical issues concerning digital well-being. The term 'digital well-being' refers to the impact of digital technologies on what it means to live a life that is good for a human being. The review explores the existing literature on the ethics of digital well-being with the goal of mapping the current debate and identifying open questions for future research. It identifies major issues related to several key social domains: healthcare, education, governance and social development, and media and entertainment. It also highlights three broader themes: positive computing, personalised human-computer interaction, and autonomy and self-determination. The review argues that these three themes will be central to ongoing discussions and research by showing how they can be used to identify open questions related to the ethics of digital well-being.
Affiliation(s)
- Christopher Burr: Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford, OX1 3JS, UK
- Mariarosaria Taddeo: Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford, OX1 3JS, UK; The Alan Turing Institute, 96 Euston Road, London, NW1 2DB, UK
- Luciano Floridi: Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford, OX1 3JS, UK; The Alan Turing Institute, 96 Euston Road, London, NW1 2DB, UK
22.
Abstract
This article presents the first systematic analysis of the ethical challenges posed by recommender systems through a literature review. The article identifies six areas of concern and maps them onto a proposed taxonomy of different kinds of ethical impact. The analysis uncovers a gap in the literature: currently, user-centred approaches do not consider the interests of a variety of other stakeholders—as opposed to just the receivers of a recommendation—in assessing the ethical impacts of a recommender system.