1. Hutler B, Rieder TN, Mathews DJH, Handelman DA, Greenberg AM. Designing robots that do no harm: understanding the challenges of Ethics for Robots. AI Ethics 2023:1-9. PMID: 37360148; PMCID: PMC10108783; DOI: 10.1007/s43681-023-00283-8.
Abstract
This article describes key challenges in creating an ethics "for" robots. Robot ethics is not only a matter of the effects caused by robotic systems or the uses to which they may be put, but also the ethical rules and principles that these systems ought to follow, or what we call "Ethics for Robots." We suggest that the Principle of Nonmaleficence, or "do no harm," is one of the basic elements of an ethics for robots, especially robots that will be used in a healthcare setting. We argue, however, that the implementation of even this basic principle will raise significant challenges for robot designers. In addition to technical challenges, such as ensuring that robots are able to detect salient harms and dangers in the environment, designers will need to determine an appropriate sphere of responsibility for robots and to specify which of various types of harms must be avoided or prevented. These challenges are amplified by the fact that the robots we are currently able to design possess a form of semi-autonomy that differs from that of other, more familiar semi-autonomous agents such as animals or young children. In short, robot designers must identify and overcome the key challenges of an ethics for robots before they may ethically utilize robots in practice.
Affiliation(s)
- Brian Hutler
- Department of Philosophy, Temple University, 1114 Polett Walk, Philadelphia, PA 19122 USA
- Travis N. Rieder
- Berman Institute of Bioethics, Johns Hopkins University, 1809 Ashland Ave, Baltimore, MD 21205 USA
- Debra J. H. Mathews
- Berman Institute of Bioethics, Johns Hopkins University, 1809 Ashland Ave, Baltimore, MD 21205 USA
- Department of Genetic Medicine, Johns Hopkins University School of Medicine, 733 N. Broadway, Baltimore, MD 21205 USA
- David A. Handelman
- Applied Physics Laboratory, Johns Hopkins University, 11100 Johns Hopkins Road, Laurel, MD 20723 USA
- Ariel M. Greenberg
- Applied Physics Laboratory, Johns Hopkins University, 11100 Johns Hopkins Road, Laurel, MD 20723 USA
2. Génova G, Moreno V, González MR. Machine Ethics: Do Androids Dream of Being Good People? Sci Eng Ethics 2023; 29:10. PMID: 36952064; PMCID: PMC10036453; DOI: 10.1007/s11948-023-00433-5.
Abstract
Is ethics a computable function? Can machines learn ethics like humans do? If teaching consists in no more than programming, training, indoctrinating… and if ethics is merely following a code of conduct, then yes, we can teach ethics to algorithmic machines. But if ethics is not merely about following a code of conduct or about imitating the behavior of others, then an approach based on computing outcomes, and on the reduction of ethics to the compilation and application of a set of rules, either a priori or learned, misses the point. Our intention is not to solve the technical problem of machine ethics, but to learn something about human ethics, and its rationality, by reflecting on the ethics that can and should be implemented in machines. Any machine ethics implementation will have to face a number of fundamental or conceptual problems, which in the end refer to philosophical questions, such as: what is a human being (or more generally, what is a worthy being); what is human intentional acting; and how are intentional actions and their consequences morally evaluated. We are convinced that a proper understanding of ethical issues in AI can teach us something valuable about ourselves, and what it means to lead a free and responsible ethical life, that is, being good people beyond merely "following a moral code". In the end we believe that rationality must be seen to involve more than just computing, and that value rationality is beyond numbers. Such an understanding is a required step to recovering a renewed rationality of ethics, one that is urgently needed in our highly technified society.
Affiliation(s)
- Gonzalo Génova
- Computer Science and Engineering Department, Universidad Carlos III de Madrid, Avda. Universidad 30, 28911 Leganés, Madrid, Spain
- Valentín Moreno
- Computer Science and Engineering Department, Universidad Carlos III de Madrid, Avda. Universidad 30, 28911 Leganés, Madrid, Spain
- M. Rosario González
- Department of Educational Studies, Universidad Complutense de Madrid, Avda. Rector Royo Vilanova S/N, 28040 Madrid, Spain
3. Coggins TN, Steinert S. The seven troubles with norm-compliant robots. Ethics Inf Technol 2023; 25:29. PMID: 37123285; PMCID: PMC10130815; DOI: 10.1007/s10676-023-09701-1.
Abstract
Many researchers from robotics, machine ethics, and adjacent fields seem to assume that norms represent good behavior that social robots should learn to benefit their users and society. We would like to complicate this view and present seven key troubles with norm-compliant robots: (1) norm biases, (2) paternalism, (3) tyrannies of the majority, (4) pluralistic ignorance, (5) paths of least resistance, (6) outdated norms, and (7) technologically-induced norm change. Because discussions of why norm-compliant robots can be problematic are noticeably absent from the robot and machine ethics literature, this paper fills an important research gap. We argue that it is critical for researchers to take these issues into account if they wish to make norm-compliant robots.
Affiliation(s)
- Tom N. Coggins
- Department of Values, Technology and Innovation, Faculty of Technology, Policy and Management, Delft University of Technology, Delft, The Netherlands
- Steffen Steinert
- Department of Values, Technology and Innovation, Faculty of Technology, Policy and Management, Delft University of Technology, Delft, The Netherlands
4. Ramanayake R, Wicke P, Nallur V. Immune moral models? Pro-social rule breaking as a moral enhancement approach for ethical AI. AI Soc 2022; 38:801-813. PMID: 35645466; PMCID: PMC9125349; DOI: 10.1007/s00146-022-01478-z.
Abstract
We are moving towards a future where Artificial Intelligence (AI) based agents make many decisions on behalf of humans. From healthcare decision-making to social media censoring, these agents face problems, and make decisions with ethical and societal implications. Ethical behaviour is a critical characteristic that we would like in a human-centric AI. A common observation in human-centric industries, like the service industry and healthcare, is that their professionals tend to break rules, if necessary, for pro-social reasons. This behaviour among humans is defined as pro-social rule breaking. To make AI agents more human-centric, we argue that there is a need for a mechanism that helps AI agents identify when to break rules set by their designers. To understand when AI agents need to break rules, we examine the conditions under which humans break rules for pro-social reasons. In this paper, we present a study that introduces a ‘vaccination strategy dilemma’ to human participants and analyzes their response. In this dilemma, one needs to decide whether they would distribute COVID-19 vaccines only to members of a high-risk group (follow the enforced rule) or, in selected cases, administer the vaccine to a few social influencers (break the rule), which might yield an overall greater benefit to society. The results of the empirical study suggest a relationship between stakeholder utilities and pro-social rule breaking (PSRB), which neither deontological nor utilitarian ethics completely explain. Finally, the paper discusses the design characteristics of an ethical agent capable of PSRB and the future research directions on PSRB in the AI realm. We hope that this will inform the design of future AI agents, and their decision-making behaviour.
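The stakeholder-utility framing above lends itself to a simple computational reading. The sketch below is our illustration rather than the authors' model; the stakeholder groups, utility numbers, and threshold are hypothetical. It shows an agent that follows its designer's rule by default and deviates only when the expected gain in weighted stakeholder utility is large enough to count as a pro-social exception.

```python
def expected_utility(outcome: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of stakeholder utilities for one course of action."""
    return sum(weights[group] * utility for group, utility in outcome.items())

def decide(rule_outcome: dict[str, float],
           break_outcome: dict[str, float],
           weights: dict[str, float],
           psrb_threshold: float = 0.2) -> str:
    """Follow the enforced rule unless breaking it yields a clearly larger social benefit.

    The threshold (hypothetical) encodes how strong the pro-social case must be
    before the agent overrides its designer's rule; a pure utilitarian would set
    it to 0, a strict deontologist to infinity.
    """
    gain = expected_utility(break_outcome, weights) - expected_utility(rule_outcome, weights)
    return "break rule" if gain > psrb_threshold else "follow rule"

# Hypothetical vaccination dilemma: outcome utilities per stakeholder group (0-1 scale).
weights = {"high_risk_group": 0.6, "wider_society": 0.4}
follow = {"high_risk_group": 0.9, "wider_society": 0.3}   # vaccinate only the high-risk group
break_ = {"high_risk_group": 0.8, "wider_society": 0.7}   # divert a few doses to influencers
print(decide(follow, break_, weights))  # -> "follow rule" (gain 0.10 is below the 0.2 threshold)
```

The study's finding that neither deontological nor utilitarian ethics fully explains participants' choices corresponds, in this toy model, to the threshold sitting somewhere strictly between zero and infinity.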
Affiliation(s)
- Philipp Wicke
- School of Computer Science, University College Dublin, Dublin, Ireland
- Vivek Nallur
- School of Computer Science, University College Dublin, Dublin, Ireland
5
Abstract
Recent advancements in artificial intelligence (AI) have fueled widespread academic discourse on the ethics of AI within and across a diverse set of disciplines. One notable subfield of AI ethics is machine ethics, which seeks to implement ethical considerations into AI systems. However, since different research efforts within machine ethics have discipline-specific concepts, practices, and goals, the resulting body of work is beset by conflict and confusion rather than fruitful synergies. The aim of this paper is to explore ways to alleviate these issues, both on a practical and theoretical level of analysis. First, we describe two approaches to machine ethics: the philosophical approach and the engineering approach, and show how tensions between the two arise due to discipline-specific practices and aims. Using the concept of disciplinary capture, we then discuss potential promises and pitfalls of cross-disciplinary collaboration. Drawing on recent work in philosophy of science, we finally describe how metacognitive scaffolds can be used to avoid epistemological obstacles and foster innovative collaboration in AI ethics in general and machine ethics in particular.
Affiliation(s)
- Jakob Stenseke
- Department of Philosophy, Lund University, Lund, Sweden.
6. Tigard DW. Technological Answerability and the Severance Problem: Staying Connected by Demanding Answers. Sci Eng Ethics 2021; 27:59. PMID: 34427804; PMCID: PMC8383242; DOI: 10.1007/s11948-021-00334-5.
Abstract
Artificial intelligence (AI) and robotic technologies have become nearly ubiquitous. In some ways, the developments have likely helped us, but in other ways sophisticated technologies set back our interests. Among the latter sort is what has been dubbed the 'severance problem': the idea that technologies sever our connection to the world, a connection which is necessary for us to flourish and live meaningful lives. I grant that the severance problem is a threat we should mitigate and I ask: how can we stave it off? In particular, the fact that some technologies exhibit behavior that is unclear to us seems to constitute a kind of severance. Building upon contemporary work on moral responsibility, I argue for a mechanism I refer to as 'technological answerability', namely the capacity to recognize human demands for answers and to respond accordingly. By designing select devices (such as robotic assistants and personal AI programs) for increased answerability, we see at least one way of satisfying our demands for answers and thereby retaining our connection to a world increasingly occupied by technology.
Affiliation(s)
- Daniel W Tigard
- Institute for History and Ethics of Medicine, Technical University of Munich, Ismaninger Str. 22, 81675, Munich, Germany.
7. Herzog C. Three Risks That Caution Against a Premature Implementation of Artificial Moral Agents for Practical and Economical Use. Sci Eng Ethics 2021; 27:3. PMID: 33496885; PMCID: PMC7838071; DOI: 10.1007/s11948-021-00283-z.
Abstract
In the present article, I will advocate caution against developing artificial moral agents (AMAs) based on the notion that the utilization of preliminary forms of AMAs will potentially negatively feed back on the human social system and on human moral thought itself and its value, e.g., by reinforcing social inequalities, diminishing the breadth of employed ethical arguments and the value of character. While scientific investigations into AMAs pose no direct significant threat, I will argue against their premature utilization for practical and economical use. I will base my arguments on two thought experiments. The first thought experiment deals with the potential to generate a replica of an individual's moral stances with the purpose of increasing what I term 'moral efficiency'. Hence, as a first risk, an unregulated utilization of premature AMAs in a neoliberal capitalist system is likely to disadvantage those who cannot afford 'moral replicas' and further reinforce social inequalities. The second thought experiment deals with the idea of a 'moral calculator'. As a second risk, I will argue that, even as a device equally accessible to all and aimed at augmenting human moral deliberation, 'moral calculators' as preliminary forms of AMAs are likely to diminish the breadth and depth of concepts employed in moral arguments. Again, I base this claim on the idea that the current most dominant economic system rewards increases in productivity. However, increases in efficiency will mostly stem from relying on the outputs of 'moral calculators' without further scrutiny. Premature AMAs will cover only a limited scope of moral argumentation and, hence, over-reliance on them will narrow human moral thought. In addition, and as the third risk, I will argue that an increased disregard of the interior of a moral agent may ensue, a trend that can already be observed in the literature.
Affiliation(s)
- Christian Herzog
- Ethical Innovation Hub, Institute for Electrical Engineering in Medicine, University of Lübeck, Ratzeburger Allee 160, 23562, Lübeck, Germany.
8. Chomanski B. Should Moral Machines be Banned? A Commentary on van Wynsberghe and Robbins "Critiquing the Reasons for Making Artificial Moral Agents". Sci Eng Ethics 2020; 26:3469-3481. PMID: 32876909; DOI: 10.1007/s11948-020-00255-9.
Abstract
In a stimulating recent article for this journal (van Wynsberghe and Robbins, Sci Eng Ethics 25(3):719-735, 2019, https://doi.org/10.1007/s11948-018-0030-8), Aimee van Wynsberghe and Scott Robbins (hereafter, vW&R) mount a serious critique of a number of reasons advanced in favor of building artificial moral agents (AMAs). In light of their critique, vW&R make two recommendations: they advocate a moratorium on the commercialization of AMAs and suggest that the argumentative burden is now shifted onto the proponents of AMAs to come up with new reasons for building them. This commentary aims to explore the implications vW&R draw from their critique. In particular, it will raise objections to the moratorium argument and propose a presumptive case for commercializing AMAs.
Affiliation(s)
- Bartek Chomanski
- Rotman Institute of Philosophy, Western University, 1151 Richmond Street North, London, ON, Canada.
9. Evans K, de Moura N, Chauvier S, Chatila R, Dogan E. Ethical Decision Making in Autonomous Vehicles: The AV Ethics Project. Sci Eng Ethics 2020; 26:3285-3312. PMID: 33048325; PMCID: PMC7755871; DOI: 10.1007/s11948-020-00272-8.
Abstract
The ethics of autonomous vehicles (AV) has received a great amount of attention in recent years, specifically in regard to their decisional policies in accident situations in which human harm is a likely consequence. Starting from the assumption that human harm is unavoidable, many authors have developed differing accounts of what morality requires in these situations. In this article, a strategy for AV decision-making is proposed, the Ethical Valence Theory, which paints AV decision-making as a type of claim mitigation: different road users hold different moral claims on the vehicle's behavior, and the vehicle must mitigate these claims as it makes decisions about its environment. Using the context of autonomous vehicles, the harm produced by an action and the uncertainties connected to it are quantified and accounted for through deliberation, resulting in an ethical implementation coherent with reality. The goal of this approach is not to define how moral theory requires vehicles to behave, but rather to provide a computational approach that is flexible enough to accommodate a number of 'moral positions' concerning what morality demands and what road users may expect, offering an evaluation tool for the social acceptability of an autonomous vehicle's ethical decision making.
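The claim-mitigation idea can be read computationally. The following toy sketch is ours, not the authors' implementation; the road users, valence weights, probabilities, and candidate maneuvers are invented. It scores each candidate action by the expected, valence-weighted harm it imposes on each road user and picks the action whose strongest residual claim is weakest.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    """A road user's moral claim against one candidate action (illustrative only)."""
    user: str        # e.g. "pedestrian", "passenger"
    valence: float   # moral weight of the user's claim (higher = stronger)
    p_harm: float    # estimated probability that the action harms this user
    severity: float  # estimated severity of that harm, 0-1

def claim_strength(c: Claim) -> float:
    """Expected, valence-weighted harm: how strongly the claim counts against the action."""
    return c.valence * c.p_harm * c.severity

def choose_action(candidates: dict[str, list[Claim]]) -> str:
    """Maximin-style mitigation: pick the action whose worst-affected claimant fares best."""
    return min(candidates, key=lambda a: max(claim_strength(c) for c in candidates[a]))

# Hypothetical scenario with two maneuvers and two affected road users.
actions = {
    "brake":  [Claim("pedestrian", 1.0, 0.30, 0.9), Claim("passenger", 0.8, 0.10, 0.4)],
    "swerve": [Claim("pedestrian", 1.0, 0.05, 0.9), Claim("passenger", 0.8, 0.40, 0.6)],
}
print(choose_action(actions))  # -> "swerve" under these made-up numbers
```

Whether the vehicle should minimize the strongest claim, the sum of claims, or something else entirely is exactly the kind of 'moral position' the theory is designed to leave adjustable.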
Affiliation(s)
- Katherine Evans
- Institut VEDECOM, 21 bis Allée des Marroniers, 78000 Versailles, France
- Sciences, Normes, Démocratie, Sorbonne Université, 1 Rue Victor Cousin, 75005 Paris, France
- Nelson de Moura
- Institut VEDECOM, 21 bis Allée des Marroniers, 78000 Versailles, France
- ISIR, Sorbonne Université, 4 Place Jussieu, 75005 Paris, France
- Stéphane Chauvier
- Sciences, Normes, Démocratie, Sorbonne Université, 1 Rue Victor Cousin, 75005 Paris, France
- Raja Chatila
- ISIR, Sorbonne Université, 4 Place Jussieu, 75005 Paris, France
- Ebru Dogan
- Institut VEDECOM, 21 bis Allée des Marroniers, 78000 Versailles, France
10
Abstract
This paper surveys the state-of-the-art in machine ethics, that is, considerations of how to implement ethical behaviour in robots, unmanned autonomous vehicles, or software systems. The emphasis is on covering the breadth of ethical theories being considered by implementors, as well as the implementation techniques being used. There is no consensus on which ethical theory is best suited for any particular domain, nor is there any agreement on which technique is best placed to implement a particular theory. Another unresolved problem in these implementations of ethical theories is how to objectively validate the implementations. The paper discusses the dilemmas being used as validating 'whetstones' and whether any alternative validation mechanism exists. Finally, it speculates that an intermediate step of creating domain-specific ethics might be a possible stepping stone towards creating machines that exhibit ethical behaviour.
Affiliation(s)
- Vivek Nallur
- School of Computer Science, University College Dublin, Dublin, D04 V1W8, Republic of Ireland.
11. Cervantes S, López S, Cervantes JA. Toward ethical cognitive architectures for the development of artificial moral agents. Cogn Syst Res 2020; 64:117-125. PMID: 32901198; PMCID: PMC7470787; DOI: 10.1016/j.cogsys.2020.08.010.
Abstract
New technologies based on artificial agents promise to change the next generation of autonomous systems and therefore our interaction with them. Systems based on artificial agents, such as self-driving cars and social robots, are examples of this technology that seeks to improve the quality of people's lives. Cognitive architectures aim to create some of the most challenging artificial agents, commonly known as bio-inspired cognitive agents. This type of artificial agent seeks to embody human-like intelligence in order to operate and solve problems in the real world as humans do. Moreover, some cognitive architectures such as Soar, LIDA, ACT-R, and iCub try to be fundamental architectures for the Artificial General Intelligence model of human cognition. Therefore, researchers in the machine ethics field face ethical questions related to what mechanisms an artificial agent must have for making moral decisions in order to ensure that its actions are always ethically right. This paper aims to identify some challenges that researchers need to solve in order to create ethical cognitive architectures. These cognitive architectures are characterized by the capacity to endow artificial agents with appropriate mechanisms to exhibit explicit ethical behavior. Additionally, we offer some reasons to develop ethical cognitive architectures. We hope that this study can be useful to guide future research on ethical cognitive architectures.
Affiliation(s)
- Salvador Cervantes
- Department of Computer Science and Engineering, Universidad de Guadalajara, Ameca, P.C. 46600, Mexico
- Sonia López
- Department of Computer Science and Engineering, Universidad de Guadalajara, Ameca, P.C. 46600, Mexico
- José-Antonio Cervantes
- Department of Computer Science and Engineering, Universidad de Guadalajara, Ameca, P.C. 46600, Mexico
12. Bradwell HL, Winnington R, Thill S, Jones RB. Ethical perceptions towards real-world use of companion robots with older people and people with dementia: survey opinions among younger adults. BMC Geriatr 2020; 20:244. PMID: 32664904; PMCID: PMC7359562; DOI: 10.1186/s12877-020-01641-5.
Abstract
BACKGROUND Use of companion robots may reduce older people's depression, loneliness and agitation. This benefit has to be weighed against possible ethical concerns raised by philosophers in the field around issues such as deceit, infantilisation, reduced human contact and accountability. Research directly assessing the prevalence of such concerns among relevant stakeholders, however, remains limited, even though their views clearly have relevance in the debate. For example, any discrepancy between ethicists and stakeholders might in itself be a relevant ethical consideration, while concerns perceived by stakeholders might identify immediate barriers to successful implementation. METHODS We surveyed 67 younger adults after they had live interactions with companion robot pets while attending an exhibition on intimacy, including the context of intimacy for older people. We asked about their perceptions of ethical issues. Participants generally had older family members, some with dementia. RESULTS Most participants (40/67, 60%) reported having no ethical concerns towards companion robot use when surveyed with an open question. Twenty (30%) had some concern, the most common being reduced human contact (10%), followed by deception (6%). However, when choosing from a list, the issue perceived as most concerning was equality of access to devices based on socioeconomic factors (m = 4.72 on a 1-7 scale), exceeding more commonly hypothesized issues such as infantilising (m = 3.45) and deception (m = 3.44). The lowest-scoring issues were potential for injury or harm (m = 2.38) and privacy concerns (m = 2.17). Over half (39/67, 58%) would have bought a device for an older relative. Cost was a common reason for choosing not to purchase a device. CONCLUSIONS Although this was a relatively small study, we demonstrated discrepancies between the ethical concerns raised in the philosophical literature and those of the people likely to make the decision to buy a companion robot. Such discrepancies, between philosophers and 'end-users' in the care of older people, and in methods of ascertainment, are worthy of further empirical research and discussion. Our participants were more concerned about economic issues and equality of access, an important consideration for those involved with care of older people. On the other hand, the concerns proposed by ethicists seem unlikely to be a barrier to the use of companion robots.
Affiliation(s)
- Hannah L Bradwell
- Center for Health Technology, University of Plymouth, Plymouth, Devon, UK.
- Rhona Winnington
- Department of Nursing, Auckland University of Technology, 90 Akoranga Drive, Auckland, New Zealand
- Serge Thill
- Donders Centre for Cognition, Radboud University, Nijmegen, 6525 HR, The Netherlands
- Ray B Jones
- Center for Health Technology, University of Plymouth, Plymouth, Devon, UK
13. Cervantes JA, López S, Rodríguez LF, Cervantes S, Cervantes F, Ramos F. Artificial Moral Agents: A Survey of the Current Status. Sci Eng Ethics 2020; 26:501-532. PMID: 31721023; DOI: 10.1007/s11948-019-00151-x.
Abstract
One of the objectives in the field of artificial intelligence for some decades has been the development of artificial agents capable of coexisting in harmony with people and other systems. The computing research community has made efforts to design artificial agents capable of doing tasks the way people do, tasks requiring cognitive mechanisms such as planning, decision-making, and learning. The application domains of such software agents are evident nowadays. Humans are experiencing the inclusion of artificial agents in their environment as unmanned vehicles, intelligent houses, and humanoid robots capable of caring for people. In this context, research in the field of machine ethics has become more than a hot topic. Machine ethics focuses on developing ethical mechanisms for artificial agents to be capable of engaging in moral behavior. However, there are still crucial challenges in the development of truly Artificial Moral Agents. This paper aims to show the current status of Artificial Moral Agents by analyzing models proposed over the past two decades. As a result of this review, a taxonomy to classify Artificial Moral Agents according to the strategies and criteria used to deal with ethical problems is proposed. The presented review aims to illustrate (1) the complexity of designing and developing ethical mechanisms for this type of agent, and (2) that there is a long way to go (from a technological perspective) before this type of artificial agent can replace human judgment in difficult, surprising or ambiguous moral situations.
Affiliation(s)
- José-Antonio Cervantes
- Department of Computer Science and Engineering, Centro Universitario de los Valles, Universidad de Guadalajara, Carretera Guadalajara - Ameca Km. 45.5, 46600, Ameca, Mexico.
- Sonia López
- Department of Computer Science and Engineering, Centro Universitario de los Valles, Universidad de Guadalajara, Carretera Guadalajara - Ameca Km. 45.5, 46600, Ameca, Mexico
- Salvador Cervantes
- Department of Computer Science and Engineering, Centro Universitario de los Valles, Universidad de Guadalajara, Carretera Guadalajara - Ameca Km. 45.5, 46600, Ameca, Mexico
- Francisco Cervantes
- Department of Electronics, Systems and Informatics, Instituto Tecnológico y de Estudios Superiores de Occidente, Tlaquepaque, Mexico
- Félix Ramos
- Department of Computer Science, Centro de Investigación y de Estudios Avanzados del Instituto Politécnico Nacional, Guadalajara, Mexico
14. van Wynsberghe A, Robbins S. Critiquing the Reasons for Making Artificial Moral Agents. Sci Eng Ethics 2019; 25(3):719-735. DOI: 10.1007/s11948-018-0030-8.
Abstract
Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents (AMA). Reasons often given for developing AMAs are: the prevention of harm, the necessity for public trust, the prevention of immoral use, the claim that such machines are better moral reasoners than humans, and the claim that building these machines would lead to a better understanding of human morality. Although some scholars have challenged the very initiative to develop AMAs, what is currently missing from the debate is a closer examination of the reasons offered by machine ethicists to justify the development of AMAs. This closer examination is especially needed because of the amount of funding currently being allocated to the development of AMAs (from funders like Elon Musk) coupled with the amount of attention researchers and industry leaders receive in the media for their efforts in this direction. The stakes in this debate are high because moral robots would make demands on society and would require answers to a host of pending questions about what counts as an AMA and whether such machines are morally responsible for their behavior. This paper shifts the burden of proof back to the machine ethicists, demanding that they give good reasons to build AMAs. The paper argues that until this is done, the development of commercially available AMAs should not proceed further.
Affiliation(s)
- Scott Robbins
- Technical University of Delft, Jaffalaan 5, 2628 BX Delft, Netherlands
15. Garner TA, Powell WA, Carr V. Virtual carers for the elderly: A case study review of ethical responsibilities. Digit Health 2016; 2:2055207616681173. PMID: 29942577; PMCID: PMC6001203; DOI: 10.1177/2055207616681173.
Abstract
Intelligent digital healthcare systems are becoming an increasingly considered approach to facilitating continued support of our ageing population. Within the remit of such digital systems, 'Virtual Carer' is one of the more consistent terms that refers to an artificial system capable of providing various assistive living and communicative functionalities, embodied within a graphical avatar displayed on a screen. As part of the RITA (Responsive Interactive Advocate) project – a proof of concept for one such virtual carer system – a series of semi-structured discussions with various stakeholders was conducted. This paper presents the results of these discussions, which highlight three key concerns: data security, the replacement of human/physical care, and always acting in the user's best interest. These ethical concerns and designer responsibilities are identified as highly relevant to both individuals and groups that may, in the future, utilise a system like RITA, either as a care receiver or provider. This paper also presents some initial, theoretical safeguard processes relevant to these key concerns.
Affiliation(s)
- Tom A Garner
- School of Creative Technologies, University of Portsmouth, UK
- Wendy A Powell
- School of Creative Technologies, University of Portsmouth, UK
16
Abstract
Recently, there has been a resurgence of interest in general, comprehensive models of human cognition. Such models aim to explain higher-order cognitive faculties, such as deliberation and planning. Given a computational representation, the validity of these models can be tested in computer simulations such as software agents or embodied robots. The push to implement computational models of this kind has created the field of artificial general intelligence (AGI). Moral decision making is arguably one of the most challenging tasks for computational approaches to higher-order cognition. The need for increasingly autonomous artificial agents to factor moral considerations into their choices and actions has given rise to another new field of inquiry variously known as Machine Morality, Machine Ethics, Roboethics, or Friendly AI. In this study, we discuss how LIDA, an AGI model of human cognition, can be adapted to model both affective and rational features of moral decision making. Using the LIDA model, we will demonstrate how moral decisions can be made in many domains using the same mechanisms that enable general decision making. Comprehensive models of human cognition typically aim for compatibility with recent research in the cognitive and neural sciences. Global workspace theory, proposed by the neuropsychologist Bernard Baars (1988), is a highly regarded model of human cognition that is currently being computationally instantiated in several software implementations. LIDA (Franklin, Baars, Ramamurthy, & Ventura, 2005) is one such computational implementation. LIDA is both a set of computational tools and an underlying model of human cognition, which provides mechanisms that are capable of explaining how an agent's selection of its next action arises from bottom-up collection of sensory data and top-down processes for making sense of its current situation. We will describe how the LIDA model helps integrate emotions into the human decision-making process, and we will elucidate a process whereby an agent can work through an ethical problem to reach a solution that takes account of ethically relevant factors.
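To make the action-selection story concrete, here is a deliberately simplified sketch of a global-workspace-style cognitive cycle. It is a didactic caricature under our own assumptions, not LIDA's actual codebase; the percepts, affect bonuses, and action schemes are invented, and a care-robot scenario is used only for illustration.

```python
import random

def cognitive_cycle(percepts: dict[str, float],
                    schemes: dict[str, dict[str, float]],
                    affect: dict[str, float]) -> str:
    """One simplified global-workspace cycle.

    1. Bottom-up: the most salient percept wins the competition and is 'broadcast'.
    2. Top-down: action schemes relevant to the broadcast content are recruited.
    3. Selection: candidates are scored by relevance plus an affective bonus, so
       emotionally charged (e.g. morally alarming) options receive extra weight.
    """
    broadcast = max(percepts, key=percepts.get)
    candidates = {name: relevance for name, relevance in schemes.items() if broadcast in relevance}
    scores = {name: relevance[broadcast] + affect.get(name, 0.0)
              for name, relevance in candidates.items()}
    return max(scores, key=scores.get) if scores else random.choice(list(schemes))

# Hypothetical inputs: a care robot notices that a patient has fallen.
percepts = {"patient_on_floor": 0.9, "medication_due": 0.4}
schemes = {
    "call_for_help":    {"patient_on_floor": 0.8},
    "assist_patient":   {"patient_on_floor": 0.7},
    "fetch_medication": {"medication_due": 0.9},
}
affect = {"call_for_help": 0.3, "assist_patient": 0.1}  # alarm boosts help-seeking options
print(cognitive_cycle(percepts, schemes, affect))       # -> "call_for_help"
```

In the spirit of the abstract's claim that moral decisions use the same mechanisms as general decision making, ethically relevant factors enter this toy loop like everything else: they shape which percepts win the competition and how strongly candidate actions are felt, rather than being handled by a separate moral module.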
Affiliation(s)
- Wendell Wallach
- Interdisciplinary Center for Bioethics, Yale University
- Institute for Intelligent Systems, The University of Memphis
- Cognitive Science Program, Indiana University