1. Awad E, Levine S, Anderson M, Anderson SL, Conitzer V, Crockett MJ, Everett JAC, Evgeniou T, Gopnik A, Jamison JC, Kim TW, Liao SM, Meyer MN, Mikhail J, Opoku-Agyemang K, Borg JS, Schroeder J, Sinnott-Armstrong W, Slavkovik M, Tenenbaum JB. Computational ethics. Trends Cogn Sci 2022; 26:388-405. [PMID: 35365430] [DOI: 10.1016/j.tics.2022.02.009] [Received: 01/18/2021] [Revised: 02/14/2022] [Accepted: 02/25/2022]
Abstract
Technological advances are enabling roles for machines that present novel ethical challenges. The study of 'AI ethics' has emerged to confront these challenges, and connects perspectives from philosophy, computer science, law, and economics. Less represented in these interdisciplinary efforts is the perspective of cognitive science. We propose a framework - computational ethics - that specifies how the ethical challenges of AI can be partially addressed by incorporating the study of human moral decision-making. The driver of this framework is a computational version of reflective equilibrium (RE), an approach that seeks coherence between considered judgments and governing principles. The framework has two goals: (i) to inform the engineering of ethical AI systems, and (ii) to characterize human moral judgment and decision-making in computational terms. Working jointly towards these two goals will create the opportunity to integrate diverse research questions, bring together multiple academic communities, uncover new interdisciplinary research topics, and shed light on centuries-old philosophical questions.
Affiliations
- Edmond Awad: Department of Economics, University of Exeter, Exeter, UK; Institute for Data Science and AI, University of Exeter, Exeter, UK; Center for Humans and Machines, Max-Planck Institute for Human Development, Berlin, Germany.
- Sydney Levine: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology (MIT), Cambridge, MA, USA; Department of Psychology, Harvard University, Cambridge, MA, USA.
- Michael Anderson: Department of Computer Science, University of Hartford, West Hartford, CT, USA.
- Vincent Conitzer: Department of Computer Science, Duke University, Durham, NC, USA; Department of Economics, Duke University, Durham, NC, USA; Department of Philosophy, Duke University, Durham, NC, USA; Institute for Ethics in AI, University of Oxford, Oxford, UK.
- M J Crockett: Department of Psychology, Yale University, New Haven, CT, USA.
- Alison Gopnik: Department of Psychology, University of California, Berkeley, CA, USA.
- Julian C Jamison: Department of Economics, University of Exeter, Exeter, UK; Global Priorities Institute, Oxford University, Oxford, UK.
- Tae Wan Kim: Ethics Group, Tepper School of Business, Carnegie Mellon University, Pittsburgh, PA, USA.
- S Matthew Liao: Center for Bioethics, New York University, New York, NY, USA.
- Michelle N Meyer: Center for Translational Bioethics and Health Care Policy, Geisinger Health System, Danville, PA, USA; Steele Institute for Health Innovation, Geisinger Health System, Danville, PA, USA; Geisinger Commonwealth School of Medicine, Scranton, PA, USA.
- John Mikhail: Georgetown University Law Center, Washington, DC, USA.
- Kweku Opoku-Agyemang: International Growth Centre, London School of Economics, London, UK; Machine Learning X Doing, Toronto, ON, Canada; Development Economics X, Toronto, ON, Canada.
- Jana Schaich Borg: Social Science Research Institute, Duke University, Durham, NC, USA; Duke Institute for Brain Sciences, Duke University, Durham, NC, USA.
- Juliana Schroeder: Haas School of Business, University of California, Berkeley, CA, USA.
- Walter Sinnott-Armstrong: Department of Philosophy, Duke University, Durham, NC, USA; Duke Institute for Brain Sciences, Duke University, Durham, NC, USA; Kenan Institute for Ethics, Duke University, Durham, NC, USA.
- Marija Slavkovik: Department of Information Science and Media Studies, University of Bergen, Bergen, Norway.
- Josh B Tenenbaum: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology (MIT), Cambridge, MA, USA; Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA; Center for Brains, Minds, and Machines, MIT, Cambridge, MA, USA.
2. Fazeli N, Oller M, Wu J, Wu Z, Tenenbaum JB, Rodriguez A. See, feel, act: Hierarchical learning for complex manipulation skills with multisensory fusion. Sci Robot 2019; 4(26):eaav3123. [PMID: 33137764] [DOI: 10.1126/scirobotics.aav3123] [Received: 09/05/2018] [Accepted: 01/04/2019]
Abstract
Humans are able to seamlessly integrate tactile and visual stimuli with their intuitions to explore and execute complex manipulation skills. They not only see but also feel their actions. Most current robotic learning methodologies exploit recent progress in computer vision and deep learning to acquire data-hungry pixel-to-action policies. These methodologies do not exploit intuitive latent structure in physics or tactile signatures. Tactile reasoning is omnipresent in the animal kingdom, yet it is underdeveloped in robotic manipulation. Tactile stimuli are only acquired through invasive interaction, and interpretation of the data stream together with visual stimuli is challenging. Here, we propose a methodology to emulate hierarchical reasoning and multisensory fusion in a robot that learns to play Jenga, a complex game that requires physical interaction to be played effectively. The game mechanics were formulated as a generative process using a temporal hierarchical Bayesian model, with representations for both behavioral archetypes and noisy block states. This model captured descriptive latent structures, and the robot learned probabilistic models of these relationships in force and visual domains through a short exploration phase. Once learned, the robot used this representation to infer block behavior patterns and states as it played the game. Using its inferred beliefs, the robot adjusted its behavior with respect to both its current actions and its game strategy, similar to the way humans play the game. We evaluated the performance of the approach against three standard baselines and demonstrated its fidelity on a real-world implementation of the game.
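The core inference step the abstract describes, inferring a block's behavior pattern from noisy force readings, can be illustrated with a generic discrete Bayesian belief update. The archetype names, force means, and noise levels below are invented for illustration and are not the authors' model:

```python
import numpy as np

# Hypothetical discrete archetypes for how a block can behave when pushed.
ARCHETYPES = ["stuck", "moves_freely", "tower_shifts"]

# Assumed observation model: mean resisting force (N) and noise per archetype.
FORCE_MEAN = np.array([4.0, 0.5, 2.0])
FORCE_STD = np.array([1.0, 0.3, 0.8])

def gaussian_likelihood(x, mean, std):
    """Likelihood of a scalar force reading under each archetype."""
    return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

def update_belief(prior, force_reading):
    """One Bayesian belief update: posterior is proportional to likelihood x prior."""
    posterior = gaussian_likelihood(force_reading, FORCE_MEAN, FORCE_STD) * prior
    return posterior / posterior.sum()

belief = np.full(3, 1 / 3)           # uniform prior over archetypes
for f in [0.6, 0.4, 0.7]:            # three noisy force readings during a push
    belief = update_belief(belief, f)

print(dict(zip(ARCHETYPES, belief.round(3))))  # mass concentrates on "moves_freely"
```

The paper's model is temporal and hierarchical and also fuses visual evidence; this sketch shows only the single-modality belief update at its base.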
Affiliations
- N Fazeli: Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
- M Oller: Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
- J Wu: Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
- Z Wu: Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
- J B Tenenbaum: Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
- A Rodriguez: Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
4.
Abstract
Shepard has argued that a universal law should govern generalization across different domains of perception and cognition, as well as across organisms from different species or even different planets. Starting with some basic assumptions about natural kinds, he derived an exponential decay function as the form of the universal generalization gradient, which accords strikingly well with a wide range of empirical data. However, his original formulation applied only to the ideal case of generalization from a single encountered stimulus to a single novel stimulus, and for stimuli that can be represented as points in a continuous metric psychological space. Here we recast Shepard's theory in a more general Bayesian framework and show how this naturally extends his approach to the more realistic situation of generalizing from multiple consequential stimuli with arbitrary representational structure. Our framework also subsumes a version of Tversky's set-theoretic model of similarity, which is conventionally thought of as the primary alternative to Shepard's continuous metric space model of similarity and generalization. This unification allows us not only to draw deep parallels between the set-theoretic and spatial approaches, but also to significantly advance the explanatory power of set-theoretic models.
Affiliations
- J B Tenenbaum: Department of Psychology, Stanford University, Stanford, CA 94305-2130, USA.
5.
Abstract
Scientists working with large volumes of high-dimensional data, such as global climate patterns, stellar spectra, or human gene distributions, regularly confront the problem of dimensionality reduction: finding meaningful low-dimensional structures hidden in their high-dimensional observations. The human brain confronts the same problem in everyday perception, extracting from its high-dimensional sensory inputs (30,000 auditory nerve fibers or 10^6 optic nerve fibers) a manageably small number of perceptually relevant features. Here we describe an approach to solving dimensionality reduction problems that uses easily measured local metric information to learn the underlying global geometry of a data set. Unlike classical techniques such as principal component analysis (PCA) and multidimensional scaling (MDS), our approach is capable of discovering the nonlinear degrees of freedom that underlie complex natural observations, such as human handwriting or images of a face under different viewing conditions. In contrast to previous algorithms for nonlinear dimensionality reduction, ours efficiently computes a globally optimal solution, and, for an important class of data manifolds, is guaranteed to converge asymptotically to the true structure.
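The three-step pipeline the abstract describes (this is the Isomap algorithm) can be sketched compactly: build a k-nearest-neighbor graph from local distances, estimate geodesic distances by graph shortest paths, then embed with classical MDS. This is a minimal illustration; the neighborhood size and the toy arc data are assumptions:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import pdist, squareform

def isomap(X, n_neighbors=8, n_components=2):
    """Minimal Isomap sketch: kNN graph -> geodesic distances -> classical MDS."""
    d = squareform(pdist(X))                        # pairwise Euclidean distances
    n = d.shape[0]
    graph = np.full((n, n), np.inf)                 # inf marks "no edge"
    idx = np.argsort(d, axis=1)[:, 1:n_neighbors + 1]
    for i in range(n):                              # keep only local distances
        graph[i, idx[i]] = d[i, idx[i]]
    graph = np.minimum(graph, graph.T)              # symmetrize the graph
    g = shortest_path(graph, method="D", directed=False)  # geodesic estimates
    # Classical MDS on the geodesic distance matrix.
    h = np.eye(n) - np.ones((n, n)) / n             # double-centering matrix
    b = -0.5 * h @ (g ** 2) @ h
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:n_components]   # top eigenpairs
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

# Toy example: points on a curved 1D arc embedded in 3D; the first Isomap
# coordinate should recover position along the arc.
t = np.linspace(0, 3, 60)
X = np.c_[np.cos(t), np.sin(t), t]
Y = isomap(X, n_neighbors=6, n_components=2)
print(Y.shape)  # (60, 2)
```

The key design choice is replacing Euclidean distances with graph-geodesic ones before MDS, which is what lets the method unfold nonlinear manifolds that PCA and plain MDS flatten incorrectly.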
Affiliations
- J B Tenenbaum: Department of Psychology, Stanford University, Stanford, CA 94305, USA.
6.
Abstract
Perceptual systems routinely separate "content" from "style," classifying familiar words spoken in an unfamiliar accent, identifying a font or handwriting style across letters, or recognizing a familiar face or object seen under unfamiliar viewing conditions. Yet a general and tractable computational model of this ability to untangle the underlying factors of perceptual observations remains elusive (Hofstadter, 1985). Existing factor models (Mardia, Kent, & Bibby, 1979; Hinton & Zemel, 1994; Ghahramani, 1995; Bell & Sejnowski, 1995; Hinton, Dayan, Frey, & Neal, 1995; Dayan, Hinton, Neal, & Zemel, 1995; Hinton & Ghahramani, 1997) are either insufficiently rich to capture the complex interactions of perceptually meaningful factors such as phoneme and speaker accent or letter and font, or do not allow efficient learning algorithms. We present a general framework for learning to solve two-factor tasks using bilinear models, which provide sufficiently expressive representations of factor interactions but can nonetheless be fit to data using efficient algorithms based on the singular value decomposition and expectation-maximization. We report promising results on three different tasks in three different perceptual domains: spoken vowel classification with a benchmark multi-speaker database, extrapolation of fonts to unseen letters, and translation of faces to novel illuminants.
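The SVD-based fitting the abstract mentions can be sketched for the noiseless symmetric bilinear case: each observation is a style-specific linear map applied to a content vector, so stacking the observations gives a low-rank matrix that a truncated SVD factors exactly. The dimensions and random data are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy dimensions: S styles, C contents, K-dim observations, J latent factors.
S, C, K, J = 4, 5, 6, 2
A_true = rng.normal(size=(S, K, J))          # one linear "style" map per style
B_true = rng.normal(size=(J, C))             # one J-dim vector per content class

# y^{sc} = A_s @ b_c: stacking styles vertically gives an (S*K) x C matrix
# whose rank is at most J.
Y = np.vstack([A_true[s] @ B_true for s in range(S)])

# Fit the symmetric bilinear model by truncated SVD (closed form when noiseless).
U, sing, Vt = np.linalg.svd(Y, full_matrices=False)
A_hat = U[:, :J] * sing[:J]                  # stacked estimated style maps
B_hat = Vt[:J]                               # estimated content vectors

recon_error = np.linalg.norm(A_hat @ B_hat - Y)
print(recon_error < 1e-8)  # rank-J SVD recovers the bilinear structure
```

The asymmetric case and missing-data settings need the iterative EM fitting the abstract also mentions; the SVD step above is the clean special case that makes the model tractable.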
Affiliations
- J B Tenenbaum: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.