1. Hodge JG, Piatt JL, White EN, Gostin LO. Public Health Legal Protections in an Era of Artificial Intelligence. Am J Public Health 2024;114:559-563. PMID: 38635946. PMCID: PMC11079834. DOI: 10.2105/ajph.2024.307619.

2. Heidt A. 'Without these tools, I'd be lost': how generative AI aids in accessibility. Nature 2024;628:462-463. PMID: 38589449. DOI: 10.1038/d41586-024-01003-w.

3. Duffourc MN, Gerke S. Health Care AI and Patient Privacy-Dinerstein v Google. JAMA 2024;331:909-910. PMID: 38373004. DOI: 10.1001/jama.2024.1110.
Abstract
This Viewpoint summarizes a recent lawsuit alleging that a hospital violated patients' privacy by sharing electronic health record (EHR) data with Google for the development of medical artificial intelligence (AI) and discusses how the federal court's decision in the case provides key insights for hospitals planning to share EHR data with for-profit companies developing medical AI.

4. King RD, Scassa T, Kramer S, Kitano H. Stockholm declaration on AI ethics: why others should sign. Nature 2024;626:716. PMID: 38378827. DOI: 10.1038/d41586-024-00517-7.

5. Mello MM, Guha N. Understanding Liability Risk from Using Health Care Artificial Intelligence Tools. N Engl J Med 2024;390:271-278. PMID: 38231630. DOI: 10.1056/nejmhle2308901.

6. Suran M, Hswen Y. How Do Policymakers Regulate AI and Accommodate Innovation in Research and Medicine? JAMA 2024;331:185-187. PMID: 38117529. DOI: 10.1001/jama.2023.22625.
Abstract
In this Medical News article, JAMA Editor in Chief Kirsten Bibbins-Domingo, PhD, MD, MAS, and Alondra Nelson, PhD, the Harold F. Linder Professor at the Institute for Advanced Study, discuss effective AI regulation frameworks to accommodate innovation.

7. Computers make mistakes and AI will make things worse - the law must recognize that. Nature 2024;625:631. PMID: 38263299. DOI: 10.1038/d41586-024-00168-8.

8. There are holes in Europe's AI Act - and researchers can help to fill them. Nature 2024;625:216. PMID: 38200306. DOI: 10.1038/d41586-024-00029-4.

9. Jones N. The world's week on AI safety: powerful computing efforts launched to boost research. Nature 2023;623:229-230. PMID: 37923957. DOI: 10.1038/d41586-023-03472-x.

10. Why the UK-led global AI summit is missing the point. Nature 2023;623:7. PMID: 37907638. DOI: 10.1038/d41586-023-03333-7.

11. Gervais D. Avoid patenting AI-generated inventions. Nature 2023;622:31. PMID: 37789243. DOI: 10.1038/d41586-023-03116-0.

12. Reddy S. Navigating the AI Revolution: The Case for Precise Regulation in Health Care. J Med Internet Res 2023;25:e49989. PMID: 37695650. PMCID: PMC10520760. DOI: 10.2196/49989.
Abstract
Health care is undergoing a profound transformation through the integration of artificial intelligence (AI). However, the rapid integration and expansive growth of AI within health care systems present ethical and legal challenges that warrant careful consideration. In this viewpoint, the author argues that the health care domain, due to its complexity, requires specialized approaches to regulating AI. Precise regulation can provide clear guidelines for addressing these challenges, thereby ensuring ethical and legal AI implementations.

13. Abbott R. Allow patents on AI-generated inventions - for the good of science. Nature 2023;620:699. PMID: 37608009. DOI: 10.1038/d41586-023-02598-2.

14. Minssen T, Vayena E, Cohen IG.
Abstract
This Viewpoint discusses how regulators across the world should approach the legal and ethical challenges, including privacy, device regulation, competition, intellectual property rights, cybersecurity, and liability, raised by the medical use of large language models.

15. Palmieri S, Goffin T. A Blanket That Leaves the Feet Cold: Exploring the AI Act Safety Framework for Medical AI. Eur J Health Law 2023;30:406-427. PMID: 37582525. DOI: 10.1163/15718093-bja10104.
Abstract
The AI Act is based on, and at the same time aims to protect, fundamental rights, implying their protection, while fulfilling the safety requirements prescribed by the AI Act, within the whole lifecycle of AI systems. Based on a risk classification, the AI Act provides a set of requirements that each risk class must meet in order for AI to be legitimately offered on the EU market and be considered safe. However, despite their classification, some minimal-risk AI systems may still be prone to cause risks to fundamental rights and user safety, and therefore require attention. In this paper we explore the assumption that despite the fact that the AI Act can find broad ex litteris coverage, the significance of this applicability is limited.

16. Matin RN, Dinnes J. AI-based smartphone apps for risk assessment of skin cancer need more evaluation and better regulation. Br J Cancer 2021;124:1749-1750. PMID: 33742148. PMCID: PMC8144419. DOI: 10.1038/s41416-021-01302-3.
Abstract
Smartphone applications ("apps") with artificial intelligence (AI) algorithms are increasingly used in healthcare. Widespread adoption of these apps must be supported by a robust evidence base, and app manufacturers' claims must be appropriately regulated. Current CE marking assessment processes inadequately protect the public against the risks created by using smartphone diagnostic apps.

17. Krass M, Henderson P, Mello MM, Studdert DM, Ho DE.
Abstract
Daniel E Ho and colleagues explore the legal implications of using artificial intelligence in the response to covid-19 and call for more robust evaluation frameworks.

18. Lee S, Kang W. Precision Regulation Approach: A COVID-19 Triggered Regulatory Drive in South Korea. Front Public Health 2021;9:628073. PMID: 33598446. PMCID: PMC7882901. DOI: 10.3389/fpubh.2021.628073.
Abstract
COVID-19 has triggered various changes in our everyday lives and in how we conceptualize the functions of governments. Some areas require stricter forms of regulation while others call for deregulation. The challenge for regulatory authorities is to manage these potentially conflicting demands and to define their overall regulatory rationale coherently. Precision regulation can be a helpful approach here. It is defined as a streamlined approach to regulation that delivers the right methods of regulation for the right group of people at the right time. This problem-solving innovation in regulation, triggered by the recent epidemiologic crisis in South Korea, demonstrates the emergence of the precision regulation approach. South Korea has implemented streamlined fast-track services for the biotechnology industry to produce test kits swiftly. This article expands the definition of precision regulation from the AI regulation literature and positions the term as a new regulatory rationale, not as a regulatory tool, using the case study from South Korea.

19. Takshi S. Unexpected Inequality: Disparate-Impact From Artificial Intelligence in Healthcare Decisions. J Law Health 2021;34:215-251. PMID: 34185974.
Abstract
Systemic discrimination in healthcare plagues marginalized groups. Physicians incorrectly view people of color as having high pain tolerance, leading to undertreatment. Women with disabilities are often undiagnosed because their symptoms are dismissed. Low-income patients have less access to appropriate treatment. These patterns, and others, reflect long-standing disparities that have become engrained in U.S. health systems. As the healthcare industry adopts artificial intelligence and algorithm-informed (AI) tools, it is vital that regulators address healthcare discrimination. AI tools are increasingly used to make both clinical and administrative decisions by hospitals, physicians, and insurers, yet there is no framework that specifically places nondiscrimination obligations on AI users. The Food and Drug Administration has limited authority to regulate AI and has not sought to incorporate anti-discrimination principles in its guidance. Section 1557 of the Affordable Care Act has not been used to enforce nondiscrimination in healthcare AI and is under-utilized by the Office of Civil Rights. State-level protections by medical licensing boards or malpractice liability are similarly untested and have not yet extended nondiscrimination obligations to AI. This Article discusses the role of each legal obligation on healthcare AI and the ways in which each system can improve to address discrimination. It highlights the ways in which industries can self-regulate to set nondiscrimination standards and concludes by recommending standards and creating a super-regulator to address disparate impact by AI. As the world moves towards automation, it is imperative that ongoing concerns about systemic discrimination are removed to prevent further marginalization in healthcare.

20. Hashimoto DA, Ward TM, Meireles OR.

21. McGreevey JD, Hanson CW, Koppel R.

22. Broome DT, Hilton CB, Mehta N.
Abstract
Purpose of review: Machine learning (ML) is increasingly being studied for the screening, diagnosis, and management of diabetes and its complications. Although various models of ML have been developed, most have not led to practical solutions for real-world problems. There has been a disconnect between ML developers, regulatory bodies, health services researchers, clinicians, and patients in their efforts. Our aim is to review the current status of ML in various aspects of diabetes care and identify key challenges that must be overcome to leverage ML to its full potential.
Recent findings: ML has led to impressive progress in the development of automated insulin delivery systems and diabetic retinopathy screening tools. Compared with these, use of ML in other aspects of diabetes is still at an early stage. The Food & Drug Administration (FDA) is adopting some innovative models to help bring technologies to the market in an expeditious and safe manner. ML has great potential in managing diabetes, and the future lies in furthering the partnership of regulatory bodies with health services researchers, clinicians, developers, and patients to improve the outcomes of populations and individual patients with diabetes.

23. Parmentier F. [Healthcare data and artificial intelligence: a geostrategic vision]. Soins 2019;64:53-55. PMID: 31542124. DOI: 10.1016/j.soin.2019.06.013.
Abstract
The rapid deployment of artificial intelligence (AI) and automation in healthcare is highlighting the importance of health data-driven management as a geostrategic lever. From this point of view, the progress made by the United States and China requires a strong European response to develop a responsible vision which adopts an approach aiming at the positive regulation of AI in healthcare.

24. Frank X. Is Watson for Oncology per se Unreasonably Dangerous?: Making A Case for How to Prove Products Liability Based on a Flawed Artificial Intelligence Design. Am J Law Med 2019;45:273-294. PMID: 31722630. DOI: 10.1177/0098858819871109.
Abstract
Artificial intelligence (AI) machines hold the world's curiosity captive. Futuristic television shows like Westworld are set in desert lands against pink sunsets where sleek, autonomous AI fulfill every human need, desire, and kink. But I, Robot, a movie where robots turn against the humans they serve, reminds us that AI is precarious. Academicians who study how AI interacts with tort law, such as Jessica Allain, David Vladeck, and Sjur Dyrkoltbotn, claim that the current legal regime is incapable of addressing the liability issues AI present. Both Allain and Vladeck focus their research on whether tort law can accommodate claims against fully autonomous AI machines, while Dyrkoltbotn explores how AI can be leveraged to help plaintiffs identify the genesis of their injuries. The solution this article presents is not exclusively tailored to fully autonomous AI and does not identify how technology can be used in tort claims. It instead demonstrates that the current tort law regime can provide relief to plaintiffs who are injured by AI machines. In particular, this article argues that the manner in which Watson for Oncology is designed presents a new context in which courts should adopt a per se rule of liability that favors plaintiffs who bring damage claims against AI machines by expanding the definition of what it means for a device to be unreasonably dangerous.

26. Crigger E, Khoury C.
Abstract
In June 2018, the American Medical Association adopted new policy to provide a broad framework for the evolution of artificial intelligence (AI) in health care that is designed to help ensure that AI realizes the benefits it promises for patients, physicians, and the health care community.

27. Gruson D.
Abstract
The positive regulation of artificial intelligence in healthcare is a major challenge for enabling the diffusion of digital innovation in a spirit of openness and coherence with ethical values. Operational principles have been proposed, particularly around the concept of the Human Guarantee. The opinion issued at the end of 2018 by the National Consultative Ethics Committee is an important step forward in the recognition of this idea, which leaves a large capacity for initiative to professionals and patients.

28.
Abstract
Novel beings (intelligent, conscious life-forms sapient to the same degree as, or a greater degree than, human beings) are no longer the preserve of science fiction. Through technologies such as artificial general intelligence, synthetic genomics, gene printing, cognitive enhancement, advanced neuroscience, and more, they are becoming ever more likely, and by some definitions may already be emerging. Consideration of the nature of intelligent, conscious novel beings such as those that may result from these technologies requires analysis of the concept of the 'reasonable creature in being' in English law, as well as of the right to life as founded in the European Convention on Human Rights and the attempts to endow human status on animals in recent years. Our exploration of these issues leads us to conclude that there is a strong case to recognize such 'novel' beings as entitled to the same fundamental rights to life, freedom from inhumane treatment, and liberty as we are.

29. Cath C, Wachter S, Mittelstadt B, Taddeo M, Floridi L. Artificial Intelligence and the 'Good Society': the US, EU, and UK approach. Sci Eng Ethics 2018;24:505-528. PMID: 28353045. DOI: 10.1007/s11948-017-9901-7.
Abstract
In October 2016, the White House, the European Parliament, and the UK House of Commons each issued a report outlining their visions on how to prepare society for the widespread use of artificial intelligence (AI). In this article, we provide a comparative assessment of these three reports in order to facilitate the design of policies favourable to the development of a 'good AI society'. To do so, we examine how each report addresses the following three topics: (a) the development of a 'good AI society'; (b) the role and responsibility of the government, the private sector, and the research community (including academia) in pursuing such a development; and (c) where the recommendations to support such a development may be in need of improvement. Our analysis concludes that the reports address various ethical, social, and economic topics adequately, but fall short of providing an overarching political vision and long-term strategy for the development of a 'good AI society'. To help fill this gap, in the conclusion we suggest a two-pronged approach.

30.
Abstract
From the enraged robots in the 1920 play R.U.R. to the homicidal computer H.A.L. in 2001: A Space Odyssey, science fiction writers have embraced the dark side of artificial intelligence (AI) ever since the concept entered our collective imagination. Sluggish progress in AI research, especially during the "AI winter" of the 1970s and 1980s, made such worries seem far-fetched. But recent breakthroughs in machine learning and vast improvements in computational power have brought a flood of research funding, and fresh concerns about where AI may lead us. One researcher now speaking up is Stuart Russell, a computer scientist at the University of California, Berkeley, who with Peter Norvig, director of research at Google, wrote the premier AI textbook, Artificial Intelligence: A Modern Approach, now in its third edition. Last year, Russell joined the Centre for the Study of Existential Risk at Cambridge University in the United Kingdom as an AI expert focusing on "risks that could lead to human extinction". Among his chief concerns, which he aired at an April meeting in Geneva, Switzerland, run by the United Nations, is the danger of putting military drones and weaponry under the full control of AI systems. This interview has been edited for clarity and brevity.

31. Callens S, Galot A, Lamas E. Legal aspects of personal health monitoring. Stud Health Technol Inform 2013;187:55-63. PMID: 23920456.
Abstract
Personal health monitoring (PHM) can be defined as comprising all technical systems that process, collect, and store data linked to a person. PHM involves several legal issues, which are described in this paper. The article first analyses the short-term actions needed at the European level to allow personal health monitoring while respecting the interests and rights of patients, such as the need for more harmonised medical liability rules at the EU level. Introducing PHM also implies legal action at the EU level in the long run; these long-term actions relate, for example, to the way in which hospitals are organised in their relations with healthcare professionals and with other hospitals or healthcare actors. Finally, the paper analyses how health monitoring projects may change the traditional (non-)relationship between patients and the pharmaceutical and medical device industry. Today, the producers and distributors of medicinal products have no specific contact with patients. This situation may change with telemonitoring projects and may require new legal rules.