1
Blumenthal-Barby J, Ubel P. Neurorights in question: rethinking the concept of mental integrity. J Med Ethics 2024;50:670-675. PMID: 38749651. DOI: 10.1136/jme-2023-109683.
Abstract
The idea of a 'right to mental integrity', sometimes referred to as a 'right against mental interference', is a relatively new concept in bioethics, making its way into debates about neurotechnological advances and the establishment of 'neurorights'. In this paper, we interrogate the idea of a right to mental integrity. First, we argue that some experts define the right to mental integrity so broadly that rights violations become ubiquitous, thereby trivialising some of the very harms the concept is meant to address. Second, rights-based framing results in an overemphasis on the normative importance of consent, implying that neurointerventions are permissible in cases where people consent to have their mental states influenced or read off, a confidence in consent that we argue is misguided. Third, the concept often collapses the ethics of brain inputs and brain outputs, potentially resulting in a loss of important conceptual nuance. Finally, we argue that the concept of a right to mental integrity is superfluous: what is wrong with most violations of mental integrity can be explained by existing concepts such as autonomy, manipulation, privacy, bodily rights, surveillance, harm and exploitation of vulnerabilities. We conclude that bioethicists and policy-makers ought either to make use of these concepts rather than argue for the existence of a new right, or to avoid making rights violations ubiquitous by settling on a narrower and more rigorous definition of the right.
Affiliation(s)
- Peter Ubel
- Duke University, Durham, North Carolina, USA
2
Wajnerman-Paz A, Aboitiz F, Álamos F, Ramos Vergara P. A healthcare approach to mental integrity. J Med Ethics 2024;50:664-669. PMID: 38802142. DOI: 10.1136/jme-2023-109682.
Abstract
The current human rights framework can shield people from many of the risks associated with neurotechnological applications. However, it has been argued that we need either to articulate new rights or to reconceptualise existing ones in order to prevent some of these risks. In this paper, we address the recent discussion about whether current reconceptualisations of the right to mental integrity identify an ethical dimension that is not covered by existing moral and/or legal rights. The main challenge for these proposals is that they make mental integrity indistinguishable from autonomy. They define mental integrity in terms of the control we can have over our mental states, which seems to be part of the authenticity condition for autonomous action. Based on a fairly comprehensive notion of mental health (ie, a notion that is not limited to the mere absence of illness), we propose an alternative view according to which mental integrity can be characterised both as a positive right to (medical and non-medical) interventions that restore and sustain mental and neural function and promote its development, and as a negative right protecting people from interventions that threaten or undermine these functions or their development. We argue that this notion is dissociated from cognitive control and can therefore be adequately distinguished from autonomy.
Affiliation(s)
- Abel Wajnerman-Paz
- Instituto de Éticas Aplicadas, Pontificia Universidad Católica de Chile, Santiago, Chile
- Francisco Aboitiz
- Centro Interdisciplinario de Neurociencia, Departamento de Psiquiatría, Facultad de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
- Florencia Álamos
- Centro de Bioética, Facultad de Medicina, Centro Interdisciplinario de Neurociencia, Pontificia Universidad Católica de Chile, Santiago, Chile
- Paulina Ramos Vergara
- Centro de Bioética, Facultad de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
3
Tesink V, Douglas T, Forsberg L, Ligthart S, Meynen G. Right to mental integrity and neurotechnologies: implications of the extended mind thesis. J Med Ethics 2024;50:656-663. PMID: 38408854. DOI: 10.1136/jme-2023-109645.
Abstract
The possibility of neurotechnological interference with our brain and mind raises questions about the moral rights that would protect against the (mis)use of these technologies. One such moral right that has received recent attention is the right to mental integrity. Though the metaphysical boundaries of the mind are a matter of live debate, most defences of this moral right seem to assume an internalist (brain-based) view of the mind. In this article, we will examine what an extended account of the mind might imply for the right to mental integrity and the protection it provides against neurotechnologies. We argue that, on an extended account of the mind, the scope of the right to mental integrity would expand significantly, implying that neurotechnologies would no longer pose a uniquely serious threat to the right. In addition, some neurotechnologies may even be protected by the right to mental integrity, as the technologies would become part of the mind. We conclude that adopting an extended account of the mind has significant implications for the right to mental integrity in terms of its protective scope and capacity to protect against neurotechnologies, demonstrating that metaphysical assumptions about the mind play an important role in determining the moral protection provided by the right.
Affiliation(s)
- Vera Tesink
- Department of Philosophy, Faculty of Humanities, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Thomas Douglas
- Oxford Uehiro Centre for Practical Ethics, Faculty of Philosophy, University of Oxford, Oxford, UK
- Jesus College, University of Oxford, Oxford, UK
- Lisa Forsberg
- Oxford Uehiro Centre for Practical Ethics, Faculty of Philosophy, University of Oxford, Oxford, UK
- Sjors Ligthart
- Department of Criminal Law, Tilburg University, Tilburg, Netherlands
- Willem Pompe Institute for Criminal Law and Criminology and UCALL, Utrecht University, Utrecht, Netherlands
- Gerben Meynen
- Department of Philosophy, Faculty of Humanities, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Willem Pompe Institute for Criminal Law and Criminology and UCALL, Utrecht University, Utrecht, Netherlands
4
Zuk P. Mental integrity, autonomy, and fundamental interests. J Med Ethics 2024;50:676-683. PMID: 39137962. DOI: 10.1136/jme-2023-109732.
Abstract
Many technology ethicists hold that the time has come to articulate neurorights: our normative claims vis-à-vis our brains and minds. One such claim is the right to mental integrity ('MI'). I begin by considering some paradigmatic threats to MI (§1) and how the dominant autonomy-based conception ('ABC') of MI attempts to make sense of them (§2). I next consider the objection that the ABC is overbroad in its understanding of what threatens MI and suggest a friendly revision to the ABC that addresses the objection (§3). I then consider a second objection: that the ABC cannot make sense of the MI of the non-autonomous. This objection appears fatal even to the revised ABC (§4). On that basis, I develop an alternative conception on which MI is grounded in a plurality of simpler capacities, namely, those for affect, cognition, and volition. Each of these more basic capacities grounds a set of fundamental interests, and they are for that reason worthy of protection even when they do not rise to the level of complexity necessary for autonomy (§5). This yields a fully general theory of MI that accounts for its manifestations in both the autonomous and the non-autonomous.
Affiliation(s)
- Peter Zuk
- Center for Bioethics, Harvard Medical School, Boston, Massachusetts, USA
5
Vakilipour P, Fekrvand S. Brain-to-brain interface technology: A brief history, current state, and future goals. Int J Dev Neurosci 2024;84:351-367. PMID: 38711277. DOI: 10.1002/jdn.10334.
Abstract
A brain-to-brain interface (BBI), defined as a combination of neuroimaging and neurostimulation methods that extract and deliver information between brains directly, without the need for the peripheral nervous system, is a budding communication technique. A BBI system is made up of two parts: the brain-computer interface, which reads a sender's brain activity and digitalizes it, and the computer-brain interface, which writes the delivered brain activity to a receiving brain. As with other technologies, BBI systems have gone through an evolutionary process since they first appeared. BBI systems have been employed for numerous purposes, including rehabilitation for post-stroke patients and communication with patients suffering from amyotrophic lateral sclerosis, locked-in syndrome and speech problems following stroke. It has also been proposed that a BBI system could play an important role on future battlefields. This technology has been employed not only for communication between two human brains but also for creating a direct communication path between different species, through which motor or sensory commands can be sent and received. However, the application of BBI systems has raised significant challenges to human rights principles because of their ability to access and manipulate human brain information. In this study, we aimed to review the brain-computer interface and computer-brain interface technologies as components of BBI systems, the development of BBI systems, applications of this technology, arising ethical issues and expectations for future use.
Affiliation(s)
- Pouya Vakilipour
- Student Research Committee, Tabriz University of Medical Sciences, Tabriz, Iran
- Universal Scientific Education and Research Network (USERN), Tehran, Iran
- Saba Fekrvand
- Universal Scientific Education and Research Network (USERN), Tehran, Iran
- Brain and Spinal Cord Injury Research Center, Neuroscience Institute, Tehran University of Medical Sciences, Tehran, Iran
6
Cornejo-Plaza MI, Cippitani R, Pasquino V. Chilean Supreme Court ruling on the protection of brain activity: neurorights, personal data protection, and neurodata. Front Psychol 2024;15:1330439. PMID: 38476399. PMCID: PMC10929545. DOI: 10.3389/fpsyg.2024.1330439.
Abstract
This paper discusses a landmark ruling of the Chilean Supreme Court of August 9, 2023, dealing with the right to mental privacy, which originated with an action for constitutional protection filed on behalf of Guido Girardi Lavin against Emotiv Inc., a North American company based in San Francisco, California, that commercializes the device "Insight." This wireless device functions as a headset with sensors that collect information about the brain's electrical activity (i.e., neurodata). The discussion revolves around whether neurodata can be considered personal data and whether they could be classified into a special category. The application of present data legislation, from the most obsolete (such as the Chilean law) to the most recent (EU law), does not seem adequate to protect neurodata. The use of neurodata raises ethical and legal concerns that are not fully addressed by current regulations on personal data protection. Even if not necessarily considered personal data, neurodata represent the most intimate aspects of human personality and should be protected in light of potential new risks. The unique characteristics of neurodata, including their interpretive nature and their potential for revealing thoughts and intentions, pose challenges for regulation. Current data protection laws do not differentiate between types of data based on their informational content, which is relevant for protecting individual rights. The development of new technologies involving neurodata requires particular attention and careful consideration to prevent possible harm to human dignity. The regulation of neurodata must account for their specific characteristics and the potential risks they pose to privacy, confidentiality, and individual rights. The answer lies in a reconfiguration of human rights, known as "neurorights," that goes beyond the protection of personal data.
Affiliation(s)
- Roberto Cippitani
- Department of Constitutional Law, Universidad Nacional de Educación a Distancia, Madrid, Spain
- Instituto Nacional de Estudios de Derecho Penal, Mexico City, Mexico
- Institute of Applied Physics, Consiglio Nazionale delle Ricerche, Florence, Italy
- Department of Law, Università degli Studi di Perugia, Perugia, Italy
- Vincenzo Pasquino
- Department of Law, Università degli Studi di Perugia, Perugia, Italy
7
Hurley ME, Sonig A, Herrington J, Storch EA, Lázaro-Muñoz G, Blumenthal-Barby J, Kostick-Quenet K. Ethical considerations for integrating multimodal computer perception and neurotechnology. Front Hum Neurosci 2024;18:1332451. PMID: 38435745. PMCID: PMC10904467. DOI: 10.3389/fnhum.2024.1332451.
Abstract
Background: Artificial intelligence (AI)-based computer perception technologies (e.g., digital phenotyping and affective computing) promise to transform clinical approaches to personalized care in psychiatry and beyond by offering more objective measures of emotional states and behavior, enabling precision treatment, diagnosis, and symptom monitoring. At the same time, the passive and continuous manner in which they often collect data from patients in non-clinical settings raises ethical issues related to privacy and self-determination. Little is known about how such concerns may be exacerbated by the integration of neural data, as parallel advances in computer perception, AI, and neurotechnology enable new insights into subjective states. Here, we present findings from a multi-site NCATS-funded study of ethical considerations for translating computer perception into clinical care and contextualize them within the neuroethics and neurorights literatures.
Methods: We conducted qualitative interviews with patients (n = 20), caregivers (n = 20), clinicians (n = 12), developers (n = 12), and clinician-developers (n = 2) regarding their perspectives on using computer perception in clinical care. Transcripts were analyzed in MAXQDA using thematic content analysis.
Results: Stakeholder groups voiced concerns related to (1) the perceived invasiveness of passive and continuous data collection in private settings; (2) data protection and security, and the potential for negative downstream/future impacts on patients from unintended disclosure; and (3) ethical issues related to patients' limited versus hyper-awareness of passive and continuous data collection and monitoring. Clinicians and developers highlighted that these concerns may be exacerbated by the integration of neural data with other computer perception data.
Discussion: Our findings suggest that the integration of neurotechnologies with existing computer perception technologies raises novel concerns around dignity-related and other harms (e.g., stigma, discrimination) that stem from data security threats and the growing potential for reidentification of sensitive data. Further, our findings suggest that patients' awareness of and preoccupation with feeling monitored via computer sensors range from hypo- to hyper-awareness, with either extreme accompanied by ethical concerns (consent vs. anxiety and preoccupation). These results highlight the need for systematic research into how best to implement these technologies into clinical care in ways that reduce disruption, maximize patient benefits, and mitigate long-term risks associated with the passive collection of sensitive emotional, behavioral and neural data.
Affiliation(s)
- Meghan E. Hurley
- Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, United States
- Anika Sonig
- Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, United States
- John Herrington
- Department of Child and Adolescent Psychiatry and Behavioral Sciences, Children’s Hospital of Philadelphia, Philadelphia, PA, United States
- Eric A. Storch
- Department of Psychiatry and Behavioral Sciences, Baylor College of Medicine, Houston, TX, United States
- Gabriel Lázaro-Muñoz
- Center for Bioethics, Harvard Medical School, Boston, MA, United States
- Department of Psychiatry and Behavioral Sciences, Massachusetts General Hospital, Boston, MA, United States
- Kristin Kostick-Quenet
- Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, United States
|
8
Andorno R, Lavazza A. How to deal with mind-reading technologies. Front Psychol 2023;14:1290478. PMID: 38034284. PMCID: PMC10682168. DOI: 10.3389/fpsyg.2023.1290478.
Affiliation(s)
- Roberto Andorno
- Institute of Biomedical Ethics and History of Medicine, University of Zurich, Zürich, Switzerland
- Andrea Lavazza
- Centro Universitario Internazionale, Arezzo, Italy
- Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
9
Zohny H, Lyreskog DM, Singh I, Savulescu J. The Mystery of Mental Integrity: Clarifying Its Relevance to Neurotechnologies. Neuroethics 2023;16:20. PMID: 37614938. PMCID: PMC10442279. DOI: 10.1007/s12152-023-09525-2.
Abstract
The concept of mental integrity is currently a significant topic in discussions concerning the regulation of neurotechnologies. Technologies such as deep brain stimulation and brain-computer interfaces are believed to pose a unique threat to mental integrity, and some authors have advocated for a legal right to protect it. Despite this, there remains uncertainty about what mental integrity entails and why it is important. Various interpretations of the concept have been proposed, but the literature on the subject is inconclusive. Here we consider a number of possible interpretations and argue that the most plausible one concerns neurotechnologies that bypass one's reasoning capacities, and do so specifically in ways that reliably lead to alienation from one's mental states. This narrows the scope of what constitutes a threat to mental integrity and offers a more precise role for the concept to play in the ethical evaluation of neurotechnologies.
Affiliation(s)
- Hazem Zohny
- Oxford Uehiro Centre for Practical Ethics, University of Oxford, Oxford, UK
- Department of Psychiatry, University of Oxford, Oxford, UK
- Wellcome Centre for Ethics and Humanities, University of Oxford, Oxford, UK
- David M. Lyreskog
- Oxford Uehiro Centre for Practical Ethics, University of Oxford, Oxford, UK
- Department of Psychiatry, University of Oxford, Oxford, UK
- Wellcome Centre for Ethics and Humanities, University of Oxford, Oxford, UK
- Ilina Singh
- Department of Psychiatry, University of Oxford, Oxford, UK
- Wellcome Centre for Ethics and Humanities, University of Oxford, Oxford, UK
- Julian Savulescu
- Oxford Uehiro Centre for Practical Ethics, University of Oxford, Oxford, UK
- Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Murdoch Children’s Research Institute, Melbourne, Australia
- University of Melbourne, Melbourne, Australia
10
Faraoni S. Persuasive Technology and computational manipulation: hypernudging out of mental self-determination. Front Artif Intell 2023;6:1216340. PMID: 37469930. PMCID: PMC10352952. DOI: 10.3389/frai.2023.1216340.
Abstract
Artificial Intelligence can, unperceived, acquire the user's data, find connections not visible to a human being, profile the users, and aim at persuading them, resulting in Persuasive Technology (PT). During the persuasive process, PT can use manipulation, finding and using routes to affect System 1, the primordial brain of individuals, in the absence of their awareness, thereby undermining their decision-making processes. Multiple international and European bodies have recognized that AI systems can manipulate to an unprecedented degree via second-generation dark patterns such as the hypernudge, and that computational manipulation constitutes a risk to autonomy and to different, overlapping fundamental rights such as privacy, informational self-determination and freedom of thought. However, there is a lack of shared ideas regarding which fundamental rights are violated by computational manipulation and which fundamental rights can protect individuals against it. The right to be let alone and the right to hold and express a thought differ from the right to create a thought, to be in control of the decision-making process and to be free from cognitive interferences operated by computational manipulation. Therefore, this paper argues in favor of recognizing a newly emerged fundamental right, the right to mental self-determination, tailored to the unprecedented abilities of AI-driven manipulative technologies.
Affiliation(s)
- Stefano Faraoni
- Law Department, University of York, York, United Kingdom
- Law Department, European Legal Studies, University of Turin, Turin, Italy
11
Gilbert F, Ienca M, Cook M. How I became myself after merging with a computer: Does human-machine symbiosis raise human rights issues? Brain Stimul 2023;16:783-789. PMID: 37137387. DOI: 10.1016/j.brs.2023.04.016.
Abstract
Novel usages of brain stimulation combined with artificially intelligent (AI) systems promise to address a large range of diseases. These new conjoined technologies, such as brain-computer interfaces (BCI), are increasingly used in experimental and clinical settings to predict and alleviate symptoms of various neurological and psychiatric disorders. Due to their reliance on AI algorithms for feature extraction and classification, these BCI systems enable a novel, unprecedented, and direct connection between human cognition and artificial information processing. In this paper, we present the results of a study that investigates the phenomenology of human-machine symbiosis during a first-in-human experimental BCI trial designed to predict epileptic seizures. We employed qualitative semi-structured interviews to collect user experience data from a participant over a six-year period. We report on a clinical case in which a specific embodied phenomenology emerged: after BCI implantation, the patient reported experiences of increased agential capacity and continuity; after device explantation, the patient reported persistent traumatic harms linked to agential discontinuity. To our knowledge, this is the first reported clinical case of a patient experiencing persistent agential discontinuity due to BCI explantation, and potential evidence of an infringement of patient rights, in which the implanted person was robbed of her de novo agential capacities when the device was removed.
Affiliation(s)
- Frederic Gilbert
- EthicsLab, Philosophy & Gender Studies, School of Humanities, College of Arts, Law and Education, University of Tasmania, Australia
- Marcello Ienca
- Institute for Ethics and History of Medicine, School of Medicine - Technische Universität München (TUM), Ismaninger Str. 22, 81675, München, Germany; Intelligent Systems Ethics Group, College of Humanities (CDH), Swiss Federal Institute of Technology in Lausanne (EPFL), Switzerland
- Mark Cook
- Division Engineering and IT - Biomedical Engineering, University of Melbourne, Australia; The Sir John Eccles Chair of Medicine, Director of Clinical Neurosciences, St. Vincent's Hospital, Melbourne, Australia
12
Abstract
It has been recently suggested that if the Extended Mind thesis is true, mental privacy might be under serious threat. In this paper, I look into the details of this claim and propose that one way of dealing with this emerging threat requires that data ontology be enriched with an additional kind of data, viz., mental data. I explore how mental data relate to both data and metadata and suggest that, arguably, and by contrast with these existing categories of informational content, mental data should not be merely legally protected. Rather, if we value mental privacy as we know it, technological measures should be employed to ensure that one's mental data are practically, and not just legally, impossible for others to obtain.
13
Lavazza A, Giorgi R. Philosophical foundation of the right to mental integrity in the age of neurotechnologies. Neuroethics 2023. DOI: 10.1007/s12152-023-09517-2.
Abstract
Neurotechnologies broadly understood are tools that have the capability to read, record and modify our mental activity by acting on its brain correlates. The emergence of increasingly powerful and sophisticated techniques has given rise to the proposal to introduce new rights specifically directed to protect mental privacy, freedom of thought, and mental integrity. These rights, also proposed as basic human rights, are conceived in direct relation to tools that threaten mental privacy, freedom of thought, mental integrity, and personal identity. In this paper, our goal is to give a philosophical foundation to a specific right that we will call the right to mental integrity. It encapsulates both the classical concepts of privacy and non-interference in our mind/brain. Such a philosophical foundation refers to certain features of the mind that hitherto could not be reached directly from the outside: intentionality, first-person perspective, personal autonomy in moral choices and in the construction of one's narrative, and relational identity. A variety of neurotechnologies or other tools, including artificial intelligence, alone or in combination can, by their very availability, threaten our mental integrity. Therefore, it is necessary to posit a specific right and provide it with a theoretical foundation and justification. It will be up to a subsequent treatment to define the moral and legal boundaries of such a right and its application.
14
Neurorights – Do we Need New Human Rights? A Reconsideration of the Right to Freedom of Thought. Neuroethics 2023. DOI: 10.1007/s12152-022-09511-0.
Abstract
Progress in neurotechnology and Artificial Intelligence (AI) provides unprecedented insights into the human brain. There are increasing possibilities to influence and measure brain activity. These developments raise multifaceted ethical and legal questions. The proponents of neurorights argue in favour of introducing new human rights to protect mental processes and brain data. This article discusses the necessity and advantages of introducing new human rights, focusing on the proposed new human right to mental self-determination and the right to freedom of thought as enshrined in Art. 18 of the International Covenant on Civil and Political Rights (ICCPR) and Art. 9 of the European Convention on Human Rights (ECHR). I argue that the right to freedom of thought can be coherently interpreted as providing comprehensive protection of mental processes and brain data, thus offering a normative basis regarding the use of neurotechnologies. Besides, I claim that an evolving interpretation of the right to freedom of thought is more convincing than introducing a new human right to mental self-determination.
15
Rainey S. Neurorights as Hohfeldian Privileges. Neuroethics 2023. DOI: 10.1007/s12152-023-09515-4.
Abstract
This paper argues that calls for neurorights propose an overcomplicated approach. It does this through an analysis of ‘rights’ using the influential framework provided by Wesley Hohfeld, whose analytic jurisprudence is still well regarded for its clarificatory approach to discussions of rights. Having disentangled some unclarities in talk about rights, the paper proposes that the idea of ‘novel human rights’ is not appropriate for what is deemed worth protecting in terms of mental integrity and cognitive liberty. That is best thought of in terms of Hohfeld’s account of ‘right’ as privilege. It goes on to argue that, as privileges, legal protections are not well suited to these cases. As such, they cannot be ‘novel human rights’. Instead, protections for mental integrity and cognitive liberty are best accounted for in terms of familiar and established rational and discursive norms. Mental integrity is best thought of as evaluable in terms of familiar rational norms, and cognitive liberty is constrained by appraisals of sense-making. Concerns about how neurotechnologies might pose particular challenges to mental integrity and cognitive liberty are best addressed through careful use of existing legislation on data protection, not novel rights, as it is via data that risks to integrity and liberty are manifested.
Collapse
|
16
|
Meynen G, Van de Pol N, Tesink V, Ligthart S. Neurotechnology to reduce recidivism: Ethical and legal challenges. HANDBOOK OF CLINICAL NEUROLOGY 2023; 197:265-276. [PMID: 37633715 DOI: 10.1016/b978-0-12-821375-9.00006-2] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/28/2023]
Abstract
Crime comes with enormous costs, not only financial but also in terms of loss of mental and physical health and, in some cases, even loss of life. Recidivism is responsible for a considerable percentage of the crimes, and therefore, society deems reducing recidivism a priority. To reduce recidivism, several types of interventions can be used, such as education and employment-focused rehabilitation programs which are intended to improve psychological and social factors. Another way to prevent reoffending is to influence the offender's brain functions. For example, medication can be offered to treat delusions or to diminish sexual drive. In the near future, innovative neurotechnologies are expected to improve prediction and prevention of reoffending. Potential positive effects of such neurotechniques include a safer society and earlier release of prisoners who are no longer "at high risk" to relapse into criminal behavior. Meanwhile, employing these neurotechniques in the criminal justice system raises fundamental concerns, for example, about autonomy, privacy and mental integrity. This chapter aims to identify some of the ethical and legal challenges of using neurotechnologies to reduce recidivism.
Collapse
Affiliation(s)
- Gerben Meynen
- Willem Pompe Institute for Criminal Law and Criminology, Faculty of Law, Economics and Governance, Utrecht University, Utrecht, The Netherlands; Department of Philosophy, Faculty of Humanities, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands.
| | - Naomi Van de Pol
- Willem Pompe Institute for Criminal Law and Criminology, Faculty of Law, Economics and Governance, Utrecht University, Utrecht, The Netherlands
| | - Vera Tesink
- Department of Philosophy, Faculty of Humanities, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
| | - Sjors Ligthart
- Willem Pompe Institute for Criminal Law and Criminology, Faculty of Law, Economics and Governance, Utrecht University, Utrecht, The Netherlands; Department of Criminal Law, Tilburg Law School, Tilburg University, Tilburg, The Netherlands
| |
Collapse
|
17
|
Sparking Religious Conversion through AI? RELIGIONS 2022. [DOI: 10.3390/rel13050413] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/10/2022]
Abstract
This paper will take the stance that cognitive enhancement promised by the use of AI could be a first step for some in bringing about moral enhancement. It will take a further step in questioning whether moral enhancement using AI could lead to moral and or religious conversion, i.e., a change in direction or behaviour reflecting changed thinking about moral or religious convictions and purpose in life. One challenge is that improved cognition leading to better moral thinking is not always sufficient to motivate a person towards the change in behaviour demanded. While some think moral bioenhancement should be imposed if necessary in urgent situations, most religions today see volition in conversion as essential. Moral and religious conversion should be voluntary and not imposed, and recent studies that show possible dangers of the use of AI here will be discussed along with a recommendation that there be regulatory requirements to counteract manipulation. It is, however, recognized that a change in moral thinking is usually a necessary step in the process of conversion and this paper concludes that voluntary, safe use of AI to help bring that about would be ethically acceptable.
Collapse
|
18
|
Closed-Loop Brain Devices in Offender Rehabilitation: Autonomy, Human Rights, and Accountability. Camb Q Healthc Ethics 2021; 30:669-680. [PMID: 34702411 PMCID: PMC8549003 DOI: 10.1017/s0963180121000141] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
The current debate on closed-loop brain devices (CBDs) mainly focuses on their use in a medical context; possible criminal justice applications have only received incidental scholarly attention. Unlike in medicine, in criminal justice, CBDs might be offered on behalf of the State and for the purpose of protecting security, rather than realizing healthcare aims. It would be possible to deploy CBDs in the rehabilitation of convicted offenders, similarly to the much-debated possibility of employing other brain interventions in this context. Although such use of CBDs could in principle be consensual, there are significant differences between the choice faced by a criminal offender offered a CBD in the context of criminal justice, and that faced by a patient offered a CBD in an ordinary healthcare context. Employment of CBDs in criminal justice thus raises ethical and legal intricacies not raised by healthcare applications. This paper examines some of these issues under three heads: autonomy, human rights, and accountability.
Collapse
|
19
|
Abstract
In recent years, philosophical-legal studies on neuroscience (mainly in the fields of neuroethics and neurolaw) have given increasing prominence to a normative analysis of the ethical-legal challenges in the mind and brain sciences in terms of rights, freedoms, entitlements and associated obligations. This way of analyzing the ethical and legal implications of neuroscience has come to be known as “neurorights.” Neurorights can be defined as the ethical, legal, social, or natural principles of freedom or entitlement related to a person’s cerebral and mental domain; that is, the fundamental normative rules for the protection and preservation of the human brain and mind. Although reflections on neurorights have received ample coverage in the mainstream media and have rapidly become a mainstream topic in the public neuroethics discourse, the frequency of such reflections in the academic literature is still relatively scarce. While the prominence of the neurorights debate in public opinion is crucial to ensure public engagement and democratic participation in deliberative processes on this issue, its relatively sporadic presence in the academic literature poses a risk of semantic-normative ambiguity and conceptual confusion. This risk is exacerbated by the presence of multiple and not always reconcilable terminologies. Several meta-ethical, normative ethical, and legal-philosophical questions need to be solved in order to ensure that neurorights can be used as effective instruments of global neurotechnology governance and be adequately imported into international human rights law. To overcome the shortcomings above, this paper attempts to provide a comprehensive normative-ethical, historical and conceptual analysis of neurorights. 
In particular, it attempts to (i) reconstruct a history of neurorights and locate these rights in the broader history of ideas, (ii) outline a systematic conceptual taxonomy of neurorights, (iii) summarize ongoing policy initiatives related to neurorights, (iv) proactively address some unresolved ethico-legal challenges, and (v) identify priority areas for further academic reflection and policy work in this domain.
Collapse
Affiliation(s)
- Marcello Ienca
- College of Humanities, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland.,Department of Health Sciences and Technology, ETH Zürich, Zurich, Switzerland
| |
Collapse
|
20
|
Wajnerman Paz A. Is Your Neural Data Part of Your Mind? Exploring the Conceptual Basis of Mental Privacy. Minds Mach (Dordr) 2021; 32:395-415. [PMID: 34584344 PMCID: PMC8460199 DOI: 10.1007/s11023-021-09574-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2021] [Accepted: 09/14/2021] [Indexed: 12/11/2022]
Abstract
It has been argued that neural data (ND) are an especially sensitive kind of personal information that could be used to undermine the control we should have over access to our mental states (i.e. our mental privacy), and therefore need stronger legal protection than other kinds of personal data. The Morningside Group, a global consortium of interdisciplinary experts advocating for the ethical use of neurotechnology, suggests achieving this by legally treating ND as a body organ (i.e. protecting them through bodily integrity). Although the proposal is currently shaping ND-related policies (most notably, a Neuroprotection Bill of Law being discussed by the Chilean Senate), it is not clear what its conceptual and legal basis is. Legally treating something as something else requires some kind of analogical reasoning, which is not provided by the authors of the proposal. In this paper, I will try to fill this gap by addressing ontological issues related to neurocognitive processes. The substantial differences between ND and body organs or organic tissue cast doubt on the idea that the former should be covered by bodily integrity. Crucially, ND are not constituted by organic material. Nevertheless, I argue that the ND of a subject s are analogous to neurocognitive properties of her brain. I claim that (i) s’ ND are a ‘medium independent’ property that can be characterized as natural semantic personal information about her brain and that (ii) s’ brain not only instantiates this property but also has an exclusive ontological relationship with it: this information constitutes a domain that is unique to her neurocognitive architecture.
Collapse
Affiliation(s)
- Abel Wajnerman Paz
- Universidad Alberto Hurtado, Almirante Barroso 10, 8340575 Santiago, Chile
| |
Collapse
|
21
|
Inglese S, Lavazza A. What Should We Do With People Who Cannot or Do Not Want to Be Protected From Neurotechnological Threats? Front Hum Neurosci 2021; 15:703092. [PMID: 34421562 PMCID: PMC8371680 DOI: 10.3389/fnhum.2021.703092] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2021] [Accepted: 07/13/2021] [Indexed: 11/13/2022] Open
Abstract
Neurotechnologies can pose a threat to people's privacy and mental integrity. Hence the proposal of establishing neurorights (Ienca and Andorno, 2017) and technical principles for the implementation of these rights (Lavazza, 2018). However, concepts such as "the extended mind" and what might be called "the post-human objection" can be said to challenge this protection paradigm. On the one hand, it may be difficult to outline the cognitive boundaries between humans and machines (with the consequent ethical and legal implications). On the other hand, those who wish to make strong use of neurotechnologies, or even hybridize with them, reject the idea that privacy and mental integrity should be protected. However, from the latter view, issues may arise relating to the protection of persons entering into relationships with posthumanist people. This article will discuss these scenarios as well as the ethical, legal, social, and political issues that could follow from them.
Collapse
Affiliation(s)
- Silvia Inglese
- Fondazione IRCCS Ca’ Granda, Ospedale Maggiore Policlinico, Milan, Italy
| | - Andrea Lavazza
- Department of Neuroethics, Centro Universitario Internazionale, Arezzo, Italy
- University of Pavia, Pavia, Italy
| |
Collapse
|
22
|
Ligthart S, Douglas T, Bublitz C, Kooijmans T, Meynen G. Forensic Brain-Reading and Mental Privacy in European Human Rights Law: Foundations and Challenges. NEUROETHICS-NETH 2021; 14:191-203. [PMID: 35186162 PMCID: PMC7612400 DOI: 10.1007/s12152-020-09438-4] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2020] [Accepted: 06/07/2020] [Indexed: 01/09/2023]
Abstract
A central question in the current neurolegal and neuroethical literature is how brain-reading technologies could contribute to criminal justice. Some of these technologies have already been deployed within different criminal justice systems in Europe, including Slovenia, Italy, England and Wales, and the Netherlands, typically to determine guilt, legal responsibility, or recidivism risk. In this regard, the question arises whether brain-reading could permissibly be used against the person's will. To provide adequate legal protection from such non-consensual brain-reading in the European legal context, ethicists have called for the recognition of a novel fundamental legal right to mental privacy. In this paper, we explore whether these ethical calls for recognising a novel legal right to mental privacy are necessary in the European context. We argue that a right to mental privacy could be derived from, or at least developed within, the jurisprudence of the European Court of Human Rights, and that introducing an additional fundamental right to protect against (forensic) brain-reading is not necessary. What is required, however, is a specification of the implications of existing rights for particular neurotechnologies and purposes.
Collapse
Affiliation(s)
- Sjors Ligthart
- Department of Criminal Law, Tilburg University, Warandelaan 2, 5037AB Tilburg, Netherlands
| | - Thomas Douglas
- Faculty of Philosophy, Oxford Uehiro Centre for Practical Ethics, University of Oxford, Oxford, UK
| | - Christoph Bublitz
- Faculty of Law, Universität Hamburg, Rothenbaumchaussee 33, 20148 Hamburg, Germany
| | - Tijs Kooijmans
- Department of Criminal Law, Tilburg University, Warandelaan 2, 5037AB Tilburg, Netherlands
| | - Gerben Meynen
- Willem Pompe Institute for Criminal Law and Criminology and UCALL, Utrecht University, Utrecht, Netherlands; Faculty of Humanities, VU University Amsterdam, De Boelelaan 1105, 1081HV Amsterdam, Netherlands
| |
Collapse
|
23
|
Hildt E. Affective Brain-Computer Music Interfaces-Drivers and Implications. Front Hum Neurosci 2021; 15:711407. [PMID: 34267633 PMCID: PMC8275997 DOI: 10.3389/fnhum.2021.711407] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2021] [Accepted: 06/02/2021] [Indexed: 11/13/2022] Open
Affiliation(s)
- Elisabeth Hildt
- Center for the Study of Ethics in the Professions, Illinois Institute of Technology, Chicago, IL, United States
| |
Collapse
|
24
|
Ligthart S, Meynen G, Biller-Andorno N, Kooijmans T, Kellmeyer P. Is Virtually Everything Possible? The Relevance of Ethics and Human Rights for Introducing Extended Reality in Forensic Psychiatry. AJOB Neurosci 2021; 13:144-157. [PMID: 33780323 DOI: 10.1080/21507740.2021.1898489] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
Extended Reality (XR) systems, such as Virtual Reality (VR) and Augmented Reality (AR), provide a digital simulation either of a complete environment, or of particular objects within the real world. Today, XR is used in a wide variety of settings, including gaming, design, engineering, and the military. In addition, XR has been introduced into psychology, the cognitive sciences and biomedicine, both for basic research and for diagnosing or treating neurological and psychiatric disorders. In the context of XR, the simulated 'reality' can be controlled and people may safely learn to cope with their feelings and behavior. XR also enables the simulation of environments that cannot easily be accessed or created otherwise. Therefore, Extended Reality systems are thought to be a promising tool in the resocialization of criminal offenders, more specifically for purposes of risk assessment and treatment of forensic patients. Employing XR in forensic settings raises ethical and legal intricacies which are not raised in the case of most other healthcare applications. Whereas a variety of normative issues of XR have been discussed in the context of medicine and consumer usage, the debate on XR in forensic settings is, as yet, straggling. By discussing two general arguments in favor of employing XR in criminal justice, and two arguments calling for caution in this regard, the present paper aims to broaden the current ethical and legal debate on XR applications to their use in the resocialization of criminal offenders, mainly focusing on forensic patients.
Collapse
Affiliation(s)
| | | | | | | | - Philipp Kellmeyer
- Institute of Biomedical Ethics and History of Medicine (IBME), University of Zurich.,University Medical Center Freiburg
| |
Collapse
|
25
|
Abstract
Brain-machine interfaces (BMIs), which enable a two-way flow of signals, information, and directions between human neurons and computerized machines, offer spectacular opportunities for therapeutic and consumer applications, but they also present unique dangers to the safety, privacy, psychological health, and spiritual well-being of their users. The sale of these devices as commodities for profit exacerbates such issues and may subject the user to an unequal exchange with corporations. Catholic healthcare professionals and bioethicists should be especially concerned about the implications for the essential dignity of the persons using the new BMIs.
Summary
The commercial sale of brain-machine interfaces (BMIs) generates and exacerbates problems for end-users' safety, psychological health, and spiritual well-being.
Collapse
|
26
|
Aggarwal S, Chugh N. Ethical Implications of Closed Loop Brain Device: 10-Year Review. Minds Mach (Dordr) 2020. [DOI: 10.1007/s11023-020-09518-7] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
|
27
|
Steinert S, Friedrich O. Wired Emotions: Ethical Issues of Affective Brain-Computer Interfaces. SCIENCE AND ENGINEERING ETHICS 2020; 26:351-367. [PMID: 30868377 PMCID: PMC6978299 DOI: 10.1007/s11948-019-00087-2] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/29/2018] [Accepted: 01/24/2019] [Indexed: 05/28/2023]
Abstract
Ethical issues concerning brain-computer interfaces (BCIs) have already received a considerable amount of attention. However, one particular form of BCI has not received the attention that it deserves: Affective BCIs that allow for the detection and stimulation of affective states. This paper brings the ethical issues of affective BCIs in sharper focus. The paper briefly reviews recent applications of affective BCIs and considers ethical issues that arise from these applications. Ethical issues that affective BCIs share with other neurotechnologies are presented and ethical concerns that are specific to affective BCIs are identified and discussed.
Collapse
Affiliation(s)
- Steffen Steinert
- Department of Values, Technology and Innovation, Faculty of Technology, Policy and Management, Delft University of Technology, Delft, The Netherlands
| | - Orsolya Friedrich
- Institute of Ethics, History and Theory of Medicine, Ludwig-Maximilians-Universität München, Lessingstr. 2, 80336 Munich, Germany
| |
Collapse
|
28
|
Sieber A. Souled out of rights? – predicaments in protecting the human spirit in the age of neuromarketing. LIFE SCIENCES, SOCIETY AND POLICY 2019; 15:6. [PMID: 31754881 PMCID: PMC6873768 DOI: 10.1186/s40504-019-0095-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 07/26/2019] [Accepted: 10/14/2019] [Indexed: 11/30/2022]
Abstract
Modern neurotechnologies are rapidly infringing on conventional notions of human dignity and they are challenging what it means to be human. This article is a survey analysis of the future of the digital age, reflecting primarily on the effects of neurotechnology that violate universal human rights to dignity, self-determination, and privacy. In particular, this article focuses on neuromarketing to critically assess potentially negative social ramifications of under-regulated neurotechnological application. Possible solutions are critically evaluated, including the human rights claim to the ‘right to mental privacy’ and the suggestion of a new human right based on spiritual jurisdiction, where the human psyche is a legal space in a substantive legal setting.
Collapse
|
29
|
Hildt E. Multi-Person Brain-To-Brain Interfaces: Ethical Issues. Front Neurosci 2019; 13:1177. [PMID: 31827418 PMCID: PMC6849447 DOI: 10.3389/fnins.2019.01177] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2019] [Accepted: 10/18/2019] [Indexed: 12/03/2022] Open
Affiliation(s)
- Elisabeth Hildt
- Center for the Study of Ethics in the Professions, Illinois Institute of Technology, Chicago, IL, United States
| |
Collapse
|
30
|
Khan S, Aziz T. Transcending the brain: is there a cost to hacking the nervous system? Brain Commun 2019; 1:fcz015. [PMID: 32954260 PMCID: PMC7425343 DOI: 10.1093/braincomms/fcz015] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2019] [Revised: 08/08/2019] [Accepted: 08/19/2019] [Indexed: 11/13/2022] Open
Abstract
Great advancements have recently been made to understand the brain and the potential that we can extract out of it. Much of this has been centred on modifying electrical activity of the nervous system for improved physical and cognitive performance in those with clinical impairment. However, there is a risk of going beyond purely physiological performance improvements and striving for human enhancement beyond traditional human limits. Simple ethical guidelines and legal doctrine must be examined to keep ahead of technological advancement in light of the impending mergence between biology and machine. By understanding the role of modern ethics, this review aims to appreciate the fine boundary between what is considered ethically justified for current neurotechnology.
Collapse
Affiliation(s)
- Shujhat Khan
- School of Medicine, Imperial College London, London SW7 2AZ, UK
| | - Tipu Aziz
- Department of Neurosurgery, John Radcliffe Hospital, University of Oxford, Oxford OX3 9DU, UK
| |
Collapse
|
31
|
Lavazza A. Thought Apprehension: The "True" Self and The Risks of Mind Reading. AJOB Neurosci 2019; 10:19-21. [PMID: 31215338 DOI: 10.1080/21507740.2019.1595784] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
|
32
|
Kellmeyer P, Mueller O, Feingold-Polak R, Levy-Tzedek S. Social robots in rehabilitation: A question of trust. Sci Robot 2018; 3:3/21/eaat1587. [PMID: 33141717 DOI: 10.1126/scirobotics.aat1587] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2018] [Accepted: 06/15/2018] [Indexed: 11/02/2022]
Abstract
Social robots can help meet the growing need for rehabilitation assistance; measures for creating and maintaining trust in human-robot interactions should be priorities when designing social robots for rehabilitation.
Collapse
Affiliation(s)
- Philipp Kellmeyer
- Translational Neurotechnology Lab, Department of Neurosurgery, University of Freiburg-Medical Center, Freiburg im Breisgau, Germany.,Cluster of Excellence BrainLinks-BrainTools, University of Freiburg, Germany.,Institute for Biomedical Ethics and History of Medicine, University of Zurich, Switzerland
| | - Oliver Mueller
- Cluster of Excellence BrainLinks-BrainTools, University of Freiburg, Germany.,Department of Philosophy, University of Freiburg, Germany
| | - Ronit Feingold-Polak
- Recanati School for Community Health Professions, Department of Physical Therapy, Ben-Gurion University of the Negev, Beer-Sheva, Israel
| | - Shelly Levy-Tzedek
- Recanati School for Community Health Professions, Department of Physical Therapy, Ben-Gurion University of the Negev, Beer-Sheva, Israel. .,Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beer-Sheva, Israel
| |
Collapse
|
33
|
Kellmeyer P. Big Brain Data: On the Responsible Use of Brain Data from Clinical and Consumer-Directed Neurotechnological Devices. NEUROETHICS-NETH 2018. [DOI: 10.1007/s12152-018-9371-x] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
Abstract
The focus of this paper is the ethical, legal and social challenges of ensuring the responsible use of “big brain data”—the recording, collection and analysis of individuals’ brain data on a large scale with clinical and consumer-directed neurotechnological devices. First, I highlight the benefits of big data and machine learning analytics in neuroscience for basic and translational research. Then, I describe some of the technological, social and psychological barriers to securing brain data from unwarranted access. In this context, I then examine ways in which safeguards at the hardware and software level, as well as increasing “data literacy” in society, may enhance the security of neurotechnological devices and protect the privacy of personal brain data. Regarding the ethical and legal ramifications of big brain data, I first discuss the effects on autonomy, the sense of agency and authenticity, as well as the self, that may result from the interaction between users and intelligent, particularly closed-loop, neurotechnological devices. I then discuss the impact of the “datafication” of basic and clinical neuroscience research on the just distribution of resources and access to these transformative technologies. In the legal realm, I examine possible legal consequences that arise from the increasing ability to decode brain states and their corresponding subjective phenomenological experiences for the hitherto inaccessible privacy of this information. Finally, I discuss the implications of big brain data for national and international regulatory policies and models of good data governance.
Collapse
|