1
Landers C, Blasimme A, Vayena E. Sync fast and solve things - best practices for responsible digital health. NPJ Digit Med 2024;7:113. PMID: 38704413; PMCID: PMC11069566; DOI: 10.1038/s41746-024-01105-9.
Abstract
Digital health innovation is expected to transform healthcare, but it also generates ethical and societal concerns, such as privacy risks and biases that can compound existing health inequalities. While such concerns are widely recognized, existing regulatory principles, oversight methods and ethical frameworks seem out of sync with digital health innovation. New governance and innovation best practices are thus needed to bring such principles to bear on the realities of business, innovation, and regulation. To gain practical insight into best practices for responsible digital health innovation, we conducted a qualitative study based on an interactive engagement methodology. We engaged key stakeholders (n = 46) operating at the translational frontier of digital health. This approach allowed us to identify three clusters of governance and innovation best practices in digital health innovation: i) inclusive co-creation, ii) responsive regulation, and iii) value-driven innovation. Our study shows that realizing responsible digital health requires diverse stakeholders' commitment to adapt innovation and regulation practices, embracing co-creation as the default modus operandi for digital health development. We describe these collaborative practices and show how they can ensure that innovation is neither slowed by overregulation nor leads to unethical outcomes.
Affiliation(s)
- Effy Vayena
- Health Ethics and Policy Lab, ETH Zurich, Switzerland.
2
Andreoletti M, Haller L, Vayena E, Blasimme A. Mapping the ethical landscape of digital biomarkers: A scoping review. PLOS Digit Health 2024;3:e0000519. PMID: 38753605; PMCID: PMC11098308; DOI: 10.1371/journal.pdig.0000519.
Abstract
In the evolving landscape of digital medicine, digital biomarkers have emerged as a transformative source of health data, positioning them as an indispensable element for the future of the discipline. This necessitates a comprehensive exploration of the ethical complexities and challenges intrinsic to this cutting-edge technology. To address this imperative, we conducted a scoping review, seeking to distill the scientific literature exploring the ethical dimensions of the use of digital biomarkers. By closely scrutinizing the literature, this review aims to bring to light the underlying ethical issues associated with the development and integration of digital biomarkers into medical practice.
Affiliation(s)
- Mattia Andreoletti
- Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
- Luana Haller
- Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
- Effy Vayena
- Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
- Alessandro Blasimme
- Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
3
Wimbarti S, Kairupan BHR, Tallei TE. Critical review of self-diagnosis of mental health conditions using artificial intelligence. Int J Ment Health Nurs 2024;33:344-358. PMID: 38345132; DOI: 10.1111/inm.13303.
Abstract
The advent of artificial intelligence (AI) has revolutionised various aspects of our lives, including mental health nursing. AI-driven tools and applications provide a convenient and accessible means for individuals to assess their mental well-being within the confines of their homes. Nonetheless, the widespread trend of self-diagnosing mental health conditions through AI poses considerable risks. This review article examines the perils of relying on AI for self-diagnosis in mental health, highlighting the constraints and possible adverse outcomes of such practices. It delves into the ethical, psychological, and social implications, underscoring the vital role of mental health professionals, including psychologists, psychiatrists, and nursing specialists. The article aims to highlight the importance of seeking professional assistance and guidance in addressing mental health concerns, especially in the era of AI-driven self-diagnosis.
Affiliation(s)
- Supra Wimbarti
- Faculty of Psychology, Universitas Gadjah Mada, Yogyakarta, Indonesia
- B H Ralph Kairupan
- Department of Psychiatry, Faculty of Medicine, Sam Ratulangi University, Manado, North Sulawesi, Indonesia
- Trina Ekawati Tallei
- Department of Biology, Faculty of Mathematics and Natural Sciences, Sam Ratulangi University, Manado, North Sulawesi, Indonesia
- Department of Biology, Faculty of Medicine, Sam Ratulangi University, Manado, North Sulawesi, Indonesia
4
Siebelink NM, van Dam KN, Lukkien DRM, Boon B, Smits M, van der Poel A. Action Opportunities to Pursue Responsible Digital Care for People With Intellectual Disabilities: Qualitative Study. JMIR Ment Health 2024;11:e48147. PMID: 38416547; PMCID: PMC10938230; DOI: 10.2196/48147.
Abstract
BACKGROUND Responsible digital care refers to any intentional, systematic effort designed to increase the likelihood that a digital care technology is developed through ethical decision-making, is socially responsible, and is aligned with the values and well-being of those impacted by it. OBJECTIVE We aimed to present examples of action opportunities for (1) designing "technology"; (2) shaping the "context" of use; and (3) adjusting the behavior of "users" to guide responsible digital care for people with intellectual disabilities. METHODS Three cases were considered: (1) design of a web application to support the preparation of meals for groups of people with intellectual disabilities, (2) implementation of an app to help people with intellectual disabilities regulate their stress independently, and (3) implementation of a social robot to stimulate interaction and physical activity among people with intellectual disabilities. Overall, 26 stakeholders participated in 3 multistakeholder workshops (case 1: 10/26, 38%; case 2: 10/26, 38%; case 3: 6/26, 23%) based on the "guidance ethics approach." We identified stakeholders' values based on bottom-up exploration of experienced and expected effects of using the technology, and we formulated action opportunities for these values in the specific context of use. Qualitative data were analyzed thematically. RESULTS Overall, 232 effects, 33 values, and 156 action opportunities were collected. General and case-specific themes were identified. Important stakeholder values included quality of care, autonomy, efficiency, health, enjoyment, reliability, and privacy. Both positive and negative effects could underlie stakeholders' values and influence the development of action opportunities.
Action opportunities comprised the following: (1) technology: development of the technology (eg, user experience and customization), technology input (eg, recipes for meals, intervention options for reducing stress, and activities), and technology output (eg, storage and use of data); (2) context: guidelines, training and support, policy or agreements, and adjusting the physical environment in which the technology is used; and (3) users: integrating the technology into daily care practice, by diminishing (eg, "letting go" to increase the autonomy of people with intellectual disabilities), retaining (eg, face-to-face contact), and adding (eg, evaluation moments) certain behaviors of care professionals. CONCLUSIONS This is the first study to provide insight into responsible digital care for people with intellectual disabilities by means of bottom-up exploration of action opportunities to take account of stakeholders' values in designing technology, shaping the context of use, and adjusting the behavior of users. Although part of the findings may be generalized, case-specific insights and a complementary top-down approach (eg, predefined ethical frameworks) are essential. The findings represent a part of an ethical discourse that requires follow-up to meet the dynamism of stakeholders' values and further develop and implement action opportunities to achieve socially desirable, ethically acceptable, and sustainable digital care that improves the lives of people with intellectual disabilities.
Affiliation(s)
- Kirstin N van Dam
- Academy Het Dorp, Arnhem, Netherlands
- Tranzo, Tilburg School of Social and Behavioral Sciences, Tilburg University, Tilburg, Netherlands
- Dirk R M Lukkien
- Vilans, Utrecht, Netherlands
- Copernicus Institute of Sustainable Development, Utrecht University, Utrecht, Netherlands
- Brigitte Boon
- Academy Het Dorp, Arnhem, Netherlands
- Tranzo, Tilburg School of Social and Behavioral Sciences, Tilburg University, Tilburg, Netherlands
- Siza, Arnhem, Netherlands
5
Minartz P, Aumann CM, Vondeberg C, Kuske S. Feeling safe in the context of digitalization in healthcare: a scoping review. Syst Rev 2024;13:62. PMID: 38331923; PMCID: PMC10851492; DOI: 10.1186/s13643-024-02465-9.
Abstract
BACKGROUND Digitalization in healthcare and society can be challenging, particularly for people who have limited digital experience. New digital technologies can influence individuals' perceived safety and well-being. In this study, we aimed to identify and analyze the literature on needs and influencing factors in the context of emotional and psychological safety and digitalization in healthcare. METHODS A scoping review was conducted based on the PRISMA-ScR standard. The literature was searched in Medline via PubMed, PsycINFO via Ovid, and CINAHL via EBSCO. Literature was included after a review of titles, abstracts, and full texts published in English or German in the last 5 years (October 2017-September 2022). Eligible literature included definitions and descriptions of emotional and/or psychological safety, was related to digitalization in healthcare, and was analyzed qualitatively via inductive content analysis. The findings were analyzed from ethical, psychosocial, legal, economic, and political perspectives. RESULTS A total of 32 publications were included: qualitative (n = 20), quantitative (n = 3), and mixed methods (n = 2) studies, as well as systematic integrative reviews, scoping reviews, narrative reviews, white papers, and ethical statements. Of these publications, four qualitative studies focused on emotional or psychological safety in the context of digital technology use in healthcare as a primary research aim. Most of the literature showed that perceived safety is influenced by perceived changes in healthcare, digital (health) literacy, the design of digital technology, and need orientation. The needs identified in this context overlap strongly with the influencing factors. Perceived safety, whether low or high, affects users' thoughts and actions.
CONCLUSION The importance of emotional safety in the context of digital technologies in healthcare is growing, while psychological safety seems to be underrepresented. The interaction between the influencing factors and the need to feel safe leads to considerations that can affect user behavior and have far-reaching outcomes for the implementation of digital technology in healthcare. SYSTEMATIC REVIEW REGISTRATION Open Science Framework Registries, 16 December 2022, https://doi.org/10.17605/OSF.IO/HVYPT.
Affiliation(s)
- Peter Minartz
- Fliedner Fachhochschule Düsseldorf, University of Applied Science, Alte Landstr. 179, 40489, Düsseldorf, Germany
- Christine Maria Aumann
- Fliedner Fachhochschule Düsseldorf, University of Applied Science, Alte Landstr. 179, 40489, Düsseldorf, Germany
- Carmen Vondeberg
- Fliedner Fachhochschule Düsseldorf, University of Applied Science, Alte Landstr. 179, 40489, Düsseldorf, Germany
- Silke Kuske
- Fliedner Fachhochschule Düsseldorf, University of Applied Science, Alte Landstr. 179, 40489, Düsseldorf, Germany.
6
Arbelaez Ossa L, Lorenzini G, Milford SR, Shaw D, Elger BS, Rost M. Integrating ethics in AI development: a qualitative study. BMC Med Ethics 2024;25:10. PMID: 38262986; PMCID: PMC10804710; DOI: 10.1186/s12910-023-01000-0.
Abstract
BACKGROUND While the theoretical benefits and harms of Artificial Intelligence (AI) have been widely discussed in academic literature, empirical evidence remains elusive regarding the practical ethical challenges of developing AI for healthcare. Bridging the gap between theory and practice is an essential step in understanding how to ethically align AI for healthcare. Therefore, this research examines the concerns and challenges perceived by experts in developing ethical AI that addresses the healthcare context and needs. METHODS We conducted semi-structured interviews with 41 AI experts and analyzed the data using reflective thematic analysis. RESULTS We developed three themes that expressed the considerations perceived by experts as essential for ensuring AI aligns with ethical practices within healthcare. The first theme explores the ethical significance of introducing AI with a clear and purposeful objective. The second theme focuses on how experts are concerned about the tension that exists between economic incentives and the importance of prioritizing the interests of doctors and patients. The third theme illustrates the need to develop context-sensitive AI for healthcare that is informed by its underlying theoretical foundations. CONCLUSIONS The three themes collectively emphasized that beyond being innovative, AI must genuinely benefit healthcare and its stakeholders, meaning AI also aligns with intricate and context-specific healthcare practices. Our findings signal that instead of narrow product-specific AI guidance, ethical AI development may need a systemic, proactive perspective that includes the ethical considerations (objectives, actors, and context) and focuses on healthcare applications. Ethically developing AI involves a complex interplay between AI, ethics, healthcare, and multiple stakeholders.
Affiliation(s)
- Giorgia Lorenzini
- Institute for Biomedical Ethics, University of Basel, Basel, Switzerland
- Stephen R Milford
- Institute for Biomedical Ethics, University of Basel, Basel, Switzerland
- David Shaw
- Institute for Biomedical Ethics, University of Basel, Basel, Switzerland
- Care and Public Health Research Institute, Maastricht University, Maastricht, Netherlands
- Bernice S Elger
- Institute for Biomedical Ethics, University of Basel, Basel, Switzerland
- Center for Legal Medicine (CURML), University of Geneva, Geneva, Switzerland
- Michael Rost
- Institute for Biomedical Ethics, University of Basel, Basel, Switzerland
7
Těšinová JK, Dobiášová K, Dušek Z, Tobiášová A. Development of telemedicine in the Czech Republic from patients' and other key stakeholders' perspective. Front Public Health 2023;11:1202182. PMID: 37937075; PMCID: PMC10626478; DOI: 10.3389/fpubh.2023.1202182.
Abstract
Telemedicine is a way to improve healthcare outcomes with greater efficiency for both patients and care providers. The great potential of digital technologies also lies in strengthening the patient-centered approach. The early successes and benefits of telemedicine in the Czech Republic, amplified by the COVID-19 pandemic, have contributed to the fact that wider implementation of telemedicine is already generally supported at the expert and public levels. Our research focuses on the identification of key issues in the implementation of telemedicine and the challenges of telemedicine in the future, from the perspective of patients and other stakeholders. The study is based on a qualitative research approach, combining focus groups with key stakeholders, patient panels and expert panels (2021-2022). The lack of rules and uncoordinated development of various activities proved to be the main barriers to the integration of telemedicine in the health system. This regulatory uncertainty can generate a number of problems in the patient-doctor relationship in practice, including ethical ones, and can also lead to inequalities in access to healthcare and affect the overall quality of care provided. Furthermore, it has been shown that patients' interests in the implementation of telemedicine are: 1. a predictable and reliable framework that guarantees them certainty and security in the provision of telemedicine services, 2. telemedicine solutions that increase the availability and efficiency of the care provided while bringing comfort, and 3. user-friendly and simple solutions. At the same time, patients want to understand the new environment and be active participants in the process of digital innovation, including the practical implementation of telemedicine.
The research team has developed recommendations for further developments in the implementation of telemedicine that reflect the patient's interest and can be implemented at three levels: the health system, institutional, and community levels. In countries with a well-developed and institutionalized patient movement, the community level can be represented by patient organizations, thus becoming the link between telemedicine policy making and implementation at the individual level of healthcare provision. For the further development of telemedicine, a national strategy that involves all key stakeholders, including patients, in implementation has proven essential.
Affiliation(s)
- Jolana Kopsa Těšinová
- Institute of Public Health and Medical Law, First Faculty of Medicine, Charles University, Prague, Czechia
- Karolína Dobiášová
- Institute of Public Health and Medical Law, First Faculty of Medicine, Charles University, Prague, Czechia
- Alena Tobiášová
- Institute of Public Health and Medical Law, First Faculty of Medicine, Charles University, Prague, Czechia
8
Landers C, Ormond KE, Blasimme A, Brall C, Vayena E. Talking Ethics Early in Health Data Public Private Partnerships. J Bus Ethics 2023;190:649-659. PMID: 38487176; PMCID: PMC10933190; DOI: 10.1007/s10551-023-05425-w.
Abstract
Data access and data sharing are vital to advance medicine. A growing number of public private partnerships are set up to facilitate data access and sharing, as private and public actors possess highly complementary health data sets and treatment development resources. However, the priorities and incentives of public and private organizations are frequently in conflict. This has complicated partnerships and sparked public concerns around ethical issues such as trust, justice or privacy, in turn raising an important problem in business and data ethics: how can ethical theory inform the practice of public and private partners to mitigate misaligned incentives, and ensure that they can deliver societally beneficial innovation? In this paper, we report on the development of the Swiss Personalized Health Network's ethical guidelines for health data sharing in public private partnerships. We describe the process of identifying ethical issues and engaging core stakeholders to incorporate their practical reality on these issues. Our report highlights core ethical issues in health data public private partnerships and provides strategies for how to overcome these in the Swiss health data context. By agreeing on and formalizing ethical principles and practices at the beginning of a partnership, partners and society can benefit from a relationship built around a mutual commitment to ethical principles. We present this summary in the hope that it will contribute to the global data sharing dialogue.
Affiliation(s)
- Constantin Landers
- Health Ethics and Policy Lab, ETH Zurich, Hottingerstrasse 10, 8032 Zurich, Switzerland
- Kelly E. Ormond
- Health Ethics and Policy Lab, ETH Zurich, Hottingerstrasse 10, 8032 Zurich, Switzerland
- Alessandro Blasimme
- Health Ethics and Policy Lab, ETH Zurich, Hottingerstrasse 10, 8032 Zurich, Switzerland
- Caroline Brall
- Ethics and Policy Lab, Multidisciplinary Center for Infectious Diseases, University of Bern, Länggassstrasse 49a, 3012 Bern, Switzerland
- Institute of Philosophy, University of Bern, Länggassstrasse 49a, 3012 Bern, Switzerland
- Effy Vayena
- Health Ethics and Policy Lab, ETH Zurich, Hottingerstrasse 10, 8032 Zurich, Switzerland
- ELSI Advisory Group, Swiss Personalized Health Network, Laupenstrasse 7, 3001 Bern, Switzerland