1
Baumgartner R, Arora P, Bath C, Burljaev D, Ciereszko K, Custers B, Ding J, Ernst W, Fosch-Villaronga E, Galanos V, Gremsl T, Hendl T, Kropp C, Lenk C, Martin P, Mbelu S, Morais Dos Santos Bruss S, Napiwodzka K, Nowak E, Roxanne T, Samerski S, Schneeberger D, Tampe-Mai K, Vlantoni K, Wiggert K, Williams R. Fair and equitable AI in biomedical research and healthcare: Social science perspectives. Artif Intell Med 2023;144:102658. PMID: 37783540. DOI: 10.1016/j.artmed.2023.102658.
Abstract
Artificial intelligence (AI) offers opportunities but also challenges for biomedical research and healthcare. This position paper shares the results of the international conference "Fair medicine and AI" (online 3-5 March 2021). Scholars from science and technology studies (STS), gender studies, and ethics of science and technology formulated opportunities, challenges, and research and development desiderata for AI in healthcare. AI systems and solutions, which are being rapidly developed and applied, may have undesirable and unintended consequences including the risk of perpetuating health inequalities for marginalized groups. Socially robust development and implications of AI in healthcare require urgent investigation. There is a particular dearth of studies in human-AI interaction and how this may best be configured to dependably deliver safe, effective and equitable healthcare. To address these challenges, we need to establish diverse and interdisciplinary teams equipped to develop and apply medical AI in a fair, accountable and transparent manner. We formulate the importance of including social science perspectives in the development of intersectionally beneficent and equitable AI for biomedical research and healthcare, in part by strengthening AI health evaluation.
Affiliation(s)
- Renate Baumgartner
- Center of Gender- and Diversity Research, University of Tübingen, Wilhelmstrasse 56, 72074 Tübingen, Germany; Athena Institute, Vrije Universiteit Amsterdam, De Boelelaan 1085, 1081 HV Amsterdam, The Netherlands.
- Payal Arora
- Erasmus School of Philosophy, Erasmus University Rotterdam, Burgemeester Oudlaan 50, 3062 PA Rotterdam, The Netherlands
- Corinna Bath
- Gender, Technology and Mobility, Institute for Flight Guidance, TU Braunschweig, Hermann-Blenk-Str. 27, 38108 Braunschweig, Germany
- Darja Burljaev
- Center of Gender- and Diversity Research, University of Tübingen, Wilhelmstrasse 56, 72074 Tübingen, Germany
- Kinga Ciereszko
- Department of Philosophy, Adam Mickiewicz University in Poznan, Szamarzewski Street 89C, 60-569 Poznan, Poland
- Bart Custers
- eLaw - Center for Law and Digital Technologies, Leiden University, Steenschuur 25, 2311 ES Leiden, Netherlands
- Jin Ding
- iHuman and Department of Sociological Studies, University of Sheffield, ICOSS, 219 Portobello, Sheffield S1 4DP, United Kingdom
- Waltraud Ernst
- Institute for Women's and Gender Studies, Johannes Kepler University Linz, Altenberger Strasse 69, 4040 Linz, Austria
- Eduard Fosch-Villaronga
- eLaw - Center for Law and Digital Technologies, Leiden University, Steenschuur 25, 2311 ES Leiden, Netherlands
- Vassilis Galanos
- Science, Technology and Innovation Studies, School of Social and Political Science, University of Edinburgh, Old Surgeons' Hall, High School Yards, Edinburgh EH1 1LZ, United Kingdom
- Thomas Gremsl
- Institute of Ethics and Social Teaching, Faculty of Catholic Theology, University of Graz, Heinrichstraße 78b/2, 8010 Graz, Austria
- Tereza Hendl
- Professorship for Ethics of Medicine, University of Augsburg, Stenglinstraße 2, 86156 Augsburg, Germany; Institute of Ethics, History and Theory of Medicine, Ludwig-Maximilians-University in Munich, Lessingstr. 2, 80336 Munich, Germany
- Cordula Kropp
- Center for Interdisciplinary Risk and Innovation Studies (ZIRIUS), University of Stuttgart, Seidenstraße 36, 70174 Stuttgart, Germany
- Christian Lenk
- Institute of the History, Philosophy and Ethics of Medicine, Ulm University, Parkstraße 11, 89073 Ulm, Germany
- Paul Martin
- iHuman and Department of Sociological Studies, University of Sheffield, ICOSS, 219 Portobello, Sheffield S1 4DP, United Kingdom
- Somto Mbelu
- Erasmus School of Philosophy, Erasmus University Rotterdam, 10A Ademola Close off Remi Fani Kayode Street, GRA Ikeja, Lagos, Nigeria
- Karolina Napiwodzka
- Department of Philosophy, Adam Mickiewicz University in Poznan, Szamarzewski Street 89C, 60-569 Poznan, Poland
- Ewa Nowak
- Department of Philosophy, Adam Mickiewicz University in Poznan, Szamarzewski Street 89C, 60-569 Poznan, Poland
- Tiara Roxanne
- Data & Society Institute, 228 Park Ave S PMB 83075, New York, NY 10003-1502, United States of America
- Silja Samerski
- Fachbereich Soziale Arbeit und Gesundheit, Hochschule Emden/Leer, Constantiaplatz 4, 26723 Emden, Germany
- David Schneeberger
- Institute for Medical Informatics, Statistics and Documentation, Medical University of Graz, Auenbruggerplatz 2, 8036 Graz, Austria
- Karolin Tampe-Mai
- Center for Interdisciplinary Risk and Innovation Studies (ZIRIUS), University of Stuttgart, Seidenstraße 36, 70174 Stuttgart, Germany
- Katerina Vlantoni
- Department of History and Philosophy of Science, School of Science, National and Kapodistrian University of Athens, Panepistimioupoli, Ilisia, Athens 15771, Greece
- Kevin Wiggert
- Institute of Sociology, Department Sociology of Technology and Innovation, Technical University of Berlin, Fraunhoferstraße 33-36, 10623 Berlin, Germany
- Robin Williams
- Science, Technology and Innovation Studies, School of Social and Political Science, University of Edinburgh, Old Surgeons' Hall, High School Yards, Edinburgh EH1 1LZ, United Kingdom
2
Lockhart JW, King MM, Munsch C. Name-based demographic inference and the unequal distribution of misrecognition. Nat Hum Behav 2023. PMID: 37069295. DOI: 10.1038/s41562-023-01587-9.
Abstract
Academics and companies increasingly draw on large datasets to understand the social world, and name-based demographic ascription tools are widespread for imputing information that is often missing from these large datasets. These approaches have drawn criticism on ethical, empirical and theoretical grounds. Using a survey of all authors listed on articles in sociology, economics and communication journals in Web of Science between 2015 and 2020, we compared self-identified demographics with name-based imputations of gender and race/ethnicity for 19,924 scholars across four gender ascription tools and four race/ethnicity ascription tools. We found substantial inequalities in how these tools misgender and misrecognize the race/ethnicity of authors, distributing erroneous ascriptions unevenly among other demographic traits. Because of the empirical and ethical consequences of these errors, scholars need to be cautious with the use of demographic imputation. We recommend five principles for the responsible use of name-based demographic inference.
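The core comparison the abstract describes can be sketched in a few lines: impute a demographic label from a name, compare it with the self-identified label, and break error rates out by subgroup to see whether misrecognition is evenly distributed. This is an illustrative toy, not the authors' pipeline; the name dictionary and the sample records are invented for the example.

```python
# Toy sketch: name-based gender imputation checked against self-identified
# labels, with misrecognition rates reported per subgroup.
# NAME_DICT and the records below are hypothetical, for illustration only.

NAME_DICT = {"maria": "woman", "john": "man", "wei": "man", "amari": "woman"}

# (first_name, self_identified_gender, region) -- invented records
records = [
    ("maria", "woman", "EU"),
    ("john", "man", "US"),
    ("wei", "woman", "Asia"),    # dictionary guess will be wrong here
    ("amari", "man", "Africa"),  # and here
]

def misrecognition_by_group(records, name_dict):
    """Share of wrong (or missing) imputations, computed per region."""
    totals, errors = {}, {}
    for name, truth, region in records:
        totals[region] = totals.get(region, 0) + 1
        guess = name_dict.get(name.lower())
        if guess != truth:
            errors[region] = errors.get(region, 0) + 1
    return {r: errors.get(r, 0) / n for r, n in totals.items()}

rates = misrecognition_by_group(records, NAME_DICT)
# In this toy sample the errors concentrate in some regions rather than
# spreading evenly -- the pattern of unequal misrecognition the paper studies.
```

Even this toy shows why aggregate accuracy is a misleading summary: a tool can look acceptable overall while its errors fall almost entirely on particular groups.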
Affiliation(s)
- Molly M King
- Department of Sociology, Santa Clara University, Santa Clara, CA, USA
- Christin Munsch
- Department of Sociology, University of Connecticut, Storrs, CT, USA
3
Accounting for Diversity in Robot Design, Testbeds, and Safety Standardization. Int J Soc Robot 2023. DOI: 10.1007/s12369-023-00974-6.
Abstract
Science has started highlighting the importance of integrating diversity considerations in medicine and healthcare. However, there is little research into how these considerations apply to, affect, and should be integrated into concrete healthcare innovations such as rehabilitation robotics. Robot policy ecosystems are also oblivious to the vast landscape of gender identity understanding, often ignoring these considerations and failing to guide developers in integrating them to ensure they meet user needs. While this ignorance may stem from the traditionally heteronormative configuration of the medical, technical, and legal worlds, the end result is that roboticists fail to consider these aspects in robot development. Missing diversity, equity, and inclusion considerations can produce robotic systems that compromise user safety, discriminate, and fail to respect users' fundamental rights. This paper explores how overlooking gender and sex considerations in robot design affects users. We focus on ISO 13482:2014, the safety standard for personal care robots, and zoom in on lower-limb exoskeletons. Our findings signal that ISO 13482:2014 has significant gaps concerning intersectional aspects such as sex, gender, age, and health conditions, and that, because of this, developers are creating robot systems that, despite adhering to the standard, can still harm users. In short, our observations show that robotic exoskeletons operate intimately with users' bodies, exemplifying how gender and medical conditions introduce dissimilarities in human–robot interaction that, as long as they remain ignored in regulations, may compromise user safety. We conclude by putting forward recommendations to update ISO 13482:2014 to better reflect the broad diversity of users of personal care robots.
5
[Artificial intelligence and ethics in healthcare – balancing act or symbiosis?]. Bundesgesundheitsblatt Gesundheitsforschung Gesundheitsschutz 2023;66:176-183. PMID: 36650296. PMCID: PMC9892090. DOI: 10.1007/s00103-022-03653-5.
Abstract
Artificial intelligence (AI) is becoming increasingly important in healthcare. This development triggers serious concerns that can be summarized by six major "worst-case scenarios". From AI spreading disinformation and propaganda, to a potential new arms race between major powers, to a possible rule of algorithms ("algocracy") based on biased gatekeeper intelligence, the real dangers of an uncontrolled development of AI are by no means to be underestimated, especially in the health sector. However, fear of AI could cause humanity to miss the opportunity to positively shape the development of our society together with an AI that is friendly to us. Use cases in healthcare play a primary role in this discussion, as both the risks and the opportunities of new AI-based systems become particularly clear here. For example, might older people with dementia (PWD) entrust aspects of their autonomy to AI-based assistance systems so that they may continue to independently manage other aspects of their daily lives? In this paper, we argue that the classic balancing act between the dangers and opportunities of AI in healthcare can be at least partially overcome by taking a long-term ethical approach toward a symbiotic relationship between humans and AI. We exemplify this approach by showcasing our I‑CARE system, an AI-based recommendation system for tertiary prevention of dementia. This system has been in development since 2015 as the I‑CARE Project at the University of Bremen, where it is still being researched today.
6
Seewann L, Verwiebe R, Buder C, Fritsch NS. "Broadcast your gender." A comparison of four text-based classification methods of German YouTube channels. Front Big Data 2022;5:908636. PMID: 36188727. PMCID: PMC9515904. DOI: 10.3389/fdata.2022.908636.
Abstract
Social media platforms provide a large array of behavioral data relevant to social scientific research. However, key information such as sociodemographic characteristics of agents are often missing. This paper aims to compare four methods of classifying social attributes from text. Specifically, we are interested in estimating the gender of German social media creators. By using the example of a random sample of 200 YouTube channels, we compare several classification methods, namely (1) a survey among university staff, (2) a name dictionary method with the World Gender Name Dictionary as a reference list, (3) an algorithmic approach using the website gender-api.com, and (4) a Multinomial Naïve Bayes (MNB) machine learning technique. These different methods identify gender attributes based on YouTube channel names and descriptions in German but are adaptable to other languages. Our contribution will evaluate the share of identifiable channels, accuracy and meaningfulness of classification, as well as limits and benefits of each approach. We aim to address methodological challenges connected to classifying gender attributes for YouTube channels as well as related to reinforcing stereotypes and ethical implications.
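Of the four methods compared, the name-dictionary approach (method 2) is the most mechanical: look up the leading token of a channel name in a gender-frequency dictionary and classify only when one gender clearly dominates, leaving ambiguous or unknown names unclassified. A minimal sketch under invented data follows; the dictionary entries and the 0.8 threshold are assumptions for illustration (the World Gender Name Dictionary itself maps names to per-country frequency counts).

```python
# Illustrative name-dictionary classifier: classify a channel name only when
# the name's gender frequency clearly dominates, otherwise return None.
# TOY_DICT is a made-up stand-in for a real name-frequency dictionary.

TOY_DICT = {  # name -> (female count, male count), hypothetical
    "anna": (980, 20),
    "max": (30, 970),
    "kim": (510, 490),
}

def classify_channel(channel_name, name_dict, threshold=0.8):
    """Return 'female', 'male', or None (unknown or ambiguous name)."""
    token = channel_name.split()[0].lower()
    if token not in name_dict:
        return None  # unidentifiable channel name
    f, m = name_dict[token]
    share_f = f / (f + m)
    if share_f >= threshold:
        return "female"
    if share_f <= 1 - threshold:
        return "male"
    return None  # names like "kim" stay unclassified

# classify_channel("Anna Vlogs", TOY_DICT) -> "female"
# classify_channel("Kim Cooks", TOY_DICT)  -> None (ambiguous)
```

The share of `None` results corresponds to the "share of identifiable channels" criterion the authors evaluate: a stricter threshold raises precision but leaves more channels unclassified.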
7
Using Dynamic Pruned N-Gram Model for Identifying the Gender of the User. Appl Sci (Basel) 2022. DOI: 10.3390/app12136378.
Abstract
Organizations analyze customers’ personal data to understand and model their behavior. Identifying customers’ gender is a significant factor in analyzing markets that help plan the promotional campaigns, determine target customers and provide relevant offers. Several techniques were developed to analyze different types of data, including text, image, speech, and biometrics, to identify the gender of the user. The method of synthesis of the profile name differs from one customer to another. Using numerical substitutions of specific letters, known as Leet language, impedes the gender identification task. Moreover, using acronyms, misspellings, and adjacent names impose additional challenges. Towards this goal, this work uses the customers’ profile names associated with submitted reviews to recognize the customers’ gender. First, we create datasets of profile names extracted from the customers’ reviews. Secondly, we introduce a dynamic pruned n-gram model for identifying the gender of the user. It starts with data segmentation to handle adjacent parts, followed by data conversion and cleaning to fix the use of Leet language. Feature selection through a dynamic pruned n-gram model is the next step with the recurrent misspelling correction using fuzzy matching. We evaluate the proposed approach on the real data collected from active web resources. The obtained results demonstrate its validity and reliability.
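Two of the preprocessing steps the abstract names, undoing Leet substitutions and extracting (then pruning) character n-grams, can be sketched as below. The Leet map and the frequency-based pruning rule are simplified assumptions for illustration, not the paper's actual model, which also handles segmentation of adjacent names and fuzzy misspelling correction.

```python
# Sketch of two steps from the abstract: Leet normalization, then pruned
# character n-gram extraction. LEET and the min_count prune are assumptions.

# Common numeric Leet substitutions (e.g. 'm4r1a' -> 'maria').
LEET = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "7": "t"})

def normalize(profile_name):
    """Lowercase the profile name and undo Leet digit substitutions."""
    return profile_name.lower().translate(LEET)

def char_ngrams(text, n=3, min_count=1):
    """Character n-grams of length n, pruned below a frequency threshold."""
    counts = {}
    for i in range(len(text) - n + 1):
        gram = text[i:i + n]
        counts[gram] = counts.get(gram, 0) + 1
    return {g: c for g, c in counts.items() if c >= min_count}

name = normalize("M4r1a")      # -> "maria"
grams = char_ngrams(name, n=3) # -> {"mar": 1, "ari": 1, "ria": 1}
```

In the paper's setting these pruned n-gram features would then feed a classifier; the dynamic pruning is what keeps the feature space tractable as profile names vary wildly in form.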
8
Singh DK, Kumar M, Fosch-Villaronga E, Singh D, Shukla J. Ethical Considerations from Child-Robot Interactions in Under-Resourced Communities. Int J Soc Robot 2022;15:1-17. PMID: 35637787. PMCID: PMC9133315. DOI: 10.1007/s12369-022-00882-1.
Abstract
Recent advancements in socially assistive robotics (SAR) have shown significant potential for using social robots to improve cognitive and affective outcomes in education. However, deployments of SAR technologies also bring ethical challenges to the fore, especially in under-resourced contexts. While previous research has highlighted various ethical challenges that arise in real-world SAR deployments, most of it has centered on resource-rich contexts, mainly developed countries in the 'Global North', and work specifically in educational settings is limited. This research aims to evaluate and reflect upon the potential ethical and pedagogical challenges of deploying a social robot in an under-resourced context. We base our findings on a 5-week in-the-wild user study conducted with 12 kindergarten students at an under-resourced community school in New Delhi, India, analyzing video recordings of the study through interaction analysis within the context of learning, education, and ethics. Our findings highlighted four primary ethical considerations that should be taken into account when deploying social robotics technologies in educational settings: (1) language and accent as barriers in pedagogy, (2) effects of malfunctioning and (un)intended harms, (3) trust and deception, and (4) ecological viability of innovation. Overall, our paper argues for assessing these ethical and pedagogical constraints and for filling the near-absent literature from such contexts to better evaluate the potential use of these technologies in under-resourced settings.
Affiliation(s)
- Manohar Kumar
- Indraprastha Institute of Information Technology Delhi, New Delhi, India
- Deepa Singh
- Department of Philosophy, University of Delhi, New Delhi, India
- Jainendra Shukla
- Indraprastha Institute of Information Technology Delhi, New Delhi, India
9
Further Divided Gender Gaps in Research Productivity and Collaboration during the COVID-19 Pandemic: Evidence from Coronavirus-related Literature. J Informetr 2022;16:101295. PMID: 35529705. PMCID: PMC9068670. DOI: 10.1016/j.joi.2022.101295.
Abstract
Based on publication data in coronavirus-related fields, this study applies a difference-in-differences approach to explore the evolution of gender inequalities before and during the COVID-19 pandemic by comparing differences in the numbers and shares of authorships, leadership in publications, gender composition of collaboration, and scientific impacts. We find that, during the pandemic: (1) females' leadership in publications as first author was negatively affected; (2) although both females and males published more papers relative to the pre-pandemic period, gender gaps in the share of authorships were strengthened due to the larger increase in males' authorships; (3) the share of publications by mixed-gender collaboration declined; (4) papers by teams in which females play a key role were less cited in the pre-pandemic period, and this citation disadvantage was exacerbated during the pandemic; and (5) gender inequalities regarding authorships and collaboration were enhanced in the initial stage of COVID-19, widened with the increasing severity of COVID-19, and returned to the pre-pandemic level in September 2020. This study shows that females' lower participation in teams as major contributors and their reduced collaboration with male colleagues also reflect their underrepresentation in science during the pandemic. This investigation significantly deepens our understanding of how the pandemic influenced academia, and on this basis science policies and gender policy changes are proposed to mitigate the gender gaps.
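The difference-in-differences logic underlying the study reduces to one subtraction: the change in an outcome for one group minus the change for a comparison group across the pre-pandemic and pandemic periods. A minimal sketch with invented numbers:

```python
# Minimal difference-in-differences sketch in the spirit of the study's
# design. The shares below are hypothetical, not the paper's estimates.

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """DiD effect = (treated change over time) - (control change over time)."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Hypothetical shares of first-authorships:
# women: 0.36 pre-pandemic -> 0.33 during; men: 0.64 -> 0.67
effect = did_estimate(0.36, 0.33, 0.64, 0.67)
# A negative estimate here is consistent with a widened gender gap.
```

The subtraction of the control group's change is what separates a pandemic-specific effect from trends that would have moved both groups anyway, which is why the design needs comparable pre- and during-pandemic observations for both groups.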