1
Wen D, Soltan A, Trucco E, Matin RN. From data to diagnosis: skin cancer image datasets for artificial intelligence. Clin Exp Dermatol 2024; 49:675-685. [PMID: 38549552] [DOI: 10.1093/ced/llae112]
Abstract
Artificial intelligence (AI) solutions for skin cancer diagnosis continue to gain momentum, edging closer towards broad clinical use. These AI models, particularly deep-learning architectures, require large digital image datasets for development. This review provides an overview of the datasets used to develop AI algorithms and highlights the importance of dataset transparency for the evaluation of algorithm generalizability across varying populations and settings. Current challenges for curation of clinically valuable datasets are detailed, which include dataset shifts arising from demographic variations and differences in data collection methodologies, along with inconsistencies in labelling. These shifts can lead to differential algorithm performance, compromise of clinical utility, and the propagation of discriminatory biases when developed algorithms are implemented in mismatched populations. The limited representation of rare skin cancers and minoritized groups in existing datasets is highlighted, which can further skew algorithm performance. Strategies to address these challenges are presented, which include improving transparency, representation and interoperability. Federated learning and generative methods, which may improve dataset size and diversity without compromising privacy, are also examined. Lastly, we discuss model-level techniques that may address biases entrained through the use of datasets derived from routine clinical care. As the role of AI in skin cancer diagnosis becomes more prominent, ensuring the robustness of underlying datasets is increasingly important.
Affiliation(s)
- David Wen
- Department of Dermatology, Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Oxford University Clinical Academic Graduate School, University of Oxford, Oxford, UK
- Andrew Soltan
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- Oxford Cancer and Haematology Centre, Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Department of Oncology, University of Oxford, Oxford, UK
- Emanuele Trucco
- VAMPIRE Project, Computing, School of Science and Engineering, University of Dundee, Dundee, UK
- Rubeta N Matin
- Department of Dermatology, Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Artificial Intelligence Working Party Group, British Association of Dermatologists, London, UK
2
|
Montomoli J, Bitondo MM, Cascella M, Rezoagli E, Romeo L, Bellini V, Semeraro F, Gamberini E, Frontoni E, Agnoletti V, Altini M, Benanti P, Bignami EG. Algor-ethics: charting the ethical path for AI in critical care. J Clin Monit Comput 2024:10.1007/s10877-024-01157-y. [PMID: 38573370] [DOI: 10.1007/s10877-024-01157-y]
Abstract
The integration of Clinical Decision Support Systems (CDSS) based on artificial intelligence (AI) into healthcare is a groundbreaking evolution with enormous potential, but its development and ethical implementation present unique challenges, particularly in critical care, where physicians often deal with life-threatening conditions requiring rapid action and patients unable to participate in the decision-making process. Moreover, the development of AI-based CDSS is complex and should address different sources of bias, including data acquisition, health disparities, domain shifts during clinical use, and cognitive biases in decision-making. In this scenario, algor-ethics is mandatory; it emphasizes the integration of 'Human-in-the-Loop' and 'Algorithmic Stewardship' principles and the benefits of advanced data engineering. The establishment of Clinical AI Departments (CAID) is necessary to lead AI innovation in healthcare, ensuring ethical integrity and human-centered development in this rapidly evolving field.
Affiliation(s)
- Jonathan Montomoli
- Department of Anesthesia and Intensive Care, Infermi Hospital, Romagna Local Health Authority, Viale Settembrini 2, Rimini, 47923, Italy.
- Health Services Research, Evaluation and Policy Unit, Romagna Local Health Authority, Viale Settembrini 2, Rimini, 47923, Italy.
- Maria Maddalena Bitondo
- Department of Anesthesia and Intensive Care, Infermi Hospital, Romagna Local Health Authority, Viale Settembrini 2, Rimini, 47923, Italy
- Marco Cascella
- Unit of Anesthesia and Pain Medicine, Department of Medicine, Surgery and Dentistry "Scuola Medica Salernitana," University of Salerno, Baronissi, Salerno, Italy
- Emanuele Rezoagli
- School of Medicine and Surgery, University of Milano-Bicocca, Via Cadore, 48, Monza, 20900, Italy
- Dipartimento di Emergenza e Urgenza, Terapia intensiva e Semintensiva adulti e pediatrica, Fondazione IRCCS San Gerardo dei Tintori, Via Pergolesi, 33, Monza, 20900, Italy
- Luca Romeo
- Department of Economics and Law, University of Macerata, Macerata, 62100, Italy
- Valentina Bellini
- Anesthesiology, Critical Care and Pain Medicine Division, Department of Medicine and Surgery, University of Parma, Via Gramsci 14, Parma, 43125, Italy
- Federico Semeraro
- Department of Anesthesia, Intensive Care and Prehospital Emergency, Ospedale Maggiore Carlo Alberto Pizzardi, Largo Bartolo Nigrisoli, 2, Bologna, 40133, Italy
- Emiliano Gamberini
- Department of Anesthesia and Intensive Care, Infermi Hospital, Romagna Local Health Authority, Viale Settembrini 2, Rimini, 47923, Italy
- Emanuele Frontoni
- Department of Political Sciences, Communication and International Relations, University of Macerata, Macerata, 62100, Italy
- Vanni Agnoletti
- Department of Surgery and Trauma, Anesthesia and Intensive Care Unit, Maurizio Bufalini Hospital, Romagna Local Health Authority, Viale Giovanni Ghirotti, 286, Cesena, 47521, Italy
- Mattia Altini
- Hospital Care Sector, Emilia-Romagna Region, Via Aldo Moro, 21, Bologna, 40127, Italy
- Paolo Benanti
- Pontifical Gregorian University, Piazza della Pilotta 4, Roma, 00187, Italy
- Elena Giovanna Bignami
- Anesthesiology, Critical Care and Pain Medicine Division, Department of Medicine and Surgery, University of Parma, Via Gramsci 14, Parma, 43125, Italy
3
Furfaro D, Celi LA, Schwartzstein RM. Artificial Intelligence in Medical Education: A Long Way to Go. Chest 2024; 165:771-774. [PMID: 38599751] [DOI: 10.1016/j.chest.2023.11.028]
Affiliation(s)
- David Furfaro
- Division of Pulmonary, Critical Care, and Sleep Medicine, Beth Israel Deaconess Medical Center, Boston, MA; Harvard Medical School, Boston, MA.
- Leo Anthony Celi
- Division of Pulmonary, Critical Care, and Sleep Medicine, Beth Israel Deaconess Medical Center, Boston, MA; Harvard Medical School, Boston, MA; Massachusetts Institute of Technology Laboratory of Computational Physiology, Boston, MA; Harvard T.H. Chan School of Public Health, Boston, MA
- Richard M Schwartzstein
- Division of Pulmonary, Critical Care, and Sleep Medicine, Beth Israel Deaconess Medical Center, Boston, MA; Harvard Medical School, Boston, MA
4
Saccà R, Turrini R, Ausania F, Turrina S, De Leo D. The ménage à trois of healthcare: the actors in after-AI era under patient consent. Front Med (Lausanne) 2024; 10:1329087. [PMID: 38269319] [PMCID: PMC10807288] [DOI: 10.3389/fmed.2023.1329087]
Abstract
Introduction: Artificial intelligence has become an increasingly powerful technological instrument in recent years, revolutionizing many sectors, including public health. Its use in this field will inevitably change clinical practice, the patient-caregiver relationship and the concept of the diagnosis and treatment pathway, affecting the balance between the patient's right to self-determination and the right to health, and thus leading to an evolution of the concept of informed consent. The aim was to characterize the guidelines for the use of artificial intelligence, its areas of application and the relevant legislation, in order to propose guiding principles for the design of optimal informed consent for its use.
Materials and methods: A classic keyword-based review was conducted on the main search engines, together with an analysis of the guidelines and regulations issued by scientific authorities and legal bodies on the use of artificial intelligence in public health.
Results: The current areas of application of this technology were identified and divided by sector, together with its impact on each and a summary of the current guidelines and legislation.
Discussion: The ethical implications of artificial intelligence in the healthcare system were assessed, particularly regarding the therapeutic alliance between doctor and patient and the balance between the right to self-determination and the right to health. Finally, given the evolution of informed consent in relation to this new technology, seven guiding principles were proposed to guarantee the right to the most informed consent or dissent.
5
Agudo U, Liberal KG, Arrese M, Matute H. The impact of AI errors in a human-in-the-loop process. Cogn Res Princ Implic 2024; 9:1. [PMID: 38185767] [PMCID: PMC10772030] [DOI: 10.1186/s41235-023-00529-3]
Abstract
Automated decision-making is becoming increasingly common in the public sector. As a result, political institutions recommend the presence of humans in these decision-making processes as a safeguard against potentially erroneous or biased algorithmic decisions. However, the scientific literature on human-in-the-loop performance is not conclusive about the benefits and risks of such human presence, nor does it clarify which aspects of this human-computer interaction may influence the final decision. In two experiments, we simulate an automated decision-making process in which participants judge multiple defendants in relation to various crimes, and we manipulate when participants receive support from a supposed automated system with artificial intelligence (before or after they make their own judgments). Our results show that human judgment is affected when participants receive incorrect algorithmic support, particularly when they receive it before providing their own judgment, resulting in reduced accuracy. The data and materials for these experiments are freely available at the Open Science Framework (https://osf.io/b6p4z/). Experiment 2 was preregistered.
Affiliation(s)
- Ujué Agudo
- Bikolabs/Biko, Pamplona, Spain
- Departamento de Psicología, Universidad de Deusto, Avda. Universidad 24, 48007, Bilbao, Spain
- Helena Matute
- Departamento de Psicología, Universidad de Deusto, Avda. Universidad 24, 48007, Bilbao, Spain.